Buying Blind: A Framework for AI Procurement Integrity

The United States is accelerating toward a corruption crisis of its own making. In its race to rapidly acquire artificial intelligence (AI), current policy risks undermining longstanding procurement integrity safeguards.

In my new article, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement (forthcoming, Public Contract Law Journal, Vol. 55, No. 2 (Winter 2026)), I argue that AI does not merely amplify familiar corruption risks; it also creates new integrity vulnerabilities that existing oversight mechanisms are not calibrated to address.

Why This Matters Now

How the government acquires AI today determines the integrity vulnerabilities it will inherit tomorrow. Missing audit rights, weak testing requirements, and opaque supply chains are acquisition-phase choices that become significant corruption risks as AI expands across the federal procurement system.

Recent federal AI policy has accelerated adoption while narrowing regulatory oversight, leaving “regulation by contract” as the de facto governance model—even as agencies sign deals without adequate protections against corruption and integrity risks that will be exponentially harder to reverse once procurement dependencies harden.

The article challenges a dangerous assumption driving current federal AI policy: that governance impedes innovation. The evidence demonstrates the opposite. Governance encourages competition by preventing contractor lock-in. It safeguards innovation by promoting fair processes resistant to manipulation. And it enables sustainable AI adoption by building the institutional trust necessary for continued technological deployment. When oversight is treated as secondary to innovation, the procurement system itself becomes the risk.


What This Article Does

The article serves dual purposes:

  • For scholars and policymakers, it establishes analytical foundations for understanding how AI introduces novel corruption vulnerabilities that existing frameworks inadequately address.
  • For acquisition professionals and agency counsel navigating AI procurement today, it offers practical risk-mitigation strategies that can be implemented within existing authorities.

Foundations

  • Introduces the U.S. Government Procurement Anti-Corruption Ecosystem and its core pillars.
  • Surveys the evolving statutory, regulatory, and sub-regulatory landscape governing federal AI acquisition.
  • Provides a “wedding cake” schematic of the AI technology stack, intentionally simplified to help acquisition professionals, counsel, and policymakers identify corruption and procurement integrity risks across the AI supply chain.

How Agencies Buy AI

  • Maps federal AI acquisition pathways and explains how the selected method shapes the government’s ability to secure enforceable governance terms.
  • Examines recent consolidation efforts, including GSA’s OneGov “AI deals” offering leading AI platforms at promotional prices, which raise significant concerns about induced buy-in and vendor capture.

The Risk Landscape

  • Procurement Corruption Risks
    • Organizational conflicts of interest (including novel “foundation model conflicts” arising from AI supply chain complexity)
    • Fraud and AI-enabled deception (document fabrication, deepfakes, algorithmic concealment)
    • Supply chain manipulation (data poisoning, evasion attacks, and prompt injection) that can distort competition, evaluation, and contract performance
  • Systemic Procurement Integrity Risks
    • Vendor lock-in and switching costs
    • Promotional pricing strategies that front-load adoption and back-load recoupment
    • Automation bias compounded by workforce capacity gaps
    • Opacity and limited auditability
    • Technical failures (incumbency bias, hallucinations) that become exploitable in evaluation, protest defense, and post-award oversight
  • Each category is analyzed using acquisition fact patterns and practical hypotheticals to show how these risks materialize throughout the acquisition lifecycle.

Acquisition Gaps Become Operational Risks

  • Identifies the specific acquisition terms that determine deployment risk: audit rights, testing and evaluation requirements, documentation and disclosure obligations, data and model access, and remedies for nonconformance.
  • Explains how acquisition-phase gaps materialize at deployment once AI is embedded in procurement processes through automation bias, diminished human review mandates and capacity, opaque model updates, and degraded traceability for accountability and investigations.

What Agencies Can Do Now

  • Provides a prioritized safeguards menu that agencies can implement now under existing authority, distinguishing baseline protections from heightened requirements for high-risk use cases:
    • Applying existing OCI mitigation frameworks to AI supply chains.
    • Combating AI-generated fraud through detection infrastructure, enhanced disclosure requirements, and due process protections.
    • Negotiating anti-lock-in protections: portability, egress fee caps, and data control provisions.
    • Proposing enhanced transparency mechanisms, including an AI-BOM (AI Bill of Materials) disclosure model with tiered obligations, so documentation and verification requirements scale with risk.
    • Implementing red-teaming and system integrity testing proportionate to risk.
    • Developing an AI-literate acquisition workforce through intentional capacity-building initiatives.
    • Establishing AI Integrity Advocates modeled on the Competition Advocate structure, while leveraging existing technology-focused oversight infrastructure.

