Category: Artificial Intelligence

  • Blacklisting by Tweet Is Not a Thing: What the Federal Contracting Rules Require When Firing a Contractor (Like a Dog)


     “I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic. And specifically, my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.”

    Those words, spoken by Judge Rita Lin at the March 24 hearing on Anthropic’s motion for a preliminary injunction against the Department of Defense and sixteen other federal agencies, address the question that government contractors and Silicon Valley have been asking for the past month.

    I have described contractor blacklisting as the “corporate death penalty,” a “corporate death sentence,” and the “corporate grim reaper,” but corporate “murder” is a first.

    In late February, the President directed every federal agency to stop using Anthropic’s AI technology. The Secretary of Defense also designated Anthropic a supply chain risk to national security under 10 U.S.C. § 3252 after the company refused to remove usage restrictions from its contracts with the Department. Anthropic filed suit in the Northern District of California alleging that the government’s actions were unlawful. It is separately challenging a covered procurement action under 41 U.S.C. § 4713, which is subject to exclusive review in the D.C. Circuit. I have discussed the underlying policy dispute, including the “all lawful use” directive and the structural gaps in AI procurement governance, in a prior Lawfare piece. Alan Rozenshtein’s Lawfare piece addresses the remedy question from a national security law perspective.

    On March 26, Judge Lin granted Anthropic’s motion for a preliminary injunction, enjoining the President’s social media directive, the Secretary’s social media directive, and the Section 3252 supply chain designation. The order is stayed for seven days. The final paragraph of the preliminary injunction order captures in three sentences what this piece explains at length:

    This Order restores the status quo. It does not bar any Defendant from taking any lawful action that would have been available to it on February 27, 2026, prior to the issuances of the Presidential Directive and the Hegseth Directive and entry of the Supply Chain Designation. For example, this Order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.

    Everyone expects the government to appeal, so that is probably next. But the question I keep hearing is, what do the federal contracting rules require when the government decides it no longer wants to work with a contractor?

    The procurement system provides detailed, well-established answers to that question, developed by policymakers over decades. Section 3252 required the Secretary to determine, in writing, that less intrusive measures were not reasonably available, and to tell Congress which alternatives were considered and why they were rejected. This piece describes some of those measures.

    The Tools the Government Had But Didn’t Use

    First, the government is not required to contract with Anthropic. Under Perkins v. Lukens Steel, the government enjoys broad discretion to determine with whom it will do business and on what terms. No one disputes that the Executive Branch can decide it no longer wants a particular vendor’s products or services, but it must do so consistent with the law. Beyond this, the federal contracting system offers a range of tools for ending a contractor relationship, and they escalate in severity. With respect to Anthropic, the government skipped to the most extreme one.

    The options available to the government for severing ties with Anthropic depend on the type of contract involved. Based on the public record, Anthropic had FAR-based contracts, including a GSA “OneGov” agreement; deployments through third-party contractors, including Palantir’s Maven Smart System; and a prototype Other Transaction (OT) with the Chief Digital and Artificial Intelligence Office. The tools available to end each relationship differ accordingly.

    We do not know the full picture, but the government had multiple defined processes available to end its relationship with Anthropic and begin removing it from federal systems.

    Termination

    Under Anthropic’s FAR-based agreements, the simplest way for the government to sever its relationship with the company is through a “termination for convenience” (T4C). If the contract includes a termination for convenience clause, as most FAR-based contracts do, the contracting officer may terminate when doing so is in the government’s interest. The standard is broad, but not limitless, as a termination cannot be an act of bad faith or an abuse of discretion. The contractor is entitled to certain costs when the government exercises its T4C rights, so the parties negotiate a settlement; if they disagree, the disputes process outlined in FAR Part 33 is followed.

    If the concept of “broad government termination rights” sounds vaguely familiar, it is because you may recall that in early 2025, the government, through DOGE, exercised its T4C authority on a sweeping and unprecedented basis, terminating thousands of contracts for convenience across the federal government. The tool is neither obscure nor untested, and the government has shown absolutely no hesitation in deploying it.

    There is little doubt that the government could have demonstrated that terminating its agreements with Anthropic was in the government’s interest. The administration has articulated a policy rationale—it wants AI models free from vendor usage restrictions—and Anthropic declined to comply. Whether or not you agree with the policy, the threshold is not hard to meet. They’ve certainly done it for less.

    With respect to Anthropic’s OT agreement with CDAO, the picture is less certain, as the public record does not disclose the specific terms. Unlike FAR-based contracts, OTs are flexible instruments, so the FAR-based termination process does not automatically apply. But OTs often include FAR-like termination provisions, so the government may well have had a contractual off-ramp for the direct relationship. Beyond the government’s options, Anthropic itself offered to facilitate a transition to another provider.

    Termination addresses the government’s immediate concern: current operational reliance on a vendor it no longer trusts. The government had contractual tools to address that concern without invoking a supply chain exclusion determination. It did not use them.

    Suspension and Debarment

    Termination for convenience is common enough that the FAR devotes an entire part (FAR Part 49) to its procedures and settlement process. What happened last year was jarring, but a T4C in this instance wouldn’t have surprised anyone. The next option on the less-intrusive-measures menu is a significant jump, more like moving from a speeding ticket to a life sentence. But even a life sentence comes with more process than what the government did here.

    Discretionary suspension and debarment are the federal procurement system’s mechanisms for excluding “nonresponsible” contractors from doing business with the government. Governed by FAR subpart 9.4, both tools are triggered by evidence of serious misconduct or grossly incompetent performance, but they serve different functions. Suspension is immediate and temporary, designed to protect the government while an investigation or legal proceeding is underway. Debarment is longer-term, typically following a criminal conviction, a civil judgment, or a finding that the contractor’s conduct is serious enough to affect its present responsibility. This is the FAR’s nuclear option. And given its potential consequences (both direct and collateral), it’s no wonder we call it the corporate death penalty.

    Before I continue, I want to stop and address something directly. I am not suggesting that the government should have pursued debarment. On these facts, I do not believe it would survive judicial scrutiny. At issue is a contractual dispute in which a vendor declined to accept the terms the government wanted. That is not the kind of triggering misconduct the system was designed to address. But if we are evaluating less intrusive measures, which Section 3252 requires the Secretary to consider, then understanding what even this most extreme tool requires is essential. Because it shows just how far the government departed from any recognized process.

    Specifically, the debarment process itself recognizes how consequential it can be for a contractor. FAR 9.402(b) establishes the core principle: debarment and suspension may be imposed “only in the public interest for the Government’s protection and not for purposes of punishment.”

    FAR 9.4 implements that principle through a framework that requires notice and an opportunity to respond, though the timing differs depending on the tool used. These determinations are made by a Suspension and Debarment Official (SDO), whose ultimate question is one of present responsibility: can this contractor be trusted to continue doing business with the federal government? In answering this question, the SDO will generally assess present responsibility in light of remedial measures, mitigating factors, and aggravating factors. Suspension and debarment generally preclude new prime contract awards and restrict certain future subcontracts, but they do not, by themselves, require the termination of existing agreements. FAR 9.405-1 expressly permits agencies to continue existing contracts and subcontracts unless the agency head directs otherwise.

    The Subcontractor Problem and Section 3252

    Anthropic’s technology reportedly runs through Palantir’s Maven Smart System in classified defense workflows. The government’s direct contractual relationship appears to be with Palantir, not Anthropic. That makes it harder to use ordinary tools for severing the relationship. Directing a prime to remove a deeply integrated supplier mid-performance is indirect and creates commercial and technical risk for the prime. Reuters has reported that removing Claude would require Palantir to replace the model and rebuild parts of its software. As discussed above, debarment would constrain certain future subcontracting, but it would not by itself compel a prime to unwind an existing relationship.

    Section 3252 offers something different. Congress established the original authority in the FY 2011 NDAA to address supply-chain risks in sensitive defense information technology procurements, and the statute defines “supply chain risk” as the “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a covered system. Applying it to a domestic AI company in a dispute over contract terms pushes the statute well beyond its ordinary adversary-focused framing. Yet the structural difficulty of reaching a deeply embedded supplier through ordinary channels may help explain why the government reached for it.

    Among the covered procurement actions the statute authorizes is the decision to withhold consent for a subcontract with a particular source or to direct a contractor to exclude a particular source from consideration for a subcontract (as implemented by DFARS 239.73). That subcontract-specific authority is best read as prospective. It addresses future subcontracting decisions, not the unwinding of an already-performing subcontract. The government, therefore, could have used a narrower, forward-looking measure while existing arrangements transitioned off on a defined timeline.

    Instead, according to the public record in the Anthropic litigation, it chose a much broader covered procurement action, and the record does not show that narrower alternatives were meaningfully considered. Indeed, Judge Lin noted that the congressional notices required by § 3252(b)(3)(B) did not contain the required discussion of less intrusive measures, and the government conceded that omission at oral argument.

    An Unusual Course of Action . . . Even for Government Procurement

    There is no public record of Section 3252 being used to designate a domestic company as a supply chain risk. No president has ever directed government-wide exclusion of a named contractor by social media post. The government designated Anthropic a supply chain risk to national security and then gave itself six months to keep using Anthropic’s products on classified systems.

    According to Anthropic’s complaint and supporting declarations, the government at one point threatened to both invoke the Defense Production Act to compel Anthropic to provide its services and designate it a supply chain risk, thereby excluding it—contradictory remedies directed at the same company in the same dispute. And as I type this sentence, according to Anthropic’s court filings, the contractor the government branded a national security threat continues to hold its Top Secret facility security clearance, issued by the same government that now declares it a threat.

    On February 27, the President posted on social media directing every federal agency to “IMMEDIATELY CEASE all use of Anthropic’s technology.” The post characterized Anthropic as a “RADICAL LEFT, WOKE COMPANY” and threatened “major civil and criminal consequences,” without citing a single source of authority for this extraordinary action. Court filings show that Anthropic had contracts or usage with at least 16 federal agencies. Treasury Secretary Bessent posted on X that the department was “terminating all use of Anthropic products.” The Federal Housing Finance Agency and the General Services Administration (GSA) followed. One after another, all on social media, none citing any statutory authority.

    Later that day, Secretary Hegseth posted on social media, directing the Department to “designate Anthropic a Supply-Chain Risk to National Security.” No statute was cited. The Secretary described Anthropic’s stance as “fundamentally incompatible with American principles,” criticized its “defective altruism” and “Silicon Valley ideology,” and declared: “This decision is final.” He also directed that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

    On March 3, the joint recommendation and risk analysis materialized, along with the statutory bases, which had appeared in neither of the social media posts. The Secretary signed the Secretarial Determination the same day the recommendations were submitted. The government’s opposition brief characterizes the Secretary’s February 27 social media post as “the beginning” of the decision-making process and argues it was not final agency action.

    So, let’s take them at their word. The Secretary publicly announced the outcome, directed subordinates to produce the justification, and the justification confirmed the predetermined conclusion four days later. The record went from initiation to final determination without the contractor ever having an opportunity to respond before the decision was made.

    None of the Established Processes Were Used

    The government did not terminate Anthropic’s contracts for convenience; it directed every federal agency to stop using Anthropic’s products via social media. It did not initiate suspension or debarment proceedings before an SDO; the decision was made by a political appointee who had already announced the result on social media before any internal process had commenced. At the March 24 hearing, the government’s theory narrowed to speculation that Anthropic might install a “kill switch” or manipulate its software during operations. Anthropic’s counsel denied that the company has any such capability once Claude is deployed, and the government’s lawyer could not confirm otherwise.

    The Secretary’s post also directed that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” a directive for which the government later conceded there was no statutory basis. At the March 24 hearing, the government’s lawyer further conceded that the statement had “absolutely no legal effect at all.” Judge Lin then pressed the government on what, if anything, prevents the Department from changing its position on how the sentence would operate.

    The Exclusion Without a Name

    Courts have long recognized that a government agency’s conduct can effectively result in a government-wide exclusion without formal debarment proceedings ever being initiated. In Old Dominion Dairy Products, Inc. v. Secretary of Defense, the D.C. Circuit emphasized the severe economic and reputational consequences of effectively excluding a contractor from further government work, including the loss of contracts that would otherwise have been awarded and the effective destruction of the business.

    De facto debarment is notoriously difficult to prove. As Dominique Casimir and Alexandra Barbee-Garrett explain in their piece, The Government’s Just Not That Into You—Is it De Facto Contractor Debarment?, the contractor typically faces “an uphill battle,” where their proposals “simply lose out quietly to those of its competitors, again and again, making it difficult to discern that a de facto debarment is occurring.” Courts generally require either an agency statement that it will not award future contracts or agency conduct demonstrating the same.

    That’s what makes this case so remarkable. De facto debarment cases usually require painstaking reconstruction of a pattern of informal agency conduct, such as repeated nonresponsibility findings, back-channel communications between acquisition professionals, or unexplained refusals to award. This one came with a press release. And the President later stated: “I fired [them] like dogs.”

    The kinds of economic and reputational harm Old Dominion described were already underway before any statutory process had begun.

    What Kind of Business Partner Does the Government Want to Be?

    If the government can brand a contractor a national security threat for refusing to accept contract terms, every federal contract negotiation becomes existential. At the March 24 hearing, Judge Lin pressed the government on precisely this point, asking whether an IT vendor can be designated a supply chain risk because it “is stubborn and insists on certain terms and it asks annoying questions.” She called that “a pretty low bar.”

    The deterrent effect extends beyond Anthropic. It reaches every contractor that might push back on terms it considers unworkable, unsafe, or commercially unreasonable. The procurement system depends on good-faith negotiation between the government and its contractors. Contractors must be able to say no without the government branding them a national security threat—not for protection, but as punishment for driving a hard bargain.

    I have been writing, teaching, and advising on suspension and debarment for nearly two decades. I know what it looks like when the government excludes a contractor. I know what it looks like when the government abuses the process. And I know what it looks like when the government skips the process entirely.

    The corporate death penalty has rules. What happened here followed none of them.

  • What Rights Do AI Companies Have in Government Contracts?


    It Depends on the Acquisition Pathway, the Contract Type, and the Contract Terms.

    The Anthropic-Pentagon dispute has drawn significant public attention and an equally large amount of misinformation. After Defense Secretary Pete Hegseth gave Anthropic an ultimatum to allow unrestricted use of its AI models “for all lawful purposes” and the company refused, President Trump directed federal agencies to stop using Anthropic’s products, and Hegseth designated the firm a “supply chain risk.”

    Hours later, OpenAI announced its own Pentagon deal, claiming it included the same two restrictions Anthropic had been fighting for (no mass domestic surveillance, no fully autonomous weapons) while simultaneously agreeing to the “any lawful use” standard Anthropic rejected.

    The public reaction has been chaotic, but most of the commentary, from both sides, reflects a fundamental misunderstanding of how the government buys AI. Commentators are debating whether AI companies should be able to restrict the government’s use of their technology, as if this were a novel question. It is not. Contractors restrict the government’s use of their products all the time. Whether and to what extent they can do so in any particular case depends on three things: the acquisition pathway, the contract type, and the negotiated contract terms.

    Understanding these variables is essential to evaluating what happened with Anthropic, what OpenAI’s deal accomplishes, and what any of this means for the future of AI in the defense space.


    How the Government Buys AI and Why It Matters

    The federal government does not acquire AI through a single, uniform process. It uses multiple acquisition pathways, each of which creates a different allocation of rights and leverage between the government and the contractor. As I have detailed in my article, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement, understanding these pathways is essential to understanding the governance risks that follow from each. Below, I list the most common pathways (there are others, but I won’t attempt to catalog them here).

    Commercial Acquisition (FAR Part 12)

    The most common pathway for federal AI procurement treats these systems as ordinary commercial software. Federal Acquisition Regulation (FAR) Part 12 is designed for the government to acquire goods and services already sold in the commercial marketplace, on commercial terms. The regulation explicitly limits the government’s ability to impose requirements beyond what is customary in the marketplace. Contractors selling commercial AI products are not required to grant the government broader usage rights than those granted to other customers. The government can request expanded rights, but contractors must agree, and this often requires additional consideration.

    This means that when the government buys AI commercially, the vendor’s standard terms and conditions, including its acceptable use policies, are the default starting point. Restrictions on use are not some exotic demand by activist AI companies; they are the consequence of the government buying commercial products on commercial terms.

    License Upgrades and Enterprise Agreements

    Many agencies acquire AI capabilities not through standalone procurements but as add-on features to existing enterprise software agreements, such as Microsoft Copilot or Google Gemini. Because these AI capabilities are offered as upgrades to existing licenses, they fall under the terms of the base agreement. Renegotiating AI-specific terms means renegotiating the entire enterprise deal, which means commercial defaults typically prevail.

    GSA Multiple Award Schedule

    When agencies order through the GSA Schedule, they inherit whatever terms GSA negotiated at the master agreement level. Downstream ordering agencies have limited authority to modify those baseline terms. If the master agreement includes the vendor’s acceptable use policy, individual agencies generally cannot override those restrictions.

    Negotiated Procurements (FAR Part 15)

    FAR Part 15 gives agencies the broadest latitude to negotiate tailored terms, including provisions regarding usage rights, data rights, transparency, and governance. But this pathway comes with high procedural costs and longer timelines. In practice, agencies often avoid Part 15 for fast-moving AI purchases because it is slower and more process-heavy, and DoD leadership has emphasized commercial-first, rapid pathways.

    Other Transactions (OTs)

    OTs are non-FAR-based agreements used for research, prototyping, and certain production activities. They offer substantially more flexibility than FAR-based contracts. In 2025, the DoD used this pathway to award agreements valued at up to $200 million each to Anthropic, OpenAI, Google, and xAI. OTs are exempt from FAR requirements, so the terms are whatever the parties negotiate. An agency can use this flexibility to secure broad usage rights. A contractor can use it to embed restrictions. Either way, the terms are a product of negotiation, not a regulatory default.

    What This Means in Practice

    Each of these pathways produces a different set of contractual rights and obligations. The idea that a contractor categorically cannot restrict government use of its products, or that doing so is somehow illegitimate, reflects a fundamental misunderstanding of how government procurement law works. The scope of restrictions is determined by the specific acquisition pathway and what the parties negotiate. None of this is novel or controversial. It is basic procurement law.


    What Does “Any Lawful Use” Mean?

    This brings us to the central confusion in the public debate: what does the “any lawful use” standard actually do?

    OpenAI has published relevant language from its Pentagon contract. It reads, in part:

    The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.

    The contract addresses autonomous weapons, surveillance, and domestic law enforcement. In the provisions OpenAI has published, the operative language is largely framed by reference to existing legal authorities, including DoD Directive 3000.09, the Fourth Amendment, the Foreign Intelligence Surveillance Act, Executive Order 12333, and the Posse Comitatus Act. The system “shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities.” Domestic law enforcement use is permitted only “as permitted by the Posse Comitatus Act and other applicable law.”

    Read on its face, the published excerpt does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use. The operative standard is “all lawful purposes,” conditioned on applicable law and related government requirements and protocols.

    That does not mean the language is meaningless. There is an important distinction between the scope of a restriction and its enforceability. Restating legal requirements in a contract may not change what the law requires, but it can change remedies if the government’s use violates a contractual commitment. OpenAI generally would not be a proper plaintiff to assert Fourth Amendment rights on behalf of third parties, but it could frame noncompliance as a breach of its own agreement, and OpenAI states it could terminate the contract if the government violates its terms.

    Contrast this with what Anthropic sought: explicit exceptions to “lawful use” that would have barred certain uses even if the government viewed them as lawful. Anthropic describes the impasse as turning on two requested exceptions: mass domestic surveillance of Americans and fully autonomous weapons. From the government’s perspective, that approach would effectively place a private contractor in the position of deciding which otherwise lawful uses were off-limits.


    The Safety Stack: Where Real Leverage May Exist

    The contract language is not the entire story. In a detailed blog post accompanying its announcement, OpenAI described a multi-layered enforcement approach that goes beyond the four corners of the agreement. This is where the analysis gets considerably more interesting from a procurement law perspective.

    OpenAI claims three additional sources of leverage:

    Cloud-only deployment with architectural control. OpenAI states that this is a cloud-only deployment—models are not provided on edge devices (where they could be used for autonomous lethal weapons). OpenAI retains what it describes as “full discretion” over its safety stack, including the ability to run and update classifiers that monitor use. The company says this deployment architecture enables it to “independently verify” that its red lines are not crossed.

    Cleared OpenAI personnel in the loop. The company states that its own security-cleared employees will be involved in the deployment, and that its safety and alignment researchers will “be in the loop and help improve systems over time.”

    Termination rights. OpenAI states that, as with any contract, it could terminate the agreement if the government violates the terms (though the scope of any termination right depends on the specific agreement, including notice, cure, and dispute resolution provisions that have not been disclosed).

    Additionally, OpenAI makes a notable claim about the temporal scope of the contract’s legal references. According to the company, the contract “explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.” If the full contract language supports this claim, it would constitute a genuine contractual restriction that goes beyond restating current law. Even if Congress amended FISA or DoD revised Directive 3000.09 to permit broader use, the contract could still bind the government to earlier standards.

    The published excerpt offers limited support for that reading. The DoDD 3000.09 reference is versioned (“dtd 25 January 2023”), suggesting the contract is keyed to a specific iteration of that directive, though it is not conclusive without seeing the incorporation language. Whether the agreement “locks in” today’s standards, therefore, depends on contract language OpenAI has not published: for example, whether it incorporates these authorities “as in effect on” a particular date, or instead tracks them as amended over time.

    These claims deserve careful scrutiny because they reveal something important about where the real contractual leverage lies in AI procurement, and it may not be where most people expect.

    If OpenAI retains full discretion over its safety stack and deploys only on its own cloud infrastructure, the practical constraints on government use are architectural, not merely contractual. The government can use the system for “any lawful purpose,” but only to the extent OpenAI’s classifiers and safety systems permit. If a classifier blocks a particular use, the question is whether the government has a contractual right to demand its removal. OpenAI asserts that it retains “full discretion” over those systems.

    This creates tension at the heart of the agreement. The contract permits use “for all lawful purposes,” subject to “operational requirements” and “well-established safety and oversight protocols.” OpenAI says it retains full discretion over the safety stack it runs in a cloud-only deployment. If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework—language that has not been made public.
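
    To make the architectural point concrete, here is a minimal sketch, in Python, of how a vendor-operated safety gate can sit between a government user and a model in a cloud-only deployment. Everything in it is hypothetical: the names (PolicyClassifier, route_request), the blocked-use labels, and the logic are my illustration, not OpenAI’s actual safety stack, which has not been disclosed. The structural point is that whichever party operates this layer decides, in practice, which requests ever reach the model.

```python
# Hypothetical sketch of a vendor-operated safety gate in a cloud-only
# deployment. All names and policies are invented for illustration;
# the real classifiers have not been made public.

from dataclasses import dataclass


@dataclass
class Request:
    user: str    # the government caller
    prompt: str  # the requested use


class PolicyClassifier:
    """Vendor-controlled classifier. The vendor can update BLOCKED
    unilaterally, which is what 'full discretion' means in practice."""

    BLOCKED = ("mass domestic surveillance", "autonomous weapons release")

    def allows(self, request: Request) -> bool:
        text = request.prompt.lower()
        return not any(term in text for term in self.BLOCKED)


def route_request(request: Request, classifier: PolicyClassifier) -> str:
    # The gate runs before the model ever sees the request. Even a use
    # the contract labels lawful is reachable only if the classifier
    # permits it; the constraint is architectural, not contractual.
    if not classifier.allows(request):
        return "BLOCKED by vendor safety stack (logged for review)"
    return f"FORWARDED to model: {request.prompt!r}"


if __name__ == "__main__":
    gate = PolicyClassifier()
    print(route_request(Request("agency-user", "summarize logistics reports"), gate))
    print(route_request(Request("agency-user", "plan mass domestic surveillance"), gate))
```

    The open contractual question is whether the government holds any right to compel a change to that blocked list for a use it considers lawful, which is precisely the language that has not been made public.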

    It is also worth noting the irony. The Pentagon’s objection to Anthropic was, at its core, that a private company should not be able to constrain the military’s use of AI technology. Yet the OpenAI arrangement appears to give the company significant operational control over how the technology functions in practice through infrastructure, personnel, and classifiers that OpenAI can update unilaterally. Whether this amounts to the kind of constraint the Pentagon sought to avoid with Anthropic depends entirely on the terms governing OpenAI’s discretion and whether the government retains any contractual right to override the safety stack for lawful uses.


    Why the Acquisition Pathway Matters for What Comes Next

    The public debate has focused almost entirely on whether AI companies should have the right to impose ethical restrictions on the military. That is a legitimate policy question. But it is the wrong frame for understanding what happened here, and it obscures the procurement realities that will shape AI governance going forward.

    As I argue in Buying Blind, the government is acquiring AI technologies through pathways that systematically limit its ability to negotiate protective terms, not just protections for the company, but protections for the government itself. The same commercial acquisition methods that make it difficult for companies like Anthropic to enforce use restrictions also make it difficult for the government to secure adequate transparency requirements, audit rights, data protections, and safeguards against contractor lock-in. The emphasis on speed and commercial terms is a double-edged sword: it limits both parties’ ability to impose terms that deviate from commercial defaults.

    The Anthropic dispute has focused attention on one direction of this dynamic: companies restricting the government. But the more consequential governance failure runs in the opposite direction: the government’s inability to secure the protections it needs when buying AI through commercial pathways that were not designed for technologies this complex and this consequential.

    The government’s punitive response to Anthropic compounds this problem. If the consequence of negotiating aggressively with the government is being designated a supply chain risk—a mechanism more commonly associated with foreign adversary threats—companies have strong incentives to simply accept whatever terms the government demands. OpenAI itself said it does not believe the supply chain risk designation should have been applied to Anthropic. That may lead to faster procurement, but it will yield worse governance outcomes. Companies that are afraid to negotiate are companies that will not push back when the government’s proposed terms are inadequate for either party.

    The question the public debate should be asking is not whether AI companies have the right to tell the Pentagon what to do. They do, within the limits set by the acquisition pathway, the contract type, and the terms the parties negotiate. The question is whether the government’s current approach to AI procurement produces contracts that adequately protect the public interest. Based on the evidence, the answer is no. The Anthropic-Pentagon dispute, for all the attention it has received, is a symptom of that deeper problem, not its cause.

  • Why a Wedding Cake? Mapping AI’s Hidden Procurement Supply Chain


    Whenever I speak to groups about AI procurement, I always start by asking: “Who here has heard of the AI tech stack?” Maybe one person raises their hand. On rare occasions, a handful of people do.

    It’s understandable. GenAI adoption is relatively new, and most people don’t see beyond their preferred AI chat interface. Yet globally, particularly in the United States, governments are rushing to buy and deploy AI systems before stakeholders understand what they’re buying. This is what my new article calls “buying blind.”

    Scope note: This blog post focuses on machine learning and generative AI systems (such as ChatGPT, Claude, and other large language models)—systems that rely on foundation models, training data, and the layered infrastructure described by the wedding cake framework.

    Here’s what I’ve learned from teaching procurement law over the past 20 years: when dealing with complex subjects, analogies help new learners bridge the gap between the abstract and the concrete. That’s why I use a wedding cake to explain the AI tech stack in my article “Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement.” The cake’s tiers, stand, and frosting reflect the AI supply chain’s complex ecosystem. Each tier represents a layer of the AI supply chain and a category of procurement risk that most current federal contracts don’t consistently address.

    What the Wedding Cake Layers Represent

    The wedding cake isn’t a technical diagram—it’s a risk identification tool designed for procurement professionals, not technologists. I made specific choices (like combining foundation models with customization layers to illustrate conflicts of interest) that emphasize where corruption risks emerge rather than technical precision. Each “layer” represents categories of procurement risk that current contracts don’t consistently account for. If you only ever see the button you push—which is increasingly common in the federal space, where most AI applications are licensed as a service—you’re missing the risks that originate in the lower tiers and cascade upward through the entire system.

    Cake Stand: Infrastructure

    This is the foundation supporting the entire cake: semiconductors, AI chips, and cloud platforms for data storage and processing. Extreme market concentration begins here—NVIDIA dominates the advanced AI chip market, TSMC leads production of the most advanced semiconductors, and three providers (Amazon, Microsoft, Google) collectively account for a majority of the global cloud market. The risk: Agencies believe they’re contracting with one vendor without understanding the concentrated infrastructure dependencies underneath, creating lock-in and jurisdictional exposure they never evaluated.

    Tier 1: Foundation Model & Customization

    This is the AI model itself (trained on massive datasets), which can be customized for government use through fine-tuning, prompt engineering, and retrieval systems. I grouped these layers to illustrate potential conflicts of interest: for example, when contractors customize models using their own methodologies and terminology, then compete for contracts evaluated by tools built on those customizations. The risk: Agencies inherit biased training data or embedded conflicts of interest they cannot see downstream, because they have only licensed an AI application and have limited visibility into the AI supply chain.

    Tier 2: Applications & Integration

    These user-facing tools, such as contract clause review applications, anomaly detectors, solicitation drafting tools, and document analyzers, connect humans to AI models. Acquisition professionals see only the polished interface and seamless functionality during demos, while embedded dependencies, third-party components, and subcontractor relationships remain invisible. The risk: lock-in through technical integration, vendor control of competitively sensitive information, and security breaches from undisclosed supply chain entities, all of which current contractual practices do not adequately capture.

    Tier 3: Human Oversight & Accountability

    The essential review step in which government employees verify, question, and, if necessary, override AI outputs before they influence decisions. What exists instead: agency decision-making prone to automation bias and rubber-stamping, particularly in agencies that lack formal requirements for documented human judgment. The risk: AI influences procurement decisions without meaningful human oversight. When challenged, agencies cannot prove that a qualified person reviewed the AI’s reasoning rather than simply adopting its output.

    Frosting: Governance & Security

    The laws, contract terms, policies, and safeguards that should wrap around the entire stack—defining permissible use, verification rights, testing requirements, and cybersecurity controls. What exists instead: “regulation by contract,” which expects individual contracting officers to negotiate adequate protections without standardized requirements, binding rules, or the workforce capacity to implement them consistently. The risk: Vulnerabilities at any layer cascade through the system because the governance wrapper protecting integrity was applied inconsistently, inadequately, or not at all.
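
    Because the framework is a risk-identification tool rather than a technical diagram, it can even be written down as data. The sketch below is a purely illustrative Python encoding under my own naming (Layer, WEDDING_CAKE, checklist); the layer names and risk summaries come from the tiers above, but the structure itself is a hypothetical convenience, not anything from the article.

```python
# Illustrative only: the wedding-cake tiers encoded as a simple
# pre-award risk checklist. Layer names and risk summaries follow the
# post above; the data structure itself is a hypothetical convenience.

from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    components: list[str]
    key_risk: str


WEDDING_CAKE = [
    Layer("Cake Stand: Infrastructure",
          ["semiconductors", "AI chips", "cloud platforms"],
          "unseen concentration, lock-in, jurisdictional exposure"),
    Layer("Tier 1: Foundation Model & Customization",
          ["foundation model", "fine-tuning", "prompts", "retrieval"],
          "inherited bias and embedded conflicts of interest"),
    Layer("Tier 2: Applications & Integration",
          ["user-facing tools", "APIs", "third-party components"],
          "lock-in, vendor control of sensitive data, hidden subcontractors"),
    Layer("Tier 3: Human Oversight & Accountability",
          ["documented human review", "override authority"],
          "automation bias and rubber-stamping"),
    Layer("Frosting: Governance & Security",
          ["contract terms", "verification rights", "testing", "cyber controls"],
          "inconsistent wrapper lets lower-layer failures cascade"),
]


def checklist() -> None:
    """Print one diligence prompt per layer, bottom of the cake first."""
    for layer in WEDDING_CAKE:
        print(f"{layer.name}: before award, document how the contract "
              f"addresses {layer.key_risk} "
              f"(components: {', '.join(layer.components)})")


if __name__ == "__main__":
    checklist()
```

    Reading it bottom-up mirrors the cake itself: risks originate in the stand and cascade upward through every tier above it.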

    Why This Matters Today: The AI Literacy Crisis             

    The wedding cake framework reveals hidden risks. But understanding what to look for doesn’t help if agencies lack the capacity to act.

    GAO has repeatedly flagged workforce capability and implementation challenges that affect AI acquisition and oversight. Acquisition professionals can’t ask the right questions if they don’t understand the complex ecosystem that supports their tool. Current administration policies—the Plan and recent DoD AI strategy—place heavy emphasis on accelerating AI adoption and reducing barriers, compressing the already insufficient time available for diligence unless agencies deliberately account for it during the procurement process.

    The combination is dangerous: accelerated acquisition + literacy gaps + inadequate oversight = the “buying blind” crisis I document in my article.

    Why the Wedding Cake Helps Government Procurement Stakeholders

    For acquisition professionals: The framework prompts concrete questions before you award contracts. What’s in the training data, and how do we verify vendor claims about testing and data governance? Who are all the vendors in the supply chain? Given market concentration, what are our realistic alternatives if this relationship fails? Where is our data stored? What happens when this contract ends?

    Without a visual model of what lies beneath the surface, these questions never get asked. The wedding cake makes these layers visible enough for acquisition professionals to begin asking the questions necessary to protect agencies’ interests and ensure mission success.

    For government contracts attorneys: The layers reveal gaps in risk allocation. Traditional terms address surface-level functionality, but many risks reside in layers the solicitation never mentions: training data bias, supply chain security, infrastructure dependencies, and lock-in through market concentration. When disputes arise, contractual language may be found inadequate because it addresses only the top tier of the cake. Is the failure in the training data, the foundation model, the supply chain, or the infrastructure? Contracts that address only “the button” can’t resolve disputes about underlying components.

    The coming wave of litigation—over biased outputs, supply chain failures, and security breaches—will require tracing problems through a complex ecosystem that current contracts don’t map.

    AI Procurement by Tier: Sample Questions & Contractual Terms

    For each layer of the AI stack, this table identifies baseline questions and contract terms. These are starting points, not exhaustive checklists. Every procurement requires additional diligence tailored to your use case, data sensitivity, and regulatory environment. Because leverage varies by jurisdiction and buying pathway, some protections may be more difficult to negotiate, particularly in commercial SaaS acquisitions. I map these buying pathways and governance constraints in Buying Blind. Future posts will explore additional protections and diligence strategies in depth.

    Important: If you are an acquisition professional, you must consult with legal counsel, technical experts, and security teams to develop comprehensive evaluation criteria for your acquisition. This is not legal advice and does not substitute for consultation with qualified counsel.

    Cake Stand: Infrastructure
      • Sample questions to ask: Where is data stored/processed (jurisdiction), and what security environment are we using? What happens during an outage or incident?
      • Sample contractual terms: Data residency/processing locations; baseline security controls; continuity and disaster recovery; incident notice and cooperation.

    Tier 1: Foundation model + customization
      • Sample questions to ask: Which model/provider/version? Who controls updates? What independent testing exists for bias and performance? Who performed customization, and what relationships do they have to potential competitors?
      • Sample contractual terms: Version/change control; vendor assurance package; testing/validation rights; limits on agency data use for training; disclosure of customization entities and their relationships to potential offerors.

    Tier 2: Applications + integration
      • Sample questions to ask: What dependencies are embedded (APIs, third-party components, subcontractors)? What data is collected and retained during use (including logs and procurement-sensitive information), and who can access it? Does any supply-chain entity also compete for contracts that this tool might support?
      • Sample contractual terms: Supply-chain disclosure and change control; integration documentation; data minimization and retention limits; exportability/interoperability; procurement-sensitive handling restrictions; organizational conflict of interest disclosure and mitigation plans; log retention/export.

    Tier 3: Human oversight + accountability
      • Sample questions to ask: Who reviews outputs, and with what override authority? What documentation proves meaningful review? Which uses are authorized versus prohibited in the acquisition lifecycle (drafting, market research, evaluation support)?
      • Sample contractual terms: Required human review where appropriate; audit logs; defined accountability; training and escalation paths; prohibited uses where appropriate.

    Frosting: Governance + security
      • Sample questions to ask: What independent verification backs vendor claims (attestations, audits, testing)? What safeguards wrap the full stack? What records/logs exist to support oversight and challenges?
      • Sample contractual terms: Standard assurance terms; audit/testing rights (risk-based); security requirements; ongoing security reporting; organizational conflict of interest disclosure and mitigation; log retention and production support.

    Market structure (all layers)
      • Sample questions to ask: Who controls choke points (cloud, model access, specialized compute)? What is the real switching cost, including data egress and usage fees?
      • Sample contractual terms: Modularity; termination and transition assistance; avoidance of exclusivity; pricing protections beyond promotional periods; egress fee disclosure/limits.

    Testing (all layers)
      • Sample questions to ask: How do we validate performance and security at each layer, not just the front-end interface? Who conducts testing (agency, vendor, independent third parties), including after updates?
      • Sample contractual terms: Acceptance criteria; evaluation plan and cadence; remedies for failures; test access (including post-update regression where appropriate).

    Exit/transition (all layers)
      • Sample questions to ask: How do we extract our data and the materials needed to migrate (prompts, retrieval index, customizations)? Is switching realistic given market concentration? What artifacts must be exportable/reusable?
      • Sample contractual terms: Data export formats/timelines; transition services; post-termination deletion certification; portability commitments for data and configurations; reuse rights for deliverables created under this contract.

  • Buying Blind: A Framework for AI Procurement Integrity


    The United States is accelerating toward a corruption crisis of its own making. In its race to rapidly acquire artificial intelligence (AI), current policy risks undermining longstanding procurement integrity safeguards.

    In my new article, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement (forthcoming, Public Contract Law Journal, Vol. 55, No. 2 (Winter 2026)), I argue that AI does not merely amplify familiar corruption risks; it also creates new integrity vulnerabilities that existing oversight mechanisms are not calibrated to address.

    WHY THIS MATTERS NOW

    How the government acquires AI today determines the integrity vulnerabilities it will inherit tomorrow. Missing audit rights, weak testing requirements, and opaque supply chains are acquisition-phase choices that become significant corruption risks as AI expands across the federal procurement system.

    Recent federal AI policy has accelerated adoption while narrowing regulatory oversight, leaving “regulation by contract” as the de facto governance model—even as agencies sign deals without adequate protections against corruption and integrity risks that will be exponentially harder to reverse once procurement dependencies harden.

    The article challenges a dangerous assumption driving current federal AI policy: that governance impedes innovation. The evidence demonstrates the opposite. Governance encourages competition by preventing contractor lock-in. It safeguards innovation by promoting fair processes resistant to manipulation. And it enables sustainable AI adoption by building the institutional trust necessary for continued technological deployment. When oversight is treated as secondary to innovation, the procurement system itself becomes the risk.


    What This Article Does

    The article serves dual purposes:

    • For scholars and policymakers, it establishes analytical foundations for understanding how AI introduces novel corruption vulnerabilities that existing frameworks inadequately address.
    • For acquisition professionals and agency counsel navigating AI procurement today, it offers practical risk-mitigation strategies implemented within existing authorities.

    Foundations

    • Introduces the U.S. Government Procurement Anti-Corruption Ecosystem and its core pillars.
    • Surveys the evolving statutory, regulatory, and sub-regulatory landscape governing federal AI acquisition.
    • Provides a “wedding cake” schematic of the AI technology stack, intentionally simplified to help acquisition professionals, counsel, and policymakers identify corruption and procurement integrity risks across the AI supply chain.
    [Figure: the wedding cake, a conceptual visualization of the AI tech stack.]

    How Agencies Buy AI

    • Maps federal AI acquisition pathways and explains how the selected method shapes the government’s ability to secure enforceable governance terms.
    • Examines recent consolidation efforts, including GSA’s OneGov “AI deals” that offer leading AI platforms at promotional prices, raising significant buy-in and vendor capture concerns.

    The Risk Landscape

    • Procurement Corruption Risks
      • Organizational conflicts of interest (including novel “foundation model conflicts” arising from AI supply chain complexity)
      • Fraud and AI-enabled deception (document fabrication, deepfakes, algorithmic concealment)
      • Supply chain manipulation (data poisoning, evasion attacks, and prompt injection) that can distort competition, evaluation, and contract performance
    • Systemic Procurement Integrity Risks
      • Vendor lock-in and switching costs
      • Promotional pricing strategies that front-load adoption and back-load recoupment
      • Automation bias compounded by workforce capacity gaps
      • Opacity and limited auditability
      • Technical failures (incumbency bias, hallucinations) that become exploitable in evaluation, protest defense, and post-award oversight
    • Each category is analyzed using acquisition fact patterns and practical hypotheticals to show how these risks materialize throughout the acquisition lifecycle.

    Acquisition Gaps Become Operational Risks

    • Identifies the specific acquisition terms that determine deployment risk: audit rights, testing and evaluation requirements, documentation and disclosure obligations, data and model access, and remedies for nonconformance.
    • Explains how acquisition-phase gaps materialize at deployment once AI is embedded in procurement processes through automation bias, diminished human review mandates and capacity, opaque model updates, and degraded traceability for accountability and investigations.

    What Agencies Can Do Now

    • Provides a prioritized safeguards menu that agencies can implement now under existing authority, distinguishing baseline protections from heightened requirements for high-risk use cases:
      • Applying existing OCI mitigation frameworks to AI supply chains.
      • Combating AI-generated fraud through detection infrastructure, enhanced disclosure requirements, and due process protections.
      • Negotiating anti-lock-in protections: portability, egress fee caps, and data control provisions.
      • Proposing enhanced transparency mechanisms, including an AI-BOM (AI Bill of Materials) disclosure model with tiered obligations, so documentation and verification requirements scale with risk (a minimal illustrative sketch appears after this list).
      • Implementing red-teaming and system integrity testing proportionate to risk.
      • Developing an AI-literate acquisition workforce through intentional capacity-building initiatives.
      • Establishing AI Integrity Advocates modeled on the Competition Advocate structure, while leveraging existing technology-focused oversight infrastructure.
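
    To make the AI-BOM idea concrete, here is a minimal sketch of what a tiered disclosure record might look like. It is my illustration under assumed names (AIBOMEntry, required_disclosures), not a schema drawn from the article or any standard; the only point it demonstrates is that documentation obligations can scale with use-case risk.

```python
# Hypothetical AI-BOM (AI Bill of Materials) record with tiered
# disclosure. Field names and tiers are illustrative, not a schema
# drawn from the article or any standard.

from dataclasses import dataclass


@dataclass
class AIBOMEntry:
    component: str  # e.g., model, dataset, API dependency
    supplier: str
    version: str


BASELINE_FIELDS = ["components", "suppliers", "versions"]
HIGH_RISK_FIELDS = BASELINE_FIELDS + [
    "training-data provenance",
    "independent test results",
    "subcontractor relationships",
]


def required_disclosures(use_case_risk: str) -> list[str]:
    """Documentation scales with risk: high-risk uses owe more detail."""
    return HIGH_RISK_FIELDS if use_case_risk == "high" else BASELINE_FIELDS


if __name__ == "__main__":
    bom = [
        AIBOMEntry("foundation model", "ModelCo (hypothetical)", "v4.1"),
        AIBOMEntry("retrieval index", "IntegratorX (hypothetical)", "2026-01"),
    ]
    print("Disclose:", required_disclosures("high"))
    for entry in bom:
        print(f"  {entry.component} <- {entry.supplier} @ {entry.version}")
```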