On February 26, 2026, Anthropic told the Pentagon no. Two phrases — mass surveillance and autonomous weapons — sat inside a set of guardrails the company refused to strip from Claude, its flagship AI model. The Department of Defense wanted those guardrails gone. Anthropic's CEO Dario Amodei published a public statement instead, calling the demands incompatible with democratic values. Within days, the Pentagon had branded Anthropic a "supply chain risk" — a designation previously reserved for foreign adversaries like Huawei and Kaspersky.1 Within weeks, OpenAI had signed its own deal with the military.2
A federal judge blocked the blacklisting. Judge Rita Lin called it "classic illegal First Amendment retaliation," rejecting what she termed the "Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."3
The internet cheered. ChatGPT uninstalls surged 295%.4 Anthropic's revenue kept climbing.5 Caitlin Kalinowski, a senior member of OpenAI's robotics team, resigned, saying the guardrails around certain AI uses "were not sufficiently defined" before the Pentagon agreement was announced.6 The narrative wrote itself: principled company stands up to military overreach, markets reward integrity, the system works.
Except the system didn't work. The system revealed that it doesn't exist.
The Governance Vacuum
Here's what actually happened: the United States has no statutory framework for governing how AI is used in warfare. None. No law specifies what AI can target, what requires human authorization, what constitutes acceptable surveillance, or who decides any of this. So the rules are being set by bilateral vendor contracts between the Pentagon and whichever company picks up the phone.7
The Lawfare Institute nailed the structural problem: "the rules governing the military's use of AI are not derived from statutes or regulations, but from bilateral agreements between the government and individual vendors" — agreements that "were not designed to provide the democratic accountability, public deliberation, and institutional durability that statutes provide."7
Military AI policy in the world's most powerful democracy is currently set through procurement negotiations. Not legislation. Not democratic debate. Contracts.
This is the context everyone celebrating Anthropic's stand seems determined to ignore. The question is not whether Anthropic was right — it almost certainly was. The question is whether we want to live in a system where the guardrails on lethal AI depend on which CEO happens to have principles this quarter.
The Substitution Problem
When Anthropic refused, the Pentagon didn't pause to reconsider its position. It called OpenAI. The Electronic Frontier Foundation reviewed OpenAI's resulting contract language and noted that lawyers call such terms "weasel words" — language that "creates ambiguity that protects one side or another from real accountability for contract violations."2 The EFF pointed out that OpenAI's agreement relied on the assumption that federal agencies would simply follow the law. For anyone who remembers the name Edward Snowden, that is not exactly a bulletproof safeguard.
The net result: the Pentagon got its AI. But instead of Anthropic's hard guardrails, it got OpenAI's soft language. Corporate refusal didn't prevent anything; it just ensured the military partnered with whoever offered the least resistance.
This is what economists call adverse selection, and it's the most damning structural argument against relying on corporate conscience as a governance mechanism. The principled actor exits. The unprincipled actor fills the gap. The outcome is worse than if the principled actor had stayed and negotiated.
Google traced this same arc years earlier. In 2018, more than 3,100 Google employees signed a letter telling CEO Sundar Pichai that "Google should not be in the business of war," and the company declined to renew its Project Maven contract with the Pentagon.8 By 2025, Google had removed its AI ethics pledges entirely, citing "a global competition taking place for AI leadership within an increasingly complex geopolitical landscape." Corporate guardrails are temporary. Market conditions are permanent.
The Democratic Deficit on Both Sides
Amodei's public statement contained a line that cut deeper than he may have intended: "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."1 He was pointing out the Pentagon's logical incoherence — you can't simultaneously claim a company is too dangerous to work with and too important to be allowed to refuse.
But there's a symmetrical incoherence in the pro-corporate-autonomy position that nobody wants to talk about.
The Defense Production Act has given the U.S. president authority to compel private companies to prioritize national defense contracts since 1950 — reauthorized by Congress repeatedly over seven decades.9 Under 28 U.S.C. Section 1498, the federal government can use any patented invention without the owner's consent, paying only "reasonable and entire compensation."10 The Supreme Court's steel seizure decision, Youngstown Sheet & Tube Co. v. Sawyer, struck down President Truman's unilateral seizure of the steel mills — but the opinions made clear that Congress would have had the constitutional authority to compel production.11
The legal architecture for government override is extensive, tested, and — crucially — democratic. It runs through Congress.
The question of whether telecoms should cooperate with surveillance was ultimately decided not by AT&T's board but by elected representatives who passed the FISA Amendments Act of 2008, granting retroactive immunity to companies that had participated in NSA programs.12 You can hate that law. Many do. But it was a law — debated, voted on, challengeable in court. That is categorically different from a procurement negotiation in a conference room.
When we celebrate corporate refusal as the primary check on military AI, we are accepting — perhaps without realizing it — that the most consequential technology governance decisions of the century will be made by whichever billionaire happens to control the servers. If Dario Amodei has good values, we get good guardrails. If the next CEO doesn't, we get "weasel words." This is not a system. It is a coin flip dressed up as principle.
The IBM Problem
The counterargument writes itself: sometimes the government is the one you need protection from. And history makes this case with brutal clarity.
IBM didn't refuse. Under CEO Thomas Watson Sr., operating from his Madison Avenue office, IBM "knowingly organized all six phases of the Holocaust: identification, exclusion, confiscation, ghettoization, deportation, and even extermination," as documented in Edwin Black's IBM and the Holocaust.13 The company leased its tabulating machines, serviced them on-site at concentration camps, and supplied the custom punch cards without which the machines could not function.
Everything IBM did was legal under German law. Legality was not a sufficient ethical test then. It is not now.
The Manhattan Project tells the same story from a different angle. Leo Szilard, the physicist who first conceived the nuclear chain reaction, circulated a petition among scientists urging that the bomb not be used on a civilian population. General Groves made every effort to keep the petition from reaching President Truman. By the time Groves finally forwarded it, Stimson's assistant simply stamped it "Secret" and filed it away. It never reached the president.14 The people who understood the technology best were silenced — and we've spent eighty years reckoning with the consequences.
The Bernstein v. DOJ ruling established in the 1990s that software source code is speech protected by the First Amendment.15 If code is speech, then an AI model's guardrails — baked into its training, its constitution, its refusal behaviors — are expressive choices. Compelling a company to strip those guardrails is compelling speech. The First Amendment framework here isn't a stretch; it's a straight line from three decades of jurisprudence.
So the corporate veto has real constitutional grounding and genuine historical justification. IBM's complicity in genocide. Scientists silenced on nuclear weapons. Telecoms that cooperated with warrantless surveillance. The track record of "just comply with the government" is, to put it gently, mixed.
The Real Question Nobody Is Asking
Both sides of this debate are arguing over who should hold a power that shouldn't be unilateral in the first place.
The pro-government position says: elected officials should decide how AI is used in warfare. Correct — but Congress has decided nothing. There is no AI weapons statute. No autonomous weapons framework. No surveillance threshold legislation. The democratic institutions that are supposed to govern this space have produced exactly zero binding rules. The Pentagon isn't acting on congressional authority; it's acting in a congressional vacuum, using procurement contracts as shadow legislation.
The pro-corporate position says: companies understand their technology's risks better than procurement officials. Also correct — but corporate ethics are contingent on market conditions, leadership changes, and competitive pressure. Google's AI ethics principles lasted seven years. OpenAI's military use ban lasted until January 2024. Anthropic's own flagship safety pledge — the binding commitment to halt development if safety measures fell behind model capability — was quietly revised in February 2026, with Chief Science Officer Jared Kaplan explaining that "we didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead."16
Every major AI company that has publicly committed to safety guardrails has eventually weakened or abandoned them.
The major AI companies collectively spent more than $100 million lobbying the U.S. government in 2025 alone — Meta leading at $26.29 million, more than any company in any industry.17 Corporate ethical autonomy and corporate political influence are not opposing forces; they are exercised by the same industry, in the same quarters, toward the same strategic ends. The idea that these companies are disinterested ethical actors rather than sophisticated political operators requires a credulity that the lobbying data does not support.
What Would Actually Work
The honest answer is that neither corporate refusal nor government coercion solves the underlying problem. Both are improvisations in the absence of law.
What's needed is straightforward — and, precisely because it is straightforward, it is politically difficult. Congress needs to pass legislation that establishes clear, enforceable limits on military AI: what requires human authorization, what constitutes prohibited surveillance, what thresholds trigger independent review. Not vendor contracts. Not corporate policies. Law — subject to democratic debate, judicial review, and amendment as the technology evolves.
The Supreme Court established in Board of County Commissioners v. Umbehr that the government cannot retaliate against contractors for exercising free speech.18 Judge Lin correctly applied that principle to Anthropic. These are essential protections. But they are protections against government abuse of process — they say nothing about the substantive question of what AI should and should not do in warfare.
Anthropic drew a line. That line held, this time, because one CEO had the conviction and the financial cushion to absorb the consequences.
Governance that depends on individual courage is not governance. It is luck.
And the next time the question arises — with a different company, a different CEO, a different competitive landscape — luck may not be enough.
The uncomfortable truth is that we are watching a structural power struggle masquerade as an ethics debate. The Pentagon wants unconstrained access to AI capability. AI companies want to maintain market positioning through visible ethical commitments. Both sides are operating rationally within a system that has no rules, and both will continue to improvise — sometimes admirably, sometimes not — until someone builds the actual institution that should have existed before any of this started.
That institution is Congress. And Congress, as usual, is nowhere to be found.