Policy

Pentagon Told Anthropic They Were 'Nearly Aligned' — A Week After Trump Declared the Relationship Kaput

New court filings reveal the government was privately negotiating while publicly calling Anthropic a national security threat. The timeline doesn't add up — and Tuesday's hearing could set precedent for AI-gov relations.

2026-03-21 Source: TechCrunch

In a legal filing late Friday evening, Anthropic dropped a bombshell that threatens to unravel the Pentagon's case against the AI safety company: less than a week after President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic, a top Pentagon official privately emailed CEO Dario Amodei to say the two sides were "very close" on resolving the exact issues now being cited as evidence that Anthropic poses an "unacceptable risk to national security."

The revelation, contained in sworn declarations from Anthropic's Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy, exposes a yawning gap between the Pentagon's public posture and its private negotiations. It also raises serious questions about whether the government's unprecedented supply-chain risk designation — the first ever applied to an American company — was genuinely motivated by national security concerns, or whether it was retaliation for Anthropic's public stance on AI safety and military use of its technology.

The hearing before Judge Rita Lin in San Francisco this coming Tuesday, March 24, now carries stakes that extend far beyond Anthropic's $200 million defense contract. At issue is a fundamental question about the relationship between AI companies and government power: Can the Pentagon punish an American AI company for refusing to allow unrestricted military use of its technology? And if the answer is yes, what does that mean for every other AI company that might want to set ethical boundaries?

The Timeline That Doesn't Add Up

To understand why Anthropic's Friday filing is so explosive, you need to follow the timeline carefully, because the sequence of events suggests something very different from the government's narrative of a straightforward national security determination.

Late February: President Trump and Defense Secretary Hegseth publicly declare they are "cutting ties" with Anthropic after the company refused to allow unrestricted military use of its Claude AI models. The public statements are aggressive, suggesting Anthropic is unwilling to support national defense.

February 24: CEO Dario Amodei sits down with Hegseth and Pentagon Under Secretary Emil Michael for what Heck describes in her declaration as negotiations over the company's "red lines" — specifically its positions on autonomous weapons and mass surveillance of Americans. According to Heck, who was present at the meeting, at no point did Anthropic demand an "approval role over military operations," contrary to what the government would later claim.

March 3: The Pentagon formally designates Anthropic as a supply-chain risk — the first time this classification has ever been applied to an American company. The designation effectively bars Anthropic from defense contracts and sends a signal to other government agencies that the company is untrustworthy.

March 4: The day after the designation is finalized, Under Secretary Michael emails Amodei. According to Heck's declaration, which attaches the email as an exhibit, Michael writes that the two sides are "very close" on the two issues the government now cites as evidence of Anthropic's national security threat: autonomous weapons and mass surveillance.

Think about that for a moment. The government's own official, the day after declaring Anthropic an unacceptable risk to national security, privately says they're almost in agreement on the very issues being used to justify that declaration.

March 5: Amodei publishes a statement saying Anthropic has been having "productive conversations" with the Pentagon. He doesn't mention the email, presumably because he still believes negotiations are ongoing.

March 6: Under Secretary Michael posts on X that "there is no active Department of War negotiation with Anthropic." A week after that, he tells CNBC there is "no chance" of renewed talks.

The contradiction is stark. On March 4, Michael is saying they're "very close." By March 6, he's saying there are no active negotiations. And a week later, he's declaring there's "no chance" of renewed talks. Meanwhile, the government's court filings continue to cite Anthropic's positions on autonomous weapons and mass surveillance as evidence of its unacceptable risk — the same positions Michael said they were "very close" to aligning on.

Heck's declaration stops short of explicitly accusing the government of using the designation as a bargaining chip. But the timeline she lays out leaves the question hanging in the air. If Anthropic's stance on these issues made it a national security threat, why was the Pentagon's own under secretary saying they were nearly aligned on exactly those issues the day after the designation was finalized?

The Technical Claims Fall Apart

The timeline isn't the only problem with the government's case. Anthropic's filings also take aim at the technical claims underpinning the national security designation — and Ramasamy's declaration suggests the Pentagon may not fully understand how its own classified systems work.

The government's central concern, as laid out in its court filings, is that Anthropic could theoretically interfere with military operations by disabling its technology or altering how it behaves mid-operation. This "operational veto" power, the government argues, makes Anthropic an unacceptable risk to national security.

There's just one problem: according to Ramasamy, it's technically impossible.

Ramasamy is not a random Anthropic employee making technical assertions. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he built the team that brought Claude models into national security settings, including the $200 million defense contract announced last summer. He knows how these systems work because he built them.

In his sworn declaration, Ramasamy explains that once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it. None. There is no remote kill switch, no backdoor, no mechanism to push unauthorized updates. Any change to the model would require the Pentagon's explicit approval and action to install. The idea that Anthropic could unilaterally disable or alter the technology mid-operation is, in Ramasamy's telling, a fiction.

"Anthropic can't even see what government users are typing into the system, let alone extract that data," Ramasamy writes. The company has no visibility into operations, no ability to intervene, and no mechanism to alter behavior without going through the same approval processes as any other software vendor.

If Ramasamy is correct — and his credentials suggest he knows what he's talking about — then the government's central claim about Anthropic's national security risk is based on a technical misunderstanding. The Pentagon is worried about a capability Anthropic doesn't actually have.

The government also raised concerns about Anthropic's hiring of foreign nationals as a potential security risk. Ramasamy addresses this too, noting that Anthropic employees have undergone U.S. government security clearance vetting — the same background check process required for access to classified information. He adds, perhaps pointedly, that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.

In other words: if cleared personnel are a security risk, every defense contractor in America is compromised. And if they're not, then the government's concern about Anthropic's hiring practices is either misplaced or pretextual.

The First Amendment Question

Underneath the technical disputes and timeline contradictions lies a more fundamental question: Can the government punish an AI company for its views?

Anthropic's lawsuit argues that the supply-chain risk designation amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The company has been vocal about its concerns regarding autonomous weapons and mass surveillance — positions that some defense officials apparently find inconvenient.

The government, in a 40-page filing earlier this week, rejected this framing entirely. The designation was a straightforward national security call, it argues, not punishment for protected speech: Anthropic's refusal to allow all lawful military uses of its technology was a business decision, and the designation reflects genuine security concerns rather than retaliation for the company's views.

But the timeline Heck lays out complicates this narrative. If the designation was genuinely based on Anthropic's unacceptable risk to national security, why was the Pentagon's own under secretary privately saying they were "very close" to agreement on the exact issues being cited as security threats? Why the rush to finalize the designation on March 3, followed immediately by an email on March 4 suggesting negotiations were nearly successful?

And if the government's technical claims about Anthropic's "operational veto" power are based on misunderstandings — as Ramasamy's declaration suggests — then what exactly is the national security justification for the designation?

These questions matter beyond Anthropic's $200 million contract. If the government can designate an American AI company as a supply-chain risk based on disputed technical claims and convenient timing, every AI company that wants to set ethical boundaries is potentially at risk. The designation isn't just a contract issue — it's a reputational death sentence that signals to other government agencies, private customers, and international partners that the company cannot be trusted.

That's why Tuesday's hearing before Judge Rita Lin matters so much. The court won't just be deciding whether Anthropic gets its defense contract back. It will be setting precedent for how AI companies can relate to government power, what boundaries they can set, and what happens when they refuse to give the government everything it wants.

The Broader Context: AI Safety vs Government Power

The Anthropic case arrives at a pivotal moment for AI governance. Just this week, the Trump administration unveiled its AI framework, which emphasizes innovation over regulation and places child safety responsibility on parents rather than platforms. The framework also targets state-level AI laws, asserting federal preemption that would override stricter regulations in places like California and New York.

Against this backdrop, the Anthropic dispute looks less like a routine contract disagreement and more like a test of how the federal government will treat AI companies that don't fall in line. Anthropic is not some fringe player — it's one of the most well-funded AI labs in the world, with a $200 million Pentagon contract and partnerships across the defense ecosystem. If the government can do this to Anthropic, what happens to smaller AI companies that try to set boundaries?

The case also highlights a tension that's only going to grow more acute as AI becomes more powerful. AI companies are increasingly being asked to make decisions about what their technology should and shouldn't be used for. Some uses — like autonomous weapons — raise obvious ethical concerns. Others — like mass surveillance of Americans — touch on constitutional values. Companies that take these concerns seriously are going to clash with government agencies that want unrestricted access to the most powerful tools available.

Anthropic is betting that the courts will protect its right to set those boundaries. The government is betting that national security concerns trump corporate conscience. Tuesday's hearing will be the first major test of which vision prevails.

What to Watch at Tuesday's Hearing

The hearing before Judge Rita Lin on March 24 will be the first opportunity for both sides to present their arguments in open court. Here's what to watch for:

The Email: Will the government try to explain Under Secretary Michael's March 4 email? Or will they argue that it's irrelevant to the national security determination? The email is attached as an exhibit to Heck's declaration, so the court will see it either way. The question is how the government frames it.

Technical Claims: Will the government defend its claims about Anthropic's "operational veto" power? Or will they shift to other justifications for the designation? Ramasamy's declaration is detailed and technical — the government will need to respond specifically to his claims about air-gapped systems and access controls.

First Amendment: How does Judge Lin view the free speech claims? The First Amendment argument is novel in this context — courts haven't previously had to decide whether an AI company's safety stance counts as protected speech. Lin's questions during the hearing may signal how she's thinking about this issue.

Precedent: Does either side address the precedent this case would set? Anthropic has an obvious interest in framing this as a case about government overreach that could affect the entire AI industry. The government has an interest in keeping the focus narrow — this is just about one company's contract, not about broader principles.

Timeline: Does the court focus on the timeline contradictions? Heck's declaration lays out a sequence that's hard to reconcile with the government's public statements. If Judge Lin finds the timeline suspicious, it could undermine the government's credibility on other issues.

🔥 Our Hot Take: The Government Overplayed Its Hand

Let's call this what it is: a government agency that picked a fight with an AI company over safety boundaries, and now finds itself defending a case that doesn't hold up to scrutiny.

The Pentagon's problem isn't that Anthropic refused to cooperate with national defense. Anthropic has a $200 million defense contract. Its head of public sector is a former AWS executive with six years of experience deploying AI in classified environments. Its head of policy is a former National Security Council official. This is not a company that's hostile to working with government.

The Pentagon's problem is that Anthropic wouldn't give them everything they wanted without restrictions. The company has ethical lines it won't cross — autonomous weapons and mass surveillance of Americans. And when Anthropic refused to move those lines, the government reached for the nuclear option: a supply-chain risk designation that effectively blacklists the company from future contracts.

The problem with nuclear options is that they're hard to justify in court. The government has to explain why Anthropic poses an unacceptable risk to national security. And based on the filings so far, their explanation relies on technical claims that appear to be wrong and a timeline that suggests they were still negotiating when they pulled the trigger on the designation.

Under Secretary Michael's March 4 email is the smoking gun here. If Anthropic's positions on autonomous weapons and mass surveillance were truly unacceptable from a national security perspective, why was Michael saying they were "very close" on those exact issues? The government can't have it both ways. Either these positions are disqualifying, in which case the Pentagon never should have been negotiating at all, or they're subject to negotiation, in which case the designation looks like retaliation for Anthropic's refusal to give the government everything it wanted.

And then there's the First Amendment issue. The government is arguing that Anthropic's safety stance is a "business decision," not protected speech. But Anthropic's safety stance is literally the company's public position on how AI should and shouldn't be used. It's speech about public policy, ethics, and the future of technology. If that's not protected by the First Amendment, then neither is any corporate statement about social responsibility, environmental policy, or ethical business practices.

The government picked this fight thinking Anthropic would back down. Instead, Anthropic went to court with sworn declarations, email exhibits, and a First Amendment claim. Now the Pentagon has to defend a designation that looks increasingly like retaliation for protected speech, based on technical claims that appear to be wrong, following a timeline that contradicts its own official's statements.

Tuesday's hearing will tell us whether the courts will hold the government accountable. If Judge Lin rules for Anthropic, it will establish that AI companies can set ethical boundaries without facing government retaliation. If she rules for the government, it will signal that the Pentagon can punish any AI company that refuses to give it unrestricted access to the most powerful technology ever created.

The stakes couldn't be higher. And based on the evidence so far, the government's case is looking shakier by the hour.

What Happens Next

Regardless of how Judge Lin rules on Tuesday, this case is likely to have ripple effects across the AI industry. Here are the storylines to watch in the coming weeks:

Other AI Companies: How do OpenAI, Google DeepMind, and other major AI labs respond to the Anthropic case? If Anthropic wins, it creates cover for other companies to set ethical boundaries. If Anthropic loses, it sends a message that refusing government requests carries serious consequences.

Congressional Attention: Does Congress get involved? The intersection of AI, defense contracts, and First Amendment rights is exactly the kind of issue that generates hearings and legislation. If the Anthropic case gets enough attention, lawmakers may want to clarify the rules around AI safety and government contracts.

International Implications: How do allies view the case? If the U.S. government is punishing AI companies for setting safety boundaries, what does that mean for international cooperation on AI governance? Other countries are watching how the U.S. handles AI safety, and the Anthropic case sends signals about whether America is serious about ethical AI development.

The Appeal: Whichever way Judge Lin rules, the losing side will likely appeal. This case could end up setting precedent at the appellate level or even the Supreme Court. The question of whether AI safety advocacy is protected speech is novel and important enough to merit higher court review.

The Contract: What happens to Anthropic's $200 million defense contract? Even if Anthropic wins in court, the government could still cancel the contract on other grounds. Or Anthropic could decide that working with a government that tried to blacklist it isn't worth the trouble. The future of the contract is separate from the legal case, but the two are obviously connected.

One thing is certain: the Anthropic case is not just about one company's contract. It's about the future of AI governance, the limits of government power, and whether AI companies can set ethical boundaries without facing retaliation. Tuesday's hearing is just the beginning of a story that will shape the AI industry for years to come.
