Google Grants Pentagon Sweeping AI Access After Anthropic's Refusal
Industry·2 min read·TechCrunch

Google has agreed to let the U.S. Department of Defense use its AI on classified networks for essentially all lawful purposes — terms Anthropic refused over concerns about domestic mass surveillance and autonomous weapons.

Google has signed an agreement giving the U.S. Department of Defense access to its AI for classified networks under terms that allow essentially all lawful uses, multiple outlets reported on April 28, 2026. The deal positions Google as the third major AI provider, after OpenAI and xAI, to capitalize on a gap in Pentagon supply created when Anthropic publicly refused similar terms earlier this year.

Anthropic had pushed back against unrestricted Pentagon use of its models, asking specifically for guardrails against domestic mass surveillance and the deployment of AI in autonomous weapons systems. The Pentagon responded by designating Anthropic a "supply-chain risk," a label normally reserved for foreign adversaries. Anthropic is currently litigating that designation and is operating under an injunction that has temporarily preserved its eligibility for defense contracts while the case proceeds.

Google's contract reportedly contains language stating the company does not intend its AI to be used for domestic mass surveillance or autonomous weapons, but legal analysts cited in the reporting question whether such intent clauses are enforceable in practice. Inside Google, roughly 950 employees signed an open letter opposing the deal, echoing internal protests that derailed the company's original Project Maven contract in 2018. Google has not commented publicly on the letter and proceeded with signing.

The contract underscores how quickly AI procurement has become a flashpoint for the major labs. OpenAI and xAI have already moved aggressively to capture Pentagon classified-network workloads, and Google's entry leaves Anthropic isolated as the only frontier lab insisting on use-case restrictions. The episode also highlights a deeper split in the industry over how, and whether, to constrain frontier models in national-security settings, an argument likely to intensify as agentic AI moves from chatbots into operational decision-making.