The president ordered every federal agency to stop using Anthropic’s products, and the Pentagon declared the company a national security risk — a designation usually reserved for adversaries like China.
In an extraordinary escalation of the growing battle between the Trump administration and the tech industry, President Trump on Friday ordered every federal agency to immediately stop using products made by Anthropic, the artificial intelligence company behind the AI assistant Claude.
The move came after Anthropic refused to remove safety restrictions that prevent the military from using its AI for mass domestic surveillance or fully autonomous weapons systems.
Defense Secretary Pete Hegseth went even further, officially designating Anthropic as a “Supply-Chain Risk to National Security” — a classification typically applied to companies from hostile nations like China and Russia.
The designation means that any company doing business with the U.S. military is now barred from conducting commercial activity with Anthropic.
What Happened
The confrontation had been building for months. Anthropic holds a contract with the Pentagon worth up to $200 million, and its AI model, Claude, is the only one currently approved for use on the military’s classified networks.
As part of that contract, Anthropic included restrictions — specifically, prohibitions against using Claude for mass surveillance of American citizens and for autonomous weapons that can fire without human involvement.
The Pentagon demanded Anthropic drop those restrictions and allow its AI to be used for “all lawful purposes.”

Defense officials argued that existing laws and military policies already prevent misuse, and that a private company shouldn’t have veto power over how the government uses tools it has purchased.
Anthropic’s CEO, Dario Amodei, met with Hegseth on Tuesday in what sources described as a cordial meeting.
But on Thursday, Amodei publicly rejected the Pentagon’s final offer, writing in a detailed statement that the proposed compromise language was paired with legal loopholes that would have effectively gutted the safeguards.
“We cannot in good conscience accede to their request,” Amodei wrote.
He added that he believes deeply in using AI to defend the United States and democratic nations, but argued that mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”
The response from the Pentagon was immediate and personal.
Emil Michael, the undersecretary for research and engineering who had been leading negotiations, called Amodei “a liar” with “a God-complex” who was “ok putting our nation’s safety at risk.”
Trump’s Response
Shortly before the Pentagon’s 5:01 p.m. deadline on Friday, Trump took to Truth Social to announce the ban.
Trump called the company “A RADICAL LEFT, WOKE COMPANY.”
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
He gave agencies that rely on Anthropic’s technology — including the Pentagon — six months to phase it out, and threatened to use “the Full Power of the Presidency” to force compliance if the company doesn’t cooperate during the transition.
Hegseth echoed the message, declaring that Anthropic had “delivered a master class in arrogance and betrayal.”
“America’s warfighters will never be held hostage by the ideological whims of Big Tech.”
Why This Matters
This isn’t just about one company and one contract. This is a precedent-setting fight over who gets to decide how artificial intelligence — arguably the most powerful technology ever created — is used by the world’s most powerful military.
The supply-chain risk designation could have devastating ripple effects for Anthropic far beyond the $200 million Pentagon deal.
Anthropic, valued at roughly $380 billion, depends heavily on enterprise contracts with major corporations.
Many of those companies also hold government contracts. Under Hegseth’s order, those companies would have to prove they don’t use anything related to Anthropic in their Pentagon work — potentially forcing them to choose between Anthropic and the U.S. military.
But the consequences could cut both ways. Defense officials privately admitted to reporters that replacing Claude in classified systems would be, in the words of one official, a “huge pain in the ass.”
Claude was used in the operation to capture Venezuelan leader Nicolás Maduro and could play a role in potential operations involving Iran.
While Elon Musk’s xAI has been approved for classified settings, defense officials acknowledge Grok is not as advanced as Claude.
The Industry Responds
In a significant development, OpenAI CEO Sam Altman said Friday that his company shares Anthropic’s “red lines” on autonomous weapons and mass surveillance.
He told employees in a memo that OpenAI would push for the same limitations in its own Pentagon dealings.
More than a hundred employees at Google signed a letter asking their company to mirror Anthropic’s position, and staffers at Microsoft and Amazon have made similar demands.
Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, accused the administration of making national security decisions based on politics rather than analysis.
He warned that the move could discourage the private sector from working with the military, and suggested the real motive may be to steer contracts toward a preferred vendor — widely understood as a reference to Musk’s xAI.
“I’m deeply disturbed by reports that the Department of Defense is working to bully a leading U.S. company, which has already provided enormous utility to the intelligence community and warfighter. Most Americans oppose unsupervised autonomous weapon systems and AI-facilitated surveillance.
“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance – something the Administration’s own Office of Management and Budget and Office of Science and Technology Policy have described as fundamental enablers of effective AI usage – and further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
The standoff raises a question that will only grow louder as AI becomes more integrated into warfare, policing, and governance: Should private companies have the right to set ethical boundaries on how their most powerful technologies are used?
Or does the government’s authority override those concerns the moment it signs a check?
For now, the Trump administration has given its answer. Whether the rest of the tech industry — and the American public — agrees remains to be seen.