Anthropic is warning a federal judge that its business could face a death blow if a new government blacklist isn’t stopped. The AI startup, known for its Claude assistant, says it stands to lose multiple billions of dollars in revenue this year alone. Anthropic is estimated to be worth $380 billion.

This crisis started after the company refused a Pentagon demand to let its AI be used for mass surveillance and autonomous weapons. In response, the Trump administration labeled Anthropic a “supply-chain risk,” effectively banning it from working with the U.S. government and its thousands of partners.
Judge Rita Lin Accelerates the Court Case
Given the stakes, U.S. District Judge Rita F. Lin has moved up the timeline for a key hearing. Originally set for April, the court will now hear Anthropic’s request to block the government’s order on March 24. Anthropic’s legal team argued that the situation is urgent because over 100 enterprise customers have already reached out to express doubt about working with the firm. The company is asking the court to immediately remove the “supply-chain risk” label, which it claims is an illegal act of retaliation.
The Federal Government Refuses to Budge
Despite the warnings of financial ruin, the Justice Department is holding its ground. When asked to promise that the government wouldn’t take further actions against Anthropic before the next hearing, such as issuing a new executive order, lawyer James Harlow said he was “not prepared to offer any commitments.” This leaves the startup in limbo. Already, a $50 million deal with a financial firm has been paused, and other customers are cutting their contracts in half for fear of getting caught in the government’s crosshairs.
Tech Giants Create a United Front
In a rare move, Anthropic’s biggest rivals are stepping up to help. Researchers from OpenAI and Google (GOOGL) filed a joint letter to the judge, supporting Anthropic’s stance on AI safety. They argued that current AI models simply aren’t ready to handle “lethal autonomous targeting” safely. Microsoft (MSFT) also filed its own brief, warning that banning Anthropic could cause massive delays for all IT services across the Department of Defense, as many suppliers have no easy way to replace the company’s unique software.