OpenAI has provided additional details about its recent agreement with the U.S. Department of Defense (now the Department of War) to deploy its artificial intelligence models in classified environments. The AI company announced the deal on February 28, 2026, shortly after rival Anthropic’s negotiations with the Pentagon failed.
OpenAI published a blog post on March 1, 2026, detailing the safeguards and deployment architecture intended to address concerns over military AI use. This development follows heightened tensions between the U.S. government and AI companies such as xAI and Anthropic regarding ethical boundaries for national security applications.
OpenAI Outlines Safeguards and Cloud-Only Deployment Architecture
The Pentagon agreement prohibits OpenAI’s models from being used in three specific areas: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions (such as social credit systems). Deployment is limited to cloud-only API access, preventing direct integration with weapons, sensors, or operational hardware.
OpenAI noted that it will retain control over its safety stack, deploy cleared personnel to assist the government, and include contractual protections aligned with U.S. laws, including “Executive Order 12333.” The contract also allows the DoD to use the AI systems for all lawful purposes consistent with applicable laws and safety protocols.
OpenAI emphasized that models remain safety-trained, with no guardrails-off versions provided. CEO Sam Altman described the deal as rushed but aimed at de-escalating tensions between industry and government. Head of national security partnerships Katrina Mulligan highlighted that the cloud-based architecture and human oversight provide stronger protections than reliance on policy alone.
The deal followed Anthropic’s refusal to remove certain restrictions on its Claude AI, which led to its designation as a supply-chain risk and to a government phase-out directive. OpenAI’s arrangement, potentially valued at a level comparable to prior contracts (up to $200 million for similar pilots), positions it as a key partner in classified AI deployments.
Broader Implications for AI and Defense Collaboration
OpenAI’s deal with the Pentagon follows weeks of negotiations and a public dispute between the DoD and Anthropic, culminating in the latter’s exclusion from federal use on February 27, 2026. OpenAI’s move also came amid administration pressure for broader AI access in defense, including during ongoing strikes in the Middle East.
Investor sentiment toward OpenAI has been mixed due to the optics of partnering with the military, though the company frames it as a responsible step to enable safe collaboration. This positions OpenAI favorably for government contracts while maintaining stated ethical red lines.
The takeaway is that OpenAI has secured a framework for classified AI use with technical and contractual safeguards, potentially setting a precedent for other AI firms if the terms are extended industry-wide.
How Much Money Did OpenAI Get From the Government?
OpenAI’s recent agreement with the Pentagon is part of a series of similar contracts awarded to major AI companies. Reports indicate the deal is worth up to $200 million, consistent with prior Pentagon agreements signed with Anthropic, Google (GOOGL), and others in the past year to prototype frontier AI capabilities for national security contexts.
The exact amount OpenAI will receive remains undisclosed in official statements from the privately held AI company and the government, as the contract focuses on deployment terms, safeguards, and lawful use rather than public financial specifics.