According to a recent LinkedIn post from Polymarket, the U.S. Department of Justice reportedly views Anthropic’s Claude model as unsuitable for defense contracts because of usage restrictions that limit applications such as autonomous weapons. The post suggests those restrictions are a point of tension with the Pentagon, which reportedly worries that such limitations could pose national security risks.
The LinkedIn post notes that Claude is already used in some defense workflows but indicates that government stakeholders are exploring alternative AI solutions, underscoring the ongoing tension between AI safety constraints and military requirements. It also cites market-style expectations that Anthropic remains likely to lead in AI model quality in the near term, ahead of competitors such as Google, OpenAI, and xAI. That outlook could shape investor perceptions of Anthropic’s competitive positioning despite potential friction in defense-related demand.

