
Anthropic Hires Expert to Block AI from Enabling Deadly Explosives and Dirty Bombs

Story Highlights

– Anthropic is recruiting a chemical weapons expert to prevent “catastrophic misuse” of its AI models.
– The hire is intended to stop Claude from being used to enable explosives or dirty bombs.


Anthropic, the San Francisco-based artificial intelligence (AI) firm behind Claude, is looking to recruit a chemical weapons expert to prevent “catastrophic misuse” of its models, such as enabling deadly explosives or dirty bombs. The hire underscores the firm’s push for “responsible AI” development as it scales rapidly toward a projected $20 billion revenue run rate.


Anthropic Brings on Weapons Expert to Guard Against AI Misuse

In a March 7 LinkedIn job posting, Anthropic said it is recruiting a Policy Manager specializing in chemical weapons and high-yield explosives, offering a New York-based salary between $245,000 and $285,000. The company is seeking candidates with at least five years of experience in chemical weapons or explosives defense, along with expertise in energetic materials, hazardous agents, and radiological dispersal devices (also known as dirty bombs).

According to the job posting, the hire will be responsible for building evaluation frameworks that test whether AI systems like Claude could produce dangerous outputs, and for creating safeguards against potential misuse. Anthropic confirmed that the role fits within a broader pattern of safety hiring across biosecurity and cybersecurity, reflecting the company’s push toward “responsible AI” development amid the absence of global oversight. 

Even so, the explicit focus on weapons-related risks signals a meaningful shift in how seriously the company is approaching the threat of its technology being weaponized. The concern carries real weight. Anthropic has previously cautioned that advancing AI could facilitate “heinous crimes” such as chemical weapons development, making the case for stronger protective measures more urgent than ever.

OpenAI Follows Suit in High-Stakes Safety Race

Anthropic is not the only firm hiring for such a role. OpenAI earlier advertised a similar senior position covering chemical and biological risks, with compensation reportedly reaching as high as $460,000 plus equity, nearly double what Anthropic is offering.

The company acknowledged that while the frontier AI models have the potential to benefit humanity, they also pose severe risks. OpenAI said its Preparedness team is tasked with identifying, tracking, and preparing for catastrophic threats related to these advanced models. 

Critics, however, including security expert Dr. Stephanie Hare, caution that embedding weapons knowledge into AI systems could backfire if implemented without public oversight or international regulation.

What Is the Best AI Stock to Buy?

While Anthropic is still a private company ahead of its potential initial public offering (IPO) in 2026, investors interested in AI stocks can consider top players such as Nvidia (NVDA), Microsoft (MSFT), Meta (META), and Amazon (AMZN) on TipRanks Stocks Comparison Center.
