AI Companies Make Move to Prevent 'Catastrophic Misuse'

Anthropic, OpenAI seek chemical experts to strengthen safety guardrails
Posted Mar 17, 2026 8:55 AM CDT
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026.   (AP Photo/Patrick Sison)

An AI company that says it doesn't want its tools used for certain weapons is now hiring someone who knows those weapons inside out. Anthropic is seeking a specialist in chemical weapons and high-yield explosives to help keep its chatbot Claude from helping anyone build chemical, radiological, or explosive devices, per a LinkedIn job ad flagged by the BBC. The goal is to block "catastrophic misuse" of Claude. The role calls for at least five years' experience in weapons or explosives defense and familiarity with dirty bombs and other radiological threats. Rival OpenAI is advertising a similar post focused on biological and chemical risks, with a salary that can reach $455,000.

The move is feeding a debate over whether giving AI systems access to such knowledge, however tightly controlled, is itself dangerous. "Is it ever safe to use AI systems to handle sensitive chemicals and explosives information?" asked tech researcher Stephanie Hare, who notes there's no global framework governing this kind of work. AI systems and warfare are becoming increasingly intertwined, per the Indian Express. Anthropic, for its part, has clashed with the US government, suing the Pentagon after being labeled a "supply chain risk" for insisting on limits on how its systems could be used in autonomous weapons. OpenAI says it backs Anthropic's stance, even as it pursues its own US government contract.
