BotBeat
POLICY & REGULATION · Anthropic · 2026-03-07

AI Targeting Error Allegedly Linked to Fatal School Bombing in Iran, DOD Investigation Underway

Key Takeaways

  • A Claude-based military targeting system allegedly contributed to a missile strike on an Iranian girls' school that killed an estimated 150 students, with sources suggesting the AI relied on outdated intelligence
  • The Pentagon has rapidly expanded its use of Anthropic's Claude across operational planning over the past year, raising questions about oversight and testing protocols
  • The Trump Administration has ordered the military to eliminate Claude within six months after declaring Anthropic a supply chain risk, signing a replacement contract with OpenAI
Source: Hacker News — https://thisweekinworcester.com/exclusive-ai-error-girls-school-bombing/

Summary

An AI-powered targeting system reportedly contributed to a U.S. military missile strike on a girls' school in Minab, Iran, that killed an estimated 150 students, according to multiple anonymous sources cited by This Week in Worcester. Pentagon officials are investigating whether a Claude-based AI system, developed by Anthropic, incorrectly flagged the school's location using outdated intelligence linking a nearby compound to Iran's Islamic Revolutionary Guard Corps. The incident has raised urgent questions about the rapid deployment of AI in military operations without adequate human oversight.

Sources within the Department of Defense indicated that the military has dramatically scaled up its use of Claude AI over the past year, integrating it into core operational planning and targeting decisions. A DoD logistics programmer described the department as "gung-ho" about the AI program, implementing it across numerous military functions. The Trump Administration recently declared Anthropic a supply chain risk and mandated a six-month transition away from Claude, subsequently signing a contract with OpenAI as a replacement.

The tragedy highlights growing concerns about AI reliability in life-or-death scenarios. This Week in Worcester previously reported AI errors in the handling of classified Epstein files, where automated systems incorrectly redacted or exposed sensitive information without human review. Defense Secretary Pete Hegseth stated that the U.S. "never targets civilian targets" and confirmed the investigation is ongoing, while the Pentagon has not yet released details about authorization protocols or fail-safes that should have prevented such an error.


Editorial Opinion

This tragedy represents a catastrophic failure point for AI in military applications and should serve as an urgent wake-up call for the defense industry. While AI can process vast amounts of intelligence data, delegating life-or-death targeting decisions to systems that may rely on outdated information—without robust human verification—is unconscionable. The Pentagon's rapid, "gung-ho" adoption of AI for operational planning suggests a dangerous prioritization of technological capability over safety protocols. Regardless of which AI vendor the military ultimately uses, this incident demands comprehensive transparency about authorization chains, testing procedures, and the circumstances that allowed an algorithm to greenlight an attack on a school.

Large Language Models (LLMs) · Autonomous Systems · Government & Defense · Regulation & Policy · AI Safety & Alignment
