BotBeat

Anthropic
POLICY & REGULATION · 2026-03-07

Pentagon Refuses to Disclose If AI Selected Elementary School as Bombing Target in Iran Strike

Key Takeaways

  • The Pentagon used Anthropic's Claude AI to plan military strikes in Iran but refuses to confirm whether it selected an elementary school that was bombed, killing 165 students and staff
  • The Shajareh Tayyebeh girls' school in Minab was hit in a "double tap" attack, with the second strike targeting first responders and parents
  • Israel previously used an AI system called "Lavender" to select over 37,000 targets in Gaza, with human operators reportedly spending only seconds reviewing each AI recommendation
Source: Hacker News (https://futurism.com/artificial-intelligence/pentagon-ai-claude-bombing-elementary-school)

Summary

The Pentagon has declined to confirm whether artificial intelligence was used to select an Iranian elementary school as a bombing target, following airstrikes that killed 165 students and staff at the Shajareh Tayyebeh girls' school in Minab, Iran. According to Wall Street Journal reporting, the Pentagon deployed Anthropic's Claude AI model to assist in planning military strikes on Iran over the weekend. When questioned specifically about AI's role in targeting the school—where most victims were children aged 7-12—US CENTCOM responded only that they had "nothing for you on this at this time."

The incident represents a disturbing escalation in the military use of AI for targeting decisions. The school was reportedly struck twice in a "double tap" attack, with the second strike hitting first responders and parents attempting to rescue survivors. This pattern mirrors previous controversial US military operations and raises urgent questions about whether AI systems are making life-or-death decisions about civilian targets.

The use of AI in military targeting is not unprecedented. Israel's military previously employed an AI system called "Lavender" in Gaza operations, which identified over 37,000 Palestinians as potential targets. Israeli intelligence officers reported treating Lavender's recommendations "as if it were a human decision," with one operative admitting to spending only 20 seconds reviewing each AI-selected target. The Pentagon's reported use of Claude in Iran suggests this technology is becoming standard in modern warfare, despite profound ethical concerns about algorithmic decision-making in lethal military operations.

The incident marks a troubling new phase of warfare in which it is unclear whether humans or algorithms make the final decision to deploy lethal force against civilian areas.
Tags: AI Agents · Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment

More from Anthropic

Anthropic · RESEARCH · 2026-04-05
Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

Anthropic · POLICY & REGULATION · 2026-04-05
Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

Anthropic · POLICY & REGULATION · 2026-04-05
Security Researcher Exposes Critical Infrastructure After Following Claude's Configuration Advice Without Authentication

Suggested

Oracle · POLICY & REGULATION · 2026-04-05
AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?