BotBeat

OpenAI
POLICY & REGULATION · 2026-03-02

OpenAI's Pentagon Contract 'Red Lines' Criticized for Using NSA-Style Language Loopholes

Key Takeaways

  • OpenAI's Pentagon contract references Executive Order 12333, which critics identify as a major loophole for surveilling Americans' communications collected overseas
  • The contract's 'red lines' against mass surveillance are defined through legal authorities that have historically allowed 'incidental' collection of U.S. persons' data without volume limits
  • The Pentagon blacklisted a competitor for refusing to drop safety commitments, then praised OpenAI for a deal that supposedly preserved similar protections
Source: Hacker News (https://www.techdirt.com/2026/03/02/openais-red-lines-are-written-in-the-nsas-dictionary-where-words-mean-what-the-nsa-wants-them-to-mean/)

Summary

OpenAI has come under scrutiny for the language used in its recently announced Pentagon contract, which critics argue contains loopholes similar to those historically exploited by the NSA for domestic surveillance. While the company publicly outlined three 'red lines'—prohibiting mass domestic surveillance, autonomous weapons direction, and high-stakes automated decisions—the actual contract language references legal authorities including Executive Order 12333, which has been documented as a major loophole for surveilling Americans' communications outside U.S. borders.

The controversy erupted after the Pentagon blacklisted one AI company for refusing to drop safety commitments, then hours later praised OpenAI for signing a deal that supposedly preserved similar commitments. Critics, citing declassified documents and whistleblower testimony, point out that EO 12333 allows for 'incidental' collection of U.S. persons' communications content without court orders when collected overseas during foreign intelligence operations, with no limits on volume.

The contract defines 'lawful' behavior by referencing the Fourth Amendment, National Security Act of 1947, FISA, EO 12333, and DoD directives, stating the AI system "shall not be used for unconstrained monitoring of U.S. persons' private information as consistent with these authorities." Former State Department official John Napier Tye noted in 2014 that EO 12333 contains no protections for U.S. persons if collection occurs outside U.S. borders and has never been subject to meaningful Congressional or court oversight. Privacy advocates argue that by defining its restrictions through these authorities, OpenAI has effectively adopted the intelligence community's redefined terminology, in which common words permit the very activities they appear to prohibit.

Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data

More from OpenAI

OpenAI
INDUSTRY REPORT

AI Chatbots Are Homogenizing College Classroom Discussions, Yale Students Report

2026-04-05
OpenAI
FUNDING & BUSINESS

OpenAI Announces Executive Reshuffle: COO Lightcap Moves to Special Projects, Simo Takes Medical Leave

2026-04-04
OpenAI
PARTNERSHIP

OpenAI Acquires TBPN Podcast to Control AI Narrative and Reach Influential Tech Audience

2026-04-04

Suggested

Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Anthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Perplexity
POLICY & REGULATION

Perplexity's 'Incognito Mode' Called a 'Sham' in Class Action Lawsuit Over Data Sharing with Google and Meta

2026-04-05
© 2026 BotBeat