OpenAI Negotiating Pentagon Deal After Trump Orders End to Anthropic Contracts
Key Takeaways
- OpenAI is negotiating a Pentagon contract that would allow the company to maintain control over safety measures and limit AI deployment to specific use cases
- The deal includes OpenAI's "red lines" prohibiting autonomous weapons, domestic mass surveillance, and critical decision-making—the same restrictions that led to Anthropic losing its government contracts
- President Trump ordered all federal agencies to stop using Anthropic's technology after the company refused to remove safeguards from its Claude model
Summary
OpenAI CEO Sam Altman informed staff during an all-hands meeting that the company is in active negotiations with the U.S. Department of Defense for a contract to provide AI models and tools to the Pentagon. The discussions come amid a public dispute between the government and Anthropic that resulted in President Trump ordering all federal agencies to cease using Anthropic's technology, with a six-month phase-out period for existing contracts including a partnership worth up to $200 million.
According to sources present at the meeting, the Pentagon has agreed to several major concessions that previously caused the breakdown with Anthropic. The government would allow OpenAI to maintain its own "safety stack" of controls, retain authority over technical safeguards and model deployment, and notably, include OpenAI's stated "red lines" in the contract—prohibiting use of AI for autonomous weapons, domestic mass surveillance, or critical decision-making. OpenAI would also limit deployment to cloud environments rather than edge systems like aircraft and drones.
The conflict with Anthropic reportedly stemmed from the company's refusal to remove safeguards restricting military uses of its Claude model, despite Pentagon demands that AI systems be available for "all lawful purposes." OpenAI leadership acknowledged at the meeting that the most challenging aspect of negotiations involves balancing concerns about AI-driven surveillance threatening democracy with the reality that governments conduct international intelligence operations. Company officials referenced threat intelligence showing China already using AI to target dissidents overseas, suggesting OpenAI may be willing to support some surveillance activities while maintaining restrictions on domestic use.
OpenAI now faces the same fundamental tension as Anthropic: maintaining AI safety principles while meeting government demands for unrestricted access to AI capabilities.