Google DeepMind Workers Unionize Over Military AI Contracts
Key Takeaways
- DeepMind employees in London have unionized to oppose military AI contracts with the US and Israeli militaries
- Alphabet's removal of its AI weapons development pledge in February 2025 triggered the unionization push
- Google's "any lawful government purpose" Pentagon deal has drawn internal opposition, with approximately 600 US-based employees signing a protest letter
Summary
Employees at Google DeepMind's London office have voted to unionize in response to the company's military AI partnerships with the US and Israeli militaries. The workers are demanding that the Communication Workers Union and Unite the Union be recognized as joint representatives, and are pushing Google to terminate its military contracts.
The unionization effort intensified after Alphabet removed its pledge against developing AI for weapons and mass surveillance from its ethics guidelines in February 2025. "The direction of travel is to further militarization of the AI models we're building here," one anonymous DeepMind employee told WIRED. The push comes as other AI companies face similar pressure—DeepMind and OpenAI staff signed a letter supporting Anthropic after the Pentagon sought to designate it a supply chain risk over its refusal to work on autonomous weapons.
Google's recent deal with the Pentagon to use AI for "any lawful government purpose" has sparked internal opposition, with approximately 600 US-based employees signing a protest letter. Workers also plan to demand greater transparency over how AI products are used and to seek assurances regarding layoffs driven by automation. If Google fails to engage with the unionization demands, workers will escalate to arbitration to compel union recognition.
- The effort reflects growing industry-wide concern about the militarization of AI systems and its conflict with stated ethical AI principles
- If Google does not engage with the demands, workers plan to escalate to arbitration for mandatory union recognition
Editorial Opinion
This unionization effort represents a critical moment for AI worker advocacy and corporate accountability. As AI capabilities grow more powerful, workers' willingness to collectively demand ethical guardrails serves as a necessary counterbalance to corporate military partnerships justified by national security rhetoric. The vague "any lawful government purpose" clause that DeepMind employees rightly challenge demonstrates how easily stated ethical commitments can be undermined through legal language, a pattern the industry must address if it wants to retain researchers committed to responsible AI development.