Trump Administration Used ChatGPT to Flag HVAC Grant as DEI, Court Documents Reveal
Key Takeaways
- DOGE used ChatGPT with simple prompts to categorize 1,163 NEH grant proposals as DEI-related, leading to cuts of more than $100 million in projected funding
- The AI system misclassified unrelated infrastructure projects (like HVAC replacements) as DEI initiatives, demonstrating the risks of deploying LLMs for consequential policy decisions
- Four major academic organizations are suing, arguing the grant cancellations violate constitutional rights and constitute illegal discrimination based on protected characteristics
Summary
Court filings expose that the Trump administration's Department of Government Efficiency (DOGE) used OpenAI's ChatGPT to identify and slash over $100 million in projected National Endowment for the Humanities (NEH) funding—approximately half the agency's annual budget—by categorizing grants as diversity, equity, and inclusion (DEI) initiatives. DOGE employees Justin Fox and Nate Cavanaugh deployed ChatGPT with standardized prompts to evaluate grant proposals, resulting in the cancellation of funding for projects ranging from humanities scholarships to basic infrastructure improvements. One notable casualty was a $350,000 grant to the High Point Museum in North Carolina for replacing an aging HVAC system, which ChatGPT flagged as "#DEI" because it claimed the improved preservation conditions would provide "greater access to diverse audiences."
The American Council of Learned Societies, American Historical Association, Modern Language Association, and Authors Guild have filed a joint lawsuit challenging the cuts as unconstitutional violations of First Amendment rights and equal protection guarantees. The organizations argue that canceling grants based on DEI classifications amounts to illegal discrimination on the basis of race, ethnicity, gender, and other protected characteristics. The case highlights broader questions about the use of AI systems in high-stakes government decision-making and the potential for algorithmic bias when AI tools are deployed without adequate human oversight or contextual understanding.
The case raises serious concerns about government use of AI without human expertise, transparency, or accountability mechanisms in budget and policy decisions.
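The HVAC misclassification illustrates a general failure mode of shallow automated screening. The following is a minimal, hypothetical sketch (not DOGE's actual pipeline, whose prompts are described only loosely in the filings): a classifier that flags any proposal containing diversity-adjacent vocabulary will mark an infrastructure grant as DEI simply because its description mentions serving "diverse audiences."

```python
# Hypothetical sketch: NOT the actual DOGE/ChatGPT workflow.
# A naive keyword flagger stands in for a simple LLM prompt to show
# how surface-level matching misclassifies unrelated proposals.

DEI_KEYWORDS = {"diversity", "diverse", "equity", "inclusion", "inclusive"}

def flag_as_dei(proposal_text: str) -> bool:
    """Return True if any DEI-adjacent keyword appears in the text."""
    words = proposal_text.lower().split()
    return any(word.strip(".,;:") in DEI_KEYWORDS for word in words)

# Paraphrase of the High Point Museum grant's stated purpose
hvac_grant = (
    "Replace the museum's aging HVAC system to improve preservation "
    "conditions and provide greater access to diverse audiences."
)

print(flag_as_dei(hvac_grant))  # True: an infrastructure grant gets flagged
```

The sketch shows why context matters: the grant's substance is building maintenance, but a single boilerplate phrase about audiences trips the filter. An LLM prompted with a terse yes/no classification question can fail in the same way, which is why human review of flagged items is essential.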
Editorial Opinion
This incident reveals a troubling pattern: relying on large language models to make consequential policy decisions without proper oversight or domain expertise. ChatGPT's broad interpretation of DEI—conflating basic museum infrastructure with diversity initiatives—exposes the dangers of algorithmic decision-making at scale in government. While AI can assist human judgment, using it as the primary filter for $100 million in cuts, without contextual expertise or appeal mechanisms, undermines both sound governance and the rule of law.