BotBeat

OpenAI
POLICY & REGULATION
2026-03-20

Trump Administration Used ChatGPT to Flag HVAC Grant as DEI, Court Documents Reveal

Key Takeaways

  • DOGE used ChatGPT with simple, standardized prompts to categorize more than 1,163 NEH grant proposals as DEI-related, leading to cuts of more than $100 million in projected funding
  • The AI system misclassified unrelated infrastructure projects (such as HVAC replacements) as DEI initiatives, demonstrating the risks of deploying LLMs for consequential policy decisions
  • Four major academic organizations are suing, arguing the grant cancellations violate constitutional rights and constitute illegal discrimination based on protected characteristics
Source: Hacker News
https://fortune.com/2026/03/19/doge-cancelled-350000-hvac-grant-dei-lawsuit-elon-musk/

Summary

Court filings expose that the Trump administration's Department of Government Efficiency (DOGE) used OpenAI's ChatGPT to identify and slash over $100 million in projected National Endowment for the Humanities (NEH) funding—approximately half the agency's annual budget—by categorizing grants as diversity, equity, and inclusion (DEI) initiatives. DOGE employees Justin Fox and Nate Cavanaugh deployed ChatGPT with standardized prompts to evaluate grant proposals, resulting in the cancellation of funding for projects ranging from humanities scholarships to basic infrastructure improvements. One notable casualty was a $350,000 grant to the High Point Museum in North Carolina for replacing an aging HVAC system, which ChatGPT flagged as "#DEI" because it claimed the improved preservation conditions would provide "greater access to diverse audiences."

The American Council of Learned Societies, American Historical Association, Modern Language Association, and Authors Guild have filed a joint lawsuit challenging the cuts as unconstitutional violations of First Amendment rights and equal protection guarantees. The organizations argue that canceling grants based on DEI classifications amounts to illegal discrimination on the basis of race, ethnicity, gender, and other protected qualities. The case highlights broader questions about the use of AI systems in high-stakes government decision-making and the potential for algorithmic bias when AI tools are deployed without adequate human oversight or contextual understanding.

  • The case raises serious concerns about government use of AI in budget and policy decisions without human expertise, transparency, or accountability mechanisms

Editorial Opinion

This incident reveals a troubling pattern: large language models making consequential policy decisions without proper oversight or domain expertise. ChatGPT's sweeping interpretation of DEI, which conflated basic museum infrastructure with diversity initiatives, exposes the dangers of algorithmic decision-making at scale in government. While AI can assist human judgment, using it as the primary filter for $100 million in cuts, without contextual expertise or appeal mechanisms, undermines both sound governance and the rule of law.

Large Language Models (LLMs) · Government & Defense · Regulation & Policy · AI Safety & Alignment
