Delaware Court Rules Against KRAFTON CEO for Using ChatGPT to Dodge $250M Bonus, Orders Leadership Reinstated
Key Takeaways
- Delaware court found CEO used ChatGPT to engineer removal of studio leaders to avoid $250M bonus payment, establishing important precedent on AI use in corporate decision-making
- Court emphasized that executives have a duty to exercise independent human judgment and cannot outsource material business decisions to AI chatbots
- KRAFTON's strategy backfired when the AI-drafted public message alarmed the gaming community and exposed the company's true intentions
Summary
In a landmark corporate governance case, a Delaware judge ruled that KRAFTON CEO Changhan Kim improperly used ChatGPT to devise a strategy for removing Unknown Worlds Entertainment's executives and avoiding a $250 million earn-out bonus owed under the studio's 2021 acquisition. Kim, who believed the acquisition terms were unfavorable, bypassed his legal team and asked the chatbot for a "Project X" corporate takeover strategy, which laid out steps including taking control of publishing rights, framing the conflict around quality rather than finances, and preparing legal defenses. The court found that KRAFTON's executives failed to exercise independent human judgment and ousted CEO Ted Gill and cofounders Charlie Cleveland and Max McGuire without legitimate cause. Vice Chancellor Lori Will's ruling stressed that corporate leaders must make good-faith decisions themselves rather than outsource critical business judgments to AI systems. The court ordered all three executives reinstated, extended the earn-out period to account for the disruption, and rejected KRAFTON's attempt to sidestep its contractual obligations. The ruling reinforces that AI assistance should supplement, rather than replace, human oversight and fiduciary responsibility in corporate governance.
Editorial Opinion
This case represents a critical moment for AI governance in corporate settings. While ChatGPT demonstrated its capability to generate sophisticated strategic frameworks, the ruling correctly identifies that AI tools cannot and should not replace human judgment in decisions with fiduciary implications. The case serves as a cautionary tale about over-reliance on AI without proper human oversight: not because the technology is inherently problematic, but because executives have professional and legal duties that require authentic human deliberation, particularly when conflicts of interest exist.