BotBeat

OpenAI
POLICY & REGULATION · 2026-03-20

Delaware Court Rules Against KRAFTON CEO for Using ChatGPT to Dodge $250M Bonus, Orders Leadership Reinstated

Key Takeaways

  • A Delaware court found the CEO used ChatGPT to engineer the removal of studio leaders and avoid a $250M bonus payment, establishing an important precedent on AI use in corporate decision-making
  • The court emphasized that executives have a duty to exercise independent human judgment and cannot outsource material business decisions to AI chatbots
  • KRAFTON's strategy backfired when the AI-drafted public message alarmed the gaming community and exposed the company's true intentions
Source: Hacker News (https://fortune.com/2026/03/17/krafton-subnautica-chatgpt-delaware-court-ruling-ceo-reinstated/)

Summary

In a landmark corporate governance case, a Delaware judge ruled that KRAFTON CEO Changhan Kim improperly used ChatGPT to devise a strategy to remove Unknown Worlds Entertainment executives and avoid paying a $250 million earn-out bonus following the 2021 acquisition. Kim, concerned that the acquisition deal was unfavorable, bypassed his legal team and consulted the AI chatbot for a "Project X" corporate takeover strategy, which detailed steps including controlling publishing rights, framing conflicts around quality rather than finances, and preparing legal defenses. The court found that KRAFTON executives failed to exercise independent human judgment and improperly ousted CEO Ted Gill and cofounders Charlie Cleveland and Max McGuire without legitimate cause. Vice Chancellor Lori Will's ruling emphasized that corporate leaders are expected to make good-faith decisions independently rather than outsource critical business judgments to AI systems. The court ordered the reinstatement of all three executives, extended the earn-out period to account for the disruption, and denied KRAFTON's attempt to sidestep its contractual obligations.

  • Ruling reinforces that AI assistance should supplement rather than replace human oversight and fiduciary responsibility in corporate governance

Editorial Opinion

This case represents a critical moment for AI governance in corporate settings. While ChatGPT demonstrated its capability to generate sophisticated strategic frameworks, the ruling correctly identifies that AI tools cannot and should not replace human judgment in decisions with fiduciary implications. The case serves as a cautionary tale about over-reliance on AI without proper human oversight: not because the technology is inherently problematic, but because executives have professional and legal duties that require authentic human deliberation, particularly when conflicts of interest exist.

Large Language Models (LLMs) · Regulation & Policy · Ethics & Bias · AI Safety & Alignment

More from OpenAI

OpenAI
INDUSTRY REPORT

AI Chatbots Are Homogenizing College Classroom Discussions, Yale Students Report

2026-04-05
OpenAI
FUNDING & BUSINESS

OpenAI Announces Executive Reshuffle: COO Lightcap Moves to Special Projects, Simo Takes Medical Leave

2026-04-04
OpenAI
PARTNERSHIP

OpenAI Acquires TBPN Podcast to Control AI Narrative and Reach Influential Tech Audience

2026-04-04

Suggested

Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Anthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Perplexity
POLICY & REGULATION

Perplexity's 'Incognito Mode' Called a 'Sham' in Class Action Lawsuit Over Data Sharing with Google and Meta

2026-04-05
© 2026 BotBeat