When the CEO of a major game publisher decided he didn’t want to pay a $250 million bonus written into an acquisition contract, he had options. His own lawyers told him clearly that firing the founders wouldn’t void the payout. He ignored them and asked ChatGPT instead. The AI obliged with a detailed action plan. A Delaware court just delivered the verdict.
The case centers on Krafton — the South Korean publisher behind PUBG — which acquired Unknown Worlds Entertainment, the studio behind Subnautica, for $500 million in 2021. The deal included a conditional earnout: an additional $250 million if the sequel, Subnautica 2, hit certain sales targets. Internal Krafton projections estimated the likely payout at between $191 million and $242 million. CEO Changhan Kim, apparently concerned about the reputational cost of writing that check, started looking for a way out.
Krafton’s internal legal team flagged the risks in writing. A Slack message from the corporate development lead warned that firing the founders wouldn’t eliminate the bonus obligation and would likely trigger litigation. Kim proceeded anyway, guided by ChatGPT’s recommendations instead.
The AI-generated playbook: Project X
Kim turned to ChatGPT and asked it to develop a “takeover strategy” for removing Unknown Worlds’ founders — Charlie Cleveland, Max McGuire, and Ted Gill — without triggering the earnout. The AI produced a detailed roadmap that court documents describe as including a “pressure dossier,” a “scenario-based implementation timeline,” and recommendations for preemptive public messaging to avoid appearing as a corporate bully going after an indie studio. ChatGPT also suggested technical measures, including locking down the game’s source code.
Kim assembled a secret internal task force to execute the plan, which he called Project X. The strategy had two phases: first, push Unknown Worlds to delay the early access launch of Subnautica 2, which would push back the date when earnout conditions could be met. When the founders refused, the second phase kicked in — they were fired.
The public communications piece backfired almost immediately. A message posted to the Subnautica website, crafted with AI assistance, struck the game’s community as odd and evasive. Fans noticed the shift in tone, began asking questions, and the situation escalated publicly before the legal battle had even fully begun.
What the court found
Vice Chancellor Lori Will of Delaware’s Court of Chancery ruled against Krafton comprehensively. Her decision described the company’s justifications for the firings as “pretexts fabricated after the fact.” The court noted that two of the founders — Cleveland and McGuire — had voluntarily accepted reduced salaries when transitioning into more limited roles, which further undercut the claim that their terminations were legitimate business decisions.
The AI-generated strategy didn’t just fail — it became evidence. Court documents showed a CEO explicitly using a chatbot to circumvent legal advice his own team had already provided. The paper trail that ChatGPT helped create was the same paper trail the court used against Krafton.
The ruling, issued March 16, 2026, ordered the immediate reinstatement of Ted Gill as CEO of Unknown Worlds with full operational authority — including final say over the release date of Subnautica 2. The court also extended the earnout eligibility window to September 15, 2026, preserving the founders’ ability to claim the $250 million payout if the game hits its targets. A second phase of proceedings will determine the precise damages owed.
A cautionary tale, not just a legal footnote
The Krafton case is unusual in its specifics but not in its underlying logic. LLMs are increasingly being used by executives and managers as substitutes for professional expertise — legal, financial, medical — in high-stakes decisions where the cost of being wrong is severe. ChatGPT produced a detailed, plausible-sounding plan. It had no liability, no professional obligations, no knowledge of Delaware contract law, and no way to tell Kim that his goal was itself legally indefensible.
The AI didn’t cause Krafton to lose the case. The breach of contract was the problem. But by following the AI’s recommendations over qualified legal counsel’s explicit written warnings, Kim converted what might have been a difficult contractual dispute into a documented pattern of deliberate manipulation — one that left the court little room for ambiguity. Krafton’s lawyers told him not to do it. ChatGPT told him how to do it. The court noticed the difference.