Users start deleting ChatGPT after OpenAI’s Pentagon deal: uninstalls jump 295%

Caitlin Kalinowski, the hardware engineer who led OpenAI’s robotics push for 16 months, quit March 7 over the company’s rushed Pentagon deal—and users torched the app in response.

ChatGPT uninstalls spiked 295% the day the contract went public. Meanwhile, Anthropic’s Claude hit #1 on the App Store.

This is the first time an AI ethics controversy cost a company measurable users, not just headlines.

The user revolt OpenAI didn’t see coming

Most tech scandals merely slow downloads. This one triggered active deletions: 295% more uninstalls than the day before, according to TechCrunch data from February 28. And Claude didn’t just gain ground. It took the top spot on the US App Store’s free charts that same Saturday.

The timing isn’t subtle. Anthropic’s refusal to sign a similar Pentagon deal got the company blacklisted: the Trump administration labeled it a “supply-chain risk” and ordered federal agencies to stop using Claude after a six-month transition.

Users noticed. When one AI company says no to surveillance contracts and gets punished while another says yes and calls it patriotism, people vote with their phones.

Claude’s surge to #1 isn’t about features. It’s about choosing the company that drew a line.

OpenAI’s robotics bet just lost its best player

Kalinowski wasn’t some junior engineer making a symbolic exit. She led Meta’s AR glasses project before joining OpenAI in November 2024. Her departure after just 16 months leaves the robotics program, which was supposed to help OpenAI close the gap with China in robotics, without its most credible leader.

OpenAI’s San Francisco robotics lab has been training robotic arms to fold laundry and handle household tasks. The company planned to expand to a Richmond site. But hardware was always the side bet, and now it’s rudderless.

Kalinowski wrote on X: “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

She called it a “matter of principle.” That’s the kind of talent OpenAI can’t afford to lose if it wants to ship actual robots.

The Pentagon deal’s vague language is the whole problem

The contract prohibits “mass domestic surveillance and fully autonomous weapons systems.” Deployment is restricted to cloud-only infrastructure—no edge devices. Sounds clean. But Kalinowski’s resignation letter specifically calls out “surveillance of Americans without judicial oversight.” The gap between OpenAI’s public promises and the actual safeguards is what triggered the exodus.

And Sam Altman admitted the rollout “looked opportunistic” in early March. When your own CEO concedes the optics are bad, the “trust us” defense collapses. We still don’t know the contract’s dollar amount, duration, or which specific military applications are allowed. That opacity is the point—vague red lines let companies claim ethical compliance while accepting defense contracts that AI war game simulations suggest could escalate catastrophically.

Kalinowski’s concerns about “lethal autonomy without human authorization” aren’t hypothetical. They’re based on what happens when you deploy models trained on ambiguous rules in high-stakes environments.

OpenAI’s robotics lab is training arms to fold towels. Its Pentagon deal trains models for military use. Kalinowski’s quote—“lines that deserved more deliberation”—hangs over both. How many more researchers will draw that line before OpenAI’s hardware dreams collapse entirely?

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.