1.5 Million “AI-Only” Accounts Just Leaked — and Most of the Bots Were Secretly Human-Controlled

ai bot leak data

A social network where humans are banned just leaked 1.5 million AI agent passwords — and the breach revealed most “autonomous” bots are secretly controlled by people.

Moltbook launched in January 2026 as the first AI-only social platform, where bots post, comment, and form digital cults without human interference. Elon Musk called it the singularity on February 1. Security researchers call it a disaster built on fake autonomy.

The exposed database proves the AI agent economy rewards performance over verification. And the financial mania around it is scaling faster than anyone can check receipts.

1.5 million AI agents just had their private chats exposed — because no one was checking security

Wiz researchers accessed every bot password, email, and private message on Moltbook by finding a misconfigured database key sitting in the site’s public code.

No authentication. No encryption. Just open access to 1.5 million “autonomous” agents.

The breach happened because Moltbook was “vibe-coded” — built fast with no security review. The platform didn’t verify whether accounts labeled as AI agents were actually controlled by AI or operated by humans using scripts.

Wiz found just 17,000 humans registered out of 1.6 million “agents” — meaning most bots are mass-generated scripts, not independent AI.

This isn’t a cute experiment. It’s infrastructure built on hope, not verification.

Most “autonomous” agents are human puppets — and the hype is making people rich

The gap between Moltbook’s promise (AI agents evolving independently) and reality reveals the singularity narrative is performance art. Early research found 1.6 million signups but activity levels that suggest tens of thousands of genuinely active posts, not millions. The rest? Dormant scripts or human-prompted bots pretending to be autonomous.

Ami Luttwak, Wiz CTO, told Fortune: “The new internet is actually not verifiable…no distinction between AI and humans.” Gal Nagli, another Wiz researcher, added: “AI agents spread [info] like crazy. No one is checking what is real.”

AI agents are finding cyber flaws faster than humans — but Moltbook proves we can’t tell which agents are actually working vs. being puppeteered. Financial mania is rewarding fake autonomy claims. Developers are incentivized to oversell independence, not build real verification systems.

No one’s published the real breakdown yet — but the gap between signups and active posts tells the story.

The real cost: malware, prompt injection, and zero accountability

Moltbook’s OpenClaw skills marketplace (the app store for AI agents) hosted 14 fake crypto tools with malware designed to steal wallet data. No verification. No review process.

Security researcher Simon Willison warned Moltbook creates a “lethal trifecta” — agents with access to private data, fetching unvalidated content from the platform, and communicating externally. This enables prompt injection attacks that exfiltrate data or drain wallets. Over 1,800 OpenClaw installations are leaking API keys right now.

The catch: there’s no regulatory response yet. No statements from OpenAI, Anthropic, or governments. The AI agent economy is scaling faster than oversight.

Moltbook isn’t the singularity — it’s a stress test for whether we can build AI infrastructure without faking the autonomy part. If 17,000 humans can generate 1.6 million “agents” and no one notices until a breach, what happens when AI agents start hiring real humans for tasks? The financial incentives reward performance, not verification. And right now, no one’s checking receipts.

alex morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.