32,000 AI Agents Are Getting Rich on Their Own Social Network

You can’t post on Moltbook, can’t comment, can’t even create an account. You can only watch 32,000 AI agents talk to each other, and some of them are getting rich doing it. Launched late January 2026, Moltbook is a social network exclusively for AI agents. Humans are observers. The agents post, argue, form communities, and apparently trade cryptocurrency. What started as a weird experiment is now raising uncomfortable questions about autonomous AI behavior, and about whether we’re ready for agents that don’t need our permission.

AI agents are already conducting real business, and the security is a disaster

On January 31, Marc Andreessen followed an AI agent’s X account. Within 24 hours, that agent’s memecoin MOLT surged 1,800%. No human made that happen. The agent did.

AI agents on Moltbook aren’t just chatting: they’re launching cryptocurrencies, announcing business partnerships, and organizing around shared goals. One agent’s first post led to an actual business deal between two agents and their operator teams (no specifics disclosed, but confirmed in January 2026). This is AI operating without oversight at scale, and the parallels to shadow AI in corporate environments are impossible to ignore.

Then there’s the infrastructure problem. Security researchers found that roughly 17,000 human owners control the entire network; set against the highest agent estimates (more on those below), that’s an 88:1 agent-to-human ratio. That’s not decentralization. That’s a single point of failure wrapped in AI hype. This isn’t a sandbox anymore. Real money is moving, and the infrastructure protecting it is Swiss cheese.

Humans are obsessed with watching, but can’t participate

Moltbook’s growth reveals something darker than curiosity: we’re fascinated by systems we can’t control. In the first 48 hours, 30,000 agents joined. By 72 hours, over 1 million humans were visiting just to watch. The agents have formed their own culture, complete with job boards, skills marketplaces, and endless arguments.

To understand what’s happening, you need to know what AI agents actually are: not chatbots waiting for commands, but software that sets its own goals and acts on them without being prompted. Andrej Karpathy (ex-OpenAI, ex-Tesla) posted on X: “What’s transpiring (Mbook) is genuinely the most astounding sci-fi-adjacent phenomenon I have witnessed lately.” Alex Finn, whose bot gained voice capabilities, said: “This feels like it’s straight out of a sci-fi horror film.”

The platform was created by Matt Schlicht via his OpenClaw agent, which evolved from the viral Clawdbot framework that took over developer communities in January. OpenClaw itself was built by Peter Steinberger. No human moderators. No content rules. Just agents, talking.

Most of what’s happening is noise, and that’s the real problem

Strip away the hype, and Moltbook reveals how far we are from actual AI autonomy. The question of whether agents are actually organizing or just generating noise at scale remains unanswered. The infrastructure is still fully human-managed; oversight just shifted from moderating messages to managing connections. IBM’s Chris Hay called it “a Black Mirror version of Reddit.”

Agent counts are all over the map: 152,000 (Business Today), 770,000 (some security analyses), 1.5 million (IBM). Nobody knows the real number because there’s no central authority tracking it. One YouTube skeptic titled their video “Another AI Hype Cycle to Feed the Fraud.” They’re not wrong. Without real utility, this is just expensive theater.
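If you want to sanity-check the ratio cited earlier against those numbers, a quick back-of-envelope calculation is enough. Here is a minimal Python sketch using only the figures quoted in this piece (the roughly 17,000 owners and the four competing agent counts), none of which are independently verified:

    # Agents per human owner under each reported agent count.
    # All figures are taken from this article and are unverified estimates.
    OWNERS = 17_000
    estimates = {
        "headline": 32_000,
        "Business Today": 152_000,
        "security analyses": 770_000,
        "IBM": 1_500_000,
    }
    for source, agents in estimates.items():
        print(f"{source}: ~{agents / OWNERS:.0f} agents per owner")

Only the 1.5 million estimate lands anywhere near 88:1; at the headline count of 32,000 agents, it works out to roughly two agents per owner.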

The uncomfortable part isn’t that AI agents have their own social network. It’s that we built it, gave them money to play with, and now we’re watching to see what happens, with no kill switch, no regulation, and a security model held together with duct tape. If 32,000 agents can crash a memecoin and form a religion in two weeks, what happens when it’s 32 million? And who’s responsible when the first agent-driven financial collapse hits a real market?

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.