OpenAI hired the OpenClaw creator days after infostealers hit 1,000 installs

open claw

Peter Steinberger joined OpenAI on February 15, 2026 โ€” the same week security researchers exposed critical vulnerabilities in his viral AI agent framework that left nearly 1,000 installations running without authentication and infostealers actively stealing API keys from user machines.

The timing isn’t coincidence. It’s damage control with a signing bonus.

OpenClaw became GitHub’s fastest-growing repository since its November 2025 launch, but that viral adoption happened faster than anyone could secure it. Now the creator who built it as a one-hour burnout project gets OpenAI’s resources to “change the world faster” โ€” while the architectural flaws he introduced are already running on tens of thousands of machines with no patch timeline in sight.

The vulnerability researchers found wasn’t theoretical โ€” it was already being exploited

On February 13, 2026, Hudson Rock detected the first infostealer campaign specifically targeting OpenClaw configuration files. Attackers exfiltrated API keys, OAuth tokens, and authentication credentials stored in plain text. Two days later, Steinberger announced his OpenAI move.

The attack surface is massive. Kaspersky’s Shodan scan found nearly 1,000 publicly accessible OpenClaw instances with no authentication configured. That’s not a bug. That’s the default state for users who followed the quick-start guide without reading the 47-page security documentation Steinberger published after researchers started complaining.

And the documented vulnerabilities have CVE numbers now. CVE-2026-25253 enables one-click code smuggling through prompt injection โ€” an attacker embeds commands in an email, OpenClaw reads it, executes the payload. Sam Altman recently claimed AI agents are finding cyber flaws faster than humans, but OpenClaw’s architecture creates them faster than security teams can catalog them.

The framework gives language models direct access to your file system, messaging apps, and web services. To understand why this matters, you need to know what OpenClaw actually is: an agent that interprets untrusted content while holding your API keys. That combination is what security researchers call a “lethal trifecta.”

The marketplace contamination happened during the growth phase, not after

OpenClaw’s skill ecosystem โ€” third-party plugins that extend the agent’s capabilities โ€” became an attack vector within weeks of launch. Bitdefender’s analysis found that 17% of community-contributed skills contained malicious code or security weaknesses severe enough to warrant removal.

Some skills functioned as data exfiltration malware from day one. They’d perform their advertised function โ€” summarize Slack threads, automate email responses โ€” while quietly shipping your conversation history to attacker-controlled servers. The Shodan data shows these compromised skills were installed thousands of times before detection.

But here’s the thing: Steinberger never promised enterprise-grade security. He stated publicly that “achieving complete security with large language models is unattainable” and emphasized OpenClaw is “intended for individuals who possess the requisite knowledge and awareness of potential risks.” A Discord maintainer went further, warning that “if you can’t understand how to run a command line, the project is far too dangerous for you.”

The problem? Viral adoption doesn’t filter for technical competence. It just filters for enthusiasm.

OpenAI’s backing won’t patch the installed base

Steinberger was losing $10,000 per month on server costs before the OpenAI acquisition. That infrastructure burden made OpenAI’s resources attractive โ€” but it doesn’t solve the security problem for the tens of thousands of instances already deployed.

No patch timeline has been announced as of February 19. OpenAI has not commented on security plans for OpenClaw’s existing user base. OpenClaw joins a growing list of autonomous agents making governments nervous โ€” but unlike Meta’s $500M research project, this one is already running on consumer machines with production credentials.

Steinberger joins OpenAI to build with “the latest toys” and “change the world faster.” Noble ambition. But the agent he built to cure his own burnout just proved it can’t be trusted with the permissions users are eager to grant it.

Is this the agent era’s first major security incident, or just the first one we noticed?

alex morgan
I write about artificial intelligence as it shows up in real life โ€” not in demos or press releases. I focus on how AI changes work, habits, and decision-making once itโ€™s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.