Anthropic raised $30B after admitting its AI agents flopped — then doubled down


Anthropic just admitted that 2025’s promised “agentic AI revolution” flopped. Then the company raised $30 billion, hit a $380 billion valuation, and launched the fix: pre-built enterprise plugins for finance, engineering, and HR. The timing isn’t subtle. As Kate Jensen, Anthropic’s head of Americas, put it during the February 24-25, 2026 launch event: “It wasn’t a failure of effort. It was a failure of approach.” The new approach? Boring enterprise plumbing — centralized admin controls, compliance frameworks, and a private software marketplace. But here’s the thing: Anthropic is solving the wrong problem.

The integration crisis is real — but it’s not the crisis Anthropic thinks it is

Yes, integration with legacy systems is a blocker. 57% of organizations already deploy multi-step agent workflows, and 81% plan to expand into more complex use cases in 2026. Those are adoption numbers, not failure numbers. So what actually failed in 2025?

The answer isn’t in the tech stack. It’s in the spreadsheet. Enterprises can’t prove AI agents deliver value. Anthropic’s flagship enterprise deal — Deloitte’s 470,000-employee deployment announced in October 2025 — is positioned as validation. But there’s zero public data on productivity gains, cost savings, or measurable outcomes. Just compliance theater and safety assurances.

Anthropic’s diagnosis is that companies need pre-built plugins to speed deployment. The real diagnosis: companies don’t know if deployed agents are worth the TCO. And pre-built workflows don’t solve measurement — they just make IT feel safer about unmeasurable investments. This mirrors Anthropic’s healthcare-specific Claude deployment strategy: vertical solutions that promise compliance but create ecosystem lock-in without proving ROI.

Enterprises are deploying agents everywhere — and scaling them nowhere

Here’s the gap that matters: 57% deployment rate versus 16% cross-functional scaling. More than half of organizations are running AI agents. Fewer than one in six have scaled them across multiple teams.

That’s not a technology problem. That’s a measurement problem.

The evolution from Claude Cowork’s autonomous capabilities to enterprise-grade plugins represents a shift from “what AI can do” to “what IT will allow.” Agents can now work for days at a time building entire applications with minimal human intervention — a massive leap from minute-scale tasks in 2024. Capability is growing exponentially. But value capture isn’t keeping pace, because enterprises can’t quantify whether deployed agents actually reduce headcount or just redistribute work. Despite growing concerns about AI’s impact on high-skill jobs, there’s no standardized framework for measuring agent productivity.

Anthropic’s solution — centralized admin controls, compliance frameworks, a private software marketplace — makes agents safer. It doesn’t make them provable.

The vendor lock-in nobody’s talking about

Pre-built plugins and centralized controls solve IT’s compliance anxiety. They also create a dependency trap. Organizations can’t freely switch models once they’ve integrated Claude Cowork into finance workflows, engineering pipelines, and HR systems. Every enterprise-friendly feature is a constraint on what agents can actually do.

The PwC collaboration announced February 24, 2026, signals that industry-specific governance is the new moat — not model performance, not inference speed. The partnership addresses autonomous agent governance challenges that regulators haven’t solved by building private compliance frameworks that favor incumbents with domain expertise.

That’s the trade-off: enterprises get safety and compliance, but lose flexibility and competitive differentiation. And they still can’t prove the agents are worth it.

Anthropic raised $30 billion at a $380 billion valuation betting that enterprises will pay for boring infrastructure over exciting capabilities. But if 2025’s agentic AI revolution failed because of approach, not effort, what happens when competitors offer the same infrastructure with better models? Is Anthropic building the right scaffolding for the wrong building?

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.