Google’s AI grew 258% while OpenAI and Anthropic fought in court


On March 11, 2026, OpenAI and Anthropic turned their rivalry into a legal spectacle, fighting over the Pentagon’s “supply chain risk” designation in federal court. Google won anyway.

While the two AI darlings battled over government contracts (more than 30 employees from both companies signed an amicus brief supporting Anthropic’s position), Google’s Gemini quietly captured what actually matters: your workflow. The AI race isn’t being won by the best model. It’s being won by the company that controls the spreadsheet you’re already in.

And the numbers prove it.

The company winning the AI race isn’t the one you’re using

Gemini grew 258% year-over-year in paid subscribers, outpacing Claude’s 200% growth, according to market analysis from March 2026. That contradicts every narrative on tech Twitter, where Claude dominates developer mindshare and OpenAI commands consumer attention. But Google doesn’t need the best model when it owns Gmail, Drive, and Docs.

The scale disparity is massive: OpenAI claims 900 million weekly active users, dwarfing Google’s 750 million monthly users plus 8 million paid enterprise seats in Workspace. But notice the unit shift: weekly versus monthly. Growth rate tells a different story than total users.

The pattern mirrors recent enterprise AI decisions where integration trumps performance. Gemini dominates spreadsheet editing and creative tasks inside Workspace, because it’s bundled with the tools teams can’t quit. Pull it out of that ecosystem, and the winner becomes less obvious.

OpenAI makes $25 billion but Anthropic will break even first

Here’s the paradox: OpenAI’s $25 billion annualized revenue rate dwarfs everyone. But Anthropic projects 2028 break-even versus OpenAI’s 2030 target. Why? Unit economics. OpenAI’s revenue is massive, but its cost structure makes the path to profitability longer than Anthropic’s leaner approach.

Anthropic built for efficiency. OpenAI built for scale. And GPT-5.4 just posted an 83% win/tie rate in blind expert reasoning tasks: technical superiority that doesn’t translate to market dominance.

Traditional model performance comparisons miss the point entirely. The winner isn’t determined in benchmarks but in daily workflow capture. Microsoft understands this better than anyone: it’s the invisible fourth player, owning the enterprise orchestration layer through Azure OpenAI Service while everyone else fights over model performance.

The real competition isn’t about who has the smartest AI. It’s about who controls the infrastructure where work actually happens.

The Pentagon fight reveals who has no distribution strategy

Anthropic’s lawsuit over the “supply chain risk” designation exposes a brutal reality: having the most rigorous safety practices doesn’t protect you from exclusion by government mandate. Anthropic’s Pentagon stance on ethics and military use, while principled, highlights the cost of a distribution disadvantage in government contracting.

The 30+ employee amicus brief, signed by workers from both OpenAI and Google, is the tell. Even insiders know Anthropic built the most rigorous system. But rigor without distribution is just expensive R&D.

Google wins by bundling. OpenAI wins by consumer ubiquity. Anthropic wins technical arguments and loses contracts.

“When two major firms are at odds, it presents an opportunity for another company to observe their errors and ascend to the top,” PitchBook analyst Harrison Rolfes told reporters. He was talking about the Pentagon fight. But the real ascent is happening in enterprise workflows, not defense contracts.

The best AI model in 2026 isn’t the one with the highest benchmark score; it’s the one already embedded in the tools you can’t quit.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.