Claude crashed at the exact moment it became the most politically charged AI platform in America


Anthropic's AI chatbot Claude experienced an extended outage on March 2, just hours after reaching a major milestone in popularity. The disruption came at a moment of surging user demand, political controversy, and record-breaking growth.

According to Anthropic, Claude was affected by "elevated errors," leaving consumer-facing services such as claude.ai and its apps temporarily unavailable.

The company later confirmed that services were fully restored after deploying a fix.

Service Disruption Affected Chat, Apps, and Claude Code

Users began reporting problems connecting to Claude shortly after 6 a.m. ET, with Downdetector logging more than 2,000 reports during the outage. In India alone, more than 300 users flagged issues, with complaints spanning Claude Chat, the mobile app, the website, and Claude Code.

Anthropic acknowledged that some API methods were not working and confirmed that login and logout paths were affected. However, the Claude API that powers business customers remained operational throughout the disruption.

The company stated:

"Claude is currently unavailable on our consumer-facing surfaces such as claude.ai and our apps. The Claude API that powers businesses remains unaffected. Our team is working to restore full service."

By approximately 11 a.m. ET, Anthropic confirmed that Claude was back online across its platforms.

The Timing Raised Eyebrows

The outage occurred only hours after Claude climbed to the number one position among free apps in Apple's U.S. App Store, dethroning OpenAI's ChatGPT. Free Claude users have increased more than 60 percent since January, according to the company.

Anthropic described the recent surge as "unprecedented demand," noting record-breaking daily signups in the past week.

This rapid growth follows a highly public dispute between Anthropic and the U.S. government.

Political Pressure Intensified Days Before the Outage

On Friday, President Donald Trump ordered federal agencies to stop using Anthropic's AI tools, canceling more than $200 million in contracts. Defense Secretary Pete Hegseth described the company as a national security "supply chain risk."

Anthropic CEO Dario Amodei responded that the company was being punished for refusing to loosen restrictions on how its AI models could be used by the U.S. military.

Despite the directive, reports indicated that Claude was used during U.S. strikes on Iran shortly afterward.

Meanwhile, OpenAI signed an agreement with the U.S. Department of Defense to deploy its models within a classified government network.

Second Outage in Less Than 24 Hours

The March 2 disruption was not an isolated incident. Claude experienced another outage between 5 p.m. and 8 p.m. in several regions, marking the second service interruption within 24 hours.

Anthropicโ€™s status page initially stated that the issue had been identified and that a fix was being implemented. After restoration, the status page indicated that all systems were operational and that monitoring was ongoing.

Growth and Strain at the Same Time

The outage comes at a pivotal moment for Anthropic. Backed by Amazon and Alphabet's Google, the company has seen accelerated consumer adoption even as it faces mounting political scrutiny.

With daily signups breaking records and the app topping download charts, Claude's rapid rise appears to be testing its infrastructure in real time.

Anthropic thanked users for their patience, noting that the team is working to "match the incredible demand we've seen for Claude in recent days."

For now, Claude is back online. But the combination of record growth, government friction, and repeated service disruptions places the platform at the center of one of the most closely watched moments in the AI race.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it's actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.