17 AI startups raised $100M+ in 49 days. Then a Google VP killed their business model


Seventeen US AI companies raised over $100 million between January 1 and February 17, 2026, a span of just 49 days.

Four days after that tally closed, a Google VP publicly declared the business model behind many of those bets dead on arrival.

The AI funding market has decoupled from business fundamentals so completely that investors are writing nine-figure checks to startups building on architectures Google insiders say won’t survive commoditization. This isn’t optimism; it’s willful ignorance at industrial scale.

The warning came four days too late for $1.4 billion in bets

Darren Mowry, a vice president at Google, warned on February 21 that “thin” LLM wrappers and aggregators face existential risk as base models absorb their core functions. Routing queries? GPT-5 does that natively now. Tool orchestration? Base models like Gemini ship it built in. The middleman layer, the entire value proposition for dozens of startups, collapses when the infrastructure they sit on top of just… does the thing itself.

But January’s mega-rounds closed before Mowry spoke. SkildAI raised $1.4 billion in a Series C on January 14 for robotics AI. Flapping Airplanes secured $180 million in seed funding on January 28. These aren’t Series B rounds stretched thin; they’re seed and early-stage deals carrying valuations that would’ve been considered reckless for mature companies three years ago.

Even OpenAI’s own financial struggles suggest the wrapper economics don’t work at scale. Yet investors funded 17 companies in 49 days to replicate variations of the same model.

Either these investors know something Mowry doesn’t, or they’re betting they can exit before commoditization hits. History suggests the second scenario is more likelyโ€”and messier.

Seed rounds now cost more than Series B did in 2023

AI seed valuations carry a 42% premium over non-AI startups, according to recent Crunchbase data. Series A rounds for AI companies average $51.9 million, 30% higher than their non-AI counterparts. This isn’t normal risk pricing. It’s FOMO institutionalized.

OpenEvidence raised $250 million in January 2026 for medical AI chatbots: a vertical-specific wrapper betting it can build a moat before base models commoditize clinical reasoning. Maybe it can. But the pricing assumes certainty in a market where Google VPs are publicly warning about structural collapse.

The valuation inflation isn’t subtle. Seed rounds that would’ve raised $15 million in 2023 are now closing at $150 million. And the justification? “AI changes everything.” It does. Just not always in the direction founders expect.

The last AI funding wave left $2.1 billion in wreckage

No major AI wrapper has publicly shut down or pivoted in February 2026 yet. Mowry’s warning is forward-looking, not reactive. But 47 AI startups burned through $2.1 billion in combined funding during 2025 chasing similar models, according to industry tracking. And two AI unicorns vanished entirely last year, proof that billion-dollar valuations don’t guarantee survival when the architecture shifts underneath you.

The pattern is consistent: capital floods in, founders build on someone else’s infrastructure, that infrastructure evolves to absorb the startup’s entire value proposition, margins compress, funding dries up. Rinse, repeat.

What’s different this time is the speed. It took years for the 2025 cohort to fail. The 2026 cohort is raising at multiples of those valuations with the same structural vulnerabilities, and Mowry’s warning suggests the timeline to commoditization has compressed dramatically.

Seventeen companies raised nine figures in 49 days. A Google VP declared their business model terminal four days later. One of these signals is wrong. The market is betting it’s the VP. History suggests otherwise.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.