Google TV’s new AI costs $20/month — for a device nobody talks to


Google TV just got three new Gemini features. Most people will never use them — and Google knows it.

Rollout begins today, March 28, 2026, in the U.S. and Canada: visual answers that embed recipe videos mid-search, live sports scorecards with viewing info, and narrated “deep dives” on health and economics topics. It’s sub-second AI video search on your TV, provided you’re willing to talk to it, pay $20+ per month for the good parts, and own hardware most people don’t have yet.

The nut graf: Google built genuine video understanding for the one device in your home designed for passive consumption, then locked the killer features behind subscription tiers that make zero sense for shared household screens.

The tech works — the interface doesn’t

Google’s engineering here is real. Gemini can parse embedded recipe videos, generate live NBA scorecards with “where to watch” links, and narrate breakdowns on complex topics — all through voice commands. The same voice-commanded workflows reshaping office work now face their biggest test: convincing people to interrogate their TVs like research assistants.

But TVs aren’t laptops.

You lean back on a couch with a remote; you don’t lean forward over a keyboard. The cognitive load of voice commands (“Hey Google, show me a deep dive on Mediterranean diet benefits with embedded cooking demos”) fights against how people actually use living room screens. One YouTube creator hyped Gemini’s hidden power: “Gemini is the only tool that can go through and interact with tons of different files… understand and watch videos.” True. Also irrelevant if the interface makes you work harder than just opening YouTube on your phone.

And the really impressive stuff? That’s where the paywall hits.

Google’s $20/month bet on a problem nobody asked to solve

Richer visual answers, narrated deep dives, and sports briefs all require Google AI Pro or Ultra subscriptions. AI Pro costs $19.99/month. Ultra runs $249.99/month. Free users get basic voice search — the feature Google TV already had before today.

This isn’t an upgrade for most users. It’s a tier gate.

Compare this to how people actually use TVs: they pick a show, press play, and zone out. They don’t investigate. They don’t narrate. They don’t deep-dive. Google’s subscription tiers follow a familiar pattern — free users get the demo, paid users get the product. But on a shared household device where one person’s $20/month unlocks features for everyone watching? The value prop collapses. Nobody’s paying monthly fees to ask their TV about sports scores when their phone does it for free.

Rollout expands to Australia, New Zealand, and the U.K. this spring. The pricing problem travels with it.

The hardware problem Google isn’t talking about

Even if you wanted to pay, your TV probably can’t run these features: they require Android TV OS 14 or later, which most Google TV owners don’t have yet. Gemini first launched on Google TV in September 2025, but only on select TCL models. CES 2026 previews promised broader availability, but Google didn’t disclose the hardware requirements until launch.

This is a slow-burn rollout disguised as a product announcement. The install base won’t support mass adoption for years, and the people who do have compatible hardware are the least likely to need AI-powered video search — they’re early adopters who already use voice assistants everywhere else.

Google built the AI. The video understanding is legitimate. Gemini’s growing reach across Google’s ecosystem makes this TV integration feel inevitable — even if the execution doesn’t.

Now it needs to figure out if anyone actually wants to interrogate their TV — and whether they’ll pay monthly for the privilege.

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.