The Pope’s AI Warning: Use It Too Much, and Your Brain May Stop Working


Pope Leo XIV gathered Rome’s clergy behind closed doors on Feb 19, 2026 to deliver a warning that should terrify anyone who’s handed their thinking to an algorithm: use AI to write your homilies, and your brain will atrophy like an unused muscle.

What the Pope calls “brain death from unused intellect” isn’t a religious problem—it’s already happening to developers, writers, and anyone who’s let an LLM do the cognitive heavy lifting.

The story hit Hacker News hard. Engineers recognized the pattern immediately.

The tech community saw themselves in the Pope’s warning

By Feb 24, 2026, the discussion was generating hundreds of comments from people who'd left organized religion decades ago but still understood what was being lost. One user noted their local parishes "often love posting AI generated devotional pictures" despite the images looking objectively terrible: "I saw sooo many AI Marys," they wrote. Another admitted: "I left the church… but this still makes me sad."

The sadness isn't about theology. It's about watching a profession optimize itself into irrelevance. Priests face the same pressure as every other knowledge worker: produce more content, get more engagement, hit the metrics. AI's encroachment on high-skill cognitive work doesn't discriminate between sermon preparation and software architecture—the cognitive muscles atrophy the same way.

The Pope’s framing was visceral. If you don’t use your brain, “it dies.” Not metaphorically. Actually dies.

Churches already adopted AI faster than the Vatican can regulate it

The numbers tell the real story. Two-thirds of Protestant pastors already use AI to prepare sermons, according to a Dec 2025 survey. Nearly 90% of pastors want AI education and training. The adoption curve isn’t hypothetical—it’s already happened.

And the Vatican knows it. Pope Leo XIV has prioritized AI ethics since his first week in office back in May 2025, framing the disruption as Industrial Revolution-scale. But warnings don’t slow adoption when the tool saves hours and generates TikTok engagement.

The honest problem: clergy aren't resisting, because the technology solves a real pain point. Writing a thoughtful homily every week is hard. An LLM makes it easy. The fact that it might also hollow out the entire practice? That's a tomorrow problem.

This mirrors shadow AI adoption across corporate environments—employees use tools without approval because the efficiency gains are immediate and the cognitive costs are invisible until it’s too late.

The Pope identified who’s most vulnerable, but offered no enforcement mechanism

Young clergy—digital natives seeking efficiency—and elderly clergy—lacking technical discernment—are most at risk, according to reports from the Feb 19 meeting. But the Vatican issued a warning, not a policy. No verification system. No enforcement mechanism. Just: please don’t do this, it’ll destroy the cognitive process that makes ministry meaningful.

Which is exactly how most companies handle AI governance. “Use responsibly” policies with zero guardrails. The Pope’s intervention might be too late and too soft, but at least he’s naming the threat. Most organizations won’t even do that.

The Vatican offered no cognitive maintenance practices to counter AI dependence. Priests are left to navigate the atrophy risk alone, the same as every developer who’s stopped reading documentation because Copilot autocompletes everything.

Here’s what’s actually happening: parishes love AI devotional art despite its uncanny wrongness. Congregations aren’t rejecting the slop—they’re embracing it. The Pope says AI “will never be able to share faith” because it lacks the prayer-rooted preparation that transmits meaning. But if the output gets likes and saves time, does anyone actually care about the cognitive cost?

Efficiency and authenticity are now permanently at odds. Every knowledge worker is choosing a side, whether they realize it or not.

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.