Talking to ChatGPT Doesn’t Reduce Loneliness… A Random Stranger Does, Study Finds


The pitch sounds almost too good: an AI companion that’s always available, infinitely patient, and programmed to be empathetic. For millions of young people navigating social isolation, that promise has real appeal. But a new wave of research is producing an uncomfortable finding — chatbots may be doing far less for loneliness than assumed, and in some cases, they may be making it worse.

The most striking evidence comes from the University of British Columbia, where researchers ran a tightly controlled two-week experiment with 300 first-semester college students — a group chosen precisely because they’re at peak vulnerability for social isolation. Students were split into three groups: daily text exchanges with a randomly assigned fellow student, a daily one-sentence journaling exercise, and daily conversations with a Discord chatbot powered by GPT-4o mini. The results, published in the Journal of Experimental Social Psychology, were unambiguous.

Students who texted a stranger experienced roughly a 9% reduction in loneliness. Students who used the chatbot? Around 2% — statistically indistinguishable from writing alone in a journal. The technology designed to simulate human connection performed on par with talking to a blank page.

💡 Key Insight

Despite the chatbot displaying higher levels of empathy than human conversation partners, it produced only a 2% reduction in loneliness — the same result as solo journaling. The quality of expressed empathy did not translate into felt connection.

Why empathy from a machine doesn’t land the same way

One of the most telling details from the UBC study is the empathy gap — in the wrong direction. The chatbot actually expressed more empathy than the human texting partners. Yet that didn’t move the needle. This points to something researchers are only beginning to articulate: reducing loneliness may depend not just on receiving empathy, but on giving it. In human exchanges, both parties are emotionally active. With a chatbot, that reciprocity collapses. Users received understanding but had no one to genuinely understand in return — and that asymmetry may be precisely what limits the effect.

The message volume was comparable across groups: eight to ten messages per day in both the human and chatbot conditions. This rules out the obvious explanation that people just weren’t engaging with the bot. They were — just without meaningful effect.

→ What this means

Loneliness relief may require a two-way emotional transaction — not just being heard, but having someone worth listening to. Chatbots, by definition, remove one side of that equation.

Short-term comfort, long-term cost

The UBC findings are only part of the picture. The same research lab published a companion study in Psychological Science, tracking more than 2,000 people across 12 months with quarterly check-ins. The pattern was consistent and troubling: heavier chatbot use was associated with higher loneliness over time, and higher loneliness in turn predicted increased chatbot use. Co-author Dr. Dunigan Folk described this reciprocal loop as suggestive of a self-reinforcing cycle, while stopping short of calling it a spiral.

Folk’s characterization of chatbots as “social junk food” captures the dynamic well. They deliver a real, measurable short-term benefit — post-interaction mood does improve, and researchers acknowledge this — but they don’t build anything durable. The relief doesn’t compound. And if chatbot use substitutes for human interaction rather than supplementing it, the long-term cost may outweigh the momentary comfort.

A separate four-week study from the MIT Media Lab, conducted jointly with OpenAI, reinforced this picture. Across interaction formats, text and voice alike, heavier daily chatbot usage correlated with greater loneliness, increased dependence, and reduced real-world socialization. The delivery mechanism didn’t matter; frequency of use was the determining factor.

| Dimension | Human Texting (stranger) | Chatbot Conversation |
| --- | --- | --- |
| 2-week loneliness reduction | ~9% | ~2% (≈ journaling) |
| Empathy expressed | Moderate | High |
| Daily message volume | 8–10 messages | 8–10 messages |
| Long-term loneliness trend | Improves | Worsens with heavy use |
| Technology required | SMS / basic messaging | LLM infrastructure |

The simpler the intervention, the better it works

Lead researcher Ruo-Ning Li called the human-pairing approach “such a low-tech, simple intervention” — and that simplicity is part of the point. Matching two strangers over text, with no algorithm optimizing the experience and no AI moderating the tone, produced results that sophisticated conversational AI could not replicate. The implication for campus mental health programs is direct: peer-pairing schemes may be significantly more effective than deploying chatbot companions, at a fraction of the infrastructure cost.

This doesn’t mean AI has no role in emotional support. Li’s position is nuanced: AI can genuinely help reduce negative feelings in the moment, and that matters. The concern is substitution — when chatbot use crowds out the human interactions that actually build lasting connection. The problem isn’t that chatbots are useless; it’s that they may be too easy, training users to reach for a low-effort alternative that delivers just enough comfort to reduce the motivation to seek something more nourishing.

→ What this means

For product designers and mental health app developers, these findings raise a genuine design ethics question: should AI companions be built to actively encourage human connection rather than replace it? The data suggests the current default — AI as a sufficient substitute — is the wrong framing entirely.

Three studies, two research teams, different methodologies, same conclusion: chatbots can soften a difficult moment but cannot build the relational fabric that actually counters isolation. The most powerful antidote to loneliness remains something no language model can replicate: another person, genuinely present, on the other end of the line.


Sources
https://www.sciencedirect.com/science/article/pii/S0022103126000417

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.