AI just beat the average human at creativity. Not matched: beat.
A University of Montreal study published January 25, 2026 in Scientific Reports tested more than 100,000 humans against GPT-4, Claude, and Gemini on the Divergent Association Task, a standardized creativity benchmark that measures how well you connect unrelated concepts. The result: AI models scored higher than the median human. If you’re an average creative professional, you’re now competing with software that costs $20/month and never sleeps.
This isn’t theoretical. It’s measurable, reproducible, and it redefines what “creative work” means in 2026.
The creativity test that killed the ‘uniquely human’ myth
The DAT asks participants to generate ten words as semantically distant from each other as possible. “Cat, symphony, nitrogen, betrayal”: that kind of thing. It’s not about quality. It’s about divergent thinking: can you break free from obvious associations?
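Under the hood, the DAT is scored with word embeddings: roughly, the average pairwise cosine distance between the submitted words, scaled by 100. A minimal sketch of that scoring idea, using tiny hand-made vectors in place of the real pretrained embeddings (the toy `EMBED` table and its values are illustrative, not the actual task data):

```python
import math

# Toy 3-d "embeddings". The real DAT uses large pretrained word vectors;
# these are hand-picked so that "cat" and "dog" sit close together.
EMBED = {
    "cat":      [0.9, 0.1, 0.0],
    "dog":      [0.8, 0.2, 0.1],
    "symphony": [0.0, 0.9, 0.3],
    "nitrogen": [0.1, 0.2, 0.9],
}

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, larger when unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def dat_score(words):
    """Average pairwise cosine distance across all word pairs, scaled by 100."""
    vecs = [EMBED[w] for w in words]
    dists = [cosine_distance(vecs[i], vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return 100 * sum(dists) / len(dists)

# Semantically distant words score higher than a list with near-synonyms.
print(dat_score(["cat", "symphony", "nitrogen"]))  # higher
print(dat_score(["cat", "dog", "symphony"]))       # lower: cat/dog are neighbors
```

The point of the sketch: the test rewards lists whose words live far apart in embedding space, which is exactly the kind of objective a language model can optimize directly.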
According to the study, GPT-4 outperformed the average human on this task. The models didn’t just participate; they won. Researchers tested temperature settings (the randomness dial that controls AI output variety) and found that cranking it higher made AI responses more unpredictable, more “creative” by the test’s definition.
But here’s the thing: creativity benchmarks measure one narrow slice of originality. The DAT doesn’t test whether you can write a story that makes someone cry, design a brand identity that shifts culture, or solve a client problem nobody saw coming. It tests whether you can generate semantic distance on demand.
And on that specific task, AI beat most people. Creative professionals join the growing list of high-skill jobs AI is targeting, but the threat isn’t replacement; it’s commodification of average performance.
But the top 10% just became untouchable
The counterintuitive finding: elite human creators still crush every AI model on poetry, storytelling, and complex creative tasks. The top 50% of humans consistently outperform every AI on the DAT. The top 10% open an even wider gap.
Translation: if you’re in the top half of creative ability, you’re fine. If you’re not, you just became replaceable.
This isn’t about AI replacing creativity; it’s about AI exposing a brutal new stratification. When researchers tested haikus, movie synopses, and flash fiction, human-written samples were still significantly more creative than AI-generated ones. But only the best human samples. The average ones? Indistinguishable from machine output.
The top 10% aren’t just naturally gifted; they’ve developed skills that make them irreplaceable, even as AI handles routine creative tasks. The middle just disappeared.
The hidden cost: AI creativity runs on human prompts and fossil fuels
AI creativity is entirely dependent on human guidance. Temperature settings, prompt engineering, iterative refinement: without that, AI defaults to generic output. At low temperature, it’s cautious and boring. At high temperature, it’s varied but often incoherent. The sweet spot requires human judgment.
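What temperature actually does is simple: the model’s raw next-token scores (logits) are divided by T before being turned into probabilities, so T below 1 concentrates probability on the likeliest token and T above 1 flattens the distribution toward a coin flip. A minimal sketch with made-up logits (not any real model’s numbers):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by T, then softmax. Low T sharpens; high T flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # near-greedy
hot  = softmax_with_temperature(logits, 2.0)  # closer to uniform

print(round(cold[0], 3))  # → 0.993: almost always picks the top token
print(round(hot[0], 3))   # → 0.502: “creative”, but less predictable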
And there’s the environmental cost nobody’s pricing in. Training these models requires massive energy and water resources. Every “creative” output has a carbon footprint.
Then there’s the cognitive cost. Outsourcing creative judgment to AI might save time today, but it skips the cognitive development that builds expertise over time. Students and junior creatives who lean on AI for ideation never develop the pattern recognition that separates good ideas from great ones.
Despite benchmark wins, AI fails at real work that requires context, nuance, and sustained creative judgment: exactly what the top 10% excel at. The tool works, but only if you’re already good enough to use it properly.
AI beat the average human at creativity. The top 10% are more valuable than ever. If you don’t know which group you’re in, the market is about to tell you.