“People Will Buy Intelligence On Demand”: Sam Altman’s Chilling Prediction


During a recent appearance at the BlackRock Infrastructure Summit, OpenAI CEO Sam Altman delivered a vision of the future that sounded less like a product roadmap and more like a blueprint for a new economic system.

In roughly 35 minutes on stage, Altman outlined a world where artificial intelligence becomes an on-demand utility, data centers rival human cognition, and entire companies could operate without a single employee.

The conversation took place in front of an audience of investors and infrastructure executives — and notably, the interviewer openly acknowledged being both a friend of Altman and a member of OpenAI’s board. That context set the tone for what followed: a largely unchallenged presentation of a future where AI infrastructure becomes one of the most powerful economic forces ever built.

From software product to “intelligence utility”

The central idea Altman returned to repeatedly is simple but radical: AI will eventually function like a public utility.

Instead of companies buying software licenses or hiring human expertise, they will purchase units of intelligence in the same way households buy electricity or water.

According to Altman, users will simply request cognitive work whenever they need it — strategy analysis, research, coding, or operations — and pay based on usage.
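The utility framing maps naturally onto metered billing, the same model as an electricity invoice. A minimal sketch of what "buying units of intelligence" could look like is below; the unit names, rates, and function are invented for illustration and are not OpenAI's actual pricing.

```python
# Hypothetical sketch of usage-metered "intelligence" billing,
# analogous to an electricity meter. All names and rates here are
# assumptions for illustration, not any provider's real pricing.

def metered_cost(units_consumed: float, rate_per_unit: float,
                 base_fee: float = 0.0) -> float:
    """Return the bill for one period: a flat connection fee plus
    a per-unit charge, exactly like a utility invoice."""
    return base_fee + units_consumed * rate_per_unit

# Example: 1,200 hypothetical "units of intelligence" at $0.05 each,
# on top of a $10 connection fee.
bill = metered_cost(1200, 0.05, base_fee=10.0)
print(bill)  # 70.0
```

The point of the analogy is the shape of the transaction: no license, no headcount, just consumption multiplied by a rate.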

This is the core of what OpenAI appears to be building toward: a system where AI becomes infrastructure rather than a tool.

Altman summarized the idea with striking clarity:

“We see a future where intelligence is a utility like electricity or water, and people buy it on demand to do whatever they want.”

It’s a framing that echoes one of the most famous slogans in the history of energy infrastructure: the 1950s promise that nuclear power would be “too cheap to meter.”

That promise famously never materialized — but the analogy reveals how Silicon Valley increasingly views AI: not as software, but as civilization-scale infrastructure.

AI agents that work for weeks, not minutes

Another key part of Altman’s vision involves the rapid evolution of AI agents.

Today, most AI tools operate in short bursts: a prompt, a response, and the interaction ends.

Altman believes that will soon change dramatically.

Instead of handling minutes-long tasks, future AI systems could run autonomous projects lasting weeks, continuously working with full organizational context.

These agents would function less like chatbots and more like digital employees that never log off.

Altman even hinted that he already uses AI internally as a first line of thinking when evaluating ideas before consulting human colleagues.

If this model scales, entire companies could eventually operate with very small human teams supervising large fleets of AI agents.

The claim that raised eyebrows: data centers with more “cognitive capacity” than humanity

Perhaps the most controversial moment came when Altman speculated about the near-term growth of AI infrastructure.

He suggested that by around 2028, the total computational “cognitive capacity” inside data centers might exceed that of the human population.

The statement relies on equating massive computational throughput with cognition — an assumption that many scientists would strongly debate.

Raw compute power measured in FLOPS is not equivalent to human thought, creativity, or reasoning. Comparing them is closer to a marketing metaphor than a scientific metric.
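How sensitive the comparison is to its assumptions can be seen with a back-of-envelope check. Every constant below is a rough, contested assumption: published estimates of the brain's "computational equivalent" span several orders of magnitude, and 1e16 FLOPS is just one commonly cited middle-of-the-road figure.

```python
# Back-of-envelope check of the "data centers vs. humanity" claim.
# Every constant is an assumption; brain-compute estimates vary by
# orders of magnitude, which is exactly why the comparison is shaky.

HUMAN_BRAIN_FLOPS = 1e16   # one rough estimate; others range ~1e13 to 1e18
HUMAN_POPULATION = 8e9     # approximate world population

humanity_total = HUMAN_BRAIN_FLOPS * HUMAN_POPULATION  # 8e25 FLOPS

ACCELERATOR_FLOPS = 1e15   # order of magnitude for a modern AI chip

# Accelerators needed to match humanity under these assumptions:
fleet_needed = humanity_total / ACCELERATOR_FLOPS
print(f"{fleet_needed:.0e}")
```

Under these numbers the answer is on the order of 8e10 accelerators; pick a brain estimate one or two orders of magnitude lower and the claim suddenly looks plausible. The conclusion depends almost entirely on a constant nobody agrees on.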

Still, the claim illustrates the scale at which AI infrastructure is expanding.

Major tech companies are already investing tens of billions of dollars in new AI data centers, and OpenAI recently completed a massive funding round valuing the company at around $110 billion.

The global AI race: infrastructure versus models

Altman also addressed the growing technological rivalry between the United States and China.

His view is that the two ecosystems are diverging into different strengths.

The U.S. leads in frontier models and proprietary systems.

China, meanwhile, is accelerating in low-cost inference and open-source ecosystems.

Altman compared the discovery of deep learning to the invention of the transistor — a fundamental scientific breakthrough that eventually becomes widely understood and replicated.

Once that happens, competitive advantage shifts away from the theory and toward three things:

  • Physical infrastructure
  • Access to massive training data
  • System integration at scale

In other words, the AI race may ultimately be less about algorithms and more about who builds the biggest computing infrastructure.

An uncomfortable question: what happens to work?

On employment, Altman acknowledged that the transition could be painful.

He expressed long-term optimism but admitted that the coming years may involve a difficult economic adjustment.

At one point he remarked:

“In many professions, it will be hard to work harder than a GPU.”

That sentence captures the paradox at the center of the AI revolution.

If intelligence becomes abundant and cheap, productivity could soar — but the relationship between work, wages, and economic value may be fundamentally disrupted.

Altman briefly mentioned a scenario where society experiences deflation driven by AI abundance: living standards improve while traditional economic metrics like GDP decline.

Yet the discussion stopped short of addressing the hardest questions.

How would wealth generated by AI infrastructure be distributed?
Who controls the systems producing the intelligence?
And what happens to workers whose skills are suddenly cheaper than automated cognition?

The most striking part wasn’t the answers — it was the questions that weren’t asked

Beyond the futuristic predictions, what stood out most from the discussion was the absence of pushback.

The conversation avoided several controversial topics surrounding OpenAI, including:

  • Governance tensions inside the company
  • The concentration of power among a few AI providers
  • The labor behind data labeling and model training
  • The long-term sustainability of AI infrastructure costs

When the interviewer is both a board member and a friend of the CEO on stage, the format inevitably becomes less interrogation and more presentation.

What emerges is a clear message to investors: the AI industry is no longer selling tools.

It is building a new layer of global infrastructure — one where intelligence itself becomes a commodity.

Whether that future delivers prosperity, disruption, or both is a question that remains largely unanswered.

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.