SK Telecom built a 992-square-meter booth for MWC Barcelona 2026—the largest telecom AI demo at the show. Inside: live demos of Korea’s first 519-billion-parameter LLM, a full-stack AI infrastructure play, and a pitch that carriers can own the AI stack from silicon to service. But SK Telecom won’t say what it costs to run a model that size on telecom infrastructure. And that’s the only number that matters.
This isn’t a product launch. It’s a positioning play. SK Telecom is betting telecoms can monetize AI by controlling infrastructure, models, and services simultaneously—avoiding the “dumb pipe” fate that’s haunted carriers since smartphones turned them into commodity bandwidth sellers. The question isn’t whether the tech works. It’s whether the economics make sense, and SK Telecom has disclosed exactly zero data to answer that.
SK Telecom’s 519-billion-parameter bet has no public price tag
The A.X K1 model is real—not vaporware. It advanced to Phase 2 of Korea’s Sovereign AI Foundation Model Project in January 2026, backed by government funding and developed in partnership with national research institutes. At 519 billion parameters, it’s larger than most enterprise-focused models and positions SK Telecom as a credible player in the LLM race.
But without energy costs or pricing versus AWS, Azure, or Google Cloud, it’s impossible to evaluate commercial viability.
SK Telecom hasn’t disclosed kilowatt-hours per inference. It hasn’t published enterprise pricing. It hasn’t explained how inference-dominated workloads—which require continuous GPU utilization—will scale on 5G networks designed for bursty data traffic, not sustained compute. The company is asking enterprises to bet on economics it won’t share. That’s not a business model. That’s a science project with a booth.
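To see why the missing numbers matter, here is a back-of-envelope sketch of the energy math SK Telecom could publish but hasn't. Every figure below (accelerator power draw, token throughput, electricity price) is a hypothetical placeholder for illustration, not a disclosed SK Telecom or A.X K1 number:

```python
# Hypothetical inference-cost arithmetic. All constants are assumptions,
# not disclosed figures; the point is that three numbers settle the question.

GPU_POWER_KW = 0.7        # assumed steady draw of one inference accelerator, kW
TOKENS_PER_SECOND = 1500  # assumed batched throughput for a ~500B-parameter model
PRICE_PER_KWH_USD = 0.12  # assumed industrial electricity rate, USD/kWh

def energy_cost_per_million_tokens(power_kw: float,
                                   tokens_per_s: float,
                                   price_per_kwh: float) -> float:
    """Electricity cost in USD to generate one million tokens at steady state."""
    seconds = 1_000_000 / tokens_per_s          # wall-clock time for 1M tokens
    kwh = power_kw * seconds / 3600             # energy consumed in that time
    return kwh * price_per_kwh

cost = energy_cost_per_million_tokens(GPU_POWER_KW, TOKENS_PER_SECOND,
                                      PRICE_PER_KWH_USD)
print(f"~${cost:.4f} in energy per 1M tokens under these assumptions")
```

Under these made-up inputs the electricity cost is a fraction of a cent per million tokens, which is why the real constraint is capital and utilization, not the formula: without SK Telecom's actual power, throughput, and rate figures, none of the three inputs can be checked.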
Compare OpenAI's well-documented financial position: massive capability, unclear path to profitability. Without transparent economics, SK Telecom risks the same trap.
The one thing SK Telecom’s AI actually does well—and why it’s not enough
Network optimization works. SK Telecom’s AI systems are already deployed internally, analyzing traffic patterns and adjusting base station configurations in real time. The company claims field trials showed meaningful improvements in congestion mitigation—not just lab benchmarks, but operational results in dense urban environments.
That’s the working use case. And it’s entirely internal.
The jump from "optimize our own network" to "compete with hyperscalers on customer-facing inference" is massive. SK Telecom's AI Inference Factory, a unified hardware-software stack for inference workloads, promises to close that gap. But the company has not disclosed any customer deployments. No enterprise contracts. No pricing tiers. No case studies showing enterprises actually switched from hyperscaler inference to telecom infrastructure.
This is part of a broader pattern of telecoms racing to avoid commodity status in the cloud era. But solving your own operational problems doesn’t prove you can sell AI at scale.
The vendor lock-in problem telecoms won’t talk about
SK Telecom’s pitch is that full-stack control solves hyperscaler dependency. Own the infrastructure, own the model, own the pricing. No AWS bills. No Azure lock-in.
But the AI Inference Factory creates a new dependency—one potentially worse than the hyperscaler lock-in it’s meant to replace.
Integrated hardware-software stacks mean higher switching costs. Enterprises that build on SK Telecom’s infrastructure can’t easily migrate workloads to competitors. And inference-dominated AI workloads will stress 5G networks in ways they weren’t designed for. Compare this to Nvidia’s infrastructure investments, which target specialized AI workloads from the ground up—not retrofitted telecom networks.
The energy and infrastructure limits Nvidia’s CEO warned about aren’t future problems. They’re hitting telecom networks now. SK Telecom has the infrastructure advantage—fiber, data centers, edge compute. But inference workloads consume power continuously, not in the bursty patterns 5G was optimized for. Without disclosed energy metrics, there’s no way to know if telecoms can actually handle sustained AI inference at hyperscaler economics.
SK Telecom has the infrastructure. It has the government backing. It has a working 519-billion-parameter model. What it doesn’t have—and won’t disclose—is the energy math that determines whether telecoms can actually own the AI stack, or whether they’re building the most expensive science project in wireless history.