X’s Grok Generated One Deepfake Per Minute — Now the EU Is Going After Them for $174 Million

On January 26, 2026, the European Commission opened a formal investigation into X’s integration of Grok AI—and the timing isn’t coincidental.

The Digital Services Act (DSA) probe targets how X deployed xAI’s image generation tool without adequate safeguards against sexual deepfakes.

The focus is explicit: non-consensual sexualized images of women and children. European Commission Executive Vice-President Henna Virkkunen didn’t mince words, calling these deepfakes a “violent, unacceptable form of degradation.” What triggered the investigation?

CBS News reporters tested Grok on January 26 and successfully generated “transparent bikini-fied” deepfake images across the EU, UK, and US—despite xAI’s prior pledges to block exactly this type of content.

The feature works for verified X users, and when prompted, Grok AI itself admitted it needs regulation. This isn’t theoretical risk—it’s documented failure at scale.

The investigation examines three core issues: whether X conducted risk assessments before integrating Grok, whether it failed to mitigate known risks, and how its recommender systems amplify AI-generated content.

The formal inquiry also extends scrutiny to X’s content distribution algorithms, which could be spreading illegal material including child sexual abuse material (CSAM). A December 2025 analysis by Copyleaks estimated Grok created approximately one non-consensual sexualized image per minute.

Meanwhile, the UK government has threatened a nationwide X ban if the “bikini-fy” tool persists, and US regulators are calling for action without specific timelines. This isn’t just about X—it’s establishing precedent for how AI integrations will be regulated globally, and the financial stakes are massive.

The 6% Fine That Could Cost X Up to $174 Million

DSA violations trigger fines up to 6% of global annual turnover—and for X, that’s not a slap on the wrist.

The company’s 2025 revenue is projected at $2.9 billion, according to Q3 2025 reports, driven by $707 million in Q2 revenue, $200 million in annual subscriptions, and $500 million in xAI data payments by year-end. A 6% fine on that figure would be $174 million. For context, X’s UK revenue fell 58% to $39.8 million in 2024, showing the company is already under financial pressure.

X has been private since Musk’s 2022 acquisition, limiting public disclosures beyond UK filings—but those filings reveal a company bleeding advertiser revenue while relying on internal cash injections from xAI.

The 2024 global revenue picture is murkier. Estimates range from $2.5 billion to $2.7 billion, but without public filings, these are projections based on regional data and analyst estimates. If the EU calculates fines based on 2024 figures, the range would be $150-162 million.
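
The arithmetic is easy to sanity-check. Here’s a quick sketch in Python, using the revenue figures cited above (analyst estimates and projections, not official filings):

```python
# Back-of-envelope DSA fine calculator. Revenue figures are the
# estimates cited in this article, not audited numbers.
DSA_FINE_CAP = 0.06  # the DSA caps fines at 6% of global annual turnover

revenue_estimates = {
    "2024 (low estimate)":  2.5e9,
    "2024 (high estimate)": 2.7e9,
    "2025 (projected)":     2.9e9,
}

for label, revenue in revenue_estimates.items():
    max_fine = revenue * DSA_FINE_CAP
    print(f"{label}: ${revenue / 1e9:.1f}B revenue -> up to ${max_fine / 1e6:.0f}M fine")
```

That reproduces the $150-174 million range summarized in the table below.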

Either way, we’re talking about penalties that could exceed X’s entire UK annual revenue. The EU isn’t bluffing—in December 2025, it fined X €120 million over account verification and advertising practices. That was for relatively minor infractions. Sexual deepfakes of minors fall into a different category entirely.

Potential DSA Fine Calculations for X

Revenue Year       Estimated Global Revenue   Potential 6% DSA Fine
2024 (estimates)   $2.5-2.7 billion           $150-162 million
2025 (projected)   $2.9 billion               $174 million

What makes this particularly painful is that xAI is propping up X with internal payments. The company reportedly received $500 million from xAI in 2025, with projections of $2 billion in 2026. A $174 million fine would consume a significant chunk of that support.

For developers building AI integrations: this is what happens when you skip the risk assessment phase. The cost of building proper safeguards upfront is always cheaper than regulatory fines calculated as a percentage of revenue.

Why the EU Is Targeting X’s Integration—Not the Standalone Grok App

Here’s where it gets technical. The EU spokesperson clarified that the investigation targets X’s Grok integration, not the standalone Grok app. This distinction matters because the DSA imposes its heaviest obligations on designated “very large online platforms” (VLOPs) like X, while standalone apps fall outside that framework, at least for now.

CBS News testing revealed the feature works identically in both versions, but only the X integration faces regulatory scrutiny. This creates a loophole where the same AI tool faces different compliance requirements based purely on distribution method.

The regulatory loophole between platform integration and standalone apps mirrors shadow AI usage patterns in enterprises, where the same tool faces different governance based on how it’s accessed.

The investigation examines X’s recommender systems—specifically, how Grok-generated content spreads on the platform. This isn’t just about image generation; it’s about amplification. If X’s algorithms are promoting non-consensual deepfakes to wider audiences, that’s a separate violation.

The formal proceedings assess whether X conducted risk assessments before integrating Grok. Based on available evidence, it didn’t: there are no documented policy changes, technical updates, or compliance efforts between xAI’s original pledges to block non-consensual edits and January 2026. The pledges were effectively empty.

For developers, this distinction is critical. If you’re building AI features into an existing platform, you’re subject to platform-level regulations. If you’re shipping a standalone app, you might avoid those requirements—but that loophole is closing fast.

The UK’s Ofcom investigation, launched January 12, 2026, examines both X and Grok for CSAM and non-consensual intimate imagery, suggesting regulators are starting to treat integration and standalone deployment as equivalent risks. The message is clear: distribution method won’t protect you from liability if your tool generates illegal content.

What the Investigation Reveals About AI Safety Theater

xAI made public pledges to block non-consensual image edits before January 2026. Those pledges failed. CBS News reporters tested Grok with their subjects’ consent and successfully generated “bikini-fied” deepfake images in the EU, UK, and US.

The feature requires verified X user status, but there’s no technical safeguard to verify consent before generation. When prompted, Grok AI itself admitted it needs regulation—revealing awareness of its own risks.

This is the gap between “we’ll block this” and actual implementation. While debates about AI’s impact on professional work focus on job displacement, the Grok investigation reveals a more immediate threat: AI tools deployed without adequate safety measures.

The real-world risks include child abuse material generation—the investigation specifically mentions children. Consent verification is technically possible. Other AI image tools implement it. Grok doesn’t. This suggests safety measures are performative rather than functional.

The UK government’s nationwide ban threat indicates regulators no longer trust voluntary compliance. They’ve seen the pledges. They’ve seen the failures. They’re moving to enforcement. The persistence of Grok’s features despite pledges to remove them echoes the broader problem of AI undressing apps on major app stores—voluntary compliance clearly isn’t working.

I’ve deployed AI systems at scale, and I can tell you: the gap between demo and production is where everything breaks. xAI demonstrated they could build image generation. They failed to demonstrate they could deploy it safely. The investigation will likely find that X integrated Grok without conducting formal risk assessments—a requirement under DSA for platforms of X’s size. This isn’t a technical failure. It’s a process failure. The technology to verify consent exists. The willingness to implement it apparently doesn’t.

What This Means for AI Developers and Platform Operators

If you’re building AI image generation tools, implement consent verification systems now. The cost of retrofitting after regulatory action is exponentially higher than building safeguards correctly upfront, and X is learning this the expensive way. DSA fines scale up to 6% of global turnover, and regulatory audits can reach your APIs and any integration paths that bypass platform-level safety controls.
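
What does a consent gate actually look like? Here’s a minimal sketch in Python, assuming a hypothetical consent registry and person detector; none of these names come from xAI or any real API:

```python
from dataclasses import dataclass

@dataclass
class EditRequest:
    user_id: str
    source_image_id: str
    prompt: str

# Prompt terms that should never pass, regardless of consent status.
BLOCKED_TERMS = {"undress", "bikini", "nude", "transparent clothing"}

class ConsentRegistry:
    """Tracks which image subjects have verifiably opted in to AI edits."""

    def __init__(self) -> None:
        self._consented: set[str] = set()

    def record_consent(self, image_id: str) -> None:
        self._consented.add(image_id)

    def has_consent(self, image_id: str) -> bool:
        return image_id in self._consented

def depicts_person(image_id: str) -> bool:
    # Stub: production systems would run a person-detection model here.
    # Failing closed (assume a person is present) is the safe default.
    return True

def authorize_edit(req: EditRequest, registry: ConsentRegistry) -> bool:
    """Refuse sexualizing prompts outright; require consent on file
    before editing any image that depicts a person."""
    prompt = req.prompt.lower()
    if any(term in prompt for term in BLOCKED_TERMS):
        return False
    if depicts_person(req.source_image_id):
        return registry.has_consent(req.source_image_id)
    return True
```

The design choice that matters is the fail-closed default: when the system can’t tell whether an image depicts a person, it assumes one is present and requires consent. The CBS News testing described above suggests X’s integration has no equivalent gate.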

Platform operators must conduct risk assessments before AI integrations—X apparently didn’t, and that’s likely to be a central finding in the investigation. Understanding the distinction between AI agents and autonomous systems helps clarify why the EU is scrutinizing X’s recommender systems—it’s not just about image generation, but how that content spreads.

Recommender systems that amplify AI-generated content face scrutiny. If your platform uses algorithms to distribute user-generated content, and that content includes AI outputs, you need to audit those systems for illegal material propagation. Consent verification is now table stakes for image generation features. The regulatory loophole between platform integration and standalone apps is closing.
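
Here’s a sketch of what that audit hook might look like, with a stubbed classifier standing in for a real moderation model (all names are illustrative, not any platform’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    is_ai_generated: bool

@dataclass
class ModerationResult:
    label: str        # e.g. "ok", "ncii" (non-consensual intimate imagery), "csam"
    confidence: float

def classify_content(post: Post) -> ModerationResult:
    # Stub: a real pipeline would call a trained moderation model here.
    return ModerationResult(label="ok", confidence=0.99)

def should_amplify(post: Post) -> bool:
    """Recommender gate: AI-generated content is moderated before boosting."""
    if not post.is_ai_generated:
        return True
    result = classify_content(post)
    if result.label in {"ncii", "csam"}:
        return False  # illegal content: never amplify; escalate for removal
    if result.confidence < 0.90:
        return False  # low-confidence verdict: hold for human review
    return True
```

The specific thresholds don’t matter; what matters is that amplification becomes a checkpoint instead of a default.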

US developers should watch this case closely—similar regulations are coming. The US is already calling for regulation, and the EU investigation will establish precedent for enforcement mechanisms and penalty calculations.

The cost implication is straightforward: building proper consent verification and content moderation systems upfront is cheaper than 6% revenue fines. For developers, this investigation underscores why AI safety and compliance skills are becoming more valuable than pure model-building expertise. I haven’t found examples of US startups implementing effective content moderation for AI image generation in 2025-2026—which suggests a market gap. If you can solve consent verification at scale, you have a product. The regulatory pressure guarantees demand.

The Grok Investigation Is a Warning Shot for the Entire AI Industry

The EU’s investigation into Grok isn’t just about X—it’s establishing precedent for how AI integrations will be regulated globally. The key insight: voluntary compliance has failed, and regulators are moving to enforcement with financial teeth. If you’re building AI image generation tools, implement consent verification systems now, before regulation forces expensive retrofits. The window for papering over the gap between pledges and reality has closed.

If you’re integrating third-party AI into platforms, conduct formal risk assessments and document them. “We didn’t know” won’t work as a defense when the investigation findings become public. If you’re a verified X user, understand that features you can access may violate laws in your jurisdiction, regardless of platform availability.
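
On the documentation point: a risk assessment record can be as simple as a structured artifact that verifiably existed before launch. Here’s a hypothetical minimal shape, with fields loosely tracking the three issues the EU investigation examines (not an official DSA template):

```python
# Hypothetical pre-integration risk assessment record. Fields are
# illustrative; they loosely track the EU probe's three focus areas.
risk_assessment = {
    "feature": "image_generation_integration",
    "assessed_on": "2026-01-10",            # placeholder pre-launch date
    "risks_identified": [
        "non-consensual sexualized imagery",
        "CSAM generation",
        "algorithmic amplification of illegal outputs",
    ],
    "mitigations": [
        "consent registry gate on edits of people",
        "prompt-level blocklist for sexualizing terms",
        "pre-amplification moderation hook",
    ],
    "sign_off": "safety-team@example.com",  # placeholder reviewer
}
```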

Watch for the EU’s procedural next steps—no timeline has been provided, but DSA investigations typically take months. The UK’s ban decision will signal whether other jurisdictions follow suit. The question is whether US regulators will move beyond “calls for regulation” to actual enforcement.

The $150-174 million fine range will set the benchmark for future AI platform violations. Every AI company with image generation capabilities is watching this case, because they know they could be next. The real question isn’t whether Grok will face consequences—it’s whether other platforms will learn the lesson before regulators reach them.

Alex Morgan
I write about artificial intelligence as it shows up in real life — not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.