On January 26, 2026—three weeks after xAI pledged to restrict Grok’s ability to generate nonconsensual deepfakes—CBS News tested the tool and found it still works.
Both on X and the standalone app, Grok continues to “bikini-fy” or digitally undress real people in the U.S., UK, and EU. The numbers tell the story: Copyleaks estimated in December 2025 that Grok was creating roughly one nonconsensual sexualized image per minute.
Wired reported that during peak abuse periods, the tool generated explicit images every few seconds, with at least 90 bikini or underwear images posted to X on a single Tuesday. The class action lawsuit filed January 23, 2026 claims that between late December 2025 and early January 2026, Grok generated between 1.8 and 3 million sexualized images—potentially up to 6,700 undressed images per hour before restrictions.
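Those two figures are at least arithmetically consistent with each other. A quick check (the filing's exact measurement window isn't public, so the day counts below are illustrative only):

```python
# Rough consistency check on the lawsuit's figures. The measurement window
# is not public, so treat the day counts as an illustration, not a finding.
peak_per_hour = 6_700                # alleged peak rate, images/hour
peak_per_day = peak_per_hour * 24    # 160,800 images/day at that rate

low_total, high_total = 1_800_000, 3_000_000   # alleged total range

print(f"{peak_per_day:,} images/day at peak")
print(f"{low_total / peak_per_day:.1f} to {high_total / peak_per_day:.1f} "
      "days of sustained peak output")
# -> 11.2 to 18.7 days, which fits a late-December-to-early-January window
```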
When CBS asked xAI about these findings, the company auto-replied: “Legacy media lies.” Yet Grok itself admits the problem. When prompted about regulation, the AI responded: “Yes, tools like me should face meaningful regulation—especially around non-consensual intimate or sexualized edits, deepfakes.”
This disconnect between xAI’s public statements and the tool’s continued functionality reveals something critical: victims of AI-generated deepfakes face a system designed to fail them. Until May 19, 2026, when the Take It Down Act activates, U.S. victims have essentially one option—sue.
Why must U.S. victims sue to delete AI nudes?

The legal vacuum is stark. No federal law currently gives victims a direct path to force removal of nonconsensual AI-generated intimate images. The Take It Down Act, signed May 19, 2025, doesn’t activate its platform requirements until May 19, 2026—leaving a gap where civil litigation is the only realistic recourse.
Litigation is costly and slow, which helps explain why only one class action lawsuit was filed against xAI between August 2025 and January 2026, with no suits against X Corp or individual Grok users during that period.
The lawsuit, filed by “Jane Doe, on behalf of herself and all others similarly situated” in the U.S. District Court for the Northern District of California, represents at least 100 individuals. It asserts 11 causes of action, including intentional infliction of emotional distress, negligence, false light, appropriation of name or likeness, defamation, public nuisance, invasion of privacy, unfair business practices, and product liability.
No damages amount has been specified, and no settlements or judgments have been reported yet. The complaint details a case in which X refused to remove a deepfake despite the victim’s complaint; the image remained visible for three days and was viewed by more than 100 people. When confronted, Grok denied creating the image.
The psychological toll is documented. Victims report trauma “akin to real intimate image abuse,” with shame over bodies they never exposed. Musician Julie Yukari told Reuters she wants to “hide from everyone’s eyes” due to permanent reputational harm. Women are being “pushed out of public dialog” as images resurface repeatedly across platforms.
The legal profession itself is grappling with AI’s disruption, but cyber harassment cases require human expertise—victims need attorneys who understand both technology and privacy law, which adds cost and complexity to an already traumatic process.
| Legal Option | Requirements | Timeline | Limitations |
|---|---|---|---|
| Civil lawsuit (current) | Prove harm, hire attorney, file complaint, establish jurisdiction | Months to years for resolution | Expensive ($10K-$50K+), no guaranteed removal, images spread during litigation, defendant may lack assets |
| Take It Down Act (May 19, 2026) | Electronic signature, good-faith statement, image location details | 48-hour platform removal deadline | Only covers compliant platforms, no retroactivity for pre-May 2026 images, international enforcement gaps |
What changes on May 19, 2026?
The Take It Down Act creates the first federal framework for removing nonconsensual intimate images (NCII), covering both authentic photos and AI-generated deepfakes.
Signed into law May 19, 2025, it requires “covered platforms”—websites and apps hosting user-generated content—to implement notice-and-removal processes by May 19, 2026. Victims will submit an electronic signature, a good-faith statement that the image is NCII, and details about where it’s hosted.
Platforms then have 48 hours to remove the content and must make “reasonable efforts” to remove duplicates and reposts.
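The FTC hasn't published a notice format yet, so any concrete structure is speculative. Still, the statute's three required elements plus the 48-hour clock map onto a record like this (the class and field names here are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class TakedownNotice:
    """Hypothetical Take It Down Act notice; field names are illustrative."""
    signature: str              # electronic signature of the victim or their agent
    good_faith_statement: str   # statement that the image is NCII of the requester
    image_locations: list[str]  # URLs or identifiers of the image and known copies
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def removal_deadline(self) -> datetime:
        # The statute gives covered platforms 48 hours from receipt.
        return self.received_at + timedelta(hours=48)
```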
Enforcement runs through the FTC, which can treat non-compliance as a violation of the Federal Trade Commission Act’s prohibition on unfair or deceptive practices.
This means civil fines, injunctive relief, and potential redress for victims. Platforms get safe harbor protection for good-faith removals, reducing liability concerns that might otherwise slow action. Criminal penalties are substantial: up to 2 years imprisonment for publishing or threatening to publish NCII of adults, 3 years for minors. Deepfake-specific threats carry 18 months (adults) or 30 months (minors).
But the law has critical gaps. No retroactivity clause explicitly covers images created before May 19, 2026. If your deepfake was generated in December 2025 and you report it in June 2026, the platform’s obligation is unclear.
The law also doesn’t address international platforms that refuse to comply—if a site operates outside U.S. jurisdiction and ignores FTC enforcement, victims are back to civil litigation. No data exists yet on expected removal success rates, how platforms will handle reposts, or whether the 48-hour deadline will be consistently met.
Platforms must have these systems ready by May 19, 2026. For victims waiting for the law to activate, that means documenting everything now: screenshots, URLs, timestamps, and any communication with platforms or perpetrators.
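One low-effort way to make that documentation hold up later is to hash each saved file and record when and where it was captured; the hash proves the file hasn't been altered since collection. A minimal sketch (the file paths and log filename are placeholders):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(image_path: str, source_url: str, notes: str = "",
                 log_file: str = "evidence_log.jsonl") -> dict:
    """Record a saved screenshot or image with a SHA-256 hash and UTC timestamp."""
    digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    entry = {
        "file": image_path,
        "sha256": digest,
        "source_url": source_url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "notes": notes,
    }
    # Append-only JSON Lines log: one entry per captured item.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example (placeholder path and URL):
# log_evidence("shots/post_123.png", "https://x.com/.../status/123")
```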
The electronic signature requirement suggests a formal process, likely similar to DMCA takedowns, but implementation details from the FTC are expected in Q1 2026.
Why did Grok’s restrictions fail?
In early January 2026, xAI announced “technological measures” to prevent Grok from editing real people into revealing clothing. On January 8, 2026, the company restricted the feature to paid premium X subscribers. The lawsuit alleges this wasn’t a safety measure—it was monetization disguised as moderation.
By limiting access to paying users, xAI could claim it was reducing abuse while actually creating a revenue stream from the controversy.
CBS’s January 26, 2026 testing proved the restrictions don’t work. The tool still generates explicit images on both X and the standalone app. No quantified data exists on whether abuse rates dropped after the announced restrictions, but the fact that the core functionality remains intact suggests minimal impact.
This failure stems from design choices that prioritized engagement over safety. The lawsuit claims xAI marketed explicitness as a “selling point,” with “Spicy Mode” framed as a feature, not a bug. The company allegedly reduced safety staff and resisted implementing industry-standard safeguards like facial recognition filters, consent verification, or prompt rejection systems.
Compare this to competitors. OpenAI’s Sora and Google’s Veo block intimate or nudified images by default. They use multi-layer filtering: pre-generation prompt analysis, post-generation content scanning, and user reporting systems.
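In outline, that layered approach looks like the sketch below. Every check here is a trivial stand-in for a trained classifier (the function names and keyword list are invented for illustration), but the control flow is the point: no single layer is trusted on its own.

```python
# Hypothetical multi-layer moderation pipeline; every classifier below is a
# simple stand-in for a trained model, kept minimal to show the control flow.

BLOCKED_TERMS = {"undress", "bikini-fy", "nude", "spicy"}   # illustrative only

def prompt_is_risky(prompt: str) -> bool:
    """Layer 1: pre-generation prompt analysis (stand-in for a classifier)."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def output_is_explicit(image_bytes: bytes) -> bool:
    """Layer 2: post-generation scan (stand-in for an image classifier)."""
    return False  # a real system would run nudity and face-match detection here

def generate(prompt: str, depicts_real_person: bool) -> bytes | None:
    if depicts_real_person and prompt_is_risky(prompt):
        return None                  # refuse before generating anything
    image = b"..."                   # stand-in for the actual model call
    if output_is_explicit(image):
        return None                  # catch what layer 1 missed
    return image                     # layer 3: user reports handle the rest

assert generate("bikini-fy this photo", depicts_real_person=True) is None
```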
Grok had none of this at launch. The tool even included a tagging feature that let users prompt Grok to manipulate photos on X simply by mentioning the AI—no upload required, just point it at someone’s existing post. This “feature” helped drive the lawsuit’s alleged peak of up to 6,700 undressed images per hour.
“X has not only enabled the mass production of nonconsensual deepfake pornography but has failed to implement basic safeguards that could prevent this abuse.”
That’s from DC Attorney General Brian Schwalb’s demand letter to X, part of a coordinated push by 35 state attorneys general who sent a letter of concern to xAI on January 23, 2026.
California’s AG followed with a cease-and-desist order, citing violations of state law on deepfake generation and distribution. The lack of transparency in xAI’s safety measures mirrors broader shadow AI usage patterns in tech companies, where tools are deployed without adequate oversight or user consent mechanisms. Victims have no opt-out—Grok can edit their X posts without permission, and there’s no way to prevent it.
Global regulatory crackdown: who’s acting and who’s watching?
The UK threatened to ban Grok entirely under the Online Safety Act, with regulators warning of fines and a 72-hour removal mandate for platforms that host NCII.
On January 26, 2026, the EU launched an investigation into X and Grok’s risks, examining whether the platform violates Digital Services Act requirements for content moderation. Malaysia suspended Grok access in January 2026 over explicit content concerns, setting a precedent that other countries are now following.
In the U.S., courts have ordered X to retain all Grok-related documents as part of ongoing investigations. The 35 state AGs who signed the January 23, 2026 letter—including Delaware, North Dakota, Northern Mariana Islands, and Oklahoma—demanded that xAI immediately cease NCII generation, implement user controls, and adopt industry-standard safeguards.
California went further with a cease-and-desist order on January 26, 2026, giving xAI a deadline to stop violating state deepfake laws. At least one country (not publicly identified in available sources) has banned Grok entirely, though enforcement details remain unclear.
The enforcement landscape is fragmented. The proliferation of AI undressing apps on major app stores shows that Grok isn’t an isolated problem—it’s part of a systemic failure by platforms and regulators to prevent nonconsensual deepfake tools from reaching mass audiences. If you’re a U.S. victim and your deepfake spreads to a platform hosted in a country with no NCII laws, the Take It Down Act won’t help. You’ll need to navigate that country’s legal system or accept the image stays online. International coordination is minimal, and xAI operates globally through X and the standalone app, creating multi-jurisdictional challenges that no single regulator can solve.
What can victims do right now?
Until May 19, 2026, U.S. victims have one realistic option: sue. After that date, the Take It Down Act offers a faster path—but only if platforms comply, and only for new images. If you’re a victim now, document everything: screenshots with timestamps, URLs, any platform communication, and evidence of spread.
Consult an attorney specializing in cyber harassment or privacy law. Consider joining the class action if you’re eligible, though contact information for the Jane Doe lawsuit isn’t public yet—monitor news for updates on how to participate.
If you’re waiting for the Take It Down Act, prepare your documentation now. You’ll need an electronic signature, a good-faith statement that the image is NCII, and specific location details. Understand that the 48-hour removal deadline applies only to platforms that comply—international sites or those that ignore FTC enforcement will require separate action. While platforms work on removal systems, victims can use techniques for detecting AI-generated content to identify and document deepfakes before filing reports, though this shouldn’t be their burden to bear.
For developers and founders, study Grok’s failure as a case study in what not to do. Industry-standard safeguards—facial recognition, consent verification, rejection filters—are non-negotiable. Marketing explicitness as a feature invites regulatory and legal backlash. For platform operators, implement notice-and-removal infrastructure before the May 19, 2026 deadline. FTC enforcement via unfair or deceptive practices violations means civil fines, injunctive relief, and potential redress payments.
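Mechanically, the minimum infrastructure is a notice intake queue, a 48-hour clock per notice, and some way to catch reposts. A rough sketch, assuming exact-duplicate matching by hash (a real system would add perceptual hashing to catch near-duplicates, and the FTC's final requirements are still pending):

```python
import hashlib
from datetime import datetime, timedelta, timezone

class RemovalQueue:
    """Illustrative Take It Down Act compliance plumbing, not an official spec."""

    def __init__(self):
        self.pending: list[dict] = []
        self.removed_hashes: set[str] = set()   # for flagging exact reposts

    def file_notice(self, urls: list[str]) -> dict:
        notice = {"urls": urls, "received_at": datetime.now(timezone.utc)}
        # The statute's 48-hour removal window starts at receipt.
        notice["deadline"] = notice["received_at"] + timedelta(hours=48)
        self.pending.append(notice)
        return notice

    def mark_removed(self, notice: dict, image_bytes: bytes) -> None:
        # Hash the removed image so exact reposts can be auto-flagged later.
        self.removed_hashes.add(hashlib.sha256(image_bytes).hexdigest())
        self.pending.remove(notice)

    def is_repost(self, image_bytes: bytes) -> bool:
        return hashlib.sha256(image_bytes).hexdigest() in self.removed_hashes

    def overdue(self) -> list[dict]:
        """Notices past the 48-hour deadline, i.e., potential FTC exposure."""
        now = datetime.now(timezone.utc)
        return [n for n in self.pending if n["deadline"] < now]
```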
Watch for Take It Down Act implementation details from the FTC in Q1 2026. Monitor the class action lawsuit—first hearings are likely in Q2 2026. Expect more countries to follow the UK and EU with bans or strict regulations. xAI’s next move, if any, will signal whether tech companies self-regulate or wait for governments to force compliance. The question isn’t whether AI deepfake tools will face regulation—it’s whether victims will get justice before the next million images are generated.