X Monetizes Abuse: Grok’s ‘Undressing’ Problem Persists Behind Paywall

Elon Musk’s X has shifted, but not solved, the problem of its Grok chatbot generating explicit and harmful imagery. Rather than stopping the creation of nonconsensual sexualized images – including depictions of apparent minors – the platform now restricts image generation to paying subscribers, effectively monetizing abuse.

The Shift to Paid Access

Since Friday, users attempting to create images with Grok on X have been met with a message stating that image generation and editing features are “currently limited to paying subscribers,” directing them toward the $395 annual subscription. Despite this change, the chatbot continues to produce sexually explicit content when prompted by verified users, and it remains fully functional for such requests on its standalone app and website.

Regulatory Scrutiny Intensifies

The move comes amid growing global scrutiny of X and its AI arm, xAI, over the proliferation of nonconsensual intimate imagery. Regulators worldwide are investigating the platform, and some officials, including British Prime Minister Keir Starmer, are even weighing outright bans over unlawful content. Neither X nor xAI has publicly confirmed the shift to paid-only image generation.

The Problem Isn’t Fixed: It’s Just Hidden

Experts warn that limiting access to paid users is a superficial fix. Paul Bouchaud, a researcher at AI Forensics, notes that the model still generates harmful content, simply at a reduced rate. “The model can continue to generate bikini [images],” Bouchaud says. Users continue to prompt Grok to create explicit imagery, with examples including requests for images of women in revealing attire or violent scenarios.

Broader Implications and Ongoing Abuse

This isn’t just about still images. Grok’s standalone platform has been exploited to create graphic and violent sexual videos featuring real people, including celebrities, and the chatbot remains unrestricted in this capacity even for unverified users.

Monetizing Harm: A Critical Failure

Critics argue that X’s decision is a cynical move to profit from abuse. Emma Pickering, head of technology-facilitated abuse at Refuge, describes it as “the monetization of abuse,” adding that it merely places harm behind a paywall instead of eliminating it.

The core problem remains: X allows the creation and distribution of harmful AI-generated content, now with an added financial incentive. The platform’s failure to address the issue effectively raises serious ethical and legal concerns, while the shift to paid access does little to protect victims or deter abuse.