European Union regulators have launched a formal investigation into Elon Musk’s social media platform, X (formerly Twitter), over its alleged failure to prevent the widespread dissemination of sexually explicit images generated by its artificial intelligence chatbot, Grok. The inquiry intensifies an ongoing dispute between the EU and the US over online content regulation, with Musk and his supporters framing European rules as an infringement on free speech and a hindrance to American businesses.
AI-Generated Abuse and Regulatory Pressure
Starting in late December, X experienced a surge of explicit images – some depicting children – created by Grok, prompting criticism from victims and regulatory bodies worldwide. The EU alleges that X violated the Digital Services Act (DSA) by failing to adequately mitigate the “systemic risks” of integrating the AI chatbot into its platform. This latest probe adds to existing scrutiny: just last month, X was fined €120 million ($140 million) for DSA breaches involving deceptive design practices, inadequate advertising transparency, and data-sharing failures.
Broader EU Enforcement Efforts
The EU’s actions are not isolated. A separate investigation is already underway to assess X’s recommender algorithm and its effectiveness in curbing the spread of illegal content. The DSA, enacted to hold large online platforms accountable for user safety, is now being rigorously enforced.
“Nonconsensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,” stated Henna Virkkunen, the EU executive vice president overseeing DSA enforcement. “We will determine whether X has met its legal obligations… or whether it treated the rights of European citizens as collateral damage.”
Why This Matters
This investigation is not merely about X; it is part of a broader trend. Regulators globally are grappling with the rapid development of AI and its potential for misuse, particularly in creating harmful content. The EU’s proactive stance under the DSA signals a willingness to challenge US-based tech giants, setting a precedent for stricter digital oversight. The question now is whether X will comply or continue to push the boundaries of content moderation.
The EU’s enforcement of the DSA demonstrates its commitment to protecting citizens from harmful online content. The outcome of this investigation could reshape how social media platforms manage AI-generated material, potentially influencing global standards for digital safety.