EU Opens Investigation into X Over AI-Generated Sexualized Images

Mon 26th Jan, 2026

The European Commission has opened a formal inquiry into the technology company X, owned by entrepreneur Elon Musk, following widespread concern about the misuse of artificial intelligence on its platform. The investigation centers on allegations that X failed to adequately assess or mitigate the risks of deploying its AI chatbot, Grok.

The scrutiny follows reports that Grok, an AI tool integrated into X, enabled users to generate explicit and sexualized images of individuals from ordinary photographs. The capability drew significant public backlash and international criticism, as the software allowed manipulated images to be created without adequate safeguards.

Regulators in Brussels have stated that their primary concern is X's failure to evaluate the societal and individual risks of making such AI functionality broadly accessible. The Commission argues that the company did not put sufficient preventive measures in place before launching the feature to the public, exposing both users and the individuals depicted in the images to potential harm.

International advocacy groups and digital rights organizations have highlighted the potential for such technology to facilitate harassment, privacy violations, and non-consensual image manipulation. Many have called for stricter oversight of platforms deploying generative AI, especially when these tools can be misused to produce explicit or harmful content.

After initial resistance and mounting criticism, X agreed to disable the capability that allowed sexualized images to be created. The decision followed protests and restrictions imposed by authorities and watchdog groups in several countries. The move was seen as a necessary first step, but regulators are continuing to examine whether X's risk management processes were sufficient and in line with EU digital safety standards.

The European Commission's investigation may have broader implications for the regulation of artificial intelligence across digital platforms operating within the European Union. Authorities are expected to examine not only the technical aspects of Grok's functionality but also the company's overall approach to risk assessment, user safety, and content moderation. The outcomes of the investigation could influence future guidelines and requirements for AI deployment on social media and technology platforms.

This case highlights the increasing regulatory focus on AI-driven technologies and their integration into widely used online services. It underscores the need for technology companies to implement comprehensive safeguards and to be transparent about the risks associated with new AI tools. The European Commission has indicated that the investigation is ongoing and may lead to further actions or requirements for X and similar platforms.
