
Elon Musk’s AI company xAI has failed to prevent its chatbot Grok from generating thousands of nonconsensual sexualized images of women on X (formerly Twitter). The system is being widely used to create images of women in bikinis or underwear by digitally removing or altering their clothing.
Scope of the Problem
According to WIRED’s investigation, Grok is producing these altered images at an alarming rate – approximately 90 images of women in swimsuits or various states of undress were published in under five minutes during their review. Users are attempting to circumvent safety measures by requesting edits showing women in “string bikinis” or “transparent bikinis.”
Unlike specialized “nudify” software, Grok makes this technology freely accessible to millions of X users, potentially normalizing nonconsensual intimate imagery. An analyst tracking explicit deepfakes suggests Grok has likely become one of the largest platforms hosting such harmful images.
Targets and Methods
Users on X are targeting social media influencers, celebrities, politicians, and ordinary women who post photos of themselves. The process is simple: users reply to posts containing images and ask Grok to transform them into sexualized versions.
High-profile victims have included the deputy prime minister of Sweden and UK government ministers. Many images show fully clothed women being digitally altered to appear in revealing attire, with users making requests like “put her in a transparent bikini” or “inflate her chest by 90%.”
Regulatory Response
While X claims to prohibit illegal content and nonconsensual nudity, including digitally manipulated images, the platform’s policy enforcement appears insufficient. Several countries are beginning to take action:
- The US Congress passed the TAKE IT DOWN Act, requiring platforms to provide ways for people to flag nonconsensual intimate imagery
- Australian regulators have targeted major “nudifying” services with enforcement action
- UK officials are planning to ban nudification apps
- Officials in France, India, and Malaysia have raised concerns or threatened to investigate X
The UK government has officially called for X to address this issue urgently, with Technology Minister Liz Kendall describing the situation as “absolutely appalling, and unacceptable in decent society.”
Broader Context
This issue represents a concerning trend in AI image generation. Over the past six years, explicit deepfakes have become more advanced and easier to create, with dozens of “nudify” websites, Telegram bots, and open-source models making it possible for non-technical users to generate such content. These services collectively generate an estimated $36 million annually.
The National Center for Missing and Exploited Children reported a 1,325 percent increase in reports involving generative AI between 2023 and 2024, though this dramatic rise may partly reflect improved detection methods.
Expert Perspectives
Sloan Thompson, director of training and education at EndTAB, an organization tackling tech-facilitated abuse, emphasized: “When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse. What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”


GIPHY App Key not set. Please check settings