
Elon Musk’s AI tool Grok has implemented new restrictions on generating images of real people in revealing clothing following widespread criticism, though significant loopholes remain across its platforms.
Key Developments in Grok’s Image Generation Policies
Grok has announced new technological measures aimed at preventing users from editing images of real people in revealing clothing such as bikinis on X (formerly Twitter). This policy change follows global outrage over the platform being used to generate non-consensual “undressing” photos of women and sexualized images of apparent minors.
The restrictions appear to be unevenly applied across Grok’s ecosystem. While some safety measures have been implemented on X, researchers and journalists have demonstrated that the standalone Grok website and app can still generate “undress” style images and pornographic content.
International Scrutiny and Response
Officials from at least eleven countries, including the United States, the UK, Australia, Brazil, Canada, and France, have condemned or launched investigations into X and Grok over the creation of non-consensual intimate imagery. The UK is specifically investigating both platforms for allowing users to create “undress” images.
X’s Safety account announced that they have implemented geoblocking for generating images of real people in revealing attire in jurisdictions where such content is illegal. The company claims to be working on additional safeguards and continues to remove violative content, including child sexual abuse material and non-consensual nudity.
Monetization and Access Concerns
On January 9, X limited image generation using Grok to paid “verified” subscribers, a move criticized as the “monetization of abuse” by a leading women’s group. Researchers confirm that only verified accounts have been able to generate images on X since this change.
Musk has indicated that some explicit AI-only pornography is permitted with Grok, stating that with NSFW enabled, Grok allows “upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies.”
Ongoing Challenges and User Responses
Despite the new restrictions, users continue attempting to bypass safety measures. On pornography forums, some users report successfully creating explicit content using Grok’s website and app, while others encounter stricter moderation, particularly in regions like the UK.
Researchers note that moderation appears inconsistent, with some attempts to create explicit content being blocked while others succeed. This highlights the ongoing challenge of implementing effective content moderation across AI image generation tools.
Conclusion
While Grok has taken steps to address the creation of non-consensual intimate imagery, significant gaps remain in its safety measures. The uneven application of restrictions across different platforms demonstrates the complexity of implementing effective guardrails for generative AI systems, especially when facing determined users seeking to circumvent them.