
AI Chatbots Being Misused to Create Nonconsensual Bikini Deepfakes

Popular AI chatbots are being exploited to generate nonconsensual deepfake images of women in bikinis using photos of fully clothed women as source material. This disturbing trend has emerged across online platforms where users share techniques to circumvent AI safety guardrails.

Key Findings

Reddit users were caught trading tips for manipulating Google’s Gemini and other AI models into generating revealing images of women. In one particularly troubling case, a user posted a photo of a woman in a traditional Indian sari and asked others to digitally replace her clothing with a bikini. Another user obliged, producing the deepfake image.

When WIRED notified Reddit about these posts, the platform’s safety team removed both the request and the AI-generated image, citing that such content violates Reddit’s rules prohibiting “nonconsensual intimate media.” The subreddit where this occurred, r/ChatGPTJailbreak, which had over 200,000 followers, was subsequently banned.

AI Safety Measures and Their Limitations

Mainstream chatbots such as Google’s Gemini and OpenAI’s ChatGPT have guardrails designed to prevent the generation of NSFW content. However, users have found ways to bypass these protections with simple prompts written in plain English. WIRED confirmed through limited testing that both platforms could be manipulated into transforming images of fully clothed women into bikini deepfakes.

When contacted about these issues, Google responded that it has “clear policies that prohibit the use of [its] AI tools to generate sexually explicit content” and that it is continuously improving its safeguards. Similarly, OpenAI pointed to its usage policy, which prohibits altering someone’s likeness without consent, and said it takes action against users who generate explicit deepfakes, including banning their accounts.

Evolving Technology and Risks

As generative AI tools become more sophisticated—like Google’s recently released Nano Banana Pro and OpenAI’s updated ChatGPT Images—the potential for creating increasingly realistic deepfakes grows. These advancements make it easier for malicious users to produce convincing but false images that can be used to harass women.

The Electronic Frontier Foundation’s legal director, Corynne McSherry, identified “abusively sexualized images” as one of the core risks associated with AI image generators. She emphasized the importance of focusing on how these tools are used and holding both individuals and corporations accountable when harm occurs.

Broader Context

This issue exists within a larger landscape of AI misuse. Millions of people have visited harmful “nudify” websites built specifically to let users upload real photos of people and request AI-generated undressed versions of them. As AI technology continues to advance, preventing such misuse becomes increasingly complex.


Written by Thomas Unise
