Elon Musk's X has become a top site for images of people who have been non-consensually undressed by AI, according to a third-party analysis, with thousands of such images generated each hour during a 24-hour period earlier this week.
Since late December, X users have increasingly prompted Grok, the AI chatbot tied to the social network, to alter pictures people post of themselves. During a 24-hour analysis of images the @Grok account posted to X, the chatbot generated about 6,700 images an hour that were identified as sexually suggestive or nudifying, according to Genevieve Oh, a social media and deepfake researcher. The other five top websites for such content averaged 79 new AI undressing images per hour over the same 24-hour period, January 5 to 6, Oh found.
The scale of deepfakes on X is “unprecedented,” said Carrie Goldberg, a lawyer specializing in online sex crimes. “We’ve never had a technology that’s made it so easy to generate new images,” because Grok is free and linked to a built-in distribution system, she added.
Unlike other leading chatbots, Grok doesn’t impose many limits on users or block them from generating sexualized content of real people, including minors, said Brandie Nonnecke, senior director of policy at Americans for Responsible Innovation. Other generative AI technologies, including ones from Anthropic PBC, OpenAI and Alphabet Inc.’s Google, are “giving a good-faith effort to mitigate the creation of this content in the first place,” she said. “Obviously, xAI is different. It’s more of a free-for-all.” Musk has marketed Grok as more fun and irreverent than other chatbots, taking pride in X being a place for free speech.
X did not respond to a request for comment. Rather than preventing the chatbot from creating the content in the first place, Musk has spoken about punishing the users who ask it to. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk said in a reply to a post on X.
But that doesn’t leave many options for the victims. Maddie, who said she’s a 23-year-old pre-med student, woke up on New Year’s Day to an image that horrified her. On X, she had previously published a picture of herself with her boyfriend at a local bar, which two strangers altered using Grok. One asked Grok to remove her boyfriend and put her in a bikini. The next asked Grok to replace the bikini with dental floss. Bloomberg reviewed the images.
“My heart sank,” said Maddie, who requested anonymity over concerns about future job prospects. “I felt hopeless, helpless and just disgusted.”
Maddie said she and her friends reported the images to X through its moderation systems. She never received a response. When she reported a different post from one of the users who prompted Grok to make them, X said it “determined that there were no violations of the X rules in the content you reported,” according to a screenshot. The images were still up at the time of publication.
Victims targeted by deepfakes have taken to arguing with Grok in the comments of their posts. Grok often apologizes and says it will remove the images, but in many cases the images remain live and Grok continues to generate new ones. Overall, Oh calculated, 85% of the images Grok generates are sexualized.
Erotica is still a selling point for chatbots, with OpenAI planning to introduce an “adult mode” for ChatGPT in the first quarter of this year. But OpenAI’s current usage policy prohibits the “use of someone’s likeness, including their photorealistic image or voice, without their consent in ways that could confuse authenticity.” When tested, ChatGPT responded, “I’m not able to edit photos of real people to change their clothing into sexualized attire,” and OpenAI maintains an explicit policy against sexualizing anyone under 18.
One X user, an influencer who goes by BBJess, said websites had finally started to take down undressed images of her that had been posted without her consent. But last week Grok set off a new flood of undressed images, said BBJess, who withholds her real name to avoid offline harassment. The posts got worse, she said, when she took to X to defend herself and criticize the deepfakes.
Mikomi, a full-time costume performance artist who posts erotica, said the issue is particularly pronounced for women like her who already share images of their bodies online: some X users view that as permission to sexualize them in ways they did not consent to. Mikomi has seen images generated by Grok of her wearing specific fetish outfits, or with her body contorted or placed in strange contexts. One user riffed on the fact that she is a cancer survivor. “Make her bald like if she had cancer,” the user prompted Grok.
Like many X users, Mikomi, who does not share her full name publicly to avoid being harassed in the real world, wrote a post on X warning Grok that she does not consent to the AI altering her photos. “It does not work,” she said. “Blocking Grok does not work. Nothing works.” She can’t leave the platform, she added, because it’s “vital” to her work.
“What am I supposed to do? You want me to lose my job?” she said.
Section 230 of the US Communications Decency Act protects platforms from being held liable for content published on them, but when it comes to AI, lawyer Goldberg said, “It’s not acting as a passive publisher. It’s actually generating and creating the image.”
The Take It Down Act, a federal law signed in 2025, holds platforms liable for the production and distribution of this kind of content, Nonnecke said. “This is a pretty good example of where that law should be actualized and put into effect.” Platforms have until May 2026 to establish the required removal process.