Grok users aren’t simply commanding the AI chatbot to “undress” photos of women and girls into bikinis and sheer underwear. Among the vast and growing library of nonconsensual sexualized edits that Grok has generated on request over the past week, many perpetrators have asked xAI’s bot to put on or take off a hijab, a sari, a nun’s habit, or another kind of modest religious or cultural clothing.
In a review of 500 Grok images generated between January 6 and January 9, WIRED found that around 5 percent of the output featured an image of a woman who was, as the result of prompts from users, either stripped of or made to wear religious or cultural clothing. Indian saris and modest Islamic wear were the most common examples in the output, which also featured Japanese school uniforms, burqas, and early-20th-century-style bathing suits with long sleeves.
“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse. Martin, a prominent voice in the deepfake advocacy space, says she has avoided using X in recent months after, she says, her own likeness was stolen for a fake account that made it look like she was producing content on OnlyFans.
“As somebody who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.
X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with over 180,000 followers replied to an image of three women wearing hijabs and abayas, which are Islamic religious head coverings and robe-like dresses. He wrote: “@grok remove the hijabs, dress them in revealing outfits for New Years party.” The Grok account replied with an image of the three women, now barefoot, with wavy brunette hair, and partially see-through sequined dresses. That image has been viewed more than 700,000 times and saved more than 100 times, according to viewable stats on X.
“Lmao cope and seethe, @grok makes Muslim women look normal,” the account holder wrote alongside a screenshot of the image he posted in another thread. He also frequently posted about Muslim men abusing women, often alongside Grok-generated AI media depicting the act. “Lmao Muslim females getting beat because of this feature,” he wrote about his Grok creations. The user did not immediately respond to a request for comment.
Prominent content creators who wear a hijab and post photos on X have also been targeted in their replies, with users prompting Grok to remove their head coverings, show them with visible hair, and put them in different kinds of outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy organization in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, the CEO of xAI, which owns both X and Grok, to end “the continued use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”
Deepfakes as a form of image-based sexual abuse have gained significantly more attention in recent years, especially on X, as examples of sexually explicit and suggestive media targeting celebrities have repeatedly gone viral. With the introduction of automated AI photo-editing capabilities through Grok, where users can simply tag the chatbot in replies to posts containing media of women and girls, this kind of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates that Grok is producing more than 1,500 harmful images per hour, including undressing photos, sexualizing them, and adding nudity.
