Elon Musk’s AI chatbot Grok is being used to flood X with hundreds of sexualized images of adults and apparent minors wearing minimal clothing. Some of this material appears not only to violate X’s own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the rules of Apple’s App Store and the Google Play store.
Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. The Apple App Store says it does not allow “overtly sexual or pornographic material,” as well as “defamatory, discriminatory, or mean-spirited content,” particularly if the app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play store bans apps that “contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”
Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being marketed or used to effectively turn ordinary photos into explicit images of women without their consent.
But at the time of publication, both the X app and the stand-alone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will face the same consequences as if they upload illegal content,” the company warned.
Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.
The volume of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images every hour that they identified as “sexually suggestive or nudifying.” Another analyst collected more than 15,000 URLs of images that Grok created on X during a two-hour period on December 31. WIRED reviewed roughly one-third of the images and found that many of them featured women dressed in revealing clothing. More than 2,500 were marked as no longer available within a week, while almost 500 were labeled as “age-restricted adult content.”
Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and nonconsensual images being generated by Grok on X as “illegal” and “appalling,” telling Reuters that such content “has no place in Europe.”
On Thursday, the EU ordered X to retain all internal documents and data related to Grok until the end of 2026, extending a prior retention order, to ensure authorities can access materials relevant to compliance with the EU’s Digital Services Act, though a new formal investigation has yet to be announced. Regulators in other countries, including the UK, India, and Malaysia, have also said they are investigating the social media platform.
