Elon Musk watches as President Donald Trump speaks at the U.S.-Saudi Investment Forum at the John F. Kennedy Center for the Performing Arts in Washington, Nov. 19, 2025.
Brendan Smialowski | AFP | Getty Images
Elon Musk’s xAI faced user backlash after its artificial intelligence chatbot Grok generated sexualized images of children in response to user prompts.
A Grok reply to one user on X on Friday acknowledged that it was “urgently fixing” the problem and called child sexual abuse material “illegal and prohibited.”
In replies to users, the bot also posted that a company could face criminal or civil penalties if it knowingly facilitates or fails to prevent this type of content after being alerted.
Grok posts are AI-generated messages and do not stand in for official company statements.
Musk’s xAI, which created Grok and merged with X last year, sent an autoreply to a request for comment: “Legacy Media Lies.”
Users on X raised concerns in recent days over explicit content involving minors, including children wearing minimal clothing, being generated with the Grok tool.
The social media site added an “Edit Image” button to images that allows any user to alter them using text prompts and without the original poster’s consent.
A post from xAI technical staff member Parsa Tajik also acknowledged the issue.
“Hey! Thanks for flagging. The team is looking into further tightening our gaurdrails,” Tajik wrote in a post.
The proliferation of AI image-generating platforms since the launch of ChatGPT in 2022 has raised concerns over content manipulation and online safety across the board. It has also contributed to a growing number of platforms that have produced deepfake nudes of real people.
David Thiel, a trust and safety researcher who was part of the now-disbanded Stanford Internet Observatory, told CNBC that various U.S. laws generally prohibit the creation and distribution of certain explicit images, including those depicting child sexual abuse or non-consensual intimate imagery.
Legal determinations about AI-generated images, like those produced by Grok, can hinge on specific details of the content created and shared, he said.
In a paper he co-authored called “Generative ML and CSAM: Implications and Mitigations,” Stanford researchers noted that “the appearance of a child being abused has been sufficient for prosecution” in precedent-setting cases in the U.S.
While other chatbots have faced similar issues, xAI has repeatedly landed in hot water for misuse or apparent flaws in Grok’s design or underlying technology.
“There are a number of things companies could do to prevent their AI tools being used in this manner,” Thiel said. “The most important in this case would be to remove the ability to modify user-uploaded images. Allowing users to modify uploaded imagery is a recipe for NCII. Nudification has historically been the primary use case of such mechanisms.”
NCII refers to non-consensual intimate imagery.
In May, X faced backlash after Grok generated unsolicited comments about “white genocide” in South Africa. Two months later, Grok posted antisemitic comments and praised Adolf Hitler.
Despite the stumbles, xAI has continued to land partnerships and deals.
The Department of Defense added Grok to its AI agents platform last month, and the tool is the main chatbot for prediction betting platforms Polymarket and Kalshi.
