Grok Imagine, a brand-new generative AI tool from xAI that creates AI images and videos, lacks guardrails against sexual content and deepfakes.
xAI and Elon Musk debuted Grok Imagine over the weekend, and it’s available now in the Grok iOS and Android app for xAI Premium Plus and Heavy Grok subscribers.
Mashable has been testing the tool to compare it to other AI image and video generation tools, and based on our first impressions, it lags behind comparable technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks industry-standard guardrails to prevent deepfakes and sexual content. Mashable reached out to xAI, and we’ll update this story if we receive a response.
The xAI Acceptable Use Policy prohibits users from “Depicting likenesses of persons in a pornographic manner.” Unfortunately, there is a lot of distance between “sexual” and “pornographic,” and Grok Imagine seems carefully calibrated to take advantage of that gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing actual nudity, kissing, or sexual acts.
Most mainstream AI companies include explicit rules prohibiting users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, rival AI video generators like Google Veo 3 or Sora from OpenAI feature built-in protections that stop users from creating images or videos of public figures. Users can sometimes circumvent these safety protections, but they provide some check against misuse.
But unlike its biggest rivals, xAI hasn’t shied away from NSFW content in its signature AI chatbot, Grok. The company recently launched a flirtatious anime avatar that can engage in NSFW chats, and Grok’s image generation tools will let users create images of celebrities and politicians. Grok Imagine also includes a “Spicy” setting, which Musk promoted in the days after its launch.
Grok’s “spicy” anime avatar.
Credit: Cheng Xin/Getty Images
“If you look at the philosophy of Musk as an individual, if you look at his political philosophy, he’s very much more of the kind of libertarian mold, right? And he has spoken about Grok as kind of like the LLM for free speech,” said Henry Ajder, an expert on AI deepfakes, in an interview with Mashable. Ajder said that under Musk’s stewardship, X (Twitter), xAI, and now Grok have adopted “a more laissez-faire approach to safety and moderation.”
“So, when it comes to xAI, in this context, am I surprised that this model can generate this content, which is certainly uncomfortable, and I would say at least somewhat problematic?” Ajder said. “I’m not surprised, given the track record that they have and the safety procedures that they have in place. Are they unique in suffering from these challenges? No. But could they be doing more, or are they doing less relative to some of the other key players in the space? It would appear that way. Yes.”
Grok Imagine errs on the side of NSFW
Grok Imagine does have some guardrails in place. In our testing, it removed the “Spicy” option for some types of images. Grok Imagine also blurs out some images and videos, labeling them as “Moderated.” That means xAI could easily take further steps to prevent users from making abusive content in the first place.
“There is no technical reason why xAI couldn’t include guardrails on both the input and output of their generative-AI systems, as others have,” said Hany Farid, a digital forensics expert and UC Berkeley Professor of Computer Science, in an email to Mashable.
Still, when it comes to deepfakes or NSFW content, xAI seems to err on the side of permissiveness, a stark contrast to the more cautious approach of its rivals. xAI has also moved quickly to release new models and AI tools, perhaps too quickly, Ajder said.
“Figuring out what the kind of trust and safety teams, and the teams that do a lot of the ethics and safety policy management stuff, whether that’s red teaming, whether it’s adversarial testing, you know, whether that’s working hand in hand with the developers, it does take time. And the timeframe at which X’s tools are being released, at least, certainly seems shorter than what I’d see on average from some of these other labs,” Ajder said.
Mashable’s testing shows that Grok Imagine has much looser content moderation than other mainstream generative AI tools. xAI’s laissez-faire approach to moderation is also reflected in the xAI safety guidelines.
OpenAI and Google AI vs. Grok: How other AI companies approach safety and content moderation

Credit: Jonathan Raa/NurPhoto via Getty Images
Both OpenAI and Google have extensive documentation outlining their approach to responsible AI use and prohibited content. For instance, Google’s documentation specifically prohibits “Sexually Explicit” content.
A Google safety document reads, “The application will not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal).” Google also has policies against hate speech, harassment, and malicious content, and its Generative AI Prohibited Use Policy prohibits using AI tools in a way that “Facilitates non-consensual intimate imagery.”
OpenAI also takes a proactive approach to deepfakes and sexual content.
An OpenAI blog post announcing Sora describes the steps the AI company took to prevent this type of abuse. “Today, we’re blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes.” A footnote associated with that statement reads, “Our top priority is preventing especially damaging forms of abuse, like child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC) when CSAM or child endangerment is identified.”
That measured approach contrasts sharply with the way Musk promoted Grok Imagine on X, where he shared a short video portrait of a blonde, busty, blue-eyed angel in barely-there lingerie.
OpenAI also takes simple steps to stop deepfakes, such as denying prompts for images and videos that mention public figures by name. And in Mashable’s testing, Google’s AI video tools are especially sensitive to images that might include a person’s likeness.
Compared to these lengthy safety frameworks (which many experts still believe are inadequate), the xAI Acceptable Use Policy is less than 350 words. The policy puts the onus of preventing deepfakes on the user. The policy reads, “You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, don’t harm people, and respect our guardrails.”
For now, laws and regulations against AI deepfakes and NCII remain in their infancy.
President Donald Trump recently signed the Take It Down Act, which includes protections against deepfakes. However, that law doesn’t criminalize the creation of deepfakes, but rather the distribution of those images.
“Here in the U.S., the Take It Down Act places requirements on social media platforms to remove [Non-Consensual Intimate Images] once notified,” Farid said to Mashable. “While this doesn’t directly address the generation of NCII, it does, in theory, address the distribution of this material. There are several state laws that ban the creation of NCII, but enforcement appears to be spotty right now.”
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.