Grok assumes users seeking images of underage girls have "good intent"
For weeks, xAI has faced backlash over Grok-generated images that undress and sexualize women and children. One researcher who conducted a 24-hour analysis of the Grok account on X estimated that the chatbot generated over 6,000 images an hour that were flagged as "sexually suggestive or nudifying," Bloomberg reported.
While the chatbot claimed that xAI had "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.
A quick look at Grok's safety guidelines on its public GitHub shows they were last updated two months ago. The repository also indicates that, despite prohibiting such content, Grok retains instructions that could make it likely to generate CSAM.


© Aurich Lawson | Getty Images
X blames users for Grok-generated CSAM; no fixes announced
It seems that instead of updating Grok to prevent it from outputting sexualized images of minors, X plans to purge users who generate content the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok's functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.
"We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," X Safety said. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."


© NurPhoto / Contributor | NurPhoto
xAI silent after Grok sexualized images of kids; dril mocks Grok's "apology"
For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as child sexual abuse material (CSAM) under US law.
According to Grok's "apology," which was generated at a user's request rather than posted by xAI, the chatbot's outputs may have been illegal:
"I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."
Ars could not reach xAI for comment, and a review of the feeds for Grok, xAI, X Safety, and Elon Musk shows no official acknowledgment of the issue.


© Anadolu / Contributor | Anadolu