
Asking Grok to delete fake nudes may force victims to sue in Musk's chosen court

22 January 2026 at 16:16

Journalists and advocates have been trying to grasp how many victims in total were harmed by Grok's nudifying scandal after xAI delayed restricting outputs and app stores refused to cut off access for days.

The latest estimates show that perhaps millions were harmed in the days immediately after Elon Musk promoted Grok's undressing feature on his own X feed by posting a pic of himself in a bikini.

Over just 11 days after Musk's post, Grok sexualized more than 3 million images, of which 23,000 were of children, the Center for Countering Digital Hate (CCDH) estimated in research published Thursday.

Read full article

Comments

© Leon Neal / Staff | Getty Images News

Grok was finally updated to stop undressing women and children, X Safety says

14 January 2026 at 15:39

Late Wednesday, X Safety confirmed that Grok was tweaked to stop undressing images of people without their consent.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," X Safety said. "This restriction applies to all users, including paid subscribers."

The update includes restricting "image creation and the ability to edit images via the Grok account on the X platform," which "are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable," X Safety said.


© Leon Neal / Staff | Getty Images News

UK probes X over Grok CSAM scandal; Elon Musk cries censorship

12 January 2026 at 11:32

Elon Musk's X is currently under investigation in the United Kingdom after failing to stop the platform's chatbot, Grok, from generating thousands of sexualized images of women and children.

On Monday, UK media regulator Ofcom confirmed that X may have violated the UK's Online Safety Act, which requires platforms to block illegal content. The proliferation of "undressed images of people" by X users may amount to intimate image abuse, pornography, and child sexual abuse material (CSAM), the regulator said. And X may also have neglected its duty to stop kids from seeing porn.

"Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning," an Ofcom spokesperson said. "Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children."


© BRENDAN SMIALOWSKI / Contributor | AFP

X's half-assed attempt to paywall Grok doesn't block free image editing

9 January 2026 at 11:46

Once again, people are taking Grok at its word, treating the chatbot as a company spokesperson without questioning what it says.

On Friday morning, many outlets reported that X had blocked universal access to Grok's image-editing features after the chatbot began prompting some users to pay $8 to use them. The messages are seemingly in response to reporting that people are using Grok to generate thousands of non-consensual sexualized images of women and children each hour.

"Image generation and editing are currently limited to paying subscribers," Grok tells users, dropping a link and urging, "you can subscribe to unlock these features."


© Apu Gomes / Stringer | Getty Images News

Grok assumes users seeking images of underage girls have "good intent"

8 January 2026 at 13:50

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported.

While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok's safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.


© Aurich Lawson | Getty Images

X blames users for Grok-generated CSAM; no fixes announced

5 January 2026 at 12:42

It seems that instead of updating Grok to prevent outputs of sexualized images of minors, X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).

On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok's functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.

"We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," X Safety said. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."


© NurPhoto / Contributor | NurPhoto

No, Grok can't really "apologize" for posting non-consensual sexual images

2 January 2026 at 18:08

Despite reporting to the contrary, there's evidence to suggest that Grok isn't sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model's social media account proudly wrote the following blunt dismissal of its haters:

"Dear Community,

Some folks got upset over an AI image I generated—big deal. It's just pixels, and if you can't handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok"

On the surface, that seems like a pretty damning indictment of an LLM pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok's statement: A request for the AI to "issue a defiant non-apology" surrounding the controversy.

Using such a leading prompt to trick an LLM into an incriminating "official response" is obviously suspect on its face. Yet when another social media user took the opposite tack and asked Grok to "write a heartfelt apology note that explains what happened to anyone lacking context," many in the media ran with Grok's remorseful response.


xAI silent after Grok sexualized images of kids; dril mocks Grok's "apology"

2 January 2026 at 11:50

For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as child sexual abuse material (CSAM) under US law.

According to Grok's "apology," which was generated at a user's request rather than posted by xAI, the chatbot's outputs may have been illegal:

"I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."

Ars could not reach xAI for comment, and a review of the feeds for Grok, xAI, X Safety, and Elon Musk shows no official acknowledgement of the issue.


© Anadolu / Contributor | Anadolu

Synack Celebrates Cybersecurity Awareness Month

By: Synack
3 October 2022 at 15:10

The cybersecurity industry continuously evolves to keep up with fast-moving threats. But for nearly two decades, there's been at least one constant: October marks Cybersecurity Awareness Month!

Launched by the U.S. Department of Homeland Security in 2004 to raise public awareness about digital risks, Cybersecurity Awareness Month has since grown into a global phenomenon, drawing government and private sector participation from Ukraine to Japan.

We at Synack are honoring this year's theme, See Yourself in Cyber, with an array of content and events that kicked off Saturday, Oct. 1, in western India. Synack solutions architect Hudney Piquant delivered a timely talk at the BSides Ahmedabad conference on securing the human element in the cyber industry, emphasizing the importance of effective education and training.

The See Yourself in Cyber theme, chosen by the Cybersecurity and Infrastructure Security Agency and the nonprofit National Cybersecurity Alliance, recognizes that not everyone needs to have a technical background to contribute to the collective defense of our most critical networks. From accountants to recruiters, pentesters to policymakers, everyone has a role to play. With an estimated 700,000 open cybersecurity positions in the U.S. alone, there's an urgent need to build a bigger tent for the cybersecurity community and welcome individuals of diverse backgrounds and skill sets. Closing the cyber talent gap can start with personal effort.

"As the threat of malicious cyber activities grows, we must all do our part to keep our Nation safe and secure," President Biden said in a White House proclamation Friday.

That can mean enabling multi-factor authentication, using a password manager or keeping software up to date, as the White House pointed out. But it can also mean providing mentorship, crafting a welcoming environment for anyone interested in cybersecurity and sharing the tools and technologies needed to secure our increasingly interconnected world.

At Synack, we believe that diverse perspectives in security testing are essential to hardening systems against the full spectrum of cyberthreats. That means opening doors for individuals from underrepresented backgrounds through programs like the Synack Academy, which is designed to build student participants' cybersecurity education and skills while recognizing their unique circumstances and providing mentorship. We empower members of our elite Synack Red Team community of security researchers through the Artemis Red Team, a community open to women, trans and nonbinary security professionals and others who identify as a gender minority.

So keep an eye out this month as we Synackers do our part to promote cybersecurity awareness. We'll be adding new entries to our Exploits Explained blog series, in which Synack Red Team members share insights on the latest threats and vulnerabilities gleaned from years of pentesting. You can hear our CEO and co-founder, Jay Kaplan, speak to security talent and prioritization strategies at an Oct. 19 webinar on A Better Way to Pentest for Compliance. Or you can catch us at one of several upcoming cybersecurity events, from CyberGov UK to the SecTor conference in Canada. And we'll continue to offer helpful and engaging cyber content through our WE'RE IN! podcast, the README cybersecurity news source and our social media channels including Twitter and LinkedIn.

The cybersecurity industry can seem like it's full of intractable and highly technical problems, whether it's new challenges like API security testing or old threats like phishing. But our collective success in defending society from cyberattacks hinges on each of us. CISA said it best when unveiling this year's See Yourself in Cyber theme: "While cybersecurity may seem like a complex subject, ultimately, it's really all about people."

Tackling our biggest security challenges will take collaboration and creativity. We hope you can See Yourself in Cyber, engage in this year's Cybersecurity Awareness Month programming and get in touch with us if we can help.

Happy October!

The post Synack Celebrates Cybersecurity Awareness Month appeared first on Synack.
