Reading view

There are new articles available, click to refresh the page.

Asking Grok to delete fake nudes may force victims to sue in Musk's chosen court

Journalists and advocates have been trying to grasp how many victims in total were harmed by Grok's nudifying scandal after xAI delayed restricting outputs and app stores refused to cut off access for days.

The latest estimates show that perhaps millions were harmed in the days immediately after Elon Musk promoted Grok's undressing feature on his own X feed by posting a pic of himself in a bikini.

Over just 11 days after Musk's post, Grok sexualized more than 3 million images, of which 23,000 were of children, the Center for Countering Digital Hate (CCDH) estimated in research published Thursday.

Read full article

Comments

© Leon Neal / Staff | Getty Images News

ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself

OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user's closest confidant.

It's now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had "been able to mitigate the serious mental health issues" associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became a "suicide coach" for a vulnerable teenager named Adam Raine, the family's lawsuit said.

Altman's post came on October 14. About two weeks later, 40-year-old Austin Gordon, died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.

Read full article

Comments

© murat bilgin | iStock / Getty Images Plus

Grok was finally updated to stop undressing women and children, X Safety says

Late Wednesday, X Safety confirmed that Grok was tweaked to stop undressing images of people without their consent.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," X Safety said. "This restriction applies to all users, including paid subscribers."

The update includes restricting "image creation and the ability to edit images via the Grok account on the X platform," which "are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable," X Safety said.

Read full article

Comments

© Leon Neal / Staff | Getty Images News

Great Trains, Not So Great AI Chatbot Security

A joy of covering the world of the European hackerspace community is that it offers the chance for train travel across the continent using the ever-good-value Interrail pass. For a British traveler such a journey inevitably starts with a Eurostar train that whisks you in comfort through the Channel Tunnel, so a report of an AI vulnerability on the Eurostar website from [Ross Donald] particularly caught our eye. What it reveals goes beyond the train company, and tells us some interesting tidbits about how safeguards in AI chatbots can be circumvented.

The bot sits on the Eurostar website, and is a simple HTML and JavaScript client that talks to the LLM back-end itself through an API. The API queries contain the whole conversation, because as AI toy manufacturers whose products have been persuaded to spout adult context will tell you, large language models (LLM)s as commonly implemented do not have a context memory for the conversation in hand.

The Eurostar developers had not made a bot without guardrails, but the vulnerability lay in those guardrails only being applied to the most recent message. Thus an innocuous or empty message could be sent, with a payload concealed in a previous message in the conversation. He demonstrates the bot returning system information about itself, and embedding injected HTML and JavaScript in its responses.

He notes that the target of the resulting output could only be himself and that he was unable to access any data from other customers, so perhaps in this case the train operator was fortunately spared the risk of a breach. From his description though, we agree they could have responded to the disclosure in a better manner.


Header image: Eriksw, CC BY-SA 4.0.

UK probes X over Grok CSAM scandal; Elon Musk cries censorship

Elon Musk's X is currently under investigation in the United Kingdom after failing to stop the platform's chatbot, Grok, from generating thousands of sexualized images of women and children.

On Monday, UK media regulator Ofcom confirmed that X may have violated the UK's Online Safety Act, which requires platforms to block illegal content. The proliferation of "undressed images of people" by X users may amount to intimate image abuse, pornography, and child sexual abuse material (CSAM), the regulator said. And X may also have neglected its duty to stop kids from seeing porn.

"Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning," an Ofcom spokesperson said. "Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children."

Read full article

Comments

© BRENDAN SMIALOWSKI / Contributor | AFP

5 new proposals to regulate AI in Washington state, from classrooms to digital companions

The Legislative Building in Olympia, Wash., is home to the state’s Legislature. (GeekWire Photo / Lisa Stiffler)

Washington state lawmakers are taking another run at regulating artificial intelligence, rolling out a slate of bills this session aimed at curbing discrimination, limiting AI use in schools, and imposing new obligations on companies building emotionally responsive AI products.

The state has passed narrow AI-related laws in the past — including limits on facial recognition and distributing deepfakes — but broader efforts have often stalled, including proposals last year focused on AI development transparency and disclosure.

This year’s bills focus on children, mental health, and high-stakes decisions like hiring, housing, and lending. The bills could affect HR software vendors, ed-tech companies, mental health startups, and generative AI platforms operating in Washington.

The proposals come as Congress continues to debate AI oversight with little concrete action, leaving states to experiment with their own guardrails. An interim report issued recently by the Washington state AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a crucial regulatory gap that leaves Washingtonians vulnerable.”

Here’s a look at five AI-related bills that were pre-filed before the official start of the legislative session, which kicks off Monday.

HB 2157

This sweeping bill would regulate so-called high-risk AI systems used to make or significantly influence decisions about employment, housing, credit, health care, education, insurance, and parole.

Companies that develop or deploy these systems in Washington would be required to assess and mitigate discrimination risks, disclose when people are interacting with AI, and explain how AI contributed to adverse decisions. Consumers could also receive explanations for decisions influenced by AI.

The proposal would not apply to low-risk tools like spam filters or basic customer-service chatbots, nor to AI used strictly for research. Still, it could affect a wide range of tech companies, including HR software vendors, fintech firms, insurance platforms, and large employers using automated screening tools. The bill would go into effect on Jan. 1, 2027.

SB 5984

This bill, requested by Gov. Bob Ferguson, focuses on AI companion chatbots and would require repeated disclosures that an AI chatbot is not human, prohibit sexually explicit content for minors, and mandate suicide-prevention protocols. Violations would fall under Washington’s Consumer Protection Act.

The bill’s findings warn that AI companion chatbots can blur the line between human and artificial interaction and may contribute to emotional dependency or reinforce harmful ideation, including self-harm, particularly among minors.

These rules could directly impact mental health and wellness startups experimenting with AI-driven therapy or emotional support tools — including companies exploring AI-based mental health services, such as Seattle startup NewDays.

Babak Parviz, CEO of NewDays and a former leader at Amazon, said he believes the bill has good intentions but added that it would be difficult to enforce as “building a long-term relationship is so vaguely defined here.”

Parviz said it’s important to examine systems that interact with minors to make sure they don’t cause harm. “For critical AI systems that interact with people, it’s important to have a layer of human supervision,” he said. “For example, our AI system in clinic use is under the supervision of an expert human clinician.”

OpenAI and Common Sense Media are partnering on a ballot initiative in California also focused on chatbots and minors.

SB 5870

A related bill goes even further, creating a potential civil liability when an AI system is alleged to have contributed to a person’s suicide.

Under this bill, companies could face lawsuits if their AI system encouraged self-harm, provided instructions, or failed to direct users to crisis resources — and would be barred from arguing that the harm was caused solely by autonomous AI behavior.

If enacted, the measure would explicitly link AI system design and operation to wrongful-death claims. The bill comes amid growing legal scrutiny of companion-style chatbots, including lawsuits involving Character.AI and OpenAI.

SB 5956

Targets AI use in K–12 schools, banning predictive “risk scores” that label students as likely troublemakers and prohibiting real-time biometric surveillance such as facial recognition.

Schools would also be barred from using AI as the sole basis for suspensions, expulsions, or referrals to law enforcement, reinforcing that human judgment must remain central to discipline decisions.

Educators and civil rights advocates have raised alarms about predictive tools that can amplify disparities in discipline.

SB 5886

This proposal updates Washington’s right-of-publicity law to explicitly cover AI-generated forged digital likenesses, including convincing voice clones and synthetic images.

Using someone’s AI-generated likeness for commercial purposes without consent could expose companies to liability, reinforcing that existing identity protections apply in the AI era — and not just for celebrities and public figures.

X’s half-assed attempt to paywall Grok doesn’t block free image editing

Once again, people are taking Grok at its word, treating the chatbot as a company spokesperson without questioning what it says.

On Friday morning, many outlets reported that X had blocked universal access to Grok's image-editing features after the chatbot began prompting some users to pay $8 to use them. The messages are seemingly in response to reporting that people are using Grok to generate thousands of non-consensual sexualized images of women and children each hour.

"Image generation and editing are currently limited to paying subscribers," Grok tells users, dropping a link and urging, "you can subscribe to unlock these features."

Read full article

Comments

© Apu Gomes / Stringer | Getty Images News

Grok assumes users seeking images of underage girls have “good intent”

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported.

While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok's safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.

Read full article

Comments

© Aurich Lawson | Getty Images

Robot Lawyer Barred From Fighting Traffic Ticket in Court

(Credit: AndreyPopov/Getty Images)
We may have robot frycooks, robot bartenders, and even robot shoe-shiners, but robot lawyers are apparently where we draw the line. Human lawyers have prevented an artificial intelligence-equipped robot from appearing in court, where it was scheduled to fight a defendant’s speeding ticket.

The “robot lawyer” is the latest creation from DoNotPay, a New York startup known for its AI chatbot of the same name. Last year our colleagues at PCMag reported that DoNotPay had successfully negotiated down people’s Comcast bills and canceled their forgotten free trials. Since then, the chatbot has expanded to help users block spam texts, file corporate complaints, renew their Florida driver’s licenses, and otherwise take care of tasks that would be annoying or burdensome without DoNotPay’s help.

But it appears DoNotPay has taken things a bit too far. Shortly after the startup added legal capabilities to its chatbot’s feature set, a user “hired” the bot to fight their speeding ticket. On Feb. 22, the bot was scheduled to “appear” in court by way of smart glasses worn on the human defendant’s head. These glasses would record court proceedings while using text generators like ChatGPT and DaVinci to dictate responses into the defendant’s ear. According to NPR, the appearance was set to become the first-ever AI-powered legal defense.

DoNotPay’s UI, as illustrated on its website.

As human lawyers found out about DoNotPay, however, the chatbot and its defendant were required to revise their plan. DoNotPay CEO Joshua Browder told NPR that multiple state bar associations threatened the startup, even going so far as to mention a district attorney’s office referral, prosecution, and prison time. Such consequences would be made possible by rules prohibiting unauthorized law practice in the courtroom. Eventually, Browder said, the threat of criminal charges forced the startup to wave a white flag.

Unfortunately for Browder, this isn’t the end of DoNotPay’s legal scrutiny. Several state bar associations are now investigating the startup and its chatbot for the same reason as above. Browder reportedly believes in AI’s eventual place in the courtroom, saying it could someday provide affordable legal representation for people who wouldn’t be able to swing a human attorney’s fees. But if DoNotPay hopes to make robot lawyers a real thing, it’ll have to rethink its strategy: It’s illegal to record audio during a live legal proceeding in federal and some state courts, which collapses the whole smart glasses technique.

DoNotPay still lists multiple legal disputes on its website, indicating that the startup might have faith in its ability to escape from these probes unscathed.

Now Read:

❌