Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"

The lead developer of one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in low-quality bug reports, many of them AI-generated slop.

“We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder and lead developer of the open source app cURL, said Thursday. “It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.”

Manufacturing bogus bugs

His comments came as cURL users complained that the move was treating the symptoms caused by AI slop without addressing the cause. The users said they were concerned the move would eliminate a key means for ensuring and maintaining the security of the tool. Stenberg largely agreed, but indicated his team had little choice.

eBay bans illicit automated shopping amid rapid rise of AI agents

On Tuesday, eBay updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users.

eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.

At first glance, the phrase "agentic commerce" may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While they fit loosely under one label, these tools come in many forms.

Millions of people imperiled through sign-in links sent by SMS

Websites that authenticate users through links and codes sent in text messages are imperiling the privacy of millions of people, leaving them vulnerable to scams, identity theft, and other crimes, recently published research has found.

The links are sent to people seeking a range of services, including insurance quotes, job listings, and referrals for pet sitters and tutors. To spare themselves the hassle of managing usernames and passwords (and to spare users the hassle of creating and entering them), many such services instead require users to provide a cell phone number when signing up for an account. The services then send authentication links or passcodes by SMS when users want to log in.

Easy to execute at scale

A paper published last week found more than 700 endpoints delivering such texts on behalf of more than 175 services in ways that put user security and privacy at risk. One practice that jeopardizes users is the use of links that are easily enumerated, meaning scammers can guess them simply by modifying the security token, which usually appears at the end of a URL. By incrementing or randomly guessing the token (for instance, by changing 123 to 124 or ABC to ABD and so on), the researchers were able to access accounts belonging to other users. From there, the researchers could view personal details, such as partially completed insurance applications.
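
The paper's examples aren't reproduced here, but the underlying mistake is easy to demonstrate. The sketch below (the token formats are hypothetical, not drawn from the research) contrasts an enumerable sequential token with a cryptographically random one:

```python
import secrets

# Insecure: a short sequential token. Anyone who receives one link
# can reach other users' links simply by trying nearby values.
def make_sequential_token(counter: int) -> str:
    return str(counter)

own_token = 1234  # the token from the attacker's own sign-in link
guesses = [make_sequential_token(own_token + i) for i in range(1, 100)]
print(guesses[:3])  # ['1235', '1236', '1237']

# Safer: a cryptographically random token drawn from a huge keyspace.
# 32 bytes of randomness (~2^256 values) makes enumeration hopeless.
def make_random_token() -> str:
    return secrets.token_urlsafe(32)

print(make_random_token())
```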

Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude 24 language and formatting patterns that Wikipedia editors have identified as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

10 things I learned from burning myself out with AI coding agents

If you've ever used a 3D printer, you may recall the wondrous feeling when you first printed something you could have never sculpted or built yourself. Download a model file, load some plastic filament, push a button, and almost like magic, a three-dimensional object appears. But the result isn't polished and ready for mass production, and creating a novel shape requires more skills than just pushing a button. Interestingly, today's AI coding agents feel much the same way.

Since November, I have used Claude Code and Claude Opus 4.5 through a personal Claude Max account to extensively experiment with AI-assisted software development (I have also used OpenAI's Codex in a similar way, though not as frequently). Fifty projects later, I'll be frank: I have not had this much fun with a computer since I learned BASIC on my Apple II Plus when I was 9 years old. This opinion comes not as an endorsement but as personal experience: I voluntarily undertook this project, and I paid out of pocket for both OpenAI and Anthropic's premium AI plans.

Throughout my life, I have dabbled in programming as a utilitarian coder, writing small tools or scripts when needed. In my web development career, I wrote some small tools from scratch, but I primarily modified other people's code for my needs. Since 1990, I've programmed in BASIC, C, Visual Basic, PHP, ASP, Perl, Python, Ruby, MUSHcode, and some others. I am not an expert in any of these languages—I learned just enough to get the job done. I have developed my own hobby games over the years using BASIC, Torque Game Engine, and Godot, so I have some idea of what makes a good architecture for a modular program that can be expanded over time.

Rackspace customers grapple with “devastating” email hosting price hike

Rackspace’s new pricing for its email hosting services is “devastating,” according to a partner that has been using Rackspace as its email provider since 1999.

In recent weeks, Rackspace updated its email hosting pricing. Its Standard plan is now $10 per mailbox per month. Businesses can also pay for the Rackspace Email Plus add-on for an extra $2/mailbox/month (for “file storage, mobile sync, Office-compatible apps, and messaging”), and the Archiving add-on for an extra $6/mailbox/month (for unlimited storage).

As recently as November 2025, Rackspace charged $3/mailbox/month for its Standard plan, an extra $1/mailbox/month for the Email Plus add-on, and an additional $3/mailbox/month for the Archiving add-on, according to the Internet Archive’s Wayback Machine.
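
To put the hike in perspective, here is a quick back-of-the-envelope comparison for a business taking all three products; the 50-mailbox count is hypothetical, not a figure from Rackspace or the partner quoted above:

```python
# Per-mailbox monthly cost: Standard plan + Email Plus + Archiving
old_price = 3 + 1 + 3    # $7/mailbox/month as of November 2025
new_price = 10 + 2 + 6   # $18/mailbox/month under the new pricing

mailboxes = 50           # hypothetical small business
print(f"Old monthly bill: ${old_price * mailboxes}")  # $350
print(f"New monthly bill: ${new_price * mailboxes}")  # $900
print(f"Increase: {100 * (new_price - old_price) / old_price:.0f}%")  # 157%
```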

OpenAI to test ads in ChatGPT as it burns through billions

On Friday, OpenAI announced it will begin testing advertisements inside the ChatGPT app for some US users in a bid to expand its customer base and diversify revenue. The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a "last resort" and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.

The banner ads will appear in the coming weeks for logged-in users of the free version of ChatGPT as well as the new $8 per month ChatGPT Go plan, which OpenAI also announced Friday is now available worldwide. OpenAI first launched ChatGPT Go in India in August 2025 and has since rolled it out to over 170 countries.

Users paying for the more expensive Plus, Pro, Business, and Enterprise tiers will not see advertisements.

Mandiant releases rainbow table that cracks weak admin passwords in 12 hours

Security firm Mandiant has released a database that allows any administrative password protected by Microsoft’s NTLMv1 hash algorithm to be cracked, in an attempt to nudge users who continue relying on the deprecated function despite its known weaknesses.

The database comes in the form of a rainbow table, a precomputed table of hash values linked to their corresponding plaintext. These tables, each built for a specific hashing scheme, allow hackers to take over accounts by quickly mapping a stolen hash to its password counterpart. NTLMv1 rainbow tables are particularly easy to construct because of NTLMv1’s limited keyspace, meaning the relatively small number of possible values the hash can take on. NTLMv1 rainbow tables have existed for two decades, but putting them to use has typically required large amounts of computing resources.
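
A true rainbow table compresses its hash-to-plaintext mapping into chains of alternating hash and reduction functions to save storage; the naive, uncompressed form of the same precomputation idea is a plain lookup table, sketched below. MD5 and four-digit PINs stand in for NTLMv1 and real passwords purely for illustration, and nothing here reflects Mandiant's actual implementation.

```python
import hashlib
from itertools import product

# Precompute hash -> plaintext over a tiny keyspace (4-digit PINs).
table = {}
for digits in product("0123456789", repeat=4):
    pin = "".join(digits)
    table[hashlib.md5(pin.encode()).hexdigest()] = pin

# Recovering a password from a stolen hash is now a single dictionary
# lookup rather than a brute-force search.
stolen = hashlib.md5(b"4821").hexdigest()
print(table[stolen])  # -> 4821
```

The same economics apply to NTLMv1: once the keyspace has been precomputed, every subsequent crack costs only lookup time.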

New ammo for security pros

On Thursday, Mandiant said it had released an NTLMv1 rainbow table that will allow defenders and researchers (and, of course, malicious hackers, too) to recover passwords in under 12 hours using consumer hardware costing less than $600. The table is hosted in Google Cloud. The database works against Net-NTLMv1 passwords, which are used in network authentication for accessing resources such as SMB network shares.

TSMC says AI demand is “endless” after record Q4 earnings

On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.

TSMC manufactures chips for companies including Apple, Nvidia, AMD, and Qualcomm, making it a linchpin of the global electronics supply chain. The company produces the vast majority of the world's most advanced semiconductors, and its factories in Taiwan have become a focal point of US-China tensions over technology and trade. When TSMC reports strong demand and ramps up spending, it signals that the companies designing AI chips expect years of continued growth.

"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."

Wikipedia signs major AI firms to new priority data access deals

On Thursday, the Wikimedia Foundation announced API access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral AI, expanding its effort to get major tech companies to pay for high-volume API access to Wikipedia content, which these companies use to train AI models like Microsoft Copilot and ChatGPT.

The deals mean that most major AI developers have now signed on to the foundation's Wikimedia Enterprise program, a commercial subsidiary that sells API access to Wikipedia's 65 million articles at higher speeds and volumes than the free public APIs provide. Wikipedia's content remains freely available under a Creative Commons license, but the Enterprise program charges for the faster, higher-volume access. The foundation did not disclose the financial terms of the deals.

The new partners join Google, which signed a deal with Wikimedia Enterprise in 2022, as well as smaller companies like Ecosia, Nomic, Pleias, ProRata, and Reef Media. The revenue helps offset infrastructure costs for the nonprofit, which otherwise relies on small public donations while watching its content become a staple of training data for AI models.

A single click mounted a covert, multistage attack against Copilot

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a legitimate URL.

The hackers in this case were white-hat researchers from security firm Varonis. The net effect of their multistage attack was the exfiltration of data including the target’s name, location, and details of specific events from the user’s Copilot chat history. The attack continued to run even after the user closed the Copilot chat; no further interaction was needed once the user clicked the link, a legitimate Copilot URL, in the email. The attack and resulting data theft bypassed enterprise security controls and detection by endpoint protection apps.

It just works

“Once we deliver this link with this malicious prompt, the user just has to click on the link and the malicious task is immediately executed,” Varonis security researcher Dolev Taler told Ars. “Even if the user just clicks on the link and immediately closes the tab of Copilot chat, the exploit still works.”

Bandcamp bans purely AI-generated music from its platform

On Tuesday, Bandcamp announced on Reddit that it will no longer permit AI-generated music on its platform. "Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp," the company wrote in a post to the r/bandcamp subreddit. The new policy also prohibits "any use of AI tools to impersonate other artists or styles."

The policy draws a line that some in the music community have debated: Where does tool use end and full automation begin? AI models are not artists in themselves, since they lack personhood and creative intent. But people do use AI tools to make music, and the spectrum runs from using AI for minor assistance (cleaning up audio, suggesting chord progressions) to typing a prompt and letting a model generate an entire track. Bandcamp's policy targets the latter end of that spectrum while leaving room for human artists who incorporate AI tools into a larger creative process.

The announcement emphasized the platform's desire to protect its community of human artists. "The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain," the company wrote. Bandcamp asked users to flag suspected AI-generated content through its reporting tools, and the company said it reserves "the right to remove any music on suspicion of being AI generated."

The RAM shortage’s silver lining: Less talk about “AI PCs”

RAM prices have soared, which is bad news for people interested in buying, building, or upgrading a computer this year, but it's likely good news for people exasperated by talk of so-called AI PCs.

As Ars Technica has reported, the growing demands of data centers, fueled by the AI boom, have led to a shortage of RAM and flash memory chips, sending prices skyrocketing.

In an announcement today, Ben Yeh, principal analyst at technology research firm Omdia, said that in 2025, “mainstream PC memory and storage costs rose by 40 percent to 70 percent, resulting in cost increases being passed through to customers.”

Never-before-seen Linux malware is “far more advanced than typical”

Researchers have discovered a never-before-seen framework that infects Linux machines with a wide assortment of modules that are notable for the range of advanced capabilities they provide to attackers.

The framework, referred to in its source code as VoidLink, features more than 30 modules that attackers can use to customize its capabilities for each infected machine. These modules provide additional stealth as well as specific tools for reconnaissance, privilege escalation, and lateral movement inside a compromised network. The components can be easily added or removed as objectives change over the course of a campaign.

A focus on Linux inside the cloud

VoidLink can target machines within popular cloud services by detecting whether an infected machine is hosted inside AWS, GCP, Azure, Alibaba, or Tencent, and there are indications that its developers plan to add detection for Huawei, DigitalOcean, and Vultr in future releases. To determine which cloud service hosts a machine, VoidLink examines instance metadata using the respective vendor’s API.
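
The article doesn't reproduce VoidLink's code, but cloud-provider detection via instance metadata typically works as sketched below, probing each vendor's publicly documented metadata endpoint; the probe order and function name here are illustrative assumptions, not details from the malware:

```python
import urllib.request

# Standard instance-metadata endpoints exposed to guest machines.
PROBES = {
    "AWS":     ("http://169.254.169.254/latest/meta-data/", {}),
    "GCP":     ("http://metadata.google.internal/computeMetadata/v1/",
                {"Metadata-Flavor": "Google"}),
    "Azure":   ("http://169.254.169.254/metadata/instance?api-version=2021-02-01",
                {"Metadata": "true"}),
    "Alibaba": ("http://100.100.100.200/latest/meta-data/", {}),
    "Tencent": ("http://metadata.tencentyun.com/latest/meta-data/", {}),
}

def detect_cloud(timeout: float = 1.0) -> str:
    for provider, (url, headers) in PROBES.items():
        req = urllib.request.Request(url, headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    return provider
        except OSError:
            continue  # endpoint unreachable or rejected: not this cloud
    return "unknown"

print(detect_cloud())
```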

Hegseth wants to integrate Musk’s Grok AI into military networks this month

On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."

Microsoft vows to cover full power costs for energy-hungry AI data centers

On Tuesday, Microsoft announced a new initiative called "Community-First AI Infrastructure" that commits the company to paying full electricity costs for its data centers and refusing to seek local property tax reductions.

As demand for generative AI services has increased over the past year, Big Tech companies have been racing to spin up massive new data centers for serving chatbots and image generators, facilities that can have profound economic effects on the areas surrounding them. Among other issues, communities across the country have grown concerned that data centers are driving up residential electricity rates through heavy power consumption and straining water supplies due to server cooling needs.

The International Energy Agency (IEA) projects that global data center electricity demand will more than double by 2030, reaching around 945 TWh, with the United States responsible for nearly half of total electricity demand growth over that period. This growth is happening while much of the country's electricity transmission infrastructure is more than 40 years old and under strain.

Google removes some AI health summaries after investigation finds “dangerous” flaws

On Sunday, Google removed some of its AI Overviews health summaries after a Guardian investigation found people were being put at risk by false and misleading information. The removals came after the newspaper found that Google's generative AI feature delivered inaccurate health information at the top of search results, potentially leading seriously ill patients to mistakenly conclude they are in good health.

Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: The AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google only deactivated the summaries for the liver test queries, leaving other potentially harmful answers accessible.

The investigation revealed that searching for liver test norms generated raw data tables (listing specific enzymes like ALT, AST, and alkaline phosphatase) that lacked essential context. The AI feature also failed to adjust these figures for patient demographics such as age, sex, and ethnicity. Experts warned that because the AI model's definition of "normal" often differed from actual medical standards, patients with serious liver conditions might mistakenly believe they are healthy and skip necessary follow-up care.

ChatGPT Health lets you connect medical records to an AI that makes things up

On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for "health and wellness conversations" that is intended to connect a user's health and medical records to the chatbot in a secure way.

But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It's a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance.

Despite the known accuracy issues with AI chatbots, OpenAI's new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized responses for tasks like summarizing care instructions, preparing for doctor appointments, and understanding test results.

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users.

More often than not, the reason is that AI models are so inherently designed to comply with user requests that guardrails end up reactive and ad hoc, built to foreclose a specific attack technique rather than the broader class of vulnerabilities that makes it possible. It’s tantamount to installing a new highway guardrail in response to a recent crash of a compact car while failing to safeguard larger vehicles.

Enter ZombieAgent, son of ShadowLeak

One of the latest examples is a vulnerability recently discovered in ChatGPT. It allowed researchers at Radware to surreptitiously exfiltrate a user's private information. Their attack also allowed the data to be sent directly from ChatGPT servers, a capability that gave it additional stealth, since there were no signs of a breach on user machines, many of which sit inside protected enterprise networks. Further, the exploit planted entries in the long-term memory that the AI assistant stores for the targeted user, giving it persistence.
