❌

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

A single click mounted a covert, multistage attack against Copilot

14 January 2026 at 17:03

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a legitimate URL.

The hackers in this case were white-hat researchers from security firm Varonis. The net effect of their multistage attack was that they exfiltrated data, including the target’s name, location, and details of specific events fromΒ the user’s Copilot chat history. The attack continued to run even when the user closed the Copilot chat, with no further interaction needed once the user clicked the link, a legitimate Copilot one, in the email. The attack and resulting data theft bypassed enterprise endpoint security controls and detection by endpoint protection apps.

It just works

β€œOnce we deliver this link with this malicious prompt, the user just has to click on the link and the malicious task is immediately executed,” Varonis security researcher Dolev Taler told Ars. β€œEven if the user just clicks on the link and immediately closes the tab of Copilot chat, the exploit still works.”

Read full article

Comments

Β© Microsoft

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

8 January 2026 at 09:00

There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users.

The reason more often than not is that AI is so inherently designed to comply with user requests that the guardrails are reactive and ad hoc, meaning they are built to foreclose a specific attack technique rather than the broader class of vulnerabilities that make it possible. It’s tantamount to putting a new highway guardrail in place in response to a recent crash of a compact car but failing to safeguard larger types of vehicles.

Enter ZombieAgent, son of ShadowLeak

One of the latest examples is a vulnerability recently discovered in ChatGPT. It allowed researchers at Radware to surreptitiously exfiltrate a user's private information. Their attack also allowed for the data to be sent directly from ChatGPT servers, a capability that gave it additional stealth, since there were no signs of breach on user machines, many of which are inside protected enterprises. Further, the exploit planted entries in the long-term memory that the AI assistant stores for the targeted user, giving it persistence.

Read full article

Comments

Β© Getty Images

❌
❌