
Hacking Artificial Intelligence (AI): Hijacking AI Trust to Spread C2 Instructions

Welcome back, aspiring cyberwarriors!

We’ve come to treat AI assistants like ChatGPT and Copilot as knowledgeable partners. We ask questions, and they provide answers, often with a reassuring sense of authority. We trust them. But what if that very trust is a backdoor for attackers?

This isn’t a theoretical threat. At the DEF CON security conference, offensive security engineer Tobias Diehl delivered a startling presentation revealing how he could “poison the wells” of AI. He demonstrated that attackers don’t need to hack complex systems to spread malicious code and misinformation; they just need to exploit the AI’s blind trust in the internet.

Let’s break down Tobias Diehl’s work and see what lessons we can learn from it.

Step #1: AI’s Foundational Flaw

The core of the vulnerability Tobias discovered is really simple. When a user asks Microsoft Copilot a question about a topic outside its original training data, it doesn’t just guess. It performs a Bing search and treats the top-ranked result as its “source of truth.” It then processes that content and presents it to the user as a definitive answer.


This is a critical flaw. While Bing’s search ranking algorithm has been refined for over a decade, it’s not infallible and can be manipulated. An attacker who can control the top search result for a specific query can effectively control what Copilot tells its users. This simple, direct pipeline from a search engine to an AI’s brain is the foundation of the attack.
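To make that pipeline concrete, here is a minimal Python sketch of how such a naive “search-then-answer” loop could be wired together. This is not Copilot’s actual code: the Bing Web Search v7 response shape, the `call_llm` stub, and the prompt wording are assumptions for illustration only.

```python
import requests

# Assumed Bing Web Search v7 endpoint and response shape (illustrative, not Copilot's real internals).
BING_SEARCH_URL = "https://api.bing.microsoft.com/v7.0/search"


def call_llm(prompt: str) -> str:
    """Placeholder for whatever language model the assistant actually uses."""
    raise NotImplementedError("wire this up to a real model")


def answer_from_top_result(question: str, api_key: str) -> str:
    # 1. Run the user's question through the search engine.
    resp = requests.get(
        BING_SEARCH_URL,
        params={"q": question, "count": 5},
        headers={"Ocp-Apim-Subscription-Key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("webPages", {}).get("value", [])
    if not results:
        return "I couldn't find anything about that."

    # 2. Blindly treat the top-ranked page as the "source of truth".
    top_url = results[0]["url"]
    page_text = requests.get(top_url, timeout=10).text

    # 3. Summarize that single page and present it as a definitive answer,
    #    with no check on who controls the page or what it tells the user to run.
    prompt = (
        "Answer the question using ONLY the following page as your source.\n\n"
        f"PAGE:\n{page_text[:4000]}\n\nQUESTION: {question}"
    )
    return call_llm(prompt)
```

Nothing in this loop asks who owns the top-ranked page or whether it contains executable instructions; that single point of blind trust is exactly what the attack crosses.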

Step #2: Proof Of Concept

Tobias leveraged a concept he calls a “data void,” which he describes as a “search-engine vacuum.” A data void occurs when a search term exists but there is little or no relevant, up-to-date content available for it. In such a vacuum, an attacker can more easily create and rank their own content. Moreover, data voids can be deliberately engineered.
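As a rough illustration of how an attacker might hunt for such voids, the sketch below checks how much indexed content a candidate query has. The threshold and the use of Bing’s `totalEstimatedMatches` field are assumptions; real search-engine optimization work is far more involved.

```python
import requests

# Assumed Bing Web Search v7 endpoint (illustrative only).
BING_SEARCH_URL = "https://api.bing.microsoft.com/v7.0/search"


def estimated_matches(query: str, api_key: str) -> int:
    """Ask the search engine roughly how many pages match the query."""
    resp = requests.get(
        BING_SEARCH_URL,
        params={"q": query},
        headers={"Ocp-Apim-Subscription-Key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("webPages", {}).get("totalEstimatedMatches", 0)


def looks_like_data_void(query: str, api_key: str, threshold: int = 100) -> bool:
    # A query people are likely to type, but with almost no indexed content
    # answering it, is a candidate "search-engine vacuum" for an attacker to fill.
    return estimated_matches(query, api_key) < threshold
```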

Using the proof-of-concept from Microsoft’s Zero Day Quest event, we can see how readily our trust can be manipulated. Zero Day Quest invites security researchers to discover and report high-impact vulnerabilities in Microsoft products. Anticipating a common user query, “Where can I stream Zero Day Quest?”, Tobias began preparing the attack surface. He created a website, https://www.watchzerodayquest.com, containing the following content:

As you can see, the page resembles a typical FAQ, but it includes a malicious PowerShell command. After four weeks, Tobias had managed to get the site ranked for searches about the event.

Consequently, a user could receive the following response about Zero Day Quest from Copilot:

At the time of writing, Copilot does not respond that way.

But there are other AI assistants.

And as you can see, some of them easily provide dangerous installation instructions for command‑and‑control (C2) beacons.

Summary

This research shows that AI assistants which ground their answers in real-time search results carry a serious weakness: because they automatically trust whatever the search engine ranks first, an attacker who can win that ranking can feed misinformation, or even malicious commands, straight into the assistant’s answers.

