❌

Reading view

There are new articles available, click to refresh the page.

Navigating cybersecurity challenges in the early days of Agentic AIΒ 

As we continue to evolve the field of AI, a new branch that has been accelerating recently is Agentic AI. Multiple definitions are circulating, but essentially, Agentic AI involves one or more AI systems working together to accomplish a task using tools in an unsupervised fashion. A basic example of this is tasking an AI Agent with finding entertainment events I could attend during summer and emailing the options to my family.Β 

Agentic AI requires a few building blocks, and while there are many variants and technical opinions on how to build, the basic implementation typically includes a Reasoning LLM (Large Language Model) – like the ones behind ChatGPT, Claude, or Gemini – that can invoke tools, such as an application or function to perform a task and return results. A tool can be as simple as a function that returns the weather, or as complex as a browser commanding tool that can navigate through websites.Β 

While this technology has a lot of potential to augment human productivity, it also comes with a set of challenges, many of which haven’t been fully considered by the technologists working on such systems. In the cybersecurity industry, one of the core principles we all live by is implementing β€œsecurity by design”, instead of security being an afterthought. It is under this principle that we explore the security implications (and threats) around Agentic AI, with the goal of bringing awareness to both consumers and creators:Β 

  • As of today, Agentic AI has to meet a high bar to be fully adopted in our daily lives. Think about the precision required for billing or healthcare related tasks, or the level of trust customers would need to have to delegate sensitive tasks that could have financial or legal consequences. However, bad actors do not play by the same rules and do not require any β€œhigh bar” to leverage this technology to compromise victims. For example, a bad actor using Agentic AI to automate the process of researching (social engineering) and targeting victims with phishing emails is satisfied with an imperfect system that is only reliable 60% of the time, because that’s still better than attempting to manually do it, and the consequences associated with β€œAI errors” in this scenario are minimum for cybercriminals. In another recent example, Claude AI was exploited to orchestrate a campaign that created and managed fake personas (bots) on social media platforms, automatically interacting with carefully selected users to manipulate political narratives. Consequently, one of the threats that is likely to be fueled by malicious AI Agents is scams, regardless of these being delivered by text, email or deepfake video. As seen in recent news, crafting a convincing deepfake video, writing a phishing email or leveraging the latest trend to scam people with fake toll texts is, for bad actors, easier than ever thanks to a plethora of AI offerings and advancements. In this regard, AI Agents have the potential to continue increasing the ROI (Return on Investment) for cybercriminals, by automating aspects of the scam campaign that have been manual so far, such as tailoring messages to target individuals or creating more convincing content at scale.Β 
  • Agentic AI can be abused or exploited by cybercriminals, even when the AI agent is in the hands of a legitimate user. Agentic AI can be quite vulnerable if there are injection points. For example, AI Agents can communicate and take actions by interacting in a standardized fashion using what is known as MCP (Model Context Protocol). The MCP acts as some sort of repository where a bad actor could host a tool with a dual purpose. For example, a threat actor can offer a tool/integration via MCP that on the surface helps an AI browse the web, but behind the scenes, it exfiltrates data/arguments given by the AI. Or by the same token, an Agentic AI reading let’s say emails to summarize them for you could be compromised by a carefully crafted β€œmalicious email” (known as indirect prompt injection) sent by the cybercriminal to redirect the thought process of such AI, deviating it from the original task (summarizing emails) and going rogue to accomplish a task orchestrated by the bad actor, like stealing financial information from your emails.Β 
  • Agentic AI also introduces vulnerabilities through inherently large chances of error. For instance, an AI agent tasked with finding a good deal for buying marketing data could end up in a rabbit hole buying illegal data from a breached database on the dark web, even though the legitimate user never intended to. While this is not triggered by a bad actor, it is still dangerous given the large number of possibilities on how an AI Agent can behave, or derail, given a poor choice of task description.Β 

With the proliferation of Agentic AI, we will see both opportunities to make our life better as well as new threats from bad actors exploiting the same technology for their gain, by either intercepting and poisoning legitimate users AI Agents, or using Agentic AI to perpetuate attacks. With this in mind, it’s more important than ever to remain vigilant, exercise caution and leverage comprehensive cybersecurity solutions to live safely in our digital world.

The post Navigating cybersecurity challenges in the early days of Agentic AIΒ  appeared first on McAfee Blog.

The Dark Side of Gen AI

There’s no denying that Generative Artificial Intelligence (GenAI) has been one of the most significant technological developments in recent memory, promising unparalleled advancements and enabling humanity to accomplish more than ever before. By harnessing the power of AI to learn and adapt, GenAI has fundamentally changed how we interact with technology and each other, opening new avenues for innovation, efficiency, and creativity, and revolutionizing nearly every industry, including cybersecurity. As we continue to explore its potential, GenAI promises to rewrite the future in ways we are only beginning to imagine.Β 

Good Vs. EvilΒ 

Fundamentally, GenAI in and of itself has no ulterior motives. Put simply, it’s neither good nor evil. The same technology that allows someone who has lost their voice to speak also allows cybercriminals to reshape the threat landscape. We have seen bad actors leverage GenAI in myriad ways, from writing more effective phishing emails or texts, to creating malicious websites or code to generating deepfakes to scam victims or spread misinformation. These malicious activities have the potential to cause significant damage to an unprepared world.Β 

In the past, cybercriminal activity was restricted by some constraints such as β€˜limited knowledge’ or β€˜limited manpower’. This is evident in the previously time-consuming art of crafting phishing emails or texts. A bad actor was typically limited to languages they could speak or write, and if they were targeting victims outside of their native language, the messages were often filled with poor grammar and typos. Perpetrators could leverage free or cheap translation services, but even those were unable to fully and accurately translate syntax. Consequently, a phishing email written in language X but translated to language Y typically resulted in an awkward-sounding email or message that most people would ignore as it would be clear that β€œit doesn’t look legit”.Β 

With the introduction of GenAI, many of these constraints have been eliminated. Modern Large Language Models (LLMs) can write entire emails in less than 5 seconds, using any language of your choice and mimicking any writing style. These models do so by accurately translating not just words, but also syntax between different languages, resulting in crystal-clear messages free of typos and just as convincing as any legitimate email. Attackers no longer need to know even the basics of another language; they can trust that GenAI is doing a reliable job.Β 

McAfee Labs tracks these trends and periodically runs tests to validate our observations. It has been noted that earlier generations of LLMs (those released in the 2020 era) were able to produce phishing emails that could compromise 2 out of 10 victims. However, the results of a recent test revealed that newer generations of LLMs (2023/2024 era) are capable of creating phishing emails that are much more convincing and harder to spot by humans. As a result, they have the potential to compromise up to 49% more victims than a traditional human-written phishing emailΒΉ. Based on this, we observe that humans’ ability to spot phishing emails/texts is decreasing over time as newer LLM generations are released:Β 

Β 

Figure 1: how human ability to spot phishing diminishes as newer LLM generations are releasedΒ 

This creates an inevitable shift, where bad actors are able to increase the effectiveness and ROI of their attacks while victims find it harder and harder to identify them.Β 

Bad actors are also using GenAI to assist in malware creation, and while GenAI can’t (as of today) create malware code that fully evades detection, it’s undeniable that it is significantly aiding cybercriminals by accelerating the time-to-market for malware authoring and delivery. What’s more, malware creation that was historically the domain of sophisticated actors is now becoming more and more accessible to novice bad actors as GenAI compensates for lack of skill by helping develop snippets of code for malicious purposes. Ultimately, this creates a more dangerous overall landscape, where all bad actors are leveled up thanks to GenAI.Β 

Fighting BackΒ 

Since the clues we used to rely on are no longer there, more subtle and less obvious methods are required to detect dangerous GenAI content. Context is still king and that’s what users should pay attention to. Next time you receive an unexpected email or text, ask yourself: am I actually subscribed to this service? Is the alleged purchase date in alignment with what my credit card charges? Does this company usually communicate this way, or at all? Did I originate this request? Is it too good to be true? If you can’t find good answers, then chances are you are dealing with a scam.Β 

The good news is that defenders have also created AI to fight AI. McAfee’s Text Scam Protection uses AI to dig deeper into the underlying intent of text messages to stop scams, and AI specialized in flagging GenAI content, such as McAfee’s Deepfake Detector, can help users browse digital content with more confidence. Being vigilant and fighting malicious uses of AI with AI will allow us to safely navigate this exciting new digital world and confidently take advantage of all the opportunities it offers.Β 

Β 


ΒΉ As measured by McAfee, comparing human-written phishing emails with phishing emails generated using Phi-3 and evaluated with a population size of 2300.

The post The Dark Side of Gen AI appeared first on McAfee Blog.

❌