Reading view

There are new articles available, click to refresh the page.

This Year in Scams: A 2025 Retrospective, and a Look Ahead at 2026

By: McAfee
The Top Scams of 2025

They came by phone, by text, by email, and they even weaseled their way into people’s love lives—an entire host of scams that we covered here in our blogs throughout the year.

Today, we look back, picking five noteworthy scams that firmly established new trends, along with one in particular that gives us a hint at the face of scams to come.

Let’s start it off with one scam that pinged plenty of phones over the spring and summer: those toll road texts.

1 – The Texts That Jammed Everyone’s Phones: The Toll Road Scam

It was the hot new scam of 2025 that increased by 900% in one year: the toll road scam.

There’s a good chance you got a few of these this year,scam texts that say you have an unpaid tab for tolls and that you need to pay right away. And as always, they come with a handy link where you can pay up and avoid that threat of a “late fee.”

 

Of course, links like those took people to phishing sites where people gave scammers their payment info, which led to fraudulent charges on their cards. In some instances, the scammers took it a step further by asking for driver’s license and Social Security numbers, key pieces of info for big-time identity theft.

Who knows what the hot new text scam for 2026 will be, yet here are several ways you can stop text scams in their tracks, no matter what form they take:

How Can I Stop Text Scams?

Don’t click on any links in unexpected texts (or respond to them, either). Scammers want you to react quickly, but it’s best to stop and check it out.

Check to see if the text is legit. Reach out to the company that apparently contacted you using a phone number or website you know is real—not the info from the text.

Get our Scam Detector. It automatically detects scams by scanning URLs in your text messages. If you accidentally tap or click? Don’t worry, it blocks risky sites if you follow a suspicious link.

2 – Romancing the Bot: AI Chatbots and Images Finagle Their Way Into Romance Scams

It started with a DM. And a few months later, it cost her $1,200.

Earlier this year, we brought you the story of 25-year-old computer programmer Maggie K. who fell for a romance scam on Instagram. Her story played out like so many. When she and her online boyfriend finally agreed to meet in person, he claimed he missed his flight and needed money to rebook. Desperate to finally see him, she sent the money and never heard from him again.

But here’s the twist—he wasn’t real in the first place.

When she reported the scam to police, they determined his images were all made with AI. In Maggie’s words, “That was the scariest part—I had trusted someone who never even existed.”

Maggie isn’t alone. Our own research earlier this year revealed that more than half (52%) of people have been scammed out of money or pressured to send money or gifts by someone they met online.

Moreover, we found that scammers have fueled those figures with the use of AI. Of people we surveyed, more than 1 in 4 (26%) said they—or someone they know—have been approached by an AI chatbot posing as a real person on a dating app or social media.

We expect this trend will only continue, as AI tools make it easier and more efficient to pull off romance scams on an increasingly larger scale.

Even so, the guidelines for avoiding romance scams remain the same:

  • Never send money to someone you’ve never met in person.
  • Things move too fast, too soon—like when the other person starts talking about love almost right away.
  • They say they live far away and can’t meet in person because they live abroad, all part of a scammers story that they’re there for charity or military service.
  • Look out for stories of urgent financial need, such as sudden emergencies or requests for help with travel expenses to meet you.
  • Also watch out for people who ask for payment in gift cards, crypto, wire transfers, or other forms of payment that are tough to recover. That’s a sign of a scam.

3 – Paying to Get Paid: The New Job Scam That Raked in Millions

The job offer sounds simple enough … go online, review products, like videos, or do otherwise simple tasks and get paid doing it—until it’s time to get paid.

It’s a new breed of job scam that took root this spring, one where victims found themselves “paying to get paid.”

The FTC dubbed these scams as “gamified job scams” or “task scams.” Given the way these scams work, the naming fits.

It starts with a text or direct message from a “recruiter” offering work with the promise of making good money by “liking” or “rating” sets of videos or product images in an app, all with the vague purpose of “product optimization.” With each click, you earn a “commission” and see your “earnings” rack up in the app. You might even get a payout, somewhere between $5 and $20, just to earn your trust.

Then comes the hook.

Like a video game, the scammer sweetens the deal by saying the next batch of work can “level up” your earnings. But if you want to claim your “earnings” and book more work, you need to pay up. So you make the deposit, complete the task set, and when you try to get your pay the scammer and your money are gone. It was all fake.

This scam and others like it fall right in line with McAfee data that uncovered a spike in job-related scams of 1,000% between May and July,which undoubtedly built on 2024’s record-setting job scam losses of $501 million.

Whatever form they take, here’s how you can avoid job scams:

Step one—ignore job offers over text and social media

A proper recruiter will reach out to you by email or via a job networking site. Moreover, per the FTC, any job that pays you to “like” or “rate” content is against the law. That alone says it’s a scam.

Step two—look up the company

In the case of job offers in general, look up the company. Check out their background and see if it matches up with the job they’re pitching. In the U.S., The Better Business Bureau (BBB) offers a list of businesses you can search.

Step three—never pay to start a job.

Any case where you’re asked to pay to up front, with any form of payment, refuse, whether that’s for “training,” “equipment,” or more work. It’s a sign of a scam.

4 – Seeing is Believing is Out the Window: The Al Roker Deepfake Scam

Prince Harry, Taylor Swift, and now the Today show’s Al Roker, too, they’ve all found themselves as the AI-generated spokesperson for deepfake scams.

In the past, a deepfake Prince Harry pushed bogus investments, while another deepfake of Taylor Swift hawked a phony cookware deal. Then, this spring, a deepfake of Al Roker used his image and voice to promote a bogus hypertension cure—claiming, falsely, that he had suffered “a couple of heart attacks.”

 

The fabricated clip appeared on Facebook, which appeared convincing enough to fool plenty of people, including some of Roker’s own friends. “I’ve had some celebrity friends call because their parents got taken in by it,” said Roker.

While Meta quickly removed the video from Facebook after being contacted by TODAY, the damage was done. The incident highlights a growing concern in the digital age: how easy it is to create—and believe—convincing deepfakes.

Roker put it plainly, “We used to say, ‘Seeing is believing.’ Well, that’s kind of out the window now.”

In all, this stands as a good reminder to be skeptical of celebrity endorsements on social media. If public figure fronts an apparent deal for an investment, cookware, or a hypertension “cure” in your feed, think twice. And better yet, let our Scam Detector help you spot what’s real and what’s fake out there.

5 – September 2025: The First Agentic AI Attack Spotted in The Wild

And to close things out, a look at some recent news, which also serves as a look ahead.

Last September, researchers spotted something unseen before:a cyberattack almost entirely run by agentic AI.

What is Agentic AI?

Definition: Artificial intelligence systems that can independently plan, make decisions, and work toward specific goals with minimal human intervention; in this way, it executes complex tasks by adapting to new info and situations on its own.

Reported by AI researcher Anthropic, a Chinese state-sponsored group allegedly used the company’s Claude Code agent to automate most of an espionage campaign across nearly thirty organizations. Attackers allegedly bypassed guardrails that typically prevent such malicious use with jailbreaking techniques, which broke down their attacks into small, seemingly innocent tasks. That way, Claude orchestrated a large-scale attack it wouldn’t otherwise execute.

Once operational, the agent performed reconnaissance, wrote exploit code, harvested credentials, identified high-value databases, created backdoors, and generated documentation of the intrusion. By Anthropic’s estimate, they completed 80–90% of the work without any human involvement.

According to Anthropic: “At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.”

We knew this moment was coming, and now the time has arrived: what once took weeks of human effort to execute a coordinated attack now boils down to minutes as agentic AI does the work on someone’s behalf.

In 2026, we can expect to see more attacks led by agentic AI, along with AI-led scams as well, which raises an important question that Anthropic answers head-on:

If AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense. When sophisticated cyberattacks inevitably occur, our goal is for Claude—into which we’ve built strong safeguards—to assist cybersecurity professionals to detect, disrupt, and prepare for future versions of the attack.

That gets to the heart of security online: it’s an ever-evolving game. As new technologies arise, those who protect and those who harm one-up each other in a cycle of innovation and exploits. As we’re on the side of innovation here, you can be sure we’ll continue to roll out protections that keep you safer out there. Even as AI changes the game, our commitment remains the same.

Happy Holidays!

We’re taking a little holiday break here and we’ll be back with our weekly roundups again in 2026. Looking forward to catching up with you then and helping you stay safer in the new year.

The post This Year in Scams: A 2025 Retrospective, and a Look Ahead at 2026 appeared first on McAfee Blog.

Surge of OAuth Device Code Phishing Attacks Targets M365 Accounts

Financially motivated and nation-state threat groups are behind a surge in the use of device code phishing attacks that abuse Microsoft's legitimate OAuth 2.0 device authorization grant flow to trick users into giving them access to their M365 accounts, Proofpoint researchers say.

The post Surge of OAuth Device Code Phishing Attacks Targets M365 Accounts appeared first on Security Boulevard.

NCC Group Taps Qualys to Extend Managed Security Service into Shadow IT Realm

NCC Group this week revealed it has allied with Qualys to expand the scope of its managed attack surface management (ASM) services to address instances of shadow IT. Amber Mitchell, lead product manager for ASM at NCC Group, said the managed security service provider (MSSP) already provides a managed attack surface service, but aligning with..

The post NCC Group Taps Qualys to Extend Managed Security Service into Shadow IT Realm appeared first on Security Boulevard.

4 Pillars of Network Risk Reduction: A Guide to Network Security Risk Management

By: FireMon

Large enterprises today find themselves stuck in the “messy middle” of digital transformation, managing legacy on-premise firewalls from Palo Alto, Check Point, and Fortinet while simultaneously governing fast-growing cloud environments....

The post 4 Pillars of Network Risk Reduction: A Guide to Network Security Risk Management appeared first on Security Boulevard.

NDSS 2025 – Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems

Session 6C: Sensor Attacks

Authors, Creators & Presenters: Shuguang Wang (City University of Hong Kong), Qian Zhou (City University of Hong Kong), Kui Wu (University of Victoria), Jinghuai Deng (City University of Hong Kong), Dapeng Wu (City University of Hong Kong), Wei-Bin Lee (Information Security Center, Hon Hai Research Institute), Jianping Wang (City University of Hong Kong)

PAPER
NDSS 2025 - Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems

Autonomous driving systems (ADS) heavily depend on multi-sensor fusion (MSF) perception systems to process sensor data and improve the accuracy of environmental perception. However, MSF cannot completely eliminate uncertainties, and faults in multiple modules will lead to perception failures. Thus, identifying the root causes of these perception failures is crucial to ensure the reliability of MSF perception systems. Traditional methods for identifying perception failures, such as anomaly detection and runtime monitoring, are limited because they do not account for causal relationships between faults in multiple modules and overall system failure. To overcome these limitations, we propose a novel approach called interventional root cause analysis (IRCA). IRCA leverages the directed acyclic graph (DAG) structure of MSF to develop a hierarchical structural causal model (H-SCM), which effectively addresses the complexities of causal relationships. Our approach uses a divide-and-conquer pruning algorithm to encompass multiple causal modules within a causal path and to pinpoint intervention targets. We implement IRCA and evaluate its performance using real fault scenarios and synthetic scenarios with injected faults in the ADS Autoware. The average F1-score of IRCA in real fault scenarios is over 95%. We also illustrate the effectiveness of IRCA on an autonomous vehicle testbed equipped with Autoware, as well as a cross-platform evaluation using Apollo. The results show that IRCA can efficiently identify the causal paths leading to failures and significantly enhance the safety of ADS.


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenter’s superb NDSS Symposium 2025 Conference content on the Organizations' YouTube Channel.

Permalink

The post NDSS 2025 – Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems appeared first on Security Boulevard.

Preventing This Week’s AWS Cryptomining Attacks: Why Detection Fails and Permissions Matter

The recent discovery of a cryptomining campaign targeting Amazon compute resources highlights a critical gap in traditional cloud defense. Attackers are bypassing perimeter defenses by leveraging compromised credentials to execute legitimate but privileged API calls like ec2:CreateLaunchTemplate, ecs:RegisterTaskDefinition, ec2:ModifyInstanceAttribute, and lambda:CreateFunctionUrlConfig. While detection tools identify anomalies after they occur, they do not prevent execution, lateral […]

The post Preventing This Week’s AWS Cryptomining Attacks: Why Detection Fails and Permissions Matter appeared first on Security Boulevard.

Keeper Security Bolsters Federal Leadership to Advance Government Cybersecurity Initiatives

Keeper Security has announced the appointment of two new additions to its federal team, with Shannon Vaughn as Senior Vice President of Federal and Benjamin Parrish, Vice President of Federal Operations. Vaughn will lead Keeper’s federal business strategy and expansion, while Parrish will oversee the delivery and operational readiness of Keeper’s federal initiatives, supporting civilian, defence and intelligence agencies as they modernise identity security to defend against pervasive cyber threats.

Vaughn brings more than two decades of private sector, government and military service, with a career focused on securing sensitive data, modernising federal technology environments and supporting mission-critical cybersecurity operations. Prior to joining Keeper, Vaughn served as General Manager of Virtru Federal, where he led business development, operations and delivery for the company’s federal engagements. During his career, he has held multiple senior leadership roles at high-growth technology companies, including Vice President of Technology, Chief Product Owner and Chief Innovation Officer, and has worked closely with U.S. government customers to deploy secure, scalable solutions.

“Federal agencies are operating in an elevated environment with unprecedented cyber risk. Next-generation privileged access management to enforce zero-trust security is essential,” said Darren Guccione, CEO and Co-founder of Keeper Security. “Shannon and Ben bring a unique combination of operational military experience, federal technology leadership and a deep understanding of zero-trust security. They know how agencies operate, how threats evolve and how to translate modern security architecture into real mission outcomes. These exceptional additions to our team will be instrumental as we expand Keeper’s role in securing the federal government’s most critical systems, personnel and warfighters.”

Vaughn is a career member of the U.S. Army with more than 20 years of service and currently holds the rank of Lieutenant Colonel in the Army Reserves. In addition to his operational leadership, Vaughn is a Non-Resident Fellow with the Asia Program at the Foreign Policy Research Institute, where he contributes research and analysis on the intersection of future technology threats and near-peer adversaries. He has a graduate degree from Georgetown University and undergraduate degrees from the University of North Georgia and the Department of Defence Language Institute.

To support execution across federal programs, Parrish oversees the delivery and operational readiness of Keeper’s federal initiatives. Parrish brings extensive experience leading federal operations, software engineering and secure deployments across highly regulated government environments. Prior to joining Keeper, he held senior leadership roles supporting federal customers, where he oversaw cross-functional teams responsible for platform reliability, customer success and large-scale deployments.

Parrish is a retired U.S. Army officer with more than 20 years of service across Field Artillery, Aviation and Cyber operations. His experience includes a combat deployment to Iraq and operational support to national cyber mission forces through the Joint Mission Operations Center. He has supported Department of Defence and Intelligence Community missions, including work with the White House Communications Agency, Joint Special Operations Command, Defence Intelligence Agency and National Reconnaissance Office. Parrish holds a graduate degree in Computer Science from Arizona State University and an undergraduate degree in Computer Science from James Madison University.

In his role at Keeper, Parrish aligns product, engineering, security and customer success teams and works closely with government stakeholders to ensure secure, reliable deployments that meet stringent federal mission, compliance and operational requirements.

“Federal agencies are being asked to modernise faster while defending against increasingly sophisticated, identity-driven attacks,” said Shannon Vaughn, Senior Vice President of Federal at Keeper Security. “I joined Keeper because we are focused on what actually produces tangible cyber benefits: controlling who has access to what, with full auditing and reporting – whether for credentials, endpoint or access management. We are going to win by being obsessive about access control that is easy to deploy and hard to break.”

These appointments come as federal agencies accelerate adoption of zero-trust architectures and modern privileged access controls in response to escalating credential-based attacks. The FedRAMP Authorised, FIPS 140-3 validated Keeper Security Government Cloud platform secures privileged access across hybrid and cloud environments for federal, state and local government agencies seeking to manage access to critical systems such as servers, web applications and databases.

The post Keeper Security Bolsters Federal Leadership to Advance Government Cybersecurity Initiatives appeared first on IT Security Guru.

CultureAI Selected for Microsoft’s Agentic Launchpad Initiative to Advance Secure AI Usage

UK-based AI safety and governance company CultureAI has been named as one of the participants in Microsoft’s newly launched Agentic Launchpad, a technology accelerator aimed at supporting startups working on advanced AI systems. The inclusion marks a milestone for CultureAI’s growth and signals broader industry interest in integrating AI safety and usage control into emerging autonomous AI ecosystems.

The Agentic Launchpad is a collaborative programme from Microsoft, NVIDIA, and WeTransact designed to support software companies in the United Kingdom and Ireland that are developing agentic AI solutions. With more than 500 companies applying, the selected cohort of 13 pioneering organisations represents some of the most forward-thinking solutions shaping the future of AI. The initiative is part of Microsoft’s wider investment in UK AI research and infrastructure, which includes nearly $30 billion committed to developing cloud, AI, and innovation capabilities in the region.

Selected companies in the program receive access to technical resources from Microsoft and NVIDIA, including engineering mentorship, cloud credits via Microsoft Azure, and participation in co-innovation sessions. Participants also gain commercial support, such as marketing assistance, networking opportunities and opportunities to showcase products to enterprise customers and investors.

CultureAI’s inclusion underscores an increasing industry emphasis on safe and compliant AI deployment. The company’s platform focuses on detecting unsafe AI usages, enforcing organisational policies during AI interactions, and providing real-time coaching to guide secure behaviour. This type of AI usage control has drawn interest from sectors with strict data governance and security requirements, including finance, healthcare, and regulated industries.

By working within the Agentic Launchpad cohort, CultureAI gains a strategic opportunity to integrate its usage risk and compliance controls with agentic AI development frameworks — an area where autonomous systems may introduce new vectors for inadvertent data exposure or misuse if not carefully governed.

Agentic AI represents a next stage of artificial intelligence that extends beyond generative tasks like text or image creation toward systems that can plan, act and autonomously execute sequences of decisions. This shift brings potential benefits in efficiency and automation, but also raises new challenges for risk management and governance in production environments.

Experts have noted that while initiatives like the Agentic Launchpad aim to accelerate innovation, they also emphasise robust tooling and ecosystem support to address security, operational governance and compliance in emerging AI applications. In this context, companies specialising in usage control and risk detection, such as CultureAI, might play a growing role as enterprises adopt more autonomous AI technologies.

The inclusion of AI safety-oriented companies like CultureAI in accelerator programmes reflects a broader trend in the industry toward embedding governance and risk mitigation into the core of AI development cycles. As agentic AI systems begin to move from laboratories into real-world use cases, particularly in sensitive or regulated domains, ensuring safe interaction with data and policy compliance may become a key differentiator for enterprise adoption.

“This recognition reflects the urgency organisations face today,” said James Moore, Founder & CEO of CultureAI. “AI is now embedded across everyday workflows, and companies need a safe, scalable way to adopt it. Our mission is to give them that confidence — through visibility, real-time coaching and adaptive guardrails that protect data without slowing innovation.”

The post CultureAI Selected for Microsoft’s Agentic Launchpad Initiative to Advance Secure AI Usage appeared first on IT Security Guru.

Vulnerability Management’s New Mandate: Remediate What’s Real

Live from AWS re:Invent, Snir Ben Shimol makes the case that vulnerability management is at an inflection point: visibility is no longer the differentiator—remediation is. Organizations have spent two decades getting better at scanning, aggregating and reporting findings. But the uncomfortable truth is that many of today’s incidents still trace back to vulnerabilities that were..

The post Vulnerability Management’s New Mandate: Remediate What’s Real appeared first on Security Boulevard.

Amazon Warns Perncious Fake North Korea IT Worker Threat Has Become Widespread

report, LayerX

Amazon is warning organizations that a North Korean effort to impersonate IT workers is more extensive than many cybersecurity teams may realize after discovering the cloud service provider was also victimized. A North Korean imposter was uncovered working as a remote systems administrator in the U.S. after their keystroke input lag raised suspicions. Normally, keystroke..

The post Amazon Warns Perncious Fake North Korea IT Worker Threat Has Become Widespread appeared first on Security Boulevard.

NDSS 2025 – PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation

Session 6C: Sensor Attacks

Authors, Creators & Presenters: Yan Jiang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Yancheng Jiang (Zhejiang University), Kai Wang (Zhejiang University), Chenren Xu (Peking University), Wenyuan Xu (Zhejiang University)

PAPER
NDSS 2025 - PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation

Sensors are key components to enable various applications, e.g., home intrusion detection, and environment monitoring. While various software defenses and physical protections are used to prevent sensor manipulation, this paper introduces a new threat vector, PowerRadio, which can bypass existing protections and change the sensor readings at a distance. PowerRadio leverages interconnected ground (GND) wires, a standard practice for electrical safety at home, to inject malicious signals. The injected signal is coupled by the sensor's analog measurement wire and eventually, it survives the noise filters, inducing incorrect measurement. We present three methods that can manipulate sensors by inducing static bias, periodical signals, or pulses. For instance, we show adding stripes into the captured images of a surveillance camera or injecting inaudible voice commands into conference microphones. We study the underlying principles of PowerRadio and find its root causes: (1) the lack of shielding between ground and data signal wires and (2) the asymmetry of circuit impedance that enables interference to bypass filtering. We validate PowerRadio against a surveillance system, broadcast system, and various sensors. We believe that PowerRadio represents an emerging threat that exhibits the pros of both radiated and conducted EMI, e.g., expanding the effective attack distance of radiated EMI yet eliminating the requirement of line-of-sight or approaching physically. Our insights shall provide guidance for enhancing the sensors' security and power wiring during the design phases.


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenter’s superb NDSS Symposium 2025 Conference content on the Organizations' YouTube Channel.

Permalink

The post NDSS 2025 – PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation appeared first on Security Boulevard.

Cybersecurity Snapshot: Cyber Pros Emerge as Bold AI Adopters, While AI Changes Data Security Game, CSA Reports Say

Formerly “AI shy” cyber pros have done a 180 and become AI power users, as AI forces data security changes, the CSA says. Plus, PwC predicts orgs will get serious about responsible AI usage in 2026, while the NCSC states that, no, prompt injection isn’t the new SQL injection. And much more!

Key takeaways

  1. Cyber pros have pivoted to AI: Formerly AI-reluctant, cybersecurity teams have rapidly become enthusiastic power users, with over 90% of surveyed professionals now testing or planning to use AI to combat cyber threats.
  2. Data security requires an AI overhaul: The Cloud Security Alliance warns that traditional data security pillars require a "refresh" to address unique AI risks such as prompt injection, model inversion, and multi-modal data leakage.
  3. Prompt injection isn't a quick fix: Unlike SQL injection, which can be solved with secure coding, prompt injection exploits the fundamental "confusability" of LLMs and requires ongoing risk management rather than a simple patch.

Here are five things you need to know for the week ending December 19.

1 - CSA-Google study: Cyber teams heart AI security tools

Who woulda thunk it?

Once seen as artificial intelligence (AI) laggards, cybersecurity teams have become their organizations’ most enthusiastic AI users.

That’s one of the key findings from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, published this week.

“AI in security has reached an inflection point. After years of being cautious followers, security teams are now among the earliest adopters of AI, demonstrating both curiosity and confidence,” the report reads. 

Specifically, more than 90% of respondents are assessing how AI can enhance detection, investigation, or response processes by either already testing AI security capabilities (48%), or planning to do so within the next year (44%). 

“This proactive posture not only improves defensive capabilities but also reshapes the role of security — from a function that reacts to new technologies, to one that helps lead and shape how they are safely deployed,” the report adds.
 

Chart from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud

(Source: “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, December 2025)

Here are more findings from the report, which is based on a global survey of 300 IT and security professionals:

  • Governance maturity begets AI readiness and innovation: Organizations with comprehensive policies are nearly twice as likely to adopt agentic AI (46%) than those with partial guidelines or in-development policies.
  • The "Big Four" dominate: The AI landscape is consolidated around a few major players: OpenAI’s GPT (70%), Google’s Gemini (48%), Anthropic’s Claude (29%) and Meta’s LLaMa (20%).
  • There’s a confidence gap: While 70% of executives say they are aware of AI security implications, 73% remain neutral or lack confidence in their organization's ability to execute a security strategy.
  • Organizations’ AI security priorities are misplaced: Respondents cite data exposure (52%) as their top concern, often overlooking AI-specific threats like model integrity (12%) and data poisoning (10%).

“This year’s survey confirms that organizations are shifting from experimentation to meaningful operational use. What’s most notable throughout this process is the heightened awareness that now accompanies the pace of [AI] deployment,” Hillary Baron, the CSA’s Senior Technical Research Director, said in a statement. 

Recommendations from the report include:

  • Expand your AI governance using AI-specific industry frameworks, and complement these efforts with independent assessments and advisory services.
  • Boost your AI cybersecurity skills through training, upskilling, and cross-team collaboration.
  • Adopt secure-by-design principles when developing AI systems.
  • Track key AI metrics for things such as AI incidents, training completion rates, AI systems under governance, and AI projects reviewed for risk and threats.

“Strong governance is how you create stability in the face of rapid change. It’s how you ensure AI accelerates the business rather than putting it at risk,” reads a CSA blog.

For more information about using AI for cybersecurity:

2 - CSA: You need new data security controls in AI environments

Do the classic pillars of data security – confidentiality, integrity and availability – still hold up in the age of generative AI? According to a new white paper from the Cloud Security Alliance (CSA), they remain essential, but they require a significant overhaul to survive the unique pressures of modern AI.

The paper, titled “Data Security within AI Environments,” maps existing security controls to the AI data lifecycle and identifies critical gaps where current safeguards fall short. It argues that the rise of agentic AI and multi-modal systems creates attack vectors that traditional perimeter security simply cannot address.
 

Cover page of CSA report  “Data Security within AI Environments”

Here are a few key takeaways and recommendations from the report:

  • New controls proposed: The CSA suggests adding four new controls to its AI Controls Matrix (AICM) to specifically address prompt injection defense; model inversion and membership inference protection; federated learning governance; and shadow AI detection.
  • Multi-modal risks: Systems that process text, images and audio simultaneously introduce "unprecedented cross-modal data leakage risks," where information from one modality can inadvertently expose sensitive data from another. The CSA suggests enforcing clear standards and isolation controls to prevent such cross-modal leaks.
  • Third-party guardrails: As regulatory scrutiny increases, organizations must adopt enforceable policies, such as data tagging and contractual safeguards, to ensure proprietary client data is not used to train third-party models.
  • Dynamic defense: Because AI threats evolve rapidly, static measures are insufficient. The report recommends establishing a peer review cycle every 6 to 12 months to reassess safeguards.

"The foundational principles of data security—confidentiality, integrity, and availability—remain essential, but they must be applied differently in modern AI systems," reads the report.

For more information about securing data in AI systems:

3 - PwC: Responsible AI will gain traction in 2026

Is your organization still treating responsible AI usage as a compliance checkbox, or are you leveraging it to drive growth? 

A new prediction from PwC suggests that 2026 will be the year companies finally stop just talking about responsible AI and start making it work for their bottom line.

In its “2026 AI Business Predictions,” PwC forecasts that responsible AI is moving "from talk to traction." This shift is being driven not just by regulatory pressure, but by the realization that governance delivers tangible business value. In fact, almost 60% of executives in PwC's “2025 Responsible AI Survey” reported that their investments in this area are already boosting return on investment (ROI).
 

Chart from PwC report “2026 AI Business Predictions”

To capitalize on this trend, PwC advises organizations to stop treating AI governance as a siloed function, and to instead take steps including:

  • Integrate early: Bring IT, risk and AI specialists together from the start of the project lifecycle.
  • Automate oversight: Explore new technical capabilities that can operationalize testing and monitoring.
  • Add assurance: For high-risk or high-value systems, independent assessments may be critical for managing performance and risk.

“2026 could be the year when companies overcome this challenge and roll out repeatable, rigorous responsible AI practices,” the report states.

For more information about secure and responsible AI use, check out these Tenable resources:

4 - Report: Ransomware victims paid $2.1B from 2022 to 2024

If you thought ransomware activity felt explosive in recent years, the U.S. Treasury Department has the receipts to prove you right. 

Ransomware skyrocketed between 2022 and 2024, a three-year period in which incidents and ransom payments grew exponentially compared with the previous nine years.

The finding comes from the U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024.” 

Between January 2022 and December 2024, FinCEN received almost 7,400 reports tied to almost 4,200 ransomware incidents totaling more than $2.1 billion in ransomware payments.

By contrast, during the previous nine-year period – 2013 through 2021 – FinCEN received 3,075 reports totaling approximately $2.4 billion in ransomware payments. 

The report is based on Bank Secrecy Act (BSA) data submitted by financial institutions to FinCEN, which is part of the U.S. Treasury Department.

Chart from U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024”

(Source: U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024,” December 2025)

Here are a few key findings from the report:

  • Record-breaking 2023: Ransomware incidents and payments peaked in 2023, with 1,512 incidents and about $1.1 billion, a dollar amount increase of 77% from 2022.
  • Slight dip, still high: While 2024 saw a slight decrease to 1,476 incidents and $734 million in payments, it remained the third-highest yearly total on record.
  • Median payment amounts: The median amount of a ransom payment was about $124,000 in 2022; $175,000 in 2023; and $155,250 in 2024.
  • Most targeted sectors: The financial services, manufacturing and healthcare industries reported the highest number of incidents and payment amounts.
  • Top variants: FinCEN identified 267 unique ransomware variants, with Akira, ALPHV/BlackCat, LockBit, Phobos and Black Basta being the most frequently reported.
  • Crypto choice: Bitcoin remains the primary payment method, accounting for 97% of reported transactions, followed distantly by Monero (XMR).

How can organizations better align their financial compliance and cybersecurity operations to combat ransomware? The report emphasizes the importance of integrating financial intelligence with technical defense mechanisms. 

FinCEN recommends the following actions for organizations:

  • Leverage threat data: Incorporate indicators of compromise (IOCs) from threat data sources into intrusion detection and security alert systems to enable active blocking or reporting.
  • Engage law enforcement: Contact federal agencies immediately regarding activity and consult the U.S. Office of Foreign Assets Control (OFAC) to check for sanction nexuses.
  • Enhance reporting: When reporting suspicious activity to FinCEN, include specific IOCs such as file hashes, domains and convertible virtual currency (CVC) addresses.
  • Update compliance programs: Review anti-money laundering (AML) programs to incorporate red flag indicators associated with ransomware payments.

For more information about current ransomware trends:

5 - NCSC: Don’t conflate SQL injection and prompt injection

SQL injection and prompt injection aren’t interchangeable terms, the U.K.’s cybersecurity agency wants you to know.

In the blog post “Prompt injection is not SQL injection (it may be worse),” the National Cyber Security Centre unpacks the key differences between these two types of cyber attacks, saying that knowing the differences is critical.

“On the face of it, prompt injection can initially feel similar to that well known class of application vulnerability, SQL injection. However, there are crucial differences that if not considered can severely undermine mitigations,” the blog reads.

While both issues involve an attacker mixing malicious "data" with system "instructions," the fundamental architecture of large language models (LLMs) makes prompt injection significantly harder to fix.
 

UK NCSC logo

The reason is that SQL databases operate on rigid logic where data and commands can be clearly separated via, for example, parameterization. Meanwhile, LLMs operate probabilistically, predicting the "next token" without inherently understanding the difference between a user's input and a developer's instruction.

“Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt,” the blog reads.

So how can you mitigate the prompt injection risk? Here are some of the NCSC’s recommendations:

  • Developer and organization awareness: Since prompt injection is a relatively new and often misunderstood vulnerability, organizations must ensure developers receive specific training. Security teams should treat it as a residual risk that requires ongoing management through design and operation, rather than relying on a single product to fix it.
  • Secure design: Because LLMs are “inherently confusable,” designers should implement deterministic, non-LLM safeguards to constrain system actions. A key principle is to limit the LLM's privileges to match the trust level of the user providing the input.
  • Make it harder: While no technique can stop prompt injection entirely, methods such as marking data sections or using XML tags can reduce the likelihood of success. The NCSC warns against relying on “deny-listing” specific phrases, as attackers can easily rephrase inputs to bypass filters.
  • Monitor: Organizations should log LLM inputs, outputs and API calls to detect suspicious activity. Monitoring for failed tool calls can help identify attackers who are honing their techniques against the system.

For more information about AI prompt injection attacks:

The post Cybersecurity Snapshot: Cyber Pros Emerge as Bold AI Adopters, While AI Changes Data Security Game, CSA Reports Say appeared first on Security Boulevard.

Google Shutting Down Dark Web Report Met with Mixed Reactions

OAuth, XSS, Google WhiteSource Log4j Deepfence threat report

Google is shutting down its dark web report tool, which was released in 2023 to alert users when their information was found available on the darknet. However, while the report sent alerts, Google said users found it didn't give them next steps to take if their data was detected.

The post Google Shutting Down Dark Web Report Met with Mixed Reactions appeared first on Security Boulevard.

❌