Authors, Creators & Presenters: Shuguang Wang (City University of Hong Kong), Qian Zhou (City University of Hong Kong), Kui Wu (University of Victoria), Jinghuai Deng (City University of Hong Kong), Dapeng Wu (City University of Hong Kong), Wei-Bin Lee (Information Security Center, Hon Hai Research Institute), Jianping Wang (City University of Hong Kong)
PAPER
NDSS 2025 - Interventional Root Cause Analysis Of Failures In Multi-Sensor Fusion Perception Systems
Autonomous driving systems (ADS) heavily depend on multi-sensor fusion (MSF) perception systems to process sensor data and improve the accuracy of environmental perception. However, MSF cannot completely eliminate uncertainties, and faults in multiple modules will lead to perception failures. Thus, identifying the root causes of these perception failures is crucial to ensure the reliability of MSF perception systems. Traditional methods for identifying perception failures, such as anomaly detection and runtime monitoring, are limited because they do not account for causal relationships between faults in multiple modules and overall system failure. To overcome these limitations, we propose a novel approach called interventional root cause analysis (IRCA). IRCA leverages the directed acyclic graph (DAG) structure of MSF to develop a hierarchical structural causal model (H-SCM), which effectively addresses the complexities of causal relationships. Our approach uses a divide-and-conquer pruning algorithm to encompass multiple causal modules within a causal path and to pinpoint intervention targets. We implement IRCA and evaluate its performance using real fault scenarios and synthetic scenarios with injected faults in the ADS Autoware. The average F1-score of IRCA in real fault scenarios is over 95%. We also illustrate the effectiveness of IRCA on an autonomous vehicle testbed equipped with Autoware, as well as a cross-platform evaluation using Apollo. The results show that IRCA can efficiently identify the causal paths leading to failures and significantly enhance the safety of ADS.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
The recent discovery of a cryptomining campaign targeting Amazon compute resources highlights a critical gap in traditional cloud defense. Attackers are bypassing perimeter defenses by leveraging compromised credentials to execute legitimate but privileged API calls like ec2:CreateLaunchTemplate, ecs:RegisterTaskDefinition, ec2:ModifyInstanceAttribute, and lambda:CreateFunctionUrlConfig. While detection tools identify anomalies after they occur, they do not prevent execution, lateral […]
Live from AWS re:Invent, Snir Ben Shimol makes the case that vulnerability management is at an inflection point: visibility is no longer the differentiator—remediation is. Organizations have spent two decades getting better at scanning, aggregating and reporting findings. But the uncomfortable truth is that many of today’s incidents still trace back to vulnerabilities that were..
Amazon is warning organizations that a North Korean effort to impersonate IT workers is more extensive than many cybersecurity teams may realize after discovering the cloud service provider was also victimized. A North Korean imposter was uncovered working as a remote systems administrator in the U.S. after their keystroke input lag raised suspicions. Normally, keystroke..
Authors, Creators & Presenters: Yan Jiang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Yancheng Jiang (Zhejiang University), Kai Wang (Zhejiang University), Chenren Xu (Peking University), Wenyuan Xu (Zhejiang University)
PAPER
NDSS 2025 - PowerRadio: Manipulate Sensor Measurement Via Power GND Radiation
Sensors are key components to enable various applications, e.g., home intrusion detection, and environment monitoring. While various software defenses and physical protections are used to prevent sensor manipulation, this paper introduces a new threat vector, PowerRadio, which can bypass existing protections and change the sensor readings at a distance. PowerRadio leverages interconnected ground (GND) wires, a standard practice for electrical safety at home, to inject malicious signals. The injected signal is coupled by the sensor's analog measurement wire and eventually, it survives the noise filters, inducing incorrect measurement. We present three methods that can manipulate sensors by inducing static bias, periodical signals, or pulses. For instance, we show adding stripes into the captured images of a surveillance camera or injecting inaudible voice commands into conference microphones. We study the underlying principles of PowerRadio and find its root causes: (1) the lack of shielding between ground and data signal wires and (2) the asymmetry of circuit impedance that enables interference to bypass filtering. We validate PowerRadio against a surveillance system, broadcast system, and various sensors. We believe that PowerRadio represents an emerging threat that exhibits the pros of both radiated and conducted EMI, e.g., expanding the effective attack distance of radiated EMI yet eliminating the requirement of line-of-sight or approaching physically. Our insights shall provide guidance for enhancing the sensors' security and power wiring during the design phases.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Formerly “AI shy” cyber pros have done a 180 and become AI power users, as AI forces data security changes, the CSA says. Plus, PwC predicts orgs will get serious about responsible AI usage in 2026, while the NCSC states that, no, prompt injection isn’t the new SQL injection. And much more!
Key takeaways
Cyber pros have pivoted to AI: Formerly AI-reluctant, cybersecurity teams have rapidly become enthusiastic power users, with over 90% of surveyed professionals now testing or planning to use AI to combat cyber threats.
Data security requires an AI overhaul: The Cloud Security Alliance warns that traditional data security pillars require a "refresh" to address unique AI risks such as prompt injection, model inversion, and multi-modal data leakage.
Prompt injection isn't a quick fix: Unlike SQL injection, which can be solved with secure coding, prompt injection exploits the fundamental "confusability" of LLMs and requires ongoing risk management rather than a simple patch.
Here are five things you need to know for the week ending December 19.
1 - CSA-Google study: Cyber teams heart AI security tools
Who woulda thunk it?
Once seen as artificial intelligence (AI) laggards, cybersecurity teams have become their organizations’ most enthusiastic AI users.
“AI in security has reached an inflection point. After years of being cautious followers, security teams are now among the earliest adopters of AI, demonstrating both curiosity and confidence,” the report reads.
Specifically, more than 90% of respondents are assessing how AI can enhance detection, investigation, or response processes by either already testing AI security capabilities (48%), or planning to do so within the next year (44%).
“This proactive posture not only improves defensive capabilities but also reshapes the role of security — from a function that reacts to new technologies, to one that helps lead and shape how they are safely deployed,” the report adds.
(Source: “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, December 2025)
Here are more findings from the report, which is based on a global survey of 300 IT and security professionals:
Governance maturity begets AI readiness and innovation: Organizations with comprehensive policies are nearly twice as likely to adopt agentic AI (46%) than those with partial guidelines or in-development policies.
The "Big Four" dominate: The AI landscape is consolidated around a few major players: OpenAI’s GPT (70%), Google’s Gemini (48%), Anthropic’s Claude (29%) and Meta’s LLaMa (20%).
There’s a confidence gap: While 70% of executives say they are aware of AI security implications, 73% remain neutral or lack confidence in their organization's ability to execute a security strategy.
Organizations’ AI security priorities are misplaced: Respondents cite data exposure (52%) as their top concern, often overlooking AI-specific threats like model integrity (12%) and data poisoning (10%).
“This year’s survey confirms that organizations are shifting from experimentation to meaningful operational use. What’s most notable throughout this process is the heightened awareness that now accompanies the pace of [AI] deployment,” Hillary Baron, the CSA’s Senior Technical Research Director, said in a statement.
Recommendations from the report include:
Expand your AI governance using AI-specific industry frameworks, and complement these efforts with independent assessments and advisory services.
Boost your AI cybersecurity skills through training, upskilling, and cross-team collaboration.
Adopt secure-by-design principles when developing AI systems.
Track key AI metrics for things such as AI incidents, training completion rates, AI systems under governance, and AI projects reviewed for risk and threats.
“Strong governance is how you create stability in the face of rapid change. It’s how you ensure AI accelerates the business rather than putting it at risk,” reads a CSA blog.
For more information about using AI for cybersecurity:
2 - CSA: You need new data security controls in AI environments
Do the classic pillars of data security – confidentiality, integrity and availability – still hold up in the age of generative AI? According to a new white paper from the Cloud Security Alliance (CSA), they remain essential, but they require a significant overhaul to survive the unique pressures of modern AI.
The paper, titled “Data Security within AI Environments,” maps existing security controls to the AI data lifecycle and identifies critical gaps where current safeguards fall short. It argues that the rise of agentic AI and multi-modal systems creates attack vectors that traditional perimeter security simply cannot address.
Here are a few key takeaways and recommendations from the report:
New controls proposed: The CSA suggests adding four new controls to its AI Controls Matrix (AICM) to specifically address prompt injection defense; model inversion and membership inference protection; federated learning governance; and shadow AI detection.
Multi-modal risks: Systems that process text, images and audio simultaneously introduce "unprecedented cross-modal data leakage risks," where information from one modality can inadvertently expose sensitive data from another. The CSA suggests enforcing clear standards and isolation controls to prevent such cross-modal leaks.
Third-party guardrails: As regulatory scrutiny increases, organizations must adopt enforceable policies, such as data tagging and contractual safeguards, to ensure proprietary client data is not used to train third-party models.
Dynamic defense: Because AI threats evolve rapidly, static measures are insufficient. The report recommends establishing a peer review cycle every 6 to 12 months to reassess safeguards.
"The foundational principles of data security—confidentiality, integrity, and availability—remain essential, but they must be applied differently in modern AI systems," reads the report.
For more information about securing data in AI systems:
3 - PwC: Responsible AI will gain traction in 2026
Is your organization still treating responsible AI usage as a compliance checkbox, or are you leveraging it to drive growth?
A new prediction from PwC suggests that 2026 will be the year companies finally stop just talking about responsible AI and start making it work for their bottom line.
In its “2026 AI Business Predictions,” PwC forecasts that responsible AI is moving "from talk to traction." This shift is being driven not just by regulatory pressure, but by the realization that governance delivers tangible business value. In fact, almost 60% of executives in PwC's “2025 Responsible AI Survey” reported that their investments in this area are already boosting return on investment (ROI).
To capitalize on this trend, PwC advises organizations to stop treating AI governance as a siloed function, and to instead take steps including:
Integrate early: Bring IT, risk and AI specialists together from the start of the project lifecycle.
Automate oversight: Explore new technical capabilities that can operationalize testing and monitoring.
Add assurance: For high-risk or high-value systems, independent assessments may be critical for managing performance and risk.
“2026 could be the year when companies overcome this challenge and roll out repeatable, rigorous responsible AI practices,” the report states.
For more information about secure and responsible AI use, check out these Tenable resources:
4 - Report: Ransomware victims paid $2.1B from 2022 to 2024
If you thought ransomware activity felt explosive in recent years, the U.S. Treasury Department has the receipts to prove you right.
Ransomware skyrocketed between 2022 and 2024, a three-year period in which incidents and ransom payments grew exponentially compared with the previous nine years.
Between January 2022 and December 2024, FinCEN received almost 7,400 reports tied to almost 4,200 ransomware incidents totaling more than $2.1 billion in ransomware payments.
By contrast, during the previous nine-year period – 2013 through 2021 – FinCEN received 3,075 reports totaling approximately $2.4 billion in ransomware payments.
The report is based on Bank Secrecy Act (BSA) data submitted by financial institutions to FinCEN, which is part of the U.S. Treasury Department.
(Source: U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024,” December 2025)
Here are a few key findings from the report:
Record-breaking 2023: Ransomware incidents and payments peaked in 2023, with 1,512 incidents and about $1.1 billion, a dollar amount increase of 77% from 2022.
Slight dip, still high: While 2024 saw a slight decrease to 1,476 incidents and $734 million in payments, it remained the third-highest yearly total on record.
Median payment amounts: The median amount of a ransom payment was about $124,000 in 2022; $175,000 in 2023; and $155,250 in 2024.
Most targeted sectors: The financial services, manufacturing and healthcare industries reported the highest number of incidents and payment amounts.
Top variants: FinCEN identified 267 unique ransomware variants, with Akira, ALPHV/BlackCat, LockBit, Phobos and Black Basta being the most frequently reported.
Crypto choice: Bitcoin remains the primary payment method, accounting for 97% of reported transactions, followed distantly by Monero (XMR).
How can organizations better align their financial compliance and cybersecurity operations to combat ransomware? The report emphasizes the importance of integrating financial intelligence with technical defense mechanisms.
FinCEN recommends the following actions for organizations:
Leverage threat data: Incorporate indicators of compromise (IOCs) from threat data sources into intrusion detection and security alert systems to enable active blocking or reporting.
Engage law enforcement: Contact federal agencies immediately regarding activity and consult the U.S. Office of Foreign Assets Control (OFAC) to check for sanction nexuses.
Enhance reporting: When reporting suspicious activity to FinCEN, include specific IOCs such as file hashes, domains and convertible virtual currency (CVC) addresses.
Update compliance programs: Review anti-money laundering (AML) programs to incorporate red flag indicators associated with ransomware payments.
For more information about current ransomware trends:
5 - NCSC: Don’t conflate SQL injection and prompt injection
SQL injection and prompt injection aren’t interchangeable terms, the U.K.’s cybersecurity agency wants you to know.
In the blog post “Prompt injection is not SQL injection (it may be worse),” the National Cyber Security Centre unpacks the key differences between these two types of cyber attacks, saying that knowing the differences is critical.
“On the face of it, prompt injection can initially feel similar to that well known class of application vulnerability, SQL injection. However, there are crucial differences that if not considered can severely undermine mitigations,” the blog reads.
While both issues involve an attacker mixing malicious "data" with system "instructions," the fundamental architecture of large language models (LLMs) makes prompt injection significantly harder to fix.
The reason is that SQL databases operate on rigid logic where data and commands can be clearly separated via, for example, parameterization. Meanwhile, LLMs operate probabilistically, predicting the "next token" without inherently understanding the difference between a user's input and a developer's instruction.
“Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt,” the blog reads.
So how can you mitigate the prompt injection risk? Here are some of the NCSC’s recommendations:
Developer and organization awareness: Since prompt injection is a relatively new and often misunderstood vulnerability, organizations must ensure developers receive specific training. Security teams should treat it as a residual risk that requires ongoing management through design and operation, rather than relying on a single product to fix it.
Secure design: Because LLMs are “inherently confusable,” designers should implement deterministic, non-LLM safeguards to constrain system actions. A key principle is to limit the LLM's privileges to match the trust level of the user providing the input.
Make it harder: While no technique can stop prompt injection entirely, methods such as marking data sections or using XML tags can reduce the likelihood of success. The NCSC warns against relying on “deny-listing” specific phrases, as attackers can easily rephrase inputs to bypass filters.
Monitor: Organizations should log LLM inputs, outputs and API calls to detect suspicious activity. Monitoring for failed tool calls can help identify attackers who are honing their techniques against the system.
For more information about AI prompt injection attacks:
Google is shutting down its dark web report tool, which was released in 2023 to alert users when their information was found available on the darknet. However, while the report sent alerts, Google said users found it didn't give them next steps to take if their data was detected.
StackHawk co-founder and CSO Scott Gerlach has spent most of his career running security teams, and his take on application security is shaped by a simple reality: developers are still too often the last to know when their code ships with risk. Gerlach explains why that gap has widened in the age of modern CI/CD,..
Lefteris Tzelepis, CISO at Steelmet /Viohalco Companies, was shaped by cybersecurity. From his early exposure to real-world attacks at the Greek Ministry of Defense to building and leading security programs inside complex enterprises, his career mirrors the evolution of the CISO role itself. Now a group CISO overseeing security across multiple organizations, Lefteris brings a [...]
Dec 19, 2025 - Jeremy Snyder - New beginnings, such as new years, provide a nice opportunity to look back at what we have just experienced, as well as look forward to what to expect. 2022 was a year of transition in many ways, and 2023 may well be the same. I wanted to reflect on some of those transitions from a few different perspectives:
* The market and the world
* Venture capital
* Cybersecurity
* FireTail
WHAT IS GOING ON WITH THE MARKET?
ECONOMIC TRANSITION:
2022 started with a strong macroeconomic outlook, after a massive positive swing in 2021, but then delivered a very strong downward performance, -35% for the year:
S&P North America Tech Index performance, 2022
The “Internet” sector (if you can call that one sector) performed even worse for the year, down 45%:
Dow Jones Internet composite index performance, 2022
Perhaps one interesting observation there is that the correction on the internet side happened in late Q1 and throughout Q2, with a pretty flat performance for the second half of the year.
The consensus by the end of year is that the overall economic situation in 2022 was…weird. Layoffs in the tech sector started part way through the year, and continued until the very last days of 2022. Yet, the unemployment rate remains a very low 3.5% in the USA, and tech companies find it difficult to find good job candidates.
For most people, the worst aspect of the economic changes in 2022 will be the return of aggressive inflation.
TRANSITION IN THE WORLD AROUND US
It was also a transition year of “back to normal”, following the possible end of the COVID-19 pandemic.
* Travel got back to almost pre-pandemic levels
* The return to the office started, in preparation for an expected 90% return in 2023. Side note - this also led to the quiet quitting syndrome.
* The peace index was a mixed bag, with increased peace in 6 regions, but continued conflict in MENA, and of course the unprovoked and unjust Russian invasion of Ukraine.
WHAT ABOUT VENTURE CAPITAL?
A lot of innovative companies get their start with the support of venture capital, as does FireTail. In general, venture capital (VC) follows macroeconomic trends. So as you might expect, VC did indeed slow down in 2022. But there is nuance, according to TechCrunch:
> "In the second quarter of 2022, global venture totals dipped, but inside of that slowdown is a shift away from the super-late-stage deals that helped push the value of VC deal-making to all-time highs last year."
AND VENTURE CAPITAL FOR CYBERSECURITY?
Just like the macroeconomic climate, there are adjustments going on.
> "I would say it's more about a year of change, reacting to new realities, figuring out what a new normal looks like. In the end, start-up valuations are based on what the public market is doing. Even acquisitions, M&A activities, are going to follow what’s happening on the public markets."
> "We’ve seen public market valuations grow so quickly and then drop so quickly, and we're still figuring out what the new normal will be. There’s still a lot of uncertainty. I don't think any of us really knows what the rest of the year will look like or what the new normal will be. It’s all part of the ebb and flow of the economy." - Will Lin
That question of valuations rising and falling is especially striking, coming out of 2021. There were a number of so-called “unicorns” created in cybersecurity in 2021. Rumors and whispers, even at the time, suggested that many of these companies hadn’t reached the unwritten rule of $100M recurring revenue, based on a longstanding practice of valuing companies at 10 times their revenues. And what happens to these companies now - are they zombies?
> “We were also skeptical of some of these unicorns, with some receiving a $1B+ valuation at the same time we were hearing rumors of $5M ARR.” - The Cyber Why
How does this all match up? From one author and former industry analyst:
> “June 2022 is the most bizarre month I’ve ever seen. June announced both three new cybersecurity unicorns and 1500 employees laid off from 9 cybersecurity vendors in the same month.” - The Cyber Why
Another analysis shows a decline in cyber funding, so there’s an open question about what that means moving forward. But there’s also contradictory evidence showing a lot of cybersecurity activity in Q4 of 2022.
WHAT’S THE STATE OF CYBERSECURITY HEADING INTO 2023 THEN?
Cybersecurity is still a high priority. Is cybersecurity recession-proof? Perhaps.
> "First, our world is growing smarter and more technological by the minute. For example, the adoption of cloud and artificial intelligence technologies is rapidly increasing. As a result, our reliance on all things cyber to power our society and its critical infrastructure is on an extremely fast pace. Companies are using more and more devices connected to the internet. Information technology budgets have ballooned. A market correction may slow progress, but it will not reverse this trend."
> "A more connected society is also a more vulnerable one. These developments increase the attack surface for cybercriminals to exploit vulnerabilities and result in an increase in the frequency and severity of hacks, especially against critical infrastructure. With more technology and connectivity, there comes greater investment in cybersecurity." - Michael Steed, Paladin Capital
This is echoed by most people in the cybersecurity industry, especially those who have spent decades in the space. Recently, NightDragon held their annual kickoff event, where they wrapped up 2022 and gave some thoughts about 2023. Some of the highlights from their analysis include the following:
* The continuation of 'cyber super cycles', meaning periods of mass investment, both from financial backers (VCs and private equity firms) and customers, in their purchasing of cybersecurity products and services.
* 2022 was overall a record year for VC investment ($19B+) and M&A ($118.5B) in cybersecurity.
* Operational technology (aka OT; think power grids, electricity generators, elevators, HVAC, etc.), is a top area for investment for 2023. The Colonial Pipeline incident has sparked concern, and there are now three companies earning more than $100 million annually in OT security. The data intelligence of this space is with these systems reporting to central locations, all of which is done over APIs.
* 2022 also brought additional government involvement, which will spur regulation, new initiatives, perhaps offensive capabilities and almost certainly more spending.
* 2022 was anecdotally the first year of “best in suite” prioritization for customers, meaning that customers focused on buying not necessarily the best solution for defending against any single particular attack vector; instead looking at a broader category, and choosing a blend of depth and breadth. Average number of vendors is 75, needs to come down; 5 is not realistic, but something between 40-50 probably is more manageable.
* At the same time, increased cloud adoption and evolving application architectures brought very high complexity, and a difficult-to-monitor attack surface. As a result of this, companies started to experience technical debt in cybersecurity defense. This is currently the case in cloud security. In fact, the analysis here posits that cloud security is the number one need for enterprises in 2023. Enterprises have realized that cloud transformation is mandatory, and they need to refactor applications to get the cloud value and agility that they desire.
WHAT’S THE STATE OF API SECURITY?
Stay tuned. We’re putting together our analysis of the current state of API security, and some predictions for API security in 2023. We’ll be releasing that report soon.
WHAT’S THE STATE OF FIRETAIL?
This is the easiest transition to address - the state of FireTail is great! Admittedly, it’s easiest to adapt to market changes when you’re a young company, as we are. Fun fact - we officially incorporated on February 11, 2022. We enter 2023 having hit a number of great milestones for a young company:
* $5 million in seed funding raised earlier this year.
* Our team grew to 10 employees as of January 2023.
* The FireTail.app platform is live in production with customers.
WHAT’S NEXT FOR FIRETAIL?
We continue to push forward. We’re in a good position to expand beyond our current cohort of initial design partners in late Q1 2023. We also firmly believe what Dave DeWalt said during the NightDragon session:
> “Great companies get started during down cycles.”
Bob Ackerman’s quote also resonated with us:
> “Cyber is not a pick-up game; be committed or go home.”
We agree. We are also mindful of the macroeconomic environment around us. To that end, it’s always been part of our ethos to focus on security and customer outcomes first, and financial outcomes second. We believe that making our preventative API security middleware free and open source is the right thing to do, and we stand by that decision. If that means that many organizations will use it for its ability to block bad API calls, and never pay us, we accept that and still believe that it is a good outcome.
IS THERE ANYTHING WE CAN SHARE ABOUT THE FUTURE DIRECTION OF FIRETAIL’S TECHNOLOGY?
So much of our strategy is around solving security challenges for our customers. We will continue to produce versions of the FireTail middleware library as our customers need, and make sense for us to provide. And we will continue to expand its functionality as we learn of new attack vectors. We are also believers in examining a domain space holistically, so it’s not shift-left or shift-right; nor is it shift-left and defend-right, it’s:
> Shift everywhere.
STAY TUNED.
Please also check out my recent video where I discuss some of the themes covered in this blog in more detail.
As we head into 2026, I am thinking of a Japanese idiom, Koun Ryusui (行雲流水), to describe how enterprises should behave when facing a cyberattack. Koun Ryusui means “to drift like clouds and flow like water.” It reflects calm movement, adaptability, and resilience. For enterprises, this is an operating requirement. Cyber incidents are no longer isolated disruptions. They are recurring tests […]
Dec 19, 2025 - Jeremy Snyder - A recent posting by Dr. Chase Cunningham from Ericom Software on LinkedIn took an interesting view on web application firewalls, most commonly known as a WAF.
WAF’s Must Die Like the Password and VPN’s
Here at FireTail.io, we are also not fans of a WAF. Why? We do not believe that a WAF will catch most modern attacks. WAFs are fundamentally based on firewall (perimeter defense) structures that are designed to keep attackers out based on where they are coming from, where they are going to, and what they are trying to access. A simple search for bypassing a WAF returns quite a lot of results:
Bypass WAF, 1.28M results
Dr. Cunningham’s post shares some interesting opinions and statistics on WAFs:
* “WAFs are antithetical to the move to Zero Trust”
* “According to most innovators and experts, the pattern and rule-based engine used by WAFs are not aligned with current security needs.”
* “Ponemon conducted research at that time to probe the market for issues with WAF solutions, and more than 600 respondents made their point clear: WAFs aren’t helping.”
The Ponemon WAF research referenced also included some eye-opening statistics:
* While 66% of respondent organizations consider the WAF an important security tool, over 40% use their WAFs only to generate alerts (not to block attacks)
* 86% of organizations experienced application-layer attacks that bypassed their WAF in the last 12 months.
* Managing WAF deployments are complex and time-consuming, requiring an average of 2.5 security administrators who spend 45 hours per week processing WAF alerts, plus an additional 16 hours per week writing new rules to enhance WAF security.
* The CapEx and OpEx for WAFs together average $620K annually. This includes $420K for WAF products, plus an additional $200K annually for the skilled staffing required to manage the WAF.
SUMMARY OF WAF FAILURES FROM DR. CHASE CUNNINGHAM
If you wanted the tl;dr version of what Dr. Cunningham had to say, it’s this:
> In other words, WAFs are not stopping attacks, require continuous configuration and intensive management and security human capital, and are more expensive than other better-suited technologies.
WHAT IS A BETTER APPROACH THAN USING A WAF THEN?
This is where our view may both overlap with and also differ from from Dr. Cunningham’s. Dr. Cunningham speaks of the model of Web Application Isolation (WAI), whereby an application is effectively public on the Internet, but only behind a required authentication controller, and then creates a secure tunnel.
Our view on this is two-fold:
* For public or consumer applications, this can work. But it requires an immediate control for authorization. Too often, developers assume controlled inputs and no attempts at unauthorized access. But the provisioning of a “secure tunnel” is something that happens already via SSL / TLS, and there’s no need for another “secure tunnel”.
* Applications need to have a security configuration of their own that defines authorization options around various API routes and methods, because the API is both the future of application development paradigms, and the API will become the most frequently attacked surface / vector.
Please contact us if you want to hear more about our view on WAFs for API security.
Recently, Forrester, a globally renowned independent research and advisory firm, released the report “Navigate The AI Agent Ecosystem In China, Forrester Research, October 2025[1].” NSFOCUS was successfully included in this report. In the report, Forrester identified four key technological trends: With the rapid advancement of Artificial Intelligence, AI Agent technology is deepening its application within […]
By 2026, vulnerability scanning will no longer be about running a weekly scan and exporting a PDF. Modern environments are hybrid, ephemeral, API-driven, and constantly changing. Tools that haven’t adapted are already obsolete, even if they still have brand recognition. Therefore, we present to you the top 10 Best Vulnerability Scanning Tools for 2026, which […]