DDoS Protection Faces Fresh Challenges As Bot Traffic Reaches New Peak
As automated attack networks grow larger and more sophisticated, security teams are struggling to keep pace with a surge in malicious bot activity that is reshaping the DDoS threat landscape
In December 2025, Solana experienced one of the largest DDoS attacks in history, with traffic peaking at 6 Tbps. Although the attack continued for more than a week, Solana reported zero network downtime. Had the attack succeeded, the disruption could have been used to scam everyday retail investors out of millions.
Absorbing such a high volume of requests is beyond simple rate limits or perimeter controls, which raises questions about what effective DDoS protection looks like heading into 2026.
One big issue businesses have to tackle is the extent to which automated traffic has become normalised on the modern internet, blurring the line between legitimate and potentially dangerous activity. Let’s unpack these issues to understand how DDoS protection needs to evolve in this new traffic reality.
Bot Traffic at Record Levels
Automated traffic now makes up more than half of all web traffic. One recent report found that at the start of 2025, non-AI bots alone were responsible for roughly 50% of all HTML requests, and during peak periods, bot traffic exceeded human traffic by up to 25 percentage points.
Whether friendly or malicious, bot traffic behaves similarly at a technical level: high-frequency requests with little variance in interaction patterns. This creates a dilemma for defenders. If they block or rate-limit too aggressively, they risk breaking core services such as APIs, integrations, mobile apps, and background processes that depend on legitimate, automated access to backend systems.
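To make the trade-off concrete, here is a minimal Python sketch (the tiers, quotas, and client identifiers are all hypothetical) of a tiered rate limiter that gives known, authenticated automation more headroom than anonymous traffic, so the rule that throttles unknown bots doesn’t also break legitimate integrations.

```python
import time
from collections import defaultdict, deque

# Hypothetical per-minute quotas: known integrations get more headroom
# than anonymous clients. The numbers are illustrative only.
LIMITS = {"authenticated_api": 600, "anonymous": 60}
WINDOW_SECONDS = 60

_recent = defaultdict(deque)  # client_id -> timestamps of recent requests

def allow_request(client_id: str, tier: str) -> bool:
    """Sliding-window limit: allow the request only if the client has
    spare quota for its tier within the last WINDOW_SECONDS."""
    now = time.monotonic()
    window = _recent[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # discard timestamps that aged out
    if len(window) >= LIMITS[tier]:
        return False  # over quota: rate-limit this request
    window.append(now)
    return True

# A partner integration and an unknown scraper hit the same endpoint
print(allow_request("partner-42", "authenticated_api"))  # True
print(allow_request("203.0.113.9", "anonymous"))         # True
```

The point is not the specific numbers but the structure: a single global threshold cannot serve both populations at once.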
What’s more, malicious actors can “hide” among the noise of normal automation, making early-stage DDoS activity harder to detect.
Modern DDoS Attacks Are Multi-Layered
Modern DDoS attacks are multi-vector, meaning they hit multiple layers of the stack at once. Typically, this involves pairing a network flood (Layer 3) with an application layer or HTTP/API flood (Layer 7).
Traditional DDoS protection mainly covers the network layer, which deals with raw volume. However, attacks on the application layer do not require much volume to do damage. They trigger expensive backend work in the form of repeated page loads, authentication flows, and other operations that exhaust resources and slow down or break the application.
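The asymmetry is easy to demonstrate. The short Python sketch below (standard library only; the handler names are invented for illustration) times a stand-in for an authentication flow, a deliberately slow password hash, against a trivially cheap response; the request that triggers the hash costs the attacker almost nothing to send.

```python
import hashlib
import time

def cheap_lookup() -> bytes:
    # Stands in for serving a cached page: microseconds of work
    return b"cached response"

def auth_check(password: bytes) -> bytes:
    # Stands in for a login handler: a deliberately slow key-derivation
    # function (600,000 iterations is in line with current OWASP
    # guidance for PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", password, b"per-user-salt", 600_000)

for name, fn in [("cached page", cheap_lookup),
                 ("login attempt", lambda: auth_check(b"hunter2"))]:
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.4f}s")
# A few hundred login attempts per second, far below volumetric scale,
# can saturate the CPU behind an authentication endpoint.
```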
It’s worth noting that volumetric network-layer attacks are still extremely common, mainly because they are cheap to launch and still effective for stressing the target environment at the perimeter.
What’s Breaking and Why Defenders Are Struggling
One of the main challenges for defenders today is establishing a reliable baseline of “normal” traffic. Automated traffic makes up an increasing proportion of overall activity, so the baseline itself is noisy, repetitive, and non-human: the same characteristics defenders have traditionally used to spot malicious behaviour.
The main pain point is tuning protections so they block attack traffic without generating a high number of false positives. Overly aggressive rules risk blocking real users, while conservative tuning gives attackers room to operate.
Another detection challenge is that not all DDoS attacks today aim to take services fully offline. An increasingly common tactic is cost-exhaustion or “economic” DDoS, usually targeting applications. These attacks aim to silently degrade performance and drive up infrastructure costs, and they are difficult to detect because they often stay within normal-looking traffic patterns.
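One way to surface this pattern, sketched below in Python with a purely illustrative budget, is to account for cumulative backend cost per client rather than counting requests:

```python
import time
from collections import defaultdict

# Illustrative budget: cumulative backend seconds a single client may
# consume per window before being flagged. A real threshold would be
# derived from a baseline of what normal clients cost.
COST_BUDGET_SECONDS = 5.0
_cost = defaultdict(float)

def charge_request(client_id: str, handler) -> bool:
    """Run a handler, charge its wall-clock cost to the client, and
    flag clients whose cumulative cost exceeds the budget."""
    start = time.perf_counter()
    handler()
    _cost[client_id] += time.perf_counter() - start
    # Low request rate but high cost: candidate economic-DDoS source
    return _cost[client_id] <= COST_BUDGET_SECONDS
```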
Then there is the dilemma of where to deploy defences. For many organisations, DDoS protection only focuses on absorbing or filtering raw traffic volume at the network layer. But as DDoS attacks evolve into multi-vector campaigns, it may be time to consider solutions that cover every layer of the stack.
What Effective DDoS Protection Looks Like Today
Effective DDoS protection today starts with how attacks are detected. High request rates should not be the only signal. Detection must shift toward behaviour-based analysis: how clients behave over time, how they interact with specific endpoints, and whether their patterns deviate from expected usage for that service.
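As a rough illustration of what “behaviour over time” can mean, the Python sketch below scores a client on two hypothetical signals: how machine-regular its request timing is, and how narrowly it concentrates on a few endpoints. Real systems use far richer feature sets.

```python
import statistics
from collections import Counter

def behaviour_score(timestamps: list[float], endpoints: list[str]) -> float:
    """Higher score = more bot-like. Two illustrative signals: near-
    constant inter-request timing and low endpoint diversity."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Humans are irregular; scripted clients often tick like clocks
    regularity = 1.0 / (1.0 + statistics.pstdev(intervals)) if len(intervals) > 1 else 0.0
    # Hammering one or two endpoints is another machine tell
    concentration = Counter(endpoints).most_common(1)[0][1] / len(endpoints)
    return 0.5 * regularity + 0.5 * concentration

# A script polling one endpoint every 2 seconds scores near the maximum
print(behaviour_score([0.0, 2.0, 4.0, 6.0, 8.0], ["/api/quote"] * 5))  # 1.0
```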
Detection alone is not enough. Mitigation is what actually matters when handling DDoS attacks, and it must be automatic and fast. In this context, automated mitigation means rate-limiting, challenging, or blocking abusive traffic in real time, with the goal of maintaining service even while an attack is unfolding.
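A minimal sketch of that escalation logic, with thresholds and actions that are purely illustrative, might look like this:

```python
def mitigation_action(abuse_score: float) -> str:
    """Map a per-client abuse score (0..1) to an escalating response.
    Thresholds are illustrative; real systems tune them per service."""
    if abuse_score > 0.9:
        return "block"       # drop the traffic outright
    if abuse_score > 0.7:
        return "challenge"   # e.g. a CAPTCHA or proof-of-work puzzle
    if abuse_score > 0.5:
        return "rate_limit"  # slow the client down, keep the service up
    return "allow"
```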
Effective protection requires visibility and controls across all layers. Network-layer protection is typically handled by ISPs, cloud providers, or dedicated DDoS mitigation services designed to scale quickly under load.
To address application- and API-layer attacks, organisations must deploy controls closer to the application itself, where request context and behaviour are visible. This is commonly done through application delivery controllers, web application firewalls (WAFs), API gateways, or integrated WAAP platforms that sit in front of critical services.
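Where a dedicated platform isn’t in place, the same placement idea can be approximated in-process. Below is a minimal WSGI middleware sketch in Python (the is_abusive check is a placeholder for logic like the earlier sketches), showing how a control sitting directly in front of the application sees full request context:

```python
def is_abusive(client: str, path: str) -> bool:
    # Placeholder for behaviour- and cost-based checks like those above
    return False

def protective_middleware(app):
    """Wrap a WSGI application so every request is inspected with full
    request context (client, path) before the app itself runs."""
    def wrapper(environ, start_response):
        client = environ.get("REMOTE_ADDR", "unknown")
        path = environ.get("PATH_INFO", "/")
        if is_abusive(client, path):
            start_response("429 Too Many Requests",
                           [("Content-Type", "text/plain")])
            return [b"rate limited\n"]
        return app(environ, start_response)
    return wrapper
```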
Bot traffic has become the dominant form of internet activity, which changes the dynamics of how DDoS attacks are executed and defended against. At the same time, DDoS attacks remain easy to launch and increasingly common, with over 8 million recorded in the first half of 2025 alone.
For many organisations, even short disruptions can impact availability, performance, and user trust. As we move into 2026 and beyond, it’s clear that DDoS can no longer be treated as a secondary risk. It is a core availability challenge that requires modern, layered defences built to withstand today’s traffic reality.