Tricks, Treats, and Terabits
It's been a year since my last big distributed denial-of-service (DDoS) attack. I had been holding off on blogging about this for a few reasons. First, I wanted to make sure it was over. Second, I didn't want to tip off the attackers about what I learned about them. (I had been quietly telling other people about the attack details. It's even helped some other victims of the same attack.) And finally, I wanted to see if they would come back and trigger any of my traps. (Nope!)
The First Wave
On Wednesday, 23-Oct-2024 at 3pm local time (21:00 GMT), my servers came under a massive distributed denial of service attack.

My servers are physically located in a machine room, about 20 feet away from my office. When my servers come under any kind of load, their fans rev up. Even though they are a room away, I can hear the fans pick up. (It sounds like a jet engine.) When the attack started, I heard two servers ramp up.
My first thought was that one of my customers was probably analyzing videos. That always causes a higher load, but it usually lasts a minute. When the sound continued, I checked the servers themselves. None of the virtual machines had any high-load processes running. In fact, the loads were all hovering around 0.1 (virtually no use). It took me a few moments to find the cause: my server was rejecting a huge number of packets. It was definitely a DDoS attack.
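For anyone who wants to catch this kind of flood without relying on fan noise: the giveaway is the packet rate, not the CPU load. Here is a minimal sketch (not my actual monitoring setup) that samples Linux's /proc/net/dev once a second and prints the inbound packets-per-second for each interface; the alert threshold is an arbitrary placeholder.

```python
#!/usr/bin/env python3
# Minimal packets-per-second watcher (illustrative sketch, not production monitoring).
# Samples /proc/net/dev once per second and reports the inbound packet rate
# per interface, flagging anything above a crude, arbitrary threshold.
import time

THRESHOLD_PPS = 50_000  # placeholder alert level; tune to your baseline

def read_rx_packets():
    counts = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:                    # skip the two header lines
            iface, data = line.split(":", 1)
            counts[iface.strip()] = int(data.split()[1])  # column 1 = RX packets
    return counts

prev = read_rx_packets()
while True:
    time.sleep(1)
    curr = read_rx_packets()
    for iface, rx in curr.items():
        pps = rx - prev.get(iface, rx)
        flag = "  <-- possible flood" if pps > THRESHOLD_PPS else ""
        print(f"{iface}: {pps} pps{flag}")
    prev = curr
```

Run it during normal operation first so you know your baseline; a sustained flood stands out immediately.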
I don't know the exact volume of the attack. My servers were logging a sustained 300Mbps and over 150,000 packets per second. (The logging and packet rejections were enough to cause the fans to ramp up.) However, I'm sure the volume was more than that -- because the upstream router was failing. I later learned that it was even larger: the upstream router to the upstream router was failing. The 300Mbps was just the fraction that was getting through to me. The attacker wasn't just knocking my service offline; they took down a good portion of Northern Colorado. (I grabbed some sample packet captures for later analysis.)
I first confirmed that my servers had not been compromised. (Whew!) Then I called my ISP. They had already noticed since the attack was taking down a few hundred businesses that used the same router.
My ISP did the right thing: they issued a black-hole route for my impacted IP address. This dropped the traffic long before it reached my server or even the impacted routers.
The First Mitigation
Since the attackers were only going after one IP address, my ISP and I thought I could get away with changing my DNS and moving my impacted services to a different address. On that single IP address, I had a handful of services.

- I first moved FotoForensics. No problem. Usually DNS records are cached for a few hours. Long before any of this happened, I had configured my DNS to only cache for 5 minutes (the minimum time). Five minutes after changing my DNS record, the service came back up and users were able to access it. (There's a quick way to check the TTL your DNS is actually advertising; see the sketch after this list.)
- I then moved some of my minor services. Again, after 5 minutes, they were back up and running.
- I could have moved all of my services at once, but I wanted to know which one was being attacked. The last service I moved was this blog. After 5 minutes, the DDoS returned, hitting the new address.
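That 5-minute TTL is the only reason the move was fast. If you want to verify the TTL your own DNS is advertising, here's a quick sketch using the third-party dnspython package (the domain is just an example; note that a recursive resolver reports the remaining cache time, so query the authoritative server if you want the configured value):

```python
#!/usr/bin/env python3
# Sketch: check the advertised TTL on an A record.
# Requires the third-party dnspython package (pip install dnspython).
# A recursive resolver returns the *remaining* cache time; ask the zone's
# authoritative name server directly to see the configured TTL.
import dns.resolver

answer = dns.resolver.resolve("hackerfactor.com", "A")
print("TTL:", answer.rrset.ttl, "seconds")
for record in answer:
    print("A record:", record.address)
```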
Moving the services one at a time, and watching how the attack responded, told me a lot about the attackers:

- The attack started at precisely 3:00pm and it lasted exactly 12 hours. This appeared to be a scheduled attack.
- They were explicitly targeting my blog and Hacker Factor web service. (Why? What did I do this time? Or maybe, what did I write recently?)
- They were repeatedly checking DNS to see if I moved. They knew this was a logical step and they were watching for it. That's a level of sophistication that your typical script kiddie doesn't think about. Moreover, it appeared to be an automated check. (Automated? I might be able to use that for a counter attack.)
- It was a flood over UDP. With UDP, you can just shoot out packets (including packets with fake sender IP addresses) and overwhelm the recipient. The attack alternated between targeting port 123/udp (network time protocol) and 699/udp (an unknown port). Neither of these services existed on my server. It wasn't about taking down my servers; it was about taking down the routers that lead to my servers.
- Every packet's IP header has a time-to-live (TTL) value that gets decremented with each router hop. The TTL values in the attack packets didn't match what I would expect for the claimed sender addresses. That tells me that the sender IP addresses were forged. I run a bunch of honeypots that benchmark attacks year-round. The packet TTLs and timings were consistent with traffic coming from Europe and Asia. I then tracked the attack to AS4134 (China). (A sketch of this kind of TTL check appears after this list.)
- They were only attacking over IPv4, not IPv6. That's typical for most bulletproof hosting providers. (These are the types of companies with high bandwidth and no concerns about their customers causing massive network attacks.)
- When the network address was blocked (black hole), the DDoS stopped shortly afterwards. When my DNS changed, the attack restarted. This tells me that they were monitoring my address in order to see when it went down.
- After I changed IP addresses, I noticed something. Buried in the logs was a single IP address at a university (not in China). It was continually polling to see if my server was up. Blocking that one IP address caused the DDoS against the new IP address to turn off. The attackers appeared to be using this as a way to decide when to disable the attack. (Finding this needle in the 150,000 packets-per-second haystack was the hard part; a sketch of the filter I mean appears after this list.)
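About that TTL check: an honest sender's TTL counts down from a common starting value (usually 64, 128, or 255), so the observed TTL implies a rough hop count, and that hop count should be plausible for wherever the sender claims to be. Here's roughly the kind of analysis I mean, sketched with the third-party scapy library against a saved capture; the capture file name is a placeholder and the hop estimate is only approximate.

```python
#!/usr/bin/env python3
# Sketch: compare each claimed sender's observed TTL against a rough hop
# estimate. Spoofed source addresses tend to show TTLs that do not match
# the geography of the claimed sender.
# Requires scapy (pip install scapy); "ddos-sample.pcap" is a placeholder.
from collections import defaultdict
from scapy.all import rdpcap, IP

COMMON_INITIAL_TTLS = (64, 128, 255)   # typical operating-system defaults

ttls_by_source = defaultdict(list)
for pkt in rdpcap("ddos-sample.pcap"):
    if IP in pkt:
        ttls_by_source[pkt[IP].src].append(pkt[IP].ttl)

for src, ttls in sorted(ttls_by_source.items()):
    observed = max(ttls)
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed)
    hops = initial - observed
    print(f"{src}: TTL {observed}, ~{hops} hops from an initial TTL of {initial}")
```

A "neighbor" whose packets look like they crossed twenty routers is a strong hint that the source address is forged.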
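As for finding the needle in the haystack: a monitoring host looks nothing like flood traffic. It sends a handful of packets at steady intervals rather than thousands at random. This sketch (same assumptions: scapy, a placeholder capture file, arbitrary thresholds) pulls out the low-volume senders whose inter-packet timing is suspiciously regular.

```python
#!/usr/bin/env python3
# Sketch: find low-volume, regularly spaced senders hiding in a flood.
# Requires scapy (pip install scapy); the capture file name and the
# thresholds below are placeholders, not tuned values.
import statistics
from collections import defaultdict
from scapy.all import rdpcap, IP

times_by_source = defaultdict(list)
for pkt in rdpcap("ddos-sample.pcap"):
    if IP in pkt:
        times_by_source[pkt[IP].src].append(float(pkt.time))

for src, times in times_by_source.items():
    if not (5 <= len(times) <= 1000):
        continue                      # skip the flood and one-off noise
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap > 0 and statistics.pstdev(gaps) < 0.1 * mean_gap:
        print(f"{src}: {len(times)} packets, ~{mean_gap:.1f}s apart (suspiciously regular)")
```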
Who and Why?
I turned the IP addresses, packet captures, and logs over to some of my, uh, friends. I do not know the details of their methods, but they are very effective.

- They tracked the bulk of the DDoS attack to servers often associated with attacks from North Korea.
- They found that the university system was in a specific university lab. The lab members mostly had Korean names. We suspect that either (A) at least one of the students was North Korean posing as a South Korean, or (B) one of the students had downloaded or clicked something that allowed North Korea to compromise the system.
I suspect the trigger was a line from one of my earlier blog entries: "There's nothing worse than a depressed, drunk man who has his finger on the nuclear button." It appears that this was enough to upset the North Korean government and make me a target for a massive network attack.
Hiding For Safety
Since I'm not taking down my blog, I decided to take additional steps in case the DDoS started up again.

There are some online services that provide DDoS protection. I looked into them and decided to switch to CloudFlare. What they provide:
- Reverse proxying. When you connect to hackerfactor.com or fotoforensics.com, you actually connect to one of CloudFlare's servers. They forward the request back to my services. If there is a network attack, then it will hit CloudFlare and not me.
- DDoS protection. I kind of felt bad about signing up with CloudFlare just to hand them an attack. However, this is one of the things they explicitly offer: DDoS protection, even at the free account level.
- Content caching. By default, they will cache web content. This way, if a hundred people all ask for my blog, I only have to provide it once to CloudFlare. This cuts down on the network volume. (A minimal origin-side example follows this list.)
- Filtering rules. Even at the free tier, you can create filtering rules to stop bots and AI scrapers, block bulletproof hosting providers, etc. (I'm using their paid tier for some of my domains because I wanted more filter options.)
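On the caching point above: a cache can only reuse what the origin marks as reusable. Here's a minimal origin-side sketch, using only Python's standard http.server, that sets a Cache-Control header so an upstream cache (CloudFlare or anything else) may serve repeat requests without touching the origin. The port, content, and five-minute lifetime are arbitrary, and whether a particular CDN caches HTML by default depends on its own cache rules.

```python
#!/usr/bin/env python3
# Sketch: an origin server that marks its responses as cacheable so an
# upstream cache can answer repeat visitors without hitting the origin.
# Standard library only; the port, body, and max-age are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CacheableHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Blog content goes here.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        # Tell shared caches they may reuse this response for five minutes.
        self.send_header("Cache-Control", "public, max-age=300")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CacheableHandler).serve_forever()
```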
The downside of using CloudFlare is that I like to monitor the attacks against my network, and since CloudFlare absorbs those attacks instead of me, I lose that insight. However, I still run some honeypots outside of CloudFlare, so I still have baseline attack metrics.
The Second Wave
Even though my servers had been hit by a massive attack, I decided to slowly move them to the new service. (I'd rather be slow and cautious and get everything right than rush it and create a different problem.)

On 28-Oct-2024 (five days after the first attack), at almost exactly 1:00 AM, the attack started again. Although I had moved my servers behind CloudFlare, the attackers appeared to be directly targeting my previously-known location.
Unfortunately, they guessed correctly. Even though CloudFlare was protecting me from incoming attacks, CloudFlare was forwarding valid requests back to my servers. And my servers were still at the old IP addresses. By attacking the old addresses, the DDoS managed to take down my service again.
I called my ISP's emergency 24/7 support number to report the problem, but nobody answered so I left a message. I repeatedly called back every 30-60 minutes until I was able to reach a person -- at 7:20am. (I spoke to the head of my ISP's IT department. They will make sure the 24/7 support will actually be manned next time.) They issued another IP address black hole to stop the attack, and it stopped 20 minutes later.
At this point, I decided to switch around network addresses and bridge in a second ISP. If one ISP goes down, the other one should kick in.
The Third Wave
On 30-Oct-2024, the third wave happened. This one was kind of funny. While my servers were dual-homed and on different IP addresses, I still had some equipment using the old addresses. I was working late at night and heard the server fans start up again...

It took me a moment to check all of my diagnostics and determine that, yes, it was the DDoS again. It only took a minute for me to look up the ISP's 24/7 support number. However, as I picked up the phone, I heard the fans rev down. (Odd.) A few seconds later, a different server began revving up. After a minute, it spun down and a third server revved up.
That's when I realized what the attacker was doing. I had a sequential block of IP addresses. They were DDoS'ing one address and checking if my server went offline. After a minute, they moved the DDoS to the next IP address, then the next one. Here are the problems they were facing:
- I had moved my main services to different addresses. This meant that the attacker couldn't find me.
- My services were behind CloudFlare, which caches content. Even if the attacker did find me, their polling to see if I was down would see cached content and think I was still up.
Around the same time, CloudFlare (@cloudflare@noc.social) posted about the attack campaign they were mitigating:

"We recently thwarted a massive UDP Flood attack from 8-9K IPs targeting ~50 IP addresses of a Magic Transit customer. This was part of a larger campaign we covered in our Q3 2024 report. Check out the full details here: https://blog.cloudflare.com/ddos-threa..."

5.6 terabits per second. Wow. When I wrote to CloudFlare asking if this was related to me, I received no reply. I'm certainly not saying that "this was due to me", but I kind of suspect that this might have been due to me. (Huge thanks to CloudFlare for offering free DDoS protection!)
Keep in mind, CloudFlare says that they can handle 296 terabits per second, so 5.6 Tbps isn't going to negatively impact them. But I can totally understand why my (now former) ISP couldn't handle the volume.
Tricks, Treats, and Terabits
I did lay out a couple of detectors and devised a few ways to automatically redirect this attack toward other targets. However, it hasn't resurfaced in a year. (I really wanted to redirect North Korea's high-volume DDoS attack against Russian targets. Now that I've had time to prepare a proper response, I'm sure I can do the redirection with no impact to my local network. I mean, they watch my DNS, so I'd just need to change my DNS to point to Russia. I wonder if this redirected attack would cause an international incident?)

Halloween stories usually end when the monster is vanquished. The lights come back on, and the hero breathes a sigh of relief. But for system administrators, the monsters don't die; they adapt. They change IPs, morph signatures, and wait for a moment of weakness.
Some people fear ghosts or ghouls. I fear the faint whine of server fans spinning up in the middle of the night. A sound that means something, somewhere, has found me again. The next time the servers ramp up, it might not be an innocent workload. It might be the North Korean bot army.
