
Digital Forensics: Volatility – Memory Analysis Guide, Part 2

1 December 2025 at 10:31

Hello, aspiring digital forensics investigators!

Welcome back to our guide on memory analysis!

In the first part, we covered the fundamentals, including processes, dumps, DLLs, handles, and services, using Volatility as our primary tool. We created this series to give you more clarity and help you build confidence in handling memory analysis cases. Digital forensics is a fascinating area of cybersecurity, and earning a certification in it can open many doors for you. Once you grasp the key concepts, you'll find it easier to navigate the field. Ultimately, it all comes down to mastering a core set of commands, along with persistence and curiosity. Governments, companies, law enforcement, and federal agencies all need skilled professionals. As cyberattacks become more frequent and sophisticated, often with the help of AI, opportunities for digital forensics analysts will only continue to grow.

Now, in part two, we're building on that foundation to explore more areas that help uncover hidden threats. We'll look at network information to see connections, registry keys for system changes, files cached in memory, and scans like malfind and YARA rules to find malware. Plus, as promised, there are bonuses at the end with quick ways to pull out extra details.

Network Information

As a beginner analyst, you'd run network commands to check for suspicious connections, such as malware phoning home to its operators. For example, imagine investigating a company's network after a data breach: these tools could reveal a hidden link to a foreign server stealing customer info, helping you trace the attacker.

'Netscan' scans for all network artifacts, including TCP/UDP. 'Netstat' lists active connections and sockets. In Vol 2, the XP/2003-specific plugins 'connscan' and 'connections' focus on TCP, while 'sockscan' and 'sockets' focus on sockets, but they're old and not present in Vol 3.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> netscan

vol.py -f "/path/to/file" --profile <profile> netstat

XP/2003 SPECIFIC:

vol.py -f "/path/to/file" --profile <profile> connscan

vol.py -f "/path/to/file" --profile <profile> connections

vol.py -f "/path/to/file" --profile <profile> sockscan

vol.py -f "/path/to/file" --profile <profile> sockets

Volatility 3:

vol.py -f "/path/to/file" windows.netscan

vol.py -f "/path/to/file" windows.netstat

bash$ > vol -f Windows7.vmem windows.netscan

netscan in volatility

This output shows network connections with protocols, addresses, and PIDs. Perfect for spotting unusual traffic.

bash$ > vol -f Windows7.vmem windows.netstat

netstat in volatility

Here, you’ll get a list of active sockets and states, like listening or established links.
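The raw listings can get long, so it helps to save them once and filter. Here is a minimal triage sketch using the same Windows7.vmem image as the examples above; the grep patterns assume the default netscan column text and are only a starting point, not a detection rule.

vol -f Windows7.vmem windows.netscan > netscan.txt
# keep established connections only
grep -i "ESTABLISHED" netscan.txt
# drop loopback and unbound addresses to focus on external traffic
grep -i "ESTABLISHED" netscan.txt | grep -vE "127\.0\.0\.1|::1|0\.0\.0\.0"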

Note: the XP/2003-specific plugins are deprecated and therefore not available in Volatility 3, although such legacy systems are still common in poorly funded government environments.

Registry

Hive List

You'd use hive list commands to find registry hives in memory, which store system settings that malware often tweaks for persistence. Say you're checking a home computer after suspicious pop-ups; this could show changes to startup keys that launch malicious software on every boot.

'hivescan' scans for hive structures. 'hivelist' lists them with virtual and physical addresses.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> hivescan

vol.py -f "/path/to/file" --profile <profile> hivelist

Volatility 3:

vol.py -f "/path/to/file" windows.registry.hivescan

vol.py -f "/path/to/file" windows.registry.hivelist

bash$ > vol -f Windows7.vmem windows.registry.hivelist

hivelist in volatility

This lists the registry hives with their paths and offsets for further digging.

bash$ > vol -f Windows7.vmem windows.registry.hivescan

hivescan in volatility

The scan output highlights hive locations in memory.
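If you only care about one hive, saving the hivelist output and grepping it saves scrolling. A small sketch, again using the sample image from above; "SOFTWARE" is just an example hive name:

vol -f Windows7.vmem windows.registry.hivelist > hives.txt
# grab the offset of the SOFTWARE hive for targeted follow-up queries
grep -i "SOFTWARE" hives.txt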

Printkey

Printkey is handy for viewing specific registry keys and values, like checking for malware-added entries. For instance, in a ransomware case, you might look at keys that control file associations to see if they’ve been hijacked.

Without a key argument it shows default keys, while -K (Vol 2) or --key (Vol 3) targets a specific path.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> printkey

vol.py -f "/path/to/file" --profile <profile> printkey -K "Software\Microsoft\Windows\CurrentVersion"

Volatility 3:

vol.py -f "/path/to/file" windows.registry.printkey

vol.py -f "/path/to/file" windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion"

bash$ > vol -f Windows7.vmem windows.registry.printkey

windows registry print key in volatility

This gives a broad view of registry keys.

bash$ > vol -f Windows7.vmem windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion"

windows registry printkey in volatility

Here, it focuses on the specified key, showing subkeys and values.
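A common follow-up is to point printkey at a known persistence location. The sketch below queries the classic Run autostart key; the image name and key path are just the ones used throughout this article, so adjust them to your own case.

# check the classic Run autostart key for persistence entries
vol -f Windows7.vmem windows.registry.printkey --key "Software\Microsoft\Windows\CurrentVersion\Run"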

Files

File Scan

Filescan lists files cached in memory, even deleted ones, which is great for finding malware files that were run but erased from disk. It can also uncover temporary files left behind by an infection.

Both versions scan for file objects in memory pools.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> filescan

Volatility 3:

vol.py -f "/path/to/file" windows.filescan

bash$ > vol -f Windows7.vmem windows.filescan

scanning files in volatility

This output lists file paths, offsets, and access types.
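Filescan output for a full memory image is huge, so filtering for common staging locations and executable extensions is a quick way in. A rough sketch; the patterns below are generic examples, not tuned signatures:

vol -f Windows7.vmem windows.filescan > filescan.txt
# files sitting in common staging directories
grep -iE "\\\\(Temp|AppData)\\\\" filescan.txt
# executable and script artifacts cached in memory
grep -iE "\.(exe|dll|ps1|bat)" filescan.txt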

File Dump

You’d dump files to extract them from memory for closer checks, like pulling a suspicious script. In a corporate espionage probe, dumping a hidden document could reveal leaked secrets.

Without options, it dumps everything. With an offset or a PID, it targets specific files. Vol 3 takes virtual or physical addresses.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir"

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir" -Q <offset>

vol.py -f "/path/to/file" --profile <profile> dumpfiles --dump-dir="/path/to/dir" -p <PID>

Volatility 3:

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles --virtaddr <offset>

vol.py -f "/path/to/file" -o "/path/to/dir" windows.dumpfiles --physaddr <offset>

bash$ > vol -f Windows7.vmem windows.dumpfiles

dumping files in volatility

This pulls all cached files Windows has in RAM.
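Dumping everything can take a while and produces a lot of files, so in practice you often feed dumpfiles a single offset taken from the filescan output. A sketch with a made-up placeholder offset (replace it with a real one from your own filescan results):

mkdir -p dumped
# 0x3e7ee840 is a placeholder offset, not one taken from this article's screenshots
vol -f Windows7.vmem -o dumped windows.dumpfiles --physaddr 0x3e7ee840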

Miscellaneous

Malfind

Malfind scans for injected code in processes, flagging potential malware.

Vol 2 shows basics like a hexdump of the suspicious region. Vol 3 adds more detail, such as memory protection flags and a disassembly.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> malfind

Volatility 3:

vol.py -f "/path/to/file" windows.malfind

bash$ > vol -f Windows7.vmem windows.malfind

scanning for suspicious injections with malfind in volatility

This highlights suspicious memory regions with details.
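One quick way to triage malfind output is to look for regions whose hexdump starts with an MZ header, since an injected PE file is a classic red flag. A small sketch; this is a heuristic, not a verdict:

vol -f Windows7.vmem windows.malfind > malfind.txt
# show some context around hexdump lines containing the "MZ" PE signature
grep -B 5 "MZ" malfind.txt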

Yara Scan

Yara scan uses rules to hunt for malware patterns across memory. It’s like a custom detector. For example, during a widespread attack like WannaCry, a Yara rule could quickly find infected processes.

Vol 2 takes a rule file path. Vol 3 accepts inline rules, a rule file, or a kernel-wide scan via yarascan.yarascan.

Volatility 2:

vol.py -f "/path/to/file" --profile <profile> yarascan -y "/path/to/file.yar"

Volatility 3:

vol.py -f "/path/to/file" windows.vadyarascan --yara-rules <string>

vol.py -f "/path/to/file" windows.vadyarascan --yara-file "/path/to/file.yar"

vol.py -f "/path/to/file" yarascan.yarascan --yara-file "/path/to/file.yar"

bash$ > vol -f Windows7.vmem windows.vadyarascan --yara-file yara_fules/Wannacrypt.yar

scanning with yara rules in volatility

As you can see, the rule helped us find the malware and all of the processes related to it.
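If you don't have a ready-made rule file, a tiny one is enough to see how the plugin behaves. The rule below is a toy example written for this walkthrough (a single WannaCry-related string); real rule sets, such as the one used above, are far more thorough.

# write a minimal, hypothetical rule file just to demonstrate the workflow
cat > demo_rule.yar <<'EOF'
rule Demo_Wannacry_String
{
    meta:
        description = "toy example for windows.vadyarascan"
    strings:
        $a = "WanaDecryptor" ascii wide
    condition:
        $a
}
EOF
vol -f Windows7.vmem windows.vadyarascan --yara-file demo_rule.yar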

Bonus

Using the strings command, you can quickly uncover additional useful details, such as IP addresses, email addresses, and remnants from PowerShell or command prompt activities.

Emails

bash$ > strings Windows7.vmem | grep -oE "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}\b"

viewing emails in a memory capture

IPs

bash$ > strings Windows7.vmem | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"

viewing ips in a memory capture

Powershell and CMD artifacts

bash$ > strings Windows7.vmem | grep -E "(cmd|powershell|bash)[^\s]+"

viewing powershell commands in a memory capture
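Since running strings over a multi-gigabyte image is slow, it is worth dumping the output to a text file once and grepping that repeatedly. The sketch below also pulls UTF-16LE strings, which Windows uses heavily; the URL pattern is just one more example of what you can hunt for.

strings Windows7.vmem > strings.txt
# Windows stores many strings as UTF-16LE; -e l catches those too
strings -e l Windows7.vmem >> strings.txt
# URLs are another quick win
grep -iE "https?://" strings.txt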

Summary

By now you should feel comfortable with the network analysis, file dumps, hives, and registry work we went through. As you practice, your confidence will grow quickly. The commands covered here are fundamental and will carry you through most cases. Also, don't forget that Volatility has many more plugins you may want to explore. Feel free to come back to this guide anytime: Part 1 will remind you how to approach a memory dump, while Part 2 has the commands you need. In this part, we've expanded your Volatility toolkit with network scans to track connections, registry tools to check settings, file commands to extract cached items, and miscellaneous scans like malfind for injections and YARA for pattern matching. Together they give you a solid set of steps.

If you want to turn this into a career, our digital forensics courses are built to get you there. Many students use this training to prepare for industry certifications and job interviews. Our focus is on the practical skills that hiring teams look for.

Subparse - Modular Malware Analysis Artifact Collection And Correlation Framework

By: Unknown
2 January 2023 at 06:30

Subparse is a modular framework developed by Josh Strochein, Aaron Baker, and Odin Bernstein. The framework is designed to parse and index malware files and present the information found during parsing in a searchable web viewer. The framework is modular, making use of a core parsing engine, parsing modules, and a variety of enrichers that add additional information to the malware indices. The main input to the framework is a directory of malware files, which the core parsing engine or a user-specified parsing engine parses, adding additional information from any user-specified enrichment engines, before indexing the parsed information into an Elasticsearch index. The information gathered can then be searched and viewed via a web viewer, which also allows filtering on any value gathered from any file. There are currently 3 default parsing modules (ELFParser, OLEParser, and PEParser) and 4 enrichment modules (ABUSEEnricher, CAPEEnricher, STRINGEnricher, and YARAEnricher).

 

Getting Started

Software Requirements

To get started using Subparse, there are a few required/recommended programs that need to be installed and set up before trying to work with the software.

  • Docker: Required (Installation Guide)
  • Python 3.8.1: Required (Installation Guide)
  • Pyenv: Recommended (Installation Guide)

Additional Requirements

After getting the required/recommended software installed on your system, there are a few more steps to take to get Subparse installed.


Python Requirements
Subparse depends on some additional Python packages. To complete the Python setup, navigate to the location of your Subparse installation and go to the *parser* folder, then install the requirements with the following commands:
sudo apt-get install build-essential
pip3 install -r ./requirements.txt

Docker Requirements
Since Subparse uses Docker for its backend and web interface, the Docker containers need to be set up before the program can be used. To do this, navigate to the root directory of the Subparse installation and use the following command to bring up the Docker instances:
docker-compose up

Note: This might take a little time due to downloading the images and setting up the containers that will be needed by Subparse.

 

Installation steps


Usage

Command Line Options

Command line options that are available for subparse/parser/subparse.py:

  • -h | --help (optional): Shows the help menu
  • -d SAMPLES_DIR | --directory SAMPLES_DIR (required): Directory of samples to parse
  • -e ENRICHER_MODULES | --enrichers ENRICHER_MODULES (optional): Enricher modules to use for additional parsing
  • -r | --reset (optional): Reset/delete all data in the configured Elasticsearch cluster
  • -v | --verbose (optional): Display verbose command-line output
  • -s | --service-mode (optional): Enter service mode, allowing more samples to be added to SAMPLES_DIR while processing
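Putting the flags together, a typical invocation might look like the sketch below. The sample directory and the choice of enricher are assumptions for illustration; check python3 subparse.py --help on your install for the exact syntax, especially for passing multiple enrichers.

cd parser
# hypothetical run: parse everything in ../samples and add smart string enrichment
python3 subparse.py --directory ../samples --enrichers STRINGEnricher --verbose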

Viewing Results

To view the results from Subparse's parsers, navigate to localhost:8080. If you are having trouble viewing the site, make sure the container is started in Docker and that no other process is running on port 8080 that could make the site unavailable.
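If the page does not load, two quick checks from the command line are whether anything answers on the port and which container, if any, has claimed it. A small sketch, assuming curl and the Docker CLI are available:

# does anything answer on port 8080?
curl -sI http://localhost:8080 | head -n 1
# which running container has published port 8080?
docker ps --filter "publish=8080"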

 

General Information Collected

Before any parser is executed general information is collected about the sample regardless of the underlying file type. This information includes:

  • MD5 hash of the sample
  • SHA256 hash of the sample
  • Sample name
  • Sample size
  • Extension of sample
  • Derived extension of sample

Parser Modules

Parsers are ONLY executed on samples that match their file type. For example, PE files will by default have the PEParser executed against them, because that file type is among those the PEParser is able to examine.

Default Modules


ELFParser
This is the default parsing module that will be executed against ELF files. Information that is collected:
  • General Information
  • Program Headers
  • Section Headers
  • Notes
  • Architecture Specific Data
  • Version Information
  • Arm Unwind Information
  • Relocation Data
  • Dynamic Tags

OLEParser
This is the default parsing module that will be executed against OLE- and RTF-formatted files; it uses the OLETools package to obtain data. The information that is collected:
  • Meta Data
  • MRaptor
  • RTF
  • Times
  • Indicators
  • VBA / VBA Macros
  • OLE Objects

PEParser
This is the default parsing module that will be executed against PE files that match or include the file types: PE32 and MS-Dos. Information that is collected:
  • Section code and count
  • Entry point
  • Image base
  • Signature
  • Imports
  • Exports

 

Enricher Modules

These are optional modules that will ONLY be executed if specified via the -e | --enrichers flag on the command line.

Default Modules


ABUSEEnricher
This enricher uses the [Abuse.ch](https://abuse.ch/) API and [Malware Bazaar](https://bazaar.abuse.ch) to collect more information about the sample(s) Subparse is analyzing; the information is then aggregated and stored in the Elastic database.
CAPEEnricher
This enricher communicates with a CAPEv2 Sandbox instance to collect more information about the sample(s) through dynamic analysis; the information is then aggregated and stored in the Elastic database, utilizing the Kafka messaging service for background processing.
STRINGEnricher
This enricher is a smart string enricher that parses the sample for potentially interesting strings. The categories it looks for include: Audio, Images, Executable Files, Code Calls, Compressed Files, Work (Office Docs.), IP Addresses, IP Address + Port, Website URLs, and Command Line Arguments.
YARAEnricher
This enricher uses a pre-compiled YARA file located at parser/src/enrichers/yara_rules. This pre-compiled file includes rules from VirusTotal and the YaraRulesProject.

 

Developing Custom Parsers & Enrichers

Subparse's web view was built using Bootstrap for its CSS, which allows any built-in Bootstrap CSS to be used when developing your own custom Parser/Enricher Vue.js files. We have provided an example of each to help you get started, and have also implemented a few custom widgets to ease development and promote standardization in the way information is displayed. All Vue.js files are used to dynamically display information from the custom Parser/Enricher and serve as templates for the data.

Note: Naming conventions for both class and file names must be strictly adhered to; this is the first thing to check if you run into issues getting your custom Parser/Enricher to execute. Your Parser/Enricher must use the same name across all of its files and class names.



Logging

The logger object is a singleton implementation of the default Python logger. For in-depth usage, please reference the official documentation. For Subparse, the only logging methods we recommend using are the output logging levels. These are:

  • debug
  • warning
  • error
  • critical
  • exception
  • log
  • info


ACKNOWLEDGEMENTS

  • This research and all the co-authors have been supported by NSA Grant H98230-20-1-0326.

