
PowerShell for DFIR, Part 1: Log Analysis and System Hardening

Welcome back, aspiring DFIR defenders!

Welcome to the start of a new series dedicated to PowerShell for Defenders.

Many of you already know PowerShell as a tool of hackers. In our earlier PowerShell for Hackers series, we demonstrated just how much damage a skilled hacker can cause with it, taking over an entire organization from a single terminal window. In this new series, we flip the perspective. We are going to learn how to use it properly as defenders. There is far more to PowerShell than automation scripts and administrative shortcuts. For blue team operations, incident response, and digital forensics, PowerShell can become one of your most effective investigative instruments. It allows you to quickly process logs, extract indicators of compromise, and make sense of attacker behavior without waiting for heavy platforms.

Today, we will go through two PowerShell-based tools that are especially useful in defensive operations. The first one is DeepBlueCLI, developed by SANS, which helps defenders quickly analyze Windows event logs and highlight suspicious behavior. The second tool is WELA, a PowerShell script created by Yamato Security. WELA focuses on auditing and hardening Windows systems based on predefined security baselines. While both tools are PowerShell scripts, they serve different but complementary purposes. One helps you understand what already happened. The other helps you reduce the chance of it happening again.

DeepBlueCLI

DeepBlueCLI is a PowerShell-based tool created to help defenders quickly identify suspicious behavior in Windows event logs. Its strength lies in simplicity. You do not need complex configurations, long rule files, or a deep understanding of Windows internals to get started. DeepBlueCLI takes common attack patterns and maps them directly to event log indicators, presenting the results in a way that is easy to read and easy to act upon.

There are two main ways to use DeepBlueCLI. The first approach is by analyzing exported event logs, which is very common during incident response or post-incident forensic analysis. The second approach is live analysis, where the tool queries logs directly from the system it is running on. Both approaches are useful depending on the situation. During a live incident, quick answers matter. During forensic work, accuracy and context matter more.

A very helpful feature of DeepBlueCLI is that it comes with example event logs provided by the developer. These are intentionally crafted logs that simulate real attack scenarios, making them perfect for learning and practice. You can experiment and learn how attacker activity appears in logs. The syntax is straightforward.

Example Event Logs

In the example below, we take a sample event log provided by the developer and run DeepBlueCLI against it:

PS > .\DeepBlue.ps1 -file .\evtx\sliver-security.evtx

running deepbluecli against windows event log with sliver c2 activity

Sliver is a modern command-and-control framework often used by red teamers and real attackers as well. In the output of this command, we can see several interesting indicators. There is cmd.exe accessing the ADMIN$ share, which is a classic sign of lateral movement or administrative access attempts. We also see cmd.exe being launched via WMI through C:\Windows\System32\wbem\WmiPrvSE.exe. This is especially important because WMI execution is commonly used to execute commands remotely while avoiding traditional process creation patterns. Above that, we also notice cmd.exe /Q /c JOINT_BALL.exe. This executable is a Sliver payload. Sliver often generates payloads with seemingly random names.
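If you want to verify a finding like this by hand rather than relying on DeepBlueCLI's summary, you can query the same Security log with Get-WinEvent. The following is a minimal sketch, assuming process-creation auditing (event ID 4688) with command-line logging is enabled; it simply lists recent process-creation events whose details mention both WmiPrvSE.exe and cmd.exe:

PS > Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4688 } | Where-Object { $_.Message -match 'WmiPrvSE\.exe' -and $_.Message -match 'cmd\.exe' } | Select-Object TimeCreated, Message -First 10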

Another example focuses on PowerShell obfuscation, which is a very common technique used to evade detection:

PS > .\DeepBlue.ps1 -file .\evtx\Powershell-Invoke-Obfuscation-many.evtx

running deepbluecli against a windows event log with heavy obfuscation

In the results, we see very long command lines with heavily modified command names. This often looks like iNVOke variants or strange combinations of characters that still execute correctly. These commands usually pass through an obfuscation framework or an argument obfuscator, making them harder to read and harder for simple detections to catch. Occasionally, DeepBlueCLI struggles to fully decode these commands, especially when the obfuscation is layered or intentionally complex. This is not a weakness of the tool but rather a reflection of the logic behind obfuscation itself. The goal of obfuscation is to slow down defenders, and even partial visibility is already a win for us during investigation.

It is also worth mentioning that during real forensic or incident response work, you can export logs from any Windows machine and analyze them in exactly the same way. You do not need to run the tool on the compromised system itself.

exporting windows event logs
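For reference, here is a rough sketch of what that export step can look like with the built-in wevtutil utility; the destination paths below are illustrative, not fixed requirements:

PS > wevtutil epl Security C:\Cases\host01\security.evtx

PS > wevtutil epl "Microsoft-Windows-PowerShell/Operational" C:\Cases\host01\powershell-operational.evtx

PS > .\DeepBlue.ps1 -file C:\Cases\host01\security.evtx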

Live Analysis

In some cases, speed matters more than completeness. DeepBlueCLI allows us to perform a quick live analysis by running PowerShell as an administrator and querying logs directly:

PS > .\DeepBlue.ps1 -log security

running deepbluecli against a live security log

In this scenario, the tool immediately highlights suspicious behavior. For example, we can clearly see that several user accounts were subjected to brute-force attempts. One very practical feature here is that DeepBlueCLI counts the total number of failed logon attempts for us. Instead of manually filtering event IDs and correlating timestamps, we get an immediate overview that helps us decide whether further action is required.
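If you ever need to reproduce that count manually, a rough equivalent with Get-WinEvent is sketched below. It assumes failed logons are being audited (event ID 4625); in 4625 events, property index 5 normally holds the targeted account name, so grouping on it shows which accounts were hit hardest:

PS > (Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4625 }).Count

PS > Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4625 } | Group-Object { $_.Properties[5].Value } | Sort-Object Count -Descending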

WELA

WELA is a PowerShell script developed by Yamato Security that focuses on auditing and hardening Windows systems. Unlike DeepBlueCLI, which looks primarily at what happened in the past, WELA helps you understand the current security posture of a system and guides you toward improving it. It audits system settings against a predefined baseline and highlights areas where the configuration does not meet expected security standards. Because WELA uses advanced PowerShell techniques and low-level system queries, it is often flagged by antivirus as potentially malicious. This does not mean the script is harmful. The script is legitimate and intended for defensive use.

To begin, we can view the help menu to see what functionality the developer has included:

PS > .\WELA.ps1 help

wela help menu

From the available options, we can see that WELA supports auditing system settings using baselines provided by Yamato Security. This audit runs in the terminal and saves results to CSV files, which is often the preferred format for documentation and further analysis. For those who prefer a graphical interface, a GUI version is also available. Another option allows you to analyze the size of log files, either before or after configuration changes, which can be useful when tuning logging policies.

Updating Rules

Before performing any audit, it is a good idea to update the rules. For this to work smoothly, you first need to create a directory named config in the folder where the script resides:

PS > mkdir config

PS > .\WELA.ps1 update-rules

updating wela rules

This ensures that the script has a proper location to store updated configuration data and avoids unnecessary errors.

Auditing

Once the rules are up to date, we are ready to audit the system and see where it meets the baseline and where it falls short. Many defenders prefer starting with the terminal output, as it is faster to navigate:

PS > .\WELA.ps1 audit-settings -Baseline YamatoSecurity

auditing the system with wela

At this stage, the script reviews the current system settings and compares them against the selected baseline. The results clearly show which settings match expectations and which ones require attention.

The audit can also be performed using the graphical interface:

PS > .\WELA.ps1 audit-settings -Baseline ASD -OutType gui

auditing the system with wela and gui menu

This option is particularly useful for presentations and reports.

Check

After auditing, we can perform a focused check related to log file sizes:

PS > .\WELA.ps1 audit-filesize -Baseline YamatoSecurity

running wela check

The output shows that the system is not hardened enough. This is not uncommon and should be seen as an opportunity rather than a failure. The entire purpose of this step is to identify weaknesses before a hacker does.

Hardening

Finally, we move on to hardening the system:

PS > .\WELA.ps1 configure -Baseline YamatoSecurity

hardening windows with wela configurations

This process walks you through each setting step by step, allowing you to make informed decisions about what to apply. There is also an option to apply all settings in batch mode without prompts, which can be useful during large-scale deployments.

Summary

PowerShell remains one of the most powerful tools on a modern Windows system, and that reality applies just as much to defenders as it does to attackers. In this article, you saw two PowerShell-based tools that address different stages of defensive work but ultimately support the same goal of reducing uncertainty during incidents and improving the security baseline before an attacker can exploit it.

We are also preparing dedicated PowerShell training that will be valuable for both defenders and red teamers. This training will focus on practical, real-world PowerShell usage in both offensive and defensive security operations and will be available to Subscriber and Subscriber Pro students from March 10-12.

How AI Impacts the Cyber Market and The Future of SIEM

Security has always moved in waves. Not because we suddenly get smarter, but because we learn from past mistakes, identify gaps, hit limits, need to protect new technologies, and then go and do our best to solve those new security challenges with the technologies at hand. The era of AI (let’s be clear, we have […]


Digital Forensics: Basic Linux Analysis After Data Exfiltration

Welcome back, aspiring DFIR investigators!

Linux machines are everywhere these days, running quietly in the background while powering the most important parts of modern companies. They host databases, file shares, internal tools, email services, and countless other systems that businesses depend on every single day. But the same flexibility that makes Linux so powerful also makes it attractive for attackers. A simple bash shell provides everything someone needs to move files around, connect to remote machines, or hide traces of activity. That is why learning how to investigate Linux systems is so important for any digital forensic analyst.

In an earlier article we walked through the basics of Linux forensics. Today, we will go a step further and look at a scenario where a personal Linux machine was used to exfiltrate private company data. The employee worked for the organization that suffered the breach. Investigators first examined his company-issued Windows workstation and discovered several indicators tying him to the attack. However, the employee denied everything and insisted he was set up, claiming the workstation wasn't actually used by him. To uncover the truth and remove any doubts, the investigation moved toward his personal machine, a Linux workstation suspected of being a key tool in the data theft.

Analysis

This is a simple investigation designed for those who are just getting started.

Evidence

Before looking at anything inside the disk, a proper forensic workflow always begins with hashing the evidence and documenting the chain of custody. After that, you create a hashed forensic copy to work on so the original evidence remains untouched. This is standard practice in digital forensics, and it protects the integrity of your findings.

showing the evidence
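As a minimal sketch of that acquisition step (the device name and case paths below are illustrative), hashing the source drive, imaging it, and hashing the copy might look like this:

bash# > sha256sum /dev/sdb | tee evidence_original.sha256

bash# > dd if=/dev/sdb of=/cases/liam_laptop.dd bs=4M conv=noerror,sync status=progress

bash# > sha256sum /cases/liam_laptop.dd | tee evidence_copy.sha256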

Once we open the disk image, we can see the entire root directory. To keep the focus on the main points, we will skip the simple checks covered in Basic Linux Forensics (OS-release, groups, passwd, etc.) and move straight into the artifacts that matter most for a case involving exfiltration.

Last Login

The first thing we want to know is when the user last logged in. Normally you can run last with no arguments on a live system, but here we must point it to the wtmp file manually:

bash# > last -f /var/log/wtmp

reading last login file on linux

This shows the latest login from the GNOME login screen, which occurred on February 28 at 15:59 (UTC).

To confirm the exact timestamp, we can check authentication events stored in auth.log, filtering only session openings from GNOME Display Manager:

bash# > cat /var/log/auth.log | grep -ai "session opened" | grep -ai gdm | grep -ai liam

finding when GNOME session was opened

From here we learn that the last GUI login occurred at 2025-02-28 10:59:07 (local time).

Timezone

Next, we check the timezone to ensure we interpret all logs correctly:

bash# > cat /etc/timezone

finding out the time zone

This helps ensure that timestamps across different logs line up properly.

USB

Data exfiltration often involves external USB drives. Some attackers simply delete their shell history, thinking that alone is enough to hide their actions. But they often forget that Linux logs almost everything, and those logs tell the truth even when the attacker tries to erase evidence.

To check for USB activity:

bash# > grep -i usb /var/log/*

finding out information on connected usb drives

Many entries appear, and buried inside them is a serial number from an external USB device.

finding the serial number

Syslog also records the exact moment this device was connected. Using the timestamp (2025-02-28 at 10:59:25) we can filter the logs further and collect more detail about the device.

syslog shows more activity on the usb connections
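As an illustration, narrowing the syslog entries down to that minute could look like the command below; the exact timestamp format depends on the distribution's syslog configuration:

bash# > grep -ai usb /var/log/syslog | grep "Feb 28 10:59"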

We also want to know when it was disconnected:

bash# > grep -i usb /var/log/* | grep -ai disconnect

finding out when the usb drive was disconnected

The last disconnect occurred on 2025-02-28 at 11:44:00. This gives us a clear time window: the USB device was connected for about 45 minutes. Long enough to move large files.

Command History

Attackers use different tricks to hide their activity. Some delete .bash_history. Others only remove certain commands. Some forget to clear it entirely, especially when working quickly.

Here is the user’s history file:

bash# > cat /home/liam/.bash_history

exposing exfiltration activity in the bash history file

Here we see several suspicious entries. One of them is transferfiles. This is not a real Linux command, which immediately suggests it might be an alias. We also see a curl -X POST command, which hints that data was uploaded to an HTTP server. That’s a classic exfiltration method. There is also a hidden directory and a mysterious mth file, which we will explore later.

Malicious Aliases

Hackers love aliases, because aliases allow them to hide malicious commands behind innocent-looking names. For example, instead of typing out a long scp or rsync command that would look suspicious in a history file, they can simply create an alias like backup, sync, or transferfiles. To anyone reviewing the history later, it looks harmless. Aliases also help them blend into the environment. A single custom alias is easy to overlook during a quick review, and some investigators forget to check dotfiles for custom shell behavior.
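Because of that, it is worth sweeping every user's shell dotfiles for custom aliases before trusting the command history. A simple sweep (prefix the paths with your mount point when working on a disk image) might look like:

bash# > grep -ai "alias" /home/*/.bashrc /home/*/.bash_aliases /home/*/.profile /root/.bashrc 2>/dev/null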

To see what transferfiles really does, we search for it:

bash# > grep "transferfiles" . -r

finding malicious aliases on linux

This reveals the real command: it copied the entire folder "Critical Data TECH*" from a USB device labeled 46E8E28DE8E27A97 into /home/liam/Documents/Data.

finding remnants of exfiltrated data

This aligns perfectly with our earlier USB evidence. Files such as Financial Data, Revenue History, Stakeholder Agreement, and Tax Records were all transferred. Network logs suggest more files were stolen, but these appear to be the ones the suspect personally inspected.

Hosts

The /etc/hosts file is normally used to map hostnames to IP addresses manually. Users sometimes add entries to simplify access to internal services or testing environments. However, attackers also use this file to redirect traffic or hide the true destination of a connection.

Let’s inspect it:

bash# > cat /etc/hosts

finding hosts in the hosts file

In this case, there is an entry pointing to a host involved in the exfiltration. This tells us the suspect had deliberately configured the system to reach a specific external machine.

Crontabs

Crontabs are used to automate tasks. Many attackers abuse cron to maintain persistence, collect information, or quietly run malicious scripts.

There are three main places cron jobs can exist (a quick sweep of all three is shown after the list):

1. /etc/crontab –  system-wide

2. /etc/cron.d/ – service-style cron jobs

3. /var/spool/cron/crontabs/ – user-specific entries
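A quick sweep of all three locations (again, prefix the paths with your mount point when working on a disk image) can be as simple as:

bash# > cat /etc/crontab

bash# > ls -la /etc/cron.d/ /var/spool/cron/crontabs/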

Let’s check the user’s crontab:

bash# > cat /var/spool/cron/crontabs/liam

We can see a long string set to run every 30 minutes. This cronjob secretly sends the last five commands typed in the terminal to an attacker-controlled machine. This includes passwords typed in plain text, sudo commands, sensitive paths, and anything else the user entered recently.

This was unexpected. It suggests the system was accessed by someone else, meaning the main suspect may have been working with a third party, or was possibly being monitored and guided by them.

To confirm this possibility, let’s check for remote login activity:

bash# > cat /var/log/auth.log | grep -ai accepted

finding authentication in the authlog

Here we find a successful SSH login from an external IP address. This could be that unidentified person entering the machine to retrieve the stolen data or to set up additional tools. At this stage it’s difficult to make a definitive claim, and we would need more information and further interrogation to connect all the pieces.

Commands and Logins in auth.log

The auth.log file stores not only authentication attempts but also certain command-related records. This is extremely useful when attackers use hidden directories or unusual locations to store files.

To list all logged commands:

bash# > cat /var/log/auth.log | grep -ai command

To search for one specific artifact:

bash# > cat /var/log/auth.log | grep -ai mth

exposing executed commands in auth log

This tells us that the file mth was created in /home/liam using nano by user liam. Although this file had nothing valuable, its creation shows the user was active and writing files manually, not through automated tools.

Timestomping

As a bonus step, we will introduce timestamps, which are essential in forensic work. They help investigators understand the sequence of events and uncover attempts at manipulation that might otherwise go unnoticed. Timestomping is the process of deliberately altering file timestamps to confuse investigators. Hackers use it to hide when a file was created or modified. However, Linux keeps several different timestamps for each file, and they don’t always match when something is tampered with.

The stat command helps reveal inconsistencies:

bash# > stat api

exposing timestomping on linux

The output shows:

Birth: Feb 28 2025

Change: Nov 17 2025

Modify: Jan 16 2001

This does not make sense. A file cannot be created in 2025, modified in 2001, and changed again in 2025. That means the timestamps were manually altered. A normal file would have timestamps that follow a logical order, usually showing similar creation and modification dates. By comparing these values across many files, investigators can often uncover when an attacker attempted to clean up their traces or disguise their activity.
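One way to make that comparison in bulk is sketched below using GNU find; it prints each file's modification and change dates side by side so that obvious mismatches stand out (the directory is just an example):

bash# > find /home/liam -xdev -type f -printf '%TY-%Tm-%Td %CY-%Cm-%Cd %p\n' | sort | less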

Timeline

The investigation still requires more evidence, deeper log correlation, and proper interrogation of everyone involved before a final conclusion can be made. However, based on the artifacts recovered from the Linux machine, we can outline a reasonable assumption of how the events might have taken place.

In the days before the breach, Liam was approached by a third-party group interested in acquiring his company's confidential data. They gained remote access to his computer via SSH, possibly through a proxy, appearing to log in from a public IP address that does not belong to the company network. Once inside, they installed a cronjob that collected Liam's recent commands and acted as a simple keylogger. This allowed them to gather passwords and other sensitive information that Liam typed in the terminal.

With Liam’s cooperation, or possibly after promising him payment, the attackers guided him through the steps needed to steal the corporate files. On February 28, Liam logged in, connected a USB drive, and executed the hidden alias transferfiles, which copied sensitive folders onto his machine. Moments later, he uploaded parts of the data using a curl POST request to a remote server. When the transfer was done, the accomplices disconnected from the machine, leaving Liam with remnants of stolen data still sitting inside his Documents directory.

The combination of the installed cronjob, the remote SSH connection, and the structured method of transferring company files strongly suggests an insider operation supported by outside actors. Liam was not acting alone; he was assisting a third party, either willingly or under pressure.

Summary

The hardest part of digital forensics is interpreting what the evidence actually means and understanding the story it tells. Individual logs rarely show the full picture by themselves. But when you combine login times, USB events, alias behavior, cronjobs, remote connections, and other artifacts, a clear narrative begins to form. In this case, the Linux machine revealed far more than the suspect intended to hide. It showed how the data was copied, when the USB device was attached, how remote actors accessed the system, and even how attempts were made to hide the tracks through timestomping and aliases. Each artifact strengthened the overall story and connected the actions together into one coherent timeline. This is the true power of digital forensics: turning fragments of technical evidence into a readable account of what really happened. And with every investigation, your ability to find and interpret these traces grows stronger.

If you want skills that actually matter when systems are burning and evidence is disappearing, this is your next step. Our training takes you into real investigations, real attacks, and real analyst workflows. Built for people who already know the basics and want to level up fast, it’s on-demand, deep, and constantly evolving with the threat landscape.

Learn more


Digital Forensics: Drone Forensics for Battlefield and Criminal Analysis

Welcome back, aspiring digital investigators!

Over the last few years, drones have moved from being niche gadgets to becoming one of the most influential technologies on the modern battlefield and far beyond it. The war in Ukraine accelerated this shift dramatically. During the conflict, drones evolved at an incredible pace, transforming from simple reconnaissance tools into precision strike platforms, electronic warfare assets, and logistics tools. This rapid adoption did not stop with military forces. Criminal organizations, including cartels and smuggling networks, quickly recognized the potential of drones for surveillance and contraband delivery. As drones became cheaper, more capable, and easier to modify, their use expanded into both legal and illegal activities. This created a clear need for digital forensics specialists who can analyze captured drones and extract meaningful information from them.

Modern drones are packed with memory chips, sensors, logs, and media files. Each of these components can tell a story about where the drone has been, how it was used, and who may have been controlling it. At its core, digital forensics is about understanding devices that store data. If something has memory, it can be examined.

U.S. Department of Defense Drone Dominance Initiative

Recognizing how critical drones have become, the United States government launched a major initiative focused on drone development and deployment. Secretary of War Pete Hegseth announced a one-billion-dollar β€œdrone dominance” program aimed at equipping the U.S. military with large numbers of cheap, scalable attack drones.

US Department of Defense Drone Dominance Initiative

Modern conflicts have shown that it makes little sense to shoot down inexpensive drones using missiles that cost millions of dollars. The program focuses on producing tens of thousands of small drones by 2026 and hundreds of thousands by 2027. The focus has shifted away from a quality-over-quantity mindset toward deploying unmanned systems at scale. Analysts must be prepared to examine drone hardware and data just as routinely as laptops, phones, or servers.

Drone Platforms and Their Operational Roles

Not all drones are built for the same mission. Different models serve very specific roles depending on their design, range, payload, and level of control. On the battlefield, FPV drones are often used as precision strike weapons. These drones are lightweight, fast, and manually piloted in real time, allowing operators to guide them directly into high-value targets. Footage from Ukraine shows drones intercepting and destroying larger systems, including loitering munitions carrying explosive payloads.

Ukrainian "Sting" drone striking a Russian Shahed carrying an R-60 air-to-air missile

To counter electronic warfare and jamming, many battlefield drones are now launched using thin fiber optic cables instead of radio signals. These cables physically connect the drone to the operator, making jamming ineffective. In heavily contested areas, forests are often covered with discarded fiber optic lines, forming spider-web-like patterns that reflect sunlight. Images from regions such as Kupiansk show how widespread this technique has become.

fiber optic cables in contested drone war zones

Outside of combat zones, drones serve entirely different purposes. Commercial drones are used for photography, mapping, agriculture, and infrastructure inspection. Criminal groups may use similar platforms for smuggling, reconnaissance, or intimidation. Each use case leaves behind different types of forensic evidence, which is why understanding drone models and their intended roles is so important during an investigation.

DroneXtract – A Forensic Toolkit for DJI Drones

To make sense of all this data, we need specialized tools. One such tool is DroneXtract, an open-source digital forensics suite available on GitHub and written in Go. DroneXtract is designed specifically for DJI drones and focuses on extracting and analyzing telemetry, sensor values, and flight data.

dronextractor a tool for drone forensics and drone file analysis

The tool allows investigators to visualize flight paths, audit drone activity, and extract data from multiple file formats. It is suitable for law enforcement investigations, military analysis, and incident response scenarios where understanding drone behavior is critical. With this foundation in mind, let us take a closer look at its main features.

Feature 1 – DJI File Parsing

DroneXtract supports parsing common DJI file formats such as CSV, KML, and GPX. These files often contain flight logs, GPS coordinates, timestamps, altitude data, and other telemetry values recorded during a drone's operation. The tool allows investigators to extract this information and convert it into alternative formats for easier analysis or sharing.

dji file parsing

In practical terms, this feature can help law enforcement reconstruct where a drone was launched, the route it followed, and where it landed. For military analysts, parsed telemetry data can reveal patrol routes, observation points, or staging areas used by adversaries. Even a single flight log can provide valuable insight into patterns of movement and operational habits.

Feature 2 – Steganography

Steganography refers to hiding information within other files, such as images or videos. DroneXtract includes a steganography suite that can extract telemetry and other embedded data from media captured by DJI drones. This hidden data can then be exported into several different file formats for further examination.

steganography drone analysis

This capability is particularly useful because drone footage often appears harmless at first glance. An image or video shared online may still contain timestamps, unique identifiers and sensor readings embedded within it. For police investigations, this can link media to a specific location or event.

Feature 3 – Telemetry Visualization

Understanding raw numbers can be difficult, which is why visualization matters. DroneXtract includes tools that generate flight path maps and telemetry graphs. The flight path mapping generator creates a visual map showing where the drone traveled and the route it followed. The telemetry graph visualizer plots sensor values such as altitude, speed, and battery levels over time.

telemetry drone visualization

Investigators can clearly show how a drone behaved during a flight, identify unusual movements, or detect signs of manual intervention. Military analysts can use these visual tools to assess mission intent, identify reconnaissance patterns, or confirm whether a drone deviated from its expected route.

Feature 4 – Flight and Integrity Analysis

The flight and integrity analysis feature focuses on detecting anomalies. The tool reviews all recorded telemetry values, calculates expected variance, and checks for suspicious gaps or inconsistencies in the data. These gaps may indicate file corruption, tampering, or attempts to hide certain actions.

drone flight analysis

Missing data can be just as meaningful as recorded data. Law enforcement can use this feature to determine whether logs were altered after a crime. Military analysts can identify signs of interference and malfunction, helping them assess the reliability of captured drone intelligence.

Usage

DroneXtract is built in Go, so before anything else you need to have Go installed on your system. This makes the tool portable and easy to deploy, even in restricted or offline environments such as incident response labs or field investigations.

We begin by cloning the project to our computer:

bash# > git clone https://github.com/ANG13T/DroneXtract.git

To build and run DroneXtract from source, you start by enabling Go modules. This allows Go to correctly manage dependencies used by the tool.

bash# > export GO111MODULE=on

Next, you fetch all required dependencies defined in the project. This step prepares your environment and ensures all components DroneXtract relies on are available.

bash# > go get ./...

Once everything is in place, you can launch the tool directly:

bash# > go run main.go

At this point, DroneXtract is ready to be used for parsing files, visualizing telemetry, and performing integrity analysis on DJI drone data. The entire process runs locally, which is important when handling sensitive or classified material.

Airdata Usage

DJI drones store detailed flight information in .TXT flight logs. These files are not immediately usable for forensic analysis, so an intermediate step is required. For this, we rely on Airdata’s Flight Data Analysis tool, which converts DJI logs into standard forensic-friendly formats.

You can find the conversion tool on the Airdata website.

Once the flight logs are processed through Airdata, the resulting files can be used directly with DroneXtract:

Airdata CSV output files can be used with:

1) the CSV parser

2) the flight path map generator

3) telemetry visualizations

Airdata KML output files can be used with:

1) the KML parser for geographic mapping

Airdata GPX output files can be used with:

1) the GPX parser for navigation-style flight reconstruction

This workflow allows investigators to move from a raw drone log to clear visual and analytical output without reverse-engineering proprietary formats themselves.

Configuration

DroneXtract also provides configuration options that allow you to tailor the analysis to your specific investigation. These settings are stored as environment variables in the .env file and control how much data is processed and how sensitive the analysis should be.

TELEMETRY_VIS_DOWNSAMPLE

This value controls how much telemetry data is sampled for visualization. Higher values reduce detail but improve performance, which is useful when working with very large flight logs.

FLIGHT_MAP_DOWNSAMPLE

This setting affects how many data points are used when generating the flight path map. It helps balance visual clarity with processing speed.

ANALYSIS_DOWNSAMPLE

This value controls the amount of data used during integrity analysis. It allows investigators to focus on meaningful changes without being overwhelmed by noise.

ANALYSIS_MAX_VARIANCE

This defines the maximum acceptable variance between minimum and maximum values during analysis. If this threshold is exceeded, it may indicate abnormal behavior, data corruption, or possible tampering.

Together, these settings give investigators control over both speed and precision, allowing DroneXtract to be effective in fast-paced operational environments and detailed post-incident forensic examinations.
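As a minimal sketch, a .env file using these variables might look like the example below. The numeric values are illustrative placeholders rather than recommendations from the developer, so tune them to the size of your logs and the sensitivity you need:

TELEMETRY_VIS_DOWNSAMPLE=10
FLIGHT_MAP_DOWNSAMPLE=5
ANALYSIS_DOWNSAMPLE=10
ANALYSIS_MAX_VARIANCE=50.0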

Summary

Drone forensics is still a developing field, but its importance is growing rapidly. As drones become more capable, the need to analyze them effectively will only increase. Tools like DroneXtract show how much valuable information can be recovered from devices that were once considered disposable.

Looking ahead, it would be ideal to see fast, offline forensic tools designed specifically for battlefield conditions. Being able to quickly extract flight data, locations, and operational details from captured enemy drones could provide immediate tactical advantages. Drone forensics may soon become as essential as traditional digital forensics on computers and mobile devices.


Digital Forensics: An Introduction to Basic Linux Forensics

Welcome back, aspiring forensic investigators.

Linux is everywhere today. It runs web servers, powers many smartphones, and can even be found inside the infotainment systems of cars. A few reasons for its wide use are that Linux is open source, available in many different distributions, and can be tailored to run on both powerful servers and tiny embedded devices. It is lightweight, modular, and allows administrators to install only the pieces they need. Those qualities make Linux a core part of many organizations and of our daily digital lives. Attackers favor Linux as well: it is a common platform for their tools, and many Linux hosts suffer from weak monitoring. Compromised machines are frequently used for reverse proxies, persistence, reconnaissance and other tasks, which increases the need for forensic attention. Linux itself is not inherently complex, but it can hide activity in many small places. In later articles we will dive deeper into what you can find on a Linux host during an investigation. Our goal across the series is to build a compact, reliable cheat sheet you can return to while handling an incident. The same approach applies to Windows investigations as well.

Today we will cover the basics of Linux forensics. For many incidents this level of detail will be enough to begin an investigation and perform initial response actions. Let’s start.

OS & Accounts

OS Release Information

The first thing to check is the distribution and release information. Different Linux distributions use different defaults, package managers and filesystem layouts. Knowing which one you are examining helps you predict where evidence or configuration will live.

bash> cat /etc/os-release

linux os release

Common distributions and their typical uses: Debian and Ubuntu are widely used on servers and desktops, and are stable and well documented. RHEL and CentOS appear mainly in enterprise environments with long-term support. Fedora offers cutting-edge features, Arch follows a rolling-release model aimed at experienced users, and Alpine is very small and popular in containers. Security-focused builds such as Kali or Parrot ship with pentesting toolsets; Kali contains many offensive tools that hackers use and is also useful for incident response in some cases.

Hostname

Record the system’s hostname early and keep a running list of hostnames you encounter. Hostnames help you map an asset to network records, correlate logs across systems, identify which machine was involved in an event, and reduce ambiguity when combining evidence from several sources.

bash> cat /etc/hostname

bash> hostname

linux hostname

Timezone

Timezone information gives a useful hint about the likely operating hours of the device and can help align timestamps with other systems. You can read the configured timezone with:

bash> cat /etc/timezone

timezone on linux

User List

User accounts are central to persistence and lateral movement. Local accounts are recorded in /etc/passwd (account metadata and login shell) and /etc/shadow (hashed passwords and aging information). A malicious actor who wants persistent access may add an account or modify these files. To inspect the user list in a readable form, use:

bash> cat /etc/passwd | column -t -s :

listing users on linux

You can also list users who are allowed interactive shells by filtering the shell field:

bash> cat /etc/passwd | grep -i 'ash'
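A related check worth running is to look for additional accounts with UID 0, since any such account effectively has root privileges:

bash> awk -F: '$3 == 0 {print $1}' /etc/passwd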

Groups

Groups control access to shared resources. Group membership can reveal privilege escalation or lateral access. Group definitions are stored in /etc/group. View them with:

bash> cat /etc/group

listing groups on linux

Sudoers List

Users who can use sudo can escalate privileges. The main configuration file is /etc/sudoers, but configuration snippets may also exist under /etc/sudoers.d. Review both locations:

bash> ls -l /etc/sudoers.d/

bash> sudo cat /etc/sudoers

sudoers list on linux

Login Information

The /var/log directory holds login-related records. Two important binary files are wtmp and btmp. The first one records successful logins and logouts over time, while btmp records failed login attempts. These are binary files and must be inspected with tools such as last (for wtmp) and lastb (for btmp), for example:

bash> sudo last -f /var/log/wtmp

bash> sudo lastb -f /var/log/btmp

lastlog analysis on linux

System Configuration

Network Configuration

Network interface configuration can be stored in different places depending on the distribution and the network manager in use. On Debian-based systems you may see /etc/network/interfaces. For a quick look at configured interfaces, examine:

bash> cat /etc/network/interfaces

listing interfaces on linux

bash> ip a show

listing IPs and interfaces on linux

Active Network Connections

On a live system, active connections reveal current communications and can suggest where an attacker is connecting to or from. Traditional tools include netstat:

bash> netstat -natp

listing active network connections on linux

A modern alternative is ss, which is usually available on newer systems: ss -antp shows TCP connections with their owning processes, while ss -tulnp lists listening TCP and UDP sockets.
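For example:

bash> ss -antp

bash> ss -tulnp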

Running Processes

Enumerating processes shows what is currently executing on the host and helps spot unexpected or malicious processes. Use ps for a snapshot or interactive tools for live inspection:

bash> ps aux

listing processes on linux

If available, top or htop give interactive views of CPU/memory and process trees.

DNS Information

DNS configuration is important because attackers sometimes alter name resolution to intercept or redirect traffic. Simple local overrides live in /etc/hosts, while DNS server configuration is usually in /etc/resolv.conf. Attackers may tamper with either file, or perform DNS poisoning, to redirect victims to malicious services. Check the relevant files:

bash> cat /etc/hosts

hosts file analysis

bash> cat /etc/resolv.conf

resolv.conf file on linux

Persistence Methods

There are many common persistence techniques on Linux. Examine scheduled tasks, services, user startup files and systemd units carefully.

Cron Jobs

Cron is often used for legitimate scheduled tasks, but attackers commonly use it for persistence because it’s simple and reliable. System-wide cron entries live in /etc/crontab, and individual service-style cron jobs can be placed under /etc/cron.d/. User crontabs are stored under /var/spool/cron/crontabs on many distributions. Listing system cron entries might look like:

bash> cat /etc/crontab

crontab analysis

bash> ls /etc/cron.d/

bash> ls /var/spool/cron/crontabs

listing cron jobs

Many malicious actors prefer cron because it does not require deep system knowledge. A simple entry that runs a script periodically is often enough.

Services

Services or daemons start automatically and run in the background. Modern distributions use systemd units which are typically found under /etc/systemd/system or /lib/systemd/system, while older SysV-style scripts live in /etc/init.d/. A quick check of service scripts and unit files can reveal backdoors or unexpected startup items:

bash> ls /etc/init.d/

bash> systemctl list-unit-files --type=service

bash> ls /etc/systemd/system

listing linux services
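To narrow the review down, it can also help to list unit files that were changed recently; the 30-day window below is arbitrary and should be adjusted to your incident timeline:

bash> find /etc/systemd/system /lib/systemd/system -name '*.service' -mtime -30 -ls 2>/dev/null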

.Bashrc and Shell Startup Files

Per-user shell startup files such as ~/.bashrc, ~/.profile, or ~/.bash_profile can be modified to execute commands when an interactive shell starts. Attackers sometimes add small one-liners that re-establish connections or drop a backdoor when a user logs in. The downside for attackers is that these files only execute for interactive shells. Services and non-interactive processes will not source them, so they are not a universal persistence method. Still, review each user’s shell startup files:

bash> cat ~/.bashrc

bash> cat ~/.profile

bashrc file on linux
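A quick sweep across all users' startup files for common download-and-execute patterns can speed this up; the pattern list below is only a starting point:

bash> grep -aiE "curl|wget|nc |bash -i" /home/*/.bashrc /home/*/.profile /home/*/.bash_profile /root/.bashrc 2>/dev/null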

Evidence of Execution

Linux can offer attackers a lot of stealth, as logging can be disabled, rotated, or manipulated. When the system’s logging is intact, many useful artifacts remain. When it is not, you must rely on other sources such as filesystem timestamps, process state, and memory captures.

Bash History

Most shells record commands to a history file such as ~/.bash_history. This file can show what commands were used interactively by a user, but it is not a guaranteed record, as users or attackers can clear it, change HISTFILE, or disable history entirely. Collect each user’s history (including root) where available:

bash> cat ~/.bash_history

bash history

Tmux and other terminal multiplexers themselves normally don't provide a persistent command log. Commands executed in a tmux session run in normal shell processes, so whether those commands are saved depends on the shell's history settings and any logging configured in tmux.

Commands Executed With Sudo

When a user runs commands with sudo, those events are typically logged in the authentication logs. You can grep for recorded COMMAND entries to see what privileged commands were executed:

bash> cat /var/log/auth.log* | grep -i COMMAND | less

Accessed Files With Vim

The Vim editor stores some local history and marks in a file named .viminfo in the user’s home directory. That file can include command-line history, search patterns and other useful traces of editing activity:

bash> cat ~/.viminfo

accessed files by vim

Log Files

Syslog

If the system logging service (for example, rsyslog or journald) is enabled and not tampered with, the files under /var/log are often the richest source of chronological evidence. The system log (syslog) records messages from many subsystems and services. Because syslog can become large, systems rotate older logs into files such as syslog.1, syslog.2.gz, and so on. Use shell wildcards and standard text tools to search through rotated logs efficiently:

bash> cat /var/log/syslog* | head

linux syslog analysis

When reading syslog entries you will typically see a timestamp, the host name, the process producing the entry and a message. Look for unusual service failures, unexpected cron jobs running, or log entries from unknown processes.
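Because rotated logs are usually gzip-compressed, zgrep is convenient here since it reads both plain and .gz files; the search term below is only an example:

bash> zgrep -ai "cron" /var/log/syslog*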

Authentication Logs

Authentication activity, such as successful and failed logins, sudo attempts, SSH connections and PAM events are usually recorded in an authentication log such as /var/log/auth.log. Because these files can be large, use tools like grep, tail and less to focus on the relevant lines. For example, to find successful logins you run this:

bash> cat /var/log/auth.log | grep -ai accepted

auth log accepted password

Other Log Files

Many services keep their own logs under /var/log. Web servers, file-sharing services, mail daemons and other third-party software will have dedicated directories there. For example, Apache and Samba typically create subdirectories where you can inspect access and error logs:

bash> ls /var/log

bash> ls /var/log/apache2/

bash> ls /var/log/samba/

different linux log files

Conclusion

A steady, methodical sweep of the locations described above will give you a strong start in most Linux investigations. Start by verifying the OS, recording host metadata, and enumerating users and groups, then move on to examining scheduled tasks and services and collecting the relevant logs and history files. Always preserve evidence carefully and collect copies of volatile data when possible. In future articles we will expand on file system forensics, memory analysis and tools that make formal evidence collection and analysis easier.
