Nevada’s Trojan Download, Penn’s 1.2M Donor Breach, and the Malware That Kills Your Defenses First
In Nevada, a state employee downloaded what looked like a harmless tool from a search ad. The file had been tampered with, and that single moment opened the door to months of silent attacker movement across more than 60 agencies. That pattern shows up again and again in the latest ColorTokens Threat Intelligence Brief. Attackers rarely break in with […]
The post Nevada’s Trojan Download, Penn’s 1.2M Donor Breach, and the Malware That Kills Your Defenses First appeared first on Security Boulevard.
SCADA (ICS) Hacking and Security: An Introduction to SCADA Forensics
Welcome back, my aspiring SCADA/ICS cyberwarriors!
SCADA (Supervisory Control and Data Acquisition) systems and the wider class of industrial control systems (ICS) run many parts of modern life, including electricity, water, transportation, and manufacturing. These systems were originally built to work in closed environments, not to be exposed to the public Internet. Over the last decade they have been connected more and more to corporate networks and remote services to improve efficiency and monitoring. That change has also made them reachable by the same attackers who target regular IT systems. When a SCADA system is hit by malware, sabotage, or human error, operators must restore service fast. At the same time, investigators need trustworthy evidence to find out what happened and to support legal, regulatory, or insurance processes.
Forensic techniques from traditional IT are helpful, but they usually do not fit SCADA devices directly. Many field controllers run custom or minimal operating systems, lack detailed logs, and expose few of the standard interfaces that desktop forensics relies on. To address that gap, we are starting a focused, practical 3-day course on SCADA forensics. The course is designed to equip you with hands-on skills for collecting, preserving, and analyzing evidence from PLCs, RTUs, HMIs, and engineering workstations.
Today we will explain how SCADA systems are built, what makes forensics in that space hard, and which practical approaches and tools investigators can use today.
Background and SCADA Architecture
A SCADA environment usually has three main parts: the control center, the communications network that connects everything, and the field devices.
The control center contains servers that run the supervisory applications, databases or historians that store measurement data, and operator screens (human-machine interfaces). These hosts look more like regular IT systems and are usually the easiest place to start a forensic investigation.
The network between the control center and field devices varies widely. It can include Ethernet, serial links, cellular radios, or specialized industrial buses. Protocols range from simple serial messages to industrial Ethernet and vendor-specific protocol stacks. That variety makes it harder to collect and interpret network traffic consistently.
Field devices sit at the edge. They include PLCs (programmable logic controllers), RTUs (remote terminal units), and other embedded controllers that handle sensors and actuators. Many of these devices run stripped-down or proprietary firmware, hold little storage, and are designed to operate continuously.
Understanding these layers helps set realistic expectations for what evidence is available and how to collect it without stopping critical operations.

Challenges in SCADA Forensics
SCADA forensics has specific challenges that change how an investigation is done.
First, some field devices are not built for forensics. They often lack detailed logs, have limited storage, and run proprietary software. That makes it hard to find recorded events or to run standard acquisition tools on the device.
Second, availability matters. Many SCADA devices must stay online to keep a plant, substation, or waterworks operating. Investigators cannot simply shut everything down to image drives. This requirement forces use of live-acquisition techniques that gather volatile data while systems keep running.
Third, timing and synchronization are difficult. Distributed devices often have different clocks and can drift. That makes correlating events across a wide system challenging unless timestamps are synchronized or corrected during analysis.
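For example, if a device's clock offset and drift rate have been measured against a trusted reference, timestamps can be corrected during analysis. Below is a minimal Python sketch of that correction; the function and variable names are ours, and the measured values in the example are hypothetical.

```python
from datetime import datetime, timedelta

def correct_timestamp(observed: datetime, offset_s: float,
                      drift_s_per_day: float, measured_at: datetime) -> datetime:
    """Map a device-local timestamp onto the reference clock.

    offset_s        -- offset measured against a trusted time source (seconds)
    drift_s_per_day -- how quickly the device clock gains or loses time
    measured_at     -- when the offset was measured
    """
    elapsed_days = (observed - measured_at).total_seconds() / 86400.0
    total_skew = offset_s + drift_s_per_day * elapsed_days
    return observed - timedelta(seconds=total_skew)

# Hypothetical example: an RTU measured 12.5 s fast on Nov 1, drifting +0.8 s/day
event_time = correct_timestamp(datetime(2025, 11, 10, 14, 3, 27),
                               offset_s=12.5, drift_s_per_day=0.8,
                               measured_at=datetime(2025, 11, 1))
print(event_time)
```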
Finally, organizational and legal issues interfere. Companies are often reluctant to share device details, firmware, or incident records because of safety, reputation, or legal concerns. That slows the development of general-purpose tools and slows learning from real incidents.
All these challenges only increase the value of SCADA forensics specialists. Salary varies by location, experience, and role, but can range from approximately $65,000 to over $120,000 per year.
Real-World Attack Chains
To understand why SCADA forensics matters, it helps to look at how real incidents unfold. The following examples show how a single compromise inside the corporate network can quickly spread into the operational side of a company. In both cases, the attack starts with the compromise of an HR employee’s workstation, which is a common low-privilege entry point. From there, the attacker begins basic domain reconnaissance, such as mapping users, groups, servers, and RDP access paths.
Case 1
In the first path, the attacker discovers that the compromised account has the right to replicate directory data, similar to a DCSync privilege. That allows the extraction of domain administrator credentials. Once the attacker holds domain admin rights, they use Group Policy to push a task or service that creates a persistent connection to their command-and-control server. From that moment, they can access nearly every machine in the domain without resistance. With such reach, pivoting into the SCADA or engineering network becomes a matter of time. In one real scenario, this progression took only weeks before the attackers gained full control and eventually destroyed the domain.
Case 2
The second path shows a different but equally dangerous route. After gathering domain information, the attacker finds that the HR account has RDP access to a BACKUP server, which stores local administrator hashes. They use these hashes to move laterally, discovering that most domain users also have RDP access through an RDG gateway that connects to multiple workstations. From there, they hop across endpoints, including those used by engineers. Once inside engineering workstations, the attacker maps out routes to the industrial control network and starts interacting with devices by changing configurations, altering setpoints, or pushing malicious logic into PLCs.
Both cases end with full access to SCADA and industrial equipment. The common causes are poor segmentation between IT and OT, excessive privileges, and weak monitoring.
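Weak monitoring is also the most fixable of those causes. The replication step in Case 1, for instance, leaves a detectable trace: Windows records Event ID 4662 when directory replication rights are exercised, and those rights are identified by well-known GUIDs. The sketch below scans a hypothetical JSON-lines export of security events for replication access by accounts that are not domain controllers; the export format, field names, and allowed-account list are all assumptions you would adapt to your own logging pipeline.

```python
import json

# Well-known control-access GUIDs for Active Directory replication rights
REPLICATION_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

# Accounts expected to replicate (domain controllers); hypothetical names
ALLOWED_ACCOUNTS = {"DC01$", "DC02$"}

def flag_dcsync(log_path: str):
    """Yield suspicious 4662 events: replication rights used by a non-DC.

    Assumes one JSON object per line with EventID, Properties,
    SubjectUserName, and TimeCreated fields (an assumed export format).
    """
    with open(log_path) as fh:
        for line in fh:
            ev = json.loads(line)
            if ev.get("EventID") != 4662:
                continue
            props = str(ev.get("Properties", "")).lower()
            if any(guid in props for guid in REPLICATION_GUIDS):
                account = ev.get("SubjectUserName", "")
                if account not in ALLOWED_ACCOUNTS:
                    yield account, ev.get("TimeCreated")

for account, when in flag_dcsync("security_events.jsonl"):
    print(f"possible DCSync by {account} at {when}")
```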
Frameworks and Methodologies
A practical framework for SCADA forensics has to preserve evidence and keep the process safe. The basic idea is to capture the most fragile, meaningful data first and leave more invasive actions for later or for offline testing.
Start with clear roles and priorities. You need to know who can order device changes, who will gather evidence, and who is responsible for restoring service. Communication between operations and security must be planned ahead of incidents.
As noted above, capture volatile and remote evidence first, then persistent local data. Volatile data includes memory contents, current register values, and anything stored only in RAM. Remote evidence includes network traffic, historian streams, and operator session logs. Persistent local data includes configuration files, firmware images, and file system contents. Capturing network traffic and historian data early preserves context without touching the device.
A common operational pattern is to use lightweight preservation agents or passive sensors that record traffic and key events in real time. These components should avoid any action that changes device behavior. Heavy analysis and pattern matching happen later on copies of captured data in a safe environment.
When device interaction is required, prefer read-only APIs, documented diagnostic ports, or vendor-supported tools. If hardware-level extraction is necessary, use controlled methods (for example JTAG reads, serial console captures, or bus sniffers) with clear test plans and safety checks. Keep detailed logs of every command and action taken during live acquisition so the evidence chain is traceable.
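As one illustration of traceable live acquisition, the sketch below captures output from a read-only serial console with pyserial, timestamps each line, and records a SHA-256 digest of the capture for the evidence chain. The port name, baud rate, and capture duration are assumptions, and the console must be a documented, read-only diagnostic interface.

```python
# A minimal serial-console capture sketch using pyserial (pip install pyserial).
import hashlib
import time

import serial  # pyserial

def capture_console(port: str = "/dev/ttyUSB0", baud: int = 115200,
                    out_path: str = "console_capture.log",
                    duration_s: int = 60):
    digest = hashlib.sha256()
    with serial.Serial(port, baud, timeout=1) as console, \
         open(out_path, "wb") as out:
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            line = console.readline()  # read only; never write to the port
            if line:
                stamped = f"{time.time():.6f} ".encode() + line
                out.write(stamped)
                digest.update(stamped)
    # Record the hash so the capture can be verified later (chain of custody)
    print("sha256:", digest.hexdigest())

capture_console()
```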
Automation helps, but only if it is conservative. Two-stage approaches are useful, where stage one performs simple, safe preservation and stage two runs deeper analyses offline. Any automated agent must be tested to ensure it never interferes with real-time control logic.

SCADA Network Forensics
Network captures are often the richest, least disruptive source of evidence. Packet captures and flow data show commands sent to controllers, operator actions, and any external systems that are connected to the control network.
Start by placing passive capture points where they can see control traffic without sitting in the critical data path, such as on network mirrors or dedicated taps. Capture both raw packets and derived session logs, and timestamp everything against a reliable time source.
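As a minimal sketch of such a passive sensor, the following uses Scapy to record Modbus/TCP traffic from a mirror interface to a pcap file. The interface name and BPF filter are assumptions to match the site's tap configuration.

```python
# Passive capture sketch using Scapy (pip install scapy).
from scapy.all import sniff, wrpcap

def preserve(count: int = 10000):
    # Listen only; never inject. The BPF filter keeps the capture focused
    # on control traffic (Modbus/TCP uses port 502 by default).
    pkts = sniff(iface="mirror0", filter="tcp port 502", count=count)
    # Scapy stores the kernel's receive timestamp with each packet,
    # so keep the sensor's own clock disciplined (e.g., via NTP/PTP).
    wrpcap("control_segment.pcap", pkts)

if __name__ == "__main__":
    preserve()
```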
Protocol awareness is essential. Industrial protocols like Modbus, DNP3, and vendor-specific protocols carry operational commands; we will cover some of them in the next article, and many more during the course. Parsing these messages into readable audit records makes it much easier to spot abnormal commands, unauthorized writes to registers, or suspicious sequence patterns. Deterministic models, for example state machines that describe allowed sequences of messages, help identify anomalies. But expect normal operations to be noisy and variable. Any model must be trained or tuned to the site's own behavior to reduce false positives.
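To make that concrete, here is a hedged sketch that parses the fixed Modbus/TCP header (the MBAP header plus function code, as defined in the Modbus specification) into an audit record and flags write commands against a per-unit allow-list. The allow-list itself is a hypothetical site policy.

```python
import struct

# Modbus function codes that modify controller state
WRITE_CODES = {5: "Write Single Coil", 6: "Write Single Register",
               15: "Write Multiple Coils", 16: "Write Multiple Registers"}

# Hypothetical site policy: unit IDs that writes are permitted to reach
ALLOWED_WRITE_UNITS = {1, 2}

def audit_frame(payload: bytes):
    """Parse one Modbus/TCP frame (MBAP header + PDU) into an audit record."""
    if len(payload) < 8:
        return None
    # MBAP: transaction id, protocol id, length (big-endian u16s), unit id (u8)
    tid, proto, length, unit = struct.unpack(">HHHB", payload[:7])
    func = payload[7]
    record = {"tid": tid, "unit": unit, "function": func}
    if func in WRITE_CODES:
        record["action"] = WRITE_CODES[func]
        record["violation"] = unit not in ALLOWED_WRITE_UNITS
    return record

# Example: Write Single Register (FC 6) to unit 9, register 100, value 0x00FF
frame = struct.pack(">HHHBBHH", 1, 0, 6, 9, 6, 100, 0x00FF)
print(audit_frame(frame))  # flags a write to a unit outside the allow-list
```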
Network forensics also supports containment. If an anomaly is detected in real time, defenders can ramp up capture fidelity in critical segments and preserve extra context for later analysis. Because many incidents move from corporate IT into OT networks, collecting correlated data from both domains gives a bigger picture of the attacker's path.

Endpoint and Device Forensics
Field devices are the hardest but the most important forensic targets. The path to useful evidence often follows a tiered strategy, where you use non-invasive sources first, then proceed to live acquisition, and finally to hardware-level extraction only when necessary.
Non-invasive collection means pulling data from historians, backups, documented export functions, and vendor tools that allow read-only access. These sources often include configuration snapshots, logged process values, and operator commands.
Live acquisition captures runtime state without stopping the device. Where possible, use the device’s read-only interfaces or diagnostic links to get memory snapshots, register values, and program state. If a device provides a console or API that returns internal variables, collect those values along with timestamps and any available context.
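Where a controller speaks Modbus/TCP, for example, holding registers can be read with a single read-only request (function code 3) and the observation timestamped for the evidence log. The sketch below builds the request by hand rather than relying on any particular client library; the host address, unit ID, and register range are assumptions.

```python
import socket
import struct
import time

def read_holding_registers(host: str, unit: int, start: int, count: int):
    """Issue a single read-only Modbus/TCP request (function code 3)."""
    # MBAP header: transaction id, protocol id, length, unit id; then the PDU
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 3, start, count)
    with socket.create_connection((host, 502), timeout=5) as sock:
        sock.sendall(request)
        # Response: MBAP (7 bytes) + function code + byte count, then data.
        # A production tool would loop until all expected bytes arrive.
        header = sock.recv(9)
        _tid, _proto, _length, _unit, func, nbytes = struct.unpack(">HHHBBB", header)
        if func != 3:
            raise RuntimeError("device returned a Modbus exception response")
        data = sock.recv(nbytes)
    registers = struct.unpack(f">{nbytes // 2}H", data)
    return time.time(), registers  # timestamp the observation for the evidence log

ts, regs = read_holding_registers("192.0.2.10", unit=1, start=0, count=8)
print(ts, regs)
```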
If read-only or diagnostic interfaces are not available or do not contain the needed data, hardware extraction methods come next. This includes connecting to serial consoles, listening on fieldbuses, using JTAG or SWD to read memory, or intercepting firmware during upload processes. These operations require specialized hardware and procedures and must be planned carefully to avoid accidental writes, timing interruptions, or safety hazards.
Interpreting raw dumps is often the bottleneck. Memory and storage can contain mixed content, such as configuration data, program code, encrypted blobs, and timestamps. But there are techniques that can help, including differential analysis (comparing multiple dumps from similar devices), data carving for detectable structures, and machine-assisted methods that separate low-entropy (likely structured) regions from high-entropy (likely encrypted) ones. Comparing captured firmware to a known baseline is a reliable way to detect tampering.
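Both the entropy separation and the baseline comparison are straightforward to prototype. The sketch below computes per-block Shannon entropy over a dump to separate likely-encrypted regions from structured ones, and hashes a captured image against a known-good baseline. The file paths, block size, and entropy thresholds are assumptions; the thresholds in particular are rough heuristics, not calibrated values.

```python
import hashlib
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits of entropy per byte (0.0 = constant, 8.0 = uniformly random)."""
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def scan(path: str, block_size: int = 4096):
    """Label each block: high entropy suggests compression or encryption,
    low entropy suggests configuration tables or padding."""
    with open(path, "rb") as fh:
        offset = 0
        while block := fh.read(block_size):
            h = shannon_entropy(block)
            label = ("encrypted/compressed?" if h > 7.5
                     else "structured" if h < 6.0 else "code?")
            print(f"0x{offset:08x}  {h:4.2f}  {label}")
            offset += len(block)

def matches_baseline(dump: str, baseline: str) -> bool:
    """Fast tamper check: does the captured image match the known-good build?"""
    def sha256(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()
    return sha256(dump) == sha256(baseline)

scan("plc_dump.bin")
print("baseline match:", matches_baseline("plc_dump.bin", "vendor_v2.1.bin"))
```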
Where possible, create an offline test environment that emulates the device and process so investigators can replay traffic, exercise suspected malicious inputs, and validate hypotheses without touching production hardware.
SCADA Forensics Tooling
Right now the toolset is mixed. Investigators use standard forensic suites for control-center hosts, packet-capture and IDS tools extended with industrial protocol parsers for networks, and bespoke hardware tools or vendor utilities for field devices. Many useful tools exist, but most are specific to a vendor, a protocol, or a device family.
A practical roadmap for better tooling includes three points. First, create and adopt standardized formats for logging control-protocol events and for preserving packet captures with synchronized timestamps. Second, build non-disruptive acquisition primitives that work across device classes: ways to read key memory regions, configurations, and program images without stopping operation. Third, develop shared anonymized incident datasets that let researchers validate tools against realistic behaviors and edge cases.
In the meantime, it's important to combine several approaches: maintain high-quality network capture, work with vendors to understand diagnostic interfaces, and prepare hardware tools and safe extraction procedures, documenting everything along the way. Establish and test standard operating procedures in advance so that when an incident happens the team acts quickly and consistently.
Conclusion
Attacks on critical infrastructure are rising, and SCADA forensics still trails IT forensics because field devices are often proprietary, have limited logging, and cannot be taken offline. This article outlined those gaps and the practical actions that close them: preserve network and historian data early, prefer read-only device collection, enforce strict IT/OT segmentation, reduce privileges, and rehearse incident response. In the next article, we will look at different protocols to give you a better idea of how everything works.
To support hands-on learning, our 3-day SCADA Forensics course, starting in November, uses realistic ICS network topologies, breach simulations, and labs to teach you how to reconstruct attack chains, identify IOCs, and analyze artifacts on PLCs, RTUs, engineering workstations, and HMIs.
During the course you will use common forensic tools to complete exercises and focus on safe, non-disruptive procedures you can apply in production environments.
Learn more: https://hackersarise.thinkific.com/courses/scada-forensics
The post SCADA (ICS) Hacking and Security: An Introduction to SCADA Forensics first appeared on Hackers Arise.
Understanding Network Traffic Flow and Segment Analysis
With every webpage loaded, email sent, or video streamed, network traffic takes a complex journey across multiple infrastructure nodes. From the device to the destination, data packets travel through various gateways, networks, routers, switches, and service providers along the way. Understanding the network traffic paths and segments along the journey reveals much about performance,…
The post Understanding Network Traffic Flow and Segment Analysis appeared first on Exoprise.
Improve End-to-End Visibility With Network Segment Analysis
In today's digital landscape, maintaining seamless connectivity is a priority for most organizations. However, Internet Service Provider (ISP), Internet, and Software-Defined Wide Area Network (SDWAN) performance issues can severely impact operations, frustrate end-users, and prove costly when downtime occurs. Recognizing these challenges organizations face, we are excited to introduce our newest…
The post Improve End-to-End Visibility With Network Segment Analysis appeared first on Exoprise.
Contain Breaches and Gain Visibility With Microsegmentation
Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.
Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.
Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.
Breach Landscape and Impact of Ransomware
Historically, security solutions have focused on the data center, but new attack targets have emerged with enterprises moving to the cloud and introducing technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but it has also become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into the traffic flows that connected applications, systems and devices communicating across the network. However, they were not intended to contain and stop the spread of breaches.
Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.
In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.
Organizations Want Visibility, Control and Consistency
Security teams focused on breach containment and prevention, hybrid cloud infrastructure, and application security are expressing their concerns, and three objectives have emerged as vital for them.
First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.
Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.
Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.
Microsegmentation Restricts Lateral Movement to Mitigate Threats
Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.
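Conceptually, the policy model is a default-deny allow-list over labeled workloads. The short Python sketch below illustrates the idea only; it is not Illumio's or IBM's actual policy engine, and the labels, ports, and rules are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_label: str   # e.g., "web-tier"
    dst_label: str   # e.g., "db-tier"
    port: int

# Hypothetical allow-list: only these flows may traverse the network
ALLOW_RULES = {
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
}

def is_allowed(src_label: str, dst_label: str, port: int) -> bool:
    """Default-deny: a flow passes only if an explicit rule permits it."""
    return Rule(src_label, dst_label, port) in ALLOW_RULES

print(is_allowed("web-tier", "app-tier", 8443))  # True
print(is_allowed("web-tier", "db-tier", 5432))   # False: lateral move blocked
```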
The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.
If existing detection solutions fail and security teams lack granular segmentation, malicious software can enter their environment, move laterally, reach high-value applications, and exfiltrate critical data, leading to catastrophic outcomes.
Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare for the inevitable.
IBM Launches Segmentation Security Services
In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.
AVS will walk you through a guided experience to align your stakeholders on strategy and objectives, define the schema to visualize desired workloads and devices and build the segmentation policies to govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and solutions deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications. This includes health and maintenance, policy and configuration management, service governance and vendor management.
IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution. Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.
With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.
Start Your Segmentation Journey
IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.
The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.