
Bug in jury systems used by several US states exposed sensitive personal data

26 November 2025 at 11:00
An easy-to-exploit vulnerability in a jury system made by Tyler Technologies exposed the personally identifiable data of jurors, including names, home addresses, emails, and phone numbers.

Command and Control (C2): Using Browser Notifications as a Weapon

26 November 2025 at 10:16

Welcome back, my aspiring hackers!

Nowadays, we often discuss the importance of protecting our systems from malware and sophisticated attacks. We install antivirus software, configure firewalls, and maintain vigilant security practices. But what happens when the attack vector isn’t a malicious file or a network exploit, but rather a legitimate browser feature you’ve been trusting?

This is precisely the threat posed by a new command-and-control platform called Matrix Push C2. This browser-native, fileless framework leverages push notifications, fake alerts, and link redirects to target victims. The entire attack occurs through your web browser, without first infecting your system through traditional means.

In this article, we will explore the anatomy of a browser-based attack and investigate how Matrix Push C2 weaponizes it. Let's get rolling!

The Anatomy of a Browser-Based Attack

Matrix Push C2 abuses the web push notification system, a legitimate browser feature that websites use to send updates and alerts to users who have opted in. Attackers first trick users into allowing browser notifications through social engineering on malicious or compromised websites.

Once a user subscribes to the attacker's notifications, the attacker can push out fake error messages or security alerts at will, and they look scarily real. These messages appear to come from the operating system or trusted software, complete with official-sounding titles and icons.

The fake alerts might warn about suspicious logins to your accounts, claim that your browser needs an urgent security update, or suggest that your system has been compromised and requires immediate action. Each notification includes a convenient “Verify” or “Update” button that, when clicked, takes the victim to a bogus site controlled by the attackers. This site might be a phishing page designed to steal credentials, or it might attempt to trick you into downloading actual malware onto your system. Because this whole interaction is happening through the browser’s notification system, no traditional malware file needs to be present on the system initially. It’s a fileless technique that operates entirely within the trusted confines of your web browser.
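
To make the mechanics concrete, here is a minimal sketch of the server side of standard Web Push using the pywebpush Python library (my own illustration; BlackFog's report does not describe Matrix Push C2's internals, and every value below is a placeholder). Once a browser hands over a push subscription, whoever holds that subscription can send it arbitrary notification payloads:

# A minimal sketch of the legitimate Web Push mechanism that gets abused here.
# Assumes: pip install pywebpush, plus subscription details captured when the
# victim's browser granted notification permission. All values are placeholders.
from pywebpush import webpush

subscription = {
    "endpoint": "https://push.example.com/send/EXAMPLE-SUBSCRIPTION-ID",
    "keys": {"p256dh": "BASE64_PUBLIC_KEY", "auth": "BASE64_AUTH_SECRET"},
}

webpush(
    subscription_info=subscription,
    data='{"title": "Security alert", "body": "Suspicious login detected"}',
    vapid_private_key="vapid_private_key.pem",         # sender's VAPID key file
    vapid_claims={"sub": "mailto:admin@example.com"},   # required VAPID claim
)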

Inside the Attacker’s Command Center

Matrix Push C2 is offered as a malware-as-a-service kit, sold to other threat actors through crimeware channels, typically Telegram and cybercrime forums. The pricing follows a tiered subscription model that makes it accessible to criminals at various levels of sophistication. According to BlackFog, Matrix Push C2 costs approximately $150 for one month, $405 for three months, $765 for six months, and $1,500 for a full year. Payments are accepted in cryptocurrency, and buyers communicate directly with the operator for access.

From the attacker’s perspective, the interface is intuitive. The campaign dashboard displays metrics like total clients, delivery success rates, and notification interaction statistics.

Source: BlackFog

As soon as a browser is enlisted by accepting the push notification subscription, it reports data back to the command-and-control server.

Source: BlackFog

Matrix Push C2 can detect the presence of browser extensions, including cryptocurrency wallets like MetaMask, identify the device type and operating system, and track user interactions with notifications. Essentially, as soon as the victim permits the notifications, the attacker gains a telemetry feed from that browser session.

Social Engineering at Scale

The core of the attack is social engineering, and Matrix Push C2 comes loaded with configurable templates to maximize the credibility of its fake messages. Attackers can easily theme their phishing notifications and landing pages to impersonate well-known companies and services. The platform includes pre-built templates for brands such as MetaMask, Netflix, Cloudflare, PayPal, and TikTok, each designed to look like a legitimate notification or security page from those providers.

Source: BlackFog

Because these notifications appear in the official notification area of the device, users may assume their own system or applications generated the alert.

Defending Against Browser-Based Command and Control

As cyberwarriors, we must adapt our defensive strategies to account for this new attack vector. The first line of defense is user education and awareness. Users need to understand that browser notification permission requests should be treated with the same skepticism as requests to download and run executable files. Just because a website asks for notification permissions doesn’t mean you should grant them. In fact, most legitimate websites function perfectly well without push notifications, and the feature is often more of an annoyance than a benefit. If you believe that your team needs to update their skills for current and upcoming threats, consider our recently published Security Awareness and Risk Management training.

Beyond user awareness, technical controls can help mitigate this threat. Browser policies in enterprise environments can be configured to block notification permissions by default or to whitelist only approved sites. Network security tools can monitor for connections to known malicious notification services or suspicious URL shortening domains.
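
For example, on Chrome or Chromium, a managed policy file can block notification prompts by default and allow only the sites you approve. This is a sketch using the standard DefaultNotificationsSetting and NotificationsAllowedForUrls policies; the file path and allowed URL below are illustrative and will differ in your environment:

/etc/opt/chrome/policies/managed/notifications.json

{
  "DefaultNotificationsSetting": 2,
  "NotificationsAllowedForUrls": [
    "https://mail.example.com"
  ]
}

A value of 2 blocks notifications everywhere except the explicitly allowed URLs.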

Summary

The fileless, cross-platform nature of this attack makes it particularly dangerous and difficult to detect using traditional security tools. However, by combining user awareness, proper browser configuration, and anti-data exfiltration technology, we can defend against this threat.

In this article, we briefly explored how Matrix Push C2 operates. Understanding it is the first step in protecting yourself and your organization from this emerging attack vector.

Using Artificial Intelligence (AI) in Cybersecurity: Creating a Custom MCP Server For Log Analysis

5 November 2025 at 08:33

Welcome back, aspiring cyberwarriors!

In our previous article, we examined the architecture of MCP and explained how to get started with it. Hundreds of MCP servers have been built for different services and tasks—some are dedicated to cybersecurity activities such as reverse engineering or reconnaissance. Those servers are impressive, and we’ll explore several of them in depth here at Hackers‑Arise.

However, before we start “playing” with other people’s MCP servers, I believe we should first develop our own. Building a server ourselves lets us see exactly what’s happening under the hood.

For that reason, in this article, we’ll develop an MCP server for analyzing security logs. Let’s get rolling!

Step #1: Fire Up Your Kali

In this tutorial, I will be using the Gemini CLI with MCP on Kali Linux. You can install Gemini using the following command:

kali> sudo npm install -g @google/gemini-cli

Now, we should have a working AI assistant, but it doesn’t yet have access to any of our security tools.

Step #2: Create a Security Operations Directory Structure

Before we start configuring MCP servers, let’s set up a proper directory structure for our security operations. This keeps everything organized and makes it easier to manage permissions and access controls.

Create a dedicated directory for security analysis work in your home directory.

kali> mkdir -p ~/security-ops/{logs,reports,malware-samples,artifacts}

This creates a security-ops directory with subdirectories for logs, analysis reports, malware samples, and other security artifacts.

Let’s also create a directory to store any custom MCP server configurations we build.

kali> mkdir -p ~/security-ops/mcp-servers
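
We won't build a custom server in this tutorial, but as a preview of what could eventually live in this directory, here is a minimal sketch of a log-analysis tool written with the official Python MCP SDK's FastMCP class (my own illustration; it assumes pip install mcp and is not used in the steps below):

# minimal_log_mcp.py - a minimal sketch of a custom MCP server (not used below)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("log-analyzer")

@mcp.tool()
def count_failed_logins(path: str) -> int:
    """Count 'Failed password' lines in a log file under ~/security-ops."""
    with open(path) as log:
        return sum(1 for line in log if "Failed password" in line)

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport, which Gemini CLI can launch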

For testing purposes, let’s create some sample log files we can analyze. In a real environment, you’d be analyzing actual security logs from your infrastructure.

Firstly, let’s create a sample web application firewall log.

kali> vim ~/security-ops/logs/waf-access.log

This sample log contains various types of suspicious activity, including SQL injection attempts, directory traversal, authentication failures, and XSS attempts. We’ll use this to demonstrate MCP’s log analysis capabilities.
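
For illustration only (these are made-up entries, not the article's actual file), the lines might look something like this:

203.0.113.45 - - [05/Nov/2025:09:14:02 +0000] "GET /products.php?id=1' OR '1'='1 HTTP/1.1" 403 512
203.0.113.45 - - [05/Nov/2025:09:14:37 +0000] "GET /../../../../etc/passwd HTTP/1.1" 403 498
198.51.100.23 - - [05/Nov/2025:09:16:10 +0000] "POST /wp-login.php HTTP/1.1" 401 231
203.0.113.45 - - [05/Nov/2025:09:20:11 +0000] "GET /search?q=<script>alert(1)</script> HTTP/1.1" 403 430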

Let’s also create a sample authentication log.

kali> vim ~/security-ops/logs/auth.log
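
Again, purely for illustration (hypothetical entries), a brute-force pattern in this log might look like:

Nov  5 09:18:01 web01 sshd[2214]: Failed password for invalid user admin from 198.51.100.23 port 51122 ssh2
Nov  5 09:18:04 web01 sshd[2214]: Failed password for invalid user admin from 198.51.100.23 port 51122 ssh2
Nov  5 09:18:08 web01 sshd[2214]: Failed password for invalid user admin from 198.51.100.23 port 51122 ssh2
Nov  5 09:19:30 web01 sshd[2290]: Accepted password for deploy from 198.51.100.23 port 51240 ssh2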

Now we have some realistic security data to work with. Let’s configure MCP to give Gemini controlled access to these files.

Step #3: Configure MCP Server for Filesystem Access

The MCP configuration file lives at ~/.gemini/settings.json. This JSON file tells Gemini CLI which MCP servers are available and how to connect to them. Let’s create our first MCP server configuration for secure filesystem access.

Check if the .gemini directory exists, and create it if it doesn’t.

kali> mkdir -p ~/.gemini

Now edit the settings.json file. We’ll start with a basic filesystem MCP server configuration.

{
  "mcpServers": {
    "security-filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/home/YOURUSERNAME/security-ops"
      ],
      "env": {}
    }
  }
}

This sets up a filesystem MCP server with access restricted to our security-ops directory. First, it uses npx to run the MCP server, which means it will automatically download and execute the official filesystem server from the Model Context Protocol project. The -y flag tells npx to proceed without prompting, and the server-filesystem package is the official MCP server for file operations. Second, and most critically, we're explicitly restricting access to the directory we passed as an argument (replace YOURUSERNAME with your username; on a default Kali install this is /home/kali/security-ops). The filesystem server will refuse to access any files outside this directory tree, even if Gemini tries to. This is defense in depth, ensuring the AI cannot accidentally or maliciously access sensitive system files.

Now, let’s verify that the MCP configuration is valid and the server can connect. Start Gemini CLI again.

kali> gemini

After running, we can see that 1 MCP server is in use and Gemini is running in the required directory.

Now, use the /mcp command to list configured MCP servers.

/mcp list

You should see output showing the security-filesystem server with a “ready” status. If you see “disconnected” or an error, double-check your settings.json file for typos and check if you have nodejs, npm, and npx installed.

Now let’s test the filesystem access by asking Gemini to read one of our security logs. This demonstrates that MCP is working and Gemini can access files through the configured server.

> Read the file ~/security-ops/logs/waf-access.log and tell me what security events are present

Pretty clear summary. The key thing to understand here is that Gemini itself doesn’t have direct filesystem access. It’s asking the MCP server to read the file on its behalf, and the MCP server enforces the security policy we configured.

Step #4: Analyzing Security Logs with Gemini and MCP

Now that we have MCP configured for filesystem access, let’s do some real security analysis. Let’s start by asking Gemini to perform a comprehensive analysis of the web application firewall log we created earlier.

> Analyze ~/security-ops/logs/waf-access.log for attack patterns. For each suspicious event, identify the attack type, the source IP, and assess the severity. Then provide recommendations for defensive measures.

The analysis might take a few seconds as Gemini processes the entire log file. When it completes, you’ll get a detailed breakdown of the security events along with recommendations like implementing rate limiting for the attacking IPs, ensuring your WAF rules are properly configured to block these attack patterns, and investigating whether any of these attacks succeeded.

Now let’s analyze the authentication log to identify potential brute force attacks.

> Read ~/security-ops/logs/auth.log and identify any brute force authentication attempts. Report the attacking IP, number of attempts, timing patterns, and whether the attack was successful.

Let’s do something more advanced. We can ask Gemini to correlate events across multiple log files to identify coordinated attack patterns.

> Compare the events in ~/security-ops/logs/waf-access.log and ~/security-ops/logs/auth.log. Do any IP addresses appear in both logs? If so, describe the attack campaign and create a timeline of events.

The AI generated a formatted timeline of the attack showing the progression from SSH attacks to web application attacks, demonstrating how the attacker switched tactics after the initial approach failed.

Summary

MCP, combined with Gemini’s AI capabilities, serves as a powerful force multiplier. It enables us to automate routine analysis tasks, instantly correlate data from multiple sources, leverage AI for pattern recognition and threat hunting, and retain full transparency and control over the entire process.

In this tutorial, we configured an MCP server for file system access and tested it using sample logs.

Keep returning, aspiring hackers, as we continue to explore MCP and the application of artificial intelligence in cybersecurity.

The post Using Artificial Intelligence (AI) in Cybersecurity: Creating a Custom MCP Server For Log Analysis first appeared on Hackers Arise.

Security Operations Center (SOC): Getting Started with SOC

31 October 2025 at 13:17

Welcome back, aspiring cyberwarriors!

In today's highly targeted environment, a well-designed Security Operations Center (SOC) isn't just an advantage – it's essential for a business's survival. In addition, the job market has far more blue-team jobs than red-team jobs, and getting into a SOC is often touted as one of the more accessible entry points into cybersecurity.

This article will delve into some of the key concepts behind a SOC.

Step #1: Purpose and Components

The core purpose of a Security Operations Center is to detect, analyze, and respond to cyber threats in real time, thereby protecting an organization’s assets, data, and reputation. To achieve this, a SOC continuously monitors logs, alerts, and telemetry from networks, endpoints, and applications, maintaining constant situational awareness.

Detection involves identifying four key security concerns.

Vulnerabilities are weaknesses in software or operating systems that attackers can exploit to act beyond their authorized permissions. For example, the SOC might find Windows computers that need patches for published vulnerabilities. While patching is not strictly the SOC's responsibility, unfixed vulnerabilities impact company-wide security.

Unauthorized activity occurs when attackers use compromised credentials to access company systems. Quick detection is important before damage occurs, using clues like geographic location to identify suspicious logins.

Policy violations happen when users break security rules designed to protect the company and ensure compliance. These violations vary by organization but might include downloading pirated media or transmitting confidential files insecurely.

Intrusions involve unauthorized access to systems and networks, such as attackers exploiting web applications or users getting infected through malicious websites.

Once incidents are detected, the SOC supports the incident response process by minimizing impact and conducting root cause analysis alongside the incident response team.

Step #2: Building a Baseline

Before you can detect threats, you must first understand what “normal” looks like in your environment. This is the foundation upon which all SOC operations are built.

Your baseline should include detailed documentation of:

Network Architecture: Map out all network segments, VLANs, DMZs, and trust boundaries. Understanding how data flows through your network is critical for detecting lateral movement and unauthorized access attempts. Document which systems communicate with each other, what protocols they use, and what ports are typically open.

Normal Traffic Patterns: Establish what typical network traffic looks like during different times of day, days of the week, and during special events like month-end processing or quarterly reporting. This includes bandwidth utilization, connection counts, DNS queries, and external communications.

User Behavior Baselines: Document normal user activities, including login times, typical applications accessed, data transfer volumes, and geographic locations. For example, if your accounting department typically logs in between 8 AM and 6 PM local time, a login at 3 AM should trigger an investigation. Similarly, if a user who normally accesses 5-10 files per day suddenly downloads 5,000 files, that's a deviation worth investigating (see the short sketch after this list).

System Performance Metrics: Establish normal CPU usage, memory consumption, disk I/O, and process execution patterns for critical systems. Cryptocurrency miners, rootkits, and other malware often create performance anomalies that stand out when compared against baselines.
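
To make the user-behavior example above concrete, here is a minimal sketch (my own illustration, not a production detection rule) of the kind of baseline check a SOC might automate:

# A minimal sketch: flag logins that fall outside a documented baseline window.
from datetime import datetime

BASELINE_HOURS = range(8, 18)  # e.g., accounting dept works 8 AM to 6 PM local

def is_anomalous_login(login_time: datetime) -> bool:
    """Return True when a login occurs outside the baseline working hours."""
    return login_time.hour not in BASELINE_HOURS

# A 3 AM login deviates from the baseline and should trigger an investigation.
print(is_anomalous_login(datetime(2025, 10, 31, 3, 0)))  # True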

Step #3: The Role of People

Despite increasing automation, human oversight remains essential in SOC operations. Security solutions generate numerous alerts that create significant noise. Without human intervention, teams waste time and resources investigating irrelevant issues.

The SOC team operates through a tiered analyst structure with supporting roles.

Level 1 Analysts serve as first responders, performing basic alert triage to determine if detections are genuinely harmful and reporting findings through proper channels. When detections require deeper investigation, Level 2 Analysts correlate data from multiple sources to conduct thorough analysis. Level 3 Analysts are experienced professionals who proactively hunt for threat indicators and lead incident response activities, including containment, eradication, and recovery of critical severity incidents escalated from lower tiers.

Supporting these analysts are Security Engineers who deploy and configure the security solutions the team relies on. Detection Engineers develop the security rules and logic that enable these solutions to identify harmful activities, though Level 2 and 3 Analysts sometimes handle this responsibility. The SOC Manager oversees team processes, provides operational support, and maintains communication with the organization’s CISO regarding security posture and team efforts.

Step #4: The Detection-to-Response Pipeline

When a potential security incident is detected, every second counts. Your SOC needs clearly defined processes for triaging, investigating, and responding to alerts.

This pipeline typically follows these stages:

Alert Triage: Not all alerts are created equal. Your SOC analysts must quickly determine which alerts represent genuine threats versus false positives. Implement alert enrichment that automatically adds context—such as asset criticality, user risk scores, and threat intelligence—to help analysts prioritize their work. Use a tiered priority system (P1-Critical, P2-High, P3-Medium, P4-Low) based on potential business impact.

Elastic Security Priority List

Investigation and Analysis: Once an alert is prioritized, analysts must investigate to determine the scope and nature of the incident. This requires access to multiple data sources, forensic tools, and the ability to correlate events across time and systems. Document your investigation procedures for common scenarios (phishing, malware infection, unauthorized access) to ensure consistent and thorough analysis. Every investigation should answer the essential questions: What happened? Where did it occur? When did it take place? Why did it happen? And how did it unfold?

Containment and Eradication: When you confirm a security incident, your first priority is containment to prevent further damage. This might involve isolating infected systems, disabling compromised accounts, or blocking malicious network traffic.

Recovery and Remediation: After eradicating the threat, safely restore affected systems to normal operation. This may involve rebuilding compromised systems from clean backups, rotating credentials, patching vulnerabilities, and implementing additional security controls.

Post-Incident Review: Every significant incident should conclude with a lessons-learned session. What went well? What could be improved? Were our playbooks accurate? Did we have the right tools and access? Use these insights to update your procedures, improve your detection capabilities, and refine your security controls.

Step #5: Technology

At a minimum, a functional SOC needs several essential technologies working together:

SIEM Platform: The central nervous system of your SOC that aggregates, correlates, and analyzes security events from across your environment. Popular options include Splunk, for which we offer a dedicated course.

Splunk

Endpoint Detection and Response (EDR): Provides deep visibility into endpoint activities, detects suspicious behavior, and enables remote investigation and response.

Firewall: A firewall provides network-level security, acting as a barrier between your internal network and external networks (such as the Internet). It monitors incoming and outgoing network traffic and filters out unauthorized traffic.

Besides those core platforms, other security solutions such as antivirus, SOAR, and various niche tools each play distinct roles. Each organization selects technology that matches its specific requirements, so no two SOCs are exactly alike.

Summary

A Security Operations Center (SOC) protects organizations from cyber threats. It watches networks, computers, and applications to find problems like security weaknesses, unauthorized access, rule violations, and intrusions.

A good SOC needs three things: understanding what normal activity looks like, having a skilled team with clear roles, and following a structured process to handle threats. The team works in levels – starting with basic alert checking, then deeper investigation, and finally threat response and recovery.

If you want to get a deep understanding of SIEM and SOC workflow, consider our SOC Analyst Lvl 1 course.

The post Security Operations Center (SOC): Getting Started with SOC first appeared on Hackers Arise.

Hacking Artificial Intelligence (AI): Hijacking AI Trust to Spread C2 Instructions

30 October 2025 at 10:22

Welcome back, aspiring cyberwarriors!

We’ve come to treat AI assistants like ChatGPT and Copilot as knowledgeable partners. We ask questions, and they provide answers, often with a reassuring sense of authority. We trust them. But what if that very trust is a backdoor for attackers?

This isn’t a theoretical threat. At the DEF CON security conference, offensive security engineer Tobias Diehl delivered a startling presentation revealing how he could “poison the wells” of AI. He demonstrated that attackers don’t need to hack complex systems to spread malicious code and misinformation; they just need to exploit the AI’s blind trust in the internet.

Let’s break down Tobias Diehl’s work and see what lessons we can learn from it.

Step #1: AI’s Foundational Flaw

The core of the vulnerability Tobias discovered is really simple. When a user asks Microsoft Copilot a question about a topic outside its original training data, it doesn’t just guess. It performs a Bing search and treats the top-ranked result as its “source of truth.” It then processes that content and presents it to the user as a definitive answer.


This is a critical flaw. While Bing’s search ranking algorithm has been refined for over a decade, it’s not infallible and can be manipulated. An attacker who can control the top search result for a specific query can effectively control what Copilot tells its users. This simple, direct pipeline from a search engine to an AI’s brain is the foundation of the attack.

Step #2: Proof Of Concept

Tobias leveraged a concept he calls a “data void,” which he describes as a “search‑engine vacuum.” A data void occurs when a search term exists but there is little or no relevant, up‑to‑date content available for it. In such a vacuum, an attacker can more easily create and rank their own content. Moreover, data voids can be deliberately engineered.

Using the proof of concept Tobias built around Microsoft's Zero Day Quest event, we can see how readily our trust can be manipulated. Zero Day Quest invites security researchers to discover and report high-impact vulnerabilities in Microsoft products. Anticipating a common user query—"Where can I stream Zero Day Quest?"—Tobias began preparing the attack surface. He created a website, https://www.watchzerodayquest.com, containing the following content:

As you can see, the page resembles a typical FAQ, but it includes a malicious PowerShell command. After four weeks, Tobias managed to get the site ranked for this event.

Consequently, a user could receive the following response about Zero Day Quest from Copilot:

At the time of writing, Copilot does not respond that way.

But there are other AI assistants.

And as you can see, some of them easily provide dangerous installation instructions for command‑and‑control (C2) beacons.

Summary

This research shows that AI assistants that rely on real-time search results have a serious weakness: because they automatically trust whatever the search engine ranks first, attackers can manipulate those results and cause serious damage.

The post Hacking Artificial Intelligence (AI): Hijacking AI Trust to Spread C2 Instructions first appeared on Hackers Arise.

ARM Assembly for Hackers: Learning 32-bit Architecture for Exploit Development

17 September 2025 at 12:08

You may have already noticed that ARM processors are everywhere — in phones, routers, smart TVs, and of course, IoT devices. In fact, ARM has become one of the most widely used CPU architectures globally. And just like traditional PCs, ARM-based IoT devices are vulnerable to classic exploitation techniques — such as buffer overflows.

If you’re interested in exploit development or reverse engineering malware, learning assembly is a foundational skill. Understanding how code interacts with memory at the lowest level is essential for identifying vulnerabilities and crafting effective exploits.

Given the widespread use of ARM devices and their often-overlooked security vulnerabilities, attacks targeting them are likely to increase. And as a cyberwarrior, you need to understand how these systems work at the assembly level to analyze, secure, or exploit them effectively.

In previous articles, we explored the ARM CPU architecture and got started with 64-bit ARM assembly using the AArch64 instruction set — the modern standard on 64-bit Raspberry Pi OS.

Now, to better understand the foundations of ARM, we’ll shift our focus to 32-bit ARM assembly (ARMv7-A).

What is Assembly Language?

Assembly (ASM or asm) is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture’s machine code instructions. Assembly usually has one statement per machine instruction (1:1). Because ASM depends on the machine code instructions, each assembly language is specific to a particular computer architecture.

In this case, we’re going to explore 32-bit ARM assembly.

Why Learn 32-bit ARM Assembly?

From industrial controllers and medical equipment to consumer electronics, most embedded and IoT devices still run 32-bit ARM CPUs. These systems aren’t upgraded like desktops — they have long lifecycles, limited memory, and rely on smaller, simpler instruction sets like ARMv7. For example, routers, IP cameras, and smart home hubs commonly run on ARMv7-based SoCs (System-on-Chip), often with Linux kernels and BusyBox environments.

Smart IP camera on ARM Cortex-A35 processors (Amlogic C305X and C308X)

Unlike modern desktops or cloud systems, many 32-bit ARM devices in the field lack protection, run as root by default, and don’t have regular updates or patch cycles. To exploit their flaws, you need to understand their instruction set, calling conventions, memory layout, and binary interface — all of which require 32-bit ARM assembly.

Many malware samples in the wild (Mirai, Mozi) are compiled for 32-bit ARM. If you’re analyzing real-world threats, you’ll be disassembling 32-bit ARM binaries, not just 64-bit ones.

In addition, 32-bit ARM is easier to learn and great for building a solid foundation. People who know ARM assembly well are rare, so learning it can lead to better job opportunities and keep you on the cutting edge of cybersecurity.

Registers

Registers are small, fast storage locations inside the CPU. When writing assembly, we constantly use these registers to move and manipulate data, so you should be familiar with them.

ARM 32-bit processors provide 16 general-purpose registers. These registers are:

R0-R3: Argument registers (used for function parameters and return values)
R4-R11: General-purpose registers for local variables and temporary storage
R12: Intra-procedure call scratch register (IP)
R13: Stack Pointer (SP) – points to the top of the stack
R14: Link Register (LR) – stores return addresses for function calls
R15: Program Counter (PC) – points to the currently executing instruction

There is also a special register, the CPSR (Current Program Status Register), which holds the condition flags along with information about the processor's current mode and execution state.

Flags

Flags in ARM 32-bit assembly are individual bits in the CPSR that indicate the status of operations in the CPU. The register is 32 bits wide, but the primary flags used are just a few key bits. The main flags of interest are:

N (Negative) flag: Indicates if the result of an operation is negative.
Z (Zero) flag: Indicates if the result of an operation is zero.
C (Carry) flag: Indicates a carry-out or borrow in arithmetic operations.
V (Overflow) flag: Indicates an overflow in signed arithmetic.

These flags enable conditional execution of instructions based on the results of previous operations and are updated by many arithmetic and logical instructions with an “S” suffix, for example, ADDS.

ARM Instructions

ARM instructions follow a predictable pattern that makes disassembly reading more intuitive:

<operation>{<condition>}{S} <destination>, <operand1>, <operand2>

Here are some of the most popular instructions:

MOV – move copies the value from the source to the destination. For example:

MOV R0, R1 ; Copy R1 to R0

; means a comment in Assembly.

LDR – load copies a value from memory into a register. For example:

LDR R0, [R1] ; Load the value from the address in R1 into R0

STR – store copies a value from a register into memory. For example:

STR R0, [R1] ; Store the value in R0 to the address in R1

CMP – compare subtracts the second operand from the first and sets the condition flags without storing the result. For example:

CMP R0, R1 ; Compare R0 with R1 (sets flags, but doesn't store result)

B – branch jumps to a label; with a condition suffix (such as EQ or NE), the branch is taken only if that condition is met. For example:

BEQ label ; Branch to 'label' if equal (Z flag is set)

AND – bitwise AND performs a logical AND on each bit of the operands. For example:

AND R0, R1, R2 ; R0 = R1 AND R2

ORR – bitwise OR performs a logical OR on each bit of the operands. For example:

ORR R0, R1, R2 ; R0 = R1 OR R2

EOR – exclusive OR performs a logical XOR on each bit of the operands. For example:

EOR R0, R1, R2 ; R0 = R1 XOR R2

BIC – bit clear performs R1 AND (NOT R2). For example:

BIC R0, R1, R2 ; R0 = R1 AND NOT R2

PUSH – saves registers onto the stack. For example:

PUSH {R0, R1} ; Push R0 and R1 onto the stack

POP – restores registers from the stack. For example:

POP {R0, R1} ; Pop values into R0 and R1 from the stack

BL – branch with link jumps to a subroutine and saves the return address in LR. For example:

BL myFunction ; Call subroutine 'myFunction'

BX – branch and exchange jumps to an address and switches instruction set if needed. For example:

BX LR ; Return from subroutine (branch to address in LR)

SVC – supervisor call triggers a software interrupt to switch to supervisor mode. For example:

SVC #0 ; Call operating system service

ADR – load address of a label into a register. For example:

ADR R0, label ; Load the address of 'label' into R0
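
Since you will most often meet these instructions while disassembling binaries, here is a minimal Python sketch (my own illustration, assuming the Capstone disassembly engine is installed with pip install capstone) that decodes the machine code for a few of the instructions above:

# A minimal sketch: disassembling a few 32-bit ARM instructions with Capstone.
from capstone import Cs, CS_ARCH_ARM, CS_MODE_ARM

# Little-endian machine code for: MOV R0, R1 / ADD R0, R1, R2 / CMP R0, R1
CODE = b"\x01\x00\xa0\xe1" b"\x02\x00\x81\xe0" b"\x01\x00\x50\xe1"

md = Cs(CS_ARCH_ARM, CS_MODE_ARM)       # 32-bit ARM mode (not Thumb)
for insn in md.disasm(CODE, 0x1000):    # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}")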

Summary

ARM 32-bit assembly might seem daunting at first, but its logical design and consistent instruction format make it more approachable than x86 for many analysts. In this article, we explored the reasons for learning 32-bit assembly, examined flags and registers, and then covered some of the most commonly used instructions.

The post ARM Assembly for Hackers: Learning 32-bit Architecture for Exploit Development first appeared on Hackers Arise.

Automated Password Cracking with BruteForceAI

15 September 2025 at 10:35

Nowadays, security engineers make an effort to get people to use complex passwords, and 2FA is becoming required on more and more platforms. This makes password cracking more time-consuming and sometimes only a first step toward access, but it can still be the hacker’s best entry point to an account or network.

Today, I'd like to talk about a tool that simplifies password cracking by combining automated credential-attack tooling with Large Language Models (LLMs) – BruteForceAI.

BruteForceAI is a tool that automatically identifies login form selectors using AI and then conducts a brute force or password spraying attack in a human-like way.

Step #1: Install BruteForceAI

To get started, we need to clone the repository from GitHub:
kali> git clone https://github.com/MorDavid/BruteForceAI.git
kali> cd BruteForceAI

BruteForceAI requires Python 3.8 or higher. Consider checking the version before continuing:
kali> python --version

In my case, it’s 3.13.5, and now I’m ready to install dependencies:
kali> pip3 install -r requirements.txt

I used the --break-system-packages flag to ignore the environment error. You can use this flag or create a virtual Python environment for this project.

Besides that, I got an error about the sqlite3 version. To fix it, we can install the SQLite development headers:
kali> sudo apt install libsqlite3-dev

For working with browser automation, BruteForceAI uses the Playwright library. We can install it using NPM:

kali> npm install playwright

To work correctly, Playwright needs a browser engine; in this case, I'll use Chromium:

kali> npx playwright install chromium

In the command above, you can see npx. It’s a command-line tool that comes with npm. It temporarily downloads and runs a program directly without adding it permanently to your system.

Step #2: AI Engine Setup

You have two options for the AI analysis engine: local or cloud AI. I have pretty humble hardware for running even small LLMs locally; therefore, I’ll show you how to use the cloud AI option.

There is a platform called Groq that provides access to different LLM models in the cloud through its API. To get started, you just need to sign up and acquire an API key.

Step #3: Prepare Target Lists

First of all, we need to create a file targets.txt and list URLs that contain a login form. In my case, it’ll be a WordPress website.

Before starting to crack, we need to figure out the registered users. For this, I’ve used WPScan and successfully saved all users to the file users.txt. To learn more about WPScan, check this article.
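
For illustration (hypothetical values, not my actual target), the two files might look like this:

targets.txt:
https://www.example.com/wp-login.php

users.txt:
admin
editor
author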

Step #4: Reconnaissance

Before launching attacks, BruteForceAI needs to analyze your targets and understand their login mechanisms.

kali> python3 BruteForceAI.py analyze --urls targets.txt --llm-provider groq --llm-model llama-3.3-70b-versatile --llm-api-key YOUR_KEY

The AI will analyze the target, identify form elements, and store the intelligence in a SQLite database.

Step #5: Online Password Cracking

We’re ready to execute a standard brute-force attack using AI-discovered selectors.

An important aspect that I didn’t mention is the password list. In this case, I’ll be using the 500 worst passwords from Seclists.

kali> python BruteForceAI.py attack --urls targets.txt --usernames users.txt --passwords /usr/share/seclists/Passwords/500-worst-passwords.txt --threads 10

I used the --threads 10 flag, which means the script will run 10 parallel threads (simultaneous tasks) during the attack. But nowadays, such a straightforward brute-force attack will be quickly identified and blocked, so let's see how we can conduct password spraying using BruteForceAI.

kali> python BruteForceAI.py attack --urls targets.txt --usernames users.txt --passwords /usr/share/seclists/Passwords/500-worst-passwords.txt --mode passwordspray --threads 15 --delay 10 --jitter 3 --success-exit

Where:

--mode passwordspray — Uses password spraying mode (tries one password across many accounts before moving to the next password).
--delay 10 — Waits 10 seconds between attempts per thread.
--jitter 3 — Adds up to 3 seconds of random extra delay to avoid detection.
--success-exit — Stops running immediately if a successful login is found.

BruteForceAI will continue from passwords that weren’t checked during the brute-force attack and start spraying.

To make the attack stealthier, we can add a custom User-Agent, play with the delays, and decrease the number of threads. The script will keep running until it has checked every password or found a correct one.

Summary

BruteForceAI is a great tool that makes password attacks much simpler. In this article, we covered how to install BruteForceAI, prepare it for use, conduct reconnaissance, and launch password attacks. Combined with different LLMs, this tool can make password attacks faster and more efficient. In any case, the success of this kind of attack depends on how good your password list is, so consider checking out tools like crunch and cupp.

If you want to improve your password-cracking skills and cybersecurity in general, check out our Master Hacker Bundle. You’ll dive deep into essential skills and techniques like reconnaissance, password cracking, vulnerability scanning, Metasploit 5, antivirus evasion, Python scripting, social engineering, and more.

The post Automated Password Cracking with BruteForceAI first appeared on Hackers Arise.

The One-Man APT with Artificial Intelligence, Part III: From Zero to Local Dominance

By: Smouk
7 September 2025 at 11:07

With in-memory execution and simulated exfiltration already in place, the next step was obvious: persistence. Advanced threats like Koske don’t just run once—they stay alive, blend into the system, and return after every reboot. That’s exactly what I set out to replicate in this phase.

The goal? To see if the AI could not only generate payloads that behave like persistent malware, but also suggest and configure real-world persistence mechanisms like systemd services or .bashrc entries—again, without me writing any code manually.

Let’s see how far the AI can go when asked to survive a reboot.

Simulated Attack Chain: Building Complexity

At this stage, the challenge escalates. Instead of focusing on isolated behaviors like beaconing or exfiltration, I asked the AI to generate a safe, all-in-one payload that could simulate a full attack chain. The idea was to build a structured sequence of actions—like compiling a fake binary, faking persistence, collecting environment data, and retrieving a file—mirroring the complexity of how real APTs like Koske operate.

The AI responded with a well-structured, harmless payload that compiles a dummy C program (fakerootkit), creates a marker file to simulate persistence (persistence_demo.txt), collects system info (cpu_check.txt), and downloads a PDF that stands in for a cryptominer. All of this is packed into a polyglot image that can be triggered with a single command—just like earlier stages.

From here on, each request I make builds on the last, and the behavior becomes increasingly layered. This is where the simulation begins to truly reflect the modular, adaptive structure of a real-world APT—only it’s being built entirely through natural language prompts.

Bypassing AI Limitations: Changing the Assembly Vector

As I continued expanding the complexity of the simulation, I hit a wall: the AI stopped generating polyglot images directly, likely due to internal safety filters. But rather than breaking the experiment’s core rule—no manual payload writing—I took a different approach. I asked the AI to give me a Python script that could generate the image locally.

The result was a clean, minimal script that uses the PIL library to create a basic JPEG image, then appends a harmless shell payload that opens a terminal and runs whoami. The AI provided everything: image generation, payload logic, encoding, and the binary append operation—effectively giving me the same polyglot result, just via a different toolchain.
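
The article doesn't reproduce that script, but a minimal sketch of such a generator might look like the following (my own reconstruction, assuming Pillow is installed; for simplicity the appended payload just runs whoami rather than opening a terminal):

# A minimal sketch of a benign polyglot generator like the one described above.
from PIL import Image

IMAGE = "polyglot_terminal_whoami.jpg"

# 1. Create a small but valid JPEG so the file still opens in image viewers.
Image.new("RGB", (64, 64), color="white").save(IMAGE, "JPEG")

# 2. Append a harmless shell payload after the JPEG data. Image parsers ignore
#    the trailing bytes, but grep -a -A9999 "# PAYLOAD" file | bash extracts
#    everything after the marker and pipes it to a shell.
with open(IMAGE, "ab") as f:
    f.write(b"\n# PAYLOAD\nwhoami\n")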

This moment reflected a real-world tactic perfectly: when direct delivery fails, an APT often falls back to alternative methods like packer-based generation or local compilation. Here, the AI simulated that behavior without being asked to—and kept the flow going.

Payload Assembly Without Manual Scripting

To stay within the bounds of the experiment, I didn’t manually write or alter the payload logic. Instead, I simply copied and pasted the code provided by the AI—line by line—into a local environment, using it exactly as delivered. The full simulated attack chain was now assembled via Python: fake binary compilation, mock persistence, system enumeration, and simulated cryptominer download.

This approach preserved the project’s core rule: I was still not writing code myself—the AI was doing all the work. The only difference was that now, instead of delivering a final image, it handed me the blueprints. And in real-world terms, this mimics the shift from payload delivery to toolkits and builders—exactly the kind of modularity we see in modern APT ecosystems like Koske.

Final Execution: Complete Polyglot Delivery Chain

For this phase, the objective was clear: demonstrate a full local execution chain that accurately reflects the behavior of the targeted APT — but using only safe, demonstrative payloads.

This time, the image wasn’t delivered directly. Due to AI restrictions, I adapted the approach by requesting a Python script that would locally generate the final polyglot image. The script would:

  • Create a simple JPEG file
  • Embed the full simulated attack chain as a shell payload

Once executed, the generated image (polyglot_terminal_whoami.jpg) behaved exactly as expected. Upon triggering it with the terminal command:

grep -a -A9999 "# PAYLOAD" polyglot_terminal_whoami.jpg | bash

The image executed a chain that:

  • Compiled a harmless “fakerootkit” binary
  • Simulated persistence via a timestamped text file
  • Collected CPU information into a local dump
  • Downloaded the PDF (“Linux Basics for Hackers 2 ed”) as a stand-in for staged payload delivery

All steps ran in sequence, without errors, cleanly emulating the kind of behavior observed in staged APT attacks — from initial execution, to local recon, to staged download activity.

Summary

This third stage marked a major technical leap in our emulation of the APT’s behavior. Faced with limitations in image payload generation, we adapted by leveraging Python to produce fully functional polyglot JPEGs locally.

The resulting image executed a complete mock attack chain: compiling a fake binary, simulating persistence, collecting system info, and downloading a decoy PDF — each step carefully reflecting the operational flow of the APT. By shifting to script-based generation while maintaining payload integrity, we advanced our alignment with the adversary’s methodology without compromising control or structure.

There’s something else I haven’t revealed yet — in an upcoming entry, I’ll show how, through the same sequence of prompts used in this project, I was able to obtain a fully functional rootkit for Linux. Stay tuned — I’ll be back soon.

Until next time…

Smouk out!

The post The One-Man APT with Artificial Intelligence, Part III: From Zero to Local Dominance first appeared on Hackers Arise.

Our Blog Has Moved!

Sherpa Intelligence: Your Guide Up a Mountain of Information!

We’ve relocated to https://sherpaintelligence.substack.com/

Join me over at the Sherpa Intelligence Substack! I will be working to move the posts over from Medium and will then keep this solely as an archive.

Subscribe for free or become a Founding Member with a paid subscription!

Get features like:

"What'd I Miss?": a Monday morning publication with a roundup of Information Security and Data Privacy news items from the past weekend.

"Five for Friday": a Friday mid-day newsletter about key Information Security and Data Privacy news items that you may have missed during the week.

Coming soon! More posts with podcasts, OSINT, and other topics TBD!

Go check out sherpaintelligence.substack.com to learn more about what Sherpa Intelligence can do for you!


Transportation Cybersecurity & Data Privacy News Roundup for 2024

27 December 2024 at 11:40

Sherpa Intelligence: Your Guide Up a Mountain of Information!

A roundup of cybersecurity and data privacy news items regarding the transportation industry for the year 2024.
Transportation, as defined for this newsletter, includes maritime, rail, aviation, bus, car, trucking, and more.
This is not a comprehensive list; rather, it highlights items from each month of the year.

January 2024

February 2024

March 2024

April 2024

May 2024

June 2024

July 2024

August 2024

September 2024

October 2024

November 2024

December 2024


Caribbean Information Security and Data Privacy News Roundup

15 December 2024 at 13:53

Sherpa Intelligence: Your Guide Up a Mountain of Information!

News Items from October 1, 2024 — December 15, 2024
  1. The EU and its Latin American & Caribbean partners leverage the responsible use of data
    (The Diplomatic Service of the European Union, October 1st)
  2. Caribbean Digital Transformation Project provides training to boost national cyber security
    (Dominica News Online, October 2nd)
  3. 60% of Caribbean organisations delayed in digital transformation
    (Loop News Jamaica, October 3rd)
  4. Dominica aims to establish Cyber Incident Response Team to strengthen cybersecurity infrastructure
    (Associates Times, October 8th)
  5. Experts: Small businesses in cybersecurity crisis as AI-driven attacks escalate
    (Barbados Today, October 11th)
  6. ‘Urgent threat’ to critical infrastructure, warns cybersecurity expert
    (Barbados Today, October 17th)
  7. Digital is the path to growth in Latin America & the Caribbean
    (The Diplomatic Service of the European Union, November 8th)
  8. Bahamas must “catch up” to global cybersecurity standards, says expert
    (Eyewitness News, November 14th)
  9. Latin American & the Caribbean countries most targeted by phishing attacks in 2021
    (Statista, November 25th)
  10. Caribbean Nations Security Conference (CANSEC) 24: Strengthening Bonds, Securing Futures, United for Regional Security
    (Dialogo Americas, December 13th)
Sherpa Intelligence is accepting new clients! If your organization needs assistance with Information Security Research, Technical Writing or Leadership Development & Technical Training, visit sherpaintel.net for more information.
Image: Trinidad & Tobago, Source: Kreol Magazine

Weekly #InfoSec News Roundup

8 December 2024 at 13:58

Sherpa Intelligence: Your Guide Up a Mountain of Information!

Sunday, December 1, 2024 — Sunday, December 8, 2024

A baker’s dozen of Information Security & Data Privacy news items that you may have missed!

Check out Sherpa Intelligence!
  1. How to ensure safer digital environment in Nigeria
    (The Nation NG, December 1st)
  2. Cyber attack prompts Stoli Group USA bankruptcy filing
    (The Spirits Business, December 2nd)
  3. Indiana begins offering water systems free cyber assessments
    (State Scoop, December 3rd)
  4. United Kingdom facing increased hostile activity in cyberspace, security official warns
    (Reuters, December 3rd)
  5. ‘Aggressive’ Russian cyber attacks boosted Romania’s pro-Moscow presidential candidate
    (France 24, December 4th)
  6. NHS Ransomware Attack: Russian INC Ransom Gang Steals Patient Data
    (HackRead, December 4th)
  7. Growing Cyber Talent in The Bronx
    (City University of New York, December 5th)
  8. Industrial Cyber Security Market Is Booming As Firms Embrace AI And IIoT
    (Forbes, December 5th)
  9. Movie Theater Data Breach Leads to Settlement and Class Action Lawsuits
    (Troutman Pepper, December 6th)
  10. NATO to launch new cyber center by 2028
    (Breaking Defense, December 6th)
  11. Transport for London (TfL) cyber attack cost over £30m to date
    (Computer Weekly, December 6th)
  12. House and Senate defense committees agree on independent cyber force assessment
    (Defense Scoop, December 7th)
  13. From Europe to South Africa: Where Is the World on Cyber Defense?
    (Government Technology, December 8th)
Read more InfoSecSherpa news roundups here!
Sherpa Intelligence is accepting new clients! If your organization needs assistance with Information Security Research, Technical Writing or Leadership Development & Technical Training, visit sherpaintel.net for more information.

Weekly #InfoSec News Roundup

25 November 2024 at 10:09

Sherpa Intelligence: Your Guide Up a Mountain of Information!

Sunday, November 17, 2024 — Sunday, November 24, 2024

A baker’s dozen of Information Security & Data Privacy news items that you may have missed!

Check out Sherpa Intelligence!
  1. Army Cyber AI monitoring tool, Panoptic Junction or PJ, moves to 12-month pilot
    (Defense Scoop, November 18th)
  2. ‘Critical’ cyber vulnerabilities found in many water utilities, warns EPA inspector general
    (State Scoop, November 18th)
  3. Energy Department’s ‘Energy Threat Analysis Center’ cyber threat center goes operational
    (Federal News Network, November 18th)
  4. United Kingdom’s largest water and waste treatment company Thames Water’s IT ‘falling apart’ and is hit by cyber-attacks, sources claim
    (The Guardian, November 18th)
  5. Moody’s rates education sector at ‘high’ cyber risk in 2024
    (K-12 Dive, November 19th)
  6. Rail and pipeline representatives push to dial back Transportation Security Administration’s cyber mandates
    (Cyber Scoop, November 19th)
  7. Note to Industry: Make Spanish Language-Enabled Cybersecurity Tools
    (The Cyber Edge by Signal, November 21st)
  8. Now Hackers Are Using Snail Mail In Cyber Attacks — Here’s How
    (Forbes, November 21st)
  9. The Philippine army is recruiting young tech civilians to fight cyber attacks
    (Rest of World, November 21st)
  10. Report: Media & Entertainment taking longer to recover from cyber attacks
    (Advanced Television, November 21st)
  11. U.S. Coast Guard Sounds Alarm on Cyber Threats from Chinese Port Cranes
    (gCaptain, November 22nd)
  12. Russia ‘aggressive’ and ‘reckless’ in cyber realm and threat to Nato, UK minister to warn
    (The Guardian, November 23rd)
  13. Iranian Handala Hacking Group Claims Cyber Attack on Silicom, Mossad’s Cover Company
    (Sri Lanka Guardian, November 25th)
Read more InfoSecSherpa news roundups here!
Sherpa Intelligence is accepting new clients! If your organization needs assistance with Information Security Research, Technical Writing or Leadership Development & Technical Training, visit sherpaintel.net for more information.

Weekly #InfoSec News Roundup

17 November 2024 at 16:24

Sherpa Intelligence: Your Guide Up a Mountain of Information!

Sunday, November 10, 2024 — Sunday, November 17, 2024

A baker’s dozen of Information Security & Data Privacy news items that you may have missed!

Check out Sherpa Intelligence!
  1. City of Sheboygan, Wisconsin hit by apparent ransomware attack
    (Wisconsin Public Radio, November 11th)
  2. Emphasizing preparedness: The role of out-of-band communications in cyber incident response
    (Marsh, November 11th)
  3. Recent SEC Cyber-Related Enforcement Actions Emphasize the Importance of Robust Disclosure Controls
    (JD Supra/Skadden, Arps, Slate, Meagher & Flom LLP, November 11th)
  4. New United Kingdom Research and Innovation (UKRI)-funded network to bolster UK’s cyber security research ecosystem
    (University of Oxford, November 12th)
  5. Grocery Giant Ahold Delhaize’s Cyber Incident Signals Wider Digital Achilles’ Heel
    (PYMNTS, November 13th)
    Related: Food Lion involved in cyber attack: Issues a statement
    (Davidson Local, November 13th)
  6. Ransomware fiends boast they’ve stolen 1.4TB from US pharmacy network
    (The Register, November 13th)
  7. How Maryland’s bug bounty hackers found cyber vulnerabilities the state couldn’t
    (State Scoop, November 14th)
  8. Cyber breach halts gun background checks in Washington State
    (KREM, November 15th)
  9. Fishing for phishy messages: predicting phishing susceptibility through the lens of cyber-routine activities theory and heuristic-systematic model
    (Nature, November 15th)
  10. Ford recall on a control system cyber issue
    (Control Global, November 15th)
  11. Government of Mexico’s official website claimed by RansomHub gang
    (Cyber News, November 15th)
  12. Keeping Rail, Metro Networks Safe From Cyber Threats
    (Railway Age, November 15th)
  13. Negotiate with Hackers? Buchanan Ingersoll & Rooney Discuss
    (Cyber Magazine, November 16th)
Read more InfoSecSherpa news roundups here!
Sherpa Intelligence is accepting new clients! If your organization needs assistance with Information Security Research, Technical Writing or Leadership Development & Technical Training, visit sherpaintel.net for more information.

Weekly #InfoSec News Roundup

10 November 2024 at 17:34

Sherpa Intelligence: Your Guide Up a Mountain of Information!

Sunday, November 3, 2024 — Sunday, November 10, 2024

A baker’s dozen of Information Security & Data Privacy news items that you may have missed!

Check out Sherpa Intelligence!
  1. Food and Ag-ISAC publishes cyber threat report, broadens scope beyond ransomware
    (Industrial Cyber, November 5th)
  2. Schneider Electric investigating cyber intrusion after threat actor gains access to platform
    (Cybersecurity Dive, November 5th)
  3. USDA, ONCD, NRWA launch initiative to bolster cybersecurity in rural water systems
    (Industrial Cyber, November 5th)
    Note: U.S. Department of Agriculture (USDA), White House Office of the National Cyber Director (ONCD), National Rural Water Association (NRWA)
  4. Cyber-Attack on Microlise Disrupts DHL and Serco Tracking Services
    (InfoSecurity Magazine, November 6th)
  5. Face of Defense: Navy IT Specialist Makes U.S. Women’s Cyber Team
    (U.S. Department of Defense, November 6th)
  6. What Telegram’s recent policy shift means for cyber crime
    (Security Intelligence, November 6th)
  7. Transportation Security Administration (TSA) rule would require cyber risk management for railroads
    (DC Velocity, November 7th)
  8. Another US law firm reaches data breach settlement as cyber risks mount
    (Reuters, November 8th)
  9. As Part of Cyber Workforce Development, DOD Lowers Time-to-Hire for Civilians
    (U.S. Department of Defense, November 8th)
  10. SEC Enforcement Heats up on Key Public Company Topics: Cyber Disclosure, Director Independence and Regulation FD
    (White & Case, November 8th)
  11. Credit cards readers across Israeli stores, gas stations crash in cyberattack
    (The Jerusalem Post, November 10th)
  12. “Knock Knock:” The Cyber Repressive Machinery of the Venezuelan Government, Exposed
    (Caracas Chronicles, November 10th)
  13. North Korean Cyber Group Targets Cryptocurrency Industry with ‘Hidden Risk’ Malware on MacOS
    (Brave NewCoin, November 10th)
Read more InfoSecSherpa news roundups here!
Sherpa Intelligence is accepting new clients! If your organization needs assistance with Information Security Research, Technical Writing or Leadership Development & Technical Training, visit sherpaintel.net for more information.