
Security Update: Critical RCE in React Server Components & Next.js (CVE-2025-55182)

By: Detectify
5 December 2025 at 04:22

A Critical Remote Code Execution (RCE) vulnerability, identified as CVE-2025-55182, has been discovered in Next.js applications utilizing React Server Components (RSC) and Server Actions. This vulnerability stems from insecure deserialization within the underlying “Flight” protocol used by React. Unauthenticated remote attackers can exploit this flaw to execute arbitrary code on the server, potentially leading to a complete compromise of the application and underlying system.

Given the widespread adoption of Next.js and the critical severity of the flaw (CVSS 10.0), immediate action is required.

Affected Products

The vulnerability affects the React Server Components ecosystem, which is heavily integrated into modern frameworks like Next.js. Specifically, it impacts the react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack packages.

Affected Versions:

  • React Server Components: Versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0.
  • Next.js: Applications using App Router (Next.js 15.x, 16.x) or experimental Server Actions are likely affected by default.

Vulnerability Details

CVE-2025-55182 is an insecure deserialization vulnerability that occurs at “Server Function endpoints.”

The flaw exists because the server-side handler for the React “Flight” protocol unsafely deserializes payloads from HTTP requests. The server fails to properly validate serialized input before processing it. An attacker can trigger this vulnerability by sending a specially crafted POST request to the root path containing:

  1. Specific Next-Action headers.
  2. Malformed multipart data payloads.

When processed, this malformed payload triggers the insecure deserialization, allowing the attacker to inject and execute malicious code remotely.

Detection

Detectify customers can now test whether their applications are exposed to this RCE.

The vulnerability assessment released by Detectify checks for the presence of the insecure deserialization flaw by sending a specially crafted POST request to the root path with Next-Action headers and malformed multipart data. The test safely identifies the vulnerability by observing specific error responses from the server that confirm the deserialization failure, without executing malicious code.

Mitigation

Upgrade Immediately: The most effective mitigation is to upgrade the affected packages to their patched versions.

  • React Server Components: Upgrade react-server-dom-* packages to versions 19.0.1, 19.1.2, or 19.2.1 (or later).
  • Next.js: Upgrade to the latest patch release for your major version (e.g., Next.js 15.0.5+, 16.0.7+).
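As a quick triage aid, the affected and patched versions listed above can be encoded in a short check. A minimal sketch (the package and version strings are illustrative; in practice you would feed it the output of `npm ls`):

```python
# Known-affected react-server-dom-* versions and their fixed releases,
# as listed in this advisory.
AFFECTED_TO_PATCHED = {
    "19.0.0": "19.0.1",
    "19.1.0": "19.1.2",
    "19.1.1": "19.1.2",
    "19.2.0": "19.2.1",
}

def triage(package: str, version: str) -> str:
    """Flag a package version against the advisory's affected list."""
    if version in AFFECTED_TO_PATCHED:
        return (f"{package}@{version}: VULNERABLE, "
                f"upgrade to {AFFECTED_TO_PATCHED[version]} or later")
    return f"{package}@{version}: not in the known-affected list"
```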

If immediate patching is not feasible: You may be able to mitigate the risk by applying Web Application Firewall (WAF) rules to block requests containing suspicious Next-Action headers or malformed multipart bodies, though this is not a substitute for patching.
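A rough sketch of such a stopgap filter, based only on the request shape described in this advisory (POST to the root path, a Next-Action header, a multipart body). Note that legitimate Server Actions traffic can share this shape, so any real WAF rule would need careful tuning:

```python
def is_suspicious_request(method: str, path: str, headers: dict) -> bool:
    """Flag POSTs to the root path that combine a Next-Action header
    with a multipart/form-data body -- the shape described above."""
    if method.upper() != "POST" or path != "/":
        return False
    lower = {k.lower(): v for k, v in headers.items()}
    has_action = "next-action" in lower
    is_multipart = lower.get("content-type", "").lower().startswith("multipart/form-data")
    return has_action and is_multipart
```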

Patch availability

The vulnerability is fixed in the following versions:

  • React Server Components: 19.0.1, 19.1.2, and 19.2.1.
  • Next.js: Various patch releases (check the official Next.js release log for your specific version branch).

Users are strongly advised to update to these versions.

Customers can always find updates in the “What’s New at Detectify” product log. Any questions can be directed to Customer Success representatives or Support. If you’re not already a customer, click here to sign up for a demo or a free trial and immediately start scanning. Go hack yourself!

References
Vendor Advisory 

The post Security Update: Critical RCE in React Server Components & Next.js (CVE-2025-55182) appeared first on Blog Detectify.


Why traditional black box testing is failing modern AppSec teams

By: Detectify
28 November 2025 at 06:08

Applications have long evolved from monolithic structures to complex, cloud-native architectures. This means that the tried-and-true methods we rely on are becoming dangerously outdated. For AppSec to keep pace, we must look beyond current tooling and revisit the very fundamentals of DAST – the automated discipline of black box testing.

The basics of black box security testing

Before diving into modern challenges, let’s revisit the three pillars of any successful black box security test, a foundation that remains constant even as technology shifts:

  1. State: The application must be put into a specific condition that exposes potential vulnerabilities.
  2. Payloads: A relevant attack string must be sent to trigger the vulnerability. Payloads must be crafted to match the underlying technologies and the desired aggression (e.g., a simple SLEEP vs. a data-altering DELETE).
  3. Assertions: You need a reliable way to determine if the payload was successful. This can be as simple as a script alert(1) or as complex as measuring response time changes for a Blind SQL injection.
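These three pillars map naturally onto code. A minimal sketch, with a toy reflected-XSS check run against a stand-in target function (no real HTTP involved; all names are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    """One black box test: the state to reach, the payload to send,
    and the assertion over the response."""
    reach_state: Callable[[], dict]        # 1. State
    payload: str                           # 2. Payload
    assert_hit: Callable[[str], bool]      # 3. Assertion

def run(check: Check, target: Callable[[dict, str], str]) -> bool:
    session = check.reach_state()
    response = target(session, check.payload)
    return check.assert_hit(response)

# Toy target that reflects unsanitized input (illustration only).
def vulnerable_echo(session: dict, payload: str) -> str:
    return f"<p>Hello {payload}</p>"

xss = Check(
    reach_state=lambda: {"cookie": "session=abc"},
    payload="<script>alert(1)</script>",
    assert_hit=lambda body: "<script>alert(1)</script>" in body,
)
```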

These fundamentals are always constrained by two major resources:

  • Server load: Can the system (especially a production system) handle the load of testing? Testing production is often ideal because it holds all business-critical data and is never truly equal to staging.
  • Scanning time & cost: Resources are finite. A scan running in a fast build pipeline needs a different time budget than one in a QA environment. Furthermore, computational costs for rendering, traffic, and even AI tokens must be factored in.

Why the old methods are breaking

The black box fundamentals are stable, but the applications we test have been completely revolutionized.

Monolithic legacy architecture (the “good old days”)

In the traditional LAMP stack world, things were simpler:

  • URL = State: Each state of the application was directly accessible via a URL.
  • Visible technology: The underlying tech stack was relatively easy to determine, and the alternatives were few.
  • Direct payload response: Payloads directly triggered the application you were testing, with minimal movement through system components.

Modern Application Architecture

Today, the architecture is complex and layered, breaking all the old assumptions:

  • URL ≠ State: Application state is now driven by actions (like clicking a button to add a product to a cart), not just URLs. Modern URLs often use fragments (#) and may change client-side via the JavaScript history API without triggering HTTP requests.
  • Hidden technology stack: Applications now consist of CDNs, cloud storage, container groups, message queues (like Kafka), and schedulers. The underlying tech is hidden and protected behind many layers.
  • Payloads trigger across components: A single payload might travel through a Kafka message bus and trigger in a separate system, potentially due to serialization/deserialization differences between coding languages, or even in a third-party service (e.g., a logging tool).

With architecture fundamentally changed, it is no wonder many black box tools, often based on decades-old underlying projects, are struggling to keep up.

The (very much) required shifts in black box methodology

To meet the challenges of modern apps, black box tools must evolve their approach to state, payloads, and assertions.

1. Generating State

  • Graph, not a tree: URL trees are obsolete. A modern web app must be modeled as a graph, where a node is a state and an edge is an educated guess of an action that modifies the state. This requires modeling both client-side and server-side state.
  • Recreation of state: You can no longer reliably recreate a state with just a URL or a HAR archive. Tools must replay the sequence of actions taken to reach a specific state.
  • Short-lived states: States are increasingly short-lived (e.g., JWTs with short TTLs), making it difficult for traditional crawlers to test them effectively later on.
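A minimal sketch of the graph model and action replay described above (state and action names are made up):

```python
class StateGraph:
    """Model the app as a graph: nodes are states, edges are actions."""

    def __init__(self):
        self.edges = {}  # state -> {action: next_state}

    def add(self, state, action, next_state):
        self.edges.setdefault(state, {})[action] = next_state

    def replay(self, start, actions):
        """Recreate a state by replaying the action sequence from start,
        rather than loading a URL."""
        state = start
        for action in actions:
            state = self.edges[state][action]
        return state

g = StateGraph()
g.add("home", "open_product", "product_page")
g.add("product_page", "add_to_cart", "cart_with_item")
```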

2. Crafting payloads

  • Context-aware payloads: Since the full stack is hidden, payloads must be designed to work in multiple contexts. A single string must survive serialization/deserialization across different programming languages as it propagates through the system and potentially triggers in a different software stack.
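One cheap property to test for such payloads is that they come out of a serialization hop unchanged; otherwise they cannot trigger in the downstream component. A toy sketch using a JSON round trip as a stand-in for a cross-language queue boundary (the payload string is illustrative):

```python
import json

payload = '"/><svg onload=alert(1)>'

def survives_json_hop(p: str) -> bool:
    """Simulate service A -> message queue -> service B via a JSON
    serialize/deserialize round trip; the payload must survive intact."""
    return json.loads(json.dumps({"comment": p}))["comment"] == p
```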

3. Making Assertions

  • Delayed and out-of-band triggers: Payloads may now trigger much later, possibly after being queued for processing or returning from a different view. The Log4j vulnerability was a clear example of payloads triggering deep within the architecture, requiring out-of-band methods and network pingbacks.
  • Noisier Systems: Measuring system behaviors, like using response time for Blind SQL injection, is nearly impossible in an architecture based on message queues and load balancing.
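Out-of-band assertions typically work by planting a unique token per injection point and correlating later callbacks against it. A sketch with a simulated callback log (the domain and payload shape are illustrative, loosely modeled on Log4Shell-style probes):

```python
import secrets

def make_probe(endpoint: str, registry: dict) -> str:
    """Plant a unique token for this injection point and return the payload."""
    token = secrets.token_hex(8)
    registry[token] = endpoint
    return f"${{jndi:ldap://{token}.oob.example.com/a}}"

def correlate(callback_host: str, registry: dict):
    """Map an incoming pingback hostname back to the test that planted it,
    even if the callback arrives minutes or days later."""
    return registry.get(callback_host.split(".")[0])
```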

The path forward 

The key is not to “just AI everything,” but to strategically use advanced methods to optimize decision-making. We at Detectify have already begun rolling out a couple of next-generation assessment updates to address this, with Dynamic Payload Rotation as a prime example for our API Scanner, and many more are planned for early next year.

This feature utilizes a near-infinite pool of payloads, mixing constant checks with experimental variations. If an experimental payload succeeds, it is immediately reused in future tests for that tech stack. This form of unsupervised machine learning allows the scanner to gain a permanent testing edge, ensuring that the fundamentals of state, payload, and assertion evolve as fast as the applications they protect.
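In sketch form, such a rotation scheme resembles a simple explore/exploit loop (this is our reading of the idea described above, not Detectify’s actual implementation):

```python
import random

class PayloadPool:
    """Mix constant checks with experimental variants; promote variants
    that succeed so they are reused for that tech stack."""

    def __init__(self, constants, experiments, explore=0.3):
        self.constants = list(constants)
        self.experiments = list(experiments)
        self.promoted = []  # proven experimental payloads
        self.explore = explore

    def pick(self):
        if self.experiments and random.random() < self.explore:
            return random.choice(self.experiments)       # try something new
        return random.choice(self.constants + self.promoted)

    def record(self, payload, success):
        if success and payload in self.experiments:
            self.experiments.remove(payload)
            self.promoted.append(payload)                # permanent testing edge
```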

 

The post Why traditional black box testing is failing modern AppSec teams appeared first on Blog Detectify.

Product comparison: Detectify vs. Holm Security

By: Detectify
20 November 2025 at 08:31

This comparison focuses on how Holm Security and Detectify address the core challenges faced by AppSec teams: gaining visibility and context, testing their web applications and APIs, and how quickly users can get value from these tools. Holm Security offers broad, unified coverage across the entire IT estate (internal, external, and cloud) and relies on a proprietary unified risk score for strategic prioritization, making it a good tool for consolidated risk reporting and management. Detectify, by contrast, is a specialized EASM and DAST solution focused on external applications. Detectify utilizes its Asset Classification to provide explicit scanning recommendations and employs 100% payload-based testing to ensure a high-fidelity signal, directly reducing friction and the time spent validating findings.

Detectify vs. Holm Security: A Quick Comparison

We’ve built this comparison mainly on feedback from dialogues with prospective clients and past Holm Security users who decided to evaluate Detectify as an alternative, but also on the following sources:

  • Holm Security’s official website & resources
  • Holm Security’s documentation
  • Holm Security’s publicly accessible demos

The post Product comparison: Detectify vs. Holm Security appeared first on Blog Detectify.

The researcher’s desk: FortiWeb Authentication Bypass (CVE-2025-64446)

By: Detectify
17 November 2025 at 03:35

Welcome to The researcher’s desk – a content series where the Detectify security research team conducts a technical autopsy on vulnerabilities that are particularly interesting, complex, or persistent. For this issue, we look at CVE-2025-64446, a critical authentication bypass that has been actively exploited in the wild, targeting Fortinet’s Web Application Firewall (WAF) product, FortiWeb.

The Case File: Unauthenticated control

  • Vulnerability Type: Authentication Bypass / Impersonation Flaw
  • Disclosure Date: November 14, 2025
  • Score: 9.8 (Critical)
  • Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  • Identifier: CVE-2025-64446
  • Vulnerable Component: Fortinet FortiWeb (Web Application Firewall)
  • Final Impact: Unauthenticated execution of administrative commands / complete control
  • Observations: Exploited in the wild; involved a “silent patch.”

What’s the root cause of CVE-2025-64446?

The core issue is an unauthenticated authentication bypass. It stems from improper handling of user impersonation functionality within the FortiWeb appliance.

Essentially, the vulnerability allows an attacker to manipulate the way the system validates user identity, tricking the appliance into granting administrative privileges. The flaw is rooted in how a function intended to handle user context is improperly exposed or protected, enabling its misuse for unauthorized access.

What’s the mechanism behind CVE-2025-64446?

The mechanism involves bypassing the standard login procedure to gain full administrative privileges on the FortiWeb appliance.

  • Bypass the gate: The attacker first leverages a mechanism (reported as a Relative Path Traversal combined with a logic flaw) to reach a restricted executable or API endpoint on the FortiWeb device.
  • Impersonation: Once the endpoint is reached, the attacker sends specially crafted input (often within an HTTP header) containing fields designed to impersonate the built-in admin account.
  • Complete compromise: The appliance’s authentication function processes this untrusted input and grants the attacker an administrative context. This allows the attacker to execute administrative commands, often leading to the creation of a new, persistent administrator account with known credentials, which grants complete control over the WAF.
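As a defensive illustration of the steps above (not the actual exploit: the header and field names here are hypothetical), log monitoring for this class of bypass can look for the combination of traversal sequences and impersonation-shaped input:

```python
import re

# Matches literal and URL-encoded directory traversal sequences.
TRAVERSAL = re.compile(r"\.\./|%2e%2e%2f", re.IGNORECASE)

def looks_like_bypass_attempt(path: str, headers: dict) -> bool:
    """Flag requests that combine path traversal with admin-impersonation
    material in headers. 'X-User-Context' in the tests is a made-up name."""
    has_traversal = bool(TRAVERSAL.search(path))
    has_impersonation = any("admin" in str(v).lower() for v in headers.values())
    return has_traversal and has_impersonation
```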

This flaw is interesting because it showcases the danger of authentication logic errors and how seemingly internal, administrative functions (like impersonation) can be weaponized when not properly secured. The flaw was exploited in attacks before a public patch was available, confirming its zero-day status.

Defensive takeaways

  • Patching: Fortinet issued updates to resolve this vulnerability. Users must apply the security patch immediately to all affected FortiWeb versions.
  • Management Access: Review the administrative user list for any new, unknown, or unauthorized accounts created on the FortiWeb appliance since the beginning of October 2025; such accounts are a sign of compromise.
  • The Detectify Approach: Detectify customers are running payload-based tests to check for the specific combination of path manipulation and header injection required to trigger this unauthenticated authentication bypass, providing early warning for vulnerable assets.

Questions? We’re happy to hear from you via support@detectify or book a demo to learn more about Detectify.


The post The researcher’s desk: FortiWeb Authentication Bypass (CVE-2025-64446) appeared first on Blog Detectify.

The researcher’s desk: CVE-2025-59287

By: Detectify
14 November 2025 at 08:42

Welcome to The researcher’s desk – a content series where the Detectify security research team conducts a technical autopsy on vulnerabilities that are particularly interesting, complex, or persistent. The goal here is not to report the latest research (for which you can refer to the Detectify release log); it is to take a closer look at certain vulnerabilities, regardless of their disclosure date, that still offer critical lessons.

For this issue, we analyze CVE-2025-59287, a critical remote code execution (RCE) flaw in Microsoft Windows Server Update Services (WSUS) that targets the core patch management infrastructure of the enterprise.

The Case File: WSUS Unauthenticated RCE

  • Disclosure Date: October 14, 2025 (initial patch)
  • Vulnerability Type: Unsafe Deserialization of Untrusted Data (CWE-502)
  • Identifier: CVE-2025-59287 with CVSS 9.8 (Critical)
  • Vulnerable Component: WSUS Reporting/Web Services (e.g., GetCookie endpoint)
  • Final Impact: Unauthenticated Remote Code Execution (RCE) as SYSTEM
  • Observations: Actively exploited in the wild; targets core update infrastructure.

What’s the root cause of CVE-2025-59287?

CVE-2025-59287 is caused by unsafe deserialization of untrusted data in the WSUS reporting/web services.

This means the service accepts data sent by an external source and fails to validate its structure or content safely before processing it. This fundamental failure allows an attacker to inject arbitrary code instructions into the data stream that the service then executes.
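The underlying failure mode is easy to demonstrate. The WSUS flaw lives in .NET deserialization, but Python’s pickle shows the same CWE-502 principle: a serialized blob can name a callable that the deserializer will happily invoke. Here, the benign eval("6*7") stands in for an attacker’s command:

```python
import pickle

class Malicious:
    """A serialized object that smuggles a callable: on deserialization,
    pickle invokes eval("6*7"). A real attacker would invoke something
    far worse than this benign stand-in."""
    def __reduce__(self):
        return (eval, ("6*7",))

blob = pickle.dumps(Malicious())   # what the attacker sends over the wire
result = pickle.loads(blob)        # the server's unsafe deserialization
# attacker-chosen code ran during deserialization
```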

What’s the mechanism behind CVE-2025-59287?

The mechanism enables a high-impact attack: the requirements on the attacker are low, and the privileges gained are high.

  • Unauthenticated Access: Attackers can send specially crafted requests to unauthenticated endpoints of the WSUS service.
  • Arbitrary Code Execution: The unsafe deserialization flaw allows the attacker to execute arbitrary code remotely.
  • Privilege: This code executes with SYSTEM privileges on the target server, providing the highest level of control.

This flaw is interesting because it is actively exploited in the wild and targets core update management infrastructure in enterprises. It has been used to deploy infostealers and pre-ransomware payloads, compromising sensitive data in regulated environments. The availability of public PoC exploits further accelerates the threat.

Defensive takeaways

  • Patching: Apply vendor updates to mitigate this vulnerability.
  • The Detectify Approach: Detectify customers are running payload-based assessments to test for this vulnerability.

Questions? We’re happy to hear from you via support@detectify or book a demo to learn more about Detectify.

The post The researcher’s desk: CVE-2025-59287 appeared first on Blog Detectify.

Product comparison: Detectify vs. Halo Security

By: Detectify
14 November 2025 at 05:21

This review provides a direct comparison between two external security platforms, Halo Security and Detectify. The analysis will focus on three core areas critical to Application Security engineers: Visibility and Context, which examines how each platform discovers and classifies assets; Assessment, which compares their technical methodologies for finding vulnerabilities; and Usability, which evaluates the day-to-day workflow and operational efficiency of each tool.

Detectify vs. Halo Security: A Quick Comparison

We’ve built this comparison mainly on feedback from dialogues with prospective clients and past Halo Security users who decided to evaluate Detectify as an alternative, but also on the following sources:

  • Halo Security’s official website & resources
  • Halo Security’s documentation
  • Halo Security’s publicly accessible demos

The post Product comparison: Detectify vs. Halo Security appeared first on Blog Detectify.

Detectify AI-Researcher Alfred gets smarter with threat actor intelligence

By: Detectify
10 November 2025 at 04:58

Six months after launch, Alfred, the AI Agent that autonomously builds security tests, has revolutionized our workflow. Alfred has delivered over 450 validated tests against high-priority threats (average CVSS 8.5) with 70% requiring zero manual adjustment, allowing our human security researchers to concentrate on more complex, high-impact issues. 

Now, we’re elevating Alfred’s capabilities by integrating real-world threat actor intelligence directly into its core system. This significant enhancement ensures that Alfred immediately prioritizes and generates tests for the most alarming, actively weaponized CVEs, dramatically increasing the speed and relevance of protection for all Detectify customers.

A deeper focus on threat actors

When we first built the vulnerability catalog that Alfred uses to source its assessments, the initial focus was identifying which CVEs were being utilized by Advanced Persistent Threats (APTs) and other active threat actors. Up until now, the system has primarily sourced raw vulnerability data (CVEs, along with their exploitability likelihood). However, in the spirit of our original intent, we’ve overhauled the pipeline to directly integrate active threat intelligence. 

This means that the vulnerability catalog used to feed the Alfred pipeline now sources two critical elements: vulnerabilities AND threat actors.

This change allows Alfred to place immediate and explicit emphasis on the CVEs that are being actively exploited by malicious actors in the wild. Alfred ensures that the most dangerous, actively weaponized CVEs are prioritized first for test generation and deployment onto the Detectify platform by adding up-to-the-minute threat actor behavior into our prioritization model.

Capturing even more relevant hits

In addition to this enhanced threat intelligence sourcing, we have also optimized Alfred’s processing pipeline. This alteration is designed to capture an even broader scope of relevant CVEs: specifically, those with a high likelihood of translating into actionable security tests that will help our customers find vulnerabilities in their assets.

We’re excited to deliver continuous and even higher-value security research by combining the power of the Detectify Crowdsource community with our AI Researcher Alfred.

The post Detectify AI-Researcher Alfred gets smarter with threat actor intelligence appeared first on Blog Detectify.

Product comparison: Detectify vs. Rapid7

By: Detectify
7 November 2025 at 07:10

For Application Security leaders and engineers, the choice between Rapid7 and Detectify is a decision between two fundamentally different philosophies: a broad, SOC-centric platform versus a purpose-built, practitioner-focused tool. Rapid7 presents a unified solution that correlates application flaws with holistic infrastructure risk, while Detectify is engineered exclusively for the external AppSec workflow. This analysis has compared both platforms through the three core use cases that matter to an AppSec team: their approach to visibility and attack surface discovery, the technical methodology and effectiveness of their assessment engines, and the practical usability of each tool in a modern, fast-paced remediation pipeline.

Detectify vs. Rapid7: A Quick Comparison

We’ve built this comparison mainly on feedback from dialogues with prospective clients and past Rapid7 users who decided to evaluate Detectify as an alternative, but also on the following sources:

  • Rapid7’s official website & resources
  • Rapid7’s documentation
  • Rapid7’s publicly accessible demos

The post Product comparison: Detectify vs. Rapid7 appeared first on Blog Detectify.

Product comparison: Detectify vs. Invicti

By: Detectify
3 November 2025 at 05:36

This comparison reviews two security platforms, Detectify and Invicti, both engineered to provide vulnerability assessment and attack surface management. While both platforms compete, Detectify is built on a forward-looking philosophy, leveraging its proprietary, payload-based scanning engine and a multi-source intelligence model. This approach is powered by a private community of elite ethical hackers (Detectify Crowdsource), an AI researcher, and an internal team, enabling it to find the novel, non-CVE vulnerabilities that other tools miss. In contrast, Invicti’s value is rooted in its “Proof-Based Scanning” engine, an approach focused on confirming publicly known vulnerabilities, which requires a significant upfront time investment for configuration and cannot scan for emerging, 0-day threats. This core difference in assessment philosophy steers the platforms’ respective value, usability, and the day-to-day workflow for an AppSec team.

Detectify vs. Invicti: A Quick Comparison

We’ve built this comparison mainly on feedback from dialogues with prospective clients of Invicti who decided to evaluate Detectify as an alternative, but also on the following sources:

  • Invicti’s official website & resources
  • Invicti’s documentation
  • Invicti’s publicly accessible demos

The post Product comparison: Detectify vs. Invicti appeared first on Blog Detectify.

The researcher’s desk: CVE-2025-20362

By: Detectify
31 October 2025 at 08:35

Welcome to The researcher’s desk – a content series where the Detectify security research team will conduct a technical autopsy on vulnerabilities that are particularly interesting, complex, or persistent. The goal here is not to report the latest research (for which you can refer to the Detectify release log); it is to take a closer look at certain vulnerabilities, regardless of their disclosure date, that still offer critical lessons. 

For our first case file, we examine the exploit chain targeting Cisco ASA and FTD firewalls, beginning with the unauthenticated access flaw, CVE-2025-20362.

The case file

  • Disclosure Date: September 25, 2025
  • Bypass Flaw: CVE-2025-20362 with CVSS 6.5
  • Execution Flaw: CVE-2025-20333 with CVSS 9.9
  • Vulnerable Component: VPN Web Server on Cisco ASA/FTD
  • Final Impact: Unauthenticated Remote Code Execution (RCE)
  • Observations: Flaws were actively exploited as zero-days before patches were available

What’s the root cause of CVE-2025-20362? 

The access flaw, CVE-2025-20362 (Missing Authorization, CWE-862), is essentially a failure in user input validation, typically manifesting as a Path Traversal/Normalization issue.

When an attacker sends a carefully crafted HTTP request containing specific directory traversal sequences, the VPN web server’s logic fails to correctly identify the request as unauthenticated. Instead, the server’s authorization component is bypassed, treating the request as if a session already exists. This grants the remote attacker access to critical, restricted URL endpoints—endpoints that are not designed for public interaction.

What’s the mechanism behind CVE-2025-20362? 

The primary lesson of this case is chainability. While CVE-2025-20362 alone carries a moderate score, its true severity is realized when it is used to nullify the only defense protecting the second vulnerability, CVE-2025-20333 (a Buffer Overflow).

  1. Latch opened: The attacker uses a crafted request to exploit the input validation flaw CVE-2025-20362, bypassing the need for a login.
  2. Execution delivered: The attacker then targets the now-exposed critical endpoint with a payload designed to trigger the Buffer Overflow CVE-2025-20333 (which on its own requires valid VPN credentials).
  3. Result: The chain achieves Unauthenticated Remote Code Execution with elevated privileges on the firewall: a complete takeover of the network perimeter.

Our team chose this flaw because it is an excellent example of a modern, high-stakes attack. The entire chain has been leveraged by sophisticated, state-sponsored campaigns, demonstrating that attackers prioritize the easiest way in, often starting with a moderate-severity bypass to unlock a critical vulnerability. It proves that defenders must identify and fix every link in a potential chain, not just the high-score vulnerabilities. 

Defensive takeaways

  • Patching: Immediately upgrade to the latest, fixed Cisco releases.
  • Segment and Isolate: If possible, restrict administrative and VPN web server access to only trusted IPs via upstream ACLs.
  • The Detectify Approach: Detectify customers are running payload-based testing to check for the precise input normalization failure of CVE-2025-20362. 

Questions? We’re happy to hear from you via support@detectify or book a demo to learn more about Detectify.

The post The researcher’s desk: CVE-2025-20362 appeared first on Blog Detectify.

Product comparison: Detectify vs. ProjectDiscovery

By: Detectify
31 October 2025 at 08:26

This comparison reviews two modern security platforms, ProjectDiscovery and Detectify, both engineered to provide high-signal, low-noise vulnerability assessment and attack surface management. While both are effective, they are built on fundamentally different philosophies. ProjectDiscovery is a platform where its value is rooted in its powerful open-source tools, like the Nuclei engine, which offer self-serve customization for newly disclosed public CVEs. In contrast, Detectify’s value lies in its proprietary, payload-based scanning engine, which is uniquely powered by a private community of elite ethical hackers (Detectify Crowdsource) to find novel, non-CVE vulnerabilities. This core difference in approach steers their respective strengths in assessment, usability, and the day-to-day workflow for an AppSec team.

Detectify vs. ProjectDiscovery: A Quick Comparison

We’ve built this comparison mainly on feedback from dialogues with prospective clients of ProjectDiscovery who decided to evaluate Detectify as an alternative, but also on the following sources:

  • ProjectDiscovery’s official website & resources.
  • ProjectDiscovery’s documentation.
  • ProjectDiscovery’s publicly accessible demos.

The post Product comparison: Detectify vs. ProjectDiscovery appeared first on Blog Detectify.

The API vulnerabilities nobody talks about: excessive data exposure

28 October 2025 at 09:31

TLDR: Excessive Data Exposure (leaking internal data via API responses) is a silent, pervasive threat, arguably more insidious than single dramatic flaws like SQL Injection. It amplifies every other API vulnerability (like BOLA) and happens everywhere because developers prioritize speed over explicit data filtering. Fixing it means systematically checking hundreds of endpoints for unneeded PII and sensitive internal data.

After writing about how API security is different from web app security, one thing that sticks is the idea that APIs can have hundreds of small issues that add up over time, rather than one big dramatic vulnerability.

Let me give you a concrete example of what I mean.

SQL injection is serious. Everyone knows that. But what about APIs that just… hand over sensitive data by design?
I’m not saying it’s worse than SQL injection. But it might be more insidious, because it amplifies every other vulnerability you have.

Excessive data exposure: the silent problem

Common development patterns even encourage this, and you can see it everywhere. You have an endpoint like GET /api/users/123 that returns something like:

{
    "user_id": 42,
    "name": "Joviane",
    "email": "myemail@gmail.com",
    "role": "student"
}

… but also returns:

{
    "internal_user_id": 64,
    "full_address": "Secure Street, 403",
    "ssn_last_4": 1234,
    "phone_number": "73737-7373"
}

and a lot of stuff that you weren’t planning to expose. The frontend only displays name and email, but the API is returning EVERYTHING from the database.
You might think, “but only authenticated users can call this endpoint, so it’s fine!”. And yeah, that’s true. But what happens when an attacker compromises ANY user account? When a developer accidentally logs the full response? When a browser extension scrapes the data? When the response gets cached somewhere it shouldn’t be? All of that sensitive data is just sitting there, waiting.

The worst part? This compounds with other vulnerabilities. Say you have a BOLA vulnerability where users can access other users’ data by changing an ID. If your API only returned public fields, the impact would be limited. But if it’s leaking PII, internal IDs, or sensitive business data, now that BOLA just became a massive data breach waiting to happen.

Why this happens everywhere

Here’s the thing: this isn’t malicious. Usually, it’s convenience. Returning the whole object is faster than filtering fields. ORMs don’t help either: they return everything by default unless you explicitly use projection or select specific fields. Sometimes teams are trying to be clever and “future-proof” their APIs with fields they might need later. And sometimes? It’s just copy-paste. One endpoint did it this way, so all the others followed.
It makes sense from a development velocity perspective. I’ve done this myself when shipping features under pressure. You write a quick endpoint, test that the frontend displays correctly, and ship it. The API is returning 20 fields but the UI only uses 3? Nobody notices because it works.
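The straightforward counter-pattern is an explicit allowlist at the serialization boundary, so new database columns never leak into responses by default. A minimal sketch reusing the field names from the example above:

```python
# Only fields explicitly listed here ever reach the response.
PUBLIC_USER_FIELDS = ("user_id", "name", "email", "role")

def serialize_user(row: dict) -> dict:
    """Return only allowlisted fields, silently dropping everything else."""
    return {k: row[k] for k in PUBLIC_USER_FIELDS if k in row}

# A database row with more columns than the API should ever expose.
row = {"user_id": 42, "name": "Joviane", "email": "myemail@gmail.com",
       "role": "student", "internal_user_id": 64, "ssn_last_4": 1234}
```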

The real-world impact

Let me give you a concrete example I’ve seen play out in a code review. An e-learning platform had an endpoint GET /api/courses/{courseId}/students that returned student enrollment data. Makes sense for instructors to see their students, right? But it wasn’t just returning names and progress percentages. It was also returning full email addresses, enrollment dates, payment status, quiz attempt histories with timestamps, discussion forum activity metrics, and even device information from where students were accessing the course.

The frontend displayed student names and their course completion percentage. That’s it. And if you were a student? You could only see your own status in the UI. But any enrolled student could hit that endpoint directly, change the course ID, and pull data from other courses. Someone could iterate through course IDs and build a complete database of who’s taking what courses, payment patterns, learning behaviors, and personal contact information. They didn’t need to break anything or find some clever exploit. The API was just handing it all over.
Luckily, this got caught before production, but since the feature worked fine in both the UI and the API, it could easily have slipped through and reached production.

And let’s talk about the PII implications here. That leaked student data? We’re talking full names, email addresses, phone numbers, physical addresses, potentially payment information. In a lot of jurisdictions, that’s a GDPR violation or equivalent waiting to happen. Even if the attacker never uses the data maliciously, you’ve just exposed yourself to regulatory fines, mandatory breach notifications, and a PR nightmare. All because the API returned 15 extra fields that nobody actually needed. The business intelligence leak is bad for competitive reasons, sure. But the PII exposure? That’s the kind of thing that gets you on the front page of tech news for all the wrong reasons.

Another common pattern: pagination endpoints that leak way too much. You call GET /api/students?page=1&limit=100 expecting a list of students, and you get back not just the students, but also their hashed passwords, API keys, internal permissions, last login times, IP addresses… all stuff that should never leave the backend.

The scale problem

SQL injection is one vulnerability. You can find it, fix it, and be done. Excessive data exposure? That’s hundreds of endpoints, each leaking a little data, compounding over time.

Which one is easier for an attacker to exploit at scale? The one that exists in every single endpoint. They don’t need to find a clever injection payload. They just need to iterate through your API and collect everything you’re giving them for free. And because it’s “technically working as designed,” it might not even trigger your security monitoring. No failed requests, no suspicious payloads, just normal API calls returning way too much information.

Other “boring” vulnerabilities that actually matter

There’s Mass Assignment – where a user sends {"name": "Deckan", "isAdmin": true} and the API just… accepts both fields. No validation on what should be updatable. Suddenly, regular users are admins. Or Improper Rate Limiting. No limits on password reset? Account takeover via brute force. No limits on OTP verification? Bye-bye 2FA. No limits on search? Congrats, someone just scraped your entire database.
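A hedged sketch of the usual fix for mass assignment: validate the incoming payload against an allowlist of updatable fields before merging it into the stored object. The field names and handler shape here are hypothetical:

```python
# Hypothetical update handler: only fields in the allowlist may be set
# by the client; anything else (e.g. "isAdmin") is rejected outright.
UPDATABLE_FIELDS = {"name", "email", "avatar_url"}

def apply_profile_update(user: dict, payload: dict) -> dict:
    unexpected = set(payload) - UPDATABLE_FIELDS
    if unexpected:
        raise ValueError(f"fields not updatable: {sorted(unexpected)}")
    return {**user, **payload}

user = {"name": "Deckan", "isAdmin": False}
try:
    apply_profile_update(user, {"name": "Deckan", "isAdmin": True})
except ValueError as exc:
    print(exc)  # the privilege escalation attempt is refused
```

Rejecting unknown fields loudly (rather than silently dropping them) also surfaces buggy or malicious clients early.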
And the classic: Predictable Resource IDs. /api/invoices/1001, /api/invoices/1002… you see where this is going. An attacker just iterates and collects everything. Classic BOLA.
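One common mitigation, sketched below, is to hand out random UUIDs instead of sequential integers. Note this only removes easy enumeration; it is no substitute for per-object authorization checks, since a leaked UUID is still a valid key:

```python
import uuid

# Sketch: expose random UUIDs instead of sequential integers so that
# resource identifiers cannot be enumerated with a simple loop.
def new_invoice_id() -> str:
    return str(uuid.uuid4())

invoice_id = new_invoice_id()
print(f"/api/invoices/{invoice_id}")
```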

What makes this hard

These aren’t the sexy zero-day exploits that make headlines. They’re architectural problems baked into dozens or hundreds of endpoints. Finding them means actually understanding what each endpoint does. You need to know what each endpoint returns, what it needs to return, and what’s just extra baggage. Then multiply that by every endpoint in your API. It’s tedious, but it matters.

This is why API security testing is tricky. You’re not hunting for one big vulnerability. You’re checking every single endpoint for these patterns. Data leaking where it shouldn’t, auth checks that are missing, rate limits that don’t exist. All these problems are everywhere and they add on top of each other. At Detectify, our API scanning handles the tedious part, systematically checking every endpoint for vulnerabilities. That way your team can spend time on the stuff that actually needs human judgment, like business logic vulnerabilities and understanding your specific app’s security context.

How does your team handle this?

And here’s the hard question that we’d love to hear about: when you’re building a new endpoint, how do you make sure developers only return the necessary fields? Code review? Automated checks? Response DTOs that force explicit field selection?
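For teams leaning toward the DTO answer, here is a minimal sketch using a Python dataclass. Constructing the DTO forces a conscious decision per field; the field names are illustrative:

```python
from dataclasses import dataclass, asdict

# A response DTO: the only way data leaves this endpoint is through an
# explicitly declared shape.
@dataclass
class StudentSummary:
    name: str
    completion_pct: int

def to_response(db_row: dict) -> dict:
    # Extra columns in db_row simply never make it out; forgetting a
    # field fails loudly at construction time instead of over-sharing.
    return asdict(StudentSummary(name=db_row["name"],
                                 completion_pct=db_row["completion_pct"]))

row = {"name": "Joviane", "completion_pct": 80,
       "email": "x@y.z", "payment_status": "paid"}
print(to_response(row))
```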

The post The API vulnerabilities nobody talks about: excessive data exposure appeared first on Blog Detectify.

New API testing category now available 

By: Detectify
23 October 2025 at 08:23

Our API scanner can test for dozens of vulnerability types like prompt injections and misconfigurations. We’re excited to share today that we’re releasing vulnerability tests for OAuth API authorization for organizations that use JWT tokens. These JWTs, or JSON Web Tokens, are meant to prove that you have access to whatever it is you are accessing. One of the most critical JWT vulnerabilities is algorithm confusion. This occurs when an attacker tricks the server into verifying the token’s signature using a different, less secure algorithm than intended. There are plenty of other issues that can go wrong with managing JWT, which is why having a test to catch these misconfigurations is so useful.

Our API scanner can now detect a variety of misconfigurations that occur with JWT, like timestamp confusion or even manipulating the header with ‘none.’ This will help mitigate the risk of misconfigurations that arise when managing complex infrastructure is a daily part of an AppSec team’s scope.
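As a rough illustration of the ‘none’ check (not Detectify’s actual test logic), a server can decode the unverified JWT header and pin the expected algorithm before doing any signature verification at all. The expected algorithm below is an assumption:

```python
import base64
import json

# Pin the algorithm the service actually signs with (assumption: RSA).
EXPECTED_ALG = "RS256"

def parse_jwt_header(token: str) -> dict:
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

def check_alg(token: str) -> None:
    alg = parse_jwt_header(token).get("alg")
    if alg != EXPECTED_ALG:
        raise ValueError(f"unexpected JWT alg: {alg!r}")

# A forged token claiming "alg": "none" is rejected before verification.
forged_header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
try:
    check_alg(f"{forged_header}.payload.signature")
except ValueError as exc:
    print(exc)
```

The point of pinning is that the server, not the token, decides which algorithm gets used for verification.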

So, how are we able to release new types of vulnerability tests that are outside of the OWASP API Top Ten or publicly available CVEs? 

When we set out to build our API scanning engine, we faced a fundamental choice: wrap an existing open-source tool like ZAP, or build something from the ground up. Unlike other vendors, we chose the latter. While open-source tools are invaluable to the security community, we believe our customers deserve more than repackaged checks and noisy results. To deliver on that, we engineered a proprietary engine guided by three core principles:

  1. Dynamic Payloads. Traditional API scanners run the same tests, and if your API hasn’t changed, you get the same results. This can create a false sense of security. We’ve taken a different approach. Our engine uses dynamic payloads that are randomized and rotated with every scan. This means that each scan is a unique event, a novel attempt to find vulnerabilities that static checks would miss. Even in an unchanged API, our scanner offers a continuous opportunity for discovery.
  2. Massive Scale, Reproducible Results. Our dynamic approach doesn’t come at the cost of consistency. It allows for a massive scale of test variations. For certain tests, like prompt injection, the number of potential payload permutations is theoretically over 922 quintillion. For command injections, we draw from a library of over 330,000 payloads. This isn’t chaos; it’s controlled and predictable randomization. We use a “seed” for each scan, much like a seed in Minecraft generates a specific world. This allows us to precisely reproduce the exact payload that identified a vulnerability, ensuring our findings are always verifiable and actionable for your developers. 
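The seed idea can be sketched in a few lines: an isolated RNG initialized from the scan’s seed makes the whole payload sequence replayable. The payload fragments below are placeholders, not Detectify’s actual payload library:

```python
import random

# Sketch of seed-based reproducible fuzzing: the same seed always yields
# the same payload sequence, so a finding can be replayed exactly.
FRAGMENTS = ["'", '"', ";", "--", "$(id)", "`id`", "||", "&&"]

def payloads_for_scan(seed: int, n: int = 5) -> list[str]:
    rng = random.Random(seed)  # isolated RNG, unaffected by global state
    return ["".join(rng.choices(FRAGMENTS, k=3)) for _ in range(n)]

# Two scans with the same seed reproduce identical payloads.
print(payloads_for_scan(1234) == payloads_for_scan(1234))  # True
```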
  3. Research-Led, High-Fidelity Findings. Our engine is the product of our internal security research team—the same team that powers the rest of Detectify. We prioritize exploitability, meaning we attempt to actually exploit a vulnerability rather than simply flagging a potential issue. This, combined with our proprietary fuzzing technology with a history of finding zero-days, results in high-accuracy findings you can trust. This drastically reduces the time you would otherwise waste on triaging false positives, allowing you to focus on what matters.

Research-led vulnerability testing means that our engines don’t rely on publicly available information.

Detectify builds and maintains its scanning engines in-house to tailor them specifically for modern application architectures. This approach is designed to yield more accurate results for custom applications by better understanding complex logic and API-driven front-ends. In contrast, general-purpose open-source engines can struggle with the nuances of modern apps and APIs.

Why does this matter to AppSec teams? Simply put, reduced noise. Many security vendors today rely heavily on open-source tooling, meaning the tool itself takes few steps to reduce noise. By building its own engines, Detectify can implement methodologies like testing for OAuth authorization, which helps curb the time users spend validating vulnerabilities.

This means that our API Scanner is not only dynamic in how it tests for vulnerabilities, but is also not limited to what is publicly known at the moment.

One piece of feedback we’ve regularly received from our customers is that they find the breadth of coverage really useful. Not only can we discover high and critical vulnerabilities, but we can also find issues like misconfigured JWT tokens and missing security headers, something that isn’t commonly available in API scanners.

We can deliver this kind of experience to our users because our engines are built with the AppSec team in mind, meaning that we consider the use cases that our users need.  

Detectify’s research-led approach and proprietary fuzzing engine deliver high-fidelity, actionable results that empower AppSec teams to secure their APIs with confidence – try our API scanner to experience the difference.

The post New API testing category now available  appeared first on Blog Detectify.

Migrating Critical Messaging from Self-Hosted RabbitMQ to Amazon MQ

23 October 2025 at 08:00

TL;DR: We successfully migrated our core RabbitMQ messaging infrastructure from a self-hosted cluster on EKS to managed Amazon MQ to eliminate the significant operational burden of being accidental RabbitMQ experts. The complexity stemmed from over 50 interconnected services and a zero-downtime, zero-loss requirement. Our strategy involved meticulous auditing, mirroring the setup, and a “downstream-first” live cutover using Shovel plugins as a critical failsafe. The result is a much more stable, predictable platform, freeing up engineering cycles to focus 100% on building security features for our customers instead of debugging infrastructure failures.

Picture this: it’s 3 AM, and your message broker is acting up. Queue depths are climbing, consumers are dropping off, and your on-call engineer is frantically restarting pods in a Kubernetes cluster they barely understand. Sound familiar?

For years, we lived this reality with our self-hosted RabbitMQ running on EKS. Sure, we had “complete control” over our messaging infrastructure, but that control came with a hidden cost: becoming accidental RabbitMQ experts, a costly operational distraction from our core mission of accelerating the release of features that directly benefit our customers and help them keep their assets secure.

The breaking point came when we realized our message broker had become a single point of failure—not just technically, but organizationally. Only a handful of people could troubleshoot it, and with a mandatory Kubernetes upgrade looming, we knew it was time for a change.

Enter Amazon MQ: AWS’s managed RabbitMQ service that promised to abstract away the operational headaches.

But here’s the challenge: we couldn’t just flip a switch. We had over 50 services pumping business-critical messages through our queues 24/7. Payment processing, user notifications, data sync—the works. Losing a single message was unacceptable. One wrong move and we’d risk impacting our security platform’s reliability.

This is the story of how we carefully migrated our entire messaging infrastructure while maintaining zero downtime and absolute data integrity. It wasn’t simple, but the process yielded significant lessons in operational maturity.

Background: The Old vs. The New

The Old Setup (RabbitMQ on EKS)

Running your own RabbitMQ cluster feels empowering at first. You have complete control, can tweak every setting, and it’s “just another container” in your Kubernetes cluster. But that control comes with a price. When RabbitMQ starts acting up, someone on your team needs to know enough about clustering, memory management, and disk usage patterns to fix it. We found ourselves becoming accidental RabbitMQ experts when we really just wanted to send messages between services.

The bus factor was real. Only a handful of people felt comfortable diving into RabbitMQ issues. When those people were on vacation or busy with other projects, incidents would sit longer than they should. Every security patch meant carefully planning downtime windows. Every Kubernetes upgrade meant worrying about how it would affect our message broker. It was technical debt disguised as infrastructure.

The warning signs were there. Staging would occasionally have weird behavior—messages getting stuck, consumers dropping off, memory spikes that didn’t make sense. We’d restart the services and things would go back to normal, but you can only kick the can down the road for so long. When similar issues started appearing in production, even briefly, we knew we were on borrowed time.

Kubernetes doesn’t stand still, and neither should your clusters. But major version upgrades can be nerve-wracking when you have critical infrastructure running on top. The thought of our message broker potentially breaking during a Kubernetes upgrade—taking down half our platform in the process—was the final push we needed to look for alternatives.

The New Setup (Amazon MQ)

With Amazon MQ, someone else worries about keeping RabbitMQ running. AWS handles the clustering, the backups, the failover scenarios you hope never happen but probably will. It’s still RabbitMQ under the hood, but wrapped in the kind of operational expertise that comes from running thousands of message brokers across the globe.

AWS takes care of many of the routine operational tasks, though you still need to plan maintenance windows for major upgrades. The difference is that these are less frequent and more predictable than the constant patching and troubleshooting we dealt with before. The monitoring becomes simpler too—you may still use Grafana panels, but now they pull from CloudWatch instead of requiring Prometheus exporters and custom metrics collection.

Amazon MQ isn’t serverless though, so you still need to choose the right instance size and monitor CPU, RAM, and disk usage carefully. Since disk space is tied to your instance type, running out of space is still a real concern that requires monitoring and planning. The key difference is that you’re monitoring well-defined resources rather than debugging mysterious cluster behavior.

Security by default is always better than security by choice. Amazon MQ doesn’t give you the option to run insecure connections, which means you can’t accidentally deploy something with plaintext message traffic. It’s one less thing to worry about during security audits and one less way for sensitive data to leak.

When your message broker just works, developers can focus on the business logic that actually matters. You still get Slack alerts when things go wrong and queue configuration is still something you need to think about, but you’re no longer troubleshooting clustering issues or debugging why nodes can’t talk to each other at 2 AM. The platform shifts from something that breaks unexpectedly to something that fails predictably with proper monitoring.

The Migration Challenge

Complex Service Dependencies

Over 50 services depended on RabbitMQ:

  • Some only consumed messages.
  • Some only produced messages.
  • Some did both.

Like many companies that have grown organically, our RabbitMQ usage had evolved into a complex web of dependencies. Fifty-plus services might not sound like a massive number in some contexts, but when each service potentially talks to multiple queues and many services interact with each other through messaging, the dependency graph becomes surprisingly intricate. Services that started simple had grown tentacles reaching into multiple queues. New features had been built on top of existing message flows. What looked like a straightforward “change the connection string” problem on paper turned into a careful choreography of moving pieces.

Zero Downtime Requirement

Messages were business-critical – downtime was not an option.

These weren’t just debug logs or nice-to-have notifications flowing through our queues. Payment processing, user notifications, data synchronization between systems—the kind of stuff that immediately breaks the user experience if it stops working. The pressure was real: migrate everything successfully or risk significant business impact.

Migration Risks

Risks included:

  • Dropped or duplicated messages.
  • Consumers/producers falling out of sync.
  • Unexpected latency or queue build-up.

Message systems have this nasty property where small problems can cascade quickly. A consumer that falls behind can cause queues to back up. A producer that starts sending to the wrong place can create ghost traffic that’s hard to trace. During a migration, you’re essentially rewiring the nervous system of your platform while it’s still running—there’s no room for “oops, let me try that again.”

We needed a plan to untangle dependencies and migrate traffic safely.

Our Migration Approach

1. Audit and Preparation

Service Mapping and Analysis

Before touching anything, we needed to understand what we were working with. This meant going through every service, every queue, every exchange, and drawing out the connections. Some were obvious—the email service clearly produces to the email queue. Others were surprising—why was the user service consuming from the analytics queue? Documentation helped, but sometimes the only way to be sure was reading code and checking configurations.

The RabbitMQ management API proved invaluable during this phase. We used it to query all the queues, exchanges, bindings, and connection details—basically everything we needed to get a complete picture of our messaging topology. This automated approach was much more reliable than trying to piece together information from scattered documentation and service configs.
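As an illustration, the management API’s /api/bindings endpoint returns JSON that can be folded into a topology summary. The sample response below is made up for the sketch; a real audit would fetch it over authenticated HTTP:

```python
import json

# Sketch: summarize broker topology from a RabbitMQ management API
# response (GET /api/bindings returns a JSON array shaped like this;
# the sample is illustrative, not a real broker dump).
sample_bindings = json.loads("""
[
  {"source": "email-exchange", "destination": "email-queue",
   "destination_type": "queue", "routing_key": "email.send"},
  {"source": "email-exchange", "destination": "audit-queue",
   "destination_type": "queue", "routing_key": "email.#"}
]
""")

def queues_bound_to(bindings: list, exchange: str) -> list:
    """All queues fed by a given exchange, for dependency mapping."""
    return sorted(b["destination"] for b in bindings
                  if b["source"] == exchange
                  and b["destination_type"] == "queue")

print(queues_bound_to(sample_bindings, "email-exchange"))
```

Running this kind of fold over /api/queues, /api/exchanges, and /api/bindings is what produced the complete picture described above.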

Interactive topology chart showing service dependencies and message flows

With all this data, we created visual representations using go-echarts to generate interactive charts showing message flows and dependencies. We even fed the information to Figma AI to create clean visual maps of all the connections and queue relationships. Having these visual representations made it much easier to spot unexpected dependencies and plan our migration order.

Visual map of RabbitMQ connections and dependencies

The visualizations helped us identify both hotspots where many services converged, and “low hanging fruits”—services that weren’t dependent on many other components. By targeting these first, we could remove nodes from the dependency graph, which gradually un-nested the complexity and unlocked safer migration paths for upstream services. Each successful migration simplified the overall picture and reduced the risk for subsequent moves.

Service Categorization

  • Consumer-only
  • Producer-only
  • Consumer + Producer

This categorization became our migration strategy. Consumer-only services were the easiest—we could point them at the new broker without affecting upstream systems. Producer-only services were next—as long as consumers were already moved, producers could follow safely. The Consumer+Producer services were the trickiest and needed special handling.

Migration Roadmap

Having a plan beats winging it every time. We could see which services to migrate first, which ones to save for last, and where the potential problem areas were. It also helped with communication—instead of “we’re migrating RabbitMQ sometime soon,” we could say “we’re starting with the logging services this week, then the notification system the week after.”

Safety Net Strategy

Shovels became a critical part of our strategy from day one. These plugins can copy messages from one queue to another, even across different brokers, which meant we could ensure message continuity during the migration. Instead of trying to coordinate perfect timing between when producers stop sending to the old broker and consumers start reading from the new one, shovels would bridge that gap and guarantee no messages were lost in transit.
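A dynamic shovel is just a runtime parameter, typically created via PUT /api/parameters/shovel/{vhost}/{name} on the management API. A sketch of building that payload follows; the broker URIs and queue name are placeholders:

```python
import json

# Sketch of the JSON body for a dynamic shovel that drains a queue on
# the old broker into the same logical queue on the new one.
def shovel_definition(src_uri: str, dest_uri: str, queue: str) -> dict:
    return {
        "value": {
            "src-protocol": "amqp091",
            "src-uri": src_uri,
            "src-queue": queue,      # drain the old broker's queue
            "dest-protocol": "amqp091",
            "dest-uri": dest_uri,
            "dest-queue": queue,     # same queue name on Amazon MQ
        }
    }

body = shovel_definition(
    "amqp://old-broker.internal",                        # placeholder
    "amqps://b-1234.mq.eu-west-1.amazonaws.com:5671",    # placeholder
    "email-queue",
)
print(json.dumps(body, indent=2))
```

Because the shovel lives on the broker side, producers and consumers need no code changes while it bridges the two clusters.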

2. Build a Mirror Setup

Export Configuration

We exported the complete configuration from our old RabbitMQ cluster, including:

  • Queues
  • Exchanges
  • Users & permissions

RabbitMQ’s export feature became our best friend. Instead of manually recreating dozens of queues and exchanges, we could dump the entire configuration from the old cluster. This wasn’t just about saving time—it was about avoiding the subtle differences that could cause weird bugs later. Queue durability settings, exchange types, binding patterns—all the little details that are easy to get wrong when doing things by hand.

Mirror the Setup

We then imported everything into Amazon MQ to create an identical setup:

  • Same queue names and exchanges.
  • Same credentials.

The goal was to make the new broker as close to a drop-in replacement as possible. Services shouldn’t need to know they’re talking to a different broker—same queue names, same exchange routing, same user accounts. This consistency was crucial for a smooth migration and made it easier to roll back if something went wrong.

Queue Type Optimization

We made one strategic change: most queues were upgraded to quorum queues for better durability, except for one high-traffic queue where quorum caused performance issues.

Quorum queues are RabbitMQ’s answer to the classic “what happens if the broker crashes mid-message” problem. They’re more durable but use more resources. For most of our queues, the trade-off made sense. But we had one queue that handled hundreds of messages per second, and the overhead of quorum consensus was too much. Sometimes the best technical solution isn’t the right solution for your specific constraints.

3. Live Cutover

Once the new broker was ready, we began moving services live. But we didn’t start with production—we had the same setup mirrored in staging, which became our testing ground. For every service migration, we’d first execute the switch in staging to validate the process, then repeat the same steps in production once we were confident everything worked smoothly.

Consumer-Only Services

Consumer-only services were our practice round. We could move them over and if something went wrong, the blast radius was limited. The shovel plugin became our safety net—it copies messages from one queue to another, even across different brokers. This meant messages sent to the old queue while we were migrating would still reach consumers on the new broker. No lost messages, no service interruption.

Step 1: Switch consumer to Amazon MQ.

Step 2: Add shovel to forward messages from the EKS RabbitMQ Old Queue to the Amazon MQ New Queue.

Step 3: Consumers seamlessly read from Amazon MQ.

Producer-Only Services

Once consumers were happily reading from Amazon MQ, we could focus on the producers. Since the queue and exchange names were identical, producers just needed a new connection string. Messages would flow into the same logical destinations, just hosted on different infrastructure.

Note: For a producer to be eligible for migration, you want all the downstream consumers to be migrated first.

Step 1: Switch producer to Amazon MQ.

Step 2: Messages flow to the same queues via the new exchange/broker.

Step 3: Remove shovel (cleanup).

 

Key principle: migrate downstream consumers first, then producers to avoid lost messages.

This principle saved us from a lot of potential headaches. If you move producers first, you risk having messages sent to a new broker that doesn’t have consumers yet. Move consumers first, and you can always use shovels or other mechanisms to ensure messages reach them, regardless of where producers are sending.

Consumer + Producer Services

These services were the final boss of our migration. They couldn’t be treated as simple consumer-only or producer-only cases because they did both. Switching them all at once meant they’d start consuming from Amazon MQ before all the upstream producers had migrated, potentially missing messages. They’d also start producing to Amazon MQ before all the downstream consumers were ready.

The solution required a bit of code surgery. Instead of one RabbitMQ connection doing everything, these services needed separate connections for consuming and producing. This let us migrate the consumer and producer sides independently.

Step 1: Migrate consumer side to Amazon MQ (producer stays on old broker). The service now consumes from the new queue (via the shovel) but still produces to the old exchange.

Step 2: Switch producer side to Amazon MQ (full migration complete). Now both consuming and producing happen on Amazon MQ.

This gave us full control to migrate complex services step by step without downtime.

Post-Migration Monitoring

Old habits die hard, and one of those habits is checking dashboards when something feels off. We rebuilt our monitoring setup for Amazon MQ using Grafana panels that pull from CloudWatch instead of Prometheus. This simplified our metrics collection—no more custom exporters or scraping configurations. The metrics come directly from AWS, and we still get the same visual dashboards we’re used to, just with a cleaner data pipeline.

Essential Metrics

We focused on four key metrics that give us complete visibility into our message broker:

  • Queue Depth – Shows if consumers are keeping up with producers. A steadily increasing queue depth indicates a backlog building up.
  • Connection Counts – Helps spot services having trouble connecting to the broker. Sudden drops often indicate network or authentication issues.
  • Broker Health – Gives the big picture view of whether the infrastructure itself is working correctly. AWS provides comprehensive health checks.
  • CPU and Memory Usage – Critical since Amazon MQ runs on specific instance types, not serverless infrastructure. You need to size instances correctly and watch for resource exhaustion.

Alerting Strategy

We set up a layered alerting approach focusing on the infrastructure level and service-specific monitoring:

  • Resource Usage Alerts – CPU and memory alerts are crucial since you’re still responsible for choosing the right instance size. We set up both Slack notifications for warnings and PagerDuty alerts for critical thresholds.
  • Service-Level Monitoring – Each service has its own alerts on queue length and consumer counts. This gives teams ownership of their specific queues and helps them spot issues with their particular message flows before they become broker-wide problems.

Lessons Learned

Migration is Doable, But Plan Carefully

Live migration is possible, but it’s not trivial and definitely requires careful planning.

We managed to avoid downtime during our migration, but it took some preparation and a lot of small, careful steps. The temptation is always to move fast and get it over with, but with message systems, you really can’t afford to break things. We had a few close calls where our planning saved us from potential issues.

Patterns That Worked for Us

Some approaches that made our migration smoother:

  • Downstream-first approach: move consumers before producers.
  • Mirror everything: identical exchanges, queues, credentials.
  • Dual broker strategy: run old and new in parallel with shovels.
  • Flexible service design: separate configs for consumers and producers.

These aren’t revolutionary ideas, but they worked well in practice. The downstream-first approach felt scary at first but ended up reducing risk significantly. Having identical setups meant fewer surprises. Running both brokers in parallel gave us confidence and fallback options.

What We Got Out of It

The migration went better than we expected:

  • No outages during the process.
  • Didn’t lose any messages (that we know of).
  • Day-to-day operations are definitely easier now.

The new system has been more stable, though we still get alerts and have to monitor things carefully. The main difference is that when something goes wrong, it’s usually clearer what the problem is and how to fix it. Less time spent digging through Kubernetes logs trying to figure out why the rabbits are unhappy.

Conclusion

Moving from RabbitMQ on EKS to Amazon MQ turned out to be worth the effort, though it wasn’t a simple flip-the-switch operation.

The main win was reducing the operational burden on our team. We still have to monitor and maintain things, but the day-to-day firefighting around clustering issues and mysterious failures has mostly gone away.

If you’re thinking about a similar migration:

  • Take time to really understand your current setup first.
  • Test everything in staging multiple times.
  • Plan for it to take longer than you think.

The migration itself was stressful, but the end result has been more time to focus on building features instead of babysitting infrastructure.

Looking back, the hardest part wasn’t the technical complexity—it was building confidence that we wouldn’t break anything important. But with good planning, visual dependency mapping, and a healthy respect for Murphy’s Law, it’s definitely doable.

Now when someone mentions “migrating critical infrastructure with zero downtime,” we don’t immediately think “impossible.” We think “challenging, but we’ve done it before.”

The post Migrating Critical Messaging from Self-Hosted RabbitMQ to Amazon MQ appeared first on Blog Detectify.

Product comparison: Detectify vs. Escape

By: Detectify
20 October 2025 at 03:26

Choosing the right tool is a critical decision that depends on a team’s specific goals, resources, and technical focus. This review provides an in-depth comparison of two leading platforms, Escape and Detectify, to help you make an informed choice. We will explore how each tool approaches three core pillars of any effective AppSec program: Visibility (discovering and understanding your attack surface), Assessment (accurately finding vulnerabilities), and Usability (ensuring the tool is efficient and enjoyable to use). By the end of this comparison, you will have a clear understanding of each platform’s strengths and weaknesses, enabling you to determine which solution is the better fit for your team’s unique operational style—whether you need a tool built for deep, customizable analysis or one optimized for speed and decisive, guided action.

Detectify vs. Escape: A Quick Comparison

We’ve built this comparison mainly based on feedback from dialogues with prospective clients and past Escape users who decided to evaluate Detectify as an alternative, but also based on the following sources:

  • Escape’s official website & resources
  • Escape’s documentation
  • Escape’s publicly accessible demos

TL;DR

The post Product comparison: Detectify vs. Escape appeared first on Blog Detectify.

Why API security is different (and why it matters)

14 October 2025 at 04:23

Two months since I joined Detectify and I’ve realized something: API security is a completely different game from web application security. And honestly? I think a lot of teams don’t see this yet.

APIs are everywhere (but you might not know where)

Let’s look at a modern application. Your mobile app? APIs. Your crucial SaaS integrations? APIs. That complex checkout flow? Probably five or more API calls talking to each other. Modern applications are, fundamentally, just APIs talking to other APIs with a fancy UI layered on top.

But here’s what’s been catching me off guard: many companies don’t even have a complete inventory of their APIs. You’re trying to secure a perimeter you can’t even see the edges of. I have seen:

  • Shadow APIs: Old endpoints no one remembers deploying.
  • Zombie APIs: Test/staging endpoints that never got turned off.
  • Partner APIs: Third-party integrations that extend your attack surface.

How can you secure what you can’t see?

The attack vectors are different

When we talk about web vulnerabilities, usually we’re dealing with XSS, CSRF, clickjacking – stuff that messes with what users see or tricks them into clicking something they shouldn’t. API vulnerabilities are a different beast. We’re talking broken authentication, APIs exposing way too much data, weak rate limiting, injection attacks. 

These attacks skip the UI entirely. An attacker doesn’t need to trick a user into clicking something malicious. They just need to understand your API contract and find the weak spots. That’s it. The scary part? They can automate all of this.

Authentication is… well… complicated

Web apps usually use session-based authentication with cookies. It’s pretty standard, most frameworks handle it well, and there are well-known patterns to follow. APIs? That’s where things get messy. OAuth, JWT, API keys, mutual TLS, custom bearer tokens… There are so many different approaches, and each one has its own vulnerability patterns. I’ve been diving deep into the OWASP API Security Top 10, and honestly, the auth issues are wild. Broken Object Level Authorization, Broken Function Level Authorization… these things have scary-long names, but they’re everywhere. Even though everyone knows about them, they still pop up in production all the time.
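Broken Object Level Authorization is easier to see in code than in its scary-long name. Here is a minimal, framework-free sketch (all handler and data names are hypothetical, invented for illustration): the vulnerable handler checks *who you are* but never *whether the object is yours*.

```python
# Minimal illustration of Broken Object Level Authorization (BOLA).
# All names here are hypothetical; this is not tied to any specific framework.

INVOICES = {
    "inv-1": {"owner": "alice", "total": 120},
    "inv-2": {"owner": "bob", "total": 95},
}

def get_invoice_vulnerable(invoice_id, current_user):
    # BOLA: the handler trusts the client-supplied ID and never checks
    # whether the authenticated user actually owns the object.
    return INVOICES.get(invoice_id)

def get_invoice_fixed(invoice_id, current_user):
    # Object-level check: authorization happens per object, not per endpoint.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != current_user:
        return None  # treat "not yours" the same as "not found"
    return invoice

# "alice" can read bob's invoice through the vulnerable handler...
assert get_invoice_vulnerable("inv-2", "alice") is not None
# ...but not through the fixed one.
assert get_invoice_fixed("inv-2", "alice") is None
```

The fix is one ownership check per object, which is exactly why BOLA keeps reappearing in production: it has to be remembered on every single endpoint.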

Why does it matter?

API attacks are growing at an alarming rate for several reasons:

  1. Automation is Easy: APIs return structured data that is far easier to parse than HTML, making automation trivial. This is great for developers, but even better for attackers. 
  2. Weak Rate Limiting: Since APIs need to handle high-volume traffic, rate limiting is often weaker.
  3. Documentation as Blueprints: API documentation, while great for developers, also serves as a perfect attack blueprint, showing adversaries exactly where to poke.
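The first point above is worth seeing concretely. This offline sketch (the canned responses stand in for what a script would receive over HTTP) shows why structured JSON makes enumeration a three-line loop:

```python
import json

# Hypothetical canned API responses, keyed by user ID, standing in for what
# an attacker's script would receive over HTTP. Structured JSON like this is
# trivial to parse and loop over; scraping the same data out of HTML is not.
canned_responses = {
    1: '{"id": 1, "email": "a@example.com"}',
    2: '{"id": 2, "email": "b@example.com"}',
    3: '{"id": 3, "email": "c@example.com"}',
}

def enumerate_users(ids):
    # Walk every candidate ID and pull out the interesting field.
    # With weak rate limiting, nothing slows this loop down.
    leaked = []
    for user_id in ids:
        body = canned_responses.get(user_id)
        if body:
            leaked.append(json.loads(body)["email"])
    return leaked

print(enumerate_users(range(1, 5)))  # ['a@example.com', 'b@example.com', 'c@example.com']
```

Swap the dictionary lookup for an HTTP client and you have a working scraper; this is why predictable IDs plus weak rate limiting is such a dangerous combination.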

This is exactly why we’re constantly enhancing our API Scanning capabilities at Detectify, because understanding these blind spots is the first step to fixing them.

How does your team handle this?

We’d love to hear how other teams are tackling this complex problem.

  • How do you maintain a complete, up-to-date inventory of ALL your endpoints, including the “zombie” ones?
  • What’s your strategy for testing authorization at scale when you have hundreds of different endpoints and authentication methods?
  • How do you approach API versioning and deprecation without accidentally leaving critical security holes in old versions?
  • What API security challenges keep you up at night?

FAQ

Q: What is the primary difference between web application security and API security?

A: Web application security often focuses on user-facing vulnerabilities like XSS, while API security is concerned with flaws like broken authentication and weak access control that attackers can exploit by directly interacting with the API endpoints, bypassing the UI.

Q: What are Shadow and Zombie APIs?

A: Shadow APIs are old endpoints that are forgotten but still deployed, while Zombie APIs are test or staging endpoints that were never turned off, and both extend the attack surface without the organization’s knowledge.

Q: Why are API attacks easily automated?

A: API attacks are easily automated because APIs return structured data (like JSON or XML) that is much easier for a script or bot to parse and manipulate than the more complex and varied structure of HTML pages.

The post Why API security is different (and why it matters) appeared first on Blog Detectify.

Product comparison: Detectify vs. Tenable

By: Detectify
10 October 2025 at 03:31

The difference between Detectify and Tenable lies in their core scope and the use cases they support. Detectify is a specialized, attacker-centric platform designed for the application security practitioner. Its focus is exclusively on the external, internet-facing attack surface with Dynamic Application Security Testing (DAST) to find exploitable vulnerabilities in web applications and APIs. In contrast, Tenable is a comprehensive exposure management platform built for the entire security and risk organization. It provides a holistic view of risk across the entire IT estate—from internal servers and cloud infrastructure to identity systems and the external perimeter—positioning itself as the central nervous system for enterprise-wide vulnerability and risk management.

Their differing scope dictates their strengths. Detectify’s primary advantage for an AppSec team is the high-fidelity, low-noise nature of its findings. Its unique reliance on payload-based testing, powered by a crowdsourced network of elite ethical hackers, delivers results that prove exploitability and are immediately actionable. This builds credibility with development teams and streamlines the remediation workflow, which is a significant usability win. Tenable’s strength lies in its unmatched breadth of coverage and its powerful risk contextualization through the Vulnerability Priority Rating (VPR) and Attack Path Analysis. It excels at showing how an application fits into an org’s risk profile, making it an indispensable tool for compliance and enterprise risk management.

Detectify vs. Tenable: A Quick Comparison

We’ve built this comparison mainly based on feedback from dialogues with prospective clients and past Tenable users who decided to evaluate Detectify as an alternative, but also based on the following sources:

  • Tenable’s official website & resources
  • Tenable’s documentation
  • Tenable’s publicly accessible demos

TL;DR

The post Product comparison: Detectify vs. Tenable appeared first on Blog Detectify.

Product comparison: Detectify vs. Qualys

By: Detectify
3 October 2025 at 05:27

Your responsibilities cover the full spectrum of risk—from the applications your teams build and the products you ship to the overarching compliance mandates you must meet. The core challenge is achieving this with a lean team where every hour of engineering time is critical. Choosing the right tooling is not just a technical decision; it’s a strategic one that directly impacts your team’s efficiency and your organization’s security posture.

This review provides an in-depth, practical comparison of Qualys and Detectify across three critical dimensions for a security leader:

  • Visibility and Context: How well does it discover your complete attack surface and help you understand what’s important?
  • Vulnerability Assessment: How effective is it at finding truly exploitable vulnerabilities versus creating triage overhead?
  • Usability: Does the tool act as a force multiplier for your team or an operational burden?

Detectify vs. Qualys: A Quick Comparison

We’ve built this comparison mainly based on feedback from dialogues with prospective clients and past Qualys users who decided to evaluate Detectify as an alternative, but also based on the following sources:

  • Qualys’ official website & resources
  • Qualys’ documentation
  • Qualys’ publicly accessible demos

TL;DR

The post Product comparison: Detectify vs. Qualys appeared first on Blog Detectify.

Product comparison: Detectify vs. Burp Enterprise

By: Detectify
26 September 2025 at 05:32

Choosing the right DAST tool is a critical decision that shapes the effectiveness of your entire AppSec program. Detectify and Burp Suite Enterprise exemplify the innovation happening in this space. While both are powerful assessment tools, they are engineered to solve different core problems, stemming from fundamentally different approaches to visibility, vulnerability assessment, and usability. Understanding these differences is key to selecting the platform that aligns with your team’s specific needs, maturity, and security goals.

This comparison breaks down the core philosophies of each tool. Detectify operates on an “outside-in” model, starting with the crucial question: “What is my complete external attack surface?” It combines attack surface discovery with payload-based testing sourced from elite ethical hackers, the AI agent Alfred, and its internal security research team to provide immediate visibility and high-confidence, actionable findings. In contrast, Burp Suite Enterprise follows an “inside-out” model, built to answer: “Is this specific application I already know about secure?” It provides a powerful, highly customizable DAST scanner for mature security teams to perform deep, exhaustive scans on a known set of assets, prioritizing granular control and comprehensive coverage over automated discovery and ease of use.

Detectify vs. Burp Enterprise: A Quick Comparison

We’ve built this comparison mainly based on feedback from dialogues with prospective clients and past Burp Enterprise users who decided to evaluate Detectify as an alternative, but also based on the following sources:

  • Burp’s official website & resources
  • Burp’s documentation
  • Burp’s publicly accessible demos

TL;DR

The post Product comparison: Detectify vs. Burp Enterprise appeared first on Blog Detectify.

Product update: Dynamic API Scanning, Recommendations & Classifications, and more

By: Detectify
26 September 2025 at 04:52

We know the importance of staying ahead of threats. At Detectify, we’re committed to providing you with the tools you need to secure your applications effectively. This update covers our new Dynamic API Scanning feature, updates over the last few months, and the latest additions to our vulnerability testing capabilities. 

What have we shipped to customers over the last few months?

Introducing Dynamic API Scanning

We’re excited to announce the launch of Dynamic API Scanning, now integrated into the Detectify platform. As APIs become increasingly critical to modern applications, they also present a growing attack surface. Our new API Scanning engine is designed to provide you with unified visibility and research-led testing for your APIs.

Key capabilities include:

  • Comprehensive Vulnerability Coverage: We test for a broad range of vulnerabilities, including the OWASP API Top 10, to ensure your APIs are protected against the most critical threats.
  • Unified Platform: By integrating API scanning into the Detectify platform, we provide a single pane of glass for managing the security of your entire attack surface.

This new feature will help you tackle challenges such as incomplete API inventories and the use of disparate testing solutions. The new API Scanner uses a dynamic approach in which testing payloads are randomized and rotated with every single scan, so every scan we run against a customer’s API is unique: it uses payloads we have never sent before. Read more about Dynamic Payloads here.
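To illustrate the idea of per-scan payload rotation (this is a toy sketch, not Detectify’s actual engine): each scan run stamps its probe templates with a fresh random marker, so no two scans ever send identical payload strings.

```python
import secrets

# Toy illustration of per-scan payload rotation. This is NOT Detectify's
# implementation: each scan run stamps its probe templates with a fresh
# random marker, so no two scans ever emit the same payload string.
def build_payloads(templates):
    marker = secrets.token_hex(8)  # unique per scan run
    return [t.format(marker=marker) for t in templates]

templates = [
    "<script>console.log('{marker}')</script>",
    "' OR '{marker}'='{marker}",
]

scan_a = build_payloads(templates)
scan_b = build_payloads(templates)
assert scan_a != scan_b  # same templates, never identical payloads
```

A unique marker also makes findings easier to attribute: when the marker shows up in a response or a log, you know exactly which scan planted it.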

Get started with Detectify API Scanning with this guide.

Not sure what to scan? We do. 

Prioritizing deep application scanning across hundreds of assets is a significant challenge. To solve this, our new Scan Recommendations feature helps you move from guessing to certainty. It analyzes your attack surface to identify complex, interactive web apps and recommends them for deeper scanning, ensuring your most critical assets are always covered.

Detectify now presents asset classification in a single view

To decide what to test, you first need to know what each asset does. Our new Asset Classification feature automates this by analyzing and categorizing your web assets (e.g., rich web apps, APIs). This gives you the insight needed to prioritize security testing and ensure your attack surface is covered.

We’ve also made major improvements to how Detectify performs

New improved subdomain discovery with 3x wordlist

We’ve enhanced active subdomain discovery. It now runs recursively to find deeply nested subdomains and uses a wordlist that is three times larger. This expanded wordlist is explored over time to uncover obscure assets with minimal impact. Note that passive subdomain discovery must now be enabled in order to run active discovery.


Filter Vulnerabilities based on a modification timestamp via API

We’ve improved vulnerability filtering in the API. The vulnerabilities endpoint now returns a modified_at timestamp that updates on any change, including manual actions. This allows for more granular queries using the new modified_before and modified_after filters.
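As a hedged sketch of how such a query might be built (the base URL below is a placeholder, and the exact parameter spelling and auth headers should be checked against the Detectify API documentation):

```python
from urllib.parse import urlencode

# Hypothetical sketch of querying for recently modified vulnerabilities.
# The base URL here is a placeholder; consult the Detectify API docs for
# the real endpoint, parameter names, and required auth headers.
BASE = "https://api.example.com/v2/vulnerabilities"

def build_query(modified_after):
    params = {"modified_after": modified_after}
    return f"{BASE}?{urlencode(params)}"

url = build_query("2025-09-01T00:00:00Z")
print(url)
```

Filtering on a modification timestamp like this lets an integration poll only for findings that changed since its last sync, instead of re-fetching the full list.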

We released a lot of new tests thanks to Alfred, Crowdsource, and our internal Security Research team.

This product update would be very, very long if we listed all of the new vulnerability tests we implemented thanks to Alfred, our AI Security Researcher, Crowdsource, and our incredible team of Security Researchers. So, you can check out all of our new tests here.

The post Product update: Dynamic API Scanning, Recommendations & Classifications, and more appeared first on Blog Detectify.
