Web Bot Auth: Verifying User Identity & Ensuring Agent Trust Through the Customer Journey
DataDome Bot Protect supports Web Bot Auth, enabling cryptographic verification of AI agents to eliminate fraud risk while maintaining business continuity.
The post Web Bot Auth: Verifying User Identity & Ensuring Agent Trust Through the Customer Journey appeared first on Security Boulevard.
What’s On the Tube Or Rather in the Tube: Kimwolf Targets Android-based TVs and Streaming Devices
Kimwolf botnet exploits smart gadgets for DDoS attacks, highlighting security lapses in device protection and supply chains.
The post What’s On the Tube Or Rather in the Tube: Kimwolf Targets Android-based TVs and Streaming Devices appeared first on Security Boulevard.
Inside the Rise of the Always Watching, Always Learning Enterprise Defense System
Perimeter security is obsolete. Modern cyber resilience demands zero trust, continuous verification, and intelligent automation that detects and contains threats before damage occurs.
The post Inside the Rise of the Always Watching, Always Learning Enterprise Defense System appeared first on Security Boulevard.
8 federal agency data trends for 2026
If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.
Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.
Trend 1: Governance moves from manual to machine-assisted
Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.
Trend 2: Data collaboration platforms replace tool sprawl
2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.
Trend 3: Federated architectures become the federal standard
Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric — one that links but doesn’t force consolidation — will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.
Trend 4: Integration becomes AI-first
Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.
Trend 5: Data storage goes AI-native
Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
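The embedding search that underpins retrieval-augmented generation can be sketched in a few lines: documents and queries are mapped to vectors, and retrieval ranks stored vectors by cosine similarity. The tiny two-dimensional vectors and document names below are illustrative stand-ins, not real embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return ids of the k indexed vectors most similar to the query."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]), reverse=True)
    return ranked[:k]
```

A production vector database adds approximate-nearest-neighbor indexing so this lookup stays fast at billions of vectors, but the ranking idea is the same.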
Trend 6: Real-time data quality becomes non-negotiable
Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
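A minimal version of that automated quality monitoring is a statistical outlier check on each field as records flow through a pipeline. The z-score detector below is a simplified sketch; real systems use learned models and streaming windows rather than a batch of values:

```python
import statistics

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    spread = statistics.pstdev(values)
    if spread == 0:
        return []  # constant series: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / spread > threshold]
```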
Trend 7: Zero trust expands into data access and auditing
As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
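Policy-as-code means access rules live in version-controlled code that is evaluated on every request, rather than in static permission tables. A minimal sketch, with hypothetical roles and attributes chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str
    classification: str   # e.g. "public" or "sensitive"
    device_compliant: bool

# Each policy is plain code that grants access only when its conditions hold.
POLICIES = [
    lambda r: r.classification == "public",
    lambda r: r.role == "analyst" and r.classification == "sensitive" and r.device_compliant,
]

def allowed(request: AccessRequest) -> bool:
    """Default-deny: permit only if some policy explicitly matches."""
    return any(policy(request) for policy in POLICIES)
```

Because the policies are code, they can be reviewed, tested and audited like any other software artifact, which is what makes continuous auditing tractable.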
Trend 8: Workforce roles evolve toward human-AI collaboration
The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.
The bottom line: 2026 is the year of AI-ready data
In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of “collect and store” will be replaced by “integrate and activate.”
For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.
Seth Eaton is vice president of technology & innovation at Amentum.
The post 8 federal agency data trends for 2026 first appeared on Federal News Network.

A data mesh approach: Helping DoD meet 2027 zero trust needs
As the Defense Department moves to meet its 2027 deadline for completing a zero trust strategy, it’s critical that the military can ingest data from disparate sources while also being able to observe and secure systems that span all layers of data operations.
Gone are the days of secure moats. Interconnected cloud, edge, hybrid and services-based architectures have created new levels of complexity — and more avenues for bad actors to introduce threats.
The ultimate vision of zero trust can’t be accomplished through one-off integrations between systems or layers. For critical cybersecurity operations to succeed, zero trust must be based on fast, well-informed risk scoring and decision-making that considers myriad indicators continually flowing from all pillars.
Short of rewriting every application, protocol and API schema to support new zero trust communication specifications, agencies must look to the one commonality across the pillars: They all produce data in the form of logs, metrics, traces and alerts. When brought together into an actionable speed layer, the data flowing from and between each pillar can become the basis for making better-informed zero trust decisions.
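One way to picture that actionable speed layer is a scoring function that folds normalized signals from each pillar into a single access decision. The signal names, weights and threshold below are illustrative assumptions, not a DoD specification:

```python
def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-pillar risk signals, each normalized to [0, 1]."""
    total = sum(weights.values())
    return sum(w * signals.get(name, 0.0) for name, w in weights.items()) / total

def decide(signals: dict[str, float], weights: dict[str, float], deny_above: float = 0.5) -> str:
    """Turn the aggregate score into an allow/deny decision."""
    return "deny" if risk_score(signals, weights) >= deny_above else "allow"

# Hypothetical pillar signals, weighted by how much each should sway the decision.
WEIGHTS = {"device_posture": 2.0, "network_alerts": 1.0, "user_behavior": 1.0}
```

The value of aggregating logs, metrics, traces and alerts in one place is precisely that a function like this can see all the pillars at once instead of a single system’s view.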
The data challenge
According to the DoD, achieving its zero trust strategy results in several benefits, including “the ability of a user to access required data from anywhere, from any authorized and authenticated user and device, fully secured.”
Every day, defense agencies are generating enormous quantities of data. Things get even trickier when the data is spread across cloud platforms, on-prem systems, or specialized environments like satellites and emergency response centers.
It’s hard to find information, let alone use it efficiently. And with different teams working with many different apps and data formats, the interoperability challenge increases. The mountain of data is growing. While it’s impossible to calculate the amount of data the DoD generates per day, a single Air Force unmanned aerial vehicle can generate up to 70 terabytes of data within a span of 14 hours, according to a Deloitte report. That’s about seven times more data output than the Hubble Space Telescope generates over an entire year.
Access to that information is becoming a bottleneck.
Data mesh is the foundation for modern DoD zero trust strategies
Data mesh offers an alternative answer to organizing data effectively. Put simply, a data mesh overcomes silos, providing a unified and distributed layer that simplifies and standardizes data operations. Data collected from across the entire network can be retrieved and analyzed at any or all points of the ecosystem — so long as the user has permission to access it.
Instead of relying on a central IT team to manage all data, data ownership is distributed across government agencies and departments. The Cybersecurity and Infrastructure Security Agency uses a data mesh approach to gain visibility into security data from hundreds of federal agencies, while allowing each agency to retain control of its data.
Data mesh is a natural fit for government and defense sectors, where vast, distributed datasets have to be securely accessed and analyzed in real time.
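In code terms, a data mesh looks less like one central database and more like many domain-owned data products behind a uniform query interface, with each domain enforcing its own access control. The sketch below is illustrative; the class names and records are invented for the example:

```python
class DomainDataProduct:
    """A domain-owned dataset that enforces its own access policy."""

    def __init__(self, name: str, records: list[dict], readers: set[str]):
        self.name = name
        self._records = records
        self._readers = readers

    def query(self, user: str, predicate) -> list[dict]:
        if user not in self._readers:
            raise PermissionError(f"{user} may not read {self.name}")
        return [r for r in self._records if predicate(r)]

def mesh_query(user: str, products: list[DomainDataProduct], predicate) -> list[dict]:
    """Federated query: pull matching records from every domain the user can read."""
    results: list[dict] = []
    for product in products:
        try:
            results.extend(product.query(user, predicate))
        except PermissionError:
            continue  # domains the user cannot read are simply skipped
    return results
```

This mirrors the CISA pattern above: a federated query spans every domain, but each domain keeps control over who reads its data.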
Utilizing a scalable, flexible data platform for zero trust networking decisions
One of the biggest hurdles with current approaches to zero trust is that most zero trust implementations attempt to glue together existing systems through point-to-point integrations. While it might seem like the most straightforward way to step into the zero trust world, those direct connections can quickly become bottlenecks and even single points of failure.
Each system speaks its own language for querying, security and data format; the systems were also likely not designed to support the additional scale and loads that a zero trust security architecture brings. Collecting all data into a common platform where it can be correlated and analyzed together, using the same operations, is a key solution to this challenge.
When implementing a platform that fits these needs, agencies should look for a few capabilities, including the ability to monitor and analyze all of the infrastructure, applications and networks involved.
In addition, agencies must have the ability to ingest all events, alerts, logs, metrics, traces, hosts, devices and network data into a common search platform that includes built-in solutions for observability and security on the same data without needing to duplicate it to support multiple use cases.
This latter capability allows the monitoring of performance and security not only for the pillar systems and data, but also for the infrastructure and applications performing zero trust operations.
The zero trust security paradigm is necessary; we can no longer rely on simplistic, perimeter-based security. But the requirements demanded by the zero trust principles are too complex to accomplish with point-to-point integrations between systems or layers.
Zero trust requires integration across all pillars at the data level; in short, the government needs a data mesh platform to orchestrate these implementations. By following the guidance outlined above, organizations will not just meet requirements, but truly get the most out of zero trust.
Chris Townsend is global vice president of public sector at Elastic.
The post A data mesh approach: Helping DoD meet 2027 zero trust needs first appeared on Federal News Network.

EU Sets February Deadline for Verdict on Google’s $32B Wiz Acquisition
The record-breaking deal has already received a green light from the US government.
The post EU Sets February Deadline for Verdict on Google’s $32B Wiz Acquisition appeared first on SecurityWeek.
DoD expands login options beyond CAC
The Defense Department is expanding secure methods of authentication beyond the traditional Common Access Card, giving users more alternative options to log into its systems when CAC access is “impractical or infeasible.”
A new memo, titled “Multi-Factor Authentication (MFA) for Unclassified & Secret DoD Networks,” lays out when users can access DoD resources without CAC and public key infrastructure (PKI). The directive also updates the list of approved authentication tools for different system impact levels and applications.
In addition, the new policy provides guidance on where some newer technologies, such as FIDO passkeys, can be used and how they should be protected.
“This memorandum establishes DoD non-PKI MFA policy and identifies DoD-approved non-PKI MFAs based on use cases,” the document reads.
While the new memo builds on previous DoD guidance on authentication, earlier policies often did not clearly authorize specific login methods for particular use cases, leading to inconsistent implementation across the department.
Individuals in the early stages of the recruiting process, for example, may access limited DoD resources without a Common Access Card using basic login methods such as one-time passcodes sent by phone, email or text. As recruits move further through the process, they must be transitioned to stronger, DoD-approved multi-factor authentication before getting broader access to DoD resources.
For training environments, the department allows DoD employees, contractors and other partners without CAC to access training systems only after undergoing identity verification. Those users may authenticate using DoD-approved non-PKI multi-factor authentication — options such as one-time passcodes are permitted when users don’t have a smartphone. Access is limited to low-risk, non-mission-critical training environments.
Although the memo identifies 23 use cases, the list is expected to be a living document and will be updated as new use cases emerge.
Jeremy Grant, managing director of technology business strategy at Venable, said the memo provides much-needed clarity for authorizing officials.
“There are a lot of new authentication technologies that are emerging, and I continue to hear from both colleagues in government and the vendor community that it has not been clear which products can and cannot be used, and in what circumstances. In some cases, I have seen vendors claim they are FIPS 140 validated but they aren’t — or claim that their supply chain is secure, despite having notable Chinese content in their device. But it’s not always easy for a program or procurement official to know what claims are accurate. Having a smaller list of approved products will help components across the department know what they can buy,” Grant told Federal News Network.
DoD’s primary credential
The memo also clarifies what the Defense Department considers its primary credential — prior policies would go back and forth between defining DoD’s primary credential as DoD PKI or as CAC.
“From my perspective, this was a welcome — and somewhat overdue — clarification. Smart cards like the CAC remain a very secure means of hardware-based authentication, but the CAC is also more than 25 years old and we’ve seen a burst of innovation in the authentication industry where there are other equally secure tools that should also be used across the department. Whether a PKI certificate is carried on a CAC or on an approved alternative like a YubiKey shouldn’t really matter; what matters is that it’s a FIPS 140 validated hardware token that can protect that certificate,” Grant said.
Policy lags push for phishing-resistant authentication
While the memo expands approved authentication options, Grant said it’s surprising the guidance stops short of requiring phishing-resistant authenticators and continues to allow the use of legacy technologies such as one-time passwords that the National Institute of Standards and Technology, Cybersecurity and Infrastructure Security Agency and Office of Management and Budget have flagged as increasingly susceptible to phishing attacks.
Both the House and Senate have been pressing the Defense Department to accelerate its adoption of phishing-resistant authentication — Congress acknowledged that the department has established a process for approving new multi-factor authentication technologies, but few approvals have successfully made it through. Now, the Defense Department is required to develop a strategy to “ensure that phishing-resistant authentication is used by all personnel of the DoD” and to provide a briefing to the House and Senate Armed Services committees by May 1, 2026.
The department is also required to ensure that legacy, phishable authenticators such as one-time passwords are retired by the end of fiscal 2027.
“I imagine this document will need an update in the next year to reflect that requirement,” Grant said.
The post DoD expands login options beyond CAC first appeared on Federal News Network.

Common Threat Themes: Defending Against Lateral Movement (Part 1)
When Airports Go Dark: What The Weekend’s Cyber-attacks Tell Us About Business Risk
Varun Uppal, founder and CEO of Shinobi Security Over the weekend, airports across Europe were thrown into chaos after a cyber-attack on one of their technology suppliers rippled through airline...
The post When Airports Go Dark: What The Weekend’s Cyber-attacks Tell Us About Business Risk appeared first on Cyber Defense Magazine.
Innovator Spotlight: Seraphic
Reinventing Browser Security for the Enterprise The Browser: Enterprise’s Biggest Blind Spot On any given day, the humble web browser is where business happens – email, SaaS apps, file sharing,...
The post Innovator Spotlight: Seraphic appeared first on Cyber Defense Magazine.
Zero Trust in the Era of Agentic AI
Innovator Spotlight: OPSWAT
Zero Trust: The Unsung Hero of Cybersecurity Cybersecurity professionals are drowning in complexity. Acronyms fly like digital confetti, vendors promise silver bullets, and CISOs find themselves perpetually playing catch-up with...
The post Innovator Spotlight: OPSWAT appeared first on Cyber Defense Magazine.
Innovator Spotlight: DataKrypto
The Silent Threat: Why Your AI Could Be Your Biggest Security Vulnerability Imagine a digital Trojan horse sitting right in the heart of your organization’s most valuable asset – your...
The post Innovator Spotlight: DataKrypto appeared first on Cyber Defense Magazine.
Why Enterprises Need Preemptive Cybersecurity to Combat Modern Phishing
Phishing isn’t what it used to be. It’s no longer fake emails with bad grammar and sketchy links. With AI, modern phishing attacks have become slicker, more convincing, and dangerously...
The post Why Enterprises Need Preemptive Cybersecurity to Combat Modern Phishing appeared first on Cyber Defense Magazine.
AI Takes Center Stage at Black Hat USA 2025 – Booz Allen Leads the Conversation
Black Hat USA 2025 was nothing short of groundbreaking. The show floor and conference tracks were buzzing with innovation, but one theme stood above all others – the rapid advancement...
The post AI Takes Center Stage at Black Hat USA 2025 – Booz Allen Leads the Conversation appeared first on Cyber Defense Magazine.
Contain Breaches and Gain Visibility With Microsegmentation
Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.
Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.
Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.
Breach Landscape and Impact of Ransomware
Historically, security solutions have focused on the data center, but new attack targets have emerged with enterprises moving to the cloud and introducing technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but it has also become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into traffic flow that connected applications, systems and devices communicating across the network. However, they were not intended to contain and stop the spread of breaches.
Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.
In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.
Organizations Want Visibility, Control and Consistency
As security teams focus on breach containment and prevention alongside hybrid cloud infrastructure and application security, they are voicing their concerns. Three objectives have emerged as vital.
First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.
Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.
Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.
Microsegmentation Restricts Lateral Movement to Mitigate Threats
Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.
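Conceptually, an enforced microsegmentation policy reduces to a default-deny allow-list over workload-to-workload flows. The workload names and ports below are illustrative, not drawn from any particular product:

```python
# Explicitly permitted (source workload, destination workload, port) flows;
# any flow not listed here is denied by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny check that restricts lateral movement to declared flows."""
    return (src, dst, port) in ALLOWED_FLOWS
```

A compromised web-tier host can still reach the app tier on its declared port, but its attempt to talk directly to the database is dropped, which is exactly the lateral-movement restriction described above.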
The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.
Suppose existing detection solutions fail and security teams lack granular segmentation. In that case, malicious software can enter their environment, move laterally, reach high-value applications and exfiltrate critical data, leading to catastrophic outcomes.
Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare for the inevitable.
IBM Launches Segmentation Security Services
In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.
AVS walks clients through a guided experience to align stakeholders on strategy and objectives, define the schema for visualizing desired workloads and devices, and build the segmentation policies that govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and the solution is deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications, including health and maintenance, policy and configuration management, service governance and vendor management.
IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution. Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.
With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.
Start Your Segmentation Journey
IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.
The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.
Announcing the 2021-22 Synack Acropolis, Legends and Featured Envoy Mentors
Trust. Honor. Excellence. These words are the foundational pillars for the Synack Acropolis and Synack Red Team. This past year, these core principles ushered in two new initiatives to address the rising cybersecurity talent gap and opportunity problems: the Artemis Red Team and the SRT Mentorship Program.
The Artemis Red Team is focused on creating the world’s best cybersecurity community for women, trans, non-binary and other gender minorities. The community’s purpose is to deliver opportunities for them to feel supported and expand their careers alongside like-minded peers. Part of that mission is a commitment to helping researchers become their best by learning from the best, hence the formation of the SRT Mentorship Program. Everyone has something to share, so anyone can be a mentor. The program codifies this ideal by ensuring that mentors are recognized, turning what has historically been a purely philanthropic endeavor into an effective side hustle that earns researchers additional opportunities, special rewards, pay incentives and recognition at the upper echelons of the Synack Acropolis as an SRT Envoy or, even better, Mentor of the Year.
Together these programs amplify flames of curiosity, courage and camaraderie that have long existed in the hacker ethos, and the researchers who contribute their time, knowledge and passion will play an influential role in shaping the future of cybersecurity.
2021-22 Acropolis Winners
It is with great honor, pride and respect that Synack announces the following top award winners for this year’s recognition program:
The awards from left to right are SRT of the Year, Rookie of the Year, Guardian of Trust and Mentor of the Year.
In addition, the following researchers have opted-in to showcase their recognition for dedication and commitment to excellence this past year:
TITAN
OLYMPIAN
CIRCLE OF TRUST
SRT ENVOY MENTORS
Lifetime Achievement Program
The SRT Legends program is a lifetime achievement program that focuses SRT members’ long-term goals directly on addressing the cybersecurity talent gap. The goal is for SRT to share their diverse skills with more than just a single program or customer. To achieve SRT Legend status takes time, dedication and a commitment to quality. This year Synack welcomes three new researchers to this hallowed class:
What’s Next
Who will claim next year’s coveted top spots? Who knows… it could be you! Take your first steps and apply to the Synack Red Team today, or reach out to @SynackRedTeam or @ryanrutan, or connect on LinkedIn.
— Ryan Rutan
Sr. Director of Community, Synack Red Team
The post Announcing the 2021-22 Synack Acropolis, Legends and Featured Envoy Mentors appeared first on Synack.
Our Cyber Heroes: Announcing the 2020-21 Top Hackers on the Synack Acropolis
Back in July 2021, the Synack Red Team ushered in the close of the 2020-21 Synack Recognition period. This wildly popular program is a set of goals given to all researchers from July 1st to June 30th, every year, helping them understand what success looks like on the Synack platform. In exchange for researchers hitting their goals, they are awarded limited-edition prizes for their engagement. For SRT that opt-in, their accomplishments are put on display for the world to see on the Synack Acropolis.
SRT Legends Program
This year’s recognition program featured the launch of a new lifetime achievement component called SRT Legends. To qualify for the SRT Legends program, researchers must earn lifetime distinction across at least 1 of 5 criteria designed to test a researcher’s skill and adaptability, while quantifying their impact in the cybersecurity industry at large.
This program includes the following criteria as measured exclusively on the Synack platform:
- # of Unique Targets with Accepted Findings > 250
- # of Unique Companies with Accepted Findings > 100
- # of Accepted Vulnerabilities > 1500
- # of Accepted Critical Vulnerabilities (CVSS 9.0+) > 250
- $1 Million or more in lifetime earnings on the platform
As researchers ascend to the heights of an SRT Legend, the mark they leave on the world around them becomes ever clearer. Their exceptional commitment helps us all stay safer and more secure, making them truly legendary hackers!
Announcing the inaugural 2021 class of publicly recognized SRT Legends:
2020-21 Recognition Winners
It is with great honor, pride, and respect that Synack announces the following top award winners for this year’s recognition program:
In addition, the following researchers have earned distinguished recognition for their dedication and commitment to excellence this past year:
TITAN
OLYMPIAN
HERO
CIRCLE OF TRUST
Many of the researchers listed above will be enjoying special opportunities on the platform over the next year, as well as donning their much-beloved, limited-edition SRT hoodies in the coming months, or kicking back in a well-deserved #HackerThrone! The 2021-22 Synack Recognition Program is already underway, with new prizes and recognition opportunities on the horizon. Who will rise to the top and claim next year’s coveted top spots? Who knows… it could be you! Take your first steps and apply to the Synack Red Team today, or reach out to @SynackRedTeam or @ryanrutan on Twitter.
The post Our Cyber Heroes: Announcing the 2020-21 Top Hackers on the Synack Acropolis appeared first on Synack.
-
Dark Web – Sec Intel
- Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments
Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments
While cloud computing in its many forms (private, public, hybrid or multi-cloud environments) has become synonymous with innovation and growth over the past decade, cybercriminals have closely watched the migration and introduced innovations of their own to exploit these platforms. Most of these exploits stem from poor configurations and human error. New IBM Security X-Force data reveals that many cloud-adopting businesses are falling behind on basic security best practices, introducing more risk to their organizations.
Shedding light on the “cracked doors” that cybercriminals are using to compromise cloud environments, the 2022 X-Force Cloud Threat Landscape Report finds that vulnerability exploitation, a tried-and-true infection method, remains the most common way to achieve cloud compromise. Drawing on X-Force Threat Intelligence data, hundreds of X-Force Red penetration tests, X-Force Incident Response (IR) engagements and data provided by report contributor Intezer between July 2021 and June 2022, the report’s key highlights include:
- Cloud Vulnerabilities Are on the Rise — Amid a sixfold increase in new cloud vulnerabilities over the past six years, 26% of the cloud compromises that X-Force responded to were caused by attackers exploiting unpatched vulnerabilities, making this the most common entry point observed.
- More Access, More Problems — In 99% of pentesting engagements, X-Force Red was able to compromise client cloud environments through users’ excess privileges and permissions. This type of access could allow attackers to pivot and move laterally across a victim environment, increasing the level of impact in the event of an attack.
- Cloud Account Sales Gain Ground in Dark Web Marketplaces — X-Force observed a 200% increase in cloud accounts advertised on the dark web, with remote desktop protocol access and compromised credentials being the most popular cloud account listings on illicit marketplaces.
Unpatched Software: #1 Cause of Cloud Compromise
As the rise of IoT devices drives more and more connections to cloud environments, the potential attack surface grows, introducing critical challenges that many businesses are struggling with, such as proper vulnerability management. Case in point: the report found that more than a quarter of studied cloud incidents were caused by the exploitation of known, unpatched vulnerabilities. While the Log4j vulnerability and a vulnerability in VMware Cloud Director were two of the more commonly leveraged vulnerabilities observed in X-Force engagements, most of the exploited vulnerabilities primarily affected the on-premises versions of applications, sparing the cloud instances.
As suspected, cloud-related vulnerabilities are increasing at a steady rate, with X-Force observing a 28% rise in new cloud vulnerabilities over the last year alone. With over 3,200 cloud-related vulnerabilities disclosed in total to date, businesses face an uphill battle when it comes to keeping up with the need to update and patch an increasing volume of vulnerable software. In addition to the growing number of cloud-related vulnerabilities, their severity is also rising, made apparent by the uptick in vulnerabilities capable of providing attackers with access to more sensitive and critical data as well as opportunities to carry out more damaging attacks.
These ongoing challenges point to the need for businesses to pressure-test their environments, not only identifying weaknesses such as unpatched, exploitable vulnerabilities but also prioritizing them by severity to ensure the most efficient risk mitigation.
Excessive Cloud Privileges Aid in Bad Actors’ Lateral Movement
The report also shines a light on another worrisome trend across cloud environments — poor access controls, with 99% of pentesting engagements that X-Force Red conducted succeeding due to users’ excess privileges and permissions. Businesses are allowing users unnecessary levels of access to various applications across their networks, inadvertently creating a stepping stone for attackers to gain a deeper foothold into the victim’s cloud environment.
The trend underlines the need for businesses to shift to zero trust strategies, mitigating the risk that overly trusting user behaviors introduce. Zero trust strategies enable businesses to put in place appropriate policies and controls that scrutinize every connection to the network, whether from an application or a user, and iteratively verify its legitimacy. In addition, as organizations evolve their business models to innovate at speed and adapt with ease, it’s essential that they properly secure their hybrid, multi-cloud environments. Central to this is modernizing their architectures: not all data requires the same level of control and oversight, so determining the right workloads to put in the right place, for the right reasons, is important. Not only can this help businesses manage their data effectively, but it also enables them to place efficient security controls around it, supported by proper security technologies and resources.
Dark Web Marketplaces Lean Heavier into Cloud Account Sales
With the rise of the cloud comes the rise of cloud accounts being sold on the Dark Web, verified by X-Force observing a 200% rise in the last year alone. Specifically, X-Force identified over 100,000 cloud account ads across Dark Web marketplaces, with some account types being more popular than others. Seventy-six percent of cloud account sales identified were Remote Desktop Protocol (RDP) access accounts, a slight uptick from the year prior. Compromised cloud credentials were also up for sale, accounting for 19% of cloud accounts advertised in the marketplaces X-Force analyzed.
The going price for this type of access is strikingly low, making these accounts easily attainable to the average buyer. RDP access and compromised credentials sell for an average of $7.98 and $11.74, respectively. The roughly 47% higher selling price of compromised credentials is likely due to their ease of use, as well as the fact that postings advertising credentials often include multiple sets of login data, potentially from other services stolen along with the cloud credentials, yielding a higher ROI for cybercriminals.
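The 47% figure follows directly from the two average prices cited above. A quick arithmetic check (the variable names are illustrative; only the dollar figures come from the report):

```python
# Sanity check of the report's pricing figures: compromised credentials
# ($11.74) vs. RDP access ($7.98) on dark web marketplaces.
rdp_price = 7.98          # average price of RDP access, USD
credential_price = 11.74  # average price of compromised credentials, USD

premium = (credential_price - rdp_price) / rdp_price
print(f"Credentials sell at a {premium:.0%} premium over RDP access")
# Prints a premium of 47%
```

(11.74 − 7.98) / 7.98 ≈ 0.471, i.e. about a 47% premium, matching the report.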
As more compromised cloud accounts pop up across these illicit marketplaces for malicious actors to exploit, it’s important that organizations work toward enforcing more stringent password policies, urging users to regularly update their passwords and implementing multifactor authentication (MFA). Businesses should also leverage Identity and Access Management (IAM) tools to reduce reliance on username and password combinations and combat threat actor credential theft.
To read our comprehensive findings and learn about detailed actions organizations can take to protect their cloud environments, review our 2022 X-Force Cloud Security Threat Landscape here.
If you’re interested in the “Step Inside a Cloud Breach: Threat Intelligence and Best Practices” webinar on Wednesday, September 21, 2022, at 11:00 a.m. ET, you can register here.
If you’d like to schedule a consult with IBM Security X-Force visit: www.ibm.com/security/xforce?schedulerform
The post Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments appeared first on Security Intelligence.