
8 federal agency data trends for 2026

If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.

Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.

Trend 1: Governance moves from manual to machine-assisted

Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.

Trend 2: Data collaboration platforms replace tool sprawl

2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.

Trend 3: Federated architectures become the federal standard

Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric, one that links but doesn’t force consolidation, will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.

Trend 4: Integration becomes AI-first

Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.

Trend 5: Data storage goes AI-native

Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
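The retrieval step behind these AI-native stores reduces to nearest-neighbor search over embedding vectors. Here is a minimal Python sketch of that core idea; the document IDs and vectors are invented for illustration, and a real vector database would use approximate-nearest-neighbor indexes rather than a brute-force scan:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], index: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the ids of the k index entries most similar to the query."""
    ranked = sorted(index, key=lambda entry: cosine_similarity(query, entry[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

In a retrieval-augmented generation flow, the documents behind the returned IDs would then be supplied to the LLM as grounding context for its answer.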

Trend 6: Real-time data quality becomes non-negotiable

Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
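As a sketch of what automated quality monitoring can look like inside a pipeline, here is a rolling z-score check in Python. The window size, threshold, and minimum-sample count are illustrative assumptions; production systems would typically use richer statistical or ML-based detectors:

```python
import math
from collections import deque

class ZScoreMonitor:
    """Flag a reading that deviates more than `threshold` standard deviations
    from the rolling-window mean. All parameters here are illustrative."""

    def __init__(self, window: int = 100, threshold: float = 3.0, min_samples: int = 10):
        self.values = deque(maxlen=window)
        self.threshold = threshold
        self.min_samples = min_samples

    def check(self, x: float) -> bool:
        """Return True if x looks anomalous relative to recent history."""
        anomalous = False
        if len(self.values) >= self.min_samples:
            mean = sum(self.values) / len(self.values)
            variance = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(variance)
            if std == 0:
                anomalous = x != mean  # any change from a perfectly flat signal
            else:
                anomalous = abs(x - mean) / std > self.threshold
        self.values.append(x)  # evaluate first, then learn the new value
        return anomalous
```

A check like this would sit on each pipeline stage, quarantining or flagging records before they reach downstream AI systems.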

Trend 7: Zero trust expands into data access and auditing

As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
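Policy-as-code means access rules live as versioned data or code that an engine evaluates per request, rather than as static permission entries. A toy Python evaluator, assuming a deny-overrides, default-deny model (the policy fields and attribute names are invented for illustration):

```python
def evaluate(policies: list[dict], request: dict) -> bool:
    """Default deny: a request passes only if some policy explicitly allows it
    and no matching policy denies it (deny overrides allow)."""
    allowed = False
    for policy in policies:
        matches = all(request.get(key) == value for key, value in policy["match"].items())
        if matches and policy["effect"] == "deny":
            return False  # a matching deny overrides any allow
        if matches and policy["effect"] == "allow":
            allowed = True
    return allowed
```

Real deployments express the same idea in engines such as Open Policy Agent, with policies reviewed, versioned and tested like any other code.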

Trend 8: Workforce roles evolve toward human-AI collaboration

The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.

The bottom line: 2026 is the year of AI-ready data

In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of β€œcollect and store” will be replaced by β€œintegrate and activate.”

For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.

Seth Eaton is vice president of technology & innovation at Amentum.

The post 8 federal agency data trends for 2026 first appeared on Federal News Network.


A data mesh approach: Helping DoD meet 2027 zero trust needs

As the Defense Department moves to meet its 2027 deadline for completing a zero trust strategy, it’s critical that the military can ingest data from disparate sources while also being able to observe and secure systems that span all layers of data operations.

Gone are the days of secure moats. Interconnected cloud, edge, hybrid and services-based architectures have created new levels of complexity, and more avenues for bad actors to introduce threats.

The ultimate vision of zero trust can’t be accomplished through one-off integrations between systems or layers. For critical cybersecurity operations to succeed, zero trust must be based on fast, well-informed risk scoring and decision making that consider a myriad of indicators that are continually flowing from all pillars.

Short of rewriting every application, protocol and API schema to support new zero trust communication specifications, agencies must look to the one commonality across the pillars: They all produce data in the form of logs, metrics, traces and alerts. When brought together into an actionable speed layer, the data flowing from and between each pillar can become the basis for making better-informed zero trust decisions.
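To make the idea concrete, a risk-scoring speed layer might reduce per-pillar telemetry to a single decision input. The pillar names, weights, and threshold below are purely illustrative assumptions, not DoD-prescribed values:

```python
# Hypothetical per-pillar weights; a real deployment would tune these
# against its own telemetry and mission risk tolerance.
PILLAR_WEIGHTS = {
    "user": 0.30,
    "device": 0.25,
    "network": 0.20,
    "application": 0.15,
    "data": 0.10,
}

def risk_score(indicators: dict[str, float]) -> float:
    """Weighted sum of per-pillar risk indicators, each normalized to 0.0-1.0.
    Missing pillars contribute no risk here; a more cautious design might
    instead treat missing telemetry as elevated risk."""
    return sum(weight * indicators.get(pillar, 0.0)
               for pillar, weight in PILLAR_WEIGHTS.items())

def allow_access(indicators: dict[str, float], threshold: float = 0.5) -> bool:
    """Grant access only while the combined score stays under the threshold."""
    return risk_score(indicators) < threshold
```

The value of a common data layer is that the indicators feeding such a function arrive in one place, in one schema, fast enough to drive per-request decisions.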

The data challenge

According to the DoD, achieving its zero trust strategy results in several benefits, including “the ability of a user to access required data from anywhere, from any authorized and authenticated user and device, fully secured.”

Every day, defense agencies are generating enormous quantities of data. Things get even trickier when the data is spread across cloud platforms, on-prem systems or specialized environments like satellites and emergency response centers.

It’s hard to find information, let alone use it efficiently. And with different teams working with many different apps and data formats, the interoperability challenge increases. The mountain of data is growing. While it’s impossible to calculate the amount of data the DoD generates per day, a single Air Force unmanned aerial vehicle can generate up to 70 terabytes of data within a span of 14 hours, according to a Deloitte report. That’s about seven times more data output than the Hubble Space Telescope generates over an entire year.

Access to that information is bottlenecking.

Data mesh is the foundation for modern DoD zero trust strategies

Data mesh offers an alternative answer to organizing data effectively. Put simply, a data mesh overcomes silos, providing a unified and distributed layer that simplifies and standardizes data operations. Data collected from across the entire network can be retrieved and analyzed at any or all points of the ecosystem, so long as the user has permission to access it.

Instead of relying on a central IT team to manage all data, data ownership is distributed across government agencies and departments. The Cybersecurity and Infrastructure Security Agency uses a data mesh approach to gain visibility into security data from hundreds of federal agencies, while allowing each agency to retain control of its data.

Data mesh is a natural fit for government and defense sectors, where vast, distributed datasets have to be securely accessed and analyzed in real time.

Utilizing a scalable, flexible data platform for zero trust networking decisions

One of the biggest hurdles with current approaches to zero trust is that most zero trust implementations attempt to glue together existing systems through point-to-point integrations. While it might seem like the most straightforward way to step into the zero trust world, those direct connections can quickly become bottlenecks and even single points of failure.

Each system speaks its own language for querying, security and data format; the systems were also likely not designed to support the additional scale and loads that a zero trust security architecture brings. Collecting all data into a common platform where it can be correlated and analyzed together, using the same operations, is a key solution to this challenge.

When implementing a platform that fits these needs, agencies should look for a few capabilities, including the ability to monitor and analyze all of the infrastructure, applications and networks involved.

In addition, agencies must have the ability to ingest all events, alerts, logs, metrics, traces, hosts, devices and network data into a common search platform that includes built-in solutions for observability and security on the same data without needing to duplicate it to support multiple use cases.

This latter capability allows the monitoring of performance and security not only for the pillar systems and data, but also for the infrastructure and applications performing zero trust operations.

The zero trust security paradigm is necessary; we can no longer rely on simplistic, perimeter-based security. But the requirements demanded by the zero trust principles are too complex to accomplish with point-to-point integrations between systems or layers.

Zero trust requires integration across all pillars at the data level; in short, the government needs a data mesh platform to orchestrate these implementations. By following the guidance outlined above, organizations will not just meet requirements, but truly get the most out of zero trust.

Chris Townsend is global vice president of public sector at Elastic.

The post A data mesh approach: Helping DoD meet 2027 zero trust needs first appeared on Federal News Network.


DoD expands login options beyond CAC

The Defense Department is expanding secure methods of authentication beyond the traditional Common Access Card, giving users alternative options to log into its systems when CAC access is “impractical or infeasible.”

A new memo, titled “Multi-Factor Authentication (MFA) for Unclassified & Secret DoD Networks,” lays out when users can access DoD resources without CAC and public key infrastructure (PKI). The directive also updates the list of approved authentication tools for different system impact levels and applications.

In addition, the new policy provides guidance on where some newer technologies, such as FIDO passkeys, can be used and how they should be protected.

“This memorandum establishes DoD non-PKI MFA policy and identifies DoD-approved non-PKI MFAs based on use cases,” the document reads.

While the new memo builds on previous DoD guidance on authentication, earlier policies often did not clearly authorize specific login methods for particular use cases, leading to inconsistent implementation across the department.

Individuals in the early stages of the recruiting process, for example, may access limited DoD resources without a Common Access Card using basic login methods such as one-time passcodes sent by phone, email or text. As recruits move further through the process, they must be transitioned to stronger, DoD-approved multi-factor authentication before getting broader access to DoD resources.

For training environments, the department allows DoD employees, contractors and other partners without CAC to access training systems only after undergoing identity verification. Those users may authenticate using DoD-approved non-PKI multi-factor authentication; options such as one-time passcodes are permitted when users don’t have a smartphone. Access is limited to low-risk, non-mission-critical training environments.

Although the memo identifies 23 use cases, the list is expected to be a living document and will be updated as new use cases emerge.

Jeremy Grant, managing director of technology business strategy at Venable, said the memo provides much-needed clarity for authorizing officials.

“There are a lot of new authentication technologies that are emerging, and I continue to hear from both colleagues in government and the vendor community that it has not been clear which products can and cannot be used, and in what circumstances. In some cases, I have seen vendors claim they are FIPS 140 validated but they aren’t, or claim that their supply chain is secure, despite having notable Chinese content in their device. But it’s not always easy for a program or procurement official to know what claims are accurate. Having a smaller list of approved products will help components across the department know what they can buy,” Grant told Federal News Network.

DoD’s primary credential

The memo also clarifies what the Defense Department considers its primary credential; prior policies would go back and forth between defining DoD’s primary credential as DoD PKI or as CAC.

“From my perspective, this was a welcome, and somewhat overdue, clarification. Smart cards like the CAC remain a very secure means of hardware-based authentication, but the CAC is also more than 25 years old and we’ve seen a burst of innovation in the authentication industry where there are other equally secure tools that should also be used across the department. Whether a PKI certificate is carried on a CAC or on an approved alternative like a YubiKey shouldn’t really matter; what matters is that it’s a FIPS 140 validated hardware token that can protect that certificate,” Grant said.

Policy lags push for phishing-resistant authentication

While the memo expands approved authentication options, Grant said it’s surprising the guidance stops short of requiring phishing-resistant authenticators and continues to allow the use of legacy technologies such as one-time passwords that the National Institute of Standards and Technology, Cybersecurity and Infrastructure Security Agency and Office of Management and Budget have flagged as increasingly susceptible to phishing attacks.

Both the House and Senate have been pressing the Defense Department to accelerate its adoption of phishing-resistant authentication. Congress acknowledged that the department has established a process for approving new multi-factor authentication technologies, but few approvals have successfully made it through. Now, the Defense Department is required to develop a strategy to “ensure that phishing-resistant authentication is used by all personnel of the DoD” and to provide a briefing to the House and Senate Armed Services committees by May 1, 2026.

The department is also required to ensure that legacy, phishable authenticators such as one-time passwords are retired by the end of fiscal 2027.

“I imagine this document will need an update in the next year to reflect that requirement,” Grant said.

The post DoD expands login options beyond CAC first appeared on Federal News Network.


Innovator Spotlight: Seraphic

By: Gary
8 September 2025 at 17:26

Reinventing Browser Security for the Enterprise. The Browser: Enterprise’s Biggest Blind Spot. On any given day, the humble web browser is where business happens – email, SaaS apps, file sharing,...

The post Innovator Spotlight: Seraphic appeared first on Cyber Defense Magazine.

Innovator Spotlight: OPSWAT

By: Gary
3 September 2025 at 16:56

Zero Trust: The Unsung Hero of Cybersecurity. Cybersecurity professionals are drowning in complexity. Acronyms fly like digital confetti, vendors promise silver bullets, and CISOs find themselves perpetually playing catch-up with...

The post Innovator Spotlight: OPSWAT appeared first on Cyber Defense Magazine.

Innovator Spotlight: DataKrypto

By: Gary
3 September 2025 at 10:13

The Silent Threat: Why Your AI Could Be Your Biggest Security Vulnerability. Imagine a digital Trojan horse sitting right in the heart of your organization’s most valuable asset – your...

The post Innovator Spotlight: DataKrypto appeared first on Cyber Defense Magazine.

Contain Breaches and Gain Visibility With Microsegmentation

1 February 2023 at 09:00

Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.

Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.

Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.

Breach Landscape and Impact of Ransomware

Historically, security solutions have focused on the data center, but new attack targets have emerged with enterprises moving to the cloud and introducing technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but it has also become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into traffic flow that connected applications, systems and devices communicating across the network. However, they were not intended to contain and stop the spread of breaches.

Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.

In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.

Organizations Want Visibility, Control and Consistency

With a focus on breach containment and prevention, hybrid cloud infrastructure and application security, security teams are expressing their concerns. Three objectives have emerged as vital for them.

First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.

Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.

Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.

Microsegmentation Restricts Lateral Movement to Mitigate Threats

Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.
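At its core, "only allow necessary communication" is a default-deny rule set evaluated per flow. A simplified Python sketch; the tier names and ports below are invented for illustration, and real segmentation platforms evaluate far richer context:

```python
# Hypothetical segmentation policy: every permitted workload-to-workload
# flow is listed explicitly; anything unlisted is denied.
ALLOW_RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443},
    {"src": "app-tier", "dst": "db-tier", "port": 5432},
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule matches it."""
    return any(rule["src"] == src and rule["dst"] == dst and rule["port"] == port
               for rule in ALLOW_RULES)
```

Under these rules a web server reaching the database directly is refused, even though each tier can reach its intended neighbor; that refusal is exactly the lateral movement segmentation is meant to stop.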

The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.

Suppose existing detection solutions fail and security teams lack granular segmentation. In that case, malicious software can enter their environment, move laterally, reach high-value applications and exfiltrate critical data, leading to catastrophic outcomes.

Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare for the inevitable.

IBM Launches Segmentation Security Services

In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.

AVS will walk you through a guided experience to align your stakeholders on strategy and objectives, define the schema to visualize desired workloads and devices and build the segmentation policies to govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and solutions deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications. This includes health and maintenance, policy and configuration management, service governance and vendor management.

IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution. Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.

With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.

Start Your Segmentation Journey

IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.

The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.

Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments

14 September 2022 at 06:00

While cloud computing and its many forms (private, public, hybrid cloud or multi-cloud environments) have become ubiquitous with innovation and growth over the past decade, cybercriminals have closely watched the migration and introduced innovations of their own to exploit the platforms. Most of these exploits are based on poor configurations and human error. New IBM Security X-Force data reveals that many cloud-adopting businesses are falling behind on basic security best practices, introducing more risk to their organizations.

Shedding light on the β€œcracked doors” that cybercriminals are using to compromise cloud environments, the 2022 X-Force Cloud Threat Landscape Report uncovers that vulnerability exploitation, a tried-and-true infection method, remains the most common way to achieve cloud compromise. Gathering insights from X-Force Threat Intelligence data, hundreds of X-Force Red penetration tests, X-Force Incident Response (IR) engagements and data provided by report contributor Intezer, between July 2021 and June 2022, some of the key highlights stemming from the report include:

  • Cloud Vulnerabilities are on the Rise: Amid a sixfold increase in new cloud vulnerabilities over the past six years, 26% of cloud compromises that X-Force responded to were caused by attackers exploiting unpatched vulnerabilities, becoming the most common entry point observed.
  • More Access, More Problems: In 99% of pentesting engagements, X-Force Red was able to compromise client cloud environments through users’ excess privileges and permissions. This type of access could allow attackers to pivot and move laterally across a victim environment, increasing the level of impact in the event of an attack.
  • Cloud Account Sales Gain Ground in Dark Web Marketplaces: X-Force observed a 200% increase in cloud accounts now being advertised on the dark web, with remote desktop protocol and compromised credentials being the most popular cloud account sales making rounds on illicit marketplaces.
Download the Report

Unpatched Software: #1 Cause of Cloud Compromise

As the rise of IoT devices drives more and more connections to cloud environments, the potential attack surface grows larger, introducing critical challenges that many businesses are experiencing, like proper vulnerability management. Case in point: the report found that more than a quarter of studied cloud incidents were caused by known, unpatched vulnerabilities being exploited. While the Log4j vulnerability and a vulnerability in VMware Cloud Director were two of the more commonly leveraged vulnerabilities observed in X-Force engagements, most exploited vulnerabilities primarily affected the on-premises versions of applications, sparing the cloud instances.

As suspected, cloud-related vulnerabilities are increasing at a steady rate, with X-Force observing a 28% rise in new cloud vulnerabilities over the last year alone. With over 3,200 cloud-related vulnerabilities disclosed in total to date, businesses face an uphill battle when it comes to keeping up with the need to update and patch an increasing volume of vulnerable software. In addition to the growing number of cloud-related vulnerabilities, their severity is also rising, made apparent by the uptick in vulnerabilities capable of providing attackers with access to more sensitive and critical data as well as opportunities to carry out more damaging attacks.

These ongoing challenges point to the need for businesses to pressure test their environments and not only identify weaknesses in their environment, like unpatched, exploitable vulnerabilities, but prioritize them based on their severity, to ensure the most efficient risk mitigation.

Excessive Cloud Privileges Aid in Bad Actors’ Lateral Movement

The report also shines a light on another worrisome trend across cloud environments: poor access controls, with 99% of pentesting engagements that X-Force Red conducted succeeding due to users’ excess privileges and permissions. Businesses are allowing users unnecessary levels of access to various applications across their networks, inadvertently creating a stepping stone for attackers to gain a deeper foothold into the victim’s cloud environment.

The trend underlines the need for businesses to shift to zero trust strategies, further mitigating the risk that overly trusting user behaviors introduce. Zero trust strategies enable businesses to put in place appropriate policies and controls to scrutinize connections to the network, whether an application or a user, and iteratively verify their legitimacy. In addition, as organizations evolve their business models to innovate at speed and adapt with ease, it’s essential that they’re properly securing their hybrid, multi-cloud environments. Central to this is modernizing their architectures: not all data requires the same level of control and oversight, so determining the right workloads to put in the right place for the right reason is important. Not only can this help businesses effectively manage their data, but it enables them to place efficient security controls around it, supported by proper security technologies and resources.

Dark Web Marketplaces Lean Heavier into Cloud Account Sales

With the rise of the cloud comes the rise of cloud accounts being sold on the Dark Web, verified by X-Force observing a 200% rise in the last year alone. Specifically, X-Force identified over 100,000 cloud account ads across Dark Web marketplaces, with some account types being more popular than others. Seventy-six percent of cloud account sales identified were Remote Desktop Protocol (RDP) access accounts, a slight uptick from the year prior. Compromised cloud credentials were also up for sale, accounting for 19% of cloud accounts advertised in the marketplaces X-Force analyzed.

The going price for this type of access is significantly low, making these accounts easily attainable for the average bidder. The prices for RDP access and compromised credentials average $7.98 and $11.74, respectively. Compromised credentials’ 47% higher selling price is likely due to their ease of use, as well as the fact that postings advertising credentials often include multiple sets of login data, potentially from other services that were stolen along with the cloud credentials, yielding a higher ROI for cybercriminals.

As more compromised cloud accounts pop up across these illicit marketplaces for malicious actors to exploit, it’s important that organizations work toward enforcing more stringent password policies by urging users to regularly update their passwords, as well as implement multifactor authentication (MFA). Businesses should also be leveraging Identity and Access Management tools to reduce reliance on username and password combinations and combat threat actor credential theft.

To read our comprehensive findings and learn about detailed actions organizations can take to protect their cloud environments, review our 2022 X-Force Cloud Security Threat Landscape here.

If you’re interested in signing up for the β€œStep Inside a Cloud Breach: Threat Intelligence and Best Practices” webinar on Wednesday, September 21, 2022, at 11:00 a.m. ET you can register here.

If you’d like to schedule a consult with IBM Security X-Force visit: www.ibm.com/security/xforce?schedulerform

The post Old Habits Die Hard: New Report Finds Businesses Still Introducing Security Risk into Cloud Environments appeared first on Security Intelligence.

X-Force Report: No Shortage of Resources Aimed at Hacking Cloud Environments

15 September 2021 at 06:05

As cybercriminals remain steadfast in their pursuit of unsuspecting ways to infiltrate today’s businesses, a new report by IBM Security X-Force highlights the top tactics of cybercriminals, the open doors users are leaving for them and the burgeoning marketplace for stolen cloud resources on the dark web. The big takeaway from the data is businesses still control their own destiny when it comes to cloud security. Misconfigurations across applications, databases and policies could have stopped two-thirds of breached cloud environments observed by IBM in this year’s report.

IBM's 2021 X-Force Cloud Security Threat Landscape Report has expanded from the 2020 report with new and more robust data, spanning Q2 2020 through Q2 2021. Data sets we used include dark web analysis, IBM Security X-Force Red penetration testing data, IBM Security Services metrics, X-Force Incident Response analysis and X-Force Threat Intelligence research. This expanded dataset gave us an unprecedented view across the whole technology estate to make connections for improving security. Here are some quick highlights:

  • Configure it Out — Two out of three breached cloud environments studied were caused by improperly configured Application Programming Interfaces (APIs). X-Force incident responders also observed virtual machines with default security settings that were erroneously exposed to the Internet, including misconfigured platforms and insufficiently enforced network controls.
  • Rulebreakers Lead to Compromise — X-Force Red found password and policy violations in the vast majority of cloud penetration tests conducted over the past year. The team also observed significant growth in the severity of vulnerabilities in cloud-deployed applications, while the number of disclosed vulnerabilities in cloud-deployed applications rocketed 150% over the last five years.
  • Automatic for the Cybercriminals — With nearly 30,000 compromised cloud accounts for sale at bargain prices on dark web marketplaces and Remote Desktop Protocol accounting for 70% of cloud resources for sale, cybercriminals have turnkey options to further automate their access to cloud environments.
  • All Eyes on Ransomware & Cryptomining — Cryptominers and ransomware remain the top malware dropped into cloud environments, accounting for over 50% of detected system compromises, based on the data analyzed.
Download the report

Modernization Is the New Firewall

More and more businesses are recognizing the business value of hybrid cloud and distributing their data across a diverse infrastructure. In fact, the 2021 Cost of a Data Breach Report revealed that breached organizations implementing a primarily public or private cloud approach suffered approximately $1 million more in breach costs than organizations with a hybrid cloud approach.

With businesses seeking heterogeneous environments to distribute their workloads and better control where their most critical data is stored, modernization of those applications is becoming a point of control for security. The report puts a spotlight on security policies that don't encompass the cloud, which increases the security risk businesses face in disconnected environments. Here are a few examples:

  • The Perfect Pivot — Enterprises struggle to monitor and detect threats in today's cloud environments. This has contributed to threat actors pivoting from on-premises into cloud environments, making this one of the most frequently observed infection vectors targeting cloud environments, accounting for 23% of incidents IBM responded to in 2020.
  • API Exposure — Another top infection vector we identified was improperly configured assets. Two-thirds of studied incidents involved improperly configured APIs. APIs lacking authentication controls can allow anyone, including threat actors, access to potentially sensitive information. Conversely, APIs granted access to too much data can also result in inadvertent disclosures.
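The authentication control that so many of the studied APIs lacked can be made concrete with a short sketch. Everything below is illustrative (the token value, the `/data` path and the handler names are assumptions, not part of the report); the point is simply that an API should refuse every request that does not present a valid credential:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "example-secret-token"  # hypothetical; issue real tokens per client

class AuthenticatedAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reject any caller that does not present the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_server():
    """Start the API on a free local port and return the server object."""
    server = HTTPServer(("127.0.0.1", 0), AuthenticatedAPIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

srv = start_server()
port = srv.server_address[1]

# An unauthenticated request is refused outright ...
try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/data")
except urllib.error.HTTPError as err:
    print(err.code)  # 401

# ... while a request carrying the token succeeds.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/data",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
print(urllib.request.urlopen(req).status)  # 200
srv.shutdown()
```

The companion failure mode the report calls out, an API granted access to too much data, is the mirror image: even with the token check in place, the handler should return only the fields the caller is entitled to see.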

Many businesses don't have the same level of confidence and expertise when configuring security controls in cloud computing environments as they do on-premises, which leads to a fragmented, more complex security environment that is tough to manage. Organizations need to manage their distributed infrastructure as a single environment to eliminate complexity and achieve better network visibility from cloud to edge and back. By modernizing their mission-critical workloads, security teams will not only achieve speedier data recovery but will also gain a vastly more holistic pool of insights around threats to their organization that can inform and accelerate their response.

Trust That Attackers Will Succeed & Hold the Line

Evidence is mounting every day that the perimeter has been obliterated and the findings in the report just add to that corpus of data. That is why taking a zero trust approach is growing in popularity and urgency. It removes the element of surprise and allows security teams to get ahead of any lack of preparedness to respond. By applying this framework, organizations can better protect their hybrid cloud infrastructure, enabling them to control all access to their environments and to monitor cloud activity and proper configurations. This way organizations can go on offense with their defense, uncovering risky behaviors and enforcing privacy regulation controls and least privilege access. Here's some of the evidence derived from the report:

  • Powerless Policy — Our research suggests that two-thirds of studied breaches into cloud environments would likely have been prevented by more robust hardening of systems, such as properly implementing security policies and patching.
  • Lurking in the Shadows — "Shadow IT", cloud instances or resources that have not gone through an organization's official channels, indicates that many organizations aren't meeting today's baseline security standards. In fact, X-Force estimates the use of shadow IT contributed to over 50% of studied data exposures.
  • Password is "admin 1" — The report illustrates X-Force Red data accumulated over the last year, revealing that the vast majority of the team's penetration tests into various cloud environments found issues with either passwords or policy adherence.

The recycled use of these attack vectors emphasizes that threat actors are repeatedly relying on human error for a way into the organization. It's imperative that businesses and security teams operate with the assumption of compromise to hold the line.

Dark Web Flea Markets Selling Cloud Access

Cloud resources are providing an excess of corporate footholds to cyber actors, drawing attention to the tens of thousands of cloud accounts available for sale on illicit marketplaces at a bargain. The report reveals that nearly 30,000 compromised cloud accounts are on display on the dark web, with sale offers ranging from a few dollars to over $15,000 (depending on geography, the amount of credit on the account and the level of account access) and enticing refund policies to sway buyers.

But that's not the only cloud "tool" for sale on dark web markets: our analysis highlights that Remote Desktop Protocol (RDP) accounts for more than 70% of cloud resources for sale, a remote access method that greatly exceeds any other vector being marketed. While illicit marketplaces are the optimal shopping grounds for threat actors in need of cloud hacks, what concerns us most is a persistent pattern in which weak security controls and protocols, preventable forms of vulnerability, are repeatedly exploited for illicit access.

To read our comprehensive findings and learn about detailed actions organizations can take to protect their cloud environments, review our 2021 X-Force Cloud Security Threat Landscape here.

Want to hear from an expert? Schedule a consultation with an X-Force team member and register for our cloud security webinar to learn more.

The post X-Force Report: No Shortage of Resources Aimed at Hacking Cloud Environments appeared first on Security Intelligence.
