Copilot's No-Code AI Agents Liable to Leak Company Data
The Chrome zero-day does not have a CVE and it's unclear who reported it and which browser component it affects.
The post Google Patches Mysterious Chrome Zero-Day Exploited in the Wild appeared first on SecurityWeek.
Attackers are planning around your holidays. The question is whether you’ve done the same.

How Are Non-Human Identities Revolutionizing Cybersecurity? Have you ever considered the pivotal role that Non-Human Identities (NHIs) play in cyber defense frameworks? As businesses increasingly shift operations to the cloud, safeguarding these machine identities becomes paramount. But what exactly are NHIs, and why is their management vital across industries? NHIs, often referred to as machine […]
The post What makes smart secrets management essential? appeared first on Entro.
Can Agentic AI Revolutionize Cybersecurity Practices? As digital threats consistently challenge organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI—a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]
The post How does Agentic AI empower cybersecurity teams? appeared first on Entro.
AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance … Read More
The post Ring-fencing AI Workloads for NIST and ISO Compliance appeared first on 12Port.
Very little new information has been released since Rahmanullah Lakanwal murdered West Virginia National Guard member Sarah Beckstrom in Washington, DC, two weeks ago. He also shot and injured Andrew Wolfe, another National Guardsman, in the same attack. Prosecutors have since charged Lakanwal with murder, assault with intent to kill while armed, and possession of a firearm during a violent crime. Terrorism charges are absent because prosecutors do not yet know his motives; the FBI is conducting a terrorism investigation to determine them.
In federal courtrooms across America, a pattern has emerged in cases in which immigrants are being rounded up and jailed without a hearing. That’s a departure from fundamental constitutional protections in the U.S. that provide the right to a hearing before indefinite imprisonment.
Session 5C: Federated Learning 1
Authors, Creators & Presenters: Phillip Rieger (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Kavita Kumari (Technical University of Darmstadt), Tigist Abera (Technical University of Darmstadt), Jonathan Knauer (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)
PAPER
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning
Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. The DNN is partitioned in SL, with most layers residing on the server and a few initial layers and inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN's behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL). SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client's training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layer's parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server's layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
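The paper's "rotational distance" idea can be illustrated with a toy sketch. This is our own simplification, not the authors' implementation: treat the server-side layer parameters as one flattened vector and measure the angle between that vector before and after a client's training round, on the intuition that a benign update rotates the parameters only slightly while a backdoor update shifts their orientation sharply.

```python
import numpy as np

def rotational_distance(params_before, params_after):
    """Angle (radians) between flattened parameter vectors.

    A toy stand-in for SafeSplit's dynamic analysis: benign training
    barely rotates the server-layer parameters, while a poisoning
    update changes their orientation much more.
    """
    a = np.ravel(params_before).astype(float)
    b = np.ravel(params_after).astype(float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point drift outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

base = np.ones(100)

# A small random perturbation (benign-style update) barely rotates the vector...
benign = base + 0.01 * np.random.default_rng(0).standard_normal(100)
# ...while flipping the sign of half the weights rotates it sharply.
poisoned = base.copy()
poisoned[:50] *= -1

assert rotational_distance(base, benign) < 0.1
assert rotational_distance(base, poisoned) > 1.0
```

The thresholds and the parameter shapes here are illustrative only; the actual defense applies this orientation check per server layer across trained checkpoints, alongside the frequency-domain static analysis.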
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing the creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning appeared first on Security Boulevard.

via the comic artistry and dry wit of Randall Munroe, creator of XKCD
The post Randall Munroe’s XKCD ‘Beam Dump’ appeared first on Security Boulevard.
Session 5C: Federated Learning 1
Authors, Creators & Presenters: Xiaochen Zhu (National University of Singapore & Massachusetts Institute of Technology), Xinjian Luo (National University of Singapore & Mohamed bin Zayed University of Artificial Intelligence), Yuncheng Wu (Renmin University of China), Yangfan Jiang (National University of Singapore), Xiaokui Xiao (National University of Singapore), Beng Chin Ooi (National University of Singapore)
PAPER
Passive Inference Attacks on Split Learning via Adversarial Regularization
Split Learning (SL) has emerged as a practical and efficient alternative to traditional federated learning. While previous attempts to attack SL have often relied on overly strong assumptions or targeted easily exploitable models, we seek to develop more capable attacks. We introduce SDAR, a novel attack framework against SL with an honest-but-curious server. SDAR leverages auxiliary data and adversarial regularization to learn a decodable simulator of the client's private model, which can effectively infer the client's private features under the vanilla SL, and both features and labels under the U-shaped SL. We perform extensive experiments in both configurations to validate the effectiveness of our proposed attacks. Notably, in challenging scenarios where existing passive attacks struggle to reconstruct the client's private data effectively, SDAR consistently achieves significantly superior attack performance, even comparable to active attacks. On CIFAR-10, at the deep split level of 7, SDAR achieves private feature reconstruction with less than 0.025 mean squared error in both the vanilla and the U-shaped SL, and attains a label inference accuracy of over 98% in the U-shaped setting, while existing attacks fail to produce non-trivial results.
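The quality bar the authors report (MSE below 0.025 on CIFAR-10 at split level 7) can be made concrete with a small sketch of our own, not the paper's code: compute the mean squared error between a private input and its reconstruction, assuming pixel values scaled to [0, 1].

```python
import numpy as np

def reconstruction_mse(original, reconstructed):
    """Mean squared error between an input and its reconstruction,
    assuming values are scaled to [0, 1]."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return float(np.mean((o - r) ** 2))

rng = np.random.default_rng(1)
image = rng.uniform(0, 1, size=(32, 32, 3))  # stand-in for a CIFAR-10 image

# A reconstruction with only small pixel noise clears the 0.025 bar...
close = np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1)
# ...while an unrelated random image does not (expected MSE ~1/6).
unrelated = rng.uniform(0, 1, size=(32, 32, 3))

assert reconstruction_mse(image, close) < 0.025
assert reconstruction_mse(image, unrelated) > 0.025
```

This only shows what the reported metric measures; SDAR itself arrives at such reconstructions by training a simulator of the client model with auxiliary data and adversarial regularization.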
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing the creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post NDSS 2025 – Passive Inference Attacks On Split Learning Via Adversarial Regularization appeared first on Security Boulevard.

AI-powered browsers give you much more than a window to the web. They represent an entirely new way to experience the internet, with an AI “agent” working by your side.
We’re entering an age where you can delegate all kinds of tasks to a browser, and with that comes a few things you’ll want to keep in mind when using AI browsers like ChatGPT’s Atlas, Perplexity’s Comet, and others.
So, what’s the allure of this new breed of browser? In short, it’s highly helpful, and then some.
By design, these “agentic” AI browsers actively assist you with the things you do online. They can automate tasks and interpret your intentions when you make a request. Further, they can work proactively by anticipating things you might need or by offering suggestions.
In a way, an AI browser works like a personal assistant. It can summarize the pages in several open tabs, conduct research on just about any topic you ask it to, or even track down the lowest airfare to Paris in the month of May. Want it to order ink for your printer and some batteries for your remote? It can do that too. And that’s just to name a few possibilities.
As you can see, referring to the AI in these browsers as “agentic” fits. It truly works like an agent on your behalf, a capability that promises to get more powerful over time.
But as with any new technology, early adopters should balance excitement with awareness, especially when it comes to privacy and security. You might have seen recent headlines reporting security concerns with these browsers.
The reported exploits vary, as does the harm they can potentially inflict, ranging from stealing personal info and gaining access to Gmail and Google Drive files to installing malware and injecting the AI’s “memory” with malicious instructions, which can follow a user from session to session and device to device, wherever they log in.
Our own research has shown that some of these attacks are now tougher to pull off than they were initially, particularly as the AI browser companies continue to put guardrails in place. If anything, this reinforces a long-standing truth about online security: it’s a cat-and-mouse game. Tech companies put protections in place, bad actors discover an exploit, companies add further protections, new exploits crop up, and so on. It’s much the same in the rapidly evolving space of AI browsers. The technology might be new, but the game certainly isn’t.
While these reports don’t mean AI browsers are necessarily unsafe to use, they do underscore how fast this space is evolving…and why caution is smart as the tech matures.
It’s still early days for AI-powered browsers and understanding the security and privacy implications of their use. With that, we strongly recommend the following to help reduce your risk:
Don’t let an AI browser do what you wouldn’t let a stranger do. Handle things like your banking, finances, and health on your own. And the same certainly goes for all the info tied to those aspects of your life.
Pay attention to confirmations. As of today, agentic browsers still require some level of confirmation from the user to perform key actions (like processing a payment, sending an email, or updating a calendar entry). Pay close attention to them, so you can prevent your browser from doing something you don’t want it to do.
Use the “logged out” mode, if possible. As of this writing, at least one AI browser, Atlas, gives you the option to use the agent in a logged-out mode. This limits its access to sensitive data and the risk of it taking actions on your behalf with your credentials.
If possible, disable “model learning.” By turning it off, you reduce the amount of personal info stored and processed by the AI provider for AI training purposes, which can minimize security and privacy risks.
Set privacy controls to the strictest options available. Further, understand what privacy policies the AI developer has in place. For example, some AI providers have policies that allow people to review your interactions with the AI as part of its training. These policies vary from company to company, and they tend to undergo changes. Keeping regular tabs on the privacy policy of the AI browser you use makes for a privacy-smart move.
Keep yourself informed. The capabilities, features, and privacy policies of AI-powered browsers continue to evolve rapidly. Set up news alerts about the AI browser you use and see if any issues get reported and, if so, how the AI developer has responded. Do routine searches pairing the name of the AI browser with “privacy.”
McAfee’s award-winning protection helps you browse safer, whether you’re testing out new AI tools or just surfing the web.
McAfee offers comprehensive privacy services, including personal info scans and removal plus a secure VPN.
Plus, protections like McAfee’s Scam Detector automatically alert you to suspicious texts, emails, and videos before harm can happen—helping you manage your online presence confidently and safeguard your digital life for the long term. Likewise, Web Protection can help steer you clear of suspicious websites that might take advantage of AI browsers.
The post How to Stay Safe on Your New AI Browser appeared first on McAfee Blog.
AttackIQ has issued recommendations in response to the Cybersecurity Advisory (CSA) released by the Cybersecurity and Infrastructure Security Agency (CISA) on December 9, 2025, which details the ongoing targeting of critical infrastructure by pro-Russia hacktivists.
The post Response to CISA Advisory (AA25-343A): Pro-Russia Hacktivists Conduct Opportunistic Attacks Against US and Global Critical Infrastructure appeared first on AttackIQ.
At the beginning of this year, we launched the Year of Browser Bugs (YOBB) project, a commitment to research and share critical architectural vulnerabilities in the browser. Inspired by the iconic Months of Bugs tradition in the 2000s, YOBB was started with a similar purpose — to drive awareness and discussion around key security gaps and emerging threats in the browser.
Over the past decade, the browser has become the new endpoint, the primary gateway through which employees access SaaS apps, interact with sensitive data, and use the internet. The modern browser has also evolved significantly, with many capabilities that support complex web apps rivaling the performance of native apps. As with all new technologies, these same features are being used by malicious actors to exploit users, taking advantage of a massive security gap left by traditional solutions that focus primarily on endpoints and networks. Compounded by the release of AI Browsers, the browser has become the single most common initial access point for attackers. Yet it remains poorly understood.
The YOBB project aims to demystify these vulnerabilities by highlighting architectural limitations, behavioral trends, and industry dynamics that cannot be fixed by a simple security patch. In the past 12 months, we released 11 research pieces, including major zero-day vulnerabilities presented at DEF CON, Black Hat, RSA, and BSides. Below is a recap of our findings; the complete Year of Browser Bugs report is available for download here.


The Browser Syncjacking attack demonstrated that browser extensions, even just with simple read/write permissions available to popular extensions like Grammarly, can lead to full browser and device takeover by exploiting Google Workspace’s profile sync functionality. The attack unfolds in three escalating stages: profile hijacking, browser hijacking, and device hijacking.
👉 Learn more on our technical blog & demo

Polymorphic extensions are malicious extensions that can silently impersonate any other extension, such as password managers and crypto wallets. The attack exploits end users’ reliance on visual cues to determine whether what they are interacting with is safe, and the fact that extensions can change their icons and appearance on the fly without any user warning. With additional permissions, these malicious extensions can even disable the real extension while impersonating it.
👉 Learn more on our technical blog & demo

Browser-native ransomware represents a fundamental shift in ransomware delivery, enabling ransomware attacks to be executed without any local files or processes and thus bypassing traditional anti-ransomware and EDR tools. Due to the proliferation of cloud storage and SaaS services, over 80% of enterprise data now resides in the cloud and is primarily accessed through the browser. By combining identity attacks and agentic workflows, attackers can systematically exfiltrate sensitive files and data and hold them hostage for ransom. While BNRs manifest in many ways, here are a few case studies:
👉 Learn more on our technical blog & demo

Disclosed at BSides SF, Data Splicing Attacks represent a new class of data exfiltration techniques capable of bypassing major enterprise DLP solutions listed by Gartner’s Magic Quadrant. The research exposed fundamental architectural flaws in both endpoint-based and proxy-based DLP solutions that allow attackers to upload/paste/print any sensitive data through the browser with several techniques:
📽️ Watch the BSides SF talk here

While Browser-in-the-Middle (BitM) attacks have been known since 2021, they typically come with a major telltale sign — the parent window still displays a suspicious URL in the address bar, raising suspicion among security-aware users. Our research discovered that the Fullscreen API can be exploited to address this flaw: any user interaction can be used to trigger a fullscreen popup containing the attacker-controlled noVNC window. Not knowing that they are now interacting with an attacker-controlled browser, the victim continues their work, unknowingly letting attackers watch everything they do as they open additional tabs and access enterprise apps, all while thinking they’re on their own browser.
While all browsers are vulnerable to Fullscreen BitM, the attack works especially well on Safari due to the complete lack of visual indicators when entering fullscreen mode.
👉 Learn more on our technical blog & demo

Since OpenAI launched Operator, AI agents have exploded in adoption, with 79% of organizations deploying agentic workflows today. Unfortunately, these agents are trained to complete tasks, not to be security aware, making them even more vulnerable than the average employee. We demonstrated how browser AI agents fall prey to rudimentary attacks like phishing and OAuth attacks, leading to data exfiltration and malicious file downloads. Critically, these agents operate at the same privilege level as users, with full access to the same enterprise resources and few guardrails on agentic workflows.
Since our research, multiple agentic AI providers have improved their security guardrails, often requiring permissions when high-risk actions are performed. However, these features are built at the discretion of the AI vendor. There is yet to be an industry standard for AI vendors and enterprises alike when it comes to Agentic Identity and Agentic DLP, which becomes especially challenging with the volume of AI applications being built every day.
👉 Learn more on our technical blog & demo

The past few years witnessed a surge in malicious browser extensions, including Geco Colorpick and the Cyberhaven breach. Most extensions are downloaded from official stores like the Chrome Web Store, leading enterprises to depend heavily on browser vendors to conduct security audits, trusting labels like “Verified” and “Chrome Featured Extension” as security indicators. Unfortunately, attackers can easily game the system with fake reviews and mass downloads. Indeed, numerous verified extensions have been discovered to be malicious.
Yet, there is still very little end users can do to inspect extension behaviors in the browser, even with the Developer Tools provided by browser vendors. This YOBB highlights how trivial it is for malicious extensions to hide suspicious activity from DevTools by exploiting several key limitations:
👉 Learn more on our technical blog & demo

At DEF CON 33, we disclosed a major implementation flaw in passkeys that allows attackers to intercept and forge the passkey registration and authentication flows, replacing the user’s credential with the attacker’s key pair.
Note that in both the registration and authentication flow, the user still enters their biometrics/PIN, a visual indicator that many associate with good security. However, in both scenarios, the authenticator’s response is dropped and replaced with the attacker’s public key/signed challenge before it ever reaches the server.
👉 Learn more on our technical blog & demo

When Perplexity released Comet in July 2025, it brought to light what the future of browsers could look like. Our research deep-dived into AI Browsers to uncover how attackers can exploit them, including:
Many other researchers in the community have also voiced similar concerns on prompt injection attacks that led AI Browsers to go rogue. Since then, popular AI Browsers like Comet and Atlas have started adding guardrails that require explicit user permissions for certain agentic tasks. This marks an encouraging example of what can be achieved when security researchers and innovators collaborate to make emerging technologies more secure.
👉 Learn more on our technical blog & demo
Building on our previous AI Browser research, AI Sidebar Spoofing attacks involve malicious extensions that inject a pixel-perfect replica of an AI sidebar. By impersonating the very interface users trust to interact with these AI browsers, the spoofed sidebar generates malicious instructions that eventually lead to phishing, malicious file downloads, and even device takeover.

👉 Learn more on our technical blog & demo
We discovered a poorly documented MCP API in Comet that allows its embedded extensions to execute arbitrary local commands without explicit user permission. Critically, the MCP API is made available by default to Comet’s embedded extensions, which are installed by default, hidden from the extension dashboard, and cannot be disabled by users even if compromised.
In our attack POC, we used extension stomping to demonstrate how the MCP API can be misused to execute ransomware. In the wild, however, this exploit is more likely to be delivered via XSS or network MitM, as those vectors require minimal end-user involvement. One day after our release, Comet made a silent update that disabled the MCP API. While we have not received official acknowledgement of our bug report, the patch is a positive move toward making the AI Browser safer.

👉 Learn more on our technical blog & demo
SquareX’s browser extension turns any browser on any device into an enterprise-grade secure browser. SquareX’s industry-first Browser Detection and Response (BDR) solution empowers organizations to proactively defend against browser-native threats including rogue AI agents, Last Mile Reassembly Attacks, malicious extensions and identity attacks. Unlike dedicated enterprise browsers, SquareX seamlessly integrates with users’ existing consumer browsers, delivering security without compromising user experience.
Ready to experience award-winning browser security? Visit www.sqrx.com to learn more or sign up for an enterprise pilot.
2025 Year of Browser Bugs Recap: A Year of Unmasking Critical Browser Vulnerabilities was originally published in SquareX Labs on Medium, where people are continuing the conversation by highlighting and responding to this story.

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attacks and mitigations for securing artificial intelligence (AI) agents. Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners..
The post NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents appeared first on Security Boulevard.
In December 2025, a ransomware attack on Marquis Software Solutions, a data analytics and marketing vendor serving the financial sector, compromised sensitive customer information held by multiple banks and credit unions, according to Infosecurity Magazine. The attackers reportedly gained access through a known vulnerability in a firewall device connected to Marquis’s remote-access systems. The incident
The post When Vendors Become the Vulnerability: What the Marquis Software Breach Signals for Financial Institutions appeared first on Seceon Inc.
See how Crédit Agricole Personal Finance & Mobility (CAPFM) uses DataDome to cut bot traffic by 40%, govern AI & LLM crawlers, and restore clean analytics, protecting all their domains without friction.
The post How CAPFM Uses DataDome to Govern AI & LLM Crawlers Without Compromising Security appeared first on Security Boulevard.