Salt Security Launches Salt MCP Finder Technology

Salt Security has announced Salt MCP Finder technology, a dedicated discovery engine for Model Context Protocol (MCP) servers, the fast-proliferating infrastructure powering agentic AI. MCP Finder provides an organisation with a complete, authoritative view of its MCP footprint at a moment when MCP servers are being deployed rapidly, often without IT or security awareness.

As enterprises accelerate the adoption of agentic AI, MCP servers have emerged as the universal API broker that lets AI agents take action by retrieving data, triggering tools, executing workflows, and interfacing with internal systems. But this new power comes with a new problem: MCP servers are being deployed everywhere, by anyone, with almost no guardrails. MCPs are widely used for prototyping, integrating agents with SaaS tools, supporting vendor projects, and enabling shadow agentic workflows in production.
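
For readers new to the protocol, MCP is built on JSON-RPC: a server advertises callable tools that an agent can list and then invoke. The sketch below shows the rough shape of a tool-listing exchange; the create_ticket tool and its schema are hypothetical examples, and real servers implement the full MCP specification.

```python
import json

# Illustrative only: the shape of a JSON-RPC "tools/list" exchange with an MCP
# server. The tool itself ("create_ticket") is a hypothetical example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_ticket",            # an action an agent may invoke
                "description": "Open a ticket in the internal helpdesk system",
                "inputSchema": {                     # JSON Schema for the arguments
                    "type": "object",
                    "properties": {"summary": {"type": "string"}},
                    "required": ["summary"],
                },
            }
        ]
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```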

This wave of adoption sits atop fractured internal API governance in most enterprises, compounding risk. Once deployed, MCP servers become easily accessible, enabling agents to connect and execute workflows with minimal oversight. This becomes a major source of operational exposure.

The result is a rapidly growing API fabric of AI-connected infrastructure that is largely invisible to central security teams. Organisations currently lack visibility regarding how many MCP servers are deployed across the enterprise, who owns or controls them, which APIs and data they expose, what actions agents can perform through them, and whether corporate security standards and basic controls (like authentication, authorisation, and logging) are properly implemented.

Recent industry observations show why this visibility crisis matters. One study showed that only ten months after the launch of MCP, there were over 16,000 MCP servers deployed across Fortune 500 companies. Another found that, in a scan of 1,000 MCP servers, 33% contained a critical vulnerability and the average server had more than five vulnerabilities. MCP is quickly becoming one of the largest sources of “Shadow AI” as organisations scale their agentic workloads.

According to Gartner®, “Most tech providers remain unprepared for the surge in agent-driven API usage. Gartner predicts that by 2028, 80% of organisations will see AI agents consume the majority of their APIs, rather than human developers.”

Gartner further stated, “As agentic AI transforms enterprise systems, tech CEOs who understand and implement MCP would drive growth, ensure responsible deployment and secure a competitive edge in the evolving AI landscape. Ignoring MCP risks falling behind as composability and interoperability become critical differentiators. Tech CEOs must prioritize MCP to lead in the era of agentic AI. MCP is foundational for secure, efficient collaboration among autonomous agents, directly addressing trust, security, and cost challenges.”*

Salt’s MCP Finder technology solves the foundational challenge: you cannot monitor, secure, or govern AI agents until you know what attack surfaces exist. MCP servers are a key component of that surface.

Nick Rago, VP of Product Strategy at Salt Security, said: “You can’t secure what you can’t see. Every MCP server is a potential action point for an autonomous agent. Our MCP Finder technology gives CISOs the single source of truth they need to finally answer the most important question in agentic AI: What can my AI agents do inside my enterprise?”

Salt’s MCP Finder technology uniquely consolidates MCP discovery across three systems to build a unified, authoritative registry:

  1. External Discovery – Salt Surface
    Identifies MCP servers exposed to the public internet, including misconfigured, abandoned, and unknown deployments.
  2. Code Discovery – GitHub Connect
    Using Salt’s recently announced GitHub Connect capability, MCP Finder inspects private repositories to uncover MCP-related APIs, definitions, shadow integrations, and blueprint files before they’re deployed.
  3. Runtime Discovery – Agentic AI Behavior Mapping
    Analyses real traffic from agents to observe which MCP servers are in use, what tools they invoke, and how data flows through them.

Together, these sources give organisations the single source of truth required to visualise risk, enforce posture governance, and apply AI safety policies that extend beyond the model into the actual action layer.
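
To make that consolidation concrete, here is a minimal sketch, not Salt’s implementation, of how findings from external scanning, code discovery, and runtime observation might be merged into one registry keyed by server endpoint; the field names and sample endpoint are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class McpServerRecord:
    """One entry in a unified MCP inventory (illustrative schema)."""
    endpoint: str
    sources: set = field(default_factory=set)   # where the server was discovered
    owner: str | None = None                    # filled in when code discovery knows it
    has_auth: bool | None = None                # unknown until scanned or observed

def merge_findings(*finding_lists) -> dict[str, McpServerRecord]:
    """Consolidate per-source findings into one registry, deduplicated by endpoint."""
    registry: dict[str, McpServerRecord] = {}
    for findings in finding_lists:
        for f in findings:
            rec = registry.setdefault(f["endpoint"], McpServerRecord(f["endpoint"]))
            rec.sources.add(f["source"])
            rec.owner = rec.owner or f.get("owner")
            if f.get("has_auth") is not None:
                rec.has_auth = f["has_auth"]
    return registry

external = [{"endpoint": "https://mcp.example.com", "source": "external_scan", "has_auth": False}]
code     = [{"endpoint": "https://mcp.example.com", "source": "code_repo", "owner": "payments-team"}]
runtime  = [{"endpoint": "https://mcp.example.com", "source": "runtime_traffic", "has_auth": False}]

for record in merge_findings(external, code, runtime).values():
    print(record)
```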

Salt’s MCP Finder technology is available immediately as a core capability within the Salt Illuminate™ platform.


*Source: Gartner Research, Protect Your Customers: Next-Level Agentic AI With Model Context Protocol, By Adrian Lee, Marissa Schmidt, November 2025.

The post Salt Security Launches Salt MCP Finder Technology appeared first on IT Security Guru.

APIContext Introduces MCP Server Performance Monitoring to Ensure Fast and Reliable AI Workflows

Today, APIContext has launched its Model Context Protocol (MCP) Server Performance Monitoring tool, a new capability that ensures AI systems respond fast enough to meet customer expectations.

Given that 85% of enterprises and 78% of SMBs are now using autonomous agents, MCP has emerged as the key enabler by providing an open standard that allows AI agents to access tools – such as APIs, databases, and SaaS apps – through a unified interface. Yet while MCP unlocks scale for agent developers, it also introduces new complexity and operational strain for the downstream applications these agents rely on. Even small slowdowns or bottlenecks can cascade across automated workflows, impacting performance and end-user experience.

APIContext’s MCP server performance monitoring tool provides organisations with first-class observability for AI-agent traffic running over MCP. This capability enables enterprises to detect latency, troubleshoot issues, and ensure AI workflows complete within the performance budgets needed to meet user-facing SLAs. For example, consider a voice AI customer support system speaking with a caller. If the AI sends a query to the MCP server and has to wait for a response, the caller quickly becomes irritated and frustrated, often choosing to escalate to a human operator. This kind of latency prevents the business from realising the full value of its AI operations and disrupts the customer experience.
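
To illustrate what a performance budget means in practice, the sketch below, which is not APIContext’s tooling, times a single tool call and flags it when it exceeds a hypothetical budget backing a user-facing SLA; fake_mcp_lookup stands in for a real MCP invocation.

```python
import time

LATENCY_BUDGET_S = 1.5   # hypothetical budget backing a voice-agent SLA

def within_budget(call, *args, **kwargs):
    """Time a single tool call and report whether it met the latency budget."""
    start = time.monotonic()
    result = call(*args, **kwargs)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        print(f"SLA risk: call took {elapsed:.2f}s (budget {LATENCY_BUDGET_S}s)")
    return result, elapsed

def fake_mcp_lookup(order_id: str) -> str:
    """Stand-in for a real MCP tool invocation."""
    time.sleep(0.2)          # simulate server-side latency
    return f"status for {order_id}: ready"

result, elapsed = within_budget(fake_mcp_lookup, "A-1042")
print(result, f"({elapsed:.2f}s)")
```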

Key Benefits of MCP Performance Monitoring Include:

  • Performance Budgeting for Agentic Workflows: Guarantees agent interactions complete within the required latency to maintain user-facing SLAs.
  • Root Cause Diagnosis: Identifies whether delays are caused by the agent, MCP server, authentication, or downstream APIs. 
  • Reliability in Production: Detects drift and errors in agentic workflows before they affect customers.

“AI workflows now depend on a distributed compute chain that enterprises don’t control. Silent failures happen outside logs, outside traces, and outside traditional monitoring,” said Mayur Upadhyaya, CEO of APIContext. “With MCP performance monitoring, we give organisations a live resilience signal that shows how machines actually experience their digital services, so they can prevent failures before customers ever feel them.”

For more information on APIContext’s MCP Performance Monitoring Tool, visit https://apicontext.com/features/mcp-monitoring/

The post APIContext Introduces MCP Server Performance Monitoring to Ensure Fast and Reliable AI Workflows appeared first on IT Security Guru.

Black Duck SCA Adds AI Model Scanning to Strengthen Software Supply Chain Security

Black Duck has expanded its software composition analysis (SCA) capabilities to include AI model scanning, helping organisations gain visibility into the growing use of open-source AI models embedded in enterprise software.

With the release of version 2025.10.0, the company’s new AI Model Risk Insights capability allows teams to identify and analyse AI models used within applications, including details about their versions, datasets, and licensing. As businesses increasingly turn to AI to accelerate innovation, the feature aims to address mounting challenges around transparency, compliance, and risk management.

The new tool detects models sourced from repositories such as Hugging Face, even if they are hidden or not declared in build manifests. It displays metadata, such as model cards and training data, helping teams assess potential risks associated with licensing or data provenance. The feature also supports emerging governance requirements under frameworks such as the EU AI Act and the U.S. Executive Order on AI, providing audit-ready reports to simplify compliance.
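
As a rough illustration of why undeclared models are detectable at all (this is not Black Duck’s engine), model usage usually leaves code-level traces, such as from_pretrained(...) calls or huggingface.co URLs, that even a naive scan can surface:

```python
import re
from pathlib import Path

# Patterns hinting that a Hugging Face model is pulled at runtime even though
# no manifest declares it (illustrative and far from exhaustive).
PATTERNS = [
    re.compile(r"""from_pretrained\(\s*["']([\w\-./]+)["']"""),
    re.compile(r"""huggingface\.co/([\w\-./]+)"""),
]

def find_model_references(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, model id) pairs for suspected undeclared model downloads."""
    hits = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern in PATTERNS:
            for match in pattern.finditer(text):
                hits.append((str(path), match.group(1)))
    return hits

if __name__ == "__main__":
    for file, model in find_model_references("."):
        print(f"{file}: references model '{model}' that no manifest declares")
```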

“With the introduction of AI model scanning, Black Duck SCA is setting a new standard for software composition analysis,” said Jason Schmitt, CEO at Black Duck. “This innovation directly addresses the emerging security challenges of AI adoption, empowering companies to confidently integrate AI models securely while maintaining compliance and regulatory adherence. The capabilities now available through AI Model Risk Insights also represent a significant leap forward in Black Duck’s mission to help companies build and deliver secure and compliant software.”

The AI Model Risk Insights capability integrates seamlessly into existing Black Duck workflows through CodePrint scanning and the BOM Engine, ensuring minimal setup for users. Available as a licensed feature, it marks another step in Black Duck’s mission to help development teams manage risk across the evolving software supply chain.

The post Black Duck SCA Adds AI Model Scanning to Strengthen Software Supply Chain Security appeared first on IT Security Guru.

Arnica’s Arnie AI Reimagines Application Security For The Agentic Coding Era

As software development enters an era dominated by autonomous coding agents, application security programs are finding themselves structurally unprepared. AI models that generate and modify production code on demand can push thousands of changes per day, far beyond what traditional AppSec pipelines were built to handle.

Arnica has stepped into this gap with Arnie AI, a new security suite designed to operate natively inside the workflows of AI-assisted development. The platform introduces two core systems, AI SAST and the Agentic Rules Enforcer, that together create what the company describes as continuous, in-process enforcement for AI-generated code.

Why Traditional AppSec Breaks Under Agentic Workflows

The rapid adoption of generative assistants such as GitHub Copilot, Anthropic Claude, and Gemini has transformed how code is written. But these tools are tuned for fluency and compile success, not for compliance or secure design. Embedding deep policy checks within the model itself would require costly token budgets and additional inference latency, tradeoffs most enterprises reject.

That optimisation choice leaves a critical gap: AI agents can now produce functional, deployable code that passes compilation but fails security review. Each commit potentially introduces new dependency chains, unsafe defaults, or context-blind logic decisions.

Generic prompts like “write secure code” offer little protection, since every enterprise maintains distinct libraries, secrets-management patterns, and compliance regimes. Once AI models begin producing code across multiple repositories, those differences multiply. The result, security researchers warn, is an attack surface expanding at algorithmic speed.

AI SAST: Fusing Determinism With Machine Context

Arnica’s AI SAST addresses the detection side of this problem by combining deterministic static analysis with an adaptive AI reasoning layer. The deterministic engine performs conventional control-flow, taint, and data-dependency tracing, while the AI component interprets developer intent, learning how different frameworks, language idioms, and business logic interact in practice.
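
A heavily simplified sketch of that two-stage idea, not Arnica’s engine, is shown below: a deterministic pass flags calls to known-dangerous sinks, and a second, context-aware pass (stubbed out here, where a reasoning model would sit) decides how the finding should be reported.

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}   # illustrative sink list

def deterministic_findings(source: str) -> list[int]:
    """Stage 1: classic static analysis, flagging calls to known-dangerous sinks."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in DANGEROUS_CALLS
    ]

def contextual_review(source: str, line: int) -> str:
    """Stage 2 placeholder: a reasoning model would weigh developer intent,
    framework conventions, and data flow before confirming the finding."""
    return f"line {line}: dangerous call reached by unvalidated input (assumed)"

sample = "user_input = input()\nresult = eval(user_input)\n"
for line in deterministic_findings(sample):
    print(contextual_review(sample, line))
```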

By running on every push, pull request, and scheduled scan, AI SAST functions as a real-time guardrail rather than a downstream scanner. Its contextual fix engine generates repair suggestions that align with the developer’s existing framework and style, minimising false positives and rework.

The tool also produces auditable output artifacts suitable for regulatory reviews under SOC 2, ISO 27034, or OWASP ASVS benchmarks. Arnica claims this approach can compress mean time to remediation and eliminate the backlog cycles that plague traditional static analysis programs.

Agentic Rules Enforcer: Preventing Vulnerabilities Before They Exist

Where AI SAST detects issues, the Agentic Rules Enforcer prevents them outright. It embeds version-controlled policy sets directly within source repositories, allowing teams to encode their security standards as executable logic. These policies run at code generation time, intercepting unsafe patterns before the commit lands in source control.
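
The policy-as-code idea might look roughly like the sketch below; this is an illustration rather than Arnica’s actual rule format, and the rule IDs, patterns, and repository path mentioned in the comments are hypothetical.

```python
import re

# Illustrative, version-controlled rules that would live in the repository
# itself (for example under a hypothetical .security/rules.json file).
RULES = [
    {
        "id": "SEC-001",
        "description": "Do not disable TLS certificate verification",
        "pattern": r"verify\s*=\s*False",
    },
    {
        "id": "SEC-002",
        "description": "Secrets must come from the approved secrets manager",
        "pattern": r"(api_key|password)\s*=\s*['\"][^'\"]+['\"]",
    },
]

def enforce(proposed_change: str) -> list[str]:
    """Evaluate every rule against a proposed change before it lands in source control."""
    violations = []
    for rule in RULES:
        if re.search(rule["pattern"], proposed_change):
            violations.append(f"{rule['id']}: {rule['description']}")
    return violations

diff = 'resp = requests.get(url, verify=False)\napi_key = "sk-test-123"\n'
for violation in enforce(diff):
    print("Blocked:", violation)
```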

The architecture is pipelineless: the rules operate independently of CI/CD pipelines and require no developer opt-in. Enforcement occurs the moment an AI agent or human contributor attempts a violating action, producing an inline explanation of which rule was triggered and why.

Because policies are stored and versioned in the repository, organisations maintain full traceability across teams and branches. Standards like OWASP ASVS or NIST 800-53 can be applied globally or customised per project without configuration drift.

Architectural Implications

Arnie AI effectively collapses the traditional boundary between development and security operations. Instead of treating AppSec as a gatekeeper at the end of the pipeline, Arnica positions it as a governor that runs concurrently with code creation.

For DevSecOps teams, the impact is threefold:

  1. Immediate feedback replaces delayed scans and ticket queues.
  2. Rule propagation ensures uniform policy enforcement across distributed environments.
  3. Elastic scalability allows enforcement to match the output rate of autonomous agents.

“As AI systems increasingly write and modify production code, the industry is confronting a new kind of security gap, one born not of human error, but of machine speed,” said Tyler Shields, Principal Analyst at Omdia. “Solutions like Arnica’s Arnie AI that proactively secure AI-generated code represent the next frontier in application security, where policy enforcement and continuous validation must evolve to match the scale and autonomy of agentic development.”

A Different Philosophy of Security Automation

Arnica’s CEO Nir Valtman frames the approach as an inevitable response to the new development order. “AI systems are now active participants in the SDLC. To keep pace, security enforcement has to live alongside them, not behind them,” he said. “Arnie AI was built to ensure velocity and trust can coexist.”

The company’s broader strategy reflects a growing movement away from pipeline-centric security toward deterministic governance controls that run continuously, require no manual invocation, and deliver consistent outcomes across both human and AI contributors.

As enterprises begin integrating agentic frameworks into production, the industry’s focus is shifting from detecting bad code to preventing its creation altogether. Arnica’s Arnie AI may not end that evolution, but it illustrates where AppSec is heading: toward an architecture where security logic executes at the same layer and the same speed as the code itself.

The post Arnica’s Arnie AI Reimagines Application Security For The Agentic Coding Era appeared first on IT Security Guru.

Proton Brings Privacy-Focused AI to the Workplace with Lumo for Business

Proton, the company best known for Proton Mail and Proton VPN, has launched Lumo for Business, a new version of its privacy-first AI assistant designed specifically for teams. The move marks the third major update to Lumo in just three months and signals Proton’s push to bring confidential, end-to-end encrypted AI to the enterprise market.

While generative AI tools such as ChatGPT and Google Gemini have become ubiquitous in the workplace, their use has raised growing concerns about data privacy and compliance. Many of these systems operate as closed “black boxes,” with little visibility into how they store or handle sensitive information. The risk of corporate data exposure or government access requests has led some companies to ban their use altogether.

Proton says Lumo for Business addresses this issue by combining the productivity benefits of AI with strict privacy and compliance safeguards. Protected by European data protection laws and Proton’s zero-access encryption, the platform allows teams to collaborate securely without risking leaks of customer or proprietary data.

“Generative AI has changed everything and stands to create the biggest societal shift since the creation of the internet itself. This is true for consumers, but possibly even more so for businesses. AI assistants boost productivity and are already widespread in the workplace. But they come with serious risks,” said Eamonn Maguire, Director of Engineering for AI at Proton. “Many businesses have already banned ChatGPT and we’re seeing reports of multinational companies building their own in-house AI because they can’t risk their data disappearing into a black box. But small businesses don’t have the resources to build their own ChatGPT from scratch. That’s the gap Lumo fills. Companies shouldn’t have to choose between competitive advantage and data security. With Lumo, they get both: enterprise-grade AI that keeps their sensitive data safe.”

Built on the same foundation of privacy and transparency as Proton’s other products, Lumo for Business offers encrypted chat storage, GDPR compliance, and open-source transparency, ensuring that both the AI models and codebase can be independently verified.

The service also integrates with Proton Drive, allowing users to securely upload and reference documents, such as PDFs, during conversations. Proton says this feature allows Lumo to generate more contextually accurate responses without compromising security.

Unlike many enterprise AI platforms that require complex setup or costly licensing, Lumo for Business is a self-service, affordable solution designed for teams of any size. Employees can get started directly via the Lumo website or mobile apps without IT support.

Key features of Lumo for Business include:

  • Zero-access encryption: Chat histories are stored securely and can only be decrypted by the individual user (see the sketch after this list).

  • Data sovereignty: Hosted entirely in Europe, Lumo complies with GDPR and benefits from some of the world’s strongest privacy laws.

  • Transparency: Lumo’s code and models are open source, allowing public verification of its security and functionality.

  • Productivity tools: Teams can summarize meetings, analyze datasets, write code, and draft documents — all within a secure environment.
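
The zero-access encryption bullet above implies that chat content is encrypted on the user’s side with a key the service never holds. Below is a minimal sketch of that general pattern, using the Fernet symmetric scheme from the cryptography package purely as a stand-in; Proton’s actual key management and cipher choices are not shown here.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Key generated and kept on the user's device; the service never sees it,
# so the ciphertext it stores is unreadable server-side (illustrative only).
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

chat_message = b"Q3 revenue draft: do not share externally"
stored_blob = cipher.encrypt(chat_message)      # what the server would store

# Only the holder of user_key can recover the plaintext.
print(cipher.decrypt(stored_blob).decode())
```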

Proton reports that millions of individuals already use Lumo for personal productivity tasks such as summarizing information, drafting content, and searching the web. With this latest update, businesses can now access the same technology — but with the enterprise-grade privacy and compliance safeguards they require.

More information about Lumo for Business is available on the Proton blog.

The post Proton Brings Privacy-Focused AI to the Workplace with Lumo for Business appeared first on IT Security Guru.

AI Can Transform the Restaurant Industry But Only If It’s Built Securely

AI is transforming how restaurants operate. It’s automating calls, managing orders, handling reservations and even predicting customer demand. But beneath this exciting wave of innovation lies a growing security question: how safe is the data fuelling all this progress?

In an industry that deals daily with personal details, payment information and customer communication, cybersecurity simply cannot be an afterthought. 

The restaurant sector’s rush to adopt AI-driven solutions has created tension between innovation and regulation, and only the most security-conscious platforms will stand the test of time.

Innovation Without Safeguards Is a Recipe For Risk

The rise of generative AI and automation tools has lowered the barrier to entry for SaaS developers. Today, a small team can spin up a voice AI assistant or automated ordering system in weeks. But that speed often comes at a price.

Many newer entrants to the restaurant tech space have been accused of bypassing telecom compliance standards and other data security obligations to get products to market faster. Some rely on unsecured APIs or unvetted cloud integrations, leaving customer data and business communications open to interception or misuse.

Restaurants, often unaware of the risks, end up inheriting the exposure, from data leaks to compliance fines. In a world governed by GDPR, PCI DSS and emerging AI regulations, ignorance isn’t an excuse anymore.

So, for an industry built on trust and service, a single breach can undo years of reputation-building.

Secure AI with Compliance at Its Core

Long-standing AI providers rooted in secure telecommunications, such as ReachifyAI, are showing that innovation and security don’t have to be mutually exclusive. These companies illustrate how experience in regulated industries can shape AI solutions that are both functional and compliant. You really can have the best of both worlds. 

ReachifyAI’s platform handles core restaurant communication tasks, from taking phone orders and managing missed calls to routing messages, while embedding compliance and data protection into its design from the outset. 

Its infrastructure aligns with the regulatory standards that govern secure telecommunications, ensuring data is encrypted in transit and at rest. Sensitive information is kept under strict governance, reducing the risks that often accompany third-party integrations or unsecured APIs.

By taking a measured approach rather than racing to deploy at all costs, ReachifyAI demonstrates a principle increasingly recognised across the industry: security and trust are not optional.

Embedding compliance into the architecture ensures that automation can scale without compromising customer data, creating a model for other AI platforms in hospitality to follow.

This example highlights a key point for the broader restaurant sector: responsible AI deployment isn’t just about technology, it’s about preserving trust while modernising operations.

Understanding The Security Stakes

Indeed, AI in the restaurant industry isn’t just about efficiency – much like every other industry, it’s about trust at scale. Voice-driven AI systems, for instance, capture real-time customer data, voice recordings and sometimes payment information. Without strong identity verification and encryption, that data becomes an easy target for attackers.

Then there’s the issue of AI model leakage, where sensitive data used to train or prompt large language models can unintentionally resurface. For a restaurant handling thousands of customer interactions per week, the exposure risk multiplies quickly.

ReachifyAI mitigates these risks through controlled data environments, compliant APIs and strict access policies. Its approach aligns with key cybersecurity principles – least privilege, encryption-by-default and regulatory transparency.

The result is a platform that not only helps restaurants automate and scale operations but also ensures their customer data remains fully protected.

Compliance Isn’t a Checkbox, It’s a Competitive Advantage

Too often, compliance is viewed as a box to tick rather than a strategic differentiator, but this is where so many companies are going wrong.

In an era of rising cyber threats, adhering to frameworks like GDPR, CCPA and telecom regulations builds confidence with customers, investors and regulators alike.

ReachifyAI’s long-standing commitment to operating within these frameworks has made it a trusted partner in the restaurant industry, particularly for businesses that want to leverage AI without exposing themselves to unnecessary legal or cyber risk.

This compliance-first mindset is increasingly critical as governments around the world tighten oversight of AI systems. The EU’s AI Act, for instance, will require companies to prove the safety, explainability and reliability of their AI models. So the best move is to prepare now rather than wait.

A Safer Future For Restaurant AI

The restaurant industry is entering an AI boom, but not all solutions are created equal. Platforms that prioritise convenience over compliance may deliver short-term gains but face long-term vulnerabilities.

ReachifyAI is showing that security doesn’t have to slow innovation. By fusing telecom-grade compliance with next-generation AI, it’s giving restaurants the tools to modernise safely, sustainably and with confidence.

Because in the end, the question isn’t whether AI will transform the restaurant industry, but who will build it securely enough to last.

The post AI Can Transform the Restaurant Industry But Only If It’s Built Securely appeared first on IT Security Guru.

Saviynt Unveils Major AI Capabilities for Identity Security

Saviynt, the leader in AI-powered identity security solutions, today unveiled groundbreaking advancements to its platform that redefine how enterprises manage and secure identities in the AI era. These new enhancements address two of the most pressing challenges facing enterprises today: the inability to onboard and govern all applications, and the lack of secure management for all identities – human and non-human, including AI agents.

Saviynt’s new AI-driven capabilities address these long-standing challenges by accelerating and simplifying application onboarding, enabling all apps to be managed from a single, unified identity security platform, and extending Identity Security Posture Management (ISPM) to include every identity – human, non-human and AI agent – to help organizations strengthen their overall security posture.

Onboard All Applications with Agentic AI

Comprehensive application onboarding has long been one of the biggest roadblocks to realizing the full value of an identity security program. In fact, a Ponemon study found that 49% of organizations don’t even track how many disconnected apps they have – creating dangerous visibility gaps and expanding the attack surface.

Saviynt’s new Agentic AI Onboarding for Applications solves this challenge by harnessing agentic AI to accelerate and simplify the integration of both connected and disconnected applications across hybrid environments. The result is that every application – no matter where it resides – can now be seamlessly onboarded, governed, and secured under a single identity platform.

Secure All Identities — Human, Non-Human, and AI

As artificial intelligence transforms how enterprises operate, identity ecosystems are expanding at an unprecedented pace. Non-human identities and AI agents now outnumber human identities by more than 82 to 1, underscoring their explosive growth and the urgent need for stronger governance and control.

While AI agents are fueling major productivity gains, they also introduce a new class of identities that widens the attack surface. Most organizations lack the visibility and oversight to manage them effectively, leaving hidden risks across critical systems.

Saviynt is addressing this challenge head-on by extending its Identity Security Posture Management (ISPM) capabilities to cover all identities – human, non-human, and AI. These enhancements empower enterprises to confidently adopt AI while maintaining full visibility, governance, and compliance.

New capabilities include:

  • Identity Security Posture Management (ISPM) for AI Agents: Provides comprehensive visibility, governance, and audit readiness for AI agents and their core components – such as MCP servers and tools – through simplified discovery, prioritized risk insights, and integrated access maps enriched with signals from leading security solutions like CrowdStrike.
  • ISPM for Non-Human Identities (NHI): Enhanced NHI capabilities now include a unified inventory for all NHIs, their access policies, and detected violations, with support for one-click remediation.

“AI is reshaping enterprise security at every level. Identities no longer belong only to people – they now extend to non-human users like machines, applications, and AI agents,” said Sachin Nayyar, Chief Executive Officer, Saviynt. “Our latest AI innovations ensure that every identity is governed with the same rigor, context, and automation. With agentic AI onboarding and comprehensive identity security posture management across all identities, we’re enabling organizations to stay secure, compliant, and prepared for what’s next.”

Built for an AI-Driven Future

Together, these AI-driven capabilities enable unified identity security across all environments, simplifying application onboarding and extending protection to every identity.

“Saviynt has always been at the forefront of identity innovation,” said Vibhuti Sinha, Chief Product Officer, Saviynt. “While others are experimenting with AI overlays, we’re embedding AI natively into the fabric of identity security. This isn’t just about adding new features—it’s about delivering an end-to-end, AI-first platform that helps enterprises govern more effectively, scale seamlessly, and confidently embrace the future of digital business.”

Saviynt’s AI-powered platform seamlessly integrates identity governance, application governance, privileged access management, and security posture management for all identities. With the addition of AI-native capabilities, organizations can proactively reduce risk, accelerate decision-making, and enhance operational agility.

By unifying human and non-human identity security under a single platform, Saviynt empowers enterprises to achieve true Zero Trust at scale and ensure continuous compliance in today’s AI-driven world.

For more information on Saviynt’s AI-powered identity security platform, read the new blog. Saviynt will also showcase these new capabilities during its 2025 UNLOCK Roadshow, taking place in six cities around the world over the next two months.

The post Saviynt Unveils Major AI Capabilities for Identity Security appeared first on IT Security Guru.

Vanta introduces Vanta AI Agent for risk management

Vanta, the trust management platform, has announced a new set of capabilities that embed AI across core compliance and risk workflows. The expanded capabilities unify policy management with Vanta AI Agent, continuous monitoring for vendors, risk oversight, and deeper integrations, providing security leaders with a single system of record to act on risk before it escalates.

Risk management is typically fragmented across siloed tools, teams, and manual processes, the company said in a press release. Internal issues live in one system, vendor reviews in another, and leadership reporting requires hours of manual consolidation. This disjointed approach keeps teams reactive, with critical risks often going unnoticed until it’s too late, slowing audits, delaying deals, and leaving organisations exposed.

Vanta addresses this with a new set of capabilities that eliminate the fragmentation by embedding agentic AI into policy and evidence workflows, centralising risk registers into enterprise rollups, enabling always-on vendor monitoring, and further powering collaboration. With one system of record that reflects how organisations actually operate, leaders gain both unified visibility and proactive control, reducing manual reporting, accelerating audits, and strengthening trust.

“Organisations have long struggled with fragmented systems and reactive reviews,” said Jeremy Epling, Chief Product Officer, Vanta. “By embedding AI in policy workflows and unifying risk oversight across registers and vendors, we are changing how security teams operate. These capabilities allow leaders to spend less time on manual reporting and more time addressing real risks and strengthening trust with their stakeholders.”

AI-Driven policy management

Policies are essential to every GRC programme, but drafting and maintaining them is often slow, complex, and resource-intensive. Delays in keeping documentation current can stall audits and increase exposure.

The Vanta AI Agent, already grounded in organisational context, now also manages policy workflows. The agent generates audit-ready policies, executes bulk updates across entire libraries, and validates documentation for completeness. By extending to policies the same proactive intelligence that already flags gaps in evidence, SLA inconsistencies, and more, Vanta automates the most time-consuming tasks and keeps organisations continuously audit-ready.

Centralised enterprise risk oversight

As businesses grow, so does their exposure to risk – from internal systems to new departments to cross-functional processes. Too often, signals remain scattered across disconnected tools, and departments categorise risk differently, preventing leaders from prioritising effectively or seeing the full picture.

Vanta now offers a centralised, proactive approach to risk management, aligning business functions up to and through the boardroom. New functionality includes Multiple Risk Registers, which allows organisations to structure risk management around their business units, with tailored views and risks for each function or category. Enterprise Risk Rollups then consolidate those registers into a unified, real-time dashboard for executive-level visibility, so leaders no longer rely on manual or fragmented reporting and risk management becomes proactive and aligned to business structure.

Continuous vendor risk management

Traditional point-in-time vendor reviews are no longer sufficient in today’s fast-changing threat environment. Vendors can shift their security posture overnight, leaving organisations exposed before reviews catch up. Vanta’s expanded Vendor Risk Management delivers continuous oversight, monitoring vendors in real time and triggering alerts based on configurable thresholds. Through Continuous Monitoring and Alerts, powered by Vanta’s Riskey acquisition, and AI security review summaries that streamline questionnaires and surface key findings, vendor risks are identified proactively so organisations can take immediate action.
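
As a toy illustration of threshold-based alerting (hypothetical thresholds and fields, not Vanta’s product), a monitor might compare each vendor’s latest observed posture against configured limits and raise an alert on any breach:

```python
# Hypothetical, configurable alerting thresholds for vendor monitoring.
ALERT_THRESHOLDS = {"min_security_rating": 70, "max_days_since_review": 365}

def evaluate_vendor(vendor: dict) -> list[str]:
    """Raise alerts when a vendor's monitored posture crosses a configured threshold."""
    alerts = []
    if vendor["security_rating"] < ALERT_THRESHOLDS["min_security_rating"]:
        alerts.append(f"{vendor['name']}: security rating dropped to {vendor['security_rating']}")
    if vendor["days_since_review"] > ALERT_THRESHOLDS["max_days_since_review"]:
        overdue = vendor["days_since_review"] - ALERT_THRESHOLDS["max_days_since_review"]
        alerts.append(f"{vendor['name']}: periodic review overdue by {overdue} days")
    return alerts

print(evaluate_vendor({"name": "Acme Cloud", "security_rating": 62, "days_since_review": 400}))
```
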
Slack integration for security workflows

Effective security depends on cross-functional engagement, but collaboration slows when teams are forced to leave their daily tools. Vanta’s enhanced Slack integration embeds security workflows directly into the tools employees already use. Teams can submit and approve access requests, respond to reviews and questionnaires, and receive timely notifications, all without leaving Slack. The result: faster decisions and greater accountability across the organisation.

The post Vanta introduces Vanta AI Agent for risk management appeared first on IT Security Guru.
