From open source libraries to AI-powered coding assistants, speed-driven development is introducing new third-party risks that threat actors are increasingly exploiting.
Whether you're generating data from scratch or transforming sensitive production data, performant test data generators are critical tools for achieving compliance in development workflows.
Urban VPN Proxy, which claims to protect users' privacy, collects data from conversations with ChatGPT, Claude, Gemini, Copilot and other AI assistants.
Identity systems hold modern life together, yet we barely notice them until they fail. Every time someone starts a new job, crosses a border, or walks into a secure building, an official must answer one deceptively simple question: Is this person really who they claim to be? That single moment, matching a living, breathing human to...
Over the past week, enterprise security teams observed a combination of covert malware communication attempts and aggressive probing of publicly exposed infrastructure. These incidents, detected across firewall and endpoint security layers, show how modern attackers operate on two fronts at once: quietly activating compromised internal systems while relentlessly scanning external services for exploitable weaknesses. Although the...
Breaking Free from Security Silos in the Modern Enterprise
Today's organizations face an unprecedented challenge: securing increasingly complex IT environments that span on-premises data centers, multiple cloud platforms, and hybrid architectures. Traditional security approaches that rely on disparate point solutions are failing to keep pace with sophisticated threats, leaving critical gaps in visibility and response...
Labeling adversary activity with ATT&CK techniques is a tried-and-true method for classifying behavior. But it rarely tells defenders how those behaviors are executed in real environments.
As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.
However, AI systems do not operate in isolation. They rely heavily on Application Programming Interfaces (APIs) to ingest training data, serve model inferences, and facilitate communication between agents and servers. Consequently, the API attack surface effectively becomes the AI attack surface. Securing these API pathways is fundamental to achieving the "Secure and Resilient" and "Privacy-Enhanced" characteristics mandated by the framework.
Understanding the NIST AI RMF Core
The NIST AI RMF is organized around four core functions that provide a structure for managing risk throughout the AI lifecycle:
GOVERN: Cultivates a culture of risk management and outlines processes, documents, and organizational schemes.
MAP: Establishes context to frame risks, identifying interdependencies and visibility gaps.
MEASURE: Employs tools and methodologies to analyze, assess, and monitor AI risk and related impacts.
MANAGE: Prioritizes and acts upon risks, allocating resources to respond to and recover from incidents.
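To make these functions concrete at the API layer, which is the focus of the rest of this post, the short Python sketch below pairs each core function with example API-security activities. The activity descriptions are illustrative assumptions on our part, not NIST language:

```python
# Illustrative only: example API-layer activities for each NIST AI RMF
# core function. The descriptions are assumptions, not NIST text.
AI_RMF_API_ACTIVITIES = {
    "GOVERN": [
        "Assign an owner of record to every AI-facing API",
        "Require security review before new AI endpoints ship",
    ],
    "MAP": [
        "Inventory every API that touches training data or inference",
        "Flag undocumented (shadow) endpoints for triage",
    ],
    "MEASURE": [
        "Continuously test AI-connected APIs for logic flaws",
        "Track authentication and data-exposure findings over time",
    ],
    "MANAGE": [
        "Prioritize remediation by data sensitivity and exposure",
        "Maintain response runbooks for API abuse incidents",
    ],
}

# Print a simple checklist grouped by core function.
for function, activities in AI_RMF_API_ACTIVITIES.items():
    print(function)
    for activity in activities:
        print(f"  - {activity}")
```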
The Critical Role of API Posture Governance
While the "GOVERN" function in the NIST framework focuses on organizational culture and policies, API Posture Governance serves as the technical enforcement mechanism for these policies in operational environments.
Without robust API posture governance, organizations struggle to effectively Manage or Govern their AI risks. Unvetted AI models may be deployed via shadow APIs, and sensitive training data can be exposed through misconfigurations. Automating posture governance ensures that every API connected to an AI system adheres to security standards, preventing the deployment of insecure models and ensuring your AI infrastructure remains compliant by design.
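As a concrete, deliberately simplified example of what automated posture governance can look like, the Python sketch below parses an OpenAPI 3.x document and flags any operation that declares no authentication requirement. The policy itself (every operation must require auth) is an assumed organizational rule, and this is not Salt Security's implementation:

```python
"""
Minimal posture-governance check (illustrative assumption, not Salt
Security's implementation): flag OpenAPI 3.x operations that declare
no security requirement, i.e. endpoints callable without auth.
"""
import json
import sys

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def find_unauthenticated_operations(spec: dict) -> list[str]:
    """Return 'METHOD /path' for each operation with no effective auth."""
    global_security = spec.get("security", [])
    violations = []
    for path, path_item in spec.get("paths", {}).items():
        for method, operation in path_item.items():
            if method not in HTTP_METHODS or not isinstance(operation, dict):
                continue
            # Operation-level 'security' overrides the global default;
            # an explicit empty list disables authentication entirely.
            effective = operation.get("security", global_security)
            if not effective:
                violations.append(f"{method.upper()} {path}")
    return violations

if __name__ == "__main__":
    # Usage: python posture_check.py openapi.json
    with open(sys.argv[1]) as f:
        spec = json.load(f)
    for endpoint in find_unauthenticated_operations(spec):
        print(f"POLICY VIOLATION (no auth required): {endpoint}")
```

Run as a CI gate, a check like this blocks non-compliant APIs before deployment, which is the "compliant by design" idea above in miniature.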
How Salt Security Safeguards AI Systems
Salt Security provides a tailored solution that aligns directly with the NIST AI RMF. By securing the API layer (the Agentic AI Action Layer), Salt Security helps organizations maintain the integrity of their AI systems and safeguard sensitive data. The key features, along with their direct correlations to NIST AI RMF functions, include:
Complete API discovery
Alignment: Supports the MAP function by establishing context and recognizing risk visibility gaps.
Outcome: Delivers a complete inventory of all APIs, including shadow APIs used for AI training or inference, ensuring no part of the AI ecosystem goes unmanaged. (A brief illustrative sketch of shadow-API detection follows this list.)
API posture governance
Alignment: Operationalizes the GOVERN and MANAGE functions by enabling organizational risk culture and prioritizing risk treatment.
Outcome: Keeps APIs secure throughout their lifecycle, enforcing policies that prevent the deployment of insecure models and ensuring ongoing compliance with NIST standards.
API testing and threat detection
Alignment: Supports the MEASURE function by assessing system trustworthiness and testing for failure modes.
Outcome: Identifies logic flaws and misconfigurations in AI-connected APIs before adversaries can exploit them.
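As referenced in the discovery item above, here is a minimal sketch of shadow-API detection: diff the endpoints observed in traffic against the documented inventory. The endpoint names are hypothetical, and a real platform derives both sets from continuous traffic analysis rather than static lists:

```python
# Illustrative shadow-API detection: anything seen in traffic but absent
# from the documented inventory is a shadow endpoint. All endpoint names
# here are hypothetical examples.
documented = {
    "POST /v1/inference",
    "GET /v1/models",
}

# Endpoints actually observed in traffic (e.g., parsed from gateway logs).
observed = {
    "POST /v1/inference",
    "GET /v1/models",
    "POST /internal/train/upload",  # undocumented: a shadow API
}

for endpoint in sorted(observed - documented):
    print(f"SHADOW API detected; review and add to inventory: {endpoint}")
```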
Conclusion
Trustworthy AI requires secure APIs. By implementing API Posture Governance and comprehensive security controls, organizations can confidently adopt the NIST AI RMF and innovate safely. Salt Security provides the visibility and protection needed to secure the critical infrastructure powering your AI. For a more in-depth understanding of API security compliance across multiple regulations, please refer to our comprehensive API Compliance Whitepaper.
SoundCloud confirmed today that it experienced a security incident involving unauthorized access to a supporting internal system, resulting in the exposure of certain user data. The company said the incident affected approximately 20 percent of its users and involved email addresses along with information already visible on public SoundCloud profiles. Passwords and financial information were [...]
This article was originally published in T.H.E. Journal on 12/10/25 by Charlie Sander. Device-based learning is no longer "new," but many schools still lack a coherent playbook for managing it. Many school districts dashed to adopt 1:1 computing during the pandemic, spending $48 million on new devices to ensure every child had a platform to take classes ...
China is already the world's largest exporter of AI-powered surveillance technology, and new surveillance technologies and platforms developed in China are unlikely to simply stay there. By exposing the full scope of China's AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media, and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China's growing efforts to project that repression beyond its borders...
What is the Australian Privacy Act? The Australian Privacy Act 1988 (Cth), commonly referred to as the Privacy Act, is the primary legislation governing the protection of personal information in Australia. It establishes how government agencies and private sector organizations collect, use, store, and disclose personal information, and grants individuals the right to access and [...]
Here at Approov, we always like to look ahead and try to predict what will happen in mobile cybersecurity in the coming year. Mobile app security must be taken seriously, and insight into key trends helps teams prepare. You can read our predictions for 2025 here. Our predictions for next year are coming shortly.