The Attack Surface of Cloud-Based Generative AI Applications is Evolving

It is the right time to talk about this. Cloud-based artificial intelligence, and specifically the large language models we now see everywhere, has completely changed the game. These models are more than just a new application tier; they are an entirely new attack surface. You've moved your critical applications to the public cloud. You did it for…
The post The Attack Surface of Cloud-Based Generative AI Applications is Evolving appeared first on Security Boulevard.
Securing GenAI in Enterprises: Lessons from the Field

Enterprise GenAI success depends on more than models—security, observability, evaluation, and integration are critical to move from fragile pilots to reliable, scalable AI.
The post Securing GenAI in Enterprises: Lessons from the Field appeared first on Security Boulevard.
mcp-scan – Real-Time Guardrail Monitoring and Dynamic Proxy for MCP Servers
Red Teaming LLMs 2025 – Offensive Security Meets Generative AI
mcp-scanner – Python MCP Scanner for Prompt-Injection and Insecure Agents
LLM Black Markets in 2025 – Prompt Injection, Jailbreak Sales & Model Leaks
AIPentestKit – AI-Augmented Red Team Toolkit for Recon, Fuzzing and Payload Generation
HexStrike AI – Multi-Agent LLM Orchestration for Automated Offensive Security
LLAMATOR – Red Team Framework for Testing LLM Security
Generative AI in Social Engineering & Phishing in 2025
Innovator Spotlight: Concentric AI

Data Security’s New Frontier: How Generative AI is Rewriting the Cybersecurity Playbook Semantic Intelligence™ utilizes context-aware AI to discover structured and unstructured data across cloud and on-prem environments. The “Content...
The post Innovator Spotlight: Concentric AI appeared first on Cyber Defense Magazine.
The EU is a Trailblazer, and the AI Act Proves It
Summary Bullets:
• On August 2, 2025, the second stage of the EU AI Act came into force, including obligations for general purpose models.
• The first obligations under the AI Act took effect in February 2025; the legislation follows a staggered approach, with the final wave expected on August 2, 2027.
August 2025 has been marked by the enforcement of a new set of rules as part of the AI Act, the world’s first comprehensive AI legislation, which is being implemented in gradual stages. Like GDPR was for data privacy in the 2010s, the AI Act will be the global blueprint for governance of the transformative technology of AI, for decades to come. Recent news of the latest case of legal action, this time against OpenAI, by the parents of 16-year-old Adam Raine, who ended his life after months of intensive use of ChatGPT, has thrown into stark relief the potential for harm and the need to regulate the technology.
The AI Act follows a risk management approach; it aims to regulate transparency and accountability for AI systems and their developers. Although it was enacted into law in 2024, the first wave of enforcement proper was implemented last February (please see GlobalData’s take on The AI Act: landmark regulation comes into force) covering “unacceptable risk,” including AI systems considered a clear threat to societal safety. The second wave, implemented this month, covers general purpose AI (GPAI) models and arguably is the most important one, at least in terms of scope. The next steps are expected to follow in August 2026 (“high-risk systems”) and August 2027 (final steps of implementation).
From August 2, 2025, GPAI providers must comply with transparency and copyright obligations when placing their models on the EU market. This applies not only to EU-based companies but to any organization with operations in the EU. GPAI models already on the market before August 2, 2025 must be brought into compliance by August 2, 2027. For the purposes of the law, GPAI models include those trained with more than 10^23 floating point operations (FLOP) and capable of generating language (whether text or audio), text-to-image, or text-to-video.
Providers of GPAI systems must keep technical documentation about the model, including a sufficiently detailed summary of its training corpus. In addition, they must implement a policy to comply with EU copyright law. Within the group of GPAI models there is a special tier considered to pose "systemic risk": very advanced models that only a small handful of providers develop. Firms within this tier face additional obligations, for instance notifying the European Commission when developing a model deemed to pose systemic risk and taking steps to ensure the model's safety and security. The classification of which models pose systemic risk can change over time as the technology evolves. There are exceptions: AI used for national security, military, and defense purposes is exempted under the act. Some open-source systems are also outside the reach of the legislation, as are AI models developed using publicly available code.
The European Commission has published a template to help providers summarize the data used to train their models, as well as the GPAI Code of Practice, developed by independent experts as a voluntary tool for AI providers to demonstrate compliance with the AI Act. Signatories include Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI, and ServiceNow, but there are some glaring absences, notably Meta (at the time of writing). The code covers transparency and copyright rules that apply to all GPAI models, with additional safety and security rules for the systemic risk tier.
The AI Act has drawn criticism because of its disproportionate impact on startups and SMBs, with some experts arguing that it should include exceptions for technologies that have yet to gain traction with the general public and do not have a wide impact or potential for harm. Others say it could slow progress among European organizations in the process of training their AI models, and that the rules are confusing. Last July, several tech lobbies, including CCIA Europe, urged the EU to pause implementation of the act, arguing that the roll-out had been too rushed, without weighing the potential consequences. Sound familiar?
However, the act has been developed with the collaboration of thousands of stakeholders in the private sector, at a time when businesses are craving regulatory guidance. It also introduces standard security practices across the EU in a critical period of adoption, and it sets a global benchmark for others to follow in a time of great upheaval. After the AI Act, the US and other countries will find it increasingly hard to continue ignoring the calls for more responsible AI, a commendable effort that will make history.
IBM Think on Tour Singapore 2025: An Agentic Enterprise Comes Down to Tech, Infrastructure, Orchestration, and Optionality
Summary Bullets:
• Cloud will have a role in the AI journey, but it is no longer the destination. The world will be hybrid and multi-vendor.
• Agentic AI manifests from this new platform but will be a double-edged sword. Autonomy is proportionate to risk. Any solution that goes to production needs governance.
The AI triathlon is underway. A year ago the race was about the size of the GenAI large language model (LLM). Today, it is about the number of AI agents connecting to internal systems to automate workflows, and it is moving toward the overall level of preparedness for the agentic enterprise. The latter is about giving much higher levels of autonomy to AI agents to set their own goals, self-learn, and make decisions that impact customers (e.g., approving home loans, resolving disputes), possibly managing other agents from other vendors. This, in turn, influences NPS, C-SAT, customer advocacy, compliance, and countless other metrics. It also raises many other legitimate legal, ethical, and regulatory concerns.
Blending Tech with Flexible Architectures
While AI in many of its current forms is nascent, getting things right often starts with placing the right bets. The IBM vision, as articulated, aligns tightly with the trends on the ground: broadly, automation, AI, hybrid and multi-cloud environments, and data. Not every customer will follow the same flight path, but multiple options are key in the era of disaggregation.
In February 2025 IBM acquired HashiCorp, a company that foresaw public cloud and on-prem integration challenges early on and invested in developer tools, automation, and infrastructure as code. Contextualized to today's language models, enterprises will continue to have different needs. While the public cloud will likely be the ideal environment for model training, inferencing or fine-tuning may be better suited to the edge. Hybrid is the way, and automation is the glue. GlobalData CXO research shows that AI is accelerating edge infrastructure, not cloud, and considerations such as performance, security, compliance, and cost are causing the pendulum to swing back.
Watsonx Orchestrate
The acquisition of Red Hat six years ago helped embed the open source approach in IBM's DNA, and this is even more relevant for AI now. Openness also translates to middleware, and one of the standouts of the event is the 'headless architecture' with Watsonx. Decoupling the frontend UI/UX from backend databases and business logic puts the focus less on the number of agents and more on how well autonomous tasks and actions are synchronized in a multi-vendor environment. Traditional vendors have a rich history of integration challenges, so an open platform approach that works across many of the established application environments and other frameworks is the most viable option. In this context, IBM shared examples ranging from a global SaaS provider using Watsonx to support its own global orchestration roll-out, to direct selling to MNCs with large install bases of competing solutions, to scenarios where partners bring their own agents. IBM likely wants to be seen as having the most open platform, rather than the best technology in a tightly coupled stack.
The Opportunity
Agentic AI's great potential is a double-edged sword. Autonomy is proportionate to risk, and risk can only be managed with governance. Controls can include guardrails (e.g., ethics) and process controls (e.g., explainability, monitoring and observability). Employees will need varying levels of accountability and oversight too. While IBM is a technology company with its own products and infrastructure, it also has its own consulting resources, with 160,000 global staff. Most competitors will lean towards the partner-led approach; whichever path is taken, both options are on the table for IBM. This is important for balancing risk with technology evolution. Still, very few AI proofs of concept ever make it to production, and great concepts will require the extra consulting muscle, especially through multi-disciplinary teams, to show business value. Claims of internal capability need to walk a tightrope with vendor agnosticism to keep both camps motivated and the markets confident.
Vegas, Vulnerabilities, and Voices: Black Hat and Squadcon 2025

The week of August 4th, I had the opportunity to attend two exciting conferences in the cybersecurity world: Black Hat USA 2025 and Squadcon, which were held in Las Vegas....
The post Vegas, Vulnerabilities, and Voices: Black Hat and Squadcon 2025 appeared first on Cyber Defense Magazine.
A New Chapter for AI and Cybersecurity: SentinelOne Acquires Prompt Security
Organizations around the globe are rapidly adopting AI and embracing accelerated creativity and output, but with this vast opportunity come enormous challenges: visibility, compliance, security, control. From the growth of AI tool usage outside IT and infosec to the emergence of autonomous AI agents and agentic workflows, the undeniable benefits of AI often open the door to novel cyber threats and data privacy concerns, but even more often, to misuse and leakage of sensitive information.
SentinelOne pioneered AI cybersecurity beginning at the endpoint, and this strategy has rapidly evolved to the cloud, AI SIEM, and generative and agentic AI to protect every aspect of enterprise security. Now we're taking that strategy a step further: today we signed a definitive agreement to acquire Prompt Security, a rapidly growing company that empowers organizations to use AI and AI agents securely. The immediate visibility and control Prompt Security delivers over all employee use of GenAI applications in the work environment is unparalleled.
Embrace AI without compromising visibility, security, or control
Prompt Security CEO Itamar Golan and his team were early champions of AI as a force for productivity, innovation, and transformation. As a cybersecurity veteran of Orca and Check Point, Golan was quick to realize that security risks would be the single biggest blocker to widespread AI adoption. This need has driven Prompt Security's approach from the start: providing companies with the ability to encourage and deploy employee AI usage without compromise.
Prompt Security's technology helps organizations by integrating across browsers, desktop applications, and APIs. This includes real-time visibility into how AI tools are accessed and what data is being stored, and automated enforcement to prevent prompt injections, sensitive data leakage, and misuse.
This design and approach is highly complementary to SentinelOne's AI strategy and the Singularity Platform, creating a unique, integrated layer for securing AI in the enterprise: protecting tools where and how they are used, and creating customer value in a way no other solution on the market can match.
The Prompt Security Difference
Prompt Security enables organizations and users to confidently leverage tools like ChatGPT, Gemini, Claude, Cursor, and other custom LLMs by providing IT and security teams visibility, security, and real-time control – even over unmanaged AI use.
Real-Time AI Visibility
Prompt Security’s lightweight agent and browser extensions automatically discover both sanctioned GenAI apps and unsanctioned Shadow AI wherever employees work. This includes browsers, desktop IDEs, terminal-based assistants, APIs, and custom workflows. The platform maintains a live inventory of usage across thousands of AI tools and assistants. Every prompt and response is captured with full context, giving security teams searchable logs for audit and compliance. This is a great complement to our existing presence on the endpoint, and will enable us to accelerate our GenAI DLP capabilities.
Policy-Based Controls
Granular, policy-driven rules let teams redact or tokenize sensitive data on the fly, block high-risk prompts, and deliver inline coaching that helps users learn safe AI practices without losing productivity.
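To make the idea of policy-driven controls concrete, here is a minimal, hypothetical sketch of how on-the-fly redaction and blocking of high-risk prompts could work. The rule names, patterns, and actions below are illustrative assumptions chosen for brevity; they are not Prompt Security's actual implementation or API.

```python
# Hypothetical illustration only: a minimal, policy-driven prompt filter.
# Rule names, patterns, and actions are assumptions, not a vendor's API.
import re

REDACTION_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
BLOCK_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
]

def apply_policy(prompt: str) -> tuple[str, str]:
    """Return (action, transformed_prompt): either 'block', or 'allow' with redactions applied."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(prompt):
            return "block", prompt
    for label, pattern in REDACTION_RULES.items():
        prompt = pattern.sub(f"<{label.upper()}_REDACTED>", prompt)
    return "allow", prompt

if __name__ == "__main__":
    action, safe = apply_policy("Summarize this: contact jane.doe@example.com")
    print(action, safe)  # allow Summarize this: contact <EMAIL_REDACTED>
```

A production system would of course rely on far richer classifiers and inline coaching rather than a handful of regexes, but the allow/redact/block decision flow is the essence of the policy layer described above.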
AI Attack Prevention
The platform inspects every interaction in real time to stop prompt injection, jailbreak attempts, malicious output manipulation, and prompt leaks. It is designed to maintain low latency so users experience no disruption.
Model Agnostic Coverage
Safeguards apply uniformly across all major LLM providers including OpenAI, Anthropic, and Google, as well as self-hosted or on-prem models. The fully provider-independent architecture fits into any stack, whether SaaS or self-hosted.
MCP Gateway Security
Prompt Security’s MCP Gateway sits between AI applications and more than 13,000 known MCP servers, intercepting every call, prompt template, and response. Each server receives a dynamic risk score, and the system enforces allow, block, filter, or redact actions.
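As a rough illustration of the gateway pattern described above, the sketch below maps a dynamic per-server risk score onto an enforcement action. The data structure, thresholds, and example scores are assumptions for illustration only, not details of Prompt Security's MCP Gateway.

```python
# Hypothetical sketch of gateway-style enforcement keyed to a per-server risk score.
# Thresholds and scores are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class McpServerProfile:
    name: str
    risk_score: float  # 0.0 (trusted) to 1.0 (high risk), recomputed dynamically

def decide_action(server: McpServerProfile) -> str:
    """Map a dynamic risk score onto an enforcement action."""
    if server.risk_score >= 0.8:
        return "block"   # refuse the call outright
    if server.risk_score >= 0.5:
        return "filter"  # strip risky tool outputs before they reach the model
    if server.risk_score >= 0.3:
        return "redact"  # mask sensitive fields in prompts and responses
    return "allow"

print(decide_action(McpServerProfile("weather-tools", 0.2)))   # allow
print(decide_action(McpServerProfile("unknown-server", 0.9)))  # block
```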
The Future of AI Security
AI is the most transformative force in the world today, but without security, it becomes a liability. SentinelOne has long set the standard for how AI can transform cybersecurity. This acquisition unlocks a new frontier of platform expansion for SentinelOne and represents a step forward in our AI strategy: from AI for security to security for AI. It cements SentinelOne's leadership in securing the modern AI-powered enterprise, and it puts front and center what acquisitions should be about: solving real customer problems, improving security, and creating tangible value for security teams, allowing them to lead their businesses safely and responsibly into the AI age.
Protecting the usage of AI tools without compromising safety or inhibiting productivity is critical to their continued adoption, and together, SentinelOne and Prompt Security provide the tools and confidence to make that a reality.
The ink may still be drying, but the next chapter of SentinelOne's growth story has officially begun. On behalf of all Sentinels, our partners, and our customers, I couldn't be happier to welcome the Prompt Security team to SentinelOne!
Forward Looking Statements
This blog post contains forward-looking statements. The achievement or success of the matters covered by such forward-looking statements involve risks, uncertainties and assumptions. If any such risks or uncertainties materialize or if any of the assumptions prove incorrect, our results could differ materially from the results expressed or implied by the forward-looking statements. Please refer to the documents we file from time to time with the U.S. Securities and Exchange Commission, in particular, our Annual Report on Form 10-K and our Quarterly Reports on Form 10-Q. These documents contain and identify important risk factors and other information that may cause our actual results to differ materially from those contained in our forward-looking statements. Any unreleased products, services or solutions referenced in this or other press releases or public statements are not currently available and may not be delivered on time or at all. Customers who purchase SentinelOne products, services and solutions should make their purchase decisions based upon offerings that are currently available.

PyRIT – AI-Powered Reconnaissance for Cloud Red Teaming
Deepfake misuse & deepfake detection (before it’s too late)
Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.
Can you explain how deepfake technology works?
Deepfakes are simulated video, audio, and images delivered as content via online news, mobile applications, and social media platforms. Deepfake videos are created with Generative Adversarial Networks (GANs), a type of artificial neural network that uses deep learning to create synthetic content.
GANs sound cool, but technical. Could you break down how they operate?
GANs are a class of machine learning systems built from two neural network models, a generator and a discriminator, which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.
The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high-quality synthetic video, images, or audio. A good example of a GAN at work is NVIDIA's GAN. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces on the internet. Refreshing the browser yields a new synthetic image of a human who does not exist.
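For readers who want to see the generator/discriminator game in code, here is a minimal, illustrative PyTorch sketch. It learns a toy one-dimensional distribution rather than faces or video, and the architecture and hyperparameters are assumptions chosen for brevity, not how production deepfake models are built.

```python
# Minimal sketch of the adversarial "game" between generator and discriminator.
# Toy 1-D data stands in for images/video; all settings are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "training data": samples from N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator tries to tell real data from the generator's recreations.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator tries to fool the discriminator into labeling its fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward the real distribution (~3.0).
print(generator(torch.randn(5, 8)).detach().squeeze())
```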
What are some notable examples of deepfake tech’s misuse?
Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.
How was the crisis involving the Zelenskyy deepfake video managed?
The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the average person to discern GenAI at work. And detection technologies, while they have a tremendous amount of funding and support from big technology corporations, are lagging far behind.
What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?
Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.
Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to "click bait" for launching malware (bots, ransomware, and other malicious code), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake job interviews over Zoom.
How are law enforcement agencies addressing the challenges posed by deepfake technology?
Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.
What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?
Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.
What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?
On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.
The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.
The NIST AI Risk Management Framework will help any organization align its organizational goals and use cases for AI. Most importantly, this risk management framework is human-centered. It includes social responsibility and sustainability information, and it helps organizations focus closely on the potential or unintended consequences and impact of AI use.
Another immense help for organizations that wish to further understand the risk associated with GenAI large language model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as prompt injection and data poisoning. These risks are especially critical to know about when bringing an LLM in-house.
As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.
For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.
The post Deepfake misuse & deepfake detection (before it’s too late) appeared first on CyberTalk.

