Normal view
-
TechCrunch
- No,Β youΒ canβtΒ get your AIΒ toΒ βadmitβΒ to being sexist, but itΒ probably is anyway
No,Β youΒ canβtΒ get your AIΒ toΒ βadmitβΒ to being sexist, but itΒ probably is anyway
Radware Adds Firewall for LLMs to Security Portfolio
Radware has developed a firewall for large language models (LLMs) that ensures governance and security policies are enforced in real time. Provided as an add-on to the companyβs Cloud Application Protection Services, Radware LLM Firewall addresses the top 10 risks and mitigations for LLMs and generative artificial intelligence (AI) applications defined by the OWASP GenAI..
The post Radware Adds Firewall for LLMs to Security Portfolio appeared first on Security Boulevard.
Securing GenAI in Enterprises: Lessons from the Field
Enterprise GenAI success depends on more than modelsβsecurity, observability, evaluation, and integration are critical to move from fragile pilots to reliable, scalable AI.
The post Securing GenAI in Enterprises: Lessons from the Field appeared first on Security Boulevard.
Follow Pragmatic Interventions to Keep Agentic AI in Check
Agentic AI speeds operations, but requires clear goals, least privilege, auditability, redβteaming, and human oversight to manage opacity, misalignment, and misuse.
The post Follow Pragmatic Interventions to Keep Agentic AI in Check appeared first on SecurityWeek.
Generative AI adoption: Strategic implications & security concerns
By Manuel Rodriguez. With more than 15 years of experience in cyber security, Manuel Rodriguez is currently the Security Engineering Manager for the North of Latin America at Check Point Software Technologies, where he leads a team of high-level professionals whose objective is to help organizations and businesses meet cyber security needs. Manuel joined Check Point in 2015 and initially worked as a Security Engineer, covering Central America, where he participated in the development of important projects for multiple clients in the region. He had previously served in leadership roles for various cyber security solution providers in Colombia.
Technology evolves very quickly. We often see innovations that are groundbreaking and have the potential to change the way we live and do business. Although artificial intelligence is not necessarily new, in November of 2022 ChatGPT was released, giving the general public access to a technology we know as Generative Artificial Intelligence (GenAI). It was in a short time from then to the point where people and organizations realized it could help them gain a competitive advantage.
Over the past year, organizational adoption of GenAI has nearly doubled, showing the growing interest in embracing this kind of technology. This surge isnβt a temporary trend; it is a clear indication of the impact GenAI is already having and that it will continue to have in the coming years across various industry sectors.
The surge in adoption
Recent data reveals that 65% of organizations are now regularly using generative AI, with overall AI adoption jumping to 72% this year. This rapid increase shows the growing recognition of GenAIβs potential to drive innovation and efficiency. One analyst firm predicts that by 2026, over 80% of enterprises will be utilizing GenAI APIs or applications, highlighting the importance that businesses are giving to integrating this technology into their strategic frameworks.
Building trust and addressing concerns
Although adoption is increasing very fast in organizations, the percentage of the workforce with access to this kind of technology still relatively low. In a recent survey by Deloitte, it was found that 46% of organizations provide approved Generative AI access to 20% or less of their workforce. When asked for the reason behind this, the main answer was around risk and reward. Aligned with that, 92% of business leaders see moderate to high-risk concerns with GenAI.
As organizations scale their GenAI deployments, concerns increase around data security, quality, and explainability. Addressing these issues is essential to generate confidence among stakeholders and ensure the responsible use of AI technologies.
Data security
The adoption of Generative AI (GenAI) in organizations comes with various data security risks. One of the primary concerns is the unauthorized use of GenAI tools, which can lead to data integrity issues and potential breaches. Shadow GenAI, where employees use unapproved GenAI applications, can lead to data leaks, privacy issues and compliance violations.
Clearly defining the GenAI policy in the organization and having appropriate visibility and control over the shared information through these applications will help organizations mitigate this risk and maintain compliance with security regulations. Additionally, real-time user coaching and training has proven effective in altering user actions and reducing data risks.
Compliance and regulations
Compliance with data privacy regulations is a critical aspect of GenAI adoption. Non-compliance can lead to significant legal and financial repercussions. Organizations must ensure that their GenAI tools and practices adhere to relevant regulations, such as GDPR, HIPPA, CCPA and others.
Visibility, monitoring and reporting are essential for compliance, as they provide the necessary oversight to ensure that GenAI applications are used appropriately. Unauthorized or improper use of GenAI tools can lead to regulatory breaches, making it imperative to have clear policies and governance structures in place. Intellectual property challenges also arise from generating infringing content, which can further complicate compliance efforts.
To address these challenges, organizations should establish a robust framework for GenAI governance. This includes developing a comprehensive AI ethics policy that defines acceptable use cases and categorizes data usage based on organizational roles and functions. Monitoring systems are essential for detecting unauthorized GenAI activities and ensuring compliance with regulations.
Specific regulations for GenAI
Several specific regulations and guidelines have been developed or are in the works to address the unique challenges posed by GenAI. Some of those are more focused on the development of new AI tools while others as the California GenAI Guidelines focused on purchase and use. Examples include:
EU AI Act: This landmark regulation aims to ensure the safe and trustworthy use of AI, including GenAI. It includes provisions for risk assessments, technical documentation standards, and bans on certain high-risk AI applications.
U.S. Executive Order on AI: Issued in October of 2023, this order focuses on the safe, secure, and trustworthy development and use of AI technologies. It mandates that federal agencies implement robust risk management and governance frameworks for AI.
California GenAI Guidelines: The state of California has issued guidelines for the public sectorβs procurement and use of GenAI. These guidelines emphasize the importance of training, risk assessment, and compliance with existing data privacy laws.
Department of Energy GenAI Reference Guide: This guide provides best practices for the responsible development and use of GenAI, reflecting the latest federal guidance and executive orders.
Recommendations
To effectively manage the risks associated with GenAI adoption, organizations should consider the following recommendations:
Establish clear policies and training: Develop and enforce clear policies on the approved use of GenAI. Provide comprehensive training sessions on ethical considerations and data protection to ensure that all employees understand the importance of responsible AI usage.
Continuously reassess strategies: Regularly reassess strategies and practices to keep up with technological advancements. This includes updating security measures, conducting comprehensive risk assessments, and evaluating third-party vendors.
Implement advanced GenAI security solutions: Deploy advanced GenAI solutions to ensure data security while maintaining comprehensive visibility into GenAI usage. Traditional DLP solutions based on keywords and patterns are not enough. GenAI solutions should give proper visibility by understanding the context without the need to define complicated data-types. This approach not only protects sensitive information, but also allows for real-time monitoring and control, ensuring that all GenAI activities are transparent and compliant with organizational and regulatory requirements.
Foster a culture of responsible AI usage: Encourage a culture that prioritizes ethical AI practices. Promote cross-department collaboration between IT, legal, and compliance teams to ensure a unified approach to GenAI governance.
Maintain transparency and compliance: Ensure transparency in AI processes and maintain compliance with data privacy regulations. This involves continuous monitoring and reporting, as well as developing incident response plans that account for AI-specific challenges.
By following these recommendations, organizations can make good use and take advantage of the benefits of GenAI while effectively managing the associated data security and compliance risks.
Β
Β
The post Generative AI adoption: Strategic implications & security concerns appeared first on CyberTalk.
Deepfake misuse & deepfake detection (before itβs too late)
Micki Boland is a global cyber security warrior and evangelist with Check PointβsΒ Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Mickiβs focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Donβt miss this.
Can you explain how deepfake technology works?Β
Deepfakes involve simulated video, audio, and images to be delivered as content via online news, mobile applications, and through social media platforms. Deepfake videos are created with Generative Adversarial Networks (GAN), a type of Artificial Neural Network that uses Deep Learning to create synthetic content.
GANs sound cool, but technical. Could you break down how they operate?
GAN are a class of machine learning systems that have two neural network models; a generator and discriminator which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.
The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high quality synthetic video, images, or audio. A good example of GAN at work is NVIDIA GAN. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces on the internet. Refreshing the internet browser yields a new synthetic image of a human that does not exist.
What are some notable examples of deepfake techβs misuse?
Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to βlay down their armsβ and surrender to Russia.
How was the crisis involving the Zelenskyy deepfake video managed?
The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while have a tremendous amount of funding and support by big technology corporations, are lagging way behind.
What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?
Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.
Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to βclick baitβ for launching malware (bots, ransomware, malware), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake jobs interviews over Zoom.
How are law enforcement agencies addressing the challenges posed by deepfake technology?
Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member Statesβ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangsβ use of deepfakes in financial fraud.
What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?
Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.
What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?
On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.
The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.
The NIST AI Risk Management Framework will help any organization align organizational goals for and use cases for AI. Most importantly, this risk management framework is human centered. It includes social responsibility information, sustainability information and helps organizations closely focus on the potential or unintended consequences and impact of AI use.
Another immense help for organizations that wish to further understand risk associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as inject and data poisoning. These risks are especially critical to know about when bringing an LLM in house.
As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.
For more of Check Point expert Micki Bolandβs insights into deepfakes, please see CyberTalk.orgβs past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week,Β subscribeΒ to the CyberTalk.org newsletter.
Β
The post Deepfake misuse & deepfake detection (before itβs too late) appeared first on CyberTalk.