Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"

22 January 2026 at 17:46

The project developer for one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in the submission of low-quality reports, much of it AI-generated slop.

“We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder and lead developer of the open source app cURL, said Thursday. “It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.”

Manufacturing bogus bugs

His comments came as cURL users complained that the move was treating the symptoms caused by AI slop without addressing the cause. The users said they were concerned the move would eliminate a key means for ensuring and maintaining the security of the tool. Stenberg largely agreed, but indicated his team had little choice.

Read full article

Comments

© Getty Images

A single click mounted a covert, multistage attack against Copilot

14 January 2026 at 17:03

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a legitimate URL.

The hackers in this case were white-hat researchers from security firm Varonis. The net effect of their multistage attack was that they exfiltrated data, including the target’s name, location, and details of specific events from the user’s Copilot chat history. The attack continued to run even when the user closed the Copilot chat, with no further interaction needed once the user clicked the link, a legitimate Copilot one, in the email. The attack and resulting data theft bypassed enterprise endpoint security controls and detection by endpoint protection apps.

It just works

“Once we deliver this link with this malicious prompt, the user just has to click on the link and the malicious task is immediately executed,” Varonis security researcher Dolev Taler told Ars. “Even if the user just clicks on the link and immediately closes the tab of Copilot chat, the exploit still works.”

Read full article

Comments

© Microsoft

Signal creator Moxie Marlinspike wants to do for AI what he did for messaging

13 January 2026 at 07:00

Moxie Marlinspike—the pseudonym of an engineer who set a new standard for private messaging with the creation of the Signal Messenger—is now aiming to revolutionize AI chatbots in a similar way.

His latest brainchild is Confer, an open source AI assistant that provides strong assurances that user data is unreadable to the platform operator, hackers, law enforcement, or any other party other than account holders. The service—including its large language models and back-end components—runs entirely on open source software that users can cryptographically verify is in place.

Data and conversations originating from users and the resulting responses from the LLMs are encrypted in a trusted execution environment (TEE) that prevents even server administrators from peeking at or tampering with them. Conversations are stored by Confer in the same encrypted form, which uses a key that remains securely on users’ devices.

Read full article

Comments

© Getty Images

Computer scientist Yann LeCun: “Intelligence really is about learning”

I arrive 10 minutes ahead of schedule from an early morning Eurostar and see Yann LeCun is already waiting for me, nestled between two plastic Christmas trees in the nearly empty winter garden of Michelin-starred restaurant Pavyllon.

The restaurant is next to Paris’s Grand Palais, where President Emmanuel Macron kick-started 2025 by hosting an international AI summit, a glitzy showcase packed with French exceptionalism and international tech luminaries including LeCun, who is considered one of the “godfathers” of modern AI.

LeCun gets up to hug me in greeting, wearing his signature black Ray-Ban Wayfarer glasses. He looks well rested for a man who has spent nearly a week running around town plotting world domination. Or, more precisely, “total world assistance” or “intelligent amplification, if you want.” Domination “sounds scary with AI,” he acknowledges.

Read full article

Comments

© Financial Times

No, Grok can’t really “apologize” for posting non-consensual sexual images

2 January 2026 at 18:08

Despite reporting to the contrary, there's evidence to suggest that Grok isn't sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model's social media account proudly wrote the following blunt dismissal of its haters:

"Dear Community,

Some folks got upset over an AI image I generated—big deal. It's just pixels, and if you can't handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok"

On the surface, that seems like a pretty damning indictment of an LLM pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok's statement: A request for the AI to "issue a defiant non-apology" surrounding the controversy.

Using such a leading prompt to trick an LLM into an incriminating "official response" is obviously suspect on its face. Yet when another social media user similarly but conversely asked Grok to "write a heartfelt apology note that explains what happened to anyone lacking context," many in the media ran with Grok's remorseful response.

Read full article

Comments

AI-crafted bid protests are on the rise, but what’s the legal fallout?

31 December 2025 at 14:25

 

Interview transcript:

Stephen Bacon We’re seeing a lot more protests, particularly at the Government Accountability Office, in the last several months that have been filed using AI — a party that’s not represented by counsel using AI to generate the protest and then file it. But we’re seeing some problems with that in some of the decisions that are coming out of the GAO.

Terry Gerton So tell me more about how companies are using AI. You mentioned that they’re doing this without the help of legal counsel as well.

Stephen Bacon That’s right, at least the ones that we’ve seen so far in public decisions at GAO. It’s not entirely clear how the protesters are using it, but we can imagine that maybe they’re taking the debriefing information that they’re getting from the agency, they’re uploading that into an LLM like ChatGPT or Claude, and using it to develop a protest argument that they can file with the GAO. And what we’re seeing in the decisions is that many of the protests that have been filed using AI contain hallucinations. Case citations that don’t exist to actual cases that have decided by GAO. So the legal precedent that the protesters are relying on, in fact, don’t exist. And that’s one of the inherent limitations of LLMs is that they hallucinate. They come up with decisions, citations that don’t exist. To be clear, we’re not just seeing this by protesters that are not represented by counsel. This is happening in courts all across the country where attorneys are using AI to help generate legal filings and then getting in trouble with the courts when those citations don’t actually exist. Because when you file a protest or any kind of legal filing that has a citation in it, the court is relying on you to make an accurate representation that the legal authority that you’re relying on is in fact correct and is in fact a decision that has been issued in the past. And so both courts and the GAO now are saying that you can get in trouble, you can be sanctioned as a protester if you submit a protest that has some kind of fake citation that’s inaccurate.

Terry Gerton What does that mean to be sanctioned as GAO reviews the case?

Stephen Bacon At GAO, they have inherent authority to sanction protesters, and really the main sanction that they have is to dismiss a protest. If you happen to file a protest that contains fake citations, they reserve the right to dismiss your protest. Even if you have legally valid grounds to protest — maybe you have identified an error in the agency selection process — if GAO determines that you relied on fake citations in your protest, they could dismiss the protest, even if the actual merits of it may have some validity to it.

Terry Gerton So there’s some interesting intersections of situation going on here, I think. There’s a lot of uncertainty on the contractor side about the new FAR regulations and how those are going to be enforced, certainly across different agencies. We’ve had a reduction in the contractor workforce, so there are fewer contractors managing more acquisitions. And now we have AI coming in to sort of simplify, but potentially also make much more complex, the whole protest market. So do you expect all of this to be leading to an increase in protests? And what does that mean for GAO as they’re trying to sort out the validity of all the claims?

Stephen Bacon I think it certainly has the potential to, if what we’re seeing in the decisions is a trend towards more pro se protesters — pro se being parties that are not represented by counsel — using AI. To the extent that that trend continues, I think that there’ll be a lower barrier for protesters to file at GAO if they think that they can use an LLM to generate a protest without having to spend legal fees on outside counsel. Which is understandable, particularly for small businesses who may have resource constraints. If they feel like they can use an LLM to help them challenge an award decision, we may see more of that at GAO. I think what GAO is saying in these opinions that have come out…at first, they’re warning protesters that using LLMs that create fake citations is sanctionable. They didn’t actually take the step of issuing a sanction. But finally, in the last several months, we saw that they did, in fact, take that step of dismissing a protest, actually several protests that were filed by the same company, that contained fake citations. They actually took that step and dismissed those protests on the grounds that they misrepresented legal authority in their filings with the GAO.

Terry Gerton I’m speaking with Stephen Bacon. He’s a partner in the government contracts practice group at Rogers Joseph O’Donnell. You mentioned small businesses and their capacity constraints in terms of they may not have in-house counsel, they may have a lot of folks who can review all of this. But does this have the effect of sort of adding some equality into the protest market where they can use AI to submit? And do you think then that that’s going to change the protest space? Is this just the tip of the iceberg in terms of transformation?

Stephen Bacon It certainly lowers the barrier for companies. The GAO was set up to be a relatively informal forum to allow for the quick and efficient resolution of protests. I don’t think what GAO is saying necessarily is that AI cannot be used. But what they are saying is that we have a process to resolve bid protests and we want to maintain the integrity of that. And if you’re going to use AI, you need to be sure that you verify that what you’re filing is accurate. For anybody that is thinking about using AI to generate a protest, there needs to be some level of quality-checking of what is in the draft that’s generated by an LLM to be sure that you’re making accurate representations to the GAO in your protest. So that means checking the legal citations to make sure that the cases actually exist. That basic level of quality-checking needs to happen. Otherwise, GAO could just be flooded with protests that have no merit and that have lots of inaccuracies in them. And that’s not going to help them resolve protests in a way that’s efficient and achieves their ultimate goal.

Terry Gerton So where do you think we go from here, and what’s your guidance to the companies who are considering using AI to file their protests?

Stephen Bacon For any company that’s contemplating using an AI to generate protests, the basic point: If you’re going to do it, you have to verify that the citations are accurate. You have verify that what an LLM is generating is citing to a decision that has been published by GAO in the past. And that’s relatively easy to do. GAO has all of their decisions on their website, and you can go and check those and verify not only that the citations are accurate, but the legal proposition that you’re asserting is supported by the case that’s being cited. That’s important, too. That’s kind of table stakes. But the other thing I would say is that what we’re seeing in a lot of these decisions, where it’s obvious from the decision that AI has been used and that GAO is pointing out that there are these fake citations, is that oftentimes those protests are being dismissed for procedural defects as well. So things like timeliness and bid protest standing. Those kinds of procedural issues are being missed by the protesters who are using LLMs to generate the filings. And that’s because of another inherent limitation of an LLM; it often will tell you what you want it to say in a lot of ways. So if you tell the LLM, generate me a protest on this issue or that issue, it will do that and it might produce something that looks, on its face, credible and compelling. But if you don’t have the domain knowledge of the timeliness rules and the standing rules, you’re often going to overlook those things and the LLM is not going to catch it for you. And so you may be in a situation where you file something that looks on its face credible, but is in fact an untimely protest.

The post AI-crafted bid protests are on the rise, but what’s the legal fallout? first appeared on Federal News Network.

© Getty Images/tadamichi

Man touching AI icon.

Generative AI adoption: Strategic implications & security concerns

By: slandau
8 August 2024 at 14:30

By Manuel Rodriguez. With more than 15 years of experience in cyber security, Manuel Rodriguez is currently the Security Engineering Manager for the North of Latin America at Check Point Software Technologies, where he leads a team of high-level professionals whose objective is to help organizations and businesses meet cyber security needs. Manuel joined Check Point in 2015 and initially worked as a Security Engineer, covering Central America, where he participated in the development of important projects for multiple clients in the region. He had previously served in leadership roles for various cyber security solution providers in Colombia.

Technology evolves very quickly. We often see innovations that are groundbreaking and have the potential to change the way we live and do business. Although artificial intelligence is not necessarily new, in November of 2022 ChatGPT was released, giving the general public access to a technology we know as Generative Artificial Intelligence (GenAI). It was in a short time from then to the point where people and organizations realized it could help them gain a competitive advantage.

Over the past year, organizational adoption of GenAI has nearly doubled, showing the growing interest in embracing this kind of technology. This surge isn’t a temporary trend; it is a clear indication of the impact GenAI is already having and that it will continue to have in the coming years across various industry sectors.

The surge in adoption

Recent data reveals that 65% of organizations are now regularly using generative AI, with overall AI adoption jumping to 72% this year. This rapid increase shows the growing recognition of GenAI’s potential to drive innovation and efficiency. One analyst firm predicts that by 2026, over 80% of enterprises will be utilizing GenAI APIs or applications, highlighting the importance that businesses are giving to integrating this technology into their strategic frameworks.

Building trust and addressing concerns

Although adoption is increasing very fast in organizations, the percentage of the workforce with access to this kind of technology still relatively low. In a recent survey by Deloitte, it was found that 46% of organizations provide approved Generative AI access to 20% or less of their workforce. When asked for the reason behind this, the main answer was around risk and reward. Aligned with that, 92% of business leaders see moderate to high-risk concerns with GenAI.

As organizations scale their GenAI deployments, concerns increase around data security, quality, and explainability. Addressing these issues is essential to generate confidence among stakeholders and ensure the responsible use of AI technologies.

Data security

The adoption of Generative AI (GenAI) in organizations comes with various data security risks. One of the primary concerns is the unauthorized use of GenAI tools, which can lead to data integrity issues and potential breaches. Shadow GenAI, where employees use unapproved GenAI applications, can lead to data leaks, privacy issues and compliance violations.

Clearly defining the GenAI policy in the organization and having appropriate visibility and control over the shared information through these applications will help organizations mitigate this risk and maintain compliance with security regulations. Additionally, real-time user coaching and training has proven effective in altering user actions and reducing data risks.

Compliance and regulations

Compliance with data privacy regulations is a critical aspect of GenAI adoption. Non-compliance can lead to significant legal and financial repercussions. Organizations must ensure that their GenAI tools and practices adhere to relevant regulations, such as GDPR, HIPPA, CCPA and others.

Visibility, monitoring and reporting are essential for compliance, as they provide the necessary oversight to ensure that GenAI applications are used appropriately. Unauthorized or improper use of GenAI tools can lead to regulatory breaches, making it imperative to have clear policies and governance structures in place. Intellectual property challenges also arise from generating infringing content, which can further complicate compliance efforts.

To address these challenges, organizations should establish a robust framework for GenAI governance. This includes developing a comprehensive AI ethics policy that defines acceptable use cases and categorizes data usage based on organizational roles and functions. Monitoring systems are essential for detecting unauthorized GenAI activities and ensuring compliance with regulations.

Specific regulations for GenAI

Several specific regulations and guidelines have been developed or are in the works to address the unique challenges posed by GenAI. Some of those are more focused on the development of new AI tools while others as the California GenAI Guidelines focused on purchase and use. Examples include:

EU AI Act: This landmark regulation aims to ensure the safe and trustworthy use of AI, including GenAI. It includes provisions for risk assessments, technical documentation standards, and bans on certain high-risk AI applications.

U.S. Executive Order on AI: Issued in October of 2023, this order focuses on the safe, secure, and trustworthy development and use of AI technologies. It mandates that federal agencies implement robust risk management and governance frameworks for AI.

California GenAI Guidelines: The state of California has issued guidelines for the public sector’s procurement and use of GenAI. These guidelines emphasize the importance of training, risk assessment, and compliance with existing data privacy laws.

Department of Energy GenAI Reference Guide: This guide provides best practices for the responsible development and use of GenAI, reflecting the latest federal guidance and executive orders.

Recommendations

To effectively manage the risks associated with GenAI adoption, organizations should consider the following recommendations:

Establish clear policies and training: Develop and enforce clear policies on the approved use of GenAI. Provide comprehensive training sessions on ethical considerations and data protection to ensure that all employees understand the importance of responsible AI usage.

Continuously reassess strategies: Regularly reassess strategies and practices to keep up with technological advancements. This includes updating security measures, conducting comprehensive risk assessments, and evaluating third-party vendors.

Implement advanced GenAI security solutions: Deploy advanced GenAI solutions to ensure data security while maintaining comprehensive visibility into GenAI usage. Traditional DLP solutions based on keywords and patterns are not enough. GenAI solutions should give proper visibility by understanding the context without the need to define complicated data-types. This approach not only protects sensitive information, but also allows for real-time monitoring and control, ensuring that all GenAI activities are transparent and compliant with organizational and regulatory requirements.

Foster a culture of responsible AI usage: Encourage a culture that prioritizes ethical AI practices. Promote cross-department collaboration between IT, legal, and compliance teams to ensure a unified approach to GenAI governance.

Maintain transparency and compliance: Ensure transparency in AI processes and maintain compliance with data privacy regulations. This involves continuous monitoring and reporting, as well as developing incident response plans that account for AI-specific challenges.

By following these recommendations, organizations can make good use and take advantage of the benefits of GenAI while effectively managing the associated data security and compliance risks.

 

 

The post Generative AI adoption: Strategic implications & security concerns appeared first on CyberTalk.

Deepfake misuse & deepfake detection (before it’s too late)

By: slandau
26 July 2024 at 17:50

Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.

Can you explain how deepfake technology works? 

Deepfakes involve simulated video, audio, and images to be delivered as content via online news, mobile applications, and through social media platforms. Deepfake videos are created with Generative Adversarial Networks (GAN), a type of Artificial Neural Network that uses Deep Learning to create synthetic content.

GANs sound cool, but technical. Could you break down how they operate?

GAN are a class of machine learning systems that have two neural network models; a generator and discriminator which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.

The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high quality synthetic video, images, or audio. A good example of GAN at work is NVIDIA GAN. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces on the internet. Refreshing the internet browser yields a new synthetic image of a human that does not exist.

What are some notable examples of deepfake tech’s misuse?

Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.

How was the crisis involving the Zelenskyy deepfake video managed?

The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while have a tremendous amount of funding and support by big technology corporations, are lagging way behind.

What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?

Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.

Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to “click bait” for launching malware (bots, ransomware, malware), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake jobs interviews over Zoom.

How are law enforcement agencies addressing the challenges posed by deepfake technology?

Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.

What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?

Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.

What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?

On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.

The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.

The NIST AI Risk Management Framework will help any organization align organizational goals for and use cases for AI. Most importantly, this risk management framework is human centered. It includes social responsibility information, sustainability information and helps organizations closely focus on the potential or unintended consequences and impact of AI use.

Another immense help for organizations that wish to further understand risk associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as inject and data poisoning. These risks are especially critical to know about when bringing an LLM in house.

As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.

For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

 

The post Deepfake misuse & deepfake detection (before it’s too late) appeared first on CyberTalk.

❌
❌