Reading view

There are new articles available, click to refresh the page.

Mother of one of Elon Musk’s offspring sues xAI over sexualized deepfakes

Ashley St Clair, the influencer and mother of one of Elon Musk’s children, has sued the billionaire’s AI company, accusing its Grok chatbot of creating fake sexual imagery of her without her consent.

In the lawsuit, filed in New York state court, St Clair alleged that xAI’s Grok first created an AI-generated or altered image of her in a bikini earlier this month.

St Clair claims she made a request to xAI that no further such images be made, but nevertheless “countless sexually abusive, intimate, and degrading deepfake content of St. Clair [were] produced and distributed publicly by Grok.”

Read full article

Comments

© Laura Brett/ZUMA Press Wire/Shutterstock

Hegseth wants to integrate Musk’s Grok AI into military networks this month

On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk's AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place "the world's leading AI models on every unclassified and classified network throughout our department."

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth's announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an "AI acceleration strategy" for the Department of Defense. The strategy, he said, will "unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future."

Read full article

Comments

© Bloomberg via Getty Images

Lawmakers ‘Bullseye and Bait’ in AI-Driven Deepfake Campaigns

OPINION — Elected officials are both the bullseye and the bait for AI-driven influence campaigns launched by foreign adversaries. They are targeted with disinformation meant to sway their opinions and votes, while also serving as the raw material for deepfakes used to deceive others. It’s a problem so widespread that it threatens our faith in democratic institutions.

Even seemingly trivial posts can add to divisions already infecting the nation. Over the summer, a deepfake video depicting Rep. Alexandria Ocasio-Cortez (D-NY) discussing the perceived racist overtones of a jeans commercial went viral. At least one prominent news commentator was duped, sharing misinformation with his audience. While the origin of the fake is unknown, foreign adversaries, namely China, Russia, and Iran, often exploit domestic wedge issues to erode trust in elected officials.

Last year, Sen. Ben Cardin (D-MD) was deceived by a deepfake of Dmytro Kuleba, the former foreign minister of Ukraine, in an attempt to get the senator to reveal sensitive information about Ukrainian weaponry. People briefed on the FBI’s investigation into the incident suggest that the Russian government could be behind the deepfake, and that the Senator was being goaded into making statements that could be used for propaganda purposes.

In another incident, deepfake audio recordings of Secretary of State Marco Rubio deceived at least five government officials and three foreign ministers. The State Department diplomatic cable announcing the deepfake discovery also referenced an additional investigation into a Russia-linked cyber actor who had “posed as a fictitious department official.”

Meanwhile, researchers at Vanderbilt University’s Institute of National Security revealed that a Chinese company, GoLaxy, has used artificial intelligence to build psychological profiles of individuals including 117 members of Congress and 2,000 American thought leaders. Using these profiles, GoLaxy can tailor propaganda and target it with precision.

While the company denies that it — or its backers in the Chinese Communist Party — plan to use its advanced AI toolkit for influence operations against U.S. leaders, it allegedly has already done so in Hong Kong and Taiwan. Researchers say that in both places, GoLaxy profiled opposition voices and thought leaders and targeted them with curated messages on X (formerly Twitter), working to change their perception of events. The company also allegedly attempted to sway Hong Kongers’ views on a draconian 2020 national security law. That GoLaxy is now mapping America’s political leadership should be deeply concerning, but not surprising.

GoLaxy is far from the only actor reportedly using AI to influence public opinion. The same AI-enabled manipulation that now focuses on national leaders will inevitably be turned on mayors, school board members, journalists, CEOs — and eventually, anyone — deepening divisions in an already deeply divided nation.

Limiting the damage will require a coordinated response drawing on federal resources, private-sector innovation, and individual vigilance.

The Cipher Brief brings expert-level context to national and global security stories. It’s never been more important to understand what’s happening in the world. Upgrade your access to exclusive content by becoming a subscriber.

The White House has an AI Action Plan that lays out recommendations for how deepfake detection can be improved. It starts with turning the National Institute of Standards and Technology’s Guardians of Forensic Evidence deepfake evaluation program into formal guidelines. These guidelines would establish trusted standards that courts, media platforms, and consumer apps could use to evaluate deepfakes.

These standards are important because some AI-produced videos may be impossible to detect with the human eye. Instead, forensic tools can reveal deepfake giveaways. While far from perfect, this burgeoning deepfake detection field is adapting to rapidly evolving threats. Analyzing the distribution channels of deepfakes can also help determine their legitimacy, particularly for media outlets that want to investigate the authenticity of a video.

Washington must also coordinate with the tech industry, especially social media platforms, through the proposed AI Information Sharing and Analysis Center framework to build an early warning system to monitor, detect, and inform the public of influence operations exploiting AI-generated content.

The White House should also expand collaboration between the federal Cybersecurity and Infrastructure Security Agency, FBI, and the National Security Agency on deepfake responses. This combined team would work with Congress, agency leaders, and other prominent targets to minimize the spread of unauthorized synthetic content and debunk misleading information.

Lastly, public figures need to create rapid response communication playbooks to address the falsehoods head on and educate the public when deepfakes circulate. The United States can look to democratic allies like Taiwan for inspiration in how to deal with state-sponsored disinformation. The Taiwanese government has adopted the “222 policy” releasing 200 words and two photos within two hours of disinformation detection.

Deepfakes and AI-enabled influence campaigns represent a generational challenge to truth and trust. Combating this problem will be a cat-and-mouse game, with foreign adversaries constantly working to outmaneuver the safeguards meant to stop them. No individual solution will be enough to stop them, but by involving the government, the media, and individuals, it may be possible to limit their damage.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals.

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.

Have a perspective to share based on your experience in the national security field? Send it to Editor@thecipherbrief.com for publication consideration.

Read more expert-driven national security insights, perspective and analysis in
The Cipher Brief

Spamouflage’s advanced deceptive behavior reinforces need for stronger email security

By: slandau

EXECUTIVE SUMMARY:

Ahead of the U.S. elections, adversaries are weaponizing social media to gain political sway. Russian and Iranian efforts have become increasingly aggressive and transparent. However, China appears to have taken a more carefully calculated and nuanced approach.

China’s seeming disinformation efforts have little to do with positioning one political candidate as preferable to another. Rather, the country’s maneuvers may aim to undermine trust in voting systems, elections and America, in general; amplifying criticism and sowing discord.

Spamouflage

In recent months, the Chinese disinformation network, known as Spamouflage, has pursued “advanced deceptive behavior.” It has quietly launched thousands of accounts across more than 50 domains, and used them to target people across the United States.

The group has been active since 2017, but has recently reinforced its efforts.

Fake profiles

The Spamouflage network’s fake online accounts present fake identities, which sometimes change on a whim. The accounts/profiles have been spotted on X, TikTok and elsewhere.

For example:

Harlan claimed to be a New York resident and an Army veteran, age 29. His profile picture showed a well-groomed young man. However, a few months later, his account shifted personas. Suddenly, Harlan appeared to be from Florida and a 31 year-old
Republican influencer. 

At least four different accounts were found to mimic Trump supporters – part of a tactic with the moniker “MAGAflage.”

The fake profiles, including the fake photos, may have been generated through artificial intelligence tools, according to analysts.

Accounts have exhibited certain patterns, using hashtags like #American, while presenting themselves as voters or groups that “love America” but feel alienated by political issues that range from women’s healthcare to Ukraine.

In June, one post on X read “Although I am American, I am extremely opposed to NATO and the behavior of the U.S. government in war. I think soldiers should protect their own country’s people and territory…should not initiate wars on their own…” The text was accompanied by an image showing NATO’s expansion across Europe.

Email security implications

Disinformation campaigns that create (and weaponize) fake profiles, as described above, will have a high degree of success when crafting and distributing phishing emails, as the emails will appear to come from credible sources.

This makes it essential for organizations to implement and for employees to adhere to advanced verification methods that can ensure the veracity of communications.

Advanced email security protocols

Within your organization, if you haven’t done so already, consider implementing the following:

  • Multi-factor authentication. Even if credentials are compromised via phishing, MFA can help protect against unauthorized account access.
  • Email authentication protocols. Technologies such as SPF, DKIM and DMARC can assist with verifying the legitimacy of email senders and spoofing prevention.
  • Advanced threat detection. Advanced threat detection solutions that are powered by AI and machine learning can enhance email traffic security.
  • Employee awareness. Remind employees to not only think before they click, but to also think before they link to information – whether in their professional roles or their personal lives.
  • Incident response plans. Most organizations have incident response plans. But are they routinely updated? Can they address disinformation and deepfake threats?

Further thoughts

To effectively counter threats, organizations need to pursue a dynamic, multi-dimensional approach. But it’s tough.

To get expert guidance, please visit our website or contact our experts. We’re here to help!

The post Spamouflage’s advanced deceptive behavior reinforces need for stronger email security appeared first on CyberTalk.

Deepfake misuse & deepfake detection (before it’s too late)

By: slandau

Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.

Can you explain how deepfake technology works? 

Deepfakes involve simulated video, audio, and images to be delivered as content via online news, mobile applications, and through social media platforms. Deepfake videos are created with Generative Adversarial Networks (GAN), a type of Artificial Neural Network that uses Deep Learning to create synthetic content.

GANs sound cool, but technical. Could you break down how they operate?

GAN are a class of machine learning systems that have two neural network models; a generator and discriminator which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.

The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high quality synthetic video, images, or audio. A good example of GAN at work is NVIDIA GAN. Navigate to the website https://thispersondoesnotexist.com/ and you will see a composite image of a human face that was created by the NVIDIA GAN using faces on the internet. Refreshing the internet browser yields a new synthetic image of a human that does not exist.

What are some notable examples of deepfake tech’s misuse?

Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.

How was the crisis involving the Zelenskyy deepfake video managed?

The deepfake quality was poor and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while have a tremendous amount of funding and support by big technology corporations, are lagging way behind.

What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?

Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.

Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to “click bait” for launching malware (bots, ransomware, malware), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake jobs interviews over Zoom.

How are law enforcement agencies addressing the challenges posed by deepfake technology?

Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.

What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?

Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.

What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?

On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.

The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.

The NIST AI Risk Management Framework will help any organization align organizational goals for and use cases for AI. Most importantly, this risk management framework is human centered. It includes social responsibility information, sustainability information and helps organizations closely focus on the potential or unintended consequences and impact of AI use.

Another immense help for organizations that wish to further understand risk associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as inject and data poisoning. These risks are especially critical to know about when bringing an LLM in house.

As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.

For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

 

The post Deepfake misuse & deepfake detection (before it’s too late) appeared first on CyberTalk.

❌