Lawmakers ‘Bullseye and Bait’ in AI-Driven Deepfake Campaigns

7 October 2025 at 06:55
OPINION — Elected officials are both the bullseye and the bait for AI-driven influence campaigns launched by foreign adversaries. They are targeted with disinformation meant to sway their opinions and votes, while also serving as the raw material for deepfakes used to deceive others. It’s a problem so widespread that it threatens our faith in democratic institutions.

Even seemingly trivial posts can add to divisions already infecting the nation. Over the summer, a deepfake video depicting Rep. Alexandria Ocasio-Cortez (D-NY) discussing the perceived racist overtones of a jeans commercial went viral. At least one prominent news commentator was duped, sharing misinformation with his audience. While the origin of the fake is unknown, foreign adversaries, namely China, Russia, and Iran, often exploit domestic wedge issues to erode trust in elected officials.

Last year, Sen. Ben Cardin (D-MD) was deceived by a deepfake of Dmytro Kuleba, the former foreign minister of Ukraine, in an attempt to get the senator to reveal sensitive information about Ukrainian weaponry. People briefed on the FBI’s investigation into the incident suggest that the Russian government could be behind the deepfake, and that the senator was being goaded into making statements that could be used for propaganda purposes.

In another incident, deepfake audio recordings of Secretary of State Marco Rubio deceived at least five government officials and three foreign ministers. The State Department diplomatic cable announcing the deepfake discovery also referenced an additional investigation into a Russia-linked cyber actor who had “posed as a fictitious department official.”

Meanwhile, researchers at Vanderbilt University’s Institute of National Security revealed that a Chinese company, GoLaxy, has used artificial intelligence to build psychological profiles of individuals including 117 members of Congress and 2,000 American thought leaders. Using these profiles, GoLaxy can tailor propaganda and target it with precision.

While the company denies that it — or its backers in the Chinese Communist Party — plan to use its advanced AI toolkit for influence operations against U.S. leaders, it allegedly has already done so in Hong Kong and Taiwan. Researchers say that in both places, GoLaxy profiled opposition voices and thought leaders and targeted them with curated messages on X (formerly Twitter), working to change their perception of events. The company also allegedly attempted to sway Hong Kongers’ views on a draconian 2020 national security law. That GoLaxy is now mapping America’s political leadership should be deeply concerning, but not surprising.

GoLaxy is far from the only actor reportedly using AI to influence public opinion. The same AI-enabled manipulation that now focuses on national leaders will inevitably be turned on mayors, school board members, journalists, CEOs — and eventually, anyone — deepening divisions in an already deeply divided nation.

Limiting the damage will require a coordinated response drawing on federal resources, private-sector innovation, and individual vigilance.


The White House has an AI Action Plan that lays out recommendations for how deepfake detection can be improved. It starts with turning the National Institute of Standards and Technology’s Guardians of Forensic Evidence deepfake evaluation program into formal guidelines. These guidelines would establish trusted standards that courts, media platforms, and consumer apps could use to evaluate deepfakes.

These standards are important because some AI-produced videos may be impossible to detect with the human eye. Instead, forensic tools can reveal deepfake giveaways. While far from perfect, this burgeoning deepfake detection field is adapting to rapidly evolving threats. Analyzing the distribution channels of deepfakes can also help determine their legitimacy, particularly for media outlets that want to investigate the authenticity of a video.

Washington must also coordinate with the tech industry, especially social media platforms, through the proposed AI Information Sharing and Analysis Center framework to build an early warning system to monitor, detect, and inform the public of influence operations exploiting AI-generated content.

The White House should also expand collaboration among the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the National Security Agency on deepfake responses. This combined team would work with Congress, agency leaders, and other prominent targets to minimize the spread of unauthorized synthetic content and debunk misleading information.

Lastly, public figures need to create rapid-response communication playbooks to address falsehoods head on and educate the public when deepfakes circulate. The United States can look to democratic allies like Taiwan for inspiration in how to deal with state-sponsored disinformation. The Taiwanese government has adopted a “222 policy”: releasing 200 words and two photos within two hours of detecting disinformation.

Deepfakes and AI-enabled influence campaigns represent a generational challenge to truth and trust. Combating this problem will be a cat-and-mouse game, with foreign adversaries constantly working to outmaneuver the safeguards meant to stop them. No single solution will be enough, but by involving the government, the media, and individuals, it may be possible to limit the damage.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals.

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.


New trends in phishing and scams: how AI and social media are changing the game

13 August 2025 at 04:00

Introduction

Phishing and scams are dynamic types of online fraud that primarily target individuals, with cybercriminals constantly adapting their tactics to deceive people. Scammers invent new methods and improve old ones, adjusting them to fit current news, trends, and major world events: anything to lure in their next victim.

Since our last publication on phishing tactics, there has been a significant leap in the evolution of these threats. While many of the tools we previously described are still relevant, new techniques have emerged, and the goals and methods of these attacks have shifted.

In this article, we will explore:

  • The impact of AI on phishing and scams
  • How the tools used by cybercriminals have changed
  • The role of messaging apps in spreading threats
  • Types of data that are now a priority for scammers

AI tools leveraged to create scam content

Text

Traditional phishing emails, instant messages, and fake websites often contain grammatical and factual errors, incorrect names and addresses, and formatting issues. Now, however, cybercriminals are increasingly turning to neural networks for help.

They use these tools to create highly convincing messages that closely resemble legitimate ones. Victims are more likely to trust these messages, and therefore, more inclined to click a phishing link, open a malicious attachment, or download an infected file.

Example of a phishing email created with DeepSeek

The same is true for personal messages. Social networks are full of AI bots that can maintain conversations just like real people. While these bots can be created for legitimate purposes, they are often used by scammers who impersonate human users. In particular, phishing and scam bots are common in the online dating world. Scammers can run many conversations at once, maintaining the illusion of sincere interest and emotional connection. Their primary goal is to extract money from victims by persuading them to pursue “viable investment opportunities” that often involve cryptocurrency. This scam is known as pig butchering. AI bots are not limited to text communication, either; to be more convincing, they also generate plausible audio messages and visual imagery during video calls.

Deepfakes and AI-generated voices

As mentioned above, attackers are actively using AI capabilities like voice cloning and realistic video generation to create convincing audiovisual content that can deceive victims.

Beyond targeted attacks that mimic the voices and images of friends or colleagues, deepfake technology is now being used in more classic, large-scale scams, such as fake giveaways from celebrities. For example, YouTube users have encountered Shorts where famous actors, influencers, or public figures seemingly promise expensive prizes like MacBooks, iPhones, or large sums of money.

Deepfake YouTube Short

The advancement of AI technology for creating deepfakes is blurring the lines between reality and deception. Voice and visual forgeries can be nearly indistinguishable from authentic messages, as traditional cues used to spot fraud disappear.

Recently, automated calls have become widespread. Scammers use AI-generated voices and number spoofing to impersonate bank security services. During these calls, they claim there has been an unauthorized attempt to access the victim’s bank account. Under the guise of “protecting funds”, they demand a one-time SMS code. This is actually a 2FA code for logging into the victim’s account or authorizing a fraudulent transaction.

 

Example of an OTP (one-time password) bot call

Data harvesting and analysis

Large language models like ChatGPT are well-known for their ability to not only write grammatically correct text in various languages but also to quickly analyze open-source data from media outlets, corporate websites, and social media. Threat actors are actively using specialized AI-powered OSINT tools to collect and process this information.

The harvested data enables them to launch phishing attacks that are highly tailored to a specific victim or group of victims – for example, members of a particular social media community. Common scenarios include:

  • Personalized emails or instant messages from what appear to be HR staff or company leadership. These communications contain specific details about internal organizational processes.
  • Spoofed calls, including video chats, from close contacts. The calls leverage personal information that the victim would assume could not be known to an outsider.

This level of personalization dramatically increases the effectiveness of social engineering, making it difficult for even tech-savvy users to spot these targeted scams.

Phishing websites

Phishers are now using AI to generate fake websites too. Cybercriminals have weaponized AI-powered website builders that can automatically copy the design of legitimate websites, generate responsive interfaces, and create sign-in forms.

Some of these sites are well-made clones nearly indistinguishable from the real ones. Others are generic templates used in large-scale campaigns, without much effort to mimic the original.

Phishing pages mimicking travel and tourism websites

Often, these generic sites collect any data a user enters and are not even checked by a human before being used in an attack. The following are examples of sites with sign-in forms that do not match the original interfaces at all. These are not even “clones” in the traditional sense, as some of the brands being targeted do not offer sign-in pages.

These types of attacks lower the barrier to entry for cybercriminals and make large-scale phishing campaigns even more widespread.

Login forms on fraudulent websites

Telegram scams

With its massive popularity, open API, and support for crypto payments, Telegram has become a go-to platform for cybercriminals. This messaging app is now both a breeding ground for spreading threats and a target in itself. Once they get their hands on a Telegram account, scammers can either leverage it to launch attacks on other users or sell it on the dark web.

Malicious bots

Scammers are increasingly using Telegram bots, not only alongside phishing websites but also as an alternative or complement to them. For example, a website might be used to redirect a victim to a bot, which then collects the data the scammers need. Here are some common schemes that use bots:

  • Crypto investment scams: fake token airdrops that require a mandatory deposit for KYC verification
Telegram bot seemingly giving away SHIBARMY tokens

  • Phishing and data collection: scammers impersonate an official postal service to obtain a user’s details under the pretense of arranging delivery of a business package.
Phishing site redirects the user to an “official” bot.

  • Easy money scams: users are offered money to watch short videos.
Phishing site promises easy earnings through a Telegram bot.

Unlike a phishing website that the user can simply close and forget about when faced with a request for too much data or a commission payment, a malicious bot can be much more persistent. If the victim has interacted with a bot and has not blocked it, the bot can continue to send various messages. These might include suspicious links leading to fraudulent or advertising pages, or requests to be granted admin access to groups or channels. The latter is often framed as being necessary to “activate advanced features”. If the user gives the bot these permissions, it can then spam all the members of these groups or channels.

Account theft

When it comes to stealing Telegram user accounts, social engineering is the most common tactic. Attackers use various tricks and ploys, often tailored to the current season, events, trends, or the age of their target demographic. The goal is always the same: to trick victims into clicking a link and entering the verification code.

Links to phishing pages can be sent in private messages or posted to group chats or compromised channels. Given the scale of these attacks and users’ growing awareness of scams within the messaging app, attackers now often disguise these phishing links using Telegram’s message-editing tools.

The link in this phishing message does not lead to the URL shown
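
Defenders triaging reported messages can automate this kind of check. Below is a minimal Python sketch, assuming HTML content such as an email body or an exported chat page, that flags anchors whose visible text shows one domain while the underlying href points to another; the LinkAuditor class and the sample markup are illustrative, not taken from any real campaign.

from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collects (visible_text, href) pairs from <a> tags."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (visible text, actual href)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def looks_mismatched(text, href):
    """True if the anchor text displays one domain but the href points elsewhere."""
    if "." not in text:
        return False  # plain wording like "Click here" is out of scope for this check
    shown = urlparse(text if "://" in text else "https://" + text).hostname
    actual = urlparse(href).hostname
    return bool(shown) and bool(actual) and shown.lower() != actual.lower()


if __name__ == "__main__":
    sample = '<a href="https://evil.example.net/login">https://web.telegram.org</a>'
    auditor = LinkAuditor()
    auditor.feed(sample)
    for text, href in auditor.links:
        if looks_mismatched(text, href):
            print(f"Suspicious link: displays {text!r} but points to {href!r}")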

New ways to evade detection

Integrating with legitimate services

Scammers are actively abusing trusted platforms to keep their phishing resources under the radar for as long as possible.

  • Telegraph is a Telegram-operated service that lets anyone publish long-form content without prior registration. Cybercriminals take advantage of this feature to redirect users to phishing pages.
Phishing page on the telegra.ph domain

  • Google Translate is a machine translation tool from Google that can translate entire web pages and generate links like https://site-to-translate-com.translate.goog/… Attackers exploit it to hide their assets from security vendors. They create phishing pages, translate them, and then send out the links to the localized pages. This allows them to both avoid blocking and use a subdomain at the beginning of the link that mimics a legitimate organization’s domain name, which can trick users (a sketch for unwrapping such links follows this list).
Localized phishing page

  • CAPTCHA protects websites from bots. Lately, attackers have been increasingly adding CAPTCHAs to their fraudulent sites to avoid being flagged by anti-phishing solutions and evade blocking. Since many legitimate websites also use various types of CAPTCHAs, phishing sites cannot be identified by their use of CAPTCHA technology alone.
CAPTCHA on a phishing site
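
For defenders, such proxied links can be unwrapped before any reputation lookup. The sketch below is a minimal Python example that recovers the proxied domain from a translate.goog hostname, assuming Google's hyphen-encoding convention (a dot becomes a hyphen, and a literal hyphen is doubled); the example URL is hypothetical.

from urllib.parse import urlparse

TRANSLATE_SUFFIX = ".translate.goog"


def original_domain(url: str):
    """Return the proxied domain behind a translate.goog URL, or None."""
    host = urlparse(url).hostname or ""
    if not host.endswith(TRANSLATE_SUFFIX):
        return None
    encoded = host[: -len(TRANSLATE_SUFFIX)]
    # Assumption: "--" marks a real hyphen; a single "-" replaces a dot.
    placeholder = "\x00"
    return encoded.replace("--", placeholder).replace("-", ".").replace(placeholder, "-")


if __name__ == "__main__":
    link = "https://secure-login-example-com.translate.goog/?_x_tr_sl=auto"
    print(original_domain(link))  # prints: secure.login.example.com (hypothetical)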

Blob URL

Blob URLs (blob:https://example.com/…) are temporary links generated by browsers to access binary data, such as images and HTML code, locally. They are limited to the current session. While this technology was originally created for legitimate purposes, such as previewing files a user is uploading to a site, cybercriminals are actively using it to hide phishing attacks.

Blob URLs are created with JavaScript. The links start with “blob:” and contain the domain of the website that hosts the script. The data is stored locally in the victim’s browser, not on the attacker’s server.

Blob URL generation script inside a phishing kit
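
When a phishing kit's files are obtained for analysis, blob URL usage can often be spotted statically. The following Python sketch is an illustrative heuristic, not a production detection rule: it scans a kit directory for the JavaScript calls typically involved in building blob: links; the patterns and the directory name are assumptions.

import re
from pathlib import Path

BLOB_PATTERNS = [
    re.compile(r"URL\.createObjectURL\s*\("),  # turns a Blob into a blob: URL
    re.compile(r"new\s+Blob\s*\("),            # builds the in-memory payload
    re.compile(r"atob\s*\("),                  # often decodes an embedded page
]


def scan_kit(root: str):
    """Return (file, hit count) pairs for scripts matching blob-generation patterns."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in {".js", ".html", ".htm"}:
            continue
        text = path.read_text(errors="ignore")
        hits = sum(1 for pattern in BLOB_PATTERNS if pattern.search(text))
        if hits:
            findings.append((str(path), hits))
    return sorted(findings, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    for file, hits in scan_kit("./suspected_kit"):  # hypothetical directory
        print(f"{hits} blob-related pattern(s) in {file}")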

Hunting for new data

Cybercriminals are shifting their focus from stealing usernames and passwords to obtaining irrevocable or immutable identity data, such as biometrics, digital signatures, handwritten signatures, and voiceprints.

For example, a phishing site that asks for camera access, supposedly to verify an account on an online classifieds service, allows scammers to collect your biometric data.

Phishing for biometrics

For corporate targets, e-signatures are a major focus for attackers. Losing control of these can cause significant reputational and financial damage to a company. This is why services like DocuSign have become a prime target for spear-phishing attacks.

Phishers targeting DocuSign accounts

Even old-school handwritten signatures are still a hot commodity for modern cybercriminals, as they remain critical for legal and financial transactions.

Phishing for handwritten signatures

These types of attacks often go hand-in-hand with attempts to gain access to e-government, banking and corporate accounts that use this data for authentication.

These accounts are typically protected by two-factor authentication, with a one-time password (OTP) sent in a text message or a push notification. The most common way to get an OTP is by tricking users into entering it on a fake sign-in page or by asking for it over the phone.

Attackers know users are now more aware of phishing threats, so they have started to offer “protection” or “help for victims” as a new social engineering technique. For example, a scammer might send a victim a fake text message with a meaningless code. Then, using a believable pretext – like a delivery person dropping off flowers or a package – they trick the victim into sharing that code. Since the message sender indeed looks like a delivery service or a florist, the story may sound convincing. Then a second attacker, posing as a government official, calls the victim with an urgent message, telling them they have just been targeted by a tricky phishing attack. They use threats and intimidation to coerce the victim into revealing a real, legitimate OTP from the service the cybercriminals are actually after.

Fake delivery codes

Takeaways

Phishing and scams are evolving at a rapid pace, fueled by AI and other new technologies. As users grow increasingly aware of traditional scams, cybercriminals change their tactics and develop more sophisticated schemes. Whereas they once relied on fake emails and websites, today scammers use deepfakes, voice cloning and multi-stage tactics to steal biometric data and personal information.

Here are the key trends we are seeing:

  • Personalized attacks: AI analyzes social media and corporate data to stage highly convincing phishing attempts.
  • Usage of legitimate services: scammers are misusing trusted platforms like Google Translate and Telegraph to bypass security filters.
  • Theft of immutable data: biometrics, signatures, and voiceprints are becoming highly sought-after targets.
  • More sophisticated methods of circumventing 2FA: cybercriminals are using complex, multi-stage social engineering attacks.

How do you protect yourself?

  • Critically evaluate any unexpected calls, emails, or messages. Avoid clicking links in these communications, even if they appear legitimate. If you do plan to open a link, verify its destination by hovering over it on a desktop or long-pressing on a mobile device (a sketch for resolving a link’s final destination follows this list).
  • Verify sources of data requests. Never share OTPs with anyone, regardless of who they claim to be, even if they say they are a bank employee.
  • Analyze content for fakery. To spot deepfakes, look for unnatural lip movements or shadows in videos. You should also be suspicious of any videos featuring celebrities who are offering overly generous giveaways.
  • Limit your digital footprint. Do not post photos of documents or sensitive work-related information, such as department names or your boss’s name, on social media.
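
As a complement to hovering over a link, its real destination can be resolved programmatically before anyone clicks. The minimal Python sketch below relies on the third-party requests library and a hypothetical shortened URL; it follows the redirect chain with HEAD requests and prints where the link finally lands. Some servers answer HEAD differently from GET, so treat the result as a hint, not proof.

import requests


def final_destination(url: str, timeout: float = 5.0) -> str:
    """Follow redirects without downloading page bodies and return the final URL."""
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    return response.url


if __name__ == "__main__":
    link = "https://short.example/abc123"  # hypothetical shortened link
    print("Link resolves to:", final_destination(link))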

Spamouflage’s advanced deceptive behavior reinforces need for stronger email security

By: slandau
3 September 2024 at 16:43

EXECUTIVE SUMMARY:

Ahead of the U.S. elections, adversaries are weaponizing social media to gain political sway. Russian and Iranian efforts have become increasingly aggressive and overt. However, China appears to have taken a more carefully calculated and nuanced approach.

China’s apparent disinformation efforts have little to do with positioning one political candidate as preferable to another. Rather, the country’s maneuvers may aim to undermine trust in voting systems, elections, and America in general, amplifying criticism and sowing discord.

Spamouflage

In recent months, the Chinese disinformation network, known as Spamouflage, has pursued “advanced deceptive behavior.” It has quietly launched thousands of accounts across more than 50 domains, and used them to target people across the United States.

The group has been active since 2017, but has recently reinforced its efforts.

Fake profiles

The Spamouflage network’s fake online accounts present fake identities, which sometimes change on a whim. The accounts/profiles have been spotted on X, TikTok and elsewhere.

For example:

Harlan claimed to be a New York resident and an Army veteran, age 29. His profile picture showed a well-groomed young man. However, a few months later, his account shifted personas. Suddenly, Harlan appeared to be a 31-year-old Republican influencer from Florida.

At least four different accounts were found to mimic Trump supporters – part of a tactic with the moniker “MAGAflage.”

The fake profiles, including the fake photos, may have been generated through artificial intelligence tools, according to analysts.

Accounts have exhibited certain patterns, using hashtags like #American, while presenting themselves as voters or groups that “love America” but feel alienated by political issues that range from women’s healthcare to Ukraine.

In June, one post on X read “Although I am American, I am extremely opposed to NATO and the behavior of the U.S. government in war. I think soldiers should protect their own country’s people and territory…should not initiate wars on their own…” The text was accompanied by an image showing NATO’s expansion across Europe.

Email security implications

Disinformation campaigns that create (and weaponize) fake profiles, as described above, will have a high degree of success when crafting and distributing phishing emails, as the emails will appear to come from credible sources.

This makes it essential for organizations to implement, and for employees to adhere to, advanced verification methods that can ensure the veracity of communications.

Advanced email security protocols

Within your organization, if you haven’t done so already, consider implementing the following:

  • Multi-factor authentication. Even if credentials are compromised via phishing, MFA can help protect against unauthorized account access.
  • Email authentication protocols. Technologies such as SPF, DKIM and DMARC can help verify the legitimacy of email senders and prevent spoofing (a DNS-lookup sketch follows this list).
  • Advanced threat detection. Advanced threat detection solutions that are powered by AI and machine learning can enhance email traffic security.
  • Employee awareness. Remind employees to not only think before they click, but to also think before they link to information – whether in their professional roles or their personal lives.
  • Incident response plans. Most organizations have incident response plans. But are they routinely updated? Can they address disinformation and deepfake threats?
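
As a quick illustration of the email authentication point above, the sketch below uses the third-party dnspython package to look up a domain's SPF and DMARC TXT records. This is only a first-pass check that the records exist, not a full authentication verdict, and the domain is a placeholder.

import dns.resolver


def get_txt(name: str):
    """Return TXT record strings for `name`, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_domain(domain: str) -> None:
    spf = [txt for txt in get_txt(domain) if txt.startswith("v=spf1")]
    dmarc = [txt for txt in get_txt(f"_dmarc.{domain}") if txt.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'missing'}, "
          f"DMARC {'found' if dmarc else 'missing'}")


if __name__ == "__main__":
    check_domain("example.com")  # placeholder domain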

Further thoughts

To effectively counter threats, organizations need to pursue a dynamic, multi-dimensional approach. But it’s tough.



Deepfake misuse & deepfake detection (before it’s too late)

By: slandau
26 July 2024 at 17:50

Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.

In this dynamic and insightful interview, Check Point expert Micki Boland discusses how deepfakes are evolving, why that matters for organizations, and how organizations can take action to protect themselves. Discover on-point analyses that could reshape your decisions, improving cyber security and business outcomes. Don’t miss this.

Can you explain how deepfake technology works? 

Deepfakes are simulated video, audio, and images delivered as content via online news, mobile applications, and social media platforms. Deepfake videos are created with Generative Adversarial Networks (GANs), a type of artificial neural network that uses deep learning to create synthetic content.

GANs sound cool, but technical. Could you break down how they operate?

GANs are a class of machine learning systems built from two neural network models, a generator and a discriminator, which game each other. Training data in the form of video, still images, and audio is fed to the generator, which then seeks to recreate it. The discriminator then tries to discern the training data from the recreated data produced by the generator.

The two artificial intelligence engines repeatedly game each other, getting iteratively better. The result is convincing, high-quality synthetic video, images, or audio. A good example of a GAN at work is NVIDIA’s StyleGAN. Navigate to https://thispersondoesnotexist.com/ and you will see a synthetic human face generated by the model after training on real faces from the internet. Refreshing the browser yields a new image of a person who does not exist.
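
For readers who want to see the generator/discriminator game in code, here is a minimal PyTorch sketch on toy one-dimensional data (samples drawn from a Gaussian) rather than images; the layer sizes and hyperparameters are illustrative assumptions, not how production deepfake models are trained.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to a fake sample; discriminator scores real vs. fake.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "training data": samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))

    # Discriminator tries to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator tries to make the discriminator label its output as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"generated mean={samples.mean().item():.2f}, "
      f"std={samples.std().item():.2f} (target 3.00 / 0.50)")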

What are some notable examples of deepfake tech’s misuse?

Most people are not even aware of deepfake technologies, although these have now been infamously utilized to conduct major financial fraud. Politicians have also used the technology against their political adversaries. Early in the war between Russia and Ukraine, Russia created and disseminated a deepfake video of Ukrainian President Volodymyr Zelenskyy advising Ukrainian soldiers to “lay down their arms” and surrender to Russia.

How was the crisis involving the Zelenskyy deepfake video managed?

The deepfake quality was poor, and it was immediately identified as a deepfake video attributable to Russia. However, the technology is becoming so convincing and so real that soon it will be impossible for the regular human being to discern GenAI at work. And detection technologies, while they have a tremendous amount of funding and support from big technology corporations, are lagging far behind.

What are some lesser-known uses of deepfake technology and what risks do they pose to organizations, if any?

Hollywood is using deepfake technologies in motion picture creation to recreate actor personas. One such example is Bruce Willis, who sold his persona to be used in movies without his acting due to his debilitating health issues. Voicefake technology (another type of deepfake) enabled an autistic college valedictorian to address her class at her graduation.

Yet, deepfakes pose a significant threat. Deepfakes are used to lure people to “click bait” for launching malware (bots, ransomware, and other malicious software), and to conduct financial fraud through CEO and CFO impersonation. More recently, deepfakes have been used by nation-state adversaries to infiltrate organizations via impersonation or fake job interviews over Zoom.

How are law enforcement agencies addressing the challenges posed by deepfake technology?

Europol has really been a leader in identifying GenAI and deepfake as a major issue. Europol supports the global law enforcement community in the Europol Innovation Lab, which aims to develop innovative solutions for EU Member States’ operational work. Already in Europe, there are laws against deepfake usage for non-consensual pornography and cyber criminal gangs’ use of deepfakes in financial fraud.

What should organizations consider when adopting Generative AI technologies, as these technologies have such incredible power and potential?

Every organization is seeking to adopt GenAI to help improve customer satisfaction, deliver new and innovative services, reduce administrative overhead and costs, scale rapidly, do more with less and do it more efficiently. In consideration of adopting GenAI, organizations should first understand the risks, rewards, and tradeoffs associated with adopting this technology. Additionally, organizations must be concerned with privacy and data protection, as well as potential copyright challenges.

What role do frameworks and guidelines, such as those from NIST and OWASP, play in the responsible adoption of AI technologies?

On January 26th, 2023, NIST released its forty-two page Artificial Intelligence Risk Management Framework (AI RMF 1.0) and AI Risk Management Playbook (NIST 2023). For any organization, this is a good place to start.

The primary goal of the NIST AI Risk Management Framework is to help organizations create AI-focused risk management programs, leading to the responsible development and adoption of AI platforms and systems.

The NIST AI Risk Management Framework will help any organization align its organizational goals and use cases for AI. Most importantly, this risk management framework is human-centered. It addresses social responsibility and sustainability, and helps organizations focus closely on the potential and unintended consequences and impact of AI use.

Another immense help for organizations that wish to further understand the risks associated with GenAI Large Language Model adoption is the OWASP Top 10 LLM Risks list. OWASP released version 1.1 on October 16th, 2023. Through this list, organizations can better understand risks such as prompt injection and data poisoning. These risks are especially critical to know about when bringing an LLM in-house.

As organizations adopt GenAI, they need a solid framework through which to assess, monitor, and identify GenAI-centric attacks. MITRE has recently introduced ATLAS, a robust framework developed specifically for artificial intelligence and aligned to the MITRE ATT&CK framework.

For more of Check Point expert Micki Boland’s insights into deepfakes, please see CyberTalk.org’s past coverage.

 

