
The Human Algorithm: Why Disinformation Outruns Truth and What It Means for Our Future

EXPERT PERSPECTIVE — In recent years, the national conversation about disinformation has often focused on bot networks, foreign operatives, and algorithmic manipulation at industrial scale. Those concerns are valid, and I spent years inside CIA studying them with a level of urgency that matched the stakes. But an equally important story is playing out at the human level. It’s a story that requires us to look more closely at how our own instincts, emotions, and digital habits shape the spread of information.

This story reveals something both sobering and empowering: falsehood moves faster than truth not merely because of the technologies that transmit it, but because of the psychology that receives it. That insight is no longer just the intuition of intelligence officers or behavioral scientists. It is backed by hard data.

In 2018, MIT researchers Soroush Vosoughi, Deb Roy, and Sinan Aral published a groundbreaking study in Science titled “The Spread of True and False News Online.” It remains one of the most comprehensive analyses ever conducted on how information travels across social platforms.

The team examined more than 126,000 stories shared by 3 million people over a ten-year period. Their findings were striking. False news traveled farther, faster, and more deeply than true news. In many cases, falsehood reached its first 1,500 viewers six times faster than factual reporting. The most viral false stories routinely reached between 1,000 and 100,000 people, whereas true stories rarely exceeded a thousand.

One of the most important revelations was that humans, not bots, drove the difference. People were more likely to share false news because the content felt fresh, surprising, emotionally charged, or identity-affirming in ways that factual news often does not. That human tendency is becoming a national security concern.

For years, psychologists have studied how novelty, emotion, and identity shape what we pay attention to and what we choose to share. The MIT researchers echoed this in their work, but a broader body of research across behavioral science reinforces the point.

People gravitate toward what feels unexpected. Novel information captures our attention more effectively than familiar facts, which means sensational or fabricated claims often win the first click.

Emotion adds a powerful accelerant. A 2017 study published in the Proceedings of the National Academy of Sciences showed that messages evoking strong moral outrage travel through social networks more rapidly than neutral content. Fear, disgust, anger, and shock create a sense of urgency and a feeling that something must be shared quickly.

And identity plays a subtle, but significant role. Sharing something provocative can signal that we are well informed, particularly vigilant, or aligned with our community’s worldview. This makes falsehoods that flatter identity or affirm preexisting fears particularly powerful.

Taken together, these forces form what some have called the “human algorithm,” meaning a set of cognitive patterns that adversaries have learned to exploit with increasing sophistication.


During my years leading digital innovation at CIA, we saw adversaries expand their strategy beyond penetrating networks to manipulating the people on those networks. They studied our attention patterns as closely as they once studied our perimeter defenses.

Foreign intelligence services and digital influence operators learned to seed narratives that evoke outrage, stoke division, or create the perception of insider knowledge. They understood that emotion could outpace verification, and that speed alone could make a falsehood feel believable through sheer familiarity.

In the current landscape, AI makes all of this easier and faster. Deepfake video, synthetic personas, and automated content generation allow small teams to produce large volumes of emotionally charged material at unprecedented scale. Recent assessments from Microsoft’s 2025 Digital Defense Report document how adversarial state actors (including China, Russia, and Iran) now rely heavily on AI-assisted influence operations designed to deepen polarization, erode trust, and destabilize public confidence in the U.S.

This tactic does not require the audience to believe a false story. Often, it simply aims to leave them unsure of what truth looks like. And that uncertainty itself is a strategic vulnerability.

If exploited emotions can accelerate falsehood, then a thoughtful, well-organized response can help factual information arrive with greater clarity and speed.

One approach involves increasing what communication researchers sometimes call truth velocity, the act of getting accurate information into public circulation quickly, through trusted voices, and with language that resonates rather than lectures. This does not mean replicating the manipulative emotional triggers that fuel disinformation. It means delivering truth in ways that feel human, timely, and relevant.

Another approach involves small, practical interventions that reduce the impulse to share dubious content without thinking. Research by Gordon Pennycook and David Rand has shown that brief accuracy prompts (small moments that ask users to consider whether a headline seems true) meaningfully reduce the spread of false content. Similarly, cognitive scientist Stephan Lewandowsky has demonstrated the value of clear context, careful labeling, and straightforward corrections to counter the powerful pull of emotionally charged misinformation.
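To make the idea concrete, here is a minimal sketch of what an accuracy prompt might look like inside a sharing flow. It is an illustration under stated assumptions, not any platform’s actual implementation: the function names and the confirmation step are invented for the example, and the nudge simply surfaces the accuracy question that Pennycook and Rand’s research focuses on before the share goes through.

    # A minimal, illustrative accuracy-prompt gate for a sharing flow.
    # All names here are hypothetical; no real platform API is implied.

    def accuracy_prompt(headline: str) -> None:
        """Pause the share and ask the user to consider the headline's accuracy.

        The research suggests the nudge itself (shifting attention to accuracy)
        is what reduces the sharing of false content, not a hard block.
        """
        print(f'Before you share: "{headline}"')
        input("Do you think this headline is accurate? (yes / no / unsure) ")

    def share(headline: str, post_fn) -> None:
        """Show the accuracy prompt, then let the user decide whether to post."""
        accuracy_prompt(headline)
        if input("Still want to share it? (y/n) ").strip().lower() == "y":
            post_fn(headline)
        else:
            print("Share canceled.")

    if __name__ == "__main__":
        share("Scientists confirm chocolate cures the common cold", post_fn=print)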


Organizations can also help their teams understand how cognitive blind spots influence their perceptions. When people know how novelty, emotion, and identity shape their reactions, they become less susceptible to stories crafted to exploit those instincts. And when leaders encourage a culture of thoughtful engagement where colleagues pause before sharing, investigate the source, and notice when a story seems designed to provoke, it creates a ripple effect of more sound judgment.

In an environment where information moves at speed, even a brief moment of reflection can slow the spread of a damaging narrative.

A core part of this challenge involves reclaiming the mental space where discernment happens, what I refer to as Mind Sovereignty™. This concept is rooted in a simple practice: notice when a piece of information is trying to provoke an emotional reaction, and give yourself a moment to evaluate it instead.

Mind Sovereignty™ is not about retreating from the world or becoming disengaged. It is about navigating a noisy information ecosystem with clarity and steadiness, even when that ecosystem is designed to pull us off balance. It is about protecting our ability to think clearly before emotion rushes ahead of evidence.

This inner steadiness, in some ways, becomes a public good. It strengthens not just individuals, but the communities, organizations, and democratic systems they inhabit.

In the intelligence world, I came to believe that truth is resilient, but it cannot defend itself. It relies on leaders, communicators, technologists, and more broadly, all of us, choosing to treat information with care and intention. Falsehood may enjoy the advantage of speed, but truth gains power through the quality of the minds that carry it.

As we develop new technologies and confront new threats, one question matters more than ever: how do we strengthen the human algorithm so that truth has a fighting chance?

All statements of fact, opinion, or analysis expressed are those of the author and do not reflect the official positions or views of the U.S. Government. Nothing in the contents should be construed as asserting or implying U.S. Government authentication of information or endorsement of the author's views.

Read more expert-driven national security insights, perspective and analysis in The Cipher Brief, because National Security is Everyone's Business.

Lawmakers ‘Bullseye and Bait’ in AI-Driven Deepfake Campaigns

OPINION — Elected officials are both the bullseye and the bait for AI-driven influence campaigns launched by foreign adversaries. They are targeted with disinformation meant to sway their opinions and votes, while also serving as the raw material for deepfakes used to deceive others. It’s a problem so widespread that it threatens our faith in democratic institutions.

Even seemingly trivial posts can add to divisions already infecting the nation. Over the summer, a deepfake video depicting Rep. Alexandria Ocasio-Cortez (D-NY) discussing the perceived racist overtones of a jeans commercial went viral. At least one prominent news commentator was duped, sharing misinformation with his audience. While the origin of the fake is unknown, foreign adversaries, namely China, Russia, and Iran, often exploit domestic wedge issues to erode trust in elected officials.

Last year, Sen. Ben Cardin (D-MD) was deceived by a deepfake of Dmytro Kuleba, the former foreign minister of Ukraine, in an attempt to get the senator to reveal sensitive information about Ukrainian weaponry. People briefed on the FBI’s investigation into the incident suggest that the Russian government could be behind the deepfake, and that the Senator was being goaded into making statements that could be used for propaganda purposes.

In another incident, deepfake audio recordings of Secretary of State Marco Rubio deceived at least five government officials and three foreign ministers. The State Department diplomatic cable announcing the deepfake discovery also referenced an additional investigation into a Russia-linked cyber actor who had “posed as a fictitious department official.”

Meanwhile, researchers at Vanderbilt University’s Institute of National Security revealed that a Chinese company, GoLaxy, has used artificial intelligence to build psychological profiles of individuals including 117 members of Congress and 2,000 American thought leaders. Using these profiles, GoLaxy can tailor propaganda and target it with precision.

While the company denies that it — or its backers in the Chinese Communist Party — plan to use its advanced AI toolkit for influence operations against U.S. leaders, it allegedly has already done so in Hong Kong and Taiwan. Researchers say that in both places, GoLaxy profiled opposition voices and thought leaders and targeted them with curated messages on X (formerly Twitter), working to change their perception of events. The company also allegedly attempted to sway Hong Kongers’ views on a draconian 2020 national security law. That GoLaxy is now mapping America’s political leadership should be deeply concerning, but not surprising.

GoLaxy is far from the only actor reportedly using AI to influence public opinion. The same AI-enabled manipulation that now focuses on national leaders will inevitably be turned on mayors, school board members, journalists, CEOs — and eventually, anyone — deepening divisions in an already deeply divided nation.

Limiting the damage will require a coordinated response drawing on federal resources, private-sector innovation, and individual vigilance.


The White House has an AI Action Plan that lays out recommendations for how deepfake detection can be improved. It starts with turning the National Institute of Standards and Technology’s Guardians of Forensic Evidence deepfake evaluation program into formal guidelines. These guidelines would establish trusted standards that courts, media platforms, and consumer apps could use to evaluate deepfakes.

These standards are important because some AI-produced videos may be impossible to detect with the human eye. Instead, forensic tools can reveal deepfake giveaways. While far from perfect, this burgeoning deepfake detection field is adapting to rapidly evolving threats. Analyzing the distribution channels of deepfakes can also help determine their legitimacy, particularly for media outlets that want to investigate the authenticity of a video.

Washington must also coordinate with the tech industry, especially social media platforms, through the proposed AI Information Sharing and Analysis Center framework to build an early warning system to monitor, detect, and inform the public of influence operations exploiting AI-generated content.

The White House should also expand collaboration between the federal Cybersecurity and Infrastructure Security Agency, FBI, and the National Security Agency on deepfake responses. This combined team would work with Congress, agency leaders, and other prominent targets to minimize the spread of unauthorized synthetic content and debunk misleading information.

Lastly, public figures need to create rapid-response communication playbooks to address falsehoods head on and educate the public when deepfakes circulate. The United States can look to democratic allies like Taiwan for inspiration in how to deal with state-sponsored disinformation. The Taiwanese government has adopted the “222 policy”: release a response of 200 words and two photos within two hours of detecting disinformation.

Deepfakes and AI-enabled influence campaigns represent a generational challenge to truth and trust. Combating this problem will be a cat-and-mouse game, with foreign adversaries constantly working to outmaneuver the safeguards meant to stop them. No individual solution will be enough to stop them, but by involving the government, the media, and individuals, it may be possible to limit their damage.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals.

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.

Have a perspective to share based on your experience in the national security field? Send it to Editor@thecipherbrief.com for publication consideration.

How to Spot Fake News in Your Social Media Feed

Spotting fake news in your feed has always been tough. Now it just got tougher, thanks to AI. 

Fake news crops up in plenty of places on social media. And it has for some time now. In years past, it took the form of misleading posts, image captions, quotes, and the sharing of outright false information in graphs and charts. Now with the advent of AI, we see fake news taken to new levels of deception:  

  • Deepfake videos that mimic the looks and parrot the words of well-known public figures.  
  • AI-generated voice clones that sound eerily close to the voices they mimic. 
  • Entire news websites generated by AI, rife with bogus stories and imagery. 

All of it’s out there. And knowing how to separate fact from fiction has never been more important, particularly as more and more people get their news via social media. 

Pew Research found that about a third of Americans say they regularly get their news from Facebook, and nearly 1 in 4 say they regularly get it from YouTube. Moreover, global research from the Reuters Institute found that more people now get their news primarily from social media (30%) than from an established news site or app (22%), marking the first time that social media has overtaken direct access to news. 

Yet, you can spot fake news. Plenty of it.  

The process starts with a crisp definition of what fake news is, followed by the forms it takes, and then a sense of what the goals behind it are. With that, you can apply a critical eye and pick out the telltale signs.  

We’ll cover it all here. 

What is fake news? 

A textbook definition of fake news goes something like this:  

A false news story, fabricated with no verifiable facts, and presented in a way to appear as legitimate news.  

As for its intent, fake news often seeks to damage the reputation of an individual, institution, or organization. It might also spout propaganda or attempt to undermine established facts. 

That provides a broad definition. Yet, like much fake news itself, the full picture is more nuanced. Within fake news, you’ll find two categories: disinformation and misinformation. 

  • Disinformation: This is intentionally misleading information that has been manipulated to create a flat-out lie, typically with an ulterior motive in mind. The creator knows the information is false. 
  • Example: As a bad joke, a person concocts a phony news story that a much-anticipated video game release has been canceled, even though the game is still on track. In the meantime, word spreads and online fans whip themselves into a frenzy. 
  • Misinformation: This simply involves getting the facts wrong, unknowingly, which is what separates it from disinformation. We’re only human, and sometimes we forget details or recall things incorrectly. Likewise, when a person passes along disinformation without fact-checking it, that counts as misinformation as well. 
  • Example: A person sees a post that a celebrity has died and shares that post with their friends and followers, when in fact the celebrity is still very much alive. 

From there, fake news gets more nuanced still. Misinformation and disinformation fall within a range. Some of it might appear comical, while other types might have the potential to do actual harm.  

Dr. Claire Wardle, co-director of the Information Futures Lab at Brown University, cites seven types of misinformation and disinformation, which she arranges along a scale. 

[Figure: Seven types of mis- and disinformation. Source: FirstDraftNews.org and Brown University]

Put in a real-life context, you can probably conjure up plenty of examples you’ve seen yourself. Like clickbait-y headlines that link to letdown articles with little substance. Maybe you’ve seen a quote pasted on the image of a public figure, a quote that person never actually said. Perhaps an infographic loaded with bogus statistics and attributed to an organization that doesn’t even exist. It can take all forms. 

Who’s behind fake news? And why? 

The answers here vary as well. Greatly so. Fake news can begin with a single individual, or groups of like-minded individuals with an agenda, and it can even come from operatives for various nation-states. As for why, they might want to poke fun at someone, drive ad revenue through clickbait articles, or spout propaganda.  

Once more, a visualization from the same researchers helps clarify this sometimes-murky mix of fake news. 

[Figure: The actors and motivations behind fake news. Source: FirstDraftNews.org and Brown University]

In the wild, some examples of fake news and the reasons behind it might look like this: 

  • Imposter sites that pose as legitimate news outlets yet post entirely unfounded pieces of propaganda. 
  • Parody sites that can look legitimate, so much so that people might mistake their content for actual news. 
  • AI deepfakes: images, recordings, and videos of public figures in embarrassing situations that get presented as “real news” to damage their reputations. 

Perhaps a few of these examples ring a bell. You might have come across something and not been entirely sure whether it was fake news or not. 

The following tools can help you know for sure. 

Spotting what’s real and fake in your social media feed 

Consider the source 

Some of the oldest advice is the best advice, and that holds true here: consider the source. Take time to examine the information you come across. Look at its source. Does that source have a track record of honesty and dealing plainly with the facts?  

  • For an infographic, search for the name of its author or the institution it’s attributed to. Are they even real in the first place? 
  • For news websites, check out their “About Us” pages. Many bogus sites skimp on information here, whereas legitimate sites go into detail about their editorial history and staff. Checking how long a site’s domain has existed can also help, as in the sketch after this list. 
  • For any content that lists a citation to legitimize it as fact, look that citation up. Plenty of fake news uses sources and citations that are just as fake. 
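As one concrete signal among many, you can check how long an unfamiliar news site’s domain has been registered; established outlets tend to have old registrations, while imposter sites are often only weeks old. The sketch below is illustrative only and assumes the third-party python-whois package (pip install python-whois); a young domain is a reason for caution, not proof of fakery.

    # Rough sketch: estimate how old a news site's domain registration is.
    # Assumes the third-party package python-whois:  pip install python-whois
    from datetime import datetime, timezone

    import whois  # provided by python-whois

    def domain_age_days(domain: str) -> int | None:
        """Return the approximate age of a domain in days, or None if unknown."""
        record = whois.whois(domain)
        created = record.creation_date
        if isinstance(created, list):  # some registrars return several dates
            created = min(created)
        if created is None:
            return None
        if created.tzinfo is None:
            created = created.replace(tzinfo=timezone.utc)
        return (datetime.now(timezone.utc) - created).days

    if __name__ == "__main__":
        age = domain_age_days("example.com")  # placeholder domain
        if age is None:
            print("Could not determine the domain's age.")
        elif age < 180:
            print(f"Registered only {age} days ago -- treat its reporting with extra care.")
        else:
            print(f"Registered for roughly {age // 365} years.")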

Check the date 

This falls into a similar category as “consider the source.” Plenty of fake news will take an old story and repost it or alter it in some way to make it appear relevant to current events. In recent years, we’ve seen fake news creators slap a new headline on an old photo to make it seem like something current. Once again, a quick search can help you tell if it’s fake or not. Try a reverse image search and see what comes up. Is the photo indeed current? Who took it? When? Where? 
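A reverse image search is easiest to run through your browser or a dedicated service, but the underlying idea can be sketched in code. The example below is a simplified stand-in, not a full reverse image search: it uses a perceptual hash to check whether a “new” photo is really a lightly edited copy of an older one, assumes the third-party Pillow and ImageHash packages (pip install Pillow ImageHash), and uses placeholder file names.

    # Sketch: compare a suspect photo against an older known image using a
    # perceptual hash. Assumes:  pip install Pillow ImageHash
    from PIL import Image
    import imagehash

    def likely_same_photo(path_a: str, path_b: str, threshold: int = 8) -> bool:
        """Return True if two images are perceptually near-identical.

        A small Hamming distance between perceptual hashes means the images
        match even if one was resized, recompressed, or lightly cropped.
        """
        hash_a = imagehash.phash(Image.open(path_a))
        hash_b = imagehash.phash(Image.open(path_b))
        return (hash_a - hash_b) <= threshold

    if __name__ == "__main__":
        # Placeholder file names for illustration only.
        if likely_same_photo("viral_post.jpg", "archive_photo_2016.jpg"):
            print("The 'breaking news' photo appears to be a recycled older image.")
        else:
            print("No match here; keep checking other sources.")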

Check your emotions too 

Has a news story you’ve read or watched ever made you shake your fist at the screen or want to clap and cheer? How about something that made you fearful or simply laugh? Bits of content that evoke strong emotional responses tend to spread quickly, whether they’re articles, posts, or tweets. A strong reaction is a ready sign that a quick fact-check might be in order, because the content may be playing to your biases. 

There’s a good reason for that. Bad actors who wish to foment unrest and unease, or to spread disinformation, use emotionally driven content to plant a seed. Whether or not their original story gets picked up and viewed firsthand doesn’t matter to them. Their aim is to get some manner of disinformation out into the ecosystem. They rely on others to re-post, re-tweet, or otherwise pass it along on their behalf, to the point where the original source of the information gets completely lost. That is when people readily begin to accept the information as fact, even if it isn’t factual at all. 

Certainly, some legitimate articles will generate a response as well, yet it’s a good habit to do a quick fact-check and confirm what you’ve read.  

Expand your media diet 

A single information source or story won’t provide a complete picture. It might only cover a topic from a certain angle or narrow focus. Likewise, information sources are helmed by editors and stories are written by people—all of whom have their biases, whether overt or subtle. It’s for this reason that expanding your media diet to include a broad range of information sources is so important. 

So, see what other information sources have to say on the same topic. Consuming news across a spectrum will expose you to thoughts and coverage you might not otherwise get if you keep your consumption to a handful of sources. The result is that you’re more broadly informed and can compare different sources and points of view. Using the tips above, you can find other reputable sources to round out your media diet. 

Additionally, for a list of reputable information sources, along with the reasons they’re reputable, check out “10 Journalism Brands Where You Find Real Facts Rather Than Alternative Facts” published by Forbes and authored by an associate professor at The King’s College in New York City. It certainly isn’t the end all, be all of lists, yet it should provide you with a good starting point. 

Let an expert do the fact-checking for you 

Debunking fake news takes time and effort, and often a bit of digging and research too. Professional fact-checkers at news and media organizations do this work daily, and their findings are posted for all to see, giving you a quick way to get answers. Well-known fact-checking groups include PolitiFact, Snopes, FactCheck.org, AP Fact Check, and Reuters Fact Check. 
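If you prefer to query fact-checkers programmatically, Google’s Fact Check Tools API aggregates published fact checks and can be searched by keyword. The sketch below is a rough illustration: it assumes you have an API key for that service and that the response fields match the API’s documented shape, and it uses the requests package for the HTTP call.

    # Sketch: look up published fact checks for a claim via the
    # Google Fact Check Tools API (assumes you have an API key).
    import requests

    FACT_CHECK_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

    def search_fact_checks(query: str, api_key: str, max_results: int = 5) -> list[dict]:
        """Return a list of {claim, publisher, rating, url} dicts for a query."""
        resp = requests.get(
            FACT_CHECK_URL,
            params={"query": query, "key": api_key, "pageSize": max_results},
            timeout=10,
        )
        resp.raise_for_status()
        results = []
        for claim in resp.json().get("claims", []):
            for review in claim.get("claimReview", []):
                results.append({
                    "claim": claim.get("text", ""),
                    "publisher": review.get("publisher", {}).get("name", ""),
                    "rating": review.get("textualRating", ""),
                    "url": review.get("url", ""),
                })
        return results

    if __name__ == "__main__":
        for hit in search_fact_checks("celebrity death hoax", api_key="YOUR_API_KEY"):
            print(f"{hit['publisher']}: {hit['claim']} -- rated: {hit['rating']}")
            print(f"  {hit['url']}")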

Three ways to spot AI-generated fakes  

As AI continues to evolve, its output gets trickier and trickier to spot in images, video, and audio. Advances in AI give images a clarity and crispness they didn’t have before, deepfake videos play more smoothly, and voice cloning gets uncannily accurate. 

Yet even with the best AI, scammers often leave their fingerprints all over the fake news content they create. Look for the following: 

1) Consider the context  

AI fakes usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Ask whether the text even makes sense. And, as with legitimate news articles, does it include identifying information, such as the date, time, and place of publication, along with the author’s name? 

2) Evaluate the claim 

Does the image seem too bizarre to be real? Too good to be true? “Don’t believe everything you read on the internet” now extends to “Don’t believe everything you see on the internet.” If a story makes a dramatic claim, search for the headline elsewhere. If it’s truly noteworthy, other known and reputable sites will report on the event, and they will have done their own fact-checking. 

3) Check for distortions 

The bulk of AI technology still renders fingers and hands poorly. It often creates eyes that might have a soulless or dead look to them—or that show irregularities between them. Also, shadows might appear in places where they look unnatural. Further, the skin tone might look uneven. In deepfaked videos, the voice and facial expressions might not exactly line up, making the subject look robotic and stiff.  

Be safe out there 

The fact is that fake news isn’t going anywhere. It’s a reality of going online. And AI makes it tougher to spot. 

At least at first glance. The best tool for spotting fake news is a fact-check. You can do the work yourself, or you can rely on trusted resources that have already done the work.  

This takes time, which people don’t always spend because social platforms make it so quick and easy to share. If we can point to one reason fake news spreads so quickly, that’s it. In fact, social media platforms reward such behavior. 

With that, keep an eye on your own habits. We forward news in our own social media feeds too, so make sure that what you share is truthful. 

Plenty of fake news can lure you into sketchy corners of the internet, places where malware and phishing sites take root. Consider using comprehensive online protection software like McAfee+ to stay safe. In addition to several features that protect your devices, privacy, and identity, it can warn you of unsafe sites too. While it might not sniff out AI content (yet), it offers strong protection against bad actors who might use fake news to steal your information or harm your data and devices. 

