EXPERT PERSPECTIVE — In recent years, the national conversation about disinformation has often focused on bot networks, foreign operatives, and algorithmic manipulation at industrial scale. Those concerns are valid, and I spent years inside CIA studying them with a level of urgency that matched the stakes. But an equally important story is playing out at the human level. It’s a story that requires us to look more closely at how our own instincts, emotions, and digital habits shape the spread of information.
This story reveals something both sobering and empowering: falsehood moves faster than truth not merely because of the technologies that transmit it, but because of the psychology that receives it. That insight is no longer just the intuition of intelligence officers or behavioral scientists. It is backed by hard data.
In 2018, MIT researchers Soroush Vosoughi, Deb Roy, and Sinan Aral published a groundbreaking study in Science titled "The Spread of True and False News Online." It remains one of the most comprehensive analyses ever conducted on how information travels across social platforms.
The team examined more than 126,000 stories shared by 3 million people over a ten-year period. Their findings were striking. False news traveled farther, faster, and more deeply than true news. In many cases, falsehood reached its first 1,500 viewers six times faster than factual reporting. The most viral false stories routinely reached between 1,000 and 100,000 people, whereas true stories rarely exceeded a thousand.
One of the most important revelations was that humans, not bots, drove the difference. People were more likely to share false news because the content felt fresh, surprising, emotionally charged, or identity-affirming in ways that factual news often does not. That human tendency is becoming a national security concern.
For years, psychologists have studied how novelty, emotion, and identity shape what we pay attention to and what we choose to share. The MIT researchers echoed this in their work, but a broader body of research across behavioral science reinforces the point.
People gravitate toward what feels unexpected. Novel information captures our attention more effectively than familiar facts, which means sensational or fabricated claims often win the first click.
Emotion adds a powerful accelerant. A 2017 study published in the Proceedings of the National Academy of Sciences showed that messages evoking strong moral outrage travel through social networks more rapidly than neutral content. Fear, disgust, anger, and shock create a sense of urgency and a feeling that something must be shared quickly.
And identity plays a subtle, but significant role. Sharing something provocative can signal that we are well informed, particularly vigilant, or aligned with our community’s worldview. This makes falsehoods that flatter identity or affirm preexisting fears particularly powerful.
Taken together, these forces form what some have called the “human algorithm,” meaning a set of cognitive patterns that adversaries have learned to exploit with increasing sophistication.
During my years leading digital innovation at CIA, we saw adversaries expand their strategy beyond penetrating networks to manipulating the people on those networks. They studied our attention patterns as closely as they once studied our perimeter defenses.
Foreign intelligence services and digital influence operators learned to seed narratives that evoke outrage, stoke division, or create the perception of insider knowledge. They understood that emotion could outpace verification, and that speed alone could make a falsehood feel believable through sheer familiarity.
In the current landscape, AI makes all of this easier and faster. Deepfake video, synthetic personas, and automated content generation allow small teams to produce large volumes of emotionally charged material at unprecedented scale. Recent assessments from Microsoft’s 2025 Digital Defense Report document how adversarial state actors (including China, Russia, and Iran) now rely heavily on AI-assisted influence operations designed to deepen polarization, erode trust, and destabilize public confidence in the U.S.
This tactic does not require the audience to believe a false story. Often, it simply aims to leave them unsure of what truth looks like. And that uncertainty itself is a strategic vulnerability.
If misguided emotions can accelerate falsehood, then a thoughtful and well-organized response can help ensure factual information arrives with greater clarity and speed.
One approach involves increasing what communication researchers sometimes call truth velocity, the act of getting accurate information into public circulation quickly, through trusted voices, and with language that resonates rather than lectures. This does not mean replicating the manipulative emotional triggers that fuel disinformation. It means delivering truth in ways that feel human, timely, and relevant.
Another approach involves small, practical interventions that reduce the impulse to share dubious content without thinking. Research by Gordon Pennycook and David Rand has shown that brief accuracy prompts (small moments that ask users to consider whether a headline seems true) meaningfully reduce the spread of false content. Similarly, cognitive scientist Stephan Lewandowsky has demonstrated the value of clear context, careful labeling, and straightforward corrections to counter the powerful pull of emotionally charged misinformation.
Organizations can also help their teams understand how cognitive blind spots influence their perceptions. When people know how novelty, emotion, and identity shape their reactions, they become less susceptible to stories crafted to exploit those instincts. And when leaders encourage a culture of thoughtful engagement where colleagues pause before sharing, investigate the source, and notice when a story seems designed to provoke, it creates a ripple effect of more sound judgment.
In an environment where information moves at speed, even a brief moment of reflection can slow the spread of a damaging narrative.
A core part of this challenge involves reclaiming the mental space where discernment happens, what I refer to as Mind Sovereignty™. This concept is rooted in a simple practice: notice when a piece of information is trying to provoke an emotional reaction, and give yourself a moment to evaluate it instead.
Mind Sovereignty™ is not about retreating from the world or becoming disengaged. It is about navigating a noisy information ecosystem with clarity and steadiness, even when that ecosystem is designed to pull us off balance. It is about protecting our ability to think clearly before emotion rushes ahead of evidence.
This inner steadiness, in some ways, becomes a public good. It strengthens not just individuals, but the communities, organizations, and democratic systems they inhabit.
In the intelligence world, I always thought that truth was resilient, but it cannot defend itself. It relies on leaders, communicators, technologists, and more broadly, all of us, who choose to treat information with care and intention. Falsehood may enjoy the advantage of speed, but truth gains power through the quality of the minds that carry it.
As we develop new technologies and confront new threats, one question matters more than ever: how do we strengthen the human algorithm so that truth has a fighting chance?
All statements of fact, opinion, or analysis expressed are those of the author and do not reflect the official positions or views of the U.S. Government. Nothing in the contents should be construed as asserting or implying U.S. Government authentication of information or endorsement of the author's views.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief, because National Security is Everyone's Business.
U.S. focus on irregular warfare (IW) and its subset, cognitive warfare, has been erratic. The U.S. has struggled to adapt its plans to cognitive warfare even as our leaders have consistently called for more expertise in this type of warfare. In 1962, President Kennedy challenged West Point graduates to understand "another type of war, new in its intensity, ancient in its origin, that would require a whole new kind of strategy, a wholly different kind of force, forces which are too unconventional to be called conventional forces…" Over twenty years later, in 1987, Congress passed the Nunn-Cohen Amendment that established Special Operations Command (SOCOM) and the Defense Department's Special Operations and Low-Intensity Conflict (SO/LIC) office. Another twenty years later, then-Secretary of Defense Robert Gates said that DoD needed "to display a mastery of irregular warfare comparable to that which we possess in conventional combat."
After twenty years of developing best practices for IW in the counterterrorism arena, the 2020 Irregular Warfare Annex to the National Defense Strategy emphasized the need to institutionalize irregular warfare "as a core competency with sufficient, enduring capabilities to advance national security objectives across the spectrum of competition and conflict." In December 2022, a RAND commentary pointed out that the U.S. military has failed to master IW above the tactical level. I submit that we have failed because we have focused on technology at the expense of expertise and creativity, and that we need to balance technology with developing a workforce that thinks differently from the engineers and scientists who create our weapons and collection systems.
Adversaries Ahead of Us
IW, and especially cognitive warfare, is high-risk and by definition relies on manipulative practices to obtain results. Some policy leaders are hesitant to use this approach to develop influence strategies, which has slowed the development of tools and strategies to counter our adversaries. U.S. adversaries are experts at IW and do not face many of the political, legal, or oversight hurdles that U.S. IW specialists do.
Chinese military writings highlight the PRC's use of what we would call IW in its "three warfares" doctrine, which employs public opinion, legal warfare, and psychological operations to spread positive views of China and influence foreign governments in ways favorable to Beijing. General Wang Haijiang, commander of the People's Liberation Army's (PLA) Western Theater Command, wrote in an official People's Republic of China (PRC) newspaper that the Ukraine war has produced a new era of hybrid warfare, intertwining "political warfare, financial warfare, technological warfare, cyber warfare, and cognitive warfare." The PRC's Belt and Road Initiative and Digital Silk Road are prime examples of using economic coercion as irregular warfare, and its Confucius Institutes underscore how it tries to influence foreign populations through language and cultural training.
Russia uses IW to try to ensure the battle is won before military operations begin and to enhance its conventional forces. Russia calls this hybrid war, and we saw it in the "little green men" who went into Crimea in 2014 and in the use of paramilitary Wagner forces around the world. Russia has also waged disinformation campaigns against the U.S. on digital platforms and has even conducted assassinations and sabotage on foreign soil to shape the battle space toward its goals.
What Is Needed
U.S. architects of IW seem to primarily focus on oversight structures and budget, and less on how to develop an enduring capability.
Through the counterterrorism fight, the U.S. learned how to use on-the-ground specialists, develop relationships at the tribal level, and understand cultures in order to influence populations. The U.S. has the tools and the lessons learned to level the playing field against its adversaries, but it is not putting enough emphasis on cognitive warfare. A key to the way forward is to develop SOF personnel, and the intelligence professionals who support them, who understand the people, the geography, and the societies they are trying to influence and affect. We must then go further and reward creativity and cunning in developing cognitive warfare strategies.
The Department of Defense and the intelligence community have flirted with the need for expertise in the human domain, or socio-cultural sphere, for years. The Department of Defense put millions of dollars into socio-cultural work around 2015. That focus faded as attention shifted to near-peer competition. Instead, we focused on technology, better weapons, and more complex collection platforms as the way to compete with these adversaries. We even looked to cut Human Intelligence (HUMINT) in favor of what some call a lower-risk approach to collection: using technology instead of humans.
SOF personnel are considered the military's most creative members. They are chosen for their ability to adapt, blend in, and think outside the box. This ingenuity needs to be encouraged. We need to balance oversight mindfully without stifling the uniqueness that makes IW so successful. While some of this creativity may come naturally, we need to put in place training that speaks to inventiveness and draws out these members' ability to think through the impossible. Focused military classes across the services must build on the latest practices for fostering creativity and out-of-the-box thinking. This entrepreneurial approach is not typically rewarded in a military focused on planning, rehearsals, and more planning.
Focusing on Intelligence and Irregular Warfare
An important part of the equation for irregular warfare is intelligence. This foundation for irregular warfare work is often left out in the examination of what is needed for the U.S. to move IW forward. In the SOF world, operators and intelligence professionals overlap more than in any other military space. Intelligence officers who support IW need to have the same creative mindset as the operators. They also need to be experts in their regional areas—just like the SOF personnel.
The intelligence community’s approach to personnel over the past twenty or so years works against support for IW. Since the fall of the Soviet Union, the intelligence community has moved from an expertise-based system to one focused more on processes. We used to have deep experts on every aspect of an adversary: analysts or collectors who had spent years learning everything about one foreign leader or one aspect of a country’s industry, with deep knowledge of that country’s language and culture. With many more adversaries, and with collection platforms far more expensive than those developed in the community’s early days, we can no longer afford the detailed expert of yore. The current premise is that if you know the processes for writing a good analytical piece or for being a good case officer, the community can plug and play you into any context. We have put a premium on process while neglecting expertise. As with all things, we need to balance these two important aspects of intelligence work.
To truly understand and use IW, we need to develop expert regional analysts and human intelligence personnel: individuals who understand the human domain they are studying. We need to understand how the enemy thinks in order to provide that precision to the operator, and this insight comes only after years of studying the adversary. We need to reward and celebrate those experts just as much as we do the adaptable, plug-and-play analyst or human intelligence officer. Individuals who speak and understand the nuances of our adversaries' languages, and who understand their cultures and patterns of life, are the SOF member's best tool for advancing competition in IW. Developing this workforce must be a first thought, not an afterthought, in the development of our irregular warfare doctrine.
CIA Director William Casey testified before Congress in 1981:
“The wrong picture is not worth a thousand words. No photo, no electronic impulse can substitute for direct on the scene knowledge of the key factors in a given country or region. No matter how spectacular a photo may be it cannot reveal enough about plans, intentions, internal political dynamics, economics, etc. Technical collection is of little help in the most difficult problem of all—political intentions. This is where clandestine human intelligence can make a difference.”
Not only are analytical experts important in support of IW, but so are HUMINT experts. We have focused on technology to fill intelligence gaps to the detriment of human intelligence. The Defense Intelligence enterprise has looked for ways to cut its HUMINT capability when we should be increasing our use of HUMINT collection and HUMINT-enabled intelligence activities. In 2020, Defense One reported on a Defense Intelligence Agency (DIA) plan to cut U.S. defense attachés in several West African countries and downgrade the ranks of others in eight countries. Many advocate for taking humans out of the loop as much as possible, on the theory that this lowers the risk of capture or leaks. But as any regional expert will tell you, while satellites and drones can provide an incredible amount of intelligence, from pictures to bits of conversation, what they cannot provide is the context for those pictures or snippets of conversation. As Director Casey implied, only the expert who has lived on the ground, among the people he or she is reporting on, can truly grasp local contexts, allegiances, and sentiments.
While it is important to continue to upgrade technology and to have specialists who fly drones and perform other data functions, those functions must be fused with human understanding of the adversary and the terrain. Algorithms can sift through vast amounts of data, but human operatives and analysts ensure the contextual relevance of that data; technologies cannot report on the nuances of feelings and emotions. Regional experts equip SOF operators with the nuanced understanding required to navigate the complexities of the "prior to bang" playing field. This expertise, married with cunning and creativity, will give us the tools we need to combat our adversaries in the cognitive warfare domain.
Conclusion
Contextual, human-centric understanding is paramount to developing plans and operations for cognitive warfare that can compete with our adversaries and keep us out of a kinetic fight. Those who try to make warfare or intelligence into a science miss the truth: to be proficient in either, art is a must. We need expertise to decipher the stories, motives, and aspirations that make cognitive warfare unique. Regional intelligence experts discern the patterns, motives, and vulnerabilities of adversaries, which are key to developing IW campaigns and to influencing individuals and societies. We need seasoned human intelligence personnel, targeters, and analysts who are experts on the adversary. We also need to develop and reward creativity, which is a must in this world.
We also have to be upfront and acknowledge the need to manipulate our adversaries. U.S. decision-makers must concede that cognitive warfare is a must to win the next war, and it is essential that these leaders take calculated risks to mount campaigns to influence and manipulate.
The cost of cognitive warfare is a rounding error compared to the development of new technical intelligence collection platforms and their massive infrastructures. Yet this rounding error is a linchpin of irregular warfare, and irregular warfare is our most likely avenue for avoiding a kinetic war. Human operatives, out-of-the-box thinking, and expert analysts and human intelligence personnel are the bridges that turn data into actionable insight, allowing our SOF community to practice the kind of irregular warfare the U.S. has historically proven it can provide, and must provide, to counter our adversaries and win the cognitive war we are currently experiencing.
Even seemingly trivial posts can add to divisions already infecting the nation. Over the summer, a deepfake video depicting Rep. Alexandria Ocasio-Cortez (D-NY) discussing the perceived racist overtones of a jeans commercial went viral. At least one prominent news commentator was duped, sharing misinformation with his audience. While the origin of the fake is unknown, foreign adversaries, namely China, Russia, and Iran, often exploit domestic wedge issues to erode trust in elected officials.
Last year, Sen. Ben Cardin (D-MD) was deceived by a deepfake of Dmytro Kuleba, the former foreign minister of Ukraine, in an attempt to get the senator to reveal sensitive information about Ukrainian weaponry. People briefed on the FBI’s investigation into the incident suggest that the Russian government could be behind the deepfake, and that the Senator was being goaded into making statements that could be used for propaganda purposes.
In another incident, deepfake audio recordings of Secretary of State Marco Rubio deceived at least five government officials and three foreign ministers. The State Department diplomatic cable announcing the deepfake discovery also referenced an additional investigation into a Russia-linked cyber actor who had “posed as a fictitious department official.”
Meanwhile, researchers at Vanderbilt University’s Institute of National Security revealed that a Chinese company, GoLaxy, has used artificial intelligence to build psychological profiles of individuals including 117 members of Congress and 2,000 American thought leaders. Using these profiles, GoLaxy can tailor propaganda and target it with precision.
While the company denies that it — or its backers in the Chinese Communist Party — plan to use its advanced AI toolkit for influence operations against U.S. leaders, it allegedly has already done so in Hong Kong and Taiwan. Researchers say that in both places, GoLaxy profiled opposition voices and thought leaders and targeted them with curated messages on X (formerly Twitter), working to change their perception of events. The company also allegedly attempted to sway Hong Kongers’ views on a draconian 2020 national security law. That GoLaxy is now mapping America’s political leadership should be deeply concerning, but not surprising.
GoLaxy is far from the only actor reportedly using AI to influence public opinion. The same AI-enabled manipulation that now focuses on national leaders will inevitably be turned on mayors, school board members, journalists, CEOs — and eventually, anyone — deepening divisions in an already deeply divided nation.
Limiting the damage will require a coordinated response drawing on federal resources, private-sector innovation, and individual vigilance.
The White House has an AI Action Plan that lays out recommendations for how deepfake detection can be improved. It starts with turning the National Institute of Standards and Technology’s Guardians of Forensic Evidence deepfake evaluation program into formal guidelines. These guidelines would establish trusted standards that courts, media platforms, and consumer apps could use to evaluate deepfakes.
These standards are important because some AI-produced videos may be impossible to detect with the human eye. Instead, forensic tools can reveal deepfake giveaways. While far from perfect, this burgeoning deepfake detection field is adapting to rapidly evolving threats. Analyzing the distribution channels of deepfakes can also help determine their legitimacy, particularly for media outlets that want to investigate the authenticity of a video.
Washington must also coordinate with the tech industry, especially social media platforms, through the proposed AI Information Sharing and Analysis Center framework to build an early warning system to monitor, detect, and inform the public of influence operations exploiting AI-generated content.
The White House should also expand collaboration between the federal Cybersecurity and Infrastructure Security Agency, FBI, and the National Security Agency on deepfake responses. This combined team would work with Congress, agency leaders, and other prominent targets to minimize the spread of unauthorized synthetic content and debunk misleading information.
Lastly, public figures need to create rapid-response communication playbooks to address falsehoods head-on and educate the public when deepfakes circulate. The United States can look to democratic allies like Taiwan for inspiration in dealing with state-sponsored disinformation. The Taiwanese government has adopted the "222 policy," releasing 200 words and two photos within two hours of detecting disinformation.
Deepfakes and AI-enabled influence campaigns represent a generational challenge to truth and trust. Combating this problem will be a cat-and-mouse game, with foreign adversaries constantly working to outmaneuver the safeguards meant to stop them. No individual solution will be enough to stop them, but by involving the government, the media, and individuals, it may be possible to limit their damage.
The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals.
Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.
DEEP DIVE — When Secret Service agents swept into an inconspicuous building near the United Nations General Assembly late last month, they weren’t tracking guns or explosives. Instead, they dismantled a clandestine telecommunications hub that investigators say was capable of crippling cellular networks and concealing hostile communications.
According to federal officials, the operation seized more than 300 devices tied to roughly 100,000 SIM cards — an arsenal of network-manipulating tools that could disrupt the cellular backbone of New York City at a moment of geopolitical tension. The discovery, officials stressed, was not just a one-off bust but a warning sign of a much broader national security vulnerability.
The devices were designed to create what experts call a “SIM farm,” an industrial-scale operation where hundreds or thousands of SIM cards can be manipulated simultaneously. These setups are typically associated with financial fraud or bulk messaging scams. Still, the Secret Service warned that they can also be used to flood telecom networks, disable cell towers, and obscure the origin of communications.
In the shadow of the UN, where global leaders convene and security tensions are high, the proximity of such a system raised immediate questions about intent, attribution, and preparedness.
“(SIM farms) could jam cell and text services, block emergency calls, target first responders with fake messages, spread disinformation, or steal login codes,” Jake Braun, Executive Director of the Cyber Policy Initiative at the University of Chicago and former White House Acting Principal Deputy National Cyber Director, tells The Cipher Brief. “In short, they could cripple communications just when they’re needed most.”
How SIM Farms Work
At their core, SIM farms exploit the fundamental architecture of mobile networks. Each SIM card represents a unique identity on the global communications grid. By cycling through SIMs at high speed, operators can generate massive volumes of calls, texts, or data requests that overwhelm cellular infrastructure. Such floods can mimic the effects of a distributed denial-of-service (DDoS) attack, except the assault comes through legitimate carrier channels rather than obvious malicious traffic.
“SIM farms are essentially racks of modems that cycle through thousands of SIM cards,” Dave Chronister, CEO of Parameter Security, tells The Cipher Brief. “Operators constantly swap SIM cards and device identifiers so traffic appears spread out rather than coming from a single source.”
That makes them extremely difficult to detect.
“They can mimic legitimate business texts and calls, hide behind residential internet connections, or scatter equipment across ordinary locations so there’s no single, obvious signal to flag,” Chronister continued. “Because SIM farms make it hard to tie a number back to a real person, they’re useful to drug cartels, human-trafficking rings and other organized crime, and the same concealment features could also be attractive to terrorists.”
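The concealment Chronister describes can be illustrated with a toy model. The sketch below is not any real carrier's detection logic; the phone numbers, threshold, and volumes are invented for illustration. It shows only the arithmetic of rotation: a naive per-sender rate limiter flags a single number blasting thousands of messages, but the same volume spread across a pool of rotating SIM identities never trips the per-sender threshold.

```python
import random
from collections import Counter

# Toy model (illustrative only): a carrier-side limiter that flags any
# single sender exceeding THRESHOLD messages in a monitoring window.
THRESHOLD = 50
TOTAL_MESSAGES = 5000

def flagged_numbers(traffic):
    """Return the set of sender IDs whose message count exceeds the threshold."""
    counts = Counter(traffic)
    return {sender for sender, n in counts.items() if n > THRESHOLD}

random.seed(7)  # deterministic for the example

# Scenario 1: one modem, one SIM -- all traffic shares a single identity.
single_sim = ["+1-555-0100"] * TOTAL_MESSAGES

# Scenario 2: a SIM farm rotating the same volume across 1,000 identities,
# so each sender averages only ~5 messages and stays below the threshold.
sim_pool = [f"+1-555-{i:04d}" for i in range(1000)]
farm = [random.choice(sim_pool) for _ in range(TOTAL_MESSAGES)]

print(len(flagged_numbers(single_sim)))  # → 1 (the lone number is flagged)
print(len(flagged_numbers(farm)))        # → 0 (rotation hides the volume)
```

The same total traffic either stands out as one screaming sender or dissolves into thousands of unremarkable ones, which is why detection tends to rely on aggregate and behavioral signals rather than per-number counts.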
That ability to blend in, experts highlight, is what makes SIM farms more than just a criminal nuisance.
While SIM farms may initially be used for financial fraud, their architecture can be easily repurposed for coordinated cyber-physical attacks. That dual-use nature makes them especially appealing to both transnational criminal groups and state-backed intelligence services.
Who Might Be Behind It?
The Secret Service, however, has not publicly attributed the network near the UN to any specific individual or entity. Investigators are weighing several possibilities: a transnational fraud ring exploiting the chaos of UN week to run large-scale scams, or a more concerning scenario where a state-backed group positioned the SIM farm as a contingency tool for disrupting communications in New York.
Officials noted that the operation’s sophistication suggested it was not a low-level criminal endeavor. The hardware was capable of sustained operations against multiple carriers, and its sheer scale — 100,000 SIM cards — far exceeded the typical scale of fraud schemes. That raised the specter of hostile governments probing U.S. vulnerabilities ahead of potential hybrid conflict scenarios.
Analysts note that Russia, China, and Iran have all been implicated in blending criminal infrastructure with state-directed cyber operations. Yet, these setups serve both criminals and nation-states, and attribution requires more details than are publicly available.
“Criminal groups use SIM farms to make money with scams and spam,” said Braun. “State actors can use them on a bigger scale to spy, spread disinformation, or disrupt communications — and sometimes they piggyback on criminal networks.”
One source in the U.S. intelligence community, who spoke on background, described that overlap as “hybrid infrastructure by design.”
“It can sit dormant as a criminal enterprise for years until a foreign government needs it. That’s what makes it so insidious,” the source tells The Cipher Brief.
In Chronister’s view, the “likely explanation is that it’s a sophisticated criminal enterprise.”
“SIM-farm infrastructure is commonly run for profit and can be rented or resold. However, the criminal ecosystem is fluid: nation-states, terrorist groups, or hybrid actors can and do co-opt criminal capabilities when it suits them, and some state-linked groups cultivate close ties with criminal networks,” he said.
The Broader National Security Blind Spot
The incident during the United Nations General Assembly also underscores a growing blind spot in U.S. protective intelligence: telecommunications networks as contested terrain. For decades, federal resources have focused heavily on cybersecurity, counterterrorism, and physical threats. At the same time, the connective tissue of modern communications has often been treated as a commercial domain, monitored by carriers rather than security agencies.
The Midtown bust suggests that assumption no longer holds. The Secret Service itself framed the incident as a wake-up call.
“The potential for disruption to our country’s telecommunications posed by this network of devices cannot be overstated,” stated U.S. Secret Service Director Sean Curran. “The U.S. Secret Service’s protective mission is all about prevention, and this investigation makes it clear to potential bad actors that imminent threats to our protectees will be immediately investigated, tracked down and dismantled.”
However, experts warn that U.S. defenses remain fragmented. Carriers focus on fraud prevention, intelligence agencies monitor foreign adversaries, and law enforcement investigates domestic crime. The seams between those missions are precisely where SIM farms thrive.
Hybrid Warfare and the Next Front Line
The rise of SIM farms reflects the evolution of hybrid warfare, where the boundary between criminal activity and state action blurs, and adversaries exploit commercial infrastructure as a means of attack. Just as ransomware gangs can moonlight as proxies for hostile intelligence services, telecom fraud networks may double as latent disruption tools for foreign adversaries.
Additionally, the threat mirrors patterns observed abroad. In Ukraine, officials have reported Russian operations targeting cellular networks to disrupt battlefield communications and sow panic among civilians. In parts of Africa and Southeast Asia, SIM farms have been linked to both organized crime syndicates and intelligence-linked influence campaigns.
That same playbook, experts caution, could be devastating if applied in the heart of a global city.
“If activated during a crisis, such networks could flood phone lines, including 911 and embassy hotlines, to sow confusion and delay coordination. They can also blast fake alerts or disinformation to trigger panic or misdirect first responders, making it much harder for authorities to manage an already volatile situation,” Chronister said. “Because these setups are relatively cheap and scalable, they are an inexpensive but effective way to complicate emergency response, government decision-making, and even protective details.”
Looking Ahead
The dismantling of the clandestine telecom network in New York may have prevented an imminent crisis, but experts caution that it is unlikely to be the last of its kind. SIM farms are inexpensive to set up, scalable across borders, and often hidden in plain sight. They represent a convergence of cyber, criminal, and national security threats that the U.S. is only beginning to treat as a unified challenge.
When it comes to what needs to be done next, Braun emphasized the importance of “improving information sharing between carriers and government, investing in better tools to spot hidden farms, and moving away from SMS for sensitive logins.”
“Treat SIM farms as a national security threat, not just telecom fraud. Limit access to SIM farm hardware and punish abuse. Help smaller carriers strengthen defenses,” he continued. “And streamline legal steps so takedowns happen faster.”
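Braun’s point about moving away from SMS for sensitive logins usually means adopting app-based one-time passwords, which SIM farms and number-spoofing cannot intercept. A minimal sketch of an RFC 6238 TOTP generator using only the Python standard library; the secret and timestamp below are the RFC’s published test values, not anything drawn from this investigation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp // step)          # 30-second time window
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 seconds.
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_SECRET, timestamp=59))  # 287082
```

Because the code is derived from a shared secret and the clock rather than delivered over the phone network, there is no SMS for a farm to flood, spoof, or harvest.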
Chronister acknowledged that carriers “are much better than they were five or ten years ago, as they’ve invested in spam filtering and fraud analytics,” but cautioned that “attackers can still get through when they rotate SIMs quickly, use eSIM provisioning, or spread activity across jurisdictions.”
“Law enforcement and intelligence have powerful tools, but legal, technical, and cross-border constraints mean detection often outpaces confident attribution and rapid takedown. Make it harder to buy and cycle through SIMs in bulk and strengthen identity verification for phone numbers,” he added. “Require faster, real-time information-sharing between carriers and government during traffic spikes, improve authentication for public alerts, and run regular stress-tests and red-team exercises against telecom infrastructure. Finally, build joint takedown and mutual-assistance arrangements with allies so attackers can’t simply reconstitute operations in another country.”

EXECUTIVE SUMMARY:
Ahead of the U.S. elections, adversaries are weaponizing social media to gain political sway. Russian and Iranian efforts have become increasingly aggressive and overt, while China appears to have taken a more carefully calculated and nuanced approach.
China’s apparent disinformation efforts have little to do with positioning one candidate as preferable to another. Rather, the country’s maneuvers may aim to undermine trust in voting systems, in elections, and in America more generally, amplifying criticism and sowing discord.
In recent months, the Chinese disinformation network, known as Spamouflage, has pursued “advanced deceptive behavior.” It has quietly launched thousands of accounts across more than 50 domains, and used them to target people across the United States.
The group has been active since 2017, but has recently intensified its efforts.
The Spamouflage network’s fake accounts present fabricated identities that sometimes change on a whim. These profiles have been spotted on X, TikTok and elsewhere.
For example, one account, “Harlan,” claimed to be a 29-year-old New York resident and Army veteran, with a profile picture showing a well-groomed young man. A few months later, the account shifted personas, and Harlan suddenly appeared to be a 31-year-old Republican influencer from Florida. At least four different accounts were found to mimic Trump supporters, part of a tactic dubbed “MAGAflage.”
The fake profiles, including the fake photos, may have been generated through artificial intelligence tools, according to analysts.
Accounts have exhibited certain patterns, using hashtags like #American, while presenting themselves as voters or groups that “love America” but feel alienated by political issues that range from women’s healthcare to Ukraine.
In June, one post on X read “Although I am American, I am extremely opposed to NATO and the behavior of the U.S. government in war. I think soldiers should protect their own country’s people and territory…should not initiate wars on their own…” The text was accompanied by an image showing NATO’s expansion across Europe.
Disinformation campaigns that create and weaponize fake profiles, as described above, can make phishing emails far more convincing, because the messages appear to come from credible sources.
That makes it essential for organizations to implement, and for employees to adhere to, advanced verification methods that can confirm the veracity of communications.
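One common verification layer is checking the authentication results (SPF, DKIM, DMARC) that the receiving mail server records for each message. A minimal sketch using the Python standard library; the sample message and header values are hypothetical, and production systems should rely on their mail gateway’s verified verdicts rather than on re-parsing headers themselves:

```python
import email
import re

def auth_results_pass(raw_message: str,
                      required=("spf", "dkim", "dmarc")) -> bool:
    """Return True only if the receiving server recorded a 'pass' for every
    required mechanism in its Authentication-Results header."""
    msg = email.message_from_string(raw_message)
    results = " ".join(msg.get_all("Authentication-Results", []))
    return all(re.search(rf"\b{mech}=pass\b", results, re.I) for mech in required)

# Illustrative message; the domain and verdicts are made up for the sketch.
sample = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass\n"
    "From: sender@example.com\n"
    "Subject: Quarterly report\n"
    "\n"
    "Hello,\n"
)
print(auth_results_pass(sample))                                      # True
print(auth_results_pass(sample.replace("dmarc=pass", "dmarc=fail")))  # False
```

A message whose From: domain fails DMARC, for instance, is exactly the kind of impersonation a fake-persona phishing campaign relies on, and can be quarantined before it reaches an employee.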
Within your organization, if you haven’t already done so, consider implementing such verification measures.
Effectively countering these threats requires a dynamic, multi-dimensional approach, and sustaining one is not easy.