
Loosening the Gordian Knot of Global Terrorism: Why Legitimacy Must Anchor a Counterterrorism Strategy

23 January 2026 at 14:19

OPINION The global terrorism landscape in 2026 — the 25th anniversary year of the 9/11 terrorist attacks — is more uncertain, hybridized, and combustible than at any point since 9/11. Framing a sound U.S. counterterrorism strategy — especially in the second year of a Trump administration — will require more than isolated strikes against ISIS in Nigeria, punitive counterterrorism operations in Syria, or a tougher rhetorical posture.

A Trump administration counterterrorism strategy will require legitimacy: the domestic, international, and legal credibility that leverages a wide range of counterterrorism tools while engendering international counterterrorism cooperation. Without legitimacy, even tactically successful counterterrorism operations risk becoming illusory, politicized, and ultimately self-defeating.

The terrorist threat landscape

Extremist violence no longer conforms to clean ideological lines. Terrorist objectives and drivers are muddled and still evolving, often in ways that are hard to parse. There is little ideological purity among those radicalizing in today’s extremist milieu.

At the same time, state-directed intelligence officers increasingly behave like terrorists. Russian intelligence-linked sabotage plots blur the line between terrorism and hybrid warfare. Islamic Revolutionary Guard Corps officers provide hands-on training to Lebanese Hizballah commanders. Addressing these kinds of risks requires legitimacy, too, especially among allies whose intelligence cooperation, legal authorities, and public support are indispensable.

Nowhere is this threat picture more tenuous than in the Middle East. Hamas’s October 7, 2023, attacks triggered a profound rebalancing of power in the region. Yet, Syria remains unfinished business. Power vacuums there invite foreign jihadists, threaten Israel's border communities, and create future opportunities for Iranian influence to rebound.

A modest but persistent U.S. presence in Syria with a friendly Ahmed al-Sharaa-led government remains a strategic hedge against an Islamic State resurgence, and is a strong signal of U.S. commitment that helps sustain partner confidence. The U.S. counterterrorism presence and alignment with al-Sharaa are not without risks, though: in December, three Americans were killed by a lone ISIS gunman in central Syria. The country is, and will continue to be, plagued by sectarianism and terrorism, which means that restoring control over a deeply fractured Syria remains fraught.

Taken together, the current transnational terrorism threat landscape is volatile and difficult to predict, a challenge compounded by resource constraints. In such an environment, legitimacy becomes a force multiplier. A belief that America is a ‘force for good’, credible messaging, and confidence that U.S. government action is just can go a long way.

This is not an abstract concern. Terrorism today thrives in contested information environments, polarized societies, and fragile states. In short, transnational jihadist networks now coexist with domestic violent extremists and online radicalization ecosystems that blur the line between terrorism, insurgency, and hybrid warfare. Terrorist propaganda continues to resonate with individuals in the West, especially younger generations who radicalize online. In this environment, legitimacy is no longer a secondary benefit of sound strategy—it is a core guiding principle.

The Trump administration's counterterrorism approach

We are still looking for clarity on the trajectory of Trump 2.0 counterterrorism efforts. It is premature to assess a strategy that has yet to be formally articulated, and many in the counterterrorism community eagerly await its release. History offers a useful reminder. The first Trump administration did not publish its National Strategy for Counterterrorism until its second year. When it appeared in 2018, critics and supporters alike acknowledged that it reflected professional judgment rather than ideological excess. That document recognized terrorism’s evolution and called for strengthening counterterrorism partnerships not only within the U.S. government but abroad as well, with a range of longstanding allies.

What gave that strategy durability was its legitimacy. Authorities were grounded in law, threat assessments were evidence-based, policies were stress-tested for faulty assumptions, and foreign partnerships were treated as strategic assets rather than transactional relationships.

When the Biden administration publicly released a set of redacted rules secretly issued by President Trump in 2017 for counterterrorism operations — such as “direct action” strikes and special operations raids outside conventional war zones — those guidelines explicitly acknowledged the power of legitimacy. Counterterrorism succeeds when allies trust the U.S., and the American public believes force is used proportionately and lawfully.

That legacy of trust matters now more than ever, given signals that a second Trump administration could overcorrect on its counterterrorism priorities by redirecting resources toward far-left extremist groups such as the Turtle Island Liberation Front (TILF) or Antifa, while downplaying far-right extremism—or being distracted from the more dangerous terrorism threats from ISIS and other violent jihadists. As the world recently witnessed during the holidays, from Bondi Beach to Syria, ISIS remains a threat. Far-left terrorism in the U.S. is on the rise, but far-right terrorism has accounted for greater lethality. And still, after 25 years, it is ISIS and al-Qa’ida that remain the most persistent and enduring transnational terrorist threats against U.S. interests.

The Trump National Security Strategy

It’s concerning that the recently published National Security Strategy (NSS) addresses transnational terrorism only tepidly, yet notably links terrorism with cross-border threats and hemispheric cooperation against “narco-terrorists,” blurring the traditional separation between transnational organized crime and terrorism.

Still, the Trump administration’s emphasis on drug cartels is justifiable, provided it does not detract from broader counterterrorism objectives, such as ISIS or the hybridizing terrorist threats that continue to emerge. Commentators claim, however, that the administration is already losing sight of the ISIS and al-Qa’ida threats, though settling that debate here is quixotic at best — only time will tell.

Beyond jihadist threats, the U.S. does not need the unintended consequences and risks of triggering a cycle of cartel retaliation – or provoking greater far-left violence – in the U.S. homeland down the line.

Contrastingly, the 2017 National Security Strategy saw radical Islamist terrorism as one of the priority transnational threats that could undermine U.S. security and stability. The strategy highlighted groups such as ISIS and al-Qa’ida as continuing dangers, stressing that terrorists had taken control of parts of the Middle East and remained a threat globally.


Addressing transnational terrorism during the first Trump administration required discipline and steadiness amid predictable frictions at the National Security Council (NSC) among policymakers who wanted a more rapid shift toward other priorities, such as great power competition. Still, terrorist labeling and designations, strategic messaging, and resource allocation for counterterrorism were grounded in evidence rather than politics.

So, overhyping some threats while minimizing others undermines legitimacy, invites backlash, and weakens the very moral authority needed to operationalize a cogent, thoughtful national security strategy. It also erodes trust between the government and the public and leads citizens to second-guess whether they are being told the truth or being led astray. The 2017 NSS carried weight precisely because it was grounded in intelligence, not politics. Moreover, the NSS helped frame the counterterrorism strategy that followed and proved highly effective in keeping Americans safe.

Drawing lessons from the 2018 National Strategy for Counterterrorism

The 2018 National Strategy for Counterterrorism (NSCT) remains a useful foundation for the second Trump administration—not because the world is unchanged, but because it embraced balance. The strategy emphasized foreign partnerships, non-military tools, and targeted direct action when necessary. It recognized a central legitimacy principle: the United States cannot and should not fight every terrorist everywhere with American troops when capable counterterrorism partners can do so in their own backyards, with local consent, and a more granular understanding of the grievances that motivate these terrorist groups and their supporters.

And still, U.S. counterterrorism pressure through direct action remains a necessary tool to disrupt terrorist plotting. The second Trump administration appears to be following the first’s playbook of aggressive kinetic counterterrorism strikes in places like Somalia, Yemen, and Iraq.

President Trump rescinded Biden-era limits on counterterrorism drone strikes, allowing the kind of flexible operational framework used for counterterrorism throughout the President’s first term. Thus far, in the aggressive counter-narcotic campaign in international waters off Venezuela, the standoff U.S. strikes resemble counterterrorism operations in Yemen and Somalia during the first Trump administration. Operationally, direct action remains an indispensable counterterrorism tool for disrupting terror groups overseas, and more U.S. direct action will likely be necessary in West Africa and the Sahel to keep jihadist groups operating there off balance, forcing them to devote more time and resources to operational security.

But pressure without legitimacy is counterproductive. What works against jihadist networks does not necessarily translate cleanly to drug cartels or transnational criminal gangs. Policymakers must therefore be circumspect: expanding the scope of counterterrorism authorities and terrorist designations to cover drug cartels risks triggering destabilizing cycles of violence and straining more traditional counterterrorism resources.

Coming full circle, in light of the U.S. capture of Nicolás Maduro for narcoterrorism-related offenses, the idea of legitimacy will be fiercely debated in the days and weeks ahead. If the Trump National Security Strategy is the roadmap for focusing on narcoterrorism in the Western Hemisphere, then publishing a clarifying and rational U.S. counterterrorism strategy for the rest of the world takes on an even greater sense of urgency.

Pushing a boulder uphill

Drawing on past counterterrorism lessons to craft a comprehensive strategy—from the Bush administration’s wartime footing, through eight years of Obama counterterrorism work, to President Trump’s "war on terror" — is a Sisyphean task. But, in the wake of over two decades of relentless overseas counterterrorism work, a few ideas have come into sharper focus:

After more than two decades of counterterrorism, loosening the Gordian knot of modern terrorism requires balance, far greater clarity, and consistent, predictable national leadership.

Above all, counterterrorism strategy requires legitimacy. Without it, counterterrorism becomes reactive and politicized. With it, a Trump 2.0 counterterrorism strategy can still be firm, flexible, and credible in a far more dangerous world.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.

Have a perspective to share based on your experience in the national security field? Send it to Editor@thecipherbrief.com for publication consideration.

Read more expert-driven national security insights, perspective and analysis in The Cipher Brief, because national security is everyone’s business.

Judicial Rackets: Judge Rakoff and the Fear of Monetary Exit


Judge Jed Rakoff’s essay It’s a Racket! reads less like analysis than confession.

He opens with a dictionary definition of cryptocurrency and proceeds to explain why systems that operate outside government control are dangerous. This framing reveals the core assumption beneath the essay: money is legitimate only when sanctioned, supervised, and reversible at the discretion of the state.

Bitcoin exists because that assumption failed.

The Genesis Block of the Bitcoin blockchain embeds a newspaper headline from The Times of January 3, 2009, “Chancellor on brink of second bailout for banks,” referencing the bank bailouts of the 2008 financial crisis. It marks the moment the modern financial system exposed itself as a closed hierarchy enforced by regulation, complexity, and rescue. Losses were socialized. Accountability vanished. Courts enforced the aftermath.
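That headline is permanently recoverable from the chain itself. As a minimal sketch, assuming only the Python standard library, the following decodes the human-readable text from the genesis block’s well-known coinbase scriptSig bytes, reproduced here:

```python
# Decode the message embedded in Bitcoin's genesis coinbase transaction.
# The hex below is the genesis block's coinbase scriptSig as recorded on-chain.
GENESIS_SCRIPTSIG_HEX = (
    "04ffff001d010445"
    "5468652054696d65732030332f4a616e2f32303039204368616e63656c6c6f72"
    "206f6e206272696e6b206f66207365636f6e64206261696c6f757420666f7220"
    "62616e6b73"
)

raw = bytes.fromhex(GENESIS_SCRIPTSIG_HEX)
# Layout: a 4-byte difficulty push (04 ffff001d), an extra-nonce push (01 04),
# then a 69-byte (0x45) push containing the ASCII headline.
assert raw[7] == 0x45                     # length prefix of the text push
print(raw[8:8 + raw[7]].decode("ascii"))
# -> The Times 03/Jan/2009 Chancellor on brink of second bailout for banks
```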

Bitcoin was created to exit that system.

Rakoff repeatedly treats “crypto” as a monolith, collapsing decentralized networks, centralized frauds, meme tokens, and algorithmic stablecoins into a single object of derision. This is not analysis; it is rhetorical convenience. The Terraform Labs fraud he describes depended on secrecy, centralization, and false representations — the very features Bitcoin was designed to eliminate.

Rakoff describes Bitcoin as gambling “untethered to economic reality.” But his definition of economic reality is faith-based: central bank discretion, elastic supply, and institutional trust. Bitcoin rejects those premises. It imposes a fixed supply. It makes monetary debasement impossible. It exposes failure instead of masking it.
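The fixed supply the authors invoke is not a promise but an arithmetic consequence of the issuance schedule: the block subsidy starts at 50 BTC and halves every 210,000 blocks, computed in integer satoshis. Here is a minimal sketch of that calculation, an illustration of the consensus rule rather than Bitcoin Core’s actual source:

```python
# Approximate Bitcoin's hard supply cap from its issuance schedule.
# Consensus code works in integer satoshis (1 BTC = 100,000,000 sat) and
# halves the subsidy with a right-shift, which is what makes the cap final.
HALVING_INTERVAL = 210_000
subsidy_sat = 50 * 100_000_000            # initial block subsidy

total_sat = 0
while subsidy_sat > 0:
    total_sat += HALVING_INTERVAL * subsidy_sat
    subsidy_sat >>= 1                     # halve, rounding down

print(total_sat / 100_000_000)            # -> 20999999.9769, just under 21M BTC
```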

That is why central planners hate it.

I watched the regulated financial system collapse in 2008 from inside a New York law firm. The catastrophe occurred not in unregulated back alleys but in the most supervised institutions on earth. When it ended, almost no one responsible was punished. Courts enforced the settlements. Central banks created money to paper over the wreckage.

Bitcoin refuses that bargain.

Rakoff leans heavily on blockchain surveillance claims asserting vast criminality. These claims rest on inference, not proof. The surveillance industry is unregulated, unvalidated, and commercially motivated. Yet courts increasingly treat its output as scientific fact. This is junk science with a badge.

The Silk Road prosecutions revealed the real anxiety. Ross Ulbricht proved Bitcoin was money. Goods and services could be exchanged without permission from banks or governments. His punishment was exemplary, not proportional. It was meant to deter autonomy.

Courts have always played this role. They enforced slavery. They upheld internment. They validated sterilization. They ratified segregation. Judicial neutrality is a myth told by the winners of each era.

Rakoff laments that regulation of cryptocurrency is being scaled back. What he calls deregulation, others call recognition: that Bitcoin cannot be regulated into submission without destroying the liberties it restores.

Tens of millions of Americans now hold Bitcoin. Institutions that once mocked it now custody it. A political constituency has formed around monetary sovereignty. That constituency is done asking for permission.

Rakoff calls Bitcoin a racket because it escapes the racket he knows: discretionary money, regulatory capture, and judicial enforcement of economic orthodoxy.

Bitcoin does not ask courts for legitimacy. It derives legitimacy from use.

The Genesis Block was not a marketing flourish. It was a declaration. The old system failed. A new one appeared. Courts can sneer, but code does not care.

This article is a Take. Opinions expressed are entirely the author’s and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

This post Judicial Rackets: Judge Rakoff and the Fear of Monetary Exit first appeared on Bitcoin Magazine and is written by Tor Ekeland and Michael Hassard.

Everyone wants AI sovereignty. No one can truly have it.

By: Cathy Li
21 January 2026 at 09:00

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: Covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.

But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.  

If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to one of orchestration, balancing national autonomy with strategic partnership.

Why infrastructure-first strategies hit walls 

A November survey by Accenture found that 62% of European organizations are now seeking sovereign AI solutions, driven primarily by geopolitical anxiety rather than technical necessity. That figure rises to 80% in Denmark and 72% in Germany. The European Union has appointed its first Commissioner for Tech Sovereignty. 

This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays. 

And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.  

What works: An orchestrated sovereignty

What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape. 

The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them. 

Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete. 

Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size. 

South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.  

Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks, foreign GPU architectures, and foreign lithography equipment, such as the extreme ultraviolet systems needed to manufacture advanced chips, shows the limits of techno-nationalism.

The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone. 

Three ways to align ambition with reality 

1.  Measure added value, not inputs.  

Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.  

Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.  

2. Cultivate a strong AI innovation ecosystem. 

Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.   

3. Build global partnerships.  

Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.  

What’s at stake 

Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete. 

The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter. 

The winners will be those who define sovereignty not as separation, but as participation plus leadership—choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it is achievable, and it will separate the leaders from the followers over the next decade. 

The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing. 

Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.

The Long Arc Of American Power

20 January 2026 at 08:33

OPINION — “We [the U.S.] began as a sliver of a country and next thing you know we're a continental power, and we did not do that primarily through our great diplomacy and our good looks and our charm. We did that primarily by taking the land from other people.”

That was Michael O’Hanlon, the Brookings Institution’s Director of Research in the Foreign Policy program, speaking January 12, about his new book, To Dare Mighty Things: U.S. Defense Strategy Since the Revolution, on a panel with retired Gen. David Petraeus and historian Robert Kagan.

O’Hanlon continued, “Now, this is not a revisionist history that's meant to beat up on the United States for having become a world power, because if we hadn't done that, if we hadn't become this continental power, then we could never have prevailed in the World Wars…The world would have been a much worse place and we could never have played the role we did in the Cold War and at least up until recent times, the post-Cold-War world. So generally speaking, I'm glad for this American assertiveness, but to me, it's striking just how little we understand that about ourselves.”

Listening to that event eight days ago at Brookings, and looking around at what the Trump administration is doing at home and abroad today, I thought elements of what I heard from these three were worth repeating and reviewing.

For example, O’Hanlon pointed out that a great deal of U.S. grand strategy and national security thinking took place during historic periods considered times of American isolationism and retrenchment.

O’Hanlon said, “A lot of the institutional machinery, a lot of the intellectual and leadership development capability of the United States began in this period starting in the late 19th century and accelerating into the interwar years [1918 to 1941]. And without that, we would not have had the great leaders like [Gen. Dwight D.] Eisenhower, and [Gen. George C.] Marshall, trained in the way they were. I think that made them ready for World War II.”

He added, “We would not have had many of the innovations that occurred in this period of time -- so whether it's [Rear Admiral William A.] Moffett and [Navy] air power and [aircraft] carrier power, [Army Brig. Gen.] Billy Mitchell and the development of the Army Air Corps, [Marine Maj. Gen. John A.] Lejeune and the thinking about amphibious warfare. A lot of these great military leaders and innovators were doing their thing in the early decades of the 20th century, including in the interwar years, in ways that prepared us for all these new innovations, all these new kinds of operations that would prove so crucial in World War II.”

“To me it's sort of striking,” O’Hanlon said, “how quickly we got momentum in World War II, given how underprepared we were in terms of standing armies and navies and capabilities. And by early 1943 at the latest, I think we're basically starting to win that war, which is faster than we've often turned things around in many of our conflicts in our history.”

Kagan, a Brookings senior fellow and author of the 2012 book The World America Made, picked up on American assertiveness. “Ideologically, the United States was expansive,” Kagan said, “We had a universalist ideology. We got upset when we saw liberalism being attacked, even back in the 1820s. You know, a lot of Americans wanted to help the Greek rebellion [against the Ottoman Empire]. The world was very ideological in the 19th century and we saw ourselves as being on the side of liberalism and freedom versus genuine autocracies like Russia and Austria and Prussia. And so we always had these sympathies. Now everybody would say wait a second it's none of our business blah blah blah blah, but nevertheless the general trend was we cared.”

Kagan went on, “People keep doing things out there that we're finding offensive in one way or another. And so we're like wanting to do something about it. So then we get dragged into, [or] we drag ourselves into these conflicts and then we say, ‘Wait a second, we're perfectly safe here [protected east and west by the Atlantic and Pacific Oceans]. Why are we involved in all this stuff?’ And then we want to come back. And so this tension between our essential security on the one hand and…our kind of busybodyness in the world has just been a constant -- and I think explains why we have vacillated in terms of our military capability.”

Petraeus began by saying, “I'm a soldier not a historian here,” and then defended some past U.S. interventions as “basically when we've been attacked,” citing Pearl Harbor and ships being sunk in the Atlantic. He added, “Sometimes it's and/or when we fear hostile powers, especially if they're aligned, as it was during the Cold War with the communists, or now arguably with China and/or Russia or both taking control of again Eurasia, Southeast Asia, East Asia.”

Petraeus admitted, “We have sometimes misread that. You can certainly argue that Vietnam was arguably more nationalist [North Vietnamese seeking independence from France] maybe than it was communist. But that I think still applies. I think one of the motivations with respect to [Venezuelan President Nicolas] Maduro is that they [the Maduro Venezuelan leadership] were more closely than ever aligning with China, Iran to a degree, Russia and so forth. And we've seen that play out on a number of occasions as well.”


Petraeus, who played several roles in Iraq, said the U.S. had “to be very measured in what your objectives are if you're going to use force, and…try to avoid boots-on-the-ground. If they're going to be on the ground, then employ advise, assist, and enable operations where it's the host nation forces or partner forces that are on the front lines rather than Americans.”

Looking back, Petraeus said, “I think we were unprepared definitely intellectually for these operations after toppling regimes in Iraq and Afghanistan and not just [in] the catastrophically bad post-conflict phase,” citing “horrific decisions to fire the entire Iraqi military without telling them what their future was. And then firing the Baath Party down to the level of bureaucrats. That meant that tens of thousands [of Iraqis] without an agreed reconciliation process are literally cast out. And by the way, they're the bureaucrats that we needed to actually help us run a country [Iraq] we didn't sufficiently understand.”

Describing another lesson learned, Petraeus said, “In looking back on Afghanistan, trying to distill what happened, what we did wrong, what we did right, I really concluded that we were never truly committed to Afghanistan nation building. Rather, we were repeatedly committed to exiting. And that was a huge challenge [for the 20 years the U.S. was there], because if you tell the enemy that you're going to draw down on a given date, during the speech in which you announce a buildup, [it] really undermines the enemy's sense of your will in what is a contest of wills at the end of the day. Not saying that we didn't want to draw down, but to do it according to the right conditions. And of course then the other challenge was that the draw-down became much more based on conditions in Washington than it did on conditions in Afghanistan, which is again another pretty fatal flaw.”

Kagan gave his view on past American interventions with U.S. troops in foreign countries, and tied them sharply to today’s situation, not only in Caracas, but also in Washington. “You know, the United States did not go to war in Iraq to promote democracy despite the vast mythology that has grown up about that,” Kagan began.

He then continued, “It was primarily fear of security. Saddam was a serial aggressor. He certainly was working on weapons of mass destruction. Rightly or wrongly that was the primary motive [of the George W. Bush administration]. But then Americans, as [is] always the case, and you know, all you have to do is look at what we did in Germany after World War II, what we did in Japan after World War II. Americans never felt very comfortable about moving into some country, taking it over for whatever reason and then turning it over to some dictator. We wanted to be able to say that we left something like democratic governance behind. Until now that has been such a key element of our self-perception and our character.”


Kagan said the Bush administration sent U.S. troops into Iraq “not because we were dying to send troops into Iraq, but because we had concluded you cannot control countries from the air. And so now [with Venezuela] we’re back in that mode.”

But here, Kagan gave his view of an important change from the past. He said, “So here's what's different. We did not want to leave in Iraq Saddam's number two. Go ahead, take over. In Venezuela, we've gone after a regime head…[but] this isn't regime change. This is decapitation and now we've turned it over to the next, you know, part of the Maduro regime and said you take care of it. We'll run it, but you take care of it. That is a departure from American history and I think it is directly a consequence of the fact that for the first time I can say without any doubt we do not have a president who believes in the American principles of liberalism, but is actively hostile to them here in the United States as well as internationally. He is on the side of anti-liberalism. He is on the side of authoritarianism, both here and abroad. To my mind, the question is not ‘do we intervene in Latin America?’ Yes, we do. But for what purpose? And I think that is the huge break [from the past] that we're witnessing right now.”

To my mind, and to others, Kagan has it right. President Trump, facing political problems at home – affordability, the Epstein files, the upcoming November House and Senate elections – has tried to project expanding power abroad. Based on past successes in bombing Iran’s nuclear sites and removing Maduro from Venezuela, Trump wants to absorb Greenland, send U.S. forces into Mexico after drug cartels, and threaten to attack the faltering regime in Iran.

Let me add a final element to Trump’s current eagerness to show power abroad. The one thing he doesn’t want is the death of any U.S. military personnel he sends into harm’s way. Trump and his top aides have repeatedly pointed out that, whether it was blowing up narco-trafficking boats, the Iran bombing, or the Maduro snatch, no American lives were lost.


Why Labeling Muslim Brotherhood “Chapters” as Terrorist Groups Is Problematic

14 January 2026 at 09:00

OPINION — The White House this past November issued a Presidential action statement designating certain Muslim Brotherhood “chapters” as terrorist organizations. On Tuesday, the U.S. State Department and U.S. Treasury Department announced the designations of the Lebanese, Jordanian, and Egyptian chapters of the Muslim Brotherhood as terrorist organizations. The Egyptian and Jordanian chapters received a Specially Designated Global Terrorist (SDGT) designation. The Lebanese chapter received both the SDGT designation and a Foreign Terrorist Organization (FTO) designation.

In the spring of 2019, Washington, responding to mounting pressure by Egyptian President Abdel Fattah al-Sisi, decided to brand the Egyptian Muslim Brotherhood (MB) a terrorist organization. There was no mention of “chapters” outside Egypt.

Having followed the MB and interviewed many of its members for years during my government service, I published an article in 2019 questioning the underlying assumptions of the plan. This article is a revised version of my 2019 piece.

I argued in the 2019 piece that the administration’s decision at the time did not reflect a deep knowledge of the origins of the Muslim Brotherhood and its connection to Muslim societies and political Islam.

In the fall of 2025, the leaders of the United Arab Emirates, Jordan, Bahrain, and Lebanon pressured the administration to label the MB a terrorist group.


Context

The Egyptian Muslim Brotherhood was founded by schoolteacher Hassan al-Banna in 1928 in response to two fundamental realities. First, Egypt was under the influence of British colonialism, embodied in the massive British military presence near the Suez Canal. Second, the MB’s founder believed that, under the influence of the corrupt pro-Western monarchy led first by King Fuad and later by his son King Faruk, Muslim Egypt was drifting away from Islam. Egypt, of course, is home to Al-Azhar University, the oldest Muslim academic center of learning in the world.

In addition, Al-Azhar University represents the philosophical and theological thought of the three major Schools of Jurisprudence in Sunni Islam—the Hanafi, the Maliki, and the Shafi’i Schools. The fourth and smallest School of Jurisprudence—the Hanbali—is embodied in the Wahhabi-Salafi doctrine and is prevalent in Saudi Arabia.

Al-Banna’s two founding principles were: a) Islam is the solution to society’s ills (“Islam hua al-Hal”), and b) Islam is a combination of Faith (Din), Society (Dunya) and State (Dawla). He believed, correctly for the most part, that these principles, especially the three Arabic Ds, underpin all Sunni Muslim societies, other than perhaps the adherents of the Hanbali School.

In the past 98 years, the Muslim Brotherhood has undergone several iterations, from eschewing politics, to accepting the authority of Muslim rulers, to declaring war against some of them, to participating in the political process through elections.

Certain MB thinkers and leaders over the past nine decades, including the Egyptian Sayyid Qutb, the Syrian Muhammad Surur, and the Palestinian Abdullah Azzam, adopted a radical violent view of Islamic jihad and either allied themselves with some Wahhabi clerics in Saudi Arabia or joined al-Qa’ida. The organization itself generally stayed away from violent jihad. Consequently, it would make sense to label certain leaders or certain actions as terrorist but not the entire group or the different Islamic political parties in several countries.


In the early 1990s, the Egyptian MB rejected political violence and declared its support for peaceful gradual political change through elections, and in fact participated in several national elections. While Islamic Sunni parties in different countries adopted the basic theological organizing principles of the MB on the role of Islam in society, they were not “chapters” of the MB.

They are free-standing Islamic political groups and movements, legally registered in their countries, that often focus on economic, health, and social issues of concern to their communities. They are not tied to the MB in command, control, or operations.

Examples of these Sunni Islamic political parties include the AKP in Turkey, the Islamic Action Front in Jordan, Justice and Development in Morocco, al-Nahda in Tunisia, the Islamic Constitutional Movement in Kuwait, the Islamic Movement (RA’AM) in Israel, PAS in Malaysia, PKS in Indonesia, the Islamic Party in Kenya, and the National Islamic Front in Sudan.

During my government career, my analysts and I spent years in conversations with representatives of these parties with an eye toward helping them moderate their political positions and encouraging them to enter the mainstream political process through elections. In fact, most of them did just that. They won some elections and lost others, and in the process, they were able to recruit thousands of young members.

Based on these conversations, we concluded that these groups were pragmatic, mainstream, and committed to the dictum that electoral politics was a process, and not “one man, one vote, one time.” Because they believed in the efficacy and value of gradual peaceful political change, they were able to convince their fellow Muslims that a winning strategy at the polls was to focus on bread-and-butter issues, including health, education, and welfare, that were of concern to their own societies. They projected to their members a moderate vision of Islam.

Labeling the Muslim Brotherhood and other mainstream Sunni Islamic political parties as terrorist organizations could radicalize some of these parties’ youth and lead them to opt out of electoral politics. Some party leaders would become reticent to engage with American diplomats, intelligence officers, and other officials at U.S. embassies.

Washington inadvertently would be sending a message to Muslim youth that the democratic process and peaceful participation in electoral politics are a sham, which could damage American national security and credibility in many Muslim countries.


AZAK develops futuristic wheel-centric unmanned ground vehicle

13 January 2026 at 03:48
Denver-based AZAK is positioning itself at the center of a shift in unmanned ground mobility with a chassis-free vehicle built around an autonomous wheel system rather than a traditional frame. In place of axles, drivetrains, and a fixed body, the company designed each wheel as a self-contained power unit that houses the motor, battery, and […]

Why some “breakthrough” technologies don’t work out

12 January 2026 at 06:15

Every year, MIT Technology Review publishes a list of 10 Breakthrough Technologies. In fact, the 2026 version is out today. This marks the 25th year the newsroom has compiled this annual list, which means its journalists and editors have now identified 250 technologies as breakthroughs. 

A few years ago, editor at large David Rotman revisited the publication’s original list, finding that while all the technologies were still relevant, each had evolved and progressed in often unpredictable ways. I lead students through a similar exercise in a graduate class I teach with James Scott for MIT’s School of Architecture and Planning. 

We ask these MIT students to find some of the “flops” from breakthrough lists in the archives and consider what factors or decisions led to their demise, and then to envision possible ways to “flip” the negative outcome into a success. The idea is to combine critical perspective and creativity when thinking about technology.

Although it’s less glamorous than envisioning which advances will change our future, analyzing failed technologies is equally important. It reveals how factors outside what is narrowly understood as technology play a role in its success—factors including cultural context, social acceptance, market competition, and simply timing.

In some cases, the vision behind a breakthrough was prescient but the technology of the day was not the best way to achieve it. Social TV (featured on the list in 2010) is an example: Its advocates proposed different ways to tie together social platforms and streaming services to make it easier to chat or interact with your friends while watching live TV shows when you weren’t physically together. 

This idea rightly reflected the great potential for connection in this modern era of pervasive cell phones, broadband, and Wi-Fi. But it bet on a medium that was in decline: live TV. 

Still, anyone who had teenage children during the pandemic can testify to the emergence of a similar phenomenon—youngsters started watching movies or TV series simultaneously on streaming platforms while checking comments on social media feeds and interacting with friends over messaging apps. 

Shared real-time viewing with geographically scattered friends did catch on, but instead of taking place through one centralized service, it emerged organically on multiple platforms and devices. And the experience felt unique to each group of friends, because they could watch whatever they wanted, whenever they wanted, independent of the live TV schedule.

Evaluating the record

Here are a few more examples of flops from the breakthroughs list that students in the 2025 edition of my course identified, and the lessons that we could take from each.

The DNA app store (from the 2016 list) was selected by Kaleigh Spears. It seemed like a great deal at the time—a startup called Helix could sequence your genome for just $80. Then, in the company’s app store, you could share that data with third parties that promised to analyze it for relevant medical info, or make it into fun merch. But Helix has since shut down the store and no longer sells directly to consumers.

Privacy concerns and doubts about the accuracy of third-party apps were among the main reasons the service didn’t catch on, particularly since there’s minimal regulation of health apps in the US. 

[Image: a Helix flow cell. Credit: Helix]

Elvis Chipiro picked universal memory (from the 2005 list). The vision was for one memory tech to rule them all—flash, random-access memory, and hard disk drives would be subsumed by a new method that relied on tiny structures called carbon nanotubes to store far more bits per square centimeter. The company behind the technology, Nantero, raised significant funds and signed on licensing partners but struggled to deliver a product on its stated timeline.

Nantero ran into challenges when it tried to produce its memory at scale because tiny variations in the way the nanotubes were arranged could cause errors. It also proved difficult to upend memory technologies that were already deeply embedded within the industry and well integrated into fabs.  

Light-field photography (from the 2012 list), chosen by Cherry Tang, let you snap a photo and adjust the image’s focus later. You’d never deal with a blurry photo ever again. To make this possible, the startup Lytro had developed a special camera that captured not just the color and intensity of light but also the angle of its rays. It was one of the first cameras of its kind designed for consumers. Even so, the company shut down in 2018.
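The mechanism is easier to see in code. A light-field camera records many sub-aperture views of the same scene, one per ray direction, and refocusing afterward is classically done by “shift-and-add”: translate each view in proportion to its angular offset and average. The sketch below is a toy version of that textbook algorithm, not Lytro’s proprietary pipeline, with the slope alpha selecting the focal plane:

```python
# Toy "shift-and-add" refocusing of a 4D light field L[u, v, y, x]:
# rays from the chosen depth line up and reinforce; everything else blurs.
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """light_field has shape (U, V, H, W); returns a refocused (H, W) image."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - (U - 1) / 2)))  # shift proportional
            dx = int(round(alpha * (v - (V - 1) / 2)))  # to the view offset
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)   # stand-in for a captured light field
near = refocus(lf, alpha=1.0)       # focus on one depth plane...
far = refocus(lf, alpha=-1.0)       # ...or another, from the same capture
```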

[Image: Lytro’s unique light-field camera was ultimately not successful with consumers. Credit: public domain/Wikimedia Commons]

Ultimately, Lytro was outmatched by well-established incumbents like Sony and Nokia. The camera itself had a tiny display, and the images it produced were fairly low resolution. Readjusting the focus in images using the company’s own software also required a fair amount of manual work. And smartphones—with their handy built-in cameras—were becoming ubiquitous. 

Many students over the years have selected Project Loon (from the 2015 list)—one of the so-called “moonshots” out of Google X. It proposed using gigantic balloons to replace networks of cell-phone towers to provide internet access, mainly in remote areas. The company completed field tests in multiple countries and even provided emergency internet service to Puerto Rico during the aftermath of Hurricane Maria. But the company shut down the project in 2021, with Google X CEO Astro Teller saying in a blog post that “the road to commercial viability has proven much longer and riskier than hoped.” 

Sean Lee, from my 2025 class, saw the reason for its flop in the company’s very mission: Project Loon operated in low-income regions where customers had limited purchasing power. There were also substantial commercial hurdles that may have slowed development—the company relied on partnerships with local telecom providers to deliver the service and had to secure government approvals to navigate in national airspaces. 

""
One of Project Loon’s balloons on display at Google I/O 2016.
ANDREJ SOKOLOW/PICTURE-ALLIANCE/DPA/AP IMAGES

While this specific project did not become a breakthrough, the overall goal of making the internet more accessible through high-altitude connectivity has been carried forward by other companies, most notably Starlink with its constellation of low-orbit satellites. Sometimes a company has the right idea but the wrong approach, and a firm with a different technology can make more progress.

As part of this class exercise, we also ask students to pick a technology from the list that they think might flop in the future. Here, too, their choices can be quite illuminating. 

Lynn Grosso chose synthetic data for AI (a 2022 pick), which means using AI to generate data that mimics real-world patterns for other AI models to train on. Though it’s become more popular as tech companies have run out of real data to feed their models, she points out that this practice can lead to model collapse, with AI models trained exclusively on generated data eventually breaking the connection to data drawn from reality. 
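The failure mode she describes can be reproduced in miniature. In the spirit of the model-collapse literature, the following toy loop fits a “model” (here just a Gaussian) to data, samples synthetic data from it, refits on the samples, and repeats; estimation error compounds, so the fitted distribution drifts away from the original, and with small samples the variance visibly decays:

```python
# Toy illustration of model collapse: each generation is trained only on
# the previous generation's output, never on the original data again.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # the only "real" data

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()           # "train" on current data
    data = rng.normal(mu, sigma, size=100)        # successor sees samples only
    if generation % 10 == 0:
        print(f"generation {generation:2d}: fitted sigma = {sigma:.3f}")
# The fitted sigma performs a drifting random walk away from the true value
# of 1.0, tending toward collapse -- the link to the real data erodes.
```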

And Eden Olayiwole thinks the long-term success of TikTok’s recommendation algorithm (a 2021 pick) is in jeopardy as awareness grows of the technology’s potential harms and its tendency, as she puts it, to incentivize creators to “microwave” ideas for quick consumption. 

But she also offers a possible solution. Remember—we asked all the students what they would do to “flip” the flopped (or soon-to-flop) technologies they selected. The idea was to prompt them to think about better ways of building or deploying these tools. 

For TikTok, Olayiwole suggests letting users indicate which types of videos they want to see more of, instead of feeding them an endless stream based on their past watching behavior. TikTok already lets users express interest in specific topics, but she proposes taking it a step further to give them options for content and tone—allowing them to request more educational videos, for example, or more calming content. 
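As a purely hypothetical sketch of what that flip could look like (the names, topics, and weights below are invented for illustration; TikTok’s real ranking system is not public), explicit user preferences can be blended into an engagement-driven score as simple multipliers:

```python
# Hypothetical re-ranking: multiply the behavioral engagement score by a
# weight the user set explicitly (e.g. "more educational, less hype").
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str                # e.g. "educational", "calming", "entertainment"
    engagement_score: float   # what a purely behavioral model would rank by

def rerank(videos: list[Video], prefs: dict[str, float]) -> list[Video]:
    # prefs maps a topic to a user-chosen multiplier (1.0 = neutral).
    return sorted(videos,
                  key=lambda v: v.engagement_score * prefs.get(v.topic, 1.0),
                  reverse=True)

feed = [Video("hype clip", "entertainment", 0.9),
        Video("how batteries work", "educational", 0.6)]
print(rerank(feed, {"educational": 2.0, "entertainment": 0.5}))
# The educational video now outranks the higher-engagement hype clip.
```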

What did we learn?

It’s always challenging to predict how a technology will shape a future that itself is in motion. Predictions not only make a claim about the future; they also describe a vision of what matters to the predictor, and they can influence how we behave, innovate, and invest.

One of my main takeaways after years of running this exercise with students is that there’s not always a clear line between a successful breakthrough and a true flop. Some technologies may not have been successful on their own but are the basis of other breakthrough technologies (natural-language processing, 2001). Others may not have reached their potential as expected but could still have enormous impact in the future (brain-machine interfaces, 2001). Or they may need more investment, which is difficult to attract when they are not flashy (malaria vaccine, 2022). 

Despite the flops over the years, this annual practice of making bold and sometimes risky predictions is worthwhile. The list gives us a sense of what advances are on the technology community’s radar at a given time and reflects the economic, social, and cultural values that inform every pick. When we revisit the 2026 list in a few years, we’ll see which of today’s values have prevailed. 

Fabio Duarte is associate director and principal research scientist at the MIT Senseable City Lab.

Australian firm develops futuristic self-propelled Vendetta system

10 January 2026 at 03:24
Ungoverned, an Australia-founded mobility technology company, has developed a new self-propelled tracked platform designed to move across soft ground, mine-risk areas, and degraded terrain with less pressure than a human footstep. The company disclosed the development in a technical briefing and interview with Defence Blog, outlining the system’s performance characteristics and new field data measured […]

The U.S. Military’s Newest Enemy: Fentanyl

23 December 2025 at 07:39


OPINION — “There's no doubt that America's adversaries are trafficking fentanyl into the United States in part because they want to kill Americans. If this were a war, that would be one of the worst wars. I believe they killed over the last five or six years, per year, 200-to-300,000 people. You hear about a 100,000, which is a lot of people, but the number is much higher than that. That's been proven.”

That was President Trump in the Oval Office on December 15, explaining why he was signing an Executive Order (EO) designating “illicit fentanyl and its core precursor chemicals as Weapons of Mass Destruction (WMD).”

Notice Trump’s use of the word “war,” and the vast exaggeration of the number of fentanyl drug deaths in the U.S. -- actually 48,000 in 2024. Also, does anyone really think that the cartels are pushing fentanyl into this country “to kill Americans”? Or is the real reason they are doing it to make money – as is the case with most drug dealers?

I am focusing on this rather odd EO because to me it is another sign that President Trump is bringing the U.S. military into yet another essentially domestic American problem, drug use. I also see it as the Trump administration normalizing the use of the U.S. military as a routine response to civil issues.

Remember, President Trump has employed some 9,000 active-duty and National Guard service members on the U.S. southern border to block what he termed an invasion of illegal immigrants. He has also federalized National Guard troops in U.S. cities like Washington, D.C., claiming they were needed to combat crime, and deployed hundreds of Marines and originally 4,000 California National Guard personnel in Los Angeles to put down protests against immigration raids.

There was even a military atmosphere in the Oval Office on December 15, because the President used that same meeting to make the first awards of a Mexican Border Defense Medal to 13 Army and Marine service members who provided military support to the Department of Homeland Security and U.S. Customs and Border Protection.

In the Oval Office meeting, Defense Secretary Pete Hegseth explained that the newly issued medal exactly replicated the 1918 Mexican Border Defense Medal, but that one went to U.S. troops who patrolled the border during 1916-1917, when the fear was of a German-inspired invasion by the paramilitary forces of Francisco "Pancho" Villa as part of the Mexican Revolution.


While President Trump said that “to kill Americans” was a purpose of trafficking fentanyl, the EO itself said there was a more complex goal. The EO said, “The production and sale of fentanyl by Foreign Terrorist Organizations and cartels fund these entities’ operations — which include assassinations, terrorist acts, and insurgencies around the world — and allow these entities to erode our domestic security and the well-being of our Nation.”

Here, this EO seeks to link up with one of President Trump’s first EOs, signed on January 20, that designated unspecified cartels as Foreign Terrorist Organizations to make them subject to laws Congress passed in the wake of the September 11, 2001, attacks.

The new December 15 EO goes on to say, “The two cartels that are predominantly responsible for the distribution of fentanyl in the United States engage in armed conflict over territory and to protect their operations, resulting in large-scale violence and death that go beyond the immediate threat of fentanyl itself.”

Inexplicably, the EO does not name those two cartels.

However, the Drug Enforcement Administration (DEA) in its 2025 National Drug Threat Assessment makes it clear who they are by saying, “The Sinaloa and Jalisco New Generation Cartels, in particular, control clandestine [fentanyl] production sites in Mexico, smuggling routes into the United States, and distribution hubs in key U.S. cities.”

Then both the new EO and 2025 National Drug Threat Assessment carry the exact same following sentence: “Further, the potential for fentanyl to be weaponized for concentrated, large-scale terror attacks by organized adversaries is a serious threat to the United States.”

It turns out that back in the 1990s, a number of countries investigated using fentanyl as part of an incapacitating agent, including the U.S. Defense Department. The U.S. dropped the idea because of a margin of safety issue – the difference between a dosage that would incapacitate and one that would kill a person.

The Russians, however, did create a fentanyl-based incapacitating agent and used it in October 2002, when some 40 Chechen terrorists seized Moscow's Dubrovka Theater and held roughly 800 people hostage. Russian forces eventually released the fentanyl-based gas into the theater to incapacitate those inside, and it killed some 130 of the hostages.

Fentanyl is an FDA-approved synthetic opioid used medically as a pain reliever and anesthetic. It is close to 100 times stronger than morphine. Two milligrams of fentanyl – roughly the size of 10 to 15 grains of table salt – can be lethal. Unlike other illegal drugs such as cocaine, wholesale traffickers distribute fentanyl by the kilogram (2.2 pounds).

The DEA has found wide U.S. usage of illicitly manufactured counterfeit fentanyl pills containing anywhere from 0.02 to 5.1 milligrams of fentanyl, the latter more than twice the lethal dose, depending on a person's body size, tolerance and past usage.

Illegal fentanyl use has been a major problem in the U.S. since 2021, when overdose deaths reached 71,000. But as shown above, fentanyl overdose deaths are on the way down. President Trump even recognized that fentanyl use had declined, saying in the Oval Office on December 15, "We've also achieved a 50% drop in the amount of fentanyl coming across the border and China is working with us very closely and bringing down the number and the amount of fentanyl that's being shipped…We've got it down to a much lower number." But Trump added, "Not satisfactory, but it will be satisfactory soon."


The term “weapon of mass destruction” has specific legal definitions, typically tied to nuclear, radiological, chemical, or biological weapons that are designed to cause large-scale death or bodily harm.

Under the Trump WMD EO, implementation calls for Defense Secretary Hegseth and Attorney General Pam Bondi to determine whether the U.S. military is needed to enforce 10 U.S.C. 282, a counterterrorism law passed in 2002, after 9/11, that covers emergency situations involving weapons of mass destruction.

If they agree the military is needed, under 10 U.S.C. 282 Hegseth and Bondi are to “jointly prescribe regulations concerning the types of assistance that may be provided,” and “describe the actions that Department of Defense personnel may take in circumstances incident to the provision of assistance.”

There are provisions in 10 U.S.C. 282 that deny the military the authority to arrest individuals, directly participate in searches or seizures of evidence related to law violations, or collect intelligence for law enforcement – but those provisions can also be waived.

In addition, under the Trump EO, Hegseth is to consult with Secretary of Homeland Security Kristi Noem to “update all directives regarding the Armed Forces’ response to chemical incidents in the homeland to include the threat of illicit fentanyl.”

I go into all these details because I believe something other than fentanyl is involved here. Others are questioning the December 15 EO as well, among them Andrew McCarthy, writing in National Review on December 20.

McCarthy wrote, “President Trump may despise ‘forever wars,’ but he sure seems to like pretend wars. The point of the fentanyl ‘designation’ is to shore up his case for using military force against drug traffickers — although its relevance to high seas around Venezuela is hard to fathom since fentanyl is neither produced nor imported from there. At any rate, fentanyl, a dangerous drug but one with legitimate medical uses, is a narcotic, not a weapon of mass destruction akin to a chemical or biological bomb.”

Yesterday, Military.com pointed out, “The [December 15] Executive Order does not spell out a specific military mission, and Pentagon officials have not yet stated whether the armed forces will take on a direct role under the new designation.”

Nonetheless, the EO creates yet another new, domestic area for military operations within the homeland, and what emerges needs to be watched.


Generative AI hype distracts us from AI’s more important breakthroughs

15 December 2025 at 05:00

On April 28, 2022, at a highly anticipated concert in Spokane, Washington, the musician Paul McCartney astonished his audience with a groundbreaking application of AI: He began to perform with a lifelike depiction of his long-deceased musical partner, John Lennon. 

Using recent advances in audio and video processing, engineers had taken the pair’s final performance (London, 1969), separated Lennon’s voice and image from the original mix and restored them with lifelike clarity.


For years, researchers like me had taught machines to “see” and “hear” in order to make such a moment possible. As McCartney and Lennon appeared to reunite across time and space, the arena fell silent; many in the crowd began to cry. As an AI scientist and lifelong Beatles fan, I felt profound gratitude that we could experience this truly life-changing moment. 

Later that year, the world was captivated by another major breakthrough: AI conversation. For the first time in history, systems capable of generating new, contextually relevant comments in real time, on virtually any subject, were widely accessible owing to the release of ChatGPT. Billions of people were suddenly able to interact with AI. This ignited the public’s imagination about what AI could be, bringing an explosion of creative ideas, hopes, and fears.

Having done my PhD on AI language generation (long considered niche), I was thrilled we had come this far. But the awe I felt was rivaled by my growing rage at the flood of media takes and self-appointed experts insisting that generative AI could do things it simply can’t, and warning that anyone who didn’t adopt it would be left behind.

This kind of hype has contributed to a frenzy of misunderstandings about what AI actually is and what it can and cannot do. Crucially, generative AI is a seductive distraction from the type of AI that is most likely to make your life better, or even save it: Predictive AI. In contrast to AI designed for generative tasks, predictive AI involves tasks with a finite, known set of answers; the system just has to process information to say which answer is right. A basic example is plant recognition: Point your phone camera at a plant and learn that it’s a Western sword fern. Generative tasks, in contrast, have no finite set of correct answers: The system must blend snippets of information it’s been trained on to create, for example, a novel picture of a fern. 
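To make that distinction concrete, here is a minimal toy sketch in Python. Everything in it (the labels, the hand-rolled scores, the phrase lists) is invented for illustration and stands in for real trained models; the point is only the structural difference between selecting from a finite answer set and composing output with no answer key.

```python
# Toy contrast between the two task types described above (illustrative only;
# these hand-written scores and phrase lists stand in for trained models).
import random

LABELS = ["western sword fern", "boston fern", "not a fern"]

def predictive_classify(features: dict) -> str:
    """Predictive task: score a finite, known label set and return the best one."""
    scores = {
        "western sword fern": features.get("frond_length_cm", 0) * 0.8,
        "boston fern": features.get("frond_length_cm", 0) * 0.5,
        "not a fern": 10.0 - features.get("leafiness", 0),
    }
    return max(LABELS, key=lambda label: scores[label])

def generative_describe(seed: int = 0) -> str:
    """Generative task: compose novel output; there is no fixed set of right answers."""
    rng = random.Random(seed)
    openers = ["A delicate", "A sprawling", "A compact"]
    subjects = ["fern with arching fronds", "fern unfurling new fiddleheads"]
    return f"{rng.choice(openers)} {rng.choice(subjects)}."

print(predictive_classify({"frond_length_cm": 60, "leafiness": 9}))  # a label from LABELS
print(generative_describe(seed=42))                                  # a freshly composed string
```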

The generative AI technology involved in chatbots, face-swaps, and synthetic video makes for stunning demos, driving clicks and sales as viewers run wild with ideas that superhuman AI will be capable of bringing us abundance or extinction. Yet predictive AI has quietly been improving weather prediction and food safety, enabling higher-quality music production, helping to organize photos, and accurately predicting the fastest driving routes. We incorporate predictive AI into our everyday lives without even thinking about it, a testament to its indispensable utility.

To get a sense of the immense progress on predictive AI and its future potential, we can look at the trajectory of the past 20 years. In 2005, we couldn’t get AI to tell the difference between a person and a pencil. By 2013, AI still couldn’t reliably detect a bird in a photo, and the difference between a pedestrian and a Coke bottle was massively confounding (this is how I learned that bottles do kind of look like people, if people had no heads). The thought of deploying these systems in the real world was the stuff of science fiction. 

Yet over the past 10 years, predictive AI has not only nailed bird detection down to the specific species; it has rapidly improved life-critical medical services like identifying problematic lesions and heart arrhythmia. Because of this technology, seismologists can predict earthquakes and meteorologists can predict flooding more reliably than ever before. Accuracy has skyrocketed for consumer-facing tech that detects and classifies everything from what song you’re thinking of when you hum a tune to which objects to avoid while you’re driving—making self-driving cars a reality. 

In the very near future, we should be able to accurately detect tumors and forecast hurricanes long before they can hurt anyone, realizing the lifelong hopes of people all over the world. That might not be as flashy as generating your own Studio Ghibli–ish film, but it’s definitely hype-worthy. 

Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods.

To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing.

Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles.

Driving this confusion is a tendency for people to hype AI without making it clear what kind of AI they’re talking about (I reckon many don’t know). It’s very easy to equate “AI” with generative AI, or even just language-generating AI, and assume that all other capabilities fall out from there. That fallacy makes a ton of sense: The term literally references “intelligence,” and our human understanding of what “intelligence” might be is often mediated by the use of language. (Spoiler: No one actually knows what intelligence is.) But the phrase “artificial intelligence” was intentionally designed in the 1950s to inspire awe and allude to something humanlike. Today, it just refers to a set of disparate technologies for processing digital data. Some of my friends find it helpful to call it “mathy maths” instead.

The bias toward treating generative AI as the most powerful and real form of AI is troubling given that it consumes considerably more energy than predictive AI systems. It also means using existing human work in AI products against the original creators' wishes and replacing human jobs with AI systems whose capabilities that work made possible in the first place, without compensation. AI can be amazingly powerful, but that doesn't mean creators should be ripped off.

Watching this unfold as an AI developer within the tech industry, I’ve drawn important lessons for next steps. The widespread appeal of AI is clearly linked to the intuitive nature of conversation-based interactions. But this method of engagement currently overuses generative methods where predictive ones would suffice, resulting in an awkward situation that’s confusing for users while imposing heavy costs in energy consumption, exploitation, and job displacement. 

We have witnessed just a glimpse of AI’s full potential: The current excitement around AI reflects what it could be, not what it is. Generation-based approaches strain resources while still falling short on representation, accuracy, and the wishes of people whose work is folded into the system. 

If we can shift the spotlight from the hype around generative technologies to the predictive advances already transforming daily life, we can build AI that is genuinely useful, equitable, and sustainable. The systems that help doctors catch diseases earlier, help scientists forecast disasters sooner, and help everyday people navigate their lives more safely are the ones poised to deliver the greatest impact. 

The future of beneficial AI will not be defined by the flashiest demos but by the quiet, rigorous progress that makes technology trustworthy. And if we build on that foundation—pairing predictive strength with more mature data practices and intuitive natural-language interfaces—AI can finally start living up to the promise that many people perceive today.

Dr. Margaret Mitchell is a computer science researcher and chief ethics scientist at AI startup Hugging Face. She has worked in the technology industry for 15 years, and has published over 100 papers on natural language generation, assistive technology, computer vision, and AI ethics. Her work has received numerous awards and has been implemented by multiple technology companies.

The era of AI persuasion in elections is about to begin

In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence.

Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason.

But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do.

In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.  

The challenge is that modern AI doesn’t just copy voices or faces; it holds conversations, reads emotions, and tailors its tone to persuade. And it can now command other AIs—directing image, video, and voice models to generate the most convincing content for each target. Putting these pieces together, it’s not hard to imagine how one could build a coordinated persuasion machine. One AI might write the message, another could create the visuals, another could distribute it across platforms and watch what works. No humans required.

A decade ago, mounting an effective online influence campaign typically meant deploying armies of people running fake accounts and meme farms. Now that kind of work can be automated—cheaply and invisibly.

The same technology that powers customer service bots and tutoring apps can be repurposed to nudge political opinions or amplify a government’s preferred narrative. And the persuasion doesn’t have to be confined to ads or robocalls. It can be woven into the tools people already use every day—social media feeds, language learning apps, dating platforms, or even voice assistants built and sold by parties trying to influence the American public. That kind of influence could come from malicious actors using the APIs of popular AI tools people already rely on, or from entirely new apps built with the persuasion baked in from the start.

And it’s affordable. For less than a million dollars, anyone can generate personalized, conversational messages for every registered voter in America. The math isn’t complicated. Assume 10 brief exchanges per person—around 2,700 tokens of text—and price them at current rates for ChatGPT’s API. Even with a population of 174 million registered voters, the total still comes in under $1 million. The 80,000 swing voters who decided the 2016 election could be targeted for less than $3,000. 

Although this is a challenge in elections across the world, the stakes for the United States are especially high, given the scale of its elections and the attention they attract from foreign actors. If the US doesn’t move fast, the next presidential election in 2028, or even the midterms in 2026, could be won by whoever automates persuasion first. 

The 2028 threat 

While there have been indications that the threat AI poses to elections is overblown, a growing body of research suggests the situation could be changing. Recent studies have shown that GPT-4 can exceed the persuasive capabilities of communications experts when generating statements on polarizing US political topics, and it is more persuasive than non-expert humans two-thirds of the time when debating real voters. 

Two major studies published yesterday extend those findings to real election contexts in the United States, Canada, Poland, and the United Kingdom, showing that brief chatbot conversations can move voters' attitudes by up to 10 percentage points, with US participant opinions shifting nearly four times more than they did in response to tested 2016 and 2020 political ads. And when models were explicitly optimized for persuasion, the shift soared to 25 percentage points—an almost unfathomable difference.

While previously confined to well-resourced companies, modern large language models are becoming increasingly easy to use. Major AI providers like OpenAI, Anthropic, and Google wrap their frontier models in usage policies, automated safety filters, and account-level monitoring, and they do sometimes suspend users who violate those rules.

But those restrictions apply only to traffic that goes through their platforms; they don't extend to the rapidly growing ecosystem of open-source and open-weight models, which can be downloaded by anyone with an internet connection. Though they're usually smaller and less capable than their commercial counterparts, research has shown that, with careful prompting and fine-tuning, these models can now match the performance of leading commercial systems.

All this means that actors, whether well-resourced organizations or grassroots collectives, have a clear path to deploying politically persuasive AI at scale. Early demonstrations have already occurred elsewhere in the world. In India’s 2024 general election, tens of millions of dollars were reportedly spent on AI to segment voters, identify swing voters, deliver personalized messaging through robocalls and chatbots, and more. In Taiwan, officials and researchers have documented China-linked operations using generative AI to produce more subtle disinformation, ranging from deepfakes to language model outputs that are biased toward messaging approved by the Chinese Communist Party.

It’s only a matter of time before this technology comes to US elections—if it hasn’t already. Foreign adversaries are well positioned to move first. China, Russia, Iran, and others already maintain networks of troll farms, bot accounts, and covert influence operators. Paired with open-source language models that generate fluent and localized political content, those operations can be supercharged. In fact, there is no longer a need for human operators who understand the language or the context. With light tuning, a model can impersonate a neighborhood organizer, a union rep, or a disaffected parent without a person ever setting foot in the country.

Political campaigns themselves will likely be close behind. Every major operation already segments voters, tests messages, and optimizes delivery. AI lowers the cost of doing all that. Instead of poll-testing a slogan, a campaign can generate hundreds of arguments, deliver them one on one, and watch in real time which ones shift opinions.

The underlying fact is simple: Persuasion has become effective and cheap. Campaigns, PACs, foreign actors, advocacy groups, and opportunists are all playing on the same field—and there are very few rules.

The policy vacuum

Most policymakers have not caught up. Over the past several years, legislators in the US have focused on deepfakes but have ignored the wider persuasive threat.

Foreign governments have begun to take the problem more seriously. The European Union’s 2024 AI Act classifies election-related persuasion as a “high-risk” use case. Any system designed to influence voting behavior is now subject to strict requirements. Administrative tools, like AI systems used to plan campaign events or optimize logistics, are exempt. However, tools that aim to shape political beliefs or voting decisions are not.

By contrast, the United States has so far refused to draw any meaningful lines. There are no binding rules about what constitutes a political influence operation, no external standards to guide enforcement, and no shared infrastructure for tracking AI-generated persuasion across platforms. The federal and state governments have gestured toward regulation—the Federal Election Commission is applying old fraud provisions, the Federal Communications Commission has proposed narrow disclosure rules for broadcast ads, and a handful of states have passed deepfake laws—but these efforts are piecemeal and leave most digital campaigning untouched. 

In practice, the responsibility for detecting and dismantling covert campaigns has been left almost entirely to private companies, each with its own rules, incentives, and blind spots. Google and Meta have adopted policies requiring disclosure when political ads are generated using AI. X has remained largely silent on this, while TikTok bans all paid political advertising. However, these rules, modest as they are, cover only the sliver of content that is bought and publicly displayed. They say almost nothing about the unpaid, private persuasion campaigns that may matter most.

To their credit, some firms have begun publishing periodic threat reports identifying covert influence campaigns. Anthropic, OpenAI, Meta, and Google have all disclosed takedowns of inauthentic accounts. However, these efforts are voluntary and not subject to independent auditing. Most important, none of this prevents determined actors from bypassing platform restrictions altogether with open-source models and off-platform infrastructure.

What a real strategy would look like

The United States does not need to ban AI from political life. Some applications may even strengthen democracy. A well-designed candidate chatbot could help voters understand where the candidate stands on key issues, answer questions directly, or translate complex policy into plain language. Research has even shown that AI can reduce belief in conspiracy theories. 

Still, there are a few things the United States should do to protect against the threat of AI persuasion. First, it must guard against foreign-made political technology with built-in persuasion capabilities. Adversarial political technology could take the form of a foreign-produced video game where in-game characters echo political talking points, a social media platform whose recommendation algorithm tilts toward certain narratives, or a language learning app that slips subtle messages into daily lessons.

Evaluations, such as the Center for AI Standards and Innovation’s recent analysis of DeepSeek, should focus on identifying and assessing AI products—particularly from countries like China, Russia, or Iran—before they are widely deployed. This effort would require coordination among intelligence agencies, regulators, and platforms to spot and address risks.

Second, the United States should lead in shaping the rules around AI-driven persuasion. That includes tightening access to computing power for large-scale foreign persuasion efforts, since many actors will either rent existing models or lease the GPU capacity to train their own. It also means establishing clear technical standards—through governments, standards bodies, and voluntary industry commitments—for how AI systems capable of generating political content should operate, especially during sensitive election periods. And domestically, the United States needs to determine what kinds of disclosures should apply to AI-generated political messaging while navigating First Amendment concerns.

Finally, foreign adversaries will try to evade these safeguards—using offshore servers, open-source models, or intermediaries in third countries. That is why the United States also needs a foreign policy response. Multilateral election integrity agreements should codify a basic norm: States that deploy AI systems to manipulate another country’s electorate risk coordinated sanctions and public exposure. 

Doing so will likely involve building shared monitoring infrastructure, aligning disclosure and provenance standards, and being prepared to conduct coordinated takedowns of cross-border persuasion campaigns—because many of these operations are already moving into opaque spaces where our current detection tools are weak. The US should also push to make election manipulation part of the broader agenda at forums like the G7 and OECD, ensuring that threats related to AI persuasion are treated not as isolated tech problems but as collective security challenges.

Indeed, the task of securing elections cannot fall to the United States alone. A functioning radar system for AI persuasion will require close cooperation with partners and allies. Influence campaigns are rarely confined by borders, and open-source models and offshore servers will always exist. The goal is not to eliminate them but to raise the cost of misuse and shrink the window in which they can operate undetected across jurisdictions.

The era of AI persuasion is just around the corner, and America’s adversaries are prepared. In the US, on the other hand, the laws are out of date, the guardrails too narrow, and the oversight largely voluntary. If the last decade was shaped by viral lies and doctored videos, the next will be shaped by a subtler force: messages that sound reasonable, familiar, and just persuasive enough to change hearts and minds.

For China, Russia, Iran, and others, exploiting America’s open information ecosystem is a strategic opportunity. We need a strategy that treats AI persuasion not as a distant threat but as a present fact. That means soberly assessing the risks to democratic discourse, putting real standards in place, and building a technical and legal infrastructure around them. Because if we wait until we can see it happening, it will already be too late.

Tal Feldman is a JD candidate at Yale Law School who focuses on technology and national security. Before law school, he built AI models across the federal government and was a Schwarzman and Truman scholar. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University and research scientist at Google DeepMind who focuses on agentic AI, AI security, and technology policy. Before Stanford, he was a Marshall scholar.

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.


As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. 

Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts.  In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. 

There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.  

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.  

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whatever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.

Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim they can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First of all, much of that chemistry depends on highly reactive radicals that react with any solid surface, and second, any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area.

(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. But the point of using sulfate for such studies was never that it was perfect, but that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. 

Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms. 

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns. 

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.

We don’t want to claim that every single answer lies in academia. We’d be fools to not be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?   

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.  

But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.

The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?

Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture. 

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?  

David Keith is the professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.

How the Internet of Things (IoT) became a dark web target – and what to do about it

By: slandau
23 May 2024 at 11:30

By Antoinette Hodes, Office of the CTO, Check Point Software Technologies.

The dark web has evolved into a clandestine marketplace where illicit activities flourish under the cloak of anonymity. Due to its restricted accessibility, the dark web exhibits a decentralized structure with minimal enforcement of security controls, making it a common marketplace for malicious activities.

The Internet of Things (IoT), with the interconnected nature of its devices, and its vulnerabilities, has become an attractive target for dark web-based cyber criminals. One weak link – i.e., a compromised IoT device – can jeopardize the entire network’s security. The financial repercussions of a breached device can be extensive, not just in terms of ransom demands, but also in terms of regulatory fines, loss of reputation and the cost of remediation.

With their interconnected nature and inherent vulnerabilities, IoT devices are attractive entry points for cyber criminals. They are highly desirable targets, since they often represent a single point of vulnerability that can impact numerous victims simultaneously.

Check Point Research found a sharp increase in cyber attacks targeting IoT devices, observing a trend across all regions and sectors. Europe experiences the highest number of incidents per week: on average, nearly 70 IoT attacks per organization.


Gateways to the dark web

Based on research from PSAcertified, the average cost of a successful attack on an IoT device exceeds $330,000. Another analyst report reveals that 34% of enterprises that fell victim to a breach via IoT devices faced higher cumulative breach costs than those breached through non-IoT devices, with costs ranging between $5 million and $10 million.

Other examples of IoT-based attacks include botnet infections, turning devices into zombies so that they can participate in distributed denial-of-service (DDoS), ransomware and propagation attacks, as well as crypto-mining and exploitation of IoT devices as proxies for the dark web.

[Graphic: 90% confidentiality, 6% anonymity, 4% browsing]

The dark web relies on an arsenal of tools and associated services to facilitate illicit activities. Extensive research has revealed a thriving underground economy operating within the dark web. This economy is largely centered around services associated with IoT. In particular, there seems to be a huge demand for DDoS attacks that are orchestrated through IoT botnets: During the first half of 2023, Kaspersky identified over 700 advertisements for DDoS attack services across various dark web forums.

IoT devices themselves have become valuable assets in this underworld marketplace. On the dark web, the value of a compromised device is often greater than the retail price of the device itself. Upon examining one of the numerous Telegram channels used for trading dark web products and services, one can come across scam pages, tutorials covering various malicious activities, harmful configuration files with “how-to’s”, SSH crackers, and more. Essentially, a complete assortment of tools, from hacking resources to anonymization services, for the purpose of capitalizing on compromised devices can be found on the dark web. Furthermore, vast quantities of sensitive data are bought and sold there every day.

AI’s dark capabilities

Adversarial machine learning can be used to attack, deceive and bypass machine learning systems. The combination of IoT and AI has driven dark web-originated attacks to unprecedented levels. This is what we are seeing:

  • Automated exploitation: AI algorithms automate the process of scanning for vulnerabilities and security flaws with subsequent exploitation methods. This opens doors to large-scale attacks with zero human interaction.
  • Adaptive attacks: With AI, attackers can now adjust their strategies in real-time by analyzing the responses and defenses encountered during an attack. This ability to adapt poses a significant challenge for traditional security measures in effectively detecting and mitigating IoT threats.
  • Behavioral analysis: AI-driven analytics enables the examination of IoT devices and user behavior, allowing for the identification of patterns, anomalies, and vulnerabilities. Malicious actors can utilize this capability to profile IoT devices, exploit their weaknesses, and evade detection from security systems.
  • Adversarial attacks: Adversarial attacks can be used to trick AI models and IoT devices into making incorrect or unintended decisions, potentially leading to security breaches. These attacks aim to exploit weaknesses in the system’s algorithms or vulnerabilities.

Zero-tolerance security

The convergence of IoT and AI brings numerous advantages, but it also presents fresh challenges. To enhance IoT security and device resilience while safeguarding sensitive data, across the entire IoT supply chain, organizations must implement comprehensive security measures based on zero-tolerance principles.

Factors such as data security, device security, secure communication, confidentiality, privacy, and other non-functional requirements like maintainability, reliability, usability and scalability highlight the critical need for security controls within IoT devices. Security controls should include elements like secure communication, access controls, encryption, software patches, device hardening, etc. As part of the security process, the focus should be on industry standards, such as “secure by design” and “secure by default”.


Collaborations and alliances within the industry are critical in developing standardized IoT security practices and establishing industry-wide security standards. By integrating dedicated IoT security, organizations can enhance their overall value proposition and ensure compliance with regulatory obligations.

In today’s cyber threat landscape, numerous geographic regions demand adherence to stringent security standards; both during product sales and while responding to Request for Information and Request for Proposal solicitations. IoT manufacturers with robust, ideally on-device security capabilities can showcase a distinct advantage, setting them apart from their competitors. Furthermore, incorporating dedicated IoT security controls enables seamless, scalable and efficient operations, reducing the need for emergency software updates.

IoT security plays a crucial role in enhancing the Overall Equipment Effectiveness (a measurement of manufacturing productivity, defined as availability x performance x quality), as well as facilitating early bug detection in IoT firmware before official release. Additionally, it demonstrates a solid commitment to prevention and security measures.
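Since OEE is simply the product of those three ratios, the calculation fits in a couple of lines; the sample figures below are hypothetical, purely to show its shape.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness = availability x performance x quality."""
    return availability * performance * quality

# Hypothetical line: 90% uptime, 95% of rated speed, 98% defect-free output.
print(f"OEE: {oee(0.90, 0.95, 0.98):.1%}")  # -> OEE: 83.8%
```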

By prioritizing dedicated IoT security, we actively contribute to the establishment of secure and reliable IoT ecosystems, which serve to raise awareness, educate stakeholders, foster trust and cultivate long-term customer loyalty. Ultimately, they enhance credibility and reputation in the market. Ensuring IoT device security is essential in preventing IoT devices from falling into the hands of the dark web army.

This article was originally published via the World Economic Forum and has been reprinted with permission.


