This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
“Dr. Google” had its issues. Can ChatGPT Health do better?
For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week.
That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. The big question is: Can the obvious risks of using AI for health-related queries be mitigated enough for these tools to be a net benefit? Read the full story.
—Grace Huckins
America’s coming war over AI regulation
In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry.
Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy. The move marked a victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.
In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead. Read our story about what’s on the horizon.
—Michelle Kim
This story is from MIT Technology Review’s What’s Next series of stories that look across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
Measles is surging in the US. Wastewater tracking could help.
This week marked a rather unpleasant anniversary: It’s a year since Texas reported a case of measles—the start of a significant outbreak that ended up spreading across multiple states. Since the start of January 2025, there have been over 2,500 confirmed cases of measles in the US. Three people have died.
As vaccination rates drop and outbreaks continue, scientists have been experimenting with new ways to quickly identify new cases and prevent the disease from spreading. And they are starting to see some success with wastewater surveillance. Read the full story.
—Jessica Hamzelou
This story is from The Checkup, our weekly newsletter giving you the inside track on all things health and biotech. Sign up to receive it in your inbox every Thursday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The US is dismantling itself
A foreign enemy could not invent a better chain of events to wreck its standing in the world. (Wired $)
+ We need to talk about whether Donald Trump might be losing it. (New Yorker $)
2 Big Tech is taking on more debt to fund its AI aspirations
And the bubble just keeps growing. (WP $)
+ Forget unicorns. 2026 is shaping up to be the year of the “hectocorn.” (The Guardian)
+ Everyone in tech agrees we’re in a bubble. They just can’t agree on what happens when it pops. (MIT Technology Review)
3 DOGE accessed even more personal data than we thought
Even now, the Trump administration still can’t say how much data is at risk, or what it was used for. (NPR)
4 TikTok has finalized a deal to create a new US entity
Ending years of uncertainty about its fate in America. (CNN)
+ Why China is the big winner out of all of this. (FT $)
5 The US is now officially out of the World Health Organization
And it’s leaving behind nearly $300 million in bills unpaid. (Ars Technica)
+ The US withdrawal from the WHO will hurt us all. (MIT Technology Review)
6 AI-powered disinformation swarms pose a threat to democracy
A would-be autocrat could use them to persuade populations to accept cancelled elections or overturn results. (The Guardian)
+ The era of AI persuasion in elections is about to begin. (MIT Technology Review)
7 We’re about to start seeing more robots everywhere
But exactly what they’ll look like remains up for debate. (Vox $)
+ Chinese companies are starting to dominate entire sectors of AI and robotics. (MIT Technology Review)
8 Some people seem to be especially vulnerable to loneliness
If you’re ‘other-directed’, you could particularly benefit from less screentime. (New Scientist $)
9 This academic lost two years of work with a single click
TL;DR: Don’t rely on ChatGPT to store your data. (Nature)
10 How animals develop a sense of direction
Their ‘internal compass’ seems to be informed by landmarks that help them form a mental map. (Quanta $)
Quote of the day
“The rate at which AI is progressing, I think we have AI that is smarter than any human this year, and no later than next year.”
—Elon Musk simply cannot resist the urge to make wild predictions at Davos, Wired reports.
One more thing
Africa fights rising hunger by looking to foods of the past
After falling steadily for decades, the prevalence of global hunger is now on the rise—nowhere more so than in sub-Saharan Africa.
Africa’s indigenous crops are often more nutritious and better suited to the hot and dry conditions that are becoming more prevalent, yet many have been neglected by science, which means they tend to be more vulnerable to diseases and pests and yield well below their theoretical potential.
Now the question is whether researchers, governments, and farmers can work together in a way that gets these crops onto plates and provides Africans from all walks of life with the energy and nutrition that they need to thrive, whatever climate change throws their way. Read the full story.
—Jonathan W. Rosen
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ The only thing I fancy dry this January is a martini. Here’s how to make one.
+ If you absolutely adore the Bic crystal pen, you might want this lamp.
+ Cozy up with a nice long book this winter. ($)
+ Want to eat healthier? Slow down and tune out food ‘noise’. ($)
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.
In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation.
Trump’s executive order directs the Department of Justice to establish a task force that sues states whose AI laws clash with his vision for light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.
For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols used to ensure the safe development of their AI models and report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.
If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states like Florida with vocal champions for AI regulation might follow suit. Trump could face an uphill battle. “The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”
But Republican states that are anxious to stay off Trump’s radar or can’t afford to lose federal broadband funding for their sprawling rural communities might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.
In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal.
The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says.
While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general signed a bipartisan letter urging the FCC not to supersede state AI laws.
With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.
Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, a startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that the chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann.
While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a ballot initiative in California called the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children. The measure proposes requiring AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots.
Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report on their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act.
Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.
In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.
This week marked a rather unpleasant anniversary: It’s a year since Texas reported a case of measles—the start of a significant outbreak that ended up spreading across multiple states. Since the start of January 2025, there have been over 2,500 confirmed cases of measles in the US. Three people have died.
As vaccination rates drop and outbreaks continue, scientists have been experimenting with new ways to quickly identify new cases and prevent the disease from spreading. And they are starting to see some success with wastewater surveillance.
After all, wastewater contains saliva, urine, feces, shed skin, and more. You could consider it a rich biological sample. Wastewater analysis helped scientists understand how covid was spreading during the pandemic. It’s early days, but it is starting to help us get a handle on measles.
Globally, there has been some progress toward eliminating measles, largely thanks to vaccination efforts. Such efforts led to an 88% drop in measles deaths between 2000 and 2024, according to the World Health Organization. It estimates that “nearly 59 million lives have been saved by the measles vaccine” since 2000.
Still, an estimated 95,000 people died from measles in 2024 alone—most of them young children. And cases are surging in Europe, Southeast Asia, and the Eastern Mediterranean region.
Last year, the US saw the highest levels of measles in decades. The country is on track to lose its measles elimination status—a sorry fate that befell Canada in November, after it recorded over 5,000 cases in a little over a year.
Public health efforts to contain the spread of measles—which is incredibly contagious—typically involve clinical monitoring in health-care settings, along with vaccination campaigns. But scientists have started looking to wastewater, too.
Along with various bodily fluids, we all shed viruses and bacteria into wastewater, whether that’s through brushing our teeth, showering, or using the toilet. The idea of looking for these pathogens in wastewater to track diseases has been around for a while, but things really kicked into gear during the covid-19 pandemic, when scientists found that the coronavirus responsible for the disease was shed in feces.
This led Marlene Wolfe of Emory University and Alexandria Boehm of Stanford University to establish WastewaterSCAN, an academic-led program developed to analyze wastewater samples across the US. Covid was just the beginning, says Wolfe. “Over the years we have worked to expand what can be monitored,” she says.
Two years ago, in a previous edition of The Checkup, Wolfe told Cassandra Willyard that wastewater surveillance of measles was “absolutely possible,” as the virus is shed in urine. The hope was that this approach could shed light on measles outbreaks in a community, even if members of that community weren’t able to access health care and receive an official diagnosis, and that it could highlight when and where public health officials needed to act to prevent measles from spreading. Evidence that it worked as an effective public health measure was, at the time, scant.
Since then, she and her colleagues have developed a test to identify measles RNA. They trialed it at two wastewater treatment plants in Texas between December 2024 and May 2025. At each site, the team collected samples two or three times a week and tested them for measles RNA.
Over that period, the team found measles RNA in 10.5% of the samples they collected, as reported in a preprint paper posted on medRxiv in July that is currently under review at a peer-reviewed journal. The first detection came a week before the first case of measles was officially confirmed in the area. That’s promising—it suggests that wastewater surveillance might pick up measles cases early, giving public health officials a head start in efforts to limit any outbreaks.
There are more promising results from a team in Canada. Mike McKay and Ryland Corchis-Scott at the University of Windsor in Ontario and their colleagues have also been testing wastewater samples for measles RNA. Between February and November 2025, the team collected samples from a wastewater treatment facility serving over 30,000 people in Leamington, Ontario.
These wastewater tests are somewhat limited—even if they do pick up measles, they won’t tell you who has measles, where exactly infections are occurring, or even how many people are infected. McKay and his colleagues have begun to make some progress here. In addition to monitoring the large wastewater plant, the team used tampons to soak up wastewater from a hospital lateral sewer.
They then compared their measles test results with the number of clinical cases in that hospital. This gave them some idea of the virus’s “shedding rate.” When they applied this to the data collected from the Leamington wastewater treatment facility, the team got estimates of measles cases that were much higher than the figures officially reported.
Their findings track with the opinions of local health officials (who estimate that the true number of cases during the outbreak was around five to 10 times higher than the confirmed case count), the team members wrote in a paper published on medRxiv a couple of weeks ago.
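To get a feel for that kind of back-of-the-envelope scaling, here is a minimal, hypothetical sketch (not the Windsor team’s actual model, and with made-up numbers): calibrate a per-case shedding rate at a site where both the wastewater signal and confirmed cases are known, then divide the treatment plant’s measured RNA load by that rate.

```python
# Illustrative sketch only: estimating community measles infections from a
# wastewater RNA signal, using a per-case "shedding rate" calibrated at a site
# where both wastewater data and confirmed clinical cases are available.
# All numbers below are hypothetical.

def estimate_cases(rna_copies_per_day: float, copies_per_case_per_day: float) -> float:
    """Infections implied by one day's measles RNA load in wastewater."""
    return rna_copies_per_day / copies_per_case_per_day

# Hypothetical calibration at a sentinel site (e.g. a hospital sewer):
# 1e9 RNA copies/day observed alongside 4 confirmed cases.
shedding_rate = 1e9 / 4  # ~2.5e8 copies per case per day

# Applying that rate to a hypothetical treatment-plant load of 5e10 copies/day
# implies roughly 200 concurrent infections in the catchment.
print(estimate_cases(5e10, shedding_rate))
```

In practice such estimates carry wide error bars, since shedding varies from person to person and over the course of an infection, which is part of why the team anchors them against clinical counts.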
There will always be limits to wastewater surveillance. “We’re looking at the pool of waste of an entire community, so it’s very hard to pull in information about individual infections,” says Corchis-Scott.
Wolfe also acknowledges that “we have a lot to learn about how we can best use the tools so they are useful.” But her team at WastewaterSCAN has been testing wastewater across the US for measles since May last year. And their findings are published online and shared with public health officials.
In some cases, the findings are already helping inform the response to measles. “We’ve seen public health departments act on this data,” says Wolfe. Some have issued alerts, or increased vaccination efforts in those areas, for example. “[We’re at] a point now where we really see public health departments, clinicians, [and] families using that information to help keep themselves and their communities safe,” she says.
McKay says his team has stopped testing for measles because the Ontario outbreak “has been declared over.” He says testing would restart if and when a single new case of measles is confirmed in the region, but he also thinks that his research makes a strong case for maintaining a wastewater surveillance system for measles.
McKay wonders if this approach might help Canada regain its measles elimination status. “It’s sort of like [we’re] a pariah now,” he says. If his approach can help limit measles outbreaks, it could be “a nice tool for public health in Canada to [show] we’ve got our act together.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week.
That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. It landed at an inauspicious time: Two days earlier, the news website SFGate had broken the story of Sam Nelson, a teenager who died of an overdose last year after extensive conversations with ChatGPT about how best to combine various drugs. In the wake of both pieces of news, multiple journalists questioned the wisdom of relying for medical advice on a tool that could cause such extreme harm.
Though ChatGPT Health lives in a separate sidebar tab from the rest of ChatGPT, it isn’t a new model. It’s more like a wrapper that provides one of OpenAI’s preexisting models with guidance and tools it can use to provide health advice—including some that allow it to access a user’s electronic medical records and fitness app data, if granted permission. There’s no doubt that ChatGPT and other large language models can make medical mistakes, and OpenAI emphasizes that ChatGPT Health is intended as an additional support, rather than a replacement for one’s doctor. But when doctors are unavailable or unable to help, people will turn to alternatives.
Some doctors see LLMs as a boon for medical literacy. The average patient might struggle to navigate the vast landscape of online medical information—and, in particular, to distinguish high-quality sources from polished but factually dubious websites—but LLMs can do that job for them, at least in theory. Treating patients who had searched for their symptoms on Google required “a lot of attacking patient anxiety [and] reducing misinformation,” says Marc Succi, an associate professor at Harvard Medical School and a practicing radiologist. But now, he says, “you see patients with a college education, a high school education, asking questions at the level of something an early med student might ask.”
The release of ChatGPT Health, and Anthropic’s subsequent announcement of new health integrations for Claude, indicate that the AI giants are increasingly willing to acknowledge and encourage health-related uses of their models. Such uses certainly come with risks, given LLMs’ well-documented tendencies to agree with users and make up information rather than admit ignorance.
But those risks also have to be weighed against potential benefits. There’s an analogy here to autonomous vehicles: When policymakers consider whether to allow Waymo in their city, the key metric is not whether its cars are ever involved in accidents but whether they cause less harm than the status quo of relying on human drivers. If Dr. ChatGPT is an improvement over Dr. Google—and early evidence suggests it may be—it could potentially lessen the enormous burden of medical misinformation and unnecessary health anxiety that the internet has created.
Pinning down the effectiveness of a chatbot such as ChatGPT or Claude for consumer health, however, is tricky. “It’s exceedingly difficult to evaluate an open-ended chatbot,” says Danielle Bitterman, the clinical lead for data science and AI at the Mass General Brigham health-care system. Large language models score well on medical licensing examinations, but those exams use multiple-choice questions that don’t reflect how people use chatbots to look up medical information.
Sirisha Rambhatla, an assistant professor of management science and engineering at the University of Waterloo, attempted to close that gap by evaluating how GPT-4o responded to licensing exam questions when it did not have access to a list of possible answers. Medical experts who evaluated the responses scored only about half of them as entirely correct. But multiple-choice exam questions are designed to be tricky enough that the answer options don’t give them entirely away, and they’re still a pretty distant approximation for the sort of thing that a user would type into ChatGPT.
A different study, which tested GPT-4o on more realistic prompts submitted by human volunteers, found that it answered medical questions correctly about 85% of the time. When I spoke with Amulya Yadav, an associate professor at Pennsylvania State University who runs the Responsible AI for Social Emancipation Lab and led the study, he made it clear that he wasn’t personally a fan of patient-facing medical LLMs. But he freely admits that, technically speaking, they seem up to the task—after all, he says, human doctors misdiagnose patients 10% to 15% of the time. “If I look at it dispassionately, it seems that the world is gonna change, whether I like it or not,” he says.
For people seeking medical information online, Yadav says, LLMs do seem to be a better choice than Google. Succi, the radiologist, also concluded that LLMs can be a better alternative to web search when he compared GPT-4’s responses to questions about common chronic medical conditions with the information presented in Google’s knowledge panel, the information box that sometimes appears on the right side of the search results.
Since Yadav’s and Succi’s studies appeared online, in the first half of 2025, OpenAI has released multiple new versions of GPT, and it’s reasonable to expect that GPT-5.2 would perform even better than its predecessors. But the studies do have important limitations: They focus on straightforward, factual questions, and they examine only brief interactions between users and chatbots or web search tools. Some of the weaknesses of LLMs—most notably their sycophancy and tendency to hallucinate—might be more likely to rear their heads in more extensive conversations and with people who are dealing with more complex problems. Reeva Lederman, a professor at the University of Melbourne who studies technology and health, notes that patients who don’t like the diagnosis or treatment recommendations that they receive from a doctor might seek out another opinion from an LLM—and the LLM, if it’s sycophantic, might encourage them to reject their doctor’s advice.
Some studies have found that LLMs will hallucinate and exhibit sycophancy in response to health-related prompts. For example, one study showed that GPT-4 and GPT-4o will happily accept and run with incorrect drug information included in a user’s question. In another, GPT-4o frequently concocted definitions for fake syndromes and lab tests mentioned in the user’s prompt. Given the abundance of medically dubious diagnoses and treatments floating around the internet, these patterns of LLM behavior could contribute to the spread of medical misinformation, particularly if people see LLMs as trustworthy.
OpenAI has reported that the GPT-5 series of models is markedly less sycophantic and less prone to hallucination than its predecessors, so the results of these studies might not apply to ChatGPT Health. The company also evaluated the model that powers ChatGPT Health on its responses to health-specific questions, using its publicly available HealthBench benchmark. HealthBench rewards models that express uncertainty when appropriate, recommend that users seek medical attention when necessary, and refrain from causing users unnecessary stress by telling them their condition is more serious than it truly is. It’s reasonable to assume that the model underlying ChatGPT Health exhibited those behaviors in testing, though Bitterman notes that some of the prompts in HealthBench were generated by LLMs, not users, which could limit how well the benchmark translates into the real world.
An LLM that avoids alarmism seems like a clear improvement over systems that have people convincing themselves they have cancer after a few minutes of browsing. And as large language models, and the products built around them, continue to develop, whatever advantage Dr. ChatGPT has over Dr. Google will likely grow. The introduction of ChatGPT Health is certainly a move in that direction: By looking through your medical records, ChatGPT can potentially gain far more context about your specific health situation than could be included in any Google search, although numerous experts have cautioned against giving ChatGPT that access for privacy reasons.
Even if ChatGPT Health and other new tools do represent a meaningful improvement over Google searches, they could still conceivably have a negative effect on health overall. Much as automated vehicles, even if they are safer than human-driven cars, might still prove a net negative if they encourage people to use public transit less, LLMs could undermine users’ health if they induce people to rely on the internet instead of human doctors, even if they do increase the quality of health information available online.
Lederman says that this outcome is plausible. In her research, she has found that members of online communities centered on health tend to put their trust in users who express themselves well, regardless of the validity of the information they are sharing. Because ChatGPT communicates like an articulate person, some people might trust it too much, potentially to the exclusion of their doctor. But LLMs are certainly no replacement for a human doctor—at least not yet.
This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.
It’s supposed to be frigid in Davos this time of year. Part of the charm is seeing the world’s elite tromp through the streets in respectable suits and snow boots. But this year it’s positively balmy, with highs in the mid-30s Fahrenheit, or a little over 1°C. The current conditions when I flew out of New York were colder, and definitely snowier. I’m told this is due to something called a föhn, a warm, dry wind that’s been blowing across the Alps.
I’m no meteorologist, but it’s true that there is a lot of hot air here.
On Wednesday, President Donald Trump arrived in Davos to address the assembly, and held forth for more than 90 minutes, weaving his way through remarks about the economy, Greenland, windmills, Switzerland, Rolexes, Venezuela, and drug prices. It was a talk lousy with gripes, grievances and outright falsehoods.
One small example: Trump made a big deal of claiming that China, despite being the world leader in manufacturing windmill componentry, doesn’t actually use windmills for energy generation itself. In fact, it is the world leader in wind generation, as well.
I did not get to watch this spectacle from the room itself. Sad!
By the time I got to the Congress Hall where the address was taking place, there was already a massive scrum of people jostling to get in.
I had just wrapped up moderating a panel on “the intelligent co-worker,” i.e., AI agents in the workplace. I was really excited for this one, as the speakers represented a diverse cross-section of the AI ecosystem. Christoph Schweizer, CEO of BCG, had the macro strategic view; Enrique Lores, CEO of HP, could speak to both hardware and large enterprises; Workera CEO Kian Katanforoosh had the inside view on workforce training and transformation; Munjal Shah, CEO of Hippocratic AI, addressed working in the high-stakes field of health care; and Kate Kallot, CEO of Amini AI, gave perspective on the global south and Africa in particular.
Interestingly, most of the panel shied away from using the term co-worker, and some even rejected the term agent. But the view they painted was definitely one of humans working alongside AI and augmenting what’s possible. Shah, for example, talked about having agents call 16,000 people in Texas during a heat wave to perform a health and safety check. It was a great discussion. You can watch the whole thing here.
But by the time it let out, the push of people outside the Congress Hall was already too thick for me to get in. In fact I couldn’t even get into a nearby overflow room. I did make it into a third overflow room, but getting in meant navigating my way through a mass of people, so jammed in tight together that it reminded me of being at a Turnstile concert.
The speech blew way past its allotted time, and I had to step out early to get to yet another discussion. Walking through the halls while Trump spoke was a truly surreal experience. He had captured the attention of the gathered global elite. I don’t think I saw a single person not staring at a laptop, phone, or iPad, all watching the same video.
Trump is speaking again on Thursday in a previously unscheduled address to announce his Board of Peace. As is (I heard) Elon Musk. So it’s shaping up to be another big day for elite attention capture.
I should say, though, there are elites, and then there are elites. And there are all sorts of ways of sorting out who is who. Your badge color is one of them. I have a white participant badge, because I was moderating panels. This gets you in pretty much anywhere and therefore is its own sort of status symbol. Where you are staying is another. I’m in Klosters, a neighboring town that’s a 40-minute train ride away from the Congress Centre. Not so elite.
There are more subtle ways of status sorting, too. Yesterday I learned that when people ask if this is your first time at Davos, it’s sometimes meant as a way of trying to figure out how important you are. If you’re any kind of big deal, you’ve probably been coming for years.
But the best one I’ve yet encountered happened when I made small talk with the woman sitting next to me as I changed back into my snow boots. It turned out that, like me, she lived in California–at least part time. “But I don’t think I’ll stay there much longer,” she said, “due to the new tax law.” This was just an ice cold flex.
Because California’s newly proposed tax legislation? It only targets billionaires.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Yann LeCun’s new venture is a contrarian bet against large language models
Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems.
Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded.
LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. Read the full interview.
—Caiwei Chen
Why 2026 is a hot year for lithium
—Casey Crownhart
In 2026, I’m going to be closely watching the price of lithium.
If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)
But lithium is worthy of a close look right now. The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid.
Prices have been on quite the roller coaster over the last few years, and they’re ticking up again. What happens next could have big implications for mining and battery technology. Read the full story.

This story first appeared in The Spark, our newsletter all about the tech we can use to combat the climate crisis. Sign up to receive it in your inbox every Wednesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump has climbed down from his plan for the US to take Greenland
To the relief of many across Europe. (BBC)
+ Trump says he’s agreed a deal to access Greenland’s rare earths. Experts say that’s ‘bonkers.’ (CNN)
+ European leaders are feeling flummoxed about what’s going on. (FT $)
2 Apple is reportedly developing a wearable AI pin
It’s still in the very early stages—but this could be a huge deal if it makes it to launch. (The Information $)
+ It’s also planning to revamp Siri and turn it into an AI chatbot. (Bloomberg $)
+ Are we ready to trust AI with our bodies? (MIT Technology Review)
3 CEOs say AI saves people time. Their employees disagree.
Many even say that it’s currently dragging down their productivity. (WSJ $)
+ The AI boom will increase US carbon emissions—but it doesn’t have to. (Wired $)
+ Let’s also not forget that large language models remain a security nightmare. (IEEE Spectrum)
4 This chart shows how measles cases are exploding in America
They’ve hit a 30-year high, with the US on track to lose its ‘elimination status.’ (Axios $)
+ Things are poised to get even worse this year. (Wired $)
5 Your first humanoid robot coworker will almost definitely be Chinese
But will it be truly useful? That’s the even bigger question. (Wired $)
+ Nvidia CEO Jensen Huang says Europe could do more to compete in robotics and AI. (CNBC)
6 Bezos’ Blue Origin is about to compete with Starlink
It plans to send the first ‘TeraWave’ satellites into space next year. (Reuters $)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)
7 Trump’s family made $1.4 billion off crypto last year
Move along, no conflicts of interest to see here. (Bloomberg $)
8 Comic-Con has banned AI art
After an artist-led backlash last week. (404 Media)
+ Hundreds of creatives are warning against an AI future built on ‘theft on a grand scale’. (The Verge $)
9 What it’s like living without a smartphone for a month
Potentially blissful for you, but probably a bit annoying for everyone else. (The Guardian)
+ Why teens with ADHD are particularly vulnerable to the perils of social media. (Nature)
10 Elon Musk is feuding with a budget airline
The airline is winning, in case you wondered. (WP $)
Quote of the day
“I wouldn’t edit anything about Donald Trump, because the man makes me insane.”
—Wikipedia founder Jimmy Wales tells Wired why he’s steering clear of the US President’s page.
One more thing
How electricity could help tackle a surprising climate villain
Cement hides in plain sight—it’s used to build everything from roads and buildings to dams and basement floors. But it’s also a climate threat. Cement production accounts for more than 7% of global carbon dioxide emissions—more than sectors like aviation, shipping, or landfills.
One solution to this climate catastrophe might be coursing through the pipes at Sublime Systems. The startup is developing an entirely new way to make cement. Instead of heating crushed-up rocks in lava-hot kilns, Sublime’s technology zaps them in water with electricity, kicking off chemical reactions that form the main ingredients in its cement.
But it faces huge challenges: competing with established industry players, and persuading builders to use its materials in the first place. Read the full story.
—Casey Crownhart
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Earth may be a garbage fire, but space is beautiful.
+ Do you know how to tie your shoelaces up properly? Are you sure?!
+ I defy British readers not to feel a pang of nostalgia at these crisp packets.
+ Going to bed around the same time every night seems to be a habit worth adopting. ($)
In 2026, I’m going to be closely watching the price of lithium.
If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)
But lithium is worthy of a close look right now.
The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. Prices have been on quite the roller coaster over the last few years, and they’re ticking up again after a low period. What happens next could have big implications for mining and battery technology.
Before we look ahead, let’s take a quick trip down memory lane. In 2020, global EV sales started to really take off, driving up demand for the lithium used in their batteries. Because of that growing demand and a limited supply, prices shot up dramatically, with lithium carbonate going from under $10 per kilogram to a high of roughly $70 per kilogram in just two years.
And the tech world took notice. During those high points, there was a ton of interest in developing alternative batteries that didn’t rely on lithium. I was writing about sodium-based batteries, iron-air batteries, and even experimental ones that were made with plastic.
Researchers and startups were also hunting for alternative ways to get lithium, including battery recycling and processing methods like direct lithium extraction (more on this in a moment).
But soon, prices crashed back down to earth. We saw lower-than-expected demand for EVs in the US, and developers ramped up mining and processing to meet demand. Through late 2024 and 2025, lithium carbonate was back around $10 a kilogram again. Avoiding lithium or finding new ways to get it suddenly looked a lot less crucial.
That brings us to today: lithium prices are ticking up again. So far, it’s nowhere close to the dramatic rise we saw a few years ago, but analysts are watching closely. Strong EV growth in China is playing a major role—EVs still make up about 75% of battery demand today. But growth in stationary storage, batteries for the grid, is also contributing to rising demand for lithium in both China and the US.
Higher prices could create new opportunities. The possibilities include alternative battery chemistries, specifically sodium-ion batteries, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF. (I’ll note here that we recently named sodium-ion batteries to our 2026 list of 10 Breakthrough Technologies.)
It’s not just batteries, though. Another industry that could see big changes from a lithium price swing: extraction.
Today, most lithium is mined from rocks, largely in Australia, before being shipped to China for processing. There’s a growing effort to process the mineral in other places, though, as countries try to create their own lithium supply chains. Tesla recently confirmed that it’s started production at its lithium refinery in Texas, which broke ground in 2023. We could see more investment in processing plants outside China if prices continue to climb.
This could also be a key year for direct lithium extraction, as Katie Brigham wrote in a recent story for Heatmap. That technology uses chemical or electrochemical processes to extract lithium from brine (salty water that’s usually sourced from salt lakes or underground reservoirs), quickly and cheaply. Companies including Lilac Solutions, Standard Lithium, and Rio Tinto are all making plans or starting construction on commercial facilities this year in the US and Argentina.
If there’s anything I’ve learned about following batteries and minerals over the past few years, it’s that predicting the future is impossible. But if you’re looking for tea leaves to read, lithium prices deserve a look.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems.
Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic.
Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including the controversial acquisition of ScaleAI.
LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas.
Both the questions and answers below have been edited for clarity and brevity.
You’ve just announced a new company, Advanced Machine Intelligence (AMI). Tell me about the big ideas behind it.
It is going to be a global company, but headquartered in Paris. You pronounce it “ami”—it means “friend” in French. I am excited. There is a very high concentration of talent in Europe, but it is not always given a proper environment to flourish. And there is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American. I think that is going to be to our advantage.
So an ambitious alternative to the US-China binary we currently have. What made you want to pursue that third path?
Well, there are sovereignty issues for a lot of countries, and they want some control over AI. What I’m advocating is that AI is going to become a platform, and most platforms tend to become open-source. Unfortunately, that’s not really the direction the American industry is taking. Right? As the competition increases, they feel like they have to be secretive. I think that is a strategic mistake.
It’s certainly true for OpenAI, which went from very open to very closed, and Anthropic has always been closed. Google was sort of a little open. And then Meta, we’ll see. My sense is that it’s not going in a positive direction at this moment.
Simultaneously, China has completely embraced this open approach. So all leading open-source AI platforms are Chinese, and the result is that academia and startups, outside of the US, have basically embraced Chinese models. There’s nothing wrong with that—you know, Chinese models are good. Chinese engineers and scientists are great. But you know, if there is a future in which all of our information diet is being mediated by AI assistants, and the choice is either English-speaking models produced by proprietary companies close to the US or Chinese models, which may be open-source but need to be fine-tuned so that they answer questions about Tiananmen Square in 1989—you know, it’s not a very pleasant and engaging future.
They [the future models] should be able to be fine-tuned by anyone and produce a very high diversity of AI assistants, with different linguistic abilities and value systems and political biases and centers of interest. You need a high diversity of assistants for the same reason that you need a high diversity of press.
That is certainly a compelling pitch. How are investors buying that idea so far?
They really like it. A lot of venture capitalists are very much in favor of this idea of open source, because they know that a lot of small startups really rely on open-source models. They don’t have the means to train their own model, and it’s kind of dangerous for them strategically to embrace a proprietary model.
You recently left Meta. What’s your view on the company and Mark Zuckerberg’s leadership? There’s a perception that Meta has fumbled its AI advantage.
I think FAIR [LeCun’s lab at Meta] was extremely successful in the research part. Where Meta was less successful is in picking up on that research and pushing it into practical technology and products. Mark made some choices that he thought were the best for the company. I may not have agreed with all of them. For example, the robotics group at FAIR was let go, which I think was a strategic mistake. But I’m not the director of FAIR. People make decisions rationally, and there’s no reason to be upset.
So, no bad blood? Could Meta be a future client for AMI?
Meta might be our first client! We’ll see. The work we are doing is not in direct competition. Our focus on world models for the physical world is very different from their focus on generative AI and LLMs.
You were working on AI long before LLMs became a mainstream approach. But since ChatGPT broke out, LLMs have become almost synonymous with AI.
Yes, and we are going to change that. The public face of AI, perhaps, is mostly LLMs and chatbots of various types. But the latest ones of those are not pure LLMs. They are LLM plus a lot of things, like perception systems and code that solves particular problems. So we are going to see LLMs as kind of the orchestrator in systems, a little bit.
Beyond LLMs, there is a lot of AI behind the scenes that runs a big chunk of our society. There are driver-assistance programs in cars, systems that quickly turn around MRI images, algorithms that drive social media—that’s all AI.
You have been vocal in arguing that LLMs can only get us so far. Do you think LLMs are overhyped these days? Can you summarize for our readers why you believe that LLMs are not enough?
There is a sense in which they have not been overhyped, which is that they are extremely useful to a lot of people, particularly if you write text, do research, or write code. LLMs manipulate language really well. But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.
The truly difficult part is understanding the real world. This is the Moravec Paradox (a phenomenon observed by the computer scientist Hans Moravec in 1988): What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car.
We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs, and it’s not going to happen next year or two years from now. It’s going to take a while. There are major conceptual breakthroughs that have to happen before we have AI systems that have human-level intelligence. And that is what I’ve been working on. And this company, AMI Labs, is focusing on the next generation.
And your solution is world models and JEPA architecture (JEPA, or “joint embedding predictive architecture,” is a learning framework that trains AI models to understand the world, created by LeCun while he was at Meta). What’s the elevator pitch?
The world is unpredictable. If you try to build a generative model that predicts every detail of the future, it will fail. JEPA is not generative AI. It is a system that learns to represent videos really well. The key is to learn an abstract representation of the world and make predictions in that abstract space, ignoring the details you can’t predict. That’s what JEPA does. It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world. The most exciting work so far on this is coming from academia, not the big industrial labs stuck in the LLM world.
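To make that concrete, here is a minimal, illustrative sketch of a JEPA-style objective in PyTorch. It is our simplification for readers, not AMI’s or Meta’s actual code: two views of the same observation are encoded, a predictor maps one embedding toward the other, and the loss is computed in the abstract latent space rather than on raw pixels.

```python
# Conceptual sketch of a joint embedding predictive objective (simplified).
import torch
import torch.nn as nn

dim = 128
# Encoders map raw observations into an abstract representation space.
context_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
target_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
# The predictor guesses the target's embedding from the context's embedding.
predictor = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

# Stand-in data: two "views" of the same scene (e.g. visible vs. masked patches,
# or frame t vs. frame t+1 of a video), flattened to vectors for simplicity.
context_view = torch.randn(32, 784)
target_view = torch.randn(32, 784)

z_context = context_encoder(context_view)
with torch.no_grad():
    # The target branch just supplies a training signal here; in practice it is
    # usually an exponential moving average of the context encoder.
    z_target = target_encoder(target_view)

# The loss lives in latent space: predict the representation, not the pixels.
loss = ((predictor(z_context) - z_target) ** 2).mean()
loss.backward()
```

Because nothing forces the model to reconstruct every pixel, it is free to ignore details of the future it cannot predict, which is the point LeCun is making.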
The lack of non-text data has been a problem in taking AI systems further in understanding the physical world. JEPA is trained on videos. What other kinds of data will you be using?
Our systems will be trained on video, audio, and sensor data of all kinds—not just text. We are working with various modalities, from the position of a robot arm to lidar data to audio. I’m also involved in a project using JEPA to model complex physical and clinical phenomena.
What are some of the concrete, real-world applications you envision for world models?
The applications are vast. Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory. There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable. An agentic system that is supposed to take actions in the world cannot work reliably unless it has a world model to predict the consequences of its actions. Without it, the system will inevitably make mistakes. This is the key to unlocking everything from truly useful domestic robots to Level 5 autonomous driving.
Humanoid robots are all the rage recently, especially ones built by companies from China. What’s your take?
There are all these brute-force ways to get around the limitations of learning systems, which require inordinate amounts of training data to do anything. So the secret of all the companies getting robots to do kung fu or dance is that the routines are all planned in advance. But frankly, nobody—absolutely nobody—knows how to make those robots smart enough to be useful. Take my word for it.
You need an enormous amount of tele-operation training data for every single task, and when the environment changes a little bit, it doesn’t generalize very well. What this tells us is we are missing something very big. The reason why a 17-year-old can learn to drive in 20 hours is because they already know a lot about how the world behaves. If we want a generally useful domestic robot, we need systems to have a kind of good understanding of the physical world. That’s not going to happen until we have good world models and planning.
There’s a growing sentiment that it’s becoming harder to do foundational AI research in academia because of the massive computing resources required. Do you think the most important innovations will now come from industry?
No. LLMs are now technology development, not research. It’s true that it’s very difficult for academics to play an important role there because of the requirements for computation, data access, and engineering support. But it’s a product now. It’s not something academia should even be interested in. It’s like speech recognition in the early 2010s—it was a solved problem, and the progress was in the hands of industry.
What academia should be working on is long-term objectives that go beyond the capabilities of current systems. That’s why I tell people in universities: Don’t work on LLMs. There is no point. You’re not going to be able to rival what’s going on in industry. Work on something else. Invent new techniques. The breakthroughs are not going to come from scaling up LLMs. The most exciting work on world models is coming from academia, not the big industrial labs. The whole idea of using attention circuits in neural nets came out of the University of Montreal. That research paper started the whole revolution. Now that the big companies are closing up, the breakthroughs are going to slow down. Academia needs access to computing resources, but they should be focused on the next big thing, not on refining the last one.
You wear many hats: professor, researcher, educator, public thinker … Now you just took on a new one. What is that going to look like for you?
I am going to be the executive chairman of the company, and Alex LeBrun [a former colleague from Meta AI] will be the CEO. It’s going to be LeCun and LeBrun—it’s nice if you pronounce it the French way.
I am going to keep my position at NYU. I teach one class per year, and I have PhD students and postdocs, so I am going to stay based in New York. But I go to Paris pretty often because of my lab.
Does that mean that you won’t be very hands-on?
Well, there’s two ways to be hands-on. One is to manage people day to day, and another is to actually get your hands dirty in research projects, right?
I can do management, but I don’t like doing it. This is not my mission in life. It’s really to make science and technology progress as far as we can, inspire other people to work on things that are interesting, and then contribute to those things. So that has been my role at Meta for the last seven years. I founded FAIR and led it for four to five years. I kind of hated being a director. I am not good at this career management thing. I’m much more a visionary and a scientist.
What makes Alex LeBrun the right fit?
Alex is a serial entrepreneur; he’s built three successful AI companies. The first he sold to Microsoft; the second to Facebook, where he was head of the engineering division of FAIR in Paris. He then left to create Nabla, a very successful company in the health-care space. When I offered him the chance to join me in this effort, he accepted almost immediately. He has the experience to build the company, allowing me to focus on science and technology.
You’re headquartered in Paris. Where else do you plan to have offices?
We are a global company. There’s going to be an office in North America.
New York, hopefully?
New York is great. That’s where I am, right? And it’s not Silicon Valley. Silicon Valley is a bit of a monoculture.
What about Asia? I’m guessing Singapore, too?
Probably, yeah. I’ll let you guess.
And how are you attracting talent?
We don’t have any issue recruiting. There are a lot of people in the AI research community who think the future of AI is in world models. Those people, regardless of pay package, will be motivated to come work for us because they believe in the technological future we are building. We’ve already recruited people from places like OpenAI, Google DeepMind, and xAI.
I heard that Saining Xie, a prominent researcher from NYU and Google DeepMind, might be joining you as chief scientist. Any comments?
Saining is a brilliant researcher. I have a lot of admiration for him. I hired him twice already. I hired him at FAIR, and I convinced my colleagues at NYU that we should hire him there. Let’s just say I have a lot of respect for him.
When will you be ready to share more details about AMI Labs, like financial backing or other core members?
There are many paths AI evolution could take. On one end of the spectrum, AI is dismissed as a marginal fad, another bubble fueled by notoriety and misallocated capital. On the other end, it’s cast as a dystopian force, destined to eliminate jobs on a large scale and destabilize economies. Markets oscillate between skepticism and the fear of missing out, while the technology itself evolves quickly and investment dollars flow at a rate not seen in decades.
All the while, many of today’s financial and economic thought leaders hold to the consensus that the financial landscape will stay the same as it has been for the last several years. Two years ago, Joseph Davis, global chief economist at Vanguard, and his team felt the same but wanted to develop their perspective on AI technology with a deeper foundation built on history and data. Based on a proprietary data set covering the last 130 years, Davis and his team developed a new framework, the Vanguard Megatrends Model, from research that suggested a more nuanced path than the hype extremes: that AI has the potential to be a general-purpose technology that lifts productivity, reshapes industries, and augments human work rather than displacing it. In short, AI will be neither marginal nor dystopian.
“Our findings suggest that the continuation of the status quo, the basic expectation of most economists, is actually the least likely outcome,” Davis says. “We project that AI will have an even greater effect on productivity than the personal computer did. And we project that a scenario where AI transforms the economy is far more likely than one where AI disappoints and fiscal deficits dominate. The latter would likely lead to slower economic growth, higher inflation, and increased interest rates.”
Implications for business leaders and workers
Davis does not sugar-coat it, however. Although AI promises economic growth and productivity, it will be disruptive, especially for business leaders and workers in knowledge sectors. “AI is likely to be the most disruptive technology to alter the nature of our work since the personal computer,” says Davis. “Those of a certain age might recall how the broad availability of PCs remade many jobs. It didn’t eliminate jobs as much as it allowed people to focus on higher value activities.”
The team’s framework allowed them to examine AI automation risk across more than 800 occupations. The research indicated that while AI-driven automation creates the potential for job loss in upwards of 20% of occupations, the majority of jobs—likely four out of five—will see a mixture of innovation and automation. Workers’ time will increasingly shift to higher value and uniquely human tasks.
This introduces the idea that AI could serve as a copilot to various roles, performing repetitive tasks and generally assisting with responsibilities. Davis argues that traditional economic models often underestimate the potential of AI because they fail to examine the deeper structural effects of technological change. “Most approaches for thinking about future growth, such as GDP, don’t adequately account for AI,” he explains. “They fail to link short-term variations in productivity with the three dimensions of technological change: automation, augmentation, and the emergence of new industries.” Automation enhances worker productivity by handling routine tasks; augmentation allows technology to act as a copilot, amplifying human skills; and the creation of new industries creates new sources of growth.
Implications for the economy
Ironically, Davis’s research suggests that a reason for the relatively low productivity growth in recent years may be a lack of automation. Despite a decade of rapid innovation in digital and automation technologies, productivity growth has lagged since the 2008 financial crisis, hitting 50-year lows. This appears to support the view that AI’s impact will be marginal. But Davis believes that automation has been adopted in the wrong places. “What surprised me most was how little automation there has been in services like finance, health care, and education,” he says. “Outside of manufacturing, automation has been very limited. That’s been holding back growth for at least two decades.” The services sector accounts for more than 60% of US GDP and 80% of the workforce and has experienced some of the lowest productivity growth. It is here, Davis argues, that AI will make the biggest difference.
One of the biggest challenges facing the economy is demographics, as the Baby Boomer generation retires, immigration slows, and birth rates decline. These demographic headwinds reinforce the need for technological acceleration. “There are concerns about AI being dystopian and causing massive job loss, but we’ll soon have too few workers, not too many,” Davis says. “Economies like the US, Japan, China, and those across Europe will need to step up automation as their populations age.”
For example, consider nursing, a profession in which empathy and human presence are irreplaceable. AI has already shown the potential to augment rather than automate in this field, streamlining data entry in electronic health records and helping nurses reclaim time for patient care. Davis estimates that these tools could increase nursing productivity by as much as 20% by 2035, a crucial gain as health-care systems adapt to ageing populations and rising demand. “In our most likely scenario, AI will offset demographic pressures. Within five to seven years, AI’s ability to automate portions of work will be roughly equivalent to adding 16 million to 17 million workers to the US labor force,” Davis says. “That’s essentially the same as if everyone turning 65 over the next five years decided not to retire.” He projects that more than 60% of occupations, including nurses, family physicians, high school teachers, pharmacists, human resource managers, and insurance sales agents, will benefit from AI as an augmentation tool.
Implications for all investors
As AI technology spreads, the strongest performers in the stock market won’t be its producers, but its users. “That makes sense, because general-purpose technologies enhance productivity, efficiency, and profitability across entire sectors,” says Davis. This adoption of AI is broadening the range of investment options, which means diversifying beyond technology stocks might be appropriate, as reflected in Vanguard’s Economic and Market Outlook for 2026. “As that happens, the benefits move beyond places like Silicon Valley or Boston and into industries that apply the technology in transformative ways.” And history shows that early adopters of new technologies reap the greatest productivity rewards. “We’re clearly in the experimentation phase of learning by doing,” says Davis. “Those companies that encourage and reward experimentation will capture the most value from AI.”
Looking globally, Davis sees the United States and China as significantly ahead in the AI race. “It’s a virtual dead heat,” he says. “That tells me the competition between the two will remain intense.” But other economies, especially those with low automation rates and large service sectors, like Japan, Europe, and Canada, could also see significant benefits. “If AI is truly going to be transformative, three sectors stand out: health care, education, and finance,” says Davis. “For AI to live up to its potential, it must fundamentally reshape these industries, which face high costs and rising demand for better, faster, more personalized services.”
Still, Davis says Vanguard is more bullish on AI’s potential to transform the economy than it was just a year ago, even though that transformation requires application well beyond Silicon Valley. “When I speak to business leaders, I remind them that this transformation hasn’t happened yet,” says Davis. “It’s their investment and innovation that will determine whether it does.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” with the premise being that countries should be in control of their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.
But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.
If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to a vision that emphasizes the concept of orchestration, balancing national autonomy with strategic partnership.
This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays.
And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.
What works: An orchestrated sovereignty
What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape.
The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them.
Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete.
Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size.
South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.
Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks and on foreign lithography equipment, such as extreme ultraviolet systems needed to manufacture advanced chips and GPU architectures, shows the limits of techno-nationalism.
The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone.
Three ways to align ambition with reality
1. Measure added value, not inputs.
Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.
Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.
2. Cultivate a strong AI innovation ecosystem.
Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.
3. Build global partnerships.
Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.
What’s at stake
Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete.
The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter.
The winners will be those who define sovereignty not as separation, but as participation plus leadership—choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it’s achievable, and it will separate the leaders from the followers over the next decade.
The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing.
Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
All anyone wants to talk about at Davos is AI and Donald Trump
—Mat Honan, MIT Technology Review’s editor in chief
At Davos this year Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild. The US president is due to speak here today, amid threats of seizing Greenland and fears that he’s about to permanently fracture the NATO alliance.
This subscriber-only story appeared first in The Debrief, Mat’s weekly newsletter about the biggest stories in tech. Sign up here to get the next one in your inbox, and subscribe if you haven’t already!
The UK government is backing AI that can run its own lab experiments
A number of startups and university teams that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D.
The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work. Read the full story to learn more.
—Will Douglas Heaven
Everyone wants AI sovereignty. No one can truly have it.
—Cathy Li, head of the Centre for AI Excellence at the World Economic Forum
Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” with the premise being that countries should be in control of their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines.
This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine. But the pursuit of absolute autonomy is running into reality: AI supply chains are irreducibly global. If sovereignty is to remain meaningful, it must shift from defensive self-reliance to a vision that balances national autonomy with strategic partnership. Read the full story.
Here’s how extinct DNA could help us in the present—and the future
Thanks to genetic science, gene editing, and techniques like cloning, it’s now possible to move DNA through time, studying genetic information in ancient remains and then re-creating it in the bodies of modern beings. And that, scientists say, offers new ways to try to help endangered species, engineer new plants that resist climate change, or even create new human medicines.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The White House wants Americans to embrace AI It faces an uphill battle—the US public is mostly pretty gloomy about AI’s impact. (WP $) + What’s next for AI in 2026. (MIT Technology Review)
2 The UN says we’re entering an “era of water bankruptcy” And it’s set to affect the vast majority of us on the planet. (Reuters $) + Water shortages are fueling the protests in Iran. (Undark) + This Nobel Prize–winning chemist dreams of making water from thin air. (MIT Technology Review)
3 How is US science faring after a year of Trump? Not that well, after proposed budget cuts amounting to $32 billion. (Nature $) + The foundations of America’s prosperity are being dismantled. (MIT Technology Review)
4 We need to talk about the early career AI jobs crisis Young people are graduating and finding there simply aren’t any roles for them to do. (NY Mag $) + AI companies are fighting to win over teachers. (Axios $) + Chinese universities want students to use more AI, not less. (MIT Technology Review)
5 The AI boyfriend business is booming in China And it’s mostly geared towards Gen Z women. (Wired $) + It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)
6 Snap has settled a social media addiction lawsuit ahead of a trial However, the other defendants, including Meta, TikTok, and YouTube, are still fighting it. (BBC) + A new study is going to examine the effects of restricting social media for children. (The Guardian)
7 Here are some of the best ideas of this century so far From smartphones to HIV drugs, the pace of progress has been dizzying. (New Scientist $)
8 Robots may be on the cusp of becoming very capable Until now, their role in the world of work has been limited. AI could radically change that. (FT $) + Why the humanoid workforce is running late. (MIT Technology Review)
9 Scientists are racing to put a radio telescope on the moon If they succeed, it will be able to ‘hear’ all the way back to over 13 billion years ago, just 380,000 years after the big bang. (IEEE Spectrum) + Inside the quest to map the universe with mysterious bursts of radio energy. (MIT Technology Review)
10 It turns out cows can use tools What will we discover next? Flying pigs?! (Futurism)
Quote of the day
“We’re still staggering along, but I don’t know for how much longer. I don’t have the energy any more.”
—A researcher at the National Oceanic and Atmospheric Administration tells Nature they and their colleagues are exhausted by the Trump administration’s attacks on science.
One more thing
Palmer Luckey on the Pentagon’s future of mixed reality
Palmer Luckey has, in some ways, come full circle.
His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.
Now Luckey is redirecting his energy again, to headsets for the military. He spoke to MIT Technology Review about his plans. Read the full interview.
—James O’Donnell
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ I want to skip around every single one of these beautiful gardens. + Your friends help you live longer. Isn’t that nice of them?! + Brb, just buying a pharaoh headdress for my cat. + Consider this your annual reminder that you don’t need a gym membership or fancy equipment to get fitter.
This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.
Hello from the World Economic Forum annual meeting in Davos, Switzerland. I’ve been here for two days now, attending meetings, speaking on panels, and basically trying to talk to anyone I can. And as far as I can tell, the only things anyone wants to talk about are AI and Trump.
Davos is physically defined by the Congress Center, where the official WEF sessions take place, and the Promenade, a street running through the center of the town lined with various “houses”—mostly retailers that are temporarily converted into meeting hubs for corporate or national sponsors. So there is a Ukraine House, a Brazil House, a Saudi House, and yes, a USA House (more on that tomorrow). There are a handful of media houses from the likes of CNBC and the Wall Street Journal. Some houses are devoted to specific topics; for example, there’s one for science and another for AI.
But like everything else in 2026, the Promenade is dominated by tech companies. At one point I realized that literally everything I could see, in a spot where the road bends a bit, was a tech company house. Palantir, Workday, Infosys, Cloudflare, C3.ai. Maybe this should go without saying, but their presence, both in the houses and on the various stages and parties and platforms here at the World Economic Forum, really drove home to me how utterly and completely tech has captured the global economy.
While the houses host events and serve as networking hubs, the big show is inside the Congress Center. On Tuesday morning, I kicked off my official Davos experience there by moderating a panel with the CEOs of Accenture, Aramco, Royal Philips, and Visa. The topic was scaling up AI within organizations. All of these leaders represented companies that have gone from pilot projects to large internal implementations. It was, for me, a fascinating conversation. You can watch the whole thing here, but my takeaway was that while there are plenty of stories about AI being overhyped (including from us), it is certainly having substantive effects at large companies.
Aramco CEO Amin Nasser, for example, described how that company has found $3 billion to $5 billion in cost savings by improving the efficiency of its operations. Royal Philips CEO Roy Jakobs described how AI is allowing health-care practitioners to spend more time with patients by doing things such as automated note-taking. (This really resonated with me, as my wife is a pediatrics nurse, and for decades now I’ve heard her talk about how much of her time is devoted to charting.) And Visa CEO Ryan McInerney talked about his company’s push into agentic commerce and the way that will play out for consumers, small businesses, and the global payments industry.
To elaborate a little on that point, McInerney painted a picture of commerce where agents won’t just shop for things you ask them to, which will be basically step one, but will eventually be able to shop for things based on your preferences and previous spending patterns. This could be your regular grocery shopping, or even a vacation getaway. That’s going to require a lot of trust and authentication to protect both merchants and consumers, but it is clear that the steps into agentic commerce we saw in 2025 were just baby ones. There are much bigger ones coming for 2026. (Coincidentally, I had a discussion with a senior executive from Mastercard on Monday, who made several of the same points.)
But the thing that really resonated with me from the panel was a comment from Accenture CEO Julie Sweet, who has a view not only of her own large org but across a spectrum of companies: “It’s hard to trust something until you understand it.”
I felt that neatly summed up where we are as a society with AI.
Clearly, other people feel the same. Before the official start of the conference I was at AI House for a panel. The place was packed. There was a consistent, massive line to get in, and once inside, I literally had to muscle my way through the crowd. Everyone wanted to get in. Everyone wanted to talk about AI.
(A quick aside on what I was doing there: I sat on a panel called “Creativity and Identity in the Age of Memes and Deepfakes,” led by Atlantic CEO Nicholas Thompson; it featured the artist Emi Kusano, who works with AI, and Duncan Crabtree-Ireland, the chief negotiator for SAG-AFTRA, who has been at the center of a lot of the debates about AI in the film and gaming industries. I’m not going to spend much time describing it because I’m already running long, but it was a rip-roarer of a panel. Check it out.)
And, okay. Sigh. Donald Trump.
The president is due here Wednesday, amid threats of seizing Greenland and fears that he’s about to permanently fracture the NATO alliance. While AI is all over the stages, Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild.
These conversations are also starting to spill out into the public. Just after my panel on Tuesday, I headed to a pavilion outside the main hall in the Congress Center. I saw someone coming down the stairs with a small entourage, who was suddenly mobbed by cameras and phones.
Moments earlier in the same spot, the press had been surrounding David Beckham, shouting questions at him. So I was primed for it to be another celebrity—after all, captains of industry were everywhere you looked. I mean, I had just bumped into Eric Schmidt, who was literally standing in line in front of me at the coffee bar. Davos is weird.
But in fact, it was Gavin Newsom, the governor of California, who is increasingly seen as the leading voice of the Democratic opposition to President Trump, and a likely contender, or even front-runner, in the race to replace him. Because I live in San Francisco I’ve encountered Newsom many times, dating back to his early days as a city supervisor before he was even mayor. I’ve rarely, rarely, seen him quite so worked up as he was on Tuesday.
Among other things, he called Trump a narcissist who follows “the law of the jungle, the rule of Don” and compared him to a T-Rex, saying, “You mate with him or he devours you.” And he was just as harsh on the world leaders, many of whom are gathered in Davos, calling them “pathetic” and saying he should have brought knee pads for them.
Yikes.
There was more of this sentiment, if in more measured tones, from Canadian prime minister Mark Carney during his address at Davos. While I missed his remarks, they had people talking. “If we’re not at the table, we’re on the menu,” he argued.
The story of enterprise resource planning (ERP) is really a story of businesses learning to organize themselves around the latest, greatest technology of the times. In the 1960s through the ’80s, mainframes, material requirements planning (MRP), and manufacturing resource planning (MRP II) brought core business data from file cabinets to centralized systems. Client-server architectures defined the ’80s and ’90s, taking digitization mainstream during the internet’s infancy. And in the 21st century, as work moved beyond the desktop, SaaS and cloud ushered in flexible access and elastic infrastructure.
The rise of composability and agentic AI marks yet another dawn—and an apt one for the nascent intelligence age. Composable architectures let organizations assemble capabilities from multiple systems in a mix-and-match fashion, so they can swap vendor gridlock for an à la carte portfolio of fit-for-purpose modules. On top of that architectural shift, agentic AI enables coordination across systems that weren’t originally designed to talk to one another.
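As a rough illustration of what such an orchestration layer can look like, the Python sketch below wires a hypothetical agent to two independent, swappable modules for invoicing and inventory. The module names and interfaces are invented for the example; the point is simply that the agent coordinates fit-for-purpose components rather than living inside one monolithic system.

```python
# Illustrative sketch only: a tiny "orchestration layer" coordinating two
# independent, swappable modules. All names and interfaces are hypothetical.

from typing import Protocol


class InvoicingModule(Protocol):
    def create_invoice(self, customer: str, amount: float) -> str: ...


class InventoryModule(Protocol):
    def reserve_stock(self, sku: str, quantity: int) -> bool: ...


class SimpleInvoicing:
    def create_invoice(self, customer: str, amount: float) -> str:
        # Toy invoice ID; a real module would talk to a finance system.
        return f"INV-{abs(hash((customer, amount))) % 10000:04d}"


class SimpleInventory:
    def __init__(self) -> None:
        self.stock = {"WIDGET": 10}

    def reserve_stock(self, sku: str, quantity: int) -> bool:
        if self.stock.get(sku, 0) >= quantity:
            self.stock[sku] -= quantity
            return True
        return False


class OrderAgent:
    """Agentic layer: turns a multi-step order into calls across modules."""

    def __init__(self, invoicing: InvoicingModule, inventory: InventoryModule):
        self.invoicing = invoicing
        self.inventory = inventory

    def process_order(self, customer: str, sku: str, qty: int, price: float) -> str:
        if not self.inventory.reserve_stock(sku, qty):  # inventory module
            return "Order rejected: insufficient stock."
        invoice_id = self.invoicing.create_invoice(customer, qty * price)  # invoicing module
        return f"Order confirmed for {customer}, invoice {invoice_id}."


if __name__ == "__main__":
    agent = OrderAgent(SimpleInvoicing(), SimpleInventory())
    print(agent.process_order("Acme Corp", "WIDGET", 3, 19.99))
```

Because each module satisfies only a small interface, either one could be swapped for a different vendor’s implementation without touching the agent, which is the essence of the composability argument.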
Early indicators suggest that AI-enabled ERP will yield meaningful performance gains: One 2024 study found that organizations implementing AI-driven ERP solutions stand to gain around a 30% boost in user satisfaction and a 25% lift in productivity; another suggested that AI-driven ERP can lead to processing time savings of up to 45%, as well as improvements in decision accuracy to the tune of 60%.
These dual advancements deliver what previous ERP eras fell short of: freedom to innovate outside of vendor roadmaps, capacity for rapid iteration, and true interoperability across all critical functions. This shift signals the end of monolithic dependency as well as a once-in-a-generation opportunity for early movers to gain a competitive edge.
Key takeaways include:
Enterprises are moving away from monolithic ERP vendor upgrades in favor of modular architectures that allow them to change or modernize components independently while keeping a stable core for essential transactions.
Agentic AI is a timely complement to composability, functioning as a UX and orchestration layer that can coordinate workflows across disparate systems and turn multi-step processes into automated, cross-platform operations.
These dual shifts are finally enabling technology architecture to organize around the business, instead of the business around the ERP. Companies can modernize by reconfiguring and extending what they already have, rather than relying on ERP-centric upgrades.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now.
The agent explosion is coming
Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience.
The transformation toward an agent-driven enterprise is inevitable. The economic benefits are too significant to ignore, and the potential is becoming a reality faster than most predicted. The problem? Most businesses and their underlying infrastructure are not prepared for this shift. Early adopters have found unlocking AI initiatives at scale to be extremely challenging.
The reliability gap that’s holding AI back
Companies are investing heavily in AI, but the returns aren’t materializing. According to recent research from Boston Consulting Group, 60% of companies report minimal revenue and cost gains despite substantial investment. The leaders, however, reported revenue increases five times larger and cost reductions three times larger than the rest. Clearly, there is a massive premium for being a leader.
What separates the leaders from the pack isn’t how much they’re spending or which models they’re using. Before scaling AI deployment, these “future-built” companies put critical data infrastructure capabilities in place. They invested in the foundational work that enables AI to function reliably.
A framework for agent reliability: The four quadrants
To understand how and where enterprise AI can fail, consider four critical quadrants: models, tools, context, and governance.
Take a simple example: an agent that orders you pizza. The model interprets your request (“get me a pizza”). The tool executes the action (calling the Domino’s or Pizza Hut API). Context provides personalization (you tend to order pepperoni on Friday nights at 7pm). Governance validates the outcome (did the pizza actually arrive?).
Each dimension represents a potential failure point:
Models: The underlying AI systems that interpret prompts, generate responses, and make predictions
Tools: The integration layer that connects AI to enterprise systems, such as APIs, protocols, and connectors
Context: The information agents need to understand the full business picture before making decisions, including customer histories, product catalogs, and supply chain networks
Governance: The policies, controls, and processes that ensure data quality, security, and compliance
This framework helps diagnose where reliability gaps emerge. When an enterprise agent fails, which quadrant is the problem? Is the model misunderstanding intent? Are the tools unavailable or broken? Is the context incomplete or contradictory? Or is there no mechanism to verify that the agent did what it was supposed to do?
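To show where each quadrant sits in an agent’s control flow, here is a deliberately simplified Python sketch of the pizza example. Everything in it, from the function names to the stubbed API call, is a hypothetical stand-in rather than any vendor’s actual interface; the value of the exercise is seeing which stage to blame when the pizza never shows up.

```python
# A toy sketch of the four-quadrant framing: model, tools, context, governance.
# All names and behaviors are invented stand-ins, not real APIs.

from dataclasses import dataclass, field


@dataclass
class Context:
    """Context quadrant: what the agent knows about the user."""
    favorite_topping: str = "pepperoni"
    order_history: list = field(default_factory=list)


def interpret_request(prompt: str) -> dict:
    """Model quadrant: turn a natural-language request into an intent.
    Stubbed with a keyword check instead of a real LLM call."""
    if "pizza" in prompt.lower():
        return {"intent": "order_pizza"}
    return {"intent": "unknown"}


def call_pizza_api(topping: str) -> dict:
    """Tool quadrant: the integration layer. A real agent would call a
    vendor API; this stub pretends the order succeeded."""
    return {"status": "delivered", "topping": topping}


def validate_outcome(result: dict) -> bool:
    """Governance quadrant: did the pizza actually arrive?"""
    return result.get("status") == "delivered"


def pizza_agent(prompt: str, ctx: Context) -> str:
    intent = interpret_request(prompt)              # model
    if intent["intent"] != "order_pizza":
        return "Failure in the model quadrant: request not understood."
    result = call_pizza_api(ctx.favorite_topping)   # tool, personalized by context
    if not validate_outcome(result):                # governance
        return "Failure in the governance quadrant: outcome not verified."
    ctx.order_history.append(result)
    return f"Ordered a {result['topping']} pizza."


if __name__ == "__main__":
    print(pizza_agent("get me a pizza", Context()))
```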
Tooling is also accelerating. Integration frameworks like the Model Context Protocol (MCP) make it dramatically easier to connect agents with enterprise systems and APIs.
If models are powerful and tools are maturing, then what is holding back adoption?
To borrow from James Carville, “It is the data, stupid.” The root cause of most misbehaving agents is misaligned, inconsistent, or incomplete data.
Enterprises have accumulated data debt over decades. Acquisitions, custom systems, departmental tools, and shadow IT have left data scattered across silos that rarely agree. Support systems do not match what is in marketing systems. Supplier data is duplicated across finance, procurement, and logistics. Locations have multiple representations depending on the source.
Drop a few agents into this environment, and they will perform wonderfully at first, because each one is given a curated set of systems to call. Add more agents and the cracks grow, as each one builds its own fragment of truth.
This dynamic has played out before. When business intelligence became self-serve, everyone started creating dashboards. Productivity soared, but reports failed to match. Now imagine that phenomenon not in static dashboards, but in AI agents that can take action. With agents, data inconsistency produces real business consequences, not just debates among departments.
Companies that build unified context and robust governance can deploy thousands of agents with confidence, knowing they’ll work together coherently and comply with business rules. Companies that skip this foundational work will watch their agents produce contradictory results, violate policies, and ultimately erode trust faster than they create value.
Leverage agentic AI without the chaos
The question for enterprises centers on organizational readiness. Will your company prepare the data foundation needed to make agent transformation work? Or will you spend years debugging agents, one issue at a time, forever chasing problems that originate in infrastructure you never built?
Autonomous agents are already transforming how work gets done. But the enterprise will only experience the upside if those systems operate from the same truth. This ensures that when agents reason, plan, and act, they do so based on accurate, consistent, and up-to-date information.
The companies generating value from AI today have built on fit-for-purpose data foundations. They recognized early that in an agentic world, data functions as essential infrastructure. A solid data foundation is what turns experimentation into dependable operations.
At Reltio, the focus is on building that foundation. The Reltio data management platform unifies core data from across the enterprise, giving every agent immediate access to the same business context. This unified approach enables enterprises to move faster, act smarter, and unlock the full value of AI.
Agents will define the future of the enterprise. Context intelligence will determine who leads it.
For leaders navigating this next wave of transformation, see Reltio’s practical guide: Unlocking Agentic AI: A Business Playbook for Data Readiness. Get your copy now to learn how real-time context becomes the decisive advantage in the age of intelligence.
A number of startups and university teams that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that supports moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from researchers who are already developing tools that can automate significant stretches of lab work.
ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.
“There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer.
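Read literally, that definition describes a closed loop: propose, test, analyze, repeat. The Python sketch below is a deliberately simplified illustration of that loop, with every component stubbed out as a hypothetical placeholder; none of it corresponds to the actual systems ARIA is funding.

```python
# A highly simplified sketch of the hypothesize-experiment-analyze loop.
# Every function is a hypothetical stub, not a real lab system or model.

import random


def propose_hypothesis(question: str, prior_results: list) -> str:
    """Stand-in for an LLM proposing the next hypothesis to test."""
    return f"hypothesis #{len(prior_results) + 1} for: {question}"


def run_experiment(hypothesis: str) -> float:
    """Stand-in for an automated lab executing the experiment."""
    return random.random()  # pretend this is a measured outcome


def analyze(result: float) -> bool:
    """Stand-in for analysis deciding whether the result is promising."""
    return result > 0.9


def ai_scientist(question: str, max_iterations: int = 10) -> list:
    """The human overseer sets the question; the loop does the grunt work."""
    results = []
    for _ in range(max_iterations):
        hypothesis = propose_hypothesis(question, results)
        outcome = run_experiment(hypothesis)
        results.append((hypothesis, outcome))
        if analyze(outcome):  # stop once something promising turns up
            break
    return results


if __name__ == "__main__":
    for hypothesis, outcome in ai_scientist("optimize quantum dot synthesis"):
        print(f"{hypothesis}: {outcome:.2f}")
```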
ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.
Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.
“We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”
Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.
And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. Those experiments will then be run in an automated lab.
Taking the temperature
Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to determine how the way science is done is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.
Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”
For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”
Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.
An AI scientist sits in a layer above those human-made tools and calls on them as needed, says Rowstron. “There’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”
Going off the rails
But we’re not there yet, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.
There are also unsolved problems with agentic systems in general, which limit how long they can run by themselves without going off the rails and making errors. For example, a study titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included “deviation from original research specifications toward simpler, more familiar solutions” and “overexcitement that declares success despite obvious failures.”
“Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”
“But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The man who made India digital isn’t done yet
Nandan Nilekani can’t stop trying to push India into the future. He started nearly 30 years ago, masterminding an ongoing experiment in technological state capacity that started with Aadhaar—the world’s largest digital identity system.
Using Aadhaar as the bedrock, Nilekani and people working with him went on to build a sprawling collection of free, interoperating online tools that add up to nothing less than a digital infrastructure for society, covering government services, banking, and health care. They offer convenience and access that would be eye-popping in wealthy countries a tenth of India’s size.
Many Americans agree that it’s acceptable to screen embryos for severe genetic diseases. Far fewer say it’s okay to test for characteristics related to a future child’s appearance, behavior, or intelligence. But a few startups are now advertising what they claim is a way to do just that.
This new kind of testing—which can cost up to $50,000—is incredibly controversial. Nevertheless, the practice has grown popular in Silicon Valley, and it’s becoming more widely available to everyone. Read the full story.
—Julia Black
Embryo scoring is one of our 10 Breakthrough Technologies this year. Check out what else made the list, and scroll down to vote for the technology you think deserves the 11th slot.
Five AI predictions for 2026
What will surprise us most about AI in 2026?
Tune in at 12.30pm ET today to hear me, our senior AI editor Will Douglas Heaven, and senior AI reporter James O’Donnell discuss our “5 AI Predictions for 2026.” This special LinkedIn Live event will explore the trends that are poised to transform the next twelve months of AI. The conversation will also offer a first glimpse at EmTech AI 2026, MIT Technology Review’s longest-running AI event for business leadership. Sign up to join us later today!
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Europe is trying to build its own DeepSeek That’s been a goal for a while, but US hostility is making those efforts newly urgent. (Wired $) + Plenty of Europeans want to wean off US technology. That’s easier said than done. (New Scientist $) + DeepSeek may have found a new way to improve AI’s ability to remember. (MIT Technology Review $)
2 Ship-tracking data shows China is creating massive floating barriers The maneuvers show that Beijing can now rapidly muster large numbers of the boats in disputed seas. (NYT $) + Quantum navigation could solve the military’s GPS jamming problem. (MIT Technology Review)
3 The AI bubble risks disrupting the global economy, says the IMF But it’s hard to see anyone pumping the brakes any time soon. (FT $) + British politicians say the UK is being exposed to ‘serious harm’ by AI risks. (The Guardian) + What even is the AI bubble? (MIT Technology Review)
4 Cryptocurrencies are dying in record numbers In an era of one-off joke coins and pump and dump scams, that’s surely a good thing. (Gizmodo) + President Trump has pardoned a lot of people who’ve committed financial crimes. (NBC)
5 Threads has more global daily mobile users than X now And once-popular alternative Bluesky barely even makes the charts. (Forbes)
6 The UK is considering banning under 16s from social media Just weeks after a similar ban took effect in Australia. (BBC)
7 You can burn yourself out with AI coding agents They could be set to make experienced programmers busier than ever before. (Ars Technica) + Why Anthropic’s Claude Code is taking the AI world by storm. (WSJ $) + AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)
8 Some tech billionaires are leaving California Not all though—the founders of Nvidia and Airbnb say they’ll stay and pay the 5% wealth tax. (WP $) + Tech bosses’ support for Trump is paying off for them big time. (FT $)
9 Matt Damon says Netflix tells directors to repeat movie plots To accommodate all the people using their phones. (NME)
10 Why more people are going analog in 2026 Crafting, reading, and other screen-free hobbies are on the rise. (CNN) + Dumbphones are becoming popular too—but it’s worth thinking hard before you switch. (Wired $)
Quote of the day
“It may sound like American chauvinism…and it is. We’re done apologising about that.”
—Thomas Dans, a Trump appointee who heads the US Arctic Research Commission, tells the FT his boss is deadly serious about acquiring Greenland.
One more thing
Inside the fierce, messy fight over “healthy” sugar tech
On the outskirts of Charlottesville, Virginia, a new kind of sugar factory is taking shape. The facility is being developed by a startup called Bonumose. It uses a processed corn product called maltodextrin that is found in many junk foods and is calorically similar to table sugar (sucrose).
But for Bonumose, maltodextrin isn’t an ingredient—it’s a raw material. When it’s poured into the company’s bioreactors, what emerges is tagatose. Found naturally in small concentrations in fruit, some grains, and milk, it is nearly as sweet as sucrose but apparently with only around half the calories, and wider health benefits.
Bonumose’s process originated in a company spun out of the Virginia Tech lab of Yi-Heng “Percival” Zhang. When MIT Technology Review spoke to Zhang, he was sitting alone in an empty lab in Tianjin, China, after serving a two-year sentence of supervised release in Virginia for conspiracy to defraud the US government, making false statements, and obstruction of justice. If sugar is the new oil, the global battle to control it has already begun. Read the full story.
—Mark Harris
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Paul Mescal just keeps getting cooler. + Make this year calmer with these evidence-backed tips. ($) + I can confirm that Lumie wake-up lamps really are worth it (and no one paid me to say so!). + There are some real gems in Green Day bassist Mike Dirnt’s list of favorite albums.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What it’s like to be banned from the US for fighting online hate
Just before Christmas the Trump administration dramatically escalated its war on digital rights by banning five people from entering the US. One of them, Josephine Ballon, is a director of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. The organization is a strong advocate of EU tech regulations, and so finds itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship.
EU officials, freedom of speech experts, and the five people targeted all flatly reject these accusations. Ballon told us that their work is fundamentally about making people feel safer online. But their experiences over the past few weeks show just how politicized and besieged their work in online safety has become. Read the full story.
—Eileen Guo
TR10: AI companions
Chatbots are skilled at crafting sophisticated dialogue and mimicking empathetic behavior. They never get tired of chatting. It’s no wonder, then, that so many people now use them for companionship—forging friendships or even romantic relationships.
Some 72% of US teenagers have used AI for companionship, according to a study from the nonprofit Common Sense Media. But while chatbots can provide much-needed emotional support and guidance for some people, they can exacerbate underlying problems in others—especially vulnerable people or those with mental health issues.
And, if you want to learn more about what we predict for AI this year, sign up to join me for our free LinkedIn Live event tomorrow at 12.30pm ET.
Why inventing new emotions feels so good
Have you ever felt “velvetmist”?
It’s a “complex and subtle emotion that elicits feelings of comfort, serenity, and a gentle sense of floating.” It’s peaceful, but more ephemeral and intangible than contentment. It might be evoked by the sight of a sunset or a moody, low-key album.
If you haven’t ever felt this sensation—or even heard of it—that’s not surprising. A Reddit user generated it with ChatGPT, along with advice on how to evoke the feeling. Don’t scoff: Researchers say more and more terms for these “neo-emotions” are showing up online, describing new dimensions and aspects of feeling. Read our story to learn more about why.
—Anya Kamenetz
This story is from the latest print issue of MIT Technology Review. If you haven’t already, subscribe now to receive the next edition as soon as it lands (and benefit from some hefty seasonal discounts too!)
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Ads are coming to ChatGPT For American users initially, with plans to expand soon. (CNN) + Here’s how they’ll work. (Wired $)
2 What will we be able to salvage after the AI bubble bursts? It will be ugly, but there are plenty of good uses for AI that we’ll want to keep. (The Guardian) + What even is the AI bubble? (MIT Technology Review)
3 It’s almost impossible to mine Greenland’s natural resources It has vast supplies of rare earth elements, but its harsh climate and environment make them very hard to access. (The Week)
4 Iran is now 10 days into its internet shutdown It’s one of the longest and most extreme we’ve ever witnessed. (BBC) + Starlink isn’t proving as helpful as hoped as the regime finds ways to jam it. (Reuters $) + Battles are raging online about what’s really going on inside Iran. (NYT $)
5 America is heading for a Polymarket disaster Prediction markets are getting out of control, and some people are losing a lot of money. (The Atlantic $) + They were first embraced by political junkies, but now they’re everywhere. (NYT $)
6 How to fireproof a city Californians are starting to fight fires before they can even start. (The Verge $) + How AI can help spot wildfires. (MIT Technology Review)
7 Stoking ‘deep state’ conspiracy theories can be dangerous Especially if you’re then given the task of helping run one of those state institutions, as Dan Bongino is now learning. (WP $) + Why everything is a conspiracy now. (MIT Technology Review)
8 Why we’re suddenly all having a ‘Very Chinese Time’ It’s a fun, flippant trend—but it also shows how China’s soft power is growing around the globe. (Wired $)
9 Why there’s no one best way to store information Each one involves trade-offs between space and time. (Quanta $)
10 Meat may play a surprising role in helping people reach 100 Perhaps because it can assist with building stronger muscles and bones. (New Scientist $)
Quote of the day
“That’s the level of anxiety now – people watching the skies and the seas themselves because they don’t know what else to do.”
—A Greenlander tells The Guardian just how seriously she and her compatriots are taking Trump’s threat to invade their country.
One more thing
Inside a romance scam compound—and how people get tricked into being there
Gavesh’s journey started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.
Instead, he found himself trafficked into a business commonly known as “pig butchering”—a form of fraud in which scammers form close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar.
Big Tech may hold the key to breaking up the scam syndicates—if these companies can be persuaded or compelled to act. Read the full story.
—Peter Guest & Emily Fishbein
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Blue Monday isn’t real (but it is an absolute banger of a track.) + Some great advice here about how to be productive during the working day. + Twelfth Night is one of Shakespeare’s most fun plays—as these top actors can attest. + If the cold and dark gets to you, try making yourself a delicious bowl of soup.
Today marks an inflection point for enterprise AI adoption. Despite billions invested in generative AI, only 5% of integrated pilots deliver measurable business value and nearly one in two companies abandons AI initiatives before reaching production.
The bottleneck is not the models themselves. What’s holding enterprises back is the surrounding infrastructure: Limited data accessibility, rigid integration, and fragile deployment pathways prevent AI initiatives from scaling beyond early LLM and RAG experiments. In response, enterprises are moving toward composable and sovereign AI architectures that lower costs, preserve data ownership, and adapt to the rapid, unpredictable evolution of AI—a shift IDC expects 75% of global businesses to make by 2027.
The concept-to-production reality
AI pilots almost always work, and that’s the problem. Proofs of concept (PoCs) are meant to validate feasibility, surface use cases, and build confidence for larger investments. But they thrive in conditions that rarely resemble the realities of production.
[Chart source: Compiled by MIT Technology Review Insights with data from Informatica’s CDO Insights 2025 report, 2026]
“PoCs live inside a safe bubble,” observes Cristopher Kuehl, chief data officer at Continent 8 Technologies. Data is carefully curated, integrations are few, and the work is often handled by the most senior and motivated teams.
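To make that safe bubble concrete, here is a purely illustrative sketch, not taken from the report, of the kind of small, curated RAG proof of concept enterprises typically start with. The documents, function names, and keyword-overlap scoring below are hypothetical stand-ins; a production system would use real data pipelines, an embedding-backed vector store, and an actual LLM call.

```python
# Illustrative sketch only: a toy retrieval-augmented generation (RAG) pilot.
# Documents, names, and the scoring heuristic are hypothetical stand-ins;
# a real deployment would use embeddings, a vector store, and an LLM API.

CURATED_DOCS = {
    "refund_policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "warranty": "Hardware is covered by a one-year limited warranty.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank the hand-picked snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CURATED_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> str:
    """Stitch retrieved context into a prompt; a real pilot would send this to an LLM."""
    context = "\n".join(retrieve(question))
    return f"[LLM prompt]\nContext:\n{context}\nQuestion: {question}"

if __name__ == "__main__":
    print(answer("How long do refunds take?"))
```

The point of the sketch is that everything is hand-picked and held in memory, which is precisely the condition that rarely survives contact with production-scale data, integrations, and deployment pathways.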
The result, according to Gerry Murray, research director at IDC, is not so much pilot failure as structural mis-design: Many AI initiatives are effectively “set up for failure from the start.”
It was early evening in Berlin, just a day before Christmas Eve, when Josephine Ballon got an unexpected email from US Customs and Border Protection. The status of her ability to travel to the United States had changed—she’d no longer be able to enter the country.
At first, she couldn’t find any information online as to why, though she had her suspicions. She was one of the directors of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. As the organization has become a strong advocate of EU tech regulations, it has increasingly found itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship.
It was only later that she saw what US Secretary of State Marco Rubio had posted on X:
For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose. The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship.
Rubio was promoting a conspiracy theory about what he has called the “censorship-industrial complex,” which alleges widespread collusion between the US government, tech companies, and civil society organizations to silence conservative voices—the very conspiracy theory HateAid has recently been caught up in.
Then Undersecretary of State Sarah B. Rogers posted on X the names of the people targeted by travel bans. The list included Ballon, as well as her HateAid co-director, Anna Lena von Hodenberg. Also named were three others doing similar or related work: former EU commissioner Thierry Breton, who had helped author Europe’s Digital Services Act (DSA); Imran Ahmed of the Center for Countering Digital Hate, which documents hate speech on social media platforms; and Clare Melford of the Global Disinformation Index, which provides risk ratings warning advertisers about placing ads on websites promoting hate speech and disinformation.
It was an escalation in the Trump administration’s war on digital rights—fought in the name of free speech. But EU officials, freedom of speech experts, and the five people targeted all flatly reject the accusations of censorship. Ballon, von Hodenberg, and some of their clients tell me that their work is fundamentally about making people feel safer online. And their experiences over the past few weeks show just how politicized and besieged their work in online safety has become. They almost certainly won’t be the last people targeted in this way.
Ballon was the one to tell von Hodenberg that both their names were on the list. “We kind of felt a chill in our bones,” von Hodenberg told me when I caught up with the pair in early January.
But she added that they also quickly realized, “Okay, it’s the old playbook to silence us.” So they got to work—starting with challenging the narrative the US government was pushing about them.
Within a few hours, Ballon and von Hodenberg had issued a strongly worded statement refuting the allegations: “We will not be intimidated by a government that uses accusations of censorship to silence those who stand up for human rights and freedom of expression,” they wrote. “We demand a clear signal from the German government and the European Commission that this is unacceptable. Otherwise, no civil society organisation, no politician, no researcher, and certainly no individual will dare to denounce abuses by US tech companies in the future.”
Those signals came swiftly. On X, Johann Wadephul, the German foreign minister, called the entry bans “not acceptable,” adding that “the DSA was democratically adopted by the EU, for the EU—it does not have extraterritorial effect.” Also on X, French president Emmanuel Macron wrote that “these measures amount to intimidation and coercion aimed at undermining European digital sovereignty.” The European Commission issued a statement that it “strongly condemns” the Trump administration’s actions and reaffirmed its “sovereign right to regulate economic activity in line with our democratic values.”
Ahmed, Melford, Breton, and their respective organizations also made their own statements denouncing the entry bans. Ahmed, the only one of the five based in the United States, also successfully filed suit to preempt any attempts to detain him, which the State Department had indicated it would consider doing.
But alongside the statements of solidarity, Ballon and von Hodenberg said, they also received more practical advice: Assume the travel ban was just the start and that more consequences could be coming. Service providers might preemptively revoke access to their online accounts; banks might restrict their access to money or the global payment system; they might see malicious attempts to get hold of their personal data or that of their clients. Perhaps, allies told them, they should even consider moving their money into friends’ accounts or keeping cash on hand so that they could pay their team’s salaries—and buy their families’ groceries.
These warnings felt particularly urgent given that just days before, the Trump administration had sanctioned two International Criminal Court judges for “illegitimate targeting of Israel.” As a result, they had lost access to many American tech platforms, including Microsoft, Amazon, and Gmail.
“If Microsoft does that to someone who is a lot more important than we are,” Ballon told me, “they will not even blink to shut down the email accounts from some random human rights organization in Germany.”
“We have now this dark cloud over us that any minute, something can happen,” von Hodenberg added. “We’re running against time to take the appropriate measures.”
Helping navigate “a lawless place”
Founded in 2018 to support people experiencing digital violence, HateAid has since evolved to defend digital rights more broadly. It provides ways for people to report illegal online content and offers victims advice, digital security, emotional support, and help with evidence preservation. It also educates German police, prosecutors, and politicians about how to handle online hate crimes.
Once the group is contacted for help, and if its lawyers determine that the type of harassment has likely violated the law, the organization connects victims with legal counsel who can help them file civil and criminal lawsuits against perpetrators, and if necessary, helps finance the cases. (HateAid itself does not file cases against individuals.) Ballon and von Hodenberg estimate that HateAid has worked with around 7,500 victims and helped them file 700 criminal cases and 300 civil cases, mostly against individual offenders.
For 23-year-old German law student and outspoken political activist Theresia Crone, HateAid’s support has meant that she has been able to regain some sense of agency in her life, both on and offline. She had reached out after she discovered entire online forums dedicated to making deepfakes of her. Without HateAid, she told me, “I would have had to either put my faith into the police and the public prosecutor to prosecute this properly, or I would have had to foot the bill of an attorney myself”—a huge financial burden for “a student with basically no fixed income.”
In addition, working alone would have been retraumatizing: “I would have had to document everything by myself,” she said—meaning “I would have had to see all of these pictures again and again.”
“The internet is a lawless place,” Ballon told me when we first spoke, back in mid-December, a few weeks before the travel ban was announced. In a conference room at the HateAid office in Berlin, she said there are many cases that “cannot even be prosecuted, because no perpetrator is identified.” That’s why the nonprofit also advocates for better laws and regulations governing technology companies in Germany and across the European Union.
On occasion, they have also engaged in strategic litigation against the platforms themselves. In 2023, for example, HateAid and the European Union of Jewish Students sued X for failing to enforce its terms of service against posts that were antisemitic or that denied the Holocaust, which is illegal in Germany.
This almost certainly put the organization in the crosshairs of X owner Elon Musk; it also made HateAid a frequent target of Germany’s far-right party, the Alternative für Deutschland, which Musk has called “the only hope for Germany.” (X did not respond to a request to comment on this lawsuit.)
HateAid gets caught in Trump World’s dragnet
For better and worse, HateAid’s profile grew further when it took on another critical job in online safety. In June 2024, it was named as a trusted flagger organization under the Digital Services Act, a 2022 EU law that requires social media companies to remove certain content (including hate speech and violence) that violates national laws, and to provide more transparency to the public, in part by allowing more appeals on platforms’ moderation decisions.
Trusted flaggers are entities designated by individual EU countries to point out illegal content, and they are a key part of DSA enforcement. While anyone can report such content, trusted flaggers’ reports are prioritized and legally require a response from the platforms.
The Trump administration has loudly argued that the trusted flagger program and the DSA more broadly are examples of censorship that disproportionately affect voices on the right and American technology companies, like X.
When we first spoke in December, Ballon said these claims of censorship simply don’t hold water: “We don’t delete content, and we also don’t, like, flag content publicly for everyone to see and to shame people. The only thing that we do: We use the same notification channels that everyone can use, and the only thing that is in the Digital Services Act is that platforms should prioritize our reporting.” Then it is on the platforms to decide what to do.
Nevertheless, the idea that HateAid and like-minded organizations are censoring the right has become a powerful conspiracy theory with real-world consequences. (Last year, MIT Technology Review covered the closure of a small State Department office following allegations that it had conducted “censorship,” as well as an unusual attempt by State leadership to access internal records related to supposed censorship—including information about two of the people who have now been banned, Melford and Ahmed, and both of their organizations.)
HateAid saw a fresh wave of harassment starting last February, when 60 Minutes aired a documentary on hate speech laws in Germany; it featured a quote from Ballon that “free speech needs boundaries,” which, she added, “are part of our constitution.” The interview happened to air just days before Vice President JD Vance attended the Munich Security Conference; there he warned that “across Europe, free speech … is in retreat.” This, Ballon told me, led to heightened hostility toward her and her organization.
Fast-forward to July, when a report by Republicans in the US House of Representatives claimed that the DSA “compels censorship and infringes on American free speech.” HateAid was explicitly named in the report.
All of this has made its work “more dangerous,” Ballon told me in December. Before the 60 Minutes interview, “maybe one and a half years ago, as an organization, there were attacks against us, but mostly against our clients, because they were the activists, the journalists, the politicians at the forefront. But now … we see them becoming more personal.”
As a result, over the last year, HateAid has taken more steps to protect its reputation and get ahead of the damaging narratives. Ballon has reported the hate speech targeted at her—“More [complaints] than in all the years I did this job before,” she said—and filed defamation lawsuits on behalf of HateAid.
All these tensions finally came to a head in December. At the start of the month, the European Commission fined X $140 million for DSA violations. This set off yet another round of recriminations about supposed censorship of the right, with Trump calling the fine “a nasty one” and warning: “Europe has to be very careful.”
Just a few weeks later, the day before Christmas Eve, retaliation against individuals finally arrived.
Who gets to define—and experience—free speech
Digital rights groups are pushing back against the Trump administration’s narrow view of what constitutes free speech and censorship.
“What we see from this administration is a conception of freedom of expression that is not a human-rights-based conception where this is an inalienable, indelible right that’s held by every person,” says David Greene, the civil liberties director of the Electronic Frontier Foundation, a US-based digital rights group. Rather, he sees an “expectation that… [if] anybody else’s speech is challenged, there’s a good reason for it, but it should never happen to them.”
Since Trump won his second term, social media platforms have walked back their commitments to trust and safety. Meta, for example, ended fact-checking on Facebook and adopted much of the administration’s censorship language, with CEO Mark Zuckerberg telling the podcaster Joe Rogan that it would “work with President Trump to push back on governments around the world” if they are seen as “going after American companies and pushing to censor more.”
Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.
And as the recent fines on X show, Musk’s platform has gone even further in flouting European law—and, ultimately, ignoring the user rights that the DSA was written to protect. In perhaps one of the most egregious examples yet, in recent weeks X allowed people to use Grok, its generative AI tool, to create nonconsensual nude images of women and children, with few limits—and, so far at least, few consequences. (Last week, X released a statement that it would start limiting users’ ability to create explicit images with Grok; in response to a number of questions, X representative Rosemarie Esposito pointed me to that statement.)
For Ballon, it makes perfect sense: “You can better make money if you don’t have to implement safety measures and don’t have to invest money in making your platform the safest place,” she told me.
“It goes both ways,” von Hodenberg added. “It’s not only the platforms who profit from the US administration undermining European laws … but also, obviously, the US administration also has a huge interest in not regulating the platforms … because who is amplified right now? It’s the extreme right.”
She believes this explains why HateAid—and Ahmed’s Center for Countering Digital Hate and Melford’s Global Disinformation Index, as well as Breton and the DSA—have been targeted: They are working to disrupt this “unholy deal where the platforms profit economically and the US administration is profiting in dividing the European Union,” she said.
The travel restrictions intentionally send a strong message to all groups that work to hold tech companies accountable. “It’s purely vindictive,” Greene says. “It’s designed to punish people from pursuing further work on disinformation or anti-hate work.” (The State Department did not respond to a request for comment.)
And ultimately, this has a broad effect on who feels safe enough to participate online.
Ballon pointed to research that shows the “silencing effect” of harassment and hate speech, not only for “those who have been attacked,” but also for those who witness such attacks. This is particularly true for women, who tend to face more online hate that is also more sexualized and violent. It’ll only be worse if groups like HateAid get deplatformed or lose funding.
Von Hodenberg put it more bluntly: “They reclaim freedom of speech for themselves when they want to say whatever they want, but they silence and censor the ones that criticize them.”
Still, the HateAid directors insist they’re not backing down. They say they’re taking “all advice” they have received seriously, especially with regard to “becoming more independent from service providers,” Ballon told me.
“Part of the reason that they don’t like us is because we are strengthening our clients and empowering them,” said von Hodenberg. “We are making sure that they are not succeeding, and not withdrawing from the public debate.”
“So when they think they can silence us by attacking us? That is just a very wrong perception.”
Martin Sona contributed reporting.
Correction: This article originally misstated the name of Germany’s far right party.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
AI coding is now everywhere. But not everyone is convinced.
Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems.
The problem is that, right now, it’s not easy to know which is true.
As tech giants pour billions into large language models (LLMs), coding has been touted as the technology’s killer app. Executives enamored with the potential are pushing engineers to lean into an AI-powered future. But after speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem. Read the full story.
This story was also part of our Hype Correction package. You can read the rest of the stories here.
The biotech trends to watch for in 2026
Earlier this week, MIT Technology Review published our annual list of Ten Breakthrough Technologies.
This year’s list includes tech that’s set to transform the energy industry, artificial intelligence, space travel—and of course biotech and health. Our breakthrough biotechnologies for 2026 involve editing a baby’s genes and, separately, resurrecting genes from ancient species. We also included a controversial technology that offers parents the chance to screen their embryos for characteristics like height and intelligence. Here’s the story behind our biotech choices.
—Jessica Hamzelou
This story is from The Checkup, our weekly newsletter all about the latest in health and biotech. Sign up to receive it in your inbox every Thursday.
MIT Technology Review Narrated: What’s next for AI in 2026
Our AI writers have made some big bets for the coming year—read our story about the five hot trends to watch, or listen to it on Spotify, Apple, or wherever you get your podcasts.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Minnesota shows how governing and content creation have merged In another era, we’d have just called this propaganda. (NPR) + MAGA influencers are just straight up lying about what is happening there. (Vox) + Activists are trying to identify individual ICE officers while protecting their own identities. (WP $) + A backlash against ICE is growing in Silicon Valley. (Wired $)
2 There’s probably more child abuse material online now than ever before Of all Big Tech’s failures, this is surely the most appalling. (The Atlantic $) + US investigators are using AI to detect child abuse images made by AI. (MIT Technology Review) + Grok is still being used to undress images of real people. (Quartz)
3 ChatGPT wrote a suicide lullaby for a man who later killed himself This shows it’s “still an unsafe product,” a lawyer representing a family in a tragically similar case said. (Ars Technica) + An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it. (MIT Technology Review)
4 Videos emerging from Iran show how bloody the crackdown has become Iranians are finding ways around the internet blackout to show the rest of the world how many of them have been killed. (NBC) + Here’s how they’re getting around the blackout. (NPR)
5 China dominates the global humanoid robot market A new report by analysts found its companies account for over 80% of all deployments. (South China Morning Post) + Just how useful are the latest humanoids, though? (Nature) + Why humanoid robots need their own safety rules. (MIT Technology Review)
6 How is Australia’s social media ban for kids going? It’s mixed—some teens welcome it, but others are finding workarounds. (CNBC)
7 Scientists are finding more objective ways to spot mental illness Biomarkers like voice cadence and heart rate are proving pretty reliable for diagnosing conditions like depression. (New Scientist $)
8 The Pebble smartwatch may be making a comeback This could be the thing that tempts me back into buying wearables… (Gizmodo)
9 A new video game traps you in an online scam center Can’t see the appeal myself, but… each to their own I guess? (NYT $)
10 Smoke detectors are poised to get a high-tech upgrade And one of the technologies boosting their capabilities is, of course, AI. (BBC)
Quote of the day
“I am very annoyed. I’m very disappointed. I’m seriously frustrated.”
—Pfizer CEO Albert Bourla tells attendees at a healthcare conference this week his feelings about the anti-vaccine agenda Health Secretary Robert F. Kennedy Jr. has been implementing, Bloomberg reports.
One more thing
How close are we to genuine “mind reading”?
Technically speaking, neuroscientists have been able to read your mind for decades. It’s not easy, mind you. First, you must lie motionless within an fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks.
If you do elect to endure claustrophobic hours in the scanner, the software will learn to generate a bespoke reconstruction of what you were seeing or listening to, just by analyzing how blood moves through your brain.
More recently, researchers have deployed generative AI tools, like Stable Diffusion and GPT, to create far more realistic, if not entirely accurate, reconstructions of films and podcasts based on neural activity. So how close are we to genuine “mind reading”? Read the full story.
—Grace Huckins
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Still keen to do a bit of reflecting on the year behind and the one ahead? This free guide might help! + Turns out British comedian Rik Mayall had some pretty solid life advice. + I want to stay in this house in São Paulo. + If you want to stop doomscrolling, it’s worth looking at your sleep habits. ($)