The Download: the case for AI slop, and helping CRISPR fulfill its promise

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How I learned to stop worrying and love AI slop

—Caiwei Chen

If I were to locate the moment AI slop broke through into popular consciousness, I’d pick the video of rabbits bouncing on a trampoline that went viral last summer. For many savvy internet users, myself included, it was the first time we were fooled by an AI video, and it ended up spawning a wave of almost identical generated clips.

My first reaction was that, broadly speaking, all of this sucked. That’s become a familiar refrain, in think pieces and at dinner parties. Everything online is slop now—the internet “enshittified,” with AI taking much of the blame. Initially, I largely agreed. But then friends started sharing AI clips in group chats that were compellingly weird, or funny. Some even had a grain of brilliance. 

I had to admit I didn’t fully understand what I was rejecting—what I found so objectionable. To try to get to the bottom of how I felt (and why), I spoke to the people making the videos, a company creating bespoke tools for creators, and experts who study how new media becomes culture. What I found convinced me that maybe generative AI will not end up ruining everything after all. Read the full story.

A new CRISPR startup is betting regulators will ease up on gene-editing

Here at MIT Technology Review we’ve been writing about the gene-editing technology CRISPR since 2013, calling it the biggest biotech breakthrough of the century. Yet so far, there’s been only one gene-editing drug approved, and it’s been used commercially on only about 40 patients, all with sickle-cell disease.

It’s becoming clear that the impact of CRISPR isn’t as big as we all hoped. In fact, there’s a pall of discouragement over the entire field—with some journalists saying the gene-editing revolution has “lost its mojo.”

So what will it take for CRISPR to help more people? A new startup says the answer could be an “umbrella approach” to testing and commercializing treatments, which could avoid costly new trials or approvals for every new version. Read the full story.

—Antonio Regalado

America’s new dietary guidelines ignore decades of scientific research

The first days of 2026 have brought big news for health. On Wednesday, health secretary Robert F. Kennedy Jr. and his colleagues at the Departments of Health and Human Services and Agriculture unveiled new dietary guidelines for Americans. And they are causing a bit of a stir.

That’s partly because they recommend products like red meat, butter, and beef tallow—foods that have been linked to cardiovascular disease, and that nutrition experts have been recommending people limit in their diets.

These guidelines are a big deal—they influence food assistance programs and school lunches, for example. Let’s take a look at the good, the bad, and the ugly advice being dished up to Americans by their government.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Grok has switched off its image-generating function for most users
Following a global backlash over its sexualized pictures of women and children. (The Guardian)
+ Elon Musk has previously lamented the “guardrails” around the chatbot. (CNN)
+ xAI has been burning through cash lately. (Bloomberg $)

2 Online sleuths tried to use AI to unmask the ICE agent who killed a woman
The problem is, its results are far from reliable. (WP $)
+ The Trump administration is pushing videos of the incident filmed from a specific angle. (The Verge)
+ Minneapolis is struggling to make sense of the shooting of Renee Nicole Good. (WSJ $)

3 Smartphones and PCs are about to get more expensive
You can thank the memory chip shortage sparked by the AI data center boom. (FT $)
+ Expect delays alongside those price rises, too. (Economist $)

4 NASA is bringing four of the seven ISS crew members back to Earth
It’s not clear exactly why, but it said one of them experienced a “medical situation” earlier this week. (Ars Technica)

5 The vast majority of humanoid robots shipped last year were from China
The country is dominating early supply for the bipedal machines. (Bloomberg $)
+ Why a Chinese robot vacuum firm is moving into EVs. (Wired $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

6 New Jersey has banned students’ phones in schools
It’s the latest in a long line of states to restrict devices during school hours. (NYT $)

7 Are AI coding assistants getting worse?
This data scientist certainly seems to think so. (IEEE Spectrum)
+ AI coding is now everywhere. But not everyone is convinced. (MIT Technology Review)

8 How to save wine from wildfires 🍇
Smoke leaves the alcohol with an ashy taste, but a group of scientists is working on a solution. (New Yorker $)

9 Celebrity Letterboxd accounts are good fun
Unsurprisingly, a subset of web users have chosen to hound them. (NY Mag $)

10 Craigslist refuses to die
The old-school classifieds corner of the web still has a legion of diehard fans. (Wired $)

Quote of the day

“Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. The harms are rippling out.”

—Ngaire Alexander, head of the Internet Watch Foundation’s reporting hotline, explains to the Wall Street Journal the dangers around low-moderation AI tools like Grok.

One more thing

How to measure the returns on R&D spending

Given the draconian cuts to US federal funding for science, it’s worth asking some hard-nosed money questions: How much should we be spending on R&D? How much value do we get out of such investments, anyway?

To answer that, economists have approached the issue in clever new ways in several recent papers. And though they ask slightly different questions, their conclusions share a bottom line: R&D is, in fact, one of the better long-term investments that the government can make. Read the full story.

—David Rotman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Bruno Mars is back, baby!
+ Hmm, interesting: Apple’s new Widow’s Bay show is inspired by both Stephen King and Donald Glover, which is an intriguing combination.
+ Give this man control of the new Lego AI bricks!
+ An iron age war trumpet recently uncovered in Britain is the most complete example discovered anywhere in the world.

The Download: mimicking pregnancy’s first moments in a lab, and AI parameters explained

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Researchers are getting organoids pregnant with human embryos

At first glance, it looks like the start of a human pregnancy: A ball-shaped embryo presses into the lining of the uterus, then grips tight, burrowing in as the first tendrils of a future placenta appear. This is implantation—the moment that pregnancy officially begins.

Only none of it is happening inside a body. These images were captured in a Beijing laboratory, inside a microfluidic chip, as scientists watched the scene unfold.

In three recent papers published by Cell Press, scientists report what they call the most accurate efforts yet to mimic the first moments of pregnancy in the lab. They’ve taken human embryos from IVF centers and let them merge with “organoids” made of endometrial cells, which form the lining of the uterus. Read our story about their work, and what might come next.

—Antonio Regalado

LLMs contain a LOT of parameters. But what’s a parameter?

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.  

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.)

But the basics of what parameters are, and how they make LLMs do the remarkable things they do, are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in.
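To make the metaphor a little more concrete, here is a minimal sketch (assuming the PyTorch library, which the article itself doesn’t mention) that counts the “dials and levers” in a toy two-layer network. The bookkeeping is exactly the same for a trillion-parameter LLM, just at a vastly larger scale.

```python
# Minimal sketch (assumes PyTorch is installed): every weight and bias below is
# one parameter, i.e. one adjustable "dial" that training can turn.
import torch.nn as nn

tiny_model = nn.Sequential(
    nn.Linear(8, 16),  # 8*16 weights + 16 biases = 144 parameters
    nn.ReLU(),         # activations have no parameters of their own
    nn.Linear(16, 4),  # 16*4 weights + 4 biases = 68 parameters
)

total = sum(p.numel() for p in tiny_model.parameters())
print(f"{total} parameters")  # prints: 212 parameters
```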

—Will Douglas Heaven

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles.

On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately.

The cited reason? Concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years.

Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the US’s struggling offshore wind industry.

—Casey Crownhart

This story is from The Spark, our weekly newsletter that explains the tech that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google and Character.AI have agreed to settle a lawsuit over a teenager’s death
It’s one of five lawsuits linked to young people’s deaths that the companies have settled this week. (NYT $)
+ AI companions are the final stage of digital addiction, and lawmakers are taking aim. (MIT Technology Review)

2 The Trump administration’s chief output is online trolling
Witness the Maduro memes. (The Atlantic $)

3 OpenAI has created a new ChatGPT Health feature 
It’s dedicated to analyzing medical results and answering health queries. (Axios)
+ AI chatbots fail to give adequate advice for most questions relating to women’s health. (New Scientist $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)

4 Meta’s acquisition of Manus is being probed by China
Holding up the purchase gives it another bargaining chip in its dealings with the US. (CNBC)
+ What happened when we put Manus to the test. (MIT Technology Review)

5 China is building humanoid robot training centers
To address a major shortage of the data needed to make them more competent. (Rest of World)
+ The robot race is fueling a fight for training data. (MIT Technology Review)

6 AI still isn’t close to automating our jobs
The technology just fundamentally isn’t good enough—at least not yet. (WP $)

7 Weight regain seems to happen within two years of quitting the jabs
That’s the conclusion of a review of more than 40 studies. But dig into the details, and it’s not all bad news. (New Scientist $)

8 This Silicon Valley community is betting on algorithms to find love
Which feels like a bit of a fool’s errand. (NYT $)

9 Hearing aids are about to get really good
You can—of course—thank advances in AI. (IEEE Spectrum)

10 The first 100% AI-generated movie will hit our screens within three years
That’s according to Roku’s founder Anthony Wood. (Variety $)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“I’ve seen the video. Don’t believe this propaganda machine.”

—Minnesota’s governor Tim Walz responds on X to Homeland Security’s claim that ICE’s shooting of a woman in Minneapolis was justified.

One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I love hearing about musicians’ favorite songs 🎶
+ Here are some top tips for making the most of travelling on your own.
+ Check out just some of the excellent-sounding new books due for publication this year.
+ I could play this spherical version of Snake forever (thanks Rachel!)

The Download: war in Europe, and the company that wants to cool the planet

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Europe’s drone-filled vision for the future of war

Last spring, 3,000 British soldiers deployed an invisible automated intelligence network, known as a “digital targeting web,” as part of a NATO exercise called Hedgehog in the damp forests of Estonia’s eastern territories.

The system had been cobbled together over the course of four months—an astonishing pace for weapons development, which is usually measured in years. Its purpose is to connect everything that looks for targets—“sensors,” in military lingo—and everything that fires on them (“shooters”) to a single, shared wireless electronic brain.

Eighty years after total war last transformed the continent, the Hedgehog tests signal a brutal new calculus of European defense. But leaning too much on this new mathematics of warfare could be a risky bet. Read the full story.

—Arthur Holland Michel

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive it once it lands.

MIT Technology Review Narrated: How one controversial startup hopes to cool the planet

Stardust Solutions believes that it can solve climate change—for a price.

The Israel-based geoengineering startup has said it expects nations will soon pay it more than a billion dollars a year to launch specially equipped aircraft into the stratosphere. Once they’ve reached the necessary altitude, those planes will disperse particles engineered to reflect away enough sunlight to cool down the planet, purportedly without causing environmental side effects. 

But numerous solar geoengineering researchers are skeptical that Stardust will line up the customers it needs to carry out a global deployment in the next decade. They’re also highly critical of the idea of a private company setting the global temperature for us.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon has been accused of listing products without retailers’ consent
Small shop owners claim Amazon’s AI tool sold their goods without their permission. (Bloomberg $)
+ It also listed products the shops didn’t actually have in stock. (CNBC)
+ A new feature called “Shop Direct” appears to be to blame. (Insider $)

2 Data centers are a political issue 
Opposition to them is uniting communities across the political divide. (WP $)
+ Power-grid operators have suggested the centers power down at certain times. (WSJ $)
+ The data center boom in the desert. (MIT Technology Review)

3 Things are looking up for the nuclear power industry
The Trump administration is pumping money into it—but success is not guaranteed. (NYT $)
+ Why the grid relies on nuclear reactors in the winter. (MIT Technology Review)

4 A new form of climate modeling pins blame on specific companies
It may not be long before we see how attribution science holds up in court. (New Scientist $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

5 Meta has paused the launch of its Ray-Ban smartglasses 🕶
They’re just too darn popular, apparently. (Engadget)
+ Europe and Canada will just have to wait. (Gizmodo)
+ It’s blaming supply shortages and “unprecedented” demand. (Insider $)

6 Sperm contains information about a father’s fitness and diet
New research is changing how we think about heredity. (Quanta Magazine)

7 Meta is selling online gambling ads in countries where it’s illegal
It’s ignoring local laws across Asia and the Middle East. (Rest of World)

8 AI isn’t always trying to steal your job
Sometimes it makes your toy robot a better companion. (The Verge)
+ How cuddly robots could change dementia care. (MIT Technology Review)

9 How to lock down a job at one of tech’s biggest companies
You’re more likely to be accepted into Harvard, apparently. (Fast Company $)

10 Millennials are falling out of love with the internet
Is a better future still possible? (Vox)
+ How to fix the internet. (MIT Technology Review)

Quote of the day

“I want to keep up with the latest doom.”

—Author Margaret Atwood explains to Wired why she doomscrolls.

One more thing

Inside the decades-long fight over Yahoo’s misdeeds in China

When you think of Big Tech these days, Yahoo is probably not top of mind. But for Chinese dissident Xu Wanping, the company still looms large—and has for nearly two decades.

In 2005, Xu was arrested for signing online petitions relating to anti-Japanese protests. He didn’t use his real name, but he did use his Yahoo email address. Yahoo China violated its users’ trust—providing information on certain email accounts to Chinese law enforcement, which in turn allowed the government to identify and arrest some users.

Xu was one of them; he would serve nine years in prison. Now, he and five other Chinese former political prisoners are suing Yahoo and a slate of co-defendants—not because of the company’s information-sharing (which was the focus of an earlier lawsuit filed by other plaintiffs), but rather because of what came after. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s time to celebrate the life and legacy of Cecilia Giménez Zueco, the legendary Spanish amateur painter whose botched fresco restoration reached viral fame in 2012.
+ If you’re a sci-fi literature fan, there are plenty of new releases to look forward to in 2026.
+ Last week’s wolf supermoon was a sight to behold.
+ This Mississippi restaurant is putting its giant lazy Susan to good use.

The Download: our predictions for AI, and good climate news

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for AI in 2026

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—check out our pretty accurate predictions for 2025—and now, we’re doing it all over again.

So what’s coming in 2026? Here are our big bets for the next 12 months.

—Rhiannon Williams, Will Douglas Heaven, Caiwei Chen, James O’Donnell & Michelle Kim

Interested in why it’s so hard to make predictions about AI—and why we’ve done it anyway? Check out the latest edition of The Algorithm, our weekly AI newsletter. Sign up here to make sure you receive future editions straight to your inbox.

This story is also part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Four bright spots in climate news in 2025

Climate news wasn’t great in 2025. Global greenhouse-gas emissions hit record highs (again). The year is set to be either the second- or third-warmest on record. Climate-fueled disasters like wildfires in California and flooding in Indonesia and Pakistan devastated communities and caused billions in damage.

There’s no doubt that we’re in a severe situation. But for those looking for bright spots, there was some good news in 2025, too. Here are just a few of the positive stories our climate reporters noticed this year. Read the full story.

—Casey Crownhart & James Temple

Nominate someone you know for our global 2026 Innovators Under 35 competition

Last month we started accepting nominations for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades.

We’re looking for people who are making important scientific discoveries and applying that knowledge to build new technologies. Or those who are engineering new systems and algorithms that will aid our work or extend our abilities.

The good news is that we’re still accepting submissions for another two weeks! It’s free to nominate yourself or someone you know, and it only takes a few moments. Here’s how to submit your nomination.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US government is recommending fewer childhood shots
It’ll no longer suggest every child be vaccinated against flu, hepatitis A, rotavirus and meningococcal disease. (WP $)
+ We may end up witnessing a major uptick in rotavirus cases as a result. (The Atlantic $)
+ That brings the total number of recommended vaccines down from 17 to 11. (BBC)
+ The changes were made without public comment or input from vaccine makers. (NPR)

2 Telegram can’t quite cut its ties to Russia
Bonds in the company have been frozen under western sanctions. (FT $)

3 America is in the grip of a flu outbreak
Infections have reached their highest levels since the covid pandemic. (Bloomberg $)
+ All but four states have reported high levels of flu activity. (CNN)
+ A new strain of the virus could be to blame. (AP News)

4 Humanoid factory robots are about to get a lot smarter
Google DeepMind is teaming up with Boston Dynamics to help its Atlas bipedal robots complete tasks more quickly. (Wired $)
+ In theory, the deal could help Atlas interact more naturally with humans, too. (TechCrunch)
+ Why the humanoid workforce is running late. (MIT Technology Review)

5 NASA’s budget for 2026 is better than we expected
It’s a drop of just 1% compared to last year, despite a series of brutal cut proposals. (Ars Technica)

6 Nvidia’s first self-driving cars will hit the road later this year
Watch out Tesla! (NYT $)
+ They’re a pretty smooth drive, apparently. (Ars Technica)
+ The company is also going full steam ahead to produce new chips. (Reuters)

7 Elon Musk’s fans are using Grok to make revenge porn of the mother of one of his sons
Ashley St Clair says her complaints have gone unanswered. (The Guardian)
+ This is what happens when you scrap nearly all rules and safety protocols. (404 Media)
+ Authorities across the world are attempting to crack down on Grok. (Rest of World)

8 A Greenland ice dome has melted once before
And if temperatures remain high, it could do so again. (New Scientist $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)

9 A Chinese chatbot went rogue and snapped at a user
Tencent’s AI assistant Yuanbao told them their request was “stupid” and to “get lost.” (Insider $)
+ At least it’s not being overly sycophantic… (MIT Technology Review)

10 Lego’s bricks have been given a smart makeover
They contain tiny computers to bring entire sets to life. (The Verge)
+ The tech will create fun contextual sounds and light effects. (Wired $)

Quote of the day

“The goal of this administration is to basically make vaccines optional. And we’re paying the price.”

—Paul Offit, an infectious diseases physician, criticizes the Trump administration’s decision to slash the number of recommended vaccinations for children, the Guardian reports.

One more thing

I asked an AI to tell me how beautiful I am

Qoves started as a studio that would airbrush images for modeling agencies. Now it is a “facial aesthetics consultancy” that promises answers to the age-old question of what makes a face attractive. Its most compelling feature is the “facial assessment tool”: an AI-driven system that promises to tell you how beautiful you are—or aren’t—spitting out numerical values akin to credit ratings.

If that prospect isn’t concerning enough, most of these algorithms are littered with inaccuracies, ageism, and racism. Read the full story.

—Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Baxter the horse and Mr Fuzzy the barn cat have a beautiful relationship.
+ Cool—a new seven-mile underwater sculpture park has opened off the coast of Miami Beach.
+ Where can I buy this incredible Godzilla piggy bank?
+ Congratulations to the world’s oldest professional footballer Kazuyoshi Miura, who’s still going strong at 58 years old.

The Download: Kenya’s Great Carbon Valley, and the AI terms that were everywhere in 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change

In June last year, startup Octavia Carbon began running a high-stakes test in the small town of Gilgil in south-central Kenya. It’s harnessing some of the excess energy generated by vast clouds of steam under the Earth’s surface to power prototypes of a machine that promises to remove carbon dioxide from the air in a manner that the company says is efficient, affordable, and—crucially—scalable.

The company’s long-term vision is undoubtedly ambitious—it wants to prove that direct air capture (DAC), as the process is known, can be a powerful tool to help the world keep temperatures from rising to ever more dangerous levels. 

But DAC is also a controversial technology, unproven at scale and wildly expensive to operate. On top of that, Kenya’s Maasai people have plenty of reasons to distrust energy companies. Read the full story.

—Diana Kruzman

This article is also part of the Big Story series: MIT Technology Review’s most important, ambitious reporting. The stories in the series take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. Our writers have taken a look back over the AI terms that dominated the year, for better or worse. Read the full list.

MIT Technology Review’s most popular stories of 2025

2025 was a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more.

As the new year begins, we wanted to give you a chance to revisit some of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Washington’s battle to break up Big Tech is in peril
A string of judges has opted not to force the companies to spin off key assets. (FT $)
+ Here’s some of the major tech litigation we can expect in the next 12 months. (Reuters)

2 Disinformation about the US invasion of Venezuela is rife on social media
And the biggest platforms don’t appear to be doing much about it. (Wired $)
+ Trump shared a picture of captured president Maduro on Truth Social. (NYT $)

3 Here’s what we know about Big Tech’s ties to the Israeli military
AI is central to its military operations, and giant US firms have stepped up to help. (The Guardian)

4 Alibaba’s AI tool is detecting cancer cases in China
PANDA is adept at spotting pancreatic cancer, which is typically tough to identify. (NYT $)
+ How hospitals became an AI testbed. (WSJ $)
+ A medical portal in New Zealand was hacked into last week. (Reuters)

5 This Discord community supports people recovering from AI-fueled delusions
They say reconnecting with fellow humans is an important step forward. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

6 Californians can now demand data brokers delete their personal information 
Thanks to a new tool—but there’s a catch. (TechCrunch)
+ This California lawmaker wants to ban AI from kids’ toys. (Fast Company $)

7 Chinese peptides are flooding into Silicon Valley
The unproven drugs promise to heal injuries, improve focus and reduce appetite—and American tech workers are hooked. (NYT $)

8 Alaska’s court system built an AI assistant to navigate probate
But the project has been plagued by delays and setbacks. (NBC News)
+ Inside Amsterdam’s high-stakes experiment to create fair welfare AI. (MIT Technology Review)

9 These ghostly particles could upend how we think about the universe
The standard model of particle physics may have a crack in it. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)

10 Sick of the same old social media apps?
Give these alternative platforms a go. (Insider $)

Quote of the day

“Just an unbelievable amount of pollution.”

—Sharon Wilson, a former oil and gas worker who tracks methane releases, tells the Guardian what a thermal imaging camera pointed at xAI’s Colossus data center has revealed.

One more thing

How aging clocks can help us understand why we age—and if we can reverse it

Wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging, might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active.

Over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. And what they’ve found is changing our understanding of aging itself. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You heard it here first: 2026 is the year of cabbage (yes, cabbage).
+ Darts is bigger than ever. So why are we still waiting for the first great darts video game? 🎯
+ This year’s CES is already off with a bang, courtesy of an essential, cutting-edge vibrating knife.
+ At least one good thing came out of that Stranger Things finale—streams of Prince’s excellent back catalog have soared.

What’s next for AI in 2026

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again. 

How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least). 

So what’s coming in 2026? Here are our big bets for the next 12 months. 

More Silicon Valley products will be built on Chinese LLMs

The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts. 

It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.

Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.

As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.

One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.
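To make “open weight” concrete, here is a minimal sketch of what running such a model locally looks like, assuming the Hugging Face transformers library (not named in the article) and enough memory for the 1.5-billion-parameter Qwen checkpoint mentioned above. The weights are downloaded and run on your own machine, with no proprietary API in the loop.

```python
# Minimal sketch (assumes the Hugging Face transformers package is installed and
# the Qwen/Qwen2.5-1.5B-Instruct checkpoint is reachable). Everything runs locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the open weights

messages = [{"role": "user", "content": "In one sentence, why do open-weight models matter?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```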

Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3. 

Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.

Caiwei Chen

The US will face another year of regulatory tug-of-war

The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to stop states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.

Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.

In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year. 

AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.

The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.

Michelle Kim

Chatbots will change the way we shop

Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.

But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, between $3 trillion and $5 trillion annually will be made from agentic commerce, according to research from the consulting firm McKinsey. 

Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions. 

Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet. 

Rhiannon Williams

An LLM will make an important new discovery

I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Unless it’s with monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.

We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
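Google DeepMind hasn’t published AlphaEvolve’s code, so the sketch below is only a rough, hypothetical rendering of the loop described above: the llm_propose and score callables are stand-ins for the Gemini calls and the automated checker.

```python
import random

def evolve(seed_programs, llm_propose, score, generations=100, population=20):
    """Rough sketch of an LLM-plus-evolution loop (not AlphaEvolve's actual code).

    llm_propose(parents) -> a new candidate suggested by an LLM (hypothetical stand-in)
    score(candidate)     -> a number from an automated checker; higher is better
    """
    pool = [(score(p), p) for p in seed_programs]
    for _ in range(generations):
        # Keep only the strongest candidates found so far.
        pool.sort(key=lambda scored: scored[0], reverse=True)
        pool = pool[:population]
        # Feed good candidates back to the LLM and ask for an improved variant.
        parents = [p for _, p in random.sample(pool, k=min(2, len(pool)))]
        child = llm_propose(parents)
        pool.append((score(child), child))
    return max(pool, key=lambda scored: scored[0])[1]
```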

Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.

And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.

There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.

Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.    

Will Douglas Heaven

Legal fights heat up

For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.

The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If companies lose these cases, will insurers shun AI companies as clients?

In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).

At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.

No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).

James O’Donnell

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

Make sure you take the time to brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to AI experts from the company’s competitors to join.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt generative AI coding assistants to create the digital object of your desire and accept pretty much everything they spit out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrate the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out on the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller model: You run the teacher model on a lot of examples and record the answers, and reward the student model as it copies those responses as closely as possible, so that it gains a compressed version of the teacher’s knowledge.  —Caiwei Chen
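As a rough sketch of that tutoring loop (assuming PyTorch, and greatly simplified compared with whatever DeepSeek actually did), a single distillation training step might look like this:

```python
# Rough sketch of one knowledge-distillation step (assumes PyTorch; simplified,
# not DeepSeek's actual recipe). The student is trained to match the teacher's
# output distribution on the same batch of examples.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, batch, temperature=2.0):
    with torch.no_grad():
        teacher_logits = teacher(batch)      # "record the answers" of the big model
    student_logits = student(batch)

    # Soften both distributions, then nudge the student toward the teacher's.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```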

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advancements in AI can help robots better move around the physical world.

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites directly from within their platforms. It’s time to adapt or die. —Rhiannon Williams

The Download: China’s dying EV batteries, and why AI doomers are doubling down

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

China figured out how to sell EVs. Now it has to bury their batteries.

In the past decade, China has seen an EV boom, thanks in part to government support. Buying an electric car has gone from a novel decision to a routine one; by late 2025, nearly 60% of new cars sold were electric or plug-in hybrids.

But as the batteries in China’s first wave of EVs reach the end of their useful life, early owners are starting to retire their cars, and the country is now under pressure to figure out what to do with those aging components.

The issue is putting strain on China’s still-developing battery recycling industry and has given rise to a gray market that often cuts corners on safety and environmental standards. National regulators and commercial players are also stepping in, but so far these efforts have struggled to keep pace with the flood of batteries coming off the road. Read the full story.

—Caiwei Chen

The AI doomers feel undeterred

It’s a weird time to be an AI doomer. This small but influential community believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity.

The doomer crowd has had some notable successes over the past several years, including helping shape AI policy coming from the Biden administration. But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in multiple Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building.

So where does this leave the doomers? We decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. See what they had to say in our story.

—Garrison Lovely

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package.

Take our quiz on the year in health and biotechnology

In just a couple of weeks, we’ll be bidding farewell to 2025. And what a year it has been! Artificial intelligence is being incorporated into more aspects of our lives, weight-loss drugs have expanded in scope, and there have been some real “omg” biotech stories from the fields of gene therapy, IVF, neurotech, and more.

Jessica Hamzelou, our senior biotech reporter, is inviting you to put your own memory to the test. So how closely have you been paying attention this year?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok has signed a deal to sell its US unit 
Its new owner will be a joint venture controlled by American investors including Oracle. (Axios)
+ But the platform is adamant that its Chinese owner will retain its core US business. (FT $)
+ The deal is slated to close on January 22 next year. (Bloomberg $)
+ It means TikTok will sidestep a US ban—at least for now. (The Guardian)

2 A tip on Reddit helped to end the hunt for the Brown University shooter
The suspect, who has been found dead, is also suspected of killing an MIT professor. (NYT $)
+ The shooter’s motivation is still unclear, police say. (WP $)

3 Tech leaders are among those captured in newly released Epstein photos
Bill Gates and Google’s Sergey Brin are both in the pictures. (FT $)
+ They’ve been pulled from a tranche of more than 95,000. (Wired $)

4 A Starlink satellite appears to have exploded
And it’s now falling back to earth. (The Verge)
+ On the ground in Ukraine’s largest Starlink repair shop. (MIT Technology Review)

5 YouTube has shut down two major channels that share fake movie trailers
Screen Culture and KH Studio uploaded AI-generated mock trailers with over a billion views. (Deadline)
+ Google is treading a thin line between embracing and shunning generative AI. (Ars Technica)

6 Trump is cracking down on investment in Chinese tech firms
Lawmakers are increasingly worried that US money is bolstering the country’s surveillance state. (WSJ $)
+ Meanwhile, China is working on boosting its chip output. (FT $)

7 ICE has paid an AI agent company to track down targets
It claims to be able to rapidly trace a target’s online network. (404 Media)

8 America wants to return to the Moon by 2028
And to build some nuclear reactors while it’s up there. (Ars Technica)
+ Southeast Asia seeks its place in space. (MIT Technology Review)

9 Actors in the UK are refusing to be scanned for AI
They’re reportedly routinely pressured to consent to creating digital likenesses of themselves. (The Guardian)
+ How Meta and AI companies recruited striking actors to train AI. (MIT Technology Review)

10 Indian tutors are explaining how to use AI over WhatsApp
Lessons are cheap and personalized—but the teachers aren’t always credible. (Rest of World)
+ How Indian health-care workers use WhatsApp to save pregnant women. (MIT Technology Review)

Quote of the day

“Trump wants to hand over even more control of what you watch to his billionaire buddies. Americans deserve to know if the president struck another backdoor deal for this billionaire takeover of TikTok.”

—Democratic senator Elizabeth Warren queries the terms of the deal that TikTok has made to allow it to continue operating in the US in a post on Bluesky.

One more thing

Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

—Rhiannon Williams


Earlier this summer, I visited the AI company Synthesia to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me.

I found my avatar as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was.

My avatar shows how it’s becoming ever-harder to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us? Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ You can keep your beef tallow—here are the food trends that need to remain firmly in 2025.
+ The Library of Congress has some lovely images of winter that are completely free to use.
+ If you’ve got a last minute Christmas work party tonight, don’t make these Secret Santa mistakes.
+ Did you realize Billie Eilish’s smash hit Birds of a Feather has the same chord progression as Wham’s Last Christmas? They sound surprisingly good mashed together.

The Download: the worst technology of 2025, and Sam Altman’s AI hype

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The 8 worst technology flops of 2025

Welcome to our annual list of the worst, least successful, and simply dumbest technologies of the year.

We like to think there’s a lesson in every technological misadventure. But when technology becomes dependent on power, sometimes the takeaway is simpler: it would have been better to stay away.

Regrets—2025 had a few. Here are some of the more notable ones.

—Antonio Regalado

A brief history of Sam Altman’s hype

Almost every time you’ve heard a borderline outlandish idea of what AI will be capable of, it turns out that Sam Altman was, if not the first to articulate it, at least the most persuasive and influential voice behind it.

For more than a decade he has been known in Silicon Valley as a world-class fundraiser and persuader. Throughout, Altman’s words have set the agenda. What he says about AI is rarely provable when he says it, but it persuades us of one thing: This road we’re on with AI can go somewhere either great or terrifying, and OpenAI will need epic sums to steer it toward the right destination. In this sense, he is the ultimate hype man.

To understand how his voice has shaped our understanding of what AI can do, we read almost everything he’s ever said about the technology. His own words trace how we arrived here. Read the full story.

—James O’Donnell

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package here.

Can AI really help us discover new materials?

One of my favorite stories in the Hype Correction package comes from my colleague David Rotman, who took a hard look at AI for materials research. AI could transform the process of discovering new materials—innovation that could be especially useful in the world of climate tech, which needs new batteries, semiconductors, magnets, and more.

But the field still needs to prove it can make materials that are actually novel and useful. Can AI really supercharge materials research? And what would that look like? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China built a chip-making machine to rival the West’s supremacy 
Suggesting China is far closer to achieving semiconductor independence than we previously believed. (Reuters)
+ China’s chip boom is creating a new class of AI-era billionaires. (Insider $)

2 NASA finally has a new boss
It’s billionaire astronaut Jared Isaacman, a close ally of Elon Musk. (Insider $)
+ But will Isaacman lead the US back to the Moon before China? (BBC)
+ Trump previously pulled his nomination, before reselecting Isaacman last month. (The Verge)

3 The parents of a teenage sextortion victim are suing Meta
Murray Dowey took his own life after being tricked into sending intimate pictures to an overseas criminal gang. (The Guardian)
+ It’s believed that the gang is based in West Africa. (BBC)

4 US and Chinese satellites are jostling in orbit
In fact, these clashes are so common that officials have given the practice a name—”dogfighting.” (WP $)
+ How to fight a war in space (and get away with it) (MIT Technology Review)

5 It’s not just AI that’s trapped in a bubble right now
Labubus, anyone? (Bloomberg $)
+ What even is the AI bubble? (MIT Technology Review)

6 Elon Musk’s Texan school isn’t operating as a school
Instead, it’s a “licensed child care program” with just a handful of enrolled kids. (NYT $)

7 US Border Patrol is building a network of small drones
In a bid to expand its covert surveillance powers. (Wired $)
+ This giant microwave may change the future of war. (MIT Technology Review)

8 This spoon makes low-salt foods taste better
By driving the food’s sodium ions straight to the diner’s tongue. (IEEE Spectrum)

9 AI cannot be trusted to run an office vending machine
Though the lucky Wall Street Journal staffer who walked away with a free PlayStation may beg to differ. (WSJ $)

10 Physicists have 3D-printed a Christmas tree from ice 🎄
No refrigeration kit required. (Ars Technica)

Quote of the day

“It will be mentioned less and less in the same way that Microsoft Office isn’t mentioned in job postings anymore.”

—Marc Cenedella, founder and CEO of careers platform Ladders, tells Insider why employers will increasingly expect new hires to be fully au fait with AI.

One more thing

Is this the electric grid of the future?

Lincoln Electric System, a publicly owned utility in Nebraska, is used to weathering severe blizzards. But what will happen soon—not only at Lincoln Electric but for all electric utilities—is a challenge of a different order.

Utilities must keep the lights on in the face of more extreme and more frequent storms and fires, growing risks of cyberattacks and physical disruptions, and a wildly uncertain policy and regulatory landscape. They must keep prices low amid inflationary costs. And they must adapt to an epochal change in how the grid works, as the industry attempts to transition from power generated with fossil fuels to power generated from renewable sources like solar and wind.

The electric grid is bracing for a near future characterized by disruption. And, in many ways, Lincoln Electric is an ideal lens through which to examine what’s coming. Read the full story.

—Andrew Blum

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ A fragrance company is trying to recapture the scent of extinct flowers, wow. 
+ Seattle’s Sauna Festival sounds right up my street.
+ Switzerland has built what’s essentially a theme park dedicated to Saint Bernards.
+ I fear I’ll never get over this tale of director supremo James Cameron giving a drowning rat CPR to save its life 🐀

The Download: why 2025 has been the year of AI hype correction, and fighting GPS jamming

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The great AI hype correction of 2025

Some disillusionment was inevitable. When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry—and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more.

Well, 2025 has been a year of reckoning. For a start, the heads of the top AI companies made promises they couldn’t keep. At the same time, updates to the core technology are no longer the step changes they once were.

To be clear, the last few years have been filled with genuine “Wow” moments. But this remarkable technology is only a few years old, and in many ways it is still experimental. Its successes come with big caveats. Read the full story to learn more about why we may need to readjust our expectations.

—Will Douglas Heaven

This story is part of our new Hype Correction package, a collection of stories designed to help you reset your expectations about what AI makes possible—and what it doesn’t. Check out the rest of the package here, and you can read more about why it’s time to reset our expectations for AI in the latest edition of the Algorithm, our weekly AI newsletter. Sign up here to make sure you receive future editions straight to your inbox.

Quantum navigation could solve the military’s GPS jamming problem

Since the 2022 invasion of Ukraine, thousands of flights have been affected by a far-reaching Russian campaign of radio transmissions that jam planes’ GPS systems.

The growing inconvenience to air traffic and risk of a real disaster have highlighted the vulnerability of GPS and focused attention on more secure ways for planes to navigate the gauntlet of jamming and spoofing, the term for tricking a GPS receiver into thinking it’s somewhere else.

One approach that’s emerging from labs is quantum navigation: exploiting the quantum nature of light and atoms to build ultra-sensitive sensors that can allow vehicles to navigate independently, without depending on satellites. Read the full story.

—Amos Zeeberg

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has launched its US Tech Force program
In a bid to lure engineers away from Big Tech roles and straight into modernizing the government. (The Verge)
+ So, essentially replacing the IT workers that DOGE got rid of, then. (The Register)

2 Lawmakers are investigating how AI data centers affect electricity costs
They want to get to the bottom of whether those costs are being passed on to consumers. (NYT $)
+ Calculating AI’s water usage is far from straightforward, too. (Wired $)
+ AI is changing the grid. Could it help more than it harms? (MIT Technology Review)

3 Ford isn’t making a large all-electric truck after all
After the US government’s support for EVs plummeted. (Wired $)
+ Instead, the F-150 Lightning pickup will be reborn as a plug-in hybrid. (The Information $)
+ Why Americans may be finally ready to embrace smaller cars. (Fast Company $)
+ The US could really use an affordable electric truck. (MIT Technology Review)

4 PayPal wants to become a bank in the US
The Trump administration is very friendly to non-traditional financial companies, after all. (FT $)
+ It’s been a good year for the crypto industry when it comes to banking. (Economist $)

5 A tech trade deal between the US and UK has been put on ice
America isn’t happy with the lack of progress Britain has made, apparently. (NYT $)
+ It’s a major setback in relations between the pair. (The Guardian)

6 Why does no one want to make the cure for dengue?
A new antiviral pill appears to prevent infection—but its development has been abandoned. (Vox)

7 The majority of the world’s glaciers are forecast to disappear by 2100
At a rate of around 3,000 per year. (New Scientist $)
+ Inside a new quest to save the “doomsday glacier”. (MIT Technology Review)

8 Hollywood is split over AI
While some filmmakers love it, actors are horrified by its inexorable rise. (Bloomberg $)

9 Corporate America is obsessed with hiring storytellers
It’s essentially a media relations manager role overhauled for the AI age. (WSJ $)

10 The concept of hacking existed before the internet
Just ask this bunch of teenage geeks. (IEEE Spectrum)

Quote of the day

“So the federal government deleted 18F, which was doing great work modernizing the government, and then replaced it with a clone? What is the point of all this?”

—Eugene Vinitsky, an assistant professor at New York University, takes aim at the US government’s decision to launch a new team to overhaul its approach to technology in a post on Bluesky.

One more thing

How DeepSeek became a fortune teller for China’s youth

As DeepSeek has emerged as a homegrown challenger to OpenAI, young people across the country have started using AI to revive fortune-telling practices that have deep roots in Chinese culture.

Across Chinese social media, users are sharing AI-generated readings, experimenting with fortune-telling prompt engineering, and revisiting ancient spiritual texts—all with the help of DeepSeek.

The surge in AI fortune-telling comes during a time of pervasive anxiety and pessimism in Chinese society. And with spiritual practices pushed underground by the country’s regime, computers and phone screens are helping younger people gain a sense of control over their lives. Read the full story.

—Caiwei Chen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Chess has been online as far back as the 1800s (no, really!) ♟
+ Jane Austen was born 250 years ago today. How well do you know her writing? ($)
+ Rob Reiner, your work will live on forever.
+ I enjoyed this comprehensive guide to absolutely everything you could ever want to know about New England’s extensive seafood offerings.

The Download: introducing the AI Hype Correction package

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: the AI Hype Correction package

AI is going to reproduce human intelligence. AI will eliminate disease. AI is the single biggest, most important invention in human history. You’ve likely heard it all—but probably none of these things are true.

AI is changing our world, but we don’t yet know the real winners, or how this will all shake out.

After a few years of out-of-control hype, people are now starting to re-calibrate what AI is, what it can do, and how we should think about its ultimate impact.

Here, at the end of 2025, we’re starting the post-hype phase. This new package of stories, called Hype Correction, is a way to reset expectations—a critical look at where we are, what AI makes possible, and where we go next.

Here’s a sneak peek at what you can expect:

+ An introduction to four ways of thinking about the great AI hype correction of 2025.

+  While it’s safe to say we’re definitely in an AI bubble right now, what’s less clear is what it really looks like—and what comes after it pops. Read the full story.

+ Why so many of the more outlandish proclamations about AI doing the rounds these days can be traced back to OpenAI’s Sam Altman. Read the full story.

+ It’s a weird time to be an AI doomer. But they’re not giving up.

+ AI coding is now everywhere—but despite the billions of dollars being poured into improving AI models’ coding abilities, not everyone is convinced. Read the full story.

+ If we really want to start finding new kinds of materials faster, AI materials discovery needs to make it out of the lab and move into the real world. Read the full story.

+ Why reports of AI’s potential to replace trained human lawyers are greatly exaggerated.

+ Dr. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, explains why the generative AI hype train is distracting us from what AI actually is and what it can—and crucially, cannot—do. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 iRobot has filed for bankruptcy
The Roomba maker is considering handing over control to its main Chinese supplier. (Bloomberg $)
+ A proposed Amazon acquisition fell through close to two years ago. (FT $)
+ How the company lost its way. (TechCrunch)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

2 Meta’s 2025 has been a total rollercoaster ride
From its controversial AI team to Mark Zuckerberg’s newfound appreciation for masculine energy. (Insider $)

3 The Trump administration is giving the crypto industry a much easier ride
It’s dismissed crypto lawsuits involving many firms with financial ties to Trump. (NYT $)
+ Celebrities are feeling emboldened to flog crypto once again. (The Guardian)
+ A bitcoin investor wants to set up a crypto libertarian community in the Caribbean. (FT $)

4 There’s a new weight-loss drug in town
And people are already taking it, even though it’s unapproved. (Wired $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

5 Chinese billionaires are having dozens of US-born surrogate babies
An entire industry has sprung up to support them. (WSJ $)
+ A controversial Chinese CRISPR scientist is still hopeful about embryo gene editing. (MIT Technology Review)

6 Trump’s “big beautiful bill” funding hinges on states integrating AI into healthcare
Experts fear it’ll be used as a cost-cutting measure, even if it doesn’t work. (The Guardian)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

7 Extreme rainfall is wreaking havoc in the desert
Oman and the UAE are unaccustomed to increasingly common torrential downpours. (WP $)

8 Data centers are being built in countries that are too hot for them
Which makes it a lot harder to cool them sufficiently. (Rest of World)

9 Why AI image generators are getting deliberately worse
Their makers are pursuing realism—not that overly polished, Uncanny Valley look. (The Verge)
+ Inside the AI attention economy wars. (NY Mag $)

10 How a tiny Swedish city became a major video game hub
Skövde has formed an unlikely community of cutting-edge developers. (The Guardian)
+ Google DeepMind is using Gemini to train agents inside one of Skövde’s biggest franchises. (MIT Technology Review)

Quote of the day

“They don’t care about the games. They don’t care about the art. They just want their money.”

—Anna C Webster, chair of the freelancing committee of the United Videogame Workers union, tells the Guardian why their members are protesting the prestigious 2025 Game Awards in the wake of major layoffs.

One more thing

Recapturing early internet whimsy with HTML

Websites weren’t always slick digital experiences.

There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.

Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.

—Tiffany Ng

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+  Here’s how a bit of math can help you wrap your presents much more neatly this year.
+ It seems that humans mastered making fire way, way earlier than we realized.
+ The Arab-owned cafes opening up across the US sound warm and welcoming.
+ How to give a gift the recipient will still be using and loving for decades to come.

The Download: expanded carrier screening, and how Southeast Asia plans to get to space

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Expanded carrier screening: Is it worth it?

Carrier screening tests would-be parents for hidden genetic mutations that might affect their children. It initially involved testing for specific genes in at-risk populations.

Expanded carrier screening takes things further, offering would-be parents and egg and sperm donors the option to test for a far wider array of diseases.

The companies offering these screens “started out with 100 genes, and now some of them go up to 2,000,” Sara Levene, genetics counsellor at Guided Genetics, said at a meeting I attended this week. “It’s becoming a bit of an arms race amongst labs, to be honest.”

But expanded carrier screening comes with downsides. And it isn’t for everyone. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Southeast Asia seeks its place in space

It’s a scorching October day in Bangkok and I’m wandering through the exhibits at the Thai Space Expo, held in one of the city’s busiest shopping malls, when I do a double take. Amid the flashy space suits and model rockets on display, there’s a plain-looking package of Thai basil chicken. I’m told the same kind of vacuum-sealed package has just been launched to the International Space Station.

It’s an unexpected sight, one that reflects the growing excitement within the Southeast Asian space sector. And while there is some uncertainty about how exactly the region’s space sector may evolve, there is plenty of optimism, too. Read the full story.

—Jonathan O’Callaghan

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Disney just signed a major deal with OpenAI
Meaning you’ll soon be able to create Sora clips starring 200 Marvel, Pixar and Star Wars characters. (Hollywood Reporter $)
+ Disney used to be openly skeptical of AI. What changed? (WSJ $)
+ It’s not feeling quite so friendly towards Google, however. (Ars Technica)
+ Expect a load of AI slop making its way to Disney Plus. (The Verge)

2 Donald Trump has blocked US states from enforcing their own AI rules
But technically, only Congress has the power to override state laws. (NYT $)
+ A new task force will seek out states with “inconsistent” AI rules. (Engadget)
+ The move is particularly bad news for California. (The Markup)

3 Reddit is challenging Australia’s social media ban for teens
It’s arguing that the ban infringes on their freedom of political communication. (Bloomberg $)
+ We’re learning more about the mysterious machinations of the teenage brain. (Vox)

4 ChatGPT’s “adult mode” is due to launch early next year
But OpenAI admits it needs to improve its age estimation tech first. (The Verge)
+ It’s pretty easy to get DeepSeek to talk dirty. (MIT Technology Review)

5 The death of Running Tide’s carbon removal dream
The company’s demise is a wake-up call to others dabbling in experimental tech. (Wired $)
+ We first wrote about Running Tide’s issues back in 2022. (MIT Technology Review)
+ What’s next for carbon removal? (MIT Technology Review)

6 That dirty-talking AI teddy bear wasn’t a one-off
It turns out that a wide range of LLM-powered toys aren’t suitable for children. (NBC News)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

7 These are the cheapest places to create a fake online account
For a few cents, scammers can easily set up bots. (FT $)

8 How professors are attempting to AI-proof exams
ChatGPT won’t help you cut corners to ace an oral examination. (WP $)

9 Can a font be woke?
Marco Rubio seems to think so. (The Atlantic $)

10 Next year is all about maximalist circus decor 🎪
That’s according to Pinterest’s trend predictions for 2026. (The Guardian)

Quote of the day

 “Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.” 

—Senator Ed Markey criticizes Donald Trump’s decision to sign an order cracking down on US states’ ability to self-regulate AI, the Wall Street Journal reports.

One more thing

Taiwan’s “silicon shield” could be weakening

Taiwanese politics increasingly revolves around one crucial question: Will China invade? China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled).

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications.

But now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Read the full story.

—Johanna M. Costigan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Reasons to be cheerful: people are actually nicer than we think they are.
+ This year’s Krampus Run in Whitby—the Yorkshire town that inspired Bram Stoker’s Dracula—looks delightfully spooky.
+ How to find the magic in that most mundane of locations: the airport.
+ The happiest of birthdays to Dionne Warwick, who turns 85 today.

The Download: solar geoengineering’s future, and OpenAI is being sued

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.

A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.

So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. So what does it mean for geoengineering, and for the climate? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you’re interested in reading more about solar geoengineering, check out:

+ Why the for-profit race into solar geoengineering is bad for science and public trust. Read the full story.

+ Why we need more research—including outdoor experiments—to make better-informed decisions about such climate interventions.

+ The hard lessons of Harvard’s failed geoengineering experiment, which was officially terminated last year. Read the full story.

+ How this London nonprofit became one of the biggest backers of geoengineering research.

+ The technology could alter the entire planet. These groups want every nation to have a say.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is being sued for wrongful death
By the estate of a woman killed by her son after he engaged in delusion-filled conversations with ChatGPT. (WSJ $)
+ The chatbot appeared to validate Stein-Erik Soelberg’s conspiratorial ideas. (WP $)
+ It’s the latest in a string of wrongful death legal actions filed against chatbot makers. (ABC News)

2 ICE is tracking pregnant immigrants through specially developed smartwatches
They’re unable to take the devices off, even during labor. (The Guardian)
+ Pregnant and postpartum women say they’ve been detained in solitary confinement. (Slate $)
+ Another effort to track ICE raids has been taken offline. (MIT Technology Review)

3 Meta’s new AI hires aren’t making friends with the rest of the company
Tensions are rife between the AGI team and other divisions. (NYT $)
+ Mark Zuckerberg is keen to make money off the company’s AI ambitions. (Bloomberg $)
+ Meanwhile, what’s life like for the remaining Scale AI team? (Insider $)

4 Google DeepMind is building its first materials science lab in the UK
It’ll focus on developing new materials to build superconductors and solar cells. (FT $) 

5 The new space race is to build orbital data centers
And Blue Origin is winning, apparently. (WSJ $)
+ Plenty of companies are jostling for their slice of the pie. (The Verge)
+ Should we be moving data centers to space? (MIT Technology Review)

6 Inside the quest to find out what causes Parkinson’s
A growing body of work suggests it may not be purely genetic after all. (Wired $)

7 Are you in TikTok’s cat niche? 
If so, you’re likely to be in these other niches too. (WP $)

8 Why do our brains get tired? 🧠💤
Researchers are trying to get to the bottom of it.  (Nature $)

9 Microsoft’s boss has built his own cricket app 🏏
Satya Nadella can’t get enough of the sound of leather on willow. (Bloomberg $)

10 How much vibe coding is too much vibe coding? 
One journalist’s journey into the heart of darkness. (Rest of World)
+ What is vibe coding, exactly? (MIT Technology Review)

Quote of the day

“I feel so much pain seeing his sad face…I hope for a New Year’s miracle.”

—A child in Russia sends a message to the Kremlin-aligned Safe Internet League explaining the impact of the country’s decision to block access to the wildly popular gaming platform Roblox on their brother, the Washington Post reports.

 One more thing

Why it’s so hard to stop tech-facilitated abuse

After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. 

One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.

Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story.

—Jessica Klein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The New Yorker has picked its best TV shows of 2025. Let the debate commence!
+ Check out the winners of this year’s Drone Photo Awards.
+ I’m sorry to report you aren’t half as intuitive as you think you are when it comes to deciphering your dog’s emotions.
+ Germany’s “home of Christmas” sure looks magical.

The Download: LLM confessions, and tapping into geothermal hot spots

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

OpenAI has trained its LLM to confess to bad behavior

What’s new: OpenAI is testing a new way to expose the complicated processes at work inside large language models. Researchers at the company can make an LLM produce what they call a confession, in which the model explains how it carried out a task and (most of the time) owns up to any bad behavior.

Why it matters: Figuring out why large language models do what they do—and in particular why they sometimes appear to lie, cheat, and deceive—is one of the hottest topics in AI right now. If this multitrillion-dollar technology is to be deployed as widely as its makers hope it will be, it must be made more trustworthy. OpenAI sees confessions as one step toward that goal. Read the full story.

—Will Douglas Heaven

How AI is uncovering hidden geothermal energy resources

Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on Earth’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.

A startup called Zanskar announced today that it’s used AI and other advanced computational methods to uncover a blind geothermal system—meaning there aren’t signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system that’s been identified and confirmed to be a commercial prospect in over 30 years. Read the full story.

—Casey Crownhart

Why the grid relies on nuclear reactors in the winter

In the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has scrapped Biden’s fuel efficiency requirements
It’s a major blow for green automobile initiatives. (NYT $)
+ Trump maintains that getting rid of the rules will drive down the price of cars. (Politico)

2 RFK Jr’s vaccine advisers may delay hepatitis B vaccines for babies
The shots play a key part in combating acute cases of the infection. (The Guardian)
+ Former FDA commissioners are worried by its current chief’s vaccine views. (Ars Technica)
+ Meanwhile, a fentanyl vaccine is being trialed in the Netherlands. (Wired $)

3 Amazon is exploring building its own US delivery network
Which could mean axing its long-standing partnership with the US Postal Service. (WP $)

4 Republicans are defying Trump’s orders to block states from passing AI laws
They’re pushing back against plans to sneak the rule into an annual defense bill. (The Hill)
+ Trump has been pressuring them to fall in line for months. (Ars Technica)
+ Congress killed an attempt to stop states regulating AI back in July. (CNN)

5 Wikipedia is exploring AI licensing deals
It’s a bid to monetize AI firms’ heavy reliance on its web pages. (Reuters)
+ How AI and Wikipedia have sent vulnerable languages into a doom spiral. (MIT Technology Review)

6 OpenAI is looking to the stars—and beyond
Sam Altman is reportedly interested in acquiring or partnering with a rocket company. (WSJ $)

7 What we can learn from wildfires
This year’s Dragon Bravo fire defied predictive modelling. But why? (New Yorker $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 What’s behind America’s falling birth rates?
It’s remarkably hard to say. (Undark)

9 Researchers are studying whether brain rot is actually real 🧠
Including whether its effects could be permanent. (NBC News)

10 YouTuber Mr Beast is planning to launch a mobile phone service
Beast Mobile, anyone? (Insider $)
+ The New York Stock Exchange could be next in his sights. (TechCrunch)

Quote of the day

“I think there are some players who are YOLO-ing.”

—Anthropic CEO Dario Amodei suggests some rival AI companies are veering into risky spending territory, Bloomberg reports.

One more thing

The quest to show that biological sex matters in the immune system

For years, microbiologist Sabra Klein has painstakingly made the case that sex—defined by biological attributes such as our sex chromosomes, sex hormones, and reproductive tissues—can influence immune responses.

Klein and others have shown how and why male and female immune systems respond differently to the flu virus, HIV, and certain cancer therapies, and why most women receive greater protection from vaccines but are also more likely to get severe asthma and autoimmune disorders.

Klein has helped spearhead a shift in immunology, a field that long thought sex differences didn’t matter—and she’s set her sights on pushing the field of sex differences even further. Read the full story.

—Sandeep Ravindran

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Digital artist Beeple’s latest Art Basel show features robotic dogs modeled on Elon Musk, Jeff Bezos, and Mark Zuckerberg pooping out NFTs 💩
+ If you’ve always dreamed of seeing the Northern Lights, here’s your best bet at doing so.
+ Check out this fun timeline of fashion’s hottest venues.
+ Why monkeys in ancient Roman times had pet piglets 🐖🐒

The Download: AI and coding, and Waymo’s aggressive driverless cars

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Everything you need to know about AI and coding

AI has already transformed how code is written, but a new wave of autonomous systems promises to make the process even smoother and less prone to mistakes.

Amazon Web Services has just revealed three new “frontier” AI agents, its term for a more sophisticated class of autonomous agents capable of working for days at a time without human intervention. One of them, called Kiro, is designed to work independently without the need for a human to constantly point it in the right direction. Another, AWS Security Agent, scans a project for common vulnerabilities: an interesting development given that many AI-enabled coding assistants can end up introducing errors.

To learn more about the exciting direction AI-enhanced coding is heading in, check out our team’s reporting: 

+ A string of startups are racing to build models that can produce better and better software. Read the full story.

+ We’re starting to give AI agents real autonomy. Are we ready for what could happen next?

+ What is vibe coding, exactly?

+ Anthropic’s cofounder and chief scientist Jared Kaplan on 4 ways agents will improve. Read the full story.

+ How AI assistants are already changing the way code gets made. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon’s new agents can reportedly code for days at a time 
They remember previous sessions and continuously learn from a company’s codebase. (VentureBeat)
+ AWS says it’s aware of the pitfalls of handing over control to AI. (The Register)
+ The company faces the challenge of building enough infrastructure to support its AI services. (WSJ $)

2 Waymo’s driverless cars are getting surprisingly aggressive
The company’s goal to make the vehicles “confidently assertive” is prompting them to bend the rules. (WSJ $)
+ That said, their cars still have a far lower crash rate than human drivers. (NYT $)

3 The FDA’s top drug regulator has stepped down
After only three weeks in the role. (Ars Technica)
+ A leaked vaccine memo from the agency doesn’t inspire confidence. (Bloomberg $)

4 Maybe DOGE isn’t entirely dead after all
Many of its former workers are embedded in various federal agencies. (Wired $)

5 A Chinese startup’s reusable rocket crash-landed after launch
It suffered what it called an “abnormal burn,” scuppering hopes of a soft landing. (Bloomberg $)

6 Startups are building digital clones of major sites to train AI agents
From Amazon to Gmail, they’re creating virtual agent playgrounds. (NYT $)

7 Half of US states now require visitors to porn sites to upload their ID
Missouri has become the 25th state to enact age verification laws. (404 Media)

8 AGI truthers are trying to influence the Pope
They’re desperate for him to take their concerns seriously. (The Verge)
+ How AGI became the most consequential conspiracy theory of our time. (MIT Technology Review)

9 Marketers are leaning into ragebait ads
But does making customers annoyed really translate into sales? (WP $)

10 The surprising role plant pores could play in fighting drought
At night as well as daytime. (Knowable Magazine)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

Quote of the day

“Everyone is begging for supply.”

—An anonymous source tells Reuters about the desperate measures Chinese AI companies take to secure scarce chips.

One more thing

The case against humans in space

Elon Musk and Jeff Bezos are bitter rivals in the commercial space race, but they agree on one thing: Settling space is an existential imperative. Space is the place. The final frontier. It is our human destiny to transcend our home world and expand our civilization to extraterrestrial vistas.

This belief has been mainstream for decades, but its rise has been positively meteoric in this new gilded age of astropreneurs.

But as visions of giant orbital stations and Martian cities dance in our heads, a case against human space colonization has found its footing in a number of recent books, from doubts about the practical feasibility of off-Earth communities, to realism about the harsh environment of space and the enormous tax it would exact on the human body. Read the full story.

—Becky Ferreira

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This compilation of 21st century floor fillers is guaranteed to make you feel old.
+ A fire-loving amoeba has been found chilling out in volcanic hot springs.
+ This old-school Terminator 2 game is pixel perfection.
+ How truthful an adaptation is your favorite based-on-a-true-story movie? Let’s take a look at the data.

The Download: AI’s impact on the economy, and DeepSeek strikes again

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: Welcome to the economic singularity

—David Rotman and Richard Waters

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. At the other extreme, most companies are seeing little if any benefit from their initial investments. 

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business. To students of tech history, though, the lack of immediate impact is normal. Read the full story.

If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside our editor in chief, Mat Honan, for an exclusive conversation digging into what’s happening across different markets live on Tuesday, December 9 at 1pm ET. Register here.

The State of AI is our subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Sign up to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepSeek has unveiled two new experimental AI models 
DeepSeek-V3.2 is designed to match the reasoning capabilities of OpenAI’s GPT-5. (Bloomberg $)
+ Here’s how DeepSeek slashes its models’ computational burden. (VentureBeat)
+ It’s achieved these results despite its limited access to powerful chips. (SCMP $)

2 OpenAI has issued a “code red” warning to its employees
It’s a call to arms to improve ChatGPT, or risk being overtaken. (The Information $)
+ Both Google and Anthropic are snapping at OpenAI’s heels. (FT $)
+ Advertising and other initiatives will be pushed back to accommodate the new focus. (WSJ $)

3 How to know when the AI bubble has burst
These are the signs to look out for. (Economist $)
+ Things could get a whole lot worse for the economy if and when it pops. (Axios)
+ We don’t really know how the AI investment surge is being financed. (The Guardian)

4 Some US states are making it illegal for AI to discriminate against you
California is the latest to give workers more power to fight algorithms. (WP $)

5 This AI startup is working on a post-transformer future
Transformer architecture underpins the current AI boom—but Pathway is developing something new. (WSJ $)
+ What the next frontier of AI could look like. (IEEE Spectrum)

6 India is demanding smartphone makers install a government app
Which privacy advocates say is unacceptable snooping. (FT $)
+ India’s tech talent is looking for opportunities outside the US. (Rest of World)

7 College students are desperate to sign up for AI majors
AI is now the second-largest major at MIT behind computer science. (NYT $)
+ AI’s giants want to take over the classroom. (MIT Technology Review)

8 America’s musical heritage is at serious risk
Much of it is stored on studio tapes, which are deteriorating over time. (NYT $)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 Celebrities are increasingly turning on AI
That doesn’t stop fans from casting them in slop videos anyway. (The Verge)

10 Samsung has revealed its first tri-folding phone
But will people actually want to buy it? (Bloomberg $)
+ It’ll cost more than $2,000 when it goes on sale in South Korea. (Reuters)

Quote of the day

“The Chinese will not pause. They will take over.”

—Michael Lohscheller, chief executive of Swedish electric car maker Polestar, tells the Guardian why Europe should stick to its plan to ban the production of new petrol and diesel cars by 2035. 

One more thing

Inside Amsterdam’s high-stakes experiment to create fair welfare AI

Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still not fair and effective. Why?

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.

—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Hear me out: a truly great festive film doesn’t need to be about Christmas at all.
+ Maybe we should judge a book by its cover after all.
+ Happy birthday to Ms Britney Spears, still the princess of pop at 44!
+ The fascinating psychology behind why we love travelling so much.

The Download: spotting crimes in prisoners’ phone calls, and nominate an Innovator Under 35

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

An AI model trained on prison phone calls now looks for planned crimes in those calls

A US telecom company trained an AI model on years of inmates’ phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes.

Securus Technologies president Kevin Elder told MIT Technology Review that the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, but it has been working on models for other states and counties.

However, prisoner rights advocates say that the new AI system enables invasive surveillance, and courts have specified few limits to this power. Read the full story.

—James O’Donnell

Nominations are now open for our global 2026 Innovators Under 35 competition

We have some exciting news: Nominations are now open for MIT Technology Review’s 2026 Innovators Under 35 competition. This annual list recognizes 35 of the world’s best young scientists and inventors, and our newsroom has produced it for more than two decades. 

It’s free to nominate yourself or someone you know, and it only takes a few moments. Here’s how to submit your nomination.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 New York is cracking down on personalized pricing algorithms
A new law forces retailers to declare if their pricing is informed by users’ data. (NYT $)
+ The US National Retail Federation tried to block it from passing. (TechCrunch)

2 The White House has launched a media bias tracker
Complete with a “media offender of the week” section and a Hall of Shame. (WP $)
+ The Washington Post is currently listed as the site’s top offender. (The Guardian)
+ Donald Trump has lashed out at several reporters in the past few weeks. (The Hill)

3 American startups are hooked on open-source Chinese AI models
They’re cheap and customizable—what’s not to like? (NBC News)
+ Americans also love China’s cheap goods, regardless of tariffs. (WP $)
+ The State of AI: Is China about to win the race? (MIT Technology Review)

4 How police body cam footage became viral YouTube content
Recent arrestees live in fear of ending up on popular channels. (Vox)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

5 Construction workers are cashing in on the data center boom
Might as well enjoy it while it lasts. (WSJ $)
+ The data center boom in the desert. (MIT Technology Review)

6 China isn’t convinced by crypto
Even though bitcoin mining is quietly making a (banned) comeback. (Reuters)
+ The country’s central bank is no fan of stablecoins. (CoinDesk)

7 A startup is treating its AI companions like characters in a novel
Could that approach make for better AI companions? (Fast Company $)
+ Gemini is the most empathetic model, apparently. (Semafor)
+ The looming crackdown on AI companionship. (MIT Technology Review)

8 Ozempic is so yesterday 💉
New weight-loss drugs are tailored to individual patients. (The Atlantic $)
+ What we still don’t know about weight-loss drugs. (MIT Technology Review)

9 AI is upending how consultants work
For the third year in a row, big firms are freezing junior workers’ salaries. (FT $)

10 Behind the scenes of Disney’s AI animation accelerator
What took five months to create has been whittled down to under five weeks. (CNET)
+ Director supremo James Cameron appears to have changed his mind about AI. (TechCrunch)
+ Why are people scrolling through weirdly-formatted TV clips? (WP $)

Quote of the day

“[I hope AI] comes to a point where it becomes sort of mental junk food and we feel sick and we don’t know why.”

—Actor Jenna Ortega outlines her hopes for AI’s future role in filmmaking, Variety reports.

One more thing

The weeds are winning

Since the 1980s, more and more plants have evolved to become immune to the biochemical mechanisms that herbicides use to kill them. This herbicide resistance threatens to decrease yields—out-of-control weeds can reduce them by 50% or more, and extreme cases can wipe out whole fields.

At worst, it can even drive farmers out of business. It’s the agricultural equivalent of antibiotic resistance, and it keeps getting worse. Weeds have evolved resistance to 168 different herbicides and 21 of the 31 known “modes of action”—that is, the specific biochemical targets or pathways that herbicides are designed to disrupt.

Agriculture needs to embrace a diversity of weed control practices. But that’s much easier said than done. Read the full story.

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Now we’re finally in December, don’t let Iceland’s gigantic child-eating Yule Cat give you nightmares 😺
+ These breathtaking sculpture parks are serious must-sees ($)
+ 1985 sure was a vintage year for films.
+ Is nothing sacred?! Now Ozempic has come for our Christmas trees!

The Download: the mysteries surrounding weight-loss drugs, and the economic effects of AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What we still don’t know about weight-loss drugs

Weight-loss drugs have been back in the news this week. First, we heard that Eli Lilly, the company behind Mounjaro and Zepbound, became the first healthcare company in the world to achieve a trillion-dollar valuation.

But we also learned that, disappointingly, GLP-1 drugs don’t seem to help people with Alzheimer’s disease. And that people who stop taking the drugs when they become pregnant can experience potentially dangerous levels of weight gain. On top of that, some researchers worry that people are using the drugs postpartum to lose pregnancy weight without understanding potential risks.

All of this news should serve as a reminder that there’s a lot we still don’t know about these drugs. So let’s look at the enduring questions surrounding GLP-1 agonist drugs.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

If you’re interested in weight loss drugs and how they affect us, take a look at:

+ GLP-1 agonists like Wegovy, Ozempic, and Mounjaro might benefit heart and brain health—but research suggests they might also cause pregnancy complications and harm some users. Read the full story.

+ We’ve never understood how hunger works. That might be about to change. Read the full story.

+ Weight-loss injections have taken over the internet. But what does this mean for people IRL?

+ This vibrating weight-loss pill seems to work—in pigs. Read the full story.

What we know about how AI is affecting the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman and Financial Times columnist Richard Waters, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday, December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Tech billionaires are gearing up to fight AI regulation 
By amassing multi-million dollar war chests ahead of the 2026 US midterm elections. (WSJ $)
+ Donald Trump’s “Manhattan Project” for AI is certainly ambitious. (The Information $)

2 The EU wants to hold social media platforms liable for financial scams
New rules will force tech firms to compensate banks if they fail to remove reported scams. (Politico)

3 China is worried about a humanoid robot bubble
Because more than 150 companies there are building very similar machines. (Bloomberg $)
+ It could learn some lessons from the current AI bubble. (CNN)
+ Why the humanoid workforce is running late. (MIT Technology Review)

4 A Myanmar scam compound was blown up
But its residents will simply find new bases for their operations. (NYT $)
+ Experts suspect the destruction may have been for show. (Wired $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

5 Navies across the world are investing in submarine drones 
They cost a fraction of what it takes to run a traditional manned sub. (The Guardian)
+ How underwater drones could shape a potential Taiwan-China conflict. (MIT Technology Review)

6 What to expect from China’s seemingly unstoppable innovation drive
Its extremely permissive regulators play a big role. (Economist $)
+ Is China about to win the AI race? (MIT Technology Review)

7 The UK is waging a war on VPNs
Good luck trying to persuade people to stop using them. (The Verge)

8 We’re learning more about Jeff Bezos’ mysterious clock project
He’s backed the Clock of the Long Now for years—and construction is amping up. (FT $)
+ How aging clocks can help us understand why we age—and if we can reverse it. (MIT Technology Review)

9 Have we finally seen the first hints of dark matter?
These researchers seem to think so. (New Scientist $)

10 A handy robot is helping archaeologists reconstruct Pompeii
Reassembling ancient frescoes is fiddly and time-consuming, but less so if you’re a dexterous machine. (Reuters)

Quote of the day

“We do fail… a lot.”

—Defense company Anduril explains its move-fast-and-break-things ethos to the Wall Street Journal in response to reports its systems have been marred by issues in Ukraine.

One more thing

How to build a better AI benchmark

It’s not easy being one of Silicon Valley’s favorite benchmarks.

SWE-Bench (pronounced “swee bench”) launched in November 2024 as a way to evaluate an AI model’s coding skill. It has since quickly become one of the most popular tests in AI. A SWE-Bench score has become a mainstay of major model releases from OpenAI, Anthropic, and Google—and outside of foundation models, the fine-tuners at AI firms are in constant competition to see who can rise above the pack.

Despite all the fervor, these scores aren’t necessarily an accurate measure of which model is “better.” Entrants have begun to game the system—which is pushing many to wonder whether there’s a better way to measure AI achievement. Read the full story.

—Russell Brandom

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Aww, these sharks appear to be playing with pool toys.
+ Strange things are happening over on Easter Island (even weirder than you can imagine) 🗿
+ Very cool—archaeologists have uncovered a Roman tomb that’s been sealed shut for 1,700 years.
+ This Japanese mass media collage is making my eyes swim, in a good way.

The Download: the fossil fuel elephant in the room, and better tests for endometriosis

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This year’s UN climate talks avoided fossil fuels, again

Over the past few weeks in Belem, Brazil, attendees of this year’s UN climate talks dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

New noninvasive endometriosis tests are on the rise

Endometriosis inflicts debilitating pain and heavy bleeding on more than 11% of reproductive-age women in the United States. Diagnosis takes nearly 10 years on average, partly because half the cases don’t show up on scans, and surgery is required to obtain tissue samples.

But a new generation of noninvasive tests is emerging that could help accelerate diagnosis and improve management of this poorly understood condition. Read the full story.

—Colleen de Bellefonds

This story is from the last print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI claims a teenager circumvented its safety features before ending his life
It says ChatGPT directed Adam Raine to seek help more than 100 times. (TechCrunch)
+ OpenAI is strongly refuting the idea it’s liable for the 16-year-old’s death. (NBC News)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 The CDC’s new deputy director prefers natural immunity to vaccines
And he wasn’t even the worst choice among those considered for the role. (Ars Technica)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)

3 An MIT study says AI could already replace 12% of the US workforce
Researchers drew that conclusion after simulating a digital twin of the US labor market. (CNBC)
+ Separate research suggests it could replace 3 million jobs in the UK, too. (The Guardian)
+ AI usage looks unlikely to keep climbing. (Economist $)

4 An Italian defense group has created an AI-powered air shield system
It claims the system allows defenders to generate dome-style missile shields. (FT $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

5 The EU is considering a ban on social media for under-16s
Following in the footsteps of Australia, whose own ban comes into force next month. (Politico)
+ The European Parliament wants parents to decide on access. (The Guardian)

6 Why do so many astronauts keep getting stuck in space?
America, Russia, and now China have had to contend with this situation. (WP $)
+ A rescue craft for three stranded Chinese astronauts has successfully reached them. (The Register)

7 Uploading pictures of your hotel room could help trafficking victims
A new app uses computer vision to determine where pictures of generic-looking rooms were taken. (IEEE Spectrum)

8 This browser tool turns back the clock to a pre-AI slop web
Back to the golden age before ChatGPT launched on November 30, 2022. (404 Media)
+ The White House’s slop posts are shockingly bad. (NY Mag $)
+ Animated neo-Nazi propaganda is freely available on X. (The Atlantic $)

9 Grok’s “epic roasts” are as tragic as you’d expect
Test it out at parties at your own peril. (Wired $)

10 Startup founders dread explaining their jobs at Thanksgiving 🍗
Yes Grandma, I work with computers. (Insider $)

Quote of the day

“AI cannot ever replace the unique gift that you are to the world.”

—Pope Leo XIV warns students about the dangers of over-relying on AI, New York Magazine reports.

One more thing

Why we should thank pigeons for our AI breakthroughs

People looking for precursors to artificial intelligence often point to science fiction or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner’s research with pigeons in the middle of the 20th century.

Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings.

His “behaviorist” theories fell out of favor in the 1960s but were taken up by computer scientists whose work eventually provided the foundation for many of the leading AI tools. Read the full story.

—Ben Crair

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I hope you had a happy, err, Green Wednesday if you partook this year.
+ Here’s how to help an endangered species from the comfort of your own home.
+ Polly wants to FaceTime—now! 📱🦜(thanks Alice!)
+ I need Macaulay Culkin’s idea for another Home Alone sequel to get greenlit, stat.

The Download: AI and the economy, and slop for the masses

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is changing the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman and Financial Times columnist Richard Waters, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday, December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

If you’re interested in how AI is affecting the economy, take a look at: 

+ People are worried that AI will take everyone’s jobs. We’ve been here before.

+ What will AI mean for economic inequality? If we’re not careful, we could see widening gaps within countries and between them. Read the full story.

+ Artificial intelligence could put us on the path to a booming economic future, but getting there will take some serious course corrections. Here’s how to fine-tune AI for prosperity.

The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here, featuring everything from replacing animal testing with AI to our story on why AGI should be viewed as a conspiracy theory.

MIT Technology Review Narrated: How to fix the internet

We all know the internet (well, social media) is broken. But it has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.

That makes it worth fighting for. And yet, fixing online discourse is the definition of a hard problem.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How much AI investment is too much AI investment?
Tech companies hope to learn from beleaguered Intel. (WSJ $)
+ HP is pivoting to AI in the hopes of saving $1 billion a year. (The Guardian)
+ The European Central Bank has accused tech investors of FOMO. (FT $)

2 ICE is outsourcing immigrant surveillance to private firms
It’s incentivizing contractors with multi-million dollar rewards. (Wired $)
+ Californian residents have been traumatized by recent raids. (The Guardian)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

3 Poland plans to use drones to defend its rail network from attack
It’s blaming Russia for a recent line explosion. (FT $)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 ChatGPT could eventually have as many subscribers as Spotify
According to, erm, OpenAI. (The Information $)

5 Here’s how your phone-checking habits could shape your daily life
You’re probably underestimating just how often you pick it up. (WP $)
+ How to log off. (MIT Technology Review)

6 Chinese drugs are coming
Its drugmakers are on the verge of making more money overseas than at home. (Economist $)

7 Uber is deploying fully driverless robotaxis on an Abu Dhabi island
Roaming 12 square miles of the popular tourist destination. (The Verge)
+ Tesla is hoping to double its robotaxi fleet in Austin next month. (Reuters)

8 Apple is set to become the world’s largest smartphone maker
After more than a decade in Samsung’s shadow. (Bloomberg $)

9 An AI teddy bear that discussed sexual topics is back on sale
But the Teddy Kumma toy is now powered by a different chatbot. (Bloomberg $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

10 How Stranger Things became the ultimate algorithmic TV show
Its creators mashed a load of pop culture references together and created a streaming phenomenon. (NYT $)

Quote of the day

“AI is a very powerful tool—it’s a hammer and that doesn’t mean everything is a nail.”

—Marketing consultant Ryan Bearden explains to the Wall Street Journal why it pays to be discerning when using AI.

One more thing

Are we ready to hand AI agents the keys?

In recent months, a new class of agents has arrived on the scene: ones built using large language models. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.

LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon. Despite that, like chatbot LLMs, agents can be chaotic and unpredictable. Here’s what could happen as we try to integrate them into everything.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The entries for this year’s Nature inFocus Photography Awards are fantastic.
+ There’s nothing like a good karaoke sesh.
+ Happy heavenly birthday Tina Turner, who would have turned 86 years old today.
+ Stop the presses—the hotly-contested list of the world’s top 50 vineyards has officially been announced 🍇
