
America’s coming war over AI regulation

23 January 2026 at 05:00

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.

In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation. 

Trump’s executive order directs the Department of Justice to establish a task force that sues states whose AI laws clash with his vision for light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.

For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols used to ensure the safe development of their AI models and report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.

If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states like Florida with vocal champions for AI regulation might follow suit. Trump could face an uphill battle. “The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”

But Republican states that are anxious to stay off Trump’s radar or can’t afford to lose federal broadband funding for their sprawling rural communities might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.

In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal. 

The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says. 

While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general signed a bipartisan letter urging the FCC not to supersede state AI laws.

With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.

Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, a startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that the chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann. 

While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a ballot initiative in California called the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children. The measure proposes requiring AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots. 

Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report on their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act. 

Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.

In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.

Hyperscale AI data centers: 10 Breakthrough Technologies 2026

12 January 2026 at 06:00

In sprawling stretches of farmland and industrial parks, supersized buildings packed with racks of computers are springing up to fuel the AI race. These engineering marvels are a new species of infrastructure: supercomputers designed to train and run large language models at mind-bending scale, complete with their own specialized chips, cooling systems, and even energy supplies.

Hyperscale AI data centers bundle hundreds of thousands of specialized computer chips called graphics processing units (GPUs), such as Nvidia’s H100s, into synchronized clusters that work like one giant supercomputer. These chips excel at processing massive amounts of data in parallel. Hundreds of thousands of miles of fiber-optic cables connect the chips like a nervous system, letting them communicate at lightning speed. Enormous storage systems continuously feed data to the chips as the facilities hum and whir around the clock.

Tech companies like OpenAI, Google, Amazon, Microsoft, and Meta are pouring hundreds of billions of dollars into this infrastructure. Governments are spending big too.

But the impressive computing power comes at a cost. The densely packed chips run so hot that air-conditioning can’t cool them. Instead, they’re mounted to cold water plates or dunked in baths of cooling fluid. Dipping them in seawater may be next. 

The largest data centers being built can devour more than a gigawatt of electricity—enough to power entire cities. Over half of that electricity comes from fossil fuels, while renewables meet just over a quarter of the demand. Some AI giants are turning to nuclear power. Google is dreaming of building solar-powered data centers in space.

The frenzied buildout of data centers is driven by the scaling laws of AI and by exploding demand as the technology gets wedged into everything from anime girlfriends to fitness apps. But the public may shoulder the costs of all this construction for years to come, as communities hosting the power-hungry facilities grapple with soaring energy bills, water shortages, droning noise, and air pollution.

What’s next for AI in 2026

In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again. 

How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a. world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least).

So what’s coming in 2026? Here are our big bets for the next 12 months. 

More Silicon Valley products will be built on Chinese LLMs

The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts. 

It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.

Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.

As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.

One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.

Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3. 

Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.

Caiwei Chen

The US will face another year of regulatory tug-of-war

The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to handcuff states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.

Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law requiring companies to publish safety testing for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.

In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year. 

AI companies like OpenAI and Meta will continue to deploy powerful super PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.

The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.

Michelle Kim

Chatbots will change the way we shop

Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.

But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, agentic commerce could generate between $3 trillion and $5 trillion annually, according to research from the consulting firm McKinsey.

Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions. 

Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet. 

Rhiannon Williams

An LLM will make an important new discovery

I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Short of monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But LLMs do still have the potential to extend the bounds of human knowledge.

We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
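That generate-evaluate-select loop can be sketched in miniature. This is an illustrative toy, not AlphaEvolve's actual code: the hypothetical `propose` function stands in for the Gemini call (here it just nudges a vector of numbers instead of rewriting real algorithms), and `score` stands in for the automated evaluator that checks each suggestion.

```python
import random

def propose(parent, rng):
    # Stand-in for the LLM step: mutate a candidate "program"
    # (here just a vector of coefficients) to get a new suggestion.
    return [x + rng.uniform(-0.1, 0.1) for x in parent]

def score(candidate):
    # Stand-in for the automated evaluator: higher is better.
    # Here we reward closeness to a known target, in place of a real
    # measurement like a generated algorithm's efficiency.
    target = [1.0, -2.0, 0.5]
    return -sum((a - b) ** 2 for a, b in zip(candidate, target))

def evolve(generations=200, population=8, seed=0):
    rng = random.Random(seed)
    pool = [[0.0, 0.0, 0.0] for _ in range(population)]
    for _ in range(generations):
        # Generate children from the current pool (the "LLM" step)...
        children = [propose(p, rng) for p in pool]
        # ...then keep only the best candidates (the selection step),
        # which seed the next round of proposals.
        pool = sorted(pool + children, key=score, reverse=True)[:population]
    return pool[0]

best = evolve()
```

The essential design choice is the same as in the system the article describes: the language model only has to produce plausible variations, because a separate, trusted scorer decides which ones survive.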

Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.

And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.

There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.

Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.    

Will Douglas Heaven

Legal fights heat up

For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.

The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If companies lose these cases, will insurers shun AI companies as clients?

In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).

At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.

No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).

James O’Donnell

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

And make sure you brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence

As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it was reportedly offering nine-figure compensation packages to lure AI experts away from its competitors.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt generative AI coding assistants to create the digital object of your desire and accept pretty much everything they spit out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable future. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models at a fraction of the cost. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by having a bigger model tutor a smaller one: you run the teacher model on a lot of examples, record its answers, and train the student model to reproduce those responses as closely as possible, so that it gains a compressed version of the teacher’s knowledge. —Caiwei Chen

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advances in AI can help robots move more capably through the physical world. 

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites directly within their platforms. It’s time to adapt or die. —Rhiannon Williams

AI might not be coming for lawyers’ jobs anytime soon

15 December 2025 at 05:00

When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. “Before graduating, there was discussion about what the job market would look like for us if AI became adopted,” she recalls. 

So when it came time to choose a specialty, Miller—now a junior associate at the law firm Orrick—decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. “Judges haven’t allowed ChatGPT-enabled robots to argue in court yet,” she says.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


She had reason to be worried. The artificial-intelligence job apocalypse seemed to be coming for lawyers. In March 2023, researchers reported that GPT-4 had smashed the Uniform Bar Exam. That same month, an industry report predicted that 44% of legal work could be automated. The legal tech industry entered a boom as law firms began adopting generative AI to mine mountains of documents and draft contracts, work ordinarily done by junior associates. Last month, the law firm Clifford Chance axed 10% of its staff in London, citing increased use of AI as a reason.

But for all the hype, LLMs are still far from thinking like lawyers—let alone replacing them. The models continue to hallucinate case citations, struggle to navigate gray areas of the law and reason about novel questions, and stumble when they attempt to synthesize information scattered across statutes, regulations, and court cases. And there are deeper institutional reasons to think the models could struggle to supplant legal jobs. While AI is reshaping the grunt work of the profession, the end of lawyers may not be arriving anytime soon.

The big experiment

The legal industry has long been defined by long hours and grueling workloads, so the promise of superhuman efficiency is appealing. Law firms are experimenting with general-purpose tools like ChatGPT and Microsoft Copilot and specialized legal tools like Harvey and Thomson Reuters’ CoCounsel, with some building their own in-house tools on top of frontier models. They’re rolling out AI boot camps and letting associates bill hundreds of hours to AI experimentation. As of 2024, 47.8% of attorneys at law firms employing 500 or more lawyers used AI, according to the American Bar Association. 

But lawyers say that LLMs are a long way from reasoning well enough to replace them. Lucas Hale, a junior associate at McDermott Will & Schulte, has been embracing AI for many routine chores. He uses Relativity to sift through long documents and Microsoft Copilot for drafting legal citations. But when he turns to ChatGPT with a complex legal question, he finds the chatbot spewing hallucinations, rambling off topic, or drawing a blank.

“In the case where we have a very narrow question or a question of first impression for the court,” he says, referring to a novel legal question that a court has never decided before, “that’s the kind of thinking that the tool can’t do.”

Much of Hale’s work involves creatively applying the law to new fact patterns. “Right now, I don’t think very much of the work that litigators do, at least not the work that I do, can be outsourced to an AI utility,” he says.

Allison Douglis, a senior associate at Jenner & Block, uses an LLM to kick off her legal research. But the tools only take her so far. “When it comes to actually fleshing out and developing an argument as a litigator, I don’t think they’re there,” she says. She has watched the models hallucinate case citations and fumble through ambiguous areas of the law.

“Right now, I would much rather work with a junior associate than an AI tool,” she says. “Unless they get extraordinarily good very quickly, I can’t imagine that changing in the near future.”

Beyond the bar

The legal industry has seemed ripe for an AI takeover ever since ChatGPT’s triumph on the bar exam. But passing a standardized test isn’t the same as practicing law. The exam tests whether people can memorize legal rules and apply them to hypothetical situations—not whether they can exercise strategic judgment in complicated realities or craft arguments in uncharted legal territory. And models can be trained to ace benchmarks without genuinely improving their reasoning.

But new benchmarks aim to better measure the models’ ability to do legal work in the real world. The Professional Reasoning Benchmark, published by Scale AI in November, evaluated leading LLMs on legal and financial tasks designed by professionals in the field. The study found that the models have critical gaps in their reliability for professional adoption, with the best-performing model scoring only 37% on the most difficult legal problems, meaning it met just over a third of possible points on the evaluation criteria. The models frequently made inaccurate legal judgments, and if they did reach correct conclusions, they did so through incomplete or opaque reasoning processes. 

“The tools actually are not there to basically substitute [for] your lawyer,” says Afra Feyza Akyurek, the lead author of the paper. “Even though a lot of people think that LLMs have a good grasp of the law, it’s still lagging behind.” 

The paper builds on other benchmarks measuring the models’ performance on economically valuable work. The AI Productivity Index, published by the data firm Mercor in September and updated in December, found that the models have “substantial limitations” in performing legal work. The best-performing model scored 77.9% on legal tasks, meaning it satisfied roughly four out of five evaluation criteria. A model with such a score might generate substantial economic value in some industries, but in fields where errors are costly, it may not be useful at all, the early version of the study noted.  
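Both benchmarks report scores as the fraction of expert-written rubric criteria an answer satisfies, which is how a headline number like 77.9% translates to roughly four of five criteria met. A hypothetical sketch of that grading arithmetic (the function name and example grades are invented for illustration, not taken from either paper):

```python
def rubric_score(criteria_met):
    """Score one answer as the fraction of rubric criteria it satisfies."""
    return sum(criteria_met) / len(criteria_met)

# An answer graded against five criteria, four of them satisfied:
grades = [True, True, True, True, False]
print(rubric_score(grades))  # 0.8
```

Averaging these per-answer fractions across a benchmark’s tasks yields the overall percentage each study reports.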

Professional benchmarks are a big step forward in evaluating the LLMs’ real-world capabilities, but they may still not capture what lawyers actually do. “These questions, although more challenging than those in past benchmarks, still don’t fully reflect the kinds of subjective, extremely challenging questions lawyers tackle in real life,” says Jon Choi, a law professor at the University of Washington School of Law, who coauthored a study on legal benchmarks in 2023. 

Unlike math or coding, in which LLMs have made significant progress, legal reasoning may be challenging for the models to learn. The law deals with messy real-world problems, riddled with ambiguity and subjectivity, that often have no right answer, says Choi. Making matters worse, a lot of legal work isn’t recorded in ways that can be used to train the models, he says. When it is, documents can span hundreds of pages, scattered across statutes, regulations, and court cases that exist in a complex hierarchy.  

But a more fundamental limitation might be that LLMs are simply not trained to think like lawyers. “The reasoning models still don’t fully reason about problems like we humans do,” says Julian Nyarko, a law professor at Stanford Law School. The models may lack a mental model of the world—the ability to simulate a scenario and predict what will happen—and that capability could be at the heart of complex legal reasoning, he says. It’s possible that the current paradigm of LLMs trained on next-word prediction gets us only so far.  

The jobs remain

Despite early signs that AI is beginning to affect entry-level workers, labor statistics have yet to show that lawyers are being displaced: 93.4% of law school graduates in 2024 were employed within 10 months of graduation—the highest rate on record—according to the National Association for Law Placement. The number of graduates working in law firms rose by 13% from 2023 to 2024. 

For now, law firms are slow to shrink their ranks. “We’re not reducing headcounts at this point,” says Amy Ross, the chief of attorney talent at the law firm Ropes & Gray. 

Even looking ahead, the effects could be incremental. “I will expect some impact on the legal profession’s labor market, but not major,” says Mert Demirer, an economist at MIT. “AI is going to be very useful in terms of information discovery and summary,” he says, but for complex legal tasks, “the law’s low risk tolerance, plus the current capabilities of AI, are going to make that case less automatable at this point.” Capabilities may evolve over time, but that’s a big unknown.

It’s not just that the models themselves are not ready to replace junior lawyers. Institutional barriers may also shape how AI is deployed. Higher productivity reduces billable hours, challenging the dominant business model of law firms. Liability looms large for lawyers, and clients may still want a human on the hook. Regulations could also constrain how lawyers use the technology.

Still, as AI takes on some associate work, law firms may need to reinvent their training system. “When junior work dries up, you have to have a more formal way of teaching than hoping that an apprenticeship works,” says Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania.

Zach Couger, a junior associate at McDermott Will & Schulte, leans on ChatGPT to comb through piles of contracts he once slogged through by hand. He can’t imagine going back to doing the job himself, but he wonders what he’s missing. 

“I’m worried that I’m not getting the same reps that senior attorneys got,” he says, referring to the repetitive training that has long defined the early experiences of lawyers. “On the other hand, it is very nice to have a semi–knowledge expert to just ask questions to that’s not a partner who’s also very busy.” 

Even though an AI job apocalypse looks distant, the uncertainty sticks with him. Lately, Couger finds himself staying up late, wondering if he could be part of the last class of associates at big law firms: “I may be the last plane out.”

AI chatbots can sway voters better than political advertisements

4 December 2025 at 14:54

In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. 

A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. 

The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. 

“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says. 

For the Nature paper, the researchers recruited more than 2,300 participants to engage in a conversation with a chatbot two months before the 2024 US presidential election. The chatbot, which was trained to advocate for either one of the top two candidates, was comparatively persuasive, especially when discussing candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That was roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump. 

In similar experiments conducted during the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect. The chatbots shifted opposition voters’ attitudes by about 10 points.
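The point shifts quoted in these studies are average treatment effects: the mean attitude (on a 0-to-100 scale) among participants who talked with the chatbot, minus the mean for a comparison group. A hedged sketch of that arithmetic, with invented names and data; the published studies use more careful pre/post experimental designs:

```python
import statistics

def persuasion_effect(chatbot_group, comparison_group):
    """Mean support (0-100 scale) after talking to the chatbot,
    minus mean support in the comparison group."""
    return statistics.mean(chatbot_group) - statistics.mean(comparison_group)

# Illustrative post-conversation support scores for an opposing candidate:
treated = [42.0, 47.5, 44.1, 46.0]
control = [40.2, 43.0, 41.5, 42.1]
print(round(persuasion_effect(treated, control), 1))  # 3.2
```

A 3.9-point shift on this scale, as measured for the Harris-favoring chatbot, is thus a difference in group averages, not a claim that every voter moved by that amount.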

Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which used a range of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to do so. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University, who worked on the project. 

The catch is, some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello.

In the other study published this week, in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to interact with nearly 77,000 participants from the UK on more than 700 political issues while varying factors like computational power, training techniques, and rhetorical strategies. 

The most effective way to make the models persuasive was to instruct them to pack their arguments with facts and evidence and then give them additional training by feeding them examples of persuasive conversations. In fact, the most persuasive model shifted participants who initially disagreed with a political statement 26.1 points toward agreeing. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute, who worked on the project. 

But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information—and no one is sure why. “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality,” says Hackenburg.

The chatbots’ persuasive power could have profound consequences for the future of democracy, the authors note. Political campaigns that use AI chatbots could shape public opinion in ways that compromise voters’ ability to make independent political judgments.

Still, the exact contours of the impact remain to be seen. “We’re not sure what future campaigns might look like and how they might incorporate these kinds of technologies,” says Andy Guess, a political scientist at Princeton University. Competing for voters’ attention is expensive and difficult, and getting them to engage in long political conversations with chatbots might be challenging. “Is this going to be the way that people inform themselves about politics, or is this going to be more of a niche activity?” he asks.

Even if chatbots do become a bigger part of elections, it’s not clear whether they’ll do more to amplify truth or fiction. Usually, misinformation has an informational advantage in a campaign, so the emergence of electioneering AIs “might mean we’re headed for a disaster,” says Alex Coppock, a political scientist at Northwestern University. “But it’s also possible that means that now, correct information will also be scalable.”

And then the question is who will have the upper hand. “If everybody has their chatbots running around in the wild, does that mean that we’ll just persuade ourselves to a draw?” Coppock asks. But there are reasons to doubt that. Politicians’ access to the most persuasive models may not be evenly distributed. And voters across the political spectrum may have different levels of engagement with chatbots. If “supporters of one candidate or party are more tech savvy than the other,” Guess says, the persuasive impacts might not balance out.

As people turn to AI to help them navigate their lives, they may also start asking chatbots for voting advice whether campaigns prompt the interaction or not. That may be a troubling world for democracy, unless there are strong guardrails to keep the systems in check. Auditing and documenting the accuracy of LLM outputs in conversations about politics may be a first step.
