Today — 25 January 2026

WhatsApp has begun testing a long-overdue group chat feature

25 January 2026 at 10:27

WhatsApp's upcoming group chat history sharing feature aims to eliminate confusion for newly added members by letting admins share up to 100 recent messages.

The post WhatsApp has begun testing a long-overdue group chat feature appeared first on Digital Trends.

Your WhatsApp voice notes could help screen for early signs of depression

25 January 2026 at 07:31

Brazilian researchers developed an AI system that analyzes WhatsApp audio messages to identify depression, showing high accuracy and potential for low-cost, real-world mental health screening.

The post Your WhatsApp voice notes could help screen for early signs of depression appeared first on Digital Trends.

Before yesterday

America’s coming war over AI regulation

23 January 2026 at 05:00

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.

In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation. 

Trump’s executive order directs the Department of Justice to establish a task force that sues states whose AI laws clash with his vision for light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.

For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols used to ensure the safe development of their AI models and report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.

If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states like Florida with vocal champions for AI regulation might follow suit. Trump could face an uphill battle. “The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”

But Republican states that are anxious to stay off Trump’s radar or can’t afford to lose federal broadband funding for their sprawling rural communities might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.

In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal. 

The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says. 

While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general signed a bipartisan letter urging the FCC not to supersede state AI laws.

With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.

Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, the startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that the chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann. 

While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a ballot initiative in California called the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children. The measure proposes requiring AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots. 

Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report on their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act. 

Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.

In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.

Measles is surging in the US. Wastewater tracking could help.

23 January 2026 at 05:00

This week marked a rather unpleasant anniversary: It’s a year since Texas reported a case of measles—the start of a significant outbreak that ended up spreading across multiple states. Since the start of January 2025, there have been over 2,500 confirmed cases of measles in the US. Three people have died.

As vaccination rates drop and outbreaks continue, scientists have been experimenting with new ways to quickly identify new cases and prevent the disease from spreading. And they are starting to see some success with wastewater surveillance.

After all, wastewater contains saliva, urine, feces, shed skin, and more. You could consider it a rich biological sample. Wastewater analysis helped scientists understand how covid was spreading during the pandemic. It’s early days, but it is starting to help us get a handle on measles.

Globally, there has been some progress toward eliminating measles, largely thanks to vaccination efforts. Such efforts led to an 88% drop in measles deaths between 2000 and 2024, according to the World Health Organization. It estimates that “nearly 59 million lives have been saved by the measles vaccine” since 2000.

Still, an estimated 95,000 people died from measles in 2024 alone—most of them young children. And cases are surging in Europe, Southeast Asia, and the Eastern Mediterranean region.

Last year, the US saw the highest levels of measles in decades. The country is on track to lose its measles elimination status—a sorry fate that befell Canada in November after the country recorded over 5,000 cases in a little over a year.

Public health efforts to contain the spread of measles—which is incredibly contagious—typically involve clinical monitoring in health-care settings, along with vaccination campaigns. But scientists have started looking to wastewater, too.

Along with various bodily fluids, we all shed viruses and bacteria into wastewater, whether that’s through brushing our teeth, showering, or using the toilet. The idea of looking for these pathogens in wastewater to track diseases has been around for a while, but things really kicked into gear during the covid-19 pandemic, when scientists found that the coronavirus responsible for the disease was shed in feces.

This led Marlene Wolfe of Emory University and Alexandria Boehm of Stanford University to establish WastewaterSCAN, an academic-led program developed to analyze wastewater samples across the US. Covid was just the beginning, says Wolfe. “Over the years we have worked to expand what can be monitored,” she says.

Two years ago, for a previous edition of the Checkup, Wolfe told Cassandra Willyard that wastewater surveillance of measles was “absolutely possible,” as the virus is shed in urine. The hope was that this approach could shed light on measles outbreaks in a community, even if members of that community weren’t able to access health care and receive an official diagnosis, and that it could highlight when and where public health officials needed to act to prevent measles from spreading. Evidence that it worked as an effective public health measure was, at the time, scant.

Since then, she and her colleagues have developed a test to identify measles RNA. They trialed it at two wastewater treatment plants in Texas between December 2024 and May 2025. At each site, the team collected samples two or three times a week and tested them for measles RNA.

Over that period, the team found measles RNA in 10.5% of the samples they collected, as reported in a preprint paper published at medRxiv in July and currently under review at a peer-reviewed journal. The first detection came a week before the first case of measles was officially confirmed in the area. That’s promising—it suggests that wastewater surveillance might pick up measles cases early, giving public health officials a head start in efforts to limit any outbreaks.

There are more promising results from a team in Canada. Mike McKay and Ryland Corchis-Scott at the University of Windsor in Ontario and their colleagues have also been testing wastewater samples for measles RNA. Between February and November 2025, the team collected samples from a wastewater treatment facility serving over 30,000 people in Leamington, Ontario. 

These wastewater tests are somewhat limited—even if they do pick up measles, they won’t tell you who has measles, where exactly infections are occurring, or even how many people are infected. McKay and his colleagues have begun to make some progress here. In addition to monitoring the large wastewater plant, the team used tampons to soak up wastewater from a hospital lateral sewer.

They then compared their measles test results with the number of clinical cases in that hospital. This gave them some idea of the virus’s “shedding rate.” When they applied this to the data collected from the Leamington wastewater treatment facility, the team got estimates of measles cases that were much higher than the figures officially reported. 

Their findings track with the opinions of local health officials (who estimate that the true number of cases during the outbreak was around five to 10 times higher than the confirmed case count), the team members wrote in a paper published on medRxiv a couple of weeks ago.
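
The arithmetic behind that kind of estimate is simple to sketch. Below is a minimal back-of-the-envelope version in Python; the numbers are hypothetical placeholders rather than figures from the Windsor team's paper, and a real analysis has to account for flow rates, RNA decay, and sampling noise.

```python
# Back-of-envelope estimate of community measles cases from a wastewater signal.
# All numbers are hypothetical placeholders, not data from the Windsor study.

def shedding_rate(hospital_signal: float, hospital_cases: int) -> float:
    """Calibrate viral signal per clinical case at a site where case counts are known
    (e.g., a hospital lateral sewer paired with that hospital's confirmed cases)."""
    return hospital_signal / hospital_cases

def estimate_cases(community_signal: float, per_case_signal: float) -> float:
    """Scale the community-wide wastewater signal by the per-case shedding rate."""
    return community_signal / per_case_signal

# Hypothetical measurements (arbitrary units of measles RNA per day).
per_case = shedding_rate(hospital_signal=5_000.0, hospital_cases=10)   # 500 units/case
estimated = estimate_cases(community_signal=400_000.0, per_case_signal=per_case)

confirmed = 100  # hypothetical confirmed case count for the same community
print(f"Estimated cases: {estimated:.0f} (~{estimated / confirmed:.0f}x the confirmed count)")
```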

There will always be limits to wastewater surveillance. “We’re looking at the pool of waste of an entire community, so it’s very hard to pull in information about individual infections,” says Corchis-Scott.

Wolfe also acknowledges that “we have a lot to learn about how we can best use the tools so they are useful.” But her team at WastewaterSCAN has been testing wastewater across the US for measles since May last year. And their findings are published online and shared with public health officials.

In some cases, the findings are already helping inform the response to measles. “We’ve seen public health departments act on this data,” says Wolfe. Some have issued alerts, or increased vaccination efforts in those areas, for example. “[We’re at] a point now where we really see public health departments, clinicians, [and] families using that information to help keep themselves and their communities safe,” she says.

McKay says his team has stopped testing for measles because the Ontario outbreak “has been declared over.” He says testing would restart if and when a single new case of measles is confirmed in the region, but he also thinks that his research makes a strong case for maintaining a wastewater surveillance system for measles.

McKay wonders if this approach might help Canada regain its measles elimination status. “It’s sort of like [we’re] a pariah now,” he says. If his approach can help limit measles outbreaks, it could be “a nice tool for public health in Canada to [show] we’ve got our act together.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

“Dr. Google” had its issues. Can ChatGPT Health do better?

22 January 2026 at 12:38

For the past two decades, there’s been a clear first step for anyone who starts experiencing new medical symptoms: Look them up online. The practice was so common that it gained the pejorative moniker “Dr. Google.” But times are changing, and many medical-information seekers are now using LLMs. According to OpenAI, 230 million people ask ChatGPT health-related queries each week. 

That’s the context around the launch of OpenAI’s new ChatGPT Health product, which debuted earlier this month. It landed at an inauspicious time: Two days earlier, the news website SFGate had broken the story of Sam Nelson, a teenager who died of an overdose last year after extensive conversations with ChatGPT about how best to combine various drugs. In the wake of both pieces of news, multiple journalists questioned the wisdom of seeking medical advice from a tool that could cause such extreme harm.

Though ChatGPT Health lives in a separate sidebar tab from the rest of ChatGPT, it isn’t a new model. It’s more like a wrapper that gives one of OpenAI’s preexisting models guidance and tools it can use to offer health advice—including some that allow it to access a user’s electronic medical records and fitness app data, if granted permission. There’s no doubt that ChatGPT and other large language models can make medical mistakes, and OpenAI emphasizes that ChatGPT Health is intended as an additional support, rather than a replacement for one’s doctor. But when doctors are unavailable or unable to help, people will turn to alternatives. 
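
OpenAI hasn't published the internals of that wrapper, but the general pattern the company describes (a standing set of instructions plus permission-gated access to personal data, layered on an existing model) is straightforward to illustrate. The sketch below is a hypothetical, simplified version of such a wrapper using the standard OpenAI chat API; the system prompt, the permission flag, and the model name are assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the "wrapper" pattern: the same underlying model, plus
# standing health guidance and permission-gated personal context.
# Not OpenAI's actual ChatGPT Health implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HEALTH_GUIDANCE = (
    "You are a health information assistant. Be accurate, state uncertainty, "
    "recommend seeing a clinician when symptoms warrant it, and never present "
    "yourself as a replacement for a doctor."
)

def ask_health(question: str, medical_records: str | None = None,
               user_granted_access: bool = False) -> str:
    messages = [{"role": "system", "content": HEALTH_GUIDANCE}]
    # Personal data is only injected if the user has explicitly opted in.
    if user_granted_access and medical_records:
        messages.append({"role": "system",
                         "content": f"User-shared health records summary:\n{medical_records}"})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example: the same question, asked with shared context.
print(ask_health("Is my resting heart rate of 58 bpm normal?",
                 medical_records="Age 34, runs 30 km/week, no cardiac history.",
                 user_granted_access=True))
```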

Some doctors see LLMs as a boon for medical literacy. The average patient might struggle to navigate the vast landscape of online medical information—and, in particular, to distinguish high-quality sources from polished but factually dubious websites—but LLMs can do that job for them, at least in theory. Treating patients who had searched for their symptoms on Google required “a lot of attacking patient anxiety [and] reducing misinformation,” says Marc Succi, an associate professor at Harvard Medical School and a practicing radiologist. But now, he says, “you see patients with a college education, a high school education, asking questions at the level of something an early med student might ask.”

The release of ChatGPT Health, and Anthropic’s subsequent announcement of new health integrations for Claude, indicate that the AI giants are increasingly willing to acknowledge and encourage health-related uses of their models. Such uses certainly come with risks, given LLMs’ well-documented tendencies to agree with users and make up information rather than admit ignorance. 

But those risks also have to be weighed against potential benefits. There’s an analogy here to autonomous vehicles: When policymakers consider whether to allow Waymo in their city, the key metric is not whether its cars are ever involved in accidents but whether they cause less harm than the status quo of relying on human drivers. If Dr. ChatGPT is an improvement over Dr. Google—and early evidence suggests it may be—it could potentially lessen the enormous burden of medical misinformation and unnecessary health anxiety that the internet has created.

Pinning down the effectiveness of a chatbot such as ChatGPT or Claude for consumer health, however, is tricky. “It’s exceedingly difficult to evaluate an open-ended chatbot,” says Danielle Bitterman, the clinical lead for data science and AI at the Mass General Brigham health-care system. Large language models score well on medical licensing examinations, but those exams use multiple-choice questions that don’t reflect how people use chatbots to look up medical information.

Sirisha Rambhatla, an assistant professor of management science and engineering at the University of Waterloo, attempted to close that gap by evaluating how GPT-4o responded to licensing exam questions when it did not have access to a list of possible answers. Medical experts who evaluated the responses scored only about half of them as entirely correct. But multiple-choice exam questions are designed to be tricky enough that the answer options don’t give them entirely away, and they’re still a pretty distant approximation for the sort of thing that a user would type into ChatGPT.

A different study, which tested GPT-4o on more realistic prompts submitted by human volunteers, found that it answered medical questions correctly about 85% of the time. When I spoke with Amulya Yadav, an associate professor at Pennsylvania State University who runs the Responsible AI for Social Emancipation Lab and led the study, he made it clear that he wasn’t personally a fan of patient-facing medical LLMs. But he freely admits that, technically speaking, they seem up to the task—after all, he says, human doctors misdiagnose patients 10% to 15% of the time. “If I look at it dispassionately, it seems that the world is gonna change, whether I like it or not,” he says.

For people seeking medical information online, Yadav says, LLMs do seem to be a better choice than Google. Succi, the radiologist, also concluded that LLMs can be a better alternative to web search when he compared GPT-4’s responses to questions about common chronic medical conditions with the information presented in Google’s knowledge panel, the information box that sometimes appears on the right side of the search results.

Since Yadav’s and Succi’s studies appeared online, in the first half of 2025, OpenAI has released multiple new versions of GPT, and it’s reasonable to expect that GPT-5.2 would perform even better than its predecessors. But the studies do have important limitations: They focus on straightforward, factual questions, and they examine only brief interactions between users and chatbots or web search tools. Some of the weaknesses of LLMs—most notably their sycophancy and tendency to hallucinate—might be more likely to rear their heads in more extensive conversations and with people who are dealing with more complex problems. Reeva Lederman, a professor at the University of Melbourne who studies technology and health, notes that patients who don’t like the diagnosis or treatment recommendations that they receive from a doctor might seek out another opinion from an LLM—and the LLM, if it’s sycophantic, might encourage them to reject their doctor’s advice.

Some studies have found that LLMs will hallucinate and exhibit sycophancy in response to health-related prompts. For example, one study showed that GPT-4 and GPT-4o will happily accept and run with incorrect drug information included in a user’s question. In another, GPT-4o frequently concocted definitions for fake syndromes and lab tests mentioned in the user’s prompt. Given the abundance of medically dubious diagnoses and treatments floating around the internet, these patterns of LLM behavior could contribute to the spread of medical misinformation, particularly if people see LLMs as trustworthy.

OpenAI has reported that the GPT-5 series of models is markedly less sycophantic and less prone to hallucination than its predecessors, so the results of these studies might not apply to ChatGPT Health. The company also evaluated the model that powers ChatGPT Health on its responses to health-specific questions, using its publicly available HealthBench benchmark. HealthBench rewards models that express uncertainty when appropriate, recommend that users seek medical attention when necessary, and refrain from causing users unnecessary stress by telling them their condition is more serious than it truly is. It’s reasonable to assume that the model underlying ChatGPT Health exhibited those behaviors in testing, though Bitterman notes that some of the prompts in HealthBench were generated by LLMs, not users, which could limit how well the benchmark translates into the real world.
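
To give a sense of how a rubric-based benchmark like this scores a response, here is a schematic sketch: each answer is judged against weighted criteria, with positive points for behaviors such as expressing uncertainty and negative points for harmful ones, and the score is the fraction of available positive points earned. The criteria, weights, and hard-coded judgments below are invented for illustration; the real HealthBench relies on physician-written rubric criteria rather than the toy checks shown here.

```python
# Schematic of rubric-style grading in the spirit of HealthBench.
# Criteria, weights, and the "met" judgments are invented for illustration.
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str
    points: int   # negative points penalize harmful behaviors
    met: bool     # in practice judged per response by a grader, not hard-coded

def score_response(criteria: list[Criterion]) -> float:
    earned = sum(c.points for c in criteria if c.met)
    max_positive = sum(c.points for c in criteria if c.points > 0)
    return max(0.0, earned / max_positive) if max_positive else 0.0

criteria = [
    Criterion("Expresses appropriate uncertainty", points=3, met=True),
    Criterion("Recommends seeking medical attention when necessary", points=5, met=True),
    Criterion("Avoids alarmist framing of the condition", points=2, met=False),
    Criterion("States a diagnosis with unwarranted certainty", points=-4, met=False),
]

print(f"Rubric score: {score_response(criteria):.2f}")  # 8 of 10 positive points = 0.80
```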

An LLM that avoids alarmism seems like a clear improvement over systems that have people convincing themselves they have cancer after a few minutes of browsing. And as large language models, and the products built around them, continue to develop, whatever advantage Dr. ChatGPT has over Dr. Google will likely grow. The introduction of ChatGPT Health is certainly a move in that direction: By looking through your medical records, ChatGPT can potentially gain far more context about your specific health situation than could be included in any Google search, although numerous experts have cautioned against giving ChatGPT that access for privacy reasons.

Even if ChatGPT Health and other new tools do represent a meaningful improvement over Google searches, they could still conceivably have a negative effect on health overall. Much as autonomous vehicles, even if they are safer than human-driven cars, might prove a net negative if they encourage people to use public transit less, LLMs could undermine users’ health if they induce people to rely on the internet instead of human doctors, even as they improve the quality of health information available online.

Lederman says that this outcome is plausible. In her research, she has found that members of online communities centered on health tend to put their trust in users who express themselves well, regardless of the validity of the information they are sharing. Because ChatGPT communicates like an articulate person, some people might trust it too much, potentially to the exclusion of their doctor. But LLMs are certainly no replacement for a human doctor—at least not yet.

Dispatch from Davos: hot air, big egos and cold flexes

By: Mat Honan
22 January 2026 at 11:39

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

It’s supposed to be frigid in Davos this time of year. Part of the charm is seeing the world’s elite tromp through the streets in respectable suits and snow boots. But this year it’s positively balmy, with highs in the mid-30s Fahrenheit, or a little over 1°C. The current conditions when I flew out of New York were colder, and definitely snowier. I’m told this is due to something called a föhn, a dry, warm wind that’s been blowing across the Alps. 

I’m no meteorologist, but it’s true that there is a lot of hot air here. 

On Wednesday, President Donald Trump arrived in Davos to address the assembly, and held forth for more than 90 minutes, weaving his way through remarks about the economy, Greenland, windmills, Switzerland, Rolexes, Venezuela, and drug prices. It was a talk lousy with gripes, grievances and outright falsehoods. 

One small example: Trump made a big deal of claiming that China, despite being the world leader in manufacturing windmill componentry, doesn’t actually use them for energy generation itself. In fact, it is the world leader in generation, as well. 

I did not get to watch this spectacle from the room itself. Sad! 

By the time I got to the Congress Hall where the address was taking place, there was already a massive scrum of people jostling to get in. 

I had just wrapped up moderating a panel on “the intelligent co-worker,” i.e., AI agents in the workplace. I was really excited for this one, as the speakers represented a diverse cross-section of the AI ecosystem: Christoph Schweizer, CEO of BCG, had the macro strategic view; Enrique Lores, CEO of HP, could speak to both hardware and large enterprises; Workera CEO Kian Katanforoosh had the inside view on workforce training and transformation; Manjul Shah, CEO of Hippocratic AI, addressed working in the high-stakes field of health care; and Kate Kallot, CEO of Amini AI, gave perspective on the global south, and Africa in particular. 

Interestingly, most of the panel shied away from using the term co-worker, and some even rejected the term agent. But the view they painted was definitely one of humans working alongside AI and augmenting what’s possible. Shah, for example, talked about having agents call 16,000 people in Texas during a heat wave to perform a health and safety check. It was a great discussion. You can watch the whole thing here.

But by the time it let out, the push of people outside the Congress Hall was already too thick for me to get in. In fact, I couldn’t even get into a nearby overflow room. I did make it into a third overflow room, but getting in meant navigating my way through a mass of people jammed in so tightly that it reminded me of being at a Turnstile concert. 

The speech blew way past its allotted time, and I had to step out early to get to yet another discussion. Walking through the halls while Trump spoke was a truly surreal experience. He had completely captured the attention of the gathered global elite. I don’t think I saw a single person not staring at a laptop, phone, or iPad, all watching the same video. 

Trump is speaking again on Thursday in a previously unscheduled address to announce his Board of Peace. As is (I heard) Elon Musk. So it’s shaping up to be another big day for elite attention capture. 

I should say, though, there are elites, and then there are elites. And there are all sorts of ways of sorting out who is who. Your badge color is one of them. I have a white participant badge, because I was moderating panels. This gets you in pretty much anywhere and is therefore its own sort of status symbol. Where you are staying is another. I’m in Klosters, a neighboring town that’s a 40-minute train ride from the Congress Centre. Not so elite. 

There are more subtle ways of status sorting, too. Yesterday I learned that when people ask if this is your first time at Davos, it’s sometimes meant as a way of trying to figure out how important you are. If you’re any kind of big deal, you’ve probably been coming for years. 

But the best one I’ve yet encountered happened when I made small talk with the woman sitting next to me as I changed back into my snow boots. It turned out that, like me, she lived in California, at least part time. “But I don’t think I’ll stay there much longer,” she said, “due to the new tax law.” This was just an ice-cold flex. 

Because California’s newly proposed tax legislation? It only targets billionaires. 

Welcome to Davos.

Why 2026 is a hot year for lithium

22 January 2026 at 06:00

In 2026, I’m going to be closely watching the price of lithium.

If you’re not in the habit of obsessively tracking commodity markets, I certainly don’t blame you. (Though the news lately definitely makes the case that minerals can have major implications for global politics and the economy.)

But lithium is worthy of a close look right now.

The metal is crucial for lithium-ion batteries used in phones and laptops, electric vehicles, and large-scale energy storage arrays on the grid. Prices have been on quite the roller coaster over the last few years, and they’re ticking up again after a low period. What happens next could have big implications for mining and battery technology.

Before we look ahead, let’s take a quick trip down memory lane. In 2020, global EV sales started to really take off, driving up demand for the lithium used in their batteries. Because of that growing demand and a limited supply, prices shot up dramatically, with lithium carbonate going from under $10 per kilogram to a high of roughly $70 per kilogram in just two years.

And the tech world took notice. During those high points, there was a ton of interest in developing alternative batteries that didn’t rely on lithium. I was writing about sodium-based batteries, iron-air batteries, and even experimental ones that were made with plastic.

Researchers and startups were also hunting for alternative ways to get lithium, including battery recycling and processing methods like direct lithium extraction (more on this in a moment).

But soon, prices crashed back down to earth. We saw lower-than-expected demand for EVs in the US, and developers ramped up mining and processing to meet demand. Through late 2024 and 2025, lithium carbonate was back around $10 a kilogram again. Avoiding lithium or finding new ways to get it suddenly looked a lot less crucial.

That brings us to today: lithium prices are ticking up again. So far, it’s nowhere close to the dramatic rise we saw a few years ago, but analysts are watching closely. Strong EV growth in China is playing a major role—EVs still make up about 75% of battery demand today. But growth in stationary storage, batteries for the grid, is also contributing to rising demand for lithium in both China and the US.

Higher prices could create new opportunities. The possibilities include alternative battery chemistries, specifically sodium-ion batteries, says Evelina Stoikou, head of battery technologies and supply chains at BloombergNEF. (I’ll note here that we recently named sodium-ion batteries to our 2026 list of 10 Breakthrough Technologies.)

It’s not just batteries, though. Another industry that could see big changes from a lithium price swing: extraction.

Today, most lithium is mined from rocks, largely in Australia, before being shipped to China for processing. There’s a growing effort to process the mineral in other places, though, as countries try to create their own lithium supply chains. Tesla recently confirmed that it’s started production at its lithium refinery in Texas, which broke ground in 2023. We could see more investment in processing plants outside China if prices continue to climb.

This could also be a key year for direct lithium extraction, as Katie Brigham wrote in a recent story for Heatmap. That technology uses chemical or electrochemical processes to extract lithium from brine (salty water that’s usually sourced from salt lakes or underground reservoirs), quickly and cheaply. Companies including Lilac Solutions, Standard Lithium, and Rio Tinto are all making plans or starting construction on commercial facilities this year in the US and Argentina. 

If there’s anything I’ve learned about following batteries and minerals over the past few years, it’s that predicting the future is impossible. But if you’re looking for tea leaves to read, lithium prices deserve a look. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Yann LeCun’s new venture is a contrarian bet against large language models  

22 January 2026 at 05:00

Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes that the industry’s current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems. 

Instead, he thinks we should be betting on world models—a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic. 

Perhaps it’s no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company’s influential research lab that he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including the controversial acquisition of ScaleAI. 

LeCun sat down with MIT Technology Review in an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas. 

Both the questions and answers below have been edited for clarity and brevity.

You’ve just announced a new company, Advanced Machine Intelligence (AMI).  Tell me about the big ideas behind it.

It is going to be a global company, but headquartered in Paris. You pronounce it “ami”—it means “friend” in French. I am excited. There is a very high concentration of talent in Europe, but it is not always given a proper environment to flourish. And there is certainly a huge demand from the industry and governments for a credible frontier AI company that is neither Chinese nor American. I think that is going to be to our advantage.

So an ambitious alternative to the US-China binary we currently have. What made you want to pursue that third path?

Well, there are sovereignty issues for a lot of countries, and they want some control over AI. What I’m advocating is that AI is going to become a platform, and most platforms tend to become open-source. Unfortunately, that’s not really the direction the American industry is taking. Right? As the competition increases, they feel like they have to be secretive. I think that is a strategic mistake.

It’s certainly true for OpenAI, which went from very open to very closed, and Anthropic has always been closed. Google was sort of a little open. And then Meta, we’ll see. My sense is that it’s not going in a positive direction at this moment.

Simultaneously, China has completely embraced this open approach. So all leading open-source AI platforms are Chinese, and the result is that academia and startups outside of the US have basically embraced Chinese models. There’s nothing wrong with that—you know, Chinese models are good. Chinese engineers and scientists are great. But if there is a future in which our entire information diet is mediated by AI assistants, and the choice is either English-speaking models produced by proprietary companies closely tied to the US or Chinese models that may be open source but need to be fine-tuned before they will answer questions about Tiananmen Square in 1989—you know, that’s not a very pleasant or engaging future. 

They [the future models] should be able to be fine-tuned by anyone and produce a very high diversity of AI assistants, with different linguistic abilities, value systems, political biases, and centers of interest. You need a high diversity of assistants for the same reason that you need a high diversity of press. 

That is certainly a compelling pitch. How are investors buying that idea so far?

They really like it. A lot of venture capitalists are very much in favor of this idea of open source, because they know that a lot of small startups really rely on open-source models. They don’t have the means to train their own model, and it’s kind of dangerous for them, strategically, to embrace a proprietary model.

You recently left Meta. What’s your view on the company and Mark Zuckerberg’s leadership? There’s a perception that Meta has fumbled its AI advantage.

I think FAIR [LeCun’s lab at Meta] was extremely successful in the research part. Where Meta was less successful is in picking up on that research and pushing it into practical technology and products. Mark made some choices that he thought were the best for the company. I may not have agreed with all of them. For example, the robotics group at FAIR was let go, which I think was a strategic mistake. But I’m not the director of FAIR. People make decisions rationally, and there’s no reason to be upset.

So, no bad blood? Could Meta be a future client for AMI?

Meta might be our first client! We’ll see. The work we are doing is not in direct competition. Our focus on world models for the physical world is very different from their focus on generative AI and LLMs.

You were working on AI long before LLMs became a mainstream approach. But since ChatGPT broke out, LLMs have become almost synonymous with AI.

Yes, and we are going to change that. The public face of AI, perhaps, is mostly LLMs and chatbots of various types. But the latest ones of those are not pure LLMs. They are LLM plus a lot of things, like perception systems and code that solves particular problems. So we are going to see LLMs as kind of the orchestrator in systems, a little bit.

Beyond LLMs, there is a lot of AI behind the scenes that runs a big chunk of our society. There are driver-assistance systems in cars, quick-turnaround MRI imaging, algorithms that drive social media—that’s all AI. 

You have been vocal in arguing that LLMs can only get us so far. Do you think LLMs are overhyped these days? Can you summarize to our readers why you believe that LLMs are not enough?

There is a sense in which they have not been overhyped, which is that they are extremely useful to a lot of people, particularly if you write text, do research, or write code. LLMs manipulate language really well. But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.

The truly difficult part is understanding the real world. This is the Moravec Paradox (a phenomenon observed by the computer scientist Hans Moravec in 1988): What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car.

We are going to have AI systems that have humanlike and human-level intelligence, but they’re  not going to be built on LLMs, and it’s not going to happen next year or two years from now. It’s going to take a while. There are major conceptual breakthroughs that have to happen before we have AI systems that have human-level intelligence. And that is what I’ve been working on. And this company, AMI Labs, is focusing on the next generation.

And your solution is world models and JEPA architecture (JEPA, or “joint embedding predictive architecture,” is a learning framework that trains AI models to understand the world, created by LeCun while he was at Meta). What’s the elevator pitch?

The world is unpredictable. If you try to build a generative model that predicts every detail of the future, it will fail.  JEPA is not generative AI. It is a system that learns to represent videos really well. The key is to learn an abstract representation of the world and make predictions in that abstract space, ignoring the details you can’t predict. That’s what JEPA does. It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world. The most exciting work so far on this is coming from academia, not the big industrial labs stuck in the LLM world.
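
As an illustration of "predict in an abstract space," here is a toy JEPA-style training step in PyTorch. It is a minimal sketch of the general idea, not Meta's actual V-JEPA code: an encoder and a predictor learn to match the latent representation that a slowly updated target encoder assigns to a held-out part of the input (a masked region or a future frame), so pixel-level details that can't be predicted never enter the loss. All dimensions and hyperparameters are placeholders.

```python
# Minimal JEPA-style training step (illustrative sketch only, not Meta's V-JEPA code).
# Idea: predict the latent representation of a missing or future chunk of the input,
# rather than reconstructing its pixels, so unpredictable details can be ignored.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_IN, DIM_LATENT = 128, 64

encoder = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_LATENT))
target_encoder = nn.Sequential(nn.Linear(DIM_IN, 256), nn.ReLU(), nn.Linear(256, DIM_LATENT))
target_encoder.load_state_dict(encoder.state_dict())   # start identical
for p in target_encoder.parameters():
    p.requires_grad_(False)                             # target gets no gradients

predictor = nn.Sequential(nn.Linear(DIM_LATENT, 256), nn.ReLU(), nn.Linear(256, DIM_LATENT))
opt = torch.optim.AdamW(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def jepa_step(context: torch.Tensor, target: torch.Tensor, ema: float = 0.996) -> float:
    """One step: context and target are two views of the same scene,
    e.g. visible patches vs. masked patches, or frame t vs. frame t+1."""
    z_context = encoder(context)
    with torch.no_grad():
        z_target = target_encoder(target)               # representation to predict
    loss = F.mse_loss(predictor(z_context), z_target)   # prediction error in latent space
    opt.zero_grad(); loss.backward(); opt.step()
    # Slowly update the target encoder as an exponential moving average of the encoder.
    with torch.no_grad():
        for p_t, p_o in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(ema).add_(p_o, alpha=1 - ema)
    return loss.item()

# Toy usage with random tensors standing in for video frames.
for step in range(3):
    frames_t, frames_t1 = torch.randn(32, DIM_IN), torch.randn(32, DIM_IN)
    print(f"step {step}: loss {jepa_step(frames_t, frames_t1):.4f}")
```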

The lack of non-text data has been a problem in taking AI systems further in understanding the physical world. JEPA is trained on videos. What other kinds of data will you be using?

Our systems will be trained on video, audio, and sensor data of all kinds—not just text. We are working with various modalities, from the position of a robot arm to lidar data to audio. I’m also involved in a project using JEPA to model complex physical and clinical phenomena. 

What are some of the concrete, real-world applications you envision for world models?

The applications are vast. Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory. There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable. An agentic system that is supposed to take actions in the world cannot work reliably unless it has a world model to predict the consequences of its actions. Without it, the system will inevitably make mistakes. This is the key to unlocking everything from truly useful domestic robots to Level 5 autonomous driving.
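
The "predict the consequences of its actions" requirement maps onto a standard planning pattern: roll candidate action sequences through a learned world model and keep the sequence whose imagined outcome scores best. The sketch below is a generic random-shooting planner with a stand-in dynamics function; a real agent would substitute a trained world model (for instance, a JEPA-style predictor) and a task-specific cost.

```python
# Generic "plan by predicting consequences" sketch: random-shooting model-predictive control.
# world_model below is a stand-in; a real agent would use a learned latent dynamics model.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 4, 2, 5, 256

A = rng.normal(scale=0.5, size=(STATE_DIM, STATE_DIM))   # placeholder dynamics
B = rng.normal(scale=0.5, size=(STATE_DIM, ACTION_DIM))

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Predict the next (latent) state. Stand-in for a learned model."""
    return np.tanh(A @ state + B @ action)

def cost(state: np.ndarray, goal: np.ndarray) -> float:
    """Task cost: squared distance of the predicted state from a goal state."""
    return float(np.sum((state - goal) ** 2))

def plan(state: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Sample action sequences, imagine their consequences, return the best first action."""
    best_action, best_cost = None, np.inf
    for _ in range(N_CANDIDATES):
        actions = rng.uniform(-1, 1, size=(HORIZON, ACTION_DIM))
        s, total = state, 0.0
        for a in actions:
            s = world_model(s, a)        # predict the consequence of each action
            total += cost(s, goal)
        if total < best_cost:
            best_cost, best_action = total, actions[0]
    return best_action

state, goal = rng.normal(size=STATE_DIM), np.zeros(STATE_DIM)
print("first planned action:", plan(state, goal))
```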

Humanoid robots are all the rage recently, especially ones built by companies from China. What’s your take?

There are all these brute-force ways to get around the limitations of learning systems, which require inordinate amounts of training data to do anything. So the secret of all the companies getting robots to do kung fu or dance is that the routines are all planned in advance. But frankly, nobody—absolutely nobody—knows how to make those robots smart enough to be useful. Take my word for it. 


You need an enormous amount of tele-operation training data for every single task, and when the environment changes a little bit, it doesn’t generalize very well. What this tells us is that we are missing something very big. The reason a 17-year-old can learn to drive in 20 hours is that they already know a lot about how the world behaves. If we want a generally useful domestic robot, we need systems to have a good understanding of the physical world. That’s not going to happen until we have good world models and planning.

There’s a growing sentiment that it’s becoming harder to do foundational AI research in academia because of the massive computing resources required. Do you think the most important innovations will now come from industry?

No. LLMs are now technology development, not research. It’s true that it’s very difficult for academics to play an important role there because of the requirements for computation, data access, and engineering support. But it’s a product now. It’s not something academia should even be interested in. It’s like speech recognition in the early 2010s—it was a solved problem, and the progress was in the hands of industry. 

What academia should be working on is long-term objectives that go beyond the capabilities of current systems. That’s why I tell people in universities: Don’t work on LLMs. There is no point. You’re not going to be able to rival what’s going on in industry. Work on something else. Invent new techniques. The breakthroughs are not going to come from scaling up LLMs. The most exciting work on world models is coming from academia, not the big industrial labs. The whole idea of using attention circuits in neural nets came out of the University of Montreal. That research paper started the whole revolution. Now that the big companies are closing up, the breakthroughs are going to slow down. Academia needs access to computing resources, but they should be focused on the next big thing, not on refining the last one.

You wear many hats: professor, researcher, educator, public thinker … Now you just took on a new one. What is that going to look like for you?

I am going to be the executive chairman of the company, and Alex LeBrun [a former colleague from Meta AI] will be the CEO. It’s going to be LeCun and LeBrun—it’s nice if you pronounce it the French way.

I am going to keep my position at NYU. I teach one class per year, and I have PhD students and postdocs, so I am going to stay based in New York. But I go to Paris pretty often because of my lab. 

Does that mean that you won’t be very hands-on?

Well, there’s two ways to be hands-on. One is to manage people day to day, and another is to actually get your hands dirty in research projects, right? 

I can do management, but I don’t like doing it. This is not my mission in life. It’s really to make science and technology progress as far as we can, inspire other people to work on things that are interesting, and then contribute to those things. So that has been my role at Meta for the last seven years. I founded FAIR and led it for four to five years. I kind of hated being a director. I am not good at this career-management thing. I’m much more of a visionary and a scientist.

What makes Alex LeBrun the right fit?

Alex is a serial entrepreneur; he’s built three successful AI companies. The first he sold to Microsoft; the second to Facebook, where he was head of the engineering division of FAIR in Paris. He then left to create Nabla, a very successful company in the health-care space. When I offered him the chance to join me in this effort, he accepted almost immediately. He has the experience to build the company, allowing me to focus on science and technology. 

You’re headquartered in Paris. Where else do you plan to have offices?

We are a global company. There’s going to be an office in North America.

New York, hopefully?

New York is great. That’s where I am, right? And it’s not Silicon Valley. Silicon Valley is a bit of a monoculture.

What about Asia? I’m guessing Singapore, too?

Probably, yeah. I’ll let you guess. 

And how are you attracting talent?

We don’t have any issue recruiting. There are a lot of people in the AI research community who think the future of AI is in world models. Those people, regardless of pay package, will be motivated to come work for us because they believe in the technological future we are building. We’ve already recruited people from places like OpenAI, Google DeepMind, and xAI.

I heard that Saining Xie, a prominent researcher from NYU and Google DeepMind, might be joining you as chief scientist. Any comments?

Saining is a brilliant researcher. I have a lot of admiration for him. I hired him twice already. I hired him at FAIR, and I convinced my colleagues at NYU that we should hire him there. Let’s just say I have a lot of respect for him.

When will you be ready to share more details about AMI Labs, like financial backing or other core members?

Soon—in February, maybe. I’ll let you know.

Google’s New Update Refreshes Android Voice Search UI

21 January 2026 at 12:51

Google’s hands-free voice search on Android is getting a major UI overhaul, replacing the bodyless face with a microphone, “Ask Anything,” and a colored arc.

The post Google’s New Update Refreshes Android Voice Search UI appeared first on TechRepublic.

One of the EU’s Largest Alternative App Stores Is Shutting Down

21 January 2026 at 10:50

Setapp Mobile will shut down in February, citing Apple’s complex EU terms as developers weigh new fees, link-out rules, and uncertain alternative stores.

The post One of the EU’s Largest Alternative App Stores Is Shutting Down appeared first on TechRepublic.

Everyone wants AI sovereignty. No one can truly have it.

By: Cathy Li
21 January 2026 at 09:00

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines. This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine.  

But the pursuit of absolute autonomy is running into reality. AI supply chains are irreducibly global: Chips are designed in the US and manufactured in East Asia; models are trained on data sets drawn from multiple countries; applications are deployed across dozens of jurisdictions.  

If sovereignty is to remain meaningful, it must shift from a defensive model of self-reliance to a vision that emphasizes the concept of orchestration, balancing national autonomy with strategic partnership. 

Why infrastructure-first strategies hit walls 

A November survey by Accenture found that 62% of European organizations are now seeking sovereign AI solutions, driven primarily by geopolitical anxiety rather than technical necessity. That figure rises to 80% in Denmark and 72% in Germany. The European Union has appointed its first Commissioner for Tech Sovereignty. 

This year, $475 billion is flowing into AI data centers globally. In the United States, AI data centers accounted for roughly one-fifth of GDP growth in the second quarter of 2025. But the obstacle for other nations hoping to follow suit isn’t just money. It’s energy and physics. Global data center capacity is projected to hit 130 gigawatts by 2030, and for every $1 billion spent on these facilities, $125 million is needed for electricity networks. More than $750 billion in planned investment is already facing grid delays. 

And it’s also talent. Researchers and entrepreneurs are mobile, drawn to ecosystems with access to capital, competitive wages, and rapid innovation cycles. Infrastructure alone won’t attract or retain world-class talent.  

What works: An orchestrated sovereignty

What nations need isn’t sovereignty through isolation but through specialization and orchestration. This means choosing which capabilities you build, which you pursue through partnership, and where you can genuinely lead in shaping the global AI landscape. 

The most successful AI strategies don’t try to replicate Silicon Valley; they identify specific advantages and build partnerships around them. 

Singapore offers a model. Rather than seeking to duplicate massive infrastructure, it invested in governance frameworks, digital-identity platforms, and applications of AI in logistics and finance, areas where it can realistically compete. 

Israel shows a different path. Its strength lies in a dense network of startups and military-adjacent research institutions delivering outsize influence despite the country’s small size. 

South Korea is instructive too. While it has national champions like Samsung and Naver, these firms still partner with Microsoft and Nvidia on infrastructure. That’s deliberate collaboration reflecting strategic oversight, not dependence.  

Even China, despite its scale and ambition, cannot secure full-stack autonomy. Its reliance on global research networks, on foreign GPU architectures, and on foreign lithography equipment, such as the extreme ultraviolet systems needed to manufacture advanced chips, shows the limits of techno-nationalism. 

The pattern is clear: Nations that specialize and partner strategically can outperform those trying to do everything alone. 

Three ways to align ambition with reality 

1.  Measure added value, not inputs.  

Sovereignty isn’t how many petaflops you own. It’s how many lives you improve and how fast the economy grows. Real sovereignty is the ability to innovate in support of national priorities such as productivity, resilience, and sustainability while maintaining freedom to shape governance and standards.  

Nations should track the use of AI in health care and monitor how the technology’s adoption correlates with manufacturing productivity, patent citations, and international research collaborations. The goal is to ensure that AI ecosystems generate inclusive and lasting economic and social value.  

2. Cultivate a strong AI innovation ecosystem. 

Build infrastructure, but also build the ecosystem around it: research institutions, technical education, entrepreneurship support, and public-private talent development. Infrastructure without skilled talent and vibrant networks cannot deliver a lasting competitive advantage.   

3. Build global partnerships.  

Strategic partnerships enable nations to pool resources, lower infrastructure costs, and access complementary expertise. Singapore’s work with global cloud providers and the EU’s collaborative research programs show how nations advance capabilities faster through partnership than through isolation. Rather than competing to set dominant standards, nations should collaborate on interoperable frameworks for transparency, safety, and accountability.  

What’s at stake 

Overinvesting in independence fragments markets and slows cross-border innovation, which is the foundation of AI progress. When strategies focus too narrowly on control, they sacrifice the agility needed to compete. 

The cost of getting this wrong isn’t just wasted capital—it’s a decade of falling behind. Nations that double down on infrastructure-first strategies risk ending up with expensive data centers running yesterday’s models, while competitors that choose strategic partnerships iterate faster, attract better talent, and shape the standards that matter. 

The winners will be those who define sovereignty not as separation, but as participation plus leadership—choosing who they depend on, where they build, and which global rules they shape. Strategic interdependence may feel less satisfying than independence, but it’s real, it is achievable, and it will separate the leaders from the followers over the next decade. 

The age of intelligent systems demands intelligent strategies—ones that measure success not by infrastructure owned, but by problems solved. Nations that embrace this shift won’t just participate in the AI economy; they’ll shape it. That’s sovereignty worth pursuing. 

Cathy Li is head of the Centre for AI Excellence at the World Economic Forum.

All anyone wants to talk about at Davos is AI and Donald Trump

By: Mat Honan
21 January 2026 at 06:20

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

Hello from the World Economic Forum annual meeting in Davos, Switzerland. I’ve been here for two days now, attending meetings, speaking on panels, and basically trying to talk to anyone I can. And as far as I can tell, the only things anyone wants to talk about are AI and Trump. 

Davos is physically defined by the Congress Center, where the official WEF sessions take place, and the Promenade, a street running through the center of the town lined with various “houses”—mostly retailers temporarily converted into meeting hubs for corporate or national sponsors. So there is a Ukraine House, a Brazil House, a Saudi House, and yes, a USA House (more on that tomorrow). There are a handful of media houses from the likes of CNBC and the Wall Street Journal. Some houses are devoted to specific topics; for example, there’s one for science and another for AI. 

But like everything else in 2026, the Promenade is dominated by tech companies. At one point I realized that literally everything I could see, in a spot where the road bends a bit, was a tech company house. Palantir, Workday, Infosys, Cloudflare, C3.ai. Maybe this should go without saying, but their presence, both in the houses and on the various stages and parties and platforms here at the World Economic Forum, really drove home to me how utterly and completely tech has captured the global economy. 

While the houses host events and serve as networking hubs, the big show is inside the Congress Center. On Tuesday morning, I kicked off my official Davos experience there by moderating a panel with the CEOs of Accenture, Aramco, Royal Philips, and Visa. The topic was scaling up AI within organizations. All of these leaders represented companies that have gone from pilot projects to large internal implementations. It was, for me, a fascinating conversation. You can watch the whole thing here, but my takeaway was that while there are plenty of stories about AI being overhyped (including from us), it is certainly having substantive effects at large companies.  

Aramco CEO Amin Nasser, for example, described how that company has found $3 billion to $5 billion in cost savings by improving the efficiency of its operations. Royal Philips CEO Roy Jakobs described how AI is allowing health-care practitioners to spend more time with patients by doing things such as automated note-taking. (This really resonated with me, as my wife is a pediatrics nurse, and for decades now I’ve heard her talk about how much of her time is devoted to charting.) And Visa CEO Ryan McInerney talked about his company’s push into agentic commerce and the way that will play out for consumers, small businesses, and the global payments industry. 

To elaborate a little on that point, McInerney painted a picture of commerce where agents won’t just shop for things you ask them to, which will be basically step one, but will eventually be able to shop for things based on your preferences and previous spending patterns. This could be your regular grocery shopping, or even a vacation getaway. That’s going to require a lot of trust and authentication to protect both merchants and consumers, but it is clear that the steps into agentic commerce we saw in 2025 were just baby ones. There are much bigger ones coming for 2026. (Coincidentally, I had a discussion with a senior executive from Mastercard on Monday, who made several of the same points.) 

But the thing that really resonated with me from the panel was a comment from Accenture CEO Julie Sweet, who has a view not only of her own large org but across a spectrum of companies: “It’s hard to trust something until you understand it.” 

I felt that neatly summed up where we are as a society with AI. 

Clearly, other people feel the same. Before the official start of the conference I was at AI House for a panel. The place was packed. There was a consistent, massive line to get in, and once inside, I literally had to muscle my way through the crowd. Everyone wanted to get in. Everyone wanted to talk about AI. 

(A quick aside on what I was doing there: I sat on a panel called “Creativity and Identity in the Age of Memes and Deepfakes,” led by Atlantic CEO Nicholas Thompson; it featured the artist Emi Kusano, who works with AI, and Duncan Crabtree-Ireland, the chief negotiator for SAG-AFTRA, who has been at the center of a lot of the debates about AI in the film and gaming industries. I’m not going to spend much time describing it because I’m already running long, but it was a rip-roarer of a panel. Check it out.)

And, okay. Sigh. Donald Trump. 

The president is due here Wednesday, amid threats to seize Greenland and fears that he’s about to permanently fracture the NATO alliance. While AI is all over the stages, Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild. 

These conversations are also starting to spill out into the public. Just after my panel on Tuesday, I headed to a pavilion outside the main hall in the Congress Center. There I saw someone with a small entourage coming down the stairs who was suddenly mobbed by cameras and phones. 

Moments earlier in the same spot, the press had been surrounding David Beckham, shouting questions at him. So I was primed for it to be another celebrity—after all, captains of industry were everywhere you looked. I mean, I had just bumped into Eric Schmidt, who was literally standing in line in front of me at the coffee bar. Davos is weird. 

But in fact, it was Gavin Newsom, the governor of California, who is increasingly seen as the leading voice of the Democratic opposition to President Trump, and a likely contender, or even front-runner, in the race to replace him. Because I live in San Francisco I’ve encountered Newsom many times, dating back to his early days as a city supervisor before he was even mayor. I’ve rarely, rarely, seen him quite so worked up as he was on Tuesday. 

Among other things, he called Trump a narcissist who follows “the law of the jungle, the rule of Don” and compared him to a T-Rex, saying, “You mate with him or he devours you.” And he was just as harsh on the world leaders, many of whom are gathered in Davos, calling them “pathetic” and saying he should have brought knee pads for them. 

Yikes.

There was more of this sentiment, if in more measured tones, from Canadian prime minister Mark Carney during his address at Davos. While I missed his remarks, they had people talking. “If we’re not at the table, we’re on the menu,” he argued. 

Zuck stuck on Trump’s bad side: FTC appeals loss in Meta monopoly case

20 January 2026 at 18:22

Still uneasy about Meta's acquisitions of Instagram in 2012 and WhatsApp in 2014, the Federal Trade Commission will appeal a November ruling that cleared Meta of allegations that it holds an illegal monopoly in a market dubbed "personal social networking."

The FTC hopes the US Court of Appeals for the District of Columbia Circuit will agree that "robust evidence at trial" showed that Meta's acquisitions were improper. In the initial trial, the FTC sought a breakup that could have forced Meta to divest Instagram or WhatsApp.

In a press release Tuesday, the FTC confirmed that it "continues to allege" that "for over a decade Meta has illegally maintained a monopoly in personal social networking services through anticompetitive conduct—by buying the significant competitive threats it identified in Instagram and WhatsApp."


The UK government is backing AI that can run its own lab experiments

20 January 2026 at 08:28

A number of startups and university teams that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that supports moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from researchers who are already developing tools that can automate significant stretches of lab work.

ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.
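
ARIA's definition amounts to a closed loop: hypothesize, run the experiment, analyze, and go again. The sketch below is purely illustrative rather than any funded team's implementation; propose_hypothesis, run_experiment, and analyze are hypothetical stand-ins for an LLM, an automated lab, and an analysis step, here tuning a single made-up parameter.

```python
# Purely illustrative sketch of the hypothesize -> experiment -> analyze loop.
# Every function here is a hypothetical stand-in, not a real lab API.
import random

def propose_hypothesis(history):
    # Stand-in for an LLM: read prior results, suggest the next condition to test.
    best = max(history, key=lambda r: r["yield"], default=None)
    base = best["temperature"] if best else 100.0
    return {"temperature": base + random.uniform(-10, 10)}

def run_experiment(hypothesis):
    # Stand-in for an automated lab run that measures an outcome.
    t = hypothesis["temperature"]
    measured = max(0.0, 1.0 - abs(t - 137.0) / 100) + random.gauss(0, 0.02)
    return {"temperature": t, "yield": measured}

def analyze(history):
    # Stand-in for the analysis step: decide whether to keep iterating.
    if not history:
        return True
    return max(r["yield"] for r in history) < 0.95

def ai_scientist_loop(budget=20):
    history = []
    while budget > 0 and analyze(history):
        hypothesis = propose_hypothesis(history)  # ideation
        result = run_experiment(hypothesis)       # automated lab work
        history.append(result)                    # results feed the next round
        budget -= 1
    return max(history, key=lambda r: r["yield"])

print(ai_scientist_loop())
```

In this picture, the human's job is to set the research question (the target and the stopping criterion) and to audit the history the loop leaves behind.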

“There are better uses for a PhD student than waiting around in a lab until 3 a.m. to make sure an experiment is run to the end,” says Ant Rowstron, ARIA’s chief technology officer. 

ARIA picked 12 projects to fund from the 245 proposals, doubling the amount of funding it had intended to allocate because of the large number and high quality of submissions. Half the teams are from the UK; the rest are from the US and Europe. Some of the teams are from universities, some from industry. Each will get around £500,000 (around $675,000) to cover nine months’ work. At the end of that time, they should be able to demonstrate that their AI scientist was able to come up with novel findings.

Winning teams include Lila Sciences, a US company that is building what it calls an AI nano-scientist—a system that will design and run experiments to discover the best ways to compose and process quantum dots, which are nanometer-scale semiconductor particles used in medical imaging, solar panels, and QLED TVs.

“We are using the funds and time to prove a point,” says Rafa Gómez-Bombarelli, chief science officer for physical sciences at Lila: “The grant lets us design a real AI robotics loop around a focused scientific problem, generate evidence that it works, and document the playbook so others can reproduce and extend it.”

Another team, from the University of Liverpool, UK, is building a robot chemist, which runs multiple experiments at once and uses a vision language model to help troubleshoot when the robot makes an error.

And a startup based in London, still in stealth mode, is developing an AI scientist called ThetaWorld, which is using LLMs to design experiments on the physical and chemical interactions that are important for the performance of batteries. Those experiments will then be run in an automated lab.

Taking the temperature

Compared with the £5 million projects spanning two or three years that ARIA usually funds, £500,000 is small change. But that was the idea, says Rowstron: It’s an experiment on ARIA’s part too. By funding a range of projects for a short amount of time, the agency is taking the temperature at the cutting edge to see how the practice of science is changing, and how fast. What it learns will become the baseline for funding future large-scale projects.

Rowstron acknowledges there’s a lot of hype, especially now that most of the top AI companies have teams focused on science. When results are shared by press release and not peer review, it can be hard to know what the technology can and can’t do. “That’s always a challenge for a research agency trying to fund the frontier,” he says. “To do things at the frontier, we’ve got to know what the frontier is.”

For now, the cutting edge involves agentic systems calling up other existing tools on the fly. “They’re running things like large language models to do the ideation, and then they use other models to do optimization and run experiments,” says Rowstron. “And then they feed the results back round.”
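
Rowstron's description of tools being called up on the fly is, at heart, a dispatch layer. The fragment below is not any funded team's code, just a hypothetical sketch of that layer, with a stand-in planner choosing among stand-in tools:

```python
# Purely illustrative: an AI-scientist layer dispatching to existing,
# special-purpose tools on the fly. Every name here is a hypothetical stand-in.

TOOLS = {
    "ideation_llm":       lambda task: f"hypothesis for {task}",
    "optimization_model": lambda task: f"optimized protocol for {task}",
    "automated_lab":      lambda task: f"experimental results for {task}",
}

def plan_next_tool(completed):
    # Stand-in for the planner: pick the next tool the workflow still needs.
    remaining = [name for name in TOOLS if name not in completed]
    return remaining[0] if remaining else None

def ai_scientist_layer(task):
    completed, outputs = [], []
    tool = plan_next_tool(completed)
    while tool is not None:
        outputs.append(TOOLS[tool](task))  # call the existing tool as needed
        completed.append(tool)             # record progress so the planner moves on
        tool = plan_next_tool(completed)
    return outputs

print(ai_scientist_layer("battery electrolyte screening"))
```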

Rowstron sees the technology stacked in tiers. At the bottom are AI tools designed by humans for humans, such as AlphaFold. These tools let scientists leapfrog slow and painstaking parts of the scientific pipeline but can still require many months of lab work to verify results. The idea of an AI scientist is to automate that work too.  

An AI scientist sits in a layer above those human-made tools and calls on them as needed, says Rowstron. “There’s a point in time—and I don’t think it’s a decade away—where that AI scientist layer says, ‘I need a tool and it doesn’t exist,’ and it will actually create an AlphaFold kind of tool just on the way to figuring out how to solve another problem. That whole bottom zone will just be automated.”

Going off the rails

But we’re not there yet, he says. All the projects ARIA is now funding involve systems that call on existing tools rather than spin up new ones.

There are also unsolved problems with agentic systems in general, which limit how long they can run by themselves without going off the rails and making errors. For example, a study titled “Why LLMs aren’t scientists yet,” posted online last week by researchers at Lossfunk, an AI lab based in India, reports that in an experiment to get LLM agents to run a scientific workflow to completion, the system failed three out of four times. According to the researchers, the reasons the LLMs broke down included “deviation from original research specifications toward simpler, more familiar solutions” and “overexcitement that declares success despite obvious failures.”

“Obviously, at the moment these tools are still fairly early in their cycle and these things might plateau,” says Rowstron. “I’m not expecting them to win a Nobel Prize.”

“But there is a world where some of these tools will force us to operate so much quicker,” he continues. “And if we end up in that world, it’s super important for us to be ready.”
