
How AI is uncovering hidden geothermal energy resources

4 December 2025 at 08:00

Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on the planet’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.

A startup called Zanskar announced today that it has used AI and other advanced computational methods to uncover a blind geothermal system—meaning there aren’t signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system to be identified and confirmed as a commercial prospect in over 30 years.

Historically, finding new sites for geothermal power was a matter of brute force. Companies spent a lot of time and money drilling deep wells, looking for places where it made sense to build a plant.

Zanskar’s approach is more precise. With advancements in AI, the company aims to “solve this problem that had been unsolvable for decades, and go and finally find those resources and prove that they’re way bigger than previously thought,” says Carl Hoiland, the company’s cofounder and CEO.  

To support a successful geothermal power plant, a site needs high temperatures at an accessible depth and space for fluid to move through the rock and deliver heat. In the case of the new site, which the company calls Big Blind, the prize is a reservoir that reaches 250 °F at about 2,700 feet below the surface.

As electricity demand rises around the world, geothermal systems like this one could provide a source of constant power without emitting the greenhouse gases that cause climate change. 

The company has used its technology to identify many potential hot spots. “We have dozens of sites that look just like this,” says Joel Edwards, Zanskar’s cofounder and CTO. But for Big Blind, the team has done the fieldwork to confirm its model’s predictions.

The first step to identifying a new site is to use regional AI models to search large areas. The team trains models on known hot spots and on simulations it creates. Then it feeds in geological, satellite, and other types of data, including information about fault lines. The models can then predict where potential hot spots might be.

One strength of using AI for this task is that it can handle the immense complexity of the information at hand. “If there’s something learnable in the earth, even if it’s a very complex phenomenon that’s hard for us humans to understand, neural nets are capable of learning that, if given enough data,” Hoiland says. 
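To make that workflow a little more concrete, here is a toy sketch of the general approach: train a classifier on labeled grid cells, then score unexplored cells. It’s my own illustration of the idea with synthetic features and data, not Zanskar’s models or methodology.

```python
# Toy illustration of the exploration workflow described above: train a model on
# labeled sites, then score unexplored grid cells. Features and data are synthetic;
# this is not Zanskar's methodology.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: each row is a grid cell with a few hypothetical features
# (e.g., distance to the nearest fault, a heat-flow proxy, a gravity anomaly).
n_cells = 2000
features = rng.normal(size=(n_cells, 3))
# Pretend cells near faults with high heat flow are more likely to hide geothermal systems.
labels = ((features[:, 0] < -0.5) & (features[:, 1] > 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(features, labels)

# Score new, unexplored cells and flag the most promising ones for fieldwork.
new_cells = rng.normal(size=(10, 3))
scores = model.predict_proba(new_cells)[:, 1]
print("Cells worth sending a field crew to:", np.where(scores > 0.8)[0])
```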

Once models identify a potential hot spot, a field crew heads to the site, which might span roughly 100 square miles, and collects additional information through techniques that include drilling shallow holes to look for elevated underground temperatures.

In the case of Big Blind, this prospecting information gave the company enough confidence to purchase a federal lease, allowing it to develop a geothermal plant. With that lease secured, the team returned with large drill rigs and drilled thousands of feet down in July and August. The workers found the hot, permeable rock they expected.

Next they must secure permits to build and connect to the grid and line up the investments needed to build the plant. The team will also continue testing at the site, including long-term testing to track heat and water flow.

“There’s a tremendous need for methodology that can look for large-scale features,” says John McLennan, technical lead for resource management at Utah FORGE, a national lab field site for geothermal energy funded by the US Department of Energy. The new discovery is “promising,” McLennan adds.

Big Blind is Zanskar’s first confirmed discovery that wasn’t previously explored or developed, but the company has used its tools for other geothermal exploration projects. Earlier this year, it announced a discovery at a site that had previously been explored by the industry but not developed. The company also purchased and revived a geothermal power plant in New Mexico.

And this could be just the beginning for Zanskar. As Edwards puts it, “This is the start of a wave of new, naturally occurring geothermal systems that will have enough heat in place to support power plants.”

Why the grid relies on nuclear reactors in the winter

4 December 2025 at 06:00

As many of us are ramping up with shopping, baking, and planning for the holiday season, nuclear power plants are also getting ready for one of their busiest seasons of the year.

Here in the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. It leaves some big shoes to fill for next-generation technology hoping to join the fleet in the next few years.

Generally, nuclear reactors operate at constant levels, as close to full capacity as possible. In 2024, for commercial reactors worldwide, the average capacity factor—the ratio of actual energy output to the theoretical maximum—was 83%. North America rang in at an average of about 90%.

(I’ll note here that it’s not always fair to just look at this number to compare different kinds of power plants—natural-gas plants can have lower capacity factors, but it’s mostly because they’re more likely to be intentionally turned on and off to help meet uneven demand.)
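If you want to see that definition in numbers, here’s a minimal sketch of the capacity-factor arithmetic. The plant in the example is hypothetical, not a real reactor.

```python
# A minimal sketch of the capacity-factor arithmetic described above.
# The plant parameters are illustrative, not from the article.

def capacity_factor(energy_generated_mwh, nameplate_capacity_mw, hours):
    """Actual output divided by the theoretical maximum over the same period."""
    theoretical_max_mwh = nameplate_capacity_mw * hours
    return energy_generated_mwh / theoretical_max_mwh

# A hypothetical 1,000 MW reactor that produced 7.9 million MWh over a year:
hours_in_year = 24 * 365
cf = capacity_factor(7_900_000, 1_000, hours_in_year)
print(f"Capacity factor: {cf:.0%}")  # -> 90%, in line with the North American average cited above
```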

Those high capacity factors also undersell the fleet’s true reliability—a lot of the downtime is scheduled. Reactors need to refuel every 18 to 24 months, and operators tend to schedule those outages for the spring and fall, when electricity demand isn’t as high as when we’re all running our air conditioners or heaters at full tilt.

Take a look at this chart of nuclear outages from the US Energy Information Administration. There are some days, especially at the height of summer, when outages are low and nearly all commercial reactors in the US are operating at nearly full capacity. On July 28 of this year, the fleet was operating at 99.6%. Compare that with the 77.6% of capacity on October 18, as reactors were taken offline for refueling and maintenance. Now we’re heading into another busy season, when reactors come back online and outages dip toward another low point.

That’s not to say all outages are planned. At the Sequoyah nuclear power plant in Tennessee, a generator failure in July 2024 took one of two reactors offline, an outage that lasted nearly a year. (The utility also did some maintenance during that time to extend the life of the plant.) Then, just days after that reactor started back up, the entire plant had to shut down because of low water levels.

And who can forget the incident earlier this year when jellyfish wreaked havoc on not one but two nuclear power plants in France? In the second instance, the squishy creatures got into the filters of equipment that sucks water out of the English Channel for cooling at the Paluel nuclear plant. They forced the plant to cut output by nearly half, though it was restored within days.

Barring jellyfish disasters and occasional maintenance, the global nuclear fleet operates quite reliably. That wasn’t always the case, though. In the 1970s, reactors operated at an average capacity factor of just 60%. They were shut down nearly as often as they were running.

The fleet of reactors today has benefited from decades of experience. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry.

Next-generation reactors that use new materials for fuel or cooling will be able to borrow some lessons from the existing fleet, but they’ll also face novel challenges.

That could mean early demonstration reactors aren’t as reliable as the current commercial fleet at first. “First-of-a-kind nuclear, just like with any other first-of-a-kind technologies, is very challenging,” says Koroush Shirvan, a professor of nuclear science and engineering at MIT.

That means it will probably take time for molten-salt reactors, small modular reactors, or any of the other designs out there to overcome technical hurdles and settle into their own rhythm. It’s taken decades to get to a place where we take it for granted that the nuclear fleet can follow a neat seasonal curve based on electricity demand.

There will always be hurricanes and electrical failures and jellyfish invasions that cause some unexpected problems and force nuclear plants (or any power plants, for that matter) to shut down. But overall, the fleet today operates at an extremely high level of consistency. One of the major challenges ahead for next-generation technologies will be proving that they can do the same.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

This year’s UN climate talks avoided fossil fuels, again

27 November 2025 at 06:00

If we didn’t have pictures and videos, I almost wouldn’t believe the imagery that came out of this year’s UN climate talks.

Over the past few weeks in Belem, Brazil, attendees dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

This is the 30th time that leaders have gathered for the Conference of the Parties, or COP, an annual UN conference focused on climate change. COP30 also marks 10 years since the gathering that produced the Paris Agreement, in which world powers committed to limiting global warming to “well below” 2.0 °C above preindustrial levels, with a goal of staying below the 1.5 °C mark. (That’s 3.6 °F and 2.7 °F, respectively, for my fellow Americans.)

Before the conference kicked off this year, host country Brazil’s president, Luiz Inácio Lula da Silva, cast this as the “implementation COP” and called for negotiators to focus on action, and specifically to deliver a road map for a global transition away from fossil fuels.

The science is clear—burning fossil fuels emits greenhouse gases and drives climate change. Reports have shown that meeting the goal of limiting warming to 1.5 °C would require stopping new fossil-fuel exploration and development.

The problem is, “fossil fuels” might as well be a curse word at global climate negotiations. Two years ago, fights over how to address fossil fuels brought talks at COP28 to a standstill. (It’s worth noting that the conference was hosted in Dubai in the UAE, and the leader was literally the head of the country’s national oil company.)

The agreement in Dubai ended up including a line that called on countries to transition away from fossil fuels in energy systems. It was short of what many advocates wanted, which was a more explicit call to phase out fossil fuels entirely. But it was still hailed as a win. As I wrote at the time: “The bar is truly on the floor.”

And yet this year, it seems we’ve dug into the basement.

At one point about 80 countries, a little under half of those present, demanded a concrete plan to move away from fossil fuels.

But oil producers like Saudi Arabia were insistent that fossil fuels not be singled out. Other countries, including some in Africa and Asia, also made a very fair point: Western nations like the US have burned the most fossil fuels and benefited from it economically. This contingent maintains that legacy polluters have a unique responsibility to finance the transition for less wealthy and developing nations rather than simply barring them from taking the same development route. 

The US, by the way, didn’t send a formal delegation to the talks, for the first time in 30 years. But the absence spoke volumes. In a statement to the New York Times that sidestepped the COP talks, White House spokesperson Taylor Rogers said that President Trump had “set a strong example for the rest of the world” by pursuing new fossil-fuel development.

To sum up: Some countries are economically dependent on fossil fuels, some don’t want to stop depending on fossil fuels without incentives from other countries, and the current US administration would rather keep using fossil fuels than switch to other energy sources. 

All those factors combined help explain why, in its final form, COP30’s agreement doesn’t name fossil fuels at all. Instead, there’s a vague line that leaders should take into account the decisions made in Dubai, and an acknowledgement that the “global transition towards low greenhouse-gas emissions and climate-resilient development is irreversible and the trend of the future.”

Hopefully, that’s true. But it’s concerning that even on the world’s biggest stage, naming what we’re supposed to be transitioning away from and putting together any sort of plan to actually do it seems to be impossible.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Three things to know about the future of electricity

20 November 2025 at 04:00

One of the dominant storylines I’ve been following through 2025 is electricity—where and how demand is going up, how much it costs, and how this all intersects with that topic everyone is talking about: AI.

Last week, the International Energy Agency released the latest version of the World Energy Outlook, the annual report that takes stock of the current state of global energy and looks toward the future. It contains some interesting insights and a few surprising figures about electricity, grids, and the state of climate change. So let’s dig into some numbers, shall we?

We’re in the age of electricity

Energy demand in general is going up around the world as populations increase and economies grow. But electricity is the star of the show, with demand projected to grow by 40% in the next 10 years.

China has accounted for the bulk of electricity growth for the past 10 years, and that’s going to continue. But emerging economies outside China will be a much bigger piece of the pie going forward. And while advanced economies, including the US and Europe, have seen flat demand in the past decade, the rise of AI and data centers will cause demand to climb there as well.

Air-conditioning is a major source of rising demand. Growing economies will give more people access to air-conditioning; income-driven AC growth will add about 330 gigawatts to global peak demand by 2035. Rising temperatures will tack on another 170 GW in that time. Together, that’s an increase of over 10% from 2024 levels.  

AI is a local story

This year, AI has been the story that none of us can get away from. One number that jumped out at me from this report: In 2025, investment in data centers is expected to top $580 billion. That’s more than the $540 billion spent on the global oil supply. 

It’s no wonder, then, that the energy demands of AI are in the spotlight. One key takeaway is that these demands are vastly different in different parts of the world.

Data centers still make up less than 10% of the projected increase in total electricity demand between now and 2035. It’s not nothing, but it’s far outweighed by sectors like industry and appliances, including air conditioners. Even electric vehicles will add more demand to the grid than data centers.

But AI will be the dominant factor for the grid in some parts of the world. In the US, data centers will account for half the growth in total electricity demand between now and 2030.

And as we’ve covered in this newsletter before, data centers present a unique challenge, because they tend to be clustered together, so the demand tends to be concentrated around specific communities and on specific grids. Half the data center capacity that’s in the pipeline is close to large cities.

Look out for a coal crossover

As we ask more from our grid, the key factor that’s going to determine what all this means for climate change is what’s supplying the electricity we’re using.

As it stands, the world’s grids still primarily run on fossil fuels, so every bit of electricity growth comes with planet-warming greenhouse-gas emissions attached. That’s slowly changing, though.

Together, solar and wind were the leading source of electricity in the first half of this year, overtaking coal for the first time. Coal use could peak and begin to fall by the end of this decade.

Nuclear could play a role in replacing fossil fuels: After two decades of stagnation, the global nuclear fleet could increase by a third in the next 10 years. Solar is set to continue its meteoric rise, too. Of all the electricity demand growth we’re expecting in the next decade, 80% is in places with high-quality solar irradiation—meaning they’re good spots for solar power.

Ultimately, there are a lot of ways in which the world is moving in the right direction on energy. But we’re far from moving fast enough. Global emissions are, once again, going to hit a record high this year. To limit warming and prevent the worst effects of climate change, we need to remake our energy system, including electricity, and we need to do it faster. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Google is still aiming for its “moonshot” 2030 energy goals

13 November 2025 at 06:00

Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. Over the course of three days of main-stage sessions, I learned about innovations in AI, biotech, and robotics. 

But as you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google. 

They spoke about the tech giant’s growing energy demand and what sorts of technologies the company is looking to in order to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise.

I’ve been closely following Google’s work in energy this year. Like the rest of the tech industry, the company is seeing ballooning electricity demand in its data centers. That could get in the way of a major goal that Google has been talking about for years. 

See, back in 2020, the company announced an ambitious target: by 2030, it aimed to run on carbon-free energy 24-7. Basically, that means Google would purchase enough renewable energy on the grids where it operates to meet its entire electricity demand, and those purchases would be matched in time, so that carbon-free electricity was being generated whenever the company was actually using energy. (For more on the nuances of Big Tech’s renewable-energy pledges, check out James’s piece from last year.)
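For a sense of why hourly matching is a stricter standard than simply buying enough renewables over a year, here’s a simplified sketch of an hourly-matched score. It illustrates the general idea only; it is not Google’s actual accounting methodology, and the hourly numbers are invented.

```python
# Simplified illustration of an hourly-matched ("24/7") carbon-free energy score.
# Not Google's actual methodology; the hourly profiles below are invented numbers.

def hourly_cfe_score(demand_mwh, carbon_free_mwh):
    """Fraction of demand met by carbon-free supply in the same hour.
    Surplus carbon-free energy in one hour can't offset a shortfall in another."""
    matched = sum(min(d, c) for d, c in zip(demand_mwh, carbon_free_mwh))
    return matched / sum(demand_mwh)

demand      = [100, 100, 100, 100]   # MWh consumed in each of four hours
carbon_free = [160, 120,  80,  40]   # MWh of matched carbon-free supply in those hours

print(f"Hourly-matched score: {hourly_cfe_score(demand, carbon_free):.0%}")
# Annual totals balance out (400 MWh vs. 400 MWh), but hour-by-hour matching only reaches 80%.
```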

Google’s is an ambitious goal, and on stage, Tian said that the company is still aiming for it but acknowledged that it’s looking tough with the rise of AI. 

“It was always a moonshot,” she said. “It’s something very, very hard to achieve, and it’s only harder in the face of this growth. But our perspective is, if we don’t move in that direction, we’ll never get there.”

Google’s total electricity demand more than doubled from 2020 to 2024, according to its latest Environmental Report. As for that goal of 24-7 carbon-free energy? The company is basically treading water. While it was at 67% for its data centers in 2020, last year it came in at 66%. 

Not going backwards is something of an accomplishment, given the rapid growth in electricity demand. But it still leaves the company some distance away from its finish line.

To close the gap, Google has been signing what feels like constant deals in the energy space. Two recent announcements that Tian talked about on stage were a project involving carbon capture and storage at a natural-gas plant in Illinois and plans to reopen a shuttered nuclear power plant in Iowa. 

Let’s start with carbon capture. Google signed an agreement to purchase most of the electricity from a new natural-gas plant, which will capture and store about 90% of its carbon dioxide emissions. 

That announcement was controversial, with critics arguing that carbon capture keeps fossil-fuel infrastructure online longer and still releases greenhouse gases and other pollutants into the atmosphere. 

One question that James raised on stage: Why build a new natural-gas plant rather than add equipment to an already existing facility? Tacking on equipment to an operational plant would mean cutting emissions from the status quo, rather than adding entirely new fossil-fuel infrastructure. 

The company did consider many existing plants, Tian said. But, as she put it, “Retrofits aren’t going to make sense everywhere.” Space can be limited at existing plants, for example, and many may not have the right geology to store carbon dioxide underground. 

“We wanted to lead with a project that could prove this technology at scale,” Tian said. This site has an operational Class VI well, the type used for permanent sequestration, she added, and it also doesn’t require a big pipeline buildout. 

Tian also touched on the company’s recent announcement that it’s collaborating with NextEra Energy to reopen Duane Arnold Energy Center, a nuclear power plant in Iowa. The company will purchase electricity from that plant, which is scheduled to reopen in 2029. 

As I covered in a story earlier this year, Duane Arnold was basically the final option in the US for companies looking to reopen shuttered nuclear power plants. “Just a few years back, we were still closing down nuclear plants in this country,” Tian said on stage. 

While each reopening will look a little different, Tian highlighted the groups working to restart the Palisades plant in Michigan, which was the first reopening to be announced, last spring. “They’re the real heroes of the story,” she said.

I’m always interested to get a peek behind the curtain at how Big Tech is thinking about energy. I’m skeptical but certainly interested to see how Google’s, and the rest of the industry’s, goals shape up over the next few years. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The State of AI: Energy is king, and the US is falling behind

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution and how it is reshaping global power.

This week, Casey Crownhart, senior reporter for energy at MIT Technology Review, and Pilita Clark, a columnist at the FT, consider how China’s rapid renewables buildout could help it leapfrog on AI progress.

Casey Crownhart writes:

In the age of AI, the biggest barrier to progress isn’t money but energy. That should be particularly worrying here in the US, where massive data centers are waiting to come online, and it doesn’t look as if the country will build the steady power supply or infrastructure needed to serve them all.

It wasn’t always like this. For about a decade before 2020, data centers were able to offset increased demand with efficiency improvements. Now, though, electricity demand is ticking up in the US, with billions of queries to popular AI models each day—and efficiency gains aren’t keeping pace. With too little new power capacity coming online, the strain is starting to show: Electricity bills are ballooning for people who live in places where data centers place a growing load on the grid.

If we want AI to have the chance to deliver on big promises without driving electricity prices sky-high for the rest of us, the US needs to learn some lessons from the rest of the world on energy abundance. Just look at China.

China installed 429 GW of new power generation capacity in 2024, more than six times the net capacity added in the US during that time.

China still generates much of its electricity with coal, but that makes up a declining share of the mix. Rather, the country is focused on installing solar, wind, nuclear, and gas at record rates.

The US, meanwhile, is focused on reviving its ailing coal industry. Coal-fired power plants are polluting and, crucially, expensive to run. Aging plants in the US are also less reliable than they used to be, running at a capacity factor of just 42%, down from 61% in 2014.

It’s not a great situation. And unless the US changes something, we risk becoming consumers as opposed to innovators in both energy and AI tech. Already, China earns more from exporting renewables than the US does from oil and gas exports. 

Building and permitting new renewable power plants would certainly help, since they’re currently the cheapest and fastest to bring online. But wind and solar are politically unpopular with the current administration. Natural gas is an obvious candidate, though there are concerns about delays with key equipment.

One quick fix would be for data centers to be more flexible. If they agreed not to suck electricity from the grid during times of stress, new AI infrastructure might be able to come online without any new energy infrastructure.

One study from Duke University found that if data centers agree to curtail their consumption just 0.25% of the time (roughly 22 hours over the course of the year), the grid could provide power for about 76 GW of new demand. That’s like adding about 5% of the entire grid’s capacity without needing to build anything new.
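The arithmetic behind those figures is easy to check. Here’s a quick back-of-envelope sketch; the total US capacity number is my own rough stand-in, not a figure from the study or this article, so the exact share depends on what capacity figure you use.

```python
# Back-of-envelope check of the curtailment figures above.
# The total US capacity value is an assumption for illustration, not from the article or the study.

HOURS_PER_YEAR = 8760

curtailment_fraction = 0.0025                      # 0.25% of the year
curtailed_hours = curtailment_fraction * HOURS_PER_YEAR
print(f"Curtailed hours per year: {curtailed_hours:.0f}")    # ~22 hours, as the article says

new_flexible_demand_gw = 76
assumed_us_capacity_gw = 1_300                     # rough stand-in for total US generating capacity
share = new_flexible_demand_gw / assumed_us_capacity_gw
print(f"Share of assumed grid capacity: {share:.0%}")        # on the order of 5-6%
```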

But flexibility wouldn’t be enough to truly meet the swell in AI electricity demand. What do you think, Pilita? What would get the US out of these energy constraints? Is there anything else we should be thinking about when it comes to AI and its energy use? 

Pilita Clark responds:

I agree. Data centers that can cut their power use at times of grid stress should be the norm, not the exception. Likewise, we need more deals like those giving cheaper electricity to data centers that let power utilities access their backup generators. Both reduce the need to build more power plants, which makes sense regardless of how much electricity AI ends up using.

This is a critical point for countries across the world, because we still don’t know exactly how much power AI is going to consume. 

Forecasts for what data centers will need in as little as five years’ time vary wildly, from less than twice today’s rates to four times as much.

This is partly because there’s a lack of public data about AI systems’ energy needs. It’s also because we don’t know how much more efficient these systems will become. The US chip designer Nvidia said last year that its specialized chips had become 45,000 times more energy efficient over the previous eight years. 

Moreover, we have been very wrong about tech energy needs before. At the height of the dot-com boom in 1999, it was erroneously claimed that the internet would need half the US’s electricity within a decade—necessitating a lot more coal power.

Still, some countries are clearly feeling the pressure already. In Ireland, data centers chew up so much power that new connections have been restricted around Dublin to avoid straining the grid.

Some regulators are eyeing new rules forcing tech companies to provide enough power generation to match their demand. I hope such efforts grow. I also hope AI itself helps boost power abundance and, crucially, accelerates the global energy transition needed to combat climate change. OpenAI’s Sam Altman said in 2023 that “once we have a really powerful super intelligence, addressing climate change will not be particularly difficult.” 

The evidence so far is not promising, especially in the US, where renewable projects are being axed. Still, the US may end up being an outlier in a world where ever cheaper renewables made up more than 90% of new power capacity added globally last year. 

Europe is aiming to power one of its biggest data centers predominantly with renewables and batteries. But the country leading the green energy expansion is clearly China.

The 20th century was dominated by countries rich in the fossil fuels whose reign the US now wants to prolong. China, in contrast, may become the world’s first green electrostate. If it does this in a way that helps it win an AI race the US has so far controlled, it will mark a striking chapter in economic, technological, and geopolitical history.

Casey Crownhart replies:

I share your skepticism of tech executives’ claims that AI will be a groundbreaking help in the race to address climate change. To be fair, AI is progressing rapidly. But we don’t have time to wait for technologies standing on big claims with nothing to back them up. 

When it comes to the grid, for example, experts say there’s potential for AI to help with planning and even operating, but these efforts are still experimental.  

Meanwhile, much of the world is making measurable progress on transitioning to newer, greener forms of energy. How that will affect the AI boom remains to be seen. What is clear is that AI is changing our grid and our world, and we need to be clear-eyed about the consequences. 

Further reading 

MIT Technology Review reporters did the math on the energy needs of an AI query.

There are still a few reasons to be optimistic about AI’s energy demands.  

The FT’s visual data team take a look inside the relentless race for AI capacity.

And global FT reporters ask whether data centers can ever truly be green.

This article first appeared in our weekly AI newsletter, The Algorithm. Sign up here to get next week’s installment early.

Stop worrying about your AI footprint. Look at the big picture instead.

6 November 2025 at 06:00

Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.” 

This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want. 

That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much as Japan.) 

But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?” 

That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals. 

The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table. 

I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture. 

Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.
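That microwave comparison checks out with some quick arithmetic; the wattage below is an assumed typical value, not a number from those reports.

```python
# Quick sanity check of the per-query comparison above.
# The microwave wattage is an assumed typical value, not from the article.

query_energy_wh = 0.3
query_energy_joules = query_energy_wh * 3600        # 1 Wh = 3,600 J
microwave_power_watts = 1_100                        # assumed typical microwave

seconds_of_microwave = query_energy_joules / microwave_power_watts
print(f"{query_energy_joules:.0f} J ≈ {seconds_of_microwave:.1f} s of microwave use")
# -> 1080 J ≈ 1.0 s, consistent with "about a second"
```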

But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak.  (To learn more, read our Power Hungry series online.)

Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online. 

Just as with climate change, we need to look at this as a system rather than a series of individual choices. 

Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too. 

That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact. 

Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

This startup wants to clean up the copper industry

3 November 2025 at 06:00

Demand for copper is surging, as is pollution from its dirty production processes. The founders of one startup, Still Bright, think they have a better, cleaner way to generate the copper the world needs. 

The company uses water-based reactions, based on battery chemistry technology, to purify copper in a process that could be less polluting than traditional smelting. The hope is that this alternative will also help ease growing strain on the copper supply chain.

“We’re really focused on addressing the copper supply crisis that’s looming ahead of us,” says Randy Allen, Still Bright’s cofounder and CEO.

Copper is a crucial ingredient in everything from electrical wiring to cookware today. And clean energy technologies like solar panels and electric vehicles are introducing even more demand for the metal. Global copper demand is expected to grow by 40% between now and 2040. 

As demand swells, so do the climate and environmental impacts of copper extraction, the process of refining ore into a pure metal. There’s also growing concern about the geographic concentration of the copper supply chain. Copper is mined all over the world, and historically, many of those mines had smelters on-site to process what they extracted. (Smelters form pure copper metal by essentially burning concentrated copper ore at high temperatures.) But today, the smelting industry has consolidated, with many mines shipping copper concentrates to smelters in Asia, particularly China.

That’s partly because smelting uses a lot of energy and chemicals, and it can produce sulfur-containing emissions that can harm air quality. “They shipped the environmental and social problems elsewhere,” says Simon Jowitt, a professor at the University of Nevada, Reno, and director of the Nevada Bureau of Mines and Geology.

It’s possible to scrub pollution out of a smelter’s emissions, and smelters are much cleaner than they used to be, Jowitt says. But overall, smelting centers aren’t exactly known for environmental responsibility. 

So even countries like the US, which have plenty of copper reserves and operational mines, largely ship copper concentrates, which contain up to around 30% copper, to China or other countries for smelting. (There are just two operational ore smelters in the US today.)

Still Bright avoids the pyrometallurgical process that smelters use in favor of a chemical approach, partially inspired by devices called vanadium flow batteries.

In the startup’s reactor, vanadium reacts with the copper compounds in copper concentrates. The copper stays in the solid phase, while many of the impurities are left behind in the liquid. The whole thing takes between 30 and 90 minutes. The solid, which contains roughly 70% copper after this reaction, can then be fed into another, established process in the mining industry, called solvent extraction and electrowinning, to make copper that’s over 99% pure.

This is far from the first attempt to use a water-based, chemical approach to processing copper. Today, some copper ore is processed with acid, for example, and Ceibo, a startup based in Chile, is trying to use a version of that process on the type of copper that’s traditionally smelted. The difference here lies in the specific chemistry, particularly the choice to use vanadium.

One of Still Bright’s founders, Jon Vardner, was researching copper reactions and vanadium flow batteries when he came up with the idea to marry a copper extraction reaction with an electrical charging step that could recycle the vanadium.


After the vanadium reacts with the copper, the liquid soup can be fed into an electrolyzer, which uses electricity to turn the vanadium back into a form that can react with copper again. It’s basically the same process that vanadium flow batteries use to charge up. 

Other chemical processes for copper refining require high temperatures or extremely acidic conditions to get the copper into solution, force the reaction to proceed quickly, and ensure all the copper reacts. Still Bright’s process can run at ambient temperatures.

One of the major benefits of this approach is cutting the pollution from copper refining. Traditional smelting heats the target material to over 1,200 °C (about 2,200 °F), forming sulfur-containing gases that are released into the atmosphere.

Still Bright’s process produces hydrogen sulfide gas as a by-product instead. It’s still a dangerous material, but one that can be effectively captured and converted into useful side products, Allen says.

Another source of potential pollution is the sulfide minerals left over after the refining process, which can form sulfuric acid when exposed to air and water (this is called acid mine drainage, common in mining waste). Still Bright’s process will also produce that material, and the company plans to carefully track it, ensuring that it doesn’t leak into groundwater. 

The company is currently testing its process in the lab in New Jersey and designing a pilot facility in Colorado, which will have the capacity to make about two tons of copper per year. Next will be a demonstration-scale reactor, which will have a 500-ton annual capacity and should come online in 2027 or 2028 at a mine site, Allen says. Still Bright recently raised an $18.7 million seed round to help with the scale-up process.

How the scale-up goes will be a crucial test of the technology, and of whether the typically conservative mining industry will jump on board, UNR’s Jowitt says: “You want to see what happens on an industrial scale. And I think until that happens, people might be a little reluctant to get into this.”

Four thoughts from Bill Gates on climate tech

30 October 2025 at 07:00

Bill Gates doesn’t shy away or pretend modesty when it comes to his stature in the climate world today. “Well, who’s the biggest funder of climate innovation companies?” he asked a handful of journalists at a media roundtable event last week. “If there’s someone else, I’ve never met them.”

The former Microsoft CEO has spent the last decade investing in climate technology through Breakthrough Energy, which he founded in 2015. Ahead of the UN climate meetings kicking off next week, Gates published a memo outlining what he thinks activists and negotiators should focus on and how he’s thinking about the state of climate tech right now. Let’s get into it. 

Are we too focused on near-term climate goals?

One of the central points Gates made in his new memo is that he thinks the world is too focused on near-term emissions goals and national emissions reporting.

So in parallel with the national accounting structure for emissions, Gates argues, we should have high-level climate discussions at events like the UN climate conference. Those discussions should take a global view on how to reduce emissions in key sectors like energy and heavy industry.

“The way everybody makes steel, it’s the same. The way everybody makes cement, it’s the same. The way we make fertilizer, it’s all the same,” he says.

As he noted in one recent essay for MIT Technology Review, he sees innovation as the key to cutting the cost of clean versions of energy, cement, vehicles, and so on. And once products get cheaper, they can see wider adoption.

What’s most likely to power our grid in the future?

“In the long run, probably either fission or fusion will be the cheapest way to make electricity,” he says. (It should be noted that, as with most climate technologies, Gates has investments in both fission and fusion companies through Breakthrough Energy Ventures, so he has a vested interest here.)

He acknowledges, though, that reactors likely won’t come online quickly enough to meet rising electricity demand in the US: “I wish I could deliver nuclear fusion, like, three years earlier than I can.”

He also spoke to China’s leadership in both nuclear fission and fusion energy. “The amount of money they’re putting [into] fusion is more than the rest of the world put together times two. I mean, it’s not guaranteed to work. But name your favorite fusion approach here in the US—there’s a Chinese project.”

Can carbon removal be part of the solution?

I had my colleague James Temple’s recent story on what’s next for carbon removal at the top of my mind, so I asked Gates if he saw carbon credits or carbon removal as part of the problematic near-term thinking he wrote about in the memo.

Gates buys offsets to cancel out his own personal emissions, to the tune of about $9 million a year, he said at the roundtable, but doesn’t expect many of those offsets to make a significant dent in climate progress on a broader scale: “That stuff, most of those technologies, are a complete dead end. They don’t get you cheap enough to be meaningful.

“Carbon sequestration at $400, $200, $100, can never be a meaningful part of this game. If you have a technology that starts at $400 and can get to $4, then hallelujah, let’s go. I haven’t seen that one. There are some now that look like they can get to $40 or $50, and that can play somewhat of a role.”

Will AI be good news for innovation?

During the discussion, I started a tally in the corner of my notebook, adding a tick every time Gates mentioned AI. Over the course of about an hour, I got to six tally marks, and I definitely missed making a few.

Gates acknowledged that AI is going to add electricity demand, a challenge for a US grid that hasn’t seen net demand go up for decades. But so too will electric cars and heat pumps. 

I was surprised at just how positively he spoke about AI’s potential, though:

“AI will accelerate every innovation pipeline you can name: cancer, Alzheimer’s, catalysts in material science, you name it. And we’re all trying to figure out what that means. That is the biggest change agent in the world today, moving at a pace that is very, very rapid … every breakthrough energy company will be able to move faster because of using those tools, some very dramatically.”

I’ll add that, as I’ve noted here before, I’m skeptical of big claims about AI’s potential to be a silver bullet across industries, including climate tech. (If you missed it, check out this story about AI and the grid from earlier this year.) 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Whales are dying. Don’t blame wind turbines.

30 October 2025 at 06:00

When a whale dies, it often decomposes quite quickly—the process starts within hours of an animal’s stranding on shore. Depending on the species, they may have six inches or more of blubber, an insulating layer that traps heat inside and turns their internal organs to mush. 

That can make Jennifer Bloodgood’s job very difficult. As a wildlife veterinarian with New York State and the Cornell Wildlife Health Lab, she’s an expert on conducting whale necropsies, as autopsies on animals are called—and she knows that it’s best to start them quickly, or she could miss key clues about how the gigantic mammal perished.

These investigations have become especially important because whale deaths have become a political flashpoint with significant consequences. There are currently three active mortality events for whales in the Atlantic, meaning clusters of deaths that experts consider unusual. And Republican lawmakers, influential conservative think tanks, and—most notably—President Donald Trump (a longtime enemy of wind power) are making dubious claims about how offshore wind farms are responsible.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


This has become the basis for a nebulous quasi-conspiracy theory, in which some anti-wind groups have claimed that the surveying technology used to map sites for wind farms can disturb the animals. Another argument is that the noise emitted by operational turbines disrupts whales’ communication and navigation. “The windmills are driving the whales crazy, obviously,” President Trump remarked in January. 

Over the past year, the supposed threat posed by the turbines has been cited as part of the official justification for Washington’s attack on offshore wind power—a significant part of what was once a growing clean energy infrastructure in the country. The Trump administration has halted leases and permits for new projects, ordered work to stop on a major new wind farm that was nearly complete, and canceled over $600 million in funding for ports to support the industry. 

But any finger-pointing at wind turbines for whale deaths ignores the fact that whales have been washing up on beaches since long before the giant machines were rooted in the ocean floor. This is something that has always happened. And the scientific consensus is clear: There’s no evidence that wind farms are the cause of recent increases in whale deaths. 

There’s still a lot that researchers don’t know about whales’ lives and deaths, but experts often conduct dozens of in-depth (and somewhat gruesome) investigations each year on the US East Coast alone. And in the active mortality events there, the data shows that humpback whales and North Atlantic right whales are typically casualties of human interaction, falling victim to things like boat strikes and entanglement in fishing gear. (In fact, in Bloodgood’s experience, about half the humpback whales that are in good enough condition to necropsy show signs of a vessel strike or other human interaction.) And minke whales appear to be falling to a common bacterial infection called brucellosis, which she’s also observed.

“When a whale strands, there’s a huge effort that goes into responding and figuring out why it died,” Bloodgood says. “Many people’s job is to go out and figure out what’s happening.”

And, notably, what they’re finding is not death by turbine. “There is currently no evidence,” she tells me, “that wind energy is influencing whale strandings.”


Bloodgood is largely clinical as she talks about her work, describing the sometimes gory work of necropsies with a straight face from her simply decorated office—though on several occasions she chuckles and apologizes after sharing a particularly graphic detail. 

“We can learn so much from dead animals,” she tells me. By investigating bodies that wash up on shore, she and her fellow experts can uncover basic details like their species and age, but also what they ate, and, of course, why they died. They may look for signs of disease, or for evidence of human interference—boats, fishing nets, and yes, wind farm development. 

The first step after someone spots a whale washed up on shore is to call local authorities and groups of scientists, veterinarians, and volunteers, called stranding networks, that can help rescue, rehabilitate, and release the ones that are still alive—or perform necropsies on the ones that aren’t. 

Over the past few years, Bloodgood has helped with nine strandings across New Jersey, New York, and Delaware. (As a professor, she often brings students along. She’s had to introduce a lottery system since so many are interested.)

How any necropsy unfolds depends on the condition of the whale and on its stranding location. But generally, if there’s a fresh enough body to justify a full necropsy, a large team will get together on the scene. They’ll start with an external exam, looking for anything unusual on the skin, eyes, blowhole, and mouth. Then they’ll systematically dismantle the whale, noting anything that seems abnormal, and taking samples to send back to the lab. 

When the researchers are evaluating a cause of death, they’re looking at the whole picture, trying to find the most likely cause and gather evidence to discount all other potential causes. There’s not always a smoking gun, Bloodgood says. But a thorough enough examination can usually yield some meaningful clues. 

Say, for instance, a whale’s suspected cause of death is a boat strike. Researchers will look out for bruises and cuts during the external examination, and then they’ll try to spot broken bones and internal hemorrhaging. But they’ll also keep their eyes out for other issues, like lesions that can signal brucellosis. 

Usually, you want experienced cutters to do the carving, Bloodgood says, since whales are so large it’s usually necessary to use knives that are one or two feet long. Whales are also oily, so the knives can get quite slippery.  

After cutting through the thick blubber layer, researchers may use gaff hooks to spear skin or organs in order to move them around or keep them out of the way. There’s no rigid order of operations, though they’ll typically look at the major organs including lungs, liver, kidneys, and brain; it’s also usually helpful to open up the digestive system to see what the whale has been eating, which Bloodgood says increasingly includes plastic.

Accessing all the necessary organs can require moving the whale. Sometimes, if a stranding location is accessible enough, heavy equipment like an excavator can help lift part of the body to assist in splitting it open.

When that’s not the case, experts can use what’s called the window method, Bloodgood says. That basically just means cutting strategically placed holes along the body to access the desired organs. Near the pectoral fin is generally a good target for a sample of the lung, for example. One problem with this method is that it doesn’t always work if the body has decomposed and been tossed around in the waves before washing onshore. In that case, things get all jumbled up, and the lungs could end up by the tail, for instance.

After the deconstructing is done, Bloodgood goes back to the lab, taking samples of each of the tissues to conduct further analysis. One area of interest for her is ear bones. If a whale were in fact affected by the sound waves used by boats surveying wind farm locations (something that’s unlikely, given the type of sound waves used), their ear bones might show evidence of trauma associated with noise. That damage could be visible under a microscope or in a CT scan. 

Bloodgood has been investigating this theory, with a particular focus on dolphins—their heads, unlike whales’, are small enough to fit in the scanner. There’s been no sign of such damage in any of the samples she’s examined.  


Despite all the things that experts like Bloodgood can observe and test for, the system can never be perfect. Not all dead whales end up on beaches, and not all that do are in good enough shape for a thorough investigation. What’s more, it turns out to be quite difficult to entirely disprove that any single cause contributed to a whale’s death. Even if a stranded animal had an infection, or was hit by a boat, it’s theoretically possible there was another factor as well.

Still, in many cases, the necropsy turns up enough for scientists to feel confident assigning a cause of death. And after an investigation is complete, they publish a report, which is then analyzed by the National Oceanic and Atmospheric Administration. Enough such reports can point to trends.  

But most people almost certainly don’t see those reports or seek out that data. Even as whale deaths have become a heated point of debate at the highest levels of politics, Bloodgood feels the public doesn’t always recognize the care researchers take to investigate what’s going on. “I think a lot of people don’t realize the amount of effort that goes into understanding why whales die,” she says. 

She notes that these experts are working with limited, and dwindling, public support and resources. The Trump administration recently canceled funding for two programs that used aerial surveys and underwater listening devices to track whale populations and better understand the effects of human activity—including offshore wind development. 

At the same time, there’s more pressure, and more misinformation out there, she adds:  “I think it’s just become increasingly important to be transparent with the public.” In addition to publishing reports from necropsies online, some stranding networks give updates to local communities on social media, too. 

“If you don’t tell people what you find, they start coming up with their own ideas,” Bloodgood says. “If they think you’re hiding something, that’s the worst.” 

What a massive thermal battery means for energy storage

23 October 2025 at 06:00

Rondo Energy just turned on what it says is the world’s largest thermal battery, an energy storage system that can take in electricity and provide a consistent source of heat.

The company announced last week that its first full-scale system is operational, with 100 megawatt-hours of capacity. The thermal battery is powered by an off-grid solar array and will provide heat for enhanced oil recovery (more on this in a moment).

Thermal batteries could help clean up difficult-to-decarbonize sectors like manufacturing and heavy industrial processes like cement and steel production. With Rondo’s latest announcement, the industry has reached a major milestone in its effort to prove that thermal energy storage can work in the real world. Let’s dig into this announcement, what it means to have oil and gas involved, and what comes next.

The concept behind a thermal battery is overwhelmingly simple: Use electricity to heat up some cheap, sturdy material (like bricks) and keep it hot until you want to use that heat later, either directly in an industrial process or to produce electricity.

Rondo’s new system has been operating for 10 weeks and achieved all the relevant efficiency and reliability benchmarks, according to the company. The bricks reach temperatures over 1,000 °C (about 1,800 °F), and over 97% of the energy put into the system is returned as heat.
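For a rough sense of scale, here’s a back-of-envelope sketch of how much brick it takes to hold 100 megawatt-hours of heat. The specific heat and temperature swing are assumed textbook-ish values, not figures from Rondo or this article.

```python
# Back-of-envelope: how much brick could store 100 MWh of heat?
# The specific heat and temperature swing are assumed values, not from Rondo or the article.

capacity_mwh = 100
energy_joules = capacity_mwh * 1e6 * 3600            # 100 MWh -> 3.6e11 J

specific_heat_j_per_kg_k = 900                        # assumed, typical for brick/ceramic
temperature_swing_k = 800                             # assumed swing between charged and discharged bricks

brick_mass_kg = energy_joules / (specific_heat_j_per_kg_k * temperature_swing_k)
print(f"Roughly {brick_mass_kg / 1000:.0f} tonnes of brick")  # on the order of 500 tonnes
```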

This is a big step from the 2 MWh pilot system that Rondo started up in 2023, and it’s the first of the mass-produced, full-size heat batteries that the company hopes to put in the hands of customers.
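For a rough sense of the physical scale behind those numbers, here's a back-of-the-envelope sketch in Python. Only the 100 megawatt-hour capacity and the 97% figure come from Rondo's announcement; the brick properties and temperature swing are generic assumptions about refractory brick, not the company's specs.

```python
# Back-of-the-envelope sizing of a brick thermal battery.
# Only the 100 MWh capacity and 97% round-trip figure come from Rondo's
# announcement; the brick properties below are generic assumptions.

CAPACITY_MWH = 100                 # stated storage capacity
ROUND_TRIP_EFFICIENCY = 0.97       # stated share of input energy returned as heat

SPECIFIC_HEAT_J_PER_KG_K = 1_000   # assumed: roughly 1 kJ/(kg*K) for refractory brick
TEMPERATURE_SWING_K = 800          # assumed: bricks cycling between ~200 °C and ~1,000 °C

capacity_joules = CAPACITY_MWH * 3.6e9          # 1 MWh = 3.6 billion joules

# Sensible heat storage: E = m * c * dT, so m = E / (c * dT)
brick_mass_tons = capacity_joules / (SPECIFIC_HEAT_J_PER_KG_K * TEMPERATURE_SWING_K) / 1_000

heat_delivered_mwh = CAPACITY_MWH * ROUND_TRIP_EFFICIENCY

print(f"Brick mass needed: roughly {brick_mass_tons:,.0f} metric tons")
print(f"Heat returned per full cycle: about {heat_delivered_mwh:.0f} MWh")
```

On those assumptions, storing 100 megawatt-hours takes on the order of a few hundred metric tons of brick, which helps explain why these systems look more like industrial buildings than battery packs.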

Thermal batteries could be a major tool in cutting emissions: 20% of total energy demand today is used to provide heat for industrial processes, and most of that is generated by burning fossil fuels. So this project’s success is significant for climate action.

There’s one major detail here, though, that dulls some of that promise: This battery is being used for enhanced oil recovery, a process where steam is injected down into wells to get stubborn oil out of the ground.

It can be tricky for a climate technology to show its merit by helping harvest fossil fuels. Some critics argue that these sorts of techniques keep that polluting infrastructure running longer.

When I spoke to Rondo founder and chief innovation officer John O’Donnell about the new system, he defended the choice to work with oil and gas.

“We are decarbonizing the world as it is today,” O’Donnell says. To his mind, it’s better to help an oil and gas company use solar power for its operation than leave it to continue burning natural gas for heat. Between cheap solar, expensive natural gas, and policies in California, he adds, Rondo’s technology made sense for the customer.

Having a willing customer pay for a full-scale system has been crucial to Rondo’s effort to show that it can deliver its technology.

And the next units are on the way: Rondo is currently building three more full-scale units in Europe. The company will be able to bring them online cheaper and faster because of what it’s learned from the California project, O’Donnell says. 

The company can also build more of these batteries, and do it quickly: its factory in Thailand currently has the capacity to produce 2.4 gigawatt-hours’ worth of heat batteries.

I’ve been following progress on thermal batteries for years, and this project obviously represents a big step forward. For all the promises of cheap, robust energy storage, there’s nothing like actually building a large-scale system and testing it in the field.

It’s definitely hard to get excited about enhanced oil recovery—we need to stop burning fossil fuels, and do it quickly, to avoid the worst impacts of climate change. But I see the argument that as long as oil and gas operations exist, there’s value in cleaning them up.

And as O’Donnell puts it, heat batteries can help: “This is a really dumb, practical thing that’s ready now.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The problem with Big Tech’s favorite carbon removal tech

16 October 2025 at 06:00

Sucking carbon pollution out of the atmosphere is becoming a big business—companies are paying top dollar for technologies that can cancel out their own emissions.

Today, nearly 70% of announced carbon removal contracts are for one technology: bioenergy with carbon capture and storage (BECCS). Basically, the idea is to use trees or other forms of biomass for energy, and then capture the emissions when that material is burned.

While corporations, including tech giants like Microsoft, are betting big on this technology, there are a few potential problems with BECCS, as my colleague James Temple laid out in a new story. And some of the concerns echo similar problems with other climate technologies we cover, like carbon offsets and alternative jet fuels.

Carbon math can be complicated.

To illustrate one of the biggest issues with BECCS, we need to run through the logic on its carbon accounting. (And while this tech can use many different forms of biomass, let’s assume we’re talking about trees.)

When trees grow, they suck up carbon dioxide from the atmosphere. Those trees can be harvested and used for some intended purpose, like making paper. The leftover material, which might otherwise be waste, is then processed and burned for energy.

This cycle is, in theory, carbon neutral. The emissions from burning the biomass are canceled out by what was removed from the atmosphere during plants’ growth. (Assuming those trees are replaced after they’re harvested.)

So now imagine that carbon-scrubbing equipment is added to the facility that burns the biomass, capturing emissions. If the cycle was carbon neutral before, by that logic it’s now carbon negative: On net, emissions are removed from the atmosphere. Sounds great, no notes. 

There are a few problems with this math, though. For one, it leaves out the emissions that might be produced while harvesting, transporting, and processing wood. And if projects require clearing land to plant trees or grow crops, that transformation can wind up releasing emissions too.
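To make the accounting concrete, here's a toy sketch of the BECCS ledger for a single ton of biomass. Every number is an illustrative assumption; the point is the structure of the math, not the values.

```python
# Toy BECCS carbon ledger, per ton of dry biomass burned.
# All numbers are illustrative assumptions, not measurements.

CO2_ABSORBED_DURING_GROWTH = 1.8   # tons of CO2 pulled from the air as the trees grew
CO2_RELEASED_WHEN_BURNED = 1.8     # tons of CO2 leaving the boiler
CAPTURE_RATE = 0.90                # assumed share of stack CO2 the equipment captures
SUPPLY_CHAIN_EMISSIONS = 0.25      # assumed: harvesting, trucking, drying, processing

captured_and_stored = CO2_RELEASED_WHEN_BURNED * CAPTURE_RATE
escapes_the_stack = CO2_RELEASED_WHEN_BURNED - captured_and_stored

# Net flow to the atmosphere: what escapes plus supply-chain emissions,
# minus what the growing biomass removed in the first place.
net_to_atmosphere = (escapes_the_stack
                     + SUPPLY_CHAIN_EMISSIONS
                     - CO2_ABSORBED_DURING_GROWTH)

print(f"Captured and stored: {captured_and_stored:.2f} t CO2")
print(f"Net change in atmospheric CO2: {net_to_atmosphere:+.2f} t")
# A negative result means net removal -- and note how quickly the margin
# shrinks if the capture rate falls or the supply chain gets dirtier.
```

Add land-use change, or a counterfactual in which the "waste" would have kept its carbon locked up anyway, and that margin can shrink further or even flip sign.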

Issues with carbon math might sound a little familiar if you’ve read any of James’s reporting on carbon offsets, programs where people pay for others to avoid emissions. In particular, his 2021 investigation with ProPublica’s Lisa Song laid out how this so-called solution was actually adding millions of tons of carbon dioxide into the atmosphere.

Carbon capture may entrench polluting facilities.

One of the big benefits of BECCS is that it can be added to existing facilities. There’s less building involved than there might be in something like a facility that vacuums carbon directly out of the air. That helps keep costs down, so BECCS is currently much cheaper than direct air capture and other forms of carbon removal.

But keeping legacy equipment running might not be a great thing for emissions or local communities in the long run.

Carbon dioxide is far from the only pollutant spewing out of these facilities. Burning biomass or biofuels can release emissions that harm human health, like particulate matter, sulfur dioxide, and carbon monoxide. Carbon capture equipment might trap some of these pollutants, like sulfur dioxide, but not all.

Assuming that waste material wouldn’t be used for something else might not be right.

It sounds great to use waste, but there’s a major asterisk lurking here, as James lays out in the story:

But the critical question that emerges with waste is: Would it otherwise have been burned or allowed to decompose, or might some of it have been used in some other way that kept the carbon out of the atmosphere? 

Biomass can be used for other things, like making plastics, building materials, or even soil additives that can help crops get more nutrients. So the assumption that it’s BECCS or nothing is flawed.

Moreover, a weird thing happens when you start making waste valuable: There’s an incentive to produce more of it. Some experts are concerned that companies could wind up trimming more trees or clearing more forest than they otherwise would, just to produce more feedstock for BECCS.

These waste issues remind me of conversations around sustainable aviation fuels. These alternative fuels can be made from a huge range of materials, including crop waste or even used cooking oil. But as demand for these clean fuels has ballooned, things have gotten a little wonky—there are even some reports of fraud, where scammers try to pass off newly made oil from crops as used cooking oil.

BECCS is a potentially useful technology, but like many things in climate tech, it can quickly get complicated. 

James has been reporting on carbon offsets and carbon removal for years. As he put it to me this week when we were chatting about this story: “Just cut emissions and stop messing around.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

3 takeaways about climate tech right now

9 October 2025 at 06:00

On Monday, we published our 2025 edition of Climate Tech Companies to Watch. This marks the third time we’ve put the list together, and it’s become one of my favorite projects to work on every year. 

In the journalism world, it’s easy to get caught up in the latest news, whether it’s a fundraising round, research paper, or startup failure. Curating this list gives our team a chance to take a step back and consider the broader picture. What industries are making progress or lagging behind? Which countries or regions are seeing quick changes? Who’s likely to succeed? 

This year is an especially interesting moment in the climate tech world, something we grappled with while choosing companies. Here are three of my takeaways from the process of building this list. 

1. It’s hard to overstate China’s role in energy technology right now. 

To put it bluntly, China’s progress on cleantech is wild. The country dominates wind and solar installations and EV manufacturing, and it’s also pumping government money into emerging technologies like fusion energy. 

We knew we wanted this list to reflect China’s emergence as a global energy superpower, and we ended up including two Chinese firms in key industries: renewables and batteries.

In 2024, the world’s top four wind turbine makers were all Chinese companies. Envision was in the second spot, with 19.3 gigawatts of new capacity added last year. But the company isn’t limited to wind; it’s working to help power heavy industries like steel and chemicals with technology like green hydrogen. 

Batteries are also a hot industry in China, and we’re seeing progress in tech beyond the lithium-ion cells that currently dominate EVs and energy storage on the grid. We represent that industry with HiNa Battery Technology, a leading startup building sodium-ion batteries, which could be cheaper than today’s options. The company’s batteries are already being used in electric mopeds and grid installations. 

2. Energy demand from data centers and AI is on everyone’s mind, especially in the US. 

Another trend we noticed this year was a fixation on the growing energy demand of data centers, including massive planned dedicated facilities that power AI models. (Here’s another nudge to check out our Power Hungry series on AI and energy, in case you haven’t explored it already.) 

Even if their technology has nothing to do with data centers, companies are trying to show how they can be valuable in this age of rising energy demand. Some are signing lucrative deals with tech giants that could provide the money needed to help bring their product to market. 

Kairos Power hopes to be one such energy generator, building next-generation nuclear reactors. Last year, the company signed an agreement with Google that will see the tech giant buy up to 500 megawatts of electricity from Kairos’s first reactors through 2035. 

In a more direct play, Redwood Materials is stringing together used EV batteries to build microgrids that could power—you guessed it—data centers. The company’s first installation fired up this year, and while it’s small, it’s an interesting example of a new use for old technology. 

3. Materials continue to be an area that’s ripe for innovation. 

In a new essay that accompanies the list, Bill Gates lays out the key role of innovation in making progress on climate technology. One thing that jumped out at me while I was reading that piece was a number: 30% of global greenhouse-gas emissions come from manufacturing, including cement and steel production. 

I’ve obviously covered materials and heavy industry for years. But it still strikes me just how much innovation we need in the most important materials we use to scaffold our world. 

Several companies on this year’s list focus on materials: We’ve once again represented cement, a material that accounts for 7% of global greenhouse-gas emissions. Cemvision is working to use alternative fuel sources and starting materials to clean up the dirty industry. 

And Cyclic Materials is trying to reclaim and recycle rare earth magnets, a crucial technology that underpins everything from speakers to EVs and wind turbines. Today, only about 0.2% of the rare earths in discarded devices are recycled, but the company is building multiple facilities in North America in hopes of changing that. 

Our list of 10 Climate Tech Companies to Watch highlights businesses we think have a shot at helping the world address and adapt to climate change with the help of everything from established energy technologies to novel materials. It’s a representation of this moment, and I hope you enjoy taking a spin through it.

2025 Climate Tech Companies to Watch: Cemvision and its low-emissions cement

6 October 2025 at 06:45

Cement is one of the most used materials on the planet, and the industry emits billions of tons of greenhouse gases annually. Cemvision wants to use waste materials and alternative fuels to help reduce climate pollution from cement production.

Today, making cement requires crushing limestone and heating it to super high temperatures, usually by burning fossil fuels. The chemical reactions also release carbon dioxide pollution. 

Swedish startup Cemvision made a few key production changes to reduce both emissions and the need to mine new materials. First, the company is moving away from Portland cement, the most common form of the material used currently. 

Making Portland cement requires reaching ultra-high temperatures, over 1,450 °C (2,650 °F). Instead, Cemvision makes a material that requires lower temperatures (roughly 1,200 °C, or 2,200 °F), which reduces the amount of energy required. 

The company also uses alternative sources for heating. Rather than fossil fuels, Cemvision can use a combination of plasma, hydrogen, and electricity. The startup tested its process in a demonstration-scale kiln, which can make up to 12 tons per day. The material has a high strength under compression and doesn’t heat up much when it’s mixed with water, both desirable qualities for builders. 

Cemvision also has a strong focus on building a circular economy. The company’s cement incorporates waste materials like mine tailings and slag, a by-product of iron and steel manufacturing. And it recently published results showing that it can use steel slag from electric arc furnaces and basic oxygen furnaces. These materials reduce the need for newly mined limestone and other virgin materials, cutting down on the carbon dioxide released when limestone breaks down in the kiln. 
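To see why both changes matter, here's an illustrative split of conventional cement emissions into process CO2 (from the limestone itself) and fuel CO2 (from heating the kiln). The per-ton figure and the 60/40 split are generic industry approximations, and the reduction factors are assumptions for illustration, not Cemvision's numbers.

```python
# Illustrative split of conventional cement CO2 and the two levers at play.
# All figures are generic approximations and assumptions, not Cemvision's data.

CO2_PER_TON = 0.8        # assumed tons of CO2 per ton of cement, roughly
PROCESS_SHARE = 0.6      # assumed share from limestone breaking down in the kiln
FUEL_SHARE = 0.4         # assumed share from burning fuel for kiln heat

process_co2 = CO2_PER_TON * PROCESS_SHARE
fuel_co2 = CO2_PER_TON * FUEL_SHARE

# Lever 1: lower kiln temperatures plus plasma, hydrogen, or electric heat
# can cut most of the fuel-related CO2 (assumed 90% here).
fuel_co2_after = fuel_co2 * 0.1

# Lever 2: replacing mined limestone with already-processed wastes like slag
# avoids most of the process CO2 (also assumed 90% here).
process_co2_after = process_co2 * 0.1

before = process_co2 + fuel_co2
after = process_co2_after + fuel_co2_after
print(f"Roughly {before:.2f} t CO2 per ton before vs. {after:.2f} t after: "
      f"a {(1 - after / before) * 100:.0f}% cut on these assumptions")
```

Only by pulling both levers does a deep cut become possible in this sketch; cleaning up the kiln heat alone would leave most of the process emissions in place.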


Key indicators

  • Industry: Cement
  • Founded: 2019
  • Headquarters: Stockholm, Sweden
  • Notable fact: Cemvision was a member of the Breakthrough Energy Fellows program and the Norrsken accelerator program, started by Klarna cofounder Niklas Adalberth.

Potential for impact

The cement industry today accounts for about 7% of global greenhouse gas emissions. Cemvision’s process can reduce emissions by between 80% and 95% compared to traditional cement-making by using waste materials and alternative fuels. 

The company has partnerships with builders and industrial customers, including in construction and mining. 

Caveats

Cemvision’s material will be more expensive than conventional cement, so it’ll require either policy support or customers who are willing to pay more. The European Union has a policy system that charges for pollution, and that should help make Cemvision’s cement competitive. The company says its product will be less expensive than one of the leading methods of cleaning up cement, carbon capture and sequestration. 

The cement industry is quite conservative, and there’s often resistance to new technologies, including adopting materials other than Portland cement. Cemvision’s cement will need to gain wide acceptance to make progress on emissions. 

Next steps

Cemvision has a site selected and is currently raising money to finance a full-scale plant in Northern Europe. That facility will have a capacity of 500,000 metric tons annually, and the company says it should open by 2028. 

EV tax credits are dead in the US. Now what?

2 October 2025 at 06:00

On Wednesday, federal EV tax credits in the US officially came to an end.

Those credits, expanded and extended in the 2022 Inflation Reduction Act, gave drivers up to $7,500 in credits toward the purchase of a new electric vehicle. They’ve been a major force in cutting the up-front costs of EVs, pushing more people toward purchasing them and giving automakers confidence that demand would be strong.

The tax credits’ demise comes at a time when battery-electric vehicles still make up a small percentage of new vehicle sales in the country. And transportation is a major contributor to US climate pollution, with cars, trucks, ships, trains, and planes together making up roughly 30% of total greenhouse-gas emissions.

To anticipate what’s next for the US EV market, we can look to countries like Germany, which have ended similar subsidy programs. (Spoiler alert: It’s probably going to be a rough end to the year.)

When you factor in fuel savings, the lifetime cost of an EV can already be lower than that of a gas-powered vehicle today. But EVs can have a higher up-front cost, which is why some governments offer a tax credit or rebate that can help boost adoption for the technology.
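Here's a simplified lifetime-cost sketch of that trade-off. Every price, efficiency, and mileage figure below is a placeholder assumption for illustration, not data from the story.

```python
# Simplified lifetime-cost comparison of an EV vs. a comparable gas car.
# Every number below is a placeholder assumption for illustration only.

YEARS = 10
MILES_PER_YEAR = 12_000
FEDERAL_CREDIT = 7_500        # the now-expired federal tax credit

ev_sticker = 40_000           # assumed up-front price
gas_sticker = 33_000          # assumed up-front price

ev_kwh_per_mile = 0.30        # assumed EV efficiency
electricity_per_kwh = 0.15    # assumed electricity price, $/kWh
gas_mpg = 30                  # assumed gas-car fuel economy
gasoline_per_gallon = 3.50    # assumed gasoline price, $/gallon

ev_fuel_cost = YEARS * MILES_PER_YEAR * ev_kwh_per_mile * electricity_per_kwh
gas_fuel_cost = YEARS * MILES_PER_YEAR / gas_mpg * gasoline_per_gallon

ev_total = ev_sticker + ev_fuel_cost
gas_total = gas_sticker + gas_fuel_cost

print(f"Gas car: ${gas_total:,.0f} over {YEARS} years")
print(f"EV:      ${ev_total:,.0f} without the credit, "
      f"${ev_total - FEDERAL_CREDIT:,.0f} with it")
```

On these made-up numbers the EV comes out slightly ahead over a decade even without the credit, but the buyer still has to cover a $7,000 higher sticker price on day one, and that up-front gap is exactly what the credit was designed to narrow.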

In 2016, Germany kicked off a national incentive program to encourage EV sales. While the program was active, drivers could get grants of up to about €6,000 toward the purchase of a new battery-electric or plug-in hybrid vehicle.

Eventually, the government began pulling back the credits. Support for plug-in hybrids ended in 2022, and commercial buyers lost eligibility in September 2023. Then the entire program came to a screeching halt in December 2023, when the government announced it would be ending the incentives with about one week’s notice.

Monthly sales data shows the fingerprints of those changes. In each case where there’s a contraction of public support, there’s a peak in sales just before the cutback, then a crash after. These short-term effects can be dramatic: About half as many battery-electric vehicles were sold in Germany in January 2024 as in December 2023. 

We’re already seeing the first half of this sort of boom-bust cycle in the US: EV sales ticked up in August, making up about 10% of all new vehicle sales, and analysts say September will turn out to be a record-breaking month. People rushed to take advantage of the credits while they still could.

Next comes the crash—the next few months will probably be very slow for EVs. One analyst predicted to the Washington Post that the figure could plummet to the low single digits, “like 1 or 2%.”

Ultimately, it’s not terribly surprising that there are local effects around these policy changes. “The question is really how long this decline will last, and how slowly any recovery in the growth will be,” Robbie Andrew, a senior researcher at the CICERO Center for International Climate Research in Norway who collects EV sales data, said in an email. 

When I spoke to experts (including Andrew) for a story last year, several told me that Germany’s subsidies were ending too soon, and that they were concerned about what cutting off support early would mean for the long-term prospects of the technology in the country. And Germany was much further along than the US, with EVs making up 20% of new vehicle sales—twice the American proportion.

EV growth did see a longer-term backslide in Germany after the end of the subsidies. Battery-electric vehicles made up 13.5% of new registrations in 2024, down from 18.5% the year before, and the UK also passed Germany to become Europe’s largest EV market. 

Things have improved this year, with sales in the first half beating records set in 2023. But growth would need to pick up significantly for Germany to reach its goal of getting 15 million battery-electric vehicles registered in the country by 2030. As of January 2025, that number was just 1.65 million. 

According to early projections, the end of tax credits in the US could significantly slow progress on EVs and, by extension, on cutting emissions. Sales of battery-electric vehicles could be about 40% lower in 2030 without the credits than what we’d see with them, according to one analysis by Princeton University’s Zero Lab.

Some US states still have their own incentive programs for people looking to buy electric vehicles. But without federal support, the US is likely to continue lagging behind global EV leaders like China. 

As Andrew put it: “From a climate perspective, with road transport responsible for almost a quarter of US total emissions, leaving the low-hanging fruit on the tree is a significant setback.” 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Coming soon: Our 2025 list of Climate Tech Companies to Watch

29 September 2025 at 06:00

The need to cut emissions and adapt to our warming world is growing more urgent. This year, we’ve seen temperatures reach record highs, as they have nearly every year for the last decade. Climate-fueled natural disasters are affecting communities around the world, costing billions of dollars. 

That’s why, for the past two years, MIT Technology Review has curated a list of companies with the potential to make a meaningful difference in addressing climate change (you can revisit the 2024 list here). We’re excited to share that we’ll publish our third edition of Climate Tech Companies to Watch on October 6. 

The list features businesses from around the world that are building technologies to reduce emissions or address the impacts of climate change. They represent advances across a wide range of industries, from agriculture and transportation to energy and critical minerals. 

One notable difference about this year’s list is that we’ve focused on fewer firms—we’ll highlight 10 instead of the 15 we’ve recognized in previous years. 

This change reflects the times: Climate science and technology are in a dramatically different place from where they were just one year ago. The US, the world’s largest economy and historically its biggest polluter, has made a U-turn on climate policy as the Trump administration cancels hundreds of billions of dollars in grants, tax credits, and loans designed to support the industry and climate research.  

And the stark truth is that time is of the essence. This year marks 10 years since the Paris Agreement, the UN treaty that aimed to limit global warming by setting a goal of cutting emissions so that temperatures would rise no more than 1.5 °C above preindustrial levels. Today, experts agree that we’ve virtually run out of time to reach that goal and will need to act fast to limit warming to less than 2 °C.

The companies on this year’s list are inventing and scaling technologies that could help. There’s a wide array of firms represented, from early-stage startups to multibillion-dollar businesses. Their technologies run the gamut from electric vehicles to the materials that scaffold our world. 

Of course, we can’t claim to be able to predict the future: Not all the businesses we’ve recognized will succeed. But we’ve done our best to choose companies with a solid technical footing, as well as feasible plans for bringing their solutions to the right market and scaling them effectively. 

We’re excited to share the list with you in just a few days. These companies are helping address one of the most crucial challenges of our time. Who knows—maybe you’ll even come away feeling a little more hopeful.

Fusion power plants don’t exist yet, but they’re making money anyway

25 September 2025 at 06:00

This week, Commonwealth Fusion Systems announced it has another customer for its first commercial fusion power plant, in Virginia. Eni, one of the world’s largest oil and gas companies, signed a billion-dollar deal to buy electricity from the facility.

One small detail? That reactor doesn’t exist yet. Neither does the smaller reactor Commonwealth is building first to demonstrate that its tokamak design will work as intended.

This is a weird moment in fusion. Investors are pouring billions into the field to build power plants, and some companies are even signing huge agreements to purchase power from those still-nonexistent plants. All this comes before companies have actually completed a working reactor that can produce electricity. It takes money to develop a new technology, but all this funding could lead to some twisted expectations. 

Nearly three years ago, the National Ignition Facility at Lawrence Livermore National Laboratory hit a major milestone for fusion power. With the help of the world’s most powerful lasers, scientists heated a pellet of fuel to 100 million °C. Hydrogen atoms in that fuel fused together, releasing more energy than the lasers put in.

It was a game changer for the vibes in fusion. The NIF experiment finally showed that a fusion reactor could yield net energy. Plasma physicists’ models had certainly suggested that it should be true, but it was another thing to see it demonstrated in real life.

But in some ways, the NIF results didn’t really change much for commercial fusion. That site’s lasers used a bonkers amount of energy, the setup was wildly complicated, and the whole thing lasted a fraction of a second. To operate a fusion power plant, not only do you have to achieve net energy, but you also need to do that on a somewhat constant basis and—crucially—do it economically.
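One way to see the gap is to separate two different "gain" numbers: the energy out of the fuel versus the energy the lasers delivered to it, and the energy out versus the electricity drawn from the grid to fire those lasers. The figures below are widely reported approximations from that 2022 shot; the wall-plug number in particular is a rough public estimate.

```python
# "Net energy" at the target vs. net energy a power plant would care about,
# using widely reported approximate figures from NIF's 2022 ignition shot.

laser_energy_on_target_mj = 2.05   # roughly what the lasers delivered to the pellet
fusion_yield_mj = 3.15             # roughly what the fusion reactions released
wall_plug_energy_mj = 300          # rough estimate of grid energy used to fire the lasers

target_gain = fusion_yield_mj / laser_energy_on_target_mj
wall_plug_gain = fusion_yield_mj / wall_plug_energy_mj

print(f"Target gain:    {target_gain:.2f}  (above 1 -- the headline milestone)")
print(f"Wall-plug gain: {wall_plug_gain:.3f} (far below 1)")
# A commercial plant has to beat 1 on the wall-plug number, over and over,
# at a cost that competes with other power sources.
```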

So in the wake of the NIF news, all eyes went to companies like Commonwealth, Helion, and Zap Energy. Who would be the first to demonstrate this milestone in a more commercially feasible reactor? Or better yet, who would be the first to get a power plant up and running?

So far, the answer is none of them.

To be fair, many fusion companies have made technical progress. Commonwealth has built and tested its high-temperature superconducting magnets and published research about that work. Zap Energy demonstrated three hours of continuous operation in its test system, a milestone validated by the US Department of Energy. Helion started construction of its power plant in Washington in July. (And that’s not to mention a thriving, publicly funded fusion industry in China.)  

These are all important milestones, and these and other companies have seen many more. But as Ed Morse, a professor of nuclear engineering at Berkeley, summed it up to me: “They don’t have a reactor.” (He was speaking specifically about Commonwealth, but really, the same goes for the others.)

And yet, the money pours in. Commonwealth raised over $800 million in funding earlier this year. And now it’s got two big customers signed on to buy electricity from this future power plant.

Why buy electricity from a reactor that’s currently little more than ideas on paper? From the perspective of these particular potential buyers, such agreements can be something of a win-win, says Adam Stein, director of nuclear energy innovation at the Breakthrough Institute.

By putting a vote of confidence behind Commonwealth, Eni could help the fusion startup get the capital it needs to actually build its plant. The company also directly invests in Commonwealth, so it stands to benefit from success. Getting a good rate on the capital needed to build the plant could also mean the electricity is ultimately cheaper for Eni, Stein says. 

Ultimately, fusion needs a lot of money. If fossil-fuel companies and tech giants want to provide it, all the better. One concern I have, though, is how outside observers are interpreting these big commitments. 

US Energy Secretary Chris Wright has been loud about his support for fusion and his expectations of the technology. Earlier this month, he told the BBC that it will soon power the world.

He’s certainly not the first to have big dreams for fusion, and it is an exciting technology. But despite the jaw-dropping financial milestones, this industry is still very much in development. 

And while Wright praises fusion, the Trump administration is slashing support for other energy technologies, including wind and solar power, and spreading disinformation about their safety, cost, and effectiveness. 

To meet the growing electricity demand and cut emissions from the power sector, we’ll need a whole range of technologies. It’s a risk and a distraction to put all our hopes on an unproven energy tech when there are plenty of options that actually exist. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

An oil and gas giant signed a $1 billion deal with Commonwealth Fusion Systems

22 September 2025 at 07:00

Eni, one of the world’s largest oil and gas companies, just agreed to buy $1 billion in electricity from a power plant being built by Commonwealth Fusion Systems. The deal is the latest to illustrate just how much investment Commonwealth and other fusion companies are courting as they attempt to take fusion power from the lab to the power grid. 

“This is showing in concrete terms that people that use large amounts of energy, that know the energy market—they want fusion power, and they’re willing to contract for it and to pay for it,” said Bob Mumgaard, cofounder and CEO of Commonwealth, on a press call about the deal.   

The agreement will see Eni purchase electricity from Commonwealth’s first commercial fusion power plant, in Virginia. The facility is still in the planning stages but is scheduled to come online in the early 2030s.

The news comes a few weeks after Commonwealth announced an $863 million funding round, bringing its total funding raised to date to nearly $3 billion. The fusion company also announced earlier this year that Google would be its first commercial power customer for the Virginia plant.

Commonwealth, a spinout from MIT’s Plasma Science and Fusion Center, is widely considered one of the leading companies in fusion power. Investment in the company represents nearly one-third of the total global investment in private fusion companies. (MIT Technology Review is owned by MIT but is editorially independent.)

Eni has invested in Commonwealth since 2018 and participated in the latest fundraising round. The vast majority of the company’s business is in oil and gas, but in recent years it’s made investments in technologies like biofuels and renewables.

“A company like us—we cannot stay and wait for things to happen,” says Lorenzo Fiorillo, Eni’s director of technology, research and development, and digital. 

One open question is what, exactly, Eni plans to do with this electricity. When asked about it on the press call, Fiorillo referenced wind and solar plants that Eni owns and said the plan “is not different from what we do in other areas in the US and the world.” (Eni sells electricity from power plants that it owns, including renewable and fossil-fuel plants.)

Commonwealth is building tokamak fusion reactors that use superconducting magnets to hold plasma in place. That plasma is where fusion reactions happen, forcing hydrogen atoms together to release large amounts of energy.

The company’s first demonstration reactor, which it calls Sparc, is over 65% complete, and the team is testing components and assembling them. The plan is for the reactor, which is located outside Boston, to make plasma within two years and then demonstrate that it can generate more energy than is required to run it.

While Sparc is still under construction, Commonwealth is working on plans for Arc, its first commercial power plant. That facility should begin construction in 2027 or 2028 and generate electricity for the grid in the early 2030s, Mumgaard says.

Despite the billions of dollars Commonwealth has already raised, the company still needs more money to build its Arc power plant—that will be a multibillion-dollar project, Mumgaard said on a press call in August about the company’s latest fundraising round. 

The latest commitment from Eni could help Commonwealth secure the funding it needs to get Arc built. “These agreements are a really good way to create the right environment for building up more investment,” says Paul Wilson, chair of the department of nuclear engineering and engineering physics at the University of Wisconsin, Madison.

Even though commercial fusion energy is still years away at a minimum, investors and big tech companies have pumped money into the industry and signed agreements to buy power from plants once they’re operational. 

Helion, another leading fusion startup, has plans to produce electricity from its first reactor in 2028 (an aggressive timeline that has some experts expressing skepticism). That facility will have a full generating capacity of 50 megawatts, and in 2023 Microsoft signed an agreement to purchase energy from the facility in order to help power its data centers.

As billions of dollars pour into the fusion industry, there are still many milestones ahead. To date, only the National Ignition Facility at Lawrence Livermore National Laboratory has demonstrated that a fusion reactor can generate more energy than the amount put into the reaction. No commercial project has achieved that yet. 

“There’s a lot of capital going out now to these startup companies,” says Ed Morse, a professor of nuclear engineering at the University of California, Berkeley. “What I’m not seeing is a peer-reviewed scientific article that makes me feel like, boy, we really turned the corner with the physics.”

But others are taking major commercial deals from Commonwealth and others as reasons to be optimistic. “Fusion is moving from the lab to be a proper industry,” says Sehila Gonzalez de Vicente, global director of fusion energy at the nonprofit Clean Air Task Force. “This is very good for the whole sector to be perceived as a real source of energy.”
