
Why the grid relies on nuclear reactors in the winter

As many of us are ramping up with shopping, baking, and planning for the holiday season, nuclear power plants are also getting ready for one of their busiest seasons of the year.

Here in the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.

This scheduled regularity might seem mundane, but it’s quite the feat that operational reactors are as reliable and predictable as they are. It leaves some big shoes to fill for next-generation technology hoping to join the fleet in the next few years.

Generally, nuclear reactors operate at constant levels, as close to full capacity as possible. In 2024, for commercial reactors worldwide, the average capacity factor—the ratio of actual energy output to the theoretical maximum—was 83%. North America rang in at an average of about 90%.
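
The capacity factor is a simple ratio, and it’s easy to sanity-check figures like these. Here’s a minimal sketch in Python; the reactor size and annual output below are hypothetical numbers chosen to land near the worldwide average:

```python
# Capacity factor: actual energy delivered over a period, divided by the
# energy the plant could have produced running flat-out the whole time.
def capacity_factor(actual_mwh: float, capacity_mw: float, hours: float) -> float:
    return actual_mwh / (capacity_mw * hours)

# A hypothetical 1,000 MW reactor that delivered 7,271 GWh over a full year
# (8,760 hours) comes out right at the 2024 worldwide average:
cf = capacity_factor(actual_mwh=7_271_000, capacity_mw=1_000, hours=8_760)
print(f"{cf:.0%}")  # → 83%
```

The same arithmetic explains why scheduled refueling outages drag the number below 100% even for a plant that never breaks down.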

(I’ll note here that it’s not always fair to just look at this number to compare different kinds of power plants—natural-gas plants can have lower capacity factors, but it’s mostly because they’re more likely to be intentionally turned on and off to help meet uneven demand.)

Those high capacity factors also undersell the fleet’s true reliability—a lot of the downtime is scheduled. Reactors need to refuel every 18 to 24 months, and operators tend to schedule those outages for the spring and fall, when electricity demand isn’t as high as when we’re all running our air conditioners or heaters at full tilt.

Take a look at this chart of nuclear outages from the US Energy Information Administration. There are some days, especially at the height of summer, when outages are low and nearly all commercial reactors in the US are running at close to full capacity. On July 28 of this year, the fleet was operating at 99.6%. Compare that with the 77.6% of capacity on October 18, as reactors were taken offline for refueling and maintenance. Now we’re heading into another busy season, when reactors are coming back online and shutdowns are entering another low point.

That’s not to say all outages are planned. At the Sequoyah nuclear power plant in Tennessee, a generator failure in July 2024 took one of two reactors offline, an outage that lasted nearly a year. (The utility also did some maintenance during that time to extend the life of the plant.) Then, just days after that reactor started back up, the entire plant had to shut down because of low water levels.

And who can forget the incident earlier this year when jellyfish wreaked havoc on not one but two nuclear power plants in France? In the second instance, the squishy creatures got into the filters of equipment that sucks water out of the English Channel for cooling at the Paluel nuclear plant. They forced the plant to cut output by nearly half, though it was restored within days.

Barring jellyfish disasters and occasional maintenance, the global nuclear fleet operates quite reliably. That wasn’t always the case, though. In the 1970s, reactors operated at an average capacity factor of just 60%. They were shut down nearly as often as they were running.

The fleet of reactors today has benefited from decades of experience. Now we’re seeing a growing pool of companies aiming to bring new technologies to the nuclear industry.

Next-generation reactors that use new materials for fuel or cooling will be able to borrow some lessons from the existing fleet, but they’ll also face novel challenges.

That could mean early demonstration reactors aren’t as reliable as the current commercial fleet at first. “First-of-a-kind nuclear, just like with any other first-of-a-kind technologies, is very challenging,” says Koroush Shirvan, a professor of nuclear science and engineering at MIT.

That means it will probably take time for molten-salt reactors, small modular reactors, or any of the other designs out there to overcome technical hurdles and settle into their own rhythm. It’s taken decades to get to a place where we take it for granted that the nuclear fleet can follow a neat seasonal curve based on electricity demand.

There will always be hurricanes and electrical failures and jellyfish invasions that cause some unexpected problems and force nuclear plants (or any power plants, for that matter) to shut down. But overall, the fleet today operates at an extremely high level of consistency. One of the major challenges ahead for next-generation technologies will be proving that they can do the same.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Reintroduced carnivores’ impacts on ecosystems are still coming into focus

When the US Fish and Wildlife Service reintroduced 14 gray wolves to Yellowstone National Park in 1995, the animals were, in some ways, stepping into a new world.

After humans hunted wolves to near-extinction across the Western US in the early 20th century, the carnivore’s absence likely altered ecosystems and food webs across the Rocky Mountains. Once wolves were reintroduced to the landscape, scientists hoped to learn if, and how quickly, these changes could be reversed.

Despite studies claiming to show early evidence of a tantalizing relationship between wolves and regenerating riparian ecosystems since the canines returned to Yellowstone, scientists are still debating how large carnivores impact vegetation and other animals, according to a new paper published this month.

This year’s UN climate talks avoided fossil fuels, again

If we didn’t have pictures and videos, I almost wouldn’t believe the imagery that came out of this year’s UN climate talks.

Over the past few weeks in Belém, Brazil, attendees dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

This is the 30th time that leaders have gathered for the Conference of the Parties, or COP, an annual UN conference focused on climate change. COP30 also marks 10 years since the gathering that produced the Paris Agreement, in which world powers committed to limiting global warming to “well below” 2.0 °C above preindustrial levels, with a goal of staying below the 1.5 °C mark. (That’s 3.6 °F and 2.7 °F, respectively, for my fellow Americans.)
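
As a side note, those Fahrenheit figures come from converting temperature *differences*, which use only the 9/5 scale factor and skip the +32 offset that applies to absolute temperatures. A quick check:

```python
# Converting a temperature *difference* from Celsius to Fahrenheit:
# only the 9/5 scale factor applies (no +32 offset, which is only for
# absolute temperatures).
def delta_c_to_f(delta_c: float) -> float:
    return delta_c * 9 / 5

print(delta_c_to_f(2.0))  # → 3.6
print(delta_c_to_f(1.5))  # → 2.7
```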

Before the conference kicked off this year, host country Brazil’s president, Luiz Inácio Lula da Silva, cast this as the “implementation COP” and called for negotiators to focus on action, and specifically to deliver a road map for a global transition away from fossil fuels.

The science is clear—burning fossil fuels emits greenhouse gases and drives climate change. Reports have shown that meeting the goal of limiting warming to 1.5 °C would require stopping new fossil-fuel exploration and development.

The problem is, “fossil fuels” might as well be a curse word at global climate negotiations. Two years ago, fights over how to address fossil fuels brought talks at COP28 to a standstill. (It’s worth noting that the conference was hosted in Dubai in the UAE, and its president was literally the head of the country’s national oil company.)

The agreement in Dubai ended up including a line that called on countries to transition away from fossil fuels in energy systems. It was short of what many advocates wanted, which was a more explicit call to phase out fossil fuels entirely. But it was still hailed as a win. As I wrote at the time: “The bar is truly on the floor.”

And yet this year, it seems we’ve dug into the basement.

At one point about 80 countries, a little under half of those present, demanded a concrete plan to move away from fossil fuels.

But oil producers like Saudi Arabia were insistent that fossil fuels not be singled out. Other countries, including some in Africa and Asia, also made a very fair point: Western nations like the US have burned the most fossil fuels and benefited from it economically. This contingent maintains that legacy polluters have a unique responsibility to finance the transition for less wealthy and developing nations rather than simply barring them from taking the same development route. 

The US, by the way, didn’t send a formal delegation to the talks, for the first time in 30 years. But the absence spoke volumes. In a statement to the New York Times that sidestepped the COP talks, White House spokesperson Taylor Rogers said that President Trump had “set a strong example for the rest of the world” by pursuing new fossil-fuel development.

To sum up: Some countries are economically dependent on fossil fuels, some don’t want to stop depending on fossil fuels without incentives from other countries, and the current US administration would rather keep using fossil fuels than switch to other energy sources. 

All those factors combined help explain why, in its final form, COP30’s agreement doesn’t name fossil fuels at all. Instead, there’s a vague line that leaders should take into account the decisions made in Dubai, and an acknowledgement that the “global transition towards low greenhouse-gas emissions and climate-resilient development is irreversible and the trend of the future.”

Hopefully, that’s true. But it’s concerning that even on the world’s biggest stage, naming what we’re supposed to be transitioning away from and putting together any sort of plan to actually do it seems to be impossible.

Three things to know about the future of electricity

One of the dominant storylines I’ve been following through 2025 is electricity—where and how demand is going up, how much it costs, and how this all intersects with that topic everyone is talking about: AI.

Last week, the International Energy Agency released the latest version of the World Energy Outlook, the annual report that takes stock of the current state of global energy and looks toward the future. It contains some interesting insights and a few surprising figures about electricity, grids, and the state of climate change. So let’s dig into some numbers, shall we?

We’re in the age of electricity

Energy demand in general is going up around the world as populations increase and economies grow. But electricity is the star of the show, with demand projected to grow by 40% in the next 10 years.

China has accounted for the bulk of electricity growth for the past 10 years, and that’s going to continue. But emerging economies outside China will be a much bigger piece of the pie going forward. And while advanced economies, including the US and Europe, have seen flat demand in the past decade, the rise of AI and data centers will cause demand to climb there as well.

Air-conditioning is a major source of rising demand. Growing economies will give more people access to air-conditioning; income-driven AC growth will add about 330 gigawatts to global peak demand by 2035. Rising temperatures will tack on another 170 GW in that time. Together, that’s an increase of over 10% from 2024 levels.  
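
The arithmetic behind those air-conditioning figures is straightforward; this sketch just adds up the two contributions and shows what “over 10%” implies about the size of the 2024 baseline:

```python
# Air-conditioning additions to global peak electricity demand by 2035,
# per the IEA projection cited above.
income_driven_gw = 330       # more people gaining access to AC
temperature_driven_gw = 170  # hotter weather driving more AC use

total_added_gw = income_driven_gw + temperature_driven_gw
print(total_added_gw)  # → 500

# If 500 GW is "over 10%" of 2024 peak demand, the 2024 global peak must
# have been a bit under this figure:
implied_2024_peak_gw = total_added_gw / 0.10
print(implied_2024_peak_gw)  # → 5000.0
```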

AI is a local story

This year, AI has been the story that none of us can get away from. One number that jumped out at me from this report: In 2025, investment in data centers is expected to top $580 billion. That’s more than the $540 billion spent on the global oil supply. 

It’s no wonder, then, that the energy demands of AI are in the spotlight. One key takeaway is that these demands are vastly different in different parts of the world.

Data centers still make up less than 10% of the projected increase in total electricity demand between now and 2035. It’s not nothing, but it’s far outweighed by sectors like industry and appliances, including air conditioners. Even electric vehicles will add more demand to the grid than data centers.

But AI will be the dominant factor for the grid in some parts of the world. In the US, data centers will account for half the growth in total electricity demand between now and 2030.

And as we’ve covered in this newsletter before, data centers present a unique challenge: because they tend to be clustered together, their demand is concentrated in specific communities and on specific grids. Half the data-center capacity that’s in the pipeline is close to large cities.

Look out for a coal crossover

As we ask more from our grid, the key factor that’s going to determine what all this means for climate change is what’s supplying the electricity we’re using.

As it stands, the world’s grids still primarily run on fossil fuels, so every bit of electricity growth comes with planet-warming greenhouse-gas emissions attached. That’s slowly changing, though.

Together, solar and wind were the leading source of electricity in the first half of this year, overtaking coal for the first time. Coal use could peak and begin to fall by the end of this decade.

Nuclear could play a role in replacing fossil fuels: After two decades of stagnation, the global nuclear fleet could increase by a third in the next 10 years. Solar is set to continue its meteoric rise, too. Of all the electricity demand growth we’re expecting in the next decade, 80% is in places with high-quality solar irradiation—meaning they’re good spots for solar power.

Ultimately, there are a lot of ways in which the world is moving in the right direction on energy. But we’re far from moving fast enough. Global emissions are, once again, going to hit a record high this year. To limit warming and prevent the worst effects of climate change, we need to remake our energy system, including electricity, and we need to do it faster. 

Kubernetes Cluster Goes Mobile in Pet Carrier

There’s been a bit of a virtualization revolution going on for the last decade or so, where tools like Docker and LXC have made it possible to quickly deploy server applications without worrying much about dependency issues. Of course, as these tools were adopted, we needed more tools to scale them easily. Enter Kubernetes, a container orchestration platform that normally herds fleets of microservices in sprawling cloud architectures. It turns out, though, that it’s perfectly happy running on a tiny computer stuffed in a cat carrier.

This was a build for the recent KubeCon in Atlanta, and the project’s creator [Justin] wanted it to have an AI angle, since the core compute in the backpack is an NVIDIA DGX Spark. When someone scans the QR code, the backpack takes a picture and runs it through a two-node cluster on the Spark, where a local AI model stylizes the picture and sends it back to the user. Only the AI workload runs on the Spark; [Justin] is also using a LattePanda to handle most of everything else rather than hosting everything on the Spark.

To power the mobile cluster [Justin] is using a small power bank, which gives it around three hours of use before it needs to be recharged. Originally it was planned to work on the WiFi at the conference as well, but this was unreliable, so he switched to a USB tether to his phone. It was a big hit with the conference-goers, though, with people using it around every ten minutes while he had it on his back. Of course, you don’t need a fancy NVIDIA product to run a portable Kubernetes cluster. You can always use a few old phones to run one as well.

Google is still aiming for its “moonshot” 2030 energy goals

Last week, we hosted EmTech MIT, MIT Technology Review’s annual flagship conference in Cambridge, Massachusetts. Over the course of three days of main-stage sessions, I learned about innovations in AI, biotech, and robotics. 

But as you might imagine, some of this climate reporter’s favorite moments came in the climate sessions. I was listening especially closely to my colleague James Temple’s discussion with Lucia Tian, head of advanced energy technologies at Google. 

They spoke about the tech giant’s growing energy demand and what sorts of technologies the company is looking to in order to help meet it. In case you weren’t able to join us, let’s dig into that session and consider how the company is thinking about energy in the face of AI’s rapid rise.

I’ve been closely following Google’s work in energy this year. Like the rest of the tech industry, the company is seeing ballooning electricity demand in its data centers. That could get in the way of a major goal that Google has been talking about for years. 

See, back in 2020, the company announced an ambitious target: by 2030, it aimed to run on carbon-free energy 24-7. Basically, that means Google would purchase enough renewable energy on the grids where it operates to meet its entire electricity demand, with the purchases matched in time so that carbon-free electricity is being generated whenever the company is actually using energy. (For more on the nuances of Big Tech’s renewable-energy pledges, check out James’s piece from last year.)
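
The time matching is what makes 24-7 carbon-free energy so much harder than a simple annual offset: surplus solar at noon can’t cancel out a dark, windless evening. A toy illustration of that scoring logic, with entirely made-up hourly numbers:

```python
# A toy version of 24-7 carbon-free energy (CFE) matching: in each hour,
# only carbon-free generation up to that hour's consumption counts toward
# the score. Surplus in one hour cannot offset a deficit in another.
def cfe_score(consumed_mwh: list[float], carbon_free_mwh: list[float]) -> float:
    matched = sum(min(c, g) for c, g in zip(consumed_mwh, carbon_free_mwh))
    return matched / sum(consumed_mwh)

# Three hypothetical hours: a noon solar surplus (200 vs 100 consumed)
# does nothing for the evening hour with zero carbon-free supply.
score = cfe_score(consumed_mwh=[100, 100, 100],
                  carbon_free_mwh=[50, 200, 0])
print(f"{score:.0%}")  # → 50%
```

On an annual-total basis this hypothetical buyer looks fully matched (250 MWh generated vs 300 consumed is over 80%, and a bigger solar farm could push it past 100%), but the hourly score is stuck at 50%, which is roughly the gap Google is wrestling with.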

Google’s is an ambitious goal, and on stage, Tian said that the company is still aiming for it but acknowledged that it’s looking tough with the rise of AI. 

“It was always a moonshot,” she said. “It’s something very, very hard to achieve, and it’s only harder in the face of this growth. But our perspective is, if we don’t move in that direction, we’ll never get there.”

Google’s total electricity demand more than doubled from 2020 to 2024, according to its latest Environmental Report. As for that goal of 24-7 carbon-free energy? The company is basically treading water. While it was at 67% for its data centers in 2020, last year it came in at 66%. 

Not going backwards is something of an accomplishment, given the rapid growth in electricity demand. But it still leaves the company some distance away from its finish line.

To close the gap, Google has been signing what feels like constant deals in the energy space. Two recent announcements that Tian talked about on stage were a project involving carbon capture and storage at a natural-gas plant in Illinois and plans to reopen a shuttered nuclear power plant in Iowa. 

Let’s start with carbon capture. Google signed an agreement to purchase most of the electricity from a new natural-gas plant, which will capture and store about 90% of its carbon dioxide emissions. 

That announcement was controversial, with critics arguing that carbon capture keeps fossil-fuel infrastructure online longer and still releases greenhouse gases and other pollutants into the atmosphere. 

One question that James raised on stage: Why build a new natural-gas plant rather than add equipment to an already existing facility? Tacking on equipment to an operational plant would mean cutting emissions from the status quo, rather than adding entirely new fossil-fuel infrastructure. 

The company did consider many existing plants, Tian said. But, as she put it, “Retrofits aren’t going to make sense everywhere.” Space can be limited at existing plants, for example, and many may not have the right geology to store carbon dioxide underground. 

“We wanted to lead with a project that could prove this technology at scale,” Tian said. This site has an operational Class VI well, the type used for permanent sequestration, she added, and it also doesn’t require a big pipeline buildout. 

Tian also touched on the company’s recent announcement that it’s collaborating with NextEra Energy to reopen Duane Arnold Energy Center, a nuclear power plant in Iowa. The company will purchase electricity from that plant, which is scheduled to reopen in 2029. 

As I covered in a story earlier this year, Duane Arnold was basically the final option in the US for companies looking to reopen shuttered nuclear power plants. “Just a few years back, we were still closing down nuclear plants in this country,” Tian said on stage. 

While each reopening will look a little different, Tian highlighted the groups working to restart the Palisades plant in Michigan, which was the first reopening to be announced, last spring. “They’re the real heroes of the story,” she said.

I’m always interested to get a peek behind the curtain at how Big Tech is thinking about energy. I’m skeptical but certainly interested to see how Google’s, and the rest of the industry’s, goals shape up over the next few years. 

Stop worrying about your AI footprint. Look at the big picture instead.

Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.” 

This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want. 

That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much as Japan.) 

But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?” 

That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals. 

The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table. 

I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture. 

Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant.

But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak.  (To learn more, read our Power Hungry series online.)
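
One way to make that system-level scale concrete is to multiply the per-query estimate out. The query volume below is a purely hypothetical round number for illustration, not a reported figure:

```python
# Why a tiny per-query figure still adds up: the ~0.3 Wh per chatbot
# query cited above, multiplied across a hypothetical one billion
# queries per day. The query count is an illustrative assumption.
wh_per_query = 0.3
queries_per_day = 1_000_000_000  # hypothetical volume

mwh_per_day = wh_per_query * queries_per_day / 1_000_000  # Wh -> MWh
print(mwh_per_day)  # → 300.0
```

Three hundred megawatt-hours a day is invisible in any one user’s footprint but very visible to the grid operator serving the data center, which is exactly the disclosure gap the per-query framing papers over.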

Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online. 

Just as with climate change, we need to look at this as a system rather than a series of individual choices. 

Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too. 

That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact. 

Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet. 

Four thoughts from Bill Gates on climate tech

Bill Gates doesn’t shy away or pretend modesty when it comes to his stature in the climate world today. “Well, who’s the biggest funder of climate innovation companies?” he asked a handful of journalists at a media roundtable event last week. “If there’s someone else, I’ve never met them.”

The former Microsoft CEO has spent the last decade investing in climate technology through Breakthrough Energy, which he founded in 2015. Ahead of the UN climate meetings kicking off next week, Gates published a memo outlining what he thinks activists and negotiators should focus on and how he’s thinking about the state of climate tech right now. Let’s get into it. 

Are we too focused on near-term climate goals?

One of the central points Gates made in his new memo is that he thinks the world is too focused on near-term emissions goals and national emissions reporting.

So in parallel with the national accounting structure for emissions, Gates argues, we should have high-level climate discussions at events like the UN climate conference. Those discussions should take a global view on how to reduce emissions in key sectors like energy and heavy industry.

“The way everybody makes steel, it’s the same. The way everybody makes cement, it’s the same. The way we make fertilizer, it’s all the same,” he says.

As he noted in one recent essay for MIT Technology Review, he sees innovation as the key to cutting the cost of clean versions of energy, cement, vehicles, and so on. And once products get cheaper, they can see wider adoption.

What’s most likely to power our grid in the future?

“In the long run, probably either fission or fusion will be the cheapest way to make electricity,” he says. (It should be noted that, as with most climate technologies, Gates has investments in both fission and fusion companies through Breakthrough Energy Ventures, so he has a vested interest here.)

He acknowledges, though, that reactors likely won’t come online quickly enough to meet rising electricity demand in the US: “I wish I could deliver nuclear fusion, like, three years earlier than I can.”

He also spoke to China’s leadership in both nuclear fission and fusion energy. “The amount of money they’re putting [into] fusion is more than the rest of the world put together times two. I mean, it’s not guaranteed to work. But name your favorite fusion approach here in the US—there’s a Chinese project.”

Can carbon removal be part of the solution?

I had my colleague James Temple’s recent story on what’s next for carbon removal at the top of my mind, so I asked Gates if he saw carbon credits or carbon removal as part of the problematic near-term thinking he wrote about in the memo.

Gates buys offsets to cancel out his own personal emissions, to the tune of about $9 million a year, he said at the roundtable, but doesn’t expect many of those offsets to make a significant dent in climate progress on a broader scale: “That stuff, most of those technologies, are a complete dead end. They don’t get you cheap enough to be meaningful.

“Carbon sequestration at $400, $200, $100, can never be a meaningful part of this game. If you have a technology that starts at $400 and can get to $4, then hallelujah, let’s go. I haven’t seen that one. There are some now that look like they can get to $40 or $50, and that can play somewhat of a role.”

Will AI be good news for innovation?

During the discussion, I started a tally in the corner of my notebook, adding a tick every time Gates mentioned AI. Over the course of about an hour, I got to six tally marks, and I definitely missed making a few.

Gates acknowledged that AI is going to add electricity demand, a challenge for a US grid that hasn’t seen net demand go up for decades. But so too will electric cars and heat pumps. 

I was surprised at just how positively he spoke about AI’s potential, though:

“AI will accelerate every innovation pipeline you can name: cancer, Alzheimer’s, catalysts in material science, you name it. And we’re all trying to figure out what that means. That is the biggest change agent in the world today, moving at a pace that is very, very rapid … every breakthrough energy company will be able to move faster because of using those tools, some very dramatically.”

I’ll add that, as I’ve noted here before, I’m skeptical of big claims about AI’s potential to be a silver bullet across industries, including climate tech. (If you missed it, check out this story about AI and the grid from earlier this year.) 

Spark and Ark: A Look At Our Newest Bitcoin Layer Twos

Bitcoin Magazine

In my quest to find the best solution for Cake Wallet to offer user-friendly, non-custodial Lightning to our users, I’ve gone deep down the rabbit hole of both Spark and Ark. Both are quite novel approaches to Bitcoin layer two networks, and are designed at their core to be interoperable with the broader Bitcoin network for payments via the Lightning Network. While both can be used “just” for Lightning payments, both networks are positioned to rapidly expand and be used for far more than that over the coming months and years.

One thing to keep in mind is that while Spark and Ark on their face seem rather similar, in practice and in implementation they are quite distinct.

Why do we need new layer twos?

Bitcoin at its core is an incredible tool for freedom, but due to block size constraints, we know that the majority of the world will never be able to make transactions on-chain. Enter Lightning, a solution in which a single on-chain transaction can anchor essentially unlimited off-chain transactions, expanding the usefulness of Bitcoin’s base layer and making it possible for more people to transact.

While Lightning provided a promising approach to scaling Bitcoin payments, it has ultimately become clear that its best role is as an interoperability layer, not as a tool for end users to run themselves. On-chain requirements, liquidity management, liveness requirements, and other core hurdles make implementing user-friendly, self-custodial Lightning next to impossible. This has become apparent as most Lightning wallets and use cases have opted for custodial or federated models out of a need to simplify the user experience and reduce implementation difficulty.

The biggest win that Spark and Ark provide to the Bitcoin space out of the gate is providing a much simpler and easier way for the average developer to provide Lightning to their users, while allowing for greatly expanded functionality down the line beyond Lightning payments.

Ark, simplified

History

The concept of Ark was created in May of 2023 by Burak, a Lightning advocate and developer. The driving force behind its creation was the realization that the Lightning network as constructed was not effective as an onboarding tool for the average individual due to inbound liquidity requirements among many other things, and that privacy was often lacking. While Burak invented the protocol itself, two companies – Ark Labs and Second – have stepped in to build the Ark protocol into an end-to-end layer-two network for Bitcoin.

While both companies are building around the same open-source Ark protocol, their implementations and objectives are rather dissimilar. As a result, I’ll do my best to distill both below where possible.

Terminology

Ark: Ark is a protocol for moving Bitcoin transactions off-chain by leveraging multisig and pre-signed transactions between users and the Ark Operator. Anything you can do on Bitcoin, you can do on Ark but faster and with lower fees.

Ark Operator: The entity running the centralized Ark server infrastructure, responsible for providing liquidity for users’ VTXOs before expiry.

Lightning Gateway: The entity that provides the ability for Ark users to send or receive Lightning payments using trustless atomic swaps of Ark VTXOs. This function can be provided by the same entity as the Ark Operator, but is often distinct to spread out counter-party risk.

Virtual Transaction Outputs: Also called “VTXOs”, these are very similar to on-chain UTXOs in nature, but are virtual as they aren’t represented as unique UTXOs on-chain and live entirely off-chain. Users send and receive VTXOs within Ark.

Rounds: In order to gain true finality and/or refresh VTXOs, Ark users will need to join rounds, where they work together with other Ark users and the Ark Operator to get new VTXOs in exchange for a fee.

Making transactions

Ark functions very similarly to on-chain Bitcoin and inherits many of the same characteristics, while allowing transactions to be near-instant and trust-minimized between Ark participants. The sender works with the Ark Operator to sign the VTXO over to the recipient, or in the case of Ark Labs, to create a new, chained VTXO for the recipient. This allows a user experience similar in many ways to on-chain payments, but with far lower fees and far faster transaction times. When the user wants to send or receive Lightning payments, they can work with a Lightning Gateway to atomically swap VTXOs for Lightning payments as needed. At the moment, offline receiving of Lightning payments is not possible in Ark, but it’s likely this will be solved in a similarly trust-minimized way as in Spark.

If the user desires finality (e.g., they’ve received a large payment), they can choose to join a round to finalize the payment and gain the same finality assumptions as on-chain Bitcoin. The frequency of this round process will vary by Ark Operator – with estimates ranging from every 10 minutes to every hour – and requires a relatively lengthy coordinated signing process between the Ark Operator and all users seeking to join the round. The round frequency can even vary based on demand; unlike Bitcoin block times, it doesn’t have to be set in stone at a single interval.
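
As a rough mental model, the out-of-round transfer and round flow described above can be sketched in a few lines of Python. Everything here (the class names, the `final` flag, the operator’s API) is invented for illustration and doesn’t reflect the actual Ark Labs or Second implementations:

```python
from dataclasses import dataclass

@dataclass
class VTXO:
    owner: str
    amount_sats: int
    final: bool = False  # True only once issued via a round

class ArkOperator:
    """Toy operator: co-signs out-of-round transfers and runs rounds."""

    def transfer(self, vtxo: VTXO, recipient: str) -> VTXO:
        # Out-of-round payment: near-instant, but the resulting VTXO lacks
        # true finality (security relies on the operator and previous
        # senders not colluding).
        vtxo.owner = recipient
        vtxo.final = False
        return vtxo

    def run_round(self, participants: list[VTXO]) -> list[VTXO]:
        # Collaborative signing round: participants receive fresh VTXOs
        # with on-chain-equivalent finality, in exchange for a fee.
        return [VTXO(v.owner, v.amount_sats, final=True) for v in participants]

op = ArkOperator()
v = op.transfer(VTXO("alice", 50_000), "bob")
print(v.final)            # False: instant, but no true finality yet
[v] = op.run_round([v])   # bob joins a round when he wants finality
print(v.final)            # True
```

The key point the sketch captures is that finality is opt-in: everyday payments skip the round entirely, and only recipients who want on-chain-grade assurances pay the coordination cost.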

As Ark inherits Bitcoin scripting and the UTXO model directly from on-chain Bitcoin, Ark will likely be extended to support token protocols like Taproot Assets in the future.

Trust tradeoffs

Ark targets a very trust-minimized approach to scaling Bitcoin, striking something of a middle-ground in terms of usability and tradeoffs between Lightning and Spark. Note that Ark as a protocol is rapidly developing, and some of these tradeoffs will hopefully be solved through the use of novel off-chain methods or after the implementation of covenants in Bitcoin.

Lack of out-of-round finality

While Spark lacks provable finality, Ark strikes something of a middle ground. For small payments, users can rely on the Ark Operator and previous senders to not collude for security, allowing for instant transfers with no need for collaborative signing rounds. Note that by default, payments within Ark will be “out-of-round” payments that lack true finality, a tradeoff that allows Ark to deliver a good user experience out of the box.

That being said, users who do need or want true finality can have it by joining a round and receiving a new VTXO from the Ark Operator. Receivers are essentially in control of their preferred trust model.

VTXO expiration

As a result of the liquidity requirements of operating an Ark instance, Ark Operators need a way to reclaim liquidity regularly. To allow this, Ark VTXOs expire regularly (e.g., after 30 days, with the expiry set by each Ark Operator), requiring their owners to either join a round to refresh the VTXO or risk giving up control of their funds entirely to the Ark Operator. While the Ark Operator has strong incentives to simply issue a new VTXO to the owner of the expired one when they come back online, both the Ark Operator and the user will have the ability to spend the funds until a new VTXO is issued to the user.

To avoid funds expiring, users will be required to refresh their VTXOs within that window either directly or by offloading refresh to a delegate. Alternatively, atomic swaps of an expiring VTXO for one with a longer lifecycle could be done with an entity like Boltz for a fee, but that is not yet implemented.
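
A wallet’s expiry bookkeeping might look something like this sketch. The 30-day lifetime and 3-day refresh margin are placeholder values, since each Ark Operator sets its own window:

```python
from datetime import datetime, timedelta

# Placeholder values: each Ark Operator sets its own expiry window.
VTXO_LIFETIME = timedelta(days=30)
REFRESH_MARGIN = timedelta(days=3)  # refresh well before funds are at risk

def needs_refresh(issued_at: datetime, now: datetime) -> bool:
    """True once a VTXO is close enough to expiry that the owner (or a
    delegate acting for them) should join a round to refresh it."""
    expires_at = issued_at + VTXO_LIFETIME
    return now >= expires_at - REFRESH_MARGIN

issued = datetime(2025, 1, 1)
print(needs_refresh(issued, datetime(2025, 1, 15)))  # False: plenty of time left
print(needs_refresh(issued, datetime(2025, 1, 29)))  # True: inside the refresh window
```
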

Complex round user experience

If you’ve ever used Coinjoin on Bitcoin, you know how tedious and unreliable collaboratively signing a transaction with other Bitcoiners can be. In Ark, those seeking true finality for their VTXOs will need to be available throughout a round signing process until its completion, something that will depend heavily on other participants properly completing the signing process. While this is quite trivial to accomplish for a wallet running on an always-online server, it’s rather complex to reliably perform on mobile platforms, especially iOS where no background execution (and thus no ability to be online at the right time for signing) can be guaranteed for any app.

As a result of this complex user experience, Ark Labs have come up with a system in which delegated third parties perform the refresh for users in a trust-minimized way, offloading the liveness requirement to a third party. While this third party has no ability to steal funds, if it is offline for any reason or refuses to refresh a given VTXO, the user will be forced to join a round themselves before the expiry period. To mitigate this risk, users can designate multiple delegates, shifting the trust assumptions for expiry to 1-of-N: if any delegate is honest, their VTXO will be refreshed properly.

Second also have a similarly designed system that enables trustless, non-interactive rounds, allowing any number of parties to sign for a user during a round (e.g., the wallet provider and a third-party delegate); if any of those parties signs properly, the user’s VTXO is properly refreshed.

Note that while these two solutions can refresh expiring VTXOs, they cannot give users true finality without the user actively participating in the round themselves.
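
The 1-of-N delegate model above is easy to sketch: the VTXO is refreshed so long as any one delegate acts, and (assuming independent failures, purely for illustration) the chance the user is forced to join a round themselves shrinks exponentially with the number of delegates:

```python
def vtxo_refreshed(delegates_acted: list[bool]) -> bool:
    """1-of-N: the VTXO is refreshed if ANY designated delegate performs
    the refresh before expiry; only if all fail must the user act."""
    return any(delegates_acted)

def p_user_must_act(p_delegate_fail: float, n_delegates: int) -> float:
    """Probability every delegate fails simultaneously, forcing the user
    to join a round themselves (illustrative; assumes independence)."""
    return p_delegate_fail ** n_delegates

print(vtxo_refreshed([False, True, False]))  # True: one delegate sufficed
print(p_user_must_act(0.10, 3))              # 10% failure each -> roughly 0.1% combined
```
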

Lastly, it’s important to call out that the vast majority of complexity with the round process can be entirely mitigated if a simple covenant is deployed in an upgrade to Bitcoin, something that would unlock a vastly improved user experience for Ark.

Privacy tradeoffs

At its core, Ark inherits Bitcoin’s poor privacy and doesn’t provide any notable privacy improvements as a protocol. That being said, its ability to offload execution off-chain and expand Bitcoin’s functionality allows existing and novel privacy protocols to be built on top of it in the future, with covenants fully unlocking things like private rounds within Ark.

In the short-term, Ark Labs have planned to use WabiSabi-like blinded credentials to improve privacy from the operator when users participate in rounds.

Transaction visibility

While transactions within Ark don’t need to be published on-chain, providing some loose ephemerality, all transaction details are visible to the Ark Operator and shouldn’t be considered private in the truest sense. A useful mental model is to view the ephemeral privacy provided by Ark as analogous to the VPN model: visibility into transactions is offloaded from the Bitcoin blockchain to a trusted third party.

It’s unclear at this time if Ark Labs and Second will keep transaction data private or publish it publicly, but as with a VPN users should not rely entirely on a promise to not log for their privacy.

Spark, simplified

History

The Spark network was launched earlier this year by the folks at Lightspark, a Bitcoin-adjacent company with an interesting history. From UMA (a username system with natively integrated compliance features for their banking partners) to connections with the failed Libra currency, they have an odd track record of building tools that aren’t quite up to par with Bitcoin’s more cypherpunk roots. But when I put that history aside and focused purely on what Spark the protocol actually is, it presents a rather useful, pragmatic, and powerful tool overall.

Spark at its core takes a lot of the useful features of statechains, a novel approach to layer twos on Bitcoin created by Ruben Somsen in 2018. Spark specifically extends statechains with the idea of “leaves”, allowing users to send any amount in a transaction instead of being solely able to transact with whole UTXOs, one of the biggest issues with statechains up to this point.

Terminology

Spark Entity: the entity running a given Spark instance, i.e. Lightspark, made up of a collection of Spark Operators. As Spark is an open-source protocol, anyone can start their own Spark Entity, but each Spark Entity controls which Spark Operators can join.

Spark Operator: each Spark Entity is composed of one or more Spark Operators, each of which is responsible for validating and signing the operations of users within the Spark instance, including transfers of funds and tokens, issuance of new tokens, etc. These can be the same entity as the Spark Entity, or (hopefully) distinct in relationship and jurisdiction from it. Currently the two Operators for Spark are Lightspark themselves and Flashnet, but more are slated to be added in the near future.

Spark Service Provider: an entity that provides various services to Spark users, including using atomic swaps to trustlessly send and receive Lightning payments on the user’s behalf.

Spark leaves: Spark solves the issues around whole-coin transfer requirements in statechains with the introduction of leaves. These can be thought of similarly to UTXOs within Bitcoin, as they can be freely broken up into any size necessary.

Making transactions

At its core, Spark functions by allowing users to easily move Bitcoin around the Spark network near-instantly by working in a trust-minimized way with Spark Operators to transfer ownership of individual leaves to another person. There is no need for a blockchain, confirmations, or liveness between sender and receiver, making payments simple and very fast. When a user wants to make a payment on Lightning, they atomically swap a leaf or leaves from their wallet with a Spark Service Provider who then sends the payment trustlessly on their behalf for a fee.

To transfer a Spark leaf, the sender co-signs ownership of the leaf over from themselves + Spark Operators to the new owner + Spark Operators. This is done in such a way that if any of the Spark Operators or previous owner honestly deletes their keyshare used in the co-signing operation, the leaf is then solely owned by the recipient and no double-spend is possible. As this operation only requires collaboration between the Spark Operators and sender and not any other Spark users, these signing rounds are very fast and resistant to DoS attacks.
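
The keyshare-deletion security model described above boils down to a simple predicate. This toy sketch only encodes the collusion condition, not any actual cryptography:

```python
def double_spend_possible(sender_deleted: bool,
                          operators_deleted: list[bool]) -> bool:
    """A double-spend of a transferred leaf requires the previous sender
    AND every Spark Operator to have kept (not deleted) their old
    keyshares. If any single party honestly deleted theirs, the
    recipient is safe: the 1-of-N assumption."""
    return (not sender_deleted) and not any(operators_deleted)

print(double_spend_possible(False, [True, False]))   # False: one honest operator is enough
print(double_spend_possible(False, [False, False]))  # True: full collusion is possible
print(double_spend_possible(True, [False, False]))   # False: the sender deleted theirs
```
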

Spark also includes a similar 1-of-N trust model enabling offline receiving of Lightning payments, a key user-experience improvement over standard Lightning wallet usage. This is especially important when using Spark in a mobile wallet, as mobile platforms cannot guarantee background execution or perfect network access 24/7.

In addition to regular payments, Spark has extended the idea to include native token support, with the core focus on stablecoins like USDT and USDC, which can be issued and transferred seamlessly within the Spark network. Token transfers share a similar trust model to standard transactions on Spark and retain the ability to unilaterally exit on-chain.

Lastly, users in Spark can unilaterally exit on-chain at any time by publishing a pre-signed exit transaction on-chain. While the cost of exiting can vary widely due to variables like leaf depth and on-chain fee rates, likely pricing out smaller amounts, it’s a critical tool to ensure that funds can be retrieved in the event of a malicious or unavailable Spark Entity.
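
Whether a unilateral exit is worth it is ultimately a fee calculation. The sketch below uses made-up numbers for the exit transaction’s size and the feerate; real exit costs depend on leaf depth and mempool conditions:

```python
def exit_is_economical(leaf_sats: int, exit_vbytes: int,
                       feerate_sat_vb: float) -> bool:
    """Publishing the pre-signed exit transaction is only worth it when
    the leaf's value exceeds the on-chain cost; deeper leaves need larger
    exit transactions, pricing out small amounts. All numbers here are
    illustrative assumptions."""
    exit_cost_sats = exit_vbytes * feerate_sat_vb
    return leaf_sats > exit_cost_sats

print(exit_is_economical(1_000_000, 800, 20))  # True: 16,000 sat fee on a 1M sat leaf
print(exit_is_economical(5_000, 800, 20))      # False: the fee would exceed the leaf's value
```
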

Trust tradeoffs

Spark makes a very pragmatic set of tradeoffs that complement the issues facing Lightning and Bitcoin usage today. That being said, there are some major differences between Spark and on-chain Bitcoin or Lightning usage. I prefer the term “trust-minimized” when talking about Spark (and most other layer-two networks), as only self-custody of Bitcoin on-chain can truly be called “trustless.”

Lack of true finality

The core risk to self-sovereignty in Spark is the lack of true finality: users can never know for sure that their funds cannot be double-spent through collusion between the Spark Operators and a previous spender. Within Spark, finality (knowing that your funds can only be moved with your keys) exists – but is not provable – on the condition that any single Spark Operator deletes their keyshare after signing off on a Spark transaction. On the flip side, if all Spark Operators are malicious, refuse to delete their keyshares, and collude with a previous sender of a leaf you own, they can double-spend that leaf and effectively steal funds.

While in practice I think this 1-of-N trust assumption is reasonable, it obviously falls far short of the regular, on-chain Bitcoin trust assumptions where true finality is a default. It’s also important to note that due to the pseudonymous nature of Spark transactions, the previous sender could be the same entity as the Spark Entity.

Potentially centralized token control

While transfers of tokens share the 1-of-N trust assumption of regular Spark payments, the tokens themselves can be frozen at any time if the issuer decides to enable this functionality. While this is similar to many centrally controlled stablecoins like USDT (whose issuer freezes and confiscates funds quite often for legal reasons), it’s important to call out, and this functionality will likely be enabled in many regulated stablecoins like USDC and USDT.

1-of-N offline Lightning receive security

While offline Lightning receives are not trust-minimized in the same way standard Lightning payments are, theft of funds would require all Spark Operators to collude to steal a single Lightning payment. This is disincentivized by the small size of Lightning payments and the massive reputational risk of being caught stealing from users – something easy to detect thanks to the inherent proof of payment in the Lightning network.

Privacy tradeoffs

Spark itself should not be viewed as a privacy tool, as it inherits core privacy problems from Bitcoin’s base layer and has made some poor design choices initially when it comes to privacy. That being said, Spark’s core technology could be extended to have fantastic privacy with the introduction of blind signing for all transactions, confidential amounts for token transfers, and other privacy technologies that aren’t normally possible within the Bitcoin ecosystem.

Transaction visibility

While transactions within Spark aren’t published for all time to a blockchain like on-chain transactions, all Spark Operators do get full visibility into transactions. In theory this could provide ephemerality if Spark Operators had a non-logging policy, but in practice all transaction data is currently being published to an explorer by Flashnet, one of the Spark Operators. This means that outside observers can trivially look up Spark addresses and see all transaction details, token balances, and even link Lightning payments to addresses using timing and amount analysis.

Note that Spark is working to add the ability for wallet developers to opt-out of this data publishing by marking transactions as private, which then falls back to the same VPN-like trust model as previously described for Ark. If a wallet developer opts to enable this (as I hope they all will!), the Spark Operators will promise not to publish this transaction data publicly, but of course still have the ability to store this data locally if they so choose.

Lack of address rotation

In its current form, Spark doesn’t support spending funds from multiple distinct Spark addresses in a single transaction. While this is slated to be fixed and already acknowledged as a key shortcoming of Spark, at present it means that most Spark implementations will rely on a single, static address for all transactions, making Spark’s privacy at the moment worse than even on-chain Bitcoin. Combining this address re-use with all amounts being visible means that it would be trivial for an attacker to perform timing + amount heuristics on payments to ascertain which Lightning payments pertain to which Spark addresses.
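
To make the attack concrete, here is a naive sketch of the timing-plus-amount correlation an observer could run against published Spark data. The addresses, timestamps, and matching window are all invented for illustration, and real chain-analysis tooling is far more sophisticated:

```python
from datetime import datetime, timedelta

def link_payments(lightning_payments, spark_txs,
                  window=timedelta(minutes=2)):
    """Naive timing + amount correlation: with static Spark addresses and
    public amounts, an observer can match a Lightning payment to any
    published Spark transaction with the same amount and a nearby
    timestamp. Purely illustrative of the attack class."""
    links = []
    for ln_time, ln_amount in lightning_payments:
        for addr, sp_time, sp_amount in spark_txs:
            if ln_amount == sp_amount and abs(ln_time - sp_time) <= window:
                links.append((ln_amount, addr))
    return links

ln = [(datetime(2025, 6, 1, 12, 0), 21_000)]
spark = [("sp1alice...", datetime(2025, 6, 1, 12, 1), 21_000),
         ("sp1bob...",   datetime(2025, 6, 1, 15, 0), 21_000)]
print(link_payments(ln, spark))  # only the time-adjacent, same-amount address matches
```

Address rotation and hidden amounts would each break one of the two heuristics this sketch relies on, which is why both shortcomings together are so damaging.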

Spark address leaks

To complete the trifecta of current privacy problems in Spark, the core SDKs provided by Spark (and used by the most common implementation of Spark in Wallet of Satoshi) by default include the user’s Spark address unnecessarily in BOLT 11 Lightning invoices. This means that anyone can easily decode a provided BOLT 11 invoice and learn every transaction from that user in Spark, thanks to the use of static addresses and all details being published to an explorer as detailed above.

Note that this isn’t strictly necessary, can easily be disabled by wallet developers, and has already been removed in the Breez Nodeless SDK (which utilizes Spark and is rapidly gaining adoption), but it is important to call out nonetheless.

Conclusion

While both Spark and Ark mark an exciting new moment for Bitcoin usability and scalability, as with all things, they come with their own unique sets of tradeoffs. While neither is a perfect solution, it’s exciting that wallet developers finally have two competing, interesting options for bringing Lightning, native tokens, and other functionality into their wallets and software without the complexity traditionally associated with Lightning. Both Spark and Ark represent a pragmatic path to scaling Bitcoin – a hard but sane one that balances trust-minimization with user experience and scale.

As both are rapidly evolving protocols, the hope is that the tradeoffs presented by both solutions will be rapidly improved upon and minimized in the coming months and years, providing an even better option that gets non-custodial Bitcoin into the hands of many more people while extending the things that we can build on top of Bitcoin.

A special thank you to the folks at Spark, Ark Labs, Second, Breez, Spiral, and Bitcoin QnA for taking the time to provide feedback on this article! It takes a tribe to work out all of the trust assumptions and tradeoffs of these novel systems, and I’m extremely grateful to each for taking out some of their valuable time to help here.

This is a guest post by Seth For Privacy. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

This post Spark and Ark: A Look At Our Newest Bitcoin Layer Twos first appeared on Bitcoin Magazine and is written by Seth For Privacy.

What a massive thermal battery means for energy storage

Rondo Energy just turned on what it says is the world’s largest thermal battery, an energy storage system that can take in electricity and provide a consistent source of heat.

The company announced last week that its first full-scale system is operational, with 100 megawatt-hours of capacity. The thermal battery is powered by an off-grid solar array and will provide heat for enhanced oil recovery (more on this in a moment).

Thermal batteries could help clean up difficult-to-decarbonize sectors like manufacturing and heavy industrial processes like cement and steel production. With Rondo’s latest announcement, the industry has reached a major milestone in its effort to prove that thermal energy storage can work in the real world. Let’s dig into this announcement, what it means to have oil and gas involved, and what comes next.

The concept behind a thermal battery is overwhelmingly simple: Use electricity to heat up some cheap, sturdy material (like bricks) and keep it hot until you want to use that heat later, either directly in an industrial process or to produce electricity.

Rondo’s new system has been operating for 10 weeks and achieved all the relevant efficiency and reliability benchmarks, according to the company. The bricks reach temperatures over 1,000 °C (about 1,800 °F), and over 97% of the energy put into the system is returned as heat.
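
In round numbers, the reported figures work out like this (a back-of-the-envelope sketch using the stated 100 MWh capacity and 97% heat return):

```python
# Reported figures from the text above: 100 MWh of capacity, with over
# 97% of input energy returned as heat.
capacity_mwh = 100
efficiency_pct = 97

heat_delivered_mwh = capacity_mwh * efficiency_pct // 100
lost_mwh = capacity_mwh - heat_delivered_mwh
print(heat_delivered_mwh, lost_mwh)  # 97 3: only ~3 MWh lost per full cycle
```
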

This is a big step from the 2 MWh pilot system that Rondo started up in 2023, and it’s the first of the mass-produced, full-size heat batteries that the company hopes to put in the hands of customers.

Thermal batteries could be a major tool in cutting emissions: 20% of total energy demand today is used to provide heat for industrial processes, and most of that is generated by burning fossil fuels. So this project’s success is significant for climate action.

There’s one major detail here, though, that dulls some of that promise: This battery is being used for enhanced oil recovery, a process where steam is injected down into wells to get stubborn oil out of the ground.

It can be tricky for a climate technology to show its merit by helping harvest fossil fuels. Some critics argue that these sorts of techniques keep that polluting infrastructure running longer.

When I spoke to Rondo founder and chief innovation officer John O’Donnell about the new system, he defended the choice to work with oil and gas.

“We are decarbonizing the world as it is today,” O’Donnell says. To his mind, it’s better to help an oil and gas company use solar power for its operation than leave it to continue burning natural gas for heat. Between cheap solar, expensive natural gas, and policies in California, he adds, Rondo’s technology made sense for the customer.

Having a willing customer pay for a full-scale system has been crucial to Rondo’s effort to show that it can deliver its technology.

And the next units are on the way: Rondo is currently building three more full-scale units in Europe. The company will be able to bring them online cheaper and faster because of what it’s learned from the California project, O’Donnell says. 

The company has the capacity to build more batteries, and to do it quickly: its factory in Thailand can currently produce 2.4 gigawatt-hours’ worth of heat batteries.

I’ve been following progress on thermal batteries for years, and this project obviously represents a big step forward. For all the promises of cheap, robust energy storage, there’s nothing like actually building a large-scale system and testing it in the field.

It’s definitely hard to get excited about enhanced oil recovery—we need to stop burning fossil fuels, and do it quickly, to avoid the worst impacts of climate change. But I see the argument that as long as oil and gas operations exist, there’s value in cleaning them up.

And as O’Donnell puts it, heat batteries can help: “This is a really dumb, practical thing that’s ready now.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The problem with Big Tech’s favorite carbon removal tech

Sucking carbon pollution out of the atmosphere is becoming a big business—companies are paying top dollar for technologies that can cancel out their own emissions.

Today, nearly 70% of announced carbon removal contracts are for one technology: bioenergy with carbon capture and storage (BECCS). Basically, the idea is to use trees or some other types of biomass for energy, and then capture the emissions when you burn it.

While corporations, including tech giants like Microsoft, are betting big on this technology, there are a few potential problems with BECCS, as my colleague James Temple laid out in a new story. And some of the concerns echo similar problems with other climate technologies we cover, like carbon offsets and alternative jet fuels.

Carbon math can be complicated.

To illustrate one of the biggest issues with BECCS, we need to run through the logic on its carbon accounting. (And while this tech can use many different forms of biomass, let’s assume we’re talking about trees.)

When trees grow, they suck up carbon dioxide from the atmosphere. Those trees can be harvested and used for some intended purpose, like making paper. The leftover material, which might otherwise be waste, is then processed and burned for energy.

This cycle is, in theory, carbon neutral. The emissions from burning the biomass are canceled out by what was removed from the atmosphere during plants’ growth. (Assuming those trees are replaced after they’re harvested.)

So now imagine that carbon-scrubbing equipment is added to the facility that burns the biomass, capturing emissions. If the cycle was logically carbon neutral before, now it’s carbon negative: On net, emissions are removed from the atmosphere. Sounds great, no notes. 

There are a few problems with this math, though. For one, it leaves out the emissions that might be produced while harvesting, transporting, and processing wood. And if projects require clearing land to plant trees or grow crops, that transformation can wind up releasing emissions too.
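
To see how the accounting shifts, here is a toy carbon ledger with made-up numbers. The point is only that supply-chain and land-use emissions eat into the naive “carbon negative” figure:

```python
# Toy carbon ledger for one batch of biomass, in tonnes of CO2.
# All numbers are made up for illustration; positive = added to the
# atmosphere, negative = removed on net.
absorbed_during_growth = 100   # CO2 the trees pulled from the air
released_by_burning = 100      # returned when the biomass is burned
captured_and_stored = 90       # share of combustion CO2 captured (assumed)
supply_chain = 15              # harvest, transport, processing, land use

naive_net = released_by_burning - absorbed_during_growth - captured_and_stored
full_net = naive_net + supply_chain
print(naive_net, full_net)  # still negative, but notably less so once supply-chain emissions count
```
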

Issues with carbon math might sound a little familiar if you’ve read any of James’s reporting on carbon offsets, programs where people pay for others to avoid emissions. In particular, his 2021 investigation with ProPublica’s Lisa Song laid out how this so-called solution was actually adding millions of tons of carbon dioxide into the atmosphere.

Carbon capture may entrench polluting facilities.

One of the big benefits of BECCS is that it can be added to existing facilities. There’s less building involved than there might be in something like a facility that vacuums carbon directly out of the air. That helps keep costs down, so BECCS is currently much cheaper than direct air capture and other forms of carbon removal.

But keeping legacy equipment running might not be a great thing for emissions or local communities in the long run.

Carbon dioxide is far from the only pollutant spewing out of these facilities. Burning biomass or biofuels can release emissions that harm human health, like particulate matter, sulfur dioxide, and carbon monoxide. Carbon capture equipment might trap some of these pollutants, like sulfur dioxide, but not all.

Assuming that waste material wouldn’t be used for something else might not be right.

It sounds great to use waste, but there’s a major asterisk lurking here, as James lays out in the story:

But the critical question that emerges with waste is: Would it otherwise have been burned or allowed to decompose, or might some of it have been used in some other way that kept the carbon out of the atmosphere? 

Biomass can be used for other things, like making plastic, building material, or even soil additives that can help crops get more nutrients. So the assumption that it’s BECCS or nothing is flawed.

Moreover, a weird thing happens when you start making waste valuable: There’s an incentive to produce more of it. Some experts are concerned that companies could wind up trimming more trees or clearing more forests than what’s needed to make more material for BECCS.

These waste issues remind me of conversations around sustainable aviation fuels. These alternative fuels can be made from a huge range of materials, including crop waste or even used cooking oil. But as demand for these clean fuels has ballooned, things have gotten a little wonky—there are even some reports of fraud, where scammers try to pass off newly made oil from crops as used cooking oil.

BECCS is a potentially useful technology, but like many things in climate tech, it can quickly get complicated. 

James has been reporting on carbon offsets and carbon removal for years. As he put it to me this week when we were chatting about this story: “Just cut emissions and stop messing around.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

3 takeaways about climate tech right now

On Monday, we published our 2025 edition of Climate Tech Companies to Watch. This marks the third time we’ve put the list together, and it’s become one of my favorite projects to work on every year. 

In the journalism world, it’s easy to get caught up in the latest news, whether it’s a fundraising round, research paper, or startup failure. Curating this list gives our team a chance to take a step back and consider the broader picture. What industries are making progress or lagging behind? Which countries or regions are seeing quick changes? Who’s likely to succeed? 

This year is an especially interesting moment in the climate tech world, something we grappled with while choosing companies. Here are three of my takeaways from the process of building this list. 

1. It’s hard to overstate China’s role in energy technology right now. 

To put it bluntly, China’s progress on cleantech is wild. The country is dominating in installing wind and solar power and building EVs, and it’s also pumping government money into emerging technologies like fusion energy. 

We knew we wanted this list to reflect China’s emergence as a global energy superpower, and we ended up including two Chinese firms in key industries: renewables and batteries.

In 2024, the top four wind turbine makers worldwide were all Chinese. Envision was in the second spot, with 19.3 gigawatts of new capacity added last year. But the company isn’t limited to wind; it’s working to help power heavy industries like steel and chemicals with technologies like green hydrogen. 

Batteries are also a hot industry in China, and we’re seeing progress in tech beyond the lithium-ion cells that currently dominate EVs and energy storage on the grid. We represent that industry with HiNa Battery Technology, a leading startup building sodium-ion batteries, which could be cheaper than today’s options. The company’s batteries are already being used in electric mopeds and grid installations. 

2. Energy demand from data centers and AI is on everyone’s mind, especially in the US. 

Another trend we noticed this year was a fixation on the growing energy demand of data centers, including massive planned dedicated facilities that power AI models. (Here’s another nudge to check out our Power Hungry series on AI and energy, in case you haven’t explored it already.) 

Even if their technology has nothing to do with data centers, companies are trying to show how they can be valuable in this age of rising energy demand. Some are signing lucrative deals with tech giants that could provide the money needed to help bring their product to market. 

Kairos Power hopes to be one such energy generator, building next-generation nuclear reactors. Last year, it signed an agreement under which Google will buy up to 500 megawatts of electricity from Kairos’s first reactors through 2035. 

In a more direct play, Redwood Materials is stringing together used EV batteries to build microgrids that could power—you guessed it—data centers. The company’s first installation fired up this year, and while it’s small, it’s an interesting example of a new use for old technology. 

3. Materials continue to be an area that’s ripe for innovation. 

In a new essay that accompanies the list, Bill Gates lays out the key role of innovation in making progress on climate technology. One thing that jumped out at me while I was reading that piece was a number: 30% of global greenhouse-gas emissions come from manufacturing, including cement and steel production. 

I’ve obviously covered materials and heavy industry for years. But it still strikes me just how much innovation we need in the most important materials we use to scaffold our world. 

Several companies on this year’s list focus on materials: We’ve once again represented cement, a material that accounts for 7% of global greenhouse-gas emissions. Cemvision is working to use alternative fuel sources and starting materials to clean up the dirty industry. 

And Cyclic Materials is trying to reclaim and recycle rare earth magnets, a crucial technology that underpins everything from speakers to EVs and wind turbines. Today, only about 0.2% of the rare earths in discarded devices are recycled, but the company is building multiple facilities in North America in hopes of changing that. 

Our list of 10 Climate Tech Companies to Watch highlights businesses we think have a shot at helping the world address and adapt to climate change with the help of everything from established energy technologies to novel materials. It’s a representation of this moment, and I hope you enjoy taking a spin through it.

EV tax credits are dead in the US. Now what?

On Wednesday, federal EV tax credits in the US officially came to an end.

Those credits, expanded and extended in the 2022 Inflation Reduction Act, gave drivers up to $7,500 in credits toward the purchase of a new electric vehicle. They’ve been a major force in cutting the up-front costs of EVs, pushing more people toward purchasing them and giving automakers confidence that demand would be strong.

The tax credits’ demise comes at a time when battery-electric vehicles still make up a small percentage of new vehicle sales in the country. And transportation is a major contributor to US climate pollution, with cars, trucks, ships, trains, and planes together making up roughly 30% of total greenhouse-gas emissions.

To anticipate what’s next for the US EV market, we can look to countries like Germany, which have ended similar subsidy programs. (Spoiler alert: It’s probably going to be a rough end to the year.)

When you factor in fuel savings, the lifetime cost of an EV can already be lower than that of a gas-powered vehicle today. But EVs can have a higher up-front cost, which is why some governments offer a tax credit or rebate that can help boost adoption for the technology.
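The arithmetic behind that comparison is straightforward: a higher sticker price can be offset by cheaper energy per mile over the vehicle’s life. Here’s a minimal sketch of the calculation; all of the prices, per-mile costs, and mileage below are made-up placeholder figures, not data from this article.

```python
# Rough total-cost-of-ownership comparison between an EV and a
# gas-powered car. Every number here is an illustrative placeholder.

def lifetime_cost(purchase_price, cost_per_mile, miles):
    """Up-front price plus total fuel/energy spend over the car's life."""
    return purchase_price + cost_per_mile * miles

MILES = 150_000  # assumed lifetime mileage

# Hypothetical inputs: the EV costs more up front but less per mile.
ev_cost = lifetime_cost(purchase_price=45_000, cost_per_mile=0.04, miles=MILES)
gas_cost = lifetime_cost(purchase_price=38_000, cost_per_mile=0.12, miles=MILES)

print(f"EV lifetime cost:  ${ev_cost:,.0f}")   # $51,000
print(f"Gas lifetime cost: ${gas_cost:,.0f}")  # $56,000
```

Under these made-up assumptions, the EV comes out about $5,000 cheaper over its lifetime despite the $7,000 higher purchase price, which is exactly the gap a point-of-sale tax credit was designed to bridge up front.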

In 2016, Germany kicked off a national incentive program to encourage EV sales. While the program was active, drivers could get grants of up to about €6,000 toward the purchase of a new battery-electric or plug-in hybrid vehicle.

Eventually, the government began pulling back the credits. Support for plug-in hybrids ended in 2022, and commercial buyers lost eligibility in September 2023. Then the entire program came to a screeching halt in December 2023, when the government announced it would be ending the incentives with about one week’s notice.

Monthly sales data shows the fingerprints of those changes. In each case where there’s a contraction of public support, there’s a peak in sales just before the cutback, then a crash after. These short-term effects can be dramatic: about half as many battery-electric vehicles were sold in Germany in January 2024 as in December 2023. 

We’re already seeing the first half of this sort of boom-bust cycle in the US: EV sales ticked up in August, making up about 10% of all new vehicle sales, and analysts say September will turn out to be a record-breaking month. People rushed to take advantage of the credits while they still could.

Next comes the crash—the next few months will probably be very slow for EVs. One analyst predicted to the Washington Post that the figure could plummet to the low single digits, “like 1 or 2%.”

Ultimately, it’s not terribly surprising that there are local effects around these policy changes. “The question is really how long this decline will last, and how slowly any recovery in the growth will be,” Robbie Andrew, a senior researcher at the CICERO Center for International Climate Research in Norway who collects EV sales data, said in an email. 

When I spoke to experts (including Andrew) for a story last year, several told me that Germany’s subsidies were ending too soon, and that they were concerned about what cutting off support early would mean for the long-term prospects of the technology in the country. And Germany was much further along than the US, with EVs making up 20% of new vehicle sales—twice the American proportion.

EV growth did see a longer-term backslide in Germany after the end of the subsidies. Battery-electric vehicles made up 13.5% of new registrations in 2024, down from 18.5% the year before, and the UK also passed Germany to become Europe’s largest EV market. 

Things have improved this year, with sales in the first half beating records set in 2023. But growth would need to pick up significantly for Germany to reach its goal of getting 15 million battery-electric vehicles registered in the country by 2030. As of January 2025, that number was just 1.65 million. 

According to early projections, the end of tax credits in the US could significantly slow progress on EVs and, by extension, on cutting emissions. Sales of battery-electric vehicles in 2030 could be about 40% lower without the credits than with them, according to one analysis by Princeton University’s Zero Lab.

Some US states still have their own incentive programs for people looking to buy electric vehicles. But without federal support, the US is likely to continue lagging behind global EV leaders like China. 

As Andrew put it: “From a climate perspective, with road transport responsible for almost a quarter of US total emissions, leaving the low-hanging fruit on the tree is a significant setback.” 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Fusion power plants don’t exist yet, but they’re making money anyway

This week, Commonwealth Fusion Systems announced it has another customer for its first commercial fusion power plant, in Virginia. Eni, one of the world’s largest oil and gas companies, signed a billion-dollar deal to buy electricity from the facility.

One small detail? That reactor doesn’t exist yet. Neither does the smaller reactor Commonwealth is building first to demonstrate that its tokamak design will work as intended.

This is a weird moment in fusion. Investors are pouring billions into the field to build power plants, and some companies are even signing huge agreements to purchase power from those still-nonexistent plants. All this comes before companies have actually completed a working reactor that can produce electricity. It takes money to develop a new technology, but all this funding could lead to some twisted expectations. 

Nearly three years ago, the National Ignition Facility at Lawrence Livermore National Laboratory hit a major milestone for fusion power. With the help of the world’s most powerful lasers, scientists heated a pellet of fuel to 100 million °C. Hydrogen atoms in that fuel fused together, releasing more energy than the lasers put in.

It was a game changer for the vibes in fusion. The NIF experiment finally showed that a fusion reactor could yield net energy. Plasma physicists’ models had certainly suggested that it should be true, but it was another thing to see it demonstrated in real life.

But in some ways, the NIF results didn’t really change much for commercial fusion. That site’s lasers used a bonkers amount of energy, the setup was wildly complicated, and the whole thing lasted a fraction of a second. To operate a fusion power plant, not only do you have to achieve net energy, but you also need to do that on a somewhat constant basis and—crucially—do it economically.

So in the wake of the NIF news, all eyes went to companies like Commonwealth, Helion, and Zap Energy. Who would be the first to demonstrate this milestone in a more commercially feasible reactor? Or better yet, who would be the first to get a power plant up and running?

So far, the answer is none of them.

To be fair, many fusion companies have made technical progress. Commonwealth has built and tested its high-temperature superconducting magnets and published research about that work. Zap Energy demonstrated three hours of continuous operation in its test system, a milestone validated by the US Department of Energy. Helion started construction of its power plant in Washington in July. (And that’s not to mention a thriving, publicly funded fusion industry in China.)  

These are all important milestones, and these and other companies have seen many more. But as Ed Morse, a professor of nuclear engineering at the University of California, Berkeley, summed it up to me: “They don’t have a reactor.” (He was speaking specifically about Commonwealth, but really, the same goes for the others.)

And yet, the money pours in. Commonwealth raised over $800 million in funding earlier this year. And now it’s got two big customers signed on to buy electricity from this future power plant.

Why buy electricity from a reactor that’s currently little more than ideas on paper? From the perspective of these particular potential buyers, such agreements can be something of a win-win, says Adam Stein, director of nuclear energy innovation at the Breakthrough Institute.

By putting a vote of confidence behind Commonwealth, Eni could help the fusion startup get the capital it needs to actually build its plant. Eni is also a direct investor in Commonwealth, so it stands to benefit from the startup’s success. Getting a good rate on the capital needed to build the plant could also mean the electricity is ultimately cheaper for Eni, Stein says. 

Ultimately, fusion needs a lot of money. If fossil-fuel companies and tech giants want to provide it, all the better. One concern I have, though, is how outside observers are interpreting these big commitments. 

US Energy Secretary Chris Wright has been loud about his support for fusion and his expectations of the technology. Earlier this month, he told the BBC that it will soon power the world.

He’s certainly not the first to have big dreams for fusion, and it is an exciting technology. But despite the jaw-dropping financial milestones, this industry is still very much in development. 

And while Wright praises fusion, the Trump administration is slashing support for other energy technologies, including wind and solar power, and spreading disinformation about their safety, cost, and effectiveness. 

To meet the growing electricity demand and cut emissions from the power sector, we’ll need a whole range of technologies. It’s a risk and a distraction to put all our hopes on an unproven energy tech when there are plenty of options that actually exist. 

