
The ads that sell the sizzle of genetic trait discrimination

5 December 2025 at 06:00

One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com, which promises a way for potential parents to use genetic tests to influence their baby’s traits, including eye color, hair color, and IQ.

Inside the station, every surface was wrapped with more ads—babies on turnstiles, on staircases, on banners overhead. “Think about it. Makeup and then genetic optimization,” exulted Kian Sadeghi, the 26-year-old founder of Nucleus Genomics, the startup running the ads. To his mind, one should be as accessible as the other. 

Nucleus is a young, attention-seeking genetic software company that says it can analyze genetic tests on IVF embryos to score them for 2,000 traits and disease risks, letting parents pick some and reject others. This is possible because of how our DNA shapes us, sometimes powerfully. As one of the subway banners reminded the New York riders: “Height is 80% genetic.”
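The ads don't explain how the scoring works, but tools like this are generally built on polygenic scores: a weighted sum over a genome's variants, with each weight estimated from large population studies. A minimal sketch, with variant names and effect-size weights invented purely for illustration:

```python
# Minimal polygenic-score sketch. A "genotype" here is the count of the
# effect allele at a given variant (0, 1, or 2). The variant IDs and
# effect-size weights below are invented for illustration only.

WEIGHTS = {
    "variant_a": 0.8,
    "variant_b": -0.3,
    "variant_c": 0.5,
}

def polygenic_score(genotypes: dict) -> float:
    """Weighted sum of allele counts: sum(weight_i * count_i)."""
    return sum(w * genotypes.get(v, 0) for v, w in WEIGHTS.items())

# One embryo's genotypes at the three hypothetical variants:
embryo = {"variant_a": 2, "variant_b": 1, "variant_c": 0}
score = polygenic_score(embryo)  # 2*0.8 + 1*(-0.3) + 0*0.5
```

Real scores sum over thousands to millions of variants, and they predict population averages rather than individual outcomes, which is one reason professional groups call the predictions unreliable.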

The day after the campaign launched, Sadeghi and I had briefly sparred online. He’d been on X showing off a phone app where parents can click through traits like eye color and hair color. I snapped back that all this sounded a lot like Uber Eats—another crappy, frictionless future invented by entrepreneurs, but this time you’d click for a baby.

I agreed to meet Sadeghi that night in the station under a banner that read, “IQ is 50% genetic.” He appeared in a puffer jacket and told me the campaign would soon spread to 1,000 train cars. Not long ago, this was a secretive technology to whisper about at Silicon Valley dinner parties. But now? “Look at the stairs. The entire subway is genetic optimization. We’re bringing it mainstream,” he said. “I mean, like, we are normalizing it, right?”

Normalizing what, exactly? The ability to choose embryos on the basis of predicted traits could lead to healthier people. But the traits mentioned in the subway—height and IQ—focus the public’s mind toward cosmetic choices and even naked discrimination. “I think people are going to read this and start realizing: Wow, it is now an option that I can pick. I can have a taller, smarter, healthier baby,” says Sadeghi.

Entrepreneur Kian Sadeghi stands under an advertising banner in the Broadway-Lafayette subway station in Manhattan, part of a campaign called “Have Your Best Baby.” The banner above him reads “Nucleus IVF+ Have a healthier baby,” with the word “healthier” emphasized.
COURTESY OF THE AUTHOR

Nucleus got its seed funding from Founders Fund, an investment firm known for its love of contrarian bets. And embryo scoring fits right in—it’s an unpopular concept, and professional groups say the genetic predictions aren’t reliable. So far, leading IVF clinics still refuse to offer these tests. Doctors worry, among other things, that they’ll create unrealistic parental expectations. What if little Johnny doesn’t do as well on the SAT as his embryo score predicted?

The ad blitz is a way to end-run such gatekeepers: If a clinic won’t agree to order the test, would-be parents can take their business elsewhere. Another embryo testing company, Orchid, notes that high consumer demand emboldened Uber’s early incursions into regulated taxi markets. “Doctors are essentially being shoved in the direction of using it, not because they want to, but because they will lose patients if they don’t,” Orchid founder Noor Siddiqui said during an online event this past August.

Sadeghi prefers to compare his startup to Airbnb. He hopes it can link customers to clinics, becoming a digital “funnel” offering a “better experience” for everyone. He notes that Nucleus ads don’t mention DNA or any details of how the scoring technique works. That’s not the point. In advertising, you sell the sizzle, not the steak. And in Nucleus’s ad copy, what sizzles is height, smarts, and light-colored eyes.

It makes you wonder if the ads should be permitted. Indeed, I learned from Sadeghi that the Metropolitan Transportation Authority had objected to parts of the campaign. The transit agency, for instance, did not let Nucleus run ads saying “Have a girl” and “Have a boy,” even though it’s very easy to identify the sex of an embryo using a genetic test. The reason was an MTA policy that forbids using government-owned infrastructure to promote “invidious discrimination” against protected classes, which include race, religion, and biological sex.

Since 2023, New York City has also included height and weight in its anti-discrimination law, the idea being to “root out bias” related to body size in housing and in public spaces. So I’m not sure why the MTA let Nucleus declare that height is 80% genetic. (The MTA advertising department didn’t respond to questions.) Perhaps it’s because the statement is a factual claim, not an explicit call to action. But we all know what to do: Pick the tall one and leave shorty in the IVF freezer, never to be born.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.


What we still don’t know about weight-loss drugs

28 November 2025 at 05:00

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Weight-loss drugs have been back in the news this week. First, we heard that Eli Lilly, the company behind the drugs Mounjaro and Zepbound, became the first healthcare company in the world to achieve a trillion-dollar valuation.

Those two drugs, which are prescribed for diabetes and obesity respectively, are generating billions of dollars in revenue for the company. Other GLP-1 agonist drugs—a class that includes Mounjaro and Zepbound, which have the same active ingredient—have also been approved to reduce the risk of heart attack and stroke in overweight people. Many hope these apparent wonder drugs will also treat neurological disorders and potentially substance use disorders, too.

But this week we also learned that, disappointingly, GLP-1 drugs don’t seem to help people with Alzheimer’s disease. And that people who stop taking the drugs when they become pregnant can experience potentially dangerous levels of weight gain during their pregnancies. On top of that, some researchers worry that people are using the drugs postpartum to lose pregnancy weight without understanding potential risks.

All of this news should serve as a reminder that there’s a lot we still don’t know about these drugs. This week, let’s look at the enduring questions surrounding GLP-1 agonist drugs.

First a quick recap. Glucagon-like peptide-1 is a hormone made in the gut that helps regulate blood sugar levels. But we’ve learned that it also appears to have effects across the body. Receptors that GLP-1 can bind to have been found in multiple organs and throughout the brain, says Daniel Drucker, an endocrinologist at the University of Toronto who has been studying the hormone for decades.

GLP-1 agonist drugs essentially mimic the hormone’s action. Quite a few have been developed, including semaglutide, tirzepatide, liraglutide, and exenatide, which are sold under brand names like Ozempic, Saxenda, and Wegovy. Some of them are recommended for some people with diabetes.

But because these drugs also seem to suppress appetite, they have become hugely popular weight-loss aids. And studies have found that many people who take them for diabetes or weight loss experience surprising side effects: their mental health improves, for example, or they feel less inclined to smoke or consume alcohol. Research has also found that the drugs seem to increase the growth of brain cells in lab animals.

So far, so promising. But there are a few outstanding gray areas.

Are they good for our brains?

Novo Nordisk, a competitor of Eli Lilly, manufactures the GLP-1 drugs Wegovy and Saxenda. The company recently trialed an oral form of semaglutide in people with Alzheimer’s disease who had mild cognitive impairment or mild dementia. The placebo-controlled trial included 3,808 volunteers.

Unfortunately, the company found that the drug did not appear to delay the progression of Alzheimer’s disease in the volunteers who took it.

The news came as a huge disappointment to the research community. “It was kind of crushing,” says Drucker. That’s despite the fact that, deep down, he wasn’t expecting a “clear win.” Alzheimer’s disease has proven notoriously difficult to treat, and by the time people get a diagnosis, a lot of damage has already taken place.

But he is one of many who aren’t giving up hope entirely. After all, research suggests that GLP-1 reduces inflammation in the brain and improves the health of neurons, and that it appears to improve the way brain regions communicate with each other. All of this implies that GLP-1 drugs should benefit the brain, says Drucker. There’s still a chance that the drugs might help stave off Alzheimer’s in people who are still cognitively healthy.

Are they safe before, during, or after pregnancy?

Other research published this week raises questions about the effects of GLP-1s taken around the time of pregnancy. At the moment, people are advised to plan to stop taking the medicines two months before they become pregnant. That’s partly because some animal studies suggest the drugs can harm the development of a fetus, but mainly because scientists haven’t studied the impact on pregnancy in humans.

Among the broader population, research suggests that many people who take GLP-1s for weight loss regain much of their lost weight once they stop taking those drugs. So perhaps it’s not surprising that a study published in JAMA earlier this week saw a similar effect in pregnant people.

The study found that people who had been taking those drugs gained around 3.3 kg more during pregnancy than others who had not. And those who had been taking the drugs also appeared to have a slightly higher risk of gestational diabetes, blood pressure disorders, and even preterm birth.

It sounds pretty worrying. But a different study published in August had the opposite finding—it noted a reduction in the risk of those outcomes among women who had taken the drugs before becoming pregnant.

If you’re wondering how to make sense of all this, you’re not the only one. No one really knows how these drugs should be used before pregnancy—or during it for that matter.

Another study out this week found that people (in Denmark) are increasingly taking GLP-1s postpartum to lose weight gained during pregnancy. Drucker tells me that, anecdotally, he gets asked about this potential use a lot.

But there’s a lot going on in a postpartum body. It’s a time of huge physical and hormonal change that can include bonding, breastfeeding, and even a rewiring of the brain. We have no idea if, or how, GLP-1s might affect any of those.

How (and when) can people safely stop using them?

Yet another study out this week—you can tell GLP-1s are one of the hottest topics in medicine right now—looked at what happens when people stop taking tirzepatide (marketed as Zepbound) for their obesity.

The trial participants all took the drug for 36 weeks, at which point half continued with the drug, and half were switched to a placebo for another 52 weeks. During that first 36 weeks, the weight and heart health of the participants improved.

But by the end of the study, most of those who had switched to a placebo had regained more than 25% of the weight they had originally lost. One in four had regained more than 75% of that weight, and 9% ended up at a higher weight than when they’d started the study. Their heart health also worsened.
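Note that these regain figures are measured against the weight each participant originally lost, not their total body weight. A quick sketch of that arithmetic, with the weights invented purely for illustration:

```python
def percent_regained(start_kg: float, nadir_kg: float, end_kg: float) -> float:
    """Share of the originally lost weight that came back, as a percentage."""
    lost = start_kg - nadir_kg      # weight lost while on the drug
    regained = end_kg - nadir_kg    # weight regained after stopping
    return 100 * regained / lost

# Invented example: 110 kg at study entry, 90 kg after 36 weeks on the
# drug, 105 kg after 52 further weeks on placebo.
print(percent_regained(110, 90, 105))  # 75.0
```

So a participant can regain "more than 75%" of their lost weight while still weighing less than when they entered the study; only the 9% figure refers to people who ended above their starting weight.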

Does that mean people need to take these drugs forever? Scientists don’t have the answer to that one, either, nor do they know whether taking the drugs indefinitely is safe. The answer might depend on the individual, their age or health status, or what they are using the drug for.

There are other gray areas. GLP-1s look promising for substance use disorders, but we don’t yet know how effective they might be. We don’t know the long-term effects these drugs have on children who take them. And we don’t know the long-term consequences these drugs might have for healthy-weight people who take them for weight loss.

Earlier this year, Drucker accepted a Breakthrough Prize in Life Sciences at a glitzy event in California. “All of these Hollywood celebrities were coming up to me and saying ‘thank you so much,’” he says. “A lot of these people don’t need to be on these medicines.”


We’re learning more about what vitamin D does to our bodies

21 November 2025 at 05:00

It has started to get really wintry here in London over the last few days. The mornings are frosty, the wind is biting, and it’s already dark by the time I pick my kids up from school. The darkness in particular has got me thinking about vitamin D, a.k.a. the sunshine vitamin.

At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.

But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D.

Yes, it is important for bone health. But recent research is also uncovering surprising new insights into how the vitamin might influence other parts of our bodies, including our immune systems and heart health.

Vitamin D was discovered just over 100 years ago, when health professionals were looking for ways to treat what was then called “the English disease.” Today, we know that rickets, a weakening of bones in children, is caused by vitamin D deficiency. And vitamin D is best known for its importance in bone health.

That’s because it helps our bodies absorb calcium. Our bones are continually being broken down and rebuilt, and they need calcium for that rebuilding process. Without enough calcium, bones can become weak and brittle. (Depressingly, rickets is still a global health issue, which is why there is broad consensus that infants should receive a vitamin D supplement at least until they are one year old.)

In the decades since then, scientists have learned that vitamin D has effects beyond our bones. There’s some evidence to suggest, for example, that being deficient in vitamin D puts people at risk of high blood pressure. Daily or weekly supplements can help those individuals lower their blood pressure.

A vitamin D deficiency has also been linked to a greater risk of “cardiovascular events” like heart attacks, although it’s not clear whether supplements can reduce this risk; the evidence is pretty mixed.

Vitamin D appears to influence our immune health, too. Studies have found a link between low vitamin D levels and incidence of the common cold, for example. And other research has shown that vitamin D supplements can influence the way our genes make proteins that play important roles in the way our immune systems work.

We don’t yet know exactly how these relationships work, however. And, unfortunately, a recent study that assessed the results of 37 clinical trials found that overall, vitamin D supplements aren’t likely to stop you from getting an “acute respiratory infection.”

Other studies have linked vitamin D levels to mental health, pregnancy outcomes, and even how long people survive after a cancer diagnosis. It’s tantalizing to imagine that a cheap supplement could benefit so many aspects of our health.

But, as you might have gathered if you’ve got this far, we’re not quite there yet. The evidence on the effects of vitamin D supplementation for those various conditions is mixed at best.

In fairness to researchers, it can be difficult to run a randomized clinical trial for vitamin D supplements. That’s because most of us get the bulk of our vitamin D from sunlight. Our skin converts UVB rays into a form of the vitamin that our bodies can use. We get it in our diets, too, but not much. (The main sources are oily fish, egg yolks, mushrooms, and some fortified cereals and milk alternatives.)

The standard way to measure a person’s vitamin D status is to look at blood levels of 25-hydroxycholecalciferol (25(OH)D), which is formed when the liver metabolizes vitamin D. But not everyone can agree on what the “ideal” level is.

Even if everyone did agree on a figure, it isn’t obvious how much vitamin D a person would need to consume to reach this target, or how much sunlight exposure it would take. One complicating factor is that people respond to UV rays in different ways—a lot of that can depend on how much melanin is in your skin. Similarly, if you’re sitting down to a meal of oily fish and mushrooms and washing it down with a glass of fortified milk, it’s hard to know how much more you might need.

There is more consensus on the definition of vitamin D deficiency, though. (It’s a blood level below 30 nanomoles per liter, in case you were wondering.) And until we know more about what vitamin D is doing in our bodies, our focus should be on avoiding that.
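That cutoff is simple enough to state directly. A minimal sketch using only the 30 nmol/L figure above (the function name is my own):

```python
# Consensus deficiency cutoff for blood 25(OH)D, in nanomoles per liter.
DEFICIENCY_CUTOFF_NMOL_PER_L = 30

def is_vitamin_d_deficient(level_25_oh_d_nmol_per_l: float) -> bool:
    """True if a 25(OH)D blood level falls below the deficiency cutoff."""
    return level_25_oh_d_nmol_per_l < DEFICIENCY_CUTOFF_NMOL_PER_L

print(is_vitamin_d_deficient(22.0))  # True
print(is_vitamin_d_deficient(55.0))  # False
```

One practical wrinkle: some labs report 25(OH)D in nanograms per milliliter rather than nanomoles per liter, so it's worth checking the units before comparing a result against this threshold.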

For me, that means topping up with a supplement. The UK government advises everyone in the country to take a 10-microgram vitamin D supplement over autumn and winter. That advice doesn’t factor in my age, my blood levels, or the amount of melanin in my skin. But it’s all I’ve got for now.

These technologies could help put a stop to animal testing

14 November 2025 at 05:00

Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing.

Testing potential skin irritants on animals will be stopped by the end of next year, according to a strategy released on Tuesday. By 2027, researchers are “expected to end” tests of the strength of Botox on mice. And drug tests in dogs and nonhuman primates will be reduced by 2030. 

The news follows similar moves by other countries. In April, the US Food and Drug Administration announced a plan to replace animal testing for monoclonal antibody therapies with “more effective, human-relevant models.” And, following a workshop in June 2024, the European Commission also began working on a “road map” to phase out animal testing for chemical safety assessments.

Animal welfare groups have been campaigning for commitments like these for decades. But a lack of alternatives has made it difficult to put a stop to animal testing. Advances in medical science and biotechnology are changing that.

Animals have been used in scientific research for thousands of years. Animal experimentation has led to many important discoveries about how the brains and bodies of animals work. And because regulators require drugs to be first tested in research animals, it has played an important role in the creation of medicines and devices for both humans and other animals.

Today, countries like the UK and the US regulate animal research and require scientists to hold multiple licenses and adhere to rules on animal housing and care. Still, millions of animals are used annually in research. Plenty of scientists don’t want to take part in animal testing. And some question whether animal research is justifiable—especially considering that around 95% of treatments that look promising in animals don’t make it to market.

In recent decades, we’ve seen dramatic advances in technologies that offer new ways to model the human body and test the effects of potential therapies, without experimenting on humans or other animals.

Take “organs on chips,” for example. Researchers have been creating miniature versions of human organs inside tiny plastic cases. These systems are designed to contain the same mix of cells you’d find in a full-grown organ and receive a supply of nutrients that keeps them alive.

Today, multiple teams have created models of livers, intestines, hearts, kidneys, and even the brain. And they are already being used in research. Heart chips have been sent into space to observe how they respond to low gravity. The FDA used lung chips to assess covid-19 vaccines. Gut chips are being used to study the effects of radiation.

Some researchers are even working to connect multiple chips to create a “body on a chip”—although this has been in the works for over a decade and no one has quite managed it yet.

In the same vein, others have been working on creating model versions of organs—and even embryos—in the lab. By growing groups of cells into tiny 3D structures, scientists can study how organs develop and work, and test drugs on them. These organoids can also be personalized: if you take cells from someone, you should be able to model that person’s specific organs. Some researchers have even been able to create organoids of developing fetuses.

The UK government strategy mentions the promise of artificial intelligence, too. Many scientists have been quick to adopt AI as a tool to help them make sense of vast databases and to find connections between genes, proteins, and disease, for example. Others are using AI to design all-new drugs.

Those new drugs could potentially be tested on virtual humans. Not flesh-and-blood people, but digital reconstructions that live in a computer. Biomedical engineers have already created digital twins of organs. In ongoing trials, digital hearts are being used to guide surgeons on how—and where—to operate on real hearts.

When I spoke to Natalia Trayanova, the biomedical engineering professor behind this trial, she told me that her model could recommend regions of heart tissue to be burned off as part of treatment for atrial fibrillation. Her tool would normally suggest two or three regions but occasionally would recommend many more. “They just have to trust us,” she told me.

It is unlikely that we’ll completely phase out animal testing by 2030. The UK government acknowledges that animal testing is still required by lots of regulators, including the FDA, the European Medicines Agency, and the World Health Organization. And while alternatives to animal testing have come a long way, none of them perfectly capture how a living body will respond to a treatment.

At least not yet. Given all the progress that has been made in recent years, it’s not too hard to imagine a future without animal testing.


Cloning isn’t just for celebrity pets like Tom Brady’s dog

7 November 2025 at 05:00

This week, we heard that Tom Brady had his dog cloned. The former quarterback revealed that his new dog, Junie, is actually a clone of Lua, a pit bull mix that died in 2023.

Brady’s announcement follows those of celebrities like Paris Hilton and Barbra Streisand, who also famously cloned their pet dogs. But some believe there are better ways to make use of cloning technologies.

While the pampered pooches of the rich and famous may dominate this week’s headlines, cloning technologies are also being used to diversify the genetic pools of inbred species and potentially bring other animals back from the brink of extinction.

Cloning itself isn’t new. The first mammal cloned from an adult cell, Dolly the sheep, was born in the 1990s. The technology has been used in livestock breeding over the decades since.

Say you’ve got a particularly large bull, or a cow that has an especially high milk yield. Those animals are valuable. You could selectively breed for those kinds of characteristics. Or you could clone the original animals—essentially creating genetic twins.

Scientists can take some of the animals’ cells, freeze them, and store them in a biobank. That opens the option to clone them in the future. It’s possible to thaw those cells, remove the DNA-containing nuclei of the cells, and insert them into donor egg cells.

Those donor egg cells, which come from another animal of the same species, have their own nuclei removed. So it’s a case of swapping out the DNA. The resulting cell is stimulated and grown in the lab until it starts to look like an embryo. Then it is transferred to the uterus of a surrogate animal—which eventually gives birth to a clone.

There are a handful of companies offering to clone pets. Viagen, which claims to have “cloned more animals than anyone else on Earth,” will clone a dog or cat for $50,000. That’s the company that cloned Streisand’s pet dog Samantha, twice.

This week, Colossal Biosciences—the “de-extinction” company that claims to have resurrected the dire wolf and created a “woolly mouse” as a precursor to reviving the woolly mammoth—announced that it had acquired Viagen, but that Viagen will “continue to operate under its current leadership.”

Pet cloning is controversial, for a few reasons. The companies themselves point out that, while the cloned animal will be a genetic twin of the original animal, it won’t be identical. One issue is mitochondrial DNA—a tiny fraction of DNA that sits outside the nucleus and is inherited from the mother. The cloned animal may inherit some of this from the surrogate.

Mitochondrial DNA is unlikely to have much of an impact on the animal itself. More important are the many, many factors thought to shape an individual’s personality and temperament. “It’s the old nature-versus-nurture question,” says Samantha Wisely, a conservation geneticist at the University of Florida. After all, human identical twins are never carbon copies of each other. Anyone who clones a pet expecting a like-for-like reincarnation is likely to be disappointed.

And some animal welfare groups are opposed to the practice of pet cloning. People for the Ethical Treatment of Animals (PETA) described it as “a horror show,” and the UK’s Royal Society for the Prevention of Cruelty to Animals (RSPCA) says that “there is no justification for cloning animals for such trivial purposes.” 

But there are other uses for cloning technology that are arguably less trivial. Wisely has long been interested in diversifying the gene pool of the critically endangered black-footed ferret, for example.

Today, there are around 10,000 black-footed ferrets that have been captively bred from only seven individuals, says Wisely. That level of inbreeding isn’t good for any species—it tends to leave organisms at risk of poor health. They are less able to reproduce or adapt to changes in their environment.

Wisely and her colleagues had access to frozen tissue samples taken from two other ferrets. Along with colleagues at the nonprofit Revive & Restore, the team created clones of those two individuals. The first clone, Elizabeth Ann, was born in 2020. Since then, other clones have been born, and the team has started breeding the cloned animals with the descendants of the other seven ferrets, says Wisely.

The same approach has been used to clone the endangered Przewalski’s horse, using decades-old tissue samples stored by the San Diego Zoo. It’s too soon to predict the impact of these efforts. Researchers are still evaluating the cloned ferrets and their offspring to see if they behave like typical animals and could survive in the wild.

Even this practice is not without its critics. Some have pointed out that cloning alone will not save any species. After all, it doesn’t address the habitat loss or human-wildlife conflict that is responsible for the endangerment of these animals in the first place. And there will always be detractors who accuse people who clone animals of “playing God.” 

For all her involvement in cloning endangered ferrets, Wisely tells me she would not consider cloning her own pets. She currently has three rescue dogs, a rescue cat, and “geriatric chickens.” “I love them all dearly,” she says. “But there are a lot of rescue animals out there that need homes.”


Here’s the latest company planning for gene-edited babies

31 October 2025 at 15:27

A West Coast biotech entrepreneur says he’s secured $30 million to form a public-benefit company to study how to safely create genetically edited babies, marking the largest known investment in the taboo technology.

The new company, called Preventive, is being formed to research so-called “heritable genome editing,” in which the DNA of embryos would be modified by correcting harmful mutations or installing beneficial genes. The goal would be to prevent disease.

Preventive was founded by the gene-editing scientist Lucas Harrington, who described his plans yesterday in a blog post announcing the venture. Preventive, he said, will not rush to try out the technique but instead will dedicate itself “to rigorously researching whether heritable genome editing can be done safely and responsibly.”

Creating genetically edited humans remains controversial, and the first scientist to do it, in China, was imprisoned for three years. The procedure remains illegal in many countries, including the US, and doubts surround its usefulness as a form of medicine.

Still, as gene-editing technology races forward, the temptation to shape the future of the species may prove irresistible, particularly to entrepreneurs keen to put their stamp on the human condition. In theory, even small genetic tweaks could create people who never get heart disease or Alzheimer’s, and who would pass those traits on to their own offspring.

According to Harrington, if the technique proves safe, it “could become one of the most important health technologies of our time.” He has estimated that editing an embryo would cost only about $5,000 and believes regulations could change in the future. 

Preventive is the third US startup this year to say it is pursuing technology to produce gene-edited babies. The first, Bootstrap Bio, based in California, is reportedly seeking seed funding and has an interest in enhancing intelligence. Another, Manhattan Genomics, is also in the formation stage but has not announced funding yet.

As of now, none of these companies have significant staff or facilities, and they largely lack any credibility among mainstream gene-editing scientists. Reached by email, Fyodor Urnov, an expert in gene editing at the University of California, Berkeley, where Harrington studied, said he believes such ventures should not move forward.

Urnov has been a pointed critic of the concept of heritable genome editing, calling it dangerous, misguided, and a distraction from the real benefits of gene editing to treat adults and children. 

In his email, Urnov said the launch of still another venture into the area made him want to “howl with pain.”  

Harrington’s venture was incorporated in Delaware in May 2025 under the name Preventive Medicine PBC. As a public-benefit corporation, it is organized to put its public mission above profits. “If our research shows [heritable genome editing] cannot be done safely, that conclusion is equally valuable to the scientific community and society,” Harrington wrote in his post.

Harrington is a cofounder of Mammoth Biosciences, a gene-editing company pursuing drugs for adults, and remains a board member there.

In recent months, Preventive has sought endorsements from leading figures in genome editing, but according to its post, it had secured only one—from Paula Amato, a fertility doctor at Oregon Health & Science University, who said she had agreed to act as an advisor to the company.

Amato is a member of a US-based team that has researched embryo editing since 2017, and she has promoted the technology as a way to increase IVF success. That could be the case if editing could correct abnormal embryos, making more available for use in trying to create a pregnancy.

It remains unclear where Preventive’s funding is coming from. Harrington said the $30 million was gathered from “private funders who share our commitment to pursuing this research responsibly.” But he declined to identify those investors other than SciFounders, a venture firm he runs with his personal and business partner Matt Krisiloff, the CEO of the biotech company Conception, which aims to create human eggs from stem cells.

That’s yet another technology that could change reproduction, if it works. Krisiloff is listed as a member of Preventive’s founding team.

The idea of edited babies has received growing attention from figures in the cryptocurrency business. These include Brian Armstrong, the billionaire founder of Coinbase, who has held a series of off-the-record dinners to discuss the technology (which Harrington attended). Armstrong previously argued that the “time is right” for a startup venture in the area.

Will Harborne, a crypto entrepreneur and partner at LongGame Ventures, says he’s “thrilled” to see Preventive launch. If the technology proves safe, he argues, “widespread adoption is inevitable,” calling its use a “societal obligation.”

Harborne’s fund has invested in Herasight, a company that uses genetic tests to rank IVF embryos for future IQ and other traits. That’s another hotly debated technology, but one that has already reached the market, since such testing isn’t strictly regulated. Some have begun to use the term “human enhancement companies” to refer to such ventures.

What’s still lacking is evidence that leading gene-editing specialists support these ventures. Preventive was unsuccessful in establishing a collaboration with at least one key research group, and Urnov says he had harsh words for Manhattan Genomics when that company reached out to him about working together. “I encourage you to stop,” he wrote back. “You will cause zero good and formidable harm.”

Harrington thinks Preventive could change such attitudes, if it shows that it is serious about doing responsible research. “Most scientists I speak with either accept embryo editing as inevitable or are enthusiastic about the potential but hesitate to voice these opinions publicly,” he told MIT Technology Review earlier this year. “Part of being more public about this is to encourage others in the field to discuss this instead of ignoring it.”

Here’s why we don’t have a cold vaccine. Yet.

31 October 2025 at 05:00

For those of us in the Northern Hemisphere, it’s the season of the sniffles. As the weather turns, we’re all spending more time indoors. The kids have been back at school for a couple of months. And cold germs are everywhere.

My youngest started school this year, and along with artwork and seedlings, she has also been bringing home lots of lovely bugs to share with the rest of her family. As she coughed directly into my face for what felt like the hundredth time, I started to wonder if there was anything I could do to stop this endless cycle of winter illnesses. We all got our flu jabs a month ago. Why couldn’t we get a vaccine to protect us against the common cold, too?

Scientists have been working on this for decades. It turns out that creating a cold vaccine is hard. Really hard.

But not impossible. There’s still hope. Let me explain.

Technically, colds are infections that affect your nose and throat, causing symptoms like sneezing, coughing, and generally feeling like garbage. Unlike some other infections—covid-19, for example—they aren’t defined by the specific virus that causes them.

That’s because there are a lot of viruses that cause colds, including rhinoviruses, adenoviruses, and even seasonal coronaviruses (they don’t all cause covid!). Within those virus families, there are many different variants.

Take rhinoviruses, for example. These viruses are thought to be behind most colds. They’re human viruses—over the course of evolution, they have become perfectly adapted to infecting us, rapidly multiplying in our noses and airways to make us sick. There are around 180 rhinovirus variants, says Gary McLean, a molecular immunologist at Imperial College London in the UK.

Once you factor in the other cold-causing viruses, there are around 280 variants all told. That’s 280 suspects behind the cough that my daughter sprayed into my face. It’s going to be really hard to make a vaccine that will offer protection against all of them.

The second challenge lies in the prevalence of those variants.

Scientists tailor flu and covid vaccines to whatever strain happens to be circulating. Months before flu season starts, the World Health Organization advises countries on which strains their vaccines should protect against. Early recommendations for the Northern Hemisphere can be based on which strains seem to be dominant in the Southern Hemisphere, and vice versa.

That approach wouldn’t work for the common cold, because all those hundreds of variants are circulating all the time, says McLean.

That’s not to say that people haven’t tried to make a cold vaccine. There was a flurry of interest in the 1960s and ’70s, when scientists made valiant efforts to develop vaccines for the common cold. Sadly, they all failed. And we haven’t made much progress since then.

In 2022, a team of researchers reviewed all the research that had been published up to that year. They identified only one clinical trial—and it was conducted back in 1965.

Interest has certainly died down since then, too. Some question whether a cold vaccine is even worth the effort. After all, most colds don’t require much in the way of treatment and don’t last more than a week or two. There are many, many more dangerous viruses out there we could be focusing on.

And while cold viruses do mutate and evolve, no one really expects them to cause the next pandemic, says McLean. They’ve evolved to cause mild disease in humans—something they’ve been doing successfully for a long, long time. Flu viruses—which can cause serious illness, disability, or even death—pose a much bigger risk, so they probably deserve more attention.

But colds are still irritating, disruptive, and potentially harmful. Rhinoviruses are considered to be the leading cause of human infectious disease. They can cause pneumonia in children and older adults. And once you add up doctor visits, medication, and missed work, the economic cost of colds is pretty hefty: a 2003 study put it at $40 billion per year for the US alone.

So it’s reassuring that we needn’t abandon all hope: Some scientists are making progress! McLean and his colleagues are working on ways to prepare the immune systems of people with asthma and lung diseases to potentially protect them from cold viruses. And a team at Emory University has developed a vaccine that appears to protect monkeys from around a third of rhinoviruses.

There’s still a long way to go. Don’t expect a cold vaccine to materialize in the next five years, at least. “We’re not quite there yet,” says Michael Boeckh, an infectious-disease researcher at Fred Hutch Cancer Center in Seattle, Washington. “But will it at some point happen? Possibly.”

At the end of our Zoom call, perhaps after reading the disappointed expression on my sniffling, cold-riddled face (yes, I did end up catching my daughter’s cold), McLean told me he hoped he was “positive enough.” He admitted that he used to be more optimistic about a cold vaccine. But he hasn’t given up hope. He’s even running a trial of a potential new vaccine in people, although he wouldn’t reveal the details.

“It could be done,” he said.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

How conspiracy theories infiltrated the doctor’s office

30 October 2025 at 06:00

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

Unfortunately, this modern impulse to “do your own research” became even more pronounced during the coronavirus pandemic.


This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.


We asked a number of health-care professionals about how this shifting landscape is changing their profession. They told us that they are being forced to adapt how they treat patients. It’s a wide range of experiences: Some say patients tell them they just want more information about certain treatments because they’re concerned about how effective they are. Others hear that their patients just don’t trust the powers that be. Still others say patients are rejecting evidence-based medicine altogether in favor of alternative theories they’ve come across online. 

These are their stories, in their own words.

Interviews have been edited for length and clarity.


The physician trying to set shared goals 

David Scales

Internal medicine hospitalist and assistant professor of medicine,
Weill Cornell Medical College
New York City

Every one of my colleagues has stories about patients who have rejected care, or had very peculiar perspectives on what their care should be. Sometimes that’s driven by religion. But I think what has changed is people, not necessarily from a religious standpoint, having very fixed beliefs that are sometimes—based on all the evidence that we have—in contradiction with their health goals. And that is a very challenging situation. 

I once treated a patient with a connective tissue disease called Ehlers-Danlos syndrome. While there’s no doubt that the illness exists, there’s a lot of doubt and uncertainty over which symptoms can be attributed to Ehlers-Danlos. This means it can fall into what social scientists call a “contested illness.” 

Contested illnesses used to be rallying points for arguably fringe movements, but they have become much more prominent since the rise of social media in the mid-2010s. Patients often search for information that resonates with their experience. 

This patient was very hesitant about various treatments, and it was clear she was getting her information from, I would say, suspect sources. She’d been following people online who were not necessarily trustworthy, so I sat down with her and we looked them up on Quackwatch, a site that lists health myths and misconduct. 

“She was extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources.”

She was still accepting of treatment, extremely knowledgeable, and had done a lot of her own research, but she struggled to tell the difference between good and bad sources, and she held fixed beliefs that overemphasized particular things—like which symptoms might be attributable to other conditions.

Physicians have the tools to work with patients who are struggling with these challenges. The first is motivational interviewing, a counseling technique that was developed for people with substance-use disorders. It’s a nonjudgmental approach that uses open-ended questions to draw out people’s motivations, and to find where there’s a mismatch between their behaviors and their beliefs. It’s highly effective in treating people who are vaccine-hesitant.

Another is an approach called shared decision-making. First we work out what the patient’s goals are and then figure out a way to align those with what we know about the evidence-based way to treat them. It’s something we use for end-of-life care, too.

What’s concerning to me is that it seems as though there’s a dynamic of patients coming in with a fixed belief about how to diagnose their illness and how their symptoms should be treated that’s completely divorced from the kinds of medicine you’d find in textbooks—and that the same dynamic is starting to extend to other illnesses, too.


The therapist committed to being there when the conspiracy fever breaks 

Damien Stewart

Psychologist
Warsaw, Poland

Before covid, I hadn’t really had any clients bring conspiracy theories into my practice. But once the pandemic began, they went from being fun or harmless to something dangerous.

In my experience, vaccines were the topic where I first really started to see some militancy—people who were looking down the barrel of losing their jobs because they wouldn’t get vaccinated. At one point, I had an out-and-out conspiracy theorist say to me, “I might as well wear a yellow star like the Jews during the Holocaust, because I won’t get vaccinated.” 

I felt pure anger, and I reached a point in my therapeutic journey I didn’t know would ever occur—I’d found that I had a line that could be crossed by a client that I could not tolerate. I spoke in a very direct manner he probably wasn’t used to and challenged his conspiracy theory. He got very angry and hung up the call.  

It made me figure out how I was going to deal with this in the future, and to develop an approach—which was to not challenge the conspiracy theory, but to gently talk through it, to provide alternative points of view and ask questions. I try to find the therapeutic value in the information, in the conversations we’re having. My belief is—and evidence seems to show—that people believe in conspiracy theories because there’s something wrong in their life that is inexplicable, and they need something to explain what’s happening to them. And even if I have no belief or agreement whatsoever in what they’re saying, I think I need to sit here and have this conversation, because one day this person might snap out of it, and I need to be here when that happens.

As a psychologist, you have to remember that these people who believe in these things are extremely vulnerable. So my anger around these conspiracy theories has changed from being directed toward the deliverer—the person sitting in front of me saying these things—to the people driving the theories.


The emergency room doctor trying to get patients to reconnect with the evidence

Luis Aguilar Montalvan

Attending emergency medicine physician 
Queens, New York

The emergency department is essentially the pulse of what is happening in society. That’s what really attracted me to it. And I think the job of the emergency doctor, particularly within shifting political views or belief in Western medicine, is to try to reconnect with someone. To just create the experience that you need to prime someone to hopefully reconsider their relationship with this evidence-based medicine.

When I was working in the pediatrics emergency department a few years ago, we saw a resurgence of diseases we thought we had eradicated, like measles. I typically framed it by saying to the child’s caregiver: “This is a disease we typically use vaccines for, and the vaccine can prevent it in the majority of people.” 

“The doctor is now more like a consultant or a customer service provider than the authority. … The power dynamic has changed.”

The sentiment among my adult patients who are reluctant to get vaccinated or take certain medications seems to be from a mistrust of the government or “The System” rather than from anything Robert F. Kennedy Jr. says directly, for example. I’m definitely seeing more patients these days asking me what they can take to manage a condition or pain that’s not medication. I tell them that the knowledge I have is based on science, and explain the medications I’d typically give other people in their situation. I try to give them autonomy while reintroducing the idea of sticking with the evidence, and for the most part they’re appreciative and courteous.

The role of doctor has changed in recent years—there’s been a cultural change. My understanding is that back in the day, what the doctor said, the patient did. Some doctors used to shame parents who hadn’t vaccinated their kids. Now we’re shifting away from that, and the doctor is now more like a consultant or a customer service provider than the authority. I think that could be because we’ve seen a lot of bad actors in medicine, so the power dynamic has changed.  

I think if we had a more unified approach at a national level, if the government had an actual unified and transparent relationship with the population, that would set us up right. But I’m not sure we’ve ever had it.


The psychologist who supported severely mentally ill patients through the pandemic 

Michelle Sallee

Psychologist, board certified in serious mental illness psychology
Oakland, California

I’m a clinical psychologist who only works with people who have been in the hospital three or more times in the last 12 months. I do both individual therapy and a lot of group work, and several years ago during the pandemic, I wrote a 10-week program for patients about how to cope with sheltering in place, following safety guidelines, and their concerns about vaccines.

My groups were very structured around evidence-based practice, and I had rules for the groups. First, I would tell people that the goal was not to talk them out of their conspiracy theory; my goal was not to talk them into a vaccination. My goal was to provide a safe place for them to be able to talk about things that were terrifying to them. We wanted to reduce anxiety, depression, thoughts of suicide, and the need for psychiatric hospitalizations. 

Half of the group was pro–public health requirements, and their paranoia and fear centered on people who don’t get vaccinated; the other half might have been strongly opposed to anyone other than themselves deciding they need a vaccination or a mask. Both sides were fearing for their lives—but from each other.

I wanted to make sure everybody felt heard, and it was really important to be able to talk about what they believed—like, some people felt like the government was trying to track us and even kill us—without any judgment from other people. My theory is that if you allow people to talk freely about what’s on their mind without blocking them with your own opinions or judgment, they will find their way eventually. And a lot of times that works. 

People have been stuck on their conspiracy theory, or their paranoia, for a long time because they’re always fighting with people about it—everyone’s telling them that this is not true. So we would just have an open discussion about these things. 

“People have been stuck on their conspiracy theory for a long time because they’re always fighting with people about it, everyone’s telling them that this is not true.”

I ran the program four times for a total of 27 people, and the thing that I remember the most was how respectful and tolerant and empathic, but still honest about their feelings and opinions, everybody was. At the end of the program, most participants reported a decrease in pandemic-related stress. Half reported a decrease in general perceived stress, and half reported no change.

I’d say that the rate of how much vaccines are talked about now is significantly lower, and covid doesn’t really come up anymore. But other medical illnesses come up—patients saying, “My doctor said I need to get this surgery, but I know who they’re working for.” Everybody has their concerns, but when a person with psychosis has concerns, it becomes delusional, paranoid, and psychotic.

I’d like to see more providers be given more training around severe mental illness. These are not people who just need to go to the hospital to get remedicated for a couple of days. There’s a whole life that needs to get looked at here, and they deserve that. I’d like to see more group settings with a combination of psychoeducation, evidence-based research, skills training, and process, because the research says that’s the combination that’s really important.

Editor’s note: Sallee works for a large HMO psychiatry department, and her account here is not on behalf of, endorsed by, or speaking for any larger organization.


The epidemiologist rethinking how to bridge differences in culture and community 

John Wright

Clinician and epidemiologist
Bradford, United Kingdom

I work in Bradford, the fifth-biggest city in the UK. It has a big South Asian population and high levels of deprivation. Before covid, I’d say there was growing awareness about conspiracies. But during the pandemic, I think that lockdown, isolation, fear of this unknown virus, and then the uncertainty about the future came together in a perfect storm to highlight people’s latent attraction to alternative hypotheses and conspiracies—it was fertile ground. I’ve been a National Health Service doctor for almost 40 years, and until recently, the NHS had a great reputation, with great trust, and great public support. The pandemic was the first time that I started seeing that erode.

It wasn’t just conspiracies about vaccines or new drugs, either—it was also an undermining of trust in public institutions. I remember an older woman who had come into the emergency department with covid. She was very unwell, but she just wouldn’t go into hospital despite all our efforts, because there were conspiracies going around that we were killing patients in hospital. So she went home, and I don’t know what happened to her.

The other big change in recent years has been social media and social networks, which have obviously amplified and accelerated alternative theories and conspiracies. That’s been the tinder that’s allowed these wildfires of conspiracy theories to spread. In Bradford, particularly among ethnic minority communities, there have been stronger links between people—allowing this to spread quicker—but also a more structural distrust. 

Vaccination rates have fallen since the pandemic, and we’re seeing lower uptake of the meningitis and HPV vaccines in schools among South Asian families. Ultimately, this needs a bigger societal approach than individual clinicians putting needles in arms. We started a project called Born in Bradford in 2007 that’s following more than 13,000 families, including around 20,000 teenagers as they grow up. One of the biggest focuses for us is how they use social media and how it links to their mental health, so we’re asking them to donate their digital media to us so we can examine it in confidence. We’re hoping it could allow us to explore conspiracies and influences.

The challenge for the next generation of resident doctors and clinicians is: How do we encourage health literacy in young people about what’s right and what’s wrong without being paternalistic? We also need to get better at engaging with people as health advocates to counter some of the online narratives. The NHS website can’t compete with how engaging content on TikTok is.


The pediatrician who worries about the confusing public narrative on vaccines

Jessica Weisz

Pediatrician
Washington, DC

I’m an outpatient pediatrician, so I do a lot of preventive care, checkups, and sick visits—treating coughs and colds, those sorts of things. I’ve had specific training in how to support families in clinical decision-making related to vaccines. Every family wants what’s best for their child, and supporting them is part of my job.

I don’t see specific articulation of conspiracy theories, but I do hear more questions about vaccines—conversations I’ve not typically had to have before. I’ve found that parents and caregivers do ask general questions about the risks and benefits of vaccines. We just try to reiterate that vaccines have been studied, that they are intentionally scheduled to protect an immature immune system when it’s the most vulnerable, and that we want everyone to be safe, healthy, and strong. That’s how we can provide protection.

“I think what’s confusing is that distress is being sowed in headlines when most patients, families, and caregivers are motivated and want to be vaccinated.”

I feel that the narrative in the public space is unfairly confusing to families when over 90% of families still want their kids to be vaccinated. The families who are not as interested in that, or have questions—it typically takes multiple conversations to support that family in their decision-making. It’s very rarely one conversation.

I think what’s confusing is that distress is being sowed in headlines when most patients, families, and caregivers are motivated and want to be vaccinated. For example, some of the headlines around recent changes the CDC is making make it sound like it’s making a huge clinical change, when it’s actually not a huge change from what people are typically doing. In my standard clinical practice, we don’t give the combined MMRV vaccine to children under four years old, and that’s been standard practice in all of the places I’ve worked on the Eastern Seaboard. [Editor’s note: In early October, the CDC updated its recommendation that young children receive the varicella vaccine separately from the combined vaccine for measles, mumps, and rubella. Many practitioners, including Weisz, already offer the shots separately.]

If you look at public surveys, pediatricians are still the most trusted [among health-care providers], and I do live in a jurisdiction with pretty strong policy about school-based vaccination. I think that people are getting information from multiple sources, but at the end of the day, in terms of both the national rates and also what I see in clinical practice, we really are seeing most families wanting vaccines.

An AI app to measure pain is here

24 October 2025 at 05:00

How are you feeling?

I’m genuinely interested in the well-being of all my treasured Checkup readers, of course. But this week I’ve also been wondering how science and technology can help answer that question—especially when it comes to pain. 

In the latest issue of MIT Technology Review magazine, Deena Mousa describes how an AI-powered smartphone app is being used to assess how much pain a person is in.

The app, and other tools like it, could help doctors and caregivers. They could be especially useful in the care of people who aren’t able to tell others how they are feeling.

But they are far from perfect. And they open up all kinds of thorny questions about how we experience, communicate, and even treat pain.

Pain can be notoriously difficult to describe, as almost everyone who has ever been asked to will know. At a recent medical visit, my doctor asked me to rank my pain on a scale from 1 to 10. I found it incredibly difficult to do. A 10, she said, meant “the worst pain imaginable,” which brought back unpleasant memories of having appendicitis.

A short while before the problem that brought me in, I’d broken my toe in two places, which had hurt like a mother—but less than appendicitis. If appendicitis was a 10, breaking a toe was an 8, I figured. If that was the case, maybe my current pain was a 6. As a pain score, it didn’t sound as bad as I actually felt. I couldn’t help wondering if I might have given a higher score if my appendix were still intact. I wondered, too, how someone else with my medical issue might score their pain.

In truth, we all experience pain in our own unique ways. Pain is subjective, and it is influenced by our past experiences, our moods, and our expectations. The way people describe their pain can vary tremendously, too.

We’ve known this for ages. In the 1940s, the anesthesiologist Henry Beecher noted that wounded soldiers were much less likely to ask for pain relief than similarly injured people in civilian hospitals. Perhaps they were putting on a brave face, or maybe they just felt lucky to be alive, given their circumstances. We have no way of knowing how much pain they were really feeling.

Given this messy picture, I can see the appeal of a simple test that can score pain and help medical professionals understand how best to treat their patients. That’s what is being offered by PainChek, the smartphone app Deena wrote about. The app works by assessing small facial movements, such as lip raises or brow pinches. A user is then required to fill out a separate checklist to identify other signs of pain the patient might be displaying. It seems to work well, and it is already being used in hospitals and care settings.

But the app is judged against subjective reports of pain. It might be useful for assessing the pain of people who can’t describe it themselves—perhaps because they have dementia, for example—but it won’t add much to assessments from people who can already communicate their pain levels.

There are other complications. Say a test could spot that a person was experiencing pain. What can a doctor do with that information? Perhaps prescribe pain relief—but most of the pain-relieving drugs we have were designed to treat acute, short-term pain. If a person is grimacing from a chronic pain condition, the treatment options are more limited, says Stuart Derbyshire, a pain neuroscientist at the National University of Singapore.

The last time I spoke to Derbyshire was back in 2010, when I covered work by researchers in London who were using brain scans to measure pain. That was 15 years ago. But pain-measuring brain scanners are yet to become a routine part of clinical care.

That scoring system, too, was built on subjective pain reports. Those reports are, as Derbyshire puts it, “baked into the system.” It’s not ideal, but when it comes down to it, we must rely on these wobbly, malleable, and sometimes incoherent self-descriptions of pain. It’s the best we have.

Derbyshire says he doesn’t think we’ll ever have a “pain-o-meter” that can tell you what a person is truly experiencing. “Subjective report is the gold standard, and I think it always will be,” he says.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Job titles of the future: AI embryologist

22 October 2025 at 06:00

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up. The United States is in fact facing a critical shortage of both embryologists and genetic counselors. 

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. 

Wiemer is the chief scientific officer and head of clinical affairs at Fairtility, a company that uses artificial intelligence to shed light on the viability of eggs and embryos before proceeding with IVF. The company’s algorithm, called CHLOE (for Cultivating Human Life through Optimal Embryos), has been trained on millions of embryo data points and outcomes and can quickly sift through a patient’s embryos to point the clinician to the ones with the highest potential for successful implantation. This, the company claims, will improve time to pregnancy and live births. While its effectiveness has been tested only retrospectively to date, CHLOE is the first and only FDA-approved AI tool for embryo assessment. 

Current challenge 

When a patient undergoes IVF, the goal is to make genetically normal embryos. Embryologists collect cells from each embryo and send them off for external genetic testing. The results of this biopsy can take up to two weeks, and the process can add thousands of dollars to the treatment cost. Moreover, passing the screen just means an embryo has the correct number of chromosomes—a count that doesn’t necessarily reflect the embryo’s overall health. 

“An embryo has one singular function, and that is to divide,” says Wiemer. “There are millions of data points concerning embryo cell division, cell division characteristics, area and size of the inner cell mass, and the number of times the trophectoderm [the layer that contributes to the future placenta] contracts.”

The AI model allows for a group of embryos to be constantly measured against the optimal characteristics at each stage of development. “What CHLOE answers is: How well did that embryo develop? And does it have all the necessary components that are needed in order to make a healthy implantation?” says Wiemer. CHLOE produces an AI score reflecting all the analysis that’s been done within an embryo. 

In the near future, Wiemer says, reducing the percentage of abnormal embryos that IVF clinics transfer to patients should not require a biopsy: “Every embryology laboratory will be doing automatic assessments of embryo development.” 

A changing field

Wiemer, who started his career in animal science, says the difference between animal embryology and human embryology is the extent of paperwork. “Embryologists spend 40% of their time on non-embryology skills,” he adds. “AI will allow us to declutter the embryology field so we can get back to being true scientists.” This means spending more time studying the embryos, ensuring that they are developing normally, and using all that newfound information to get better at picking which embryos to transfer. 

“CHLOE is like having a virtual assistant in the lab to help with embryo selection, ensure conditions are optimal, and send out reports to patients and clinical staff,” he says. “Getting to study data and see what impacts embryo development is extremely rewarding, given that this capability was impossible a few years ago.” 

Amanda Smith is a freelance journalist and writer reporting on culture, society, human interest, and technology.

New noninvasive endometriosis tests are on the rise

21 October 2025 at 06:00

Shantana Hazel often thought her insides might fall out during menstruation. It took 14 years of stabbing pain before she ultimately received a diagnosis of endometriosis, an inflammatory disease where tissue similar to the uterine lining implants outside the uterus and bleeds with each cycle. The results can include painful periods and damaging scar tissue. Hazel, now 50 and the founder of the endometriosis advocacy organization Sister Girl Foundation, was once told by a surgeon that her internal organs were “fused together” by lesions resembling Laffy Taffy. After 16 surgeries, she had a hysterectomy at age 30. 

Hazel is far from alone. Endometriosis inflicts debilitating pain and heavy bleeding on more than 11% of reproductive-age women in the United States. Diagnosis takes nearly 10 years on average, partly because half the cases don’t show up on scans, and surgery is required to obtain tissue samples.

But a new generation of noninvasive tests is emerging that could help accelerate diagnosis and improve management of this poorly understood condition. 

Within the next year, several companies, including Hera Biotech, Proteomics International, NextGen Jane, and Ziwig, aim to launch endometriosis diagnostics in the United States. Their tests analyze biomarkers—biological molecules (in this case, mRNA, proteins, or miRNA) that signal a disease or process like inflammation—in samples of endometrial tissue, blood, menstrual blood, and saliva. 

Ziwig Lab, headquartered in France, hopes to have its test approved for use in the United States soon.
COURTESY MM PRODUCTION – MAGALI MEIRA

These tests could help patients get an accurate diagnosis quickly and noninvasively, speeding access to endometriosis treatments and management strategies, including surgery, hormonal medications, and pelvic floor physical therapy. Early identification could also help doctors manage conditions for which people with endometriosis face increased risk, including cardiovascular disease, heart attack, and stroke. Endometriosis can also make it difficult to become pregnant. Because half of women with infertility have endometriosis, identifying and managing the condition sooner may improve fertility and IVF outcomes. 

Endometriosis biomarker tests rely on a range of technologies, including single-cell RNA sequencing and mass spectrometry that can identify thousands of proteins simultaneously. “These instruments are very good at precisely identifying a molecule, in [our] case a protein. And what’s changed over the last five or 10 years is they’ve gotten more sensitive,” says Proteomics cofounder Richard Lipscombe. Machine learning can also now efficiently sift through large quantities of the resulting data. 

So far only Ziwig has a test on the market. It uses a saliva sample to identify biomarkers in people with endometriosis symptoms and is currently sold in 30 countries. In France, where the company is based, the cost is fully covered by national health insurance. 

Some researchers are concerned that Ziwig’s test might not be accurate when it’s used in larger and more diverse populations; its interim validation study included just 200 people. “I’m not saying this doesn’t work. I just would want to see more validation,” says Kathryn Terry, an associate professor of epidemiology and gynecology at Harvard. Company representatives say they’re preparing to publish results on 1,000 patients in the near future, adding that French authorities had access to the full data set before approving government reimbursements.

P4 Flowcell Cartridge for Illumina NextSeq 2000 Sequencing Platform.
COURTESY MM PRODUCTION – MAGALI MEIRA

These tests are emerging as momentum is building to tackle endometriosis. Over the past five years, France, Australia, the United Kingdom, and Canada have launched ambitious endometriosis initiatives. 

The potential benefits are not just on the individual level: In 2025, the World Economic Forum estimated that earlier diagnosis and improved treatment to address the chronic pain, infertility, and depression caused by endometriosis could add at least $12 billion to global GDP by 2040.

As these biomarker tests are further developed, it’s possible their results could inform such treatments. Today surgery is often used to excise the lesions. The process can take as long as seven hours, and even then, lesions frequently form again. Jason Abbott, chair of Australia’s National Endometriosis Clinical and Scientific Trials Network, compares endometriosis management today to breast cancer care 30 years ago. Whereas doctors once prescribed surgery for all breast cancer patients, targeted treatments now address the underlying cell processes that help tumors grow and spread. Endometriosis tests could likewise help researchers categorize the condition’s distinct subsets and understand their underlying inflammatory pathways—information drugmakers could use to develop targeted treatments that keep it in remission.

Colleen de Bellefonds is a science journalist based in Paris.

The astonishing embryo models of Jacob Hanna

21 October 2025 at 05:02

When the Palestinian stem-cell scientist Jacob Hanna was stopped while entering the US last May, airport customs agents took him aside and held him for hours in “secondary,” a back office where you don’t have your passport and can’t use your phone. There were two young Russian women and a candy machine in the room with him. Hanna, who has a trim beard and glasses and holds an Israeli passport, accepted the scrutiny. “It’s almost like you are under arrest, but in a friendly way,” he says. He agreed to turn over his phone and social media for inspection.  

“They said, ‘You have the right to refuse,’” he recalls, “and I said, ‘No, no, it’s an open book.’”

The agents scrolling through his feeds would have learned that Hanna is part of Israel’s small Arab Christian minority, a nonbinary LGBTQ-rights advocate, and an outspoken critic of the Gaza occupation, who uses his social media accounts to post images of atrocities and hold up a mirror to scientific colleagues including those at the Weizmann Institute of Science, the pure-science powerhouse where he works—Israel’s version of Caltech or Rockefeller University. In his luggage, they would have found his keffiyeh, or traditional headscarf, which Hanna last year vowed to wear at lecture podiums on his many trips abroad.

Hanna had been stopped before; he knew the routine. Anything to declare? Any biological samples? But this time the agents’ questions touched on a specific new topic: embryos.

Weeks earlier, a Harvard University researcher had been arrested for having frog embryos in her luggage and sent to a detention center in Louisiana. Hanna didn’t have any specimens from his lab, but if he had, it would have been surprisingly hard to say what they were. That’s because his lab specializes in creating synthetic embryo models, structures that resemble real embryos but don’t involve sperm, eggs, or fertilization. 

Instead of relying on the same old recipe biology has followed for a billion years, give or take, Hanna is coaxing the beginnings of animal bodies directly from stem cells. Join these cells together in the right way, and they will spontaneously attempt to organize into an embryo—a feat that’s opening up the earliest phases of development to scientific scrutiny and may lead to a new source of tissue for transplant medicine.

In 2022, working with mice, Hanna reported he’d used the technique to produce synthetic embryos with beating hearts and neural folds—growing them inside small jars connected to a gas mixer, a type of artificial womb. The next year, he repeated the trick using human cells. This time the structures were not so far developed, still spherical in shape. Nonetheless, they were incredibly realistic mimics of a two-week-old human embryo, including cells destined to form the placenta. 

These sorts of models aren’t yet the same as embryos. It’s rare that they form correctly—it takes a hundred tries to make one—and they skip past normal steps before popping into existence. Yet to scientists like the French biologist Denis Duboule, Hanna’s creations are “entirely astonishing and very disturbing.” Soon, Duboule expects, it could be difficult to distinguish between a real human embryo—the kind with legal protections—and one conjured from stem cells. 

Hanna is the vanguard of a wider movement that’s fusing advanced methods in genetics, stem-cell biology, and still-­primitive artificial wombs to create bodies where they’ve never grown before—outside the uterus. Joining the chase are researchers at Caltech, the University of Cambridge, and Rockefeller in New York, as well as a growing cadre of startup companies with commercial aims. There’s Renewal Bio, a startup Hanna cofounded, which hopes to grow synthetic embryos as a source of youthful replacement cells, such as bits of liver or even eggs. In Europe, Dawn Bio has started placing a type of embryo model called a blastoid on uterine tissue. That will light up a pregnancy test and could, the company thinks, provide new insights into IVF medicine. Patent offices in the US and Europe are seeing a flood of claims as universities grasp for exclusive commercial control over these new types of beings. 

Jacob Hanna leads a team at the Weizmann Institute of Science in Rehovot, Israel, that is studying how to create embryos without using sperm, eggs, or fertilization. He’s cofounded a startup company, Renewal Bio, that has plans to use these synthetic embryo models as bioprinters to produce youthful tissue, but ethical questions surround the project.
AHMAD GHARABLI/GETTY IMAGES

Hanna declined a request to discuss his research for this story. But for the last three years, MIT Technology Review has followed Hanna across online presentations, lecture halls, and two in-person ethics meetings, both organized by the Global Observatory for Genome Editing, a public consultation project where he agreed to engage with religious scholars, bioethicists, and other experts. What emerged is a remarkable picture of a scientist working at a Nobel Prize level but whose research, though approved by his institution, raises serious long-term ethical questions.

Exactly how far Hanna has taken his models of the human embryo is an open question. According to public comments from Renewal Bio, the answer is at least 28 days. But it’s possibly further. One scientist in contact with the company said he thought they’d reached close to day 40, a point where you would see the beginning of eyes and budding limbs. Renewal did not respond to a request for comment.

But even if he hasn’t gotten that far yet, Hanna intends to. His team is “trying to make entities at more advanced stages—depending on the goal, it could be day 30 in development, day 40, or day 70,” he told an audience last May in Cambridge, Massachusetts, where he’d traveled to join a panel discussion involving religious scholars and social scientists at the Global Observatory’s annual summit. The more advanced versions would be similar in size and development to a fetus in the third month of pregnancy. 

O. Carter Snead, a bioethicist from the University of Notre Dame who led the panel featuring Hanna, approached me afterward to ask if I’d heard what the scientist had said. Snead was surprised that Hanna had so frankly disclosed his goals and that no one had objected, or maybe even grasped what it meant. Perhaps, Snead thinks, this technology won’t sink in until people can see it with their own eyes. “If you had one of these spinning bottles with something that looked like a human fetus inside it, I think you’d get people’s attention,” he says. “That’s going to be like, whoa—what are we doing?”

Snead, a Catholic who sits on a panel that advises the Vatican, also was not comforted by Hanna’s plan to make sure his models, if they advance to later stages of development, will pass ethical scrutiny. That plan involves blocking the formation of the head, brain, or perhaps heart of the synthetic structures, by means including genetic modification. If there’s no brain, Hanna’s reasoning goes, there’s no awareness, no person, and no foul. Just a clump of organs.

Snead says that’s not the same standard of humanity he knows, which treats all humans the same, regardless of their intellectual capacity or anything else. “What is considered human? Who is considered human?” wonders Snead. “It’s who’s in and who’s out. There is a dramatic consequence of being in versus out of the boundaries of humanity.”

The beginnings of bodies

Each of us—me, you the reader, and Jacob Hanna—started as a fertilized egg, a single cell that’s able to divide and dynamically carry out a program to build a complete body with all its organs and billions of specialized cells. Science has long sought ways to seize on that dramatic potential. A first step came in the 1990s, when scientists were able to isolate powerful stem cells from five-day-old embryos created through in vitro fertilization—and keep them growing in their labs. These embryonic stem cells had the inherent potential to become any other type of cell. If they could be directed in the lab to form, for example, neurons or the insulin-making cells that diabetics need, that would open up a way to treat disease using cell transplants. 

""
A side-by-side comparison of synthetic (left) and natural (right) mouse embryos shows similar formation of the brain and heart.
AMADEI AND HANDFORD/UNIVERSITY OF CAMBRIDGE

But these lab recipes are often unsuccessful, which explains the general lack of new stem-cell treatments. “The sad truth is that over 25 years that we’ve been working on this problem, there are about 10 cell types you make that have reasonable function,” says Chad Cowan, chief scientific officer of the stem-cell company Century Therapeutics. If we think of the body as a car, he explains, “we’ve got only spark plugs. We maybe have some tires.” The body’s most potent blood-forming cells in particular “never appear,” according to Cowan, even though biotech companies have spent millions trying to make them.

It turns out, though, that stem cells retain a natural urge to work together. Scientists began to notice that, when left alone, the cells would join into blobs, tubes, and cavities—some of which resembled parts of an embryo. 

Early versions of these structures were crude, even just a swirling film of cells on a glass slide. But each year, they have grown more realistic. By 2023, Hanna was describing what he called a “bona fide” human embryo model that was “fully integrated,” with all the major parts arranged in an architecture that was hard to distinguish from the real thing. 

His company, Renewal, plans to use these synthetic embryos as a kind of “bioprinter,” producing medically valuable cells in cases where other methods have failed. This could be particularly valuable if the synthetic embryos are a perfect match with a patient’s DNA. And that’s possible too: These days reprogramming anyone’s skin cells into stem cells is easily done. Hanna has tried it on himself, transforming his own cells into synthetic embryos. 

Hanna’s research, and that of other groups, has at times collided with a powerful scientific body called the International Society for Stem Cell Research, or ISSCR, a self-governance organization that sets boundaries about what research can and can’t be published and what terminology to use. That’s to shield scientists from sensational headlines, public backlash, or the reach of actual regulators. 

The organization has taken a particularly categorical position on structures made from stem cells, saying they are mere “models.” According to a statement it fired off in 2023, “embryo models are neither synthetic nor embryos”—and, it added, they “cannot and will not develop to the equivalent of postnatal stage human.” 

Many scientists, including Hanna, agree no one should ever try to make a stem-cell baby. But he is fairly certain these structures will become more realistic and can grow further. In fact, that may be the real test of what an embryo is: whether it can dynamically keep reaching new stages of development, especially organogenesis, or the first emergence of organs. The language in the ISSCR statement, he complained, was “brainwashing.” 

Replacement parts

Most of the commercial projects involving synthetic embryos are doomed to a short and fitful life as the technology proves too difficult or undeveloped. But the idea isn’t going away. Instead, there are signals it’s getting bigger, and weirder. In an editorial published in March by MIT Technology Review, a group of Stanford scientists put forward a proposal for what they called “bodyoids,” arguing that stem cells and artificial wombs may lead to an “unlimited source” of nonsentient human bodies for use in drug research or as organ donors. One of its authors, Henry Greely, among the foremost bioethicists in the US, posted on Bluesky that even though the idea gives him “some creeps,” he added his name because he feels it is plausible enough to need discussion, and “soon.”

Especially in the Bay Area, headless bodies are having a moment. The Stanford biologist Hiro Nakauchi, another “bodyoids” author, said the editorial provided a surprise entrée for him into a world of stealth startups already pursuing synthetic embryos, artificial wombs, and body-part “replacement.” He met the CEO of Hanna’s company, signing on as an advisor. But other teams have still more radical plans. One venture capitalist introduced him to a longevity entrepreneur tinkering with a plan for head transplants. The idea: Swap your aged head onto the body of a younger clone. That company claims to have a facility on a Caribbean island “just like Jurassic Park,” Nakauchi says.   

These sorts of plans—real or rumored—have gotten the attention of the stem-cell police, the ISSCR. This June, an ethics committee led by Amander Clark, a fetal specialist at UCLA and a past president of the society, wrote that it had become aware of “commercial and other groups raising the possibility of building an embryo in vitro” and bringing it to viability inside “artificial systems.” Though the ISSCR had previously decreed that embryo models “cannot and will not” develop to term, it now declared efforts aiming at viability “unsafe and unethical,” placing them in a “prohibited” category. It added that the ban would cover “any purpose: reproductive, research, or commercial.” 

Blurred boundaries

Clark and her colleagues are right that, for the foreseeable future, no one is going to decant a full-term baby out of a bottle. That’s still science fiction. But there’s a pressing issue that needs to be dealt with right now. And that’s what to do about synthetic embryo models that develop just part of the way—say for a few weeks, or months, as Hanna proposes. 

Because right now, hardly any laws or policies apply to synthetic embryos. One reason is their unnatural origin: Because these entities don’t start with conception and grow in labs, most existing laws won’t cover them. That includes the Fetus Farming Prohibition Act, legislation passed unanimously in 2006 by the US Congress, which sought to prevent anyone from growing a fetus for its organs. But that law references “a human pregnancy” and a “uterus”—and there would be neither if a synthetic embryo were grown in a mechanical vessel. 

Another policy under pressure is the “14-day rule,” a widely employed convention that natural embryos should not be grown longer than two weeks in the lab. Though it’s a mostly arbitrary stopping point, it’s been convenient for laboratory scientists to know where their limit is. But that rule isn’t being applied to the embryo models. For instance, even though the United Kingdom has a 14-day rule enshrined in law, that legislation doesn’t define what an embryo is. To scientists working on models, that’s a critical loophole. If the structures aren’t considered true embryos, then the rule doesn’t apply.  

Last year, the University of Cambridge, in the UK, described the situation as a “grey area” and said it “has left scientists and research organisations uncertain about the acceptable boundaries of their work, both legally and ethically.” 

Researchers at the university, which is a hot spot for human embryo models, have been working with one that has advanced features, including beating heart cells. But the appearance of distinctive features under their microscopes is unsettling—even to scientists. “I was scared, honestly,” Jitesh Neupane, who led that work, told the Guardian in 2023. “I had to look down and look back again.” 

That particular stem-cell model isn’t complete—it entirely lacks placenta cells and a brain. So it’s not a real embryo. But it could get ever trickier to insist the models don’t count, given the accelerating race to make them more realistic. To Duboule, scientists are caught in a “fool’s paradox” and a “rather unstable situation.”

Even incomplete models raise the question of where to draw the line. Should you stop when it can feel pain? When it’s just too human-looking for comfort? Scientific leaders may soon have to decide if there are “morally significant” human features—like hands or a face—that should be avoided, whether the structure has a brain or not. “I personally think there should be regulation, and many in the field believe this too,” says Alejandro De Los Angeles, a stem-cell biologist affiliated with the University of Central Florida. 

Hanna says he has all the necessary approvals in Israel to carry his work forward. But he also worries that the ground rules could change. “I’m almost the only one [in Israel] doing these kinds of experiments, and I always live in fear that I might find myself embroiled in some kind of a scandal,” he says. “Things can shift very quickly for political reasons.” 

And his statements about the situation in Gaza have made him a target. He’s gotten voicemails wondering why a Weizmann professor is so sympathetic to Palestine, and once when he returned from a trip, someone had tucked an Israeli army beret into the door handle of his car. Last year, he says, political opponents even went after his science by filing a complaint that his research was illegal.

What is clear is that Hanna, who is gregarious and attentive, has worked to cultivate a large group of friends and allies, including religious authorities—all part of a campaign to explain the science and hear out other views. He says he got a perfect grade in a bioethics class with a rabbi, conferenced with a priest from his hometown in Galilee, and even paid his respects to an Orthodox professor at a conservative hospital in Jerusalem. “It was unofficial. I didn’t have to get a permit from him,” Hanna says. “But … what does he think? Can I get him on board? Do I get a different opinion?” 

“I really do think it’s admirable that he is willing to ask these hard questions about what it is that he’s doing. I think that makes him different,” says Snead. “But if you are cynical, you could ask if his focus on the ethical dimension of this is more of a branding exercise.” Perhaps, Snead says, it’s a way to market the structures as the “green, sustainable alternative to embryos.”

A heartbeat in a jar

To admirers, Hanna is a doctor and researcher “heads above the rest,” according to Eli Adashi, the former dean of Brown University’s medical school. “He’s very unusual, very special, and is making major discoveries that can’t be ignored,” Adashi says. “He’s one of those unusually talented people that exceed the capacity of us mortals, and it all emanates from a town in Galilee that no one knows exists.”

While it is something of a rarity for a Palestinian to rise so high in Israel’s ivory tower, in reality Hanna has an elite background—he’s from a family of MDs, and an uncle, Nabil Hanna, co-developed the first antibody drug for cancer, the blockbuster rituximab.

Since the October 7 attack on Israel by Hamas, Israel has been at war in Gaza, and Hanna’s team has felt the effects. One young scientist dropped his pipette to don an IDF uniform. Another trainee, who is from Gaza, had a brother and other family members struck dead by an Israeli missile that hit near a church where people were sheltering. Then, this June, an Iranian ballistic missile hit the grounds of the Weizmann Institute, shattering windows and walls and sending Hanna’s students scrambling to save research. 

Despite delays in his research due to the ongoing conflict, Hanna’s ideas and technologies are being exported—and emulated. One place to see a version of the artificial womb is at the Janelia Research Campus, in Virginia, where one of Hanna’s former students, Alejandro Aguilera Castrejón, now operates a lab of his own. Aguilera Castrejón, for whom science was a ticket out of the poor outskirts of Mexico City, has tattoos from his wrists to his elbows; the newest depicts a hydra, a sea polyp noted for being able to regenerate itself from a few cells.

During a visit in June, Aguilera Castrejón flipped aside a black cover to reveal the incubator: a metal wheel that slowly turned, gently agitating jars filled with blood serum. Inside one, a mouse embryo drifted—a tiny, translucent shape, curved like a comma. Then, awesomely, a red-colored blob expanded in its center. A heartbeat. 

That day, it was a normal mouse embryo in the jar—it had been transferred there to see how far it would grow. Aguilera Castrejón has the goal of eventually birthing a mouse from an incubator, a process called ectogenesis. But the stem-cell embryos don’t grow as well or as long, he says. The problem isn’t just the challenge of growing them in culture jars. There’s probably some kind of fundamental disorganization. They aren’t entirely normal—not yet true embryos.

""
A rotating bioreactor, developed in Israel, is used to grow synthetic embryos in small jars of blood serum.
GETTY IMAGES

Aguilera Castrejón, who spent eight years at Weizmann contributing to Hanna’s research, is skeptical that the human version of the technology is ready for commercialization. For one thing, it’s inefficient. In every 100 attempts to make a synthetic embryo, the desired structure will form only once or twice. The rest are disorganized blobs—closer to “huevos fritos” than real embryos, he says. “I do think the human embryo model will go further, but it could take years,” he adds.

In Aguilera Castrejón’s view, Hanna is well placed to lead that work. One reason is that Israel offers a relatively permissive environment—and so does Jewish thought. In the Talmud, the embryo is considered “mere water” until the 40th day. Plus, Hanna is already successful. “Some people aren’t allowed to do it. And some people want to do it, but they can’t,” says Aguilera Castrejón. “Jacob wants to make it as realistic as possible and go as far as possible—that is his aim. He’s very ambitious and wants to tackle very big things people don’t dare to do. He really wants to do something big. His main aim is always to grow them as far as you can.” 

The first payoff of a technology for mimicking embryos this way is a new view of the unfolding human no one has ever had before. Real human embryos are rarely seen at the early stages, since they’re inside the womb—and at four or five weeks, many people don’t even know they’re pregnant. It’s been a black box. But synthetic models of the embryo can be made in the thousands (depending on the type), studied closely, inspected with modern microscopes, and subjected to dyes and genetic engineering tools, all while they’re still alive. Add a known toxic chemical that causes birth defects, like thalidomide, and you can closely trace the effects. “Since we don’t have a way to peer into the uterus, this allows us to watch things as if they are intrauterine but are not,” says Adashi, the former Brown dean and a fertility doctor. 

What’s more, a synthetic embryo may be able to make cells correctly—just as a real one does—and make all types at once, expanding on the limited few that scientists can create from stem cells today. While not all embryonic material is useful to medicine, the blood-forming cells in an embryo are known to be particularly potent. In mice, they can be extracted and multiplied—and if transplanted into a mouse subjected to lethal radiation, they will save it. 

Hanna imagines a cancer patient who needs a bone marrow transplant but can’t find a match. Could blood-forming cells be harvested from, say, 100 or 500 embryo-stage clones of that person, providing perfectly matched tissue? 

In his cost-benefit analysis, he believes the chance to save lives outweighs the moral risk of growing embryo models for a month, which is about how long it takes for key blood cells to form. At that stage, says Hanna, he thinks “there is still no personification of the embryo” and it’s permissible to use them in research.

Young everything

Hanna cofounded Renewal in 2022 with Omri Amirav-Drory, a venture capitalist whose fund, NFX, raised about $9 million for the company and purchased rights to Weizmann patents. The startup’s idea is to create synthetic embryos from the cells of patients, allowing them to grow for weeks or months to produce what Amirav-Drory calls “perfect cells” for transplant. That is because the synthetic structure, as a clone, would contain “young, genetically identical everything.”

Speaking at an event for tech futurists last year near San Francisco, Amirav-Drory flashed a picture of pregnancy tests used on the synthetic embryos. “We even went to CVS,” he said, “and by day eight it’s already triggering a pregnancy test. So it’s alive.”  

Amirav-Drory is a fan of Peter F. Hamilton, the science fiction author whose Commonwealth series features a society where space colonists transfer their minds into cloned bodies, attaining second lives. And he’s pitched Hanna’s technology along related lines, as a new type of longevity medicine based on replacing old cells with young ones. He is convinced Hanna’s work is “magic” that’s sure to win a Nobel.


But he knows the startup has both technical and ethical challenges. The technical challenge is that once the synthetic embryos reach a certain size and age, the incubator can’t support them any longer. That’s because they lack a blood supply and need to absorb oxygen and nutrients from their surroundings; they starve once they get too big. One idea being considered is to add a feeding tube, but that involves microsurgery and isn’t easily scalable. The ethical issue is also age-related: The more developed they become, the more they will be recognizably human, with the beginnings of organs and small, webbed fingers and toes. “No one has a problem with day 14, but the further we go, the further it looks like a baby, and we get into trouble. So how do we solve that?” Amirav-Drory asked a different audience, in Menlo Park.

The solution, so far, is a neural knockout—genetic changes made to the embryoids so they don’t develop a brain. The group has already tried out the concept on mice, removing a gene called LIM-1. That yielded a headless mouse, which looks a bit like a pink thumb, except with little claws and a tail. Those mice won’t live after birth, but they can develop in the womb. “We got synthetic mouse embryos growing with no head, with no brain,” Amirav-Drory said in Menlo Park. “It’s just to show you where we can go to solve both technical and ethical issues.” 

The idea of brain removal is a surprisingly active area of research—suggesting that it’s no sideshow. Working with mice, for example, Nakauchi’s team at Stanford is currently testing several different genetic changes to see if they can consistently yield an animal with no brain or head, but whose other tissues are normal. “The importance of getting rid of the head is all ethical. It just means we can make all these bodies and organ structures without having to cross ethical lines or harm sentient living beings,” says Carsten Charlesworth, a researcher in Nakauchi’s lab. He says the group is working toward a “genetic software package” it can add to mouse embryos to create a “reproducible phenotype.”

It may seem surprising that a technique designed to call forth a living being from stem cells is, simultaneously, being paired with a tactic to diminish that being. To Douglas Kysar, a professor at Yale Law School, that’s part of a broader trend toward what he calls “life that is not life,” which includes innovations like lab-grown meat. In the areas of animal-rights law Kysar studies, commercial biotech projects have begun to explore what he terms “disenhancement” and “disengineering.” That is the use of genetics to reduce the capacity of animals to suffer, feel pain, or have conscious experience at all, typically as part of a program to increase the efficiency and ethics of food production. 

For humans, of course, the worry around genetic engineering is usually that it will be used for enhancement—creating a baby with advantages. It’s much harder to think of examples where genetic disenhancements get pointed at the human embryo. John Evans, who co-directs the Institute of Applied Ethics at the University of California, San Diego, told me he can think of one, in literature. Hanna’s plans remind him of Bokanovsky’s Process, the fictional method of producing clones of different intelligence levels in the 1932 novel Brave New World.

That may not be a complete turnoff to investors. Lately, the plots of science fiction dystopias—Jurassic Park, Gattaca—seem to be getting repurposed as hot biotech properties. There’s Colossal, the company that wants to re-create extinct animals. Aguilera Castrejón says he’s already had a high-dollar offer to pack up his academic lab and join a startup company that wants to build an artificial womb. And when Hanna was at the Global Observatory meeting near Boston earlier this year, he was being shadowed by Matt Krisiloff, CEO of the Silicon Valley company Conception, which was set up to try to manufacture human eggs in the lab and has funding from OpenAI leader Sam Altman.

Eggs are another cell type that has proved difficult to generate from a stem cell in the lab. But a growing fetus will form millions of immature egg cells. So just imagine: Someone too old to conceive gives some blood, which is converted into stem cells and then into a clone, from which the fetal gonad is dissected. Maybe the reproductive cells found there could be matured further in the lab. Or maybe those young and perfectly matched ovaries—her ovaries, really, not anyone else’s—could be returned to her body to finish developing. A fertility expert, David Albertini, told me it might just be possible.

During the ethics meeting he traveled to the US in May to attend, Hanna served on a panel whose topic was “sources of moral authority.” Hanna’s authority comes from the possible benefits the science of synthetic embryos may bring. But he also wields his moral credibility. Early in his remarks, Hanna had framed the whole matter in a way that made worrying about what’s in the petri dish start to sound silly. Wearing a keffiyeh around his shoulders, he said: “I’d like to start and, you know, just remind everyone, unfortunately, that there is a genocide ongoing right now in Gaza, where children are being starved intentionally. And it is relevant, because we’re sitting here and we’re discussing human dignity, we’re discussing the status of an embryo, and we’re discussing the status of a fetus. But what about the life of the children, and adults, and innocent adults? How does it relate?”

This retina implant lets people with vision loss do a crossword puzzle

20 October 2025 at 08:00

Science Corporation—a competitor to Neuralink founded by the former president of Elon Musk’s brain-interface venture—has leapfrogged its rival after acquiring, at a fire-sale price, a vision implant that’s in advanced testing.

The implant produces a form of “artificial vision” that lets some patients read text and do crosswords, according to a report published in the New England Journal of Medicine today.

The implant is a microelectronic chip placed under the retina. Using signals from a camera mounted on a pair of glasses, the chip emits bursts of electricity in order to bypass photoreceptor cells damaged by macular degeneration, the leading cause of vision loss in elderly people.

“The magnitude of the effect is what’s notable,” says José-Alain Sahel, a University of Pittsburgh vision scientist who led testing of the system, which is called PRIMA. “There’s a patient in the UK and she is reading the pages of a regular book, which is unprecedented.”  

Until last year, the device was being developed by Pixium Vision, a French startup cofounded by Sahel, which faced bankruptcy after it couldn’t raise more cash.  

That’s when Science Corporation swept in to purchase the company’s assets for about €4 million ($4.7 million), according to court filings.

“Science was able to buy it for very cheap just when the study was coming out, so it was good timing for them,” says Sahel. “They could quickly access very advanced technology that’s closer to the market, which is good for a company to have.”

Science was founded in 2021 by Max Hodak, the first president of Neuralink, after his sudden departure from that company. Since its founding, Science has raised around $290 million, according to the venture capital database Pitchbook, and used the money to launch broad-ranging exploratory research on brain interfaces and new types of vision treatments.

“The ambition here is to build a big, standalone medical technology company that would fit in with an Apple, Samsung, or an Alphabet,” Hodak said in an interview at Science’s labs in Alameda, California, in September. “The goal is to change the world in important ways … but we need to make money in order to invest in these programs.”

By acquiring the PRIMA implant program, Science effectively vaulted past years of development and testing. The company has requested approval to sell the eye chip in Europe and is in discussions with regulators in the US.

Unlike Neuralink’s implant, which records brain signals so paralyzed recipients can use their thoughts to move a computer mouse, the retina chip sends information into the brain to produce vision. Because the retina is an outgrowth of the brain, the chip qualifies as a type of brain-computer interface.

Artificial vision systems have been studied for years, and one, called the Argus II, even reached the market and was installed in the eyes of about 400 people. But that product was later withdrawn after it proved to be a money-loser, according to Cortigent, the company that now owns that technology.

Thirty-eight patients in Europe received a PRIMA implant in one eye. On average, the study found, they were able to read five additional lines on a vision chart—the kind with rows of letters, each smaller than the last. Some of that improvement was due to what Sahel calls “various tricks” like using a zoom function, which allows patients to zero in on text they want to read.

The type of vision loss being treated with the new implant is called geographic atrophy, in which patients have peripheral vision but can’t make out objects directly in front of them, like words or faces. According to Prevent Blindness, an advocacy organization, this type of central vision loss affects around one in 10 people over 80.  

Daniel Palanker, a laser expert and now a professor at Stanford University, began designing the implant 20 years ago; he says his breakthrough was realizing that light beams could supply both energy and information to a chip placed under the retina. Other implants, like the Argus II, use a wire, which adds complexity.

“The chip has no brains at all. It just turns light into electrical current that flows into the tissue,” says Palanker. “Patients describe the color they see as yellowish blue or sun color.”

The system works using a wearable camera that records a scene and then blasts bright infrared light into the eye, using a wavelength humans can’t see. That light hits the chip, which is covered by “what are basically tiny solar panels,” says Palanker. “We just try to replace the photoreceptors with a photo-array.”

A diagram of how a visual scene could be represented by a retinal implant.
COURTESY SCIENCE CORPORATION

The current system produces about 400 spots of vision, which lets users make out the outlines of words and objects. Palanker says a next-generation device will have five times as many “pixels” and should let people see more: “What we discovered in the trial is that even though you stimulate individual pixels, patients perceive it as continuous. The patient says ‘I see a line,’ ‘I see a letter.’”

Palanker says it will be important to keep improving the system because “the market size depends on the quality of the vision produced.”

When Pixium teetered on insolvency, Palanker says, he helped search for a buyer, meeting with Hodak. “It was a fire sale, not a celebration,” he says. “But for me it’s a very lucky outcome, because it means the product is going forward. And the purchase price doesn’t really matter, because there’s a big investment needed to bring it to market. It’s going to cost money.”  

Photo of the PRIMA Glasses and Pocket Processor.
The PRIMA artificial vision system has a battery pack/controller and an eye-mounted camera.
COURTESY SCIENCE CORPORATION

During a visit to Science’s headquarters, Hodak described the company’s effort to redesign the system into something sleeker and more user-friendly. In the original design, in addition to the wearable camera, the patient has to carry around a bulky controller containing a battery and laser, as well as buttons to zoom in and out. 

But Science has already prototyped a version in which those electronics are squeezed into what look like an extra-large pair of sunglasses.

“The implant is great, but we’ll have new glasses on patients fairly shortly,” Hodak says. “This will substantially improve their ability to have it with them all day.” 

Other companies also want to treat blindness with brain-computer interfaces, but some think it might be better to send signals directly into the brain. This year, Neuralink has been touting plans for “Blindsight,” a project to send electrical signals directly into the brain’s visual cortex, bypassing the retina entirely. It has yet to test the approach in a person.

AI could predict who will have a heart attack

20 October 2025 at 06:00

For all the modern marvels of cardiology, we struggle to predict who will have a heart attack. Many people never get screened at all. Now, startups like Bunkerhill Health, Nanox.AI, and HeartLung Technologies are applying AI algorithms to screen millions of CT scans for early signs of heart disease. This technology could be a breakthrough for public health, applying an old tool to uncover patients whose high risk for a heart attack is hiding in plain sight. But it remains unproven at scale while raising thorny questions about implementation and even how we define disease. 

Last year, an estimated 20 million Americans had chest CT scans done, after an event like a car accident or to screen for lung cancer. Frequently, these scans show evidence of coronary artery calcium (CAC), a marker of heart attack risk that is buried in or missing from a radiology report focused on ruling out bony injuries, life-threatening internal trauma, or cancer.

Dedicated testing for CAC remains an underutilized method of predicting heart attack risk. Over decades, plaque in heart arteries moves through its own life cycle, hardening from lipid-rich residue into calcium. Heart attacks themselves typically occur when younger, lipid-rich plaque unpredictably ruptures, kicking off inflammation and a clotting cascade that ultimately blocks the heart’s blood supply. Calcified plaque is generally stable, but finding CAC suggests that younger, more rupture-prone plaque is likely present too.

Coronary artery calcium can often be spotted on chest CTs, and its concentration can be subjectively described. Normally, quantifying a person’s CAC score involves obtaining a heart-specific CT scan. Algorithms that calculate CAC scores from routine chest CTs, however, could massively expand access to this metric. In practice, these algorithms could then be deployed to alert patients and their doctors about abnormally high scores, encouraging them to seek further care. Today, the footprint of the startups offering AI-derived CAC scores is not large, but it is growing quickly. As their use grows, these algorithms may identify high-risk patients who are traditionally missed or who are on the margins of care. 
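The number these algorithms estimate has a simple core: the standard Agatston score sums, over every calcified lesion, the lesion’s area multiplied by a weight keyed to its peak CT attenuation. A minimal sketch of that arithmetic follows, with invented lesion values; real products (including the startups’ proprietary pipelines) must first segment the calcium out of the scan itself.

```python
# Minimal Agatston-style CAC scoring. The lesion inputs are illustrative;
# actual pipelines segment and measure calcium from the CT images.

def density_weight(peak_hu: int) -> int:
    """Standard Agatston weight from a lesion's peak attenuation in Hounsfield units."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below 130 HU, voxels are not counted as calcium

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu) pairs, one per calcified plaque."""
    return sum(area * density_weight(hu) for area, hu in lesions)

# Two hypothetical lesions: 6 mm^2 peaking at 250 HU, 3 mm^2 at 450 HU.
print(agatston_score([(6.0, 250), (3.0, 450)]))  # 6*2 + 3*4 = 24.0
```

A score of zero means no detectable calcium, while scores in the hundreds generally mark the higher-risk patients such screening hopes to flag.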

Historically, CAC scans were believed to have marginal benefit and were marketed to the worried well. Even today, most insurers won’t cover them. Attitudes, though, may be shifting. More expert groups are endorsing CAC scores as a way to refine cardiovascular risk estimates and persuade skeptical patients to start taking statins. 

The promise of AI-derived CAC scores is part of a broader trend toward mining troves of medical data to spot otherwise undetected disease. But while it seems promising, the practice raises plenty of questions. For example, CAC scores haven’t proved useful as a blunt instrument for universal screening. A 2022 Danish study evaluating a population-based program showed no benefit in mortality rates for patients who had undergone CAC screening tests. If AI delivered this information automatically, would the calculus really shift?

And with widespread adoption, abnormal CAC scores will become common. Who follows up on these findings? “Many health systems aren’t yet set up to act on incidental calcium findings at scale,” says Nishith Khandwala, the cofounder of Bunkerhill Health. Without a standard procedure for doing so, he says, “you risk creating more work than value.” 

There’s also the question of whether these AI-generated scores would actually improve patient care. For a symptomatic patient, a CAC score of zero may offer false reassurance. For the asymptomatic patient with a high CAC score, the next steps remain uncertain. Beyond statins, it isn’t clear if these patients would benefit from starting costly cholesterol-lowering drugs such as Repatha or other PCSK9-inhibitors. High scores may encourage some to pursue unnecessary and costly downstream procedures that could even end up doing harm. Currently, AI-derived CAC scoring is not reimbursed as a separate service by Medicare or most insurers. The business case for this technology today, effectively, lies in these potentially perverse incentives.

At a fundamental level, this approach could actually change how we define disease. Adam Rodman, a hospitalist and AI expert at Beth Israel Deaconess Medical Center in Boston, has observed that AI-derived CAC scores share similarities with the “incidentaloma,” a term coined in the 1980s to describe unexpected findings on CT scans. In both cases, the normal pattern of diagnosis—in which doctors and patients deliberately embark on testing to figure out what’s causing a specific problem—was fundamentally disrupted. But, as Rodman notes, incidentalomas were still found by humans reviewing the scans.

Now, he says, we are entering an era of “machine-based nosology,” where algorithms define diseases on their own terms. As machines make more diagnoses, they may catch things we miss. But Rodman and I began to wonder if a two-tiered diagnostic future may emerge, where “haves” pay for brand-name algorithms while “have-nots” settle for lesser alternatives. 

For patients who have no risk factors or are detached from regular medical care, an AI-derived CAC score could potentially catch problems earlier and rewrite the script. But how these scores reach people, what is done about them, and whether they can ultimately improve patient outcomes at scale remain open questions. For now—holding the pen as they toggle between patients and algorithmic outputs—clinicians still matter. 

Vishal Khetpal is a fellow in cardiovascular disease. The views expressed in this article do not represent those of his employers. 

This startup thinks slime mold can help us design better cities

17 October 2025 at 06:00

It is a yellow blob with no brain, yet some researchers believe a curious organism known as slime mold could help us build more resilient cities.

Humans have been building cities for 6,000 years, but slime mold has been around for 600 million. The team behind a new startup called Mireta wants to translate the organism’s biological superpowers into algorithms that might help improve transit times, alleviate congestion, and minimize climate-related disruptions in cities worldwide.

Mireta’s algorithm mimics how slime mold efficiently distributes resources through branching networks. The startup’s founders think this approach could help connect subway stations, design bike lanes, or optimize factory assembly lines. They claim its software can factor in flood zones, traffic patterns, budget constraints, and more.

“It’s very rational to think that some [natural] systems or organisms have actually come up with clever solutions to problems we share,” says Raphael Kay, Mireta’s cofounder and head of design, who has a background in architecture and mechanical engineering and is currently a PhD candidate in materials science and mechanical engineering at Harvard University.

As urbanization continues—about 60% of the global population will live in metropolises by 2030—cities must provide critical services while facing population growth, aging infrastructure, and extreme weather caused by climate change. Kay, who has also studied how microscopic sea creatures could help researchers design zero-energy buildings, believes nature’s time-tested solutions may offer a path toward more adaptive urban systems.

Officially known as Physarum polycephalum, slime mold is neither plant, animal, nor fungus but a single-celled organism older than dinosaurs. When searching for food, it extends tentacle-like projections in multiple directions simultaneously. It then doubles down on the most efficient paths that lead to food while abandoning less productive routes. This process creates optimized networks that balance efficiency with resilience—a sought-after quality in transportation and infrastructure systems.

The organism’s ability to find the shortest path between multiple points while maintaining backup connections has made it a favorite among researchers studying network design. Most famously, in 2010 researchers at Hokkaido University reported results from an experiment in which they dumped a blob of slime mold onto a detailed map of Tokyo’s railway system, marking major stations with oat flakes. At first the brainless organism engulfed the entire map. Days later, it had pruned itself back, leaving behind only the most efficient pathways. The result closely mirrored Tokyo’s actual rail network.

Since then, researchers worldwide have used slime mold to solve mazes and even map the dark matter holding the universe together. Experts across Mexico, Great Britain, and the Iberian peninsula have tasked the organism with redesigning their roadways—though few of these experiments have translated into real-world upgrades.

Historically, researchers working with the organism would print a physical map and add slime mold onto it. But Kay believes that Mireta’s approach, which replicates slime mold’s pathway-building without requiring actual organisms, could help solve more complex problems. Slime mold is visible to the naked eye, so Kay’s team studied how the blobs behave in the lab, focusing on the key behaviors that make these organisms so good at creating efficient networks. Then they translated these behaviors into a set of rules that became an algorithm.
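The reinforce-and-decay rule at the heart of such algorithms is compact enough to sketch. The toy below follows the published Physarum models rather than Mireta’s undisclosed software: flow splits across candidate paths in proportion to conductivity per unit length, carrying flow reinforces a tube, and every tube decays each step, so the shorter route ends up dominating.

```python
# Toy Physarum-style network solver on two competing paths between the
# same endpoints. Constants and the two-path graph are illustrative only.

def physarum_two_paths(lengths, steps=200, flow=1.0, decay=0.9):
    """lengths: path name -> path length. Returns final tube conductivities."""
    d = {name: 1.0 for name in lengths}  # all tubes start equally conductive
    for _ in range(steps):
        # A path's attractiveness is its conductivity per unit length.
        att = {n: d[n] / lengths[n] for n in d}
        total = sum(att.values())
        for n in d:
            # Reinforce by the share of flow this path carries, then decay.
            d[n] = (d[n] + flow * att[n] / total) * decay
    return d

final = physarum_two_paths({"short": 1.0, "long": 2.0})
# The short path's conductivity grows toward a stable value;
# the longer detour's conductivity withers toward zero.
```

The positive feedback between flow and conductivity is what prunes the network, mirroring how the organism abandons unproductive tendrils.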

Some experts aren’t convinced. According to Geoff Boeing, an associate professor at the University of Southern California’s Department of Urban Planning and Spatial Analysis, such algorithms don’t address “the messy realities of entering a room with a group of stakeholders and co-visioning a future for their community.” Modern urban planning problems, he says, aren’t solely technical issues: “It’s not that we don’t know how to make infrastructure networks efficient, resilient, connected—it’s that it’s politically challenging to do so.”

Michael Batty, a professor emeritus at University College London’s Centre for Advanced Spatial Analysis, finds the concept more promising. “There is certainly potential for exploration,” he says, noting that humans have long drawn parallels between biological systems and cities. For decades now, designers have looked to nature for ideas—think ventilation systems inspired by termite mounds or bullet trains modeled after the kingfisher’s beak.

Like Boeing, Batty worries that such algorithms could reinforce top-down planning when most cities grow from the bottom up. But for Kay, the algorithm’s beauty lies in how it mimics bottom-up biological growth—like the way slime mold starts from multiple points and connects organically rather than following predetermined paths. 

Since launching earlier this year, Mireta, which is based in Cambridge, Massachusetts, has worked on about five projects. And slime mold is just the beginning. The team is also looking at algorithms inspired by ants, which leave chemical trails that strengthen with use and have their own decentralized solutions for network optimization. “Biology has solved just about every network problem you can imagine,” says Kay.

Elissaveta M. Brandon is an independent journalist interested in how design, culture, and technology shape the way we live.

Take our quiz: How much do you know about antimicrobial resistance?

16 October 2025 at 11:31

This week we had some terrifying news from the World Health Organization: Antibiotics are failing us. A growing number of bacterial infections aren’t responding to these medicines—including common ones that affect the blood, gut, and urinary tract. Get infected with one of these bugs, and there’s a fair chance antibiotics won’t help. 

The scary truth is that a growing number of harmful bacteria and fungi are becoming resistant to drugs. Just a few weeks ago, the US Centers for Disease Control and Prevention published a report finding a sharp rise in infections caused by a dangerous type of bacteria that are resistant to some of the strongest antibiotics. Now, the WHO report shows that the problem is surging around the world.

In this week’s Checkup, we’re trying something a bit different—a little quiz. You’ve probably heard about antimicrobial resistance (AMR) before, but how much do you know about microbes, antibiotics, and the scale of the problem? Here’s our attempt to put the “fun” in “fundamental threat to modern medicine.” Test your knowledge below!

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The race to make the perfect baby is creating an ethical mess

16 October 2025 at 06:00

Consider, if you will, the translucent blob in the eye of a microscope: a human blastocyst, the biological specimen that emerges just five days or so after a fateful encounter between egg and sperm. This bundle of cells, about the size of a grain of sand pulled from a powdery white Caribbean beach, contains the coiled potential of a future life: 46 chromosomes, thousands of genes, and roughly six billion base pairs of DNA—an instruction manual to assemble a one-of-a-kind human.

Now imagine a laser pulse snipping a hole in the blastocyst’s outermost shell so a handful of cells can be suctioned up by a microscopic pipette. This is the moment, thanks to advances in genetic sequencing technology, when it becomes possible to read virtually that entire instruction manual.

An emerging field of science seeks to use the analysis pulled from that procedure to predict what kind of a person that embryo might become. Some parents turn to these tests to avoid passing on devastating genetic disorders that run in their families. A much smaller group, driven by dreams of Ivy League diplomas or attractive, well-behaved offspring, are willing to pay tens of thousands of dollars to optimize for intelligence, appearance, and personality. Some of the most eager early boosters of this technology are members of the Silicon Valley elite, including tech billionaires like Elon Musk, Peter Thiel, and Coinbase CEO Brian Armstrong. 


But customers of the companies emerging to provide it to the public may not be getting what they’re paying for. Genetics experts have been highlighting the potential deficiencies of this testing for years. A 2021 paper by members of the European Society of Human Genetics said, “No clinical research has been performed to assess its diagnostic effectiveness in embryos. Patients need to be properly informed on the limitations of this use.” And a paper published this May in the Journal of Clinical Medicine echoed this concern and expressed particular reservations about screening for psychiatric disorders and non-disease-related traits: “Unfortunately, no clinical research has to date been published comprehensively evaluating the effectiveness of this strategy [of predictive testing]. Patient awareness regarding the limitations of this procedure is paramount.”

Moreover, the assumptions underlying some of this work—that how a person turns out is the product not of privilege or circumstance but of innate biology—have made these companies a political lightning rod. 


As this niche technology begins to make its way toward the mainstream, scientists and ethicists are racing to confront the implications—for our social contract, for future generations, and for our very understanding of what it means to be human.


Preimplantation genetic testing (PGT), while still relatively rare, is not new. Since the 1990s, parents undergoing in vitro fertilization have been able to access a number of genetic tests before choosing which embryo to use. A type known as PGT-M can detect single-gene disorders like cystic fibrosis, sickle cell anemia, and Huntington’s disease. PGT-A can ascertain the sex of an embryo and identify chromosomal abnormalities that can lead to conditions like Down syndrome or reduce the chances that an embryo will implant successfully in the uterus. PGT-SR helps parents avoid embryos with issues such as duplicated or missing segments of the chromosome.

Those tests all identify clear-cut genetic problems that are relatively easy to detect, but most of the genetic instruction manual included in an embryo is written in far more nuanced code. In recent years, a fledgling market has sprung up around a new, more advanced version of the testing process called PGT-P: preimplantation genetic testing for polygenic disorders (and, some claim, traits)—that is, outcomes determined by the elaborate interaction of hundreds or thousands of genetic variants.

In 2020, the first baby selected using PGT-P was born. While the exact figure is unknown, estimates put the number of children who have now been born with the aid of this technology in the hundreds. As the technology is commercialized, that number is likely to grow.

Embryo selection is less like a build-a-baby workshop and more akin to a store where parents can shop for their future children from several available models—complete with stat cards indicating their predispositions.

A handful of startups, armed with tens of millions of dollars of Silicon Valley cash, have developed proprietary algorithms to compute these stats—analyzing vast numbers of genetic variants and producing a “polygenic risk score” that shows the probability of an embryo developing a variety of complex traits.  
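At its simplest, a polygenic risk score is a weighted sum: each variant’s allele count multiplied by an effect size estimated in a genome-wide association study. The sketch below uses invented variant names and weights purely for illustration; commercial pipelines are proprietary and layer on ancestry adjustments, quality filters, and many thousands more variants.

```python
# Deliberately simplified polygenic risk score: a weighted sum of allele
# counts. Variant IDs and effect sizes here are made up for illustration.

def polygenic_risk_score(genotypes, weights):
    """genotypes: variant -> allele count (0, 1, or 2); weights: variant -> effect size."""
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

# Three hypothetical variants with invented effect sizes.
weights = {"rsA": 0.12, "rsB": -0.05, "rsC": 0.30}
embryo = {"rsA": 2, "rsB": 1, "rsC": 0}
print(round(polygenic_risk_score(embryo, weights), 2))  # 0.12*2 - 0.05*1 = 0.19
```

The score is only meaningful relative to a reference population’s distribution, which is one reason the ancestry skew of the underlying biobanks matters so much.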

For the last five years or so, two companies—Genomic Prediction and Orchid—have dominated this small landscape, focusing their efforts on disease prevention. But more recently, two splashy new competitors have emerged: Nucleus Genomics and Herasight, which have rejected the more cautious approach of their predecessors and waded into the controversial territory of genetic testing for intelligence. (Nucleus also offers tests for a wide variety of other behavioral and appearance-related traits.) 

The practical limitations of polygenic risk scores are substantial. For starters, there is still a lot we don’t understand about the complex gene interactions driving polygenic traits and disorders. And the biobank data sets they are based on tend to overwhelmingly represent individuals with Western European ancestry, making it more difficult to generate reliable scores for patients from other backgrounds. These scores also lack the full context of environment, lifestyle, and the myriad other factors that can influence a person’s characteristics. And while polygenic risk scores can be effective at detecting large, population-level trends, their predictive abilities drop significantly when the sample size is as tiny as a single batch of embryos that share much of the same DNA.

The medical community—including organizations like the American Society of Human Genetics, the American College of Medical Genetics and Genomics, and the American Society for Reproductive Medicine—is generally wary of using polygenic risk scores for embryo selection. “The practice has moved too fast with too little evidence,” the American College of Medical Genetics and Genomics wrote in an official statement in 2024.

But beyond questions of whether evidence supports the technology’s effectiveness, critics of the companies selling it accuse them of reviving a disturbing ideology: eugenics, or the belief that selective breeding can be used to improve humanity. Indeed, some of the voices who have been most confident that these methods can successfully predict nondisease traits have made startling claims about natural genetic hierarchies and innate racial differences.

What everyone can agree on, though, is that this new wave of technology is helping to inflame a centuries-old debate over nature versus nurture.


The term “eugenics” was coined in 1883 by a British anthropologist and statistician named Sir Francis Galton, inspired in part by the work of his cousin Charles Darwin. He derived it from a Greek word meaning “good in stock, hereditarily endowed with noble qualities.”

Some of modern history’s darkest chapters have been built on Galton’s legacy, from the Holocaust to the forced sterilization laws that affected certain groups in the United States well into the 20th century. Modern science has demonstrated the many logical and empirical problems with Galton’s methodology. (For starters, he counted vague concepts like “eminence”—as well as infections like syphilis and tuberculosis—as heritable phenotypes, meaning characteristics that result from the interaction of genes and environment.)

Yet even today, Galton’s influence lives on in the field of behavioral genetics, which investigates the genetic roots of psychological traits. Starting in the 1960s, researchers in the US began to revisit one of Galton’s favorite methods: twin studies. Many of these studies, which analyzed pairs of identical and fraternal twins to try to determine which traits were heritable and which resulted from socialization, were funded by the US government. The most well-known of these, the Minnesota Twin Study, also accepted grants from the Pioneer Fund, a now defunct nonprofit that had promoted eugenics and “race betterment” since its founding in 1937. 

The nature-versus-nurture debate hit a major inflection point in 2003, when the Human Genome Project was declared complete. After 13 years and at a cost of nearly $3 billion, an international consortium of thousands of researchers had sequenced 92% of the human genome for the first time.

Today, the cost of sequencing a genome can be as low as $600, and one company says it will soon drop even further. This dramatic reduction has made it possible to build massive DNA databases like the UK Biobank and the National Institutes of Health’s All of Us, each containing genetic data from more than half a million volunteers. Resources like these have enabled researchers to conduct genome-wide association studies, or GWASs, which identify correlations between genetic variants and human traits by analyzing single-nucleotide polymorphisms (SNPs)—the most common form of genetic variation between individuals. The findings from these studies serve as a reference point for developing polygenic risk scores.

Most GWASs have focused on disease prevention and personalized medicine. But in 2011, a group of medical researchers, social scientists, and economists launched the Social Science Genetic Association Consortium (SSGAC) to investigate the genetic basis of complex social and behavioral outcomes. One of the phenotypes they focused on was the level of education people reached.

“It was a bit of a phenotype of convenience,” explains Patrick Turley, an economist and member of the steering committee at SSGAC, given that educational attainment is routinely recorded in surveys when genetic data is collected. Still, it was “clear that genes play some role,” he says. “And trying to understand what that role is, I think, is really interesting.” He adds that social scientists can also use genetic data to try to better “understand the role that is due to nongenetic pathways.”

The work immediately stirred feelings of discomfort—not least among the consortium’s own members, who feared that they might unintentionally help reinforce racism, inequality, and genetic determinism. 

It’s also created quite a bit of discomfort in some political circles, says Kathryn Paige Harden, a psychologist and behavioral geneticist at the University of Texas in Austin, who says she has spent much of her career making the unpopular argument to fellow liberals that genes are relevant predictors of social outcomes. 

Harden thinks a strength of those on the left is their ability to recognize “that bodies are different from each other in a way that matters.” Many are generally willing to allow that any number of traits, from addiction to obesity, are genetically influenced. Yet, she says, heritable cognitive ability seems to be “beyond the pale for us to integrate as a source of difference that impacts our life.” 

Harden believes that genes matter for our understanding of traits like intelligence, and that this should help shape progressive policymaking. She gives the example of an education department seeking policy interventions to improve math scores in a given school district. If a polygenic risk score is “as strongly correlated with their school grades” as family income is, she says of the students in such a district, then “does deliberately not collecting that [genetic] information, or not knowing about it, make your research harder [and] your inferences worse?”

To Harden, persisting with this strategy of avoidance for fear of encouraging eugenicists is a mistake. If “insisting that IQ is a myth and genes have nothing to do with it was going to be successful at neutralizing eugenics,” she says, “it would’ve won by now.”

Part of the reason these ideas are so taboo in many circles is that today’s debate around genetic determinism is still deeply infused with Galton’s ideas—and has become a particular fixation among the online right. 

After Elon Musk took over Twitter (now X) in 2022 and loosened its restrictions on hate speech, a flood of accounts started sharing racist posts, some speculating about the genetic origins of inequality while arguing against immigration and racial integration. Musk himself frequently reposts and engages with accounts like Crémieux Recueil, the pen name of independent researcher Jordan Lasker, who has written about the “Black-White IQ gap,” and i/o, an anonymous account that once praised Musk for “acknowledging data on race and crime,” saying it “has done more to raise awareness of the disproportionalities observed in these data than anything I can remember.” (In response to allegations that his research encourages eugenics, Lasker wrote to MIT Technology Review, “The popular understanding of eugenics is about coercion and cutting people cast as ‘undesirable’ out of the breeding pool. This is nothing like that, so it doesn’t qualify as eugenics by that popular understanding of the term.” After this story went to print, i/o wrote in an email, “Just because differences in intelligence at the individual level are largely heritable, it does not mean that group differences in measured intelligence … are due to genetic differences between groups,” but that the latter is not “scientifically settled” and “an extremely important (and necessary) research area that should be funded rather than made taboo.” He added, “I’ve never made any argument against racial integration or intermarriage or whatever.” X and Musk did not respond to requests for comment.)

Harden, though, warns against discounting the work of an entire field because of a few noisy neoreactionaries. “I think there can be this idea that technology is giving rise to the terrible racism,” she says. The truth, she believes, is that “the racism has preexisted any of this technology.”


In 2019, a company called Genomic Prediction began offering the first commercially available preimplantation polygenic test. With its LifeView Embryo Health Score, prospective parents can assess their embryos’ predisposition to genetically complex health problems like cancer, diabetes, and heart disease. Pricing for the service starts at $3,500. Genomic Prediction uses a technique called an SNP array, which targets specific sites in the genome where common variants occur. The results are then cross-checked against GWASs that show correlations between genetic variants and certain diseases.

Four years later, a company named Orchid began offering a competing test. Orchid’s Whole Genome Embryo Report distinguished itself by claiming to sequence more than 99% of an embryo’s genome, allowing it to detect novel mutations and, the company says, diagnose rare diseases more accurately. For $2,500 per embryo, parents can access polygenic risk scores for 12 disorders, including schizophrenia, breast cancer, and hypothyroidism. 

Orchid was founded by Noor Siddiqui. Before getting undergraduate and graduate degrees from Stanford, she was awarded the Thiel fellowship—a $200,000 grant given to young entrepreneurs willing to work on their ideas instead of going to college—as a teenager in 2012. This set her up to attract attention from members of the tech elite as both customers and financial backers. Her company has raised $16.5 million to date from investors like Ethereum founder Vitalik Buterin, former Coinbase CTO Balaji Srinivasan, and Brian Armstrong, the Coinbase CEO.

In August Siddiqui made the controversial suggestion that parents who choose not to use genetic testing might be considered irresponsible. “Just be honest: you’re okay with your kid potentially suffering for life so you can feel morally superior …” she wrote on X.

Americans have varied opinions on the emerging technology. In 2024, a group of bioethicists surveyed 1,627 US adults to determine attitudes toward a variety of polygenic testing criteria. A large majority approved of testing for physical health conditions like cancer, heart disease, and diabetes. Screening for mental health disorders, like depression, OCD, and ADHD, drew a more mixed—but still positive—response. Appearance-related traits, like skin color, baldness, and height, received less approval as something to test for.

Intelligence was among the most contentious traits—unsurprising given the way it has been weaponized throughout history and the lack of cultural consensus on how it should even be defined. (In many countries, intelligence testing for embryos is heavily regulated; in the UK, the practice is banned outright.) In the 2024 survey, 36.9% of respondents approved of preimplantation genetic testing for intelligence, 40.5% disapproved, and 22.6% said they were uncertain.

Despite the disagreement, intelligence has been among the traits most talked about as targets for testing. From early on, Genomic Prediction began receiving inquiries “from all over the world” about testing for intelligence, according to Diego Marin, the company’s head of global business development and scientific affairs.

At one time, the company offered a predictor for what it called “intellectual disability.” After some backlash questioning both the predictive capacity and the ethics of these scores, the company discontinued the feature. “Our mission and vision of this company is not to improve [a baby], but to reduce risk for disease,” Marin told me. “When it comes to traits about IQ or skin color or height or something that’s cosmetic and doesn’t really have a connotation of a disease, then we just don’t invest in it.”

Orchid, on the other hand, does test for genetic markers associated with intellectual disability and developmental delay. But that may not be all. According to one employee of the company, who spoke on the condition of anonymity, intelligence testing is also offered to “high-roller” clients. According to this employee, another source close to the company, and reporting in the Washington Post, Musk used Orchid’s services in the conception of at least one of the children he shares with the tech executive Shivon Zilis. (Orchid, Musk, and Zilis did not respond to requests for comment.)


I met Kian Sadeghi, the 25-year-old founder of New York–based Nucleus Genomics, on a sweltering July afternoon in his SoHo office. Slight and kinetic, Sadeghi spoke at a machine-gun pace, pausing only occasionally to ask if I was keeping up. 

Sadeghi had modified his first organism—a sample of brewer’s yeast—at the age of 16. As a high schooler in 2016, he was taking a course on CRISPR-Cas9 at a Brooklyn laboratory when he fell in love with the “beautiful depth” of genetics. Just a few years later, he dropped out of college to build “a better 23andMe.” 

His company targets what you might call the application layer of PGT-P, accepting data from IVF clinics—and even from the competitors mentioned in this story—and running its own computational analysis.

“Unlike a lot of the other testing companies, we’re software first, and we’re consumer first,” Sadeghi told me. “It’s not enough to give someone a polygenic score. What does that mean? How do you compare them? There’s so many really hard design problems.”

Like its competitors, Nucleus calculates its polygenic risk scores by comparing an individual’s genetic data with trait-associated variants identified in large GWASs, providing statistically informed predictions. 

Nucleus provides two displays of a patient’s results: a Z-score, plotted from –4 to 4, which explains the risk of a certain trait relative to a population with similar genetic ancestry (for example, if Embryo #3 has a 2.1 Z-score for breast cancer, its risk is higher than average), and an absolute risk score, which includes relevant clinical factors (Embryo #3 has a minuscule actual risk of breast cancer, given that it is male).
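The Z-score display described here is ordinary statistics: a raw polygenic score is standardized against the mean and standard deviation of a reference population, and the normal CDF maps that Z-score to a percentile. A minimal sketch of that standard calculation (the reference values and raw score are invented; this is not Nucleus's actual code):

```python
from statistics import NormalDist

# Standardize a raw polygenic score against a reference population,
# then convert the Z-score to a percentile via the normal CDF.
# The reference mean/SD and raw score below are invented for illustration.

ref_mean, ref_sd = 0.0, 1.3   # hypothetical population statistics
raw_score = 2.73              # hypothetical embryo's raw polygenic score

z = (raw_score - ref_mean) / ref_sd      # 2.1: above-average relative risk
percentile = NormalDist().cdf(z) * 100   # ~98th percentile of the reference

print(round(z, 2), round(percentile, 1))
```

The second display, absolute risk, layers clinical context (such as sex, in the breast cancer example) on top of this relative score, which is why a high Z-score can still correspond to a tiny real-world probability.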

The real difference between Nucleus and its competitors lies in the breadth of what it claims to offer clients. On its sleek website, prospective parents can sort through more than 2,000 possible diseases, as well as traits from eye color to IQ. Access to the Nucleus Embryo platform costs $8,999, while the company’s new IVF+ offering—which includes one IVF cycle with a partner clinic, embryo screening for up to 20 embryos, and concierge services throughout the process—starts at $24,999.

Its promises are remarkably bold. The company claims to be able to forecast a propensity for anxiety, ADHD, insomnia, and other mental issues. It says you can see which of your embryos are more likely to have alcohol dependence, which are more likely to be left-handed, and which might end up with severe acne or seasonal allergies. (Nevertheless, at the time of writing, the embryo-screening platform provided this disclaimer: “DNA is not destiny. Genetics can be a helpful tool for choosing an embryo, but it’s not a guarantee. Genetic research is still in it’s [sic] infancy, and there’s still a lot we don’t know about how DNA shapes who we are.”)

To people accustomed to sleep trackers, biohacking supplements, and glucose monitoring, taking advantage of Nucleus’s options might seem like a no-brainer. To anyone who welcomes a bit of serendipity in their life, this level of perceived control may be disconcerting to say the least.

Sadeghi likes to frame his arguments in terms of personal choice. “Maybe you want your baby to have blue eyes versus green eyes,” he told a small audience at Nucleus Embryo’s June launch event. “That is up to the liberty of the parents.”

On the official launch day, Sadeghi spent hours gleefully sparring with X users who accused him of practicing eugenics. He rejects the term, favoring instead “genetic optimization”—though it seems he wasn’t too upset about the free viral marketing. “This week we got five million impressions on Twitter,” he told a crowd at the launch event, to a smattering of applause. (In an email to MIT Technology Review, Sadeghi wrote, “The history of eugenics is one of coercion and discrimination by states and institutions; what Nucleus does is the opposite—genetic forecasting that empowers individuals to make informed decisions.”)

Nucleus has raised more than $36 million from investors like Srinivasan, Alexis Ohanian’s venture capital firm Seven Seven Six, and Thiel’s Founders Fund. (Like Siddiqui, Sadeghi was a recipient of a Thiel fellowship when he dropped out of college; a representative for Thiel did not respond to a request for comment for this story.) Sadeghi has even poached Genomic Prediction’s cofounder Nathan Treff, who is now Nucleus’s chief clinical officer.

Sadeghi’s real goal is to build a one-stop shop for every possible application of genetic sequencing technology, from genealogy to precision medicine to genetic engineering. He names a handful of companies providing these services, with a combined market cap in the billions. “Nucleus is collapsing all five of these companies into one,” he says. “We are not an IVF testing company. We are a genetic stack.”


This spring, I elbowed my way into a packed hotel bar in the Flatiron district, where over a hundred people had gathered to hear a talk called “How to create SUPERBABIES.” The event was part of New York’s Deep Tech Week, so I expected to meet a smattering of biotech professionals and investors. Instead, I was surprised to encounter a diverse and curious group of creatives, software engineers, students, and prospective parents—many of whom had come with no previous knowledge of the subject.

The speaker that evening was Jonathan Anomaly, a soft-spoken political philosopher whose didactic tone betrays his years as a university professor.

Some of Anomaly’s academic work has focused on developing theories of rational behavior. At Duke and the University of Pennsylvania, he led introductory courses on game theory, ethics, and collective action problems as well as bioethics, digging into thorny questions about abortion, vaccines, and euthanasia. But perhaps no topic has interested him so much as the emerging field of genetic enhancement. 

In 2018, in a bioethics journal, Anomaly published a paper with the intentionally provocative title “Defending Eugenics.” He sought to distinguish what he called “positive eugenics”—noncoercive methods aimed at increasing traits that “promote individual and social welfare”—from the so-called “negative eugenics” we know from our history books.

Anomaly likes to argue that embryo selection isn’t all that different from practices we already take for granted. Don’t believe two cousins should be allowed to have children? Perhaps you’re a eugenicist, he contends. Your friend who picked out a six-foot-two Harvard grad from a binder of potential sperm donors? Same logic.

His hiring at the University of Pennsylvania in 2019 caused outrage among some students, who accused him of “racial essentialism.” In 2020, Anomaly left academia, lamenting that “American universities had become an intellectual prison.”

A few years later, Anomaly joined a nascent PGT-P company named Herasight, which was promising to screen for IQ.

At the end of July, the company officially emerged from stealth mode. A representative told me that most of the money raised so far is from angel investors, including Srinivasan, who also invested in Orchid and Nucleus. According to the launch announcement on X, Herasight has screened “hundreds of embryos” for private customers and is beginning to offer its first publicly available consumer product, a polygenic assessment that claims to detect an embryo’s likelihood of developing 17 diseases.

Herasight’s marketing materials boast predictive abilities 122% better than Orchid’s and 193% better than Genomic Prediction’s for this set of diseases. (“Herasight is comparing their current predictor to models we published over five years ago,” Genomic Prediction responded in a statement. “Our team is confident our predictors are world-class and are not exceeded in quality by any other lab.”)

The company did not include comparisons with Nucleus, pointing to the “absence of published performance validations” by that company and claiming it represented a case where “marketing outpaces science.” (“Nucleus is known for world-class science and marketing, and we understand why that’s frustrating to our competitors,” a representative from the company responded in a comment.) 

Herasight also emphasized new advances in “within-family validation” (making sure that the scores are not merely picking up shared environmental factors by comparing their performance between unrelated people to their performance between siblings) and “cross-ancestry accuracy” (improving the accuracy of scores for people outside the European ancestry groups where most of the biobank data is concentrated). The representative explained that pricing varies by customer and the number of embryos tested, but it can reach $50,000.

Herasight tests for just one non-disease-related trait: intelligence. For a couple who produce 10 embryos, it claims it can detect an IQ spread of about 15 points, from the lowest-scoring embryo to the highest. The representative says the company plans to release a detailed white paper on its IQ predictor in the future.
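A claimed 15-point spread among 10 embryos is a statement about the range of 10 draws from a distribution, which is easy to sanity-check. For 10 independent normal draws, the expected range (max minus min) is roughly 3.08 standard deviations, so a 15-point spread implies the predictor varies among sibling embryos with an SD of roughly 5 IQ points. A Monte Carlo sketch under that assumption (the SD is an inference from the claim, not a figure the company publishes):

```python
import random
import statistics

# Monte Carlo check: expected spread (max - min) of predicted IQ among
# 10 sibling embryos, assuming predictions are normal with SD ~5 points.
# That SD is an assumption inferred from the claimed 15-point spread,
# not a published company figure.

random.seed(0)
SD = 5.0
N_EMBRYOS = 10
TRIALS = 20_000

spreads = []
for _ in range(TRIALS):
    scores = [random.gauss(0, SD) for _ in range(N_EMBRYOS)]
    spreads.append(max(scores) - min(scores))

print(round(statistics.mean(spreads), 1))  # close to 15.4 points
```

Under these assumptions the numbers are internally consistent, but note what the check does not establish: whether the predictor's scores actually track realized IQ, which is the question the promised white paper would need to answer.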

The day of Herasight’s launch, Musk responded to the company announcement: “Cool.” Meanwhile, a Danish researcher named Emil Kirkegaard, whose research has largely focused on IQ differences between racial groups, boosted the company to his nearly 45,000 followers on X (as well as in a Substack blog), writing, “Proper embryo selection just landed.” Kirkegaard has in fact supported Anomaly’s work for years; he’s posted about him on X and recommended his 2020 book Creating Future People, which he called a “biotech eugenics advocacy book,” adding: “Naturally, I agree with this stuff!”

When it comes to traits that Anomaly believes are genetically encoded, intelligence—which he claimed in his talk is about 75% heritable—is just the tip of the iceberg. He has also spoken about the heritability of empathy, impulse control, violence, passivity, religiosity, and political leanings.

Anomaly concedes there are limitations to the kinds of relative predictions that can be made from a small batch of embryos. But he believes we’re only at the dawn of what he likes to call the “reproductive revolution.” At his talk, he pointed to a technology currently in development at a handful of startups: in vitro gametogenesis. IVG aims to create sperm or egg cells in a laboratory using adult stem cells, genetically reprogrammed from cells found in a sample of skin or blood. In theory, this process could allow a couple to quickly produce a practically unlimited number of embryos to analyze for preferred traits. Anomaly predicted this technology could be ready to use on humans within eight years.

“I doubt the FDA will allow it immediately. That’s what places like Próspera are for,” he said, referring to the so-called “startup city” in Honduras, where scientists and entrepreneurs can conduct medical experiments free from the kinds of regulatory oversight they’d encounter in the US.

“You might have a moral intuition that this is wrong,” said Anomaly, “but when it’s discovered that elites are doing it privately … the dominoes are going to fall very, very quickly.” The coming “evolutionary arms race,” he claimed, will “change the moral landscape.”

He added that some of those elites are his own customers: “I could already name names, but I won’t do it.”

After Anomaly’s talk was over, I spoke with a young photographer who told me he was hoping to pursue a master’s degree in theology. He came to the event, he told me, to reckon with the ethical implications of playing God. “Technology is sending us toward an Old-to-New-Testament transition moment, where we have to decide what parts of religion still serve us,” he said soberly.


Criticisms of polygenic testing tend to fall into two camps: skepticism about the tests’ effectiveness and concerns about their ethics. “On one hand,” says Turley from the Social Science Genetic Association Consortium, “you have arguments saying ‘This isn’t going to work anyway, and the reason it’s bad is because we’re tricking parents, which would be a problem.’ And on the other hand, they say, ‘Oh, this is going to work so well that it’s going to lead to enormous inequalities in society.’ It’s just funny to see. Sometimes these arguments are being made by the same people.”

One of those people is Sasha Gusev, who runs a quantitative genetics lab at the Dana-Farber Cancer Institute. A vocal critic of PGT-P for embryo selection, he also often engages in online debates with the far-right accounts promoting race science on X.

Gusev is one of many professionals in his field who believe that because of numerous confounding socioeconomic factors—for example, childhood nutrition, geography, personal networks, and parenting styles—there isn’t much point in trying to trace outcomes like educational attainment back to genetics, particularly not as a way to prove that there’s a genetic basis for IQ.

He adds, “I think there’s a real risk in moving toward a society where you see genetics and ‘genetic endowments’ as the drivers of people’s behavior and as a ceiling on their outcomes and their capabilities.”

Gusev thinks there is real promise for this technology in clinical settings among specific adult populations. For adults identified as having high polygenic risk scores for cancer and cardiovascular disease, he argues, a combination of early screening and intervention could be lifesaving. But when it comes to the preimplantation testing currently on the market, he thinks there are significant limitations—and few regulatory measures or long-term validation methods to check the promises companies are making. He fears that giving these services too much attention could backfire.

“These reckless, overpromised, and oftentimes just straight-up manipulative embryo selection applications are a risk for the credibility and the utility of these clinical tools,” he says.

Many IVF patients have also had strong reactions to publicity around PGT-P. When the New York Times published an opinion piece about Orchid in the spring, angry parents took to Reddit to rant. One user posted, “For people who dont [sic] know why other types of testing are necessary or needed this just makes IVF people sound like we want to create ‘perfect’ babies, while we just want (our) healthy babies.”

Still, others defended the need for a conversation. “When could technologies like this change the mission from helping infertile people have healthy babies to eugenics?” one Redditor posted. “It’s a fine line to walk and an important discussion to have.”

Some PGT-P proponents, like Kirkegaard and Anomaly, have argued that policy decisions should more explicitly account for genetic differences. In a series of blog posts following the 2024 presidential election, under the header “Make science great again,” Kirkegaard called for ending affirmative action laws, legalizing race-based hiring discrimination, and removing restrictions on data sets like the NIH’s All of Us biobank that prevent researchers like him from using the data for race science. Anomaly has criticized social welfare policies for putting a finger on the scale to “punish the high-IQ people.”

Indeed, the notion of genetic determinism has gained some traction among loyalists to President Donald Trump. 

In October 2024, Trump himself made a campaign stop on the conservative radio program The Hugh Hewitt Show, where he gave a rambling answer about immigration and homicide statistics. “A murderer, I believe this, it’s in their genes. And we got a lot of bad genes in our country right now,” he told the host.

Gusev believes that while embryo selection won’t have much impact on individual outcomes, the intellectual framework endorsed by many PGT-P advocates could have dire social consequences.

“If you just think of the differences that we observe in society as being cultural, then you help people out. You give them better schooling, you give them better nutrition and education, and they’re able to excel,” he says. “If you think of these differences as being strongly innate, then you can fool yourself into thinking that there’s nothing that can be done and people just are what they are at birth.”

For the time being, there are no plans for longitudinal studies to track actual outcomes for the humans these companies have helped bring into the world. Harden, the behavioral geneticist from UT Austin, suspects that 25 years down the line, adults who were once embryos selected on the basis of polygenic risk scores are “going to end up with the same question that we all have.” They will look at their life and wonder, “What would’ve had to change for it to be different?”

Julia Black is a Brooklyn-based features writer and a reporter in residence at Omidyar Network. She has previously worked for Business Insider, Vox, The Information, and Esquire.

The quest to find out how our bodies react to extreme temperatures

15 October 2025 at 06:00

It’s the 25th of June and I’m shivering in my lab-issued underwear in Fort Worth, Texas. Libby Cowgill, an anthropologist in a furry parka, has wheeled me and my cot into a metal-walled room set to 40 °F. A loud fan pummels me from above and siphons the dregs of my body heat through the cot’s mesh from below. A large respirator fits snug over my nose and mouth. The device tracks carbon dioxide in my exhales—a proxy for how my metabolism speeds up or slows down throughout the experiment. Eventually Cowgill will remove my respirator to slip a wire-thin metal temperature probe several pointy inches into my nose.

Cowgill and a graduate student quietly observe me from the corner of their so-called “climate chamber.” Just a few hours earlier I’d sat beside them to observe as another volunteer, a 24-year-old personal trainer, endured the cold. Every few minutes, they measured his skin temperature with a thermal camera, his core temperature with a wireless pill, and his blood pressure and other metrics that hinted at how his body handles extreme cold. He lasted almost an hour without shivering; when my turn comes, I shiver aggressively on the cot for nearly an hour straight.

I’m visiting Texas to learn about this experiment on how different bodies respond to extreme climates. “What’s the record for fastest to shiver so far?” I jokingly ask Cowgill as she tapes biosensing devices to my chest and legs. After I exit the cold, she surprises me: “You, believe it or not, were not the worst person we’ve ever seen.”

Climate change forces us to reckon with the knotty science of how our bodies interact with the environment.

Cowgill is a 40-something anthropologist at the University of Missouri who powerlifts and teaches CrossFit in her spare time. She’s small and strong, with dark bangs and geometric tattoos. Since 2022, she’s spent the summers at the University of North Texas Health Science Center tending to these uncomfortable experiments. Her team hopes to revamp the science of thermoregulation. 

While we know in broad strokes how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. “We have the general picture. We don’t have a lot of the specifics for vulnerable groups,” says Kristie Ebi, an epidemiologist with the University of Washington who has studied heat and health for over 30 years. “How does thermoregulation work if you’ve got heart disease?” 

“Epidemiologists have particular tools that they’re applying for this question,” Ebi continues. “But we do need more answers from other disciplines.”

Climate change is subjecting vulnerable people to temperatures that push their limits. Europe is believed to have suffered about 47,000 heat-related deaths in 2023. Researchers estimate that climate change could add an extra 2.3 million European heat deaths this century. That’s heightened the stakes for solving the mystery of just what happens to bodies in extreme conditions. 

Extreme temperatures already threaten large stretches of the world. Populations across the Middle East, Asia, and sub-Saharan Africa regularly face highs beyond widely accepted levels of human heat tolerance. Swaths of the southern US, northern Europe, and Asia now also endure unprecedented lows: The 2021 Texas freeze killed at least 246 people, and a 2023 polar vortex sank temperatures in China’s northernmost city to a record –63.4 °F. 

This change is here, and more is coming. Climate scientists predict that limiting emissions can prevent lethal extremes from encroaching elsewhere. But if emissions stay on their current course, fierce heat and even cold will reach deeper into every continent. About 2.5 billion people in the world’s hottest places don’t have air-conditioning. When people do have it, AC can make outdoor temperatures even worse, intensifying the heat island effect in dense cities. And neither AC nor radiators are much help when heat waves and cold snaps capsize the power grid.

A thermal image shows a human male holding up peace signs during a test of extreme temperatures.
COURTESY OF MAX G. LEVY
A thermal image shows a human hand during a test of extreme temperatures.
COURTESY OF MAX G. LEVY
A thermal image shows a human foot during a test of extreme temperatures.
COURTESY OF MAX G. LEVY

“You, believe it or not, were not the worst person we’ve ever seen,” the author was told after enduring Cowgill’s “climate chamber.”

Through experiments like Cowgill’s, researchers around the world are revising rules about when extremes veer from uncomfortable to deadly. Their findings change how we should think about the limits of hot and cold—and how to survive in a new world. 

Embodied change

Archaeologists have known for some time that we once braved colder temperatures than anyone previously imagined. Humans pushed into Eurasia and North America well before the last glacial period ended about 11,700 years ago. We were the only hominins to make it out of this era. Neanderthals, Denisovans, and Homo floresiensis all went extinct. We don’t know for certain what killed those species. But we do know that humans survived thanks to protection from clothing, large social networks, and physiological flexibility. Human resilience to extreme temperature is baked into our bodies, behavior, and genetic code. We wouldn’t be here without it. 

“Our bodies are constantly in communication with the environment,” says Cara Ocobock, an anthropologist at the University of Notre Dame who studies how we expend energy in extreme conditions. She has worked closely with Finnish reindeer herders and Wyoming mountaineers. 

But the relationship between bodies and temperature is surprisingly still a mystery to scientists. In 1847, the anatomist Carl Bergmann observed that animal species grow larger in cold climates. The zoologist Joel Asaph Allen noted in 1877 that cold-dwellers had shorter appendages. Then there’s the nose thing: In the 1920s, the British anthropologist Arthur Thomson theorized that people in cold places have relatively long, narrow noses, the better to heat and humidify the air they take in. These theories stemmed from observations of animals like bears and foxes, and others that followed stemmed from studies comparing the bodies of cold-accustomed Indigenous populations with white male control groups. Some, like those having to do with optimization of surface area, do make sense: It seems reasonable that a tall, thin body increases the amount of skin available to dump excess heat. The problem is, scientists have never actually tested this stuff in humans. 

“Our bodies are constantly in communication with the environment.”

Cara Ocobock, anthropologist, University of Notre Dame

Some of what we know about temperature tolerance thus far comes from century-old race science or assumptions that anatomy controls everything. But science has evolved. Biology has matured. Childhood experiences, lifestyles, fat cells, and wonky biochemical feedback loops can contribute to a picture of the body as more malleable than anything imagined before. And that’s prompting researchers to change how they study it.

“If you take someone who’s super long and lanky and lean and put them in a cold climate, are they gonna burn more calories to stay warm than somebody who’s short and broad?” Ocobock says. “No one’s looked at that.”

Ocobock and Cowgill teamed up with Scott Maddux and Elizabeth Cho at the Center for Anatomical Sciences at the University of North Texas Health Science Center in Fort Worth. All four are biological anthropologists who have also puzzled over whether the rules Bergmann, Allen, and Thomson proposed are actually true. 

For the past four years, the team has been studying how factors like metabolism, fat, sweat, blood flow, and personal history control thermoregulation. 

Your native climate, for example, may influence how you handle temperature extremes. In a unique study of mortality statistics from 1980s Milan, Italians raised in warm southern Italy were more likely to survive heat waves in the northern part of the country. 

Similar trends have appeared in cold climes. Researchers often measure cold tolerance by a person’s “brown adipose,” a type of fat that is specialized for generating heat (unlike white fat, which primarily stores energy). Brown fat is a cold adaptation because it delivers heat without the mechanism of shivering. Studies have linked it to living in cold climates, particularly at young ages. Wouter van Marken Lichtenbelt, the physiologist at Maastricht University who with colleagues discovered brown fat in adults, has shown that this tissue can further activate with cold exposure and even help regulate blood sugar and influence how the body burns other fat. 

That adaptability served as an early clue for the Texas team. They want to know how a person’s response to hot and cold correlates with height, weight, and body shape. What is the difference, Maddux asks, between “a male who’s 6 foot 6 and weighs 240 pounds” and someone else in the same environment “who’s 4 foot 10 and weighs 89 pounds”? But the team also wondered if shape was only part of the story. 

Their multi-year experiment uses tools that anthropologists couldn’t have imagined a century ago—devices that track metabolism in real time and analyze genetics. Each participant gets a CT scan (measuring body shape), a DEXA scan (estimating percentages of fat and muscle), high-resolution 3D scans, and DNA analysis from saliva to examine ancestry genetically. 

Volunteers lie on a cot in underwear, as I did, for about 45 minutes in each climate condition, all on separate days. There’s dry cold, around 40 °F, akin to braving a walk-in refrigerator. Then dry heat and humid heat: 112 °F with 15% humidity and 98 °F with 85% humidity. They call it “going to Vegas” and “going to Houston,” says Cowgill. The chamber session is long enough to measure an effect, but short enough to be safe. 

Before I traveled to Texas, Cowgill told me she suspected the old rules would fall. Studies linking temperature tolerance to race and ethnicity, for example, seemed tenuous because biological anthropologists today reject the concept of distinct races. It’s a false premise, she told me: “No one in biological anthropology would argue that human beings do not vary across the globe—that’s obvious to anyone with eyes. [But] you can’t draw sharp borders around populations.” 

She added, “I think there’s a substantial possibility that we spend four years testing this and find out that really, limb length, body mass, surface area […] are not the primary things that are predicting how well you do in cold and heat.” 

Adaptable to a degree

In July 1995, a week-long heat wave pushed Chicago above 100 °F, killing roughly 500 people. Thirty years later, Ollie Jay, a physiologist at the University of Sydney, can duplicate the conditions of that exceptionally humid heat wave in a climate chamber at his laboratory. 

“We can simulate the Chicago heat wave of ’95. The Paris heat wave of 2003. The heat wave [in early July of this year] in Europe,” Jay says. “As long as we’ve got the temperature and humidity information, we can re-create those conditions.”

“Everybody has quite an intimate experience of feeling hot, so we’ve got 8 billion experts on how to keep cool,” he says. Yet our internal sense of when heat turns deadly is unreliable. Even professional athletes overseen by experienced medics have died after missing dangerous warning signs. And little research has been done to explore how vulnerable populations such as elderly people, those with heart disease, and low-income communities with limited access to cooling respond to extreme heat. 

Jay’s team researches the most effective strategies for surviving it. He lambastes air-conditioning, saying it demands so much energy that it can aggravate climate change in “a vicious cycle.” Instead, he has monitored people’s vital signs while they use fans and skin mists to endure three hours in humid and dry heat. In results published last year, his research found that fans reduced cardiovascular strain by 86% for people with heart disease in the type of humid heat familiar in Chicago. 

Dry heat was a different story. In that simulation, fans not only didn’t help but actually doubled the rate at which core temperatures rose in healthy older people.

Heat kills. But not without a fight. Your body must hold its internal temperature within a narrow window, less than two degrees on either side of 98 °F. The simple fact that you’re alive means you are producing heat. Your body needs to export that heat without amassing much more. Your nervous system relaxes the narrow blood vessels near your skin. Your heart rate increases, propelling more warm blood to your extremities and away from your organs. You sweat. And when that sweat evaporates, it carries a torrent of body heat away with it. 

This thermoregulatory response can be trained. Studies by van Marken Lichtenbelt have shown that exposure to mild heat increases sweat capacity, decreases blood pressure, and drops resting heart rate. Long-term studies of Finnish sauna use suggest similar correlations.

The body may adapt protectively to cold, too. In this case, body heat is your lifeline. Shivering and exercise help keep bodies warm. So can clothing. Cardiovascular deaths are thought to spike in cold weather. But people more adapted to cold seem better able to reroute their blood flow in ways that keep their organs warm without dropping their temperature too many degrees in their extremities. 

Earlier this year, the biological anthropologist Stephanie B. Levy (no relation) reported that New Yorkers who experienced lower average temperatures had more productive brown fat, adding evidence for the idea that the inner workings of our bodies adjust to the climate throughout the year and perhaps even throughout our lives. “Do our bodies hold a biological memory of past seasons?” Levy wonders. “That’s still an open question. There’s some work in rodent models to suggest that that’s the case.”

Although people clearly acclimatize with enough strenuous exposures to either cold or heat, Jay says, “you reach a ceiling.” Consider sweat: Heat exposure can increase the amount you sweat only until your skin is completely saturated. It’s a nonnegotiable physical limit. Any additional sweat just means leaking water without carrying away any more heat. “I’ve heard people say we’ll just find a way of evolving out of this—we’ll biologically adapt,” Jay says. “Unless we’re completely changing our body shape, then that’s not going to happen.”

And body shape may not even sway thermoregulation as much as previously believed. The subject I observed, a personal trainer, appeared outwardly adapted for cold: his broad shoulders didn’t even fit in a single CT scan image. Cowgill supposed that this muscle mass insulated him. When he emerged from his session in the 40 °F environment, though, he had finally started shivering—intensely. The researchers covered him in a heated blanket. He continued shivering. Driving to lunch over an hour later in a hot car, he still mentioned feeling cold. An hour after that, a finger prick drew no blood, a sign that blood vessels in his extremities remained constricted. His body temperature fell about half a degree C in the cold session—a significant drop—and his wider build did not appear to shield him from the cold as well as my involuntary shivering protected me. 

I asked Cowgill if perhaps there is no such thing as being uniquely predisposed to hot or cold. “Absolutely,” she said. 

A hot mess

So if body shape doesn’t tell us much about how a person maintains body temperature, and acclimation also runs into limits, then how do we determine how hot is too hot? 

In 2010 two climate change researchers, Steven Sherwood and Matthew Huber, argued that regions around the world become uninhabitable at wet-bulb temperatures of 35 °C, or 95 °F. (Wet-bulb measurements are a way to combine air temperature and relative humidity.) Above 35 °C, a person simply wouldn’t be able to dissipate heat quickly enough. But it turns out that their estimate was too optimistic. 

Researchers “ran with” that number for a decade, says Daniel Vecellio, a bioclimatologist at the University of Nebraska at Omaha. “But the number had never been actually empirically tested.” In 2021 a Pennsylvania State University physiologist, W. Larry Kenney, worked with Vecellio and others to test wet-bulb limits in a climate chamber. Kenney’s lab investigates which combinations of temperature, humidity, and time push a person’s body over the edge. 

Not long after, the researchers came up with their own wet-bulb limit of human tolerance: below 31 °C in warm, humid conditions for the youngest cohort, people in their thermoregulatory prime. Their research suggests that a day reaching 98 °F and 65% humidity, for example, poses danger in a matter of hours, even for healthy people. 
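Wet-bulb temperature isn’t something most weather apps report, but it can be approximated from ordinary readings. The sketch below uses Stull’s 2011 empirical formula—an assumption on my part, since the chamber studies measure heat strain directly—to check the article’s example day against the revised limit:

```python
import math

def wet_bulb_stull(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (Stull, 2011).

    Valid for roughly 5-99% relative humidity and -20 to 50 degrees C.
    """
    t, rh = temp_c, rh_pct
    return (
        t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
        + math.atan(t + rh)
        - math.atan(rh - 1.676331)
        + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
        - 4.686035
    )

# The article's example day: 98 F (about 36.7 C) at 65% humidity.
tw = wet_bulb_stull(36.7, 65.0)
print(f"wet-bulb: {tw:.1f} C")  # lands right around the revised 31 C limit
```

That a fairly ordinary-sounding summer day approaches the 31 °C wet-bulb mark is exactly why the revised limit alarmed researchers.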

three medical team members make preparations around a person on a gurney
JUSTIN CLEMONS

Cowgill and her colleagues Elizabeth Cho (top) and Scott Maddux prepare graduate student Joanna Bui for a “room-temperature test.”

In 2023, Vecellio and Huber teamed up, combining the growing arsenal of lab data with state-of-the-art climate simulations to predict where heat and humidity most threatened global populations: first the Middle East and South Asia, then sub-Saharan Africa and eastern China. And assuming that warming reaches 3 to 4 °C over preindustrial levels this century, as predicted, parts of North America, South America, and northern and central Australia will be next. 

Last June, Vecellio, Huber, and Kenney co-published an article revising the limits that Sherwood and Huber had proposed in 2010. “Why not 35 °C?” explained why the human limits have turned out to be lower than expected. Those initial estimates overlooked the fact that our skin temperature can quickly jump above 101 °F in hot weather, for example, making it harder to dump internal heat.

The Penn State team has published deep dives on how heat tolerance changes with sex and age. Older participants’ wet-bulb limits wound up being even lower—between 27 and 28 °C in warm, humid conditions—and varied more from person to person than they did in young people. “The conditions that we experience now—especially here in North America and Europe, places like that—are well below the limits that we found in our research,” Vecellio says. “We know that heat kills now.”  

What this fast-growing body of research suggests, Vecellio stresses, is that you can’t define heat risk by just one or two numbers. Last year, he and researchers at Arizona State University pulled up the hottest 10% of hours between 2005 and 2020 for each of 96 US cities. They wanted to compare recent heat-health research with historical weather data for a new perspective: How frequently is it so hot that people’s bodies can’t compensate for it? Over 88% of those “hot hours” met that criterion for people in full sun. In the shade, most of those heat waves became meaningfully less dangerous. 
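To make the “hottest 10% of hours” screening concrete, here is a minimal sketch of that kind of analysis on made-up data. The city temperatures, the single 35 °C threshold, and the one-number heat criterion are all illustrative assumptions; the actual studies score hours against wet-bulb and physiological limits, not air temperature alone:

```python
import random

random.seed(0)
# Hypothetical stand-in for one city's hourly air temperatures (C), 2005-2020.
hourly_temps = [random.gauss(24.0, 8.0) for _ in range(16 * 365 * 24)]

# "Hottest 10% of hours": everything at or above the 90th percentile.
ranked = sorted(hourly_temps)
cutoff = ranked[int(0.9 * len(ranked))]
hot_hours = [t for t in hourly_temps if t >= cutoff]

# Compare those hours against an illustrative uncompensable-heat threshold
# for a person in full sun.
THRESHOLD_C = 35.0
share_uncompensable = sum(t >= THRESHOLD_C for t in hot_hours) / len(hot_hours)
print(f"cutoff: {cutoff:.1f} C, share over threshold: {share_uncompensable:.0%}")
```

The real analysis repeated this kind of screen for 96 cities, which is how the researchers arrived at the 88% figure for full-sun conditions.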

“There’s really almost no one who ‘needs’ to die in a heat wave,” says Ebi, the epidemiologist. “We have the tools. We have the understanding. Essentially all [those] deaths are preventable.”

More than a number

A year after visiting Texas, I called Cowgill to hear what she was thinking after four summers of chamber experiments. She told me that the only rule about hot and cold she currently stands behind is … well, none.

She recalled a recent participant—the smallest man in the study, weighing 114 pounds. “He shivered like a leaf on a tree,” Cowgill says. Normally, a strong shiverer warms up quickly. Core temperature may even climb a little. “This [guy] was just shivering and shivering and shivering and not getting any warmer,” she says. She doesn’t know why this happened. “Every time I think I get a picture of what’s going on in there, we’ll have one person come in and just kind of be a complete exception to the rule,” she says, adding that you can’t just gloss over how much human bodies vary inside and out.

The same messiness complicates physiology studies. 

Jay looks to embrace bodily complexities by improving physiological simulations of heat and the human strain it causes. He’s piloted studies that input a person’s activity level and type of clothing to predict core temperature, dehydration, and cardiovascular strain based on the particular level of heat. One can then estimate the person’s risk on the basis of factors like age and health. He’s also working on physiological models to identify vulnerable groups, inform early-warning systems ahead of heat waves, and possibly advise cities on whether interventions like fans and mists can help protect residents. “Heat is an all-of-­society issue,” Ebi says. Officials could better prepare the public for cold snaps this way too.

“Death is not the only thing we’re concerned about,” Jay adds.  Extreme temperatures bring morbidity and sickness and strain hospital systems: “There’s all these community-level impacts that we’re just completely missing.”

Climate change forces us to reckon with the knotty science of how our bodies interact with the environment. Predicting the health effects is a big and messy matter. 

The first wave of answers from Fort Worth will materialize next year. The researchers will analyze thermal images to crunch data on brown fat. They’ll resolve whether, as Cowgill suspects, your body shape may not sway temperature tolerance as much as previously assumed. “Human variation is the rule,” she says, “not the exception.” 

Max G. Levy is an independent journalist who writes about chemistry, public health, and the environment.

AI is changing how we quantify pain

15 October 2025 at 06:00

For years at Orchard Care Homes, a 23‑facility dementia-care chain in northern England, Cheryl Baird watched nurses fill out the Abbey Pain Scale, an observational method for evaluating pain in people who can’t communicate verbally. Baird, a former nurse who was then the chain’s director of quality, describes it as “a tick‑box exercise where people weren’t truly considering pain indicators.”

As a result, agitated residents were assumed to have behavioral issues, since the scale does not always differentiate well between pain and other forms of suffering or distress. They were often prescribed psychotropic sedatives, while the pain itself went untreated.

Then, in January 2021, Orchard Care Homes began a trial of PainChek, a smartphone app that scans a resident’s face for microscopic muscle movements and uses artificial intelligence to output an expected pain score. Within weeks, the pilot unit saw fewer prescriptions and had calmer corridors. “We immediately saw the benefits: ease of use, accuracy, and identifying pain that wouldn’t have been spotted using the old scale,” Baird recalls.

In nursing homes, neonatal units, and ICU wards, researchers are racing to turn pain into something a camera or sensor can score as reliably as blood pressure.

This kind of technology-assisted diagnosis hints at a bigger trend. In nursing homes, neonatal units, and ICU wards, researchers are racing to turn pain—medicine’s most subjective vital sign—into something a camera or sensor can score as reliably as blood pressure. The push has already produced PainChek, which has been cleared by regulators on three continents and has logged more than 10 million pain assessments. Other startups are beginning to make similar inroads in care settings.

The way we assess pain may finally be shifting, but when algorithms measure our suffering, does that change the way we understand and treat it?

Science already understands certain aspects of pain. We know that when you stub your toe, for example, microscopic alarm bells called nociceptors send electrical impulses toward your spinal cord on “express” wires, delivering the first stab of pain, while a slower convoy follows with the dull throb that lingers. At the spinal cord, the signal meets a microscopic switchboard scientists call the gate. Flood that gate with friendly touches—say, by rubbing the bruise—or let the brain return an instruction born of panic or calm, and the gate might muffle or magnify the message before you even become aware of it.

The gate can either let pain signals pass through or block them, depending on other nerve activity and instructions from your brain. Only the signals that succeed in getting past this gate travel up to your brain’s sensory map to help locate the damage, while others branch out to emotion centers that decide how bad it feels. Within milliseconds, those same hubs in the brain shoot fresh orders back down the line, releasing built-in painkillers or stoking the alarm. In other words, pain isn’t a straightforward translation of damage or sensation but a live negotiation between the body and the brain.

But much of how that negotiation plays out is still a mystery. For instance, scientists cannot predict what causes someone to slip from a routine injury into years-long hypersensitivity; the molecular shift from acute to chronic pain is still largely unknown. Phantom-limb pain remains equally puzzling: About two-thirds of amputees feel agony in a part of their body that no longer exists, yet competing theories—cortical remapping, peripheral neuromas, body-schema mismatch—do not explain why they suffer while the other third feel nothing.

The first serious attempt at a system for quantifying pain was introduced in 1921. Patients marked their degree of pain as a point on a blank 10‑centimeter line and clinicians scored the distance in millimeters, converting lived experience into a 0–100 ladder. By 1975, psychologist Ronald Melzack’s McGill Pain Questionnaire offered 78 adjectives like “burning,” “stabbing,” and “throbbing,” so that pain’s texture could join intensity in the chart. Over the past few decades, hospitals have ultimately settled on the 0–10 Numeric Rating Scale.

Yet pain is stubbornly subjective. Feedback from the brain in the form of your reaction can send instructions back down the spinal cord, meaning that expectation and emotion can change how much the same injury hurts. In one trial, volunteers who believed they had received a pain relief cream reported a stimulus as 22% less painful than those who knew the cream was inactive—and a functional magnetic resonance image of their brains showed that the drop corresponded with decreased activity in the parts of the brain that report pain, meaning they really did feel less hurt.

What’s more, pain can also be affected by a slew of external factors. In one study, experimenters applied the same calibrated electrical stimulus to volunteers from Italy, Sweden, and Saudi Arabia, and the ratings varied dramatically. Italian women recorded the highest scores on the 0–10 scale, while Swedish and Saudi participants judged the identical burn several points lower, implying that culture can amplify or dampen the felt intensity of the same experience.

Bias inside the clinic can drive different responses even to the same pain score. A 2024 analysis of discharge notes found that women’s scores were recorded 10% less often than men’s. At a large pediatric emergency department, Black children presenting with limb fractures were roughly 39% less likely to receive an opioid analgesic than their white non-Hispanic peers, even after the researchers controlled for pain score and other clinical factors. Together these studies make clear that an “8 out of 10” does not always result in the same reaction or treatment. And many patients cannot self-report their pain at all—for example, a review of bedside studies concludes that about 70% of intensive-care patients have pain that goes unrecognized or undertreated, a problem the authors link to their impaired communication due to sedation or intubation.

These issues have prompted a search for a better, more objective way to understand and assess pain. Progress in artificial intelligence has brought a new dimension to that hunt.

Research groups are pursuing two broad routes. The first listens underneath the skin. Electrophysiologists strap electrode nets to volunteers and look for neural signatures that rise and fall with administered stimuli. A 2024 machine-learning study reported that one such algorithm could tell with over 80% accuracy, using a few minutes of resting-state EEG, which subjects experienced chronic pain and which were pain-free control participants. Other researchers combine EEG with galvanic skin response and heart-rate variability, hoping a multisignal “pain fingerprint” will provide more robust measurements.
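As a minimal illustration of the kind of band-power features such EEG classifiers typically start from—the sampling rate, the synthetic signals, and the alpha-band choice below are assumptions for illustration, not details from the 2024 study:

```python
import numpy as np

FS = 256  # sampling rate in Hz (illustrative)

def relative_band_power(signal: np.ndarray, low: float, high: float) -> float:
    """Share of total spectral power that falls in [low, high] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].sum() / spectrum[1:].sum()  # skip the DC term

rng = np.random.default_rng(1)
t = np.arange(10 * FS) / FS  # ten seconds of synthetic "resting-state EEG"

# Toy stand-ins: a strong 10 Hz alpha rhythm versus a weaker one, plus noise.
control = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
patient = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# A feature like this would feed a trained classifier; here the alpha-band
# (8-12 Hz) share alone already separates the two synthetic recordings.
print(relative_band_power(control, 8, 12))
print(relative_band_power(patient, 8, 12))
```

Real systems extract dozens of such features across frequency bands and electrodes before handing them to a machine-learning model.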

One example of this method is the PMD-200 patient monitor from Medasense, which uses AI-based tools to output pain scores. The device uses physiological patterns like heart rate, sweating, or peripheral temperature changes as the input and focuses on surgical patients, with the goal of helping anesthesiologists adjust doses during operations. In a 2022 study of 75 patients undergoing major abdominal surgery, use of the monitor resulted in lower self-reported pain scores after the operation—a median score of 3 out of 10, versus 5 out of 10 in controls—without an increase in opioid use. The device is authorized by the US Food and Drug Administration and is in use in the United States, the European Union, Canada, and elsewhere.

The second path is behavioral. A grimace, a guarded posture, or a sharp intake of breath correlates with various levels of pain. Computer-vision teams have fed high-speed video of patients’ changing expressions into neural networks trained on the Face Action Coding System (FACS), which was introduced in the late 1970s with the goal of creating an objective and universal system to analyze such expressions—it’s the Rosetta stone of 44 facial micro-movements. In lab tests, those models can flag frames indicating pain from the data set with over 90% accuracy, edging close to the consistency of expert human assessors. Similar approaches mine posture and even sentence fragments in clinical notes, using natural-language processing, to spot phrases like “curling knees to chest” that often correlate with high pain.

PainChek is one of these behavioral models, and it acts like a camera‑based thermometer, but for pain: A care worker opens the app and holds a phone 30 centimeters from a person’s face. For three seconds, a neural network looks for nine particular microscopic movements—upper‑lip raise, brow pinch, cheek tension, and so on—that research has linked most strongly to pain. Then the screen flashes a score of 0 to 42. “There’s a catalogue of ‘action‑unit codes’—facial expressions common to all humans. Nine of those are associated with pain,” explains Kreshnik Hoti, a senior research scientist with PainChek and a co-inventor of the device. This system is built directly on the foundation of FACS. After the scan, the app walks the user through a yes‑or‑no checklist of other signs, like groaning, “guarding,” and sleep disruption, and stores the result on a cloud dashboard that can show trends.
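As a rough sketch of how a scan-plus-checklist tool in this style could arrive at a 0-to-42 score: the item names and one-point-per-item scoring below are hypothetical stand-ins, not PainChek’s published internals; only the nine facial action units and the 0–42 range come from the article.

```python
# Hypothetical labels standing in for the nine pain-linked facial action units.
FACIAL_ACTION_UNITS = [
    "brow_pinch", "cheek_tension", "upper_lip_raise",
    "au_4", "au_5", "au_6", "au_7", "au_8", "au_9",
]

# Placeholder checklist items (groaning, guarding, sleep disruption, ...).
CHECKLIST_ITEMS = [f"item_{i}" for i in range(33)]

def pain_score(detected_aus: set[str], checked_items: set[str]) -> int:
    """One point per observed sign, for a total on a 0-42 scale."""
    face = sum(1 for au in FACIAL_ACTION_UNITS if au in detected_aus)
    other = sum(1 for item in CHECKLIST_ITEMS if item in checked_items)
    return face + other

# Two facial action units detected by the scan, three checklist signs ticked.
score = pain_score({"brow_pinch", "upper_lip_raise"}, {"item_0", "item_1", "item_2"})
print(score)  # 5
```

The division of labor matters: the neural network supplies the facial half automatically, while the caregiver supplies the rest, which is the hybrid design Hoti describes.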

Linking the scan to a human‑filled checklist was, Hoti admits, a late design choice. “Initially, we thought AI should automate everything, but now we see [that] hybrid use—AI plus human input—is our major strength,” he says. Care aides, not nurses, complete most assessments, freeing clinicians to act on the data rather than gather it.

PainChek was cleared by Australia’s Therapeutic Goods Administration in 2017, and national rollout funding from Canberra helped embed it in hundreds of nursing homes in the country. The system has also won authorization in the UK—where expansion began just before covid-19 started spreading and resumed as lockdowns eased—and in Canada and New Zealand, which are running pilot programs. In the US, it’s currently awaiting an FDA decision. Company‑wide data show “about a 25% drop in antipsychotic use and, in Scotland, a 42% reduction in falls,” Hoti says.

a person holding a phone up in front of an elderly person, whose face is visible on the screen
PainChek is a mobile app that estimates pain scores by applying artificial intelligence to facial scans.
COURTESY OF PAINCHEK

Orchard Care Homes is one of its early adopters. Baird remembers the pre‑AI routine as largely done “to prove compliance.”

PainChek added an algorithm to that workflow, and the hybrid approach has paid off. Orchard’s internal study of four care homes tracked monthly pain scores, behavioral incidents, and prescriptions. Within weeks, psychotropic scripts fell and residents’ behavior calmed. The ripple effects went beyond pharmacy tallies. Residents who had skipped meals because of undetected dental pain “began eating again,” Baird notes, and “those who were isolated due to pain began socializing.”

Inside Orchard facilities, a cultural shift is underway. When Baird trained new staff, she likened pain “to measuring blood pressure or oxygen,” she says. “We wouldn’t guess those, so why guess pain?” The analogy lands, but getting people fully on board is still a slog. Some nurses insist their clinical judgment is enough; others balk at another login and audit trail. “The sector has been slow to adopt technology, but it’s changing,” Baird says. That’s helped by the fact that administering a full Abbey Pain Scale takes 20 minutes, while a PainChek scan and checklist take less than five.

Engineers at PainChek are now adapting the code for the very youngest patients. PainChek Infant targets babies under one year, whose grimaces flicker faster than adults’. The algorithm, retrained on neonatal faces, detects six validated facial action units based on the well-established Baby Facial Action Coding System. PainChek Infant is starting limited testing in Australia while the company pursues a separate regulatory pathway.

Skeptics raise familiar red flags about these devices. Facial‑analysis AI has a history of skin‑tone bias, for example. Facial analysis may also misread grimaces stemming from nausea or fear. The tool is only as good as the yes‑or‑no answers that follow the scan; sloppy data entry can skew results in either direction. Results lack the broader clinical and interpersonal context a caregiver is likely to have from interacting with individual patients regularly and understanding their medical history. It’s also possible that clinicians might defer too strongly to the algorithm, over-relying on outside judgment and eroding their own.

If PainChek is approved by the FDA this fall, it will be part of a broader effort to create a system of new pain measurement technology. Other startups are pitching EEG headbands for neuropathic pain, galvanic skin sensors that flag breakthrough cancer pain, and even language models that comb nursing notes for evidence of hidden distress. Still, quantifying pain with an external device could be rife with hidden issues, like bias or inaccuracies, that we will uncover only after significant use.

For Baird, the issue is fairly straightforward nonetheless. “I’ve lived with chronic pain and had a hard time getting people to believe me. [PainChek] would have made a huge difference,” she says. If artificial intelligence can give silent sufferers a numerical voice—and make clinicians listen—then adding one more line to the vital‑sign chart might be worth the screen time.

Deena Mousa is a researcher, grantmaker, and journalist focused on global health, economic development, and scientific and technological progress.

Mousa is employed as lead researcher by Open Philanthropy, a funder and adviser focused on high-impact causes, including global health and the potential risks posed by AI. The research team investigates new causes of focus and is not involved in work related to pain management. Mousa has not been involved with any grants related to pain management, although Open Philanthropy has funded research in this area in the past.

How aging clocks can help us understand why we age—and if we can reverse it

14 October 2025 at 06:00

Be honest: Have you ever looked up someone from your childhood on social media with the sole intention of seeing how they’ve aged? 

One of my colleagues, who shall remain nameless, certainly has. He recently shared a photo of a former classmate. “Can you believe we’re the same age?” he asked, with a hint of glee in his voice. A relative also delights in this pastime. “Wow, she looks like an old woman,” she’ll say when looking at a picture of someone she has known since childhood. The years certainly are kinder to some of us than others.

But wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging, under the hood. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging (such as elevated cholesterol or markers of inflammation), might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active. 

Doctors have long used functional tests that measure their patients’ strength or the distance they can walk, for example, or simply “eyeball” them to guess whether they look fit enough to survive some treatment regimen, says Tamir Chandra, who studies aging at the Mayo Clinic. 

But over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. What they’ve found is changing our understanding of aging itself. 

“Aging clocks” are new scientific tools that can measure how our organs are wearing out, giving us insight into our mortality and health. They hint at our biological age. While chronological age is simply how many birthdays we’ve had, biological age is meant to reflect something deeper. It measures how our bodies are handling the passing of time and—perhaps—lets us know how much more of it we have left. And while you can’t change your chronological age, you just might be able to influence your biological age.

It’s not just scientists who are using these clocks. Longevity influencers like Bryan Johnson often use them to make the case that they are aging backwards. “My telomeres say I’m 10 years old,” Johnson posted on X in April. The Kardashians have tried them too (Khloé was told on TV that her biological age was 12 years below her chronological age). Even my local health-food store offers biological age testing. Some are pushing the use of clocks even further, using them to sell unproven “anti-aging” supplements.

The science is still new, and few experts in the field—some of whom affectionately refer to it as “clock world”—would argue that an aging clock can definitively reveal an individual’s biological age. 

But their work is revealing that aging clocks can offer so much more than an insta-brag, a snake-oil pitch—or even just an eye-catching number. In fact, they are helping scientists unravel some of the deepest mysteries in biology: Why do we age? How do we age? When does aging begin? What does it even mean to age?

Ultimately, and most importantly, they might soon tell us whether we can reverse the whole process.

Clocks kick off

The way your genes work can change. Molecules called methyl groups can attach to DNA, controlling the way genes make proteins. This process is called methylation, and it can potentially occur at millions of points along the genome. These epigenetic markers, as they are known, can switch genes on or off, or increase or decrease how much protein they make. They’re not part of our DNA, but they influence how it works.

In 2011, Steve Horvath, then a biostatistician at the University of California, Los Angeles, took part in a study that was looking for links between sexual orientation and these epigenetic markers. Steve is straight; he says his twin brother, Markus, who also volunteered, is gay.

That study didn’t find a link between DNA methylation and sexual orientation. But when Horvath looked at the data, he noticed a different trend—a very strong link between age and methylation at around 88 points on the genome. He once told me he fell off his chair when he saw it.

Many of the affected genes had already been linked to age-related brain and cardiovascular diseases, but it wasn’t clear how methylation might be related to those diseases. 


In 2013, Horvath collected methylation data from 8,000 tissue and cell samples to create what he called the Horvath clock—essentially a mathematical model that could estimate age on the basis of DNA methylation at 353 points on the genome. From a tissue sample, it was able to detect a person’s age within a range of 2.9 years.
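The underlying idea is simpler than it might sound: a clock is a weighted sum of methylation values, with weights learned by regressing donors’ known ages on their methylation data. The toy sketch below illustrates that logic on simulated data; it is not Horvath’s actual model (his clock used an elastic net over 353 CpG sites and a nonlinear age transform), and the site counts, drift rates, and ridge penalty here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate methylation "beta values" (fraction of cells methylated, 0..1) at a
# few hundred CpG sites for a cohort of donors with known ages. Each site
# drifts linearly with age at its own small rate, plus noise -- a toy stand-in
# for real methylation data, not a biological model.
n_donors, n_sites = 500, 300
ages = rng.uniform(20, 90, n_donors)
baseline = rng.uniform(0.1, 0.9, n_sites)   # per-site methylation at age 0
drift = rng.normal(0, 0.003, n_sites)       # per-site change per year of age
betas = baseline + np.outer(ages, drift) + rng.normal(0, 0.02, (n_donors, n_sites))
betas = np.clip(betas, 0.0, 1.0)

# Fit a ridge-penalized linear model: age ~ methylation. Ridge regression
# keeps the sketch dependency-free while illustrating the same
# "weighted sum of methylation sites" idea as a penalized-regression clock.
X = np.hstack([betas, np.ones((n_donors, 1))])  # add intercept column
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ ages)

predicted = X @ w
mae = np.abs(predicted - ages).mean()
print(f"mean absolute error: {mae:.2f} years")
```

On this synthetic cohort the fit recovers age to within a few years, echoing how a real clock is scored: by the gap between predicted and chronological age across a held-out population.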

That clock changed everything. Its publication in 2013 marked the birth of “clock world.” To some, the possibilities were almost endless. If a model could work out what average aging looks like, it could potentially estimate whether someone was aging unusually fast or slowly. It could transform medicine and fast-track the search for an anti-aging drug. It could help us understand what aging is, and why it happens at all.

The epigenetic clock was a success story in “a field that, frankly, doesn’t have a lot of success stories,” says João Pedro de Magalhães, who researches aging at the University of Birmingham, UK.

It took a few years, but as more aging researchers heard about the clock, they began incorporating it into their research and even developing their own clocks. Horvath became a bit of a celebrity. Scientists started asking for selfies with him at conferences, he says. Some researchers even made T-shirts bearing the front page of his 2013 paper.

Some of the many other aging clocks developed since have become notable in their own right. Examples include the PhenoAge clock, which incorporates health data such as blood cell counts and signs of inflammation along with methylation, and the Dunedin Pace of Aging clock, which tells you how quickly or slowly a person is aging rather than pointing to a specific age. Many of the clocks measure methylation, but some look at other variables, such as proteins in blood or certain carbohydrate molecules that attach to such proteins.

Today, there are hundreds or even thousands of clocks out there, says Chiara Herzog, who researches aging at King’s College London and is a member of the Biomarkers of Aging Consortium. Everyone has a favorite. Horvath himself favors his GrimAge clock, which was named after the Grim Reaper because it is designed to predict time to death.

That clock was trained on data collected from people who were monitored for decades, many of whom died in that period. Horvath won’t use it to tell people when they might die of old age, he stresses, saying that it wouldn’t be ethical. Instead, it can be used to deliver a biological age that hints at how long a person might expect to live. Someone who is 50 but has a GrimAge of 60 can assume that, compared with the average 50-year-old, they might be a bit closer to the end.

GrimAge is not perfect. While it can strongly predict time to death given the health trajectory someone is on, no aging clock can predict if someone will start smoking or get a divorce (which generally speeds aging) or suddenly take up running (which can generally slow it). “People are complicated,” Horvath tells MIT Technology Review. “There’s a huge error bar.”

On the whole, the clocks are pretty good at making predictions about health and lifespan. They’ve been able to predict that people over the age of 105 have lower biological ages, which tracks given how rare it is for people to make it past that age. A higher epigenetic age has been linked to declining cognitive function and signs of Alzheimer’s disease, while better physical and cognitive fitness has been linked to a lower epigenetic age.

Black-box clocks

But accuracy is a challenge for all aging clocks. Part of the problem lies in how they were designed. Most of the clocks were trained to link age with methylation. The best clocks will deliver an estimate that reflects how far a person’s biology deviates from the average. Aging clocks are still judged on how well they can predict a person’s chronological age, but you don’t want a clock’s estimate to hew too closely to chronological age, says Lucas Paulo de Lima Camillo, head of machine learning at Shift Bioscience, who was awarded $10,000 by the Biomarkers of Aging Consortium for developing a clock that could estimate age within a range of 2.55 years.

a cartoon alarm clock shrugging
None of the clocks are precise enough to predict the biological age of a single person. Putting the same biological sample through five different clocks will give you five wildly different results.
LEON EDLER

“There’s this paradox,” says Camillo. If a clock is really good at predicting chronological age, that’s all it will tell you—and it probably won’t reveal much about your biological age. No one needs an aging clock to tell them how many birthdays they’ve had. Camillo says he’s noticed that when the clocks get too close to “perfect” age prediction, they actually become less accurate at predicting mortality.

Therein lies the other central issue for scientists who develop and use aging clocks: What is the thing they are really measuring? It is a difficult question for a field whose members notoriously fail to agree on the basics. (Everything from the definition of aging to how it occurs and why is up for debate among the experts.)

They do agree that aging is incredibly complex. A methylation-based aging clock might tell you about how that collection of chemical markers compares across individuals, but at best, it’s only giving you an idea of their “epigenetic age,” says Chandra. There are probably plenty of other biological markers that might reveal other aspects of aging, he says: “None of the clocks measure everything.” 

We don’t know why some methyl groups appear or disappear with age, either. Are these changes causing damage? Or are they a by-product of it? Are the epigenetic patterns seen in a 90-year-old a sign of deterioration? Or have they been responsible for keeping that person alive into very old age?

To make matters even more complicated, two different clocks can give similar answers by measuring methylation at entirely different regions of the genome. No one knows why, or which regions might be the best ones to focus on.

“The biomarkers have this black-box quality,” says Jesse Poganik at Brigham and Women’s Hospital in Boston. “Some of them are probably causal, some of them may be adaptive … and some of them may just be neutral”: either “there’s no reason for them not to happen” or “they just happen by random chance.”

What we know is that, as things stand, none of the clocks are precise enough to predict the biological age of a single person (sorry, Khloé). Putting the same biological sample through five different clocks will give you five wildly different results.

Even the same clock can give you different answers if you put a sample through it more than once. “They’re not yet individually predictive,” says Herzog. “We don’t know what [a clock result] means for a person, [or if] they’re more or less likely to develop disease.”

And it’s why plenty of aging researchers—even those who regularly use the clocks in their work—haven’t bothered to measure their own epigenetic age. “Let’s say I do a clock and it says that my biological age … is five years older than it should be,” says Magalhães. “So what?” He shrugs. “I don’t see much point in it.”

You might think this lack of clarity would make aging clocks pretty useless in a clinical setting. But plenty of clinics are offering them anyway. Some longevity clinics are more careful, and will regularly test their patients with a range of clocks, noting their results and tracking them over time. Others will simply offer an estimate of biological age as part of a longevity treatment package.

And then there are the people who use aging clocks to sell supplements. While no drug or supplement has been definitively shown to make people live longer, that hasn’t stopped the lightly regulated wellness industry from pushing “treatments” that range from lotions and herbal pills all the way through to stem-cell injections.

Some of these people come to aging meetings. I was in the audience at an event when one CEO took to the stage to claim he had reversed his own biological age by 18 years—thanks to the supplement he was selling. Tom Weldon of Ponce de Leon Health told us his gray hair was turning brown. His biological age was supposedly reversing so rapidly that he had reached “longevity escape velocity.”

But if the people who buy his supplements expect some kind of Benjamin Button effect, they might be disappointed. His company hasn’t yet conducted a randomized controlled trial to demonstrate any anti-aging effects of that supplement, called Rejuvant. Weldon says that such a trial would take years and cost millions of dollars, and that he’d “have to increase the price of our product more than four times” to pay for one. (The company has so far tested the active ingredient in mice and carried out a provisional trial in people.)

More generally, Horvath says he “gets a bad taste in [his] mouth” when people use the clocks to sell products and “make a quick buck.” But he thinks that most of those sellers have genuine faith in both the clocks and their products. “People truly believe their own nonsense,” he says. “They are so passionate about what they discovered, they fall into this trap of believing [their] own prejudices.” 

The accuracy of the clocks is at a level that makes them useful for research, but not for individual predictions. Even if a clock did tell someone they were five years younger than their chronological age, that wouldn’t necessarily mean the person could expect to live five years longer, says Magalhães. “The field of aging has long been a rich ground for snake-oil salesmen and hype,” he says. “It comes with the territory.” (Weldon, for his part, says Rejuvant is the only product that has “clinically meaningful” claims.) 

In any case, Magalhães adds that he thinks any publicity is better than no publicity.

And there’s the rub. Most people in the longevity field seem to have mixed feelings about the trendiness of aging clocks and how they are being used. They’ll agree that the clocks aren’t ready for consumer prime time, but they tend to appreciate the attention. Longevity research is expensive, after all. With a surge in funding and an explosion in the number of biotech companies working on longevity, aging scientists are hopeful that innovation and progress will follow. 

So they want to be sure that the reputation of aging clocks doesn’t end up being tarnished by association. Because while influencers and supplement sellers are using their “biological ages” to garner attention, scientists are now using these clocks to make some remarkable discoveries. Discoveries that are changing the way we think about aging.

How to be young again

Two little mice lie side by side, anesthetized and unconscious, as Jim White prepares his scalpel. The animals are of the same breed but look decidedly different. One is a youthful three-month-old, its fur thick, black, and glossy. By comparison, the second mouse, a 20-month-old, looks a little the worse for wear. Its fur is graying and patchy. Its whiskers are short, and it generally looks kind of frail.

But the two mice are about to have a lot more in common. White, with some help from a colleague, makes incisions along the side of each mouse’s body and into the upper part of an arm and leg on the same side. He then carefully stitches the two animals together—membranes, fascia, and skin. 

The procedure takes around an hour, and the mice are then roused from their anesthesia. At first, the two still-groggy animals pull away from each other. But within a few days, they seem to have accepted that they now share their bodies. Soon their circulatory systems will fuse, and the animals will share a blood flow too.

cartoon man in profile with a wristwatch strapped around a lit stick of dynamite in his mouth
“People are complicated. There’s a huge error bar.” — Steve Horvath, former biostatistician at the University of California, Los Angeles
LEON EDLER

White, who studies aging at Duke University, has been stitching mice together for years; he has performed this strange procedure, known as heterochronic parabiosis, more than a hundred times. And he’s seen a curious phenomenon occur. The older mice appear to benefit from the arrangement. They seem to get younger.

Experiments with heterochronic parabiosis have been performed for decades, but typically scientists keep the mice attached to each other for only a few weeks, says White. In their experiment, he and his colleagues left the mice attached for three months—equivalent to around 10 human years. The team then carefully separated the animals to assess how each of them had fared. “You’d think that they’d want to separate immediately,” says White. “But when you detach them … they kind of follow each other around.”

The most striking result of that experiment was that the older mice who had been attached to a younger mouse ended up living longer than other mice of a similar age. “[They lived] around 10% longer, but [they] also maintained a lot of [their] function,” says White. They were more active and maintained their strength for longer, he adds.

When his colleagues, including Poganik, applied aging clocks to the mice, they found that their epigenetic ages were lower than expected. “The young circulation slowed aging in the old mice,” says White. The effect seemed to last, too—at least for a little while. “It preserved that youthful state for longer than we expected,” he says.

The young mice went the other way and appeared biologically older, both while they were attached to the old mice and shortly after they were detached. But in their case, the effect seemed to be short-lived, says White: “The young mice went back to being young again.” 

To White, this suggests that something about the “youthful state” might be programmed in some way. That perhaps it is written into our DNA. Maybe we don’t have to go through the biological process of aging. 

This gets at a central debate in the aging field: What is aging, and why does it happen? Some believe it’s simply a result of accumulated damage. Some believe that the aging process is programmed; just as we grow limbs, develop a brain, reach puberty, and experience menopause, we are destined to deteriorate. Others think programs that play an important role in our early development just turn out to be harmful later in life by chance. And there are some scientists who agree with all of the above.

White’s theory is that being old is just “a loss of youth,” he says. If that’s the case, there’s a silver lining: Knowing how youth is lost might point toward a way to somehow regain it, perhaps by restoring those youthful programs in some way. 

Dogs and dolphins

Horvath’s eponymous clock was developed by measuring methylation in DNA samples taken from tissues around the body. It seems to represent aging in all these tissues, which is why Horvath calls it a pan-tissue clock. Given that our organs are thought to age differently, it was remarkable that a single clock could measure aging in so many of them.

But Horvath had ambitious plans for an even more universal clock: a pan-species model that could measure aging in all mammals. He started out, in 2017, with an email campaign that involved asking hundreds of scientists around the world to share samples of tissues from animals they had worked with. He tried zoos, too.   


“I learned that people had spent careers collecting [animal] tissues,” he says. “They had freezers full of [them].” Amenable scientists would ship those frozen tissues, or just DNA, to Horvath’s lab in California, where he would use them to train a new model.

Horvath says he initially set out to profile 30 different species. But he ended up receiving around 15,000 samples from 200 scientists, representing 348 species—including everything from dogs to dolphins. Could a single clock really predict age in all of them?

“I truly felt it would fail,” says Horvath. “But it turned out that I was completely wrong.” He and his colleagues developed a clock that assessed methylation at 36,000 locations on the genome. The result, which was published in 2023 as the pan-mammalian clock, can estimate the age of any mammal and even the maximum lifespan of the species. The data set is open to anyone who wants to download it, he adds: “I hope people will mine the data to find the secret of how to extend a healthy lifespan.”

The pan-mammalian clock suggests that there is something universal about aging—not just that all mammals experience it in a similar way, but that a similar set of genetic or epigenetic factors might be responsible for it.

Comparisons between mammals also support the idea that the slower methylation changes occur, the longer the lifespan of the animal, says Nelly Olova, an epigeneticist who researches aging at the University of Edinburgh in the UK. “DNA methylation slowly erodes with age,” she says. “We still have the instructions in place, but they become a little messier.” The research in different mammals suggests that cells can take only so much change before they stop functioning.

“There’s a finite amount of change that the cell can tolerate,” she says. “If the instructions become too messy and noisy … it cannot support life.”

Olova has been investigating exactly when aging clocks first begin to tick—in other words, the point at which aging starts. Clocks are trained on data from volunteers, by matching the patterns of methylation on their DNA to their chronological ages. The trained clocks are then typically used to estimate the biological age of adults. But they can also be used on samples from children. Or babies. They can be used to work out the biological age of cells that make up embryos. 

In her research, Olova used adult skin cells, which—thanks to Nobel Prize–winning research in the 2000s—can be “reprogrammed” back to a state resembling that of the pluripotent stem cells found in embryos. When Olova and her colleagues used a “partial reprogramming” approach to take cells close to that state, they found that the closer they got to the entirely reprogrammed state, the “younger” the cells were. 

It was around 20 days after the cells had been reprogrammed into stem cells that they reached the biological age of zero according to the clock used, says Olova. “It was a bit surreal,” she says. “The pluripotent cells measure as minus 0.5; they’re slightly below zero.”

Vadim Gladyshev, a prominent aging researcher at Harvard University, has since proposed that the same negative level of aging might apply to embryos. After all, some kind of rejuvenation happens during the early stages of embryo formation—an aged egg cell and an aged sperm cell somehow create a brand-new cell. The slate is wiped clean.

Gladyshev calls this point “ground zero.” He posits that it’s reached sometime during the “mid-embryonic state.” At this point, aging begins. And so does “organismal life,” he argues. “It’s interesting how this coincides with philosophical questions about when life starts,” says Olova. 

Some have argued that life begins when sperm meets egg, while others have suggested that the point when embryonic cells start to form some kind of unified structure is what counts. The ground zero point is when the body plan is set out and cells begin to organize accordingly, she says. “Before that, it’s just a bunch of cells.”

This doesn’t settle the question of when life begins, but it does suggest that this is when aging begins—perhaps as the result of “a generational clearance of damage,” says Poganik.

It is early days—no pun intended—for this research, and the science is far from settled. But knowing when aging begins could help inform attempts to rewind the clock. If scientists can pinpoint an ideal biological age for cells, perhaps they can find ways to get old cells back to that state. There might be a way to slow aging once cells reach a certain biological age, too. 

“Presumably, there may be opportunities for targeting aging before … you’re full of gray hair,” says Poganik. “It could mean that there is an ideal window for intervention which is much earlier than our current geriatrics-based approach.”

When young meets old

When White first started stitching mice together, he would sit and watch them for hours. “I was like, look at them go! They’re together, and they don’t even care!” he says. Since then, he’s learned a few tricks. He tends to work with female mice, for instance—the males tend to bicker and nip at each other, he says. The females, on the other hand, seem to get on well. 

The effect their partnership appears to have on their biological ages, if only temporarily, is among the ways aging clocks are helping us understand that biological age is plastic to some degree. White and his colleagues have also found, for instance, that stress seems to increase biological age, but that the effect can be reversed once the stress stops. Both pregnancy and covid-19 infections have a similar reversible effect.

Poganik wonders if this finding might have applications for human organ transplants. Perhaps there’s a way to measure the biological age of an organ before it is transplanted and somehow rejuvenate organs before surgery. 

But new data from aging clocks suggests that this might be more complicated than it sounds. Poganik and his colleagues have been using methylation clocks to measure the biological age of samples taken from recently transplanted hearts in living people. 

If being old is simply a case of losing our youthfulness, then that might give us a clue to how we can somehow regain it.

Young hearts do well in older bodies, but the biological age of these organs eventually creeps up to match that of their recipient. The same is true for older hearts in younger bodies, says Poganik, who has not yet published his findings. “After a few months, the tissue may assimilate the biological age of the organism,” he says. 

If that’s the case, the benefits of young organs might be short-lived. It also suggests that scientists working on ways to rejuvenate individual organs may need to focus their anti-aging efforts on more systemic means of rejuvenation—for example, stem cells that repopulate the blood. Reprogramming these cells to a youthful state, perhaps one a little closer to “ground zero,” might be the way to go.

Whole-body rejuvenation might be some way off, but scientists are still hopeful that aging clocks might help them find a way to reverse aging in people.

“We have the machinery to reset our epigenetic clock to a more youthful state,” says White. “That means we have the ability to turn the clock backwards.” 
