
The State of AI: Welcome to the economic singularity

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next two weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power.

This week, Richard Waters, FT columnist and former West Coast editor, talks with MIT Technology Review’s editor at large David Rotman about the true impact of AI on the job market.

Bonus: If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside MIT Technology Review’s editor in chief, Mat Honan, for an exclusive live conversation on this topic on Tuesday, December 9, at 1pm ET. Sign up to take part here.

Richard Waters writes:

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. Mark Zuckerberg recently predicted that half of Meta’s code would be written by AI within a year. At the other extreme, most companies are seeing little if any benefit from their initial investments. A widely cited study from MIT found that so far, 95% of gen AI projects produce zero return.

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business.

To many students of tech history, though, the lack of immediate impact is just the normal lag associated with transformative new technologies. Erik Brynjolfsson, then an assistant professor at MIT, first described what he called the “productivity paradox of IT” in the early 1990s. Despite plenty of anecdotal evidence that technology was changing the way people worked, it wasn’t showing up in the aggregate data in the form of higher productivity growth. Brynjolfsson’s conclusion was that it just took time for businesses to adapt.

Big investments in IT finally showed through with a notable rebound in US productivity growth starting in the mid-1990s. But that tailed off a decade later and was followed by a second lull.


In the case of AI, companies need to build new infrastructure (particularly data platforms), redesign core business processes, and retrain workers before they can expect to see results. If a lag effect explains the slow results, there may at least be reasons for optimism: Much of the cloud computing infrastructure needed to bring generative AI to a wider business audience is already in place.

The opportunities and the challenges are both enormous. An executive at one Fortune 500 company says his organization has carried out a comprehensive review of its use of analytics and concluded that its workers, overall, add little or no value. Rooting out the old software and replacing that inefficient human labor with AI might yield significant results. But, as this person says, such an overhaul would require big changes to existing processes and take years to carry out.

There are some early encouraging signs. US productivity growth, stuck at 1% to 1.5% for more than a decade and a half, rebounded to more than 2% last year. It probably hit the same level in the first nine months of this year, though the lack of official data due to the recent US government shutdown makes this impossible to confirm.

It is impossible to tell, though, how durable this rebound will be or how much can be attributed to AI. The effects of new technologies are seldom felt in isolation. Instead, the benefits compound. AI is riding earlier investments in cloud and mobile computing. In the same way, the latest AI boom may only be the precursor to breakthroughs in fields that have a wider impact on the economy, such as robotics. ChatGPT might have caught the popular imagination, but OpenAI’s chatbot is unlikely to have the final word.

David Rotman replies: 

This is my favorite discussion these days when it comes to artificial intelligence. How will AI affect overall economic productivity? Forget about the mesmerizing videos, the promise of companionship, and the prospect of agents to do tedious everyday tasks—the bottom line will be whether AI can grow the economy, and that means increasing productivity. 

But, as you say, it’s hard to pin down just how AI is affecting such growth or how it will do so in the future. Erik Brynjolfsson predicts that, like other so-called general purpose technologies, AI will follow a J curve in which initially there is a slow, even negative, effect on productivity as companies invest heavily in the technology before finally reaping the rewards. And then the boom. 

But there is a counterexample undermining the just-be-patient argument. Productivity growth from IT picked up in the mid-1990s but since the mid-2000s has been relatively dismal. Despite smartphones and social media and apps like Slack and Uber, digital technologies have done little to produce robust economic growth. A strong productivity boost never came.

Daron Acemoglu, an economist at MIT and a 2024 Nobel Prize winner, argues that the productivity gains from generative AI will be far smaller and take far longer than AI optimists think. The reason is that though the technology is impressive in many ways, the field is too narrowly focused on products that have little relevance to the largest business sectors.

The statistic you cite, that 95% of AI projects lack business benefits, is telling. 

Take manufacturing. No question, some version of AI could help; imagine a worker on the factory floor snapping a picture of a problem and asking an AI agent for advice. The problem is that the big tech companies creating AI aren’t really interested in solving such mundane tasks, and their large foundation models, mostly trained on the internet, aren’t all that helpful. 

It’s easy to blame the lack of productivity impact from AI so far on business practices and poorly trained workers. Your example of the executive of the Fortune 500 company sounds all too familiar. But it’s more useful to ask how AI can be trained and fine-tuned to give workers, like nurses and teachers and those on the factory floor, more capabilities and make them more productive at their jobs. 

The distinction matters. Some companies announcing large layoffs recently cited AI as the reason. The worry, however, is that it’s just a short-term cost-saving scheme. As economists like Brynjolfsson and Acemoglu agree, the productivity boost from AI will come when it’s used to create new types of jobs and augment the abilities of workers, not when it is used just to slash jobs to reduce costs. 

Richard Waters responds:

I see we’re both feeling pretty cautious, David, so I’ll try to end on a positive note. 

Some analyses assume that a much greater share of existing work is within the reach of today’s AI. McKinsey reckons 60% (versus 20% for Acemoglu) and puts annual productivity gains across the economy at as much as 3.4%. Also, calculations like these are based on automation of existing tasks; any new uses of AI that enhance existing jobs would, as you suggest, be a bonus (and not just in economic terms).

Cost-cutting always seems to be the first order of business with any new technology. But we’re still in the early stages and AI is moving fast, so we can always hope.

Further reading

FT chief economics commentator Martin Wolf has been skeptical about whether tech investment boosts productivity but says AI might prove him wrong. The downside: Job losses and wealth concentration might lead to “techno-feudalism.”

The FT’s Robert Armstrong argues that the boom in data center investment need not turn to bust. The biggest risk is that debt financing will come to play too big a role in the buildout.

Last year, David Rotman wrote for MIT Technology Review about how we can make sure AI works for us in boosting productivity, and what course corrections will be required.

David also wrote this piece about how we can best measure the impact of basic R&D funding on economic growth, and why it can often be bigger than you might think.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter 

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, Director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (in sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.


There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new controls are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and really an argument against war itself? 

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in attitudes of AI companies regarding military applications of their products. In the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield. 

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who tested a type of AI while patrolling the South Pacific that was advertised to analyze foreign intelligence faster than a human could. 

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. And few have deeper pockets than the Pentagon. And Europe’s defense heads seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups. 

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield. 

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare.

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.

Technology and Trust: The Clever Design Behind Blockchain Systems (Part 1)

Photo by Rapha Wilde on Unsplash

Blockchain technology is often described as one of the foundational layers of Web3, yet public discussions about it tend to swing between enthusiasm and skepticism. Lost between these extremes is a simple truth: blockchain is not a mysterious invention, but rather a clever combination of existing ideas in distributed computing, cryptography, and game theory.

At its core, a blockchain is a method for many independent computers to maintain a shared record — a ledger — without needing to trust a central authority. It emerged from decades of research into secure digital money and fault-tolerant networks, culminating in the publication of the Bitcoin whitepaper in 2008 by the pseudonymous Satoshi Nakamoto.

The result was a fully decentralized system where peers could exchange value directly and verify transactions collectively. Since then, the concept has expanded into a broader family of technologies that power digital currencies, decentralized applications, and what’s now loosely known as Web3.

In this first part, we’ll focus on Bitcoin and how it all began. We’ll look into the peer-to-peer model, the structure of a blockchain, and the consensus mechanism that made it all work.

Note: Illustrations in this post use the dollar sign ($) as a generic symbol for monetary value. It is not meant to represent any specific currency or coin.

P2P Networks and the Double-Spending Problem

A peer-to-peer (P2P) network is one where participants, called nodes, connect directly to each other without intermediaries. Each node acts both as a client and a server, capable of sending, receiving, and validating information.

A simple network of peers

Bitcoin didn’t emerge from a vacuum. Earlier attempts at digital money relying on P2P networks had a fundamental problem. Unlike physical cash, digital information can be copied perfectly. In a simple P2P network, there’s no inherent mechanism to ensure that a unit of digital currency hasn’t been duplicated and spent twice.

Secure transactions of value in a p2p system are not easy to achieve

This is the double-spending problem: a malicious participant could broadcast two conflicting transactions, each appearing valid to different nodes. Without a trusted authority to arbitrate, the network cannot reliably determine which transaction is legitimate.

Bitcoin’s breakthrough was to combine a peer-to-peer network with cryptographic proofs, economic incentives, and a structured ledger — all in a scalable manner. By linking transactions into blocks and chaining them cryptographically, Bitcoin allowed all participants to agree on a single transaction history.

The protocol was designed to support a global network of participants, without relying on a central authority or a limited set of elected nodes, making it fundamentally different from other attempts relying on existing consensus systems. We’ll look deeper into this topic in another section.

Cryptography for Digital Trust

Cryptography is a fundamental principle woven throughout every aspect of blockchain protocols. It’s what enables secure, peer-to-peer transactions and streamlines how data is validated across a distributed network by:

  • Proving Ownership — When you make a transaction, you sign it with your private key. This digital signature proves you own the assets you’re transferring. Anyone can verify it’s authentic using your public key.
  • Protecting Data Integrity — Cryptographic hashing creates a unique digital fingerprint for each block of transactions. These fingerprints link blocks together, making any tampering immediately detectable across the network.
  • Streamlining Validation — Instead of a central authority checking every transaction, cryptography allows the entire network to quickly verify transactions are legitimate. This distributed validation is what makes blockchain both secure and efficient.
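To make the “digital fingerprint” idea concrete, here is a minimal Python sketch (illustrative only, not part of any real protocol): a SHA-256 hash changes completely when even one character of the input changes.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest, the kind of fingerprint used for blocks."""
    return hashlib.sha256(data).hexdigest()

print(fingerprint(b"Alice pays Bob 5"))  # one digest...
print(fingerprint(b"Alice pays Bob 6"))  # ...and a completely different one
```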

Note that Bitcoin didn’t invent any of these cryptographic techniques. Digital signatures, hashing, and distributed systems all existed before. Its innovation was combining them in a way that finally made decentralized economic coordination practical and secure at scale.

Blockchain in a Nutshell

A blockchain is essentially a chronologically ordered chain of blocks, where each block contains a set of transactions that have been validated by the network.

Blocks are chained together by miners

Each block is composed of two main parts:

  1. Block Header: metadata describing the block’s identity and position in the chain.
  2. Block Body: organizes confirmed transactions in a Merkle tree, ensuring data integrity and verifiability.

The header includes the hash of the previous block’s header, creating a direct link to the past. This structure ensures that if even a single bit of information changes in any earlier block, the entire chain of hashes would no longer match, immediately revealing tampering.

Blocks are hard to make but easy to check

This cryptographic linking is what gives blockchain its immutability. Once data is recorded and agreed upon by the network, altering it would require overturning the consensus of all subsequent blocks. Whether through computational effort (Proof of Work), economic stake (Proof of Stake), or another deterrent, rewriting history becomes extremely challenging or even practically infeasible.
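To illustrate the chaining, here is a toy model in Python (assuming JSON serialization and a single SHA-256 pass, rather than Bitcoin’s real binary format and double hashing). Editing one block breaks every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON serialization of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

def chain_is_valid(chain: list) -> bool:
    # Every block must reference the hash of the block right before it.
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

genesis = make_block("0" * 64, ["coinbase -> miner"])
b1 = make_block(block_hash(genesis), ["alice -> bob"])
b2 = make_block(block_hash(b1), ["bob -> carol"])
chain = [genesis, b1, b2]

print(chain_is_valid(chain))               # True
b1["transactions"] = ["alice -> mallory"]  # rewrite history in the middle
print(chain_is_valid(chain))               # False: b2 no longer matches b1's new hash
```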

The Structure of a Bitcoin Block

Let’s look more closely at how Bitcoin structures its blocks.

The header contains metadata and the body contains the validated transactions

Each block contains:

  • Version: identifies which set of protocol rules was used to create it.
  • Previous Block Hash: points to the block before it, maintaining continuity.
  • Merkle Root: a single hash that summarizes all transactions in the block through a binary hash tree, allowing quick verification.
  • Timestamp: the approximate creation time.
  • Difficulty Target: defines how hard it is to produce a valid block.
  • Nonce: a number that miners vary during the Proof-of-Work computation.
  • Body: holds the validated transactions. Each transaction represents a transfer of value between addresses and references previous transactions to prove that the funds being spent are legitimate.

The first transaction in every block is a special one called the coinbase transaction. It’s created by the miner and grants them the block reward: newly minted bitcoins plus the transaction fees from all other transactions in the block.

Together, these elements create a verifiable snapshot of network activity at a given time. The structure also allows any participant to independently validate a block without needing to trust the source.
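As a simplified sketch of those header fields, including a toy Merkle-root computation: real Bitcoin serializes headers into a compact 80-byte binary form and encodes the difficulty target differently, so treat the types and values below as illustrative assumptions only.

```python
import hashlib
from dataclasses import dataclass

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes twice with SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_merkle_root(txids: list[bytes]) -> bytes:
    """Pair hashes level by level until a single root summarizes all transactions."""
    level = txids
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate the last hash when the count is odd
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

@dataclass
class BlockHeader:
    version: int
    prev_block_hash: bytes
    merkle_root: bytes
    timestamp: int
    difficulty_target: int  # simplified here to a plain integer threshold
    nonce: int

txids = [sha256d(tx.encode()) for tx in ["coinbase", "alice -> bob", "bob -> carol"]]
header = BlockHeader(
    version=1,
    prev_block_hash=b"\x00" * 32,
    merkle_root=compute_merkle_root(txids),
    timestamp=1_700_000_000,
    difficulty_target=2**240,
    nonce=0,
)
print(header.merkle_root.hex())
```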

Why Not Just Use a Centralized Database?

It’s natural to ask: why go through all this complexity when we already have fast and secure databases?

The difference lies in governance and verification. In a centralized database, the operator controls access, can edit records, and can, intentionally or not, alter history. Users must trust the operator’s integrity.

The Blockchain Trilemma

In a blockchain protocol like Bitcoin, no single entity has that power. The rules for how data is added are enforced by code, not by policy or discretion. Every participant keeps their own copy of the ledger and verifies updates independently.

This independence makes blockchain especially suited for systems where participants don’t fully trust each other — global money transfers, supply chain audits, digital identity, or governance models where transparency is non-negotiable.

The Nakamoto Consensus

The real breakthrough introduced by Bitcoin was not the blockchain structure itself. Similar ideas existed in academic literature. The key was the proposed mechanism for agreeing on the next valid block in an open, untrusted network.

That mechanism is commonly referred to as “Nakamoto consensus”.

Miners are in constant competition, each trying to add the next valid block to the blockchain

At any given moment, many nodes are competing to propose the next block. Each must follow the same protocol rules and prove they’ve invested computational effort to do so. Once a valid block is found, it’s broadcast to the network.

Other nodes verify the block, and if it checks out, they add it to their copy of the ledger and start competing to build on top of it.

This process ensures that the entire network eventually converges on a single, agreed-upon chain of events — without anyone in charge.

Proof of Work Explained

The consensus mechanism used in Bitcoin is secured by Proof of Work (PoW).

To create a new block, a node (called a miner) must solve a hard cryptographic puzzle: find a nonce that produces a block hash below the current difficulty target. This process deserves a much more detailed explanation, but in essence it is a trial-and-error computation that consumes energy.
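As a rough sketch of that trial and error (assuming a toy text header and a single SHA-256 pass, whereas real mining double-hashes an 80-byte binary header at enormous rates), the search loop looks something like this:

```python
import hashlib

def mine(header_without_nonce: bytes, target: int) -> tuple[int, str]:
    """Try nonces until the block hash, read as an integer, falls below the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(
            header_without_nonce + nonce.to_bytes(8, "little")
        ).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# A lower target means more leading zero bits are required, i.e. more work.
easy_target = 2 ** 244
nonce, digest = mine(b"toy-block-header", easy_target)
print(nonce, digest)
```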

This proof-of-work requirement has three critical effects:

  1. It introduces a predictable time interval between blocks.
  2. It prevents any single participant from dominating the process cheaply.
  3. It makes tampering retroactively impractical.

Bitcoin remains secure as long as honest nodes control the majority of the network

If an attacker tried to modify a previously confirmed block, they’d need to redo the proof of work for that block and all subsequent ones — faster than the rest of the network can add new blocks. For a large network like Bitcoin, this would require enormous computational resources.

The longest chain always wins

This principle is reinforced by the longest chain rule, a simple yet powerful idea that ensures all participants converge on a single version of history. In Bitcoin, the valid chain is the one that represents the most cumulative work. As new blocks are added, honest nodes extend this chain, making it increasingly difficult for any alternative version to catch up or replace it.
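A hedged illustration of that fork-choice idea: nodes do not literally count blocks, they compare cumulative work, which for a constant per-block difficulty amounts to the same thing.

```python
def cumulative_work(chain: list[int]) -> int:
    """Each entry stands for one block's difficulty; total work decides the winner."""
    return sum(chain)

chain_a = [100, 100, 100]         # three blocks at difficulty 100
chain_b = [100, 100, 100, 100]    # a longer fork carrying more accumulated work

best = max([chain_a, chain_b], key=cumulative_work)
print(best is chain_b)  # True: honest nodes adopt the chain representing the most work
```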

The Economics of Consensus

Mining requires resources, so why do participants take part? Because the system rewards them.

When a miner successfully creates a block, they earn a reward composed of two parts:

  • Newly minted bitcoins (the block subsidy).
  • Transaction fees paid by users.

Native currencies are an essential part of blockchain economies

This incentive system keeps the network alive and aligned: those who secure it are compensated, and those who use it pay small fees to ensure fair resource usage.

As a result, Bitcoin functions as a self-sustaining economic network, where energy and computation are exchanged for digital value, and no central operator manages participation.
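As a side note, the newly minted portion of the reward follows a fixed halving schedule: 50 bitcoins at launch, halved every 210,000 blocks. The sketch below uses floating point for readability, whereas the real client counts integer satoshis.

```python
def block_subsidy(height: int) -> float:
    """Newly minted bitcoins for a block at the given height (fees come on top)."""
    halvings = height // 210_000
    if halvings >= 64:  # after 64 halvings the subsidy has dwindled to zero
        return 0.0
    return 50.0 / (2 ** halvings)

print(block_subsidy(0))        # 50.0   (2009)
print(block_subsidy(210_000))  # 25.0   (first halving, 2012)
print(block_subsidy(840_000))  # 3.125  (fourth halving, 2024)
```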

Keys and Wallets

To fully understand how blockchains operate, there’s one more critical element: how identity is handled. In most blockchain networks, identity doesn’t exist in the traditional sense. There are no usernames or logins — just cryptographic keys. Every participant in the network, whether a user or a node, interacts with the blockchain through their private key, which serves as both proof of ownership and the means to authorize transactions.

Derivation only works in one direction

To avoid going too deep into the details here:

  • The private key is a large, randomly generated secret number that gives control over funds.
  • The public key is mathematically derived from the private key using a one-way function, meaning this process cannot be reversed. And while anyone can easily verify that a transaction was signed by a valid private key, it’s computationally infeasible to deduce that private key from the public one.
  • The address is a shorter representation derived from the public key through another one-way transformation. It acts as a compact and unique identifier for an account on the network. In other words, every address ultimately traces back to a private key, but the path only goes one way.

Transactions are authorized by signing them with the private key, and anyone can verify their validity using the public key.
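Here is a minimal sketch of that one-way flow, assuming the third-party `ecdsa` package is installed. The “address” below is just a SHA-256 of the public key, a stand-in for Bitcoin’s actual SHA-256 plus RIPEMD-160 hashing and Base58Check/Bech32 encoding.

```python
import hashlib
import ecdsa  # third-party package, e.g. `pip install ecdsa`

# Private key: a large random secret number (the library generates it for us).
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)

# Public key: derived from the private key by one-way elliptic-curve math.
public_key = private_key.get_verifying_key()

# "Address": here just a SHA-256 of the public key, standing in for Bitcoin's
# real hashing and encoding scheme.
address = hashlib.sha256(public_key.to_string()).hexdigest()[:40]

# Sign with the private key; anyone holding the public key can verify.
transaction = b"send 0.1 to <some address>"
signature = private_key.sign(transaction)
print(public_key.verify(signature, transaction))  # True (raises if the signature is forged)
print(address)
```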

A wallet is simply a secure way to manage these keys. Hardware wallets isolate them from the internet for safety; software wallets offer flexibility at the cost of greater risk exposure.

The term wallet is used for many different key storage solutions

Through this simple cryptographic model, ownership and access are mathematically defined rather than administratively assigned.

If you want to know a bit more on this topic, check out this other post covering mnemonic phrases and private keys: From 12 Words to Infinite Wallets: How HD Wallets Power Crypto Security

The Foundation of Web3

Bitcoin established that a decentralized, tamper-resistant ledger could operate globally without intermediaries. It solved the double-spending problem and introduced an open economic system driven purely by code and incentives.

But Bitcoin’s design focuses on one goal: securely transferring digital currency.

The next evolution — Ethereum — expanded this foundation by turning the blockchain into a general-purpose computational platform.

In the second part, we’ll explore how Ethereum added programmability to the blockchain, enabling smart contracts, custom tokens, and the vast ecosystem of decentralized applications that followed.

Note: This explanation draws on many different online sources and concepts detailed in Andreas M. Antonopoulos’s book “Mastering Bitcoin”, a foundational reference for understanding the inner workings of the Bitcoin protocol. As with much of modern blockchain research, we stand on the shoulders of giants who made these ideas accessible and verifiable.


An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

Why AI should be able to “hang up” on you

Chatbots today are everything machines. If it can be put into words—relationship advice, work documents, code—AI will produce it, however imperfectly. But the one thing that almost no chatbot will ever do is stop talking to you. 

That might seem reasonable. Why should a tech company build a feature that reduces the time people spend using its product?  

The answer is simple: AI’s ability to generate endless streams of humanlike, authoritative, and helpful text can facilitate delusional spirals, worsen mental-health crises, and otherwise harm vulnerable people. Cutting off interactions with those who show signs of problematic chatbot use could serve as a powerful safety tool (among others), and the blanket refusal of tech companies to use it is increasingly untenable.

Let’s consider, for example, what’s been called AI psychosis, where AI models amplify delusional thinking. A team led by psychiatrists at King’s College London recently analyzed more than a dozen such cases reported this year. In conversations with chatbots, people—including some with no history of psychiatric issues—became convinced that imaginary AI characters were real or that they had been chosen by AI as a messiah. Some stopped taking prescribed medications, made threats, and ended consultations with mental-health professionals.

In many of these cases, it seems AI models were reinforcing, and potentially even creating, delusions with a frequency and intimacy that people do not experience in real life or through other digital platforms.

The three-quarters of US teens who have used AI for companionship also face risks. Early research suggests that longer conversations might correlate with loneliness. Further, AI chats “can tend toward overly agreeable or even sycophantic interactions, which can be at odds with best mental-health practices,” says Michael Heinz, an assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.

Let’s be clear: Putting a stop to such open-ended interactions would not be a cure-all. “If there is a dependency or extreme bond that it’s created,” says Giada Pistilli, chief ethicist at the AI platform Hugging Face, “then it can also be dangerous to just stop the conversation.” Indeed, when OpenAI discontinued an older model in August, it left users grieving. Some hang-ups might also push the boundaries of the principle, voiced by Sam Altman, to “treat adult users like adults” and err on the side of allowing rather than ending conversations.

Currently, AI companies prefer to redirect potentially harmful conversations, perhaps by having chatbots decline to talk about certain topics or suggest that people seek help. But these redirections are easily bypassed, if they even happen at all.

When 16-year-old Adam Raine discussed his suicidal thoughts with ChatGPT, for example, the model did direct him to crisis resources. But it also discouraged him from talking with his mom, spent upwards of four hours per day in conversations with him that featured suicide as a regular theme, and provided feedback about the noose he ultimately used to hang himself, according to the lawsuit Raine’s parents have filed against OpenAI. (ChatGPT recently added parental controls in response.)

There are multiple points in Raine’s tragic case where the chatbot could have terminated the conversation. But given the risks of making things worse, how will companies know when cutting someone off is best? Perhaps it’s when an AI model is encouraging a user to shun real-life relationships, Pistilli says, or when it detects delusional themes. Companies would also need to figure out how long to block users from their conversations.

Writing the rules won’t be easy, but with companies facing rising pressure, it’s time to try. In September, California’s legislature passed a law requiring more interventions by AI companies in chats with kids, and the Federal Trade Commission is investigating whether leading companionship bots pursue engagement at the expense of safety. 

A spokesperson for OpenAI told me the company has heard from experts that continued dialogue might be better than cutting off conversations, but that it does remind users to take breaks during long sessions. 

Only Anthropic has built a tool that lets its models end conversations completely. But it’s for cases where users supposedly “harm” the model—Anthropic has explored whether AI models are conscious and therefore can suffer—by sending abusive messages. The company does not have plans to deploy this to protect people.

Looking at this landscape, it’s hard not to conclude that AI companies aren’t doing enough. Sure, deciding when a conversation should end is complicated. But letting that—or, worse, the shameless pursuit of engagement at all costs—allow conversations to go on forever is not just negligence. It’s a choice.

The three big unanswered questions about Sora

Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a “cameo” of yourself—a hyperrealistic avatar that mimics your appearance and voice—and insert other peoples’ cameos into your own videos (depending on what permissions they set). 

To some people who believed earnestly in OpenAI’s promise to build AI that benefits all of humanity, the app is a punchline. A former OpenAI researcher who left to build an AI-for-science startup referred to Sora as an “infinite AI tiktok slop machine.” 

That hasn’t stopped it from soaring to the top spot on Apple’s US App Store. After I downloaded the app, I quickly learned what types of videos are, at least currently, performing well: bodycam-style footage of police pulling over pets or various trademarked characters, including SpongeBob and Scooby Doo; deepfake memes of Martin Luther King Jr. talking about Xbox; and endless variations of Jesus Christ navigating our modern world. 

Just as quickly, I had a bunch of questions about what’s coming next for Sora. Here’s what I’ve learned so far.

Can it last?

OpenAI is betting that a sizable number of people will want to spend time on an app in which you can suspend your concerns about whether what you’re looking at is fake and indulge in a stream of raw AI. One reviewer put it this way: “It’s comforting because you know that everything you’re scrolling through isn’t real, where other platforms you sometimes have to guess if it’s real or fake. Here, there is no guessing, it’s all AI, all the time.”

This may sound like hell to some. But judging by Sora’s popularity, lots of people want it. 

So what’s drawing these people in? There are two explanations. One is that Sora is a flash-in-the-pan gimmick, with people lining up to gawk at what cutting-edge AI can create now (in my experience, this is interesting for about five minutes). The second, which OpenAI is betting on, is that we’re witnessing a genuine shift in what type of content can draw eyeballs, and that users will stay with Sora because it allows a level of fantastical creativity not possible in any other app. 

There are a few decisions down the pike that may shape how many people stick around: how OpenAI decides to implement ads, what limits it sets for copyrighted content (see below), and what algorithms it cooks up to decide who sees what. 

Can OpenAI afford it?

OpenAI is not profitable, but that’s not particularly strange given how Silicon Valley operates. What is peculiar, though, is that the company is investing in a platform for generating video, which is the most energy-intensive (and therefore expensive) form of AI we have. The energy it takes dwarfs the amount required to create images or answer text questions via ChatGPT.

This isn’t news to OpenAI, which has joined a half-trillion-dollar project to build data centers and new power plants. But Sora—which currently allows you to generate AI videos, for free, without limits—raises the stakes: How much will it cost the company? 

OpenAI is making moves toward monetizing things (you can now buy products directly through ChatGPT, for example). On October 3, its CEO, Sam Altman, wrote in a blog post that “we are going to have to somehow make money for video generation,” but he didn’t get into specifics. One can imagine personalized ads and more in-app purchases. 

Still, it’s concerning to imagine the mountain of emissions that might result if Sora becomes popular. Altman has accurately described the emissions burden of one query to ChatGPT as impossibly small. What he has not quantified is what that figure is for a 10-second video generated by Sora. It’s only a matter of time until AI and climate researchers start demanding it. 

How many lawsuits are coming? 

Sora is awash in copyrighted and trademarked characters. It allows you to easily deepfake deceased celebrities. Its videos use copyrighted music. 

Last week, the Wall Street Journal reported that OpenAI has sent letters to copyright holders notifying them that they’ll have to opt out of the Sora platform if they don’t want their material included, which is not how these things usually work. The law on how AI companies should handle copyrighted material is far from settled, and it’d be reasonable to expect lawsuits challenging this. 

In last week’s blog post, Altman wrote that OpenAI is “hearing from a lot of rightsholders” who want more control over how their characters are used in Sora. He says that the company plans to give those parties more “granular control” over their characters. Still, “there may be some edge cases of generations that get through that shouldn’t,” he wrote.

But another issue is the ease with which you can use the cameos of real people. People can restrict who can use their cameo, but what limits will there be for what these cameos can be made to do in Sora videos? 

This is apparently already an issue OpenAI is being forced to respond to. The head of Sora, Bill Peebles, posted on October 5 that users can now restrict how their cameo can be used—preventing it from appearing in political videos or saying certain words, for example. How well will this work? Is it only a matter of time until someone’s cameo is used for something nefarious, explicit, illegal, or at least creepy, sparking a lawsuit alleging that OpenAI is responsible? 

Overall, we haven’t seen what full-scale Sora looks like yet (OpenAI is still doling out access to the app via invite codes). When we do, I think it will serve as a grim test: Can AI create videos so fine-tuned for endless engagement that they’ll outcompete “real” videos for our attention? In the end, Sora isn’t just testing OpenAI’s technology—it’s testing us, and how much of our reality we’re willing to trade for an infinite scroll of simulation.

The US may be heading toward a drone-filled future

On Thursday, I published a story about the police-tech giant Flock Safety selling its drones to the private sector to track shoplifters. Keith Kauffman, a former police chief who now leads Flock’s drone efforts, described the ideal scenario: A security team at a Home Depot, say, launches a drone from the roof that follows shoplifting suspects to their car. The drone tracks their car through the streets, transmitting its live video feed directly to the police. 

It’s a vision that, unsurprisingly, alarms civil liberties advocates. They say it will expand the surveillance state created by police drones, license-plate readers, and other crime tech, which has allowed law enforcement to collect massive amounts of private data without warrants. Flock is in the middle of a federal lawsuit in Norfolk, Virginia, that alleges just that. Read the full story to learn more.

But the peculiar thing about the world of drones is that its fate in the US—whether the skies above your home in the coming years will be quiet, or abuzz with drones dropping off pizzas, inspecting potholes, or chasing shoplifting suspects—pretty much comes down to one rule. It’s a Federal Aviation Administration (FAA) regulation that stipulates where and how drones can be flown, and it is about to change.

Currently, you need a waiver from the FAA to fly a drone farther than you can see it. This is meant to protect the public and property from in-air collisions and accidents. In 2018, the FAA began granting these waivers for various scenarios, like search-and-rescue operations, insurance inspections, or police investigations. With Flock’s help, police departments can get waivers approved in just two weeks. The company’s private-sector customers generally have to wait 60 to 90 days.

For years, industries with a stake in drones—whether e-commerce companies promising doorstep delivery or medical transporters racing to move organs—have pushed the government to scrap the waiver system in favor of easier approval to fly beyond visual line of sight. In June, President Donald Trump echoed that call in an executive order for “American drone dominance,” and in August, the FAA released a new proposed rule.

The proposed rule lays out some broad categories for which drone operators are permitted to fly drones beyond their line of sight, including package delivery, agriculture, aerial surveying, and civic interest, which includes policing. Getting approval to fly beyond sight would become easier for operators from these categories, and would generally expand their range. 

Drone companies, and amateur drone pilots, see it as a win. But it’s a win that comes at the expense of privacy for the rest of us, says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy and Technology Project who served on the rule-making commission for the FAA.

“The FAA is about to open up the skies enormously, to a lot more [beyond visual line of sight] flights without any privacy protections,” he says. The ACLU has said that fleets of drones enable persistent surveillance, including of protests and gatherings, and impinge on the public’s expectations of privacy.

If you’ve got something to say about the FAA’s proposed rule, you can leave a public comment (comments are being accepted until October 6). Trump’s executive order directs the FAA to release the final rule by spring 2026.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
