An AI model trained on prison phone calls now looks for planned crimes in those calls

1 December 2025 at 05:30

A US telecom company trained an AI model on years of inmates’ phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes. 

Securus Technologies president Kevin Elder told MIT Technology Review that the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, but it has been working on building other state- or county-specific models.

Over the past year, Elder says, Securus has been piloting the AI tools to monitor inmate conversations in real time. The company declined to specify where this is taking place, but its customers include jails holding people awaiting trial and prisons for those serving sentences. Some of the facilities using Securus technology also have agreements with Immigration and Customs Enforcement to detain immigrants, though Securus does not contract with ICE directly.

“We can point that large language model at an entire treasure trove [of data],” Elder says, “to detect and understand when crimes are being thought about or contemplated, so that you’re catching it much earlier in the cycle.”

As with its other monitoring tools, investigators at detention facilities can deploy the AI features to monitor randomly selected conversations or those of individuals suspected by facility investigators of criminal activity, according to Elder. The model will analyze phone and video calls, text messages, and emails and then flag sections for human agents to review. These agents then send them to investigators for follow-up. 

In an interview, Elder said Securus’ monitoring efforts have helped disrupt human trafficking and gang activities organized from within prisons, among other crimes, and said its tools are also used to identify prison staff who are bringing in contraband. But the company did not provide MIT Technology Review with any cases specifically uncovered by its new AI models. 

People in prison, and those they call, are notified that their conversations are recorded. But this doesn’t mean they’re aware that those conversations could be used to train an AI model, says Bianca Tylek, executive director of the prison rights advocacy group Worth Rises. 

“That’s coercive consent; there’s literally no other way you can communicate with your family,” Tylek says. And since inmates in the vast majority of states pay for these calls, she adds, “not only are you not compensating them for the use of their data, but you’re actually charging them while collecting their data.”

A Securus spokesperson said the use of data to train the tool “is not focused on surveilling or targeting specific individuals, but rather on identifying broader patterns, anomalies, and unlawful behaviors across the entire communication system.” They added that correctional facilities determine their own recording and monitoring policies, which Securus follows, and did not directly answer whether inmates can opt out of having their recordings used to train AI.

Other advocates for inmates say Securus has a history of violating their civil liberties. For example, leaked databases of its recordings showed the company had improperly recorded thousands of calls between inmates and their attorneys. Corene Kendrick, the deputy director of the ACLU’s National Prison Project, says that the new AI system enables invasive surveillance, and courts have specified few limits to this power.

“[Are we] going to stop crime before it happens because we’re monitoring every utterance and thought of incarcerated people?” Kendrick says. “I think this is one of many situations where the technology is way far ahead of the law.”

The company spokesperson said the tool’s function is to make monitoring more efficient amid staffing shortages, “not to surveil individuals without cause.”

Securus will have an easier time funding its AI tool thanks to the company’s recent win in a battle with regulators over how telecom companies can spend the money they collect from inmates’ calls.

In 2024, the Federal Communications Commission issued a major reform, shaped and lauded by advocates for prisoners’ rights, that forbade telecoms from passing the costs of recording and surveilling calls on to inmates. Companies were allowed to continue to charge inmates a capped rate for calls, but prisons and jails were ordered to pay for most security costs out of their own budgets.

Negative reactions to this change were swift. Associations of sheriffs (who typically run county jails) complained they could no longer afford proper monitoring of calls, and attorneys general from 14 states sued over the ruling. Some prisons and jails warned they would cut off access to phone calls. 

While it was building and piloting its AI tool, Securus held meetings with the FCC and lobbied for a rule change, arguing that the 2024 reform went too far and asking that the agency again allow companies to use fees collected from inmates to pay for security. 

In June, Brendan Carr, whom President Donald Trump appointed to lead the FCC, said it would postpone all deadlines for jails and prisons to adopt the 2024 reforms, and even signaled that the agency wants to help telecom companies fund their AI surveillance efforts with the fees paid by inmates. In a press release, Carr wrote that rolling back the 2024 reforms would “lead to broader adoption of beneficial public safety tools that include advanced AI and machine learning.”

On October 28, the agency went further: It voted to pass new, higher rate caps and to allow companies like Securus to pass security costs related to recording and monitoring calls—such as storing recordings, transcribing them, or building AI tools to analyze them—on to inmates. A spokesperson for Securus told MIT Technology Review that the company aims to balance affordability with the need to fund essential safety and security tools. “These tools, which include our advanced monitoring and AI capabilities, are fundamental to maintaining secure facilities for incarcerated individuals and correctional staff and to protecting the public,” they wrote.

FCC commissioner Anna Gomez dissented in last month’s ruling. “Law enforcement,” she wrote in a statement, “should foot the bill for unrelated security and safety costs, not the families of incarcerated people.”

The FCC will be seeking comment on these new rules before they take final effect. 

This story was updated on December 2 to clarify that Securus does not contract with ICE facilities.

The State of AI: How war will be changed forever

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.

In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter 

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare.

Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age. One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.

Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza. 

Helen Warrell and James O'Donnell
FT/MIT TECHNOLOGY REVIEW | ADOBE STOCK

There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new rules are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about the use of AI in war may come from those who are unfamiliar with brutal but realistic military norms. What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and more an argument against war itself? 

James O’Donnell replies:

Hi Helen, 

One thing I’ve noticed is that there’s been a drastic shift in the attitudes of AI companies toward military applications of their products. At the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield. 

This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. 

What happened along the way? For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, less prone to human fallibility. I spoke with US Marines, for example, who, while patrolling the South Pacific, tested a type of AI advertised to analyze foreign intelligence faster than a human could. 

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models, and few have deeper pockets than the Pentagon. Europe’s defense heads seem keen to splash the cash too. Meanwhile, venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups. 

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say it reduced carnage? Instead, did it merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations. Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings.

The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person?

Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. 

Helen responds:

Hi James, 

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield. 

There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.

Further reading:

Michael C. Horowitz, director of Perry World House at the University of Pennsylvania, explains the need for responsibility in the development of military AI systems in this FT op-ed.

The FT’s tech podcast asks what Israel’s defense tech ecosystem can tell us about the future of warfare 

This MIT Technology Review story analyzes how OpenAI completed its pivot to allowing its technology on the battlefield.

MIT Technology Review also uncovered how US soldiers are using generative AI to help scour thousands of pieces of open-source intelligence.

An AI adoption riddle

28 October 2025 at 05:00

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

Why AI should be able to “hang up” on you

21 October 2025 at 05:00

Chatbots today are everything machines. If it can be put into words—relationship advice, work documents, code—AI will produce it, however imperfectly. But the one thing that almost no chatbot will ever do is stop talking to you. 

That might seem reasonable. Why should a tech company build a feature that reduces the time people spend using its product?  

The answer is simple: AI’s ability to generate endless streams of humanlike, authoritative, and helpful text can facilitate delusional spirals, worsen mental-health crises, and otherwise harm vulnerable people. Cutting off interactions with those who show signs of problematic chatbot use could serve as a powerful safety tool (among others), and the blanket refusal of tech companies to use it is increasingly untenable.

Let’s consider, for example, what’s been called AI psychosis, where AI models amplify delusional thinking. A team led by psychiatrists at King’s College London recently analyzed more than a dozen such cases reported this year. In conversations with chatbots, people—including some with no history of psychiatric issues—became convinced that imaginary AI characters were real or that they had been chosen by AI as a messiah. Some stopped taking prescribed medications, made threats, and ended consultations with mental-health professionals.

In many of these cases, it seems AI models were reinforcing, and potentially even creating, delusions with a frequency and intimacy that people do not experience in real life or through other digital platforms.

The three-quarters of US teens who have used AI for companionship also face risks. Early research suggests that longer conversations might correlate with loneliness. Further, AI chats “can tend toward overly agreeable or even sycophantic interactions, which can be at odds with best mental-health practices,” says Michael Heinz, an assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.

Let’s be clear: Putting a stop to such open-ended interactions would not be a cure-all. “If there is a dependency or extreme bond that it’s created,” says Giada Pistilli, chief ethicist at the AI platform Hugging Face, “then it can also be dangerous to just stop the conversation.” Indeed, when OpenAI discontinued an older model in August, it left users grieving. Some hang-ups might also push the boundaries of the principle, voiced by Sam Altman, to “treat adult users like adults” and err on the side of allowing rather than ending conversations.

Currently, AI companies prefer to redirect potentially harmful conversations, perhaps by having chatbots decline to talk about certain topics or suggest that people seek help. But these redirections are easily bypassed, if they even happen at all.

When 16-year-old Adam Raine discussed his suicidal thoughts with ChatGPT, for example, the model did direct him to crisis resources. But it also discouraged him from talking with his mom, spent upwards of four hours per day in conversations with him that featured suicide as a regular theme, and provided feedback about the noose he ultimately used to hang himself, according to the lawsuit Raine’s parents have filed against OpenAI. (ChatGPT recently added parental controls in response.)

There are multiple points in Raine’s tragic case where the chatbot could have terminated the conversation. But given the risks of making things worse, how will companies know when cutting someone off is best? Perhaps it’s when an AI model is encouraging a user to shun real-life relationships, Pistilli says, or when it detects delusional themes. Companies would also need to figure out how long to block users from their conversations.

Writing the rules won’t be easy, but with companies facing rising pressure, it’s time to try. In September, California’s legislature passed a law requiring more interventions by AI companies in chats with kids, and the Federal Trade Commission is investigating whether leading companionship bots pursue engagement at the expense of safety. 

A spokesperson for OpenAI told me the company has heard from experts that continued dialogue might be better than cutting off conversations, but that it does remind users to take breaks during long sessions. 

Only Anthropic has built a tool that lets its models end conversations completely. But it’s for cases where users supposedly “harm” the model—Anthropic has explored whether AI models are conscious and therefore can suffer—by sending abusive messages. The company does not have plans to deploy this to protect people.

Looking at this landscape, it’s hard not to conclude that AI companies aren’t doing enough. Sure, deciding when a conversation should end is complicated. But letting that—or, worse, the shameless pursuit of engagement at all costs—allow them to go on forever is not just negligence. It’s a choice.

The three big unanswered questions about Sora

7 October 2025 at 05:00

Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a “cameo” of yourself—a hyperrealistic avatar that mimics your appearance and voice—and insert other people’s cameos into your own videos (depending on what permissions they set). 

To some people who believed earnestly in OpenAI’s promise to build AI that benefits all of humanity, the app is a punchline. A former OpenAI researcher who left to build an AI-for-science startup referred to Sora as an “infinite AI tiktok slop machine.” 

That hasn’t stopped it from soaring to the top spot on Apple’s US App Store. After I downloaded the app, I quickly learned what types of videos are, at least currently, performing well: bodycam-style footage of police pulling over pets or various trademarked characters, including SpongeBob and Scooby-Doo; deepfake memes of Martin Luther King Jr. talking about Xbox; and endless variations of Jesus Christ navigating our modern world. 

Just as quickly, I had a bunch of questions about what’s coming next for Sora. Here’s what I’ve learned so far.

Can it last?

OpenAI is betting that a sizable number of people will want to spend time on an app in which you can suspend your concerns about whether what you’re looking at is fake and indulge in a stream of raw AI. One reviewer put it this way: “It’s comforting because you know that everything you’re scrolling through isn’t real, where other platforms you sometimes have to guess if it’s real or fake. Here, there is no guessing, it’s all AI, all the time.”

This may sound like hell to some. But judging by Sora’s popularity, lots of people want it. 

So what’s drawing these people in? There are two explanations. One is that Sora is a flash-in-the-pan gimmick, with people lining up to gawk at what cutting-edge AI can create now (in my experience, this is interesting for about five minutes). The second, which OpenAI is betting on, is that we’re witnessing a genuine shift in what type of content can draw eyeballs, and that users will stay with Sora because it allows a level of fantastical creativity not possible in any other app. 

There are a few decisions down the pike that may shape how many people stick around: how OpenAI decides to implement ads, what limits it sets for copyrighted content (see below), and what algorithms it cooks up to decide who sees what. 

Can OpenAI afford it?

OpenAI is not profitable, but that’s not particularly strange given how Silicon Valley operates. What is peculiar, though, is that the company is investing in a platform for generating video, which is the most energy-intensive (and therefore expensive) form of AI we have. The energy it takes dwarfs the amount required to create images or answer text questions via ChatGPT.

This isn’t news to OpenAI, which has joined a half-trillion-dollar project to build data centers and new power plants. But Sora—which currently allows you to generate AI videos, for free, without limits—raises the stakes: How much will it cost the company? 

OpenAI is making moves toward monetizing things (you can now buy products directly through ChatGPT, for example). On October 3, its CEO, Sam Altman, wrote in a blog post that “we are going to have to somehow make money for video generation,” but he didn’t get into specifics. One can imagine personalized ads and more in-app purchases. 

Still, it’s concerning to imagine the mountain of emissions that might result if Sora becomes popular. Altman has described the emissions burden of one query to ChatGPT as vanishingly small. What he has not quantified is what that figure is for a 10-second video generated by Sora. It’s only a matter of time until AI and climate researchers start demanding it. 

How many lawsuits are coming? 

Sora is awash in copyrighted and trademarked characters. It allows you to easily deepfake deceased celebrities. Its videos use copyrighted music. 

Last week, the Wall Street Journal reported that OpenAI has sent letters to copyright holders notifying them that they’ll have to opt out of the Sora platform if they don’t want their material included, which is not how these things usually work. The law on how AI companies should handle copyrighted material is far from settled, and it’d be reasonable to expect lawsuits challenging this. 

In last week’s blog post, Altman wrote that OpenAI is “hearing from a lot of rightsholders” who want more control over how their characters are used in Sora. He says that the company plans to give those parties more “granular control” over their characters. Still, “there may be some edge cases of generations that get through that shouldn’t,” he wrote.

But another issue is the ease with which you can use the cameos of real people. People can restrict who can use their cameo, but what limits will there be for what these cameos can be made to do in Sora videos? 

This is apparently already an issue OpenAI is being forced to respond to. The head of Sora, Bill Peebles, posted on October 5 that users can now restrict how their cameo can be used—preventing it from appearing in political videos or saying certain words, for example. How well will this work? Is it only a matter of time until someone’s cameo is used for something nefarious, explicit, illegal, or at least creepy, sparking a lawsuit alleging that OpenAI is responsible? 

Overall, we haven’t seen what full-scale Sora looks like yet (OpenAI is still doling out access to the app via invite codes). When we do, I think it will serve as a grim test: Can AI create videos so fine-tuned for endless engagement that they’ll outcompete “real” videos for our attention? In the end, Sora isn’t just testing OpenAI’s technology—it’s testing us, and how much of our reality we’re willing to trade for an infinite scroll of simulation.

The US may be heading toward a drone-filled future

30 September 2025 at 05:00

On Thursday, I published a story about the police-tech giant Flock Safety selling its drones to the private sector to track shoplifters. Keith Kauffman, a former police chief who now leads Flock’s drone efforts, described the ideal scenario: A security team at a Home Depot, say, launches a drone from the roof that follows shoplifting suspects to their car. The drone tracks their car through the streets, transmitting its live video feed directly to the police. 

It’s a vision that, unsurprisingly, alarms civil liberties advocates. They say it will expand the surveillance state created by police drones, license-plate readers, and other crime tech, which has allowed law enforcement to collect massive amounts of private data without warrants. Flock is in the middle of a federal lawsuit in Norfolk, Virginia, that alleges just that. Read the full story to learn more.

But the peculiar thing about the world of drones is that its fate in the US—whether the skies above your home in the coming years will be quiet, or abuzz with drones dropping off pizzas, inspecting potholes, or chasing shoplifting suspects—pretty much comes down to one rule. It’s a Federal Aviation Administration (FAA) regulation that stipulates where and how drones can be flown, and it is about to change.

Currently, you need a waiver from the FAA to fly a drone farther than you can see it. This is meant to protect the public and property from in-air collisions and accidents. In 2018, the FAA began granting these waivers for various scenarios, like search and rescue operations, insurance inspections, or police investigations. With Flock’s help, police departments can get waivers approved in just two weeks. The company’s private-sector customers generally have to wait 60 to 90 days.

For years, industries with a stake in drones—whether e-commerce companies promising doorstep delivery or medical transporters racing to move organs—have pushed the government to scrap the waiver system in favor of easier approval to fly beyond visual line of sight. In June, President Donald Trump echoed that call in an executive order for “American drone dominance,” and in August, the FAA released a new proposed rule.

The proposed rule lays out some broad categories for which drone operators are permitted to fly drones beyond their line of sight, including package delivery, agriculture, aerial surveying, and civic interest, which includes policing. Getting approval to fly beyond sight would become easier for operators from these categories, and would generally expand their range. 

Drone companies, and amateur drone pilots, see it as a win. But it’s a win that comes at the expense of privacy for the rest of us, says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy and Technology Project who served on the rule-making commission for the FAA.

“The FAA is about to open up the skies enormously, to a lot more [beyond visual line of sight] flights without any privacy protections,” he says. The ACLU has said that fleets of drones enable persistent surveillance, including of protests and gatherings, and impinge on the public’s expectations of privacy.

If you’ve got something to say about the FAA’s proposed rule, you can leave a public comment (comments are being accepted until October 6). Trump’s executive order directs the FAA to release the final rule by spring 2026.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

US investigators are using AI to detect child abuse images made by AI

26 September 2025 at 15:03

Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing.

The Department of Homeland Security’s Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco–based Hive AI for its software, which can identify whether a piece of content was AI-generated.

The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo told MIT Technology Review that he could not discuss the details of the contract, but he confirmed that it involves use of the company’s AI detection algorithms for child sexual abuse material (CSAM).

The filing cites data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024. “The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently,” the filing reads.

The first priority of child exploitation investigators is to find and stop any abuse currently happening, but the flood of AI-generated CSAM has made it difficult for investigators to know whether images depict a real victim currently at risk. A tool that could reliably flag real victims would be a massive help as investigators prioritize cases.

Identifying AI-generated images “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals,” the filing reads.
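As a toy sketch of that triage idea (this is not Hive’s or the Cyber Crimes Center’s actual pipeline; the case IDs and scores below are invented), a queue could simply surface the items a detector considers least likely to be AI-generated first:

```python
# Hypothetical triage sketch: 'ai_score' is a detector's estimated
# probability that an image is AI-generated. All values are invented.
cases = [
    {"case_id": "A", "ai_score": 0.97},
    {"case_id": "B", "ai_score": 0.08},
    {"case_id": "C", "ai_score": 0.55},
]

def triage(cases):
    # Review likely-real material first: lowest AI-generated score first.
    return sorted(cases, key=lambda c: c["ai_score"])

print([c["case_id"] for c in triage(cases)])  # prints ['B', 'C', 'A']
```

In practice a real system would combine such scores with many other signals, but the ordering step is the core of the prioritization the filing describes.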

Hive AI offers AI tools that create videos and images, as well as a range of content moderation tools that can flag violence, spam, and sexual material and even identify celebrities. In December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military. 

For detecting CSAM, Hive offers a tool created with Thorn, a child safety nonprofit, which companies can integrate into their platforms. This tool uses a “hashing” system, which assigns unique IDs to content known by investigators to be CSAM, and blocks that material from being uploaded. This tool, and others like it, have become a standard line of defense for tech companies. 

But these tools simply identify a piece of content as CSAM; they don’t detect whether it was generated by AI. Hive has created a separate tool that determines whether images in general were AI-generated. Though it is not trained specifically to work on CSAM, according to Guo, it doesn’t need to be.

“There’s some underlying combination of pixels in this image that we can identify” as AI-generated, he says. “It can be generalizable.” 

This tool, Guo says, is what the Cyber Crimes Center will be using to evaluate CSAM. He adds that Hive benchmarks its detection tools for each specific use case its customers have in mind.

The National Center for Missing and Exploited Children, which participates in efforts to stop the spread of CSAM, did not respond to requests for comment on the effectiveness of such detection models in time for publication. 

In its filing, the government justifies awarding the contract to Hive without a competitive bidding process. Though parts of this justification are redacted, it primarily references two points also found in a Hive presentation slide deck. One involves a 2024 study from the University of Chicago, which found that Hive’s AI detection tool outperformed four other detectors at identifying AI-generated art. The other is its contract with the Pentagon for identifying deepfakes. The trial will last three months. 

Shoplifters could soon be chased down by drones

25 September 2025 at 13:34

Flock Safety, whose drones were once reserved for police departments, is now offering them for private-sector security, the company announced today, with potential customers including businesses intent on curbing shoplifting. 

Companies in the US can now place Flock’s drone docking stations on their premises. If the company has a waiver from the Federal Aviation Administration to fly beyond visual line of sight (these are becoming easier to get), its security team can fly the drones within a certain radius, often a few miles. 

“Instead of a 911 call [that triggers the drone], it’s an alarm call,” says Keith Kauffman, a former police chief who now directs Flock’s drone program. “It’s still the same type of response.”

Kauffman walked through how the drone program might work in the case of retail theft: If the security team at a store like Home Depot, for example, saw shoplifters leave the store, then the drone, equipped with cameras, could be activated from its docking station on the roof.

“The drone follows the people. The people get in a car. You click a button,” he says, “and you track the vehicle with the drone, and the drone just follows the car.” 

The video feed of that drone might go to the company’s security team, but it could also be automatically transmitted directly to police departments.

The company says it’s in talks with large retailers but doesn’t yet have any signed contracts. The only private-sector company Kauffman named as a customer is Morning Star, a California tomato processor that uses drones to secure its distribution facilities. Flock will also pitch the drones to hospital campuses, warehouse sites, and oil and gas facilities. 

It’s worth noting that the FAA is currently drafting new rules for how it grants approval to pilots flying drones out of sight, and it’s not clear if Flock’s use case would be allowed under the currently proposed guidance.

The company’s expansion to the private sector follows the rise of programs launched by police departments around the country to deploy drones as first responders. In such programs, law enforcement sends drones to a scene to provide visuals faster than an officer can get there. 

Flock has arguably led this push, and police departments have claimed drone-enabled successes, like a supply drop to a boy lost in the Colorado wilderness. But the programs have also sparked privacy worries, concerns about overpolicing in minority neighborhoods, and lawsuits charging that police departments should not block public access to drone footage. 

Other technologies Flock offers, like license-plate readers, have drawn recent criticism for the ease with which US federal immigration agencies, including ICE and CBP, can access data collected by local police departments amid President Trump’s mass deportation efforts.

Flock’s expansion into private-sector security is “a logical step, but in the wrong direction,” says Rebecca Williams, senior strategist for the ACLU’s privacy and data governance unit. 

Williams cited a growing erosion of Fourth Amendment protections—which prevent unlawful search and seizure—in the online era, in which the government can purchase private data that it would otherwise need a warrant to acquire. Proposed legislation to curb that practice has stalled, and Flock’s expansion into the private sector would exacerbate the issue, Williams says.

“Flock is the Meta of surveillance technology now,” Williams says, referring to the amount of personal data that company has acquired and monetized. “This expansion is very scary.”
