
Forensic linguistics: how dark web criminals give themselves away with their language

Shutterstock/nomad-photo.eu

Shannon McCoole ran one of the world’s largest dark web child abuse forums for around three years in the early 2010s. The forum provided a secure online space in which those interested in abusing children could exchange images, advice and support. It had around 45,000 users and was fortified with layers of online encryption that ensured near-complete anonymity for its users. In other words, it was a large and flourishing community for paedophiles.

McCoole eventually became the subject of an international investigation led by Taskforce Argos – a specialist unit in Australia’s Queensland Police Service dedicated to tackling online child abuse networks.

Key to the investigation – and McCoole’s eventual arrest and conviction – was a piece of linguistic evidence: his frequent use of an unusual greeting term, “hiyas”, as noticed by an investigating officer.

Investigators began searching relevant “clear web” sites (those openly accessible through mainstream search engines) for any markers of a similar linguistic style. They knew the kinds of websites to search because McCoole would speak about his outside interests on the forum, including basketball and vintage cars.

A man was discovered using the giveaway greeting on a four-wheel drive discussion forum. He lived in Adelaide and used a similar handle to the paedophile forum’s anonymous chief administrator. Another similarly named user – also using “hiyas” as a preferred greeting term – was discovered on a basketball forum. Suddenly, the police had their man.

This linguistic evidence contributed to the identification, arrest and eventual conviction of McCoole. But it didn’t end there. After McCoole’s arrest, Taskforce Argos took over his account and continued to run the forum, as him, for another six months. Police were able to gather vital intelligence that led to the prosecution of hundreds of offenders and to the rescue of at least 85 child victims.

McCoole’s case is breathtaking, and it offers a compelling demonstration of the power of language in identifying anonymous individuals.

The power of language

My journey into forensic linguistics began in 2014 at Aston University, where I began learning about the various methods and approaches to analysing language across different contexts in the criminal justice system.

A forensic linguist might be called upon to identify the most likely author of an anonymously written threatening text message, based on its language features; or they might assist the courts in interpreting the meaning of a particular slang word or phrase.


The Insights section is committed to high-quality longform journalism. Our editors work with academics from many different backgrounds who are tackling a wide range of societal and scientific challenges.


Forensic linguists also analyse the language of police interviews, courtroom processes and complex legal documents, pointing out potential barriers to understanding, especially for the most vulnerable groups in society. Without thoughtful consideration of the linguistic processes that occur in legal settings and the communication needs of the population, these processes can (and do) result in serious miscarriages of justice.

A particularly egregious example of this occurred when Gene Gibson was wrongly imprisoned for five years in Australia after being advised to plead guilty to manslaughter. Gibson was an Aboriginal man with a cognitive impairment and for whom English was a third language. The conviction was overturned when the court of appeal heard Gibson had not understood the court process, nor the instructions he was given by his appointed interpreter.

So forensic linguistics is not just about catching criminals; it is also about finding ways to better support vulnerable groups who find themselves, in whatever capacity, having to interact with legal systems. It is an attempt to improve the delivery of justice through language analysis.


Read more: Forensic linguistics gives victims and the wrongfully convicted the voices they deserve


Something that struck me in the earliest days of my research was the relative lack of work exploring the language of online child sexual abuse and grooming. The topic had long received attention from criminologists and psychologists, but almost never linguists – despite online grooming and other forms of online child sexual offending being almost exclusively done through language.

There is no doubt that researching this dark side of humanity is difficult in all sorts of ways, and it can certainly take its toll.

Nonetheless, I found the decision to do so straightforward. If we don’t know much about how these offenders talk to victims, or indeed each other, then we are missing a vital perspective on how these criminals operate – along with potential new routes to catching them.

These questions became the central themes of both my MA and PhD theses, and led to my ongoing interest in the language that most people never see: real conversations between criminal groups on the dark web.

Anonymity and the dark web

The dark web originated in the mid-1990s as a covert communication tool for the US federal government. It is best described as a portion of the internet that is unindexed by mainstream search engines. It can only be accessed through specialist browsers, such as Tor, that disguise the user’s IP address.

This enables users to interact in these environments virtually anonymously, making them ideal for hidden conversations between people with shared deviant interests. These interests aren’t necessarily criminal or even morally objectionable – consider the act of whistleblowing, or of expressing political dissent in a country without free speech. The notion of deviance depends on local and cultural context.

Nonetheless, the dark web has become all but synonymous with the most egregious and morally abhorrent crimes, including child abuse, fraud, and the trafficking of drugs, weapons and people.

Combating dark web crime centres on the problem of anonymity. It is anonymity that makes these spaces difficult to police. But when all markers of identity – names, faces, voices – are stripped away, what remains is language.

And language expresses identity.

Through our conscious and unconscious selections of sounds, words, phrases, viewpoints and interactional styles, we tell people who we are – or at least, who we are being from moment to moment.

Language is also the primary means by which much (if not most) dark web crime is committed. It is through (written) linguistic interaction that criminal offences are planned, illicit advice exchanged, deals negotiated, goals accomplished.

For linguists, the records and messages documenting the exact processes by which crimes are planned and executed become data for analysis. Armed with theory and methods for understanding how people express (or betray) aspects of their identity online, linguists are uniquely placed to address questions of identity in these highly anonymous spaces.

What kind of person wrote this text?

The task of linguistic profiling is well demonstrated by the case of Matthew Falder. Falder pleaded guilty to 137 charges relating to child sexual exploitation, abuse and blackmail in 2018. The case was dubbed by the National Crime Agency (NCA) as its first ever “hurt-core” prosecution, due to Falder’s prolific use of “hidden dark web forums dedicated to the discussion and image and video sharing of rape, murder, sadism, torture, paedophilia, blackmail, humiliation and degradation”.

As part of the international investigation to identify this once-anonymous offender, police sought out the expertise of Tim Grant, former director of the Aston Institute for Forensic Linguistics, and Jack Grieve from the University of Birmingham. Both are world-leading experts in authorship analysis, the identification of unknown or disputed authors and speakers through their language. The pair were tasked with ascertaining any information they could about a suspect of high interest, based on a set of dark web communications and encrypted emails.

Where McCoole’s case was an example of authorship analysis (who wrote this text?), Falder’s demanded the slightly different task of authorship profiling (what kind of person wrote this text?).

When police need to identify an anonymous person of interest but have no real-world identity with which to connect them, the linguist’s job is to derive any possible identifying demographic information. This includes age, gender, geographical background, socioeconomic status and profession. But they can only glean this information about an author from whatever emails, texts or forum discussions might be available. This then helps them narrow the pool of potential suspects.

Grant and Grieve set to work reading through Falder’s dark web forum contributions and encrypted emails, looking for linguistic cues that might point to identifying information.

They were able to link the encrypted emails to the forum posts through some uncommon word strings that appeared in both datasets. Examples included phrases like “stack of ideas ready” and “there are always the odd exception”.

They then identified features that offered demographic clues to Falder’s identity. For example, the use of both “dish-soap” and “washing-up liquid” (synonymous terms from US and British English) within the same few lines of text. Grant and Grieve interpreted the use of these terms as either potential US influence on a British English-speaker, or as a deliberate attempt by the author to disguise his language background.
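The intuition behind linking texts through distinctive markers can be illustrated with a toy sketch. This is emphatically not the method Grant and Grieve used – real authorship analysis draws on far richer feature sets (spelling, syntax, interactional style) and expert judgment – but it shows the basic idea of scoring candidate texts by how many of a known author's rare markers they share:

```python
# Toy illustration of the authorship-analysis intuition: score candidate
# texts by how many of a known author's distinctive markers they contain.
# A deliberately simplified sketch, not the experts' actual method.

DISTINCTIVE_MARKERS = ["hiyas", "stack of ideas ready", "the odd exception"]

def marker_score(text, markers=DISTINCTIVE_MARKERS):
    """Count how many distinctive markers occur in a text sample."""
    lowered = text.lower()
    return sum(1 for m in markers if m in lowered)

known = "hiyas all, got a stack of ideas ready for the weekend"
candidate_a = "hiyas everyone, new to this 4WD forum"
candidate_b = "hello all, hope you are well"

print(marker_score(candidate_a))  # shares the rare greeting -> 1
print(marker_score(candidate_b))  # no shared markers -> 0
```

In practice a single shared marker proves nothing; it is the accumulation of many independent, unusual features that lets analysts argue two texts likely share an author.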

Ultimately, the linguists developed a profile that described a highly educated, native British English-speaking older man. This “substantially correct” linguistic profile formed part of a larger intelligence pack that eventually led to Falder’s identification, arrest and conviction. Grant and Grieve’s contribution earned them Director’s Commendations from the NCA.

Linguistic strategies

The cases of McCoole and Falder represent some of the most abhorrent crimes that can be imagined. But they also helped usher into public consciousness a broader understanding of the kinds of criminals that use the dark web. These online communities of offenders gather around certain types of illicit and criminal interests, trading goods and services, exchanging information, issuing advice and seeking support.

For example, it is not uncommon to find forums dedicated to the exchange of child abuse images, or advice on methods and approaches to carrying out various types of fraud.

In research, we often refer to such groups as communities of practice – that is, people brought together by a particular interest or endeavour. The concept can apply to a wide range of different communities, whether professional-, political- or hobby-based. What unites them is a shared interest or purpose.

But when communities of practice convene around criminal or harmful interests, providing spaces for people to share advice, collaborate and “upskill”, ultimately they enable people to become more dangerous and more prolific offenders.


Read more: What is the dark web and how does it work?


The emerging branch of research in forensic linguistics of which I am part explores such criminal communities on the dark web, with the overarching aim of assisting efforts to police and disrupt them.

Work on child abuse communities has shown the linguistic strategies by which new users attempt to join and ingratiate themselves. These include explicit references to their new status (“I am new to the forums”), commitments to offering abuse material (“I will post a lot more stuff”), and their awareness of the community’s rules and behavioural norms (“I know what’s expected of me”).

Research has also highlighted the social nature of some groups focused on the exchange of indecent images. In a study on the language of a dark website dedicated to the exchange of child abuse images, I found that a quarter of all conversational turns contributed to rapport-building between members – through, for example, friendly greetings (“hello friends”), well-wishing (“hope you’re all well”) and politeness (“sorry, haven’t got those pics”).

Hand typing on neon lit keyboard
Dark web criminals have to abide by strict social rules. Shutterstock/Zuyeu Uladzimir

This demonstrates the perhaps surprising importance of social politeness and community bonding within groups whose central purpose is to trade in child abuse material.

Linguistic research on dark web criminal communities makes two things clear. First, despite the shared interest that brings them together, they do not necessarily attract the same kinds of people. More often than not they are diverse, comprising users with varied moral and ideological stances.

Some child abuse communities, for example, see sexual activity with children as a form of love, protesting against others who engage in violent abuse. Other groups openly (as far as is possible in dark web settings) seem to relish the violent abuse itself.

Likewise, fraud communities tend to comprise people of highly varied motivations and morality. Some claim to be seeking a way out of desperate financial circumstances, while others proudly discuss their crimes as a way of seeking retribution over “a corporate elite”. Some are looking for a small side hustle that won’t attract “too many questions”, while a small proportion of self-identifying “real fraudsters” brag about their high status while denigrating those less experienced.

A common practice in these groups is to float ideas for new schemes – for example, the use of a fake COVID pass to falsely demonstrate vaccination status, or the use of counterfeit cash to pay sex workers. That the morality of such schemes provokes strong debate among users is evidence that fraud communities comprise different types of people, with a range of motivations and moral stances.

Community rules – even in abuse forums

Perhaps another surprising fact is that rules are king in these secret groups. As with many clear web forums, criminal dark web forums are typically governed by “community rules” which are upheld by site moderators. In the contexts of online fraud – and to an even greater extent, child abuse – these rules do not just govern behaviour and define the nature of these groups, they are essential to their survival.

Rules of child sexual exploitation and abuse forums are often extremely specific, laying out behaviours which are encouraged (often relating to friendliness and support among users) as well as those which will see a user banished immediately and indefinitely. These reflect the nature of the community in question, and often differ between forums. For instance, some forums ban explicitly violent images, whereas others do not.

Rules around site and user security highlight users’ awareness of potential law enforcement infiltration of these forums. Rules banning the disclosure of personal information are ubiquitous and crucial to the survival and longevity of these groups.


Read more: Our research on dark web forums reveals the growing threat of AI-generated child abuse images


Dark web sites often survive only days or weeks. The successful ones are those in which users understand the importance of the rules that govern them.

The rise of AI

Researching the language of dark web communities provides operationally useful intelligence for investigators. As in most areas of research, the newest challenge we face in forensic linguistics is understanding the risks and opportunities posed by increasingly sophisticated AI technologies.

At a time when criminal groups are already using AI tools for malicious purposes like generating abuse imagery to extort children, or creating deepfakes to impersonate public figures to scam victims, it is more important than ever that we understand how criminal groups communicate, build trust, and share knowledge and ideas.

By doing this, we can assist law enforcement with new investigative strategies for offender prioritisation and undercover policing that work to protect the most vulnerable victims.

As we stand at this technological crossroads, the collaboration between linguists, technology and security companies, and law enforcement has become more crucial than ever. The criminals are already adapting. Our methods for understanding and disrupting their communications must evolve just as quickly.



The Conversation

Emily Chiang has received funding from UKRI - Innovate UK.

Archetyp was one of the dark web’s biggest drug markets. A global sting has shut it down

Operation Deep Sentinel

Last week, one of the dark web’s most prominent drug marketplaces – Archetyp – was shut down in an international, multi-agency law enforcement operation following years of investigations. It was touted as a major policing win and was accompanied by a slick cyberpunk-themed video.

But those of us who have studied this space for years weren’t surprised. Archetyp may have been the most secure dark web market. But shutdowns like this have become a recurring feature of the dark web. And they are usually not a significant turning point.

The durability of these markets tells us that if policing responses keep following the same playbook, they will keep getting the same results. And by focusing so heavily on these hidden platforms, authorities are neglecting the growing digital harms in the spaces we all use.

One of the most popular dark web markets

Dark web markets mirror mainstream e-commerce platforms – think Amazon meets cybercrime. These are encrypted marketplaces accessed via the Tor Browser, a privacy-focused browser that hides users’ IP addresses. Buyers use cryptocurrency and escrow systems (third-party payment systems which hold funds until the transaction is complete) to anonymously purchase illicit drugs.

Usually these products are sent to the buyer by post and money transferred to the seller through the escrow system.
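The escrow mechanism described above can be sketched in a few lines. This is a simplified model with invented names, not any real market's code: a third party holds the buyer's funds and only releases them to the seller once delivery is confirmed.

```python
# Simplified model of a marketplace escrow flow (invented names; a
# sketch of the mechanism only, not any real platform's implementation).

class Escrow:
    def __init__(self):
        self.held = {}  # order_id -> amount held by the third party

    def deposit(self, order_id, amount):
        """Buyer pays; funds are held, not yet passed to the seller."""
        self.held[order_id] = amount

    def confirm_delivery(self, order_id):
        """Buyer confirms receipt; held funds are released to the seller."""
        return self.held.pop(order_id)

    def dispute(self, order_id):
        """Buyer disputes; held funds are refunded instead of released."""
        return self.held.pop(order_id)

escrow = Escrow()
escrow.deposit("order-1", 100)
print(escrow.confirm_delivery("order-1"))  # seller receives 100
```

The design point is that neither side has to trust the other directly: the seller ships knowing the money exists, and the buyer pays knowing it is only released on delivery. It also explains "exit scams", in which the market operators abscond with everything currently sitting in escrow.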

Archetyp launched in May 2020 and quickly grew to become one of the most popular dark web markets with an estimated total transaction volume of €250 million (A$446 million). It had more than 600,000 users worldwide and 17,000 listings consisting mainly of illicit drugs including MDMA, cocaine and methamphetamine.

Compared to its predecessors, Archetyp imposed stricter security expectations on its users. These included an advanced encryption program known as “Pretty Good Privacy” and a cryptocurrency called Monero. Unlike Bitcoin, which records every payment on a public ledger, Monero conceals all transaction details by default, which makes them nearly impossible to trace.

Despite the fact Archetyp had clearly raised the bar on security on the dark web, Operation Deep Sentinel – a collaborative effort between law enforcement agencies in six countries supported by Europol and Eurojust – took down the market. The front page has now been replaced by a banner.

While these publicised take-downs feel effective, evidence has shown such interventions only have short-term impacts and the dark web ecosystem will quickly adapt.

A persistent trade

These shutdowns aren’t new. Silk Road, AlphaBay, WallStreet and Monopoly Market are all familiar names in the digital graveyard of the dark web. Before these marketplaces were shut down, they sold a range of illegal products, from drugs to firearms.

Yet still, the trade persists. New markets emerge and old users return. In some cases, established sellers on closed-down markets are welcomed onto new markets as digital “refugees” and have joining fees waived.

What current policing strategies neglect is that dark web markets are not isolated to the storefronts that are the popular target of crackdowns. These are communities stretched across dark and surface web forums which develop shared tutorials and help one another adapt to any new changes. These closures bind users together and foster a shared resilience and collective experience in navigating these environments.

Law enforcement shutdowns are also only one type of disruption that dark web communities face. Dark web market users routinely face voluntary closures (the gradual retirement of a market), exit scams (sudden closures of markets where any money in escrow is taken), or even scheduled maintenance of these markets.

Ultimately, this disruption to accessibility is not a unique event. In fact, it is routine for individuals participating in these dark web communities, par for the course of engaging in the markets.

This ability to thrive amid disruption reflects how dark web market users have become experts at adapting to risks, managing setbacks and rebuilding quickly.

A computer screen displaying a blue-coloured webpage.
Dark web markets are accessed via the highly private and secure Tor Browser. Daniel Constante/Shutterstock

Missing the wider landscape of digital harms

The other emerging issue is that current policing efforts treat dark web markets as the core threat, which risks missing the wider landscape of digital harms. Illicit drug sales, for example, are promoted on social media, where platform features such as recommendation systems are creating new channels for illicit drug supply.

Beyond drugs, there are ever-growing examples of generative AI being used to create sexual deepfakes, both in schools and of public figures, including the recent case of NRL presenter Tiffany Salmond.

This is all alongside the countless cases of celebrities and social media influencers caught up in crypto pump-and-dump schemes, where hype is used to artificially inflate the price of a token before the creators sell off their holdings and leave investors with worthless tokens.

This shows that while the dark web gets all the attention, it’s far from the internet’s biggest problem.

Archetyp’s takedown might make headlines, but it won’t stop the trade of illicit drugs on the dark web. It should force us to think about where harm is really happening online and whether current strategies are looking in the wrong direction.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Our research on dark web forums reveals the growing threat of AI-generated child abuse images

Ventura/Shutterstock

The UK aims to be the first country in the world to create new offences related to AI-generated sexual abuse. New laws will make it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material (CSAM), punishable by up to five years in prison. The laws will also make it illegal for anyone to possess so-called “paedophile manuals” which teach people how to use AI to sexually abuse children.

In the last few decades, the threat against children from online abuse has multiplied at a concerning rate. According to the Internet Watch Foundation, which tracks down and removes abuse from the internet, there has been an 830% rise in online child sexual abuse imagery since 2014. The prevalence of AI image generation tools is fuelling this further.

Last year, we at the International Policing and Protection Research Institute at Anglia Ruskin University published a report on the growing demand for AI-generated child sexual abuse material online.

Researchers analysed chats that took place in dark web forums over the previous 12 months. We found evidence of growing interest in this technology, and of online offenders’ desire for others to learn more and create abuse images.

Horrifyingly, forum members referred to those creating the AI-imagery as “artists”. This technology is creating a new world of opportunity for offenders to create and share the most depraved forms of child abuse content.

Our analysis showed that members of these forums are using non-AI-generated images and videos already at their disposal to facilitate their learning and train the software they use to create the images. Many expressed their hopes and expectations that the technology would evolve, making it even easier for them to create this material.

Dark web spaces are hidden and only accessible through specialised software. They provide offenders with anonymity and privacy, making it difficult for law enforcement to identify and prosecute them.

The Internet Watch Foundation has documented concerning statistics about the rapid increase in the number of AI-generated images they encounter as part of their work. The volume remains relatively low in comparison to the scale of non-AI images that are being found, but the numbers are growing at an alarming rate.

The charity reported in October 2023 that a total of 20,254 AI-generated images were uploaded in a month to one dark web forum. Before this report was published, little was known about the threat.

The harms of AI abuse

The perception among offenders is that AI-generated child sexual abuse imagery is a victimless crime, because the images are not “real”. But it is far from harmless, firstly because it can be created from real photos of children, including images that are completely innocent.

While there is a lot we don’t yet know about the impact of AI-generated abuse specifically, there is a wealth of research on the harms of online child sexual abuse, as well as how technology is used to perpetuate or worsen the impact of offline abuse. For example, victims may have continuing trauma due to the permanence of photos or videos, just knowing the images are out there. Offenders may also use images (real or fake) to intimidate or blackmail victims.

These considerations are also part of ongoing discussions about deepfake pornography, the creation of which the government also plans to criminalise.


Read more: Deepfake porn: why we need to make it a crime to create it, not just share it


All of these issues can be exacerbated by AI technology. There is also likely to be a traumatic impact on the moderators and investigators who have to examine abuse images in the finest detail to determine whether they are “real” or “generated”.

What can the law do?

UK law currently outlaws the taking, making, distribution and possession of an indecent image or a pseudo-photograph (a digitally-created photorealistic image) of a child.

But there are currently no laws that make it an offence to possess the technology to create AI child sexual abuse images. The new laws should ensure that police officers will be able to target abusers who are using or considering using AI to generate this content, even if they are not currently in possession of images when investigated.

Handcuffs on a computer keyboard
New laws on AI tools should help investigators crack down on offenders even if they do not have images in their possession. Pla2na/Shutterstock

We will always be behind offenders when it comes to technology, and law enforcement agencies around the world will soon be overwhelmed. They need laws designed to help them identify and prosecute those seeking to exploit children and young people online.

It is welcome news that the government is committed to taking action, but it has to be fast. The longer the legislation takes to enact, the more children are at risk of being abused.

Tackling the global threat will also take more than laws in one country. We need a whole-system response that starts when new technology is being designed. Many AI products and tools have been developed for entirely genuine, honest and non-harmful reasons, but they can easily be adapted and used by offenders looking to create harmful or illegal material.

The law needs to understand and respond to this, so that technology cannot be used to facilitate abuse, and so that we can differentiate between those using tech to harm, and those using it for good.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

How one 83-year-old fell into a fraudster’s fear bubble – and how gift cards played a key role

Wednesday morning, the day before Thanksgiving, Mae awoke, set her hair in curlers and switched on her laptop. A message appeared. It said her Safari web browser had encountered a problem, and a link offered to connect the 83-year-old to the Apple Computer Company. Mae clicked it.

She didn’t know it yet, but Mae, like millions of Americans each year, had fallen into the grip of fraudsters. Over the next 10 hours, the criminals would try several methods to steal her money.

A portion of a drugstore gift card rack showing the tagline
The drugstore gift card rack used to steal money from Mae. David P. Weber

What worked best was getting her to buy gift cards. The cards, from retailers such as Target, Apple and Amazon, are sold on racks in drugstores and supermarkets.

They’re better than cash for a fraudster, more portable and just as anonymous. Criminals can use gift card numbers online, at stores around the world, or sell or trade them in illicit marketplaces on the dark web, Telegram or Discord.

An estimated US$8 billion is stolen annually from seniors 60 and older through stranger-perpetrated frauds, according to AARP. The cards are a leading fraud payment method reported by older adults, according to the Federal Trade Commission.

Mae’s story is one of many such cases that prompted us – a fraud and forensic accounting professor who is a former top financial regulator, and a Pulitzer Prize-winning investigative reporter – to explore how cracks in the financial regulatory system dating to the Civil War have been exploited by fraudsters and corporations.

The investigation shows that federal regulators haven’t protected the public from gift card fraud, and Congress has largely deferred to regulators. State and federal efforts to rein in the industry have been opposed by lobbyists and gift card trade groups. And gift card retailers are often not helpful in assisting law enforcement.

An ad with a black Friday banner and a list of discounted gift cards available for sale
One of many Telegram ads offering gift cards from many different common providers at a deep discount. SOCRadar

One of us learned about Mae’s case in his work as a fraud examiner and has seen dozens of similar cases. Mae, who lives in Maryland, is unwilling to publish her last name, but she wants people to know her story so they don’t make the same mistakes.

In gift card fraud, everybody but the victim makes money: fraudsters, gift card companies and retailers. The criminals exploit a rapidly evolving payments industry that’s shrouded in secrecy and designed to ensure easy transactions.

Call this number

When Mae called the number that appeared on her screen, a man answered and identified himself as Mac Morgan, an “Apple high-security technician.” The problem seemed to originate from her bank, he told her. She volunteered that she banked with M&T, a Northeast bank headquartered in Buffalo, New York. Call them, he said, and provided a phone number.

The woman who answered said her name was Alivia, from the M&T Bank Fraud Unit. Alivia told Mae that a European pornographer and scammer had tried to gain access to her account and withdraw $20,000 during the night. A hold had been placed on the withdrawal, but Mae needed to come down to the bank and retrieve the money before the fraudsters did.

Alivia promised to stay on the phone with Mae throughout the process.

Gift cards are the latest in fraudsters’ arsenal of tools to steal money from people through deceptions such as romance scams, fake IRS notices and phony investment schemes.

The average reported amount lost is $1,000, but between 2021 and 2023 more than 100 consumers reported gift card fraud losses in excess of $400,000 each, according to an FTC public records request. About $550 billion is added onto gift cards annually in the U.S., according to Jordan Hirschfield, a gift card analyst at Javelin Strategy & Research. He estimates that between 1% and 5% of gift card sales could involve fraud, but because no one keeps track, it's difficult to arrive at an exact number. If the 1% to 5% figure is correct, that's between $5.5 billion and $27.5 billion per year.

A victim’s fear bubble

Mae had entered a fear bubble, an induced state of panic that makes rational thought difficult.

Anyone can fall victim. Mae had graduated summa cum laude from an elite private university. She is a no-nonsense retired nurse and lives independently. Now she was rushing, panicked, to her bank at the direction of a fraudster.

At the bank, the teller and manager tried to dissuade Mae from withdrawing $20,000 in cash. After about 15 minutes, she wore them down.

In Maryland, the bank had no option but to give Mae her money. That’s not the case in other states. In Florida, a state that contends with elevated incidents of fraud on seniors, the Legislature passed a law in May allowing financial institutions to delay transactions to people over 65 if there is a well-founded belief of exploitation.

Anecdotal evidence from law enforcement suggests that even a few hours of delay can pop the fear bubble fraudsters create.

Several states have passed or are considering laws requiring gift card warning signs, including Delaware, Iowa, Nebraska, Pennsylvania, Rhode Island and West Virginia.

Next, Alivia directed Mae to a Cash2Bitcoin ATM at a gas station and talked her through registering, including uploading her driver’s license, a know-your-customer requirement that doesn’t exist for gift cards. Mae fed thousands of dollars into the machine. At $15,000, the ATM hit its limit on deposits.

Alivia then passed the phone to a colleague, Ross, who directed Mae to buy gift cards. At Rite Aid, Mae bought four cards for $2,000, scratched the backs of the cards and read the numbers to Ross.

Gift cards hang side by side; one shows the front with a picture of a dog and a Target logo, the other shows the back
The front and back of a Target gift card. Removing the cardboard tab uncovers a scratch-off area; scratching reveals the card numbers. Once a fraudster has the numbers, the money on the card can be quickly spent online. The Conversation, CC BY-ND

But at the Food Lion supermarket, a manager who knew her refused to sell her any gift cards. Ross gave up and instructed her to go home but not tell anyone what had transpired.

Gift card companies claim to be “highly regulated” because Congress passed the Credit CARD Act in 2009. This eliminated many fees on gift cards, prohibited them from expiring for at least five years and allowed state law to preempt federal law. But it didn’t extend existing credit and debit card consumer fraud protections.

In 2010, Congress created a single regulator for consumer financial protection: the Consumer Financial Protection Bureau. But the agency hasn’t kept up with the rise in consumer financial products outside of banks. Rules it issued in 2016 and 2018 exempted most gift cards from regulation. The FTC and Treasury Department have also proven ineffective in combating the problem.

The fear bubble pops

By the time Mae pulled into her driveway, the ether had lifted. “It was a big fat light bulb: ‘You’ve been screwed,’” she said.

Mae called M&T: There was no open fraud case. She called Target: The gift cards had already been spent. Mae got most of her bitcoin money back, thanks to the exchange's compliance efforts and the fraud freeze placed on her account the day of the scam. But the gift card money was gone.

Fraud against the elderly, including through gift cards, will likely continue to grow.

Mae reported her story to the local police, AARP and the FTC database. “It can happen to anyone,” she said.

For the full investigation, please visit: Gift card scams generate billions for fraudsters and industry as regulators fail to protect consumers − and how one 83-year-old fell into the ‘fear bubble’

The Conversation

Dr. David P. Weber receives funding from the Administration for Community Living (ACL), U.S. Department of Health and Human Services (HHS) to combat elder financial and high tech exploitation on the Eastern Shore of the Chesapeake Bay, Maryland. The award totals $2.6 million of financial assistance, with 80 percent funded by ACL/HHS and 20 percent funded by Maryland state and local government sources. The contents of this investigative story are those of the authors and do not necessarily represent the official views of, nor an endorsement, by ACL/HHS, or the U.S. Government.

Jake Bernstein does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Gift card scams generate billions for fraudsters and industry as regulators fail to protect consumers − and how one 83-year-old fell into the ‘fear bubble’

Banking regulations haven't caught up with gift cards, which fraudsters are using to steal money from people in ways that are difficult to trace or reverse. The Conversation, CC BY-ND

Wednesday morning, the day before Thanksgiving, Mae awoke, set her hair in curlers and switched on her laptop. The screen froze and a message appeared. It said her Safari web browser had encountered a problem, and a link offered to connect the 83-year-old to the Apple Computer Company. Mae clicked it.

She didn’t know it yet, but Mae, like millions of Americans each year, had fallen into the grip of fraudsters. Over the next 10 hours, the criminals would try several methods to steal her money. The one that worked without a hitch was getting her to buy gift cards. The common cards, from retailers such as Target, Apple and Amazon, are sold on racks in drugstores and supermarkets. They’re better than cash for a fraudster, more portable and just as anonymous. Once criminals have the gift card numbers, they use them to purchase goods online, at stores around the world, or sell or trade them in illicit marketplaces on the dark web, Telegram or Discord.

A portion of a drugstore gift card rack showing the tagline
The drugstore gift card rack used to steal money from Mae. David P. Weber

An estimated US$8 billion is stolen annually from seniors age 60 and older through stranger-perpetrated frauds, according to AARP. Increasingly, gift cards are a leading fraud payment method reported by older adults, according to the Federal Trade Commission.

Mae’s story is one of many such cases that prompted us – a fraud and forensic accounting professor who is a former top financial regulator, and a Pulitzer Prize-winning investigative reporter – to explore how cracks in the financial regulatory system dating to the Civil War have been exploited by fraudsters and corporations.

[ Why gift cards fall into a gap in the 2-tier banking regulation system − and a brief history of why that gap exists ]

The investigation shows that federal regulators have consistently failed to protect the public from gift card fraud and have failed to give gift cards consumer protections like those afforded to credit and debit cards. Congress, in turn, has largely deferred to these regulators. Meanwhile, efforts to rein in the industry at the state and federal level have been met with successful opposition from lobbyists and gift card trade groups. And when fraud does occur, gift card retailers are often less than helpful in assisting law enforcement efforts to track down the criminals.

One of us learned about Mae’s case in his work as a fraud examiner and has seen dozens of similar cases. Mae, who lives in Maryland, is unwilling to publish her last name for fear of being revictimized, as well as sheer embarrassment, but she still wants people to know the story so they don’t make the same mistakes.

In gift card fraud, everybody but the victim makes money: fraudsters, gift card companies and retailers. The criminals exploit a rapidly evolving payments industry that’s shrouded in secrecy, designed to ensure easy transactions and lacking in consumer protections.

The technology companies that provide the infrastructure behind the gift card economy are privately held and release little information publicly. They facilitate payments behind the scenes, out of the view of consumers, who see only the brand name of the card and the drugstore or supermarket where they buy it. Retailers who sell gift cards could do more to thwart fraud, but the secretive technology companies that set up and manage the cards are best positioned to stop the rampant criminality. They don't: There's no legal requirement to do so, and they make money off the crime.

Call this number

When Mae called the number that appeared on her screen, a man answered and identified himself as Mac Morgan, an “Apple high-security technician.” He gave her his employee ID number, which she dutifully wrote down. The problem seemed to originate from her bank, he told her. She volunteered that she banked with M&T, a Northeast bank headquartered in Buffalo, N.Y. Call them, he said, and provided a phone number.

The woman who answered said her name was Alivia, from the M&T Bank Fraud Unit. Alivia told Mae that a European pornographer and scammer had tried to gain access to her account and withdraw $20,000 during the night. A hold had been placed on the withdrawal, but Mae needed to come down to the bank and retrieve the money before the fraudsters did.

Anxiety rising in her voice, Mae told the woman she hadn’t even had a cup of coffee yet; she still had curlers in her hair. Alivia advised her to remove the curlers and, soothingly, promised to stay on the phone with Mae through the entire process.

Sophisticated schemes

Gift cards are just the latest in fraudsters’ seemingly unlimited arsenal of tools that help them steal money from people through deceptions like romance scams, fake IRS notices and phony investment schemes. In addition to consumer swindles like the one that targeted Mae, gift cards, including those that are reloadable, have also been hit with an epidemic of card draining, where criminals either steal barcodes from gift cards on the rack or swap in new barcodes they already control.

When consumers put money on a compromised card, the criminals are alerted because they are monitoring the barcodes using automated online account balance inquiries. They can repeatedly check the balances on thousands of barcodes at a time. As soon as money hits a card, the criminals use the account number to purchase items online or in stores, using runners or “mules” to physically go into stores.
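The automated polling described above works because balance-check endpoints typically accept unlimited anonymous queries. A minimal sketch of the obvious countermeasure, a server-side rate limiter on balance inquiries, is below; the window, threshold and function names are hypothetical, not drawn from any real processor's system.

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds; a real processor would tune these.
WINDOW_SECONDS = 3600          # look-back window for inquiries
MAX_INQUIRIES_PER_WINDOW = 5   # balance checks allowed per card per window

_inquiries = defaultdict(deque)  # card number -> timestamps of recent checks

def allow_balance_inquiry(card_number, now=None):
    """Return False once a card's balance is polled suspiciously often."""
    now = time.time() if now is None else now
    window = _inquiries[card_number]
    # Drop timestamps that have aged out of the look-back window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_INQUIRIES_PER_WINDOW:
        return False  # likely automated polling: challenge or block
    window.append(now)
    return True
```

A limiter like this wouldn't stop a determined criminal, but it would slow the thousands-at-a-time barcode monitoring that makes card draining profitable.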

The back of a reloadable gift card with a partially covered barcode
Criminals can steal or alter gift card barcodes to compromise the card. David P. Weber

The gift card draining problem is widespread enough that it attracted the attention of the Department of Homeland Security and sparked hearings in the U.S. Senate in April 2024. Two months later, Maryland passed the nation’s first law targeting card draining, which mandates secure packaging aimed at thwarting criminals who steal or tamper with the numbers on gift cards.

People ages 18 to 49 are more likely than older adults to lose money in gift card fraud, but adults over the age of 80 lose three times as much as younger adults. The average reported amount lost is $1,000, but more than 100 consumers have reported gift card fraud losses to the Federal Trade Commission in excess of $400,000 each between 2021 and 2023, according to information provided by the FTC through a public records request.

Falling victim to a financial scam ranks second in American fears about criminality, after identity theft, far exceeding concerns about violent crime, according to Gallup. Despite these fears, there doesn’t appear to be an accurate government number on exactly how much financial fraud is taking place. The gift card and reloadable card industry also doesn’t keep data on the amount of money consumers lose through the criminal use of its products.

At the same time, many gift card companies are not publicly traded. As such, they aren’t required to file quarterly or annual financial reports with the U.S. Securities and Exchange Commission, which would indicate the size of the industry and might outline the amount of fraud, among other risks. Consequently, nailing down an exact figure for the total amount of fraud involving gift cards and reloadable cards is challenging.

To track trends, regulators rely on victims self-reporting to gauge the scope of the problem.

Yet the vast majority of people who fall victim to financial scams never report their losses to law enforcement. Most victims are too embarrassed or pessimistic about their chances of recouping losses and so don’t complain. And often they are concerned that their adult children, caregivers or authorities such as adult protective services might conclude that guardianship or institutionalization is necessary to protect them. While it is extremely difficult to know how many elders report financial fraud, a 12-year-old study that’s still commonly cited, including by federal authorities, estimates it at 4.2%.

About $550 billion is added onto gift cards annually in the U.S., according to Jordan Hirschfield, a gift card analyst at Javelin Strategy & Research. He estimates that between 1% and 5% of all gift card sales could be fraudulent in some way, but because no one keeps track, it’s difficult to arrive at an exact number. If the 1% to 5% figure is correct, the amount of fraud is between $5.5 billion and $27.5 billion per year.
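Hirschfield's range is simple to reproduce as back-of-the-envelope arithmetic:

```python
# Reproducing the estimate in the text: 1%-5% of ~$550 billion loaded annually.
annual_load = 550e9          # dollars added to gift cards per year (Javelin)
low_rate, high_rate = 0.01, 0.05

fraud_low = annual_load * low_rate
fraud_high = annual_load * high_rate
print(f"${fraud_low / 1e9:.1f}B to ${fraud_high / 1e9:.1f}B")  # $5.5B to $27.5B
```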

A victim’s fear bubble

Mae had entered what AARP calls a fear bubble, an induced state of panic that makes rational thought difficult, if not impossible. This is a greater risk for seniors, because as people get older they experience anger and fear more vividly. The fraudsters who manipulate this panic describe putting their victims “under the ether.” Frightened beyond reason, the victim is manipulated into transferring large sums of money to the fraudster to ward off the conjured danger.

Anyone can fall victim. In February, a former New York Times business columnist wrote about losing $50,000 in a fear-induced scam. Mae had graduated summa cum laude from an elite private university. She is a no-nonsense retired nurse and lives independently. Now she was rushing, panicked, to her bank at the direction of a fraudster.

As Mae drove, Alivia advised her to ready a story in case the teller balked at giving her the money. Mae decided to tell them that she needed the $20,000 to buy a used car and it was a matter of urgency.

Frictionless and anonymous

Gift cards have experienced rapid and immense growth because they're a win-win: an innovative convenience for shoppers and a threefold boon for retailers. The first benefit is advertising; gift card racks are mini billboards for the brands they carry.

A rack of several dozen gift cards of different types and colors
A gift card rack at a Target sells dozens of types of gift cards. Some of the cards resemble credit cards, and even carry their logos, but they all lack the regulatory protections of credit cards. The Conversation, CC BY-ND

Consumers commonly spend one-third to two-thirds more than the actual value of the card when they use it, said Ben Jackson, chief operating officer of the Innovative Payments Association, one of several trade groups that represent the industry. And sometimes consumers don't spend the gift cards at all. The cards' terms and conditions, frequently in small print or available only online, may allow retailers to keep the balance after a minimum of five years. It's a tidy gift to retailers amounting to billions of dollars.

The National Retail Federation routinely ranks gift cards as the No. 1 thing shoppers plan to buy. “You don’t want friction in your gift giving,” Jackson said.

He has traced the first gift card to a glove company in Oregon in 1908. The company extolled the convenience of the innovation: “Gift givers need not worry about picking the right size or color glove; give the recipient a card and let them choose for themselves.”

In the modern era, plastic gift cards were created by Neiman Marcus, but movie rental company Blockbuster was the first to display them on racks for customers. Known as closed-loop cards, they can be spent only on goods from that particular retailer.

In contrast, open-loop gift cards can be spent at multiple retailers and often carry a credit card logo from companies such as Visa or Mastercard, but they don't offer the protections afforded actual credit cards, such as requiring an ID on file for the card. Some open-loop cards are labeled as debit cards even though they also lack the fraud protections of bank debit cards. If the money is swindled, there's no obligation for the company to reimburse the cardholder.

Open-loop cards work everywhere debit and credit cards do and can sometimes be reloaded with funds. Purchasers can pay by cash to remain anonymous. Criminals love them. In the places where fraudsters lurk – on the dark web, which is made up of sites that resemble ordinary websites but are accessible only using special browsers or authorization codes, and on Telegram and Discord messaging apps – open-loop and closed-loop gift cards are offered as payment for everything from payroll to the purchase of equipment needed to perpetrate more frauds.

The first open-loop card originated with retail malls and foreshadowed how the gift card industry would later game regulators. In 2004, Indianapolis-based Simon Property Group and Bank of America created a stored-value card that could be spent at any store in the 159 Simon malls throughout the U.S.

An ad with a black Friday banner and a list of discounted gift cards available for sale
One of many Telegram ads offering gift cards from many different common providers at a deep discount. SOCRadar

The card activation fee was as much as $6.95. Simon also deducted a fee when a card went unused for six months and charged 50 cents each time a customer checked the card balance after the first inquiry. The fees ran counter to the consumer protection laws of some states where Simon operated, and three states sued Simon. But the mall operator successfully contended that because it was working with a national bank, federal law and regulations, which had no restriction on these fees, preempted state law to allow the fees. The cards failed to stop online shopping from eclipsing the American mall industry, but the fee controversy eventually roused federal lawmakers into limited action.

Meanwhile, another gift card innovation had launched in California. In 2002, an in-house unit of Safeway supermarkets looking to sell nontraditional goods to Safeway customers created the gift card kiosk. It was so successful that a year later the unit became a Safeway subsidiary called Blackhawk Network. By 2007 there were Blackhawk kiosks in 60,000 retail locations, projecting sales of $100 million that year. Seven years later, Safeway spun off Blackhawk as a stand-alone public company.

And in 2018, with help from Blackhawk insiders, a private equity firm called Silver Lake Partners and a hedge fund named P2 Capital Partners took the company private in a transaction worth $3.5 billion. In 2023, Blackhawk Network Holdings had an estimated annual revenue of $2.8 billion.

Blackhawk and its main competitor, Atlanta-based InComm Payments, put cards in drugstores and supermarket chains throughout the U.S. Each card is governed by a separate, private, bespoke agreement negotiated between the card owner and the distributor, according to Jackson.

Typically, the distributor negotiates a small discount, usually under 10%, off the card’s face value. The discount is split between the distributor and the store selling the card.

Screenshot of a description of a discord server that offers gift cards for sale
One of many Discord servers offering to buy, sell and trade many brands of gift cards, including Amazon, Apple and Google. SOCRadar

The distributor handles card activation so that a retailer like Target will recognize that the card is active in the available amount. In some cases, the distributor also handles the back-end technology that allows consumers to spend the money loaded on the card.

Starting as a small industry a little more than 20 years ago, the closed- and open-loop gift card business has become a massive enterprise involving hundreds of billions of dollars, a festival of frictionless commerce that is also beloved by criminals for its convenience and anonymity.

Mae gets stubborn

The bank teller tried to dissuade Mae from withdrawing $20,000 in cash. Eventually, the bank manager joined the conversation and suggested she take a cashier’s check instead. Mae insisted that the guy selling her the car had demanded cash. After about 15 minutes, she wore them down. They gave her the cash.

The bank manager followed Mae to her car to ensure she was OK and to try one more time to get her to reconsider. Mae waved the manager off. Once she was alone again, Mae picked up the phone. Alivia had remained on the line the entire time but told Mae to leave her cellphone in the car while she went into the bank.

A patchwork system of help

In Maryland, the banker had no option but to hand Mae her money. That's not the case in other states. In Florida, a state that contends with an elevated incidence of fraud against seniors, the Legislature passed a law in May allowing financial institutions to delay disbursements or transactions of funds to people over 65 if there is a well-founded belief that they are being exploited. In return, the banks receive immunity from any resulting administrative or civil liability.

The delay, which expires after 15 business days, requires that the financial institution launch an immediate review and contact those the account holder has designated as people of confidence. A court may shorten or extend the length of the pause. Anecdotal evidence from law enforcement suggests that even a few hours of delay can pop the fear bubble fraudsters create. As soon as the persuasive ether of the fraudster lifts, most people realize they’ve been scammed. A delay also makes time for the target to talk to someone they trust who might dissuade them from parting with their money.

In New Jersey in 2021, state Sen. Nellie Pou sponsored a bill that proposed a 48-hour delay before a gift card worth more than $100 could be used or validated, and proposed extending to gift cards the fraud protections that credit cards receive under federal laws and regulations: If a consumer reported fraud, the funds would be frozen, and if the fraud investigation were upheld, the money would be returned to the customer. The bill also proposed a fraud incident hotline for consumers, exempted small businesses and levied a $1,000 civil penalty on card issuers that violated its provisions.

The Innovative Payments Association lobbied against the New Jersey bill. The legislation would harm New Jerseyans, it wrote lawmakers, by “discouraging gift card providers to issue and sell such cards in the state.” The association argued that the waiting period “defeats the purpose of having a gift card,” which is to allow the recipient “to go out and get what they want/need immediately.” The legislation passed the state Senate but died in the Assembly and wasn’t reintroduced.

Several states have also passed or are considering laws requiring retailers selling gift cards to post warning signs, including Delaware, Iowa, Nebraska, Pennsylvania, Rhode Island and West Virginia, but none go as far as the New Jersey bill.

Gift card kiosk with a stop sign shaped warning sign
This gift card display in a Walgreens in Mae’s neighborhood has a sign warning customers not to give gift card numbers to other people. Some stores display warning signs, but some don’t. David P. Weber

Waiting periods and warning signs are not the only tools that gift card companies could use against fraud. The distributors already have a technology in place that would be even more effective: velocity limits.

If unusually large numbers of gift cards are being purchased at a drugstore or supermarket, for instance, a distributor like Blackhawk could freeze the sale and alert the retailer. Distributors have done this on occasion, but our investigation shows it does not happen with consistency. If sale freezes and alerts happened consistently, consumers would be reporting far fewer large losses to the FTC's fraud database.
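A velocity limit of this kind amounts to a sliding-window counter over activations per store. A minimal sketch, with illustrative thresholds and hypothetical function names (not how Blackhawk or any distributor actually implements it):

```python
from collections import defaultdict, deque

# Illustrative thresholds; a distributor would tune these per retailer.
WINDOW_MINUTES = 30
MAX_ACTIVATIONS = 10  # activations allowed per store per window

_activations = defaultdict(deque)  # store id -> activation times (minutes)

def check_activation(store_id, minute):
    """Decide whether a gift card activation should go through or be held."""
    window = _activations[store_id]
    # Drop activations that have aged out of the window.
    while window and minute - window[0] > WINDOW_MINUTES:
        window.popleft()
    window.append(minute)
    if len(window) > MAX_ACTIVATIONS:
        return "freeze_and_alert"  # hold the sale and notify the retailer
    return "activate"
```

The same counter could run per customer or per register; the point is that the distributor already sees every activation in real time and could act on the pattern.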

Gift cards could also be required to use geofencing. If a card is purchased in Maryland but redeemed on the same day in California or China, that could be a red flag for fraud because the likelihood that someone like Mae would be able to get gift cards to faraway friends or family so quickly is slim. Geofencing would freeze redemption outside a certain geographical area.
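A geofencing rule like the one described reduces to comparing where a card was bought with where it was redeemed within a time window. A minimal sketch, assuming a hypothetical 500 km same-day threshold (not an industry figure):

```python
import math

MAX_KM_SAME_DAY = 500  # illustrative threshold, not an industry figure

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def redemption_suspicious(purchase, redemption):
    """Each argument is a (date, latitude, longitude) tuple."""
    same_day = purchase[0] == redemption[0]
    km = haversine_km(purchase[1], purchase[2], redemption[1], redemption[2])
    return same_day and km > MAX_KM_SAME_DAY
```

A Maryland purchase redeemed the same day in California would trip the rule; a redemption across town, or days later, would not.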

And more simply, retailers could require that gift cards be purchased with a credit or debit card rather than cash to make it easier to reimburse a customer in the event of fraud.

In 2022, around the same time New Jersey was trying to rein in gift card fraud all by itself, Congress passed the Stop Senior Scams Act. The bill created an advisory group of industry members, regulators and law enforcement that is run by the FTC and tasked with studying ways to curtail fraud. Included in the mandate was a focus on technology. The advisory group created a Technology and New Methods subcommittee with about two dozen members, including Blackhawk and the Innovative Payments Association. In the two years since the bill was passed, the main committee has met only twice. Recommendations by federal advisory committees are not binding. And although the Federal Advisory Committee Act requires that committee meetings be open to the public and their records available for public inspection, that requirement does not extend to subcommittees.

The committee is aiming to disrupt fraud, particularly among older adults, by more efficiently sharing information, data and other intelligence, according to committee member Jilenne Gunther, national director of AARP’s BankSafe Initiative.

The industry has pushed consumer education as the best response to the gift card fraud epidemic, even as signage and public service announcements have shown questionable effectiveness. “Consumer education … puts the burden of protection on the targets of fraud,” Marti DeLiema, assistant professor of social work at the University of Minnesota, testified at an Elder Justice Coordinating Council hearing in 2022. At the same time, “fraud targets are often in states of emotional distress.”

Some retailers are also training their cashiers to be alert to seniors inexplicably buying fistfuls of gift cards, but these efforts are not always standardized across the industry. Expecting a clerk earning minimum wage to prevent a fearful senior from legally buying gift cards is likely unrealistic.

Blackhawk did not respond to multiple requests for interviews and declined to answer emailed questions. InComm Payments declined to make anyone available for an interview and did not answer detailed emailed questions.

In its letter opposing the New Jersey bill, the Innovative Payments Association argued that the industry was “highly regulated,” required to adhere to federal requirements and “strict federal anti-money laundering regulations.”

In practice, that’s not the case.

The criminals direct Mae to crypto

Before sending Mae to buy gift cards, the fraudsters tried another scheme. Alivia directed Mae to a Shell gas station with a Cash2Bitcoin ATM inside and told her that if she put her money into crypto it would be safe. Mae had never before seen a Bitcoin ATM. Alivia talked her through registering for an account, including uploading her driver’s license, a know-your-customer requirement that doesn’t exist for gift cards.

ATM machine with a screen that says Bitcoin
You can convert cash to cryptocurrency at Bitcoin automated teller machines. AP Photo/Ted Shaffrey

As Mae fed thousands of dollars into the machine, another elderly woman stood behind her impatiently. I need to get money to send to my nephew, she told Mae. Much later, Mae would realize that the woman was probably being scammed, too. At $15,000, the ATM hit its limit on deposits, and the bills Mae was still feeding in came flying back out. She jammed the receipts into her purse and hurriedly gathered the cash off the floor.

The fraudsters then sent Mae to the area’s two other crypto ATMs, but neither worked. It was 5 p.m. and getting dark. Mae hadn’t eaten all day. Alivia asked if Cash2Bitcoin had sent her a receipt for the $15,000. No, Mae replied, forgetting she had shoved it into her purse. Alivia told her to call and find out what the holdup was. Mae’s phone conversation with Cash2Bitcoin was concerning enough that the man at the exchange froze Mae’s money.

Stymied, Alivia handed the call off to her “supervisor,” Mike Ross. Faced with a crypto dead end, but unwilling to relinquish a chance at the remaining $5,000, Ross directed Mae to a Rite Aid near her house to buy gift cards.

Loopholes and laggards

Gift card companies can claim they are “highly regulated” because of legislation passed after the 2008 financial crisis. The uproar after Simon Property Group flouted state consumer protection laws led Congress to pass the Credit CARD Act in 2009. The law eliminated many of the garbage fees on gift cards and prohibited cards from expiring for at least five years. It also encouraged states to legislate their own reforms by allowing state law to preempt federal law. But the law didn't extend existing credit and debit card consumer fraud protections to gift card purchasers.

As part of the wave of financial reform, Congress also created a single regulator for consumer financial protection: the Consumer Financial Protection Bureau. The law removed regulation-writing authority from the Federal Reserve and gave enforcement and rule-writing authority solely to the bureau. It also took away examination and enforcement of all nonbank financial products from the Fed, the FDIC and the Office of the Comptroller of the Currency. Federal consumer protection – bank or nonbank – would ostensibly now be handled only by this new single regulator.

In the 15 years since the Consumer Financial Protection Bureau was created, there has been a rise in consumer financial products outside of banks, but the new agency hasn’t kept up. As part of the rules it issued in 2016 and 2018, it exempted most gift cards, open- and closed-loop alike, from regulation.

While the bureau declined multiple requests to explain why gift cards were exempted from its consumer protection rules for fraud, it did point to resources including a flowchart showing what types of electronic payment methods would be covered under its rules. The chart, a near-incomprehensible tangle of arrows and scenarios, shows how most prepaid gift cards are exempt from the fraud consumer protection regulations common for debit and credit cards, including all gift cards and branded reloadable cards purchased in retail drugstores and supermarkets. This exemption exists even though these prepaid cards rely on electronic activation and maintenance, which is the purpose of existing laws such as the Electronic Fund Transfer Act.

Flowchart with ten layers of yes and no and arrows that sometimes skip layers
Page 1 of a five-page flowchart explaining the types of gift cards that are covered by Consumer Financial Protection Bureau rules. All gift cards and branded reloadable cards purchased in retail drugstores and supermarkets are exempt. Consumer Financial Protection Bureau

The FTC’s authority

Aside from the Consumer Financial Protection Bureau, the FTC and the Treasury Department have responsibilities that could protect consumers like Mae from gift card fraud. Yet, to date, their actions concerning gift cards are spotty at best.

The FTC is the original consumer protection agency. It can regulate “unfair or deceptive” acts or practices in commerce and provides annual statistics of consumer reports of fraud in all products and services. It provides advice about avoiding scammers, and consumers can fill out a form and join other tragic stories in a growing database, but there is little consequence for the companies involved. The FTC contends it has jurisdiction to bring enforcement actions against gift card nonbank entities for unfair or deceptive acts or practices, but the last time it appears to have done so was in 2007.

The FTC provided a background interview and sent a follow-up memo, but it declined to answer questions about the differences between its authority and that of the Consumer Financial Protection Bureau, or confirm which agency is the primary federal regulator of gift cards.

More agencies, little oversight

The Treasury could also get involved. Two agencies of the U.S. Department of Treasury tackle fraud that touches on national security, terrorism and transnational gangs. Increasingly, criminals from China, Iran, North Korea, Russia and the occupied areas of Ukraine target Americans with tacit, and sometimes explicit, state support. These Treasury agencies have also largely given gift cards a pass, exempting them from controls in place to combat these crimes, even though there is evidence that the cards are being used by international criminals.

The Financial Crimes Enforcement Network, a bureau of the Treasury Department, requires two types of reports that can involve gift cards: currency transaction reports for transactions of $10,000 or more that are made in cash, and confidential suspicious activity reports for a variety of transactions of any value that the filer considers suspicious, including suspected elder financial exploitation.

Financial institutions, including banks and businesses such as car dealerships, casinos, antique dealers and money service providers, are required to file the reports. These include money transmitters – companies such as Western Union and MoneyGram – that work through retail establishments such as supermarkets and Walmart to send money overseas or to another city rather than using a bank wire transfer.

Those businesses must obtain personal identification information, such as a Social Security number and driver’s license from the person conducting the transactions, for the report. Financial institutions file millions of reports every year.

In 2011, with gift cards still in their infancy, the Financial Crimes Enforcement Network issued a regulation to amend the money service business definition to address prepaid access products such as gift cards.

But despite law enforcement concerns, the agency exempted open-loop cards up to $1,000 that weren’t used internationally and closed-loop gift cards up to $2,000 from the money-laundering regulation. For closed-loop cards, there was no restriction on international use.

The Financial Crimes Enforcement Network also didn’t limit aggregation for gift cards. Banks and money service businesses are generally required to aggregate transactions made on the same day from multiple locations and must report if the total amount goes over $10,000 for the day. For gift cards, however, there is no aggregate tracking requirement, so fraudsters can direct seniors to multiple stores in a day – even stores from the same chain – to buy $2,000 worth of gift cards at each, racking up tens of thousands of dollars.
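The aggregation gap described above can be illustrated with a short sketch. This is hypothetical data and deliberately simplified logic, not actual compliance software: cash transactions are summed per customer per day against the $10,000 reporting threshold, while gift card purchases fall outside the tally entirely.

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000  # currency transaction report trigger, in dollars

def flag_reportable(transactions):
    """Sum each customer's same-day cash transactions and flag any
    (customer, date) pair whose total reaches the CTR threshold.
    Gift card purchases are skipped, mirroring the exemption: with no
    aggregate tracking requirement, split purchases never trip it."""
    totals = defaultdict(int)
    for customer, day, amount, product in transactions:
        if product == "gift_card":
            continue  # exempt: no aggregation across purchases
        totals[(customer, day)] += amount
    return {key for key, total in totals.items() if total >= CTR_THRESHOLD}

# Six $2,000 gift card purchases in one day are never flagged,
# while a single $10,500 cash deposit is.
txns = [("A", "2023-03-28", 2_000, "gift_card")] * 6
txns.append(("B", "2023-03-28", 10_500, "cash_deposit"))
print(flag_reportable(txns))  # customer A's $12,000 goes unreported
```

The sketch shows why directing a victim to several stores in one day defeats the reporting regime: each $2,000 purchase is individually under every limit, and nothing ever adds them up.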

The Financial Crimes Enforcement Network’s rule specifies that “categories of prepaid access products and services were exempted because they pos[ed] lower risks of money laundering and terrorist financing,” despite noting that law enforcement disagreed.

In response to our detailed questions, the Treasury’s Financial Crimes Enforcement Network declined to say how many, if any, regulatory examinations it or the IRS on its behalf has conducted of gift card providers. “Any information or statistics that we can share publicly are located on our website,” Financial Crimes Enforcement Network spokesperson Steve Hudak wrote in an email that also included resource links. “FinCEN declines further comment.”

The final agency in the gift card regulatory puzzle is Treasury’s Office of Foreign Assets Control, which administers and enforces economic sanctions programs against countries and groups of individuals, including foreign hackers and fraudsters targeting the United States.

But because gift card purchasers don’t have to show identification and can provide the card number or a text picture of the card to someone overseas, gift card companies can’t prevent sanctioned people, groups or nations from using their products.

Only one enforcement action appears to have been taken by the Office of Foreign Assets Control against a gift card provider.

In 2022, after Tango Card, now a division of Blackhawk, self-reported that its cards had been used to purchase goods or services in sanctioned jurisdictions, including Iran, North Korea, Syria and Russian-occupied areas of Ukraine, the office fined the company $116,048.60.

The Office of Foreign Assets Control did not respond to repeated requests for comment.

Mae sends the police away

At Rite Aid, Ross instructed Mae to purchase three types of gift cards: two $500 Nordstrom cards, two $500 Target cards and one $200 Macy’s card.

Gift card rack at the Rite Aid where Mae was instructed by a fraudster to buy gift cards and give the fraudster the gift card numbers. David P. Weber

Given the size of the purchase, the Rite Aid cashier called over the manager. Mae lied and said she needed the gift cards for her grandson. Likely due to the $2,000 limit Rite Aid imposed on daily purchases of closed-loop gift cards, the drugstore would sell her only the four Nordstrom and Target cards for a total of $2,000. Back in her car, Mae scratched the back of the cards to reveal the numbers and read them to Ross.

The front and back of a Target gift card. Removing the cardboard tab uncovers a scratch-off area; scratching reveals the card numbers. Once a fraudster has the numbers, the money on the card can be quickly spent online. The Conversation, CC BY-ND

He was about to direct her to the next stop when there was a knock on the car window. It was a police officer. Mae had been scheduled to cook dinner for a gentleman friend who had become worried by her absence and contacted the local police. They’d tracked down her car. Ross told her to get rid of the cop by inventing a story. He’d stay on the line to listen. She rolled down the window and did as Ross instructed, reassuring the officer that all was well, and she’d be home soon.

Gift card rack at the Food Lion where Mae tried to buy gift cards. David P. Weber

When the policeman left, Ross sent Mae to a nearby Food Lion supermarket to buy more gift cards. The Food Lion was close to Mae’s house, and the store manager knew her. He refused to sell her the gift cards. This is a scam, he told her. It was now almost 8 pm. Resigned, Ross instructed her to go home but not tell anyone what had transpired.

The fear bubble lifts

By the time Mae pulled into her driveway, the ether had lifted and she knew she’d been scammed. “It was a big fat light bulb: ‘You’ve been screwed,’” she said.

Mae called M&T and learned there was no open fraud case. She called Target. Only 30 minutes had elapsed since she purchased the gift cards at Rite Aid, but they’d already been spent.

Recent prosecutions of Chinese gift card draining rings have revealed that the criminals employ networks of mules. These low-level employees are already positioned to buy goods in person once gift card numbers are obtained. And there are other avenues to monetize the gift cards besides an army of low-level buyers. On the Russian-owned Telegram app, dozens of gift card marketplaces sell illegally obtained cards. The traffic in illicit gift cards appears to be growing in popularity because it’s possible to move huge sums of money offshore anonymously with little to no regulatory controls.

“The reduced fraud protection makes it easy for cybercriminals to find buyers,” said Ensar Seker, advisory chief information security officer at SOCRadar, a cybersecurity firm that monitors the channels.

The cards are usually sold for 50% to 75% of face value, based on the risk incurred in obtaining them, according to Seker. If cards need to be moved quickly because they were acquired through hacking and likely to be canceled, they’re worth closer to 50%. Cards obtained by fraud are worth closer to 75%, because there is little risk of being caught for using one.

Retailers aren’t required to know who their customers are. So the retailer issuing the card has no idea whether the cardholder is the person who bought it, someone who was gifted the card, a fraudster or someone who purchased it from a fraudster on Telegram or the dark web. Sometimes criminals will report the cards stolen and receive a new number to cover their tracks. Because the retailer doesn’t know who bought the card, it can’t tell that it’s the fraudster making the call.

Increasingly, cryptocurrencies can be traced and recovered, said Seker, but gift cards cannot.

“The most important aspect for the criminal is to stay anonymous and untraceable. Gift cards allow this,” he said.

Epilogue

Investigators tried to pursue the criminals responsible for scamming Mae. Her case was referred to a special elder financial exploitation team. Investigators met with Mae less than a week after the fraud.

The phone numbers the fraudsters used in speaking to Mae were internet lines from a service provider that had little information to offer and denied any responsibility. The phone service had been purchased using an open-loop gift card, so there was no record of who purchased the service.

Mae had thrown out the gift cards but gave the investigators the Rite Aid receipts, which had partial numbers of the gift cards, similar to ATM receipts. The investigators subpoenaed Rite Aid for the full gift card numbers using the postal mailing address the store provided for subpoenas.

After a substantial delay, Rite Aid responded to the subpoena, claiming it couldn’t provide the full card numbers using its point-of-sale records. Investigators later connected with a regional loss prevention manager at a different store who provided the full gift card numbers that Rite Aid corporate headquarters claimed in its subpoena response it didn’t have.

The investigators then subpoenaed Nordstrom and Target. But by that time there was no information left to provide. Store surveillance footage was months gone, overwritten with new footage. The retailers had no records of who had used the cards. So despite immediate action by law enforcement, the criminals had vanished, along with Mae’s $2,000.

Mae got most of her bitcoin money back, thanks to the compliance efforts and fraud freeze that had been placed on her bitcoin account on the day of the fraud.

Fraud against the elderly, including through gift cards, continues to grow, and demographic trends suggest it will only get worse. In 2023, Americans 65 and older represented 17.3% of the population, about 57.8 million people. By 2040, they will be 22% of the population, numbering more than 78 million. By 2060, that number is expected to reach 88.8 million.

These seniors will be sitting on nest eggs accrued over a lifetime, and fraudsters want a piece of them.

Mae reported her story to the local police, AARP and the FTC database. “It can happen to anyone,” she said.

The Conversation

Dr. David P. Weber receives funding from the Administration for Community Living (ACL), U.S. Department of Health and Human Services (HHS) to combat elder financial and high-tech exploitation on the Eastern Shore of the Chesapeake Bay, Maryland. The award totals $2.6 million of financial assistance, with 80 percent funded by ACL/HHS and 20 percent funded by Maryland state and local government sources. The contents of this investigative story are those of the authors and do not necessarily represent the official views of, or an endorsement by, ACL/HHS or the U.S. Government.

Jake Bernstein does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Deepfake, AI or real? It’s getting harder for police to protect children from sexual exploitation online

Shutterstock

Artificial intelligence (AI), now an integral part of our everyday lives, is becoming increasingly accessible and ubiquitous. Consequently, there’s a growing trend of AI advancements being exploited for criminal activities.

One significant concern is the ability AI provides to offenders to produce images and videos depicting real or deepfake child sexual exploitation material.

This is particularly important here in Australia. The CyberSecurity Cooperative Research Centre has identified the country as the third-largest market for online sexual abuse material.

So, how is AI being used to create child sexual exploitation material? Is it becoming more common? And importantly, how do we combat this crime to better protect children?

Spreading faster and wider

In the United States, the Department of Homeland Security refers to AI-created child sexual abuse material as being:

the production, through digital media, of child sexual abuse material and other wholly or partly artificial or digitally created sexualised images of children.

The agency has recognised a variety of ways in which AI is used to create this material. These include generating images or videos that contain real children, and using deepfake technologies, such as de-aging, or misusing a person’s innocent images, audio or video to generate offending content.

Deepfakes refer to hyper-realistic multimedia content generated using AI techniques and algorithms. This means any given material could be partially or completely fake.

The Department of Homeland Security has also found guides on how to use AI to generate child sexual exploitation material on the dark web.

The child safety technology company Thorn has also identified a range of ways AI is used in creating this material. It noted in a report that AI can impede victim identification. It can also create new ways to victimise and revictimise children.

Concerningly, the ease with which the technology can be used helps generate more demand. Criminals can then share information about how to make this material (as the Department of Homeland Security found), further proliferating the abuse.

How common is it?

In 2023, an Internet Watch Foundation investigation revealed alarming statistics. Within a month, a dark web forum hosted 20,254 AI-generated images. Analysts assessed that 11,108 of these images were most likely criminal. Using UK laws, they identified 2,562 that satisfied the legal requirements for child sexual exploitation material. A further 416 were criminally prohibited images.

Similarly, the Australian Centre to Counter Child Exploitation, set up in 2018, received more than 49,500 reports of child sexual exploitation material in the 2023–2024 financial year, an increase of about 9,300 over the previous year.

About 90% of deepfake materials online are believed to be explicit. While we don’t know exactly how many include children, the statistics above indicate many do.

Australia has recorded thousands of reports of child sexual exploitation. Shutterstock

These data highlight the rapid proliferation of AI in producing realistic and damaging child sexual exploitation material that is difficult to distinguish from genuine images.

This has become a significant national concern. The issue was particularly highlighted during the COVID pandemic when there was a marked increase in the production and distribution of exploitation material.

This trend has prompted an inquiry and a subsequent submission to the Parliamentary Joint Committee on Law Enforcement by the Cyber Security Cooperative Research Centre. As AI technologies become even more advanced and accessible, the issue will only get worse.

Detective Superintendent Frank Rayner from the research centre has said:

the tools that people can access online to create and modify using AI are expanding and they’re becoming more sophisticated, as well. You can jump onto a web browser and enter your prompts in and do text-to-image or text-to-video and have a result in minutes.

Making policing harder

Traditional methods of identifying child sexual exploitation material, which rely on recognising known images and tracking their circulation, are inadequate in the face of AI’s ability to rapidly generate new, unique content.

Moreover, the growing realism of AI-generated exploitation material is adding to the workload of the victim identification unit of the Australian Federal Police. Federal Police Commander Helen Schneider has said:

it’s sometimes difficult to discern fact from fiction and therefore we can potentially waste resources looking at images that don’t actually contain real child victims. It means there are victims out there that remain in harmful situations for longer.

However, emerging strategies are being developed to address these challenges.

One promising approach involves leveraging AI technology itself to combat AI-generated content. Machine learning algorithms can be trained to detect subtle anomalies and patterns specific to AI-generated images, such as inconsistencies in lighting, texture or facial features the human eye might miss.

AI technology can also be used to detect exploitation material, including content that was previously hidden. This is done by gathering large data sets from across the internet, which are then assessed by experts.

Collaboration is key

According to Thorn, any response to the use of AI in child sexual exploitation material should involve AI developers and providers, data hosting platforms, social platforms and search engines. Working together would help minimise the possibility of generative AI being further misused.

In 2024, major technology companies such as Google, Meta and Amazon came together to form an alliance to fight the use of AI for such abusive material. The chief executives of major social media companies also faced a US Senate committee on how they are preventing online child sexual exploitation and the use of AI to create these images.

The collaboration between technology companies and law enforcement is essential in the fight against the further proliferation of this material. By leveraging their technological capabilities and working together proactively, they can address this serious national concern more effectively than working on their own.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

What are ‘mule addresses’? Criminologists explain how vacant properties serve as depots for illegal online purchases

Nobody's home, just as the sender intended. AndreyPopov/ iStock via Getty Images Plus

Online shopping isn’t just a convenient way to buy batteries, diapers, computers and other stuff without going to a brick-and-mortar store.

Many Americans also use the internet to quietly acquire illegal, fake and stolen items. Guns, prescription drugs no doctor has ordered and checks are on this long list, as well as cloned credit cards, counterfeit passports and phony driver’s licenses.

Because buyers and sellers alike realize that the authorities can detect illegal online transactions, criminals and their customers prefer covert online platforms that protect user anonymity, such as Tor, or encrypted messaging applications like Telegram and WhatsApp. Buyers and sellers also use digital wallets and cryptocurrencies to further conceal their identities.

As scholars of high-tech crime, we were eager to solve a riddle. Having these items shipped to the buyers’ homes or offices would make it easy for authorities to catch them. So how do people who buy these illegal items maintain their anonymity when they take possession of items they purchased on the dark web?

They mostly use vacant residential properties, called “mule addresses” or “drop addresses.” Once the illegal goods or phony documents get delivered – presumably without the owners’ knowledge – to the doorstep of the uninhabited home, the buyer or a middleman picks them up. This practice makes it very hard to trace these transactions.

Penchant for sharing

To discover where these items change hands, we took advantage of the inclination of some of the criminal vendors to share images on Telegram of the parcels they send, along with the illicit items.

They use this strategy to build their reputations, earn the trust of buyers and market their services.

Not all users of online underground markets do this, but we still spotted thousands of packages delivered this way over a period of two years.

In one case, we found a photo of a forged or stolen check alongside the mailed envelope used for its delivery on a Telegram channel dedicated to trading stolen and counterfeit checks.

The label on the envelope bears not only the shipping date but also the Wyoming address where it was sent. Armed with this information, anyone can retrieve related details by searching online. We found an apartment complex at that address with several units for rent.

A forged or stolen check alongside the envelope used to mail it to the person who bought it on the dark web. Screen capture by David Maimon, CC BY-NC-ND

Guns, drugs and rentals

We also found that criminal vendors use mule addresses as their sender address. In one example, we found a video, uploaded in April 2023, of an assault rifle apparently shipped from an Arizona address after being purchased on an underground gun market. At the time, that property was for sale.

An illegal firearm vendor uploaded a video of an assault rifle being shipped to a customer. Screen capture by David Maimon, CC BY-NC-ND, CC BY-NC-ND

We found a similar video documenting the punctual delivery of what we believe to be illegal drugs. Considering that the video has been circulating in illegal drug markets that we monitor, it’s reasonable to assume that the package contains narcotics or prescription drugs.

The footage portrays a satisfied customer who has just gotten the drugs. We looked up the recipient’s address, which is discernible in the video.

It’s a property in North Las Vegas, Nevada, which was listed for sale at the time of delivery – although it seems to have later been sold. The anticipated delivery date, March 28, 2023, coincided with the day the package in the video was received.

One of the illegal digital marketplaces we identified is a hub for sales of prescription drugs including OxyContin, Viagra, Adderall and Valium. It’s linked to an administrator who presides over several Telegram channels.

The administrator has shared photos on those channels that allowed us to see tracking numbers associated with packages they’d mailed. By collating the tracking numbers from April 20 to May 23, 2023, we compiled a comprehensive database of those addresses and the statuses of those properties when the packages were delivered.

We found that 72% of the 650 deliveries in this database were to properties listed for sale, and the rest were to properties unoccupied for other reasons. The average time that elapsed between a property listing and an illicit package being delivered there was nine days.
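As a rough sketch of that calculation, using made-up records rather than the investigators’ actual data, the share of deliveries to for-sale properties and the average listing-to-delivery gap can be computed like this:

```python
from datetime import date

# Hypothetical delivery records: property status when the package
# arrived, plus listing and delivery dates where applicable.
deliveries = [
    {"status": "for_sale", "listed": date(2023, 4, 20), "delivered": date(2023, 4, 29)},
    {"status": "for_sale", "listed": date(2023, 5, 1),  "delivered": date(2023, 5, 10)},
    {"status": "vacant_other", "listed": None,          "delivered": date(2023, 5, 2)},
]

for_sale = [d for d in deliveries if d["status"] == "for_sale"]
share = len(for_sale) / len(deliveries)
avg_gap = sum((d["delivered"] - d["listed"]).days for d in for_sale) / len(for_sale)

print(f"{share:.0%} of deliveries went to for-sale properties")
print(f"average listing-to-delivery gap: {avg_gap:.0f} days")
```

With the real database of 650 deliveries, the same two summary statistics yielded the 72% share and nine-day average reported above.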

Be on guard

We haven’t yet learned of any criminals who were convicted of criminally using mule addresses to deliver illegal packages.

Because criminals take advantage of vacant residential properties listed for sale or rent by unsuspecting homeowners to protect their anonymity, we believe that it’s important for landlords and people who are selling or renting homes to protect themselves from these crimes of commerce.

Some of the same strategies that enhance safety in other regards can help, such as installing surveillance cameras and employing property managers.

The Conversation

David Maimon receives funding from Department of Homeland Security and other private organizations.

Saba Aslanzadeh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Behind the scenes of the investigation: Heists Worth Billions

David Maimon's cybersecurity research group noticed a flood of checks in underground markets, which opened a window into much broader criminal activity. Collage by Kimberly Patch

Professor David Maimon is director of the Evidence-Based Cybersecurity Research Group at Georgia State University.

He and his group are well familiar with what happens on the dark web, which consists of websites that look like ordinary websites but can be reached only using special browsers or authorization codes and are often used to sell illegal commodities.

In this behind-the-story video, Maimon shows some of the hundreds of thousands of bank-related images that he and his team have collected from the dark web and text message applications, and the research these discoveries spurred them to do. That research sparked the investigative story Heists Worth Billions, which Maimon teamed up to write with The Conversation’s senior investigative editor Kurt Eichenwald. Here’s how Maimon and colleagues uncovered the crimes, and his remarks from a follow-up interview.

Maimon’s group was monitoring images posted on the dark web when it found the initial clues that something big was afoot.

My group and I spend a lot of time on underground markets in which criminals sell all kinds of illicit commodities. We see a lot of counterfeit products. We see a lot of identities. And in mid-2021 we started to see a lot of checks flooding the markets.

Those checks led us down a path where we realized that thousands of sham bank accounts were being created to steal and launder money.

The group’s first realization was about the volume of deposits.

Folks were using multiple accounts simultaneously to deposit the high volume of checks. They were simply purchasing from the markets and depositing on different accounts.

For example, three checks would be deposited into three different bank accounts by a single criminal.

Group members connected another clue that showed them how the criminals were getting access to multiple accounts.

We saw numerous debit cards and realized that the criminals were using those debit cards to deposit all the checks they stole or purchased.

Then, in June 2022, the group made a key observation.

Criminals were posting screenshots from bank accounts with balances showing zero.

We realized that these screenshots of zero-balance bank accounts were advertisements – they were selling bank accounts that had zero balances.

This led the group to an investigation.

Over six months we tracked a single criminal, counting the number of images of credit cards and the number of screenshots of bank accounts showing zero balances that he posted.

We’re seeing this increasing trend from one single actor and, of course, being out there in the ecosystem, we are able to see more and more copycats: more and more folks like the individual we’re monitoring, offering their services.

And a conclusion about what allowed this to happen.

If a criminal opens a credit card under someone else’s name, when the person realizes something is wrong and freezes the credit card, the criminal can’t use that identity anymore.

But with bank accounts, it’s a different story, because the credit freeze does not affect your ability to establish a new bank account under someone else’s name.

Maimon gives some advice on how to protect your identity.

Make sure you freeze your credit. Make sure you purchase some kind of identity theft protection plan, which will alert you every time someone is using your identity. And simply monitor your bank account on a daily basis, monitor your credit card.

Freezing your credit ensures that no one can access your credit report unless you actively lift the freeze.

He talks about what’s next for his research group.

We’re trying to understand how all those identities are actually being used in the context of money laundering and, more specifically, sports betting.

And he sounds the alarm.

This is a serious problem that is largely being ignored. It’s our hope that exposing the magnitude of this will help spur action, because far too many people are losing far too much money to this type of crime.


This article accompanies Heists Worth Billions, an investigation from The Conversation that found criminal gangs using sham bank accounts and secret online marketplaces to steal from almost anyone – and uncovered just how little is being done to combat the fraud.

How to protect yourself from drop account fraud – tips from our investigative unit.

Announcing The Conversation’s new investigative unit

The Conversation

David Maimon receives funding from the National Science Foundation, the Criminal Investigations and Network Analysis Center at George Mason University, and other private grants which support the Evidence Based Cybersecurity research group.

Heists Worth Billions: An investigation found criminal gangs using sham bank accounts and secret online marketplaces to steal from almost anyone – and little being done to combat the fraud

In January 2020, Debi Gamber studied a computer screen filled with information on scores of check deposits. As a manager for eight years at a TD Bank branch in the Baltimore suburb of Essex, she had reviewed a flurry of account activity as a security measure. These transactions, though, from the ATM of a tiny TD location nestled in a nearby mall, struck her as suspicious.

Time and again, Gamber saw that these checks were payable to churches – many states away from the Silver Spring shopping center branch – yet had been deposited into personal accounts, a potential sign of theft.

Digging deeper, she determined that the same customer service representative, Diape Seck, had opened at least seven of the accounts, which had received more than 200 church check deposits. Even fishier, the purported account holders had used Romanian passports and driver’s licenses to prove their identities. Commercial bankers rarely see those forms of ID. So why were all these Romanians streaming into a small branch located above a Marshalls clothing store?

Suspecting crimes, Gamber submitted an electronic fraud intake form, then contacted TD’s security department to inform them directly of what she had unearthed. Soon, the bank discovered that Seck had relied on Romanian documents for not just seven accounts but for 412 of them. The bank phoned local police and federal law enforcement to report that an insider appeared to be helping criminals cheat churches and TD.

Nine months after TD’s tip, agents started rounding up conspirators, eventually arresting nine of them for crimes that netted more than US$1.7 million in stolen checks. They all pleaded guilty to financial crimes except for Seck, who was convicted in February 2023 for bank fraud, accepting a bribe and other crimes. He was sentenced in June 2023 to three years in prison.

Sophisticated crimes

How could it happen? How could criminals engineer a yearlong, multimillion-dollar fraud, with victims numbering in the hundreds, just by relying on a couple of employees at two small bank branches?

The answer is: because it’s easy. Crimes like these happen every day across the country. Scams facilitated by deceiving financial institutions – from international conglomerates to regional chains, community banks and credit unions – are robbing millions of people and institutions of billions of dollars. At the heart of this unprecedented crime wave are so-called drop accounts created by street gangs, hackers and even rings of friends. These fraudsters leverage technology to obtain fake or stolen information to create the drop accounts, which are then used as the place to first “drop” and then launder purloined funds.

A person in a white hooded sweatshirt walks toward a U.S. postal carrier
An October 2022 surveillance photo of an armed robber approaching a mail carrier. The Conversation/court records

To better understand the growing phenomenon of drop accounts and their role in far-reaching crime, the Evidence-Based Cybersecurity Research Group at Georgia State University joined The Conversation in a four-month investigation of this financial underworld. The inquiry involved extensive surveillance of criminals’ interactions on the dark web and secretive messaging apps that have become hives of illegal activity. The reporting shows:

  • The technological skills of street gangs and other criminal groups are exceptionally sophisticated, allowing them to loot billions from individuals, businesses, municipalities, states and the federal government.
  • Robberies of postal workers have escalated sharply as fraudsters steal public mailbox keys in the first step of a chain of crimes that ends with drop accounts’ being loaded with millions in stolen funds.
  • A robust, anonymous online marketplace provides everything an aspiring criminal needs to commit drop account fraud, including video tutorials and handbooks that describe tactics for each bank. The dark web and encrypted chat services have become one-stop shops for cybercriminals to buy, sell and share stolen data and hacking tools.
  • The federal government and banks know the scope and impact of the crime but have so far failed to take meaningful action.

“What we are seeing is that the fraudsters are collaborating, and they are using the latest tech,” said Michael Diamond, general manager of digital banking at Mitek Systems, a San Diego-based developer of digital identity verification and counterfeit check detection systems. “Those two things combined are what are driving the fraud numbers way, way up.”

Criminals target letter carriers for their arrow keys, giving them access to public mailboxes. Via Evidence-Based Cybersecurity Research Group.

Billions stolen

The growth is staggering. Financial institutions reported more than 680,000 suspected check frauds in 2022, nearly double the 350,000 such reports the prior year, according to the Treasury Department’s Financial Crimes Enforcement Network, also known as FinCEN. Through internet transactions alone, swindles typically facilitated by drop accounts cost individuals and businesses almost $4.8 billion last year, a jump of about 60% from comparable fraud losses of more than $3 billion in 2020, the Federal Bureau of Investigation reported.

Plus, a portion of the estimated $64 billion stolen from just one COVID-19 relief fund went to gangsters who rely on drop accounts, according to a congressional report and an analysis from the University of Texas at Austin. Criminals using drop accounts also hit the pandemic unemployment relief funds, which experienced improper payments of as much as $163 billion, the Labor Department found. Indeed, experts say the large sums of government money meant to combat economic troubles from COVID-19 fueled the rapid growth of drop account fraud, as trillions of dollars in rescue funds were disbursed in the form of wires and paper checks.

“There were a huge range of criminals who were trained in this during the pandemic,” said one banking industry official who spoke on condition of anonymity because of the sensitivity of the matter. “A lot of them have grown up in the pandemic and seen that it is easy to make a lot of money with these schemes, with very little risk of prosecution.”


Graphic showing a masked criminal on a stamp and saying 'Heists worth billions'
This article is an excerpt from Heists Worth Billions, an investigation from The Conversation that found criminal gangs using sham bank accounts and secret online marketplaces to steal from almost anyone – and uncovered just how little is being done to combat the fraud.

How to protect yourself from drop account fraud – tips from our investigative unit.

Behind the scenes of the investigation

Announcing The Conversation’s new investigative unit

The Conversation

David Maimon receives funding from the National Science Foundation, the Criminal Investigations and Network Analysis Center at George Mason University, and other private grants which support the Evidence Based Cybersecurity research group.

Kurt Eichenwald does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

How to protect yourself from drop account fraud – tips from our investigative unit

Loot stolen from the U.S. Postal Service is displayed on the dark web. Via Evidence-Based Cybersecurity Research Group

The types of crimes that use drop accounts are multiplying rapidly, but there are ways to decrease your chances of becoming a victim.

Protect your identity online by following these steps

To prevent fraud involving a tax return refund or any other tax issue

  • Complete and send in your tax return as early as possible, which makes it more difficult for someone to steal your refund.
  • Establish an identity protection PIN with the IRS, which only you and the agency will know.
  • If the IRS rejects your attempt to file your tax return, or if you receive any unusual mail from the agency such as a tax transcript you didn’t request, or it notifies you of suspicious activity, contact the agency at the number listed here to report possible identity theft.
  • Pay any taxes owed online, not by check.

To prevent losses through business email compromise scams

  • Learn and teach employees basic email safety techniques.
  • Verify urgent emails from supervisors or vendors demanding immediate wire transfers before acting on them. In fact, urgent requests are the most suspicious.
  • Assure employees that double-checking whether these purportedly urgent emails came from the listed sender will not result in criticism or punishment.
  • Never purchase a gift card requested by a supervisor through email or text.
  • Human resources officials should never change bank accounts for direct deposit if employees ask by email or text. Always call to double-check that the request is real.

Graphic showing a masked criminal on a stamp and saying 'Heists worth billions'
This article accompanies Heists Worth Billions, an investigation from The Conversation that found criminal gangs using sham bank accounts and secret online marketplaces to steal from almost anyone – and uncovered just how little is being done to combat the fraud.

The Conversation

Kurt Eichenwald does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Lickable toads and magic mushrooms: wildlife traded on the dark web is the kind that gets you high

Colorado River toad. Shutterstock

The internet has made it easier for people to buy and sell a huge variety of wildlife – from orchids, cacti and fungi to thousands of birds, mammals, reptiles, amphibians and fish, as well as insects, corals and other invertebrates.

But alongside legal trade in wildlife, there’s a dark twin – illegal trading of wildlife. Endangered birds with very few left in the wild. Horns sawn off shot rhinos. The illegal wildlife trade is a blight. It puts yet more pressure on nature, adds to biodiversity loss and threatens biosecurity, sustainable development and human wellbeing globally.

In our new research, we probed the dark web – the secretive section of the internet deliberately set up out of view of search engines. Most people associate the dark web with illicit drug marketplaces. We wanted to see what types of wildlife were being sold there.

The result? Across 51 dark web marketplaces, we found 153 species being sold. These were almost entirely plants and fungi with psychoactive effects, indicating they are part of the well-known dark web drug trade. There were only a small number of advertisements offering vertebrates such as the infamous Colorado River toad, which faces poaching pressure because its skin secretes psychoactive toxins as a defence.

Why aren’t traders in illegal wildlife using the dark web? Mainly because the trade in illegal animals and animal parts is not hidden – it’s all over the open internet. For instance, the frog toxin kambo, used in the ritual that killed a Mullumbimby woman in 2019, is still sold openly.

magic mushrooms
Magic mushrooms from the Psilocybe genus were commonly sold on the dark web. Shutterstock

What was being sold on the dark web?

We found over 3,000 advertisements selling wildlife species on dark web marketplaces between 2014 and 2020. We searched these marketplaces for keywords relating to wildlife trade and species names.

What was for sale? Of the 153 species we found, we verified 68 as containing psychoactive chemicals.

The most commonly traded species was a South American tree, Mimosa tenuiflora, commonly known as jurema preta, whose bark contains an extremely potent hallucinogen, DMT. Plants made up most of the species being sold, with many coming from Central and South America.

We also found 19 species of Psilocybe fungi being sold.


Read more: 'Astonishing': global demand for exotic pets is driving a massive trade in unprotected wildlife


Many species were being sold for their purported medicinal properties, and a small number for clothing, decoration or as pets.

Many of the animals we found on the dark web have a long history of being illegally traded, such as live African grey parrots, as well as elephant ivory, rhino horn, and the teeth and skins of tigers and lions.

We also found small amounts of less commonly documented wildlife, including the Goliath beetle, Chinese golden scorpion and Japanese sea cucumber.

Japanese sea cucumber
Japanese sea cucumbers were also being sold. Shutterstock

The illegal wildlife trade is hard to stop

Globally, the wildlife trade is regulated by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). But the regulated market is just a fraction of the whole. To date, CITES protects less than 5% of traded species. By some estimates, the number of species traded live outnumbers the regulated trade by at least three times.

To date, there have been few effective disincentives to stop traffickers from selling illegal wildlife online. Punishments for convicted wildlife traffickers are not effective, with Australian traffickers continuing to harvest animals even after being caught.

Efforts to combat wildlife trafficking online are increasing. One positive recent initiative is the End Wildlife Trafficking Online coalition. It’s a collaboration between animal NGOs and online platforms like Facebook, Alibaba and eBay aimed at rooting out online trafficking.

While clamping down on illicit open web trade is crucial, crackdowns here make it more likely that a wider range of wildlife will surface on the dark web.

What can be done?

Australia and all other nations that have signed up to CITES have a responsibility to keep track of internet-based wildlife trade. At recent CITES conferences resolutions were made to track and report all internet trade – including on the dark web – in an effort to boost monitoring and enforcement of wildlife trafficked online.

One stumbling block is the legality of online trade, which depends on factors such as the laws of the country or countries involved and whether the sale actually took place.

To stop the trafficking of iconic Australian species such as shingleback lizards and red-tailed black cockatoos, authorities here have to monitor what native species are being bought and sold online, as well as the species trafficked into and through Australia.

Since 2019 we have been monitoring the wildlife trade in Australia, drawing data from over 80 websites and forums.

Datasets like this will be vital in monitoring and combating internet-facilitated wildlife crime as it continues to grow – especially if enforcement drives traffickers to harder-to-access parts of the internet like the dark web.


Read more: New exposé of Australia's exotic pet trade shows an alarming proliferation of alien, threatened and illegal species


The Conversation

Phill Cassey receives funding from the Australian Research Council and previously the Centre for Invasive Species Solutions.

Adam Toomes receives funding from the Australian Research Council and previously the Centre for Invasive Species Solutions.

Charlotte Lassaline previously received funding from the Centre for Invasive Species Solutions.

Freyja Watters receives funding from an Adelaide University Postgraduate Research Scholarship.

Jacob Maher previously received funding from the Centre for Invasive Species Solutions. He is affiliated with the National Tertiary Education Union.

Oliver C. Stringham previously received funding from the Centre for Invasive Species Solutions.

Child sexual abuse review: how research can help to tackle growing online abuse

Each time abuse material is shared, the victim is revictimised. Chinnapong | Shutterstock

In the seven years since the Independent Inquiry into Child Sexual Abuse launched in 2015, it has held more than 300 days of public hearings, processed over 2 million pages of evidence, heard from over 700 witnesses, and engaged with over 7,000 victims and survivors.

One of the most pressing issues the inquiry has raised is that of online-facilitated child sexual abuse. The use of hidden services to distribute online child sexual abuse material globally increased by 155% between 2019 and 2020.

In 2021 alone, Inhope – an organisation that supports 50 hotlines in 46 countries around the world to remove child sexual abuse material from the internet – handled almost 1 million URLs featuring suspected child sexual abuse and exploitation. And the scope and scale of online child sexual abuse show no sign of abating. Of the images and videos reviewed by the Inhope hotlines in 2021, 82% had not been seen before.

A growing threat

In the wake of the scandals involving Jimmy Savile, Rolf Harris and other celebrities, the inquiry was commissioned by the UK government in 2015 to scrutinise the extent to which state and non-state institutions had failed to protect children.

On October 20 2022, this inquiry published its final report. Underlining that child protection should be made a national priority, its report puts forward 20 recommendations, designed to make England and Wales places where children can grow up safely and thrive. These both take in lessons from the past and seek to address evolving challenges, of which online sexual abuse is the most urgent.

Research with survivors shows that when documentation of their abuse is shared online, it affects them differently from the abuse they originally suffered. The images are permanent and the sharing never ends. Online distribution of this kind of material thus results in children being re-victimised each time it is viewed.

The sheer scale of offending in this sphere, and the opportunities afforded to offenders to hide their activities with end-to-end encryption, means that the deck is heavily stacked against a law-enforcement response alone. The inquiry has asserted as much.

The report thus focuses attention on the responsibility of platform providers. It recommends that it become mandatory for all search service and user-to-user service providers to screen any material at the point where it is uploaded. The hope is that this will prevent any child-abuse material from ever getting into the public domain.

This recommendation, of course, only addresses the supply side of the equation. What is also needed is an approach that actively reduces the demand for child sexual abuse material.

Research has a key role to play here. By looking for patterns and insight into the behaviour of people who intend to abuse children, as the collaboration between the Child Rescue Coalition non-profit and the Policing Institute for the Eastern Region is doing, academics can help with the development of tools to support law-enforcement investigations.

Research can also help to design interventions for people who share and consume this abuse material. The Redirection survey (2021) by the Helsinki-based non-profit Suojellaan Lapsia (meaning “Protect Children”) canvassed the views of over 8,000 people on the dark web who accessed abuse images. This survey found that only 13% had sought help, but that 50% wanted to stop and 62% had tried to stop but failed. These findings have helped with the development of a self-help programme for people who search for, view and distribute child sexual abuse material.

A 2021 threat assessment by the We Protect Global Alliance organisation stated that online child sexual abuse represents “one of the most urgent and defining issues of our generation”. Finding ways to tackle the devastating harm caused by this type of abuse, at the root, is crucial. For a sustainable, long-term prevention strategy to make any kind of headway, preventing harm in the first place needs to be prioritised.

The Conversation

Samantha Lundrigan receives funding from The Dawes Trust

The dark web down under: what’s driving the rise and rise of NZ’s ‘Tor Market’ for illegal drugs?

Getty Images

New Zealand is generally proud of being a world leader, but there’s one claim that might not be universally admired: being home to the longest-running English-language market for illegal drugs on the so-called “darknet”.

Known as “Tor Market”, it has been active since March 2018 and has outlived several larger and better known operations such as “Dream Market”, “Hydra Market” and “Empire”. The longevity of Tor Market is surprising, given so many darknet drug markets have only lasted relatively briefly.

That doesn’t mean you’ll be able to find it easily. The darknet is an encrypted portion of the internet not indexed by search engines. It requires specific anonymising browser software to access, typically I2P or Tor software – hence the local market’s name.

Many darknet markets sell illegal drugs anonymously, with delivery by traditional post or courier, and resemble legal e-commerce sites such as Amazon.

An analysis of over 100 darknet markets between 2010 and 2017 found sites were active for an average of just over eight months. Of the more than 110 darknet drug markets active from 2010 to 2019, just ten remained fully operational by 2019.

US authorities announce the arrest of 179 people and seizure of more than US$6.5 million in a worldwide crackdown on darknet opioid trafficking in 2020. Getty Images

The fragmented darknet ecosystem

Darknet marketplaces have disappeared as a result of increasingly sophisticated and successful law enforcement operations, including clandestinely taking over sites for extended periods to gather evidence on vendors and buyers.

Alternatively, site administrators pull off opportunistic exit scams and abscond with cryptocurrency held in accounts.

No dominant international darknet market has emerged since the “voluntary shut down” of Dream Market in 2019. And there appears to be a general loss of confidence in darknet drug supply due to those enforcement shutdowns and exit scams.


Read more: The darknet is not a hellhole, it's an answer to internet privacy


While total sales on all darknet markets increased in 2020, and again in the first quarter of 2021, data for the fourth quarter of 2021 suggest sales declined by as much as 50%.

This makes Tor Market’s performance over the same period even more remarkable. Its listings grew from fewer than ten products in the months prior to Dream Market’s closure in early 2019 to over 100 products by July that year.

After a steady period where there were, on average, 255 listings across 2020 and 379 across 2021, another period of growth happened in early 2022. This saw over a thousand products being listed on Tor Market by mid-2022 (see graph below).

This expansion was driven by a steady increase in international sales, which grew to outnumber domestic New Zealand sales by early 2022.



Filling a market gap

On the face of it, New Zealand may seem an unlikely location for a rising international darknet drug market. Its geographical isolation from large European and US drug markets, small population, and historical absence of any substantial cocaine and heroin supply should all work against it.

Yet these factors may be exactly what has driven this market innovation.

Darknets provide anonymous and direct access to international drug sellers who have MDMA, cocaine and opioids for sale – drug types not easily accessed in physical drug markets in New Zealand. These international sellers are otherwise unlikely to have any interest in supplying such a small, distant market.


Read more: Inside a ransomware attack: how dark webs of cybercriminals collaborate to pull them off


By providing offerings from dozens of international drug sellers and a centralised forum for buyers, Tor Market solves the very real economic problem of “thin markets” in the New Zealand drug scene, where there are simply not enough buyers to sustain sellers for some drug types.

Usually, buyers and sellers would have trouble connecting and hence justifying large-scale international trafficking. Darknets solve this problem by offering retail quantities of drug types that are traditionally difficult to source, such as MDMA, directly to buyers.


Read more: How the world's biggest dark web platform spreads millions of items of child sex abuse material — and why it's hard to stop


Size and scrutiny

New Zealanders have a history of innovative solutions to the so-called “tyranny of distance”. They also have a relatively high level of digital engagement and online shopping habits by international standards. Perhaps darknets offer a familiar online shopping experience.

For their part, the Tor Market administrators claim (based on their own site’s help manual) to offer a range of design innovations and features that ensure the security of Tor Market.

This kind of boasting is not uncommon among darknet operators as a marketing strategy to attract new vendors to a site. And it’s not clear whether Tor Market is really offering any superior security features or coding infrastructure compared to other sites.

More credible is Tor Market’s purported business strategy of purposely seeking to maintain a low profile compared to larger international sites. Indeed, many of the vendors on Tor Market in the early days were New Zealand-based and sold only to local buyers.

The rising international listings on Tor Market may reflect wider problems in the darknet ecosystem, including the closure of previously dominant darknet markets and the unreliability of many sites due to denial-of-service attacks.

In the end, Tor Market’s success may be its undoing. It remains to be seen whether it can sustain its international growth and operate with a higher international profile, given the related risk of international law enforcement looking its way.

The Conversation

Chris Wilkins and Marta Rychert receive funding from the New Zealand Royal Society Te Apārangi Marsden Fund Grant MAU1812.

Marta Rychert receives funding from the New Zealand Royal Society Te Apārangi and NZ Health Research Council.

The ‘Optus hacker’ claims they’ve deleted the data. Here’s what experts want you to know

T. Schneider/Shutterstock

Shortly after Australian telecommunications company Optus announced the identity data of millions of customers had been stolen, a person claiming to be the hacker announced they would delete the data for US$1 million.

When Optus didn’t pay, the purported hacker published 10,000 stolen records and threatened to release 10,000 more every day until the ransom deadline. These leaked records contained identity information such as driver’s license, passport and Medicare numbers, as well as parliamentary and defence contact information.

A few hours after the data drop, the purported hacker unexpectedly apologised and claimed to have deleted the data due to “too many eyes”, suggesting fear of being caught. Optus confirmed it did not pay the ransom.

They’ve said they deleted the data – now what? Is it over?

Communication from the person claiming to be the hacker and the release of 10,200 records have all occurred on a website dedicated to buying and selling stolen data.

The data they released are now easily available and appear to be legitimate data stolen from Optus (their legitimacy has not been verified by Optus or the Australian Federal Police; the FBI in the United States has now been called in to help the investigation).

The question then is – why would the hacker express remorse and claim to delete the data?

Unfortunately, while the purported hacker did appear to possess the legitimate data, there is no way to verify the deletion. We have to ask: what would the hacker gain from claiming to delete them?

It is likely a copy still remains, and it’s even possible the post is a ploy to convince victims not to worry about their security – to increase the likelihood of successful attacks using the data. There is also no guarantee the data were not already sold to a third party.

What next?

Whatever the motivations of the person claiming to be the hacker, their actions suggest we should continue to assume that all records stolen from Optus remain in malicious hands.

Despite the developments, recommendations still stand – you should still be taking proactive action to protect yourself. These actions are good cyber hygiene practices no matter the circumstances.


Read more: What does the Optus data breach mean for you and how can you protect yourself? A step-by-step guide


Extra measures offered recently include changing your driver’s license number and ordering a new passport and Medicare card.

However, it is unclear at this early stage whether free options to change these documents will be made available to all data breach victims, or only a subset of victims.

Can I find out whether my data were part of the 10,200 leaked records?

Reports of people being contacted by scammers suggest the leaked data are already being used.

Troy Hunt, the Australian cyber security professional who maintains HaveIBeenPwned – a website you can use to check whether your data are part of a known breach – has announced he will not add the leaked data to the site at this stage. So this method will not be available.

The best course of action in this case is to assume your data may have been released until Optus notifies people in the coming week.

Are the released data already being used?

The least technically sophisticated method of targeting Optus customers is to use the details to make direct contact and ask for a ransom. There are reports blackmailers are already targeting breach victims via text message, claiming to have the data and threatening to post it on the dark web unless the victim pays.

The data have already been leaked, and claims that they will be deleted are untrue. Paying anyone who makes these claims will not increase the security of your information.

Data recovery scams – where scammers target victims offering help to remove their data from the dark web or recover any money lost for a fee – have also become prominent. Instead of helping, they steal money or obtain more information from the victim. Anyone who claims to be able to scrub the data from the dark web is claiming to put toothpaste back in the tube. It isn’t possible.

The data could also be used to identify family members to make the “Hi Mum” or family impersonation scam more convincing. This involves scammers posing as a family member or friend from a new phone number, often using WhatsApp, in need of urgent financial help. Anyone receiving this kind of text message should make every effort to contact their family member or friend by other means.

What else can my data be used for?

The scams involved with these data will only grow in the coming days and weeks and may not be confined to the digital world.

Other possible uses involve activities like attempting to take over valuable online accounts or your SIM card, or setting up new financial services and SIM cards in your name. The advice we provided in our previous article applies to these.

Additionally, anyone with reason to be concerned about physical safety if their location is known (for example, domestic abuse survivors) should consider the possibility that their names, telephone numbers and addresses may have been leaked or may be in the future.

If you have been the victim of fraud or identity theft as a result of this breach or any others, you can contact IDCare for additional support and ReportCyber to report the crime.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

How the world’s biggest dark web platform spreads millions of items of child sex abuse material — and why it’s hard to stop

Shutterstock

Child sexual abuse material is rampant online, despite considerable efforts by big tech companies and governments to curb it. And according to reports, it has only become more prevalent during the COVID-19 pandemic.

This material is largely hosted on the anonymous part of the internet – the “darknet” – where perpetrators can share it with little fear of prosecution. There are currently a few platforms offering anonymous internet access, including I2P, Freenet and Tor.

Tor is by far the largest and presents the biggest conundrum. The open-source network and browser grants users anonymity by encrypting their information and letting them escape tracking by internet service providers.

Online privacy advocates including Edward Snowden have championed the benefits of such platforms, claiming they protect free speech, freedom of thought and civil rights. But they have a dark side, too.

Tor’s perverted underworld

The Tor Project was initially developed by the US Navy to protect online intelligence communications, before its code was publicly released in 2002. The Tor Project’s developers have acknowledged the potential to misuse the service which, when combined with technologies such as untraceable cryptocurrency, can help hide criminals.


Read more: Explainer: what is the dark web?


Tor is an overlay network that exists “on top” of the internet and merges two technologies. The first is the onion service software. These are the websites, or “onion services”, hosted on the Tor network. These sites require an onion address and their servers’ physical locations are hidden from users.
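The phrase “require an onion address” hides a neat property: a v3 onion address is derived directly from the service’s public key, so the address itself authenticates the service without any central registry. A minimal sketch of that derivation, following Tor’s published v3 address format (the function name here is our own, for illustration):

```python
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a Tor v3 onion address from a 32-byte ed25519 public key.

    Per Tor's v3 scheme: address = base32(pubkey || checksum || version) + ".onion",
    where checksum = SHA3-256(".onion checksum" || pubkey || version)[:2]
    and version is the single byte 0x03.
    """
    assert len(pubkey) == 32, "expected a 32-byte ed25519 public key"
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    # 35 bytes encode to exactly 56 base32 characters (no padding)
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

# Any 32-byte key yields a 56-character label plus ".onion" (62 characters total)
example = onion_v3_address(bytes(32))
```

Because the address encodes the key plus a checksum, a mistyped or spoofed address simply fails to verify – part of why onion services need no DNS-style registry, and why their addresses look like random strings.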

The second is Tor’s privacy-maximising browser. It enables users to browse the internet anonymously by hiding their identity and location. While the Tor browser is needed to access onion services, it can also be used to browse the “surface” internet.

Accessing the Tor network is simple. And while search engine options are limited (there’s no Google), discovering onion services is simple, too. The BBC, New York Times, ProPublica, Facebook, the CIA and Pornhub all have a verified presence on Tor, to name a few.

Service dictionaries such as “The Hidden Wiki” list addresses on the network, allowing users to discover other (often illicit) services.

Hidden Wiki main page screenshot.
The Hidden Wiki main page. Wikimedia Commons

Child sex abuse material and abuse porn is prevalent

The number of onion services active on the Tor network is unknown, although the Tor Project estimates about 170,000 active addresses. The architecture of the network allows partial monitoring of the network traffic and a summary of which services are visited. Among the visited services, child sex abuse material is common.

Of the estimated 2.6 million people who use the Tor network daily, one study reported only 2% (52,000) accessed onion services. This suggests most users access the network to retain their online privacy, rather than to use anonymous onion services.

That said, the same study found from a single data capture that about 80% of traffic to onion services was directed to services which did offer illegal porn, abuse images and/or child sex abuse material.

Another study estimated 53.4% of the 170,000 or so active onion domains contained legal content, suggesting 46.6% of services had content which was either illegal, or in a grey area.

Although scams make up a significant proportion of these services, cryptocurrency services, drug deals, malware, weapons, stolen credentials, counterfeit products and child sex abuse material also feature in this dark part of the internet.

Only about 7.5% of the child sex abuse material on the Tor network is estimated to be sold for a profit. The majority of those involved aren’t in it for money, so most of this material is simply swapped. That said, some services have started charging fees for content.

Several high-profile onion services hosting child sex abuse material have been shut down following extensive cross-jurisdictional law enforcement operations, including The Love Zone website in 2014, Playpen in 2015 and Child’s Play in 2017.

A recent effort led by German police, and involving others including Australian Federal Police, Europol and the FBI, resulted in the shutdown of the illegal website Boystown in May.

But one of the largest child sex abuse material forums on the internet (not just Tor) has evaded law enforcement (and activist) takedown attempts for a decade. As of last month it had 508,721 registered users. And since 2013 it has hosted over a million pictures and videos of child sex abuse material and abuse porn.

The paedophile (eroticisation of pre-pubescent children), hebephile (pubescent children) and ephebophile (adolescents) communities are among the early adopters of anonymous discussion forums on Tor. Forum members distribute media, support each other and exchange tips to avoid police detection and scams targeting them.

The WeProtect Alliance’s 2019 Global Threat Assessment report estimated there were more than 2.88 million users on ten forums dedicated to paedophilia and paraphilia interests operating via onion services.

Countermeasures

There are huge challenges for law enforcement trying to prosecute those who produce and/or distribute child sex abuse material online. Such criminal activity typically falls across multiple jurisdictions, making detection and prosecution difficult.

Undercover operations and novel online investigative techniques are essential. One example is targeted “hacks” which offer law enforcement back-door access to sites or forums hosting child sex abuse material.

Such operations are facilitated by cybercrime and transnational organised crime treaties which address child sex abuse material and the trafficking of women and children.

Given the volatile nature of many onion services, a focus on onion directories and forums may help with harm reduction. Little is known about child sex abuse material forums on Tor, or the extent to which they influence onion services hosting this material.

Apart from coordinating to avoid detection, forum users can also share information about police activity, rate onion service vendors, share sites and expose scams targeting them.

The monitoring of forums by outsiders can lead to actionable interventions, such as the successful profiling of active offenders. Some agencies have explored using undercover law enforcement officers, civil society, or NGO experts (such as from the WeProtect Global Alliance or ECPAT International) to promote self-regulation within these groups.

While there is a lack of research on this, reformed or recovering offenders can also provide counsel to others. Some sub-forums seek to offer education, encourage treatment and reduce harm — usually by focusing on the legal and health issues associated with consuming child sex abuse material, and ways to control urges and avoid stimuli.

Other contraband services also play a role. For instance, onion services dedicated to drug, malware or other illicit trading usually ban child sex abuse material that creeps in.

Why does the Tor network allow such abhorrent material to remain, despite extensive opposition — sometimes even from those within these groups? Surely those representing Tor have read complaints in the media, if not survivor reports about child sex abuse material.


Read more: The darknet – a wild west for fake coronavirus 'cures'? The reality is more complicated (and regulated)


The Conversation

Roderic Broadhurst has received funding for a variety of research projects on cybercrime and darknet markets from the Australian Research Council, the Australian Institute of Criminology, the Korean Institute of Criminology and the Australian Criminology Research Council. Since April 2019 he has served on the Australian Centre to Counter Child Exploitation Research Working Group.

Matthew Ball does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Here’s how much your personal information is worth to cybercriminals – and what they do with it

The black market for stolen personal information motivates most data breaches. aleksey-martynyuk/iStock via Getty Images

Data breaches have become common, and billions of records are stolen worldwide every year. Most of the media coverage of data breaches tends to focus on how the breach happened, how many records were stolen and the financial and legal impact of the incident for organizations and individuals affected by the breach. But what happens to the data that is stolen during these incidents?

As a cybersecurity researcher, I track data breaches and the black market in stolen data. The destination of stolen data depends on who is behind a data breach and why they’ve stolen a certain type of data. For example, when data thieves are motivated to embarrass a person or organization, expose perceived wrongdoing or improve cybersecurity, they tend to release relevant data into the public domain.

In 2014, hackers backed by North Korea stole Sony Pictures Entertainment employee data such as Social Security numbers, financial records and salary information, as well as emails among top executives. The hackers then published the emails to embarrass the company, possibly in retribution for releasing a comedy about a plot to assassinate North Korea’s leader, Kim Jong Un.

Sometimes when data is stolen by national governments it is not disclosed or sold. Instead, it is used for espionage. For example, the hotel company Marriott was the victim of a data breach in 2018 in which personal information on 500 million guests was stolen. The key suspects in this incident were hackers backed by the Chinese government. One theory is that the Chinese government stole this data as part of an intelligence-gathering effort to collect information about U.S. government officials and corporate executives.

But the majority of hacks seem to be about selling the data to make a buck.

It’s (mostly) about the money

Though data breaches can be a national security threat, 86% are about money, and 55% are committed by organized criminal groups, according to Verizon’s annual data breach report. Stolen data often ends up being sold online on the dark web. For example, in 2018 hackers offered for sale more than 200 million records containing the personal information of Chinese individuals. This included information on 130 million customers of the Chinese hotel chain Huazhu Hotels Group.

Similarly, data stolen from Target, Sally Beauty, P.F. Chang’s, Harbor Freight and Home Depot turned up on a known online black-market site called Rescator. While it is easy to find marketplaces such as Rescator through a simple Google search, other marketplaces on the dark web can be found only by using special web browsers.

Buyers can purchase the data they are interested in. The most common way to pay for the transaction is with bitcoins or via Western Union. The prices depend on the type of data, its demand and its supply. For example, a big surplus of stolen personally identifiable information caused its price to drop from US$4 for information about a person in 2014 to $1 in 2015. Email dumps containing anywhere from a hundred thousand to a couple of million email addresses go for $10, and voter databases from various states sell for $100.

Where stolen data goes

Buyers use stolen data in several ways. Credit card numbers and security codes can be used to create clone cards for making fraudulent transactions. Social Security numbers, home addresses, full names, dates of birth and other personally identifiable information can be used in identity theft. For example, the buyer can apply for loans or credit cards under the victim’s name and file fraudulent tax returns.

Sometimes stolen personal information is purchased by marketing firms or companies that specialize in spam campaigns. Buyers can also use stolen emails in phishing and other social engineering attacks and to distribute malware.

Hackers have targeted personal information and financial data for a long time because they are easy to sell. Health care data has become a big attraction for data thieves in recent years. In some cases the motivation is extortion.

A good example is the theft of patient data from the Finnish psychotherapy practice firm Vastaamo. The hackers used the information they stole to demand a ransom from not only Vastaamo, but also from its patients. They emailed patients with the threat to expose their mental health records unless the victims paid a ransom of 200 euros in bitcoins. At least 300 of these stolen records have been posted online, according to an Associated Press report.

Stolen data including medical diplomas, medical licenses and insurance documents can also be used to forge a medical background.

How to know and what to do

What can you do to minimize your risk from stolen data? The first step is to find out if your information is being sold on the dark web. You can use websites such as haveibeenpwned and IntelligenceX to see whether your email was part of stolen data. It is also a good idea to subscribe to identity theft protection services.
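
As an illustration of how such lookups can protect the query itself, Have I Been Pwned's Pwned Passwords API uses a k-anonymity scheme: a client sends only the first five characters of a password's SHA-1 hash and compares the returned suffixes locally, so the password never leaves the machine. A minimal sketch (the endpoint and scheme are as publicly documented; the helper function name is ours):

```python
import hashlib

def hibp_range_query(password: str):
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix would be sent to
    https://api.pwnedpasswords.com/range/<prefix>; the suffix is
    matched locally against the returned list, so the full hash
    (and the password) is never transmitted.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password")
print(prefix, suffix)  # the prefix is all the remote service ever sees
```

Checking whether an email address appeared in a breach works differently (it requires querying the service directly), but the same privacy-by-design idea underlies both.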

If you have been the victim of a data breach, you can take these steps to minimize the impact: Inform credit reporting agencies and other organizations that collect data about you, such as your health care provider, insurance company, banks and credit card companies, and change the passwords for your accounts. You can also report the incident to the Federal Trade Commission to get a tailored plan to recover from the incident.

The Conversation

Ravi Sen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Banning disruptive online groups is a game of Whac-a-Mole that web giants just won’t win

Zenza Flarini/Shutterstock

From Washington DC to Wall Street, 2021 has already seen online groups causing major organised offline disruption. Some of it has been in violation of national laws, some in violation of internet platforms’ terms of service. When these groups are seen to cause societal harm, the solution has been knee-jerk: to ban or “deplatform” those groups immediately, leaving them digitally “homeless”.

But the online world is a Pandora’s box of sites, apps, forums and message boards. Groups banned from Facebook migrated seamlessly to Parler, and from Parler, via encrypted messaging apps, to a host of other platforms. My research has shown how easily users migrate between platforms on the “dark web”. Deplatforming won’t work on the regular internet for the same reason: it’s become too easy for groups to migrate elsewhere.


Read more: Does 'deplatforming' work to curb hate speech and calls for violence? 3 experts in online communications weigh in


This year, we’ve come to see social platforms not as passive communication tools, but rather as active players in public discourse. Twitter’s announcement that it had permanently suspended Donald Trump in the wake of the Capitol riots is one such example: a watershed moment for deplatforming as a means of limiting harmful speech.

Elsewhere, the Robinhood investment platform suspended the trading of GameStop stocks after the Reddit group r/WallStreetBets (which had 2.2 million members at the time) coordinated a mass purchase of the shares. While the original Reddit group remained open, many r/WallStreetBets users had also been communicating via the social network Discord. In response, Discord banned their channel, citing “hate speech”.

A tweet from a Reddit user asking people to migrate to a different platform
Platform promiscuity: a Twitter account connected to a Reddit trading group invites followers to connect on Instagram.

Net Migration

Deplatforming is the mechanism currently used by social networks and technology companies to suspend or ban users who’ve allegedly violated their terms of service. From a company’s perspective, deplatforming is a protection from potential legal actions. For others, it’s hoped that deplatforming might help stop what some see as online mobs, intent on vandalising political, social, and financial institutions.

But deplatforming has proven ineffective in stifling these groups. When Trump was banned from social media, his supporters quickly reorganised on Parler – a social networking site that markets itself as the home of free speech. Shortly after, Parler was removed from the Apple and Google app stores, and Amazon Web Services – which provided the digital infrastructure for the platform – removed Parler from its servers.

With Parler offline, Trump’s supporters began looking for alternative social media apps, including MeWe and CloudHub, which both rose rapidly up the app store rankings, organised by volume of downloads. Similarly, after the Discord ban, Reddit investors quickly reorganised themselves on the messaging service Telegram. These “Whac-a-Mole” dynamics, with deplatformed groups rapidly reforming on other platforms, are strikingly similar to what my research team and I have observed on the dark web.

Dark dynamics

The dark web is a hidden part of the internet that’s easily accessible through specialised web browsers such as Tor. Illicit trade is rife on the dark web, especially in dark “marketplaces”, where users trade goods using cryptocurrencies such as Bitcoin. Silk Road, regarded as the first dark marketplace, launched in 2011 and mostly sold drugs. Shut down by the FBI in 2013, it was followed by dozens of dark marketplaces which also traded in weapons, fake IDs and stolen credit cards.

A web browser showing Silk Road website and a list of drugs for sale on it.
Anonymous marketplaces like Silk Road are commonly removed from the dark web, causing user migration. Jarretera/Shutterstock

My collaborators and I looked at what happens after a dark marketplace is shut down by a police raid or an “exit scam” – where the marketplace’s moderators suddenly close the website and disappear with the users’ funds. We focused on “migrating” users, who move their trading activity to a different marketplace after a closure.

We found that most users flocked to the same alternative marketplace, typically the one with the highest amount of trading. User migration took place within hours, possibly coordinated via a discussion forum such as Reddit or Dread, and the overall amount of trading across the marketplaces quickly recovered. So, although individual marketplaces can be fragile, with participants being exposed to losses due to scams, this coordinated user migration guarantees the marketplaces’ overall resilience, so that new ones continue to flourish.

Platform promiscuity

Back in 2006, Facebook was competing for dominance against other social networks such as MySpace, Orkut, Hi5, Friendster and Multiply. When Facebook started to dominate the scene, network effects made it unstoppable.

Put simply, network effects compound platform dominance because you and I are most likely to join networking platforms our friends are already on. Given this tendency, Facebook and Twitter grew to host billions of users, and Hi5 disappeared. By the time their dominance had crystallised, a ban from Facebook or Twitter would have meant total ostracisation from the online community.

In 2021, everything is different. Global communities organised by interests or political opinion are now established, and are able to quickly formulate emergency evacuation or migration plans. Members are usually in contact on several channels – even “dormant” channels on which few users are active. As dark markets show, dormant channels can become active when they’re required.

All this means that being banned from Twitter, Facebook, Instagram, Snapchat, Twitch and others no longer results in your isolation, or in your community being disbanded. Instead, just like on the dark web, deplatforming simply requires users to migrate to a new home, which they do in a matter of hours.

Deplatforming is clearly an ineffective strategy for stopping disruptive groups from forming and coordinating online. This means that policing online conversation will be harder in the future. Whether this is seen as good or bad will depend on the specific circumstances and – of course – your point of view.

The Conversation

Andrea Baronchelli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

North Korea targeted cybersecurity researchers using a blend of hacking and espionage

North Korea has a long history of hacking targets in the U.S. Chris Price/Flickr, CC BY-ND

North Korean hackers have staged an audacious attack targeting cybersecurity researchers, many of whom work to counter hackers from places like North Korea, Russia, China and Iran. The attack involved sophisticated efforts to deceive specific people, raising social engineering, or phishing, attacks to the level of spy tradecraft.

The attack, reported by Google researchers, centered on fake social media accounts on platforms including Twitter. The fake personas, posing as ethical hackers, contacted security researchers with offers to collaborate on research. The social media accounts included content about cybersecurity and faked videos purporting to show new cybersecurity vulnerabilities.

The hackers enticed the researchers to click links to shared code projects – repositories of software related to cybersecurity research – that contained malicious code designed to give the hackers access to the researchers’ computers. Several cybersecurity researchers reported that they fell victim to the attack.

From phishing to espionage

The lowest level of social engineering hack is a typical phishing attack: impersonal messages sent to many people in the hopes that someone will be duped into clicking on a malicious link. Phishing attacks have generally been on the rise since early 2020 – a side effect of the pandemic-driven work-from-home environment in which people are sometimes less vigilant. This is also why ransomware has become prevalent.

The next level of sophistication is spear-phishing. Here people are targeted with messages that include information that is specific to them or their organizations, which increases the likelihood that someone will click a malicious link.

The North Korean operation is at a higher level than spear-phishing because it targeted people who are security-minded by the nature of their occupation. This required the hackers to create convincing social media accounts complete with content about cybersecurity, including videos, that could fool cybersecurity researchers.

The North Korean operation highlights three important trends: stealing cyberweapons from industry, social media as a weapon, and the blurring of cyber and information warfare.

1. Theft of cyberweapons from industry

Before the North Korean operation, the theft of cyberweapons made headlines at the end of 2020. In particular, December’s FireEye breach resulted in the theft of tools used by ethical hackers. These tools were used to crack the security of corporate clients to show the clients their vulnerabilities.

This prior incident, attributed to Russia, illustrates how hackers attempted to augment their arsenals of cyberweapons by stealing from a commercial cybersecurity firm. The North Korean action against security researchers shows that they’ve adopted a similar strategy, though with a different tactic.

Back in the fall, the National Security Agency disclosed a list of vulnerabilities – ways that software and networks can be hacked – that were exploited by Chinese state-sponsored hackers. Despite these warnings the vulnerabilities have persisted, and information about how to exploit them could be found on social media and the dark web. This information was clear and detailed enough that my company, CYR3CON, was able to use machine learning to predict the use of these vulnerabilities.

2. The weaponization of social media

Information operations – collecting information and disseminating disinformation – on social media have become abundant in recent years, especially those conducted by Russia. This includes using “social bots” to spread false information. This “pathogenic social media” has been used by national intelligence operatives and ordinary hackers alike.

Traditionally, this type of targeting has been designed to either spread disinformation or entice an executive or high-ranking government employee to click on a malicious link. In contrast, the North Korean operation was aimed at stealing cyberweapons and information about vulnerabilities.

3. The confluence of cyber and information warfare

Outside of the United States – especially in China and Russia – cyberoperations are considered part of a broader concept of information warfare. The Russians, in particular, have proved very adept at combining information operations and cyberoperations. Information warfare includes using traditional spy tradecraft – operatives with false identities attempting to gain the trust of their targets – to collect and disseminate information.

The attack against cybersecurity researchers could indicate that North Korea is taking cues from these other powers. The low-cost ability of a second-tier authoritarian regime like North Korea to weaponize social media provides it an advantage against the much greater technical capabilities of the U.S.

In addition, the North Koreans appear to have used one of their most valuable cyberweapons in this operation. Google reported that it appeared the hackers used a means of exploiting a zero-day vulnerability – a software flaw that is not widely known – in Google’s Chrome browser in the attack on the cybersecurity researchers. Once such an exploit is used, people are alerted to defend against it, and it becomes much less effective.


Setting the stage for something bigger?

In cybersecurity, big news items tend to be events like the Sunburst operation by Russian hackers in December – large-scale cyberattacks that cause a great deal of damage. In the Sunburst attack, Russian hackers booby-trapped widely used software, which gave them access to the networks of numerous corporations and government agencies.

These large events are often preceded by smaller events in which new techniques are experimented with – often without making a large impact. While time will tell if this is true of the North Korean operation, the three current trends – stealing cyberweapons from industry, social media as a weapon, and the blurring of cyber and information warfare – are harbingers of things to come.

The Conversation

Paulo Shakarian works for/consults to/owns shares in Cyber Reconnaissance, Inc. (CYR3CON)

Sketchy darknet websites are taking advantage of the COVID-19 pandemic – buyer beware

Black markets thrive online and flourish during pandemics and other crises. Marko Klaric/EyeEm via Getty Images

Underground markets that sell illegal commodities like drugs, counterfeit currency and fake documentation tend to flourish in times of crisis, and the COVID-19 pandemic is no exception. The online underground economy has responded to the current crisis by exploiting demand for COVID-19-related commodities.

Today, some of the most vibrant underground economies exist in darknet markets. These are internet websites that look like ordinary e-commerce websites but are accessible only using special browsers or authorization codes. Vendors of illegal commodities have also formed dedicated group-chats and channels on encrypted instant messaging services like WhatsApp, Telegram and ICQ.

The Darknet Analysis project at the Evidence-Based Cybersecurity Research Group here at Georgia State University collects data weekly from 60 underground darknet markets and forums. My colleagues Yubao Wu, Robert Harisson and I have analyzed this data and found that three major types of COVID-19 offerings have emerged on darknet markets since late February: protective gear, medications and services that help people commit fraud.

Darknet website product page showing COVID-19 antibody test
If it’s an in-demand COVID-19 commodity, chances are it’s available on darknet markets. Screenshot by David Maimon, CC BY-ND

Using these darknet markets is risky business. First, there’s the built-in risk of becoming the victim of a scam or buying counterfeit products when purchasing products from underground vendors. There are also health and legal risks. Inadvertently buying ineffective COVID-19 protective gear and dangerous remedies from unregulated sellers could physically harm buyers. And purchasing information and services with the aim to defraud people and the government is a criminal offense that carries legal penalties.

Personal protective equipment

Several vendors have added protective gear such as face masks, protective gowns, COVID-19 test kits, thermometers and hand sanitizer to their list of products for sale. The effectiveness of this protective gear is questionable. Underground vendors typically do not disclose their products’ sources, leaving consumers with no way to judge the products.

Darknet website product page showing protective gown
COVID-19 protective gear is a common product type on darknet e-commerce sites. Screenshot by David Maimon, CC BY-ND

One example of the uncertainties that surround protective gear effectiveness comes from one of the encrypted channel platforms we monitored during the first few days of the pandemic. Vendors on the channel offered facemasks for sale. Demand for facemasks was very high at that time, and people around the world were scrambling to find facemasks for personal use.

While governments and suppliers faced difficulties in meeting demand for facemasks, several vendors on these platforms posted ads offering large quantities of facemasks. One vendor even uploaded a video showing many boxes of facemasks in storage.

Given the global shortage of facemasks at the time, our research team found it difficult to understand how this vendor in Thailand could offer so many for sale. One disturbing possibility is that they sold used facemasks. Indeed, authorities in Thailand broke up an operation that washed, ironed and boxed used facemasks and supplied them to underground markets.

Treatments

Darknet vendors are also selling medications and cures, including effective treatments, like remdesivir, and ineffective treatments, like hydroxychloroquine. They are also selling various purported COVID-19 antidotes and serums. Some vendors even offer to sell and ship oxygen ventilators.

Darknet website product page showing Hydroxychloroquine pills
Darknet markets offer ineffective and potentially dangerous COVID-19 therapies, including hydroxychloroquine, which studies have shown is not an effective treatment. Screenshot by David Maimon, CC BY-ND

Using COVID-19 medications purchased on darknet platforms could be dangerous. Uncertainties about the true identity of medication manufacturers and the ingredients of other cures leave patients vulnerable to a wide array of potentially detrimental side effects.


DIY fraud

Government efforts to relieve the financial stress on individuals and businesses from the economic impact of the pandemic have led to a third category of products on these markets. We have observed many vendors offering to sell online fraud services that promise to improve customers’ financial circumstances during this crisis.

These vendors offer to either support customers in putting together fake websites that allow them to lure victims into disclosing their personal information, or simply provide stolen personal information. The stolen information can be used to file for unemployment benefits or obtain loans. Some vendors go a step further and offer support in the fraudulent benefits application process.

COVID-19-related fraud could have grave consequences for individuals whose identities have been stolen and used to apply for government benefits or loans, including the loss of future government assistance and damage to credit scores. Fraudulent requests for COVID-19 relief funds filed using stolen personal information also put additional strain on federal, state and local governments.

Digging up the data

The size of the online illicit market of COVID-19 essentials is unknown. We aim to collect enough data to provide an empirical assessment of this underground economy.

There are several challenges to understanding the scope of the COVID-19 underground market, including measuring the magnitude of the demand, the extent supply meets that demand and the impact of this underground economy on the legitimate market. The unknown validity of darknet customers’ and vendors’ reports about the products they purchased and sold also makes it difficult to assess the underground market.

Our systematic research approach should allow us to overcome these issues and collect this data, which could reveal how online underground markets adjust to a worldwide health crisis. This information, in turn, could help authorities develop strategies for disrupting their activities.

The Conversation

David Maimon receives funding from the National Science Foundation.

The darknet – a wild west for fake coronavirus ‘cures’? The reality is more complicated (and regulated)

Shutterstock

The coronavirus pandemic has spawned reports of unregulated health products and fake cures being sold on the dark web. These include black market PPE, illicit medications such as the widely touted “miracle” drug chloroquine, and fake COVID-19 “cures” including blood supposedly from recovered coronavirus patients.

These dealings have once again focused public attention on this little-understood section of the internet. Nearly a decade since it started being used on a significant scale, the dark web continues to be a lucrative safe haven for traders in a range of illegal goods and services, especially illicit drugs.

Black market trading on the dark web is carried out primarily through darknet marketplaces or cryptomarkets. These are anonymised trading platforms that directly connect buyers and sellers of a range of illegal goods and services – similar to legitimate trading websites such as eBay.

So how do darknet marketplaces work? And how much illegal trading of COVID-19-related products is happening via these online spaces?


Read more: Dark web, not dark alley: why drug sellers see the internet as a lucrative safe haven


Not a free-for-all

There are currently more than a dozen darknet marketplaces in operation. Because these platforms are protected by powerful encryption technology, authorities around the world have largely failed to contain their growth. A steadily increasing proportion of illicit drug users around the world report sourcing their drugs online. Australia has one of the world’s highest concentrations of darknet drug vendors per capita.

Contrary to popular belief, cryptomarkets are not the “lawless spaces” they’re often presented as in the news. Market prohibitions exist on all mainstream cryptomarkets. Universally prohibited goods and services include hitman services, trafficked human organs and snuff movies.

Although cryptomarkets lie outside the realm of state regulation, each one is set up and maintained by a central administrator who, along with employees or associates, is responsible for the market’s security, dispute resolution between buyers and sellers, and the charging of commissions on transactions.

Administrators are also ultimately responsible for determining what can and can’t be sold on their cryptomarket. These decisions are likely informed by:

  • the attitudes of the surrounding community comprising buyers and sellers
  • the extent of consumer demand and supply for certain products
  • the revenues a site makes from commissions charged on transactions
  • the perceived “heat” that may be attracted from law enforcement by trading in particularly dangerous illegal goods and services.

Read more: Illuminating the 'dark web'


Experts delve into the dark web

A report from the Australian National University published last week looks at several hundred coronavirus-related products for sale across a dozen cryptomarkets, including supposed vaccines and antidotes.

While the study confirms some unscrupulous dark web traders are indeed exploiting the pandemic and seeking to defraud naïve customers, this information should be contextualised with a couple of important caveats.

Firstly, the number of dodgy COVID-19-related products for sale on the dark web is relatively small. According to this research, they account for about 0.2% of all listed items. The overwhelming majority of products were those we are already familiar with – particularly illicit drugs such as cannabis and MDMA.

Secondly, the study counted products listed for sale, not products actually sold. Most of these listings are likely for products that either do not exist or are posted with the specific intention of defrauding customers.

Thus, the actual sale of fake coronavirus “cures” on the dark web is likely minimal.

A self-regulating entity

By far the most commonly traded products on cryptomarkets are illicit drugs. Smaller sub-markets exist for other products such as stolen credit card information and fraudulent identity documents.

This isn’t to say extraordinarily dangerous and disturbing content, such as child exploitation material, can’t be found on the dark web. Rather, the sites that trade in such “products” are segregated from mainstream cryptomarkets, in much the same way convicted paedophiles are segregated from mainstream prison populations.

Since the outbreak of the coronavirus, dark web journalist and author Eileen Ormsby has reported that some cryptomarkets quickly imposed bans on vendors seeking to profit from the pandemic. For instance, the following was tweeted by one cryptomarket administrator:

Any vendor caught flogging goods as a “cure” to coronavirus will not only be permanently removed from this market but should be avoided like the Spanish Flu. You are about to ingest drugs from a stranger on the internet – under no circumstances should you trust any vendor that is using COVID-19 as a marketing tool to peddle tangible/already questionable goods. I highly doubt many of you would fall for that shit to begin with but you know, dishonest practice is never a good sign and a sure sign to stay away.

So it seems, despite the activities of a few dodgy operators, the vast majority of dark web traders are steering clear of exploiting the pandemic for their own profit. Instead, they are sticking to trading in products they can genuinely supply, such as illicit drugs.


Read more: What is the dark web and how does it work?


The Conversation

James Martin receives funding from the Australian Institute of Criminology and the National Health and Medical Research Council.
