
Following the digital trail: what happens to data stolen in a phishing attack

12 December 2025 at 05:00

Introduction

A typical phishing attack involves a user clicking a fraudulent link and entering their credentials on a scam website. However, the attack is far from over at that point. The moment the confidential information falls into the hands of cybercriminals, it immediately transforms into a commodity and enters the shadow market conveyor belt.

In this article, we trace the path of the stolen data, starting from its collection through various tools – such as Telegram bots and advanced administration panels – to the sale of that data and its subsequent reuse in new attacks. We examine how a once leaked username and password become part of a massive digital dossier and why cybercriminals can leverage even old leaks for targeted attacks, sometimes years after the initial data breach.

Data harvesting mechanisms in phishing attacks

Before we trace the subsequent fate of the stolen data, we need to understand exactly how it leaves the phishing page and reaches the cybercriminals.

By analyzing real-world phishing pages, we have identified the most common methods for data transmission:

  • Send to an email address.
  • Send to a Telegram bot.
  • Upload to an administration panel.

It also bears mentioning that attackers may use legitimate services for data harvesting to make their server harder to detect. Examples include online form services like Google Forms, Microsoft Forms, etc. Stolen data repositories can also be set up on GitHub, Discord servers, and other websites. For the purposes of this analysis, however, we will focus on the primary methods of data harvesting.

Email

Data entered into an HTML form on a phishing page is sent to the cybercriminal’s server via a PHP script, which then forwards it to an email address controlled by the attacker. However, this method is becoming less common due to several limitations of email services, such as delivery delays, the risk of the hosting provider blocking the sending server, and the inconvenience of processing large volumes of data.

As an example, let’s look at a phishing kit targeting DHL users.

Phishing kit contents

The index.php file contains the phishing form designed to harvest user data – in this case, an email address and a password.

Phishing form imitating the DHL website

The data that the victim enters into this form is then sent via a script in the next.php file to the email address specified within the mail.php file.

Contents of the PHP scripts

Telegram bots

Unlike in the previous method, here the script that sends the stolen data specifies a Telegram Bot API URL with a bot token and the corresponding chat ID rather than an email address. In some cases, the link is hard-coded directly into the phishing HTML form. Attackers create a detailed message template that is sent to the bot after a successful attack. Here is what this looks like in the code:

Code snippet for data submission

Compared to sending data via email, using Telegram bots provides phishers with enhanced functionality, which is why they are increasingly adopting this method. Data arrives in the bot in real time, with instant notification to the operator. Attackers often use disposable bots, which are harder to track and block. Furthermore, their performance does not depend on the quality of phishing page hosting.
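
For defenders, these hard-coded exfiltration endpoints are also a phishing kit’s weak spot: the drop email address, the bot token, and the chat ID can usually be pulled straight out of the kit’s source code. Below is a minimal triage sketch in TypeScript (Node.js); the ./kit directory and the regular expressions are illustrative assumptions rather than a universal detector.

```typescript
// Sketch: walk an unpacked phishing kit and flag the exfiltration endpoints
// discussed above (PHP mail() recipients and Telegram bot tokens / chat IDs).
// The "./kit" path and the patterns below are assumptions for illustration.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const MAIL_RE = /mail\s*\(\s*['"]([^'"]+@[^'"]+)['"]/g;     // mail("drop@example.com", ...)
const TOKEN_RE = /api\.telegram\.org\/bot(\d+:[\w-]+)/g;     // .../bot<token>/sendMessage
const CHAT_RE = /chat_id['"]?\s*[:=]>?\s*['"]?(-?\d+)/g;     // chat_id parameter

// Recursively yield every file path under the given directory
function* walk(dir: string): Generator<string> {
  for (const name of readdirSync(dir)) {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) yield* walk(full);
    else yield full;
  }
}

for (const file of walk("./kit")) {
  const text = readFileSync(file, "utf8");
  for (const [, addr] of text.matchAll(MAIL_RE)) console.log(`${file}: mail() recipient ${addr}`);
  for (const [, token] of text.matchAll(TOKEN_RE)) console.log(`${file}: bot token ${token}`);
  for (const [, chat] of text.matchAll(CHAT_RE)) console.log(`${file}: chat_id ${chat}`);
}
```

Indicators extracted this way can be blocked, reported to the abused platform, or used to cluster related campaigns.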

Automated administration panels

More sophisticated cybercriminals use specialized software, including commercial frameworks like BulletProofLink and Caffeine, which are often offered under a phishing-as-a-service model. These frameworks provide a web interface (dashboard) for managing phishing campaigns.

Data harvested from all phishing pages controlled by the attacker is fed into a unified database that can be viewed and managed through their account.

Sending data to the administration panel

These admin panels are used for analyzing and processing victim data. The features of a specific panel depend on the available customization options, but most dashboards typically have the following capabilities:

  • Real-time statistics: the number of successful attacks broken down by time and country, with sorting and filtering options
  • Automatic verification: some systems can automatically check the validity of stolen data, such as credit card details and login credentials
  • Data export: the ability to download the data in various formats for later use or sale
Example of an administration panel

Admin panels are a vital tool for organized cybercriminals.

One campaign often employs several of these data harvesting methods simultaneously.

Sending stolen data to both an email address and a Telegram bot

The data cybercriminals want

The data harvested during a phishing attack varies in value and purpose. In the hands of cybercriminals, it becomes a method of profit and a tool for complex, multi-stage attacks.

Stolen data can be divided into the following categories, based on its intended purpose:

  • Immediate monetization: the direct sale of large volumes of raw data or the immediate withdrawal of funds from a victim’s bank account or online wallet.
    • Banking details: card number, expiration date, cardholder name, and CVV/CVC.
    • Access to online banking accounts and digital wallets: logins, passwords, and one-time 2FA codes.
    • Accounts with linked banking details: logins and passwords for accounts that contain bank card details, such as online stores, subscription services, or payment systems like Apple Pay or Google Pay.
  • Subsequent attacks for further monetization: using the stolen data to conduct new attacks and generate further profit.
    • Credentials for various online accounts: logins and passwords. Importantly, email addresses or phone numbers, which are often used as logins, can hold value for attackers even without the accompanying passwords.
    • Phone numbers, used for phone scams, including attempts to obtain 2FA codes, and for phishing via messaging apps.
    • Personal data: full name, date of birth, and address, abused in social engineering attacks.
  • Targeted attacks, blackmail, identity theft, and deepfakes.
    • Biometric data: voice recordings and facial images.
    • Scans and numbers of personal documents: passports, driver’s licenses, social security cards, and taxpayer IDs.
    • Selfies with documents, used for online loan applications and identity verification.
    • Corporate accounts, used for targeted attacks on businesses.

We analyzed phishing and scam attacks conducted from January through September 2025 to determine which data was most frequently targeted by cybercriminals. We found that 88.5% of attacks aimed to steal credentials for various online accounts, 9.5% targeted personal data (name, address, and date of birth), and 2% focused on stealing bank card details.

Distribution of attacks by target data type, January–September 2025

Selling data on dark web markets

Except for real-time attacks or those aimed at immediate monetization, stolen data is typically not used instantly. Let’s take a closer look at the route it takes.

  1. Sale of data dumps
    Data is consolidated and put up for sale on dark web markets in the form of dumps: archives that contain millions of records obtained from various phishing attacks and data breaches. A dump can be offered for as little as $50. The primary buyers are often not active scammers but rather dark market analysts, the next link in the supply chain.
  2. Sorting and verification
    Dark market analysts filter the data by type (email accounts, phone numbers, banking details, etc.) and then run automated scripts to verify it. This checks validity and reuse potential, for example, whether a Facebook login and password can be used to sign in to Steam or Gmail. Data stolen from one service several years ago can still be relevant for another service today because people tend to use identical passwords across multiple websites. Verified accounts with an active login and password command a higher price at the point of sale.
    Analysts also focus on combining user data from different attacks. Thus, an old password from a compromised social media site, a login and password from a phishing form mimicking an e-government portal, and a phone number left on a scam site can all be compiled into a single digital dossier on a specific user.
  3. Selling on specialized markets
    Stolen data is typically sold on dark web forums and via Telegram. The instant messaging app is often used as a storefront to display prices, buyer reviews, and other details.
    Offers of social media data, as displayed in Telegram

    The prices of accounts can vary significantly and depend on many factors, such as account age, balance, linked payment methods (bank cards, online wallets), two-factor authentication (2FA), and service popularity. Thus, an online store account may be more expensive if it is linked to an email, has 2FA enabled, and has a long history with a large number of completed orders. For gaming accounts, such as Steam, expensive game purchases are a factor. Online banking data sells at a premium if the victim has a high account balance and the bank itself has a good reputation.

    The table below shows prices for various types of accounts found on dark web forums as of 2025*.

    Category Price Average price
    Crypto platforms $60–$400 $105
    Banks $70–$2000 $350
    E-government portals $15–$2000 $82.5
    Social media $0.4–$279 $3
    Messaging apps $0.065–$150 $2.5
    Online stores $10–$50 $20
    Games and gaming platforms $1–$50 $6
    Global internet portals $0.2–$2 $0.9
    Personal documents $0.5–$125 $15

    *Data provided by Kaspersky Digital Footprint Intelligence

  4. High-value target selection and targeted attacks
    Cybercriminals take particular interest in valuable targets. These are users who have access to important information: senior executives, accountants, or IT systems administrators.

    Let’s break down a possible scenario for a targeted whaling attack. A breach at Company A exposes data associated with a user who was once employed there but now holds an executive position at Company B. The attackers analyze open-source intelligence (OSINT) to determine the user’s current employer (Company B). Next, they craft a sophisticated phishing email to the target, purportedly from the CEO of Company B. To build trust, the email references facts from the target’s old job – though other pretexts are possible too. By lowering the target’s guard, the cybercriminals gain a foothold from which to compromise Company B in a further attack.

    Importantly, these targeted attacks are not limited to the corporate sector. Attackers may also be drawn to an individual with a large bank account balance or someone who possesses important personal documents, such as those required for a microloan application.

Takeaways

The journey of stolen data is like a well-oiled conveyor belt, where every piece of information becomes a commodity with a specific price tag. Today, phishing attacks leverage diverse systems for harvesting and analyzing confidential information. Data flows instantly into Telegram bots and attackers’ administration panels, where it is then sorted, verified, and monetized.

It is crucial to understand that data, once lost, does not simply vanish. It is accumulated, consolidated, and can be used against the victim months or even years later, transforming into a tool for targeted attacks, blackmail, or identity theft. In the modern cyber-environment, caution, the use of unique passwords, multi-factor authentication, and regular monitoring of your digital footprint are no longer just recommendations – they are a necessity.

What to do if you become a victim of phishing

  1. If a bank card you hold has been compromised, call your bank as soon as possible and have the card blocked.
  2. If your credentials have been stolen, immediately change the password for the compromised account and any online services where you may have used the same or a similar password. Set a unique password for every account.
  3. Enable multi-factor authentication for all accounts that support it.
  4. Check the sign-in history for your accounts and terminate any suspicious sessions.
  5. If your messaging service or social media account has been compromised, alert your family and friends about potential fraudulent messages sent in your name.
  6. Use specialized services to check whether your data has been found in known data breaches (a minimal example of such a check follows this list).
  7. Treat any unexpected emails, calls, or offers with extreme vigilance – they may appear credible because attackers are using your compromised data.
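
As an illustration of item 6, here is a minimal sketch (TypeScript, Node.js 18 or newer) that queries one such service, the public Have I Been Pwned “Pwned Passwords” range API, to see whether a password appears in known breach dumps. Only the first five characters of the password’s SHA-1 hash are sent, so the password itself never leaves your machine; the sample password is a placeholder.

```typescript
// Check a password against the Have I Been Pwned "Pwned Passwords" range API.
// k-anonymity: only the first 5 hex characters of the SHA-1 hash are sent.
import { createHash } from "node:crypto";

async function breachCount(password: string): Promise<number> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);
  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await res.text();                  // lines of "HASH_SUFFIX:COUNT"
  for (const line of body.split("\n")) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return Number(count);
  }
  return 0;                                       // not found in known breaches
}

breachCount("placeholder-password").then((n) =>
  console.log(n > 0 ? `Found ${n} times in breaches - change it everywhere you use it` : "Not found in known breaches")
);
```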

New trends in phishing and scams: how AI and social media are changing the game

13 August 2025 at 04:00

Introduction

Phishing and scams are dynamic types of online fraud that primarily target individuals, with cybercriminals constantly adapting their tactics to deceive people. Scammers invent new methods and improve old ones, adjusting them to fit current news, trends, and major world events: anything to lure in their next victim.

Since our last publication on phishing tactics, there has been a significant leap in the evolution of these threats. While many of the tools we previously described are still relevant, new techniques have emerged, and the goals and methods of these attacks have shifted.

In this article, we will explore:

  • The impact of AI on phishing and scams
  • How the tools used by cybercriminals have changed
  • The role of messaging apps in spreading threats
  • Types of data that are now a priority for scammers

AI tools leveraged to create scam content

Text

Traditional phishing emails, instant messages, and fake websites often contain grammatical and factual errors, incorrect names and addresses, and formatting issues. Now, however, cybercriminals are increasingly turning to neural networks for help.

They use these tools to create highly convincing messages that closely resemble legitimate ones. Victims are more likely to trust these messages, and therefore, more inclined to click a phishing link, open a malicious attachment, or download an infected file.

Example of a phishing email created with DeepSeek

The same is true for personal messages. Social networks are full of AI bots that can maintain conversations just like real people. While these bots can be created for legitimate purposes, they are often used by scammers who impersonate human users. In particular, phishing and scam bots are common in the online dating world. Scammers can run many conversations at once, maintaining the illusion of sincere interest and emotional connection. Their primary goal is to extract money from victims by persuading them to pursue “viable investment opportunities” that often involve cryptocurrency. This scam is known as pig butchering. AI bots are not limited to text communication, either; to be more convincing, they also generate plausible audio messages and visual imagery during video calls.

Deepfakes and AI-generated voices

As mentioned above, attackers are actively using AI capabilities like voice cloning and realistic video generation to create convincing audiovisual content that can deceive victims.

Beyond targeted attacks that mimic the voices and images of friends or colleagues, deepfake technology is now being used in more classic, large-scale scams, such as fake giveaways from celebrities. For example, YouTube users have encountered Shorts where famous actors, influencers, or public figures seemingly promise expensive prizes like MacBooks, iPhones, or large sums of money.

Deepfake YouTube Short

The advancement of AI technology for creating deepfakes is blurring the lines between reality and deception. Voice and visual forgeries can be nearly indistinguishable from authentic messages, as traditional cues used to spot fraud disappear.

Recently, automated calls have become widespread. Scammers use AI-generated voices and number spoofing to impersonate bank security services. During these calls, they claim there has been an unauthorized attempt to access the victim’s bank account. Under the guise of “protecting funds”, they demand a one-time SMS code. This is actually a 2FA code for logging into the victim’s account or authorizing a fraudulent transaction.

 

Example of an OTP (one-time password) bot call

Data harvesting and analysis

Large language models like ChatGPT are well-known for their ability to not only write grammatically correct text in various languages but also to quickly analyze open-source data from media outlets, corporate websites, and social media. Threat actors are actively using specialized AI-powered OSINT tools to collect and process this information.

The data so harvested enables them to launch phishing attacks that are highly tailored to a specific victim or a group of victims – for example, members of a particular social media community. Common scenarios include:

  • Personalized emails or instant messages from what appear to be HR staff or company leadership. These communications contain specific details about internal organizational processes.
  • Spoofed calls, including video chats, from close contacts. The calls leverage personal information that the victim would assume could not be known to an outsider.

This level of personalization dramatically increases the effectiveness of social engineering, making it difficult for even tech-savvy users to spot these targeted scams.

Phishing websites

Phishers are now using AI to generate fake websites too. Cybercriminals have weaponized AI-powered website builders that can automatically copy the design of legitimate websites, generate responsive interfaces, and create sign-in forms.

Some of these sites are well-made clones nearly indistinguishable from the real ones. Others are generic templates used in large-scale campaigns, without much effort to mimic the original.

Phishing pages mimicking travel and tourism websites

Often, these generic sites collect any data a user enters and are not even checked by a human before being used in an attack. The following are examples of sites with sign-in forms that do not match the original interfaces at all. These are not even “clones” in the traditional sense, as some of the brands being targeted do not offer sign-in pages.

These types of attacks lower the barrier to entry for cybercriminals and make large-scale phishing campaigns even more widespread.

Login forms on fraudulent websites

Telegram scams

With its massive popularity, open API, and support for crypto payments, Telegram has become a go-to platform for cybercriminals. This messaging app is now both a breeding ground for spreading threats and a target in itself. Once they get their hands on a Telegram account, scammers can either leverage it to launch attacks on other users or sell it on the dark web.

Malicious bots

Scammers are increasingly using Telegram bots, not just for creating phishing websites but also as an alternative or complement to these. For example, a website might be used to redirect a victim to a bot, which then collects the data the scammers need. Here are some common schemes that use bots:

  • Crypto investment scams: fake token airdrops that require a deposit for KYC verification.
Telegram bot seemingly giving away SHIBARMY tokens

  • Phishing and data collection: scammers impersonate an official postal service to obtain a user’s details under the pretense of arranging delivery of a business package.
Phishing site redirects the user to an “official” bot.

  • Easy money scams: users are offered money to watch short videos.
Phishing site promises easy earnings through a Telegram bot.

Unlike a phishing website that the user can simply close and forget about when faced with a request for too much data or a commission payment, a malicious bot can be much more persistent. If the victim has interacted with a bot and has not blocked it, the bot can continue to send various messages. These might include suspicious links leading to fraudulent or advertising pages, or requests to be granted admin access to groups or channels. The latter is often framed as being necessary to “activate advanced features”. If the user gives the bot these permissions, it can then spam all the members of these groups or channels.

Account theft

When it comes to stealing Telegram user accounts, social engineering is the most common tactic. Attackers use various tricks and ploys, often tailored to the current season, events, trends, or the age of their target demographic. The goal is always the same: to trick victims into clicking a link and entering the verification code.

Links to phishing pages can be sent in private messages or posted to group chats or compromised channels. Given the scale of these attacks and users’ growing awareness of scams within the messaging app, attackers now often disguise these phishing links using Telegram’s message-editing tools.

The link in this phishing message does not lead to the URL shown

New ways to evade detection

Integrating with legitimate services

Scammers are actively abusing trusted platforms to keep their phishing resources under the radar for as long as possible.

  • Telegraph is a Telegram-operated service that lets anyone publish long-form content without prior registration. Cybercriminals take advantage of this feature to redirect users to phishing pages.
Phishing page on the telegra.ph domain

  • Google Translate is a machine translation tool from Google that can translate entire web pages and generate links like https://site-to-translate-com.translate.goog/… Attackers exploit it to hide their assets from security vendors. They create phishing pages, translate them, and then send out the links to the localized pages. This allows them to both avoid blocking and use a subdomain at the beginning of the link that mimics a legitimate organization’s domain name, which can trick users. A small sketch for decoding such links back to the original domain follows at the end of this list.
Localized phishing page

  • CAPTCHA protects websites from bots. Lately, attackers have been increasingly adding CAPTCHAs to their fraudulent sites to avoid being flagged by anti-phishing solutions and evade blocking. Since many legitimate websites also use various types of CAPTCHAs, phishing sites cannot be identified by their use of CAPTCHA technology alone.
CAPTCHA on a phishing site
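
As mentioned in the Google Translate item above, the hostname of a translate.goog link still encodes the real destination. The sketch below recovers it, assuming the commonly observed encoding in which dots of the original hostname become single hyphens and original hyphens are doubled; the sample URLs are placeholders.

```typescript
// Recover the original hostname from a *.translate.goog proxy link.
// Assumption: dots become single hyphens and original hyphens are doubled,
// e.g. "example.com" -> "example-com.translate.goog".
function originalHost(translateUrl: string): string {
  const host = new URL(translateUrl).hostname;             // "example-com.translate.goog"
  const encoded = host.replace(/\.translate\.goog$/, "");
  return encoded
    .split("--")                                           // protect doubled hyphens (real "-")
    .map((part) => part.replace(/-/g, "."))                // remaining hyphens were dots
    .join("-");
}

console.log(originalHost("https://example-com.translate.goog/signin"));    // -> "example.com"
console.log(originalHost("https://my--shop-example-org.translate.goog/")); // -> "my-shop.example.org"
```

Seeing the real domain makes it easier to judge whether a “translated” sign-in page has anything to do with the brand it imitates.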

Blob URL

Blob URLs (blob:https://example.com/…) are temporary links generated by browsers to access binary data, such as images and HTML code, locally. They are limited to the current session. While this technology was originally created for legitimate purposes, such as previewing files a user is uploading to a site, cybercriminals are actively using it to hide phishing attacks.

Blob URLs are created with JavaScript. The links start with “blob:” and contain the domain of the website that hosts the script. The data is stored locally in the victim’s browser, not on the attacker’s server.
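
Here is a minimal, benign sketch of that mechanism in TypeScript for the browser: the page content is assembled locally, and URL.createObjectURL() returns a session-scoped blob: link whose origin is the hosting page rather than a server the content was downloaded from. The placeholder HTML is, of course, harmless.

```typescript
// Benign demonstration of the blob: URL mechanism described above.
// The HTML below is a harmless placeholder; in phishing kits this is where
// the fake page would be assembled entirely on the client side.
const html = "<h1>Rendered from a blob: URL</h1>";
const blob = new Blob([html], { type: "text/html" });
const blobUrl: string = URL.createObjectURL(blob);   // e.g. "blob:https://hosting-site.example/3f9c…"

console.log(blobUrl);          // valid only in this browser session
window.open(blobUrl);          // nothing is fetched from a remote server (may need a user gesture)
// Later, URL.revokeObjectURL(blobUrl) permanently invalidates the link.
```

Because the payload never travels over the network as a separate resource, URL-based scanners see only the hosting domain – which is exactly what makes the technique attractive for hiding phishing content.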

Blob URL generation script inside a phishing kit

Hunting for new data

Cybercriminals are shifting their focus from stealing usernames and passwords to obtaining irrevocable or immutable identity data, such as biometrics, digital signatures, handwritten signatures, and voiceprints.

For example, a phishing site that asks for camera access supposedly to verify an account on an online classifieds service allows scammers to collect your biometric data.

Phishing for biometrics

For corporate targets, e-signatures are a major focus for attackers. Losing control of these can cause significant reputational and financial damage to a company. This is why services like DocuSign have become a prime target for spear-phishing attacks.

Phishers targeting DocuSign accounts

Even old-school handwritten signatures are still a hot commodity for modern cybercriminals, as they remain critical for legal and financial transactions.

Phishing for handwritten signatures

These types of attacks often go hand-in-hand with attempts to gain access to e-government, banking and corporate accounts that use this data for authentication.

These accounts are typically protected by two-factor authentication, with a one-time password (OTP) sent in a text message or a push notification. The most common way to get an OTP is by tricking users into entering it on a fake sign-in page or by asking for it over the phone.

Attackers know users are now more aware of phishing threats, so they have started to offer “protection” or “help for victims” as a new social engineering technique. For example, a scammer might send a victim a fake text message with a meaningless code. Then, using a believable pretext – like a delivery person dropping off flowers or a package – they trick the victim into sharing that code. Since the message sender indeed looks like a delivery service or a florist, the story may sound convincing. Then a second attacker, posing as a government official, calls the victim with an urgent message, telling them they have just been targeted by a tricky phishing attack. They use threats and intimidation to coerce the victim into revealing a real, legitimate OTP from the service the cybercriminals are actually after.

Fake delivery codes

Takeaways

Phishing and scams are evolving at a rapid pace, fueled by AI and other new technology. As users grow increasingly aware of traditional scams, cybercriminals change their tactics and develop more sophisticated schemes. Whereas they once relied on fake emails and websites, today, scammers use deepfakes, voice cloning and multi-stage tactics to steal biometric data and personal information.
Here are the key trends we are seeing:

  • Personalized attacks: AI analyzes social media and corporate data to stage highly convincing phishing attempts.
  • Abuse of legitimate services: scammers are misusing trusted platforms like Google Translate and Telegraph to bypass security filters.
  • Theft of immutable data: biometrics, signatures, and voiceprints are becoming highly sought-after targets.
  • More sophisticated methods of circumventing 2FA: cybercriminals are using complex, multi-stage social engineering attacks.

How do you protect yourself?

  • Critically evaluate any unexpected calls, emails, or messages. Avoid clicking links in these communications, even if they appear legitimate. If you do plan to open a link, verify its destination by hovering over it on a desktop or long-pressing on a mobile device.
  • Verify sources of data requests. Never share OTPs with anyone, regardless of who they claim to be, even if they say they are a bank employee.
  • Analyze content for fakery. To spot deepfakes, look for unnatural lip movements or shadows in videos. You should also be suspicious of any videos featuring celebrities who are offering overly generous giveaways.
  • Limit your digital footprint. Do not post photos of documents or sensitive work-related information, such as department names or your boss’s name, on social media.
