AI-enabled Fraud: How Scammers Are Exploiting Generative AI
According to data from Chainabuse, TRM Labs’ open-source fraud reporting platform, reports of generative artificial intelligence (genAI)-enabled scams rose by 456% between May 2024 and April 2025 compared with the same period in 2023-24, which had itself seen a 78% increase over 2022-23. These figures illustrate how the explosion of genAI tools over the past few years has fueled a surge in AI-enabled fraud.

Such tools allow bad actors to produce human-like text, code, images, and videos at scale. The technology is being used to create more convincing phishing lures and to generate deepfakes for extortion, as detailed in our February 2025 report, “The Rise of AI-Enabled Crime.”
In this post, we take a look at the most common ways in which scammers are deploying AI, drawing on examples from Chainabuse and our own investigations.
{{horizontal-line}}
Key takeaways
- Scammers are increasingly using generative AI (genAI) technologies such as deepfakes and large language models (LLMs) to create more believable personas and impersonate public figures. These technologies make fraud operations appear more legitimate and harder to detect.
- Deepfake-enabled scams are the most commonly reported type of AI scam. These include cryptocurrency scams featuring manipulated videos of public figures such as Elon Musk. Live deepfakes are making such scams particularly difficult to detect.
- AI agents, which require limited human oversight, are accelerating the scale and sophistication of fraud. Scammers use them to contact targets across multiple platforms, gather personal data from the web for personalized scams, and build LLM-powered chatbots and fake help desks.
- Although criminals are abusing AI, these same tools can be harnessed to fight back. Blockchain intelligence platforms like TRM Labs integrate AI to help investigators trace funds faster, analyze smart contracts, and detect emerging fraud typologies. Combating AI-driven scams will involve a coordinated response from financial institutions, regulators, law enforcement, and intelligence providers.
{{horizontal-line}}
How scammers are using AI
TRM has observed scammers using AI to:
- Generate deepfake crypto scams
- Run deepfake impersonation scams
- Create fake personas through LLMs
- Automate and enhance operations with AI agents
1. Deepfake crypto scams
Deepfake scams are the most commonly reported type of AI-enabled scam, with crypto fraud among the most common use cases. In particular, Chainabuse users have reported a genAI-powered version of the classic “double-your-bitcoin” scam.
The original version of the scam involves bad actors compromising popular YouTube channels, renaming them, and using them to stream real interviews with popular figures in the crypto community, with a scam website overlaid on the footage. Celebrities featured in these videos include Elon Musk, Ripple CEO Brad Garlinghouse, MicroStrategy Executive Chairman Michael Saylor, and ARK Invest CEO Cathie Wood.
Since at least mid-2024, a more sophisticated version of this scam has emerged, aided by deepfakes. Scammers use deepfakes of these individuals, most often Elon Musk, to make them appear to promote the scam website directly, for example by promising to double viewers’ investments.
One Chainabuse user reported losing funds to a deepfake Elon Musk giveaway scam on YouTube in June 2024. TRM’s analysis shows that the reported address appears to have received funds from multiple victims within about 20 minutes, likely while the stream was live on YouTube.
Funds from that address flow to various destinations, but primarily to a few large exchanges, particularly MEXC. The scammers who defrauded this victim received at least USD 5 million between March 2024 and January 2025. TRM also observed small amounts being sent to two darknet markets and a cybercrime entity.

Another Chainabuse report, from November 2023, claimed that an Elon Musk deepfake had encouraged the user to invest in an AI-powered trading platform. That scammer also moved funds to MEXC and likely received over USD 3.3 million between July 2023 and February 2024.
These cases highlight how scammers continue to exploit public trust in celebrities to socially engineer victims. Deepfakes enable malign actors to create highly realistic videos that are difficult to distinguish from authentic celebrity endorsements of cryptocurrencies.
2. Deepfake impersonation scams
It’s not just public figures that are being deepfaked in scams. Increasingly, scammers are using deepfakes to impersonate company executives as well as members of the public.
Live deepfakes, in which a scammer overlays another person’s face on their own during a live video call, have added a new, sophisticated element to such scams. The technology does not require criminals to collect large volumes of data on their targets: scammers can now replicate a person’s voice or image from just a few seconds of video or audio. In February 2024, a multinational company in Hong Kong was reportedly defrauded of millions of dollars after an employee joined a video call with scammers who used this technique to impersonate the company’s executives.
In another common scam, criminals deepfake victims’ voices and then contact their family members, claiming to be in trouble and in need of money. In a similar scheme, most common in Asia, threat actors impersonate a victim’s friends or relatives, asking for help with a favor or encouraging the victim to invest in a scheme they claim to have profited from.
TRM has also seen evidence of scammers using live deepfakes in financial grooming scams (long-con schemes commonly referred to as pig butchering). Furthermore, we have observed crypto payments from financial grooming scams, as well as an investment scam, to deepfake-as-a-service providers.

This indicates that scammers are likely using such services to conduct their scams. The surge in deepfake-as-a-service, and AI-as-a-service more broadly, points to growing demand for the technology, likely from organized criminal groups.
TRM witnessed another use of a live deepfake during a video call with a likely financial grooming scammer (see below). We suspected the scammer was using deepfake technology because of the person’s unnatural-looking hairline, and AI detection tools corroborated our assessment that the image was likely AI-generated. This scam and others related to it have received at least USD 60 million, mostly on Ethereum, indicating the potential financial gains from such fraudulent activity.

Reality check
As genAI becomes more prevalent in scams, the public will also likely become more aware of its use for illicit purposes. To assuage potential victims’ fears and appear more authentic, scammers, particularly those involved in romance-related scams, are using real-life people in conjunction with AI.
For example, TRM found evidence of women in Cambodia advertising their services as “real face models” vs. “AI models” to potential scam compound and online casino recruiters on Telegram. In such operations, scammers set up a video call between the victim and one of these models. The women involved use deepfake technology to alter their appearance, making themselves look more attractive or resemble a particular person.

3. Generating fake personas through LLMs
Although deepfakes are becoming more sophisticated, they have historically been easier to detect than text-based genAI. Scammers are increasingly turning to LLMs to enhance their schemes.
LLMs are especially useful to scammers conducting long cons such as pig butchering. These tools allow bad actors to:
- Reduce the need for human oversight and minimize operational fatigue
- Generate fake personas to build trust with victims
- Tailor messages with improved cultural or regional context to appear more legitimate
- Create convincing phishing messages at scale (KnowBe4 published a report in March 2025 saying that at least 73.8% of phishing emails they analyzed in 2024 showed some use of AI)
- Translate text more quickly and fluently into target languages
Demonstrating this last point, TRM also discovered scammers who were likely using an LLM to communicate in both simplified Chinese and English.


AI-generated CEO and employee avatars
Just as LLMs help scammers appear more legitimate, simply displaying an image of a CEO or a team of employees on a website can help build trust with victims. For example, one of the largest pyramid schemes of 2024, MetaMax, appears to have used an AI-generated CEO.
Like many other pyramid and Ponzi schemes, MetaMax claimed users could earn significant returns by engaging with content on social media. The scheme, which used a third party to create its AI-generated avatar, targeted victims around the world, particularly in the Philippines, and received close to USD 200 million in inflows.
Additionally, an investment scam site, babit[.]cc, created avatars of its supposed staff instead of using images of real people, as previous scams have typically done. A cursory glance at this page (see below) reveals it is likely AI-generated. However, as the technology evolves and images become increasingly realistic, detecting such uses of AI will become more challenging.

4. How AI agents enhance and automate fraud operations
AI agents are emerging as a transformative development for both licit and illicit activity. Unlike more reactive AI models, they operate with a high degree of autonomy and can initiate tasks, make decisions, and often incorporate genAI into various stages of their workflows.
For example, a business could create an AI agent to carry out the following workflow (sketched in code after the list):
- Scan all unread emails in an inbox
- Categorize each message (for example, pricing query vs. shipping status query)
- Respond automatically when enough information is available
- Escalate the message to a human if additional input is required
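Below is a minimal Python sketch of that triage loop. The mailbox is modeled as an in-memory list, and the `llm_classify` and `llm_draft_reply` helpers are hypothetical stand-ins for real LLM API calls; the names and keyword rules here are illustrative assumptions, not a production design.

```python
# Minimal sketch of an email-triage agent loop. The llm_classify and
# llm_draft_reply helpers are hypothetical stand-ins for LLM API calls;
# the mailbox is modeled as a simple in-memory list.
from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    subject: str
    body: str


def llm_classify(email: Email) -> str:
    """Hypothetical LLM call: label the message, e.g. 'pricing' or 'shipping'."""
    text = (email.subject + " " + email.body).lower()
    if "price" in text or "quote" in text:
        return "pricing"
    if "shipping" in text or "order" in text:
        return "shipping"
    return "other"


def llm_draft_reply(email: Email, category: str) -> str | None:
    """Hypothetical LLM call: return a reply if confident, else None."""
    canned = {
        "pricing": "Thanks for your interest! Our current price list is attached.",
        "shipping": "Your order is on its way; tracking details will follow.",
    }
    return canned.get(category)  # None means not enough information to answer


def triage(inbox: list[Email]) -> None:
    for email in inbox:                            # 1. scan all unread emails
        category = llm_classify(email)             # 2. categorize each message
        reply = llm_draft_reply(email, category)   # 3. draft a reply if possible
        if reply is not None:
            print(f"AUTO-REPLY to {email.sender}: {reply}")
        else:                                      # 4. escalate to a human
            print(f"ESCALATE to human: {email.sender} / {email.subject}")


triage([
    Email("a@example.com", "Price quote?", "How much for 100 units?"),
    Email("b@example.com", "Complaint", "I want to speak to a manager."),
])
```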
Similarly, scammers can use AI agents to organize, scale, and streamline their operations. Bad actors can use agents to scrape public data, such as a target’s job, location, interests, recent purchases, and social media interactions, to create personalized scams. They can also build LLM-powered chatbots and fake help desks, or use an agent to summarize a target’s online presence and analyze their overall sentiment, identifying vulnerable people at scale.
Scammers are likely using AI agents to automate outreach, translation, and communication across multiple platforms. These tools can also help bad actors build programmatic money laundering processes, optimize scam strategies by reviewing script outcomes at scale, and even deploy victim-persona agents to test new scam techniques.
Despite built-in safeguards, fraudsters can likely manipulate genAI models into circumventing them; for example, a user may be able to extract fraud advice from a restricted model by posing as a researcher asking for help understanding fraud processes. To illustrate this, TRM asked ChatGPT how a scammer might use AI agents to improve their workflow. This is the response it provided:

{{horizontal-line}}
How AI can help detect and disrupt AI-driven scams
Disrupting AI-enabled scams will require a multifaceted, forward-looking approach involving technical solutions, policy and regulatory measures, public education, and collaboration. Policymakers globally are working to create regulatory environments that encourage innovation in AI while mitigating the risks posed by illicit actors seeking to abuse this transformative technology.
Educating the public about the risks of AI-driven scams is imperative to help mitigate them. Agencies like the Federal Bureau of Investigation (FBI) and Europol have invested in campaigns to inform the public about the risks of genAI. Awareness reduces vulnerability and helps individuals recognize fraudulent behavior before it escalates.
Financial institutions, regulators, law enforcement, and intelligence providers will also need to work together to combat this threat. Blockchain intelligence platforms such as TRM Labs are equipping investigators with AI-enabled tools that accelerate their investigations. AI can also be used to trace funds more quickly along last-in-first-out (LIFO) or first-in-first-out (FIFO) paths, summarize investigative graphs, and discover new criminal behavior patterns on-chain.
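To make the FIFO idea concrete, the toy sketch below attributes each outgoing transfer from an address to its oldest unspent deposits first. It is a generic illustration of the accounting rule, not TRM’s tracing implementation; the addresses and amounts are invented.

```python
# Generic sketch of first-in-first-out (FIFO) fund attribution for a single
# address. Deposits and withdrawals are ordered lists of (counterparty, amount)
# pairs; each withdrawal is attributed to the oldest unspent deposits first.
# Illustrative toy only, not production tracing logic.
from collections import deque


def fifo_attribution(deposits, withdrawals):
    queue = deque(deposits)  # oldest deposits at the front
    attributions = []        # (destination, source, amount) triples
    for dest, amount in withdrawals:
        remaining = amount
        while remaining > 0 and queue:
            src, avail = queue[0]
            taken = min(avail, remaining)
            attributions.append((dest, src, taken))
            remaining -= taken
            if taken == avail:
                queue.popleft()                      # deposit fully consumed
            else:
                queue[0] = (src, avail - taken)      # deposit partially consumed
    return attributions


# Example: two victim deposits, then transfers to an exchange and a mixer.
deposits = [("victim_A", 3.0), ("victim_B", 2.0)]
withdrawals = [("exchange", 4.0), ("mixer", 1.0)]
for dest, src, amt in fifo_attribution(deposits, withdrawals):
    print(f"{amt} BTC sent to {dest} attributed to {src}")
# Under FIFO, the 4.0 BTC to the exchange is attributed to all of victim_A's
# 3.0 BTC plus 1.0 BTC from victim_B; the mixer receives victim_B's remainder.
```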
TRM’s blockchain intelligence platform:
- Combines machine learning with on-chain analytics to detect criminal typologies — even when obfuscation techniques are used
- Powers real-time monitoring and risk prioritization for financial institutions and law enforcement agencies
- Leverages AI to automatically analyze smart contracts for potential issues, or to translate contracts written in Solidity, Ethereum’s smart contract language, into plain language that explains what a contract does (a sketch of this idea follows below)
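As one illustration of the contract-analysis idea, the sketch below asks a general-purpose LLM to explain a toy Solidity contract in plain language. It uses the OpenAI Python client (openai>=1.0) as one possible backend and an invented `Doubler` contract; this is a generic demonstration of the approach, not TRM’s implementation.

```python
# Minimal sketch: asking an LLM to explain a Solidity contract in plain
# language. Uses the OpenAI Python client (openai>=1.0) as one possible
# backend; requires the OPENAI_API_KEY environment variable. The Doubler
# contract is an invented example; this is illustrative only.
from openai import OpenAI

SOLIDITY_SOURCE = """
pragma solidity ^0.8.0;
contract Doubler {
    address public owner = msg.sender;
    receive() external payable {}                  // accepts any deposit
    function withdraw() external {
        require(msg.sender == owner);              // only the owner can withdraw
        payable(owner).transfer(address(this).balance);
    }
}
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Explain in plain language what this Solidity contract does, "
                   "and flag anything that could harm a depositor:\n"
                   + SOLIDITY_SOURCE,
    }],
)
print(response.choices[0].message.content)
# A capable model should note that anyone can deposit funds but only the
# owner can ever withdraw them: a classic "double-your-money" scam pattern.
```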
By embedding AI in our blockchain intelligence tools, TRM empowers organizations to detect threats faster, respond more effectively, and build resilience against the evolving fraud threat landscape. Fraud fighters must incorporate AI into their toolkits to strengthen their defenses and stand the best chance of preventing and investigating AI-enabled fraud.