Crypto Heist: The Face That Didn't Exist
Treasury's new report reveals how AI-generated synthetic identities defeated bank verification systems—and why the solution is more AI
The face in the video looked real, the driver’s license appeared authentic, and the account passed all automated checks. Three weeks later, investigators discovered the customer never existed.
Treasury’s Financial Crimes Enforcement Network issued an alert in November 2024 warning that criminals were using deepfake media to bypass identity verification at U.S. financial institutions. By the time the department’s new report to Congress landed last week, the problem had evolved beyond warnings into documented losses: malicious actors had successfully opened accounts using fraudulent identities suspected to have been produced with generative AI, then used those accounts to launder proceeds from other fraud schemes. According to analysis of Bank Secrecy Act data, this isn’t theoretical risk; it’s operational reality.
I’ve been trying to understand the mechanics of how verification systems failed, and the technical details are both straightforward and unsettling. Criminals create deepfake images by modifying authentic source photos or generating synthetic ones entirely. They combine these AI-generated images with stolen personally identifiable information scraped from data breaches, or fabricate the PII entirely using the same generative models. The resulting synthetic identity includes a face that doesn’t exist, credentials that appear legitimate, and behavioral signals that fool automated screening.
Traditional “Know Your Customer” verification relied on document authenticity: does this driver’s license match government databases, do the security features check out, does the face in the photo match the face in the video call? Generative AI breaks this model because the documents are authentic in every verifiable way except the fundamental fact that the person doesn’t exist. A passport photographed at the correct angle, a utility bill in standard format, a selfie video with natural lighting and movement: all produced by algorithms trained on millions of real examples.
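To see why those checks fail, here is a toy sketch of the legacy verification flow, written to illustrate the point rather than any real bank’s pipeline; the field names, similarity threshold, and registry lookup are all assumptions. Every check validates form and internal consistency, and none of them asks whether the person exists.

```python
# Toy sketch of the legacy checks described above. Every check validates form
# and internal consistency, not existence; names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class IDDocument:
    license_number: str
    photo_embedding: list       # face embedding extracted from the document photo
    security_features_ok: bool  # hologram / font / layout template checks

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def legacy_kyc_pass(doc, selfie_embedding, registry, face_threshold=0.9):
    record_exists = doc.license_number in registry   # stolen or seeded PII passes
    template_ok = doc.security_features_ok           # generated doc uses the right template
    face_matches = cosine_similarity(doc.photo_embedding,
                                     selfie_embedding) >= face_threshold
    return record_exists and template_ok and face_matches  # true even for a face that never existed
```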
The scale shifts fraud economics dramatically. Deepfake files surged from 500,000 in 2023 to 8 million in 2025, according to industry tracking. Fraud attempts using this technology spiked 3,000% in 2023 alone, with 1,740% growth in North America. Contact centers reported a 680% year-over-year rise in deepfake activity. What previously required skilled forgers or insider access to identity databases now requires consumer-grade AI tools and publicly available training data.
Treasury’s response is to fight automation with automation, which sounds either sophisticated or recursive depending on your perspective. The department recommends financial institutions deploy AI systems capable of analyzing blockchain transaction patterns, simulating money laundering scenarios, and adapting to evolving criminal tactics in real time. Specifically: entity resolution through graph analysis to map connections among wallets and exchanges; behavioral monitoring to detect synthetic attempts through login patterns and device signals; algorithms that identify “chain-hopping” (moving assets across blockchains) and “smurfing” (structuring small deposits across multiple accounts).
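To ground the graph-analysis recommendation, here is a minimal sketch of wallet entity resolution and structuring (“smurfing”) flagging using the networkx library. It is not Treasury’s tooling or any vendor’s product; the record fields, linking signals, and thresholds are illustrative assumptions.

```python
# Illustrative sketch only: graph-based entity resolution plus a simple
# "smurfing" flag over transaction records. Field names and thresholds are
# assumptions, not any institution's actual detection logic.
from collections import defaultdict
import networkx as nx

def build_entity_graph(transactions):
    """Link wallets that share identifying signals (device, IP, KYC email)."""
    graph = nx.Graph()
    by_signal = defaultdict(set)
    for tx in transactions:
        graph.add_node(tx["wallet"])
        for key in ("device_id", "ip_address", "kyc_email"):
            if tx.get(key):
                by_signal[(key, tx[key])].add(tx["wallet"])
    for wallets in by_signal.values():
        wallets = sorted(wallets)
        for a, b in zip(wallets, wallets[1:]):
            graph.add_edge(a, b)  # a shared signal suggests a common actor
    return graph

def flag_smurfing(transactions, graph, threshold=10_000, min_deposits=5):
    """Flag wallet clusters whose many small deposits sum past a threshold."""
    flagged = []
    for cluster in nx.connected_components(graph):
        deposits = [tx for tx in transactions
                    if tx["wallet"] in cluster and tx["type"] == "deposit"]
        total = sum(tx["amount"] for tx in deposits)
        if len(deposits) >= min_deposits and total >= threshold:
            flagged.append({"wallets": sorted(cluster), "total_deposited": total})
    return flagged
```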
The technical approach has a certain logic. AI-powered models can process transaction data at speeds human analysts cannot match, learning the sequences that indicate money laundering (deposit to exchange, convert to privacy coin, transfer through a mixer, swap to stablecoin, withdraw through a different exchange) and flagging those patterns before funds disappear into jurisdictions beyond U.S. reach. Some tools can now interdict fraudulent transactions in real time by identifying when a customer’s digital asset wallet interacts with known scam websites at the moment of transfer.
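Reduced to its simplest form, that kind of sequence flagging is a subsequence check over an account’s ordered activity. The sketch below hard-codes the chain for clarity; a real model would learn such patterns from labeled data rather than enumerate them, and the step labels are my own shorthand.

```python
# Toy illustration: does an account's ordered activity contain the laundering
# chain described above as a subsequence? Step labels are invented shorthand;
# production systems learn these patterns rather than hard-coding them.
LAUNDERING_CHAIN = [
    "deposit",
    "exchange_transfer",
    "privacy_coin_swap",
    "mixer_transfer",
    "stablecoin_swap",
    "external_withdrawal",
]

def contains_chain(events, chain=LAUNDERING_CHAIN):
    """True if every step in `chain` appears in order (not necessarily adjacent)."""
    event_types = iter(e["type"] for e in events)
    return all(step in event_types for step in chain)

# Usage: sort an account's events by timestamp, then check for the pattern.
# contains_chain(sorted(account_events, key=lambda e: e["timestamp"]))
```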
Large language models offer different capabilities: automating adverse media and sanctions screening checks, synthesizing vast amounts of unstructured data to assist case reviews, even drafting suspicious activity report narratives. Compliance personnel describe faster and deeper analysis than manual review allows. One institution reported using AI to cut Know Your Customer verification costs by 60% over 18 months by handling non-standard documents: passports photographed at odd angles, utility bills in foreign formats, corporate filings in languages automated systems previously rejected.
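For a sense of what the adverse media piece could look like in code, here is a rough sketch. The prompt shape, the JSON output schema, and the `call_llm` helper are placeholders rather than any particular vendor’s API, and the design assumption is that the model drafts findings for an analyst to review, not that it decides.

```python
# Sketch of LLM-assisted adverse media screening. `call_llm` is a placeholder
# for whatever model endpoint an institution uses; the prompt and output
# schema are illustrative assumptions, not a real compliance product.
import json

SCREENING_PROMPT = """You are assisting a sanctions and adverse media review.
Customer: {name}
Articles:
{articles}

Return JSON: {{"match": true or false, "risk_summary": "...", "sources": ["..."]}}
Report only allegations that plausibly refer to this specific customer."""

def screen_customer(name, articles, call_llm):
    """Draft a screening summary for analyst review; the model does not decide."""
    prompt = SCREENING_PROMPT.format(name=name, articles="\n---\n".join(articles))
    raw = call_llm(prompt)   # placeholder: institution-specific model call
    return json.loads(raw)   # structured draft handed to a human reviewer
```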
The challenge is that criminals have access to the same technology. Treasury’s report notes that generative AI tools are used for phishing campaigns, scanning breached data repositories to extract PII, and creating high-quality fraudulent documents that fool both automated systems and human reviewers. Industry sources report that manual analysts “can no longer tell legitimate from fraudulent with the human eye” when examining AI-generated identity documents. The visual and structural tells that previously flagged forgeries (inconsistent shadows, pixelation artifacts, formatting errors) disappear when algorithms generate documents from the same training data that verification systems use to validate authenticity.
This creates an adversarial cycle: institutions deploy machine learning models to detect synthetic identities, criminals train generative models to fool those detection systems, institutions add liveness detection (verifying that biometric samples come from living persons rather than deepfake videos), criminals develop techniques to defeat liveness checks. Each iteration raises the sophistication floor for both sides.
Financial institutions highlighted to Treasury that data quality and model validation remain barriers to implementation. AI systems operate as “black boxes” that complicate compliance teams’ ability to explain decisions to regulators or customers. Historical transaction data may reflect enforcement biases that skew model outcomes. Upfront costs for adoption prove prohibitive for smaller institutions unable to dedicate resources to training custom systems. Ongoing expenses for model maintenance, governance, and monitoring add to the burden, particularly as cybersecurity risks from AI tools themselves emerge as concerns.
What surprised me in Treasury’s recommendations is the acknowledgment that existing regulatory frameworks, while nominally technology-agnostic, weren’t designed with AI capabilities in mind. The department suggests financial institutions align their AI model development with the National Institute of Standards and Technology’s AI Risk Management Framework, which emphasizes transparency, documentation, and model-risk validation. The implicit message: regulators want institutions to stop running legacy rules-based systems in parallel with machine learning models once the AI approach proves effective. Parallel runs increase costs, create inefficiencies between systems, and waste investigatory resources on false positives.
The geopolitical dimension complicates this further. The report describes AI-powered entity resolution mapping connections across “multi-jurisdictional networks that may evade detection by legacy, rules-based systems.” North Korean hackers, for instance, have proven adept at using complex laundering sequences that exploit the seams between different regulatory regimes and technology platforms. AI detection systems need to track activity across blockchains, through mixers, across bridges, and into over-the-counter brokers who prefer stablecoins, each step potentially involving different jurisdictions with varying levels of cooperation.
Treasury estimates consumers lost $12.5 billion to fraud in 2024, with digital asset-related fraud accounting for $9 billion of that total. Investment scams using AI-enhanced social engineering—the “pig butchering” schemes run from industrial-scale operations in Southeast Asia—netted $5.8 billion alone. These aren’t failures of technology but of the adversarial balance between defensive and offensive capabilities.
The deeper question is whether detection can keep pace with generation. Machine learning models improve through training on larger datasets; so do generative models producing synthetic identities. Financial institutions invest in graph analysis to map wallet connections; criminals fragment transactions across more wallets and exchanges. Banks deploy liveness detection for video verification; deepfake tools advance to defeat those checks. The technical capabilities are symmetric, which means advantage comes down to resources, speed of implementation, and willingness to accept false positives versus false negatives in screening.
Treasury’s policy recommendations suggest betting on scale: public-private partnerships to share best practices, guidance encouraging AI adoption for compliance, alignment with NIST frameworks to provide regulatory clarity. The logic is that financial institutions collectively have more resources and data than criminals, making sophisticated AI detection economically viable where human review isn’t. But the report also notes that institutions worry about regulatory uncertainty in deploying systems whose decision-making processes they cannot fully explain.
This is the paradox of adversarial AI in financial crime: the same opacity that makes these systems powerful—their ability to identify patterns humans miss, to process data at speeds manual review cannot match—also makes them difficult to audit, explain, or trust. When an algorithm flags a transaction as suspicious based on a complex web of behavioral signals and network connections, compliance teams face the challenge of translating that into actionable intelligence or defensible regulatory reports.
The result is an arms race with no obvious equilibrium. Better detection drives criminals toward more sophisticated generation. More sophisticated generation forces institutions to deploy more complex detection. Each side’s advances push the other toward greater technical capability and higher operational costs. The question isn’t whether AI will be used for both fraud and fraud detection—that ship has sailed. It’s whether defensive applications can maintain parity with offensive ones, and at what cost to institutions, regulators, and the customers caught in between.
Treasury’s report amounts to an acknowledgment that there’s no going back to manual verification and rules-based screening. The technology exists, criminals are using it, and the only question is how quickly financial institutions can build detection capabilities sophisticated enough to keep up. Whether that constitutes progress or just an escalating cycle of technical complexity is a matter of perspective.



