FATA News Alert
Contributing Author: Marie Monet, CPA, CITP, CGMA
In the News: Deepfake Technology Emerges as Critical Threat to Financial Institutions
Background:
Financial institutions are facing an alarming new threat: sophisticated deepfake technology used to commit fraud. The Financial Crimes Enforcement Network (FinCEN) recently issued an alert highlighting the growing use of generative AI (GenAI) to create convincing synthetic content—including realistic but inauthentic videos, pictures, audio, and text—to defraud financial institutions and their customers.
The threat has escalated significantly in 2023 and 2024, with FinCEN observing a marked increase in Suspicious Activity Reports (SARs) describing deepfake-related fraud schemes. These attacks often involve altering or creating fraudulent identity documents to bypass verification processes. What makes this particularly concerning is that GenAI tools have dramatically reduced the resources required to produce high-quality synthetic content that can be difficult to distinguish from authentic media.¹
One notable case involved criminals creating entirely synthetic identities by combining GenAI-generated images with stolen or fabricated personally identifiable information (PII). These synthetic identities were then used to open accounts that functioned as funnels for laundering proceeds from other fraud schemes, including check fraud, credit card fraud, authorized push payment fraud, loan fraud, and unemployment fraud.²
Main issue(s):
Identity Verification Circumvention: Criminals are using GenAI to alter or generate convincing identity documents, including driver's licenses and passports, that can pass standard verification checks.
Synthetic Identity Creation: By combining AI-generated images with stolen or fabricated PII, criminals are creating entirely new "people" who don't exist but can open accounts and conduct transactions.
Account Opening Fraud: Once accounts are established using deepfake identities, they become conduits for receiving and laundering proceeds from various other fraud schemes.
Sophisticated Social Engineering: Beyond document fraud, criminals are using deepfake audio and video in phishing attacks, business email compromise schemes, elder financial exploitation, and romance scams.
Other risks & considerations:
Technology Evolution: As GenAI tools become more accessible and sophisticated, the barrier to creating convincing deepfakes continues to fall, potentially enabling more widespread fraud.
Verification Challenges: Traditional identity verification methods may be insufficient against this new class of threats, requiring financial institutions to implement more advanced detection techniques.
Regulatory Landscape: Current regulations may not fully address the nuances of GenAI-driven fraud, creating potential gaps in enforcement.
Financial System Risk: The ability to bypass identity verification at scale represents a systemic risk to financial institutions and the broader financial system.
Takeaways:
Implement Multi-Layered Verification: Financial institutions should deploy multifactor authentication (MFA), including phishing-resistant MFA, and consider live verification checks where customers confirm their identity through real-time audio or video.
Watch for Red Flags: Key indicators of potential deepfake usage include photos that appear altered, inconsistencies between multiple identity documents, suspicious technical glitches during verification, reverse-image matches to online galleries of AI-generated faces, and geographic or device data that conflicts with customer identity documents.
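As a hypothetical illustration of how several of these indicators might be combined in practice, the sketch below scores an application against a simple red-flag checklist. The flag names, weights, and review threshold are invented for the example and are not a FinCEN standard; any real scoring model would be calibrated to an institution's own data.

```python
# Hypothetical red-flag weights -- illustrative values, not a standard.
RED_FLAGS = {
    "photo_appears_altered": 3,
    "documents_inconsistent": 3,   # inconsistencies across identity documents
    "verification_glitches": 2,    # suspicious technical glitches during live checks
    "reverse_image_match": 4,      # face matches a known AI-generated gallery
    "geo_device_conflict": 2,      # device/IP location conflicts with ID documents
}

def score_application(indicators: dict[str, bool]) -> tuple[int, list[str]]:
    """Return (total score, triggered flags) for one application."""
    triggered = [flag for flag, present in indicators.items()
                 if present and flag in RED_FLAGS]
    return sum(RED_FLAGS[f] for f in triggered), triggered

def needs_enhanced_review(indicators: dict[str, bool], threshold: int = 4) -> bool:
    """Route the application to enhanced due diligence above the threshold."""
    total, _ = score_application(indicators)
    return total >= threshold

app = {"photo_appears_altered": True, "geo_device_conflict": True}
print(needs_enhanced_review(app))  # 3 + 2 = 5 >= 4 -> True
```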
Enhance Detection Capabilities: Consider using specialized software designed to detect deepfakes, examining image metadata, and implementing more sophisticated verification techniques.
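One small example of metadata examination: camera-captured JPEGs usually carry an EXIF APP1 segment, while many AI-generated or re-encoded images do not. The sketch below scans a JPEG's marker segments for that EXIF block. This is a crude heuristic only; a missing segment is a weak signal (legitimate apps also strip EXIF) and specialized detection software goes far beyond this.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Crude heuristic: does a JPEG contain an EXIF APP1 segment?
    Absence is only a weak signal, never proof of a synthetic image."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):       # JPEG start-of-image marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                           # start of scan: metadata is over
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True                              # APP1 segment holding EXIF data
        i += 2 + length                              # skip to the next marker
    return False

# Minimal synthetic byte strings for illustration (not real photos).
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
print(has_exif(with_exif))  # True
```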
Learning Resources:
Deepfakes Emerge as Real Cybersecurity Threat: Insights on the growing threat of AI-driven deepfakes in cybersecurity.
Forensic Accounting and Litigation Consulting | FVS Eye on Fraud: An overview of AI’s impact on forensic accounting.
FATA News Alert: Deepfake A.I. Heist of $25M: Details a $25M deepfake A.I. heist, with a summary of key insights and best practices for practitioners.
The Forensic Accounting Technology & Analytics (FATA) task force serves as a strategic leadership group for the integrated areas of accounting information systems, data analytics, and digital financial forensics. The group brings together uniquely cross-trained experts into an innovative, strategic thought leadership team whose primary roles include: stimulating forward-thinking solutions to emerging FATA issues in our global economies; promulgating best practices that are unique to FATA practitioners and environments; identifying and developing FATA resources to enhance learning and professional development opportunities; and educating members and stakeholders to promote FATA’s mission through strategic global relationships.
¹ U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN), FinCEN Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions, November 2024.
² U.S. Department of the Treasury, Managing Artificial Intelligence-Specific Risks in the Financial Services Sector, March 2024.