AI Scams Statistics in US 2026 | Key Facts


What Are AI Scams?

AI scams are fraudulent schemes that use artificial intelligence technologies — including large language models, voice cloning, deepfake video generation, AI-generated phishing content, and autonomous bot systems — to deceive victims into surrendering money, personal data, or access credentials. Unlike traditional scams that relied on clumsy impersonation, scripted phone calls, and easily spotted grammar errors, AI-powered fraud has crossed a threshold that makes it genuinely difficult to detect, even for technically informed individuals. Voice cloning tools can now replicate any person’s voice from as little as three seconds of audio. Deepfake video generators can fabricate a live video call with a convincing digital facsimile of a family member, executive, or government official. Large language models can produce phishing emails so contextually precise and grammatically flawless that click-through rates are now more than four times higher than human-crafted equivalents. The Federal Bureau of Investigation formally acknowledged this shift in April 2026 by including, for the first time in its 26-year history, a dedicated “AI-related crime” category in its annual Internet Crime Complaint Center (IC3) report — a milestone that underscores how profoundly AI has restructured the fraud landscape.

What makes the AI scams statistics in the US in 2026 so alarming is not just the dollar amounts — though those are staggering — but the structural changes that AI has made to the fraud economy itself. Barriers to entry have collapsed. A criminal network with a $60/month subscription to commercial AI tools can now run voice cloning, deepfake video, and mass phishing operations that would have required a professional production team just five years ago. The Federal Trade Commission (FTC) testified before the US Congressional Joint Economic Committee on March 25, 2026, that Americans reported a record $15.9 billion in fraud losses in 2025 — up 430% since 2020 and up 27% from 2024. The FBI’s concurrent IC3 2025 Annual Report recorded $20.9 billion in cybercrime losses, the first time that figure has exceeded $20 billion in a single year. AI is not the only driver of these numbers — but it is the defining accelerant. Every data source, every federal agency, and every enforcement body examining the current fraud environment in the US is arriving at the same conclusion: AI has transformed scamming from a cottage industry into an industrial operation, and the statistics bear this out in full.


Key Facts: AI Scams Statistics in the US 2026

The following table presents the most important verified AI scam statistics for 2026, drawn exclusively from federal law enforcement reports, FTC Congressional testimony, and officially cited federal data.

Key Fact Verified Stat
Total US cybercrime losses reported to FBI IC3 in 2025 $20.877 billion — first time surpassing $20B; up 26% from 2024
Total FBI IC3 complaints received in 2025 1,008,597 — first time ever exceeding 1 million in a single year
Average daily complaints received by FBI IC3 in 2025 ~2,760 complaints per day
AI-related complaints formally logged by FBI IC3 in 2025 22,364 complaints — first time AI was a dedicated IC3 category
Total losses attributed to AI-related complaints in 2025 $893 million (FBI IC3 2025 Annual Report, April 2026)
Losses to AI investment fraud (largest AI sub-category) $632 million — 70.8% of all AI-related IC3 losses
Losses to AI-enabled Business Email Compromise (BEC) $30 million confirmed AI-component losses within $3.046B BEC total
Losses to AI-enabled tech support scams $19.5 million in confirmed AI-component IC3 losses
Total FTC consumer fraud losses reported in 2025 $15.9 billion — a record high; up from $12.5B in 2024
FTC fraud reports received in 2025 3 million reports — up from 2.6 million in 2024
FTC fraud losses increase since 2020 +430% — from ~$3 billion in 2020 to $15.9 billion in 2025
FTC investment scam losses in 2025 $7.9 billion — nearly half of all FTC-reported fraud losses
FTC imposter scam reports in 2025 Over 1 million reports — most frequently reported fraud category
FTC imposter scam losses in 2025 $3.5 billion
Social media scam losses in 2025 (FTC Data Spotlight) $2.1 billion — an 8× increase since 2020
Share of 2025 fraud victims saying scam started on social media ~30% (FTC Data Spotlight, April 2026)
Most reported social media platform for scam losses in 2025 Facebook — more losses reported than from text or email scams
Social media investment scam losses alone (2025) $1.1 billion — more than half of total social media scam losses
Losses reported by Americans aged 60+ in 2025 (FBI IC3) $7.7 billion — up ~60% from 2024
Share of AI-related IC3 losses attributed to adults 60+ $352 million — 39% of all AI-related FBI IC3 losses
Generative AI-enabled fraud surge in 2025 1,210% increase vs. prior year (Vectra AI, March 2026)
Deepfake-enabled vishing attacks surge, Q1 2025 vs Q4 2024 +1,600% (Keepnet Labs, March 2026)
Projected US AI-facilitated fraud losses by 2027 $40 billion (Deloitte Center for Financial Services / Javelin Strategy)
Voice cloning tool cost for a fraud kit (monthly subscription) As low as $60/month — commercially available tools
Audio required to clone a person’s voice (McAfee Research) As little as 3 seconds to produce an 85% voice match
Americans directly targeted by an AI voice clone scam 1 in 10 — roughly 33 million people (2026 survey data)

Data Sources: FBI Internet Crime Complaint Center (IC3) 2025 Annual Report — released April 6, 2026; FBI Press Release — “Cryptocurrency and AI Scams Bilk Americans of Billions” (April 2026); FTC Congressional Joint Economic Committee Testimony of Lois Greisman, Associate Director — March 25, 2026; FTC Data Spotlight — “New FTC Data Show People Have Lost Billions to Social Media Scams” (April 2026); FTC Annual Report to Congress — “Protecting Older Consumers 2024–2025” (December 1, 2025); Keepnet Labs Deepfake Statistics & Trends 2026 (March 2026); Vectra AI — AI Scams 2026 Analysis (March 2026)

These 26 data points map the full topography of AI-enabled fraud in America as of 2026. The $893 million in formally AI-attributed IC3 losses and the $20.877 billion in total FBI-reported cybercrime losses are not in tension — they measure different things. The $893 million counts only cases where victims explicitly identified an AI component in their complaint, which is almost certainly a dramatic undercount: most victims who lose money to a deepfake phishing email or a voice clone call never identify the AI element when filing a report. The full $20.877 billion is the total cybercrime universe, within which AI is the single most powerful accelerant but rarely the explicitly named mechanism. The FTC’s parallel $15.9 billion represents reported consumer fraud losses across all channels, and the 430% increase since 2020 is the clearest single indicator of how fundamentally the fraud landscape has shifted in five years — a period that precisely maps to the democratization of generative AI tools.


FBI IC3 2025 AI Cybercrime Losses in the US 2026

FBI IC3 Annual Cybercrime Losses — US Historical Trend
(FBI IC3 Annual Reports 2020–2025; released April 2026)

2020 |████████████████████                            $4.2B
2021 |███████████████████████████                     $6.9B
2022 |██████████████████████████████████              $10.3B
2023 |███████████████████████████████████████         $12.5B
2024 |████████████████████████████████████████████    $16.6B
2025 |██████████████████████████████████████████████████████ $20.9B ← RECORD
      ─────────────────────────────────────────────────────────────
      $0      $5B     $10B     $15B      $20B    $20.9B
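As a quick sanity check, the year-over-year growth implied by the loss series in the chart can be recomputed directly. The inputs below are the chart's rounded values in billions of dollars, so results are approximate.

```python
# Year-over-year growth implied by the FBI IC3 loss series above
# (figures in $ billions, taken from the chart).
ic3_losses = {2020: 4.2, 2021: 6.9, 2022: 10.3, 2023: 12.5, 2024: 16.6, 2025: 20.877}

def yoy_growth(series, year):
    """Percent change from the prior year to `year`."""
    prev = series[year - 1]
    return (series[year] - prev) / prev * 100

for year in sorted(ic3_losses)[1:]:
    print(f"{year}: {yoy_growth(ic3_losses, year):+.0f}%")
# The 2025 line works out to about +26%, matching the
# "+$4.3B (+26%)" year-over-year figure reported below.
```

The 2024-to-2025 jump of $4.277 billion rounds to the +$4.3B (+26%) increase cited in the table that follows.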

AI-Related IC3 Complaints & Losses in 2025 (First Dedicated Category):
  22,364 AI-related complaints → $893 million in attributed losses
  AI investment fraud alone:   $632 million (70.8% of AI total)
Crime Category (FBI IC3 2025) Complaints Total Losses AI-Component Losses
Investment Fraud High volume $8.6 billion $632 million confirmed AI-linked
Business Email Compromise (BEC) Significant $3.046 billion $30 million confirmed AI-linked
Tech Support Scams 37,000+ $2.1 billion $19.5 million confirmed AI-linked
Cryptocurrency fraud 180,000+ $11.36 billion Heavily AI-assisted
Phishing / Spoofing 191,561 — #1 by volume $215.8 million AI-enhanced content
AI-Related (all categories) 22,364 $893 million 100% — dedicated new category
Total all cyber-enabled fraud ~453,000 $17.7 billion ~85% of all IC3 losses
Total all IC3 complaints (2025) 1,008,597 $20.877 billion First year >1M complaints
Total IC3 complaints 2024 859,532 $16.6 billion Prior year baseline
Year-over-year loss increase +$4.3B (+26%)

Data Sources: FBI Internet Crime Complaint Center (IC3) 2025 Annual Report (April 6, 2026); FBI Press Release — “Cryptocurrency and AI Scams Bilk Americans of Billions” (fbi.gov, April 9, 2026); Alston & Bird Privacy, Cyber & Data Strategy Blog — “Cybercrime Trends to Watch: Takeaways from the FBI’s 2025 IC3 Annual Report” (April 10, 2026); SecureWorld — “FBI: AI-Enabled Fraud Topped $893M in 2025” (April 9, 2026)

The FBI IC3 2025 Annual Report is the most authoritative single document in the US AI scams statistics 2026 landscape, and its milestone findings deserve to be read carefully. The crossing of $20 billion in total cybercrime losses — reaching $20.877 billion on the back of 1,008,597 complaints — is a watershed moment that the FBI itself described in historic terms: the first time in IC3’s 26-year history that both figures have crossed their respective thresholds simultaneously. The 1,210% surge in generative AI-enabled fraud documented by Vectra AI in March 2026 helps explain why: AI is not incrementally improving fraud execution — it is compressing the cost, skill, and time required to run sophisticated scam operations to the point where mass-scale, high-quality fraud is accessible to anyone with a commercial subscription and a Wi-Fi connection.

The formal introduction of AI-related crime as a dedicated IC3 category — with 22,364 complaints and $893 million in attributed losses in its inaugural year — is best understood as a conservative floor, not a ceiling. AI attribution in IC3 reports depends entirely on victims recognizing and describing an AI element in their experience. The vast majority of phishing emails enhanced by large language models, voice clone calls that victims believed were genuine family members, and deepfake investment videos are never identified as AI by their victims — which means the true AI-related loss figure embedded within the $17.7 billion in cyber-enabled fraud is almost certainly multiples of the $893 million formally counted. The investment fraud sub-category’s $632 million in AI-linked losses — representing 70.8% of the entire AI-related IC3 total — confirms that AI-enhanced pig butchering scams, fake crypto platforms, and AI-generated investment advisors are currently the highest-dollar-value AI crime category in the US.
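The sub-category shares quoted in this section can be cross-checked from the headline figures themselves. This is simple arithmetic on the cited IC3 numbers, not new data.

```python
# Cross-checking the AI sub-category shares quoted in this section.
# Figures are as cited: $ millions for the AI categories, with the
# $20.877B IC3 total expressed in millions for a like-for-like ratio.
ai_total = 893.0        # all AI-attributed IC3 losses, 2025
ai_investment = 632.0   # AI investment fraud losses
ai_60_plus = 352.0      # AI-attributed losses, victims aged 60+
ic3_total = 20877.0     # all IC3 losses, 2025

def pct_share(part, whole):
    """Return `part` as a percentage of `whole`."""
    return part / whole * 100

print(f"Investment fraud share of AI losses: {pct_share(ai_investment, ai_total):.1f}%")  # 70.8%
print(f"60+ share of AI losses:              {pct_share(ai_60_plus, ai_total):.1f}%")     # 39.4%
print(f"AI-attributed share of all IC3:      {pct_share(ai_total, ic3_total):.1f}%")      # 4.3%
```

The last line is worth noting: the formally AI-attributed $893 million is only about 4.3% of total IC3 losses, consistent with the argument above that explicit AI attribution is a floor, not a ceiling.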


FTC Consumer Fraud and AI Scam Losses in the US 2026

FTC Consumer Fraud Reported Losses — Annual Trend 2020–2025
(FTC Consumer Sentinel Network; FTC Congressional Testimony, March 25, 2026)

2020 |████████████                                    ~$3.3B
2021 |████████████████████                            ~$5.8B
2022 |████████████████████████████                    ~$8.8B
2023 |████████████████████████████████████            $10.4B (provisional)
2024 |████████████████████████████████████████████    $12.5B
2025 |████████████████████████████████████████████████████ $15.9B ← RECORD
      ─────────────────────────────────────────────────────────────────
      $0      $4B      $8B      $12B     $15.9B

+430% increase in reported losses since 2020
FTC Fraud Category (2025) Reported Losses Reports / Complaints Key AI/Scam Vector
Investment Scams $7.9 billion Significant Fake AI advisors; deepfake celebrity endorsements
Imposter Scams $3.5 billion Over 1 million Voice cloning; AI government impersonation
Social Media Scams (all types) $2.1 billion ~30% of all loss reports AI-generated profiles; deepfake ads
Social Media Investment Scams $1.1 billion AI-generated “advisors” in WhatsApp groups
Romance Scams Included in imposter ~60% start on social media Deepfake video; AI chatbot relationships
Tech Support Scams Significant 37,560+ AI voice impersonation; fake alerts
Total all consumer fraud $15.9 billion 3 million reports All categories AI-amplified
2024 comparison $12.5 billion 2.6 million reports
Year-over-year increase +$3.4B (+27%) +400,000 reports
FTC consumer redress obtained (FY2025) $1.8 billion 40 enforcement actions

Data Sources: FTC Congressional Joint Economic Committee Testimony of Lois Greisman, Associate Director, Division of Marketing Practices (March 25, 2026); FTC Data Spotlight — “New FTC Data Show People Have Lost Billions to Social Media Scams” (FTC.gov, April 2026); FTC Annual Report to Congress — “Protecting Older Consumers 2024–2025” (December 1, 2025); National Consumers League White Paper — “Deepfakes, Chatbots, and Scams: How AI Is Fueling Fraud” (September 17, 2025)

The FTC’s record $15.9 billion in reported consumer fraud losses in 2025 — delivered to Congress in sworn testimony on March 25, 2026 — tells a story that goes beyond raw dollar amounts. The 430% increase in reported losses since 2020 occurred over a five-year window that precisely maps to the commercial availability and rapid improvement of generative AI tools, and the FTC’s own testimony notes that the dollar increase is being driven not merely by more scam attempts but by “a sharp increase in the number of consumers reporting large losses of $100,000 or more” — meaning AI is enabling scams that extract larger individual amounts, not just more frequent smaller ones. The $7.9 billion in investment scam losses — representing nearly half of all FTC-reported losses — is heavily AI-influenced: the FTC testimony and the National Consumers League’s September 2025 white paper both document how AI-generated investment advisors, deepfake celebrity endorsement videos, and automated WhatsApp “investor communities” filled with AI-generated testimonials are now standard tools of investment fraud.
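The two headline FTC growth rates can be reproduced from the cited figures. Note that the +430% claim uses the ~$3 billion 2020 baseline cited in the Congressional testimony; the chart above rounds 2020 to ~$3.3B, which would give a somewhat smaller cumulative increase.

```python
# Reproducing the headline FTC growth figures (all in $ billions).
# Assumes the ~$3B 2020 baseline cited in the March 2026 testimony.
ftc_2020 = 3.0
ftc_2024 = 12.5
ftc_2025 = 15.9

since_2020 = (ftc_2025 - ftc_2020) / ftc_2020 * 100   # cumulative increase
yoy_2025 = (ftc_2025 - ftc_2024) / ftc_2024 * 100     # 2024 -> 2025 change

print(f"Increase since 2020: +{since_2020:.0f}%")   # +430%
print(f"2024 -> 2025:        +{yoy_2025:.0f}%")     # +27%
```

Both outputs match the figures quoted in the testimony and the table above.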

The FTC’s social media scam Data Spotlight published in April 2026 adds a dimension that directly implicates AI at the platform level. With $2.1 billion in losses originating from social media in 2025 — an eightfold increase since 2020 — and nearly 30% of all fraud loss reports tracing back to social media as the initial contact point, the infrastructure of AI-enabled fraud has migrated emphatically onto consumer platforms. The finding that Facebook alone generated more scam losses than text messages or email in 2025 reflects the AI advertising ecosystem’s role in enabling fraud: scammers use the same audience-targeting tools as legitimate advertisers, purchasing ads that reach demographically optimized audiences with AI-generated deepfake celebrity endorsements, AI-cloned influencer videos, and algorithmically personalized investment pitches. The romance scam sub-finding is particularly striking: nearly 60% of people who reported losing money to a romance scam in 2025 said it started on a social media platform — and these scams increasingly involve AI chatbots that maintain months-long conversations and deepfake video calls that fabricate a visual “person” who never existed.


AI Voice Cloning and Deepfake Scam Statistics in the US 2026

AI Voice Cloning & Deepfake Fraud — Key Metrics (US, 2025–2026)
(FBI IC3 2025; Keepnet Labs March 2026; Vectra AI March 2026; McAfee Research)

Deepfake-enabled vishing surge Q1 2025 vs Q4 2024:
+1,600%   |████████████████████████████████████████████████████████████████

AI-enabled fraud growth vs traditional fraud (2025):
AI fraud:   +1,210%  |████████████████████████████████████████████████████
Traditional: +195%   |██████████████████████████████████████

Deepfake files worldwide 2023→2025:
2023: 500,000  |████████████████
2025: 8,000,000 |████████████████████████████████████████████████████████████████

CEO fraud via deepfakes now targets at least 400 companies per day
77% of voice clone scam victims who engaged — lost money
Voice clone audio minimum required: 3 seconds (85% voice match)
Scam-as-a-Service AI fraud kit cost: from $60/month
AI Voice / Deepfake Scam Metric Figure Source / Date
Deepfake-enabled vishing surge Q1 2025 vs Q4 2024 +1,600% Keepnet Labs Deepfake Statistics 2026 (March 2026)
Generative AI-enabled fraud surge in 2025 +1,210% Vectra AI analysis (March 2026)
AI use of voice cloning for fraud jumped in 2025 +400% year-over-year BlackFog / FBI data (December 2025)
Deepfake files worldwide 2023 → 2025 500,000 → 8,000,000 UK Government projection (cited Keepnet Labs 2026)
Voice clone required audio minimum (McAfee) 3 seconds for 85% voice match McAfee Research (2025)
Share of voice clone scam victims who lost money 77% Keepnet Labs / multiple sources (2026)
AI-driven fraud FBI 2025 — family distress / voice clone Included in $893M FBI IC3 2025 Annual Report
AI losses to older adults (60+) from AI IC3 crimes $352 million AARP/FBI IC3 2025 (April 2026)
CEO fraud via deepfakes (daily targeting rate) 400+ companies per day Keepnet Labs 2026
1 in 10 Americans experiencing voice clone scam ~33 million Americans 2026 survey data (ScamWatchHQ, April 2026)
Human accuracy detecting high-quality deepfake media As low as 24.5% iProov 2025 study
Arup engineering firm deepfake loss (single incident) $25.6 million FBI-cited real-world case (2024)
Scam-as-a-Service AI fraud kit monthly cost From $60/month Trend Micro / Norton research 2026
Deepfakes now ranked among top 5 US fraud types (2025) Top 5 Keepnet Labs citing FBI IC3 data 2025
Projected US AI-facilitated fraud losses by 2027 $40 billion annually Deloitte Center for Financial Services / Javelin Strategy

Data Sources: FBI IC3 2025 Annual Report (April 6, 2026); AARP — “FBI Report: Internet Crime Losses Hit $20.9 Billion” (April 2026); Keepnet Labs — Deepfake Statistics & Trends 2026 (March 14, 2026); Vectra AI — “AI Scams in 2026” (March 23, 2026); BlackFog — “FBI Warning AI Voice Phishing” (December 10, 2025); Trend Micro News — “AI Voice Cloning: The Scam That Sounds Exactly Like Someone You Love” (April 16, 2026); McAfee Research on voice cloning audio requirements (2025); iProov — Biometric Intelligence Report 2025

The AI voice cloning and deepfake scam statistics for the US in 2026 reflect a technology arms race in which the fraudsters are, at present, winning. The 1,600% surge in deepfake-enabled vishing attacks in just one quarter — Q1 2025 versus Q4 2024 — is not a statistical artifact; it reflects a genuine market maturation event in which multiple commercial voice cloning tools became simultaneously cheaper, faster, and more convincing, enabling criminal operations to scale overnight. The FBI’s formal warning issued in May 2025 about AI-cloned voice attacks targeting senior US government officials — using voice clones to impersonate cabinet-level figures in smishing and vishing campaigns — elevated this from a consumer protection issue to a national security concern. The documented $25.6 million loss at engineering firm Arup from a single deepfake video conference call — in which every other participant seen and heard on the call was an AI fabrication — has become the defining real-world case study for corporate deepfake risk, and is now cited in FBI enforcement materials, FTC guidance, and Congressional testimony.

The human detection capacity data from iProov’s 2025 biometric intelligence study is among the most important contextual statistics in this entire dataset: only 0.1% of participants correctly identified all fake and real media when tested, and human accuracy drops to as low as 24.5% for high-quality deepfakes. The 70% of people who report they cannot confidently distinguish a real voice from a cloned one (McAfee survey data, 2025) is directly consistent with this finding. 77% of voice clone scam victims who engaged with the call reported losing money — a conversion rate that would be the envy of any legitimate sales operation and that explains why criminal networks have invested so heavily in building and scaling voice fraud infrastructure. With the cost of entry at $60/month for a complete Scam-as-a-Service AI fraud kit and the required audio sample at just 3 seconds, the structural barrier between intent and execution has effectively vanished.


AI Scams Demographics and Victims in the US 2026

AI Scam Victims by Age Group — FBI IC3 2025 / FTC Data
(FBI IC3 2025 Annual Report; FTC Protecting Older Consumers 2024-2025, Dec 2025)

Adults 60+ (FBI IC3 losses 2025):
$7.7 billion total     |████████████████████████████████████████████████████████  +60% vs 2024
$352 million AI-attr.  |███████████████  39% of all AI-attributed IC3 losses

FTC: Who loses the most per incident (median loss)?
Adults 80+      |████████████████████████████████████████  Median loss exceeded $1,600
Adults 60-79    |████████████████████████████████████      High median loss
Adults under 60 |██████████████████████                    Lower median individual loss

But young adults report the HIGHEST EXPOSURE RATE:
Adults 18-29    |████████████████████████████████████████  Highest scam encounter rate
(per FTC; 18-29 more likely to report losing money than 60+ due to lower reporting among older victims)
Demographic / Victim Metric Figure Source
Total losses — Americans 60+ (FBI IC3 2025) $7.7 billion — ~37% of all IC3 losses FBI IC3 2025 Annual Report
Year-over-year increase in 60+ losses (2024→2025) ~+60% AARP citing FBI IC3 2025
AI-attributed losses — adults 60+ (2025) $352 million — 39% of AI-related IC3 total FBI IC3 2025 / AARP (April 2026)
FTC older adult fraud losses 2020→2024 growth ~4× increase: $600M → $2.4B FTC Protecting Older Consumers 2024–25 (Dec 2025)
Adults 80+ median reported loss per incident Over $1,600 per incident FTC Older Adults Report, Dec 2025
Top FTC fraud category for adults 60+ Investment scams — highest $ loss FTC Older Adults Report, Dec 2025
Adults 60+ targeted primarily on Social media FTC Older Adults Report, Dec 2025
Adults 18–29 Highest exposure rate to AI scams FTC Consumer Sentinel 2025
Share of all 2025 FTC fraud initiated on social media ~30% of loss reporters FTC Data Spotlight, April 2026
Share of romance scam victims starting on social media ~60% FTC Data Spotlight, April 2026
Average individual investment scam loss (FTC 2025) Over $10,000 per incident FTC JEC Testimony, March 25, 2026
Older adults with losses over $100,000 to scams Sharp increase in 2025 FTC JEC Testimony, March 25, 2026
Share of fraud victims who reported to any authority ~4.8% (mass-market fraud) FTC / Anderson K.B. 2021 study, cited 2026
FTC: Share who lost money to fraud in 2024 38% of reporters — up from 27% in 2023 NCL White Paper citing FTC 2024 data

Data Sources: FBI IC3 2025 Annual Report (April 2026); AARP — “FBI Report: Internet Crime Losses Hit $20.9 Billion” (April 2026); FTC Annual Report to Congress — “Protecting Older Consumers 2024–2025” (December 1, 2025); FTC Congressional JEC Testimony of Lois Greisman (March 25, 2026); FTC Data Spotlight — “New FTC Data Show People Have Lost Billions to Social Media Scams” (April 2026); National Consumers League White Paper — “Deepfakes, Chatbots, and Scams” (September 17, 2025)

The demographic profile of AI scam victims in the US in 2026 reveals a paradox that runs through virtually every federal dataset on fraud: older Americans lose more money per incident, but younger Americans are more likely to encounter and report scams. The $7.7 billion lost by Americans aged 60 and older in 2025 — representing approximately 37% of all FBI IC3-reported cybercrime losses while that demographic accounts for under a quarter of the total US population — reflects both the higher per-incident loss amounts documented by the FTC (median losses exceeding $1,600 per incident for adults 80 and older) and the deliberate targeting strategy of AI-enabled fraud operations that specifically deploy grandparent voice clone scams, tech support impersonations, and romance fraud toward older, higher-asset demographics. The $352 million in AI-attributed IC3 losses for adults 60+ — drawn from the formal AI crime category in the 2025 annual report — is almost certainly the most dramatically undercounted figure in the entire dataset, for a population that is statistically least likely to identify the technological mechanism behind the fraud they experienced.

The 38% of 2024 fraud reporters who lost money — up from 27% in 2023, a finding from the NCL White Paper citing FTC data — captures the conversion effectiveness that AI is delivering to the fraud economy. Scam attempts were not meaningfully more numerous in 2024 than in 2023; the FTC’s total complaint count remained essentially stable. But the share of people who were targeted and who actually lost money jumped 11 percentage points in a single year, confirming that AI-enhanced persuasion, personalization, and impersonation are not just increasing the volume of scam attempts — they are making each individual attempt more likely to succeed. The fact that only about 4.8% of fraud victims report their experience to any government authority or consumer protection body (a figure from a peer-reviewed study cited in FTC Congressional testimony) means that the FBI’s $20.877 billion and the FTC’s $15.9 billion in reported losses are themselves a small fraction of actual total losses — one of the most important, and most alarming, interpretive contexts for every number in this article.
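To illustrate the scale of that undercount, here is a purely back-of-envelope extrapolation. It assumes, strongly and only for illustration, that the ~4.8% victim reporting rate applies uniformly to dollar losses as well as to victim counts; neither the FBI nor the FTC publishes any such estimate, and large losses are likely reported at very different rates, so the outputs are hypothetical.

```python
# Illustrative only: what reported losses would imply if the ~4.8%
# reporting rate applied uniformly to dollars lost. This is NOT a
# figure from any federal source; it is a rough scale illustration.
REPORTING_RATE = 0.048   # share of mass-market fraud victims who report

def implied_total(reported_billions, rate=REPORTING_RATE):
    """Hypothetical total losses if only `rate` of losses were reported."""
    return reported_billions / rate

print(f"FTC  $15.9B reported  -> ~${implied_total(15.9):.0f}B implied")    # ~$331B
print(f"IC3  $20.877B reported -> ~${implied_total(20.877):.0f}B implied") # ~$435B
```

The point is not the specific outputs but the order of magnitude: even modest underreporting corrections put actual losses far above the headline figures.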


AI Scam Types, Tactics, and Emerging Threats in the US 2026

Top AI Scam Types by Financial Impact — US 2025–2026
(FBI IC3 2025 / FTC 2025 / Experian Future of Fraud Forecast Jan 2026)

AI Crypto / Investment Fraud    |████████████████████████████████████████████████████ $632M (IC3 AI-attributed)
AI Voice Clone / Imposter Scams |████████████████████████████████████████            $352M+ (60+ age group alone)
AI BEC (Business Email Comp.)   |████████████████████████                            $30M (confirmed AI within $3B BEC)
AI Tech Support Scams           |████████████████████                                $19.5M (confirmed AI)
AI Phishing / Spear Phishing    |██████████████████████████████████████████          $215.8M total phishing losses
AI Romance / Dating Scams       |██████████████████████████████████                  Included in $3.5B imposter total
AI Deepfake Social Media Ads    |████████████████████████████████████████████████    $1.1B social media invest. losses
                                 ────────────────────────────────────────────────────────────────────
                                 All figures from FBI IC3 2025 Annual Report & FTC 2025 data
AI Scam Type Mechanism Key 2025–2026 Statistic Primary Victim
Pig Butchering / AI Investment Fraud Fake crypto platforms; AI “advisors” $632M AI-attributed (IC3); $7.9B FTC investment total All age groups; social media initiated
AI Voice Clone / Grandparent Scam Voice cloned from social media audio +400% growth 2025; $352M older adult AI losses Adults 60+
Deepfake Social Media Investment Ads Celebrity/influencer deepfakes as paid ads $1.1B social media investment losses (FTC 2025) All ages; Facebook dominant
AI Business Email Compromise (BEC) LLM-written executive impersonation emails + voice $3.046B total BEC; $30M confirmed AI-component Corporate finance teams
AI Romance Scams AI chatbot + deepfake video “relationships” ~60% of romance scam reports start on social media Ages 30–60; social media
AI Phishing / Spear Phishing LLM-generated personalized phishing content 4× higher click rate vs human-crafted; $215.8M losses All demographics
Deepfake Job Candidate / Hiring Fraud North Korean IT workers using deepfake personas 136+ US companies infiltrated (FBI/DOJ/CISA documented) HR / Tech employers
AI Tech Support Scams AI voice impersonates tech company agents $2.1B total (IC3); $19.5M confirmed AI component Adults 60+; Windows users
AI Government Impersonation Cloned voice of officials; fake badges 1M+ imposter scam reports (FTC); FBI alert May 2025 All ages; IRS/SSA fraud
Machine-to-Machine (Bot) Fraud AI bots blend with legitimate traffic Top 2026 threat per Experian Future of Fraud Forecast Financial institutions

Data Sources: FBI IC3 2025 Annual Report (April 6, 2026); FTC JEC Testimony (March 25, 2026); FTC Data Spotlight on Social Media Scams (April 2026); FBI/DOJ/CISA documented North Korean IT worker deepfake hiring fraud; Experian Future of Fraud Forecast 2026 (January 13, 2026); Norton — “Top 5 Ways Scammers Have Used AI and Deepfakes in 2025” (October 2025); World Economic Forum Global Cybersecurity Outlook 2026

The taxonomy of AI scam types in the US in 2026 has expanded and diversified far beyond the original phishing email paradigm, and understanding the specific mechanisms behind each category is essential for both individual protection and institutional defense. Pig butchering — the long-con investment fraud model in which a scammer builds a relationship over weeks or months before steering the victim into a fake investment platform — has been almost entirely AI-automated: AI chatbots now handle the relationship-building phase, AI-generated investment dashboards create a convincing facade of portfolio growth, and AI voice or deepfake video confirms the “advisor’s” credibility on demand. This is why investment fraud dominates both the FBI’s ($8.6B) and the FTC’s ($7.9B) loss rankings simultaneously — it is the category where AI’s ability to sustain a long-term, personalized deception at scale has been most fully exploited.

The machine-to-machine fraud threat flagged as Experian’s top emerging risk for 2026 in its Future of Fraud Forecast (published January 13, 2026) represents the next frontier. As consumers and businesses increasingly deploy AI shopping bots, automated banking agents, and personal AI assistants that act on their behalf, criminal AI systems are being designed to impersonate legitimate bots — blending malicious traffic with the authorized AI activity that financial institutions and platforms can no longer simply block. This “good bot versus bad bot” detection problem has no current solution at scale, and Experian explicitly identified it as the #1 threat to companies in 2026. Meanwhile, the documented North Korean state-sponsored deepfake job candidate operation — in which operatives used AI-generated personas to infiltrate 136 or more US technology companies, earning over $300,000 per year per operative before escalating to data theft and extortion — represents the convergence of AI scam technology with national security threat, a dimension that marks 2026 as a qualitative turning point in the history of AI-enabled fraud in the United States.

Disclaimer: This data report is compiled from information gathered from the sources cited throughout. We are not liable for any financial loss, errors, or damages of any kind that may result from the use of this information. While we strive to report accurately, we cannot independently verify every figure presented here.
