
Voice Deepfake Fraud Hit $2.19B in 2025. What an MSP Should Be Doing About It Right Now.

The phishing email is dead. What replaced it sounds exactly like your CFO.

Voice deepfake fraud incidents in the United States grew approximately 680% year over year through 2025. Cumulative reported losses crossed $2.19 billion. More than 50% of CISOs in a recent survey reported a successful deepfake-based intrusion in the last 18 months — up from roughly 10% the year before. The technology threshold collapsed: a usable voice clone now requires three to ten seconds of clean audio. Your CEO’s last earnings call is enough. Your CFO’s podcast appearance is enough. The custodian who introduced themselves at the company all-hands is enough.

This isn’t a future risk. This is what wire fraud and vendor impersonation look like in 2026, and MSPs serving mid-market firms have to update their playbooks accordingly.

Why your existing controls don’t catch this

Most managed-services firms still operate a security stack designed for binary attacks. Email gateways flag malicious attachments. EDR catches known-bad executables. The SIEM correlates logs across endpoints. None of that fires when the attack is an interactive phone call where a voice the recipient recognizes asks to wire $480,000 by end of day.

Worse, the social-engineering training your firm rolled out in 2023 doesn’t generalize. It taught your finance team to spot bad grammar in emails. The attacker now sounds exactly like the person whose voice they’ve cloned, speaks in real time, adapts to whatever the target says, and has no grammar at all because there’s no email. The defensive layer between "it sounds right" and "it isn’t right" has to be procedural, not perceptual.

What we’re shipping at Intelligent IT right now

The response we’re putting into production for our clients sits on three layers, in this order:

1. Out-of-band verification, codified in the AP workflow

Every wire over a defined threshold (we recommend $10,000 to start) requires a callback to a known phone number on file — not the one displayed in caller-ID, not the one in the most recent email, but the number stored in your authoritative vendor master. The callback uses a verification phrase agreed at vendor onboarding. The phrase is rotated quarterly. The procedure is enforced in the accounting system, not just in policy.

This sounds basic. The reason it works is precisely that it’s procedural rather than perceptual: the attacker can clone any voice, but they can’t clone the verification phrase or the number you call back from your master record.
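The callback gate above can be sketched as a procedural check. This is a hypothetical illustration, not a real accounting-system API: the record and request types, field names, and `may_release` function are all assumptions standing in for whatever your AP platform exposes.

```python
# Illustrative sketch of a procedural gate on wire releases. Assumes a
# vendor master that stores a verified callback number and a
# quarterly-rotated verification phrase. Names are hypothetical.
from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # dollars; the recommended starting point

@dataclass
class VendorRecord:
    name: str
    callback_number: str      # from the authoritative vendor master
    verification_phrase: str  # agreed at onboarding, rotated quarterly

@dataclass
class WireRequest:
    vendor: str
    amount: float
    callback_number_dialed: str = ""  # number AP actually dialed
    phrase_confirmed: str = ""        # phrase heard on the callback

def may_release(req: WireRequest, master: dict[str, VendorRecord]) -> bool:
    """Allow release only if the out-of-band callback was completed
    against the vendor-master number and the phrase matched."""
    if req.amount < CALLBACK_THRESHOLD:
        return True
    record = master.get(req.vendor)
    if record is None:
        return False  # unknown vendor: never release
    return (req.callback_number_dialed == record.callback_number
            and req.phrase_confirmed == record.verification_phrase)
```

The point of the sketch is the enforcement location: the check runs inside the release path, so a convincing voice on a phone call cannot waive it.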

2. AI-aware SOC oversight

Our SOC Sentinel platform watches for the operational fingerprints of an in-flight deepfake intrusion: unusual after-hours wire-transfer initiation, vendor-banking-detail changes within 72 hours of a wire request, sudden privilege-escalation requests via support channels, and the social-engineering precursors that often precede a deepfake call (LinkedIn-scrape activity, OSINT reconnaissance, lookalike-domain registration).

None of these alone proves a deepfake attack. Together they trigger an investigative escalation that reaches the client’s designated on-call within minutes. Most fraud is preventable in the 30-minute window between reconnaissance and execution.
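The "none alone, together escalate" logic is an ordinary correlation rule. A minimal sketch, assuming a flat event stream and illustrative signal names (this is not the SOC Sentinel implementation, just the shape of the rule):

```python
# Illustrative correlation rule: escalate when two or more distinct
# deepfake-precursor signals land inside a 72-hour window. Signal names
# are assumptions for the sketch.
from datetime import datetime, timedelta

PRECURSORS = {
    "after_hours_wire_initiation",
    "vendor_banking_detail_change",
    "privilege_escalation_via_support",
    "lookalike_domain_registration",
}
WINDOW = timedelta(hours=72)

def should_escalate(events: list[tuple[datetime, str]]) -> bool:
    """True when >= 2 distinct precursor signals fall within WINDOW."""
    hits = sorted((t, s) for t, s in events if s in PRECURSORS)
    for t0, _ in hits:
        distinct = {s for t, s in hits if t0 <= t <= t0 + WINDOW}
        if len(distinct) >= 2:
            return True
    return False
```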

3. Voice-print verification, where it matters most

For executives whose voices are publicly available (and that’s effectively all of them), we’re piloting voice-print verification on the calls that matter. The technology isn’t perfect; it’s an additional signal layered with the procedural defenses above. But for a $400,000 wire request, an additional signal is welcome.

What you should ask your current MSP this week

  1. What out-of-band verification procedure protects our wire-transfer process today — and is it enforced in our accounting system, or only in policy?
  2. If a deepfake voice call asked one of our employees for a privilege escalation right now, which alert in your SOC fires?
  3. How do we know a vendor banking-detail change wasn’t initiated by an attacker who already compromised the vendor’s mailbox?
  4. What’s your incident response time once a deepfake-related incident is reported by an employee?

If your MSP can’t answer those concretely, the conversation is overdue.

The deepfake threat surface is one of five we cover end-to-end

Our AiT SOC Sentinel and AiT Voice Concierge products are the operational backbone of the response above — tenant-scoped, audit-trailed, and integrated with the procedural controls your accounting team will actually use. We run them on our own MSP first, then ship them to clients.

See the full AI security stack →

The bottom line

Voice deepfake fraud isn’t a hypothetical risk and it isn’t the next thing your security training will catch. It’s an active, well-funded, rapidly growing attack class that bypasses every perceptual defense in the typical mid-market security stack. The MSPs that update their procedures, their SOC tooling, and their SLAs to address it directly will keep their clients out of the $2.19 billion the attackers are taking. The MSPs that don’t will explain themselves in deposition rooms.

We’d rather have the conversation now.