$25 Million Paid After a Video Call With a Deepfake CFO

A finance team just wired $25 million to scammers after a video call with their CFO. Except their CFO was never on the call.

The deepfake was so convincing that three employees independently verified the caller. None caught it. The video bypassed their detection software. And here’s the part that should terrify every company: creating that fake cost about $50 and took roughly 3 hours.

We’ve crossed into a new phase where making deepfakes is cheaper than detecting them. And the gap is widening fast.

Your security team is fighting a war they’ve already lost

Detection tools are failing spectacularly right now. Modern deepfake videos slip past security software more than 90% of the time, according to Cyble’s latest analysis. That’s not a vulnerability; that’s a complete breakdown.

One Indonesian bank reported 1,100 deepfake fraud attempts. Not over years. Recently. The volume suggests attackers are running industrial-scale operations because the economics finally work in their favor. The $25 million CFO scam isn’t an outlier anymore. It’s a proof of concept.

When 91% of convincing deepfakes cost just $50 and take 3.2 hours to create, every mid-level employee becomes a potential attack vector. Your Zoom calls, your voice authentication, your video verification: all of it assumes the person on screen is real. That assumption just became dangerous.

Like scams exploiting trust gaps in payment systems, deepfakes target the moment when verification feels unnecessary. Companies are realizing their entire trust infrastructure doesn’t work anymore. And there’s no obvious replacement.
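Until a replacement exists, the most defensible stopgap is procedural: make high-risk actions ignore audiovisual “proof” entirely. Here’s a minimal sketch of that policy, assuming a hypothetical WireRequest record and a $10,000 cutoff (none of these names or numbers come from a real product): any transfer over the threshold clears only after an out-of-band callback to a pre-registered number, no matter how convincing the call looked.

```python
from dataclasses import dataclass

# Hypothetical policy sketch: video/voice "proof" never satisfies
# verification for high-value transfers. Names and thresholds are
# illustrative assumptions, not a real product's API.

OUT_OF_BAND_THRESHOLD_USD = 10_000  # assumed cutoff; tune per org

@dataclass
class WireRequest:
    amount_usd: float
    requester: str            # who asked for the transfer
    channel: str              # "video_call", "email", "in_person", ...
    callback_confirmed: bool  # confirmed via a pre-registered phone number

def is_authorized(req: WireRequest) -> bool:
    """Approve only if high-value requests were confirmed out of band.

    The channel is deliberately never consulted for large amounts:
    a convincing video call counts for nothing here.
    """
    if req.amount_usd < OUT_OF_BAND_THRESHOLD_USD:
        return True  # low-value: normal controls apply
    return req.callback_confirmed

# Usage: the $25M-style request fails without an independent callback.
req = WireRequest(25_000_000, "cfo@example.com", "video_call", False)
assert not is_authorized(req)
```

The design choice worth copying is that the gate can’t be talked out of the callback: no property of the call itself, however persuasive, can substitute for the independent channel.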

The cost collapse that broke everything

The economics shifted so fast that most security teams are still operating on 2023 assumptions. Voice cloning dropped from $300–$20,000 per minute to pennies. Video synthesis followed the same curve. What used to require a production team and specialized skills now runs on consumer AI tools anyone can access.

Here’s the asymmetry: attackers can iterate faster than defenders when you’re relying on one signal. A fraud strategist at a multinational bank admitted as much to researchers; the people building defenses know they’re behind. The same AI capabilities that power AI agents finding security flaws are now being weaponized to create undetectable deepfakes at scale.

Just as employees using AI tools without IT approval create security blind spots, the proliferation of accessible deepfake creation tools means threats can come from anywhere. Forrester predicts 40% growth in deepfake detection spending for 2026. But that money is chasing a moving target.

The tools companies are buying this quarter are designed to catch last quarter’s deepfakes. The real problem isn’t that detection is hard; it’s that detection is always reactive while creation is getting exponentially easier.

Why spending more won’t fix this

The security industry’s answer is “deploy more tools,” but that creates gaps, not coverage. No single tool catches all deepfakes. Companies are buying voice-only detection, video-only detection, static identity verification. Each one covers a specific attack surface and leaves everything else exposed.

Your HR team might catch synthetic identities but miss voice clones on hiring calls. Your finance team might verify static documents but fall for real-time video deepfakes. The pattern mirrors what’s happening with text: detection becoming impossible as generation improves exponentially.
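One way out of the single-tool trap is to treat every detector as advisory and require agreement across independent signals, including at least one that doesn’t depend on the audio/video stream at all. A minimal sketch, with hypothetical signal names and a made-up quorum (no real vendor API is implied):

```python
# Hypothetical multi-signal sketch: no single detector is trusted alone.
# Signal names, the quorum, and the non-AV set are illustrative assumptions.

from typing import Mapping

# Signals that don't depend on the audio/video stream itself.
NON_AV_SIGNALS = {"out_of_band_callback", "hardware_token"}

def identity_trusted(signals: Mapping[str, bool], quorum: int = 3) -> bool:
    """Require a quorum of passing signals, at least one non-audiovisual.

    Passing voice + video checks alone is never enough: both can be
    fooled by the same synthetic media, so they aren't independent.
    """
    passed = {name for name, ok in signals.items() if ok}
    has_non_av = bool(passed & NON_AV_SIGNALS)
    return len(passed) >= quorum and has_non_av

# A deepfake that beats both AV detectors still fails the gate:
print(identity_trusted({
    "voice_liveness": True,
    "video_liveness": True,
    "out_of_band_callback": False,
    "hardware_token": False,
}))  # False
```

The non-audiovisual requirement is the point: a voice clone and a video deepfake can come from the same synthetic source, so passing both says less than it seems.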

Even worse: deploying detection technology creates false confidence. Teams assume they’re protected when they’re just protected against known techniques. One cybersecurity expert put it bluntly: “Nobody can be a be-all end-all deepfake detector. That’s a recipe for failure.” When the people selling solutions admit they can’t solve the problem, that tells you everything.

The uncomfortable truth is that we’re entering a period where you can’t trust video or voice as proof of identity, but we don’t have a replacement verification system yet. Companies are spending millions on detection while attackers are spending $50 on creation. If that cost gap keeps widening, what happens when every video call requires the same skepticism we give to email? How do you run a business when “I saw them on Zoom” stops being evidence?

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it’s actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.