In February 2024, a finance worker at a Hong Kong multinational transferred $25 million to scammers. Why? Because on a video call, he saw and heard his company’s CFO—along with several other colleagues—authorize the transfer.
Every person on that call was a deepfake.
This isn’t a one-off incident. Deepfake video scams increased 700% in the first quarter of 2025 alone. And unlike voice cloning, which only fools your ears, deepfake video creates the full illusion of real-time visual presence.
How Deepfake Videos Are Created
The Basic Process
1. Source Collection. Scammers gather video footage of the target:
- Corporate presentations and speeches
- YouTube videos and podcasts
- Social media videos
- LinkedIn and company website headshots
- Zoom recordings from compromised accounts
2. Face Mapping. AI analyzes thousands of frames to understand (see the sketch after this list):
- Facial structure and proportions
- Skin texture and lighting
- Expression patterns
- Head movement tendencies
- Lip-sync timing
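To make that concrete, here is a minimal sketch of the frame-by-frame collection that feeds face mapping: walk a source video, detect the face in each frame, and save the crops a face model is later trained on. It assumes OpenCV with its bundled Haar-cascade detector; the file name is illustrative, and real pipelines use stronger detectors plus landmark alignment.

```python
# Minimal sketch: walk a source video frame by frame, detect the target's face,
# and save the crops that a face model is later trained on.
# Assumes opencv-python; "target_talk.mp4" is an illustrative file name.
import os
import cv2

os.makedirs("faces", exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("target_talk.mp4")
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.imwrite(f"faces/{saved:06d}.png", frame[y:y + h, x:x + w])
        saved += 1
cap.release()
print(f"extracted {saved} face crops")
```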
3. Model Training. The AI creates a "face model" that can be applied to any base video. The more source material, the better the result, but modern systems work with surprisingly little.
4. Face Swapping. The target's face is mapped onto someone else's body in real-time or pre-recorded video. The AI handles the following (the blending step is sketched after this list):
- Matching lighting conditions
- Blending edges seamlessly
- Synchronizing lip movements to speech
- Preserving natural head movements
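Edge blending in particular is easy to underestimate. The sketch below shows one common building block, Poisson (seamless) cloning as exposed by OpenCV, which pastes a generated face crop into a frame so colors and lighting merge at the seam. File names and the placement point are placeholders; production systems align the crop to facial landmarks first.

```python
# Rough sketch of edge blending: paste a generated face crop into a frame
# with Poisson (seamless) cloning so colors and lighting blend at the seam.
# File names are placeholders; real pipelines align the crop via landmarks first.
import cv2
import numpy as np

frame = cv2.imread("base_frame.png")          # the scene/body being driven
fake_face = cv2.imread("generated_face.png")  # output of the face model

# Mask covering the region to clone (here: the entire generated crop)
mask = 255 * np.ones(fake_face.shape, fake_face.dtype)

# Where the face should land in the destination frame (placeholder position)
center = (frame.shape[1] // 2, frame.shape[0] // 3)

blended = cv2.seamlessClone(fake_face, frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended_frame.png", blended)
```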
5. Enhancement. Post-processing adds:
- Background consistency
- Audio-video synchronization
- Artifact cleanup
- Compression (which hides imperfections)
Real-Time vs. Pre-Recorded
Pre-recorded deepfakes are higher quality: scammers can iterate, fix errors, and perfect the result. They're used for:
- Fake CEO announcements
- Investment fraud videos
- Impersonation in pre-recorded “messages”
Real-time deepfakes run during live video calls. They're lower quality, but incredibly effective for:
- Business email compromise follow-ups
- “Emergency” video calls from executives
- Romance scams with video “proof”
Real-time systems like DeepFaceLive can run on gaming hardware, making this accessible to sophisticated criminal groups.
The Technology Behind It
Key Algorithms
Autoencoders: Neural networks that compress and reconstruct faces. By training two autoencoders that share an encoder, one for each face, you can "encode" one face and "decode" it as the other.
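A minimal sketch of that shared-encoder idea, assuming PyTorch and toy layer sizes: one encoder learns a compact representation of faces in general, each identity gets its own decoder, and the "swap" is simply encoding a frame of person A and decoding it with person B's decoder.

```python
# Minimal sketch of the shared-encoder face-swap autoencoder (PyTorch).
# Layer sizes are toy values; real systems use convolutional encoders/decoders.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop

encoder   = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, 256))
decoder_a = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG))
decoder_b = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG))

opt_a = torch.optim.Adam(list(encoder.parameters()) + list(decoder_a.parameters()), lr=1e-4)
opt_b = torch.optim.Adam(list(encoder.parameters()) + list(decoder_b.parameters()), lr=1e-4)

def train_step(faces, decoder, opt):
    """One reconstruction step: one identity's faces through the shared encoder
    and that identity's own decoder."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(faces)), faces)
    loss.backward()
    opt.step()
    return loss.item()

# Training alternates batches of person A and person B (placeholder tensors here).
train_step(torch.rand(8, IMG), decoder_a, opt_a)
train_step(torch.rand(8, IMG), decoder_b, opt_b)

# The swap: encode a frame of A, decode it with B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(torch.rand(1, IMG)))
```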
GANs (Generative Adversarial Networks): Two AIs compete—one generates fakes, one detects them—improving quality through iteration.
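The adversarial loop fits in a few lines, again as a PyTorch toy with placeholder data: the discriminator is pushed to score real frames high and generated ones low, while the generator is pushed to make the discriminator score its output as real.

```python
# Toy GAN training step (PyTorch): the discriminator learns to flag fakes,
# the generator learns to fool it. Sizes and data are placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64 * 3))
D = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 64 * 64 * 3)   # stand-in for real face crops
noise = torch.randn(16, 100)

# Discriminator step: real -> 1, fake -> 0
opt_d.zero_grad()
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
d_loss.backward()
opt_d.step()

# Generator step: try to make D output "real" for generated frames
opt_g.zero_grad()
g_loss = bce(D(G(noise)), torch.ones(16, 1))
g_loss.backward()
opt_g.step()
```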
Diffusion Models: The newest approach, producing higher quality results with better detail preservation.
Hardware Requirements
- Consumer GPUs (RTX 3080 or better) for real-time
- Cloud computing for high-quality rendering
- Deepfake-as-a-service offerings, for scammers who don't want to run their own hardware
This isn’t nation-state technology anymore. It’s accessible to organized crime, and increasingly to individual scammers.
Common Attack Scenarios
The Executive Impersonation
- Target: Finance departments
- Method: Pre-recorded or live video call from the "CEO" or "CFO"
- Payload: Wire transfer authorization, data access, credential sharing
The Hong Kong case is the template: create urgency, impersonate authority, request financial action.
The Investment Scam
- Target: Individual investors
- Method: Pre-recorded videos of celebrities or business leaders
- Payload: Cryptocurrency investments, fake trading platforms
Elon Musk, Warren Buffett, and other business figures are commonly deepfaked to promote fake investment opportunities.
The Romance Escalation
- Target: Romance scam victims who are getting suspicious
- Method: A "video call" to prove the relationship is real
- Payload: Continued emotional and financial manipulation
When victims demand proof, scammers can now provide video “evidence” of their fake persona.
The Verification Bypass
- Target: KYC (Know Your Customer) systems
- Method: Deepfake of the legitimate ID holder during verification
- Payload: Account takeover, fraudulent account creation, money laundering
Banks and exchanges are seeing increasing attempts to use deepfakes to bypass video verification.
Detection Challenges
What Used to Work
- Unnatural blinking patterns (early deepfakes forgot to blink)
- Lighting inconsistencies
- Blurry edges around faces
- Audio-video desync
What Scammers Do Now
- AI specifically trained to add natural blinking
- Lighting normalization in post-processing
- Edge blending algorithms
- Real-time lip-sync improvement
What Still Helps
Visual tells:
- Hair detail at edges (still challenging for AI)
- Teeth rendering (often imperfect)
- Earrings and accessories (tracking issues)
- Reflections in glasses or eyes
Contextual tells:
- Unusual requests regardless of who’s asking
- Pressure to act immediately
- Requests to bypass normal procedures
- “Secret” projects others shouldn’t know about
Technical tells:
- Ask the person to turn their head sharply (this can break face tracking)
- Ask them to hold up a random number of fingers (defeats pre-recorded footage)
- Watch the background for consistency (look for glitches)
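Because these checks only work if the person on the other end can't anticipate them, some teams script the challenge so it's random each time. A trivial sketch, with illustrative wording:

```python
# Tiny sketch: issue an unpredictable liveness challenge on a live video call.
# The challenges mirror the tells above; the wording is illustrative.
import random

fingers = random.randint(1, 5)
challenges = [
    "Please turn your head sharply to the side and back.",
    f"Please hold up {fingers} fingers to the camera.",
    "Please pan your camera around the room for a moment.",
]
print("Challenge for this call:", random.choice(challenges))
```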
The Business Impact
Financial Losses
The $25M Hong Kong case made headlines, but smaller incidents happen constantly:
- Average BEC (Business Email Compromise) loss: $125,000+
- Deepfake-assisted BEC losses trending higher
- Insurance often doesn’t cover “authorized” transfers
Reputational Damage
- Executives impersonated in scam videos
- Companies associated with fraudulent endorsements
- Trust erosion with partners and customers
Operational Disruption
- Mandatory verification procedures slow business
- Increased IT security spending
- Employee training and awareness programs
Defense Strategies
For Businesses
1. Verification Protocols (a sketch follows this list)
- Code words for financial authorizations
- Callback verification using known numbers
- Multi-person authorization for large transactions
- Never authorize based solely on video/voice
2. Technical Controls
- Deepfake detection software (emerging market)
- Metadata analysis of video files
- Network monitoring for unusual access
3. Culture of Verification
- Make it acceptable to “challenge up”
- Reward employees who question suspicious requests
- Regular training on latest techniques
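As a concrete illustration of the first point, here is a minimal sketch of a release check that refuses to act on video or voice alone: a transfer goes through only after a set number of independent approvals and a callback to a number already on file. Every name, number, and threshold here is hypothetical.

```python
# Minimal sketch of a "never on video alone" release check: a transfer is only
# released after enough independent approvals AND a callback placed to a number
# already on file. Names, numbers, and thresholds are hypothetical.
from dataclasses import dataclass, field

KNOWN_NUMBERS = {"cfo": "+1-555-0100"}   # numbers on file, never ones given on the call
REQUIRED_APPROVALS = 2

@dataclass
class TransferRequest:
    amount: float
    requested_by: str                    # e.g. "cfo"
    approvals: set = field(default_factory=set)
    callback_done: bool = False

    def approve(self, approver: str) -> None:
        self.approvals.add(approver)

    def record_callback(self, number_dialed: str) -> None:
        # Only count a callback placed to the number already on file.
        self.callback_done = number_dialed == KNOWN_NUMBERS.get(self.requested_by)

    def may_release(self) -> bool:
        return self.callback_done and len(self.approvals) >= REQUIRED_APPROVALS

req = TransferRequest(amount=250_000, requested_by="cfo")
req.approve("treasury_lead")
req.approve("controller")
req.record_callback("+1-555-0100")
print(req.may_release())   # True only because the callback and two approvals are present
```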
For Individuals
1. Skepticism of Video “Proof”
- Video calls can be faked
- Pre-recorded messages can be synthetic
- Real-time interaction helps but isn’t foolproof
2. Verification Through Other Channels
- Call back on known numbers
- Verify through trusted contacts
- Meet in person when stakes are high
3. Know the Red Flags
- Urgency and secrecy
- Requests to bypass normal processes
- Emotional manipulation
- Financial requests from unexpected sources
The Future
Deepfake quality will continue improving while creation becomes easier. Detection technology is advancing, but it’s an arms race where attackers often have the advantage.
The most reliable defense isn’t technical—it’s procedural. Create verification systems that don’t rely on trusting what you see and hear, because increasingly, you can’t.
For practical protection strategies, see our guides on protecting your business and family verification systems.