Deepfake Phishing: The Next Evolution in Cyber Deception

Adaptive Security Team

September 17, 2025 · Last updated: Sep 19, 2025 · 6 min read


How to Protect Your Company from Deepfake Phishing Attacks

By 2026, 30% of enterprises will no longer trust identity verification tools that rely on face biometrics, according to research advisory firm Gartner. The reason? AI-generated deepfakes are already good enough to bypass them.

This is the inflection point security leaders need to pay attention to. Phishing has moved beyond suspicious links or misspelled emails. It’s now about voices and faces that look and sound real enough to fool both people and machines.

This article explores how deepfake phishing works, why it’s so effective, and what actions security leaders can take.

What is deepfake phishing?

Deepfake phishing is a new breed of social engineering attack where cybercriminals use artificial intelligence to create highly realistic fake voices and videos of trusted individuals. These can be executives, colleagues, regulators, or even family members. 

Unlike traditional phishing, which relies mainly on suspicious-looking emails, deepfake phishing manipulates what people see and hear, making these attacks much more difficult to detect and resist.

Here’s how deepfake phishing differs from traditional phishing: 

  • Traditional phishing relies on text, emails, or messages, often featuring red flags such as odd phrasing, spelling mistakes, or generic greetings.
  • Deepfake phishing delivers deception across voice and video using AI tools. These synthetic voice calls and videos mimic tone, cadence, and emotion, making the victim feel as though they’re interacting with a real person.

This difference matters. It’s one thing to question a strange-looking email, but it’s much harder to question the familiar voice or face of your CFO on a live video call. According to a survey, 66% of security professionals report having already encountered deepfake-based attacks. And that number is growing.

These scams work because they exploit two powerful psychological levers:

  • Urgency: Attackers engineer stressful, high-stakes situations (“We need this transfer approved immediately before the deal collapses”).
  • Authority: Seeing or hearing a CEO, CFO, or regulator heightens trust. Employees are accustomed to deferring to authority, especially when scammers add urgency to the mix.

AI makes these attacks even more dangerous because it doesn’t just clone appearances but personality and delivery as well. Modern deepfake tools can replicate an executive’s accent, emotional tone, and conversational cadence within minutes of being fed samples scraped from earnings calls, keynote videos, or even podcasts.

How deepfake phishing happens: the anatomy of an attack

Deepfake phishing doesn’t unfold as a single suspicious email. It’s a multi-stage, coordinated attack: part social engineering and part synthetic media. Here’s how it typically happens:

Step 1. Reconnaissance: building the profile

Every deepfake phishing attack starts with information gathering. Hackers don’t need to break into your systems to obtain sensitive information; they can simply scrape it from public sources. 

LinkedIn bios, social media videos, conference talks on YouTube, quarterly earnings calls, or even a casual Instagram story with your voice in the background are enough. That content becomes raw training data to create deepfake content and carry out identity theft.

These bad actors then leverage off-the-shelf AI tools, such as ElevenLabs for voice or DeepFaceLab for video, to turn even a few minutes of clean audio or video into a convincing replica.

Fortunately, some modern security awareness training platforms now offer simulations that take publicly available clips and spin up deepfakes of your own executives.

For example, Adaptive Security generates a cloned voice or video of your CEO in a safe, controlled environment. By experiencing how convincing these fakes are, your staff will be far more likely to pause and verify when a real attack happens.
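Security teams can also audit their own exposure before attackers do. Below is a minimal sketch of such an audit, assuming the open-source yt-dlp package is installed (pip install yt-dlp); the executive name, search query, and three-minute threshold are illustrative assumptions, not a standard.

```python
# Rough OSINT exposure audit: how much public audio/video of an
# executive could an attacker harvest as voice-cloning material?
# Requires: pip install yt-dlp
from yt_dlp import YoutubeDL

EXEC_NAME = "Jane Doe Acme CFO"  # hypothetical; substitute your executive

opts = {
    "quiet": True,
    "skip_download": True,  # metadata only, nothing is downloaded
    "extract_flat": True,   # don't fully resolve each video page
}

with YoutubeDL(opts) as ydl:
    # "ytsearch25:" asks yt-dlp for the top 25 YouTube search results
    results = ydl.extract_info(f"ytsearch25:{EXEC_NAME}", download=False)

total_seconds = sum(
    entry.get("duration") or 0  # duration can be missing in flat mode
    for entry in results.get("entries", [])
)

print(f"~{total_seconds / 60:.0f} minutes of public footage found.")
if total_seconds > 180:  # a few clean minutes is enough for voice cloning
    print("Enough material for a convincing voice clone exists publicly.")
```

Running this for each executive gives a quick, repeatable measure of who in your leadership team is most exposed to voice cloning.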

Step 2. The multi-channel hook: email, voice, and video in play

Picture this. It’s mid-afternoon when an email from your CFO pops up: “Can you process this vendor invoice before 5 p.m.? It’s tied to the Q3 deadline.” The timing, names, and project details all look legitimate and nothing raises a red flag.

Next, your phone rings. You see your CFO’s caller ID, and when you answer, you hear their familiar voice, steady, authoritative, and insistent: “I just sent you an email about that transfer. It’s important we get this done today.”

Before you can second-guess, you’re pulled into a scheduled Zoom meeting with your team leader reiterating the request. The face looks real. The voice sounds authentic. Every channel confirms the same message: move fast.

This barrage from multiple channels, combined with manufactured urgency, lowers your guard and keeps you from suspecting that anything is wrong. That’s what makes multi-channel deepfake phishing so effective.

In 2024 alone, at least five FTSE 100 companies, including WPP and Octopus Energy, reported that their CEOs had been impersonated in deepfake scams. 

This was eerily similar to the American Express GBT attack, which used a deepfake voice and video to push fraudulent approvals. 

Some of these attempts were caught in time; others likely were not. Either way, they show how advanced and convincing multi-channel deepfake phishing has become.

Deepfake phishing scam showing CEO impersonation via WhatsApp messages, voice notes, and video calls (Source: X)

Step 3. The transfer: irreversible damage

Once the scammers establish trust, the unsuspecting victim fulfills the request, whether that means approving a wire transfer or handing over credentials or confidential data.

The real challenge isn’t spotting a typo anymore. Because the interaction is multi-layered (email, call, and video), the victim believes the sender is authentic.

Modern security awareness training should be role-based and context-specific. Tools like Adaptive Security make it easy to run AI phishing simulations tailored to teams, such as finance facing an urgent “CEO invoice” scenario and IT receiving a request to reset credentials. 

Practicing these high-pressure situations builds the muscle memory to catch something fishy early on, rather than clicking “approve.”
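Adaptive Security’s actual configuration format isn’t shown here, so the sketch below is a hypothetical illustration of how role-based scenarios might be modeled; every class and field name is an assumption for this example.

```python
# Hypothetical model of role-based phishing-simulation scenarios.
# These dataclasses and field names are illustrative assumptions,
# not Adaptive Security's actual configuration format.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    name: str
    target_role: str             # which team receives the simulation
    channels: list[str]          # email, voice, video, SMS...
    pressure_tactics: list[str] = field(default_factory=list)


SCENARIOS = [
    Scenario(
        name="Urgent CEO invoice",
        target_role="finance",
        channels=["email", "voice"],
        pressure_tactics=["deadline", "executive authority"],
    ),
    Scenario(
        name="Credential reset request",
        target_role="it",
        channels=["email", "video"],
        pressure_tactics=["service outage", "after-hours request"],
    ),
]


def scenarios_for(role: str) -> list[Scenario]:
    """Return the simulations relevant to a given team."""
    return [s for s in SCENARIOS if s.target_role == role]


print([s.name for s in scenarios_for("finance")])  # ['Urgent CEO invoice']
```

The point of the structure is that each team rehearses the specific pressure it will actually face, rather than a generic phishing template.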

The real business consequence of deepfake phishing in 2025

The danger of deepfake phishing is already costing organizations millions, with financial fraud the most visible impact.

A finance employee at a multinational firm in Hong Kong approved a $25.6 million transfer after joining a video call with what appeared to be the CFO and several colleagues. Every participant on that call was a deepfake.

This is not an isolated deepfake example. A survey by Regula Forensics found that 49% of organizations in 2024 faced losses tied to deepfake incidents. That’s up from 37% in 2023 and 29% in 2022. The average damages from these attacks exceeded $450,000.

Deepfakes detected by companies (Source)

Financial loss is only the first layer of damage. When a company is tricked into transferring funds or exposing sensitive data, regulators like FINRA are increasingly treating deepfake-enabled fraud as a compliance issue and requiring firms to provide employees with phishing training to keep pace with these new risks. 

Public trust and reputational damage are also at stake. Once news breaks that a CEO has been impersonated or a fraudulent transaction has slipped through, companies face headlines questioning their governance, even if they eventually recover the funds. 

Operationally, these attacks drain resources long after the incident. Internal investigations, legal consultations, insurance claims, and remediation consume time and money. It’s also a traumatic experience for staff, extending the damage beyond public trust and financial loss. 

Why traditional phishing defenses are falling short

For years, phishing defenses were designed around one channel: email. Companies invested in filters, scanners, and awareness training that taught staff to hover over links or spot spelling mistakes. That worked when scams were limited to clumsy mass emails.

However, today’s scams are dynamic and multi-channel. A spam filter can block a suspicious domain, but it can’t flag a convincing Zoom call featuring a deepfake of your CFO. Employees trained only on “bad emails” are caught off guard when the scam escalates to voice or video.

Organizations are already feeling the impact. In 2024, 40% of executives reported being targeted by deepfake attempts. And security experts are starting to warn of this shift.

“As AI technology advances, attackers are shifting their focus from technical exploits to human emotions using deeply personal and well-orchestrated social engineering tactics,” says Chris Pierson, founder and CEO of BlackCloak. 

Unfortunately, many organizations still run awareness programs based on outdated static email templates. These approaches give staff a false sense of confidence since the examples they practice bear little resemblance to the types of phishing attacks they’ll actually face.

Deepfake phishing countermeasures that actually work

Defending against deepfake phishing requires a shift from static training to human-in-the-loop preparation. Technology can help flag anomalies, but it’s an employee who ultimately clicks “approve” or wires funds. That decision needs to be rehearsed in advance, not made for the first time under pressure.

Here’s what a phishing training program that delivers ROI needs to include in 2025:

  • Human-in-the-loop training: Instead of passively consuming awareness slides, employees need to experience a deepfake scam in a safe environment. A recent report found that after approximately a dozen simulation rounds, detection success surged from 34% to 74%. Continued training increased success rates.
  • Role-based simulations and targeted retraining: Not all staff face the same risks. For example, finance teams need to rehearse invoice fraud attempts, while IT staff should practice handling fake credential resets. 
  • Executive impersonation drills: Deepfake scams often center on leadership, including cloned CEO voices, CFOs on fake video calls, and urgent acquisition directives. Running executive impersonation drills helps both leaders and staff recognize the red flags. 

Simulation training success rate improvement (Source: Phishing Trends Report Page)

In 2025, training has to move beyond teaching staff to hover over links like it’s 2010. The challenge now is detecting when the convincing voice on the phone, or the familiar face on Zoom telling you to act fast, is an AI-crafted fake. 

The fix is simple: give employees practice slowing down, double-checking through other channels, and knowing when to escalate. Once they’ve rehearsed it, those steps become second nature.
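One way to make “double-check through another channel” concrete is a written policy that code can enforce. Below is a minimal sketch of such a rule; the thresholds, channel names, and request fields are illustrative assumptions to be tuned per organization, not a prescribed standard.

```python
# Minimal sketch of an out-of-band verification rule for high-risk requests.
# Thresholds, channel names, and fields are illustrative assumptions.
from dataclasses import dataclass

WIRE_THRESHOLD_USD = 10_000                        # assumption: tune per org
UNVERIFIED_CHANNELS = {"email", "voice", "video"}  # all spoofable by deepfakes


@dataclass
class Request:
    kind: str            # e.g. "wire_transfer", "credential_reset"
    amount_usd: float
    channel: str         # channel the request arrived on
    marked_urgent: bool


def needs_out_of_band_verification(req: Request) -> bool:
    """Require a callback on a known-good number (or an in-person check)
    before acting, whenever the request is high-risk."""
    high_risk = (
        (req.kind == "wire_transfer" and req.amount_usd >= WIRE_THRESHOLD_USD)
        or req.kind == "credential_reset"
    )
    # Urgency is a pressure tactic, so it raises risk rather than lowering it.
    return high_risk and (req.channel in UNVERIFIED_CHANNELS or req.marked_urgent)


req = Request("wire_transfer", 250_000, channel="video", marked_urgent=True)
assert needs_out_of_band_verification(req)  # "CFO on Zoom" still gets a callback
```

Note the design choice: a live video call counts as an unverified channel, so even a convincing face-to-face request still triggers the callback step.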

Modern security awareness tools like Adaptive Security can build simulations that target employees by role, so your team can practice scenarios that mirror the pressures they’ll face in real attacks.

Prepare your team for deepfake phishing attacks with Adaptive Security

Recent deepfake phishing incidents show that scammers can now bypass filters, exploit trust, and cost companies millions. The old approach of spotting suspicious emails doesn’t work when the “CEO” is on a “live” phone call.

The best defense is practice. Teams get exposure to realistic scenarios, including finance handling fake invoice requests, IT facing spoofed resets, and executives witnessing their likeness misused. Everyone learns to handle deepfake phishing attempts under pressure, even when it appears to be a genuine request from an authority within the company. 

Simulation-based training platforms make this possible. Adaptive Security replicates voice and video deepfakes safely, so your employees can experience firsthand how convincing these attacks can be. When staff rehearse real-life deepfake phishing attempts, they’re far less likely to be caught off guard.

Request a custom simulation today and see how your team performs under pressure from deepfakes. 

Frequently asked questions about deepfake phishing

How do attackers create voice deepfakes?

Attackers only need a few minutes of clean audio, which they can easily pull from conference calls, YouTube talks, or podcasts. 

Using tools like ElevenLabs or other AI voice generators, they can clone tone, accent, and cadence with alarming accuracy. Once built, the fake voice can deliver urgent messages over calls or voice notes that sound just like a trusted leader.

What’s the difference between vishing and deepfake vishing?

Vishing (voice phishing) is a phone-based scam in which fraudsters trick victims by pretending to be banks, tech support, or company staff. 

Deepfake vishing takes it a step further by using AI-generated voices. Instead of a stranger’s voice, victims hear their CEO, CFO, or regulator making the request, which sounds far more convincing and harder to challenge.

How can companies protect against deepfake phishing?

Don’t rely only on filters or awareness slides. Build verification protocols, such as confirming high-risk requests through a second trusted channel, even if they seem urgent. 

Use next-generation security awareness tools, such as Adaptive Security, to train employees with real-time simulations (including simulated voice or video calls) so they can recognize and respond to AI-powered deepfake attacks even under pressure. 
