Protect your organization from deepfakes
Real or fake? Nearly everything digital now raises the question, thanks to threat actors creating deepfakes whose realism takes AI phishing to an unprecedented level.
Deepfakes are highly realistic, AI-generated or manipulated video and image content designed to deceive. While the technology itself has fascinating creative potential, deepfakes also serve as a weapon for cybercriminals.
The fuel for these AI-generated illusions is often the vast trove of images, videos, and voice recordings readily available online, gathered through open-source intelligence (OSINT) and used to craft hyper-realistic attacks.
IT and security leaders recognize that deepfakes are only growing more dangerous. The line between real and fake is becoming increasingly blurred, and awareness and preparations — strengthening the human firewall — are the strongest defenses against these emerging threats.
Everyone should be familiar with examples of deepfake videos and images; after all, we’ve all likely encountered them, knowingly or not. Let’s dive in.
What is a Deepfake?
Deepfake technology utilizes artificial intelligence, specifically machine learning (ML) models such as generative adversarial networks (GANs). Think of it as an apprentice forger (the generator) trying to create a fake so perfect that an art critic (the discriminator) can’t tell it apart from the original. The two networks compete over millions of iterations until the AI produces convincing fake video, audio, or images.
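For readers curious about what that forger-versus-critic loop actually looks like, here is a deliberately tiny, illustrative sketch in Python — not any real deepfake tool, and far simpler than the image-generating networks attackers use. A one-parameter “forger” learns to shift random noise toward the distribution of “real” samples, while a logistic-regression “critic” tries to tell the two apart; all parameter names and learning rates are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the forger tries to imitate: samples centred at 4.0.
REAL_MEAN, BATCH, LR, STEPS = 4.0, 64, 0.05, 3000

# Generator g(z) = a*z + b maps random noise z to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

for _ in range(STEPS):
    real = rng.normal(REAL_MEAN, 1.0, BATCH)
    z = rng.normal(0.0, 1.0, BATCH)
    fake = a * z + b

    # Critic step: push D(real) up and D(fake) down (gradient descent
    # on the standard binary cross-entropy GAN loss).
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    dw = -np.mean((1 - s_r) * real) + np.mean(s_f * fake)
    dc = -np.mean(1 - s_r) + np.mean(s_f)
    w -= LR * dw
    c -= LR * dc

    # Forger step: adjust (a, b) so the critic scores fakes as real.
    s_f = sigmoid(w * fake + c)
    da = -np.mean((1 - s_f) * w * z)
    db = -np.mean((1 - s_f) * w)
    a -= LR * da
    b -= LR * db

# After training, generated samples should cluster near the real mean.
fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"mean of generated samples: {fakes.mean():.2f} (real mean: {REAL_MEAN})")
```

The same adversarial pressure, scaled up to deep networks and image data, is what drives deepfake quality: each round of “the critic caught me” forces the forger to produce something more believable.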
The vast amounts of data necessary to train AI models are often meticulously gathered from publicly available sources. Cybercriminals typically scour:
- Corporate Websites: ‘About Us’ pages with staff photos and bios, promotional videos, recorded webinars, and executive interviews provide high-quality source material.
- Social Media: Profiles on LinkedIn, Facebook, Instagram, X, TikTok, and YouTube are treasure troves of images, videos, and audio recordings.
- Public Appearances: Keynotes, conference presentations, news interviews, and podcasts available online offer extensive audio-visual data.
Any digital asset an individual or organization puts into the public domain can be used to create a deepfake, which makes understanding and managing one’s digital footprint more critical than ever.
In terms of the types of deepfake techniques most common today, here’s what to expect:
- Face Swapping: Replacing one person’s face with another’s in a video.
- Voice Cloning: Synthesizing a person’s voice to make it say anything, whether in a pre-recorded video or during a live conversation.
- Lip-Syncing: Altering an existing video so that the person appears to say something they never actually said.
- Full-Body Reenactment: Manipulating a person’s entire body posture and movement to create fake gestures.
Only a decade ago, this level of sophistication was out of reach for cybercriminals. Now, they can easily access everything needed to craft a believable deepfake with minimal time and effort.
Real-World Deepfake Examples
Spend any time online? AI videos and images will appear, guaranteed. Therefore, it’s important to be proactive and familiarize yourself with deepfake examples to understand the various forms they take and the tactics cybercriminals employ.
Let’s explore a few real-world examples of deepfakes, their multifaceted nature, and how publicly available information contributes to their creation.
Deepfakes targeting voters during elections

Elections around the world see their fair share of deepfakes, as NPR reported last year.
In the United States, a political consultant used AI to create robocalls of President Joe Biden during the 2024 primaries. Indonesians, meanwhile, saw a deepfake of Suharto, the country’s longtime leader who died in 2008, endorse a slate of candidates.
AI is also being heavily utilized by U.S. politicians and political parties to create advertisements that promote themselves or ridicule their rivals.
Arup loses $25 million due to CFO deepfake
In a stunning AI-powered scam, engineering firm Arup lost $25 million. Attackers created deepfakes of the company’s chief financial officer (CFO) and other colleagues, then used them to stage a video conference with a finance department employee.
Not realizing they were interacting with cybercriminals, the employee was tricked into transferring the money, believing the CFO had made an urgent request. With other ‘colleagues’ seemingly present on the call, the employee acted with little hesitation.
Arup fell victim because of the deepfake’s sophistication, which gave the employee the sense of an urgent demand from an authority figure.
Rob Greig, who leads information security at Arup, recapped the company’s experience in an interview with the World Economic Forum, and it holds lessons for all IT and security professionals.
'Tom Cruise' deepfake achieves fame on TikTok
Oscar-nominated actor Tom Cruise isn’t on TikTok, but @deeptomcruise is. In 2021, Chris Umé began posting videos that appeared to show Cruise engaging in a variety of offbeat activities. It wasn’t Cruise at all.
Umé, a visual effects artist with expertise in AI, teamed up with actor Miles Fisher to utilize deepfake technology, allowing a fake version of Cruise to be the star of a TikTok account with over 5 million followers and 19.5 million likes. It’s also led to the launch of Umé’s AI-driven entertainment company, Metaphysic.
Elon Musk deepfakes contribute to massive fraud losses
Cybercriminals are using the name, image, and likeness of one of the world’s wealthiest people to commit fraud.
CBS News reported that ads on several social media platforms use deepfakes of Elon Musk to peddle investment opportunities. In one case, a 62-year-old woman followed through on the pitch and opened an account with more than $10,000 — all under the illusion that Musk himself was in the ads.
Musk is, of course, one of the figures most frequently impersonated in deepfake scams. His business acumen makes him a trusted figure to many, and attackers can easily gather open-source intelligence to develop hyper-realistic deepfakes of him.
Fake Volodymyr Zelenskyy urges Ukrainian troops to stand down

As the war in Ukraine continues to unfold, AI has played a significant role in disseminating disinformation.
In 2022, Ukrainian President Volodymyr Zelenskyy appeared in a deepfake video that urged the country’s military to stand down as Russia continued its invasion. If believed, Ukrainian soldiers and citizens could’ve been left vulnerable to catastrophic attacks.
While platforms, including Facebook, later removed the video, it was still viewed by (and thus deceived) millions of people globally.
How to Spot a Deepfake: The First Line of Defense
Deepfake technology is rapidly improving, but there are still telltale signs that anyone can look for.
- Unnatural Eye Movement & Expressions: Observe if the eyes blink excessively or not enough, or if the facial expressions are inconsistent with the tone of voice.
- Awkward Pacing or Emotion in Speech: Listen for unnatural pauses, strange intonations, or a robotic lack of emotional range in audio.
- Blurry or Mismatched Edges: Look for digital artifacts, blurring, or strange transitions where the face meets the hair or neck, or around the mouth.
- Poor Lip-Syncing: Is the audio perfectly synchronized with the mouth movements? Often, it’s close but not perfect.
- Inconsistent Visual & Audio Quality: Notice if the video and audio quality differ, and if there are unusual background noises or a lack thereof.
- Uncanny Valley Effect: A deepfake sometimes looks almost perfect but still feels ‘off’ or unsettling in a way that’s hard to define. Trust that instinct.
- Verification Protocol: For any unusual or high-stakes request, always verify through a separate, trusted communication channel.
Remember that attackers often use details gathered from public sources to make their fake requests feel more plausible, making out-of-band verification even more critical.
Beyond Detection: Building a Resilient Workforce
Detecting deepfakes is challenging and only becoming more difficult as the technology evolves.
Endless personal and corporate data already lives online, so the focus must shift from solely trying to prevent data availability to preparing for its misuse. This is why security awareness training is critical for an organization.
Adaptive Security understands that the only true defense is a prepared, skeptical, and well-trained workforce. Our next-generation platform for security awareness training and phishing simulations is designed to prepare organizations and their employees for emerging, AI-powered threats.
Organizations training employees with Adaptive Security's platform go far beyond static modules. Using their own company’s OSINT, IT and security teams can create deepfakes of senior leadership, and the entire content library, including the AI Content Creator, is fully customizable.
As a result, employees engage with tailored training modules and then face real-world deepfake attack simulations for phishing training that actually test their knowledge.
The Future is Here: Are You Prepared?
Deepfake technology, amplified by the ease with which source material can be found through open-source intelligence, represents a significant evolution in the threat landscape. While it offers creative and beneficial applications, its potential for malicious use in fraud, disinformation, and reputational damage is undeniable.
The line between what’s real and what’s fabricated is blurred, and permanently so. Waiting to react to a deepfake attack is a risk most organizations can’t afford; preparation, continuous education, and fostering a culture of security awareness are the most effective strategies.