Arup’s $25M Deepfake Loss: Anatomy of an AI-Powered Scam

Adaptive Security Team
May 16, 2024

3 min read


Arup, a United Kingdom-based engineering firm, confirmed it fell victim to one of the most audacious deepfake scams ever seen.

An employee in the organization’s Hong Kong office was tricked into transferring a staggering $25 million to fraudsters after participating in a video conference call where everyone else, including senior executives, was an artificial intelligence (AI)-generated fake.

The incident demonstrates how phishing attacks have moved beyond simple text or voice communications. With AI phishing, attacks have entered the realm of multi-person, real-time video deepfakes used for large-scale fraud.

Organizations in every industry need to see Arup’s $25 million loss as a wake-up call about the evolving nature of cybersecurity threats.

Anatomy of the Attack: From Phishing Email to Deepfake Video Call

In this attack, the scam didn’t start with the video call. As many attacks do, this one began with a seemingly routine communication: a phishing email.

The finance worker in Arup’s Hong Kong office received a message purporting to be from the company’s U.K.-based chief financial officer (CFO), detailing the need for a “secret transaction.”

Showing good initial awareness, the employee was reportedly skeptical of the email request. But then the attackers deployed an advanced weapon: deepfake technology.

To overcome the employee’s doubts, the fraudsters invited them to a video conference call. On the call, the employee saw and heard individuals who looked and sounded exactly like the real CFO and several other colleagues.

Yet all participants on the call, other than the victim employee, were AI-generated deepfakes. The realistic visuals and audio, combined with the presence of multiple seemingly familiar senior figures discussing the transaction, ultimately convinced the employee of the request’s legitimacy.

Following the instructions during the deepfake video conference, the employee made 15 transfers totaling $25 million to five different Hong Kong bank accounts controlled by the scammers.

The fraud was only discovered later when the employee followed up with Arup’s actual headquarters.

What Made This Deepfake Scam So Effective?

Arup’s devastating attack highlights the psychological power of deepfakes, especially when deployed skillfully:

  • Overcoming Skepticism: The initial phishing email raised doubts, but the multi-person video call provided a powerful, seemingly irrefutable layer of ‘proof’ that overwhelmed the employee’s caution.
  • Exploiting Trust in Visuals: Humans are conditioned to trust what we see and hear, especially in a professional context like a meeting with known colleagues. Deepfakes directly exploit this trust.
  • Leveraging Authority: Impersonating the CFO and other leaders leveraged the perception of authority to pressure the employee to comply with the financial transfer requests.
  • Accessibility of Technology: Technology to create convincing deepfakes is becoming increasingly accessible and easier to use, even for attackers with limited technical skills.

Rob Greig, Arup’s Chief Information Officer (CIO), emphasized that this wasn’t a traditional cyberattack involving system breaches. “None of our systems were compromised and there was no data affected,” he stated to Building, describing it as “technology-enhanced social engineering.”

A Trend of Increasingly Sophisticated Fraud

The incident, while shocking in scale, is part of a broader, alarming trend. Deepfake technology is rapidly moving from novelty to a potent tool for financial fraud, executive impersonation, and disinformation.

Reports indicate a dramatic surge in AI-driven identity fraud attempts globally, with deepfakes becoming a leading method. Attackers are targeting financial institutions, high-profile individuals, and employees within organizations who have access to funds or sensitive systems.

While the Arup case involved video, sophisticated AI voice cloning scams (like the one attempted against LastPass) are also proliferating.

Defending Against the Undetectable?

The Arup deepfake attack demonstrates that basic awareness or simple verification may not be enough to defend an organization against elaborate schemes. IT and security leaders need a multi-layered defense strategy.

Mandating multi-channel verification for significant requests is critical. Employees should never trust instructions received solely via video or audio, and high-value transactions or sensitive actions should be confirmed through a separate, trusted channel like a known phone number or secure internal system.
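To make the principle concrete, here is a minimal sketch of how such a policy could be encoded. The threshold, channel names, and class are all hypothetical illustrations, not any real payment system's API: the key idea is that above a value limit, a request is only actionable once confirmed on a channel *different* from the one it arrived on.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """Hypothetical transfer request requiring out-of-band confirmation."""
    amount: float
    origin_channel: str                    # channel the instruction arrived on
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        self.confirmations.add(channel)

    def is_verified(self, threshold: float = 10_000) -> bool:
        # Below the threshold, the originating channel alone may suffice;
        # above it, require confirmation on at least one *different*,
        # independently trusted channel (e.g. a phone number on file,
        # never one supplied in the request itself).
        if self.amount < threshold:
            return True
        return any(c != self.origin_channel for c in self.confirmations)

req = TransferRequest(amount=25_000_000, origin_channel="video_call")
assert not req.is_verified()           # the video call alone is not enough
req.confirm("video_call")              # same channel: still not verified
assert not req.is_verified()
req.confirm("known_phone_number")      # independent channel: now verified
assert req.is_verified()
```

In the Arup case, the email and the video call were effectively a single attacker-controlled channel, which is exactly what this check refuses to accept.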

Implement strict financial controls, particularly requiring multiple independent approvals for large fund transfers, which prevents any single employee from executing major payments based solely on potentially compromised communications.
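A dual-approval rule like this can be sketched in a few lines. The limit and approver count below are assumed policy values for illustration only; the point is that no single identity, however convincing, can release a large payment alone.

```python
# Assumed policy values, for illustration only.
APPROVAL_LIMIT = 100_000      # payments above this need multiple approvers
REQUIRED_APPROVERS = 2

def can_release(amount: float, approvers: set) -> bool:
    """Allow release only if enough *distinct* approvers have signed off."""
    if amount <= APPROVAL_LIMIT:
        return len(approvers) >= 1
    return len(approvers) >= REQUIRED_APPROVERS

assert can_release(50_000, {"alice"})                 # small: one approver
assert not can_release(5_000_000, {"alice"})          # large: one is blocked
assert can_release(5_000_000, {"alice", "bob"})       # large: two approvers
```

Using a set of approver identities means a compromised employee approving twice still counts as one approver.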

In addition, security awareness training must evolve beyond basic phishing. Focus specifically on deepfake threats (video and audio), teaching employees critical thinking, verification discipline, and how to spot the subtle signs of fakes. Employees also need realistic attack simulations to put their knowledge to the test.

And where feasible, explore and invest in emerging technical solutions designed to detect deepfakes. While no tool is perfect, AI-powered analysis of video and audio provides an extra, valuable layer of defense.

Consider the potential risks associated with widely available, high-quality video and audio recordings of executives and other employees online. Such content can, unfortunately, serve as training data for sophisticated deepfake models.

A Sobering Reminder

Arup confirmed that despite the massive financial loss, its overall financial stability and business operations weren’t affected. However, Greig said, “I hope our experience can help raise awareness of the increasing sophistication and evolving techniques of bad actors.”

The $25 million Arup scam is a sobering reminder that organizations must proactively adapt their defense. Relying on human vigilance alone is insufficient when faced with AI-generated impersonations this convincing. Instead, combining stringent processes, advanced technical tools, and continuous, sophisticated security awareness training is essential to mitigate the risks of this new era in cyber deception.
