A finance worker joined a video call with his CFO and five colleagues. Every single one was fake.
In January 2024, an employee at Arup - the British engineering firm behind the Sydney Opera House and the Beijing Bird’s Nest - received an email from the company’s chief financial officer requesting a confidential transaction. The employee was suspicious. It looked like a phishing email.
Then the “CFO” invited him to a video call.
On the call, the employee saw and heard what appeared to be the real CFO alongside several colleagues he recognised. They discussed the transaction. The faces looked right. The voices sounded right. The employee’s doubts evaporated.
He made 15 transfers totalling $25 million to five bank accounts controlled by the fraudsters.
Every person on that video call - except the victim - was an AI-generated deepfake.
This is the new reality of business fraud
The Arup attack wasn’t a one-off. It was a signal of what corporate fraud looks like now.
Deepfake technology has crossed a critical threshold. Modern AI can generate near-perfect lip-sync, facial expressions, and voice in real time. The artefacts that once made deepfakes easy to spot - unnatural eye movements, lagging facial muscles, inconsistent lighting - have been largely eliminated, especially in the compressed video quality of a typical video call.
Arup’s own CIO, Rob Greig, tried to make a deepfake video of himself after the incident. It took him 45 minutes using open-source software. It wasn’t perfect - but it was surprising what could be achieved by a non-expert in under an hour.
Now consider what a motivated criminal organisation with dedicated resources can produce.
The threat to businesses is escalating fast
The WPP attack came next. Scammers cloned the voice of Mark Read, CEO of one of the world’s largest advertising companies, and used it on a fake Teams-style video call. The attack was detected before money changed hands - but it demonstrated that even the highest-profile executives are viable targets.
Across the financial sector, deepfake incidents surged 194% in Asia-Pacific in 2024. Over 10% of banks surveyed reported losing more than $1 million each to a deepfake call. Globally, losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone.
The pattern is consistent: scammers impersonate executives with authority over financial decisions, create urgency around a “confidential” transaction, and use deepfake technology to overcome the natural scepticism that would stop a phishing email in its tracks.
Why video calls are no longer proof of identity
For years, the standard advice when you received a suspicious email was: “jump on a quick video call to confirm.” Video was the gold standard of verification. If you could see and hear the person, you could trust the request.
That assumption is now dangerously outdated.
A video call provides the illusion of verification without the substance of it. The Arup employee did exactly what security training teaches - he was sceptical of an email, so he sought visual confirmation. The deepfake provided it. The better your instincts, the more effective the deepfake becomes at overriding them.
Businesses need a verification mechanism that doesn’t depend on trusting what you see and hear. Something the other person knows, not something they look or sound like. The FBI recommends creating “a secret word or phrase” to verify identities. Cybersecurity researchers agree: a shared secret is the strongest countermeasure to synthetic media.
How TrustWord protects teams
TrustWord gives every pair of people in your organisation a unique, rotating passphrase that changes every 2.5 minutes. Passphrases are generated cryptographically on each device - no server is ever involved.
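To make the idea concrete, here is a minimal sketch of how a time-rotating shared passphrase can work, in the style of TOTP one-time codes. TrustWord's actual algorithm is not described here, so everything below - the `trustword` function, the `WORDS` list, and the pre-shared `pair_secret` - is a hypothetical illustration, not the product's real implementation.

```python
import hmac
import hashlib
import struct
import time

# Hypothetical word list; a real system would use a much larger one.
WORDS = ["amber", "basalt", "cedar", "delta", "ember", "fjord",
         "garnet", "harbor", "indigo", "juniper", "krill", "lagoon",
         "meadow", "nectar", "onyx", "pollen"]

INTERVAL = 150  # seconds - the phrase rotates every 2.5 minutes

def trustword(pair_secret: bytes, now=None) -> str:
    """Derive the current phrase from a shared secret and the time window.

    Both devices holding the same secret compute the same phrase for the
    same 150-second window, with no network round trip required.
    """
    timestamp = time.time() if now is None else now
    counter = int(timestamp // INTERVAL)
    digest = hmac.new(pair_secret, struct.pack(">Q", counter),
                      hashlib.sha256).digest()
    # Map the first three digest bytes to words; deterministic on both sides.
    return "-".join(WORDS[b % len(WORDS)] for b in digest[:3])

# Example: both parties see the same phrase within one window.
secret = b"example-pair-secret"  # illustrative only - never hard-code secrets
print(trustword(secret, now=1_700_000_000))
```

Because the phrase is derived from a shared secret plus the current time window, a fraudster on a video call cannot produce it - no amount of synthetic video or audio reveals something only the two devices know.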
In the Arup scenario, the finance worker could have asked: “Before we proceed - what’s your TrustWord for me?” The real CFO would open the app and read the current phrase. The deepfake can’t.
It works the other way too. The CFO can ask for the employee’s TrustWord, verifying both sides of the conversation. Mutual verification means a compromised device on one side doesn’t break the whole system.
No internet connection needed after setup. No accounts. No data collection. Works for video calls, phone calls, and text messages.
Think of it as two-factor authentication for human conversations.