// TRUSTWORD THREAT INTELLIGENCE

SCAM_DIRECTORY_

70 documented AI deepfake incidents · $161M+ in verified losses · 15 attacks prevented

70
Documented incidents
$200M+
Lost in Q1 2025 alone
3 sec
Audio needed to clone a voice
$40B
Projected losses by 2027

Deepfake Corporate Fraud

AI-generated video and voice used to impersonate executives and authorize fraudulent transfers.

15 incidents
$25,000,000 Lost

Arup Engineering - Hong Kong

A finance worker in Arup's Hong Kong office received an email from the company's UK-based CFO requesting a "secret transaction." He was initially suspicious, thinking it looked like phishing, but his doubts evaporated when he joined a video call and saw the CFO and several colleagues he recognised, all speaking and moving naturally. Every person on the call was a deepfake, generated from publicly available video of the real employees found in conference recordings and company meetings.

Convinced by what he saw, the worker made 15 transfers totalling HK$200 million ($25.6 million) to five bank accounts. The fraud wasn't discovered until the employee later checked with head office. Hong Kong police confirmed an investigation, but as of late 2025 no arrests had been made. The World Economic Forum cited the case as proof that "seeing is no longer believing."

January 2024 Deepfake video call
$499,000 Lost

Multinational Firm - Singapore

A finance director at a multinational firm's Singapore office was contacted via WhatsApp on March 24, 2025, by someone impersonating the company's CFO, who told her to join a Zoom call about a "regional business restructuring." Two days later she joined the call and saw multiple senior executives on screen with synchronized facial movements, realistic voices matching each person's known speech patterns, and natural body language.

She authorised a transfer of nearly $500,000 to what turned out to be a money mule account. The scammers then pushed for a second transfer of $1.4 million, which finally raised suspicion. The Singapore Police Force's Anti-Scam Centre worked with Hong Kong's Anti-Deception Coordination Centre to trace and freeze the initial funds, making it one of the rare cases where the money was actually recovered.

March 2025 Deepfake video call
€220,000 Lost

UK Energy Company - United Kingdom

In what became the first widely reported AI voice cloning fraud, the CEO of a UK-based energy company received a phone call from someone who sounded exactly like his boss, the CEO of the German parent company, complete with the man's slight German accent and familiar speech patterns. The voice instructed him to make an urgent wire transfer of €220,000 to a Hungarian supplier within an hour. The CEO complied, recognising his boss's voice with total confidence.

The attackers then called a second time requesting another transfer, and a third time to claim the first payment had been reimbursed. The funds were routed through Hungary to Mexico and scattered across multiple accounts. The fraud was only discovered when the real German CEO called about an unrelated matter. Insurers covered the loss, but the case, reported by the Wall Street Journal in September 2019, shocked the cybersecurity world and foreshadowed the deepfake fraud epidemic that followed.

2019 (landmark case) Voice cloning
PREVENTED Stopped

Ferrari - Italy

A senior Ferrari executive received unexpected WhatsApp messages from a number displaying CEO Benedetto Vigna's profile photo, claiming he needed to discuss a confidential acquisition and couldn't use his normal line "for security reasons." The messages were followed by a phone call in which an AI-generated voice convincingly replicated Vigna's distinctive southern Italian accent and cadence.

Growing suspicious, the executive decided to verify identity with a personal question: "What was the title of the book you recommended to me last week?" The line went dead. Ferrari's IT security team was immediately notified. MIT Sloan later published the case as a model for how simple personal verification questions can defeat even sophisticated deepfakes.

July 2024 Voice cloning
PREVENTED Stopped

WPP - United Kingdom

Scammers orchestrated a multi-layered impersonation of WPP CEO Mark Read, the head of the world's largest advertising conglomerate. They created a fake WhatsApp account using publicly available photos of Read, then sent a message to an agency leader within the WPP network requesting an urgent Microsoft Teams meeting.

During the video call, the attackers used a YouTube recording of Read displayed in a meeting window while deploying AI voice cloning to speak in real-time. They posed as Read discussing the creation of a new business venture, requesting sensitive financial documents and credentials. The agency leader grew suspicious when the conversation pushed for unusual financial commitments and verified through internal channels that Read hadn't scheduled the meeting. Read himself later shared the story publicly as a warning.

2024 Voice cloning, deepfake video
PREVENTED Stopped

LastPass - United States

Scammers generated a convincing deepfake audio clone of LastPass CEO Karim Toubba's voice and sent it via WhatsApp voice messages to an employee. The fake Toubba requested urgent action on a sensitive matter.

The targeted employee noticed several red flags: the use of WhatsApp was outside normal business channels, the message had an unusual sense of urgency, and the phrasing didn't match Toubba's typical style. The employee reported the attempt to the internal security team, who confirmed it was a social engineering attack. LastPass publicly disclosed the incident as a teaching moment, publishing a detailed blog post to help other companies recognise similar attempts.

Early 2024 Voice cloning
PREVENTED Stopped

Wiz - United States

Attackers targeted Wiz, the cloud security startup valued at $12 billion, by cloning CEO Assaf Rappaport's voice from a publicly available conference recording. They then blasted dozens of Wiz employees with AI-generated voicemails requesting login credentials for an "urgent security review."

The attack failed due to an unexpected detail: Rappaport has well-known public speaking anxiety, and his voice during formal conference presentations sounds noticeably different from his casual day-to-day speaking voice. Employees who knew him personally immediately recognised the mismatch. Rappaport went public about the incident at a tech conference, calling it a warning to all CEOs that their public appearances are now attack surfaces. He noted the voice clone took only minutes to generate from a single conference talk.

Late 2024 Voice cloning
$15,000,000 Lost

Retool / Fortress Trust - San Francisco

Attackers deepfaked the voice of an IT team member and called a Retool employee, convincing them to hand over a multi-factor authentication code. The breach compromised 27 cloud customer accounts - all in the crypto industry. Fortress Trust, the hardest hit, lost nearly $15 million in cryptocurrency. Google Authenticator's cloud sync feature meant that once the attackers were inside the employee's Google account, they also held the seeds for every one-time code - effectively turning MFA into single-factor authentication.
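The mechanism is worth spelling out. Standard authenticator codes are TOTP (RFC 6238): a six-digit code derived deterministically from a shared secret seed and the current time. A minimal sketch, using only Python's standard library, shows why a cloud-synced seed is equivalent to the codes themselves:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: anyone holding the seed can compute every code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # counter as big-endian uint64
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical example seed. Once a seed syncs to a cloud account, whoever
# controls that account generates identical codes - the "something you have"
# factor becomes just stored data.
print(totp("JBSWY3DPEHPK3PXP"))
```

There is no randomness left to guess: the code is a pure function of seed and clock, so protecting the seed is the entire game.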

August 2023 Voice cloning, social engineering
Targeted Attempted

Binance CCO Deepfake "Hologram" - Global

Hackers created a deepfake of Binance Chief Communications Officer Patrick Hillmann using footage from news interviews and TV appearances. The AI-generated Hillmann appeared on Zoom calls with cryptocurrency project teams, discussing exchange listing opportunities. Several "highly intelligent" crypto community members were fooled. Hillmann only discovered it when grateful project teams thanked him for meetings he never attended.

August 2022 Deepfake video
Targeted Attempted

European Mayors - Berlin, Madrid, Vienna, Budapest

Deepfake technology was used to impersonate Kyiv Mayor Vitali Klitschko in video calls with the mayors of Berlin, Madrid, Vienna, and Budapest. Berlin Mayor Franziska Giffey ended her call when the fake Klitschko asked for Ukrainian refugees to be returned - a request the real Klitschko would never make. She called it "a means of modern warfare." The attack occurred during the height of Russia's invasion of Ukraine.

June 2022 Deepfake video call
Millions of CHF Lost

Swiss Businessman - Canton of Schwyz, Switzerland

A successful entrepreneur in the canton of Schwyz received a phone call from what he believed was a trusted long-time business partner. The AI-cloned voice was so convincing that the businessman had no reason to doubt it. The caller knew details about their business relationship and spoke with familiar mannerisms.

Over a two-week period, through a series of calls, the fake partner convinced him to make several wire transfers to a bank account in Asia, each time with a plausible business justification. The total losses reached several million Swiss francs. The businessman only realised something was wrong when he spoke to the real business partner in person at a scheduled meeting and learned that none of the calls or transactions had been authorised. Swiss police confirmed the investigation is ongoing, but the funds, routed through multiple Asian financial institutions, are considered extremely difficult to recover.

January 2026 Voice cloning
$35,000,000 Lost

UAE Bank Heist - United Arab Emirates

A bank branch manager received a call from what sounded like a company director he knew personally, supported by forged emails from a fake lawyer. He authorised transfers totalling $35 million for a purported company acquisition. At least 17 people were involved in the ring, with funds dispersed to banks worldwide. The UAE requested US help tracing approximately $400K sent to Centennial Bank accounts.

January 2020 Voice cloning
PREVENTED Stopped

Pindrop Security - "Ivan X" Deepfake Job Candidate

Cybersecurity firm Pindrop received 800+ applications for one senior engineering role. Over one-third of 300 analysed candidate profiles were fraudulent. One candidate "Ivan X" used a face-swap deepfake and AI voice clone during a live video interview. The recruiter noticed facial expressions lagged behind words by a fraction of a second.

2024 Deepfake video, voice cloning
PREVENTED Stopped

KnowBe4 - North Korean Fake IT Worker

Security awareness company KnowBe4 unknowingly hired a North Korean operative as a Principal Software Engineer. The operative used a stolen US identity and an AI-enhanced stock photo to pass four video interviews and background checks. The company-issued workstation was shipped to an "IT mule laptop farm" and accessed via VPN from North Korea. Malware was loaded immediately but detected within 25 minutes by EDR software. The FBI confirmed it was a state-sponsored operation.

July 2024 AI-enhanced photo, deepfake identity
PREVENTED Stopped

Vidoc Security Lab - Deepfake Job Interview

A Polish cybersecurity startup interviewed a developer who claimed to live in Poland but spoke with a strong Asian accent. During the video call, co-founder Dawid Moczadło noticed real-time face-swapping artifacts. When asked to wave his hand in front of his face, the candidate immediately left the call. "His responses were good; I kind of wanted to hire him." Suspected North Korean IT worker scheme - the DOJ estimates Pyongyang has netted $88M+ over six years from such operations.

December 2024 Real-time face-swap

These scams target everyone - from Fortune 500 firms to families. A TrustWord stops them.

Get TrustWord

Family Voice Cloning Scams

Cloned voices of children and grandchildren used to exploit family bonds and extract money under pressure.

12 incidents
$15,000 Lost

Sharon Brightwell - Dover, Florida

Sharon Brightwell answered a frantic call from what sounded exactly like her daughter, sobbing and saying she'd been in a terrible car accident and that her unborn baby had died. A "lawyer" then came on the line and explained that Sharon's daughter faced criminal charges and needed $15,000 immediately for bail. Overwhelmed with fear, Sharon rushed to withdraw the cash and handed it to a courier who arrived at her front door within the hour.

Shortly after, the scammers called again: the family of the other driver, they claimed, were "Christian people" who would agree not to press charges for an additional $30,000. It was only when Sharon finally called her real daughter directly that she learned the truth. Her daughter was perfectly safe and had never been in an accident. The family believes the scammers harvested her daughter's voice from Facebook videos, needing only a few seconds of audio to generate the clone.

July 2025 Voice cloning
Targeted Attempted

Alice & Frank Boren - Alabama

When Alice Boren, 79, answered the phone, she immediately recognised her great-grandson Cameron's voice: "Mawmaw, I'm in a lot of pain. I have a broken nose and I'm bleeding." The voice was panicked, trembling, and unmistakably Cameron. He said he'd been in a car wreck and was being arrested. A supposed lawyer then took over, pressing for thousands of dollars in bail money.

Alice's husband Frank, 80, was ready to drive to the bank when a family member intervened and called the real Cameron, who was at home and completely fine. Alabama investigators told WBRC that hundreds of families across the state have been targeted with nearly identical scripts: car accident, broken nose, going to jail. The only difference is whose voice the AI had cloned.

2025 Voice cloning
~$10,000 Lost

Family in East Aurora - Western New York

An elderly couple in East Aurora, New York received a phone call from what sounded exactly like their daughter-in-law, hysterical and claiming she had been involved in an accident that killed another person. She said she was being held and needed nearly $10,000 for bond money immediately. A man identifying himself as her lawyer got on the line and provided specific instructions for withdrawing cash.

Within hours, a courier arrived at the couple's front door to collect the money in person, a common tactic that avoids traceable wire transfers. The couple only discovered the scam when they spoke to their actual daughter-in-law later that day and learned she was safe. Local police warned that multiple families in the Western New York area had been hit with nearly identical calls in the same week, suggesting an organised operation.

2025 Voice cloning
$200,000 Lost

Eight Seniors - Newfoundland, Canada

Over a devastating three-day period from February 28 to March 2, 2023, at least eight senior citizens across Newfoundland were hit by the same scam. Each received a call from what sounded exactly like their grandchild, panicked, claiming they'd been in an accident and that drugs were found in their car. The callers demonstrated unsettling knowledge of real personal details: where the grandchildren lived, where they worked, and the names of other family members.

A "lawyer" would then take over to arrange bail. In every case, the victim was instructed to withdraw cash and hand it to a courier. The combined losses reached $200,000 in just 72 hours. Police caught a break when 23-year-old Charles Gillen, acting as a money courier, was arrested on the tarmac at St. John's International Airport as he attempted to board a flight out of the province with the collected cash. Investigators believe AI voice cloning was used in all eight cases, making this one of the earliest documented mass-scale operations.

February–March 2023 Voice cloning
PREVENTED Stopped

Marilyn Crawford - Canada

Marilyn Crawford received a call from someone claiming to be a police officer, who said her grandson Ian had been arrested for stealing a car and needed $9,000 for bail. Then "Ian" came on the line, his voice trembling and desperate, pleading with his grandmother for help. The scammers had likely cloned his voice using AI.

What happened next showed remarkable sophistication: within 30 minutes, a taxi arrived at Marilyn's door, sent and paid for by the scammers, to drive her to a nearby CIBC bank branch in Oshawa, Ontario. The plan was to rush her through a withdrawal before she could think clearly. But an alert customer service agent sensed something was wrong and flagged the transaction. A financial advisor contacted Marilyn's son to verify before any money changed hands, stopping the scam just in time. CBC Marketplace later investigated her case as part of a documentary on AI voice cloning scams sweeping Canada.

2024 Voice cloning
CAD $3,000 Lost

Ruth Card & Greg Grace - Regina, Saskatchewan

Ruth, 73, received a call from what sounded exactly like her grandson Brandon, claiming he was in jail and needed bail money. She and her husband Greg, 75, rushed to their bank and withdrew CAD $3,000 - the daily maximum. They were heading to a second branch when a bank manager warned them it sounded like a scam. One of the earliest reported AI voice-cloning grandparent scams.

Early 2023 Voice cloning
CAD $21,000 Lost

Benjamin Perkin's Parents - Canada

Elderly parents received a call from a "lawyer" claiming their son had killed an American diplomat in a car accident and needed money for legal fees. The lawyer then handed the phone to their "son" - a convincing AI clone of Benjamin's voice. The parents withdrew CAD $21,000 at the bank and sent it as Bitcoin. They only discovered the scam when the real Benjamin called that evening for his regular check-in. "The money's gone. There's no getting it back."

Early 2023 Voice cloning
Thousands Lost

Mother - South Carolina

A South Carolina mother received a phone call that began with her daughter's voice - crying, panicked, begging for help. A man then took the phone and claimed the daughter had been in an accident and was being detained, demanding thousands of dollars to resolve the situation. The mother, operating on pure parental instinct, withdrew cash and sent it as directed before she had a chance to verify anything. It was only after the money was gone that she reached her real daughter, who was completely safe. First Alert 4 reported that the scammers had likely harvested the daughter's voice from social media posts, illustrating how even a few seconds of publicly available audio can be weaponised.

October 2025 Voice cloning
PREVENTED Stopped

Gary Schildhorn, Attorney - Pennsylvania

The attorney received a call from what sounded exactly like his son, claiming he'd been in a car accident that injured a pregnant woman and that he was being taken to jail. "I'm a father; I'm a lawyer. My son's in trouble - I'm in action mode," he said. He was about to send money when his actual son called to say he was fine. A Consumer Reports investigation found that four of six leading voice-cloning products had no meaningful safeguards against misuse.

2025 Voice cloning
Thousands Lost

Multiple Seniors - Suffolk County, New York

Across Suffolk County, Long Island, multiple seniors lost thousands of dollars in 2025 to a coordinated wave of AI voice cloning scams. In each case, victims received urgent calls from what sounded like their grandchildren, in tears, claiming to be in legal trouble after a car accident. The callers followed a nearly identical script: accident, arrest, urgent need for bail money in cash.

Couriers were dispatched to collect the money in person, often within hours of the initial call. The Suffolk County Police Department issued a public warning urging residents to verify any emergency call by hanging up and calling the family member directly. CBS News New York reported that the pattern matched a nationwide trend that has accelerated dramatically since voice cloning tools became publicly available in 2023.

2025 Voice cloning
Millions Lost

Hundreds of Victims - Alabama (statewide)

Alabama investigators report hundreds of people in the state have lost money - in some cases millions - to AI voice cloning scams. Criminals are also using "FraudGPT" and voice cloning to sign victims up for fake insurance and subscriptions, collecting upfront payments into fake accounts. The likelihood of recovering funds is described as "slim to none."

2025 Voice cloning, FraudGPT
$4.9B 2024 total

Seniors Over 60 - United States (aggregate)

The FBI's Internet Crime Complaint Center (IC3) reported that Americans over 60 lost a staggering $4.9 billion to cybercrime in 2024, a 43% increase from the previous year and the highest figure ever recorded for any age group. Voice cloning grandparent scams were identified as a significant and rapidly growing contributor, with losses from impersonation scams alone exceeding $1.3 billion.

The data revealed that seniors are disproportionately targeted because they tend to answer unknown calls, are more trusting of voice-based communication, and often have accessible savings. CBS News highlighted that experts now unanimously recommend families establish a shared code word or phrase that can be used to verify identity during any unexpected emergency call.

2024 FBI IC3 data

The FBI, FTC, and every cybersecurity expert recommend the same defence: a code word.

Get TrustWord

AI-Powered Virtual Kidnapping

Fake kidnapping calls using cloned voices of loved ones to demand ransom payments.

6 incidents
PREVENTED Stopped

Jennifer DeStefano - Scottsdale, Arizona

Jennifer received a call from someone claiming to have kidnapped her 15-year-old daughter. An AI-cloned voice of her daughter was heard crying and screaming in the background. The ransom demand started at $1 million, negotiated down to $50K. Her husband called their daughter directly - she was safe at ski practice. DeStefano later testified before the US Senate about the experience, helping drive national awareness of AI voice cloning threats.

April 2023 Voice cloning, virtual kidnapping
$500+ Lost

Couple - Brooklyn, New York

A Brooklyn couple was jolted awake in the middle of the night by a phone call. On the line, they heard a female family member's voice, terrified, sobbing, begging for help. A man then took over, claiming he was holding her hostage and demanding immediate payment. The couple sent $500 via Venmo within minutes.

But the caller didn't stop there. Over the next 25 agonising minutes, he kept demanding more money, escalating his threats each time. The call finally ended, and the couple immediately reached out to the family member, who was safe at home and had no idea any of it had happened. The voice they'd heard was an AI clone, generated from publicly available audio. Futurism reported the case as an example of how virtual kidnapping scams have become terrifyingly personalised with AI voice cloning.

March 2024 Voice cloning, virtual kidnapping
Targeted Attempted

School Families - King County, Washington

Two families in the Highline School District of King County, Washington received terrifying calls claiming their children had been kidnapped, with AI-generated voices of the children heard screaming and crying in the background. The scammers demanded ransom for the children's safe return.

What made these cases particularly disturbing was the deliberate targeting: both families were non-English-speaking immigrants, chosen because language barriers make it harder to verify information quickly. The King County Sheriff's Office confirmed they were investigating "several recent cases" following the same pattern. KOMO News reported that the scam exploited immigrant families' fear of interacting with law enforcement, making them less likely to call 911 and more likely to pay.

2024 Voice cloning, virtual kidnapping
$1,000 Lost

Amanda Hardesty Family - St. Louis, Missouri

A 19-year-old son received a call claiming his younger sister had been kidnapped, with an AI-cloned voice of his sister "begging her brother for her life." The sister was actually in a high school classroom at the time. The son panicked and wired $1,000 to an overseas account before the family could verify she was safe.

2025 Voice cloning, virtual kidnapping
PREVENTED Stopped

Crystal Welch - Eureka, Missouri

Crystal's uncle received a call claiming she had been kidnapped, hearing Crystal's cloned voice on the line. The scammer used the same script as other victims: car accident, drugs found in the vehicle, didn't want police called. The family recognised it as a scam before paying. Eureka Police confirmed they were investigating "several recent cases" of the same scheme.

2025 Voice cloning, virtual kidnapping
Ongoing Trend

Children's Voices Cloned - Nationwide, USA

Law enforcement agencies across the United States have issued urgent warnings about an escalating nationwide trend: scammers using AI to clone children's voices and call parents with fake kidnapping scenarios. The FBI, state attorneys general, and local police departments from Kansas City to Los Angeles have all reported sharp increases since late 2024.

The pattern is disturbingly consistent. A child's cloned voice crying or screaming, followed by a "captor" demanding immediate payment via wire transfer, cryptocurrency, or gift cards. Voice cloning tools that cost thousands of dollars in 2023 are now available for free online. KCTV5 reported that parents who hear their child's voice in distress bypass all rational thinking, which is exactly the psychological exploit the scammers rely on. The FBI's recommended defence: establish a family code word that only real family members would know.

2025–2026 Voice cloning, virtual kidnapping

One question with a TrustWord could have ended each of these calls in seconds.

Get TrustWord

Celebrity Deepfake Scams

AI-generated likenesses of celebrities used for romance scams, product endorsements, and financial fraud.

14 incidents
$850,000 Lost

"Brad Pitt" Romance Scam - France

A 53-year-old French interior designer named Anne was contacted on Instagram by someone claiming to be Brad Pitt's mother, Jane Etta, who told her that her son "needed a woman just like her." What followed was an 18-month romance with a fake Brad Pitt who sent AI-generated selfies, voice messages, and love letters through WhatsApp.

The manipulation escalated methodically: first came luxury "gifts" that required Anne to pay thousands in customs fees, then AI-generated hospital photos showing "Pitt" with kidney cancer, and finally claims that Angelina Jolie had frozen his bank accounts during their divorce. A fake doctor emailed Anne saying Pitt was "fighting to survive," creating urgency. Anne, who was going through her own divorce at the time, transferred €830,000 ($850,000) from her divorce settlement to accounts in Turkey.

The scam unravelled when she saw real photos of Pitt with his girlfriend Inés de Ramon. After appearing on French TV show TF1's "Sept à huit" to share her story in January 2025, Anne faced such severe online mockery that TF1 pulled the interview from their platforms to protect her. Nigerian scammers were later accused of orchestrating the scheme.

2023–2024 Deepfake images/video/voice
$362,000 Lost

"Brad Pitt" Romance Scam - Spain

Spanish police arrested five people in September 2024 who had scammed two women out of a combined €325,000 by impersonating Brad Pitt through AI-generated messages, photos, and video clips. The criminal ring profiled potential victims through social media, specifically targeting women who appeared emotionally vulnerable or recently divorced.

Once contact was established through Instagram and messaging apps, the "Brad Pitt" persona built trust over months with personalised AI-generated messages and photos. The women were told Pitt needed money for medical bills, frozen assets, and film production costs. One victim remortgaged her home. Police in the Andalusia and Valencia regions recovered €85,000 of the stolen funds and seized multiple phones and computers.

September 2024 AI-generated messages, deepfakes
$81,000+ Lost

"Steve Burton" Romance Scam - Los Angeles

A Los Angeles woman named Abigail was convinced she was in a private romantic relationship with General Hospital actor Steve Burton after scammers used deepfake AI to create personalised video and voice messages. The deepfake Burton addressed her by name, saying "Abigail, my queen," in messages convincing enough that she never questioned their authenticity.

Over months of escalating financial requests, Abigail sold her condominium below market value and drained her life savings, losing over $81,000. Her daughter eventually intervened when she noticed her mother's deteriorating financial situation. Burton himself has spoken out about the scam, warning fans that he would never contact anyone through social media asking for money.

2025–2026 Deepfake video and voice
$15,000 Lost

"George Clooney" Romance Scam - Argentina

An Argentine woman was lured into a six-week romance with someone she believed was George Clooney after being contacted through social media. What made this scam remarkable was the use of real-time deepfake video calls - the fake Clooney appeared on screen with natural gestures, a convincing smile, and realistic blinking that gave the woman no reason to doubt his identity. The technology had advanced to the point where the calls were indistinguishable from a real video chat.

Over the course of the relationship, the AI Clooney explained he needed $15,000 to fund his divorce from his wife Amal, claiming his assets were tied up in legal proceedings. The woman transferred the money before friends and family intervened and confronted her with the reality. IBTimes reported the case as a warning that deepfake video calls have crossed a quality threshold where even sceptical people can be convinced they're speaking with a real celebrity.

2025 Deepfake video calls
Widespread Ongoing

Taylor Swift Deepfakes - Global

McAfee's 2025 report crowned Taylor Swift the most deepfaked celebrity in the world, with her face, voice, and name weaponised across an enormous range of scam campaigns. The most viral was a fake Le Creuset cookware giveaway on Facebook and Instagram, using AI-generated video of Swift endorsing the brand. Victims who clicked through were enrolled in recurring subscription charges. Her likeness has also been used for fake concert ticket sales, bogus merchandise stores, and fraudulent cryptocurrency platforms.

McAfee found that 72% of Americans have encountered a fake celebrity endorsement online, with Swift being the single most exploited identity. The problem extends beyond financial fraud: in January 2024, explicit AI-generated images of Swift went viral on X (formerly Twitter), prompting bipartisan Congressional action on non-consensual deepfakes. Swift's enormous digital footprint, with tens of thousands of hours of video and audio publicly available, makes her an ideal target for AI cloning tools that need only seconds of source material.

2024–2025 Deepfake endorsements
Widespread Ongoing

Tom Hanks Deepfake Dental Ad - Global

An AI-generated deepfake video of Tom Hanks endorsing a dental plan spread rapidly across Facebook and Instagram in October 2023. Hanks posted an urgent warning on Instagram: "There's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it."

The timing was significant. The incident arrived in the middle of the SAG-AFTRA and WGA strikes, where AI protections for actors' likenesses had become a central bargaining issue. Just weeks earlier, Hanks had told the Adam Buxton podcast that AI technology meant he could "appear in movies" long after his death, and that there was "nothing to stop" someone from recreating his performance without consent. The dental plan ad proved his point in real time, and was cited repeatedly during strike negotiations as evidence that actors needed contractual AI protections.

October 2023 Deepfake video
Widespread Ongoing

MrBeast iPhone Giveaway Scam - TikTok

A deepfake of YouTube's biggest creator MrBeast (Jimmy Donaldson, 300M+ subscribers) appeared as a paid, promoted TikTok advertisement, meaning TikTok's own ad review system approved it for distribution. The AI-generated Donaldson announced "I'm doing the world's largest iPhone 15 giveaway" and directed viewers to a website where they could claim an iPhone 15 Pro for just $2. Victims who entered their payment details were enrolled in a recurring $6.95/month subscription that was extremely difficult to cancel.

The ad was particularly effective because MrBeast is genuinely known for extravagant giveaways, making the premise believable. Donaldson posted on X: "Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem." TikTok removed the ad within hours, but the incident raised uncomfortable questions about how the ad passed review in the first place.

October 2023 Deepfake video ad
£250,000+ Lost

Martin Lewis Deepfake Scams - United Kingdom

Martin Lewis, the UK's most trusted financial expert and founder of MoneySavingExpert.com, has become the country's most deepfaked individual. AI-generated videos of Lewis endorsing fake investment platforms circulate constantly on Facebook and Instagram, with Meta struggling to remove them faster than they appear.

The damage has been devastating: a Wokingham pensioner lost £254,120 after being drawn in by a deepfake Lewis promoting a "guaranteed returns" platform; a Surrey man lost £20,000 of his life savings; and a vulnerable pensioner with early-stage dementia lost nearly £60,000 to a cryptocurrency scheme using Lewis's likeness. The scams typically start with a small £200 "trial investment" that shows fabricated returns, encouraging victims to deposit more before discovering withdrawals are impossible.

Lewis has been vocal in his fury, publicly threatening to sue Meta, testifying before Parliament, and launching a campaign called "Stop Scam Ads" demanding that social media platforms be held legally liable for fraudulent advertising. NatWest reported that Lewis's likeness is used in more scam ads than any other UK public figure.

2024–2025 Deepfake endorsements
200+ reports Ongoing

Oprah Winfrey Fake Endorsements - United States

AI-generated deepfakes of Oprah Winfrey have become a staple of fraudulent weight loss advertising on Facebook and Instagram, with fake videos showing her endorsing products like "Lipomax" and "Prozenith" that don't work or don't exist at all. The Better Business Bureau's Scam Tracker received over 200 reports from consumers who purchased products after seeing these deepfake ads.

The scammers exploit Oprah's well-known weight loss journey. Her public discussions of Ozempic and Weight Watchers make fake endorsements of diet products seem plausible. Similar campaigns use the likenesses of Dr. Oz, Gayle King, and other daytime TV personalities. The BBB warned that these ads are typically served as sponsored posts that blend seamlessly into social media feeds, making them nearly indistinguishable from legitimate content.

2024–2025 Deepfake endorsements
$160,000 Lost

"Keanu Reeves" Romance Scam - Palmetto, Florida

Dianne Ringstaff of Palmetto, Florida was contacted through the popular mobile game Words With Friends by someone claiming to be Keanu Reeves. Over the next two and a half years, the scammer maintained the illusion using AI-generated voice messages and deepfake video calls that Ringstaff said sounded and looked exactly like the real Reeves.

The fake Reeves told her his assets were frozen by the FBI and that he desperately needed her help. Ringstaff took out a home equity loan, sold her car, and sent approximately $160,000 in total. When Manatee County Sheriff's deputies investigated, they discovered an additional layer of exploitation: the scammers had also been using Ringstaff's bank account to launder money from other victims, routing funds through her account to make the transactions harder to trace.

2025 Deepfake video/voice
$210,000 Lost

"Keanu Reeves" Romance Scams - Marion & Hillsborough County, Florida

In separate but overlapping cases in Florida, a Marion County woman lost $40,000 and a Hillsborough County woman lost $170,000 to scammers impersonating Keanu Reeves using AI-generated voice and deepfake video calls. Both women were initially contacted through social media, where the fake Reeves built romantic relationships over weeks before requesting financial help.

The money was sent in Bitcoin, a deliberate choice by the scammers because cryptocurrency transactions are irreversible and difficult to trace. Investigators traced the wallet addresses to Nigeria, but recovery has proven virtually impossible. Reeves himself has no social media presence, which paradoxically makes it easier for scammers to operate fake accounts without being immediately contradicted.

2025 Deepfake video/voice
$100,000 Lost

"Kevin Costner" Romance Scam - Southern United States

A 73-year-old retired office manager named Margaret connected via Facebook with someone claiming to be Kevin Costner. Over months, she made weekly Bitcoin deposits totalling $100,000 for a fake "production company." She left her husband and drove to a hotel to meet "Costner" - he never showed, sending a fake car crash photo instead. After the scam unravelled, new impersonators targeted her again posing as actor Jonathan Roumie.

2024–2025 AI-generated messages, deepfakes
PREVENTED Stopped

"Kevin Costner" Deepfake - Melbourne, Australia

An Australian woman walked into a National Australia Bank (NAB) branch in Melbourne and asked to open a new account for "Kevin Costner," with whom she had been having deepfake AI video calls for several weeks. The fake Costner had told her he wanted to set up an Australian office and needed her help purchasing commercial property. She planned to transfer $100,000 of her own savings.

A quick-thinking bank teller, trained in scam detection protocols, recognised the telltale signs of a romance scam and refused to process the transaction. NAB's fraud team later confirmed the video calls were generated using real-time deepfake technology. NAB published the case as part of a public awareness campaign, noting that deepfake video calls have made celebrity romance scams dramatically more convincing than text-only impersonations of previous years.

2025 Deepfake video calls
PREVENTED Stopped

"Kevin Costner" Deepfake - Sussex, United Kingdom

A 57-year-old medical practice owner from Sussex, known publicly as "Rachel," received eerily lifelike voice notes and deepfake video calls from someone claiming to be Kevin Costner over a three-month period. Multiple people posing as Costner's friends and family members also contacted Rachel independently, creating an entire social network around the fake celebrity.

Rachel grew suspicious when "Costner" repeatedly declined live video chats at unpredictable times and tried to move their communication to unfamiliar messaging platforms. She also noticed small inconsistencies in his knowledge of details that the real Costner would know. Rachel came forward publicly to warn others, noting that the experience left her feeling humiliated despite the fact that the technology was specifically designed to fool people.

2024 Deepfake voice/video

If you can't trust what you see or hear, you need something only the real person would know.

Get TrustWord

Crypto & Investment Deepfake Fraud

Deepfake videos of public figures promoting fraudulent cryptocurrency and investment schemes.

7 incidents
$1,700,000 Lost

"Elon Musk" Crypto Scam - Markham, Ontario

A Markham, Ontario woman saw a deepfake video of Elon Musk on Facebook promoting what appeared to be a legitimate cryptocurrency investment platform. She started with just $250, a low-risk amount designed to build trust. The platform showed her account growing rapidly with fabricated returns, and a "personal account manager" called regularly to encourage her to invest more.

Over several weeks, she drained her entire retirement savings, took out a second mortgage on her home, and borrowed $500,000 from family and friends. The fake dashboard continued showing impressive gains the entire time. When she finally tried to withdraw her funds, she was told she needed to pay "taxes" and "fees" first, a classic secondary extraction tactic. The total loss reached $1.7 million.

2025 Deepfake video
$690,000 Lost

Steve Beauchamp, 82 - United States

Steve Beauchamp, an 82-year-old retiree, saw a deepfake video of Elon Musk endorsing a cryptocurrency investment opportunity on social media. Like many victims, he started small, opening an account with just $248. The platform showed fabricated gains, and a convincing "account manager" walked him through increasingly larger deposits.

When the deposits slowed, the scammers escalated: they convinced him to install remote desktop software on his computer, ostensibly for "technical support," then used that access to initiate transfers directly from his bank accounts. By the time his family intervened, $690,000 was gone. Once the victim gives control of their computer, the scammers can drain accounts, access email, and even intercept fraud alerts from banks.

2024–2025 Deepfake video
$5,000,000+ Lost

"Elon Musk" YouTube Livestream Scam - Global

A deepfake Elon Musk appeared in live YouTube broadcasts promoting cryptocurrency "giveaways" using the classic doubling scam: send Bitcoin to an address and receive double back. The AI-generated Musk spoke in real time, responding to chat comments and creating the illusion of an authentic livestream. Some streams ran for hours, accumulating thousands of concurrent viewers.

The scam wallet addresses collected at least $5 million between March 2024 and January 2025, though the true figure is likely much higher. Research by Sensity found that Musk's face appears in approximately 90% of all cryptocurrency deepfake scams, making him the single most impersonated person in financial fraud globally. YouTube took down streams when reported but new ones appeared within hours, often on compromised channels with large subscriber bases. The streams were timed to coincide with real Musk events like SpaceX launches and Tesla earnings calls to maximise authenticity.

2024–2025 Deepfake livestream
$600,000 Lost

Crypto Scam Victim - Prince Edward Island, Canada

A resident of Prince Edward Island, Canada's smallest province, lost $600,000 to a deepfake cryptocurrency investment scam that followed an almost identical playbook to the Ontario case. The victim was lured by AI-generated videos of public figures promoting a fake trading platform, started with a small deposit, and was systematically encouraged to invest more as fabricated returns appeared on a dashboard.

Combined with the Markham woman, the two Canadian victims lost $2.3 million between them. The Canadian Anti-Fraud Centre has warned that cryptocurrency investment fraud, often driven by deepfake celebrity endorsements, is now the single largest category of financial fraud in Canada by dollar amount.

2025 Deepfake video
500% Surge

AI Crypto Scam Surge - Global (aggregate)

The volume of AI-generated deepfake videos promoting fake cryptocurrency investment schemes surged 500% in 2025 compared to the previous year, according to fraud monitoring firms. The most common lure is "Quantum AI," a family of fraudulent platforms that use deepfake endorsements from Elon Musk, Bill Gates, Warren Buffett, Jeff Bezos, and Mark Zuckerberg.

The platforms share a common playbook: AI-generated video ads appear as sponsored content on Facebook, YouTube, and Google; victims click through to a professional-looking trading site; a "personal broker" calls within minutes; fabricated gains appear on a dashboard; and requests for larger deposits escalate until the victim runs out of money or tries to withdraw. The 500% increase reflects not just more scams but better scams. AI tools have dramatically lowered the cost and skill required to produce convincing celebrity deepfakes.

2025 Trend data
€19,000,000 Lost

Operation COINBLACK - Spain

Spanish police arrested six people behind a €19 million cryptocurrency scam that used AI-generated deepfake celebrity endorsements to lure over 200 victims. Algorithms targeted people who engaged with crypto content online. Victims saw fabricated dashboards showing impressive returns, then discovered funds couldn't be withdrawn. In a cruel twist, scammers re-contacted victims posing as investment managers, then as Europol agents, extracting "recovery fees" for money that was already gone.

2025 Deepfake celebrity ads, crypto fraud
$54.7M Trend

Canadian Romance Scam Losses - Canada (aggregate)

The Canadian Anti-Fraud Centre reported that Canadians lost $54,684,677 to romance scams between January and September 2025 alone. Experts say this represents only a fraction of actual losses, since the vast majority of victims never report out of shame. Deepfake voice cloning and real-time face-swapping on video calls have driven a dramatic increase in both the success rate and the average dollar amount of each scam.

The CAFC noted that many romance scams now evolve into "pig butchering" investment fraud: the scammer first builds an emotional relationship, then introduces a fake cryptocurrency platform. Victims lose money both through direct gifts to the fake partner and through the fraudulent investment platform. The demographic most affected is adults over 50, though reports from victims in their 20s and 30s are increasing as deepfake video calls make even tech-savvy individuals vulnerable.

Feb 2026 Voice cloning, video deepfake

A cloned voice is all it takes. A rotating code word is all you need.

Get TrustWord

Political & Official Impersonation

AI-generated voices of government officials and public figures used to defraud high-profile targets.

9 incidents
€1,000,000 Lost (recovered)

Italy's Business Elite - Defence Minister Impersonation

A coordinated wave of deepfake attacks targeted Italy's wealthiest business leaders. An AI-cloned voice impersonated Defence Minister Guido Crosetto and his staff, including a fake "General Giovanni Montalbano" (a name borrowed from a popular Italian TV series). Targets included Giorgio Armani, Prada co-founder Patrizio Bertelli, Pirelli's Marco Tronchetti Provera, the Beretta family, the Menarini/Aleotti family, Diego Della Valle (Tod's CEO), and the Caltagirone and Del Vecchio families. The story: kidnapped Italian journalists in the Middle East needed ransom, and the Bank of Italy would reimburse the payments.

Massimo Moratti (former Inter Milan owner) transferred €1M in two payments to a Hong Kong account. Armani's team demanded a written request from the Ministry - it never arrived, stopping the scam. The Beretta family recognised the fraud because their president knew Crosetto personally. The €1M was fully recovered - traced to a Dutch bank account and frozen.

February 2025 Voice cloning
75K+ signatures Petition

Secretary of State Marco Rubio Impersonated - United States

After an AI-generated voice was used to impersonate Secretary of State Marco Rubio in a series of scam calls, Consumer Reports launched a petition that gathered over 75,000 signatures urging the FTC to regulate the voice-cloning industry. The petition called on state Attorneys General to investigate voice-cloning apps that operate with insufficient guardrails. Many commercial tools require only a few seconds of audio and no consent from the person being cloned.

Consumer Reports argued that the technology's legitimate uses do not justify the current lack of safeguards, and that companies profiting from voice cloning should be held responsible when their tools are used for fraud. The Rubio impersonation was cited as proof that even the highest-ranking officials are vulnerable.

2025 Voice cloning
PREVENTED Stopped

Marco Rubio Deepfake - Targeted Foreign Ministers & US Officials

An AI voice deepfake impersonated Secretary of State Marco Rubio, contacting foreign ministers, a US governor, and a member of Congress with AI-generated voicemails designed to extract sensitive information or gain account access. No evidence any recipients were fooled. A State Department cable dated July 3 confirmed the incident, prompting experts to warn government officials against relying on Signal for identity verification.

July 2025 Voice cloning
PREVENTED Stopped

Zelenskyy "Surrender" Video - Ukraine

A deepfake video showing President Zelenskyy telling Ukrainian soldiers to lay down their arms was broadcast via a hacked news crawl on Ukraine 24 TV and spread across VKontakte, YouTube, and Facebook. Though the deepfake was crude - the head appeared too large and the lighting was off - experts called it dangerous. Zelenskyy had pre-emptively warned about deepfakes weeks earlier and quickly posted a rebuttal: "We don't plan to lay down any arms. Until our victory." Facebook, YouTube, and Twitter removed the video.

March 2022 Deepfake video
Election Impact Attempted

Slovak Election Audio Deepfake - Slovakia

Two days before Slovakia's parliamentary election, during a media moratorium when parties and press couldn't respond, a fabricated audio recording surfaced on Telegram purporting to show Progressive Slovakia leader Michal Simecka and journalist Monika Todova discussing how to rig the vote. The deepfake jumped to TikTok, YouTube, and Facebook. It emerged just an hour after Russia's SVR released a statement accusing the US of interfering in Slovak elections. Believed to be the first use of AI deepfakes to manipulate an EU election.

September 2023 Deepfake audio
$6M Fine Lost

Biden Deepfake Robocall - New Hampshire

Two days before New Hampshire's 2024 primary, thousands of voters received robocalls with an AI-cloned voice of President Biden urging them not to vote. Political consultant Steve Kramer orchestrated the calls, later claiming it was a "stunt" to raise awareness. The FCC finalized a $6 million fine against Kramer, and the telecom carrier Lingo Telecom agreed to a $1 million fine. Kramer was indicted on 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate.

January 2024 Voice cloning, robocall
5M+ views Trend

Imran Khan AI Campaign Rally - Pakistan

Jailed former Prime Minister Imran Khan delivered a four-minute campaign speech using AI-cloned audio of his voice, generated from text he wrote in prison and approved through lawyers. His social media team used ElevenLabs' synthetic audio over archival rally footage. The virtual rally drew over 5 million views on YouTube, Facebook, and Twitter despite internet outages. After the February 2024 election, Khan's team released an AI-generated "victory speech." A first-of-its-kind use of synthetic voice in political campaigning.

December 2023 AI voice clone, political campaign
Ongoing FBI Alert

Senior US Officials Impersonated - United States

The FBI issued a Public Service Announcement in May 2025 warning that malicious actors are using AI-generated voice messages to impersonate senior US government officials in an ongoing campaign. The attacks target current and former senior federal and state officials, as well as their personal contacts including family members, colleagues, and acquaintances.

The FBI noted that the voice clones are generated from publicly available audio of speeches, press conferences, and media appearances, and are convincing enough that recipients have difficulty distinguishing them from genuine calls. The PSA advised officials to independently verify the identity of anyone contacting them, even if the voice sounds familiar. CNBC reported the warning as a sign that AI-powered social engineering now threatens national security, not just individual finances.

May 2025 Voice cloning
Escalating FBI Alert

FBI Expanded Warning - U.S. Government Officials

In December 2025, the FBI issued an expanded follow-up to its May warning, revealing that the campaign was far more extensive than initially disclosed. Activity traced back to at least 2023 showed that malicious actors had impersonated senior state government officials, White House staff, Cabinet-level officials, and members of Congress using AI-generated voice messages and SMS-based phishing ("smishing").

The attacks specifically targeted the officials' family members and personal acquaintances. A key tactic was luring victims onto encrypted messaging platforms like Signal, where conversations are harder for law enforcement to monitor. The FBI noted that the escalation from crude text impersonation to convincing AI voice clones represented a qualitative shift in the threat to government operations.

December 2025 Voice cloning, smishing

From presidents to parents - no one is immune. Protect your circle.

Get TrustWord

AI-Enhanced Romance Scams

Deepfake video calls and AI-generated personas used to build trust in romance scams, often funnelling victims into crypto fraud.

4 incidents
$46,000,000 Lost

Hong Kong Deepfake Romance Syndicate - Hong Kong

Hong Kong police raided an industrial office building in Hung Hom and arrested 27 people operating a sophisticated romance-to-crypto fraud ring that had stolen HK$360 million ($46 million). The syndicate used real-time AI face-swapping technology on live video calls, allowing male operators to appear as attractive women to their victims. The operation was industrial in scale: rows of workstations were set up with scripts, target profiles, and AI tools, and operators worked in shifts targeting victims across mainland China, Taiwan, India, and Singapore through dating apps.

Victims were "fattened" over weeks of romantic conversations before being directed to fake cryptocurrency trading platforms where deposits were impossible to withdraw. Police seized computers, phones, and luxury goods during the raid. In January 2025, a second wave of arrests saw 31 more people detained in connection with the same network, with an additional HK$34 million in losses uncovered. CNN and Ars Technica reported the case as the largest known bust of a deepfake-powered romance scam operation, though police acknowledged that similar syndicates continue to operate across Southeast Asia.

October 2024 Real-time face-swap, video calls
$280,000 Lost

Joe Novak - Wallington, New Jersey

Joe Novak, a father from Wallington, New Jersey, received an unsolicited Facebook message from an attractive stranger and developed what he believed was a genuine online romance over several months. The scammer used deepfake video calls to appear as a real woman. The face moved naturally, the voice matched, and the conversations felt personal.

Once trust was established, "she" introduced him to a cryptocurrency investment platform that showed impressive returns on small deposits. Joe began transferring larger amounts, eventually funnelling $280,000, nearly his entire life savings, into the fraudulent platform. When he tried to withdraw, the platform demanded additional "tax payments." Then the woman, the Facebook profile, and the investment platform all vanished simultaneously. "I lost everything. I lost my kids' future. I lost my future," Joe told CNN.

November 2025 Deepfake video, crypto fraud
$26,000 Lost

Beth Hyland - Portage, Michigan

A divorced woman matched with "Richard" on Tinder and received deepfake video calls - "He looked a little fuzzy, but I didn't know about deepfakes." She took out loans totalling $26,000 and sent the money. Her financial advisor eventually identified it as a romance scam. Police told her they couldn't take the case further because there was "no coercion, threat or force." She now advocates for the Romance Scam Prevention Act and has met with Senator Marsha Blackburn.

2024–2025 Deepfake video calls
$8,800,000 Lost

South Korean Romance Ring - Cambodia / South Korea

Over 100 victims, primarily middle-aged and elderly Korean men, were defrauded by a romance scam ring led by a Korean couple operating from Cambodia. The ring employed workers who used real-time deepfake face-swapping technology during video calls, appearing as attractive young women to build romantic relationships with targets. Once trust was established, victims were directed to fraudulent cryptocurrency trading platforms that displayed fabricated returns. One victim, identified as "Mr. Park" (age 60), lost over 200 million won (approximately $150,000) in just two months. The total losses exceeded $8.8 million.

The operation was linked to the Prince Group, a Cambodian conglomerate connected to organised "pig butchering" networks that traffic workers from across Southeast Asia to staff scam compounds. The Korean couple fled to Vietnam when Cambodian authorities began investigating, but were arrested there in January 2025 and extradited. Seoul Economic Daily and Korea Times reported the case as part of a broader crackdown on transnational romance-crypto fraud rings operating out of Cambodia, Myanmar, and Laos.

2024–2025 Deepfake video, crypto fraud

Deepfake video calls make anyone look real. TrustWord makes identity certain.

Get TrustWord

Voice Biometric Authentication Failures

AI voice clones used to defeat banking voice authentication systems - proving voice alone is not secure identity verification.

2 incidents
Bypassed Vulnerability

Journalist Bypasses Bank Security with AI Voice Clone

A Business Insider tech journalist cloned her own voice using an inexpensive, commercially available AI tool that required only a few minutes of recorded speech. She then called her bank's automated IVR phone system, which uses voice biometrics as an authentication layer, and was granted access on the first attempt.

She escalated to a live human agent and maintained the synthetic voice for a five-minute conversation, reciting her account number and Social Security number (information readily purchasable on dark web marketplaces). The human agent never detected anything unusual. The experiment exposed a fundamental flaw in voice-based authentication: the same AI tools that make voice cloning trivially easy also make voice biometrics trivially breakable. If she could do this with a consumer-grade tool and no technical expertise, criminal organisations could do it at scale.
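Under the hood, most voice-ID systems reduce speech to a speaker embedding and accept any caller whose embedding is close enough to the enrolled one. The sketch below is illustrative only - the function names and threshold are assumptions, not any bank's implementation - but it shows why a good clone passes: the check measures similarity to a statistical voiceprint, not liveness or identity.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two speaker embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray,
                   threshold: float = 0.75) -> bool:
    """Accept the caller if their voice embedding is 'close enough' to enrolment.

    An AI clone optimised on public recordings of the same speaker produces an
    embedding on the accept side of the same threshold - this check has no way
    to tell the speaker from a good statistical model of the speaker.
    """
    return cosine_similarity(enrolled, sample) >= threshold
```

Anything reproducible from public recordings fails as a secret; raising the threshold only trades false accepts for locked-out customers rather than fixing the flaw.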

2024 Voice cloning, biometric bypass
Bypassed Vulnerability

Vice Reporter Bypasses Lloyds Bank Voice ID - United Kingdom

Vice journalist Joseph Cox used free AI voice-cloning software to generate a synthetic version of his own voice and called Lloyds Bank's automated phone system. The AI voice successfully passed the "My voice is my password" verification on the first attempt, granting full access to account information including balances and recent transactions. Lloyds serves over 26 million customers. Cox noted the entire process took only minutes of setup.

February 2023 Voice cloning, biometric bypass

If banks can't tell real voices from fake ones, you need a different kind of verification.

Get TrustWord

Deepfake Criminal Infrastructure

The industrialisation of deepfake fraud - commercial services, dark web marketplaces, and criminal toolkits making AI scams accessible at scale.

1 incident
8,065 Attacks

Group-IB: Deepfake KYC Bypass Attempts - Single Bank

A single financial institution recorded 8,065 attempts to bypass liveness checks for loan applications between January and August 2025 using AI-generated deepfake images via biometric injection attacks. Deepfake image services cost $10–$50; ready-to-use synthetic identities sell for up to $15. Face-swapping software is rented to criminals for $1,000–$10,000 by Chinese companies (Haotian AI, Chenxin AI). Over 300 dark web and Telegram posts advertising deepfake services were found between 2022 and September 2025.

January 2026 Deepfake images, biometric injection

Don't become the next entry in this directory.

Every expert - the FBI, FTC, and cybersecurity researchers - recommends the same defence: a verification code word. TrustWord makes it automatic, rotating, and impossible to forget.
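TrustWord's internals aren't described on this page, so the following is only a hypothetical sketch of what "automatic and rotating" can mean in principle: derive the current word from a shared secret and the current time period with an HMAC, so everyone holding the secret computes the same word offline and it changes on schedule. The wordlist, rotation period, and names below are illustrative assumptions.

```python
import hashlib
import hmac
import time

WORDLIST = ["maple", "otter", "comet", "harbor", "quartz", "ember", "willow", "fable"]

def current_code_word(shared_secret: bytes, period_seconds: int = 86_400) -> str:
    """Hypothetical rotating code word: HMAC(secret, time period) -> wordlist index.

    Family members who exchanged the secret once, in person, always agree on
    today's word with no server or network call; a scammer with a cloned voice
    but no secret cannot produce it.
    """
    period = int(time.time()) // period_seconds        # e.g. rotate daily
    mac = hmac.new(shared_secret, period.to_bytes(8, "big"), hashlib.sha256)
    index = int.from_bytes(mac.digest()[:4], "big") % len(WORDLIST)
    return WORDLIST[index]

print(current_code_word(b"example-family-secret"))    # hypothetical secret
```

The point of rotation is that an overheard or phished word expires on its own, which a static family password never does.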

Free for up to 10 people. No accounts. No data collection.