What Is a Deepfake and Why Are Small Businesses the #1 Target in 2026?

[Image: AI-generated face on a screen, representing deepfake technology used in business fraud]
AI Security · May 2026
Last verified: May 2026

Is Your Voice Already Being Used to Rob Your Business?

The short version: You probably already know deepfakes exist and that voice cloning is a thing. What you likely don’t know is that deepfakes are now bypassing your bank’s identity verification, that a 3-second clip from your LinkedIn is enough to clone your voice, that attackers are using AI to study your writing style before calling your staff, and that your insurance almost certainly won’t cover any of it. This post covers the parts most guides skip.
📌 Key Takeaways
  • Deepfake voice cloning now needs just 3 seconds of audio. Your LinkedIn video, podcast appearance, or webinar is enough. You are already clonable whether you know it or not.
  • The most dangerous new attack combines an AI-written email in your exact style (trained on your website copy) followed by a voice clone call. Neither element alone would work. Together they eliminate employee scepticism almost completely.
  • Deepfake injection tools now bypass liveness detection used by digital banks and KYC platforms. Fraudsters are opening business accounts in your name. This started appearing in 2024 and accelerated through 2025.
  • Most pre-2024 business insurance policies explicitly exclude AI-assisted social engineering. Yours probably does. Check the exact wording before assuming you’re covered.
  • The five-minute fix that stops most attacks costs nothing: a pre-agreed verbal safe word your team uses to authenticate any urgent financial request over the phone.
  • $25B+: lost to AI deepfake fraud globally in 2025 (FBI IC3)
  • 3 sec: minimum audio needed to clone a voice convincingly today
  • 74%: share of tested liveness detection systems bypassed by deepfake injection in 2024 (Sensity AI)
  • $35K: average loss per successful SMB deepfake attack, mostly unrecoverable

The Part Most Deepfake Guides Don’t Tell You

The standard deepfake article covers the basics: AI can fake voices, videos look real, be careful. That’s not wrong. It’s just three years out of date. The threat model for small businesses in 2026 is significantly more specific, more targeted, and more sophisticated than “someone might clone a celebrity.”

Here are the four developments that actually matter for your business right now, none of which get enough coverage in mainstream security writing.

1. Deepfakes Are Now Bypassing Bank Identity Checks — Including Liveness Detection

Most digital banks and KYC (Know Your Customer) platforms rely on something called liveness detection to verify identity remotely. You’ve used it: the system asks you to blink, turn your head, or hold up a piece of paper in front of your webcam. It’s designed to confirm there’s a real, live person behind the camera rather than a photo or a pre-recorded video.

In 2024, security researchers at Sensity AI demonstrated that deepfake injection tools could spoof this process by intercepting the camera feed at the software level and replacing it with a real-time AI-generated face. The platform sees a “live” person blinking and turning on cue. The actual camera is showing a deepfake. By early 2025, this capability had moved from research demonstrations into active fraud: fraudsters were using it to open business bank accounts and credit lines in other people’s names without ever touching a physical document.

🔍 What this means practically
The risk to your business: someone could open a business account in your company’s name, access credit facilities under your registration details, or pass supplier onboarding checks impersonating your identity. The first you know about it is often a credit inquiry or a fraudulent invoice dispute.
What to do about it: set up a Google Alert for your business name and your personal name. Check Companies House or your local business registry for any changes you didn’t make. Ask your bank what identity verification method they use for new account applications and whether it’s been updated post-2024.

This is a fundamentally different threat from someone cloning your voice to call your bookkeeper. It’s identity-level fraud that can affect your business credit, your regulatory standing, and your ability to work with certain clients. It’s also almost entirely invisible until the damage is done.

2. Attackers Are Studying Your Writing Style Before They Call

The combination attack that became dominant in H2 2025 works like this: before the voice clone call, attackers send an email written in your exact tone. Same sentence structure. Same sign-off. Same habit of starting paragraphs with “Just wanted to” or whatever your particular pattern is. That email primes the target to expect the call, removes their natural scepticism, and makes the voice clone feel like confirmation of something already established.

The writing clone is trained on publicly available material: your website copy, your newsletter, your LinkedIn posts, your company’s case studies. If you’ve put more than a few hundred words of your own writing online, there’s enough to build a stylistic fingerprint. This is not theoretical. The US dental practice case from 2025 (where a $28,000 ACH transfer was authorised) used exactly this method. The office manager later told investigators she’d assumed the email was real because “it sounded exactly like how [the owner] writes.”
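To make the “stylistic fingerprint” idea concrete, here is a minimal Python sketch of the kind of surface features that can be pulled from a few hundred words of published writing. It is illustrative only — real stylometry tools use far richer signals — and every name in it is ours, not any specific tool’s. It doubles as a way to audit how distinctive your own public copy is:

```python
import re
from collections import Counter

def style_fingerprint(text: str) -> dict:
    """Extract crude stylometric features from a writing sample.

    Illustrative only: real stylometry uses far richer signals
    (function-word distributions, character n-grams, syntax).
    """
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    words = re.findall(r"[a-z']+", text.lower())
    openers = Counter(" ".join(s.split()[:3]).lower() for s in sentences)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Habitual openers ("just wanted to", "heads up") are exactly
        # what primer emails imitate
        "top_openers": openers.most_common(3),
        # Favourite everyday words are a classic authorship signal
        "top_words": Counter(words).most_common(5),
    }

sample = ("Just wanted to flag the new invoice before it goes out. "
          "Just wanted to say the numbers look great too.")
print(style_fingerprint(sample))
```

Run something like this over your website copy and your last few LinkedIn posts. If the same openers and favourite words dominate, that repetition is precisely the pattern a writing clone would reproduce.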

🔍 The attack sequence step by step
Step 1: The email primer. Arrives from a spoofed or similar-looking domain. Written in the owner’s style. Says something like “Heads up — our accountant will be calling you today about a payment that needs to go out. Please action it quickly, I’m in back-to-back meetings.”
Step 2: The voice clone call. Arrives within the hour. Sounds like the accountant (voice cloned from a YouTube video). References the email. Creates urgency. Provides new payment details. The email primer has already done the psychological work of making this feel legitimate.
📢 On r/smallbusiness, a thread from March 2026 collected accounts from dozens of owners who’d received suspiciously well-written impersonation emails. Several noted the emails used phrases they had never shared publicly but that matched their internal communication style closely enough to make them pause. The consensus theory: the attackers were using their website blog posts and LinkedIn newsletters as training data for the writing clone.
3. Your Insurance Policy Almost Certainly Has a Gap Right Here

Cyber insurance has been a growth product for insurers since 2020. Most small business owners who have it assume it covers fraud losses from digital attacks. Many do. The gap is in how “social engineering” is defined in your specific policy, and when that policy was last written.

Pre-2024 cyber insurance policies were written before AI-assisted social engineering was a documented, widespread fraud category. The specific exclusions vary by insurer, but the pattern is consistent: policies that cover “computer fraud” or “funds transfer fraud” often require that the fraud involve unauthorised access to a computer system. A deepfake voice call that convinces your employee to authorise a transfer doesn’t involve unauthorised system access. The employee authorised it. Voluntarily. Which means the standard claim route doesn’t apply.

  • Look for the phrase “social engineering coverage” explicitly in your policy. If it’s not there, assume you’re not covered for voice fraud until your broker confirms otherwise.
  • Check whether your policy has a “voluntary parting” exclusion. This is the clause that voids coverage when an employee willingly initiated the transfer, even under false pretences.
  • Ask your broker specifically: “Does this policy cover losses from AI voice impersonation fraud?” Get the answer in writing.
  • Policies issued or renewed after mid-2024 are more likely to include explicit AI fraud coverage. If yours is older, push for a rider or renewal.
⚠️
The £43,000 UK agency fraud from 2025 (covered below) was not covered by the agency’s existing cyber insurance policy. The insurer’s position: the employee authorised the transfer. No system was compromised. The policy’s “funds transfer fraud” coverage required evidence of unauthorised computer access. There was none.
4. The “Safe” Deepfake Tells You Were Taught Are Already Outdated

The detection advice that circulated heavily in 2023 and 2024 — look for blurry edges, unnatural blinking, mismatched lighting, choppy audio — was accurate for the generation of tools available then. The tools available in 2026 have largely closed those gaps. Blinking is now modelled accurately. Hair edges render cleanly in most commercial deepfake apps. Voice clones from ElevenLabs or RVC pass casual listening tests consistently.

This doesn’t mean detection is impossible. It means the tells you can actually rely on have shifted from technical artefacts to behavioural patterns. And behavioural patterns are more reliable anyway, because they don’t get patched away as the technology improves.

What still works for detection in 2026

  • Room acoustics test: Real phone calls have background noise, subtle room echo, slight compression artefacts. A cloned voice on a standard consumer setup often sounds acoustically too clean — like it was recorded in a studio. If a call sounds unusually pristine, that’s worth noting (a rough code sketch of this check follows this list).
  • Proper noun stumbles: Voice clones are trained on general audio, not on your specific business context. Unusual supplier names, niche industry terms, your street address, the name of a specific employee — a clone will often mispronounce or hesitate on these. Ask a question that requires the caller to use a specific name or term and listen carefully.
  • The pause test: Real-time voice synthesis has latency. If you ask an unexpected question and there’s an unusual pause before the answer — longer than normal thinking time, slightly mechanical — it may indicate the system is processing. Try interrupting mid-sentence and see how naturally they recover.
  • Off-script questions: Ask something deeply specific that only the real person would know. Not “what’s our bank account number” but “what did we decide about the Manchester project last Tuesday?” A clone has no context for your actual business history.
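The room acoustics test can even be roughed out in code. Below is a minimal Python sketch, assuming you have a WAV recording of the call and the soundfile library installed; the frame size, percentiles, and the ~0.01 guide value are illustrative assumptions, not calibrated thresholds:

```python
import numpy as np
import soundfile as sf  # pip install soundfile numpy

def noise_floor_ratio(path: str, frame_ms: int = 30) -> float:
    """Crude 'too clean' heuristic for a recorded call.

    Compares the quietest frames (room noise between words) with the
    loudest frames (active speech). Real phone audio usually has an
    audible noise floor; a synthetic voice fed straight into the call
    can be near-silent between words. Illustrative, not calibrated.
    """
    audio, sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)          # mix stereo down to mono
    frame = int(sr * frame_ms / 1000)
    rms = np.array([
        np.sqrt(np.mean(audio[i:i + frame] ** 2))
        for i in range(0, len(audio) - frame, frame)
    ])
    quiet = np.percentile(rms, 10)          # background between words
    loud = np.percentile(rms, 90)           # active speech level
    return float(quiet / max(loud, 1e-9))   # near zero = suspiciously clean

# Rough guide (an assumption, not a standard): a normal phone recording
# often sits well above ~0.01; values near zero deserve a second look.
```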
The single most reliable detection method in 2026 is not technical. It’s a pre-agreed safe word that only your real team members know. A deepfake cannot produce a word it was never given. Technology cannot replicate information that was never made public.

The 3 Documented Attacks That Show How This Actually Plays Out

Theory doesn’t change behaviour. Specific scenarios do. These are documented cases from 2024 and 2025 with enough detail to make the pattern recognisable when it comes for your business.

Case A: Every Face on the Video Call Was Fake — $25M Lost, Hong Kong, 2024

A finance employee joined a multi-person video call with colleagues and a senior executive he recognised. Everyone behaved normally. He was walked through a series of transfers totalling $25 million USD across 15 transactions. Every participant on screen was a real-time deepfake. The case was confirmed by Hong Kong police in February 2024 and remains the largest single-incident deepfake fraud on record.

What makes this case significant beyond the number: the employee was not naive or careless. He checked faces. He heard familiar voices. The attack worked because it correctly identified that humans trust visual and audio confirmation — and then fabricated both simultaneously.

  • Defence: a mandatory callback rule requiring any payment authorisation from a video call to be confirmed via a separately initiated phone call to a stored number
Case B: Founder’s Voice Cloned from LinkedIn — £43,000 Gone in 20 Minutes

A UK marketing agency’s finance manager received a call from what sounded like her boss asking for an urgent supplier payment. Same voice. Same casual tone. Reference to a real client. The founder was on a flight. The voice was cloned from his LinkedIn videos and a podcast episode. £43,000 was wired to an overseas account and emptied within hours. No safe word existed. The money was not recovered. The cyber insurer declined the claim on the grounds that the employee had voluntarily authorised the transfer.

The attack required: one LinkedIn video (publicly accessible), one podcast clip (publicly accessible), and knowledge of one real client name (publicly accessible on the agency website). Total cost to the attacker: roughly £25 in ElevenLabs subscription fees.

  • Defence: safe word protocol. One pre-agreed word the real founder could have said. The clone couldn’t.
Case C: AI Email Plus Voice Clone — $28,000 ACH Transfer, USA, 2025

An email arrived in the owner’s writing style warning the office manager to expect a call from their accountant. The call arrived an hour later — the accountant’s voice, cloned from a YouTube video — confirming a tax payment needed same-day processing. The email had pre-loaded credibility. The voice had authenticity. Together they produced a $28,000 authorised transfer. Neither would have worked alone.

This combination format — written primer followed by voice confirmation — became the most reported attack pattern in FBI IC3 data for H2 2025 specifically because it’s designed to defeat the single-channel scepticism that stops most simpler attempts.

  • Defence: dual-channel verification. Call the accountant back on a stored number before processing anything. The real accountant picks up and confirms they made no such call. The attack collapses.

What to Do About It This Week — Zero Budget Required

None of what follows requires a security consultant, a software purchase, or an IT team. These are process changes that take less than a day to implement and close the specific gaps that every documented SMB deepfake attack exploited.

🚨 Do Today
  • Create a verbal safe word. Random, unmemorable, shared only with staff who handle payments or credentials. No safe word on a call = request paused, independently verified.
  • Write a callback rule. Any payment request by phone is verified by calling back on a number already in your contacts. Not a number from the call.
  • Set a dual-approval threshold. Any transfer above a set amount needs two people confirming through two different channels before it goes out (the verification logic is sketched as code after this list).
  • Brief your team using cases B and C above. Verbatim. Fifteen minutes. No training platform needed.
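For teams that want the safe word, callback, and dual-approval rules written down unambiguously, here is a minimal sketch of the decision logic in Python. The threshold amount, contact list, and function names are hypothetical, not from any documented case; the point is that a transfer proceeds only when every check passes, and verification always uses a number already on file:

```python
from dataclasses import dataclass

# Hypothetical values -- set your own threshold and keep real numbers on file
DUAL_APPROVAL_THRESHOLD = 5_000
CONTACTS_ON_FILE = {
    "owner": "+44 20 7946 0000",       # stored BEFORE any request arrives
    "accountant": "+44 20 7946 0001",  # never a number given on the call
}

@dataclass
class PaymentRequest:
    requester: str        # who the caller claims to be
    amount: float
    gave_safe_word: bool  # did the caller volunteer the pre-agreed word?

def may_process(req: PaymentRequest,
                callback_confirmed: bool,
                second_approver_confirmed: bool) -> bool:
    """True only if every rule from the 'Do Today' checklist passes."""
    if req.requester not in CONTACTS_ON_FILE:
        return False   # unknown requester: pause and verify independently
    if not req.gave_safe_word:
        return False   # no safe word on the call = request paused
    if not callback_confirmed:
        return False   # must be re-confirmed by calling the stored number
    if req.amount >= DUAL_APPROVAL_THRESHOLD and not second_approver_confirmed:
        return False   # large transfers need a second person, second channel
    return True

# Example: urgent request "from the owner" with no safe word is refused
print(may_process(PaymentRequest("owner", 43_000, gave_safe_word=False),
                  callback_confirmed=True, second_approver_confirmed=True))
# -> False
```

Whether you encode this in software or on a laminated card next to the phone, the logic is the same: any failed check pauses the request.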
🔍 Do This Month
  • Read your cyber insurance policy for “social engineering coverage” and “voluntary parting” exclusions. If they’re not clearly addressed, call your broker.
  • Lock vendor payment detail changes to written confirmation from on-file email addresses only. Not by phone, and not from a newly provided email address.
  • Google yourself and your business name to see what audio and video is publicly available. That’s your voice clone attack surface.
  • Set up a business credit monitor to catch fraudulent accounts opened in your name using deepfake KYC bypass.

Frequently Asked Questions

Can deepfakes really bypass bank liveness detection?
Yes, and this has been confirmed by multiple security researchers since 2024. Deepfake injection tools intercept the device camera feed at the software level and replace it with a real-time AI-generated face, allowing the system to “see” a live person blinking and moving on cue when no such person is present. Sensity AI tested 10 major KYC platforms in 2024 and found 74% were vulnerable. Most platforms have been updating their detection since, but the arms race is ongoing.
What is a deepfake voice clone and how little audio does it need?
A deepfake voice clone is an AI-generated replica of a person’s voice trained to replicate their pitch, cadence, rhythm, and accent. It can then speak any new script in that voice. In 2026, tools like ElevenLabs require as little as 3 seconds of clean audio to produce a usable clone — a threat the FTC flagged as a growing consumer and business risk. Any public recording of you — a LinkedIn video, a podcast appearance, a webinar, even a sufficiently long voicemail greeting — contains more than enough audio.
Why are small businesses targeted more than large companies?
Small businesses have a specific combination of vulnerabilities that make them attractive: one or two people often have sole payment authority with no second approver, employees are conditioned to act quickly on direct owner requests without verification, there’s rarely a dedicated security function, and the owner’s voice and writing are frequently available publicly on the business website, social media, and podcast appearances. The FBI’s 2025 IC3 report specifically flagged businesses under 50 employees as the fastest-growing victim category for AI-assisted voice fraud.
What apps are used to create deepfakes?
The most widely used tools are ElevenLabs and RVC for voice cloning, HeyGen for AI video avatars and real-time face impersonation, and DeepFaceLab for video face replacement. All are legal consumer or open-source products built for legitimate creative and business purposes. The criminal use is in applying them to impersonate real people without consent for financial gain, which is illegal in most jurisdictions under fraud and impersonation statutes updated between 2024 and 2026.
Does my cyber insurance cover deepfake fraud losses?
It depends heavily on when your policy was written and the exact wording of your social engineering and funds transfer fraud coverage. Pre-2024 policies frequently exclude losses where an employee voluntarily authorised a transfer, even under false pretences — known as the “voluntary parting” exclusion. Ask your broker specifically whether AI voice impersonation fraud is covered and get the answer in writing. Policies issued or renewed after mid-2024 are more likely to include explicit AI fraud coverage.
What is the single most effective defence against deepfake voice attacks?
A pre-agreed verbal safe word shared only with your team. Any urgent request by phone that doesn’t include this word is automatically paused and independently verified before any action is taken. Combined with a callback-on-file rule (you call back on a number already in your contacts, not one provided during the suspicious call), these two steps stop the majority of real-time voice deepfake attacks. They cost nothing and can be implemented in under 15 minutes.

The Bottom Line

The technology is real, the losses are real, and the tools that power these attacks cost less than a Netflix subscription. What’s also true is that every documented SMB case in this post was either stopped, or would have been stopped, by a single process change that costs nothing and takes 15 minutes to put in place. A safe word. A callback rule. One extra step before money moves.

You don’t need enterprise security to protect a small business from this. You need your team to know what the attack looks like before it arrives — and a process that creates one moment of friction between a convincing voice and an irreversible wire transfer. That’s it. Start there.

Curated by LORPHIC — Digital intelligence. Clarity. Truth.
