CYBERSECURITY AWARENESS | MAY 2026
When Seeing Is No Longer Believing: How AI Deepfake Scams Are Targeting Businesses
This Month’s Cybersecurity Minute
Each month, we put one cybersecurity topic on your radar — not to cause panic, but because an informed team is a protected team. This month's topic is one that, honestly, still catches us off guard every time we walk through it with clients: AI-powered deepfake scams. Real voices. Familiar faces. Live, on a call with you. And they're showing up right here in Florida.
Sharon Brightwell of Dover, Florida, was in the middle of a normal Wednesday when her phone rang. The voice on the other end was her daughter — sobbing, hysterical, saying she’d been in a car accident and desperately needed $15,000 for bail.
“There is nobody that could convince me that it wasn’t her. I know my daughter’s cry.”
Sharon withdrew the cash. A driver picked it up. Only when her grandson tracked down her real daughter — safe, at work, unaware any of this was happening — did Sharon learn the truth: she hadn’t been talking to her daughter at all. She’d been talking to an AI-generated clone of her daughter’s voice, built from social media videos. Her retirement savings. Gone.
“I pray this doesn’t happen to anyone else,” she said afterward. “My husband and I are recently retired. That money was our savings.” (Source: WFLA / Fox 13, July 2025)
This is what AI deepfake scams look like in 2026. And they’re not just targeting families — they’re hitting businesses, executives, and employees across the country.
How AI Deepfake Scams Actually Work
The term “deepfake” used to feel like a Hollywood problem. It’s not anymore.
Here’s what’s happening: attackers pull publicly available photos, videos, and audio clips from LinkedIn, YouTube, Facebook, and other social platforms. Using AI, they train a model that reproduces a trusted person’s face, voice, and mannerisms in real time. It can take as little as three seconds of audio and a single clear photo. That model then runs live — on a FaceTime, Zoom, or Teams call with you.
The person you’re looking at isn’t there. But everything — the voice, the face, the expressions — says they are.
According to Hiya’s State of the Call 2026 report — a survey of over 12,000 consumers across the U.S. — one in four Americans has already received a deepfake voice call in the past 12 months. Another 24% aren’t sure they could tell the difference. That means nearly half the country has either encountered this or couldn’t identify it if they had.
It’s Not Just Happening “Somewhere Else”
Sharon’s story from Dover is one of thousands. Here are two more that show how wide this threat has spread:
📢 Texas Sheriff’s Voice Stolen for Supplement Ads: Brewster County Sheriff Ronny Dodson discovered that a Facebook account had posted a video using real footage of him — but with completely fabricated AI audio of him endorsing a health supplement he had never heard of. He never gave permission. He never said those words. The video spread before he even knew it existed. “They can do anything,” he said. “They can ruin your life.” (Source: NewsWest9, November 2025)
🎬 Virginia Content Creator’s Image Hijacked: Karen Flowers, a licensed cosmetologist and content creator from Henrico, Virginia, found out from her cousin that her exact image — pulled directly from her YouTube hair care tutorials — was being used in ads selling life insurance. Her face. Someone else’s voice. On a channel she had nothing to do with. The platform took months to act. “I know my image. I know what I look like,” she said. (Source: InvestigateTV, April 2026)
These aren’t isolated incidents. AI-enabled scams increased 20-fold from 2023 to 2025. (Source: Microsoft / AARP / BBB research) The FBI’s Internet Crime Report recorded $16.6 billion in total cybercrime losses in 2024 — a 33% jump year over year — with AI-enhanced fraud driving a growing share of those incidents. (Source: FBI IC3, 2025)
What This Means for Your Business and Your Team
If you’re a business owner or manager, here’s what keeps us up at night:
Your employees are making judgment calls every day. A video call from “the CEO” asking for a wire transfer. A voice message from “the CFO” with urgent instructions right before a long weekend. A call that looks like it’s from a vendor your team has worked with for years.
The technology doesn’t care how smart your team is. It’s designed to get past the part of the brain that asks questions. Urgency. Emotion. Familiarity. Those are the real attack vectors — and AI just gives them a face and a voice.
Red Flags to Train Your Team to Recognize
Your eyes and ears are no longer reliable on their own — and that’s not a knock on your team. It’s just where the technology is. Here’s what to watch for:
- An unexpected request for money, credentials, or sensitive information — even from a face or voice you recognize
- Urgency and pressure to act right now, before you have time to verify
- Requests to keep the conversation confidential until after you act
- Visual glitches during video: lip sync slightly off, lighting that doesn’t match, unnatural blinking
- Contact information that was provided during the call, rather than a number you already had
- A request arriving through an unexpected channel — WhatsApp from the “CEO,” a Teams message from a rarely-heard-from contact
One technique worth knowing: ask the person on the call to do something simple and unscripted — tilt their head, pick something up, wave. Real-time deepfake rendering can glitch under unexpected movement. If they hesitate or the video distorts, trust that reaction.
Five Things Your Team Can Do Right Now
- Build a verify-before-you-act habit. If anyone on a call — no matter how familiar they look or sound — asks you to transfer funds, share passwords, or take sensitive action: stop. Hang up and call that person back on a number you already have saved. Don’t use contact info provided during the suspicious call. No legitimate request falls apart because you took 60 seconds to double-check.
- Set a team passphrase for sensitive requests. Decide on a private code word between leadership and finance teams — in person, never posted online. If someone calling with an urgent payment request can’t produce it, the call stops. Sharon’s family now uses one. Many businesses already do. It’s the simplest, lowest-tech defense against a very high-tech threat.
- Be more intentional about what you post online. Scammers built a clone of Sharon’s daughter’s voice from Facebook and Snapchat videos. They built Sheriff Dodson’s from public footage. The more audio and video your team — and your leadership — has online, the more material attackers have to work with. That doesn’t mean disappearing from the internet. It means being thoughtful.
- Apply extra scrutiny to any request that arrives through a new or unexpected channel. A WhatsApp from the “CEO.” A Teams message from a vendor you haven’t worked with in months. An email thread that suddenly shifted platforms. These are signals worth pausing on, every time.
- Trust your gut and give your team permission to pause. If something feels off — the tone, the urgency, the ask — it probably is. Create a culture where pausing to verify isn’t second-guessing leadership. It’s protecting the business. That instinct is worth more than any technology right now.
Questions Worth Bringing to Your Team This Month
- Do we have a verification protocol for wire transfers or sensitive requests made over phone or video?
- Does our team know what to do if they receive an urgent request from “the CEO” through an unexpected channel?
- Have we established a passphrase or call-back protocol for high-stakes requests?
- Who on our team is most likely to receive this kind of call — and are they prepared?
The Bottom Line
AI deepfake scams aren’t coming — they’re already here. In Florida. In Texas. In Virginia. In businesses and living rooms across the country. And they’re built to exploit the trust your team has worked hard to build with each other.
The defense isn’t complicated. It’s a habit: verify through a separate channel, build a passphrase protocol, and give your people permission to pause without feeling like they’re being difficult. That instinct to double-check? It’s one of the most valuable things your team has right now.
You can’t prevent every threat. But you can make your business a harder target. It starts with your people knowing what to look for — and knowing they have permission to trust their gut.
As always, your Paradigm team is just a call, email, or text away if any of this raises questions for your business.
If you’d like us to walk your team through a quick awareness conversation on AI impersonation threats, we’re here to help. No pressure, no sales pitch — just an honest conversation about where you stand.
— Angie, Oscar, and Your Paradigm Team
P.S. If you haven’t had a chance to read our piece on phishing emails in 2026, it’s worth a few minutes — AI has changed what those look like too. [Link to Phishing Blog — Blog 5]