
AI-Powered Scams: What's Changed and What to Do


Scams used to be easy to spot. AI changed that. Here's what modern attacks actually look like — and why the old advice no longer works.

I need to be honest with you: most of the advice you’ve been given about spotting scams is outdated. Not slightly outdated — fundamentally broken. The rules we all learned were designed for a world where scammers were sloppy, writing in a second language, blasting the same badly formatted email to a million people. That world is gone.

AI didn’t just make scams a little better. It removed nearly every surface-level signal we used to rely on. And if we keep teaching people to look for typos and bad grammar, we’re setting them up to get hurt.

This guide is about what actually changed, what stayed the same, and what you can do right now to protect yourself and the people you care about.


The old advice is dead

For twenty years, security awareness training hammered the same checklist: look for misspellings, watch for awkward grammar, be suspicious of generic greetings. And for twenty years, that worked reasonably well. Most phishing emails were poorly written. Most scam messages did have obvious tells.

Here is what a typical phishing email looked like in 2020:

> Dear Valued Customer,
>
> You’re account securty has been compromize. We detect unusual activity on you’re account. Please click the link below to verify you’re informations immediately or access will be suspend.

You probably smiled reading that. “You’re” instead of “your” three times. “Compromize.” “Securty.” “Informations.” Every sentence is a red flag. Your spam filter would catch this before you ever saw it, and if it didn’t, you’d delete it in two seconds.

Now here is what a phishing email looks like in 2026:

> Hi Sarah,
>
> We noticed a sign-in to your Amazon account from a new device in your area. This is usually nothing — it can happen when your phone updates or you connect to a new network. If it was you, no action is needed. If you don’t recognize it, you can review recent activity on the card ending in 4417 and secure your account at amazon-accounts.com. Either way, your Prime benefits and pending orders aren’t affected.
>
> Thanks for being with us since 2019,
> Amazon Account Services

Read that again carefully. The grammar is flawless. The tone is casual and reassuring — the opposite of the panic-driven scam emails we were trained to spot. It uses your real name. It references a specific card ending. It even includes a plausible explanation and tells you that you might not need to take action. That last detail is what makes it devastating, because real Amazon emails sound exactly like this.

The only tell? The domain amazon-accounts.com is not Amazon’s real domain. But that requires you to know what Amazon’s real domain looks like — and most people don’t check.
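Checking which domain a link actually points to is mechanical enough to sketch in code. Here is a minimal Python illustration; the two-label heuristic is deliberately naive (real software should consult a public-suffix list, since registrable domains like co.uk have more labels), and the URLs are just examples:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive: take the last two labels of the hostname.
    Real code should use a public-suffix list instead."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def looks_like(url: str, trusted: str) -> bool:
    """True only if the link's registrable domain matches the trusted one."""
    return registrable_domain(url) == trusted

print(looks_like("https://www.amazon.com/gp/css/order-history", "amazon.com"))  # True
print(looks_like("https://amazon-accounts.com/verify", "amazon.com"))           # False
print(looks_like("https://amazon.com.account-check.net/login", "amazon.com"))   # False
```

Note the third case: the attacker puts “amazon.com” at the front of the hostname, but the part that matters is the end. That is exactly the kind of check your eyes skip and code does not.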

This is where we are now. The grammar check is useless. The “does it feel professional” test is useless. The spelling test is useless. AI writes better English than most humans.


What AI actually gives attackers

To protect yourself, you need to understand what changed and why. AI didn’t give scammers one new trick — it gave them an entire toolkit that removes the friction from every stage of an attack.

Perfect language, in any language

Large language models write fluent, natural prose in dozens of languages. A scam operation based anywhere in the world can now produce emails that read like they were written by a native speaker in your country, your region, even your dialect — messages that sound like they came from your local bank branch. The broken English that used to betray foreign-operated scam rings disappeared overnight.

Voice cloning from seconds of audio

Modern voice cloning models need roughly three to ten seconds of sample audio to produce a convincing replica of someone’s voice. Think about what that means. A voicemail greeting. A TikTok video. A conference talk posted on YouTube. A podcast appearance. If your voice exists anywhere on the internet — and for most people, it does — an attacker can clone it.

The cloned voice can then speak any text in real time. It handles intonation, pacing, and even emotional registers like urgency or distress. When your “daughter” calls sobbing, saying she’s been in an accident and needs money wired immediately, the voice on the line can now sound exactly like her.

Real-time deepfake video

Video synthesis has crossed the uncanny valley for live calls. Attackers can now run a deepfake model during a video call, replacing their own face with a synthetic version of someone else’s in real time. This has been used in business fraud — an employee joins a video call with what appears to be their CFO and three other colleagues, all of whom are AI-generated, and authorizes a wire transfer. It has happened. It will happen more.

Hyper-personalization at scale

This is the capability that worries me most. AI can scrape your public social media profiles, your LinkedIn, your company’s website, your recent posts, your interests, your family members’ names, your recent travel — and use all of it to craft a message that feels deeply personal.

The 2020 phishing email said “Dear Valued Customer.” The 2026 version says “Hi Sarah” and references your actual card ending, your Prime membership start year, and uses the same casual tone your real service emails use. That information is either scraped, leaked in a data breach, or simply inferred from context — and AI stitches it together into a coherent, believable message automatically.

Scale without repetition

Before AI, attackers faced a tradeoff: personalize each message (slow, expensive) or blast the same template to millions (fast, but easy to detect and block). AI eliminates this tradeoff. An attacker can now generate thousands of unique phishing messages per hour, each one tailored to its recipient, each one with different wording so pattern-matching filters can’t catch them as a batch.

Your email provider’s spam filter works partly by recognizing that ten thousand identical messages were just sent. When every message is unique, that signal disappears.
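To make the lost signal concrete, here is a toy sketch of a duplicate-count filter. All names and thresholds are my own invention for illustration; real filters use fuzzy hashing and many other signals:

```python
import hashlib
from collections import Counter

seen = Counter()

def is_bulk(body: str, threshold: int = 100) -> bool:
    """Flag a message once an identical body has arrived `threshold` times."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    seen[digest] += 1
    return seen[digest] >= threshold

# Old-style campaign: one template blasted to everyone trips the counter fast.
template = "Your account is locked. Click here to verify."
old_flags = sum(is_bulk(template) for _ in range(150))

# AI campaign: every body is worded differently, so each hash is unique,
# the counter never accumulates, and nothing gets flagged.
ai_flags = sum(is_bulk(f"Hi, we noticed unusual activity on order #{i}...") for i in range(150))

print(old_flags, ai_flags)  # prints: 51 0
```

The old campaign gets flagged on message 100 and every message after; the AI campaign sails through untouched. That, in miniature, is what “scale without repetition” costs the defenders.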


What AI does NOT change

Here is the part that matters most, and the reason I’m not writing a doom-and-gloom piece: AI changed the surface of scams, but it did not change the structure.

Every scam — every single one, from a Nigerian prince email in 2003 to a deepfake CEO call in 2026 — requires you to do something. Click a link. Call a number. Wire money. Share a code. Download a file. Hand over a password. Approve a transaction.

The attacker’s message can be perfect. The voice can be indistinguishable. The video can be flawless. But none of it matters if you don’t take the action they need you to take.

This is why the old advice of “look for typos” was always the wrong layer to focus on. It was a convenient shortcut, not a real defense. It worked when scams were sloppy, but it was never the actual thing keeping you safe. The actual thing keeping you safe is your behavior — whether you verify before you act.

The PUSHED framework targets this layer. It doesn’t ask you to evaluate the quality of a message — it asks you to evaluate the pattern of the interaction. Is someone creating pressure? Was this unsolicited? Is the situation surprising? Are the stakes high? Is there something exciting or desperate about it? Those behavioral signals don’t depend on grammar. They don’t depend on whether the voice sounds real. They work against every scam because they target the one thing an attacker can never remove: the need for you to act.


Real examples: what modern AI scams look like

The perfect corporate phishing email

You receive an email that appears to be from your company’s IT department:

> Hi — quick heads-up before Thursday’s all-hands. We’re migrating everyone to the new SSO configuration tonight, and Jessica Chen asked us to make sure nobody loses access mid-meeting. Please re-verify your credentials at the portal below before 6 PM. If you already did this, disregard — you’re all set.

Why this works: It references a real person at your company (scraped from LinkedIn). It mentions a specific meeting (your company’s all-hands schedule is probably public or inferable). It gives a reasonable technical explanation. It even provides an “out” — “if you already did this, disregard” — which makes it feel low-pressure and legitimate.

How to catch it: Don’t evaluate the email. Verify the request. Open a new browser tab, go to your company’s actual IT portal, or send Jessica Chen a message on Slack. If the request is real, you’ll find it there. If it’s not, you just stopped an attack.

The voice clone emergency call

Your phone rings. It’s your son’s number — or at least, it appears to be. The voice on the line is unmistakably his.

“Mom, I messed up. I got in a car accident and the other driver is saying I was at fault. The police are here and they’re saying I might get arrested if I can’t pay for the damages right now. I’m so scared. Can you send money through Zelle? The officer said if I can pay the other driver directly, they won’t press charges. I need like three thousand dollars. Please don’t tell Dad, he’s going to kill me.”

The voice is shaking. It sounds exactly like him. The caller ID shows his number (spoofed). The scenario is designed to trigger every parental instinct — your child is in danger, they need you now, and they’re asking you to keep it secret.

How to catch it: This hits nearly every PUSHED signal: pressure (arrest threat, pay right now), an unsolicited and surprising call, high stakes (a Zelle transfer is irreversible), and raw desperation (sobbing, begging). The demand for secrecy (“don’t tell Dad”) is a classic scam tell in its own right. Hang up. Call your son back directly, using the number in your contacts. If he doesn’t answer, text him. If he doesn’t respond, call another family member who might be with him. The real version of this scenario can wait three minutes for you to verify. The scam version cannot, which is exactly why they pressure you not to.

This is also exactly why family code words matter. A pre-agreed word or phrase that your family members know — something that would never appear in public audio — is one of the few defenses against voice cloning that works reliably.

The deepfake job interview

You apply for a remote position at a well-known company. You receive a professional email inviting you to a video interview. On the call, you’re greeted by someone who appears to be the hiring manager — their face matches the LinkedIn profile you researched, their background looks like a corporate office.

The interview goes well. At the end, they say they’d like to move forward and ask you to fill out onboarding paperwork. The forms ask for your full legal name, date of birth, Social Security number, bank routing number for direct deposit, and a copy of your driver’s license. Standard new-hire paperwork — except the person you’re looking at doesn’t exist. The entire interview was a deepfake, and you just handed a stranger everything they need to steal your identity.

How to catch it: Verify the opportunity through channels you control. Go to the company’s official website and find their real careers page. Call the company’s main number and ask for the hiring department. Check whether the job posting exists on their official site, not just on a third-party job board. No legitimate employer will penalize you for verifying that a job offer is real before handing over your Social Security number.


What actually works now

The old checklist is dead. Here is what replaces it.

1. Verify through a separate channel

This is the single most effective defense against every scam listed above, AI-powered or not. When someone asks you to do something — anything involving money, credentials, personal information, or software installation — verify the request through a channel you initiate.

Got an email from your bank? Don’t click the link. Open your banking app or call the number on the back of your card. Got a call from your boss? Hang up and call them back on their known number. Got a text from your kid? Call them directly.

The key word is separate. Replying to the suspicious email doesn’t count — an attacker controls that channel. You need to start a new, independent communication through a path you trust.

2. Use the PUSHED framework

When a message or call arrives, run through the PUSHED checklist before acting. Is there pressure? Was it unsolicited? Does it feel surprising? Are the stakes high? Is there excitement or desperation?

If you hit two or more of those signals, stop and verify. The framework is explained in full in our PUSHED/VERIFY guide, and it’s worth reading — it’s the single most important page on this site.
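The two-or-more rule is simple enough to write down. Here is a toy sketch of the counting logic, using the signal names from the checklist above; the scoring itself is my illustration, not an official implementation of the framework:

```python
def should_verify(signals: dict, threshold: int = 2) -> bool:
    """Two or more PUSHED signals present -> stop and verify through a separate channel."""
    return sum(signals.values()) >= threshold

# The voice-clone emergency call from earlier, scored signal by signal:
call = {
    "pressure": True,      # arrest threat, "right now"
    "unsolicited": True,   # you did not expect this call
    "surprising": True,    # out-of-nowhere accident story
    "high_stakes": True,   # irreversible Zelle transfer
    "excitement": False,
    "desperation": True,   # sobbing, "don't tell Dad"
}
print(should_verify(call))  # prints: True
```

Five of six signals fire, so the answer is unambiguous: verify before acting. A single signal on its own (say, an unsolicited but ordinary newsletter) stays below the threshold, which is why the rule is two or more rather than any.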

3. Establish family code words

Pick a word or phrase that your family agrees on — something obscure that would never appear in social media posts, voicemail greetings, or public audio. If anyone calls with an emergency and asks for money or action, ask for the code word first. A real family member will know it. A cloned voice reading from a script will not.

This simple step reliably defeats voice-cloning attacks. Set it up today. Our family code word guide walks you through how to choose one and how to teach it to kids and elderly relatives.

4. Let your password manager do the detecting

Here is something most people don’t realize: a password manager is one of the best anti-phishing tools ever built, and it works against AI-generated scams just as well as sloppy ones.

When you use a password manager, it auto-fills your credentials based on the domain of the website you’re on. If a phishing email sends you to amazon-accounts.com instead of amazon.com, your password manager will not offer to fill your Amazon password — because it knows that’s not Amazon’s website. You’ll reach for your password and it won’t be there, and that moment of friction is your signal that something is wrong.

This only works if you use the password manager consistently and never manually type passwords into websites. Let the manager fill them. If it doesn’t offer to fill, treat that as a red flag and investigate.

5. Treat unexpected contact about money as hostile until verified

This is a mindset shift, not a technical control, but it might be the most important one. Any unexpected communication that involves money — a request to pay, a notification about a charge, a refund you didn’t expect, an invoice you don’t recognize — should be treated as potentially malicious until you verify it independently.

This doesn’t mean you need to be paranoid. It means you need to build a simple habit: before you act on anything involving money, you verify it through a separate channel. Every time, no exceptions. The two minutes it takes to verify will never cost you anything. The one time you skip it could cost you everything.


The real reason AI scams are dangerous

It’s not the technology itself. It’s the confidence gap.

Most people believe they can spot a scam. They think scams happen to gullible people, to the elderly, to people who aren’t tech-savvy. That confidence was somewhat justified when scams were obviously sloppy. It is actively dangerous now.

The people most at risk from AI-powered scams are the people who think they’re too smart to fall for one. They’ve internalized the old rules — check for typos, look for generic greetings, trust your gut — and they believe those rules will protect them. When a flawlessly written, deeply personalized, contextually aware phishing email arrives, their old rules say “this looks fine.” And so they click.

Humility is a security tool. The willingness to say “I’m not sure this is real, let me check” is worth more than any amount of technical knowledge about how phishing works. Build that habit. Teach it to the people around you.


Test yourself

Think you can tell the difference between AI-generated and human-written messages? Our AI Detection Quiz puts you through real examples — some written by people, some by AI, some by scammers using AI tools. It’s harder than you think.

Take the AI Detection Quiz

Next up: QR Code Safety