The New Online Danger: How AI Has Supercharged Scams, Fraud, and Identity Theft
The World I Warned About Has Arrived
When I wrote Online Danger in 2018, I warned that our digital world was changing faster than people realized. I talked about how every action we take online leaves a trace—how we live in a world where almost everything is recorded, tracked, analyzed, and stored. Back then, that statement felt bold. Today, it feels almost soft compared to the reality we’re living in.
In just a few years, the threat landscape has shifted in a way I never imagined we’d see this quickly. We’re no longer dealing with criminals who need skill, effort, or even basic intelligence. Today’s attackers don’t have to write code. They don’t need to speak your language. They don’t need to understand human psychology or craft believable stories.
Artificial Intelligence now does the heavy lifting for them.
And because of that, scams that used to take a certain level of effort can now be created at the speed of thought. What used to require hours of planning can be generated in seconds. What used to look suspicious now looks perfect. What used to feel “off” now feels authentic.
We’re not dealing with the same internet. We’re not dealing with the same adversaries. And if we continue thinking about cybersecurity the way we did even five years ago, we’re going to lose—badly.
It’s time to wake up.
This is the new online danger.
AI Didn’t Create More Criminals—It Made Them More Capable
One of the core ideas in Online Danger was that the internet didn’t create more evil people—it simply allowed evil to scale faster. That was true then, and it’s even more true now.
AI gives criminals something they have never had before:
effortless power.
A criminal no longer needs to craft believable phishing emails or learn how to write code. They can simply tell an AI tool:
“Write an email that looks exactly like Bank of America’s fraud department, informing the user of suspicious activity.”
And within seconds, they get a convincing replica of the kind of message the real bank sends its customers. The grammar is correct. The tone is right. The logo is accurate. The sense of urgency is there. AI even knows how real customer service reps sound, and it mimics that language flawlessly.
The quality of the scam has improved so dramatically that the classic “red flags” we all learned to look for—spelling mistakes, awkward phrasing, incorrect formatting—are disappearing. The obvious scam is gone. Today’s scam looks legitimate because AI is designed to make things look legitimate.
And that is why more people, including intelligent, experienced adults who never fell for scams before, are now becoming victims.
AI erased the barrier between “amateur scammer” and “professional criminal.”
The Rise of the Deepfake Voice Scam
One of the most disturbing trends I’ve seen recently is the explosion of deepfake voice scams. Criminals used to need long recordings to clone someone’s voice. Today, they need five to ten seconds—just enough audio from a TikTok video, a voicemail greeting, or a YouTube clip.
Once they have that sample, they can call you pretending to be your spouse, your child, your sibling, or anyone close to you. And it doesn’t just sound like them. It sounds like them on their worst day—panicked, emotional, and desperate for help.
Imagine getting a call in the middle of the afternoon:
“Mom, it’s me… something happened… I need you to send money right now…”
Most parents don’t stop to breathe, let alone verify. They react emotionally because they think they’re hearing the real voice of their child.
This is why deepfakes are so dangerous:
they bypass your brain and attack your heart.
But this is also where one of the principles I taught in Online Danger becomes even more important today:
Always verify out-of-band.
Never trust anything that comes to you. Only trust what you initiate. If a loved one calls you sounding distressed, hang up and call them back at the number you already know. That is how you separate reality from manipulation.
AI is creating perfect illusions. Verification is the antidote.
AI-Written Phishing: The End of the “Obvious Scam” Era
For years, cybersecurity experts encouraged people to look for signs that an email was fake: spelling errors, odd grammar, generic greetings, or strange formatting. Those days are over.
AI can now write emails that:
- sound exactly like the company you do business with
- reference items you’ve recently purchased
- use the exact tone of your bank’s fraud alerts
- mimic your boss’s writing style
- include specific details from your social media posts
- match your communication habits
This new generation of phishing—what I call contextual phishing—is incredibly effective because it uses your real data against you. Criminals don’t just send mass emails to millions of people and hope someone clicks. They gather your address, phone number, employer, family members, hobbies, and online behavior, and then use AI to craft a message that feels personal.
When an email references an actual shipment you’re expecting or the name of your child’s school, your guard drops. And that’s exactly what attackers want.
We’re in the middle of an identity crisis—literally. The digital version of you is now easier to impersonate than ever.
AI Reconnaissance: Criminals Know More Than You Think
In Online Danger, I wrote about digital footprints and digital fingerprints—how we leave behind more information than we realize, and how that data can be used against us. That message is even more critical today.
Back then, the challenge for criminals was collecting and analyzing those breadcrumbs. Today, AI does that instantly.
AI can scan:
- your LinkedIn profile
- your company website
- your Facebook photos
- your TikTok videos
- public records
- online purchases
- leaked data from past breaches
And within seconds, it can assemble a detailed profile that includes:
- your interests
- your schedule
- your family members
- your writing style
- your recent activity
- your risk level
- your likely vulnerabilities
The result is simple:
The attacker knows almost everything they need before they strike.
This is a new form of digital stalking—automated, scalable, and frighteningly accurate.
Romance Scams Have Become AI-Driven Manipulation Engines
Another disturbing trend is the transformation of romance scams. In Online Danger, I emphasized that online identities cannot be reliably verified—anyone can pretend to be someone else. That was true then, and it’s devastatingly true now.
But the game has changed.
Criminals no longer have to do the pretending themselves.
They create AI-powered “companions” and let the AI do the emotional manipulation.
These AI agents can:
- talk to victims 24/7 without fatigue
- show empathy and compassion
- craft detailed backstories
- remember personal details
- mirror emotional needs
- form what feels like a genuine relationship
People aren’t falling for a scammer—they’re falling for a personalized AI that adapts to their emotional vulnerabilities in real time.
And when the moment is right, the requests begin:
- “I need help with a medical emergency.”
- “My account was frozen.”
- “I need travel money to come see you.”
- “I have an investment opportunity…”
These scams don’t just cost money—they break people emotionally. And AI makes them easier to scale and harder to detect.
Synthetic Identities: When Criminals Become You
Traditional identity theft involved stealing your Social Security number or your credit card details. Synthetic identity theft is more dangerous because it combines real and fake data to create a new identity that partially belongs to you—but not entirely.
Criminals use AI to generate:
- fake addresses
- fake utility records
- fake employment histories
- fake identification documents
- fake credit profiles
Then they attach your SSN or your birth date to that identity and start building credit.
You don’t find out until:
- a credit card company calls
- your credit score tanks
- the IRS investigates inconsistencies
- debt collectors start contacting you
Synthetic identities are harder to track, harder to prosecute, and harder to clean up. And AI is accelerating the growth of this crime at an alarming rate.
AI Has Turned Business Email Compromise (BEC) Into an Art Form
BEC used to be straightforward: spoof the CEO, apply pressure, demand money. Today, AI-powered BEC is one of the most financially damaging forms of cybercrime on the planet.
Here’s why:
AI can now imitate not just a writing style, but a personality.
It can study months of email history and learn:
- how the CEO signs emails
- what time of day they usually write
- how they address employees
- what their priorities are
- what phrases they use
- their tone under stress
Then it generates a message that reads exactly as if it came from the person you think you’re hearing from.
Some attackers even leave voicemail messages using the CEO’s cloned voice to “confirm” the request.
To the employee on the receiving end, everything feels real—because AI makes it real.
The result?
Billions in losses and a threat that grows stronger every year.
So How Do You Protect Yourself in the Age of AI?
Here’s the truth:
The technology has evolved, but the fundamental principles of cybersecurity have not. In fact, the guidance from Online Danger is even more relevant today—but it needs to be applied with greater discipline and urgency.
Start with this mindset:
If it comes to you, it cannot be trusted.
If you initiate it, it is far more likely to be legitimate.
That mindset alone will save more people than any antivirus software or security tool ever will.
Let’s go deeper.
1. Adopt a Zero-Trust Mindset
In the book, I talked about the importance of having a “healthy dose of fear and distrust” when operating online. Today, that’s no longer a suggestion. It’s mandatory. Zero trust isn’t just a technical framework for corporate networks; for individuals, it’s a mental model.
It means:
- don’t trust emails
- don’t trust texts
- don’t trust phone calls
- don’t trust QR codes
- don’t trust voicemail
- don’t trust links
- don’t trust anything that reaches you unexpectedly
Criminals rely on blind trust.
Zero trust removes their fuel source.
2. Verify Everything Outside the Channel
One of the most important principles I teach—one that has saved countless people—is this:
Always verify using a second method of communication.
If you get an email from your bank, call the number printed on the back of your debit card.
If you get a call from a family member, call them back on the number saved in your phone.
If your boss messages you, verify in person or through another channel.
Criminals can impersonate one communication channel.
They rarely control two.
Verification is the modern version of a seatbelt.
You don’t need it—until you need it. And by then, it’s too late.
3. Reduce Your Digital Footprint
In Online Danger, I wrote about digital footprints and digital fingerprints—the intentional and unintentional traces we leave behind. What’s changed today is the way that data is weaponized.
Your digital footprint is the blueprint for AI-driven scams.
Start reducing it by:
- deleting old online accounts
- removing personal information from social media
- opting out of data broker sites
- limiting what you post publicly
- restricting what apps can track
- reducing the number of digital breadcrumbs you leave
Visibility creates vulnerability.
4. Upgrade Your Cyber Hygiene
Cyber hygiene isn’t about technology. It’s about habits.
And the habits that used to protect you in 2018 aren’t enough anymore.
You need:
- app-based multi-factor authentication
- strong passwords or passkeys
- regular device updates
- locked credit reports
- weekly account checks
- careful handling of QR codes
- a modern password manager secured with a strong master password
- reduced online exposure
Good hygiene won’t make you invincible—but it will make you a very hard target.
And when attackers have millions of easy targets, they won’t waste time on someone who requires work.
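For readers who want to turn one of these habits into something concrete, here is a minimal sketch of a breach check for your passwords. It assumes the publicly documented Pwned Passwords range API (my example here, not something prescribed in Online Danger), and it relies on k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine, never the password itself.

```python
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach data.

    Uses the Pwned Passwords k-anonymity range API: only the first five
    characters of the password's SHA-1 hash are sent over the network.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each line of the response is "HASH_SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # not found in any breach the service knows about

if __name__ == "__main__":
    # Illustration only -- never paste a password you actually use into a script.
    print(password_breach_count("password123"))
```

If a password you rely on shows up in that data even once, retire it. A password manager makes that painless, which is exactly why it is on the list above.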
5. Teach Your Family About AI Deception
This is the one area where I’ve seen people fall the hardest.
Kids and teens live online more than any generation in history. They trust technology. They trust what they see. They trust what they hear. And AI is exploiting that trust in ways we’ve never seen.
They need to understand:
- deepfake voices
- fake friends
- fake relationships
- fake emergencies
- fake opportunities
- fake authority figures
If we don’t teach them how to recognize deception, they will walk straight into it.
Cybersecurity doesn’t start with technology.
It starts with conversations at home.
The New Online Danger Requires a New Level of Awareness
The threat today is not a smarter criminal.
The threat is a criminal with smarter tools.
AI is not evil—but it is amplifying the capabilities of people who are.
And that is why awareness is more important now than ever. Cybersecurity is not something your bank, your employer, or the government can do for you. It is something you must take responsibility for personally. It is your mindset, your discipline, your level of awareness, and your willingness to treat online interactions with the same seriousness with which you treat your physical safety.
Because the truth is simple:
AI has erased the line between what’s real and what’s fake.
Your awareness is the only line of defense left.