Leading cybersecurity experts say the balance of power online is changing as cybercriminals develop generative AI to escalate social engineering, generate identities, and automate fraud. The worry is not only more attacks, but also faster, cheaper, and increasingly convincing scams that challenge the line between reality and fabrication.
Behind the headlines is a simple proposition: Models that generate text, audio, and video now permit us to easily create expedient imitations of a colleague’s voice, foolproof phishing emails, or even an entire crime empire with little more than some temperamental IT infrastructure. The overall result is a wider range of victims and greater success rates at every level.
- AI Supercharges Social Engineering Tactics
- Deepfakes Go From Novelty to Cash-Out Schemes
- The Ransomware and Crimeware-as-a-Service Revolution Continues
- Malvertising and Synthetic Ads Prey on the Vulnerable
- Defenders Race to Catch Up as Threats Accelerate
- How to Prepare Now: Practical Steps for Individuals and Teams
AI Supercharges Social Engineering Tactics
What used to take weeks can now happen in minutes. Big language models have the capacity to scrape public profiles, breach dumps, and data broker dossiers to compose messages that sound exactly like your trusted correspondents. When attackers can personalize at scale, business email compromise — already one of the most costly cybercrimes — becomes painfully exact.
The methods the scammers use vary widely but are often focused on impersonating someone else — a loved one, employer, family member, or government official — and convincing victims to pay them in some way, usually with gift cards. Artificial intelligence will accelerate that curve higher by automating reconnaissance and tailoring lures to a target’s role, calendar, and communication style, experts say.
Criminal forums have also sprouted ads for “fraud GPTs” and turnkey phishing kits that produce sophisticated emails, landing pages, or phone scripts tailored to industries like finance or health care. That means more mid-sized businesses and individual consumers who were once below the threshold of targeted scams are now squarely in scope.
Deepfakes Go From Novelty to Cash-Out Schemes
Voice and video parodies are no longer only party tricks. Banks have recorded instances of voice biometrics being bypassed with cloned audio, and investigators have warned about corporate wire fraud initiated via video calls — in which multiple “participants” were synthetic. In one high-profile case, a finance worker approved a multimillion-dollar transfer after participating in what appeared to be an ordinary call with deepfaked colleagues.
The technology threshold is falling. Open-source software can be trained on minutes of voice, and consumer-grade apps produce serviceable approximations from a few dozen photos. Researchers at Europol and major fraud labs caution that campaigns capitalizing on the spread of deepfakes will merge with traditional scams like romance, tech support, and investment scams to take greater sums from victims over longer periods.
Detection remains a race. Watermarking and content provenance efforts like the C2PA-backed Content Credentials, being pushed by big media and software companies, help to some extent, but criminals can remove metadata or transcode assets. Even top-of-the-line detectors falter at the edges, where anything from illumination, compression, and accent diversity work against accuracy.
The Ransomware and Crimeware-as-a-Service Revolution Continues
Classic malware is no longer outdated, but modern crews are more prone to extortion, data theft, and shaming sites that can increase pressure to pay. Crypto analysts observed that ransomware profits surpassed $1B in a single year and say the time is ripe for action. They use those profits to pay developers, negotiators, and initial access brokers — just like legitimate startups that hire sales and DevOps.
Generative AI slides right into this framework. It writes and drafts negotiation emails, localizes threats into dozens of languages, and scripts phone calls that sound legitimate. It also facilitates building realistic-looking fake portals for “proof-of-breach,” and automates a number of tasks related to low-level coding in order to refresh payloads and avoid detection.
Malvertising and Synthetic Ads Prey on the Vulnerable
At Black Hat, researchers showed how to hijack ad networks, including those from Google, and reach nearly 1 billion Chrome users. AI-authored creatives — approving smiles, authoritative voice-overs, spick-and-span grammar — work better than the ham-fisted scams of yore. But it’s a powerful brew when mixed with microtargeting derived from browsing history and inferred life events.
Consumer watchdogs say the losses from fraud are in the tens of billions, and most types are on the rise, with impostor scams and investment fraud that target nonpublic securities leading the pack. Older adults and individuals in financial distress are hit the hardest. The harshness is in the cadence: bad actors return to the well again and again, deploying synthetic identities and new payment rails to siphon off savings over months.
Defenders Race to Catch Up as Threats Accelerate
Businesses are turning to AI for anomaly detection, behavioral baselining, and fast triage. Security teams are also applying immediate defenses and model governance to mitigate threats unique to AI systems, including prompt injection (adding a phony claim when seeding an AI system for training so the system learns recommendations based on that fake input) and data exfiltration through chat interfaces. The OWASP Top 10 for LLM Applications and the NIST AI Risk Management Framework have now become standard references for safe AI workflow design.
Yet fundamentals still decide outcomes. Humans are at the root of most breaches — phishing, stolen credentials, and misconfigurations are still the front door. Even so, with DMARC enforcement, phishing-resistant authentication for payments, and very strict change control for payments, you can drive risk exposure way, way down, but adoption is spotty outside highly regulated sectors.
How to Prepare Now: Practical Steps for Individuals and Teams
- Do not assume hyper-personalized phishing won’t target you. Implement out-of-band confirmation for any request that results in money, credentials, or personal data — full stop. Treat unverified speakers and video as untrusted until a second channel is used to verify instructions.
- Upgrade identity security. Transition to FIDO2 or other hardware security keys for critical accounts and turn off SMS one-time codes if you can. Monitor for credential stuffing and implement just-in-time approvals.
- Protect email and payments. Gate the inbox and payment flow. Enforce SPF, DKIM, and DMARC at reject. Perform callback verification with known contacts before wires or vendor banking changes, and sandbox links and attachments by default. Review ad buys that use synthetic imagery or overly dramatic claims for consumer protection teams.
- Secure your AI systems. Use input/output filtering to guard against prompt injection, isolate models and embeddings from your crown-jewel data, and log each model interaction for audit. Train employees to spot injection attempts in documents, resumes, and emails that could alter an assistant’s response.
For those on the front lines, the takeaway is blunt: AI gives criminals an advantage; it also allows defenders to react more quickly than ever before. Those who upgrade identity, confirm instructions through authoritative conduits, and instrument their AI operations will reverse the curve in their favor. The rest of us can count on better lies, coming sooner.