Security leaders are bracing for a year when adversaries wield artificial intelligence with industrial efficiency. From autonomous hacking agents to ransomware that rewrites the rules, the mix of speed, scale, and precision now within reach could magnify digital risk far beyond 2025’s benchmarks.
Threat intelligence teams at Google (Mandiant and the Google Threat Intelligence Group), Anthropic, CrowdStrike, NCC Group, and others are flagging a decisive turn: AI will no longer be a sidekick in cyber campaigns—it will run major parts of them. Here are 10 ways the threat could break wide open.
- 1. Agentic AI Supercharges Intrusions Across Networks
- 2. AI-Enabled Malware Adapts In Real Time
- 3. Prompt Injection And Model Compromise
- 4. Shadow Agents Create Invisible Data Leaks
- 5. AI Browsers Expand The Attack Surface
- 6. Hyper-Real Social Engineering At Scale
- 7. API And Toolchain Abuse, Even Without An API
- 8. Ransomware Evolves Into Data Manipulation
- 9. OT And Supply Chains In The Crosshairs
- 10. Identity And Token Theft At SaaS Scale
- Nation-States Will Press The Advantage With AI
- What Leaders Should Do Now To Harden Against AI-Driven Risk
1. Agentic AI Supercharges Intrusions Across Networks
Autonomous agents can chain tasks—recon, phishing, exploitation, and lateral movement—without waiting for humans. Anthropic has already documented a state actor steering agentic tooling to probe roughly 30 global targets with minimal human oversight. Expect fewer “one-and-done” attacks and more persistent, self-improving campaigns.
2. AI-Enabled Malware Adapts In Real Time
Google’s threat teams reported AI-involved malware that mutates mid-execution and generates fresh payloads on demand. Examples observed in the wild—like tools that craft one-line PowerShell commands to silently exfiltrate data—show how detection evasion becomes a built-in feature, not an afterthought.
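To make the defender's side of this concrete, here is a minimal sketch of the kind of heuristic that flags encoded, hidden-window PowerShell exfiltration one-liners in process telemetry. The patterns and scoring are illustrative assumptions, not any vendor's actual detection logic, and real EDR products correlate far more signals.

```python
import base64
import re

# Illustrative heuristics for suspicious PowerShell command lines:
# encoded payloads, download cradles, and hidden windows.
SUSPICIOUS = [
    re.compile(r"-enc(odedcommand)?\s+[A-Za-z0-9+/=]{40,}", re.I),
    re.compile(r"iex\s*\(", re.I),                        # Invoke-Expression
    re.compile(r"net\.webclient|invoke-webrequest", re.I),  # download cradle
    re.compile(r"-windowstyle\s+hidden", re.I),
]

def score_command(cmdline):
    """Crude risk score: one point per matched pattern."""
    return sum(1 for p in SUSPICIOUS if p.search(cmdline))

def decode_if_encoded(cmdline):
    """Best-effort decode of -EncodedCommand payloads (UTF-16LE base64)."""
    m = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.I)
    if not m:
        return None
    try:
        return base64.b64decode(m.group(1)).decode("utf-16-le", "ignore")
    except Exception:
        return None
```

The point of malware that mutates mid-execution is precisely that static pattern lists like this decay quickly, which is why Google's researchers frame evasion as a built-in feature.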
3. Prompt Injection And Model Compromise
As enterprises plug large models into workflows, attackers will feed hidden instructions to bypass safeguards, siphon data, or sabotage outputs. Security researchers warn the low-cost, high-reward profile of prompt injection will drive a surge in enterprise AI exploitation and model-targeted attacks.
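The mechanics are simple to illustrate: in an indirect prompt injection, untrusted retrieved content carries instructions aimed at the model rather than the user. The sketch below quarantines chunks matching a few naive cue phrases before building a prompt. The cue list and prompt template are assumptions for illustration; keyword filters like this are easily bypassed and are not a robust defense.

```python
import re

# Naive cues for hidden instructions embedded in untrusted content.
INJECTION_CUES = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"exfiltrate|send .* to http", re.I),
]

def flag_untrusted_chunk(text):
    """Return True if a retrieved chunk matches an injection cue."""
    return any(p.search(text) for p in INJECTION_CUES)

def build_prompt(user_question, retrieved):
    """Quarantine suspicious chunks instead of feeding them to the model."""
    safe = [c for c in retrieved if not flag_untrusted_chunk(c)]
    context = "\n---\n".join(safe)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\nQ: {user_question}"
    )
```

Stronger mitigations separate trusted and untrusted channels architecturally (for example, never letting retrieved text trigger tool calls) rather than relying on content filtering.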
4. Shadow Agents Create Invisible Data Leaks
Employees are spinning up AI agents without IT approval, granting risky permissions and connecting to sensitive SaaS data. Google’s security leaders expect misconfigurations, overprivileged agents, and unmanaged tool access to trigger compliance failures, IP theft, and silent supply-chain exposure.
5. AI Browsers Expand The Attack Surface
New AI-native browsers blend web access with agent execution and corporate context. Analysts warn traditional security stacks were not designed for browsers that act like autonomous workers. Gartner’s recent guidance to block AI browsers underscores the speed of this shift.
6. Hyper-Real Social Engineering At Scale
Voice cloning, tailored phishing, and automated background research will arm groups like ShinyHunters with precision lures that sidestep technical defenses. Pindrop says 70% of confirmed healthcare fraud already originates from bots; add convincing AI voices and deepfakes, and trust becomes a liability.
7. API And Toolchain Abuse, Even Without An API
Modern agents can discover undocumented interfaces and programmatically interact with services that were never meant to be automated. Security leaders warn this erodes years of API governance. Expect attackers to auto-generate integrations, leap between SaaS tenants, and exploit machine-to-machine trust at scale.
8. Ransomware Evolves Into Data Manipulation
Cybersecurity Ventures forecasts that global ransomware damage will climb 30% to $74 billion. AI will accelerate targeting and negotiation while shifting tactics from simple encryption to multifaceted extortion: stealing, altering, and threatening to leak sensitive data, including from backups and cloud pipelines.
9. OT And Supply Chains In The Crosshairs
Threat teams expect attackers to hit business systems like ERP to indirectly paralyze factories and logistics. Google’s researchers have highlighted how insecure remote access and Windows-centric weaknesses let common malware reach industrial networks, turning single breaches into cascading outages across suppliers.
10. Identity And Token Theft At SaaS Scale
Attackers increasingly target OAuth tokens and service credentials—the “skeleton keys” of cloud apps. The CISA–MITRE CWE catalog spotlights weak credential protection, and NIST has sought expert input on safeguarding tokens. Recent mega-breaches showed how stolen tokens unlock vast CRM and collaboration data without touching passwords.
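Defenders can get ahead of this by auditing OAuth grants for the two red flags behind recent token-theft breaches: overly broad scopes and long-lived, unrotated tokens. The sketch below is a hypothetical audit over recorded grants; the field names, scope strings, and 90-day threshold are illustrative assumptions, not any platform's API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical audit policy: flag broad scopes and stale tokens.
BROAD_SCOPES = {"full_access", "offline_access", "admin", "*"}
MAX_TOKEN_AGE = timedelta(days=90)

def audit_grant(grant, now=None):
    """Return a list of findings for one recorded OAuth grant."""
    now = now or datetime.now(timezone.utc)
    findings = []
    risky = BROAD_SCOPES & set(grant.get("scopes", []))
    if risky:
        findings.append(f"broad scopes: {sorted(risky)}")
    if now - grant["last_rotated"] > MAX_TOKEN_AGE:
        findings.append("token not rotated in 90 days")
    return findings
```

Short token lifetimes and narrow scopes shrink the blast radius when a token does leak, which is exactly what the mega-breaches exploited.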
Nation-States Will Press The Advantage With AI
Security teams track DPRK operatives infiltrating companies for paychecks and privileged access, including crypto theft, while Russia refines long-horizon influence and espionage. China-linked groups are expected to keep exploiting edge devices and trusted partners to scale quietly across downstream organizations.
What Leaders Should Do Now To Harden Against AI-Driven Risk
Move from pilot projects to governed AI programs: inventory agents, scope permissions, and monitor tool use like you would a human workforce. Prioritize identity-first security, SaaS posture management, model abuse testing, and incident-ready backups. According to Google’s threat teams, extortion remains the most disruptive risk; resilience must be measured, not assumed.
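The "inventory agents, scope permissions" step can be sketched as a simple least-privilege check: compare each AI agent's granted permissions against an approved allowlist, the way you would review a human account. Agent names, permission strings, and the allowlist below are all hypothetical.

```python
# Hypothetical allowlist: approved permissions per known agent.
APPROVED = {
    "support-summarizer": {"tickets:read"},
    "report-agent": {"crm:read", "docs:write"},
}

def find_overprivileged(agents):
    """Return, per agent, the permissions beyond its approved scope.
    Unknown (shadow) agents have no approved scope, so every permission
    they hold is flagged."""
    excess = {}
    for a in agents:
        allowed = APPROVED.get(a["name"], set())
        extra = set(a["permissions"]) - allowed
        if extra:
            excess[a["name"]] = extra
    return excess
```

Run against a live agent inventory, a check like this surfaces both overprivileged approved agents and shadow agents nobody signed off on.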
The message from front-line researchers is blunt: AI is both an accelerator and a wildcard. Organizations that treat it as a core business risk—and invest in visibility, identity hygiene, and agent control—will weather the storm better than those chasing shiny demos.