AI-fueled cyberattacks are escalating faster than corporate defenses can adapt, and half of security leaders say they are not prepared. A new EY survey of more than 500 senior cybersecurity officials finds a stark readiness gap: 96% view AI-enabled attacks as a major threat, yet only 46% feel strongly confident their safeguards can withstand them. Most teams remain stuck in pilot mode, even as adversaries industrialize AI at scale.
Budgets and governance are the pressure points. EY reports 85% believe current security funding is insufficient for AI-era risks. At the same time, 97% say a formal framework for secure AI use is essential to ROI, but just 20% have one fully in place. Investment is starting to catch up: organizations allocating at least a quarter of their security budget to AI-native solutions are expected to surge from 9% today to 48% within two years.

Why enterprise AI security readiness continues to lag
Organizations want AI’s speed and scale, but execution often stalls. An oft-cited MIT analysis found 95% of enterprise AI initiatives struggled to deliver meaningful ROI, a signal that pilots don’t automatically translate into production-grade outcomes. Skills are another constraint: a global survey of business leaders across 21 countries showed 87% expect AI to transform work, while only 29% believe their teams have adequate training to get there.
Security leaders also face architectural debt. Many SOCs weren’t designed to log model interactions, inspect prompts, or trace training data lineage—capabilities now vital for investigating prompt injection, data poisoning, and model abuse. Without clear ownership and measurement, AI security remains a side project, not a program.
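Closing that logging gap can start at the application layer. A minimal Python sketch of the idea, assuming a hypothetical model client with a `client.complete(model=..., prompt=...)` method: every call emits a structured JSON audit record a SIEM could ingest, with hashes standing in for raw text where prompts may be sensitive.

```python
import hashlib
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def audited_completion(client, model: str, prompt: str, user: str) -> str:
    """Wrap a model call so each prompt/response pair is traceable end-to-end."""
    trace_id = str(uuid.uuid4())
    started = time.time()
    response = client.complete(model=model, prompt=prompt)  # hypothetical client API
    logger.info(json.dumps({
        "trace_id": trace_id,
        "user": user,
        "model": model,
        # Hash rather than store raw text when prompts may contain sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "latency_ms": round((time.time() - started) * 1000),
    }))
    return response
```

The per-call `trace_id` is what lets an analyst stitch a prompt, its response, and any downstream action into a single investigation timeline.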
How AI is supercharging cyber threats and attacker tactics
Adversaries are already using generative models to craft convincing spear-phishing at scale, automate reconnaissance, and write polymorphic malware that mutates faster than signature-based tools can track. OpenAI and industry threat reports have documented how AI streamlines criminal workflows, lowering both skill and cost barriers.

Deepfakes are moving from curiosity to cash-out. Hong Kong Police recently described a multimillion-dollar fraud in which deepfaked executives on a video call persuaded an employee to authorize transfers—proof that controls around verification, not just malware defense, must evolve. Meanwhile, the Verizon Data Breach Investigations Report continues to show social engineering and credential theft as dominant entry points, and IBM’s Cost of a Data Breach research pegs the average breach in the multimillion-dollar range—underscoring why AI-speed detection and response matter.
Four security actions to take now to counter AI-driven threats
- Build an AI threat playbook and red-team it. Create runbooks for prompt injection, data exfiltration via chat interfaces, model hijacking, and supply-chain risks in third-party AI services. Use frameworks like MITRE ATT&CK and MITRE ATLAS to map likely techniques, then simulate them. Instrument robust logging of prompts, outputs, and model calls so your SOC can investigate end-to-end.
- Stand up an AI security governance framework. Inventory all models, data sources, and integrations; classify what data can be used for training and inference; and enforce human-in-the-loop for sensitive actions. Align with the NIST AI Risk Management Framework and relevant ISO standards (such as ISO/IEC 27001 and AI risk guidance). Define approval gates, vendor requirements, and incident procedures specific to AI components.
- Deploy AI-native defenses where they move the needle. Prioritize email and identity protections that leverage behavioral analytics and LLM-based anomaly detection, EDR that flags living-off-the-land plus code-signed abuse, and fraud controls that can spot synthetic media. Augment analysts with vetted copilots to accelerate triage, but measure outcomes—MTTD, MTTR, false-positive rates—so tools are accountable.
- Tighten data and access controls for the AI stack. Apply Zero Trust to models and agents: least-privilege tokens, secrets isolation, and policy-guarded retrieval-augmented generation. Use DLP on both inputs and outputs to prevent sensitive leakage, enable content filters, and adopt provenance signals or watermark detection where feasible to counter deepfakes. Require suppliers to provide security attestations, model cards, and software bills of materials.
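Measuring outcomes such as MTTD and MTTR, as the third action recommends, can begin very simply. A sketch in Python, assuming incident records carry `occurred`, `detected`, and `resolved` timestamps (field names are illustrative):

```python
from datetime import datetime, timedelta
from statistics import mean

def mttd_mttr(incidents: list[dict]) -> tuple[float, float]:
    """Mean time to detect and mean time to respond, in hours."""
    detect = [(i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents]
    respond = [(i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents]
    return mean(detect), mean(respond)
```

Tracked over time, these two numbers show whether an AI-native tool is actually shrinking the window attackers have to operate.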
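To make the input/output controls in the last action concrete, here is a minimal Python sketch of a DLP-style screen. The secret patterns and injection markers are illustrative assumptions only; production DLP and prompt-injection detectors use far richer, continuously updated rule sets.

```python
import re

# Illustrative secret-shaped patterns; a real DLP engine has many more detectors.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN shape
]

# Illustrative phrases often seen in naive prompt-injection attempts.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
]

def screen_prompt(text: str) -> list[str]:
    """Flag likely injection phrasing on the way in; return matched markers."""
    lowered = text.lower()
    return [marker for marker in INJECTION_MARKERS if marker in lowered]

def screen_output(text: str) -> str:
    """Redact secret-shaped strings on the way out."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Screening both directions matters: inbound filters catch manipulation attempts, while outbound redaction limits damage when a model is tricked into echoing sensitive data anyway.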
What good AI security looks like in high-maturity programs
High-maturity programs treat AI as both a new surface and a new shield. They integrate model telemetry into SIEM, run continuous adversarial testing, and harden identity as the blast door for everything from privileged prompts to automated agents. They also couple technology with people: recurring tabletop exercises, secure coding for ML pipelines, and a security champion in every product team.
EY’s findings point to a simple truth: waiting for clarity is a risk decision. With 50% of leaders acknowledging they’re not ready, the advantage shifts to organizations that operationalize AI security now—codifying governance, pressure-testing defenses, and funding measurable capabilities that shrink attacker dwell time. The window for pilot projects is closing; the window for resilience is open to those who act.