FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

50% Of Security Leaders Not Ready For AI Attacks

Gregory Zuckerman
Last updated: March 24, 2026 2:01 am
By Gregory Zuckerman
Technology
6 Min Read
SHARE

AI-fueled cyberattacks are escalating faster than corporate defenses, and half of security leaders say they are not prepared. A new EY survey of more than 500 senior cybersecurity officials finds a stark readiness gap: 96% view AI-enabled attacks as a major threat, yet only 46% feel strongly confident their safeguards can withstand them. Most teams remain stuck in pilot mode, even as adversaries industrialize AI at scale.

Budgets and governance are the pressure points. EY reports 85% believe current security funding is insufficient for AI-era risks. At the same time, 97% say a formal framework for secure AI use is essential to ROI, but just 20% have one fully in place. Investment is starting to catch up: organizations allocating at least a quarter of their security budget to AI-native solutions are expected to surge from 9% today to 48% within two years.

Table of Contents
  • Why enterprise AI security readiness continues to lag
  • How AI is supercharging cyber threats and attacker tactics
  • Four security actions to take now to counter AI-driven threats
  • What good AI security looks like in high-maturity programs
50% of security leaders unprepared for AI-driven cyberattacks

Why enterprise AI security readiness continues to lag

Organizations want AI’s speed and scale, but execution often stalls. An oft-cited MIT analysis found 95% of enterprise AI initiatives struggled to deliver meaningful ROI, a signal that pilots don’t automatically translate into production-grade outcomes. Skills are another constraint: a global survey of business leaders across 21 countries showed 87% expect AI to transform work, while only 29% believe their teams have adequate training to get there.

Security leaders also face architectural debt. Many SOCs weren’t designed to log model interactions, inspect prompts, or trace training data lineage—capabilities now vital for investigating prompt injection, data poisoning, and model abuse. Without clear ownership and measurement, AI security remains a side project, not a program.

How AI is supercharging cyber threats and attacker tactics

Adversaries are already using generative models to craft convincing spear-phishing at scale, automate reconnaissance, and write polymorphic malware that mutates faster than signature-based tools can keep up. OpenAI and industry threat reports have documented how AI streamlines criminal workflows, lowering both skill and cost barriers.

A detailed infographic titled Gartners Top Cybersecurity Predictions (2027-2030): Adapting to the Age of AI & Sovereignty. The infographic is divided into several sections, each highlighting a key prediction or trend in cybersecurity. Key themes include the increasing adoption of AI security platforms, the impact of AI on incident response, the importance of remediating AI data debt, the need for identity visibility and intelligence, and the growing requirement for sovereignty of cloud security controls. Visual elements include shields with eye motifs, scales balancing custom-built AI apps and security controls, a gavel representing fines and regulations, and a globe symbolizing sovereign cloud.

Deepfakes are moving from curiosity to cash-out. Hong Kong Police recently described a multimillion-dollar fraud in which deepfaked executives on a video call persuaded an employee to authorize transfers—proof that controls around verification, not just malware defense, must evolve. Meanwhile, the Verizon Data Breach Investigations Report continues to show social engineering and credential theft as dominant entry points, and IBM’s Cost of a Data Breach research pegs the average breach in the multimillion-dollar range—underscoring why AI-speed detection and response matter.

Four security actions to take now to counter AI-driven threats

  1. Build an AI threat playbook and red-team it. Create runbooks for prompt injection, data exfiltration via chat interfaces, model hijacking, and supply-chain risks in third-party AI services. Use frameworks like MITRE ATT&CK and MITRE ATLAS to map likely techniques, then simulate them. Instrument robust logging of prompts, outputs, and model calls so your SOC can investigate end-to-end.
  2. Stand up an AI security governance framework. Inventory all models, data sources, and integrations; classify what data can be used for training and inference; and enforce human-in-the-loop for sensitive actions. Align with the NIST AI Risk Management Framework and relevant ISO standards (such as ISO/IEC 27001 and AI risk guidance). Define approval gates, vendor requirements, and incident procedures specific to AI components.
  3. Deploy AI-native defenses where they move the needle. Prioritize email and identity protections that leverage behavioral analytics and LLM-based anomaly detection, EDR that flags living-off-the-land plus code-signed abuse, and fraud controls that can spot synthetic media. Augment analysts with vetted copilots to accelerate triage, but measure outcomes—MTTD, MTTR, false-positive rates—so tools are accountable.
  4. Tighten data and access controls for the AI stack. Apply Zero Trust to models and agents: least-privilege tokens, secrets isolation, and policy-guarded retrieval-augmented generation. Use DLP on both inputs and outputs to prevent sensitive leakage, enable content filters, and adopt provenance signals or watermark detection where feasible to counter deepfakes. Require suppliers to provide security attestations, model cards, and software bills of materials.

What good AI security looks like in high-maturity programs

High-maturity programs treat AI as both a new surface and a new shield. They integrate model telemetry into SIEM, run continuous adversarial testing, and harden identity as the blast door for everything from privileged prompts to automated agents. They also couple technology with people: recurring tabletop exercises, secure coding for ML pipelines, and a security champion in every product team.

EY’s findings point to a simple truth: waiting for clarity is a risk decision. With 50% of leaders acknowledging they’re not ready, the advantage shifts to organizations that operationalize AI security now—codifying governance, pressure-testing defenses, and funding measurable capabilities that shrink attacker dwell time. The window for pilot projects is closing; the window for resilience is open to those who act.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
Refurbished Lenovo Chromebook Sells For Under $60
Oppo Find N6 Challenges Galaxy Z Fold 7 With Bold Design
Swish Raises $38M In Third Round In 18 Months
Kalshi And Polymarket CEOs Back $35M Prediction Fund
Air Street Capital Becomes Major Solo VC With $232M Fund
Apple Maps Ads Reportedly Near Launch, Sources Say
Hackers Claim Crunchyroll Breach Exposes 7 Million Users
Tecno To Integrate OpenClaw, Mimicking Pixel 10 Magic Cue
Jensen Huang Redefines AGI, Claims It Is Here
Apple Maps Will Start Showing Ads This Summer
Lenovo Deals Arrive Early In Amazon Big Spring Sale
iPhone Exploit Kit Leak Puts Millions at Risk
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.