FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

Ami Luttwak On What AI Means For Cyberattacks

Bill Thompson
Last updated: October 29, 2025 9:51 am
By Bill Thompson
Technology
8 Min Read
SHARE

Artificial intelligence is altering the rhythm and nature of cyberattacks, and Luttwak may have a clearer view of this transformation than anyone else working in cybersecurity today. It is the same tools used to accelerate the pace of software delivery that are being repurposed by adversaries who are now able to deploy and modify attacks at machine speed, making every hastily integrated system and every ill‑purposed AI agent a potential entry point.

At one company, Fortune 1000 AI director Ami Luttwak had his engineers wire AI into the company’s code pipelines and business workflows and kept encountering a pattern: prompt writers have developers build features fast with “vibe coding” and AI agents, but seldom do security requirements stand out. And the result is inevitable—band‑aid‑like authentication shortcuts, fragile secret‑handling processes, and ungrounded permissiveness that attackers (including those with state‑level resources and motivations) are ready to pounce on.

Table of Contents
  • How attackers weaponize AI to infiltrate modern systems
  • The authentication trap in vibe coding for AI projects
  • Supply chain exposures and the rise of token fraud in AI
  • Defending at AI speed with horizontal security practices
  • A playbook for AI startups and enterprise technology buyers
A professional head shot of a smiling man with a beard and short dark hair, wearing a black t- shirt against a plain white background, resized to a 16

Wiz, which is now owned by Google as part of a multibillion‑dollar acquisition, has audited AI‑generated applications and often discovered weak identity checks and misconfigured authorization. “If you don’t ask an agent for the most secure design, it will not invent one for you,” Luttwak observes. Offense enjoys the same accelerated mileage curve: code‑driven probing, independent agents for recon, and rapid iteration once on the inside.

How attackers weaponize AI to infiltrate modern systems

Attackers now “speak” to enterprise AI like employees. They implant quick injections of documents and tickets, then command internal agents or chatbots to exfiltrate secrets, kill services, or alter access. The traditional boundary between user interface and system control is eroding, because conversational tools frequently are found near complex APIs with wide reach.

  • A breach at Drift, an AI provider of sales chatbots, exposed tokens that allowed attackers to imitate chatbots and ask customers for Salesforce data.
  • Those tokens could be used to imitate bots when requesting sensitive information, enabling further movement inside high‑profile environments, including Cloudflare and Palo Alto Networks.
  • Another example is “s1ingularity,” where attackers snuck malware into Nx’s JavaScript build ecosystem that looked for developer AI tools, and were kicked off a rented, expensive AWS GPU when GPU‑based CI/CD caught them.

The lesson is clear: when AI is integrated into essential workflows, new supply chain edges form. If a third‑party agent can reach your data, an attacker who has stolen its token can as well.

The authentication trap in vibe coding for AI projects

The code generated by AI often “works” during demos, but trips over the basics of security. There is a string of mistakes that feature in Wiz assessments, including homegrown auth, not checking state for the OAuth flow, sloppy JWT validation, and agents being lax about leftover credentials. There is nothing new about these bugs—AI just speeds up their arrival in production.

Data supports the risk. Humans alone: the vast majority of intrusions today are attributable to human error and downright flubs; hijacking credentials is still #1 on the Outbreak list. AI’s pace accelerates both sides of that equation: developers package in risky defaults faster, and attackers turn over cards on them—enumerate, phish, and test faster than ever.

Luttwak’s prescription is remarkably straightforward: write security into the prompt; don’t try to sprinkle it on as an afterthought in the backlog. Call out MFA, least‑privilege scopes, token rotation, signed and audience‑validated tokens, and denial‑by‑default authorization. If you’re not asking your agent to create a sturdy door, you are going to get a fast one.

Supply chain exposures and the rise of token fraud in AI

AI agents are desperate for access: calendars, CRMs, build systems, cloud APIs. Attacks use them by stealing tokens from targets and pivoting. Orgs give broad OAuth scopes so their pilots “just work” and forget to clip wings. Since so many AI tools operate as headless automation, their actions can get lost amid normal workflows unless logs are rich and correlated.

A smiling man with dark hair and a beard wearing a dark blue hoodie with orange accents. He is looking slightly to the right with a background of blur

Mitigations begin with ruthless scope minimization and short token lifetimes, as well as ongoing shadow AI application discovery. Implement single sign‑on, conditional access, and per‑customer data isolation for every SaaS that gets its hands on your crown jewels. Watch and kill tokens on misuse. Build in an egress‑aware manner and keep sensitive data as close to the customer as possible while still maintaining higher layers of semantics out of the way.

Defending at AI speed with horizontal security practices

Defenders need their own acceleration. Wiz has risen from scanning for cloud misconfiguration to developer‑centric code security and runtime threat detection in what the company hopes will help tie design‑time controls better with production realities. Luttwak calls this “horizontal security”—knowing more about an application than where it lives, so that detections and policies are meaningful in a business context.

Industry frameworks are catching up. OWASP has also released an LLM‑10, aimed at addressing direct injection, over‑delegation, and data exposure. MITRE’s ATLAS lists specific adversarial AI techniques, while NIST’s AI Risk Management Framework provides guardrails for model and system risk. State‑of‑the-art is more important than acronyms: sanitize and constrain model inputs and outputs, isolate agent identities, log every move a model makes, and enforce budget limits on autonomous tools to ensure they do not run away.

Telemetry is critical. Record captures, requests, tool calls, and the actual API calls they caused. Without that thread, incident response becomes a guessing game when an “assistant” goes rogue.

A playbook for AI startups and enterprise technology buyers

Luttwak advises startups to start with trust and let it lead the way from there. Hire a fractional CISO early, design for SSO (single sign‑on), audit logs, customer‑managed keys, and strong access controls from day one, and go after compliance—like SOC 2 or ISO 27001—before going to market. IBM’s Cost of a Data Breach research finds the average breach costs millions—far more than the relatively little investment in early hygiene.

Architecturally, customer data should remain in the customer’s environment as much as possible, use ephemeral sandboxes, and store secrets in managed vaults. Think of “security debt” like tech debt with interest: what you don’t harden now will have an exponentially higher cost when customers grow and attackers take notice.

The takeaway from Luttwak: AI is rewriting the attackers’ playbook and defenders’ toolkits alike. Every security category—email, endpoint, identity, data, and cloud—will be thought of anew. The teams who succeed won’t merely adopt AI; they’ll corral it, watch it, and design for failure before the first prompt is ever run.

Bill Thompson
ByBill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.
Latest News
Spotify Executes Legal Ambush on Anna’s Archive
AI Liza Minnelli Headlines Eleven Album Launch
Virt-Manager Emerges As Reliable VirtualBox Alternative
Linux Powers AI and Reshapes Modern IT Careers
Samsung Galaxy S26 leaks reveal colors and prices
Nvidia And Microsoft Chiefs Reject AI Bubble At Davos
Sennheiser Launches First Auracast TV Headphones
Apple Reportedly Developing AI Wearable
AI Piano App Turns Beginners Into Party Pianists
Apple Develops AirTag‑sized AI Pin Wearable
RadixArk Spins Out From SGLang At $400M Valuation
Amazon Offers $350 Off Shark AV2501AE Robot Vacuum
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.