FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

Enterprises Adopt Five Critical AI Security Tactics

Gregory Zuckerman
Last updated: March 2, 2026 8:12 pm
By Gregory Zuckerman
Technology
7 Min Read
SHARE

AI is being rolled into production at record speed, and so are the risks. The same systems that accelerate coding, content creation, and decision support also widen the attack surface, push data into new places, and invite legal gray areas. Missteps are expensive: IBM’s 2023 Cost of a Data Breach report pegs the global average at $4.45 million per incident.

Security leaders aren’t slamming the brakes—they’re tightening the controls. Frameworks such as the NIST AI Risk Management Framework, the UK NCSC and US CISA’s secure AI development guidance, and OWASP’s Top 10 for LLM Applications offer a blueprint. Here are five tactics organizations are adopting now—and why they matter the most.

Table of Contents
  • Know Your Data Thoroughly Before The Model Does
  • Secure The AI Supply Chain And Contracts
  • Build Guardrails Into Identity And Access
  • Red-Team The Model And Monitor In Production
  • Stand Up Cross-Functional AI Governance Programs
  • Why Acting Now Matters for Enterprise AI Security
A diagram illustrating the AI Risk Management Framework, with three interconnected phases: Map, Measure, and Manage, surrounding a central Govern icon.

Know Your Data Thoroughly Before The Model Does

Most AI security failures are, at their core, data governance failures. Classify sensitive data, minimize what models can see, and gate everything with least-privilege access. That means clear policies for PII, health data, and trade secrets; DLP controls on prompts and outputs; and approved repositories for retrieval-augmented generation so models only pull from vetted sources.

Two practical moves pay off fast: strip or tokenize identifiers before ingestion, and apply retention windows to prompts, logs, and embeddings. Verizon’s DBIR has long shown the human element in breaches—74% in the 2023 edition—so guardrails must assume someone will paste the wrong thing into a chatbot. Samsung reportedly learned that lesson in 2023 after proprietary code made its way into a public model, prompting new internal restrictions.

When possible, use synthetic or masked data to develop and test. And keep an “AI bill of materials” for each solution—cataloging datasets, features, models, and downstream systems—to speed audits and incident response.

Secure The AI Supply Chain And Contracts

Your risk now includes third-party models, APIs, plugins, and agentic tools. Treat them like any other high-risk supplier: conduct security reviews, require attestations, and map dependencies. Ask pointed questions:

  • Is customer data used to train shared models?
  • Where is it stored and for how long?
  • What’s the incident notification timeline?
  • Can we opt out of training by default?

Model provenance matters. Request model cards, safety test summaries, and a software bill of materials for AI components. The March 2023 incident that exposed some ChatGPT conversation titles—triggered by a library flaw—was a reminder that upstream bugs can leak downstream data. Analysts at Gartner warn that many vendor contracts shift liability to customers; negotiate explicit safeguards for data misuse, IP leakage, and output integrity.

Build Guardrails Into Identity And Access

AI systems shouldn’t get superuser privileges by default. Scope tokens and API keys to the minimum actions a model or agent must perform, rotate secrets often, and prefer just-in-time access. For high-risk operations—payments, code merges, data exports—require a human-in-the-loop approval and log the decision path.

Add friction where it counts: MFA, conditional access, and rate limiting for model endpoints; approvals for plugin installation; and strong isolation for tools that let models act on the physical or digital world. With deepfake-enabled social engineering rising—the MGM Resorts breach in 2023 showed how fast voice-based pretexting can escalate—tie identity controls to AI use cases, not just users.

A circular diagram illustrating the People and Planet centered AI development lifecycle, divided into four main quadrants: Collect and process data, Build and use model, Verify and validate, Deploy and Use, Operate and Monitor, and Plan and Design. Each quadrant contains sub-sections like Data and Input, AI Model, Task and Output, and Application Context. The background is a soft gradient with subtle patterns.

Major providers have published secure patterns for AI agents; align to those patterns and codify them as reusable templates so teams don’t re-invent (or weaken) controls with every new use case.

Red-Team The Model And Monitor In Production

Assume adversaries will try prompt injection, data exfiltration via clever outputs, and jailbreaks. Build an AI red team that tests against the OWASP LLM Top 10 and uses MITRE ATLAS techniques to probe model behavior, tool invocation, and guardrails. Attack your inputs and your context windows; many leaks ride in through benign-seeming files and URLs.

Once live, treat AI like any other tier-one system. Capture telemetry on prompts, responses, tools called, and data touched—while honoring privacy and minimization. Monitor for drift, toxicity, policy violations, and unusual data access patterns. Create playbooks for AI-specific incidents, including rollback steps for poisoned embeddings, compromised agents, or prompt leakage.

Security teams report real gains by pairing humans and AI: using models to scan code for insecure patterns, draft threat models, or review infrastructure-as-code. The key is verification—automation proposes, engineers dispose.

Stand Up Cross-Functional AI Governance Programs

The fastest-moving AI programs share a trait: a standing governance forum with security, data, engineering, legal, and business owners at the same table. This group vets use cases, approves data sources, and sets escalation paths. Map responsibilities to the NIST AI RMF functions—Govern, Map, Measure, Manage—so nothing falls through the cracks.

Write clear acceptable-use policies, train teams on prompt hygiene and data handling, and require impact assessments for use cases touching regulated data or safety-critical actions. Culture matters: position AI as a copilot, not an autopilot, and measure success by safe value delivered, not just model counts.

Why Acting Now Matters for Enterprise AI Security

Security debt compounds in AI just as it does in cloud. Get the fundamentals right early—data discipline, contractual clarity, identity guardrails, adversarial testing, and governance—and you’ll ship faster with fewer surprises. Cut corners, and you will pay for them in breaches, regulatory scrutiny, and stalled adoption. The organizations that treat AI security as a product, not a paperwork exercise, are the ones turning experimentation into advantage.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
Oracle Cloud ERP Outage Sparks Renewed Debate Over Vendor Lock-In Risks
Why Digital Privacy Has Become a Mainstream Concern for Everyday Users
The Business Case For A Single API Connection In Digital Entertainment
Why Skins and Custom Servers Make Minecraft Bedrock Feel More Alive
Why Server Quality Matters More Than You Think in Minecraft
Smart Protection for Modern Vehicles: A Guide to Extended Warranty Coverage
Making Divorce Easier with the Right Legal Support
What to Know Before Buying New Glasses
8 Key Features to Look for in a Modern Payroll Platform
How to Refinance a Motorcycle Loan
GDC 2026: AviaGames Driving Innovation in Skill-Based Mobile Gaming
Best Dumbbell Sets for Strength Training: An All-Time Buyer’s Guide
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.