
43% Of Workers Share Sensitive Information With AI Tools

By Bill Thompson | Technology | 7 Min Read
Last updated: October 28, 2025 4:48 pm

Almost half of workers are now pasting sensitive information into AI tools, and most have never been trained not to. More than four in 10 (43%) admit sharing sensitive details with these tools, and some business owners are just as careless with personal data. The National Cybersecurity Alliance and the security platform CybSafe conducted the research.

The researchers, who surveyed more than 6,500 people across seven countries, found that 65% now use AI in their daily lives, a leap from last year. But 58% say their employers have offered no guidance on the security or privacy risks at stake. As NCA leadership has cautioned, workers are adopting AI faster than organizations are equipping them to use it responsibly.

Table of Contents
  • A Surge In Use With No Guardrails In Place
  • What Workers Are Uploading To AI Systems Today
  • Why This Actually Poses a Risk To Businesses
  • Policy And Training Lag Behind Widespread AI Use
  • What Organizations Need to Do Now To Reduce Risk

A Surge In Use With No Guardrails In Place

Generative AI has been baked into the tools people use every day, from office suites to help desk applications, so it takes no effort to ask a chatbot for help with a spreadsheet, contract, or presentation. That ease encourages a copy-and-paste culture, where “just to see what it does” can turn into uncontrolled sharing of data with a third-party model.

This gulf between use and policy has created a shadow-AI problem. Workers often don't know whether consumer chatbots retain their prompts, how providers process the data they submit, or whether legal obligations (GDPR, HIPAA, and industry nondisclosure agreements) still apply when they share internal content.

What Workers Are Uploading To AI Systems Today

The survey revealed a disturbing laundry list: budget documents, client lists, personally identifiable information, legal contracts, and even product roadmaps and source code. Those categories are not hypothetical. Samsung engineers learned as much in 2023, when they pasted proprietary source code into ChatGPT, prompting the company to ban the chatbot internally. Several major banks have likewise restricted access while they evaluate compliant options.

There is a difference between consumer AI and enterprise AI. Consumer‑targeted models may keep inputs for quality improvement unless you opt out, while enterprise models increasingly promise “no training on your data,” private tenancy, and stronger audit controls. Many employees cannot differentiate — or do not understand why it matters.

Why This Actually Poses a Risk To Businesses

Data shared with AI can leak in a number of ways: the provider stores it, misconfigured integrations spill details from one system to another, accounts get hacked, and later prompts can coax a model into regurgitating fragments of earlier inputs. The OWASP Top 10 for Large Language Model Applications specifically cites prompt injection, sensitive information disclosure, and excessive agency as common failure modes.

AI agents raise these stakes even further. These systems browse the web, call internal tools, and run workflows, capabilities that demand broad privileges. In a recent SailPoint survey of IT professionals, 96% said AI agents add security risk, even though 84% said their companies are already using them. That is the tension in the workplace: the productivity upside is real, but so is the blast radius when an agent is phished, misdirected, or over-privileged.

There is also the danger of the tools simply getting things wrong. When models hallucinate and staff treat the output as gospel, organizations can make decisions based on made-up numbers or leak data they never meant to let slip, leaving them exposed on both compliance and reputation.


Policy And Training Lag Behind Widespread AI Use

The NCA–CybSafe results reveal a fundamental governance gap: many organizations have failed to define what can be placed into an AI tool, which tools are sanctioned, and how logs should be reviewed. Small and medium-sized businesses are particularly stretched; they rarely have privacy counsel or a dedicated AI security lead who can translate the rules into daily practice.

That's starting to change with help from regulators and standards bodies. The NIST AI Risk Management Framework provides a template for mapping AI use cases, evaluating harms, and establishing controls, and it reinforces established practices such as data minimization and role-based access that can be repurposed for AI workflows. But nothing sticks without clear internal policies and hands-on training.

What Organizations Need to Do Now To Reduce Risk

Begin by directing employees to enterprise‑grade offerings that provide zero‑retention modes, private data stores, and contractual commitments not to use your inputs for training. Couple that with data loss prevention tools, identity protections (think multifactor authentication), and logging so a security professional can see what’s being shared — and by whom.
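
To make the logging piece concrete, here is a minimal sketch of what such an audit layer can look like. It is a sketch only: the send_to_model() function is a hypothetical stand-in for whichever approved enterprise endpoint an organization actually uses.

    import hashlib
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

    def send_to_model(prompt: str) -> str:
        """Hypothetical call to an approved, zero-retention enterprise endpoint."""
        raise NotImplementedError("wire this to your organization's approved provider")

    def audited_prompt(user: str, prompt: str) -> str:
        # Record who sent what and when; hash the text so the audit log
        # itself does not become another copy of the sensitive data.
        record = {
            "user": user,
            "time": datetime.now(timezone.utc).isoformat(),
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "length": len(prompt),
        }
        logging.info(json.dumps(record))
        return send_to_model(prompt)

Hashing rather than storing the raw prompt is a deliberate trade-off: the security team can still match a leaked document to an upload without the audit trail becoming a second leak.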

Restrict the permissions available to AI agents with least‑privilege access, an allowlist for tools the agents are allowed to call, and red‑team prompts that probe for leakage and injection tricks.
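
In practice, an allowlist can be as blunt as a dispatcher that refuses any tool the agent was never granted. A minimal sketch, with hypothetical read-only tools standing in for real integrations:

    from typing import Callable

    # Deny by default: the agent may only call tools on this allowlist.
    ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
        "search_docs": lambda q: f"results for {q!r}",   # hypothetical read-only tool
        "get_calendar": lambda q: "no meetings today",   # hypothetical read-only tool
    }

    def dispatch(tool_name: str, argument: str) -> str:
        tool = ALLOWED_TOOLS.get(tool_name)
        if tool is None:
            # Refuse and surface the attempt; this is what red-team prompts probe for.
            return f"refused: {tool_name!r} is not on the allowlist"
        return tool(argument)

    print(dispatch("search_docs", "travel policy"))
    print(dispatch("delete_records", "all"))  # refused, even if the model asks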

By default, keep secrets and keys out of prompts, and use scanners to catch them before they are sent. Where you can, use retrieval-augmented generation so sensitive content remains in a corpus you control rather than being absorbed into the model itself.
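
Such a scanner can start as a handful of regular expressions for well-known credential formats, run before any prompt leaves the organization. The patterns below (AWS access key IDs, GitHub personal access tokens, PEM private-key headers, generic key=value pairs) are illustrative, not exhaustive:

    import re

    # Illustrative patterns for common credential formats; not exhaustive.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key ID
        re.compile(r"ghp_[A-Za-z0-9]{36}"),                # GitHub personal access token
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"), # PEM private key block
        re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),  # generic key=value pair
    ]

    def contains_secret(text: str) -> bool:
        return any(p.search(text) for p in SECRET_PATTERNS)

    prompt = "Why does this fail? api_key = abc123-do-not-share"
    if contains_secret(prompt):
        print("Blocked: possible credential in prompt")  # stop before it leaves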

Most importantly, train people.

  • Provide simple, back-pocket rules of thumb; for example, if you'd need an NDA to share data with a vendor, don't paste it into a public AI tool.
  • Strip or mask identifying information from records (see the sketch after this list).
  • Label confidential files accordingly.
  • Check the model’s data use policy before uploading anything.
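
For the masking step in particular, even a lightweight pass over obvious identifiers removes a lot of risk. A minimal sketch using regular expressions; a production system would lean on a dedicated PII-detection tool, and these patterns are illustrative only:

    import re

    # Replace obvious identifiers with placeholder tokens before sharing text.
    MASKS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)+\b"), "[EMAIL]"),       # email addresses
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # US SSN format
        (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # US phone number
    ]

    def mask_pii(text: str) -> str:
        for pattern, token in MASKS:
            text = pattern.sub(token, text)
        return text

    print(mask_pii("Contact Jane at jane.doe@example.com or 555-867-5309."))
    # -> Contact Jane at [EMAIL] or [PHONE].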

A 15-minute playbook can erase that ambiguity and shrink the ranks of the 43% who are oversharing.

The lesson from the data is clear: AI has arrived in everyday work, but trust demands discipline. Companies that combine the technology's speed with common-sense controls and realistic training will reap the benefits without handing their financials, client lists, or credibility to attackers.

Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.