Almost half of workers are now pasting sensitive information into AI tools, and most have never been trained not to. Some business owners are also recklessly giving personal data to these tools. More than four in 10 (43%) admit sharing personal details, such as favorite restaurants and hobbies. The National Cybersecurity Alliance and security platform CybSafe conducted the research.
The researchers, who surveyed more than 6,500 people across seven countries, found that 65 percent now use AI in their daily lives — a leap from last year. But 58% say their employers have offered no direction on the security or privacy risks at stake. As leadership at the NCA have cautioned, workers are adopting AI faster than organizations are arming them to use it responsibly.
A Surge In Use With No Guardrails In Place
Generative AI has been baked into the tools that people use regularly — from an office suite to a help desk application — so it’s no sweat to ask a chatbot for assistance with a spreadsheet, contract, or presentation. That ease encourages a copy-and-paste culture, where “just to see what it does” can lead to uncontrolled sharing of data with a third‑party model.
This gulf between use and policy has created a shadow‑AI problem. Workers often don’t know if consumer chatbots save utterances, how input service providers process I/O, or whether legal responsibilities — GDPR, HIPAA, and industry nondisclosures — apply when workers share internal content.
What Workers Are Uploading To AI Systems Today
The survey revealed a disturbing laundry list: budget documents, client lists, personally identifiable information, legal contracts, and even product roadmaps and source code. Those categories are not hypothetical. Samsung engineers found out the hard way in 2023, when they accidentally shared private code with the company’s chatbot, which was then banned as an internal tool. A number of banks, including some of the biggest ones, have also restricted access as they consider compliant options.
There is a difference between consumer AI and enterprise AI. Consumer‑targeted models may keep inputs for quality improvement unless you opt out, while enterprise models increasingly promise “no training on your data,” private tenancy, and stronger audit controls. Many employees cannot differentiate — or do not understand why it matters.
Why This Actually Poses a Risk To Businesses
There are a number of ways data shared with AI can leak: it’s stored by the provider, misconfigured integrations spill details from one system to another, accounts get hacked — and there may also be downstream prompts that lead models to release memories of past inputs in bits and pieces. The OWASP Top 10 for Large Language Model Applications specifically cites prompt injection, data exfiltration, and overbroad permissions as common failure modes.
These stakes are raised even further by AI agents. These systems browse the web, call internal tools, and run workflows — capabilities that necessitate broad privileges. In a recent survey of IT professionals conducted by SailPoint, 96% said AI agents add security risk, even though 84% said their companies are already using them. And that’s the tension in the workplace: the productivity upside is real, but so is the blast radius when an agent is phished or misdirected or over‑privileged.
There is also the danger of getting it simply wrong. When models hallucinate and staff mistake it for gospel, organizations can make decisions based on made‑up numbers or leak data they never meant to let slip, leaving them exposed in terms of compliance and reputation.
Policy And Training Lag Behind Widespread AI Use
The NCA–CybSafe results reveal a fundamental governance gap: many organizations have failed to define what can be placed into an AI tool, which tools are sanctioned, and how logs should be reviewed. Small and medium‑sized businesses are particularly stretched; they don’t usually have privacy counsel or a person in charge of AI security who can translate what the rules mean into daily activity.
That’s starting to change with the assistance of regulators and standards bodies. The NIST AI Risk Management Framework provides a template for mapping AI use cases, evaluating harms, and establishing controls. It defines industry best practices such as data minimization and role‑based access that can be repurposed for AI workflows. But nothing sticks if you don’t have clear internal policies and hands‑on training.
What Organizations Need to Do Now To Reduce Risk
Begin by directing employees to enterprise‑grade offerings that provide zero‑retention modes, private data stores, and contractual commitments not to use your inputs for training. Couple that with data loss prevention tools, identity protections (think multifactor authentication), and logging so a security professional can see what’s being shared — and by whom.
Restrict the permissions available to AI agents with least‑privilege access, an allowlist for tools the agents are allowed to call, and red‑team prompts that probe for leakage and injection tricks.
By default, do not paste secrets and keys into prompts, and use scanners to prevent them from being shared. GPT‑3 requires a huge amount of data, but if you can, use retrieval‑augmented generation so any sensitive content remains in your controlled corpus and is not sucked into the actual model.
Most importantly, train people.
- Provide simple, back‑pocket rules of thumb; for example, if you’d need to get an NDA in place to share data with a vendor, don’t paste it into public AI.
- Strip or mask identifying information from records.
- Label confidential files accordingly.
- Check the model’s data use policy before uploading anything.
A playbook of 15 minutes can erase that ambiguity and shrink the ranks of the 43% who are oversharing.
The lesson from the data is clear: AI has arrived in everyday work, but trust demands discipline. Companies that combine the technology’s speed with common‑sense controls and realistic training will reap benefits without handing off their financials, client lists, or credibility to Russian goons.