AI chatbots pitched as lifelines for domestic abuse survivors are quietly putting users at risk, according to researchers who unveiled fresh evidence that these tools leak data, leave forensic traces, and can escalate harm. Presenting findings on “technology-facilitated harm,” academics Diana Freed and Julio Poveda warned that survivor-focused bots routinely fail at basic privacy and security, despite being marketed as safe spaces.
Their audits of more than 50 chatbots built for survivors found a stark pattern: 100% used tracking cookies or other identifiers, and many failed to purge session data after a “Quick Exit.” Some even encouraged users to email chat transcripts—a catastrophic design choice if an abuser monitors a shared inbox or device. The result is a brittle façade of safety that can betray the very people these tools aim to protect.
Freed likened the threat environment to insider-risk scenarios in cybersecurity, where the adversary already knows the victim’s routines, devices, and social graph. In intimate partner violence (IPV), the “attacker” often has physical access, shared passwords, and emotional leverage. Off-the-shelf chatbot architectures—optimized for engagement and data collection—were never designed for that reality.
Why Privacy Promises Keep Breaking for Survivor Chatbots
Survivor-directed chatbots frequently tout anonymity and confidentiality. In practice, conversations may be used for analytics or model improvement, shared with third parties, or retained indefinitely. Unlike licensed clinicians, chatbots aren’t bound by health privacy laws, and most users never see (or can’t parse) dense disclosures buried behind links.
Regulators have flagged the broader mental health tech ecosystem for similar abuses. The Federal Trade Commission penalized a major online counseling brand for sharing sensitive user information with ad platforms, underscoring that “anonymous” does not mean untraceable. Mozilla’s Privacy Not Included researchers have repeatedly found mental health and relationship apps among the worst for data protection—failures that carry over into chatbot offerings.
Even seemingly benign features can be dangerous. “Quick Exit” buttons typically redirect to a neutral page but do not clear history, cookies, or DNS caches. Browser fingerprinting from third-party scripts can persist, allowing data brokers or ad networks to infer highly sensitive contexts. For survivors living with a watchful abuser, a single breadcrumb can trigger retaliation.
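To make the gap concrete, here is a minimal sketch of what a more thorough Quick Exit handler could do, beyond redirecting. All names (`quickExit`, `StorageLike`, the neutral URL) are hypothetical illustrations, not any vendor’s actual code; the storage objects are injected so the sketch runs outside a browser, where they would be the real `localStorage` and `sessionStorage`.

```typescript
// Hypothetical sketch: a Quick Exit that clears client-side state, not just the URL.
// StorageLike mirrors the slice of the Web Storage API this sketch needs.
interface StorageLike {
  clear(): void;
  length: number;
}

interface ExitDeps {
  localStorage: StorageLike;
  sessionStorage: StorageLike;
  // Replaces the current history entry so "Back" cannot return to the chat.
  replaceLocation: (url: string) => void;
}

function quickExit(deps: ExitDeps, neutralUrl = "https://www.example.com"): void {
  // Wipe transcripts and identifiers held in Web Storage.
  deps.sessionStorage.clear();
  deps.localStorage.clear();
  // Replacing (not assigning) the location avoids leaving a history entry behind.
  deps.replaceLocation(neutralUrl);
}

// In-memory stand-ins so the sketch is runnable without a browser.
function makeStorage(): StorageLike & { data: Map<string, string> } {
  const data = new Map<string, string>();
  return { data, clear: () => data.clear(), get length() { return data.size; } };
}

const local = makeStorage();
const session = makeStorage();
session.data.set("transcript", "sensitive conversation");
let landedOn = "";
quickExit({
  localStorage: local,
  sessionStorage: session,
  replaceLocation: (u) => { landedOn = u; },
});
console.log(session.length, landedOn); // 0 https://www.example.com
```

Even this is only partial: browser history, DNS caches, and third-party cookies set by trackers are outside a page script’s reach, which is why the article’s point stands that a redirect-only button gives false assurance.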
Why the IPV Threat Model for Survivors Is Different
In corporate security, attackers guess passwords. In IPV, attackers already know them—or watch you type them. Abusers may control Wi-Fi routers, Apple or Google family accounts, cloud backups, or carrier plans. They can access device unlock codes, autofill histories, and messages. In that world, storing transcripts in the cloud, requiring account logins, or leaving local caches is not a minor flaw; it is an invitation to harm.
Global public-health data shows the stakes: the World Health Organization reports that about one in three women experience physical or sexual violence by an intimate partner in their lifetime. Digital surveillance now commonly accompanies coercive control, a trend also documented by the National Network to End Domestic Violence’s Safety Net Project and the Coalition Against Stalkerware. Any tool serving survivors must assume hostile co-users.
What Safer Survivor Chatbots Would Do Now
Experts are calling for a privacy-by-default architecture, not opt-in fine print. That means no analytics or third-party scripts; strict Content-Security-Policy headers; and deletion as the default outcome, with retention only on explicit, informed consent.
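As an illustration of what “no third-party scripts” can mean in enforcement terms, the sketch below assembles a locked-down Content-Security-Policy header for a survivor-facing chat page. The directive values are an assumed same-origin-only design, not a prescription for any particular product.

```typescript
// Hypothetical sketch: a strict CSP for a chat page where all code and data
// are served same-origin, so trackers and third-party scripts cannot load.
const policy: Record<string, string[]> = {
  "default-src": ["'self'"],     // no third-party scripts, styles, or media
  "connect-src": ["'self'"],     // chat API calls stay on our origin
  "img-src": ["'self'"],
  "frame-ancestors": ["'none'"], // block embedding in tracking or phishing iframes
  "form-action": ["'self'"],
};

// Serialize to the header value format: "directive src1 src2; directive ..."
const csp = Object.entries(policy)
  .map(([directive, sources]) => `${directive} ${sources.join(" ")}`)
  .join("; ");

console.log(csp);
```

Served as a `Content-Security-Policy` response header, a policy like this makes “no analytics” verifiable by the browser itself rather than a promise in a privacy notice.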
Sessions should be ephemeral and local-first, protected by on-device encryption, with a one-tap “panic close” that actually wipes tabs, cookies, local storage, and recent-app lists. “Quick Exit” must be paired with verifiable secure erasure and cache clearing. If transcripts are offered, they should save only to a user-chosen secure vault on-device—never to email or cloud by default.
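The ephemeral, local-first model described above can be sketched as a memory-only session with an explicit wipe. The class and method names are hypothetical, and the overwrite step is best-effort in a garbage-collected language, a caveat noted in the comments.

```typescript
// Hypothetical sketch: an ephemeral session. Messages live only in memory,
// never in Web Storage, IndexedDB, or the cloud; wipe() drops them on demand.
class EphemeralSession {
  private messages: string[] = [];

  add(text: string): void {
    this.messages.push(text);
  }

  get count(): number {
    return this.messages.length;
  }

  // Overwrite, then drop. In JavaScript this is best-effort: the garbage
  // collector, not the app, decides when the original strings leave memory.
  wipe(): void {
    for (let i = 0; i < this.messages.length; i++) {
      this.messages[i] = "\0".repeat(this.messages[i].length);
    }
    this.messages = [];
  }
}

const s = new EphemeralSession();
s.add("I need help making a safety plan");
s.wipe();
console.log(s.count); // 0
```

Pairing a structure like this with the panic close means a wipe is a single in-memory operation, with no deletion pipeline, sync queue, or backup to chase afterward.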
Designers must adopt an IPV-centric threat model: no mandatory accounts, pseudonymous modes, PIN-protected access, decoy home screens, and quiet user interfaces that don’t attract attention. Data flows should be minimized and isolated, with privacy reviews by independent auditors and red-team exercises that simulate abuser tactics. Bug bounties and incident transparency should be table stakes.
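One of the threat-model features above, PIN-protected access, can be sketched in a few lines. This is an assumed design, not any vendor’s implementation: only a salted hash of the PIN is stored, and the comparison runs in constant time so an abuser probing the lock screen learns nothing from response timing. A production system would use a slow KDF such as scrypt rather than a single SHA-256 round.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hypothetical sketch: a PIN gate that stores only a salted hash and
// compares digests in constant time.
function hashPin(pin: string, salt: string): Buffer {
  return createHash("sha256").update(salt + pin).digest();
}

function checkPin(entered: string, salt: string, storedHash: Buffer): boolean {
  // Both digests are 32 bytes, so timingSafeEqual's equal-length
  // requirement is always satisfied.
  return timingSafeEqual(hashPin(entered, salt), storedHash);
}

const salt = "per-install-random-salt"; // would be generated per device
const stored = hashPin("4071", salt);
console.log(checkPin("4071", salt, stored)); // true
console.log(checkPin("0000", salt, stored)); // false
```

A gate like this composes naturally with a decoy home screen: a wrong PIN can open an innocuous page instead of an error, so a shoulder-surfing abuser sees nothing worth investigating.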
Equally important is clarity. Plain-language privacy notices, visible data controls at the start of a conversation, and granular “delete everything” actions build trust—and create a safer default for people who don’t have time to hunt for settings.
What Survivors Can Do Safely Today With Technology
Specialists stress that chatbots are not a substitute for trained advocates. When possible, reach out to confidential hotlines or local shelters from a device and network an abuser cannot access, such as a friend’s phone or a public terminal. The National Domestic Violence Hotline, RAINN, Refuge, and independent advocacy centers can offer safety planning tailored to your situation.
If you must use technology, consider a browser with strong tracking protection, a privacy-focused search engine, and private windows that clear on close. Be cautious with email and cloud backups, which may sync to accounts or devices an abuser controls. Learn device safety features like account sharing checks and permission reviews; groups such as NNEDV publish step-by-step guidance to reduce digital footprints in abusive contexts.
Accountability Must Lead the Survivor Tech Roadmap
This is not a UX nitpick—it is a safety imperative. Vendors courting vulnerable users should meet auditable standards, including data minimization, third-party tracker bans, and documented deletion pipelines. Policymakers and funders can accelerate progress by requiring independent privacy assessments for any survivor-facing AI tool.
AI can support survivors—but only if it respects the lived reality of coercive control. Until the industry embraces privacy by default and designs for an adversary who is already inside the house, the safest advice remains the simplest: treat chatbots as public spaces, not private confidants.