A new watchdog is calling out the apps most likely to spill your personal information. Firehound, a project from security firm CovertLabs, has published a live leaderboard of the worst offenders, and the Top 10 skews heavily toward AI tools such as chatbots and image generators.
The rankings focus on real exposure risks—think accessible email addresses, usernames, device IDs, and, in some cases, chat histories—rather than routine advertising telemetry. The takeaway is blunt: the convenience of rapid-fire AI services often hides sprawling data pipelines where mistakes or misconfigurations can have outsized consequences.
What Firehound Tracks And Why It Matters
Firehound compiles evidence of data exposures tied to mobile and web apps, prioritizing severity and user impact. It is not simply a list of apps that collect data: the project surfaces cases where sensitive information was left accessible through public endpoints, poorly secured cloud buckets, exposed logs, or overly permissive third-party SDKs.
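To make "exposed" concrete, here is a minimal Python sketch of the kind of anonymous probe that separates collected data from exposed data: checking whether an S3-style bucket answers a listing request without any credentials. The bucket URL is hypothetical, and this illustrates the misconfiguration class rather than Firehound's actual tooling.

```python
import requests

# Hypothetical bucket URL for illustration only; not a real app's storage.
BUCKET_URL = "https://example-app-uploads.s3.amazonaws.com/"

def bucket_is_publicly_listable(url: str) -> bool:
    """Return True if an S3-style bucket answers an anonymous list request.

    A 200 response containing <ListBucketResult> means anyone on the
    internet can enumerate the bucket's objects: the classic
    misconfiguration behind many app data exposures.
    """
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200 and "<ListBucketResult" in resp.text

if __name__ == "__main__":
    exposed = bucket_is_publicly_listable(BUCKET_URL)
    print("publicly listable" if exposed else "no anonymous listing")
```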
The project highlights categories users care about: email addresses and names that enable phishing, chat content that can reveal private or corporate information, and identifiers that allow persistent tracking across services. For consumers, the difference between “shared” and “exposed” is the difference between targeted ads and an actual privacy incident.
Why AI Apps Dominate The Worst Offenders
AI apps process a lot of sensitive input—prompts, documents, photos—and route it through multiple services for model inference, content filtering, analytics, and storage. Each hop adds potential failure points. If any component writes prompts or upload metadata in clear text, even briefly, data can surface where it shouldn't.
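A minimal sketch of that fan-out, with every function a hypothetical stand-in for a typical pipeline hop: one user prompt touches moderation, inference, and analytics, and two of the three hops write it somewhere in clear text.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("ai_pipeline")

def moderate(prompt: str) -> str:
    log.debug("moderation input: %s", prompt)  # hop 1 logs the full prompt
    return prompt

def infer(prompt: str) -> str:
    return f"model answer to: {prompt[:20]}..."  # hop 2: inference API call

def track(event: str, payload: dict) -> None:
    log.info("analytics %s: %s", event, payload)  # hop 3: analytics SDK

def handle(prompt: str) -> str:
    answer = infer(moderate(prompt))
    # A careless integration ships the raw prompt to analytics as well.
    track("prompt_answered", {"prompt": prompt, "answer_len": len(answer)})
    return answer

handle("My SSN is 123-45-6789, is this tax form right?")
# One request, three hops, and two of them wrote the sensitive prompt
# to logs the developer may never audit.
```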
The architecture itself raises risk. LLMs are often accessed through APIs, paired with content moderation services and tracking SDKs, then cached to improve response times. When systems misbehave, the fallout can be public. A well-known example outside Firehound's list: a 2023 ChatGPT bug briefly let some users see the titles of other users' conversations, underscoring how even seemingly harmless metadata can leak context.
Developers also face pressure to ship quickly, integrate plug-and-play AI, and add monetization. That haste can lead to inconsistent access controls, verbose logging in production, and weak redaction of prompt data: classic pitfalls described in the OWASP Mobile Top 10 and the OWASP Top 10 for LLM Applications.
The Real-World Stakes Backed By Evidence
Privacy harms are not theoretical. The Federal Trade Commission has taken enforcement action against companies for mishandling or misrepresenting sensitive data, including cases against GoodRx and BetterHelp for sharing health-related information with advertising platforms. Those cases show regulators will penalize misuse even without a headline-grabbing hack.
Independent audits echo the concern. Mozilla's *Privacy Not Included project found that the vast majority of mental health and prayer apps it reviewed warranted its privacy warning label, highlighting how sensitive data can be at risk in popular consumer categories. The pattern: expansive data collection, opaque sharing, and inadequate controls.
For AI apps, leaked prompts can reveal client names, unreleased product details, or personal identifiers. If those logs end up indexed or scraped, remediation becomes complicated—deletions don’t instantly purge downstream caches and backups. That persistence is why Firehound’s focus on exposure, not just policy language, is critical.
How To Protect Yourself From App Data Leaks Right Now
Audit your app list and uninstall tools you don’t use. In app settings, revoke camera, mic, contacts, and location permissions that aren’t essential. Disable contact syncing in messaging and social apps unless you truly need it.
For AI tools, assume prompts may be retained unless a vendor explicitly offers and honors a no-retention mode. Avoid pasting proprietary or sensitive personal information; if you must include such details, scrub the obvious identifiers first, as in the sketch below. Where possible, choose on-device or enterprise offerings with contractual data controls.
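Here is a minimal pre-send scrubber in Python. The regexes are illustrative assumptions that will miss plenty of formats, so treat this as a complement to, not a substitute for, withholding sensitive input.

```python
import re

# Illustrative patterns only; real PII detection needs far more care.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(prompt: str) -> str:
    """Replace obvious identifiers before a prompt leaves your machine."""
    for pattern, label in REDACTIONS:
        prompt = pattern.sub(label, prompt)
    return prompt

print(scrub("Email jane.doe@corp.com or call (555) 123-4567 about the deal."))
# -> "Email [EMAIL] or call [PHONE] about the deal."
```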
Use Sign in with Apple to mask your email address, or use email aliases to compartmentalize logins. Regularly review your Apple, Google, or Microsoft account dashboards and remove third-party access you no longer require. If an app in Firehound's top tier is essential, make sure you're running the latest version and check whether the developer has issued a security notice.
What Developers And Platforms Must Change
Adopt data minimization: do not collect what you cannot protect. Enforce token-based access, short log retention, and strict redaction of identifiers in telemetry. For AI, separate production prompts from analytics, and exclude user content from model training by default, enabling it only with explicit consent.
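As one hedged illustration of what strict telemetry redaction plus short retention can look like in Python (the patterns, retention window, and logger names are assumptions, not a standard):

```python
import logging
import re
from logging.handlers import TimedRotatingFileHandler

# Illustrative patterns; production redaction should be reviewed per field.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"), "[DEVICE_ID]"),
]

class RedactingFilter(logging.Filter):
    """Scrub identifiers from every record before any handler sees it."""
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern, label in REDACTIONS:
            message = pattern.sub(label, message)
        record.msg, record.args = message, None
        return True

# Rotate daily and keep seven days of telemetry: short retention by default.
handler = TimedRotatingFileHandler("telemetry.log", when="D", backupCount=7)
logger = logging.getLogger("telemetry")
logger.addHandler(handler)
logger.addFilter(RedactingFilter())

logger.warning("signup from jane@corp.com on device "
               "123e4567-e89b-12d3-a456-426614174000")
# telemetry.log records: "signup from [EMAIL] on device [DEVICE_ID]"
```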
Follow established standards like OWASP MASVS, maintain a vulnerability disclosure program, and integrate static and dynamic checks into CI pipelines. Vet SDKs for silent data collection, and region-lock storage to satisfy GDPR transfer rules and other data-residency requirements.
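And a deliberately simple sketch of a CI-time gate that flags logging calls mentioning prompt-like variables. This is a regex heuristic, not real static analysis; an actual pipeline would lean on purpose-built tools such as Semgrep rules or mobile scanners.

```python
"""Tiny CI gate: fail the build if source code appears to log raw prompts."""
import pathlib
import re
import sys

# Flags logging calls whose arguments mention a prompt-like variable.
SUSPECT = re.compile(r"log(?:ger)?\.\w+\(.*\bprompt\b", re.IGNORECASE)

def scan(root: str = "src") -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            if SUSPECT.search(line):
                print(f"{path}:{lineno}: possible raw-prompt logging: {line.strip()}")
                hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)
```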
The Top 10 Is A Moving Target As Fixes And Flaws Emerge
Firehound’s leaderboard will shift as developers patch issues and new exposures surface. Today’s worst offender could fall off the list after a fix, while another app climbs due to a misconfiguration or a rushed feature rollout.
The practical advice remains steady: keep apps updated, be judicious with what you share—especially with AI services—and periodically check whether tools you rely on appear in independent rankings or audits. Data leaks thrive on complacency; vigilance, even in the form of small habits, meaningfully reduces risk.