Increasingly, though, those free connections are not as independent as the companies would have users believe, and they are cutting the same corners on security. A fresh academic study used code and infrastructure fingerprinting to show that many "independent" free VPN apps share codebases, servers and reused protocols across multiple brands, findings that have privacy advocates sounding the alarm.
Such hidden connections matter for one reason above all, the researchers said: when dozens of apps share the same opaque operator and rely on brittle cryptographic designs, a single small mistake can cascade across millions of users. With hundreds of millions of installs on the line, the risk is anything but theoretical.

What new research found
The study, which was peer-reviewed and presented at the Privacy Enhancing Technologies Symposium, examined the 100 most-downloaded free VPN apps on Google Play and found that virtually all of them clustered into three "families" of related apps. Co-authors Benjamin Mixon-Baca, Jeffrey Knockel of Citizen Lab and Jedidiah R. Crandall traced connections between apps downloaded at least 700 million times in total.
Across one family, the team saw telltale overlaps: nearly identical Java code and native libraries, shared assets and the same server IPs. Some of the apps named the same corporate entity, Innovative Connecting, in their privacy policies despite being advertised as separate companies. In another group, identical obfuscation techniques and a shared closed-source protocol implementation pointed to a common upstream developer.
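The kind of overlap described here can be approximated with simple static comparison. The sketch below is my own illustration, not the study's actual methodology: it hashes every file in extracted APK directories and flags pairs of apps whose contents are mostly identical.

```python
import hashlib
from pathlib import Path


def file_hashes(app_dir):
    """Map SHA-256 digest -> relative path for every file under an
    extracted APK directory."""
    root = Path(app_dir)
    return {
        hashlib.sha256(f.read_bytes()).hexdigest(): str(f.relative_to(root))
        for f in root.rglob("*")
        if f.is_file()
    }


def cluster_by_shared_files(app_dirs, min_overlap=0.5):
    """Flag pairs of apps whose extracted contents share at least
    `min_overlap` of the smaller app's file hashes -- a crude proxy
    for a shared codebase.  Returns (app_a, app_b, overlap) tuples."""
    sets = {app: set(file_hashes(app)) for app in app_dirs}
    apps = list(sets)
    pairs = []
    for i, a in enumerate(apps):
        for b in apps[i + 1:]:
            shared = len(sets[a] & sets[b])
            denom = min(len(sets[a]), len(sets[b])) or 1
            if shared / denom >= min_overlap:
                pairs.append((a, b, shared / denom))
    return pairs
```

Real analyses also compare native-library symbols, signing certificates and server endpoints, but even this file-level check surfaces families of rebadged apps quickly.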
The secret families of “independent” VPNs
Why would one provider launch a dozen brands? Scale and monetization. White-label VPN platforms make it easy to spin up new names, skins and icons without discarding the existing codebase and servers. That is not product innovation but a distribution play: a single operator capturing ever larger shares of search results, ad slots and installs while hiding common ownership.
Without disclosure, this is more than a marketing quibble. If the infrastructure underlying multiple "independent" apps is run by a single company, that company controls the same authentication systems, logging policies and update pipelines for all of them. A policy change or breach in one place can cascade across an entire family of services without users ever knowing the apps were interconnected.
Shared code = shared servers = shared risks
The researchers also uncovered concrete security failures. Some apps shipped with hard-coded Shadowsocks passwords in their APKs, a rookie mistake. Because Shadowsocks uses a preshared secret, anyone who extracts that key from the app can decrypt captured traffic to the servers that use it. In practice, this turns "encrypted" sessions into plaintext for any adversary that records traffic for later analysis.
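To see why a hard-coded preshared secret is fatal, consider this deliberately simplified sketch. It is not real Shadowsocks (which uses AEAD ciphers with proper key derivation); it only illustrates the principle that any scheme keyed solely by a password baked into the APK lets a passive observer who extracts that password derive the same keys and decrypt recorded traffic. All names and values here are hypothetical.

```python
import hashlib
from itertools import cycle


def derive_key(password: str, salt: bytes) -> bytes:
    # Simplified stand-in for Shadowsocks key derivation (the real
    # protocol uses EVP_BytesToKey / HKDF); any scheme keyed only by a
    # preshared password has the same failure mode.
    return hashlib.sha256(salt + password.encode()).digest()


def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher for illustration only -- NOT Shadowsocks' AEAD.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))


# The app ships with the password baked into the APK (hypothetical value):
HARDCODED_PASSWORD = "s3cret-from-apk"

# A passive observer records the salt (sent in the clear) and ciphertext...
salt = bytes(16)
ciphertext = xor_stream(derive_key(HARDCODED_PASSWORD, salt),
                        b"GET /account HTTP/1.1")

# ...then derives the identical key from the extracted password and decrypts.
recovered = xor_stream(derive_key(HARDCODED_PASSWORD, salt), ciphertext)
```

The cryptography here is symmetric by design: whoever holds the password, legitimate client or attacker, computes the same key, which is why a secret embedded in a public APK is no secret at all.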
The research also flagged weak cryptographic configurations and designs vulnerable to connection inference, where an eavesdropper can link a user's use of an app to observed VPN traffic patterns. These are the kinds of issues experienced security teams avoid, yet they recur across many brands that appear to share upstream code.
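The connection-inference risk comes down to statistics: encryption hides content, but not timing and volume. A minimal sketch (my own illustration, not the paper's method) correlates per-second byte counts seen on two links; a value near 1.0 links an app's traffic to a tunnel despite encryption.

```python
from math import sqrt


def pearson(xs, ys):
    """Pearson correlation of two equal-length series of per-second
    byte counts observed on different network links."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Made-up per-second byte counts: traffic leaving the app vs. entering
# the VPN tunnel, where the tunnel adds roughly constant overhead.
app_side = [120, 0, 5400, 30, 2800, 0]
tunnel_side = [180, 60, 5460, 90, 2860, 60]
```

Because the tunnel preserves the app's burst pattern, the correlation is essentially perfect; defenses require padding or traffic shaping, which few free services invest in.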
This pattern isn't new. A seminal study by CSIRO Data61 a few years back found that many Android VPNs requested excessive permissions, embedded third-party trackers and, in some cases, applied no tunneling encryption at all. Nearly one in five of the apps it analyzed tunneled traffic without encryption, and a large number triggered malware alerts on VirusTotal. The new findings suggest the industry has yet to internalize the lesson.
Why free VPNs keep making the same mistakes
Running a worldwide VPN costs real money: servers and bandwidth aren't free, and neither are DDoS mitigation, audits and engineers. Free apps have to pay the rent somehow, whether through ad SDKs, data collection that pushes the boundaries of what consumers will tolerate, or "premium" upgrades sold with minimal investment in secure design. White-label suppliers lower the barrier to entry further, making code reuse and undisclosed affiliations common practice rather than exceptions.
History also teaches that "free" can carry hidden trade-offs. The Hola/Bright Data controversy was the first prominent example of a VPN-like service monetizing its users as exit nodes. The specifics differ here, but the misaligned incentives are the same: users who don't pay become the product, or security shortcuts become the business model.
What app stores and users can do now
App stores have leverage. On Google Play, developers can earn a badge for completing an independent security review under the App Defense Alliance's Mobile App Security Assessment program. The study's authors contend that requiring this kind of review for VPNs, along with meaningful developer identity verification, would give users better signals and raise the security floor.
Users' best safeguard may be to insist on transparency and proof, not promises. Seek out independent audits published by established security firms, no-logs claims tested in court or by forensic analysis, and open-source clients where they exist. Look for modern protocols like WireGuard or properly configured OpenVPN, run a few leak tests, and be suspicious of apps that request unrelated permissions or bundle many ad and analytics SDKs.
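The "unrelated permissions" check is easy to automate once an app's manifest permissions have been extracted (for example with a tool like aapt). The baseline below is my own assumption about what a VPN client plausibly needs, not an official Google or industry list:

```python
# Baseline permissions a VPN client plausibly needs (an assumption for
# illustration, not an official list).
EXPECTED_VPN_PERMISSIONS = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_NETWORK_STATE",
    "android.permission.FOREGROUND_SERVICE",
    "android.permission.POST_NOTIFICATIONS",
}


def suspicious_permissions(requested):
    """Return requested permissions outside the expected baseline,
    sorted for stable output; each one deserves scrutiny."""
    return sorted(set(requested) - EXPECTED_VPN_PERMISSIONS)
```

Run against a manifest that asks for contacts or precise location, the function immediately surfaces the red flags a cautious user should question before installing.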
The news is not that some free VPNs are harmful; it's that many of the most popular ones keep making the same mistakes, again and again, often because they're really not different apps at all.
Until app lineage, security design and audits are treated as table stakes, users should expect the "new" free VPN to be the same old operator in a different mask.