Neon, the buzzy app that pays people to share recorded phone calls for artificial intelligence training, remains offline following a serious security lapse that exposed sensitive user data. The developer has taken the app down indefinitely and says it will relaunch only after fixes are in place and a broader security audit is complete.
Why Neon went dark after a severe security failure
The app’s removal came after reports that a logged-in user could access the information of other users, including phone numbers, call recordings, and transcripts.
In testing the app, a reporter found that the backend did not restrict access to data objects belonging to other accounts, a flaw widely known as an insecure direct object reference (IDOR) and categorized in the OWASP API Security Top 10 as broken object-level authorization.
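The flaw class is easy to illustrate. The sketch below uses a hypothetical in-memory store, not anything from Neon's actual backend, to show the difference between a lookup that trusts a client-supplied ID and one that authorizes the object against the authenticated user:

```python
# Hypothetical data store for illustration only -- not Neon's real schema.
RECORDINGS = {
    "rec_1": {"owner": "user_a", "transcript": "hello"},
    "rec_2": {"owner": "user_b", "transcript": "secret"},
}

def get_recording_broken(requesting_user: str, recording_id: str) -> dict:
    # Vulnerable (IDOR): trusts the client-supplied ID and never checks
    # who owns the object, so any logged-in user can read any recording.
    return RECORDINGS[recording_id]

def get_recording_fixed(requesting_user: str, recording_id: str) -> dict:
    # Correct: object-level authorization ties the lookup to the
    # authenticated user before returning anything.
    record = RECORDINGS.get(recording_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not found or not authorized")
    return record
```

The fix is a single ownership check, which is why this class of bug is typically caught by routine prelaunch authorization testing.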
Developer Alex Kiam told users he took the “drastic measure” of shutting down servers to protect their privacy while engineers fix the vulnerability and audit the platform. During the outage, calls and cash-outs are unavailable and in-app balances may temporarily display as zero, though Kiam says existing earnings are safe. A relaunch is targeted within an estimated one to two weeks, once the fixes are verified.
A quick check of app stores finds Neon still available to download, though onboarding fails and the service itself is offline. In other words, the client remains listed while the backend infrastructure is deliberately kept down to prevent further exposure.
How Neon’s payout model works and potential earnings
Neon pitches itself as a way to turn everyday conversation into income by licensing anonymized call recordings to AI developers building conversational systems. It pays by the minute, with a higher rate when both parties are Neon users and a reduced rate when only one is. Users can also earn referral bonuses, and daily payouts are capped to limit costs.
For context, a user who wants to hit a $30 daily cap at the top rate of 30 cents per minute needs about 100 minutes of qualifying calls. That requirement, heavy phone time plus a steady stream of approved conversations, helps explain the aggressive referral incentives and the push for rapid scaling. But rapid growth can stretch security practices thin, particularly around authentication and data segregation.
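The figures above can be checked directly. The snippet works in integer cents to avoid floating-point rounding:

```python
# Back-of-the-envelope check of the stated payout math:
# a $30 daily cap at a top rate of 30 cents per qualifying minute.
DAILY_CAP_CENTS = 30 * 100  # $30 cap, in cents
TOP_RATE_CENTS = 30         # 30 cents per qualifying minute

minutes_to_hit_cap = DAILY_CAP_CENTS // TOP_RATE_CENTS
print(minutes_to_hit_cap)  # 100 minutes of qualifying calls per day
```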
Anonymization promises versus reality in voice data
Neon says it scrubs personally identifiable information (PII) and encrypts recordings before selling them to vetted partners. But voice is inherently identifying: tone, accent, and speech patterns can all serve as biometric markers, and regulators have warned that reidentification is far easier than most people assume. The Federal Trade Commission has advised that deidentification should be an ongoing process, not a one-time switch, and that downstream uses of the data must reflect that.
The exposure undercuts those assurances, because it made raw or lightly processed recordings available not just to vetted buyers but to ordinary users. Even if no outside theft is ultimately found, letting any logged-in user browse other people's recordings and transcripts is still a fundamental failure of least-privilege access design.
Consent, compliance, and risk for call recording apps
Beyond platform security, call recording adds legal complexity. In the U.S., some states require consent from all parties to a recording, while others permit one-party consent; when participants are in different jurisdictions, the stricter rule generally applies. Laws such as California's CPRA and Illinois' biometric statute add further obligations around notice, purpose limitation, and retention for voice data collection and monetization. Internationally, the GDPR imposes its own consent and data-minimization requirements.
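The "stricter rule wins" principle can be expressed as a simple check. The jurisdiction set below is an illustrative subset, not legal advice, and real compliance logic would need authoritative, current state-by-state data:

```python
# Illustrative subset of U.S. states commonly treated as requiring
# consent from all parties to record a call. Not legal advice.
ALL_PARTY_CONSENT_STATES = {"CA", "IL", "FL", "WA"}

def consent_required_from_all(party_states: list[str]) -> bool:
    # If any participant is in an all-party-consent jurisdiction,
    # treat the entire call as requiring everyone's consent.
    return any(state in ALL_PARTY_CONSENT_STATES for state in party_states)
```

For example, a call between a New York user and a California user would be treated under the all-party rule, even though New York alone permits one-party consent.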
These frameworks don't prohibit selling data, but they raise the bar on transparency and control. Pew Research Center surveys document the discomfort: majorities of Americans say they feel little control over how companies use their data and worry about it long term. Against that backdrop, any weakness in how audio, transcripts, or metadata are handled can erode trust quickly.
What users can expect next if Neon returns online
If Neon comes back according to the promised schedule, be on the lookout for visible changes:
- Tighter API authorization
- Tougher storage for audio files
- Outside validation of fixes
- More transparent documentation on methods used to obscure data
- Finer-grained user controls over what is recorded, stored, and licensed
For would-be earners, the math is simple but the decision isn't: real money, real trade-offs. Before opting back in, check what is collected, who gets access to it, and how long it is retained. Confirm whether you can delete recordings and revoke licensing, and whether the app gives clear guidance on how consent works in your jurisdiction. As data-for-dollars services multiply, these basics will continue to separate responsible platforms from riskier experiments.
For now, Neon is in the penalty box for a mistake that standard prelaunch testing should have caught. Fixing the bug is the easy part; rebuilding trust with users, partners, and regulators will be the heavy lifting if this model is going to scale in a way that feels safe.