Neon, a buzzed-about calling app, is skyrocketing in the U.S. App Store by paying people to record their calls so artificial intelligence companies can use those recordings to train their models. The pitch is straightforward and slightly surreal: talk, get paid, and let your recorded voice teach AI systems to do all sorts of things.
TechCrunch reported the surge as Neon rose up the Social Networking chart, apparently reaching No. 2.
The company characterizes the arrangement as profit-sharing over data that’s already being used to power the AI economy, presenting users as paid participants rather than passive products.
How Neon Works, and Who Buys the Recorded Audio
Neon routes calls through its service, records them, and licenses the audio to “vetted” AI developers. Rates are per minute: $0.30 when you’re speaking with another Neon user, and $0.15 when you’re on a call with a non-Neon user (only the Neon user is paid in that case). Earnings are capped at $30 per day, plus a $30 referral bonus for each verified new user.
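The rate structure above can be sketched as a small calculation. This is an illustrative model, not Neon's actual billing code; whether referral bonuses count toward the daily cap is an assumption (here they are treated as separate).

```python
# Illustrative sketch of the payout math described above.
# Assumptions (not confirmed by Neon): the $30/day cap applies only to
# per-minute call pay, and referral bonuses are paid on top of it.
NEON_TO_NEON_RATE = 0.30   # $/minute when both parties use Neon
NEON_TO_OTHER_RATE = 0.15  # $/minute when the other party does not
DAILY_CAP = 30.00

def daily_earnings(neon_minutes: float, non_neon_minutes: float,
                   referrals: int = 0, referral_bonus: float = 30.00) -> float:
    """Return one day's earnings under the published per-minute rates."""
    call_pay = (neon_minutes * NEON_TO_NEON_RATE
                + non_neon_minutes * NEON_TO_OTHER_RATE)
    return round(min(call_pay, DAILY_CAP) + referrals * referral_bonus, 2)

print(daily_earnings(60, 40))   # 60*0.30 + 40*0.15 = 24.0
print(daily_earnings(200, 0))   # 60.00 of call pay, capped at 30.0
```

The second call shows why the headline rate is misleading: past 100 minutes of Neon-to-Neon talk time, additional minutes earn nothing that day.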
According to Neon, payouts are made two to three days after a user earns at least $0.10. Reddit community chatter suggests actual earnings vary widely, and some users have reported payment delays along with a few bugs. Like many gig-style platforms, the headline rate will depend on call volume and who answers the phone.
What Neon Offers, and Its Terms for User Audio
The company says it scrubs audio of personally identifiable information — names, numbers and addresses — before turning it over to buyers. But its terms of service give Neon an extensive, worldwide, sublicensable license to use, modify and sell submitted recordings, including rights to create derivative works. That license is broader than what most consumer services claim, and it significantly constrains a user’s ability to claw back content once it’s out there.
Experts at organizations like the Electronic Frontier Foundation have long warned that “anonymized” data sets can be re-identified. Voice can be a biometric: even if contact details are scrubbed, tone, speech patterns and who you’re talking to can expose identity. The Federal Trade Commission has also signaled that voice data warrants special care, pointing to past enforcement over the retention of voice recordings.
Consent Laws And Platform Policy Questions
Whether recording a call is legal depends on consent. Most U.S. states are “one-party consent,” but a handful — including California, Pennsylvania and Washington — mandate all-party consent. If you’re in, or calling someone in, an all-party state, everyone on the call must consent to being recorded. Apps usually handle this through pre-call disclosures or an audible beep, though compliance gets tricky when a call crosses state lines.
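The conservative rule apps tend to apply can be sketched as a toy check: if either end of the call is in an all-party state, get everyone's consent. This is an illustration of the logic described above, not legal advice, and the state set below is a partial example list.

```python
# Toy illustration of one-party vs. all-party consent logic.
# The set below is a partial example, not a complete legal reference.
ALL_PARTY_STATES = {"CA", "PA", "WA", "IL", "FL"}

def consent_required(caller_state: str, callee_state: str) -> str:
    """Conservative rule: if either party is in an all-party-consent
    state, require consent from everyone on the call."""
    if caller_state in ALL_PARTY_STATES or callee_state in ALL_PARTY_STATES:
        return "all parties"
    return "one party"

print(consent_required("NY", "CA"))  # all parties (CA is all-party)
print(consent_required("NY", "TX"))  # one party
```

This is why cross-state calls are the hard case: an app generally can't know in advance which rule applies, which pushes platforms toward always disclosing.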
On iOS, call-recording apps tend to use VoIP bridges rather than recording through the system dialer. Apple’s App Store rules mandate disclosure of data collection, consent where required, and clear privacy labels. If Neon keeps growing, its in-app onboarding, in-call notifications and data handling will likely be scrutinized for adherence to both state law and platform policy.
Why AI Firms Need Everyday Calls for Training
Speech models feed on real-world sound: overlapping voices, background noise, accents, code-switching and slang. Public datasets (for example, Mozilla’s Common Voice and LibriSpeech) have been mainstays, but they’re heavily biased toward clean recordings and read speech. Conversational data, especially spontaneous telephone calls, is used to improve recognition, diarization (identifying who spoke when) and dialogue systems across diverse demographic groups and acoustic environments.
That demand has only grown as businesses pursue voice-enabled assistants and call-center automation. Performance gaps for underrepresented accents and noisy conditions persist in evaluations such as NIST’s, and they have driven developers to seek more diverse audio. Neon’s offer of “diverse, real-world speech” is exactly the sort of dataset pitch buyers want to hear.
Risk–reward trade-off for users considering Neon
Neon’s pay-per-minute model is an obvious draw for budget-strapped users, but it comes with trade-offs that are easy to ignore.
Recordings can be retained by an unlimited number of third parties under the sublicensable license, even after a user stops participating.
State biometric laws — such as Illinois’ Biometric Information Privacy Act and Texas’ biometric statute — may be implicated when “voiceprints” are collected, posing additional compliance considerations for buyers and intermediaries.
Neon says you can go into your account to stop future recording. The larger question is what happens to already-shared audio and any models trained on it. As Carly Kind puts it: “As datasets used in AI become increasingly regulated, under systems referenced by the FTC and overseas through frameworks such as the EU’s AI Act, data provenance and revocation rights will suddenly become a whole lot more important than they are today.”
Bottom line on Neon’s paid call-recording model
Neon is tapping into a potent consumer reflex: if tech giants are making money off data, why can’t users? The app’s fast rise suggests the concept is striking a chord. But before you trade hours of your voice for dollars, read the fine print, know your state’s consent rules and remember that “anonymized” doesn’t always mean untraceable. For now, at least, the real calculus is less a riddle than a simple question: is $30 a day worth a perpetual license to your everyday conversations?