Anker’s home security brand, Eufy, has come under renewed scrutiny after asking the owners of its cameras to submit footage of crimes for use in training an AI system — and to stage porch piracy and attempted car break-ins if no crimes were available. Community forum posts and reporting from TechCrunch suggest the company paid $2 per approved clip, and capped earnings at $40 per camera for each type of incident.
The program requested clips of package theft and of people trying car door handles, asking participants to send clear views and even to position themselves within sight of two cameras at once. The reason is simple: contemporary vision models improve with more labeled data. But paying people to simulate theft also raises thorny questions about data quality, consumer safety, and how far a security brand should go to perfect its algorithms.

How the controversial Eufy data collection program worked
According to forum posts, Eufy asked users for “donations” of short video clips that fit the categories its AI was being trained to detect. Each accepted submission earned a small payment, with per-camera caps designed to prevent spamming. When real incidents were lacking, the company even encouraged users to pretend to steal packages from porches or try door handles, stressing clear angles and coverage from multiple cameras so the footage would be more useful for training.
Crowdsourcing training data is widespread in tech. The unusual rub here is the staging. If staged clips are mislabeled or overrepresented, a model can learn shortcuts from simulated behavior, picking up on patterns specific to reenactments rather than the messier reality of actual crimes.
Why staged training data raises red flags for accuracy
AI systems are only as good as the data they’re trained on. The National Institute of Standards and Technology’s AI Risk Management Framework emphasizes representativeness and correct labeling as fundamental protections. If a large proportion of clips involve choreographed thefts — fabricated angles, impossible timing, or telltale body language — models might overfit to those cues and underperform on the tougher cases, or at worst label innocent behavior as suspicious.
Civil liberties groups, including the Electronic Frontier Foundation, have cautioned that surveillance algorithms can introduce or exacerbate error and bias when training data does not reflect real-world behavior. In a consumer context, false positives mean nuisance alerts, eroded trust, and potentially dangerous confrontations. Without a clear separation of staged scenes from real footage, and without validation under realistic conditions, claims about accuracy are hard to verify.
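One way to keep that separation honest, in principle, is to tag every clip with its provenance and reserve genuine footage for the evaluation set. The sketch below is illustrative only, not Eufy's pipeline; the "provenance" field and the 80/20 split are assumptions.

```python
# A minimal sketch, assuming each clip carries a provenance tag:
# staged reenactments may augment training, but the accuracy numbers
# you report should come from held-out real-world footage only.
import random

def split_for_honest_eval(clips, eval_fraction=0.2, seed=7):
    """clips: list of dicts with a 'provenance' key ('real' or 'staged')."""
    rng = random.Random(seed)
    real = [c for c in clips if c["provenance"] == "real"]
    staged = [c for c in clips if c["provenance"] == "staged"]
    rng.shuffle(real)

    cutoff = int(len(real) * eval_fraction)
    eval_set = real[:cutoff]              # evaluation uses real footage only
    train_set = real[cutoff:] + staged    # staged clips augment training only
    return train_set, eval_set
```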
Consumer confidence and industry backdrop
Eufy is not the first smart camera brand to come under scrutiny for how it trains and markets AI features. In 2022, reporting by security researchers and tech publications examined aspects of Eufy’s privacy practices, leading the company to pledge changes. More broadly, rivals have been criticized for “crime-focused marketing” and for data-sharing arrangements that outpaced consumers’ knowledge and consent.

Against that sobering backdrop, paying customers to reenact crimes seems out of step with the caution this category requires. It blurs the line between genuine incident reporting and performance, and it pushes the risks of staging suspicious behavior on camera onto customers who may be least equipped to judge them.
The stakes for home security AI and consumer safety
Porch piracy is a real and seasonally spiking problem. Package theft remains prevalent according to multiple consumer surveys, including research by C+R Research that puts the average reported loss at more than $100 per incident. Vendors are racing to promise smarter alerts that can tell routine motion from actual threats, and they need diverse, accurately labeled datasets to do so.
Best practice would treat reenactments as a distinct class of synthetic data: clearly labeled, kept separate from real-world footage, and used only to augment rare edge cases. Clear labeling, third-party validation, and published metrics such as precision, recall, and false-alarm rates across diverse environments would give consumers a way to judge whether “AI theft detection” works beyond demos.
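To make that concrete, here is a minimal sketch of what stratified reporting could look like: the same detector scored separately on real and staged clips, so a model that only shines on reenactments cannot hide behind one blended number. The field names ("source", "label", "predicted") are hypothetical and not drawn from anything Eufy has published.

```python
# A sketch of per-source metrics: precision, recall, and false-alarm rate
# computed separately for real-world and staged clips.
from collections import defaultdict

def stratified_metrics(clips):
    """clips: iterable of dicts with 'source' ('real' or 'staged'),
    'label' (True = theft occurred), 'predicted' (True = alert fired)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for clip in clips:
        c = counts[clip["source"]]
        if clip["predicted"] and clip["label"]:
            c["tp"] += 1
        elif clip["predicted"] and not clip["label"]:
            c["fp"] += 1
        elif clip["label"]:
            c["fn"] += 1
        else:
            c["tn"] += 1

    report = {}
    for source, c in counts.items():
        precision = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
        recall = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
        # False-alarm rate: share of benign clips that still triggered an alert.
        false_alarm = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        report[source] = {"precision": precision, "recall": recall,
                          "false_alarm_rate": false_alarm}
    return report
```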
What to watch next as Eufy responds and regulators act
Key questions remain unanswered:
- Did Eufy distinguish staged clips from real events in its training data?
- What proportion of the training set came from reenactments?
- Was accuracy validated separately on real-world footage after deployment?
Regulators are also taking notice. The Federal Trade Commission has warned companies about deceptive or unsupported AI marketing claims and weak data governance. For a product category built on trust, the path forward is clear: focus on real-world validation, explain how training data is sourced and labeled, and stop asking customers to play burglar to show off how smart the technology has become.
