Meta’s vast advertising operation is subject to little human oversight and delivers an estimated 15 billion “higher risk” ads to users each day, according to company documents reviewed by Reuters, even as Meta pockets billions of dollars from promotions bearing clear or likely warning signs of fraud. The materials indicate that Meta’s platforms are a major vector for scams at enormous scale, despite the company’s claims to be spending heavily on safety.
These “higher risk” ads span categories long linked to consumer harm: crypto schemes, counterfeit health products and unlicensed gambling, among others. The same documents suggest Meta could be making around $7 billion a year from such ads, while a separate estimate puts the figure at as much as 10% of annual revenue, roughly $16 billion, once indirect schemes such as scams run through direct messages are counted.

How ‘Higher Risk’ Ads Slip Through Meta’s Systems
Meta’s automated review blocks an ad outright only when its systems judge there to be a greater than 95% chance that the ad is fraudulent. The high threshold is meant to limit false positives for legitimate advertisers, but it leaves a huge gray area that gets through. Savvy scammers exploit that space by rotating creatives, landing pages and copy often enough to stay just below the detection threshold.
Adding to the concern, Meta charges companies higher rates to run “higher risk” advertisements, and permits big-spending accounts dozens or even hundreds of policy “strikes”, as many as 500 in some cases, before imposing a ban, people familiar with the matter told Reuters. That mix raises difficult incentive questions: if marginal ads pay more and top spenders get substantial leeway, the economic calculus can tilt toward tolerating a degree of risk.
Imagine a fake weight-loss ad that skirts banned words and leans on glossy lifestyle imagery. If its fraud score hovers just below the 95% threshold, it can reach millions of users before iterative enforcement catches up, especially if the advertiser rotates domains and creative to defeat pattern matching.
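As a rough sketch of why that threshold matters, consider the toy review logic below. It is illustrative only: the 0.95 cutoff and 500-strike ceiling come from the reporting, while the function and variable names are assumptions, not Meta’s actual systems.

    # Toy review logic, assuming the thresholds described in the reporting.
    BLOCK_THRESHOLD = 0.95   # ads are auto-blocked only above this fraud score
    STRIKE_LIMIT = 500       # reported ceiling for some high-spend accounts

    def review_ad(fraud_score: float, advertiser_strikes: int) -> str:
        """Return a decision for one ad under the reported policy."""
        if fraud_score > BLOCK_THRESHOLD:
            return "block"                # high-confidence fraud: stop the ad
        if advertiser_strikes >= STRIKE_LIMIT:
            return "ban_advertiser"       # only after hundreds of strikes
        return "deliver"                  # the gray zone keeps running

    # A scam ad scored 0.90 from an account with 120 strikes is still delivered.
    print(review_ad(0.90, 120))           # -> "deliver"

Anything scoring below that cutoff stays in the gray zone, which is exactly the band the rotation tactics above are designed to stay inside.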
Personalization Can Amplify Harm For Vulnerable Users
Meta’s ad-delivery system is designed to optimize for user engagement. In this context, that means a user who clicks one shady crypto pitch is more likely to be shown another later. The personalization flywheel can turn a single misstep into a cascade of dangerous enticements, concentrating harm on the most vulnerable users.
In practice, this creates a feedback loop: engagement signals relevance, relevance earns more impressions, and more impressions raise the likelihood of conversions. For scams, the same dynamics that make advertising effective can amplify consumer harm.
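A minimal simulation makes the loop concrete. Every number and the update rule here are invented for illustration; they are not Meta’s ranking model.

    # Toy engagement flywheel: a click on one scam ad raises predicted relevance,
    # which earns the next scam ad more delivery. All values are assumptions.
    relevance = 0.10                          # initial predicted interest in risky ads
    for step in range(1, 6):
        impressions = int(1_000 * relevance)  # delivery scales with predicted relevance
        if impressions > 0:                   # assume at least one click once ads appear
            relevance = min(1.0, relevance * 1.6)
        print(f"step {step}: {impressions} impressions, relevance now {relevance:.2f}")

Within a handful of steps the simulated user goes from a hundred risky impressions to several hundred, which is the cascade described above.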
Billions Are At Stake, And Incentives Are Misaligned
Internal projections cited by Reuters estimate that bad ads directly account for billions in revenue, with total exposure possibly reaching $16 billion once indirect scams are included. Because Meta makes essentially all of its money from advertising, reining in profitable but dangerous categories sets up a clash between revenue targets and promises of safety. That tension shows up in policy thresholds, in pricing and in how high-spend accounts are treated.
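For scale, the arithmetic behind those figures is simple, taking the reported estimates at face value and assuming annual revenue on the order of $160 billion:

    # Back-of-the-envelope check of the reported estimates (not verified figures).
    annual_revenue = 160e9     # assumed order of magnitude for yearly revenue
    direct_scam_ads = 7e9      # reported internal estimate for direct revenue
    total_exposure = 0.10 * annual_revenue
    print(f"direct: ${direct_scam_ads / 1e9:.0f}B, total exposure: ${total_exposure / 1e9:.0f}B")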

The cost to society is not theoretical. The Federal Trade Commission has documented record consumer losses to fraud, exceeding $10 billion in a single recent year, a sum that illustrates how digital platforms can be both bazaar and minefield. When one platform is estimated to account for a third of successful scams in a major market, as a Meta safety team presentation has reportedly concluded, policy choices about detection thresholds and advertiser vetting take on public significance.
Regulatory Heat And Business Benchmarks
Former Meta safety researcher Sandeep Abraham told Reuters that regulators would not tolerate banks profiting from fraud and argued that the same standard should apply to tech. Across the industry, the problem is large: in its latest ads safety report, Google said it suspended more than 39 million ad accounts and removed over five billion ads, an increase of more than 300% from a year earlier. X has reportedly been warned that an organized cybercrime group tried to bribe some of its staff to gain unauthorized access to accounts, illustrating the pressure every major platform faces.
Regulatory regimes such as the E.U.’s Digital Services Act already demand that very large platforms assess and mitigate systemic risks, including fraud and deception. Against that backdrop, internal evidence of monetized “higher risk” ads could invite tougher audits, mandatory transparency and even penalties if the required risk controls are found lacking.
What Good Solutions Could Look Like In Practice
Experts cite common-sense defenses: lower the automated block threshold for high-risk categories, add human oversight to borderline calls and mandate stronger advertiser verification through know-your-business checks. Restricting frequency and reach for unproven advertisers, escrowing payments for risky verticals, and publicly logging denied creatives could also help lower the recidivism rate.
Importantly, personalization should be dampened when scam signals appear, down-ranking risky categories after a single complaint rather than waiting for repeated harm. Third-party audits of ad policies, enforcement data and revenue from risky segments would give the public confidence that safety is not being traded for short-term profit.
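To make those recommendations concrete, here is a hypothetical policy configuration in the spirit of the defenses above. Category names, thresholds and fields are assumptions for illustration, not any real platform API or Meta policy.

    # Hypothetical risk-tier policy; every value and name here is an assumption.
    RISK_POLICY = {
        "crypto_investment": {
            "auto_block_threshold": 0.70,       # stricter than a 0.95 catch-all
            "human_review_band": (0.40, 0.70),  # borderline scores go to reviewers
            "require_kyb": True,                # know-your-business verification
            "new_advertiser_reach_cap": 10_000,
            "escrow_payments": True,
        },
        "health_supplements": {
            "auto_block_threshold": 0.75,
            "human_review_band": (0.45, 0.75),
            "require_kyb": True,
            "new_advertiser_reach_cap": 25_000,
            "escrow_payments": False,
        },
    }

    def handle_scam_complaint(user_profile: dict, category: str) -> None:
        """Down-rank a risky category for a user after a single complaint."""
        user_profile.setdefault("suppressed_categories", set()).add(category)

The point is not the specific numbers but the shape: per-category thresholds, mandatory verification and immediate personalization damping once a scam signal appears.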
Meta’s internal numbers paint a stark picture: an ad system optimized for engagement that serves up a deluge of questionable pitches every day, and profits from them. How long that balance holds depends on whether meaningful change comes from within the company or is imposed by regulators watching closely.
