Meta is profiting from billions of dollars in revenue generated by scam advertising on Facebook and Instagram, a Reuters investigation based on the company's internal documents found. The records show Meta's platforms serve or surface an estimated 15 billion "higher risk" scam ads to users daily, and that approximately $7 billion in annualized revenue is associated with scam ads and other banned goods.
The same cache of documents, reviewed by Reuters, indicates Meta estimated that as much as 10% of its 2024 ad revenue would come from ads tied to scams or banned products, a striking figure for a company that took in more than $100 billion in advertising sales over the past year. The numbers highlight how fraud at scale can become a line item, not merely an annoyance.
Inside the numbers behind Meta’s estimated scam ads
"Higher risk" ads often include pitches for investment scams, counterfeit products, illegal online casinos and unapproved pharmaceuticals: content that either violates policy outright or falls into a gray area where enforcement gets thorny. The 15 billion figure doesn't mean every one of those ads reaches users; Meta's systems flag huge volumes each day and escalate them for closer review.
The $7 billion figure, meanwhile, reported as the annualized take, underscores the commercial gravity of the issue. Although it amounts to only a mid-single-digit share of Meta's total ad revenue, it is on par with the entire annual ad business of many standalone media firms. It also helps explain why cleaning up the ad ecosystem is both an integrity challenge and a revenue trade-off.
Why Meta's enforcement decisions are so important
Meta's automated systems typically require a high degree of confidence, usually 95%, before banning an advertiser as fraudulent, Reuters reported. Below that bar, advertisers may face higher prices or other forms of friction rather than immediate removal. The approach emphasizes precision (few false positives), but it can allow sophisticated scammers to persist, especially when they rapidly rotate domains, creatives and accounts.
That is the classic machine-learning trade-off between precision and recall. Lower the threshold and you remove more bad actors from the system, but mistakenly sweep up a greater share of legitimate advertisers; raise it and you protect real businesses, but leave lucrative fraud in place. With billions of ad impressions flowing through Meta's pipes each day, small tuning decisions can have outsize safety and financial implications.
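The threshold mechanics described above can be sketched in a few lines of Python. This is an illustrative toy, not Meta's actual system: the advertiser pool, scam scores and the two thresholds are invented purely to show how the same classifier trades precision against recall.

```python
# Illustrative sketch (not Meta's actual system): how raising or lowering
# a ban threshold trades precision (clean bans) against recall (scams caught).

def precision_recall(pool, threshold):
    """Ban every advertiser whose scam score meets the threshold, then
    report precision (share of bans that were truly scams) and recall
    (share of scams that were actually banned)."""
    tp = fp = fn = 0
    for score, is_scam in pool:
        banned = score >= threshold
        if banned and is_scam:
            tp += 1          # correct ban
        elif banned and not is_scam:
            fp += 1          # legitimate advertiser banned by mistake
        elif not banned and is_scam:
            fn += 1          # scammer left in the system
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical pool: (model's scam confidence, ground truth is_scam)
pool = [(0.99, True), (0.97, True), (0.88, True), (0.82, True),
        (0.93, False), (0.84, False), (0.40, False), (0.10, False)]

for threshold in (0.95, 0.80):
    p, r = precision_recall(pool, threshold)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
```

On this toy data, the 0.95 bar bans only the two highest-scoring scammers (perfect precision, half the scams missed), while dropping the bar to 0.80 catches every scam but also bans two legitimate advertisers: the same tension the internal documents describe.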
Why it matters for users and brands facing scams
For consumers, the stakes are just as real. Regulators in multiple countries have attributed massive losses to Ponzi and other investment schemes promoted through social ads. Counterfeit drugs, hawked online as quick cures, are a fixture of takedown actions. Illegal gambling promotions frequently target regions where online casinos are banned, misleading users through illicit landing pages and intermediaries.
Advertisers pay a price, too. Scam campaigns undermine user trust, distort auction dynamics, and poison the well of brand-safe supply. Legitimate advertisers end up competing in the same marketplaces, which pushes up costs while risking adjacency to misleading content. Over time this erodes performance and makes third-party brand safety tools and allow-lists table stakes rather than a nice-to-have.
Regulators are closing in with tougher ad oversight
Regulatory pressure is intensifying. In the European Union, the Digital Services Act obliges very large platforms to mitigate systemic risks, root out illegal advertisements and open their processes to external audit. Noncompliance can draw fines of up to 6% of worldwide turnover. Financial and consumer protection regulators in the U.K., Australia and elsewhere have also pushed platforms to verify the identity of financial advertisers and to take down false advertising quickly.
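For a sense of scale, the DSA's 6%-of-turnover ceiling is simple arithmetic. The turnover figure in this sketch is a hypothetical round number chosen for illustration, not Meta's reported revenue.

```python
# Back-of-envelope DSA fine ceiling: up to 6% of worldwide annual turnover.
# The turnover figure is a hypothetical round number, not Meta's actual revenue.
turnover_usd = 150e9                      # assume $150B worldwide turnover
max_fine_usd = turnover_usd * 6 / 100     # DSA cap: 6% of turnover
print(f"Maximum fine: ${max_fine_usd / 1e9:.0f} billion")  # → $9 billion
```

Even against a hypothetical nine-figure-per-ad-category business, a ceiling in the billions explains why platforms treat DSA compliance as a board-level risk rather than a routine cost.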
In the United States, the Federal Trade Commission and state attorneys general have recently stepped up actions against deceptive advertising and the facilitation of fraud. And although enforcement often targets the scammers themselves, regulators are looking more closely at the systems that amplify their reach: advertiser verification, transparency and turnaround time on user reports.
Meta’s stance and what to watch in ad enforcement
Meta has long maintained that it bans illegal and deceptive ads, pours resources into integrity technology and employs tens of thousands of people in safety and security. It regularly touts removals of bad actors and updates to advertiser verification. Yet the internal data reported by Reuters raise a hard question: are current thresholds and financial incentives calibrated to minimize harm at scale?
Key signals to look for are:
- Whether Meta reduces ban thresholds for high‑risk categories
- Whether it widens pre‑verification requirements for financial and health advertisers
- Whether it tightens domain whitelisting and introduces required post‑click monitoring to capture cloaking or bait‑and‑switch tactics
Transparent reporting — how many scam ads are blocked pre‑impression versus taken down after delivery, for example — would also help to restore trust.
For users, the basics still matter: be wary of high-pressure investment pitches, double-check domains and use in-platform reporting tools to flag questionable ads. For brands, pairing platform controls with independent verification and rigorous allow-lists is now close to mandatory. If the $7 billion figure holds up, the fight against scam ads is anything but a sideshow; it is a window into how much of the modern ad economy operates.