Meta internally projected that roughly 10% of its annual revenue would come from fraudulent ads that appear on Facebook and Instagram, according to documents reviewed by Reuters. Measured against Meta’s annual revenue of roughly $160 billion, that share works out to some $16 billion in ad dollars tied to bogus promotions, an astronomical figure that illustrates the scale and difficulty of fighting deception on the planet’s biggest social platforms.
What the Estimate Signals for Meta’s Ad Ecosystem
The figure suggests that scam ads aren’t just a nuisance but a recurring revenue stream from bad actors who keep buying reach and keep finding places to do it. Internally, Meta runs models that flag likely fraud, and the company reportedly suspends an advertiser only when it is 95% confident the account is scamming people. Below that threshold, Meta is said to charge suspicious buyers higher ad rates, a friction tactic meant to discourage spending. But if those ads run anyway, the auction books the revenue.
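As a rough illustration, not Meta’s actual implementation, the sketch below shows how a confidence-gated policy like the one described might be expressed. The 0.95 bar comes from the reporting above; the lower friction band, the surcharge multiplier, and the function names are assumptions.

```python
# Hypothetical sketch of a confidence-gated ad enforcement policy.
# The 0.95 suspension threshold comes from the reporting; the lower
# "friction" band and the surcharge multiplier are assumptions.

SUSPEND_THRESHOLD = 0.95   # reported bar for disabling an advertiser
FRICTION_THRESHOLD = 0.60  # assumed lower bound for penalty pricing

def enforcement_action(fraud_score: float) -> dict:
    """Map a model's fraud probability to an enforcement decision."""
    if fraud_score >= SUSPEND_THRESHOLD:
        # High confidence: remove the advertiser entirely.
        return {"action": "suspend", "bid_multiplier": None}
    if fraud_score >= FRICTION_THRESHOLD:
        # Suspicious but below the bar: charge more per auction to
        # discourage spending. Revenue is still booked if ads run.
        return {"action": "serve_with_surcharge", "bid_multiplier": 1.5}
    return {"action": "serve", "bid_multiplier": 1.0}

print(enforcement_action(0.97))  # suspend
print(enforcement_action(0.80))  # serve_with_surcharge
```

The structure makes the tradeoff explicit: everything below the suspension bar still serves, so the policy’s revenue and harm outcomes hinge entirely on where the two thresholds sit.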

That design underscores an enduring quandary for platforms: act aggressively and risk mistakenly penalizing legitimate advertisers, or set the bar for removal too high and let more harmful ads through. A 95% certainty level is designed to minimize wrongful takedowns, but it also leaves ample room for harmful campaigns to slip past enforcement.
Inside Meta’s Scam Ad Problem
Meta’s internal reviews, Reuters reported, turned up weaknesses in protecting users from paid promotions for illegal gambling and investment schemes as well as unauthorized medical products. These schemes typically impersonate legitimate brands or public figures, use cloned websites, and steer victims to payment flows off the platform, all of which makes recovery and enforcement more difficult.
Meta has responded that it is getting better at enforcement. A company spokesperson said user reports of scam ads fell 58 percent over the last 18 months and that more than 134 million scam ads were removed in that period. Those are big numbers, but they sit alongside the internal revenue estimate, a sign that removals are, if anything, chasing an even larger intake of harmful campaigns.
Why the 95% Threshold Matters for Ad Safety
Platform risk teams iteratively tune thresholds to balance precision and recall. Payment processors, for example, may hold funds, introduce step-up verification, or require more documentation when probability scores exceed low-risk thresholds. By contrast, waiting for 95% certainty before deactivating an account preserves revenue and minimizes advertiser friction, but it raises the expected harm to users from false negatives.
A more tiered system could shift that balance: pre-vetting in high-risk categories, mandatory business verification, automatic holdbacks for new advertisers, and required creative review for ads making certain claims. It would probably slow some ad spend in the short term but materially reduce exposure to serial abusers.
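A minimal sketch of such a tiered pipeline follows; the category list, risk bands, account-age cutoff, and step names are hypothetical, chosen only to make the layering concrete.

```python
# Hypothetical tiered review pipeline. The category list, risk bands,
# account-age cutoff, and step names are illustrative assumptions.

HIGH_RISK_CATEGORIES = {"finance", "health", "crypto", "gambling"}

def review_path(category: str, advertiser_age_days: int,
                verified_business: bool, fraud_score: float) -> list[str]:
    """Return the gates an ad must clear before it can serve."""
    steps = []
    if category in HIGH_RISK_CATEGORIES:
        steps.append("pre_vet_creative")          # human review before serving
    if not verified_business:
        steps.append("require_business_verification")
    if advertiser_age_days < 30:
        steps.append("cap_spend_new_advertiser")  # automatic holdback for new accounts
    if fraud_score >= 0.60:                       # assumed step-up band
        steps.append("step_up_identity_check")
    return steps or ["serve"]

# A five-day-old, unverified crypto advertiser with an elevated score
# faces every gate before a single impression is bought.
print(review_path("crypto", 5, False, 0.7))
```

The point of layering is that no single gate needs 95% certainty; each cheap check removes a slice of abuse while legitimate advertisers clear the gates once and move on.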

The Consumer Harm Keeps Piling Up
The larger cost is not abstract. U.S. consumers reported more than $10 billion in fraud losses in a recent year, according to the Federal Trade Commission, and social media is a prevalent conduit for impostor and investment scams. That total represents a jump of more than 50% in reported losses over recent years, and it points to a shift from one-off fraud schemes to large-scale, data-driven criminal operations targeting users.
Regulators are sharpening their focus. The UK’s Financial Conduct Authority has urged platforms to block unlicensed financial promotions. The EU’s Digital Services Act requires very large platforms to monitor and address systemic risks, such as deceptive advertising and the recommender systems that can amplify it. Australia’s competition regulator has also called for greater platform accountability over scam ads.
Deepfakes and Brand Impersonation Up the Ante
AI-generated deepfakes have turbocharged a familiar playbook. Scammers have staged fake investment pitches using the likenesses of celebrities and creators, spoofed news segments to promote miracle products, and cloned brand pages to hawk nonexistent goods. On Meta’s apps, these tactics often feed “pig-butchering” schemes on Messenger or WhatsApp, in which victims are groomed into fake trading platforms.
Advertiser-verification and content-authentication standards, including provenance metadata from industry coalitions, can be useful. But scaled defenses demand earlier checks in the ad-buying funnel, tighter controls on landing pages and payment flows, and real-time takedown loops with banks and telecom providers to choke off cash-out paths.
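One example of an earlier-funnel check is sketched below: comparing an ad’s landing domain against an advertiser’s verified domains and known brands before the ad enters the auction. The domain sets and the lookalike heuristic are hypothetical, and a production system would use far richer signals.

```python
# Hypothetical pre-auction landing-page check: flag ads whose landing
# domain is unverified or looks like a typosquat of a protected brand.
# The domain sets and the heuristic are illustrative assumptions.
from urllib.parse import urlparse

VERIFIED_DOMAINS = {"acme-bank.example"}   # advertiser-verified domains
PROTECTED_BRANDS = {"acme-bank"}           # brands to guard against lookalikes

def is_lookalike(host: str, brand: str) -> bool:
    """Crude lookalike test: brand name embedded in an unverified host."""
    return brand in host and host not in VERIFIED_DOMAINS

def landing_page_risk(landing_url: str) -> str:
    host = urlparse(landing_url).hostname or ""
    if host in VERIFIED_DOMAINS:
        return "pass"
    if any(is_lookalike(host, b) for b in PROTECTED_BRANDS):
        return "block_impersonation"
    return "manual_review"  # unverified destination: route to review queue

print(landing_page_risk("https://acme-bank-support.example/login"))
# -> block_impersonation
```

A check like this catches the cloned-website pattern described above before any money changes hands, rather than after victims report the ad.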
What to Watch From Meta as Enforcement Evolves
Watch for independent audits of scam-ad prevalence and enforcement effectiveness, disclosure of how many misleading ads were actually shown alongside takedown numbers, and risk-weighted thresholds that trigger pre-approval for categories like finance, health and crypto. Partnerships with regulators and financial institutions to share signals will be key, as will machine learning trained to identify coordinated brand impersonation.
The headline number, 10 percent of revenue related to scams, throws Meta’s incentives into sharp relief. The company says it is taking down more bad ads and receiving fewer user complaints. The question now is whether it is willing to forfeit near-term auction revenue, lower the bar below 95% where appropriate, and harden its systems enough to make a real dent in a thriving fraud economy.
