Big social media platforms will be financially liable for scams that proliferate on their sites under new European Union rules, one of the most aggressive attempts by an industrialized economy to curb disinformation and other harmful internet content. Under the legislation, finalized by EU lawmakers after marathon negotiations, platforms must pay banks when customers are scammed after the platform failed to act on a scam the user reported, while banks must repay consumers in cases of bank impersonation or unauthorized transfers.
What the new EU rules demand from social platforms
The legislation complements the Digital Services Act and Digital Markets Act by tying content moderation to concrete financial consequences. Platforms must bolster advertiser verification, speed up the removal of reported scam content and maintain auditable logs of how they responded to alerts. If, say, an investment scam, a fake customer support account or a deepfake advertisement kept circulating after being reported and someone lost money as a result, the platform must reimburse that user's bank for the loss.
Banks, for their part, must refund account holders in cases where scammers impersonate a bank or where transactions are made without authorization. The dual requirement is designed to close the gaps scammers have exploited: platforms must keep fraud off their sites from the start, and banks must make customers whole when money goes missing.
Enforcement will fall to the European Commission and national Digital Services Coordinators, with sanctions built on the DSA's regime. For systemic failures, fines could reach as much as 6% of global turnover, a figure designed to focus minds in boardrooms across the industry.
Why regulators are targeting social media over scams
Scam operations have moved to where users spend their days and nights: short-form video feeds, messaging apps and influencer-driven ads. Europol's Internet Organized Crime Threat Assessment has identified investment and "pig-butchering" schemes that are seeded partly through social networks and show strong links to the sharing economy. UK Finance, meanwhile, has consistently reported that a significant proportion of authorised push payment (APP) scams originate online, with social media a leading vector.
Consumer advocacy groups such as the European Consumer Organisation have pushed Brussels to address misleading influencer marketing and crypto promotions. High-profile cases illustrate the problem: a prominent UK consumer advocate compelled sweeping ad-verification changes at a major platform after his image was used in scam ads, and regulators in Australia have taken legal action over fake celebrity-endorsement ads.
In practice, how the new platform liability will work
Imagine a deepfake video ad promoting an investment scam that users flag as fraudulent but that stays online long enough to deceive victims. Under the new rules, the bank reimburses the customer if the transaction was unauthorized or was authorized under bank impersonation. If the platform failed to act on the report in time, it must then repay the bank for the resulting loss. The framework establishes a clean chain: the user is repaid, the bank is protected, and the platform pays if it was negligent.
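As a rough illustration of that chain, the decision logic might look like the sketch below. This is a minimal Python sketch under stated assumptions: the 24-hour response window, the field names and the decision order are illustrative, not figures from the legislation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical grace period for platforms to act on a scam report;
# the actual deadline will be set by the final technical standards.
PLATFORM_RESPONSE_WINDOW = timedelta(hours=24)

@dataclass
class ScamCase:
    unauthorized: bool                         # transfer made without the customer's consent
    bank_impersonation: bool                   # customer paid because scammers posed as the bank
    reported_to_platform: Optional[datetime]   # when a user flagged the content, if at all
    platform_removed: Optional[datetime]       # when the platform took it down, if at all

def who_bears_the_loss(case: ScamCase) -> str:
    """Trace the reimbursement chain: user repaid, bank protected,
    platform pays if it was negligent."""
    # Step 1: the bank refunds the customer for unauthorized
    # transfers and bank-impersonation scams.
    bank_refunds_user = case.unauthorized or case.bank_impersonation
    if not bank_refunds_user:
        return "customer"  # outside the reimbursement backstop

    # Step 2: the platform repays the bank if the scam was reported
    # and the platform failed to act within the response window.
    if case.reported_to_platform is not None:
        deadline = case.reported_to_platform + PLATFORM_RESPONSE_WINDOW
        acted_in_time = (case.platform_removed is not None
                         and case.platform_removed <= deadline)
        if not acted_in_time:
            return "platform"  # negligence: platform reimburses the bank
    return "bank"  # no platform negligence shown; bank absorbs the loss
```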
To arbitrate disputes, platforms will be required to keep evidence trails showing when a scam was flagged, which detection tools were used and when content was removed. Banks, in turn, will keep records documenting strong customer authentication and fraud alerts. Regulators are drawing up technical standards for secure data sharing between platforms, banks and the police so that scam clusters and mule accounts can be spotted more quickly.
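One way such an evidence trail could be made tamper-evident is hash chaining, sketched below. The field names and event labels are assumptions for illustration; the real schema will come from the forthcoming technical standards.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(prev_hash: str, event: str, content_id: str,
                 detection_tool: Optional[str] = None) -> dict:
    """Build one entry of a tamper-evident evidence trail by chaining
    each record to the hash of the previous one."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                  # e.g. "user_report", "auto_flag", "removal"
        "content_id": content_id,
        "detection_tool": detection_tool,
        "prev_hash": prev_hash,
    }
    # Hash a canonical serialization so any later edit breaks the chain.
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Chaining a user report and the eventual removal into one trail:
r1 = audit_record("genesis", "user_report", "ad-12345")
r2 = audit_record(r1["hash"], "removal", "ad-12345",
                  detection_tool="deepfake-classifier")
```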
Industry response and the challenge of compliance
Tech companies warn that open-ended liability under a broad new standard could lead to over-removal of content while raising compliance costs, particularly for smaller platforms. They argue that organized crime is adept at pivoting from one tactic to another, making perfect detection unattainable. Banks generally welcome clearer reimbursement rules but warn that fraud could migrate to cross-border routes if platforms outside the EU are slower to comply.
In practice, expect platforms to broaden know-your-business checks on advertisers, ban risky categories such as unlicensed financial promotions and introduce pre-publication screening for ads related to investments, banking support or crypto. Expect, too, in-product prompts pointing users to verified bank contact pages, along with standardised "report scam" flows that route flagged content into fast-track review queues.
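A pre-publication gate of that kind might reduce to a few checks, as in the sketch below. The category names, verification flags and outcomes are assumptions for the sketch; real platforms would plug in their own classifiers and KYB pipelines.

```python
# Illustrative pre-publication screen for ads in high-risk categories.
HIGH_RISK_CATEGORIES = {"investments", "banking_support", "crypto"}

def screen_ad(category: str, advertiser_verified: bool,
              licensed_financial_promoter: bool) -> str:
    """Decide whether an ad can go live, under assumed policy rules."""
    if category not in HIGH_RISK_CATEGORIES:
        return "publish"
    if not advertiser_verified:
        return "reject: advertiser failed know-your-business checks"
    if category == "investments" and not licensed_financial_promoter:
        return "reject: unlicensed financial promotion"
    return "hold_for_review"  # human review before the ad goes live
```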
What users can expect under the new EU scam rules
People should encounter fewer scam ads, and the ads they do report should come down faster. Advertisers will face sharper verification requirements and more explicit disclosures, while posts soliciting payments in connection with investments, giveaways or customer support callbacks may be accompanied by friction screens or explanations. Crucially, the reimbursement backstop will limit the financial damage when scams slip through, though regulators stress that prevention remains the priority.
A jolt with global implications for platform liability
The EU's move is part of a broader shift toward platform responsibility. The United Kingdom's Payment Systems Regulator has required refunds for APP fraud, Singapore's central bank has proposed shared-responsibility approaches to phishing, and regulators from the United States to Australia have scrutinized scam ads and impersonation schemes. By connecting content moderation failures to direct financial liability, the EU has pushed social media companies into a new era of compliance, one in which turning a blind eye to fraud is no longer simply a risk to their reputations but also to their balance sheets.