Match Group is facing a sweeping lawsuit alleging that the company let a convicted serial rapist use Tinder and Hinge for years without taking action, despite multiple reports from women against him, even as the apps’ recommendation engines continued to surface his profile. Filed by six women, the complaint takes aim at Match Group, its former parent IAC, Tinder, Hinge, and Stephen Matthews, a retired Denver doctor who was convicted of drugging and sexually assaulting several women.
What the lawsuit alleges about Match Group and its apps
Match Group had received detailed reports as far back as 2020 that Matthews had allegedly drugged and raped users he met on Hinge, according to the filing. Despite such warnings, Matthews’ profiles stayed active and were promoted to other users after that time, the suit claims. Plaintiffs say the platforms recommended his profile to them, and in some cases re-recommended it to survivors who had already reported him.
The complaint also states that Matthews used his real name and the same photos and profession across his profiles, which plaintiffs matched up using both screenshots and his phone number. They say this should have made it easy to locate and block the same person across Match-owned apps. The central claim: Match Group did not have effective cross-app bans in place, so a known abuser was able to move from product to product while continuing to reach new potential victims.
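The complaint doesn’t describe Match Group’s internal systems, so the following is only a minimal sketch of what cross-app ban sharing could look like, assuming a portfolio-wide registry keyed on a hashed, normalized phone number. Every name, function, and the salt here are hypothetical:

```python
import hashlib

# Hypothetical shared ban registry keyed on a salted hash of a user's
# verified phone number, consulted by every app in the portfolio.
SALT = b"portfolio-wide-secret-salt"  # assumption: a shared secret salt

def phone_key(phone: str) -> str:
    """Normalize a phone number to digits only, then hash it so sibling
    apps can share ban signals without exchanging raw phone numbers."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    return hashlib.sha256(SALT + digits.encode()).hexdigest()

class SharedBanRegistry:
    """One registry checked at signup and login across all apps."""
    def __init__(self) -> None:
        self._banned: set[str] = set()

    def ban(self, phone: str) -> None:
        self._banned.add(phone_key(phone))

    def is_banned(self, phone: str) -> bool:
        return phone_key(phone) in self._banned

# Usage: a ban issued after a report on one app blocks the same
# number everywhere, regardless of formatting differences.
registry = SharedBanRegistry()
registry.ban("+1 303 555 0100")            # ban issued on one app
assert registry.is_banned("+13035550100")  # signup blocked on a sibling app
```

In practice, a real system would likely weigh additional identifiers, such as device fingerprints or profile photos, but the design point is the same: a ban signal is only as effective as the identity linking behind it.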
How Matthews’ conviction raises the stakes for the case
Matthews was arrested after a police report that eventually grew into a larger investigation. He was subsequently convicted of drugging and/or sexually assaulting 11 women over two decades and sentenced to 158 years to life in prison. During much of that period, plaintiffs say, Matthews continued to find victims through Tinder and Hinge while complaints stacked up.
Reports, recommendations, and safety gaps
The lawsuit describes a pattern familiar to trust and safety workers: a user reported for serious harm is banned from one product, only to reappear on another app owned by the same corporate parent. Here, plaintiffs allege that Hinge acknowledged receiving a survivor’s report but re-recommended Matthews’ profile to that same user months later. They also allege that Matthews remained logged in on Tinder even as Hinge’s communications indicated action was being taken.
Investigative reporting from The Markup and the Pulitzer Center’s AI Accountability Network had previously found that Matthews was reported on Hinge yet remained active there. That earlier reporting points to the lawsuit’s center of gravity: content moderation and recommendation systems can work at cross purposes when bans aren’t enforced consistently and safety signals aren’t shared across a company’s portfolio.
The bigger picture of safety in online dating
In recent years, Match Group has promoted a number of safety features across its apps: in-app reporting flows; photo and video verification to confirm users aren’t being catfished; prompts discouraging abusive messages; emergency features designed to help daters who feel unsafe during a date; and, for a brief time, opt-in background checks on Tinder through a third-party partner.
But background checks are not without limitations: they are unlikely to pick up allegations that never make their way into the criminal justice system, and a large share of sexual assaults go unreported. RAINN estimates that only around 31 percent of sexual assaults are reported to the police, so dangerous behavior often cannot be flagged by public records alone.
At scale, dating apps handle billions of profile views and swipes per day, and safety often depends on engineering decisions that go beyond content removal: device- and phone-number-based bans, cross-app identity linking, and recommendation models trained to downrank users with repeated serious reports. According to Pew Research Center data, online daters, especially women under 35, report high rates of harassment and abuse, much of it spilling into real-world dates they struggle to end. That suggests at least some of what’s at stake for platforms whose algorithms are built to promote discovery and in-person meet-ups.
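None of these systems are described in the filing, but as a rough illustration of the downranking idea, here is a minimal sketch assuming a hypothetical scoring step that penalizes candidates in proportion to verified serious reports. The names, penalty weight, and threshold are all invented:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    user_id: str
    base_score: float      # relevance score from the recommender
    serious_reports: int   # verified reports of serious harm

# Hypothetical policy: each verified serious report halves visibility,
# and any user at or past a hard threshold is removed entirely.
REPORT_PENALTY = 0.5
HARD_THRESHOLD = 3

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Drop heavily reported users, downrank the rest proportionally."""
    eligible = [c for c in candidates if c.serious_reports < HARD_THRESHOLD]
    return sorted(
        eligible,
        key=lambda c: c.base_score * (REPORT_PENALTY ** c.serious_reports),
        reverse=True,
    )

# Usage: a reported user sinks below cleaner profiles or vanishes.
pool = [
    Candidate("a", base_score=0.9, serious_reports=2),
    Candidate("b", base_score=0.6, serious_reports=0),
    Candidate("c", base_score=0.8, serious_reports=4),  # dropped outright
]
print([c.user_id for c in rank(pool)])  # ['b', 'a']
```

The design choice worth noting is that the report signal acts inside the ranking function itself, rather than waiting for a moderation decision, so a risky profile loses reach even before a ban is finalized.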
Match Group’s response and what happens next
In a statement, Match Group said that user safety is a top priority, adding: “Numerous advancements have been made over the past couple years, including but not limited to partnerships, technology investments and the addition of new features which are designed to enhance user safety both online and offline.” The company often touts those efforts, including round-the-clock moderation teams, cooperation with law enforcement, and frequent updates to its verification and reporting tools. The case will test whether Match Group’s procedures and cross-brand enforcement were reasonable in light of the warnings it is alleged to have received.
The plaintiffs’ lawyers have invited other survivors who may have met Matthews on dating apps, and who had experiences similar to those of the plaintiff identified as J.H., to share their stories via a dedicated website. The suit also names IAC, Match Group’s former parent, for conduct tied to the period before Match fully separated from the company, illustrating how liability lines can cross corporate timelines as well as product lines.
Either way, the case is likely to shape how courts think about platform responsibility when recommendation systems and moderation practices meet offline harm, particularly where multiple apps are involved and back-end data is shared. The central policy question is straightforward, even if the technology isn’t: when a platform knows a user poses a significant risk, how quickly and completely should that knowledge translate into bans across its entire ecosystem?
If you or someone you know has been affected by sexual violence, help is available: call 800.656.HOPE (4673) or visit rainn.org for free, confidential assistance.