The New York Times has sued the AI search startup Perplexity, alleging that it has built a series of commercial products on the newspaper's journalism without permission or payment. Central to the complaint is the allegation that Perplexity's systems ingest and repackage Times reporting, including reporting behind its paywall, in ways that substitute for the originals and undercut the outlet's subscription model.
Details of the lawsuit and claims against Perplexity
The case centers on retrieval-augmented generation, or RAG, the technique Perplexity uses to fetch content from the web at query time and generate answers in chat-style interfaces and in its Comet browser assistant. The Times says those outputs often replicate its articles verbatim or nearly so, or summarize them in such detail that they function as substitutes rather than referrals to the source.
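For readers unfamiliar with the mechanism, the sketch below shows how a RAG pipeline works in general terms. The keyword retriever and the generate() stub are illustrative stand-ins under assumed data shapes, not Perplexity's actual architecture.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Illustrative only:
# the keyword retriever and generate() stub are hypothetical stand-ins,
# not Perplexity's production pipeline.

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, docs: list[dict]) -> str:
    """Assemble retrieved passages and the question into one prompt."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in docs)
    return f"Answer using only the sources below.\n{context}\n\nQ: {query}"

def generate(prompt: str) -> str:
    """Stub standing in for a language-model call."""
    return f"(model output conditioned on)\n{prompt}"

corpus = [
    {"source": "example.com/a", "text": "RAG systems fetch documents at query time."},
    {"source": "example.com/b", "text": "The filing was made in federal court."},
]
print(generate(build_prompt("How do RAG systems fetch documents?", corpus)))
```

The legal friction arises at the retrieval and prompt-assembly steps: whatever text is fetched, including paywalled text, is handed to the model verbatim and can reappear in its output.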

But the lawsuit isn't only about copying: it also accuses Perplexity of surfacing paywalled material to non-subscribers and, on occasion, fabricating statements falsely attributed to the Times, conduct the paper says damages both its brand and the value of its original reporting.
The lawsuit seeks monetary damages as well as a court order barring further unlicensed use of the Times's content. It caps roughly a year and a half of demands, including a cease-and-desist letter, that Perplexity stop using Times journalism while the compensation dispute remained unresolved.
The suit is the publication's second major case against an AI company; it is already in court with OpenAI and Microsoft over the training of models on millions of its articles.
Perplexity’s business model and its publisher partnerships
Perplexity has tried to position itself as a partner to newsrooms, introducing a Publishers' Program that shares ad revenue with participating outlets and Comet Plus, which passes 80 percent of its $5 monthly fee to partners on the platform. The company has also announced a multi-year licensing deal with Getty Images and distribution deals with publishers including Gannett, TIME, Fortune and the Los Angeles Times.
The Times's position, however, is that voluntary programs are no substitute for enforceable licenses covering both training data and real-time use of its journalism, especially when a system can surface content from behind paywalls or republish distinctive reporting with little modification.

Wider publisher pushback and rising legal challenges
The suit arrives amid a wave of media actions over unlicensed AI use. Perplexity has been sued by other publications, the Chicago Tribune among them, while News Corp, Encyclopedia Britannica, Merriam-Webster, Nikkei, Asahi Shimbun and Reddit have filed lawsuits or leveled public accusations. Wired, Forbes and other outlets have reported that Perplexity plagiarized content and crawled sites that had signaled they did not want to be scraped, claims corroborated by the internet infrastructure firm Cloudflare.
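As an illustration of the crawling norm at issue, the sketch below shows how a compliant crawler consults robots.txt before fetching a page, using Python's standard urllib.robotparser. The bot name and rules are hypothetical examples, not any real publisher's policy.

```python
# How a well-behaved crawler checks robots.txt before fetching a page.
# The bot name and rules below are hypothetical, not a real site's policy.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: ExampleBot
Disallow: /articles/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)  # in practice: rp.set_url("https://example.com/robots.txt"); rp.read()

# A compliant crawler skips disallowed paths; the reported behavior at
# issue is ignoring exactly this kind of signal.
print(rp.can_fetch("ExampleBot", "https://example.com/articles/story"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/about"))           # True
```

Notably, robots.txt is advisory rather than technically enforced, which is why publishers have turned to infrastructure firms like Cloudflare, and to courts, for harder guarantees.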
For publishers, the legal strategy is as much about leverage as law. While some have cut deals with AI firms (OpenAI has licensing agreements with the Associated Press, Axel Springer, Vox Media and The Atlantic), others are pressing courts to define where the line between fair use and uncompensated extraction of value should fall. The Times itself has reportedly signed a separate licensing deal with Amazon covering AI development.
Fair use questions and potential risks to AI search
AI companies often contend that training models on publicly available text is fair use, citing older precedents that allowed search indexing and nonexpressive analysis. Courts are now being asked to draw finer distinctions: between training and output reproduction, between public webpages and paywalled content, and between summaries that drive traffic to a source and synthesis that displaces it.
Analogous litigation brought by authors over pirated books yielded a $1.5 billion settlement from Anthropic after a court indicated that legally acquired material may be treated differently from pirated copies. Though not dispositive here, that outcome speaks to the legal peril when provenance and permissions are unsettled.
The Times–Perplexity fight underscores a more acute hazard: real-time systems that scrape and republish journalism can cross from abstract disputes over training into concrete claims of copying. If courts accept that RAG output can stand in for original articles, particularly when it circumvents paywalls, AI search products could face injunctions or be forced to re-engineer around tighter citation, link-outs, licensing, and compliance with robots.txt and paywall controls.
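What such re-engineering could look like is sketched below: a hypothetical output guardrail that refuses to reproduce paywalled text, caps excerpt length and always attaches a link-out. The field names and the word cap are assumptions for illustration, not anything proposed in the filing.

```python
# Hypothetical output guardrail for an AI search product: never render
# paywalled text, cap quoted excerpts, and always link out to the source.
# Field names and the word cap are illustrative assumptions.

MAX_EXCERPT_WORDS = 40  # assumed cutoff between "snippet" and "substitute"

def render_answer(doc: dict) -> str:
    """Render a retrieved document under citation and paywall rules."""
    if doc.get("paywalled"):
        # Don't reproduce restricted content; refer the user out instead.
        return f"This source is paywalled. Read it at {doc['url']}"
    words = doc["text"].split()
    excerpt = " ".join(words[:MAX_EXCERPT_WORDS])
    ellipsis = "..." if len(words) > MAX_EXCERPT_WORDS else ""
    return f'"{excerpt}{ellipsis}" ({doc["source"]}: {doc["url"]})'

print(render_answer({
    "paywalled": True,
    "source": "Example Times",
    "url": "https://example.com/story",
    "text": "Full article text would go here.",
}))
```

The design tension is evident even in a toy version: the stricter the excerpt cap, the less useful the answer, which is why publishers argue licensing, not snippet-trimming, is the durable fix.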
What’s next in the court battle and possible outcomes
Expect dogged discovery into how Perplexity sources, caches and renders news content; how its systems handle paywalled pages; and what guardrails exist to prevent hallucinated attributions. Expect signals, too, from regulators and standards bodies as the industry scrambles toward norms on attribution, access controls and revenue sharing.
However the case concludes, whether with a high-stakes decision or a settlement, the implications will ripple well beyond one startup. The outcome could establish pragmatic ground rules for how AI assistants credit, compensate and coexist with the journalism they increasingly summarize on behalf of millions of people.
