Google finds itself facing a new onslaught of criticism from one of America’s largest publishers. Media C.E.O.s have begun making a moral case about the web, and many now view Google less as a future partner than as an incorrigible content thief: the search giant crawls their sites to build its Search index, a bargain publishers accepted when indexing still sent traffic their way. But now that the same web crawler also feeds Google’s A.I. products, publishers face a bind: blocking the A.I. means disappearing from Search as well.
Vogel, People Inc.’s chief executive, speaking at the high-profile Fortune Brainstorm Tech conference, said the company views Google’s practice as a model of unremunerated content appropriation. He cast the matter as a question of consent and leverage: publishers can’t permit search indexing while refusing A.I. use, because Google doesn’t divide one from the other.

The crawler clash at the core of the battle
The problem is Google’s use of Googlebot, the all-encompassing crawler through which it collects webpages for its products. Vogel said Google’s “one-crawler” model conflates two very different uses: old-school indexing, which still sends traffic, and AI systems that repackage the information it collects, potentially removing any reason to click through to the source. If a publisher blocks the crawler to keep its content out of AI, it also vanishes from Search. “You can’t take our content to compete against us,” he said.
Google has released policy-level controls, such as the Google-Extended robots.txt token, to restrict certain AI training, but publishers argue that those tools don’t cover downstream uses, including within search experiences like AI summaries. The News Media Alliance and other industry groups have warned that such summaries, sometimes called generative answers, can substitute for visits to publishers even when attribution does appear, since users get served the gist right on the results page.
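The conflation is visible in robots.txt itself. In a minimal sketch (the rules below are illustrative, based on Google’s published crawler controls), the only lever that keeps content out of Search is disallowing Googlebot wholesale, while Google-Extended governs only certain AI training uses:

```text
# Disallowing Googlebot removes pages from Search entirely,
# which is the outcome publishers say they cannot afford:
User-agent: Googlebot
Disallow: /

# Google-Extended is a robots.txt control token, not a separate crawler.
# Disallowing it opts content out of some AI training, but publishers
# argue it does not cover every downstream AI use in search:
User-agent: Google-Extended
Disallow: /
```

There is no directive in this scheme that says “index me, but don’t use me for AI,” which is the separation Vogel is asking for.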
The publishers’ traffic dependence is the leverage problem
The share of People Inc.’s readers arriving from Google Search has plummeted, Vogel said, from around two-thirds several years ago to the “high 20s” now. At one point, he has said, Google accounted for 90 percent of the company’s open-web referrals, a testament to how rapidly the landscape has changed as platforms remake how information is found.
Even with that decline, Google continues to drive a significant amount of traffic. That, Vogel argues, is why blocking Googlebot outright won’t be an option for the vast majority of publishers: few can afford to give up double-digit percentages of their audience while contending with AI products trained on the same content.
Blocking AI crawlers to generate deals
People Inc. has also implemented Cloudflare’s AI-crawler blocking to shut out model builders that don’t pay. Vogel described OpenAI as “a very responsible actor,” noted that it has signed a license agreement, and said the company is in conversation with other providers of large language models. The ability to block noncompliant crawlers is “getting AI companies to the table,” said Cloudflare’s chief executive, Matthew Prince, who participated in the discussion, even as the legal boundaries around such tools remain unclear.
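Conceptually, this kind of gatekeeping amounts to matching a request’s User-Agent header against known AI-crawler tokens and refusing the unlicensed ones. A minimal sketch in Python, where the token list, function names, and licensing logic are illustrative assumptions, not Cloudflare’s actual ruleset:

```python
# Illustrative sketch of user-agent-based AI-crawler gatekeeping.
# The tokens below are real crawler identifiers (OpenAI's GPTBot,
# Anthropic's ClaudeBot, Common Crawl's CCBot, ByteDance's Bytespider),
# but the matching and licensing logic here is a simplified example.

AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI-crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

def handle_request(user_agent: str, licensed: frozenset = frozenset()) -> int:
    """Return an HTTP status: 403 for unlicensed AI crawlers, 200 otherwise.

    `licensed` holds tokens for model builders that have signed a deal,
    mirroring the article's point that paying crawlers get let through.
    """
    if is_ai_crawler(user_agent) and not any(t in user_agent for t in licensed):
        return 403
    return 200
```

A licensed crawler, such as one from a company that has signed an agreement, would be added to the `licensed` set and served normally, while ordinary search crawlers are never matched at all.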

The strategy reflects a larger trend in media. The A.P. and Axel Springer have also signed AI licensing agreements, illustrating one path to compensation. Reddit’s reported data-licensing deal with Google shows that tech platforms will pay for high-value content, just not for everyone, and not yet for most publishers.
Industry pushback and a legal morass
Not all publishers see upside in teaming up with AI companies. Janice Min, the editor in chief and C.E.O. of Ankler Media, called Google and Meta long-standing “content kleptomaniacs” and said her company simply blocks AI crawlers entirely. The frustration is exacerbated by shifting referral patterns across search and social, as well as the proliferation of AI overviews that answer a user’s question directly on the results page.
Prince questioned whether legacy copyright doctrine will protect publishers from AI ingestion, since courts tend to treat some transformative uses as fair use. He cited recent settlements and early case law as evidence that the legal landscape is evolving and complicated. Publishers, meanwhile, point to The New York Times’s lawsuit against OpenAI and Microsoft, among other cases, as a sign that courts may begin to define the limits of training and output reuse more assertively.
What Google faces next
Prince predicted that Google will eventually begin paying creators for crawling and using their work to build AI models, echoing past reversals in which platforms struck newsroom deals under regulatory pressure in Australia and Canada. Whether by regulation, by market pressure, or both, the economics of AI-era discovery appear to be on the precipice of change.
For Vogel and his colleagues, the immediate demand is straightforward: uncouple the pipes. Negotiations could proceed on clearer terms if Google split its crawler, or gave publishers real, enforceable controls that let them say yes to Search and no to AI. Until then, publishers will continue to narrow the pipes and demand payment for passage, arguing that the open web can’t survive if its content is farmed without permission to fuel competition against those who planted it.