Cloudflare chief executive Matthew Prince is calling on the UK competition regulator to force Google to split its search crawler from the systems that feed its data-hungry artificial intelligence, arguing that the current bundling gives the search giant a deeply entrenched edge and leaves publishers with an impossible dilemma.
Prince says he has referred the case directly to the Competition and Markets Authority, which recently brought Google’s search and advertising business under extensive oversight through Britain’s new digital markets regime. The pitch is straightforward: make Google run its AI harvesting under a separate user agent, with its own permissions and consequences, so that opting out of AI training doesn’t also mean vanishing from search or breaking ad delivery.
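There is already a rough template for this in the robots.txt protocol. Google publishes a Google-Extended token, which it says governs use of content for its Gemini models without affecting Search, and rival crawlers such as OpenAI’s GPTBot identify themselves the same way. A minimal sketch of a publisher policy under that scheme; the tokens are real, the choices are illustrative:

```
# Hypothetical publisher policy: visible in search, closed to AI training.
User-agent: Googlebot
Allow: /

User-agent: Google-Extended
Disallow: /

User-agent: GPTBot
Disallow: /
```

Prince’s point is that the separation is incomplete: Google-Extended is a control token rather than a distinct crawler, the fetching itself still happens under Google’s existing user agents, and critics note it does not govern every AI surface Google builds on top of its index.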

Why Unbundling Google’s Crawlers Matters
Today, a great deal of the web is found and monetized through Google’s index. Publishers widely report that blocking Google’s main crawler to keep content out of AI ingestion would also cost them search visibility. Prince argues the tie-in runs even deeper, alleging that blocks can also disrupt ad measurement and brand-safety checks across Google’s advertising stack, a nonstarter for many commercial sites.
Companies that compete with Google in AI, among them OpenAI, Anthropic and Perplexity, are increasingly expected to identify their bots, strike deals or respect paywalls. Google, on the other hand, can rely on its omnipresent crawler to reach content that is used both for ranking results and for fueling AI-powered features. Publishers are effectively strong-armed into accepting every use if they want the same search-driven audiences and revenue, Prince argues.
Cloudflare, which runs one of the world’s largest networks and has recently opened a marketplace to let sites charge AI bots for access, says it fights this imbalance on its network all day long.
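The mechanics of charging a crawler are simple in outline: identify the bot, then refuse it with HTTP 402 Payment Required, the long-dormant status code Cloudflare has said its pay-per-crawl experiment revives, unless the crawler presents proof of a license. A toy origin-server sketch of that flow, not Cloudflare’s implementation; the payment header and token scheme are invented for illustration:

```python
# Toy sketch of the pay-per-crawl idea (not Cloudflare's implementation):
# known AI crawlers get HTTP 402 Payment Required unless they present
# proof of payment. The header name and tokens are hypothetical; the
# user agent strings are real, documented crawler names.
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot")
PAID_TOKENS = {"demo-license-123"}  # stand-in for a real licensing check

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        token = self.headers.get("Crawler-Payment-Token", "")  # hypothetical header
        if any(bot in agent for bot in AI_AGENTS) and token not in PAID_TOKENS:
            self.send_response(402)  # Payment Required
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"AI crawling requires a license for this site.\n")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Regular content for readers and search crawlers.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PayPerCrawlHandler).serve_forever()
```

In Cloudflare’s case the enforcement happens at its network edge rather than at the publisher’s origin, but the contract is the same: no license, no content.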
Prince adds that a large percentage of AI companies are Cloudflare customers, meaning the firm has visibility into scraping patterns and is able to present empirical evidence to regulators about the relative behavior of Google’s systems.
The Competition Lens in the UK’s Digital Markets
The UK’s digital markets regime gives the CMA’s Digital Markets Unit the power to designate companies as having strategic market status and then impose conduct requirements that stop them leveraging power in one market to entrench it in another. With Google’s UK search share consistently above 90%, according to StatCounter, the regulator has scope to order remedies that let rivals compete on the merits in adjacent spaces, including AI-powered ‘answers’ and content summarization.
Unbundling crawlers is a classic interoperability fix: distinct identities, separate controls, and transparent, non-discriminatory behaviour, so that revoking consent for one use does not degrade unrelated ones. In practice, this might mean a separate AI user agent, standardized robots and meta directives for AI, transparent attribution and provenance, and non-retaliation against sites that refuse AI training while staying in the search index, as in the sketch below.
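The practical test of that design is that a site’s consent reads independently per identity. A minimal check using Python’s standard-library robots.txt parser; “Google-AI-Training” is a hypothetical token standing in for whatever name a separated crawler would carry:

```python
# Sketch: under an unbundled scheme, search indexing and AI training are
# two independent robots.txt decisions. "Google-AI-Training" is a
# hypothetical agent name; Googlebot is real.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot
Allow: /

User-agent: Google-AI-Training
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/story"))           # True
print(rp.can_fetch("Google-AI-Training", "https://example.com/story"))  # False
```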

Publishers Fear an All-or-Nothing Choice on AI Crawling
For news organizations and commerce sites, search is a lifeline. Industry surveys by the Reuters Institute and trade groups consistently show search as a top discovery channel, and publishers routinely credit a meaningful share of digital revenue to search-driven audiences. Prince frames the gamble in stark terms: you could lose all Google visibility overnight and make no revenue, a lever no AI startup can pull.
Other executives have voiced the same concern. Neil Vogel, the chief executive of Dotdash Meredith, has slammed the combination of search and AI crawling and said his company is deploying tools to block non-paying bots while negotiating licenses with big model providers. Imperva’s Bad Bot Report, which has pegged automation at around half of all internet traffic, illustrates how quickly scraping can transform publisher economics if left unaddressed.
What Unbundling Could Mean for Publishers and Users
A new user agent string alone would not be taken as a serious cure. Experts say the remedy would need a package of measures: granular opt-ins across AI capabilities; human- and machine-readable explanations of how data are used; robust audit trails for training sets; and fair, standardized licensing pathways, whether through direct deals, collective management or marketplaces like the one Cloudflare is testing.
Crucially, unbundling would rule out tying: no search ranking penalties, ad stack disruptions or feature downgrades for sites that refuse AI ingestion. It would also require reporting on crawl volumes and model inputs, so that regulators and rights holders can monitor compliance. Such tools are old friends in competition enforcement, repurposed for the data-driven quirks of generative AI.
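The reporting half of that package needs no exotic machinery; crawl volumes, at least, can be tallied straight from access logs. A sketch assuming the common combined log format, where the user agent is the final quoted field; the file name is a placeholder and the agent strings are whatever the server recorded:

```python
# Sketch of crawl-volume reporting: tally requests per crawler from a
# standard access log. Assumes the combined log format, where the user
# agent is the last quoted field; "access.log" is an illustrative path.
import re
from collections import Counter

AGENT_RE = re.compile(r'"([^"]*)"\s*$')  # last quoted field = user agent

def crawl_report(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = AGENT_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for agent, hits in crawl_report("access.log").most_common(10):
        print(f"{hits:8d}  {agent}")
```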
Google’s Likely Counterpoints and Current Controls
Google has said it provides publisher controls and honors robots directives, and that search and AI features deliver consumer value and drive traffic back to the open web. The company has also been introducing new labels and settings around AI summaries. The question for the CMA is whether those measures are clear enough, effective enough and sufficiently independent of search, and whether they stop search dominance from being leveraged into AI.
The Stakes for AI and the Future of the Open Web
Prince’s crusade thus raises a broader policy question: if a single company can lawfully fuse the world’s most powerful discovery engine with the data appetite of frontier models, competitors may never assemble comparable corpora without paying for access that the incumbent acquired as a byproduct of search. Unbundling crawlers would not pick winners; it would set the terms of competition.
For publishers, distinguishing more clearly between indexing and AI training could unlock a functioning market for data, with thousands of websites licensing to thousands of AI companies, while preserving the oxygen that search provides. For regulators, it is a focused remedy that fits within existing digital markets instruments. And for users, it keeps the web open, competitive and worth contributing to in the first place.