OpenAI’s flagship ChatGPT has seen an abrupt 6% dip in usage since Google’s Gemini 3 debuted, new third‑party estimates show, the latest sign of a quickening AI platform war. The dip, spotted by Similarweb and reported by industry observers, comes as OpenAI leaders are said to have declared a “code red” internally in response to intensifying competition.
What the data shows about ChatGPT’s post-Gemini dip
Analyst Deedy Das, a former Google engineer, cited Similarweb data to illustrate the decline on ChatGPT’s website: “ChatGPT.com — its average daily visits dropped from 203 million to 191 million.”

Das wrote: “The total page views were down from over a billion estimated during Gemini launch week (10/11–10/17) to just under half a billion page visits this past week (10/18–10/24).”
Related engagement metrics tell a similar story: pages per visit and average time on site slipped over the same period, and third‑party estimates of app usage showed no meaningfully different pattern.
Taken together, the figures point to roughly 12 million fewer daily visits, if the estimates hold, a meaningful swing for a service at ChatGPT’s scale. Even over a short window, a drop of that size is hard to ignore and will be watched closely by advertisers, enterprise buyers, and developers.
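As a rough back‑of‑the‑envelope check, and assuming only the Similarweb figures quoted above (which are themselves estimates), the slide from 203 million to 191 million average daily visits works out to about a 5.9% relative drop, consistent with the reported 6% dip:

```python
# Back-of-the-envelope check using the Similarweb estimates cited above
# (average daily visits: ~203M before, ~191M after). Figures are third-party
# estimates, not reported numbers from OpenAI.
before_visits = 203_000_000
after_visits = 191_000_000

absolute_drop = before_visits - after_visits        # ~12 million fewer daily visits
percent_drop = absolute_drop / before_visits * 100  # relative change vs. the earlier period

print(f"Absolute drop: {absolute_drop:,} daily visits")  # 12,000,000
print(f"Relative drop: {percent_drop:.1f}%")             # ~5.9%, i.e. roughly 6%
```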
“Competition from Google’s Gemini 3 and Anthropic’s Claude is pushing us,” according to an internal OpenAI memo obtained by The Wall Street Journal. That framing lines up with a broader market read: benchmarks and user sentiment alike can shift overnight when a new model meaningfully raises the bar.
Why Google’s Gemini 3 is moving the AI usage needle
Early rankings on unofficial leaderboards, including the LMSYS Chatbot Arena, show Gemini 3 rivaling or even outperforming competitors on a range of tasks. Benchmarks aren’t everything, but they matter, particularly to developers who triangulate real‑world capability from synthetic tests, live “arena” face‑offs, and hands‑on prototyping. Google’s distribution muscle and integration across Workspace, Search features, and Android give Gemini 3 immediate touchpoints for users to try it out and stick around.
Feature‑level differentiation also matters. Faster multimodal comprehension and longer context windows, combined with stronger reasoning on complex prompts, could nudge power users toward a new default. Even small reductions in friction, such as fewer refusals on safe prompts, more efficient tool use, or better code generation, add up to an experience that feels superior for everyday work.

Is this a temporary blip or a real competitive turning point
Seasonality is a useful caveat: major holidays tend to depress traffic, and ChatGPT has seen temporary dips before. In previous years, Similarweb has observed traffic to many sites falling around school breaks and then recovering. The big question is whether the current decline stabilizes, reverses, or keeps sliding as Gemini 3 works its way into more products and geographies.
One other factor is model‑preference volatility. When OpenAI switched focus from one flagship model to another this year, some users accused the company of over‑caution and a shift in tone, even as capability grew. Power users and teams will migrate work elsewhere if a competitor seems both smarter and more reliable. Switching costs are lower for generative AI than in most software: exporting prompts, trying out a new set of APIs, and testing a new assistant can be done in an afternoon.
Enterprise stakes and the infrastructure race
This competition goes beyond consumer traffic. Anthropic says it has hundreds of thousands of business customers, and Gemini 3 is squarely targeted at enterprise use cases with security, governance, and configurable guardrails. OpenAI, meanwhile, has broadened its partnerships with leading infrastructure providers and chipmakers to scale capacity, reduce latency, and lower inference costs, all factors that affect product quality and margins.
For big customers, purchasing decisions come down to more than leaderboards. Total cost of ownership, uptime, compliance certifications, built‑in privacy controls, and fine‑tuning options are major considerations. A misstep in safety calibration or inconsistent policy enforcement can push risk‑averse buyers to trial alternatives, while a credible roadmap and clear evals can help restore confidence quickly.
Key signals to watch next as competition intensifies
Near term, watch what happens once the holiday effect wears off and OpenAI ships new models, to see whether ChatGPT traffic recovers. If the decline persists over several weeks, it would suggest users are shifting some tasks to Gemini 3 or splitting workflows across models.
Keep an eye on developer behavior: API call volume, framework integrations, and participation in benchmark communities like LMSYS may presage where enterprise pilots land. If OpenAI closes capability gaps, tightens latency, and rebalances safety against usability, usage could snap right back. If Gemini 3 keeps posting marquee wins and sinks even deeper hooks into Workspace and Android, Google will capture more of those everyday AI interactions.
The headline is straightforward but significant: a 6% swing at OpenAI’s scale is a competitive shot across the bow. Whether this becomes a pattern will depend on how rapidly the leaders can out‑ship one another on performance, cost, and trust.