Two new windows into real-world AI behavior speak volumes: ChatGPT is the world’s writing coach and research buddy, while Claude is increasingly the delegate you can hand a task to and walk away from. Fresh analyses from OpenAI and Anthropic highlight not just how people are using these systems, but why their use patterns are diverging, and what that portends for the next stage of AI at work and in the home.
What the data shows
Non-work usage has overtaken work usage across millions of ChatGPT conversations, OpenAI’s research shows. In June 2024, some 47% of messages were work related; by June 2025 that share had fallen to 27%, even as activity exploded from about 451 million to nearly 2.6 billion daily messages. Information seeking, practical guidance and writing together now account for almost 80% of conversations. Coding sits at about 4.2%.

OpenAI also categorizes activity as Asking, Doing and Expressing: 49% Asking (guidance or facts), 40% Doing (producing outputs) and 11% Expressing (opinions or feelings). At work, Doing jumps: 56% of work messages fall into that bucket. Even so, more than 40% of work use is writing, and over two-thirds of that is editing text that people wrote themselves. Translation: ChatGPT is usually the second pair of eyes, not the sole author.
Asking vs. doing, automating vs. augmenting
Anthropic’s Economic Index traces another axis: automation vs. augmentation. Its analysis of Claude transcripts reveals a marked rise in directive automation, where users deliver a single instruction and expect a completed result, growing from 27% to 39% of conversations between early winter and early spring of 2025. For the first time, automation has surpassed augmentation (49.1% versus 47%).
The contrast is striking. Asking (guidance, explanations, polishing) is ChatGPT’s center of gravity. Claude’s is moving toward directive Doing: users trust it to perform discrete work with little back-and-forth. That dichotomy reflects two kinds of expectations: collaboration with ChatGPT, delegation to Claude.
Why the behaviors diverge
Product design nudges behavior. Claude’s long context windows, strong summarization and emerging agentic tooling make it feel naturally suited to the “do this for me” assistant role, particularly inside workflows. ChatGPT’s large consumer footprint and conversational chops make it well-suited for tutoring, brainstorming, and editorial assistance. Both models can do both jobs, but defaults matter. When people expect a dialogue, they iterate; when they expect an operator, they delegate.
Trust dynamics matter, too. Anthropic’s numbers imply that users increasingly feel comfortable letting Claude run to completion, especially on highly structured tasks. OpenAI’s breakdown suggests that ChatGPT is relied upon as a tutor and editor across a broader swath of everyday life, where guardrails and tone matter as much as, if not more than, raw execution.
Enterprise patterns vs. the coding reality
Among companies, Claude use is largely automated. Anthropic reports that 77% of API customer interactions show automation patterns, most of them directive. Coding is the biggest driver: nearly 44% of Claude API traffic comes from computer and mathematical tasks, compared with 36% on Claude’s consumer interface. Claude.ai itself leans more toward education and editing.
ChatGPT’s enterprise story looks different. At work, it’s almost entirely about writing and editing, document clean-up and quick snippets of research. The paradox is instructive: even technical teams regularly rely on ChatGPT for narrative tasks (PRDs, release notes, stakeholder updates) while wiring Claude into pipelines to handle code and data chores. Developer surveys and industry reports from the likes of Stack Overflow and McKinsey reflect this contrast: widespread adoption for everyday productivity, deeper integration where automation wins on speed and repeatability.

Geography, demographics, and equity
OpenAI projects 700 million weekly ChatGPT users as of mid-2025, with faster relative growth in low- and middle-income countries. The user base has broadened: users with typically masculine names made up about 80% of early usage, a share that fell to 48% by June 2025 as activity grew among people with typically feminine names. Almost half of adult messages come from people under 26, and highly educated, higher-paid workers are far more likely to use ChatGPT for work.
Anthropic says the United States contributes 21.6% of Claude usage, but Israel leads on a per-capita basis, with an AI Usage Index around 7, roughly seven times what population alone would predict.
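That per-capita index is, in effect, a ratio of shares. A minimal sketch of the arithmetic (the function name and the sample figures are illustrative assumptions, not numbers from Anthropic’s report):

```python
def ai_usage_index(usage_share: float, population_share: float) -> float:
    """AI Usage Index: a country's share of total usage divided by its
    share of population. An index of 1.0 means usage is exactly
    proportional to population; above 1.0 means outsized usage."""
    return usage_share / population_share

# Illustrative only: a country with 0.7% of usage but 0.1% of the
# relevant population would score an index of about 7.
print(round(ai_usage_index(0.007, 0.001), 1))
```

The same ratio works at any geographic level, which is how state-level figures like DC’s can be compared against national ones.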
Within the US, 25.3% of activity comes from California (much of it IT-related work); on a per-capita basis, Washington, DC leads with an index approaching 3.82, driven mostly by document editing, information delivery and job applications.
There’s a cautionary macro trend: Anthropic’s analysis correlates 1% higher GDP per capita with a 0.7% higher usage index, hinting that richer places may capture outsized productivity gains. Lower-income markets lean heavily on coding work (more than 50% of usage in India), while higher-income markets spread into broader, collaborative use. This echoes trends highlighted by academic initiatives like Stanford’s AI Index: capabilities spread globally, but the benefits accumulate where skills and infrastructure are already in place.
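One hedged way to read that 1%-to-0.7% figure is as a constant-elasticity (log-log) relationship; the functional form here is an assumption for illustration, since the analysis reports only the correlation:

```python
def predicted_index_ratio(gdp_per_capita_ratio: float,
                          elasticity: float = 0.7) -> float:
    """Under an assumed constant-elasticity relationship, a country with
    k times another's GDP per capita is predicted to have roughly
    k ** elasticity times its AI usage index."""
    return gdp_per_capita_ratio ** elasticity

# Illustrative: doubling GDP per capita predicts roughly a 1.62x
# higher usage index at an elasticity of 0.7.
print(round(predicted_index_ratio(2.0), 2))
```

Because the exponent is below 1, predicted usage grows with income but less than proportionally, which is consistent with the correlation being suggestive rather than a hard law.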
What this means — and what to do next
Actual usage indicates a persistent segmentation. ChatGPT has become a constant companion for learning, ideas, and prose-polishing: a cognitive prosthetic in everyday life. Claude is becoming a trusted delegate for well-scoped work, especially in engineering and operations. For teams, the pragmatic move is to match tool to task: use ChatGPT where you want to increase the quality and speed of thought and expression; wire in Claude where you can specify desired outcomes and measure them.
The danger isn’t too much use; it’s ill-fitting use. Automating poorly defined work can degrade quality; under-automating repeatable tasks leaves value on the table. The winning skill now isn’t “prompting” but task design: stating objectives, constraints and validation procedures clearly. As these systems gain agency, the premium shifts to oversight: fact-checking, evaluation and safe escalation.
The headline is straightforward: people aren’t just “chatting” with AI anymore. They are either consulting it or deputizing it. Knowing which mode you’re in, and selecting the right model for that mode, is what separates novelty from sustained productivity.