OpenAI CEO Sam Altman is pushing back on rising scrutiny of artificial intelligence’s environmental footprint, arguing that the energy and water debate needs more nuance and better math. In a recent onstage interview in India, Altman dismissed viral claims about ChatGPT’s water intensity as wildly overstated and said the more meaningful lens is the energy a trained model uses to answer a question, compared with the energy a human would use to do the same task.
Altman acknowledged the total energy demand of AI is climbing quickly, but framed that as a reason to accelerate clean power buildout, not to slam the brakes on AI. He pointed to nuclear, wind, and solar as critical solutions to meet surging compute needs while cutting emissions.

Altman Pushes Back on AI Water-Use Claims
Altman called internet claims that each chatbot prompt devours double-digit gallons of water “totally disconnected from reality.” He noted that hyperscale operators have largely moved away from the most water-intensive evaporative cooling in favor of closed-loop liquid and direct-to-chip cooling, plus more aggressive use of non-potable and reclaimed water where available.
Public disclosures underscore why the topic is heated: Microsoft reported a roughly 34% jump in companywide water use in 2022, citing AI as a contributor, while Google reported an increase of nearly 20% over the same period, with notable spikes in data center regions. Academic researchers, including teams studying “AI water footprints,” have estimated substantial water use for training very large models. But per-query estimates span a wide range because they depend on siting, cooling design, and when the workload runs.
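Part of the spread comes from the accounting itself. On-site cooling water is tracked by the Green Grid’s water usage effectiveness metric (WUE, liters per kWh of IT energy), while many larger estimates also count off-site water embedded in electricity generation; both get multiplied by an assumed energy per query. A minimal back-of-envelope sketch, with every numeric input an illustrative assumption rather than a measured value:

```python
# Back-of-envelope per-query water accounting.
# Every numeric input below is an illustrative assumption, not a measurement.

def water_per_query_ml(it_wh: float, pue: float,
                       wue_l_per_kwh: float, grid_l_per_kwh: float) -> float:
    """On-site plus off-site water per query, in milliliters.

    it_wh          -- IT energy for one query (watt-hours), assumed
    pue            -- power usage effectiveness (total / IT energy)
    wue_l_per_kwh  -- on-site cooling water per kWh of IT energy (Green Grid WUE)
    grid_l_per_kwh -- water embedded in generating each kWh the facility draws
    """
    onsite = (it_wh / 1000) * wue_l_per_kwh            # liters
    offsite = (it_wh * pue / 1000) * grid_l_per_kwh    # liters
    return (onsite + offsite) * 1000                   # liters -> milliliters

# The same hypothetical 0.3 Wh query under two siting/cooling assumptions:
low = water_per_query_ml(0.3, pue=1.1, wue_l_per_kwh=0.2, grid_l_per_kwh=0.5)
high = water_per_query_ml(0.3, pue=1.4, wue_l_per_kwh=1.8, grid_l_per_kwh=3.0)
print(f"~{low:.2f} mL vs ~{high:.2f} mL per query")  # roughly an 8x spread
```

Swap in a heavier model at ten times the energy per query and both figures scale tenfold, which is how equally defensible estimates end up orders of magnitude apart.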
The bigger problem, experts say, is opacity. Unlike emissions reporting, there is no uniform, mandatory standard globally for disclosing data center water and energy per workload. That vacuum leaves room for sensational numbers and conflicting narratives.
The Fair Comparison Debate Over AI Energy Use
Altman argues that critics often weigh the full energy cost of training a model against the energy a human needs to answer a single question, which stacks the deck. His counterpoint: the better comparison is per-task energy once the model is trained. He even extends the analogy to human development, noting that it takes decades of food energy, education infrastructure, and society’s accumulated knowledge to “train” a person.
How does the math look? The answer varies. The human brain runs at roughly 20 watts, remarkably efficient for cognition, but time, tools, and context matter. On the AI side, inference costs swing widely based on model size, prompt length, and hardware. Independent measurements from industry and academia show orders-of-magnitude differences between small, fine-tuned models running on efficient accelerators and frontier models spanning many GPUs. That spread makes any single “per query” number suspect without specifying hardware, model, and load.
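One way to make the “per-task after training” framing concrete is to amortize a one-time training cost over every query the model serves, then compare with a human working at roughly 20 watts for the minutes a task takes. The sketch below shows only the structure of that comparison; all numbers are placeholders, not estimates for any real model:

```python
# Amortized per-query energy: a structural sketch, not a measurement.
# Every numeric input is an illustrative placeholder.

TRAIN_ENERGY_KWH = 10_000_000        # assumed one-time training cost
LIFETIME_QUERIES = 100_000_000_000   # assumed queries served before retirement
INFERENCE_WH = 0.3                   # assumed IT energy per query (watt-hours)

amortized_wh = INFERENCE_WH + (TRAIN_ENERGY_KWH * 1000) / LIFETIME_QUERIES
print(f"Model, per query (training amortized): {amortized_wh:.2f} Wh")  # ~0.40 Wh

# Human side: a ~20 W brain spending five minutes on the same task.
human_wh = 20 * (5 / 60)
print(f"Human brain, 5-minute task: {human_wh:.2f} Wh")  # ~1.67 Wh
```

Under these placeholders the training overhead nearly vanishes per query; move any input by a factor of ten and the comparison can flip, which is exactly why a single headline number misleads.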

AI’s Growing Power Draw in Global Context
The International Energy Agency estimates data centers used around 460 TWh of electricity in 2022 and could reach between 620 and 1,050 TWh by 2026, with AI a major driver. In some hubs, rapid growth has strained grids and pressured rates; Ireland and Northern Virginia have both tightened interconnection and capacity planning as hyperscale campuses expand.
In the United States, the Energy Information Administration pegs data centers at roughly 4% of national electricity consumption and rising. Utilities are rewriting demand forecasts around accelerated AI adoption, with multi-gigawatt request queues now common in fast-growing regions. The upshot: AI’s footprint is big enough to matter for power markets and climate targets, but still small enough to be shaped by policy, procurement, and technology choices.
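Putting the article’s figures side by side gives a rough sense of scale. US retail electricity sales run on the order of 4,000 TWh a year (an external ballpark, not a figure from the agencies cited above), so the 4% share implies roughly 160 TWh for US data centers against the IEA’s ~460 TWh global 2022 estimate. The arithmetic, spelled out:

```python
# Rough scale check using the figures quoted in this article.
US_TOTAL_TWH = 4000        # approximate annual US retail electricity sales (assumed ballpark)
US_DC_SHARE = 0.04         # "roughly 4%" per the EIA figure above
GLOBAL_DC_TWH_2022 = 460   # IEA estimate cited above

us_dc_twh = US_TOTAL_TWH * US_DC_SHARE
print(f"Implied US data center use: ~{us_dc_twh:.0f} TWh/yr")
print(f"Share of the 2022 global total: {us_dc_twh / GLOBAL_DC_TWH_2022:.0%}")
# ~160 TWh, about a third of the 2022 global figure: big enough to matter,
# small enough to be steered by policy, procurement, and technology choices.
```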
Pathways to Cleaner Compute and Lower Emissions
Altman’s prescription centers on more zero-carbon power. That dovetails with hyperscalers’ moves: long-term wind and solar contracts, grid-scale batteries, and growing interest in 24/7 carbon-free energy matching. Nuclear is back in the conversation, too. Altman chairs advanced fission startup Oklo, and Microsoft has explored small modular reactors and even a future fusion supply agreement via Helion. Whether those bets arrive on time is an open question, but the direction is clear.
On the efficiency side, progress continues. Average data center power usage effectiveness (PUE) has improved over the past decade, and direct-to-chip liquid cooling cuts both energy and water use, even in hot climates. Next-generation accelerators and software stacks deliver more inferences per joule. Model architecture is shifting toward specialization, distillation, and retrieval techniques that can slash compute for many enterprise tasks without sacrificing quality.
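For reference, PUE is total facility energy divided by IT equipment energy, so a PUE of 1.2 means 20% overhead for cooling and power conversion. A minimal sketch of how facility overhead and hardware efficiency compound, with both generations’ numbers assumed for illustration:

```python
# How facility overhead (PUE) and hardware efficiency compound.
# The old/new figures are illustrative assumptions, not vendor data.

def facility_wh_per_inference(it_wh: float, pue: float) -> float:
    """Total energy drawn from the grid per inference, given PUE = total / IT energy."""
    return it_wh * pue

old = facility_wh_per_inference(it_wh=2.0, pue=2.0)   # older accelerator, older facility
new = facility_wh_per_inference(it_wh=0.3, pue=1.1)   # efficient accelerator, modern facility
print(f"old ~{old:.2f} Wh, new ~{new:.2f} Wh, ~{old / new:.0f}x better per inference")
```

The gains multiply: halving IT energy and trimming PUE together deliver more than either improvement alone.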
What to Watch Next on AI Energy and Water Policy
Expect a push for standardized reporting: per-inference energy and water labels for APIs, location-based and time-matched emissions accounting, and clearer separation of training versus inference footprints. The European Union’s evolving rules, under the recast Energy Efficiency Directive, already require large data centers to report detailed energy and water metrics, and similar frameworks could spread.
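No such label standard exists today; purely as a hypothetical illustration, the fields a per-inference disclosure might carry could look like the following, with every name and value invented here:

```python
# Hypothetical per-inference disclosure label.
# No standard like this exists yet; all field names and values are invented.

example_label = {
    "energy_wh": 0.42,            # IT plus facility-overhead energy for this request
    "water_ml": 0.15,             # on-site cooling water attributed to this request
    "pue": 1.12,                  # facility power usage effectiveness
    "grid_region": "IE",          # region for location-based accounting
    "carbon_g_co2e": 0.11,        # time-matched, location-based emissions
    "includes_training": False,   # inference-only vs. training-amortized figure
}
print(example_label)
```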
Altman’s core message is less a dismissal than a reframing: yes, AI uses a lot of energy, but so do people and the systems that enable human work. The policy question is whether AI’s productivity gains can be delivered with a cleaner, more transparent power mix than the status quo. With grid investment, better disclosure, and relentless efficiency work, that outcome is still on the table.