OpenAI chief executive Sam Altman pushed back hard against mounting scrutiny over artificial intelligence’s thirst for water, dismissing viral claims that every chatbot prompt guzzles gallons as “totally fake.” Speaking at a public Q&A in India, Altman argued the most salient comparison is overall energy and resource use once a model is trained, not sensational per‑query tallies, and said the infrastructure partners that run OpenAI’s workloads have moved away from earlier, water-intensive cooling methods.
What Altman Said And Why It Matters For AI Water Use
Altman framed criticisms through an efficiency lens: training a human takes decades of food and energy, he said, whereas a trained model can answer questions at a fraction of the ongoing cost. That analogy drew immediate pushback from technologists who cautioned against equating human development with machine inference. The debate lands amid a historic buildout of AI data centers and growing concern from communities near those facilities about water withdrawals, noise, and grid stress.
He also acknowledged that earlier use of evaporative cooling—a common technique that trades water for energy savings—contributed to higher consumption but said OpenAI’s partners have pivoted to approaches that use far less potable water. Altman characterized headlines about “gallons per query” as unmoored from the realities of modern operations.
What The Data Shows On AI And Water Consumption
Industry disclosures do show a rapid rise in water use as AI workloads scale. Microsoft’s environmental report attributed a 34% year‑over‑year jump in the company’s 2022 water consumption—roughly 1.7 billion gallons in total—in part to AI research and data center expansion with OpenAI. Google’s environmental data reported higher overall withdrawals in 2022 as it ramped up advanced computing, with notable community scrutiny around facilities in Oregon and the US Midwest.
Academic work has tried to translate these system‑level impacts into relatable terms, though the nuance often gets lost. A widely cited analysis from researchers at the University of California, Riverside estimated that large models’ training and inference can indirectly consume significant freshwater depending on location, season, and cooling design. Their estimates—sometimes summarized online as “a bottle of water every few dozen prompts”—were scenario‑based, not blanket figures, and assumed specific grid mixes and cooling choices. In other words, “water per query” is not a fixed number; it varies dramatically by time of day, region, and facility design.
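To see why, a back-of-envelope sketch helps. The arithmetic below is purely illustrative: the per-query energy figure and the water-usage-effectiveness (WUE) values are assumptions chosen to show the spread between cooling designs, not measured numbers from any provider.

```python
# Hypothetical arithmetic showing why "water per query" is not a fixed
# number: the same query consumes very different amounts of water
# depending on the site's cooling design. All figures are assumptions.

def water_per_query_ml(energy_per_query_wh: float, wue_l_per_kwh: float) -> float:
    """Estimate on-site cooling water per query, in milliliters.

    energy_per_query_wh: assumed energy per inference request (watt-hours)
    wue_l_per_kwh: site Water Usage Effectiveness (liters per kWh)
    """
    kwh = energy_per_query_wh / 1000.0
    return kwh * wue_l_per_kwh * 1000.0  # liters -> milliliters

# The same assumed 0.3 Wh query at three hypothetical sites:
scenarios = {
    "evaporative cooling, arid summer": 2.0,   # L/kWh (assumed)
    "industry-average mixed cooling":   0.5,   # L/kWh (assumed)
    "closed-loop / dry cooling":        0.05,  # L/kWh (assumed)
}
for site, wue in scenarios.items():
    print(f"{site}: {water_per_query_ml(0.3, wue):.3f} mL per query")
```

Under these assumptions the spread is 40x between the thirstiest and driest designs, before even accounting for seasonal swings or indirect water used in power generation.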
The physical context also matters. Many hyperscale sites historically chose evaporative cooling to save electricity, especially in arid regions where hot, dry air makes evaporation efficient—but water‑intensive. As AI training clusters densify, the tradeoffs between kilowatt‑hours and kilogallons have become a flashpoint for planners and local utilities.
Cooling Tech And Cleaner Power Are Shifting The Curve
Operators are now accelerating alternatives that cut freshwater use. These include closed‑loop chillers, direct‑to‑chip liquid cooling with minimal make‑up water, heat reuse to district networks, and sourcing non‑potable supplies such as reclaimed wastewater. Coastal sites can tap seawater for heat rejection. The net effect: where adopted, potable‑water draw per unit of compute can fall sharply, though energy demand—and its carbon intensity—must still be managed.
Altman said the longer‑term answer is rapid scale‑up of low‑carbon electricity—citing nuclear, wind, and solar—to meet surging AI demand. That addresses emissions more than water directly, but greener grids enable operators to choose mechanical cooling over evaporative systems without worsening climate impacts. Several major cloud providers have also set targets to replenish more water than they consume in stressed basins, though progress varies by region and verification method.
The Metrics Gap Drives Confusion In AI Water Reporting
Part of the controversy stems from inconsistent reporting. Many data center operators publish power metrics but fewer disclose water usage effectiveness (WUE) at the site level, and even fewer break out potable versus non‑potable sources or seasonal swings. Without standardized, audited disclosures, per‑query talking points—pro or con—tend to oversimplify a system‑wide footprint that depends on siting decisions, hourly grid mixes, and cooling design.
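For reference, WUE as defined by The Green Grid is simply annual site water use divided by IT equipment energy, with lower values better. A minimal sketch, using invented sample figures:

```python
# Sketch of the Water Usage Effectiveness (WUE) metric: annual site
# water use divided by IT equipment energy. Lower is better.
# The facility figures below are invented for illustration.

def wue(annual_water_liters: float, it_energy_kwh: float) -> float:
    """WUE in liters per kWh of IT energy."""
    return annual_water_liters / it_energy_kwh

# A hypothetical facility drawing 100 million liters against 200 GWh of IT load:
print(wue(100_000_000, 200_000_000))  # -> 0.5 L/kWh
```

Even this simple ratio hides the distinctions critics care about: a site-level, seasonally broken-out WUE that separates potable from non-potable sources tells a very different story than a single annual average.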
Experts from organizations such as the Uptime Institute and The Green Grid have urged comparable WUE reporting alongside energy metrics, plus transparent accounting for water risk in local watersheds. Researchers behind “Making AI Less Thirsty” likewise recommend dynamic scheduling of AI jobs to cooler hours and water‑abundant regions, aligning compute with both clean power and sustainable water availability.
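The scheduling idea can be sketched as a placement policy: among candidate sites, pick the one whose current conditions minimize a weighted blend of water intensity and carbon intensity. This is an illustrative toy, not the researchers' actual method; the site names, WUE values, and grid figures are all hypothetical.

```python
# Illustrative "water-aware" job placement in the spirit of the
# recommendations above: prefer the site minimizing a weighted blend of
# normalized water intensity (WUE) and grid carbon intensity.
# Site names and numbers are hypothetical.

def pick_site(sites: dict, water_weight: float = 0.5) -> str:
    """Return the site name minimizing a normalized water+carbon score.

    sites: name -> (wue_l_per_kwh, grid_gco2_per_kwh)
    water_weight: 0..1, how much to prioritize water over carbon
    """
    max_wue = max(w for w, _ in sites.values()) or 1.0
    max_co2 = max(c for _, c in sites.values()) or 1.0

    def score(v):
        w, c = v
        return water_weight * (w / max_wue) + (1 - water_weight) * (c / max_co2)

    return min(sites, key=lambda name: score(sites[name]))

sites = {
    "arid-evaporative":      (2.0, 300),  # cheap power, thirsty cooling (assumed)
    "temperate-dry-cooled":  (0.1, 450),  # drier but dirtier grid (assumed)
    "coastal-seawater":      (0.0, 350),  # near-zero potable draw (assumed)
}
print(pick_site(sites))  # -> coastal-seawater under equal weighting
```

A real scheduler would use hourly forecasts of grid mix and cooling load rather than static figures, but the tradeoff structure is the same: compute placement becomes a joint water-and-carbon optimization rather than a pure cost decision.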
Bottom Line On AI’s Water Use, Cooling, And Transparency
Altman is right that “water per query” soundbites are misleading and that technology choices can slash consumption. But industry data and community experience also make clear that AI’s overall water footprint is growing alongside demand. The real test will be transparency and engineering: publish comparable water metrics, site responsibly, prioritize non‑potable and closed‑loop cooling, and align workloads with clean energy. Without that, the argument over what’s “totally fake” will keep evaporating into a vacuum of missing facts.