All empires have a creed. In artificial intelligence, that faith is artificial general intelligence: the idea that a computer can do anything the human brain can do, including making high-stakes decisions and creative leaps, only better and faster. The journalist Karen Hao argues that this conviction has become the industry's organizing religion, one that justifies breakneck scaling, warps incentives and externalizes enormous costs onto society.
An AGI empire
OpenAI popularized a definition of AGI that centers on autonomy and economic value, usually paired with the pledge to "benefit all of humanity." That framing has functioned like imperial doctrine: it attracts capital, concentrates power and generates norms that others feel compelled to adopt. The result, Hao contends, is a private power that reaches into compute supply chains, safety standards and public policy to a degree few corporations in history have managed.

Evidence of that reach is everywhere. Industry labs now produce a large share of the most-cited AI research, according to an analysis by the Allen Institute for AI, while top talent has flowed from universities into companies with enormous training budgets. Standards bodies, think tanks and safety consortia increasingly revolve around a handful of foundation-model suppliers. The pull toward AGI, in other words, is gravitational, not just technical.
Speed, scale and a unitary objective
Hao argues that the industry's central dogma is simple: scale is king. Rather than pursuing algorithmic gains that reduce data and compute requirements, labs have generally opted for larger models, more GPUs and faster deployment cycles. The "scaling laws" literature produced by industry researchers, which found steady improvement as parameters, data and compute grow, helped cement this paradigm, even as the marginal returns appear to be shrinking.
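For readers who have not seen that literature, the canonical fits (for example, the form popularized by the "Chinchilla" analysis) express model loss as a power law in parameters and data; the equation below is a sketch, and its constants are empirically fitted rather than universal:

\[
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here N is the parameter count, D the number of training tokens, and E, A, B, α, β fitted constants. The diminishing-returns point follows directly from the power-law form: each further reduction in loss demands a multiplicatively larger investment in model size and training data.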
The downside is a culture that prizes speed over everything else, including efficiency, safety and exploratory science. Reinforcement learning from human feedback, red-teaming and evals are all useful tools, but they are frosting on top of the same race dynamics that generate these risks in the first place. If the directive is to reach AGI first, then safety work becomes traffic management, not course-setting.
The bill comes due: cash, carbon and labor
The price tag is staggering. OpenAI has indicated that it could burn through more than $100 billion in the coming years. Meta expects to spend tens of billions of dollars a year on AI infrastructure. Google has told investors that tens of billions in capital expenditures will go primarily to AI and cloud. These are bets of nation-state scale, and they are being placed in private boardrooms that effectively decide how much society should spend chasing an uncertain horizon.
Where the money goes, energy and water demand follow. The International Energy Agency has estimated that data center electricity demand could surge in the coming years, driven in part by AI workloads. Academic researchers at UC Riverside have calculated that training a single state-of-the-art model can consume hundreds of thousands of liters of fresh water once cooling is counted. The impacts of data centers land locally first: strained grids, heavier water draw, new peaker plants. The benefits are more diffuse and harder to audit.
And then there is the invisible workforce. Reporting by TIME, Amnesty International and others has documented data labelers and content moderators in countries including Kenya and Venezuela who are paid a few dollars an hour to review traumatic material, including child sexual abuse content. The human toll sits in the blind spot of triumphant stories about productivity gains.

Value proved, minus the theology
Hao is careful to distinguish the AGI race from AI's actual utility in the world. Take Google DeepMind's AlphaFold, built in collaboration with partners such as EMBL-EBI and published in peer-reviewed journals, which predicts protein structures with unprecedented accuracy and has been used by researchers worldwide for drug discovery and basic biology. The system relies on targeted scientific data and domain-specific innovations, not oceans of scraped text, and it carries far fewer externalities than frontier language models.
There are analogous cases in weather nowcasting, medical image triage, grid optimization and materials discovery. These are areas where AI is already proving valuable without hoovering up the internet or demanding power-plant-scale energy. The lesson is not to halt progress; it is to broaden what "progress" means.
The geopolitics of the race story
The strategic plank of the AGI creed is simple: build faster than China to preserve liberal values. But the race logic often serves as a justification for shortcuts. Experts at the Center for Security and Emerging Technology and Stanford HAI have cautioned that competitive framing can undermine safety culture and governance. Nor have Silicon Valley's exported platform dynamics liberalized the world; in many places, they have amplified illiberal actors.
Governance, the profit motive and the price of belief
OpenAI's unusual structure, a nonprofit controlling a capped-profit subsidiary, was designed to keep the mission paramount. But the model has its contradictions: mission-speak and calls for collaboration juxtaposed with hypercompetitive product launches; safety bodies populated by the very companies gunning for supremacy; deep dependence on a single strategic partner, Microsoft, through multibillion-dollar investment and cloud infrastructure. Reports of deal structures that could clear the way toward a public listing only heighten the tension between fiduciary duty and claims of universal benefit.
What might an alternative strategy look like? Hao points to concrete, quantifiable measures: compute and energy disclosures aligned with the methodologies of the Green Software Foundation and the IEA; worker protections for data laborers grounded in International Labour Organization guidelines; adoption of NIST's AI Risk Management Framework alongside external audits; and public funding for research that rewards efficiency and domain-specific impact over leaderboard wins.
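To make the disclosure idea concrete, here is a minimal sketch of the back-of-the-envelope arithmetic such reporting would formalize. Every figure below is an illustrative assumption, not a measurement from any real training run, and real methodologies (the Green Software Foundation's and the IEA's among them) add considerably more nuance.

```python
# Back-of-the-envelope estimate of training electricity use and emissions.
# All inputs are illustrative assumptions, not measured values.

def training_footprint(num_gpus: int,
                       avg_gpu_power_kw: float,
                       training_hours: float,
                       pue: float,
                       grid_intensity_kg_per_kwh: float) -> dict:
    """Estimate electricity (kWh) and emissions (kg CO2e) for one training run."""
    it_energy_kwh = num_gpus * avg_gpu_power_kw * training_hours
    facility_energy_kwh = it_energy_kwh * pue  # PUE scales IT load to whole-facility load
    emissions_kg = facility_energy_kwh * grid_intensity_kg_per_kwh
    return {"energy_kwh": facility_energy_kwh, "emissions_kg_co2e": emissions_kg}

# Hypothetical run: 10,000 GPUs drawing 0.7 kW on average for 30 days,
# a facility PUE of 1.2, and a grid intensity of 0.4 kg CO2e per kWh.
print(training_footprint(10_000, 0.7, 30 * 24, 1.2, 0.4))
```

Even this crude arithmetic makes the point: the inputs a disclosure regime would require, such as GPU counts, average power draw, facility PUE and grid mix, are exactly the figures that are hardest to find today.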
At its heart, Hao's critique is not anti-AI; it is anti-mysticism. Once belief in AGI becomes an all-encompassing redemption narrative, every cost can be rationalized away as a sacrifice on the road to salvation. It is a story that empires have told before. The question for AI is whether the industry can deliver lasting, measurable value without asking the rest of us to pick up a seemingly ever-expanding tab.