OpenAI’s Sam Altman says the company is on track to create an autonomous “capable AI researcher” by 2028, with a research-assistant-level system arriving before then. At a time when chatbots can already summarize the news and draft serviceable fiction, the claim, made in conversation with Quanta alongside chief scientist Jakub Pachocki, signals a push beyond chatbots toward systems that can plan research on their own, run it, and then write it up: doing science, rather than just reading summaries of science.
What OpenAI Means by an Autonomous AI Researcher
The target system, Pachocki said, is one that can set its own subgoals, choose methods to pursue or extend them, conduct experiments, and iterate its way to publishable results without prodding from a human supervisor.

Current models already solve problems with “time horizons” of roughly five hours, and they can handle competition problems, such as those in the International Mathematical Olympiad, about as well as top humans when given enough time to reason.
This roadmap has two levers: improving the algorithms and scaling “test-time compute,” the amount of computation thrown at a single problem. In practice, that means letting the system search deeper, simulate more hypotheses, and check its own work. OpenAI believes progress on both fronts could stretch problem horizons so dramatically that, in some cases, it would be worth dedicating entire data centers to a single problem, at least for ultra-high-stakes breakthroughs in fields such as biology or materials science.
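To make the idea concrete, one simple form of test-time compute scaling is best-of-N sampling: generate many candidate answers and keep the one a verifier scores highest. The sketch below is illustrative only; generate_candidate and score_candidate are hypothetical stand-ins for a model call and a checker, not OpenAI interfaces.

```python
import random

# Minimal sketch of best-of-N sampling, one simple form of test-time compute
# scaling. The two helpers below are hypothetical placeholders, not real APIs.

def generate_candidate(problem: str, seed: int) -> str:
    # Placeholder: in practice, a fresh sample from a language model.
    random.seed(seed)
    return f"candidate {random.randint(0, 10_000)} for: {problem}"

def score_candidate(candidate: str) -> float:
    # Placeholder verifier: in practice a checker, unit test, or reward model.
    return random.random()

def solve_with_more_compute(problem: str, n_samples: int) -> str:
    """Spend more test-time compute by sampling N candidates and keeping the best."""
    candidates = [generate_candidate(problem, seed=i) for i in range(n_samples)]
    return max(candidates, key=score_candidate)

# Doubling n_samples roughly doubles the compute spent on one problem.
print(solve_with_more_compute("prove the lemma", n_samples=8))
```

The point of the toy is the dial, not the details: spending more samples, deeper search, or more verification passes on a single problem is what “throwing a data center at it” would amount to at scale.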
OpenAI’s leaders also reaffirmed an increasingly fashionable view among frontier labs: that deep learning itself will eventually yield systems beyond human capability in most or all domains. If current trends continue, Pachocki said, superhuman breadth is about a decade away.
Compute Ambitions and the Energy Challenge Ahead
Altman tied the schedule to a huge infrastructure buildout: more than 30 gigawatts of capacity and roughly $1.4 trillion in related commitments. That scale underscores how progress in modern AI depends not just on clever algorithms but on power, cooling, and the silicon supply chain.
Energy and environmental concerns loom over all of it. The International Energy Agency has cautioned that global electricity demand from data centers could roughly double later this decade, in significant part because of AI. OpenAI’s approach implies heavier reliance on high-efficiency accelerators, better load management, and potentially deeper use of renewables and grid-scale storage to keep such workloads sustainable.
A New Corporate Structure for a Bigger Bet
OpenAI has completed its transition to a public benefit corporation structure, which gives the for-profit arm more flexibility to raise capital while writing obligations to serve the public interest into its charter. The nonprofit OpenAI Foundation will hold 26% of the for-profit and steer research direction, backed by $25 billion in commitments dedicated to scientific discovery, including using AI to battle common diseases and to fund safety projects.

The governance move reflects an industrywide trend: frontier AI research now demands resources previously out of reach for academia. The Stanford AI Index has documented the shift of state-of-the-art model development to industry, driven by rising training costs and the concentration of compute. OpenAI’s structure aims to balance the capital requirements of scaling against clear oversight and mission constraints.
How Near Is Fully Autonomous Science, and What’s Missing
There are precedents for AI speeding up discovery. DeepMind’s AlphaFold revolutionized protein structure prediction, and the related projects AlphaTensor and AlphaDev discovered more efficient algorithms for matrix multiplication and sorting. But those victories were narrow. A genuine “researcher” needs to be not only a thinker and coder but also an experimentalist and a custodian of instruments and data, able to critique its own outputs with statistical and methodological rigor.
The key technical challenges are long-horizon planning, tool use across complex software stacks, robust retrieval and citation, and verifiable reasoning under uncertainty. The safety and reliability requirements are no less stringent. NIST’s AI Risk Management Framework and the multilateral Bletchley Declaration emphasize alignment, evaluation, and incident reporting, areas for which a lab-grade autonomous agent would need auditable guardrails.
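One narrow illustration of what an “auditable guardrail” could mean in practice: refuse any claim whose citations cannot be resolved, and record every decision in a log that reviewers can inspect later. The Claim structure, the KNOWN_SOURCES registry, and the DOI below are assumptions made for the sketch, not a description of any lab’s actual tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical registry of resolvable sources; in practice this would query
# DOI or publisher services rather than a hard-coded set.
KNOWN_SOURCES = {"doi:10.1038/s41586-021-03819-2"}

@dataclass
class Claim:
    text: str
    citations: list[str]

@dataclass
class AuditLog:
    entries: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        # Timestamped, append-only trail of the agent's decisions.
        self.entries.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def accept_claim(claim: Claim, log: AuditLog) -> bool:
    """Reject any claim that is uncited or whose citations do not resolve."""
    unresolved = [c for c in claim.citations if c not in KNOWN_SOURCES]
    accepted = bool(claim.citations) and not unresolved
    log.record(f"claim={claim.text!r} citations={claim.citations} accepted={accepted}")
    return accepted

log = AuditLog()
claim = Claim("Predicted structures match experiment.", ["doi:10.1038/s41586-021-03819-2"])
print(accept_claim(claim, log))
print(log.entries)
```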
Milestones to Watch Before 2028 on the Road to an AI Researcher
A plausible halfway point is an intern-level system that can automate literature review, write runnable code for theory-backed simulations, and propose sensible new experimental variations. Look for heavier use of “test-time compute,” iterative checking, and toolchains that combine lab software, compilers, and cloud experiment managers.
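Reduced to a toy, “iterative checking” in such a toolchain might look like the loop below: propose a parameter variation, run a stand-in simulation, and keep only results that pass sanity checks. Every function name and the toy physics are hypothetical placeholders, not any real lab stack.

```python
# A minimal propose-run-check loop of the kind an intern-level system might
# drive. All helpers here are illustrative stand-ins.

def propose_variation(baseline: dict, step: int) -> dict:
    # Vary one parameter per step; a real agent would draw on theory here.
    return {**baseline, "temperature_K": baseline["temperature_K"] + 5 * step}

def run_simulation(params: dict) -> float:
    # Stand-in for a real solver: a toy response curve peaking near 320 K.
    return -abs(params["temperature_K"] - 320)

def passes_checks(result: float) -> bool:
    # Iterative checking: discard runs that violate basic sanity constraints.
    return result > -50

def sweep(baseline: dict, n_steps: int) -> list[tuple[dict, float]]:
    """Run a small parameter sweep, keeping only results that pass checks."""
    kept = []
    for step in range(n_steps):
        params = propose_variation(baseline, step)
        result = run_simulation(params)
        if passes_checks(result):
            kept.append((params, result))
    return kept

print(sweep({"temperature_K": 300}, n_steps=10))
```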
Independent evaluations will matter. Look for standardized benchmarks that go beyond the same old multiple-choice tests and require an end-to-end research task: reproducing a published result, formulating a new hypothesis, or contributing a methods section good enough to stand peer review. The test of credibility is not a demo but reproducible, publishable work in which the system’s contribution is quantified and its provenance is clear.
Altman’s timetable is aggressive, but it is not far off the curve of AI’s recent trajectory: bigger models, better reasoning, improved tools, and much more compute. Whether the “capable AI researcher” arrives right on time depends as much on infrastructure, governance, and energy as on the next clever architecture, but the direction of travel is clear.