Databricks co-founder Andy Konwinski is calling for U.S. policymakers and industry leaders to embrace open-source AI as a national strategy, contending it is the fastest path to outpacing China in research and deployment.
In a talk at last week's Cerebral Valley AI Summit, Konwinski said the United States risks losing its leadership if high-impact ideas remain confined within a few private labs instead of being shared with academia and startups.

Open Source as a Strategic Differentiator for AI
Konwinski's argument rests on a straightforward historical formula: the breakthroughs that define the current era of AI began in the open. The 2017 paper that introduced the Transformer architecture was made public, triggering an unprecedented global research cascade that led to today's generative models. Open-weight releases like Meta's Llama lineage, EleutherAI's GPT-NeoX, and Stability AI's diffusion models gave startups, researchers, and enterprises a more level playing field, compressing innovation cycles and broadening who could participate.
China is moving quickly in that direction, he said. Model families such as Alibaba's Qwen and research releases from labs like DeepSeek have been published under permissive terms that encourage swift forks, fine-tuning, and downstream use. That openness, Konwinski says, isn't simply a philosophical stance; it's an R&D accelerant that compounds with every release as more developers and institutions contribute improvements back to the ecosystem.
A Shift in the AI Research Center of Gravity
Several independent analyses indicate that the balance of power in AI research is shifting. The Stanford AI Index has found that China now leads the world in the overall volume of AI publications and patents, while companies from the United States lead in private AI investment and in releasing top-tier foundation models. The Allen Institute for AI has likewise observed a steady rise in highly cited work from Chinese institutions in computer vision and natural language processing.
Konwinski's alarm arises from the picture that emerges: if cutting-edge ideas circulate more freely in China than in the United States, Chinese labs and research hubs become the likelier birthplace of the next major architectural leap, the next "Transformer moment." He frames this not just as a competition issue but as a matter of democratic resilience: the public is best served when the most powerful general-purpose technologies are audited and stress-tested in the open.
Talent and Compute Bottlenecks Slow Open AI Progress
A second pillar of his case is the talent pipeline. Frontier labs in the United States have been hiring aggressively from academia at pay rates universities can't match, disrupting the free exchange of preprints, code, and personnel whose cross-pollination long powered American computer science. The result, he says, is that fewer early-stage ideas circulate across campuses and startups, and more work gets siloed within corporate research roadmaps.

Compute access is the other bottleneck. Training a modern foundation model costs tens to hundreds of millions of dollars and depends on scarce advanced accelerators. The U.S. has taken steps to widen access through efforts such as the National AI Research Resource pilot, but the gulf between well-financed labs and the rest of the field remains immense. Subsidized compute for shared open research, Konwinski argues, would unlock a wave of experiments that proprietary environments too often screen out.
Policy Levers for an Open AI Strategy
Konwinski's open-source playbook aligns with recommendations from researchers at NIST, DARPA, and the OECD AI Policy Observatory. Tangible steps include funding high-quality, responsibly licensed datasets; underwriting compute credits for academic and nonprofit labs that release code and weights; and conditioning federal grants on transparency benchmarks such as documented training-data provenance and model cards.
He also cites procurement as a lever: federal agencies can prefer open models that meet security and safety standards, accelerating commercialization while keeping critical capabilities auditable domestically. Competition policy matters too. Preventing frontier labs from foreclosing open alternatives, whether through exclusive data deals or restrictive licensing of the models they develop, would help maintain diversity in the innovation pipeline.
Balancing Openness with Safety in Open-Weight Models
Detractors of open-weight models have legitimate concerns about misuse. Konwinski here makes the case for governance, not withdrawal: releases paired with red-teaming, content filters, and watermarking; liability regimes for negligent deployment; and "alignment benchmarks" based on the NIST AI Risk Management Framework. The practical aim is to marry open innovation with accountable dissemination rather than drive research into closed or offshore channels that evade oversight.
The Commercial Stakes of an Open AI Ecosystem
Open ecosystems repeatedly open new markets. The rise of Stable Diffusion sparked a creative renaissance across design and marketing, and thousands of enterprise pilots built on Llama models sprang up in healthcare, finance, and software engineering. Konwinski warns that concentrating breakthroughs in a few private stacks may yield short-term advantage but risks starving the larger U.S. innovation commons, which ultimately serves even the biggest labs.
His message is blunt: if the U.S. wants to win what he calls the next phase of AI, it should wield openness as a competitive weapon. That means betting on shared research, accessible compute, and transparent model releases, so the best ideas spread quickly, are stress-tested widely, and compound domestically. In this race, the country that shares more, and ships faster, will shape its course.
