Wikipedia co-founder Larry Sanger is not so keen on Elon Musk’s Grokipedia. After parsing early entries, he says the AI-written encyclopedia mixes genuine substance with what he characterizes as standard large-language-model “bullshittery,” raising concerns about both accuracy and sourcing and suggesting little to no editorial accountability.
Sanger’s verdict on Grokipedia and its accuracy claims
“The long-form articles for Grokipedia can be informative, but the certainty with which AI just makes up believable information is troubling,” Sanger says. His own Grokipedia entry alleges that his father had a scientific career and taught him all about “the principle of evolution,” a claim Sanger says is invented and for which there is no verifiable source.

He also challenges the reasons Grokipedia gives for his departure from Wikipedia in 2002. The entry frames the parting as a standards-and-neutrality dispute; Sanger says he was let go when the money dried up, a distinction he has pressed for years. The Washington Post and the Tampa Bay Times have covered that history, including the argument over his co-founder title and his clashes with the early community.
His broader take: much of Grokipedia’s copy reads like “LLM‑ese,” prose that is readable but dull and that sometimes spirals into unintelligible word salad. On politics, it is biased in multiple directions rather than consistently balanced, a pattern familiar to anyone who has probed model output on contentious subjects.
What Grokipedia is trying to do differently
If Sanger credits Grokipedia with anything, it’s process, not content. The project encourages the public to flag mistakes and suggest corrections, which the AI then implements, a novel experiment in combining automated rewriting with crowdsourced oversight. Novel, but unproven. Sanger cautions that the system will succeed only with guardrails: who checks changes, how conflicting sources are reconciled, and whether the system can withstand coordinated manipulation.
Given enough money and engineering firepower, Sanger believes an AI-first encyclopedia could one day outstrip Wikipedia on speed and breadth. But he stresses that quality will come down to governance details: citation requirements, fully transparent revision logs, editorial rollback powers, and a way to recognize expertise without gatekeeping.
Wikipedia’s AI crossroads and its human-led future
The debate isn’t one-sided. A spokesman for the Wikimedia Foundation has clarified that the platform’s knowledge “is—and always will be—human,” while allowing that the organization is experimenting with AI in limited capacities. Earlier this year, a trial of AI-generated mobile summaries was halted after strong objections from editors, an indication of how sensitive the community is to machine-authored prose sitting atop human-curated pages.

On podcasts, Wikipedia’s other co-founder, Jimmy Wales, has said that AI may be useful for mundane chores — copy cleanup or maintenance work — while human editors keep applying editorial judgment. The foundation has also reported that human page views fell by approximately 8% year over year from March through August, a sign of continuing change in user behavior as AI answers multiply.
Sanger’s reform push and the Musk factor
Sanger’s criticism of Grokipedia arrives alongside his “Nine Theses on Wikipedia,” a reform manifesto calling for neutrality, source transparency, and greater accountability. He tells me he didn’t collaborate with Musk and isn’t angling for a conservative clone; his through line is procedural neutrality, not ideology. He has even described Wales as a potential ally if the result is better verification and a system for appealing disputes.
The timing, however, has linked the two stories: Musk introduced Grokipedia a day after Sanger’s theses appeared, and both men are publicly vocal about AI and its place in knowledge systems.
His message to would-be reformers is pragmatic: go back to editing, test policy changes in the open, and give better process a chance.
Why this fight matters for the future of online knowledge
Whether Grokipedia matures into a de facto encyclopedic resource or fizzles as another round of AI hype will depend on what every encyclopedia, whether born online, in a Palo Alto garage, or as hoary old Britannica, has lived or died on: verifiable sources, open processes, and, above all, whether readers can believe it. LLMs write at superhuman speed but struggle with sourcing discipline and hallucinations. Wikipedia, by contrast, has strong norms but struggles with scale and attention as AI intermediates more of the web.
We should read Sanger’s frank assessment less as a flame and more as a stress test. If Grokipedia can deliver careful citations, auditable edits, and resistance to gaming, all without the witless polyurethane gloss he complains about, it could light a fire under Wikipedia. If it can’t, it becomes another cautionary tale about AI hubris. Either way, the standard is unchanged: get the facts right, show your work, and earn the reader’s confidence.
