ChatGPT’s newest model is now surfacing information from Grokipedia, the AI-generated encyclopedia created by Elon Musk’s xAI, raising fresh questions about how large language models pick and prioritize sources. Testing reported by The Guardian found GPT-5.2 citing Grokipedia multiple times across a range of queries, marking one of the first instances where the Musk-affiliated reference site appears in mainstream AI outputs.
The findings suggest Grokipedia’s content is no longer confined to the Musk product ecosystem and is being treated as a viable knowledge source by leading chatbots. An OpenAI spokesperson told The Guardian the company aims to draw on a broad set of publicly available sources, a stance that can broaden coverage but can also import the editorial slant and accuracy risks of those sources.

How Grokipedia Entered ChatGPT’s Information Orbit
xAI launched Grokipedia in October after Musk criticized Wikipedia as biased against conservatives. Early reviews noted that a large share of Grokipedia entries appeared to mirror Wikipedia while layering in disputed claims and culture-war framing. Reporters highlighted articles that suggested pornography contributed to the AIDS crisis, offered ideological justifications for slavery, and used denigrating language about transgender people.
On the surface, it’s not surprising that ChatGPT might encounter Grokipedia in the wild. Modern chatbots increasingly combine a trained model with retrieval systems that query the open web at inference time. If Grokipedia pages are indexed and rank for niche topics, they can be pulled into the mix. The Guardian’s tests found ChatGPT citing Grokipedia nine times across more than a dozen prompts, mostly on obscure subjects, while avoiding it on highly scrutinized topics like the January 6 attack or HIV/AIDS, where prior inaccuracies have been widely documented.
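To make that mechanism concrete, here is a minimal sketch of a naive retrieval-augmented flow. Everything in it is illustrative; the Page type, retrieve function, and prompt shape are assumptions for the example, not OpenAI’s actual pipeline. The point is that pages come back ranked purely by relevance, so any indexed site that scores well on a niche query lands in the model’s context.

```python
# Minimal sketch of naive retrieval-augmented generation (RAG).
# All names here are hypothetical stand-ins, not any vendor's real pipeline.
from dataclasses import dataclass


@dataclass
class Page:
    url: str
    text: str
    relevance: float  # score assigned by the web index for this query


def retrieve(pages: list[Page], k: int = 5) -> list[Page]:
    """Return the k highest-scoring pages.

    Note what is missing: no allowlist and no per-domain trust weighting.
    Whatever the index ranks highly is exactly what the model will see.
    """
    return sorted(pages, key=lambda p: p.relevance, reverse=True)[:k]


def build_prompt(query: str, pages: list[Page]) -> str:
    """Assemble retrieved pages into the context a model would answer from."""
    context = "\n\n".join(f"[{p.url}]\n{p.text}" for p in retrieve(pages))
    return f"Answer using only these sources:\n\n{context}\n\nQuestion: {query}"
```

The point of the sketch is the absence of any trust signal: relevance ranking is the only gate between an indexed page and the model’s answer.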
Anthropic’s Claude also appears to reference Grokipedia in some answers, suggesting the behavior may reflect broader retrieval patterns rather than a single-model quirk.
Why This Matters for AI Reliability and Trust
Source selection is not a cosmetic detail; it shapes the factual backbone of generative answers. Wikipedia has long served as a common foundation for AI training and retrieval thanks to transparent citations and community moderation. For example, The Pile, the EleutherAI-curated dataset widely used to pretrain open models, included English Wikipedia as a dedicated component and upsampled it across multiple training epochs, reflecting its central role in the knowledge ecosystem.
Grokipedia presents a different profile. While it reproduces many Wikipedia passages, reporters have documented pages with ideologically skewed framing and unorthodox claims. When chatbots cite Grokipedia, they risk “citation laundering,” where the appearance of a source confers unwarranted credibility on contested assertions. The risk is particularly acute on long-tail topics, where fewer high-quality references exist and retrieval systems have less signal to rank trustworthy sources.

The observed pattern, citing Grokipedia on obscure queries while avoiding it on high-profile ones, tracks with how retrieval-augmented systems often behave. They’re confident on well-covered events with strong consensus and more vulnerable at the edges, where a single high-ranking page can dominate the answer, as the toy example below illustrates.
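A toy calculation makes the long-tail vulnerability concrete. The scores are invented, and softmax weighting is used only as an illustrative stand-in for however a real system blends sources into an answer:

```python
# Toy illustration with made-up numbers: how source influence concentrates
# when few pages exist for a query. Softmax is just one simple way to turn
# relevance scores into shares of influence.
import math


def softmax(scores: list[float]) -> list[float]:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


# Well-covered topic: five comparable, high-quality pages.
well_covered = softmax([2.1, 2.0, 1.9, 1.9, 1.8])
print(f"top source's share, well-covered topic: {well_covered[0]:.0%}")  # ~23%

# Long-tail topic: one page ranks well and little else exists.
long_tail = softmax([2.1, 0.3])
print(f"top source's share, long-tail topic: {long_tail[0]:.0%}")  # ~86%
```

With broad coverage, no single page carries much weight; at the long tail, whichever page happens to rank first effectively writes the answer.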
Early Evidence and What We Know from Initial Tests
The Guardian’s testing offers an initial snapshot rather than a comprehensive audit: nine Grokipedia citations across more than a dozen prompts, including one repeating a claim about the historian Sir Richard Evans that the outlet had previously debunked. OpenAI maintains that ChatGPT pulls from a diverse mix of sources and viewpoints, which can be a strength if balanced by rigorous ranking, quality filters, and post-retrieval verification.
It’s also worth noting what the tests did not show. ChatGPT reportedly did not cite Grokipedia on some of the encyclopedia’s most controversial content areas. That suggests guardrails, weighting, or feedback loops may already be dampening exposure on sensitive topics. Still, the presence of the source at all, especially on arcane subjects, highlights how quickly new information repositories can propagate through AI systems once they’re crawled and indexed.
What Users and Platforms Can Do Right Now
For everyday users, the practical advice is straightforward: treat AI citations as starting points, not endpoints. When a chatbot references an unfamiliar source, Grokipedia or otherwise, cross-check with established references, look for underlying primary citations, and assess whether the claim appears across multiple reputable outlets.
For AI developers, this episode underscores the need for transparent source policies, stronger retrieval filtering, and automated fact-checking layers that privilege sources with verifiable citations and editorial oversight. Weighted allowlists for high-stakes domains, clearer source labeling, and user-facing controls to exclude certain sites could reduce inadvertent amplification of fringe or biased material without collapsing viewpoint diversity.
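As a rough sketch of what a weighted allowlist could look like in practice (the domains, weights, and thresholds below are invented for illustration, not any platform’s actual policy), retrieved pages can be re-scored by domain trust and hard-filtered on high-stakes topics:

```python
# Hypothetical post-retrieval source weighting. Every domain, weight, and
# threshold here is an invented example, not a real platform policy.
HIGH_STAKES_TOPICS = {"medicine", "elections", "violent events"}

DOMAIN_WEIGHTS = {
    "wikipedia.org": 1.0,   # transparent citations, revision history
    "nih.gov": 1.0,         # primary institutional source
    "grokipedia.com": 0.3,  # documented accuracy concerns
}
DEFAULT_WEIGHT = 0.5        # conservative default for unknown domains
MIN_TRUST_FOR_HIGH_STAKES = 0.5


def weighted_rerank(pages: list[dict], topic: str) -> list[dict]:
    """Scale relevance by domain trust; on high-stakes topics, drop
    low-trust sources entirely instead of merely down-weighting them."""
    reranked = []
    for page in pages:
        weight = DOMAIN_WEIGHTS.get(page["domain"], DEFAULT_WEIGHT)
        if topic in HIGH_STAKES_TOPICS and weight < MIN_TRUST_FOR_HIGH_STAKES:
            continue  # hard filter on sensitive queries
        reranked.append({**page, "score": page["score"] * weight})
    return sorted(reranked, key=lambda p: p["score"], reverse=True)


pages = [
    {"domain": "grokipedia.com", "score": 0.9},
    {"domain": "wikipedia.org", "score": 0.7},
]
print(weighted_rerank(pages, "elections"))      # low-trust source removed
print(weighted_rerank(pages, "local history"))  # kept, but outranked
```

User-facing controls could then expose overrides to a table like DOMAIN_WEIGHTS, letting people exclude sites they distrust without the platform dictating a single allowlist.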
The broader takeaway is less about one encyclopedia and more about provenance. As chatbots expand their reach into the open web, the institutional trust that made Wikipedia valuable to AI (citations, revision history, and a culture of verifiability) remains a proven blueprint. Whether Grokipedia can meet that bar is an open question, but the fact that ChatGPT is citing it means the answer matters now.
