FindArticles © 2025. All Rights Reserved.
Chatbots cite Elon Musk’s Grokipedia, new report finds

By Gregory Zuckerman
Last updated: January 26, 2026 6:06 pm
Technology

Two of the most widely used AI assistants are drawing from Elon Musk’s Grokipedia, a controversial wiki linked to his AI startup xAI, according to a new investigation. The findings raise fresh questions about how chatbots choose sources and what happens when they amplify material from sites accused of misinformation and extremist citations.

A Musk-built wiki faces growing scrutiny and debate

Grokipedia was launched as a crowd-editable alternative to Wikipedia, backed by xAI and tied to the company’s Grok chatbot. Unlike Wikipedia’s mature moderation norms and long-standing editorial policies, Grokipedia is relatively new and has been criticized by researchers and media analysts for copying large portions of Wikipedia while also hosting disputed entries on politically charged topics.


Reported examples include pages that mischaracterize the AIDS epidemic, language that appears to rationalize slavery, and references to white supremacist websites. Grok itself has faced separate safety controversies after producing offensive and extremist content on X, incidents that underscored how quickly generative systems can be nudged into harmful output without rigorous guardrails.

What the new report found about chatbot source use

The Guardian reported that OpenAI’s ChatGPT cited Grokipedia when responding to questions about Iran and other historical topics. In one example described by the outlet, ChatGPT echoed debunked claims about the British historian Sir Richard Evans, attributing material to Grokipedia among its sources. The report further noted that Anthropic’s Claude also surfaced Grokipedia citations in certain answers.

OpenAI told the newspaper that ChatGPT’s web-enabled answers draw on a broad range of publicly available sources and that the company applies safety filters to reduce the chance of high-severity harms. The company also emphasized that the assistant provides clear citations so users can evaluate provenance. Anthropic did not provide a detailed comment in the report, though the observation that Claude cited Grokipedia points to a wider, industry-level issue: retrieval systems are only as reliable as the sources they select.

Why it matters for AI reliability and user trust

Modern chatbots increasingly rely on retrieval-augmented generation, pulling live web snippets or database entries to ground their answers. If those pipelines include poorly vetted sources, misinformation can be laundered through the authoritative tone of an AI response and legitimized by a citation users may not know how to assess.
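The pipeline described above can be sketched as a filter applied to retrieved snippets before they reach the model. This is a minimal illustration, not any vendor's actual system; the blocklist entry and the snippet format are invented for the example.

```python
# Illustrative sketch of a source filter in a retrieval-augmented generation
# (RAG) pipeline. The blocklist and snippet structure are hypothetical,
# not any vendor's real implementation.

LOW_TRUST_DOMAINS = {"grokipedia.com"}  # hypothetical blocklist entry

def filter_snippets(snippets):
    """Drop retrieved web snippets whose source domain is low-trust.

    Each snippet is a dict like {"url": ..., "text": ...}.
    """
    kept = []
    for s in snippets:
        domain = s["url"].split("/")[2]  # naive host extraction from the URL
        if domain not in LOW_TRUST_DOMAINS:
            kept.append(s)
    return kept

snippets = [
    {"url": "https://en.wikipedia.org/wiki/Iran", "text": "..."},
    {"url": "https://grokipedia.com/page/Iran", "text": "..."},
]
print([s["url"] for s in filter_snippets(snippets)])
```

If no such gate exists, every retrieved snippet flows to the model with equal standing, which is precisely how a low-quality wiki ends up cited beside an encyclopedia.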

Security researchers warn that tactics like data poisoning, prompt injection, and so-called “LLM grooming” can tilt what large models retrieve and repeat. In practice, it can take only a handful of strategically seeded pages to skew answers on sensitive topics. By contrast, Wikipedia’s model—backed by a global volunteer community, transparent edit histories, and verifiability policies—tends to correct vandalism and bias more rapidly on high-traffic entries. Grokipedia does not yet demonstrate comparable oversight or community depth.

The source quality gap and transparency in citations

AI companies often describe their filters and safety layers but rarely disclose detailed source lists, scoring criteria, or thresholds for excluding sites with repeated policy violations. Without that transparency, users cannot easily tell whether a citation reflects editorial rigor or mere availability.

Experts in information integrity have called for provenance signals that travel with content: who wrote or last edited a page, what moderation occurred, and whether independent fact-checks exist. For high-risk topics—public health, elections, extremist violence—platforms can deploy stricter whitelists, dynamic trust scores, and human-in-the-loop reviews to prevent low-quality wikis from shaping answers.
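The stricter-whitelist idea for high-risk topics could look something like the following sketch. The topic labels, trust scores, and thresholds are all invented here for illustration; real platforms do not publish such values.

```python
# Hedged sketch: routing high-risk topics through a stricter source whitelist.
# Topic labels, per-domain trust scores, and thresholds are invented examples.

HIGH_RISK_TOPICS = {"public_health", "elections", "extremist_violence"}

SOURCE_TRUST = {  # hypothetical 0-1 trust scores per domain
    "who.int": 0.95,
    "en.wikipedia.org": 0.85,
    "grokipedia.com": 0.20,
}

def allowed_sources(topic, high_risk_cutoff=0.8, default_cutoff=0.4):
    """Return the set of domains permitted as sources for a given topic."""
    cutoff = high_risk_cutoff if topic in HIGH_RISK_TOPICS else default_cutoff
    return {d for d, score in SOURCE_TRUST.items() if score >= cutoff}

print(allowed_sources("public_health"))  # stricter cutoff applies
print(allowed_sources("video_games"))    # default cutoff applies
```

The design point is that the cutoff is dynamic: a source acceptable for a low-stakes query is excluded when the topic is flagged as high-risk.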

What companies and users can do next to improve sourcing

In the short term, AI providers can label lesser-vetted sources more prominently, reduce their weight in retrieval, and escalate to higher-assurance references on sensitive queries. Periodic audits—publishing the share of answers that cite different source tiers—would help the public gauge progress. Independent red-team evaluations should explicitly test whether controversial sites can steer outputs.
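The audit suggested above could be computed straightforwardly, given logs of which domains each answer cited. The tier labels and domain assignments below are hypothetical, chosen only to show the shape of the calculation.

```python
# Hedged sketch of the audit idea: what share of answers cite each source
# tier at least once. Tier assignments are invented examples.
from collections import Counter

TIER_BY_DOMAIN = {  # hypothetical tier labels
    "en.wikipedia.org": "established_reference",
    "reuters.com": "established_reference",
    "grokipedia.com": "lesser_vetted",
}

def tier_shares(answer_citations):
    """answer_citations: one list of cited domains per answer.

    Returns the fraction of answers citing each tier at least once.
    """
    counts = Counter()
    for domains in answer_citations:
        tiers = {TIER_BY_DOMAIN.get(d, "unclassified") for d in domains}
        for t in tiers:
            counts[t] += 1
    n = len(answer_citations)
    return {t: c / n for t, c in counts.items()}

audit_log = [
    ["en.wikipedia.org"],
    ["grokipedia.com", "reuters.com"],
]
print(tier_shares(audit_log))
```

Publishing numbers like these over time would let outsiders see whether the share of answers leaning on lesser-vetted sources is actually shrinking.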

For users, the best defense is to click citations, cross-check claims with established encyclopedic references, primary documents, or reputable news outlets, and be cautious when a response leans on Grokipedia for contentious subjects. Chatbots can streamline research, but they are not substitutes for editorial judgment—especially when their sources include a fledgling wiki already flagged for accuracy problems.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.