Wikipedia has tightened its editorial rules by prohibiting contributors from using large language models to generate or rewrite article content, reaffirming that the encyclopedia’s prose must be crafted by humans drawing from reliable sources. The move, approved through a community vote, clarifies earlier guidance and reflects growing concern that AI text can smuggle in errors, distort nuance, and fabricate citations.
- Why Wikipedia Drew a Line Against AI-Written Prose
- What the Policy Allows and Forbids in Wikipedia Articles
- Community Vote and Governance Behind the New Ban
- Enforcement Without Overreach or Faulty AI Detection
- The Stakes for Reliability and Trust in the Encyclopedia
- A Signal to Other Platforms on Responsible AI Use
- What Comes Next for Guidance, Workflows, and Exceptions
Why Wikipedia Drew a Line Against AI-Written Prose
Verifiability is the backbone of Wikipedia’s model. Articles live or die by clear citations and careful synthesis of secondary sources. Generative AI upends that equilibrium: models are prone to convincing but unsupported statements, subtle shifts in meaning, and invented references. Editors have long tolerated bots that fix typos or format citations, but they see a categorical difference between deterministic maintenance scripts and generative systems trained to predict plausible text.
The new rule replaces vaguer language that merely discouraged AI-written articles from scratch. Now it explicitly bars LLM-produced or LLM-rewritten article content. In short, if a sentence appears in an encyclopedia entry, it must be the editor’s own synthesis—not a model’s output repackaged as human authorship.
What the Policy Allows and Forbids in Wikipedia Articles
While the door is closed to AI-generated prose in articles, it is not slammed shut on AI altogether. The policy permits editors to use LLMs to suggest modest copyedits—grammar tweaks, style cleanups—to their own writing, with the caveat that a human must review and that the model must not introduce any new content or change the meaning of sourced material. This carve-out recognizes practical uses of AI as a writing assistant without letting it become a stealth author.
The line is intentional: AI can help polish, but it cannot be the source. That distinction echoes existing norms that already prohibit close paraphrasing, undisclosed paid editing, and unverifiable claims. Wikipedia’s north star remains the same—reliable sources first, human judgment second, tooling last.
Community Vote and Governance Behind the New Ban
The change emerged from an open community process, where editors debated trade-offs and ultimately endorsed the ban by a 40 to 2 margin, as reported by 404 Media. That outcome underscores a broad consensus across veteran contributors who patrol recent changes, curate featured articles, and mediate disputes. While the Wikimedia Foundation supports platform operations and grants, content policy on Wikipedia is largely set by volunteers through Requests for Comment and enforced by on-wiki administrators and noticeboards.
Enforcement Without Overreach or Faulty AI Detection
Detecting AI text at scale is notoriously unreliable. Academic groups, including Stanford researchers, have shown that AI detectors flag human writing—especially from non-native English speakers—at troubling rates. OpenAI itself withdrew its AI-writing classifier due to poor accuracy. Wikipedia’s enforcement will therefore hinge less on trying to “catch AI” and more on longstanding content criteria: if a claim lacks citations, conflicts with sources, or introduces unsourced synthesis, it is removed regardless of how it was produced.
That approach leverages the site’s existing defense systems: watchlists, page protection, edit filters, and tireless human patrollers. Across all language editions, roughly 100,000 editors make at least five edits in a typical month, and the English Wikipedia alone hosts over six million articles. Quality control at that scale has always been human-led triage backed by bots—now with a clearer bright line for generative content.
The Stakes for Reliability and Trust in the Encyclopedia
Wikipedia isn’t just another website; it is a reference layer for classrooms, newsrooms, and search engines. Even small degradations in accuracy can cascade outward. Editors cite recent examples across the web where AI tools fabricated court cases, mangled scientific facts, or blended sources into novel but unsupported claims—exactly the kind of subtle drift that undermines encyclopedia standards.
By drawing a clear boundary, the community aims to shield core processes like neutral point of view and reliable sourcing from a technology that, for now, remains stochastic and opaque. The policy also protects contributors: if AI detectors are fallible, the fairest test is still the one Wikipedia knows best—show your sources and let humans check the work.
A Signal to Other Platforms on Responsible AI Use
Wikipedia’s stance adds weight to a broader recalibration across user-generated platforms. Stack Overflow initially banned ChatGPT answers after moderators found accuracy problems, then introduced narrower allowances with strict verification. Academic publishers have updated author guidelines to require disclosure of AI assistance and to bar AI from being listed as an author. Newsrooms are experimenting with labels and limited use cases while reasserting editorial accountability.
The common thread is not hostility to AI but a demand for provenance and responsibility. Tools are welcome; invisible, unaccountable authorship is not.
What Comes Next for Guidance, Workflows, and Exceptions
Expect follow-on guidance from project pages that govern verifiability, biographies of living persons, and medical content—areas where risk of harm is higher. Communities may pilot workflows that use AI for non-content tasks such as duplicate detection or citation formatting, while maintaining a hard stop against AI-written prose. If future AI systems become reliably source-grounded and auditable, editors can revisit guardrails. For now, Wikipedia is choosing the conservative path: human-written articles, human accountability, open debate, and transparent citations.