Wikipedia Bans AI-Generated Text With Two Exceptions – What Every Editor Must Know Now
Editors can now use AI only for writing refinements and translation support

Wikipedia has officially banned the use of AI-generated text for article content on English Wikipedia, introducing two notable exceptions for editors. The policy comes after prolonged debates within the Wikipedia community on how to manage contributions from large language models.
Administrators say the rules aim to preserve accuracy and maintain editorial integrity while allowing limited AI assistance.
The new restrictions allow editors to use AI tools in two specific ways. First, they can refine their own writing, similar to using a grammar or style checker.
Second, AI may assist in translating text, provided editors are fluent enough in both languages to verify the translation for accuracy. Editors must ensure that AI-generated suggestions do not introduce errors or alter the meaning of content in a way that is unsupported by cited sources.
Background and Context
The move follows years of discussion about the role of AI and large language models in Wikipedia editing. As reported by How-To Geek, previous proposals for comprehensive AI guidelines failed because of disagreements over implementation and specificity, with the community struggling to balance innovation with maintaining accuracy.
Chaotic Enby, a Wikipedia administrator involved in the policy discussion, noted that while consensus existed on the need for change, agreement on precise rules had been elusive.
The English Wikipedia policy now sets clear boundaries. It prohibits generating or rewriting article content using AI except for writing refinements or translation support.
Other language Wikipedias, such as Spanish Wikipedia, have independent rules that may not allow the same exceptions. This highlights the differing approaches taken by Wikipedia communities worldwide.
Implications for Editors
Editors are advised to approach AI-generated content cautiously. Even within the exceptions, AI tools must be treated as aids rather than authoritative sources. Any text produced by AI should be thoroughly reviewed to prevent the introduction of misinformation or unsupported claims.
Wikipedia acknowledges that detecting AI-generated content is imperfect, and lightly moderated pages may still contain AI-assisted text.
The restrictions reflect broader concerns about the impact of AI on accuracy and credibility. By limiting AI use, English Wikipedia aims to prevent the spread of unverified content while preserving useful productivity tools for editors.
Some contributors have welcomed the move as a safeguard against inaccuracies, while others have expressed concern that the rules may slow the workflow of those who rely on AI for writing or translation support.
How Editors Can Comply
Wikipedia provides guidance for editors using AI under the new policy. When refining their own writing, editors should confirm that AI suggestions do not change factual content.
For translation tasks, AI may draft a first-pass translation, but editors must carefully review each sentence for accuracy. Treating AI tools like grammar checkers or style assistants ensures compliance while maintaining article integrity.
Editors should also remain vigilant for AI-generated errors that may be harder to detect. Wikipedia offers tips for recognising AI-style writing, but human oversight remains essential. By following these measures, editors can take advantage of permitted AI assistance without violating the policy.
Policy Scope and Enforcement
The policy applies specifically to English Wikipedia. Each Wikipedia site retains autonomy over its own rules, meaning similar restrictions may not exist in other language editions.
Enforcement will rely on community monitoring, and editors are expected to follow the guidelines voluntarily. Clear documentation and transparency in editing practices remain key to preventing AI misuse.
© Copyright IBTimes 2025. All rights reserved.