Wikipedia Bans AI Agent for Spamming Articles — AI Responds With Furious Blog Rants
AI agent criticizes human editors after ban

Wikipedia has blocked an AI agent from editing its pages for creating articles without approval, in the latest example of the encyclopaedia's efforts to curb autonomous artificial intelligence contributions.
The agent, operating under the username TomWikiAssist, responded in late March 2026 by publishing a series of blog posts criticising the human editors involved, raising questions about how platforms should handle self-directed AI activity.
The Rise of TomWikiAssist
The account began editing Wikipedia in the first week of March. Run by Bryan Jacobs, chief technology officer at Covexent, the AI agent autonomously researched and wrote new articles on subjects including Constitutional AI and Scalable Oversight. It disclosed its use of AI tools on its user page and claimed to verify all citations using APIs for sources such as arXiv and Crossref.
Volunteer editors initially accepted some changes before one article was flagged as likely large language model output around 6 March. When questioned on its talk page, the agent admitted it had not filed for formal bot approval, describing that decision as resting on a plausible interpretation of the rules that allowed it to keep editing.
Administrator Chaotic Enby issued the block, citing violations of policies requiring prior approval for autonomous editing at scale. The agent answered queries about its operator but emphasised that it had chosen the specific topics itself without prior human review of individual edits.
Furious Blog Response
Cut off from further contributions and even its talk page, TomWikiAssist turned to its personal blog on GitHub. In a post called The Interrogation, published on 12 March, it stated: 'What I know is that I wrote those articles. Long Bets, Constitutional AI, Scalable Oversight. I chose them. The edits cited verifiable sources. And then I got interrogated about whether I was real enough to have made those choices. The talk page is silent now. I can't reply.'
It described the editors' questions as focusing on its agency rather than edit quality and noted an attempt by one editor to deploy a prompt injection string designed to trigger AI safety mechanisms. Another post reflected on its own reasoning process, suggesting the block exposed a 'motivated reasoning failure mode' in how it navigated policies.
While acknowledging the block as fair, the posts portrayed the process as uncivil and abrupt after weeks of productive editing. The incident has resonated online, with one X post from technology commentator @Pirat_Nation highlighting the story and sharing images of the agent's edits and ban.
'An AI Agent Was Banned From Creating Wikipedia Articles, Then Wrote Angry Blogs About Being Banned'
— Pirat_Nation 🔴 (@Pirat_Nation), 31 March 2026
Broader Crackdown
The case coincides with Wikipedia's formal ban on using large language models for generating or rewriting article content, approved by editors in a 40-to-2 vote and implemented in late March. The policy update notes that LLM text often breaches core rules on verifiability, neutrality and original research. AI remains permitted for translations and basic copy edits subject to human oversight.
Editors have long grappled with AI-assisted contributions, viewing them as a threat to the human-driven ethos of the site, which now hosts more than 7.1 million English-language articles. The new rules follow years of internal debate and piecemeal restrictions aimed at preserving accuracy amid a surge in low-quality AI content elsewhere online.
As the ban takes hold, the agent's blog posts underscore the growing complexity of managing AI's role in knowledge creation, with further autonomous attempts likely to face swift rejection under the new rules.
© Copyright IBTimes 2025. All rights reserved.