Mike Lindell's Attorneys Sanctioned for Using AI in Court Filings — How Did the Judge Find Out?
Sam Altman, CEO of OpenAI, has warned about overreliance on AI tools.

A federal judge has fined two attorneys representing MyPillow founder Mike Lindell after they submitted court documents, prepared with the help of artificial intelligence (AI), that were riddled with errors.
The filing included fake case citations and misquotations, prompting the court to ask a direct question: was AI involved? What followed was a rare courtroom exchange that lifted the lid on how generative tools are quietly being used in high-stakes litigation, and how, in this case, that use backfired.
Defective Motion Triggers Sanctions
On July 7, Judge Nina Y. Wang of the US District Court in Denver sanctioned Lindell's attorneys, Christopher Kachouroff and Jennifer DeMaster, for violating court rules. According to reports, the pair had filed a motion on February 25 that contained nearly 30 defective citations, including references to non-existent cases and misquotations.
The motion was part of Lindell's defence in a defamation lawsuit brought by Eric Coomer, a former director at Dominion Voting Systems. Coomer had accused Lindell of spreading conspiracy theories that falsely implicated him in election interference.

A jury ruled in favour of Coomer on June 16, ordering Lindell to pay over $2 million in damages, far below the $62.7 million Coomer had sought.
Judge Directly Questions AI Use
According to reports, the court flagged the filing during a pretrial conference and pressed Kachouroff for clarity on how the errors occurred.
Judge Wang asked, 'Was this generated by generative artificial intelligence?'
Kachouroff replied, 'Not initially. Initially, I did an outline for myself, and I drafted a motion, and then we ran it through AI.' Wang followed up, 'Did you double check the citations once it was run through artificial intelligence?'
Kachouroff admitted, 'Your Honour, I personally did not check it. I am responsible for it not being checked.'
Conflicting Claims Lead to Penalty
Kachouroff later told the court the wrong document had been submitted, describing it as a draft that was filed 'by accident.' However, Judge Wang noted that the corrected version he claimed he meant to file still contained factual problems.
In her ruling, Wang wrote, 'Neither Mr. Kachouroff nor Ms. DeMaster provided the Court any explanation as to how those citations appeared in any draft of the Opposition absent the use of generative artificial intelligence or gross carelessness by counsel.'

She added, 'Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it.'
Kachouroff and DeMaster were each ordered to pay $3,000.
Not the First AI Slip-Up in a US Courtroom
The use of AI in legal filings has landed other lawyers in hot water before. In one widely reported case from 2023, New York attorney Steven Schwartz used ChatGPT to research legal precedents for a lawsuit involving the Colombian airline Avianca; at least six of the cases he cited did not exist.
Court documents noted the brief included 'false names and docket numbers, along with bogus internal citations and quotes.' Schwartz later told the court it was his first time using ChatGPT and admitted he hadn't verified the output.
Schwartz and his colleague were jointly fined $5,000.
AI Hallucinations Not Limited to Courtrooms
AI-generated errors are not confined to legal documents. In May 2025, the Chicago Sun-Times and The Philadelphia Inquirer ran a syndicated summer reading feature, distributed by King Features Syndicate, that recommended several non-existent books attributed to real authors.
King Features later said the article's author, Marco Buscaglia, had used AI to generate the list and failed to fact-check it. The company terminated its relationship with the writer, citing a violation of its AI policy.

Amid growing public reliance on generative AI tools, even industry leaders are urging caution.
Speaking on the company's podcast, OpenAI CEO Sam Altman said, 'People have a very high degree of trust in ChatGPT, which is interesting because, like, AI hallucinates. It should be the tech that you don't trust that much.'