Short Circuit Court: AI Hallucinations in Legal Filings and How To Avoid Making Headlines
Brandon Fierro of Cole Schotz PC discusses the recurring problem of AI-generated court filings that are falsified or contain “hallucinated” information and offers suggestions and techniques for avoiding the traps inherent in generative-AI-assisted lawyering.
At dinner recently with two former colleagues, both transactional lawyers, the conversation turned to the topic du jour — how is AI (like ChatGPT or Google’s Gemini) changing the way you do your job? The deal lawyers, both optimists, explained that AI helps them automate and draft contract provisions and summarize large swaths of text. Fewer billable hours spent warming over forms. More focus on the critical thinking that creates value. Win-win for clients and counsel.
When the deal lawyers mentioned litigation, their tone turned incredulous. “How do you guys keep letting ChatGPT draft briefs with fake case citations?” “How many more people need to get sanctioned?” “Don’t you check your work?!”
No lawyer (or client) reading the news should be surprised by this needling. It seems like every week there is another article about a smart, experienced litigator getting taken to task for AI-generated court filings that are falsified or contain “hallucinated” information. As just a few recent examples:
•In January, a federal judge (in Kohls v. Ellison, District of Minnesota, Jan. 10, 2025) barred a submission from an expert on the “dangers of AI and misinformation” that cited to non-existent articles fabricated by ChatGPT. (To quote the court: “The irony.”)
•In April, attorneys for MyPillow CEO Mike Lindell, who is being sued for defamation by a former Dominion Voting Systems executive (in Coomer v. Lindell, District of Colorado, April 2025), unsuccessfully tried to avoid sanctions by explaining away an error-laden, AI-generated brief as a “rough draft.” The court later imposed a modest monetary sanction.
•Just a few weeks ago, a Georgia appellate court (in Shahid v. Esaam, Court of Appeals of Georgia, June 30, 2025) vacated a trial court’s order that relied on two fictitious, AI-generated cases. Not only did the respondent’s counsel fail to address the fact that the two cases were fake, but, as the Shahid Court emphasized, she cited two new, “possibly [hallucinated]” cases in her briefing to the appellate panel.
This article seeks to answer the question posed by those deal lawyers: how does AI-falsified information keep ending up in court filings (and now, apparently, in court orders, too)? It also offers suggestions and techniques for avoiding the traps inherent in generative-AI-assisted lawyering that have landed many litigators in unwanted headlines. With courts now issuing orders that incorporate AI-hallucinated cases, the problem, and the need for practical guidance and further conversation, has become acute.
What are AI ‘hallucinations’?
In simple terms, an AI “hallucination” is fabricated information generated by an artificial intelligence tool — a large language model like ChatGPT, Claude, Gemini, or Meta’s Llama — in response to a user’s prompt. For lawyers, these hallucinations take the form of fake cases, misleading quotes, self-serving interpretations of actual authority, or outright made-up legal principles.
Generative AI “hallucinates” because chatbots are indifferent to the truth of their output. Large language models use machine learning to “predict” an outcome in response to a prompt, generating suggested words or text based on patterns identified in the data used to “train” the model. Prediction, however, does not equate to “truth” or “accuracy,” and chatbots are not designed to create “real” or “correct” responses.
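To make the prediction-versus-truth point concrete, the toy sketch below (in Python, with entirely made-up token probabilities standing in for real training data and hypothetical party names) shows how a next-token predictor can assemble something that looks like a citation without ever checking that the underlying case exists. Real models operate over billions of parameters, but the indifference to truth works the same way.

```python
# Toy illustration only -- not any vendor's actual model. The "training
# data" below is a hand-written table of hypothetical pattern frequencies:
# after "see, e.g.," legal writing usually continues with a case citation.
next_token_probs = {
    ("see", "e.g.,"): {"Smith": 0.4, "Jones": 0.35, "Acme": 0.25},
    ("Smith",): {"v.": 0.9, ",": 0.1},
    ("v.",): {"Jones,": 0.5, "United": 0.3, "State,": 0.2},
}

def predict_next(context):
    """Return the statistically most likely continuation of the context.
    Nothing here asks whether the citation being assembled refers to a
    real, reported decision -- only whether it *looks* like legal writing."""
    probs = (next_token_probs.get(tuple(context[-2:]))
             or next_token_probs.get(tuple(context[-1:]), {"[end]": 1.0}))
    return max(probs, key=probs.get)

prompt = ["see", "e.g.,"]
for _ in range(3):
    prompt.append(predict_next(prompt))

# Prints "see e.g., Smith v. Jones," -- a plausible-looking cite the model
# never verified, because it has no mechanism for doing so.
print(" ".join(prompt))
```

The point of the sketch is the design, not the scale: at no step does the generator consult anything outside its own learned patterns, which is why its output can read persuasively and still be fiction.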
In practice, this feature of AI systems is having a profound and accelerating impact on the content of court filings. The first AI-hallucination case made news in June 2023. In the two years since, according to a database tracking legal decisions in which generative AI produced hallucinated content (AI Hallucination Cases Database — Damien Charlotin, https://bit.ly/4fgZFo2), there have been more than 150 instances of “hallucinated” pleadings, with two-thirds of those cases coming in just the past six months. The problem is not just growing; it is accelerating.
Clearly, this technological feature (or bug) is tripping up litigators in greater and greater numbers as it is recklessly adopted. The reasons are varied, but chief among them are lawyers treating generative AI as a source rather than a tool and failing to exercise a healthy degree of skepticism about its output. Thankfully, recognizing AI systems’ inherent problems can empower litigators to adopt and deploy those systems in less risky ways, to the ultimate benefit of clients and the court system more broadly.
AI: a tool, not a source
Litigators considering whether to leverage AI should keep in mind certain aspects of AI that make it prone to “hallucinate” information, which can end up in briefs and other pleadings.
Most critically, litigators need to internalize that generative AI is not a source of information but merely a tool whose output needs to be verified. A “source” (like a copy of a case) is independent information that an attorney can rely on to support a statement of law or fact made to a court. A “tool” (like Google or Westlaw) is a device by which a source is accessed or analyzed.
Attorneys caught by AI fabrications are on record saying things like “[I was] operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own” because it has to be “finding the cases somewhere. Maybe [the cases] are unpublished.” This fundamentally misconstrues generative AI’s output and role: ChatGPT and other large language models are not sources, and lawyers cannot treat them as such.
Lawyers also need to exercise healthy skepticism toward AI output because generative AI has a propensity for telling attorneys what they want to hear. Every litigator has been part of a conversation in which a colleague says, “I know there is a case that stands for XYZ proposition.” Now there is a chatbot that will spit out a case in response, real or not, because that is the job of the predictive model embedded in it. ChatGPT and its ilk are perfect tools for counsel who want to practice in an echo chamber, and ChatGPT has in fact been called the “ultimate yes man.”
Relatedly, generative AI is adept at mixing fabrications in with real authority and cogent, useful argumentation. Very often a chatbot will generate a sound and compelling legal argument, supported by “real” authority, while embedding fabricated or overstated authority in the same response. It is not hard to imagine a practitioner presented with (for example) six authoritative-looking citations, checking a majority of them, missing that one was made up entirely, and then filing the brief.
Finally, all litigators operate under stringent time and efficiency pressures. Generative AI presents a compelling “one stop shop” for the lawyer under the gun — it can draft text, formulate argument, structure documents, and embed “authority” to support a given position.
These factors in combination — a misunderstanding of AI’s role, the “yes man” nature and quality of its output, and the need to finish work quickly and on budget — leave litigators uniquely vulnerable to adopting AI hallucinations in pleadings and unwittingly deceiving courts.
How to avoid making headlines while leveraging AI as a tool
These lawyer-specific vulnerabilities suggest a fundamental key to the safe adoption of generative AI — artificial intelligence must be guided by and subject to human intelligence. With this truism in mind, it is possible to distill certain strategies and techniques for litigators to safely deploy AI while not falling victim to hallucinations. These techniques include:
•Accept that AI is not a source or authority. Generative AI is a tool and a research assistant, not an independent source. If your drafting or research begins and ends on ChatGPT, you are using it wrong.
•Check applicable rules and court orders for AI disclosure requirements. Many courts now require attorneys to file certifications with their pleadings disclosing whether generative AI assisted in drafting them. Be forthright and, if concerned about unwanted scrutiny, opt not to use AI to help draft pleadings in those jurisdictions.
•Ratchet up professional skepticism. Second-guess AI outputs, particularly those telling you what you want to hear. If you wouldn’t trust a junior lawyer who always agrees with your arguments, don’t trust ChatGPT when it does the same.
•Routinize, and allow sufficient time for, verification. Verifying AI-generated text or content is non-negotiable. Cite check every case and verify every quote, and pull a copy of every “case” a chatbot cites from an independent source; do not rely on the chatbot’s own assurances that its citations are real. Create accountability around verification and build in sufficient time to do it. (A rough sketch of what an automated first pass might look like follows this list.)
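For readers who want to see what routinizing that first pass could look like, below is a minimal sketch in Python. Everything in it is illustrative: the regular expression is a deliberately crude stand-in for a real citation parser, the example citations are placeholders, and exists_in_research_database is a hypothetical helper that a firm would replace with a lookup against Westlaw, Lexis, a court’s own records, or a manual pull of the opinion. The one design point that matters is that verification happens outside the chatbot that produced the draft.

```python
import re

# Deliberately simple volume-reporter-page pattern (e.g., "123 F.3d 456").
# A production cite-checker would need the full reporter list and formats.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Za-z.]+\s*\d*[a-z]{0,2}\s+\d{1,4}\b")

def extract_citations(draft_text):
    """Pull citation-shaped strings out of an AI-assisted draft."""
    return CITATION_PATTERN.findall(draft_text)

def exists_in_research_database(citation):
    """Hypothetical placeholder. Replace with a lookup against Westlaw,
    Lexis, or the court's own records -- asking the chatbot that generated
    the citation whether it is real is NOT verification."""
    return False  # treat every cite as unverified until a human confirms it

def cite_check(draft_text):
    """Return every citation that still needs independent confirmation."""
    return [cite for cite in extract_citations(draft_text)
            if not exists_in_research_database(cite)]

draft = "Plaintiff relies on 123 F.3d 456 and 987 U.S. 654 for this point."
print("Flag for manual verification:", cite_check(draft))
```

Because the placeholder lookup trusts nothing, the sketch flags every citation for human review; wiring it to an actual research database would narrow the list, but the final sign-off still belongs to a lawyer.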
Ultimately, these techniques are all variations on a theme: smart practitioners implementing verification procedures and maintaining independence so they can harness AI’s efficiency benefits while avoiding its hallucinatory pitfalls.
Indiana Magistrate Judge Mark Dinsmore put it aptly in an opinion sanctioning an attorney personally for filing a brief with “AI-generated cases” that “do not exist” (Mid Central Operating Engineers Health and Welfare Fund v. Hoosiervac LLC, S.D. Indiana, Feb. 21, 2025): AI is “much like a chain saw or other useful but potentially dangerous tool[,]” which must be used “with caution” and “accompanied by the application of actual intelligence in its execution.”
Keeping in mind why litigators are vulnerable to AI, and treating AI as a powerful tool — but a tool nonetheless — is the surest way to deploy it safely.
And avoid making headlines or getting needled at dinner by deal lawyers.
This article is also available on Thomson Reuters Westlaw Today.
As the law continues to evolve on these matters, please note that this article is current as of the date and time of publication and may not reflect subsequent developments. The content and interpretation of the issues addressed herein are subject to change. Cole Schotz P.C. disclaims any and all liability with respect to actions taken or not taken based on any or all of the contents of this publication to the fullest extent permitted by law. This publication is for general informational purposes and does not constitute legal advice or create an attorney-client relationship. Do not act or refrain from acting upon the information contained in this publication without obtaining legal, financial and tax advice. For further information, please do not hesitate to reach out to your firm contact or to any of the attorneys listed in this publication. No aspect of this advertisement has been approved by the highest court in any state.