

Two lawyers were fined for submitting fake citations generated by AI, highlighting the serious risks of unchecked technology in the courtroom.
This example illustrates the risks of using generative AI tools like ChatGPT without proper verification. AI “hallucinations” occur when a model generates false references or misrepresents legal facts; they are especially common in general-purpose tools trained on broad data sets that lack specialized legal knowledge. Hallucinations can lead to professional sanctions and significant reputational damage.
This article breaks down the incident above, examines similar cases, outlines the legal implications of using AI in legal workflows, and provides best practices for its responsible use.
In Mata v. Avianca, Inc., the plaintiff’s lawyers used AI to draft a motion that included fabricated citations, misleading the court. Opposing counsel identified the false information in the filings, which led to hearings and sanctions, with the court emphasizing the importance of accurate legal documentation.
The tool in question was ChatGPT, used to automate motion drafting and generate legal citations. The problem arose when ChatGPT produced content containing fabricated references, violating ethical standards.
The AI fabricated case law and citations, wrapping them in language that created the illusion of legitimate authority. Opposing counsel did not detect the inaccuracies immediately, but noticed the discrepancies during review.
The plaintiff’s lawyers had relied on the tool while bypassing standard verification protocols, leading to the submission of false information in an official court document.
The court fined both lawyers and their law firm a combined $5,000, and the Bar Association reviewed the incident for ethical violations. The sanctions were contested: the lawyers argued that using AI tools was in itself legitimate, but the judge focused on the verification failures that allowed such errors to slip through.
The lawyers were sanctioned not just for the AI's error, but for their failure to verify the AI-generated information. The incident serves as a reminder that maintaining human oversight is crucial when using AI tools for legal tasks.
While many were initially skeptical that AI could transform the legal profession, its benefits quickly became clear. Instances of lawyers misusing ChatGPT, however, reminded everyone of the associated risks. The cases and guidance below show how to avoid similar missteps with AI.
Several other high-profile cases highlight the risks of relying on unverified AI output in legal work and the need for verification when using AI in legal practice:
1. Utah Appeals Court Sanction (Richard Bednar)
Richard Bednar was sanctioned by the Utah Court of Appeals for filing a brief with fake ChatGPT-generated citations, including a non-existent case, “Royer v. Nelson.”
Bednar explained that an unlicensed law clerk had drafted the brief, which he then submitted without proper verification. As a result, he was ordered to pay attorney fees, refund client fees, and donate $1,000 to a legal non-profit.
2. California Law Firms Sanctioned
A California judge fined two law firms $31,000 for submitting a brief with fake AI-generated citations. The brief was initially drafted using AI tools, including Google Gemini and Westlaw Precision, and the research it contained was never verified before filing.
Judge Wilner criticized the firms’ undisclosed use of AI, saying it misled the court and could have introduced errors into a judicial order.
3. Wyoming Walmart Case Penalty
In a Walmart personal injury lawsuit, three lawyers were fined a total of $5,000 for citing fake AI-generated cases in a court filing. U.S. District Judge Kelly Rankin emphasized that lawyers must verify their sources even when using AI tools; one lawyer was fined $3,000 and removed from the case.
Judge Rankin stressed the importance of honesty and AI oversight, noting that lawyers should not rely blindly on AI-generated citations.
Using AI without proper oversight carries significant risks. The cases above underscore the need for lawyers to directly supervise and verify any work produced with AI assistance, and they reinforce the legal community’s stance that AI tools are aids, not substitutes for diligent human review. Here’s a closer look at the broader impact:
AI has evolved rapidly in legal practice, but regulations governing its use remain a matter of debate.
Current Regulations and Guidelines
Debating AI's Role: Ban or Regulate?
Developing Robust Safeguards
As AI evolves, more legal professionals are asking: will ChatGPT replace lawyers? The answer is no. AI cannot replace the human judgment, ethical reasoning, strategic thinking, and interpersonal skills that lawyers bring to their work. The goal is to use AI to enhance legal practice while maintaining human oversight.
Courts are setting precedents for handling AI-generated legal documents, especially those containing fake citations, stressing lawyer accountability and the need for oversight. Here’s a summary of court responses:
Legal writing requires adherence to strict citation rules. Lawyers must cite only real, verifiable authorities because submitting fake citations or misleading the court with fabricated references can lead to serious professional consequences.
The ABA emphasizes the need for lawyers to exercise competence and diligence, especially when using AI tools. AI can be used responsibly, but lawyers are still accountable for verifying all content, including AI-generated citations.
Lawyers can still use AI in their legal practice. However, AI-generated content should always be verified and must align with legal ethics.
General-purpose tools like ChatGPT are more likely to generate hallucinated citations than legal-specific tools like Spellbook, which include built-in safeguards against such issues.
Lawyers should always double-check all AI-generated content, train their staff, and use AI tools specifically designed for legal work.
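To make the “double-check everything” habit concrete, here is a minimal, purely illustrative Python sketch of a first-pass screen that pulls reporter-style citations out of a draft and flags any that a human has not yet confirmed. The citation pattern, the VERIFIED_CITATIONS set, and the sample draft are hypothetical placeholders; a real workflow still requires a lawyer to confirm each authority in a trusted database such as Westlaw or LexisNexis.

```python
import re

# Hypothetical set of citations a human has already verified in a trusted database.
VERIFIED_CITATIONS = {
    "575 U.S. 320",
    "598 F. Supp. 3d 1",
}

# Simplified pattern for reporter-style citations such as "575 U.S. 320" or
# "999 F.3d 123"; real citation formats are far more varied than this.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d[a-z]*|F\. Supp\.(?: \d[a-z]*)?)\s+\d{1,4}\b"
)

def flag_unverified(draft: str) -> list[str]:
    """Return every citation in the draft that is not on the verified list."""
    return [c for c in CITATION_PATTERN.findall(draft) if c not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    draft = (
        "As held in Smith v. Jones, 575 U.S. 320, and reaffirmed in "
        "Doe v. Roe, 999 F.3d 123, the standard applies."
    )
    for citation in flag_unverified(draft):
        print(f"UNVERIFIED - confirm before filing: {citation}")
```

Note that a screen like this cannot confirm that a citation is real; it only surfaces which authorities still need human eyes before anything is filed.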