Last Updated on Nov 14, 2025 by Kurt Dunphy

Lawyer Fined for Using AI-Generated Legal Documents with Fake Citations

Two lawyers were fined for submitting fake citations generated by AI, highlighting the serious risks of unchecked technology in the courtroom.

This example illustrates the risks of using generative AI tools like ChatGPT without proper verification. AI “hallucinations” occur when AI generates false references or misrepresents legal facts, particularly in generative tools trained on broad data sets lacking specialized legal knowledge. Hallucinations can lead to professional sanctions and significant reputational damage.

This article breaks down the incident above, examines similar cases, outlines the legal implications of using AI in legal workflows, and provides best practices for its responsible use.

Overview of the Incident: Mata v. Avianca, Inc. (Federal Court, New York)

In Mata v. Avianca, Inc., the plaintiff's lawyers used AI to draft a motion that included fabricated citations, misleading the court. Opposing counsel identified the false information in the filings, leading to hearings and sanctions, with the court emphasizing the importance of accurate legal documentation.

Nature of the AI Usage

The tool in question was ChatGPT, which the lawyers used to automate motion drafting and generate legal citations. The issue arose when ChatGPT produced content with fabricated references, and submitting that content to the court violated ethical standards.

Specifics of the Fake Citations

The AI fabricated case law and citations, presenting them in language that created the illusion of legitimate authority. Opposing counsel did not catch the inaccuracies immediately, but noticed the discrepancies during review.

The plaintiff's lawyers relied on the tool's output without verifying it against authoritative legal sources, leading to the submission of false information in an official court document.

Court's Response and Penalties

The court fined both lawyers and their law firm $5,000, and the bar association reviewed the incident for ethical violations. The sanctions were contested: the lawyers argued that their use of AI tools was legitimate, but the judge rejected that defense, focusing on the lack of verification that allowed such errors to slip through.

The lawyers were sanctioned not just for the AI's error, but for their failure to verify the AI-generated information. The incident serves as a reminder that maintaining human oversight is crucial when using AI tools for legal tasks. 

Lessons Learned and Best Practices of Using AI in Legal Practice

While many were initially skeptical of whether AI could transform the legal profession, its benefits quickly became clear. However, incidents of lawyers filing unverified ChatGPT output reminded everyone of the risks of improper AI use. Here's how to avoid missteps with AI:

  • Understand AI's Capabilities and Limitations: Lawyers must understand AI's limitations and ensure that it does not replace human judgment. AI can err, particularly in areas such as case law citations and legal analysis.
  • Maintain Human Oversight: All AI-generated content must be reviewed by a licensed attorney to verify its accuracy, thoroughness, and credibility.
  • Law Firm Policies and Training: Firms can establish internal policies, including robust citation verification and review processes, and provide staff training.
  • Ethical Duty of Competence: Lawyers must stay informed about AI tools and ensure their accuracy and legitimacy to meet ethical obligations and avoid potential misconduct and sanctions.
  • Judicial Guidance and Rules: Lawyers should stay up to date with local court rules and judicial guidance regarding AI use to avoid violating ethical standards. 

Notable Cases Involving Lawyers and AI-Generated Fake Legal Citations

Other high-profile cases highlight the risks of relying on unverified AI output in legal work and reinforce the need for caution and verification when using AI in legal practice:

1. Utah Appeals Court Sanction (Richard Bednar)

Richard Bednar was sanctioned by the Utah Court of Appeals for submitting a brief with fake citations generated by ChatGPT, including the non-existent case "Royer v. Nelson."

Bednar explained that an unlicensed law clerk had drafted the brief, and he had submitted the brief without proper verification. As a result, he was ordered to pay attorney fees, refund client fees, and donate $1,000 to a legal non-profit. 

2. California Law Firms Sanctioned 

A California judge fined two law firms $31,000 for submitting a brief with fake citations generated by AI. The brief, created with AI tools such as Google Gemini and Westlaw Precision, contained AI-generated research that had not been verified before filing.

Judge Wilner criticized the firms for undisclosed AI use, saying it misled the court and could have led to errors in the judicial order.

3. Wyoming Walmart Case Penalty

In a Walmart personal injury lawsuit, three lawyers were fined a total of $5,000 for citing fake AI-generated cases in a court filing. U.S. District Judge Kelly Rankin emphasized that lawyers must verify sources, even when using AI tools, and fined one lawyer $3,000 while removing him from the case. 

The judge also underscored the importance of honesty and AI oversight, stressing that lawyers should not rely blindly on AI-generated citations.

Implications of AI Errors in Legal Context

Using AI without proper oversight carries significant risks. The cases above underscore the need for lawyers to directly supervise and verify any work produced with AI assistance. They also further reinforce the legal community's stance that AI tools are aids, not substitutes for diligent human review and verification. Here’s a closer look at the broader impact:

  • Legal and Ethical Repercussions: Submitting unverified AI content can breach the principles of competence and candor, potentially leading to sanctions and career damage.
  • Consequences for Involved Lawyers: Lawyers may face fines, reprimands, and potential investigation for submitting AI-generated content with fabricated citations.
  • Impact on Trust in AI Tools: If such incidents continue, doubts about AI tools may grow, and courts may become reluctant to adopt AI-based solutions that could enhance access to justice.

Regulatory Considerations for Using AI in Legal Practice

AI has evolved rapidly in legal practice, but regulations governing its use remain a matter of debate. 

Current Regulations and Guidelines

  • The ABA's Formal Opinion 512 outlines ethical guidelines for lawyers who use generative AI, emphasizing adherence to traditional rules on competence, confidentiality, client communication, and reasonable fees. Lawyers should understand AI's risks and benefits, protect client information, and ensure proper billing for AI-related work.

Debating AI's Role: Ban or Regulate?

  • Should AI be banned or regulated? Critics warn that AI can mislead courts with false citations and advocate for strict regulation or an outright ban. Supporters argue that AI can boost efficiency and increase access to justice, and favor disclosure and human oversight over prohibition.

Developing Robust Safeguards

  • Proposals for safeguards include a mandatory review of AI-generated legal documents by qualified members of the bar association, along with the development of tools to identify and correct inaccurate legal documents before submission.
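To make the idea concrete, here is a minimal, hypothetical sketch in Python of what the first step of such a tool might look like: pulling citation-like strings out of a draft and turning them into a checklist for a human reviewer. The citation pattern and sample text are simplified assumptions for illustration; the sketch does not verify anything on its own, and every flagged citation must still be confirmed by a person against an authoritative source before filing.

```python
import re

# A deliberately simplified, hypothetical pattern for U.S. reporter citations
# (e.g., "123 F.3d 456"). Real citation formats are far more varied, so a
# production tool would need a much richer parser.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)


def extract_citations(brief_text: str) -> list[str]:
    """Return every string in the draft that looks like a reporter citation."""
    return CITATION_PATTERN.findall(brief_text)


def build_verification_checklist(brief_text: str) -> str:
    """Turn the extracted citations into a checklist for a human reviewer,
    who must confirm each one against an authoritative source before filing."""
    citations = extract_citations(brief_text)
    lines = ["Citations requiring manual verification before filing:"]
    lines += [f"  [ ] {c}" for c in citations] or ["  (none found)"]
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical draft text with made-up citations, for illustration only.
    sample_draft = (
        "Plaintiff relies on Smith v. Jones, 123 F.3d 456, and "
        "Doe v. Roe, 789 F. Supp. 2d 1011, in support of this motion."
    )
    print(build_verification_checklist(sample_draft))
```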

As AI evolves, more legal professionals are asking the question: Will ChatGPT replace lawyers? The answer is no. AI can never replace the human judgment, ethical reasoning, strategic thinking, and interpersonal skills that lawyers bring to their work. The goal is to use AI to enhance legal practice while maintaining human oversight.

Court Rulings and Sanctions for AI-Generated Fake Citations

Courts are setting precedents for AI-generated legal documents, especially fake citations, stressing lawyer accountability and the need for oversight. Here's a summary of how courts have responded:

  1. Judicial Proceedings
  • Courts review the discrepancies caused by AI-generated errors and hold hearings to investigate the source of the fake citations. Lawyers are given an opportunity to explain, but submitting unverified AI content is still treated as an ethical violation.
  2. Sanctions Imposed
  • Courts issue fines, mandate continuing legal education (CLE), and refer cases to the local bar for review. These sanctions illustrate the serious consequences of filing false citations.
  3. Court's Rationale
  • Judges focus on the misrepresentation of legal facts as the key issue, highlighting that reliance on AI does not absolve lawyers of their responsibility to verify the accuracy of submitted content.

Key Takeaways

  • All AI-generated content should be verified by lawyers to prevent the submission of fake citations or misleading information.
  • Lawyers who submit unverified AI content may face fines, sanctions, and damage to their reputation.
  • AI can be beneficial if used with safeguards such as mandatory disclosure and review processes.

Frequently Asked Questions

What are the Rules Surrounding Citation in Legal Writing?

Legal writing requires adherence to strict citation rules. Lawyers must cite only real, verifiable authorities because submitting fake citations or misleading the court with fabricated references can lead to serious professional consequences.

What is the ABA's Stance on Lawyers Using AI with Fake Citations?

The ABA emphasizes the need for lawyers to exercise competence and diligence, especially when using AI tools. AI can be used responsibly, but lawyers are still accountable for verifying all content, including AI-generated citations.

Can Lawyers Still Use AI in Legal Practice?

Yes, they can still use AI in their legal practice. However, AI-generated content should always be verified and aligned with legal ethics.

Are Specific Types of AI Tools More Prone to Errors?

Yes, general-purpose tools like ChatGPT are more likely to generate hallucinated citations compared to legal-specific tools like Spellbook, which have built-in safeguards to prevent such issues.

How Can Lawyers Protect Themselves from Legal Malpractice Related to AI?

Lawyers should always double-check all AI-generated content, train their staff, and use AI tools specifically designed for legal work.

