Published on Oct 24, 2024

What Happened to the Lawyer Who Used ChatGPT? Lessons to Learn

Have you heard about the New York lawyer who faced disciplinary action for submitting a brief generated by AI? The incident raises important questions for lawyers who are still unsure of AI's role in the legal field. Understanding what went wrong, and whether the blame lies with the AI or with the lawyer's misuse of it, is crucial.

This article will help illustrate how to effectively use AI tools for tasks like contract drafting and review while maintaining client confidentiality and adhering to ethical standards.

Who Was the Lawyer Who Used ChatGPT?

The lawyer in question is Steven A. Schwartz, a seasoned attorney with over 30 years of experience – not a novice in the field. Schwartz specializes in workers' compensation claims and personal injury lawsuits, often handling cases related to construction accidents and malpractice.

He graduated from the State College of Albany and earned his law degree from New York Law School in 1992, and he had held his law license for more than three decades when the ChatGPT incident occurred. Though he typically practices in New York state courts, he remained involved in critical aspects of the case after it moved to federal court.

Ultimately, both Schwartz and his colleague, Peter LoDuca, were found responsible for violating professional conduct rules during this incident.

What is the Exact Task the Lawyer Used ChatGPT for?

Schwartz admitted to using ChatGPT for legal research. According to court transcripts, he used the AI chatbot to:

  • Gather information on relevant legal frameworks
  • Research case citations to identify past court decisions on similar complaints
  • Draft a legal brief for the court that included citations generated by ChatGPT

At first glance, Schwartz's intention to streamline his legal research appears commendable. However, he failed to consider the ethical implications of using AI or the professional competence its use requires.

AI tools can enhance many aspects of legal practice by automating time-consuming tasks. Still, they aren't a replacement for lawyers, given current limitations such as inaccurate outputs and potential bias in results.

Currently, legal AI tools focus on:

  • Legal research
  • Automation of repetitive tasks
  • Document management
  • Redlining and document reviews
  • Brief drafting 

What Went Wrong in the NY Lawyer's Use of ChatGPT?

Schwartz submitted a legal brief that included relevant information and arguments, but it also cited six fake cases generated by ChatGPT. Known as "hallucinations," these were fabricated, inaccurate outputs that the AI tool presented as fact. The misstep violated professional standards, and the court found that it occurred because:

  • Schwartz didn't understand ChatGPT’s limitations
  • He didn't verify the AI-generated results himself
  • He relied on ChatGPT’s self-verification instead of cross-checking its outputs
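
A tool-assisted first pass can make that verification less error-prone, though it never replaces reading the cases yourself. As a rough sketch, the Python snippet below checks a brief's text against CourtListener's free citation-lookup API, a real service created partly in response to hallucinated-citation incidents; the exact request and response fields shown here are assumptions to confirm against the API's documentation before relying on them:

```python
import requests

# Illustrative sketch only: CourtListener's citation-lookup API exists, but
# the field names below ("text", "citation", "status") are assumptions to
# verify against its documentation.
API_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def flag_unverified_citations(brief_text: str) -> None:
    """Send a brief's text to the lookup service and report each citation."""
    response = requests.post(API_URL, data={"text": brief_text}, timeout=30)
    response.raise_for_status()
    for result in response.json():
        # Assumed: the API returns one entry per citation it detects, with an
        # HTTP-like status (404 = no matching case in the database).
        if result.get("status") == 404:
            print(f"UNVERIFIED: {result.get('citation')} -- read and confirm by hand")
        else:
            print(f"matched: {result.get('citation')}")

# One of the fabricated citations from the Schwartz brief:
flag_unverified_citations(
    "See Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)."
)
```

Even when such a check reports a match, a lawyer still has to read the opinion to confirm it stands for the proposition cited.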

Regardless of how the regulatory frameworks surrounding AI in legal practice evolve, lawyers must always adhere to the ABA's Model Rules of Professional Conduct, which require competence, diligence, and candor toward the court.

To avoid similar pitfalls in your legal practice, learn how to integrate AI ethically by focusing on the following:

  • Supervision and oversight: Always verify AI-generated outputs and cross-check legal findings for accuracy.
  • Client confidentiality: Protect sensitive client information by adhering to strict confidentiality protocols when using AI tools.
  • Bias awareness: Be aware of potential biases in AI tools and actively work to identify and mitigate problematic results.

Legal and Ethical Questions Raised by Using ChatGPT in Law Practice 

The introduction of specialized AI tools for legal professionals has raised several legal and ethical concerns, such as:

  • Lack of accountability and responsibility
  • Data privacy issues
  • Potential bias in results
  • Misleading research
  • Plagiarism
  • Lack of transparency in information gathering
  • Inaccuracy in responses

ChatGPT and Client Confidentiality Concerns 

In legal practice, using third-party tools or external software can lead to risks such as breaching client confidentiality. Lawyers are ethically obligated to protect client data from unauthorized access, and this responsibility extends to their use of AI tools.

When you enter queries into AI systems, including chatbots, the provider can typically use your inputs to train future models, meaning conversations may not be entirely private. To safely incorporate AI into your practice while protecting client information, consider the following:

  • Choose AI tools designed for legal use: Spellbook aids transactional lawyers in contract drafting, review, and redlining, and it emphasizes privacy and security through strict data-handling agreements with its partners.
  • Implement firm-wide AI usage policies: Develop clear guidelines for your team on how and when to use AI. For example, instead of relying on AI to conduct all of the research for a case, use it for document management, comparing language against benchmarked standards, and other tasks where accuracy is easier to ensure.
  • Avoid sharing confidential client data: You can create general document templates, establish contract frameworks, and perform industry-standard comparisons without exposing sensitive client information.
  • Minimize information input: Provide only the data necessary to complete a task. Spellbook's legal-specific capabilities let you obtain reliable results with minimal input, reducing the risk of oversharing sensitive data.
  • Secure client consent when necessary: If you must use AI and sharing client information is unavoidable, always obtain written permission from your client.

The Consequences of Using ChatGPT in Legal Proceedings

Schwartz and LoDuca's disciplinary hearing highlights the serious consequences of misusing AI. Notably, the disciplinary action stemmed not from Schwartz's use of ChatGPT itself but from his failure to verify its output and his lack of candor once the fabricated citations came to light.

As a result of submitting a court filing with false citations, Schwartz and LoDuca were fined $5,000 for misleading the court. In a 34-page opinion, the judge stated that he would not have imposed sanctions had the lawyers been forthright about their actions, and he ordered them to send letters to each judge falsely identified as an author of the fabricated opinions.

The repercussions of misusing AI in legal work can vary based on the severity of the misconduct. Potential legal consequences include:

  • Reprimand: A formal warning issued by the bar association for violating professional conduct rules.
  • Suspension: The lawyer may face temporary suspension from practicing law.
  • Fines: Monetary penalties, like the one imposed on Schwartz and LoDuca, for submitting false or misleading information.

In severe cases, such as copyright infringement involving AI or breaches of client confidentiality, lawyers risk disbarment, the permanent revocation of their license to practice.

Using ChatGPT for case research carries specific risks that could lead to case dismissal or other legal repercussions:

  • Motions to dismiss: Defendants may move to dismiss if they believe the filings contain factual inaccuracies or fabricated citations.
  • Negligence claims: If you use AI incompetently or fail to verify its output, you could face negligence claims, which may also result in case dismissal.

Public and Legal Community Reaction to the Lawyer's ChatGPT Use

The damage to Schwartz's reputation is significant. The judge found that he acted in bad faith by knowingly submitting misleading statements, which justified the disciplinary action taken against him. 

Schwartz and LoDuca initially lied to the court, calling their credibility into question. Their position worsened when the court ordered them to submit copies of the ChatGPT-generated cases: the opinions were so nonsensical that it was clear no human lawyer had reviewed them.

Schwartz admitted that he did not understand the technology or take the time to verify its accuracy, raising questions about his integrity, credibility, and technological competence. The long-term impact on his career remains uncertain.

In Texas, the legal industry's response has been particularly pointed. One federal judge barred purely AI-generated filings, ruling that lawyers may submit AI-drafted documents only if they certify that a human has reviewed the content.

Media coverage suggests that Schwartz and his colleague have become the cautionary public face of the issue. Newspapers reported the facts while discussing the legal standards for AI usage in law, emphasizing the need for accuracy.

Overall, this incident has not deterred people from relying on AI. Instead, it has sparked concerns and increased public debate about AI's impact on the legal profession and the need for a solid legal framework for its proper use.

How Can Lawyers Safely Integrate AI Tools into Their Practice?

Follow these guidelines to ethically integrate AI tools into your legal practice. By taking precautionary measures, you can avoid accuracy issues in your documents and enjoy the benefits of AI without facing legal or financial consequences:

  • Educate yourself on AI: Understand the benefits, drawbacks, limitations, and technological updates related to AI. Stay informed about advancements that enhance its application in the legal field while being aware of potential limitations to prevent problems.
  • Verify AI responses: Remember that AI can generate false responses that sound legally valid. Always check the credibility and reliability of the sources used.
  • Learn from others' mistakes: Take lessons from incidents like the Schwartz case, where improper use of AI led to severe sanctions. Engage with colleagues to share experiences and improve your AI usage.
  • Start with low-risk tasks: Begin by delegating low-risk tasks to AI. As you become more comfortable, gradually increase your trust in AI while continuously overseeing its work. For example, Spellbook's trial period is an excellent way to build your routine with AI responsibly.
  • Stay updated on guidelines: Keep informed about the latest state guidelines and regulatory frameworks for AI in legal practice. Familiarity with current standards ensures your AI usage complies with professional rules of conduct.
  • Choose your tools wisely: Use trusted, reputable AI platforms to minimize risks. Spellbook is specifically designed for lawyers, helping you avoid risks associated with unreliable or untested platforms.
  • Maintain transparency with your team: Be open with your colleagues about how you use AI in document drafting, review, and contract management. Transparency builds trust and helps everyone understand AI's capabilities and limitations.

Key Takeaways

Many lawyers rely on AI daily, but only a few make headlines for using it incompetently. Instead of joining them in the headlines, learn from their mistakes. Here are the key lessons to remember:

  • Adhere to professional and ethical obligations: While AI can be helpful, always prioritize your professional responsibilities.
  • Double-check AI-generated content: Verify the accuracy of all AI-generated content, even if it seems credible.
  • Stay informed about AI-related laws: Proactively track new rules and guidance so your practice stays ahead of potential legal issues.

Frequently Asked Questions

Should Lawyers Stop Using ChatGPT or AI Tools?

No, lawyers shouldn’t stop using ChatGPT or other AI tools. Instead, they should understand the pros and cons of incorporating AI into their legal practice. Benefits include increased productivity, reduced errors, and improved compliance. Many lawyers worry about potential accuracy issues, data confidentiality risks, and the loss of essential legal skills, but there are many ways to address these concerns.

Can ChatGPT Be Used to Represent Clients in Court?

No, ChatGPT cannot represent clients in court. Only licensed lawyers with expertise and moral judgment are qualified to represent clients.

Does Using ChatGPT Affect the Lawyer-Client Relationship?

Yes, using ChatGPT can affect the lawyer-client relationship. AI can streamline workflows in law offices, giving lawyers more time to focus on and strengthen client relationships. AI can also lead to confidentiality breaches, which may harm client-lawyer relationships and result in legal actions against lawyers. Being proactive about AI and ensuring that AI is used ethically and responsibly can help lawyers avoid these problems.
