Last Updated on Dec 22, 2025 by Kurt Dunphy

Can ChatGPT Be Used Against You? Risks and How to Use Legal AI Tools

A client pastes a confidential draft into a public chatbot. Now, opposing counsel is holding that chat log, ready to use it against you in court. Can an AI disclosure destroy your case?

The answer is yes. In contemporary practice, the information a client, assistant, or vendor enters into public artificial intelligence (AI) systems can surface in court as evidence, complicate legal privilege, and trigger discovery burdens. Even brief disclosures can erode confidentiality and create risk.

This article explains how misuse happens, the principal legal risks involved, and offers concrete precautions to help you avoid them. You’ll also learn why legal-specific tools are built differently for privacy, security, and confidentiality in high-stakes legal document drafting.

Key Takeaways

  • Courts and regulators treat AI chat logs and prompts as electronically stored information (ESI), and they are not privileged: sharing confidential communications with a third party (the AI company) that does not maintain the required confidentiality generally waives the privilege.
  • Lawyers should treat public AI prompts as discoverable content that can expose sensitive data, such as personal information.
  • Legal AI tools such as Spellbook mitigate these risks through security measures such as zero data retention, in-Word workflows, and robust governance features, which are safer defaults than those of consumer chat tools. Spellbook operates under secure enterprise contracts that ensure data privacy, preserving confidentiality and the attorney-client privilege.

Real Risks: Ways ChatGPT Can Be Used Against You

Public AI systems are automated and data-driven, and their use in legal practice raises ethical questions. They can collect and store sensitive content, posing risks for legal matters that require strict data control and accountability.

Privacy & Confidential Data Exposure

Entering sensitive or privileged text (contracts, trade secrets, client communications) into a consumer AI platform risks downstream exposure. Service policies typically permit access by a vendor’s team, and regulators or litigants can later request chat logs. Never share client identifiers, deal details, financial terms, or litigation strategy with a public AI tool.

Data Retention and Access

Under OpenAI's standard policy, deleted ChatGPT chats are scheduled for permanent deletion within 30 days. Preservation duties and legal matters can extend that retention period.

In 2025, a judicial preservation directive required OpenAI to keep account logs, including deleted chats, for specific service tiers. Subsequent orders narrowed that scope. Judicial directives and usage policies govern ChatGPT’s privacy and retention framework, which may change over time.

Evidence in Court

Courts widely treat AI prompts and outputs as a new form of ESI, similar to emails, text messages, and other digital communications. Because they are subject to the same discovery rules (e.g., the Federal Rules of Civil Procedure in the U.S.), an opposing party can request them. 

Chat logs can speak to a party’s state of mind (what the party knew or intended), knowledge of risks (e.g., a flagged indemnity term), or contradictions (a prior prompt versus a later position). Lawyers should assume anything entered into a public app can be used as evidence.

Legal Discovery / Subpoena Risk

ChatGPT histories are digital records that may be subpoenaed in civil or criminal matters, similar to emails or collaboration messages. OpenAI can be compelled to produce chat logs via subpoena, court order, search warrant, or equivalent. 

Impeachment and Contradiction

There is now a well-known line of sanctions cases arising from fabricated AI citations. In Mata v. Avianca, attorneys were sanctioned after submitting non-existent authorities produced by ChatGPT.

Separately, opposing counsel can use records of previous AI chats (which lack privilege) to impeach a party's testimony in court or to question an attorney’s competence and diligence in fulfilling their ethical duties. 

Reputation or Compliance Risk

Public AI tools can mirror public and user biases and generate misleading content. Relying on such output can create compliance issues in regulated environments. European authorities have already penalized OpenAI over GDPR concerns. Government oversight measures and standard-setting initiatives are accelerating, increasing the regulatory and compliance burden on companies using AI. The use of non-compliant public tools poses a significant risk.

Waiving Attorney‑Client Privilege

Using public AI platforms risks waiving the attorney-client privilege. Submitting confidential client information to a public service constitutes a disclosure to an unprivileged third party, triggering a waiver. This can occur when a well-meaning staff member uses a consumer site to “improve wording.” The effects of waiving privilege can be irreversible.

Read more: Can ChatGPT have legal privilege?

Why Trusted Legal AI Tools Are Safer

Unlike consumer-grade AI tools, legal-specific tools such as Spellbook are engineered for legal practitioners, prioritizing privacy, security, and professional oversight when handling sensitive information. Spellbook’s architecture and feature set ensure that lawyers retain full control over their work product while mitigating the risks associated with general-purpose AI. 

Data Privacy & Zero Retention Policies

Spellbook avoids consumer-grade drawbacks by design, as it:

  • Never uses input to train AI models.
  • Retains no data and does not repurpose it for secondary uses.
  • Confines all outputs strictly to the user’s active document workspace.

This stands in contrast to public chat tools, which may record, store, or repurpose chat logs under policy-driven retention, even when the data is anonymized.

In‑Word Integration (No Copy‑Paste into Public AI Chats)

Because Spellbook works in Microsoft Word, lawyers do not have to paste confidential text into a public company’s browser-based chat. This reduces the surface area for leaks, scraping, and inadvertent disclosure, and minimizes the chance that an interaction on a consumer platform could undermine confidentiality obligations.

Suggestions to Help Lawyers Stay in Control

Spellbook presents its suggestions as Track Changes, so lawyers can accept, reject, or modify the AI’s input and remain fully responsible for the final work product. This reinforces ethical and editorial control.

Custom Playbooks & Review Rules

General AI platforms are vulnerable to advanced threats like prompt injection attacks and social engineering. Spellbook counters these risks by deploying custom playbooks—encoded rulesets that define and enforce company policies, preferred fallback clauses, and non-negotiable legal compliance mandates.

System guardrails constrain algorithmic behavior to approved, pre-vetted legal positions. This rigorous governance helps protect your drafting processes from external compromise and ensures adherence to established standards.

Best Practices: How to Use ChatGPT & Legal AI Tools Safely

A disciplined approach acknowledges that public AI tools are powerful and flexible, but not risk-free. Because these systems respond to prompts and context, they can be misused if safeguards are not in place. 

Never Input Sensitive or Privileged Information

Avoid entering complete contracts, personally identifiable data, trade secrets, or strategy into public AI tools. Use anonymized or redacted excerpts for experimentation.
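As an illustration of the anonymization step, the sketch below replaces a few common identifier patterns with placeholder tags before text ever reaches a public tool. The patterns here are hypothetical examples, not a complete redaction scheme; any real workflow needs broader coverage and human review.

```python
import re

# Hypothetical redaction patterns for illustration only; a real workflow
# needs far broader coverage and human review before text leaves the firm.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MONEY": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@acme.com about the $1,250,000 indemnity cap."))
```

Even redacted excerpts should be limited to generic language questions; a pattern-based pass cannot catch every identifying detail.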

Always Include Lawyer‑Oversight 

Do not auto-publish AI-produced content. Verify each suggestion for accuracy, legal compliance, and critical issues such as jurisdictional fit and warranty posture.

Use Enterprise Tools With Privacy Guarantees

Select tools that offer zero data retention, private deployment, or self-hosted options. Request System and Organization Controls (SOC) reports, International Organization for Standardization (ISO) certifications, and independent expert assessments to evaluate privacy.

Manage Prompt Hygiene

Minimize prompt histories that could unintentionally reveal patterns or sensitive information. Start a new AI session for each matter rather than continuing prior conversations. Avoid carrying over client context, facts, or strategy from one interaction to the next.

Keep Audit Trails and Logs

Maintain version control for AI prompts and outputs in internal document management systems rather than in public AI applications. Organizations should not rely on an AI vendor’s internal chat history or memory as the system of record.
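A minimal sketch of such an internal audit trail, assuming a simple append-only JSON Lines file (the file path and field names are hypothetical; a firm would typically log into its own document management system instead):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; a real deployment would write to the firm's
# document management system, not a loose local file.
LOG_PATH = Path("ai_audit_log.jsonl")

def log_interaction(matter_id: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair to an internal, append-only audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "prompt": prompt,
        "output": output,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_interaction("M-1001", "Summarize clause 4", "Clause 4 limits liability.")
```

Keeping the record in-house means discovery and retention decisions stay with the firm rather than depending on a vendor's chat history.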

Educate Stakeholders & Define Usage Policy

Adopt a written AI use policy. Include permissible uses, prohibited share categories, incident response for a breach, and retention instructions. Train attorneys and staff to spot misleading outputs.

Lawyers who manage AI carefully today will safeguard their clients, their licenses, and the profession's credibility tomorrow.

Frequently Asked Questions 

Can What I Type into ChatGPT Be Subpoenaed in Court?

Yes. Depending on the jurisdiction and facts, prompts and outputs can be treated as ESI and are discoverable on familiar relevance and proportionality grounds. There is no special AI privilege.

Will ChatGPT Remember My Inputs Forever?

Even if “memory” isn’t permanent, sessions can be cached or preserved, and policies can change. In 2025, a court ordered OpenAI to preserve all logs; OpenAI later stated that the order ended on Sept. 26, 2025.

Is It Safe to Use ChatGPT for Legal Drafting or Contracts?

Public tools can help with brainstorming. However, they may misinterpret queries. Legal counsel should supervise and verify all outputs.

Start your 7-day free trial

Join 4,000 legal teams using Spellbook
