

A client pastes a confidential draft into a public chatbot. Now, opposing counsel is holding that chat log, ready to use it against you in court. Can an AI disclosure destroy your case?
The answer is yes. In contemporary practice, the information a client, assistant, or vendor enters into public artificial intelligence (AI) systems can surface in court as evidence, complicate legal privilege, and trigger discovery burdens. Even brief disclosures can erode confidentiality and create risk.
This article explains how misuse happens, outlines the principal legal risks, and offers concrete precautions to help you avoid them. You’ll also learn why legal-specific tools are built differently for privacy, security, and confidentiality in high-stakes legal document drafting.
Public AI systems are automated and data-driven, and their use in legal practice raises ethical questions. They can collect and store sensitive content, posing risks for legal matters that require strict data control and accountability.
Entering sensitive or privileged text (contracts, trade secrets, client communications) into a consumer AI platform risks downstream exposure. Service policies typically permit access by a vendor’s team, and regulators or litigants can later request chat logs. Never share client identifiers, deal details, financial terms, or litigation strategy with a public AI tool.
Under OpenAI's standard policy, deleted ChatGPT chat logs are scheduled for permanent deletion within 30 days. Preservation duties and pending legal matters, however, can extend that retention period.
In 2025, a judicial preservation directive required OpenAI to retain account logs, including deleted chats, for specific service tiers; subsequent orders narrowed that scope. Judicial directives and usage policies together govern ChatGPT’s privacy and retention framework, which may change over time.
Courts widely treat AI prompts and outputs as a new form of electronically stored information (ESI), similar to emails, text messages, and other digital communications. Because they are subject to the same discovery rules (e.g., the Federal Rules of Civil Procedure in the U.S.), an opposing party can request them.
Chat logs can speak to a party’s state of mind (what the party knew or intended), knowledge of risks (e.g., a flagged indemnity term), or contradictions (a prior prompt versus a later position). Lawyers should assume anything entered into a public app can be used as evidence.
ChatGPT histories are digital records that may be subpoenaed in civil or criminal matters, similar to emails or collaboration messages. OpenAI can be compelled to produce chat logs via subpoena, court order, search warrant, or equivalent.
There is now a well-known line of sanctions cases arising from fabricated AI citations. In Mata v. Avianca, attorneys were sanctioned after submitting non-existent authorities generated by ChatGPT.
Separately, opposing counsel can use records of previous AI chats (which lack privilege) to impeach a party's testimony in court or to question an attorney’s competence and diligence in fulfilling their ethical duties.
Public AI tools can mirror biases in their training data and in user inputs, and they can generate misleading content. Relying on such output can create compliance issues in regulated environments. European authorities have already penalized OpenAI over GDPR concerns, and government oversight and standard-setting initiatives are accelerating, increasing the regulatory and compliance burden on companies that use AI. Using non-compliant public tools poses a significant risk.
Using public AI platforms risks waiving the attorney-client privilege. Submitting confidential client information to a public service can amount to a disclosure to an unprivileged third party, and that disclosure may trigger a waiver. It can happen when a well-meaning staff member uses a consumer site to “improve wording.” The effects of waiving privilege can be irreversible.
Read more: Can ChatGPT have legal privilege?
Unlike consumer-grade AI tools, legal-specific tools such as Spellbook are engineered for legal practitioners, prioritizing privacy, security, and professional oversight when handling sensitive information. Spellbook’s architecture and feature set ensure that lawyers retain full control over their work product while mitigating the risks associated with general-purpose AI.
Spellbook avoids these consumer-grade drawbacks by design. Public chat tools, by contrast, may store anonymized usage data and may record, retain, or repurpose chat logs in accordance with policy-driven retention.
Because Spellbook works inside Microsoft Word, lawyers do not send confidential text to a public, consumer-facing site through a browser. This reduces the surface area for leaks, scraping, or inadvertent disclosure and minimizes the chance that an interaction on a consumer platform could undermine confidentiality obligations.
Lawyers see suggestions as "Track Changes" and can accept, reject, or modify the AI's input, so they remain fully responsible for the final work product. This reinforces ethical and editorial control.
General AI platforms are vulnerable to advanced threats like prompt injection attacks and social engineering. Spellbook counters these risks by deploying custom playbooks—encoded rulesets that define and enforce company policies, preferred fallback clauses, and non-negotiable legal compliance mandates.
System guardrails constrain algorithmic behavior to approved, pre-vetted legal positions. This rigorous governance helps protect your drafting processes from external compromise and ensures adherence to established standards.
A disciplined approach acknowledges that public AI tools are powerful and flexible, but not risk-free. Because these systems respond to prompts and context, they can be misused if safeguards are not in place.
Avoid entering complete contracts, personally identifiable information, trade secrets, or strategy into public AI tools. Use anonymized or redacted excerpts for experimentation, replacing names, parties, and figures with neutral placeholders.
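For teams that want to script this step, a minimal sketch of pre-submission redaction might look like the following. The patterns, placeholders, and sample text are illustrative assumptions only; they are not exhaustive, and a person should still review any excerpt before it leaves the firm.

```python
import re

# Illustrative patterns only (an assumption for this sketch); real matters need
# broader, matter-specific rules and human review before any text is shared.
REDACTION_PATTERNS = {
    r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+\b": "[CASE NAME]",   # simple case captions
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",                    # U.S. Social Security numbers
    r"\$\s?\d[\d,]*(?:\.\d{2})?": "[AMOUNT]",             # dollar figures
    r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b": "[EMAIL]",           # email addresses
}

def redact(text: str) -> str:
    """Replace obviously sensitive strings before experimenting with an excerpt."""
    for pattern, placeholder in REDACTION_PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Smith v. Jones settled for $1,250,000; contact jdoe@example.com."
    print(redact(sample))
    # Prints: "[CASE NAME] settled for [AMOUNT]; contact [EMAIL]."
```

Pattern-based redaction catches only what it is told to look for, which is why the broader advice here still applies: share excerpts only, keep humans in the loop, and never include client identifiers.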
Do not auto-publish AI-produced content. Verify each suggestion for accuracy, legal compliance, and critical issues such as jurisdictional fit and warranty posture.
Select tools that offer zero data retention, private deployment, or self-hosted options. Request System and Organization Controls (SOC 2) reports, International Organization for Standardization (ISO) certifications, and independent expert assessments to evaluate a vendor's privacy and security practices.
Minimize prompt histories that could unintentionally reveal patterns or sensitive information. Start a new AI session for each matter rather than continuing prior conversations. Avoid carrying over client context, facts, or strategy from one interaction to the next.
Maintain version control for AI prompts and outputs in internal document management systems rather than in public AI applications. Organizations should not rely on an AI vendor’s internal chat history or memory as the system of record.
Adopt a written AI use policy. Cover permissible uses, categories of information that must never be shared, incident response steps for a breach, and retention instructions. Train attorneys and staff to spot misleading outputs.
Lawyers who manage AI carefully today will safeguard their clients, their licenses, and the profession's credibility tomorrow.
Yes. Depending on jurisdiction and facts, prompts and outputs can be treated as ESI and discovered on familiar relevance/proportionality grounds. There is no special AI privilege.
Even if “memory” isn’t permanent, sessions can be cached or preserved, and policies can change. In 2025, a court ordered OpenAI to preserve all logs; OpenAI later stated that the order ended on Sept. 26, 2025.
Public tools can help with brainstorming. However, they may misinterpret queries. Legal counsel should supervise and verify all outputs.