Last Updated on Dec 12, 2025 by Kurt Dunphy

Is ChatGPT Private? A Lawyer’s Guide to Securing Confidential Client Data

As more lawyers use AI tools to draft and review legal documents, one question defines the conversation: Is ChatGPT private?

Below, we examine how OpenAI stores, shares, and trains ChatGPT’s models on user inputs, including prompts and responses. The article outlines ChatGPT’s privacy practices, the associated privacy risks, and the settings you can use to limit data sharing, helping you make informed decisions about confidentiality and compliance. 

We also demonstrate how the legal-specific AI tool, Spellbook, offers enhanced protections and seamless Word-native workflows that increase not just your privacy but also your efficiency. You’ll also find actionable steps you can implement today to use AI more safely.

Key Takeaways

  • Consumer ChatGPT plans (Free/Plus) may store your chats and use them to train OpenAI's models unless you manually disable this. Inputting client information can waive attorney-client privilege.
  • ChatGPT Team/Enterprise/API plans do not use your data for training and add encryption, admin/retention controls, and compliance features.
  • Spellbook runs in Word, does not train on your data, offers zero data retention practices and audit trails, and includes other security features that make it the safest option for sensitive legal work. Its automated features include legal document drafting, review and redlining, and contract benchmarking.

Does Using ChatGPT Waive Attorney-Client Privilege?

Sharing client information with ChatGPT can waive the attorney-client privilege because the tool is a third party. Chats that include your prompts and responses may be accessible to OpenAI personnel or contractors, which means your disclosures aren’t “in confidence”.

Even with privacy settings, a public AI tool isn’t a privileged channel. Inputs may be stored or reviewed (and, unless you opt out, used to train and improve AI models). Entering identifiable client facts counts as disclosure to an outsider.

Practically, privilege does not apply when client information is shared with a public AI tool. Both lawyers and clients who enter sensitive details risk breaching confidentiality. If you must use a public AI tool, stick to entering only generalized hypotheticals that do not identify a client or matter. 

To use AI without violating professional standards, choose tools built for law. Spellbook is an AI-powered tool that automates tasks like identifying risks in contracts and generating new clauses. It is designed to address privacy issues directly by implementing Zero Data Retention (ZDR), meaning it does not retain personal conversations after a session ends. It operates as a Microsoft Word plug-in to avoid the "third-party" risk, making it the most compliant option that pairs AI speed with lawyer oversight and strict privacy controls.

For a deeper dive on AI and confidentiality, see Is it legal for lawyers to use ChatGPT?

How ChatGPT Handles User Data

Before using ChatGPT, it is important to understand how it collects, stores, and protects the information you give it. Data practices differ by plan, with varying levels of privacy, control, and compliance for Free, Plus, and Enterprise users.

ChatGPT’s Free Plan

For users on ChatGPT’s free plan, OpenAI may use the content you submit to improve model performance unless you opt out in Data Controls. Prompts, responses, and uploaded files can be reviewed by authorized OpenAI personnel or trusted service providers under strict confidentiality and security obligations (and not for marketing purposes). OpenAI collects the data needed to operate the service and tracks metadata to improve ChatGPT’s performance and reliability.

Data is encrypted in transit and at rest (TLS 1.2+ / AES-256), but there is no end-to-end encryption. 

ChatGPT stores chat histories until you delete them. You can clear specific chats or all history. Deleted conversations are typically removed within 30 days, unless retained for security or legal reasons. 

ChatGPT Plus Plan

ChatGPT Plus offers faster, more capable models (e.g., GPT-4), but the privacy risks are the same as on the Free plan. Unless you opt out, Plus saves chats, which may be reviewed internally by authorized staff or trusted providers for abuse and security monitoring, support, legal matters, or model improvement.

You must still manually opt out on the Plus plan to prevent your conversations from being used for model training, just like the Free plan. The upgrade is about performance and access, not additional data protection.

Team/Enterprise/API Plans

Enterprise plans are for organizations that require strict privacy and control. By default, OpenAI does not use input data for training.

Enterprise plans provide enhanced data isolation, administrative controls, and audit logging, with encryption in transit and at rest. Your business data is not shared with third parties without consent; it is shared only with trusted service providers under strict confidentiality obligations (e.g., for abuse monitoring or required legal compliance).

With compliance options, custom data residency, and retention controls, ChatGPT’s enterprise plans are better suited to the sensitive or regulated use that law firms and in-house teams require.

Confidentiality: Who Can Access Your ChatGPT History?

Authorized OpenAI personnel and trusted service providers may access user content when needed to operate a service, prevent abuse, provide support, maintain security, comply with legal requirements, or improve performance. This access is highly restricted and subject to strict confidentiality/security obligations. It is primarily for moderation, safety, and necessary business operations.

Deleted content is scheduled for permanent removal from OpenAI's systems within 30 days for Free/Plus users, unless it must be retained longer for security, legal, or other legitimate purposes. Enterprise plans allow administrators to set custom (often shorter) retention windows. Administrators can also manage user accounts, including controlling user access, viewing usage, and setting retention policies.

There is no user-side encryption control because OpenAI manages encryption. OpenAI reviews privacy practices periodically to adapt to new regulations and user needs.

Are Your Inputs Used to Train ChatGPT?

Yes, by default, ChatGPT may use the content you provide, including chats and memories, to help improve its models for everyone. 

Enterprise plans are excluded from training by default. Users on Free/Plus plans must disable "Improve the model for everyone" in Data Controls to opt out of their conversations being used for model training; you can also turn off saved memories or chat history in Settings. With the training setting disabled, your inputs are not used to train models.

For sensitive items, such as contracts, passwords, or health data, all plans offer a Temporary Chat feature that does not store history, use memory, or contribute to model training. OpenAI may still retain the chat for up to 30 days for abuse monitoring and security checks. After 30 days, this log is typically purged unless legally required otherwise. 

How to Restrict ChatGPT Data Sharing in Settings

  1. Go to your profile/account: Click your profile icon (usually in the bottom-left corner of the web interface or in the side menu of the mobile app).
  2. Select Settings: Click Settings.
  3. Navigate to Data Controls: In the Settings menu, select Data Controls.
  4. Turn off the toggle: Turn OFF the setting labeled "Improve the model for everyone."

When using the Temporary Chat mode, conversations are not saved to your history, not used to train the models, and are deleted from OpenAI's systems within 30 days.

Steps to Hide or Delete Shared ChatGPT Chats

  1. Revoke shared links via Settings: Go to Settings → Data Controls → Shared Links → Manage, then click the trash can next to any link (or Delete all).
  2. Delete the original chat: In the sidebar, open the chat’s menu → Delete to invalidate its shared link.
  3. Account data options: Request Data export or Account deletion from your account settings.

Remember: Deletion may not immediately erase data. OpenAI limits data sharing to internal systems under its security standards, but retained copies can persist for weeks.

Learn more about the risks of search engine indexing for shared ChatGPT conversations.

Version Comparison: ChatGPT Free vs. Plus vs. API vs. Enterprise

Understanding how each ChatGPT version handles data is key to choosing the right option for your needs. The table below summarizes the main differences in privacy, training use, and suitability for sensitive information.

ChatGPT Free vs. Plus vs. API vs. Enterprise

Free & Plus
  • Training use: Used for training by default unless you explicitly opt out (by disabling "Improve the model for everyone").
  • Privacy & security: Standard consumer controls (opt-out toggle, Temporary Chat); data encrypted in transit and at rest.
  • Best for: Everyday use, personal learning, and research where performance and feature access are the priority.

API
  • Training use: No training on inputs or outputs (since March 1, 2023).
  • Privacy & security: Encrypted in transit and at rest; Zero Data Retention options for maximum security (excludes some features).
  • Best for: Secure integrations, custom applications, and scenarios requiring programmatic control and the strongest baseline privacy guarantee.

Enterprise/Teams
  • Training use: Excluded from training by default; organizations must explicitly opt in to share data for training.
  • Privacy & security: Private and contract-bound; enterprise compliance (SOC 2, GDPR support); admin controls (SSO, audit logs, data residency).
  • Best for: Sensitive business and legal work, corporate deployment, and large-scale team collaboration requiring governance.

Best for sensitive use: API (especially with ZDR) or Enterprise.
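To make the API route concrete: an API request is simply an HTTPS call carrying a JSON body, which your own systems control end to end. The sketch below (Python, standard library only) builds such a payload without sending anything; the model name, prompt, and function name are illustrative assumptions, not OpenAI guidance.

```python
import json

# Illustrative sketch: a minimal chat request body for an AI API.
# Per the policy noted above (since March 1, 2023), OpenAI does not train
# on API inputs/outputs by default; Zero Data Retention is arranged at the
# account/contract level, not via a request parameter.
def build_chat_request(prompt: str, model: str = "gpt-4") -> str:
    """Return the JSON body for a chat request (constructed, not sent)."""
    body = {
        "model": model,  # model name is an illustrative assumption
        "messages": [
            {"role": "system", "content": "You are a contract-review assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

payload = build_chat_request("Summarize the indemnity clause in plain English.")
print(payload)
```

The point for privacy purposes is that nothing leaves your environment until your code makes the call, so firms can log, redact, or block payloads before transmission.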

What You Shouldn’t Share with ChatGPT

Treat every ChatGPT interaction as reviewable. Never enter information that could expose you or your organization to privacy, security, or ethical risks. Avoid entering sensitive or privileged content, including:

  • Passwords or personal ID numbers
  • Legal case information
  • Confidential client communications
  • Internal company documents
  • Medical data or HIPAA-protected content

Rule of thumb: if you wouldn’t post it online, don’t type it into ChatGPT.
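One practical safeguard is to pre-screen text before it is pasted into any public AI tool. The sketch below is illustrative only; the patterns and function name are our assumptions and are nowhere near a complete confidentiality filter, but they show the idea of flagging obvious red flags first.

```python
import re

# Illustrative pre-screen before pasting text into a public AI tool.
# These regexes are demonstration assumptions, NOT a reliable PII detector.
PATTERNS = {
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the red-flag categories found in `text`."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

warnings = flag_sensitive("Client Jane Roe, SSN 123-45-6789, jroe@example.com")
print(warnings)  # ['SSN-like number', 'email address']
```

A generalized hypothetical with no names, numbers, or contact details would come back with an empty list, which is the standard the article recommends for anything entered into a public tool.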

What are the Privacy Risks of Using ChatGPT? 

While ChatGPT includes built-in privacy safeguards, several practical risks remain when sensitive or identifiable information is shared on the platform.

  • Data retention after deletion: Deleted or hidden chats may be kept temporarily for security, legal, or maintenance needs.
  • Training-data exposure: Large training pipelines can be targeted. A breach could expose stored or logged information despite encryption.
  • Human review: Authorized staff or contractors may review snippets for safety or quality.
  • Professional misuse: Entering client/patient details risks confidentiality and, for lawyers, privilege waiver.

Treat chats as reviewable and avoid sharing sensitive or identifiable information.

Ethical Considerations for Lawyers Using AI

Lawyers must safeguard confidentiality, protect client interests, and obtain informed consent when using AI, consistent with the ABA Model Rules on competence (Rule 1.1), confidentiality (Rule 1.6), and communication (Rule 1.4). U.S. bars, including the ABA and the State Bar of California, have issued additional guidance and resolutions emphasizing transparency, oversight, and data security in the use of legal AI.

The ABA's Formal Opinion 512 (issued July 2024) addresses Generative AI. Informed consent from a client is required before using confidential client information in a self-learning GAI tool, given the unique risks of data exposure and training.

ABA Model Rule 1.1 (Competence), Comment 8, explicitly requires lawyers to understand the benefits and risks of the technology they use, including how the tool handles data.

Using consumer AI tools (e.g., public ChatGPT) can expose data to third parties or human reviewers, risking breaches of ethics or the waiver of privilege. When a lawyer inputs confidential client information into a consumer AI tool that is not contractually obligated to protect it (i.e., the default settings of free/Plus ChatGPT), they are essentially disclosing that information to a third-party vendor (OpenAI) and its service providers. 

Legal-specific or firm-controlled AI tools built for confidentiality, data retention controls, and no training on client data can better align with ethical duties. AI tools like Spellbook offer secure, lawyer-directed workflows that preserve privilege and meet professional standards.

How Spellbook’s Legal Protections Compare with Native ChatGPT

Built for lawyers, Spellbook keeps client data out of training and runs on secure, encrypted infrastructure with GDPR-aligned, contract-level safeguards (including zero-retention options). 

Operating as a Microsoft Word plug-in, Spellbook fits real legal workflows. It offers full version tracking, clause libraries and precedents, market-standard benchmarking, and targeted review and redlining modes. Automated Playbooks give consistent guidance, and an Associate feature coordinates multi-document updates while maintaining a complete audit trail.

Trusted by 2,000+ law firms, Spellbook is AI you can trust to protect your reputation, integrity, and client confidentiality. For a comparison of privacy practices across AI tools, see how Google Gemini handles privacy.

Frequently Asked Questions

Is ChatGPT HIPAA, GDPR, or CCPA Compliant?

No. ChatGPT isn’t HIPAA-compliant. GDPR and CCPA protections apply, but Free and Plus users must proactively manage their privacy. Data is used for model training by default unless you manually opt out in the Data Controls settings. Furthermore, fully exercising rights such as the right to deletion often requires a separate request.

Can You Make ChatGPT More Private?

Yes, to a point. Turn off chat history, avoid sharing sensitive data, and consider Enterprise or API plans. Still, it’s not designed for legal or medical privacy.

Is ChatGPT Safer on Mobile Apps vs. Web? 

No. Both apps use the same platform, privacy settings, and data sharing rules.

Start your 7-day free trial

Join 4,000 legal teams using Spellbook
