

As more lawyers use AI tools to draft and review legal documents, one question defines the conversation: Is ChatGPT private?
Below, we examine how OpenAI stores and shares user inputs, including prompts and responses, and how it uses them to train ChatGPT’s models. The article outlines ChatGPT’s privacy practices, the associated privacy risks, and the settings you can use to limit data sharing, helping you make informed decisions about confidentiality and compliance.
We also show how Spellbook, a legal-specific AI tool, offers enhanced protections and seamless Word-native workflows that improve not just your privacy but also your efficiency. You’ll also find actionable steps you can implement today to use AI more safely.
Sharing client information with ChatGPT can waive the attorney-client privilege because the tool is a third party. Chats that include your prompts and responses may be accessible to OpenAI personnel or contractors, which means your disclosures aren’t “in confidence”.
Even with privacy settings, a public AI tool isn’t a privileged channel. Inputs may be stored or reviewed (and, unless disabled, used to train and improve AI models). Entering identifiable client facts counts as disclosure to an outsider.
Practically, privilege does not apply when client information is shared with a public AI tool. Both lawyers and clients who enter sensitive details risk breaching confidentiality. If you must use a public AI tool, stick to entering only generalized hypotheticals that do not identify a client or matter.
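If you do use a public tool for generalized hypotheticals, a lightweight redaction pass can help keep identifying details out of the prompt. The sketch below is purely illustrative: the redact_prompt helper, placeholder labels, and patterns are assumptions rather than features of any AI tool, and no pattern list can catch every identifier, so professional judgment still applies.

```python
import re

# Illustrative only: a minimal redaction pass to strip obvious identifiers
# before a prompt goes to a public AI tool. Pattern lists like this are
# inherently incomplete and are no substitute for professional judgment.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,6}\b"), "[DOCKET]"),  # docket-style case numbers
]

def redact_prompt(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Draft a demand letter for Jane Doe (jane@example.com) in case 1:24-cv-01234."
print(redact_prompt(prompt, client_names=["Jane Doe"]))
# -> Draft a demand letter for [CLIENT] ([EMAIL]) in case [DOCKET].
```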
To use AI without violating professional standards, choose tools built for law. Spellbook is an AI-powered tool that automates tasks like identifying risks in contracts and generating new clauses. It is designed to address privacy issues directly by implementing Zero Data Retention (ZDR), meaning it does not retain personal conversations after a session ends. It operates as a Microsoft Word plug-in to avoid the "third-party" risk, making it the most compliant option that pairs AI speed with lawyer oversight and strict privacy controls.
For a deeper dive on AI and confidentiality, see Is it legal for lawyers to use ChatGPT?
Before using ChatGPT, it is important to understand how it collects, stores, and protects the information you give it. Data practices vary by plan, offering different levels of privacy, control, and compliance for Free, Plus, and Enterprise users.
For users of ChatGPT’s free plan, OpenAI may use the content you submit to improve the model’s performance unless you opt out in Data Controls (covered below). Prompts, responses, and uploaded files can be reviewed by authorized OpenAI personnel or trusted service providers under strict confidentiality and security obligations (e.g., not for marketing purposes). OpenAI also collects the metadata needed to run the service and to monitor ChatGPT’s performance and reliability.
Data is encrypted in transit and at rest (TLS 1.2+ / AES-256), but there is no end-to-end encryption.
ChatGPT stores chat histories until you delete them. You can clear specific chats or all history. Deleted conversations are typically removed within 30 days, unless retained for security or legal reasons.
ChatGPT Plus offers faster, more capable models (e.g., GPT-4), but privacy risks are the same as free ChatGPT. Unless you turn off chat history, Plus saves chats, which may be reviewed internally by authorized staff or trusted providers for abuse/security, support, legal matters, or model improvement.
You must still manually opt out on the Plus plan to prevent your conversations from being used for model training, just like the Free plan. The upgrade is about performance and access, not additional data protection.
Enterprise plans are for organizations that require strict privacy and control. By default, OpenAI does not use input data for training.
Enterprise plans provide enhanced data isolation, administrative controls, and audit logging. Communications are encrypted in transit and at rest. Your business data is not shared with third parties without consent; it is disclosed only to trusted service providers under strict confidentiality obligations (for example, for abuse monitoring or legally required compliance).
With compliance options, custom data residency, and retention controls, ChatGPT Enterprise plans are better suited to the sensitive or regulated use cases that law firms and in-house teams face.
Authorized OpenAI personnel and trusted service providers may access user content when needed to operate a service, prevent abuse, provide support, maintain security, comply with legal requirements, or improve performance. This access is highly restricted and subject to strict confidentiality/security obligations. It is primarily for moderation, safety, and necessary business operations.
Deleted content is scheduled for permanent removal from OpenAI's systems within 30 days for Free/Plus users, unless it must be retained longer for security, legal, or other legitimate purposes. Enterprise plans allow administrators to set custom (often shorter) retention windows. Administrators can also manage user accounts, including controlling user access, viewing usage, and setting retention policies.
There is no user-side encryption control because OpenAI manages encryption. OpenAI reviews privacy practices periodically to adapt to new regulations and user needs.
Yes, by default, ChatGPT may use the content you provide, including chats and memories, to help improve its models for everyone.
Enterprise plans are excluded from training by default. Users on Free/Plus plans must disable "Improve the model for everyone" in Data Controls to opt out of their conversations being used for model training. You can turn off “saved memories” or “chat history” in Settings. When history or memory is disabled, your inputs are not used to train models.
For sensitive items, such as contracts, passwords, or health data, all plans offer a Temporary Chat feature that does not store history, use memory, or contribute to model training. OpenAI may still retain the chat for up to 30 days for abuse monitoring and security checks. After 30 days, this log is typically purged unless legally required otherwise.
When using the Temporary Chat mode, conversations are not saved to your history, not used to train the models, and are deleted from OpenAI's systems within 30 days.
Remember: deletion may not immediately erase data. OpenAI limits data sharing to internal systems and maintains security controls, but retained copies can persist for weeks.
Learn more about the risks of search engine indexing for shared ChatGPT conversations.
Understanding how each ChatGPT version handles data is key to choosing the right option for your needs. The table below summarizes the main differences in privacy, training use, and suitability for sensitive information.
Treat every ChatGPT interaction as reviewable. Never enter information that could expose you or your organization to privacy, security, or ethical risks. Avoid entering sensitive or privileged content, such as client names and matter details, contract terms, passwords or credentials, and health information.
Rule of thumb: if you wouldn’t post it online, don’t type it into ChatGPT.
While ChatGPT includes built-in privacy safeguards, several practical risks remain when sensitive or identifiable information is shared on the platform.
Treat chats as reviewable and avoid sharing sensitive or identifiable information.
Lawyers must safeguard confidentiality, protect client interests, and obtain informed consent when using AI, consistent with ABA Model Rules on competence (Rule 1.1), confidentiality (Rule 1.6), and communication (Rule 1.4). U.S. bars, including the ABA and the State Bar of California, have issued additional guidance and resolutions emphasizing transparency, oversight, and data security in the use of legal AI.
The ABA's Formal Opinion 512 (issued July 2024) addresses Generative AI. Informed consent from a client is required before using confidential client information in a self-learning GAI tool, given the unique risks of data exposure and training.
ABA Model Rule 1.1 (Competence), Comment 8, explicitly requires lawyers to understand the benefits and risks of the technology they use, including how the tool handles data.
Using consumer AI tools (e.g., public ChatGPT) can expose data to third parties or human reviewers, risking ethics breaches or waiver of privilege. When a lawyer inputs confidential client information into a consumer AI tool that is not contractually obligated to protect it (as is the case under the default settings of Free and Plus ChatGPT), they are essentially disclosing that information to a third-party vendor (OpenAI) and its service providers.
Legal-specific or firm-controlled AI tools built for confidentiality, data retention controls, and no training on client data can better align with ethical duties. AI tools like Spellbook offer secure, lawyer-directed workflows that preserve privilege and meet professional standards.
Built for lawyers, Spellbook keeps client data out of training and runs on secure, encrypted infrastructure with GDPR-aligned, contract-level safeguards (including zero-retention options).
Operating as a Microsoft Word plug-in, Spellbook fits real legal workflows. It offers full version tracking, clause libraries and precedents, market-standard benchmarking, and targeted review and redlining modes. Automated Playbooks give consistent guidance, and an Associate feature coordinates multi-document updates while maintaining a complete audit trail.
Trusted by 2,000+ law firms, Spellbook is AI you can trust to protect your reputation, integrity, and client confidentiality. For a comparison of privacy practices across AI tools, see how Google Gemini handles privacy.
No. ChatGPT isn’t HIPAA-compliant. GDPR and CCPA protections apply, but Free and Plus users must proactively manage their privacy. Data is used for model training by default unless you manually opt out in the Data Controls settings. Furthermore, fully exercising rights such as the right to deletion often requires a separate request.
Yes, to a point. Turn off chat history, avoid sharing sensitive data, and consider Enterprise or API plans. Still, it’s not designed for legal or medical privacy.
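For teams that reach models through OpenAI’s API instead of the consumer app, API inputs are not used for training by default, though your organization’s agreements and retention settings still govern how data is handled. The snippet below is a minimal sketch of such a call using the official Python SDK; the model name and prompt are illustrative assumptions, and it is not a recommendation to send client content.

```python
# Minimal sketch of calling OpenAI through the official Python SDK (pip install openai).
# The model name and prompt are illustrative assumptions; review your organization's
# data-processing terms before sending any client-related content.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Summarize the key risks in a mutual NDA."},
    ],
)

print(response.choices[0].message.content)
```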
No. Both apps use the same platform, privacy settings, and data sharing rules.