In late July 2025, reporters discovered that shared ChatGPT conversations were appearing in Google Search results. This confirmed a risk many lawyers did not realize existed: anything placed on a public AI share link can be indexed by web crawlers, treated as published content, and surfaced to anyone running a search query.
Once exposed, Google can process ChatGPT text the same way it processes standard web pages: assessing it for unique value, filtering out low-quality content, and displaying it in search results even when publication was accidental.
For legal work, this means research prompts, clauses, or strategy notes placed into a shared chat “just to review” can end up on the public web. This article summarizes how the privacy incident happened, what information remains discoverable, and how to prevent a repeat. You’ll also learn why using Spellbook avoids this entire class of exposure.
Google and OpenAI have since begun removing the indexed pages from search results. However, lawyers must still understand how the exposure occurred and how widespread it was.
OpenAI ChatGPT’s “Share” feature generated a public URL for any conversation. For many users, there was an added checkbox (“Make this chat discoverable”) which allowed the URL to be crawled like any other public page. That design made the links eligible for indexing by Google and other engines without further user action.
Investigations by TechCrunch and Search Engine Land showed the exposure was easy to replicate in seconds by running search queries like `site:chatgpt.com/share` plus a keyword.
Once exposed, Google treated shared conversations like any other indexed page, evaluating them for relevance and prioritizing original content in search results. This matters because many firms already rely on AI in early-stage drafting and risk analysis, a trend documented in guidance on ChatGPT for lawyers.
Early checks found around 4,500 shared ChatGPT conversations appearing in Google results. Further research showed that roughly 100,000 conversations were scraped or archived by third parties.
Reporters found contract language, client-related prompts, negotiation notes, and personal information in indexed pages. This made otherwise internal prompts suddenly part of the searchable web, including confidential legal work product. At this scale, the issue moved from an edge case to a broad exposure event with severe, real-world privacy and legal implications.
Indexing behavior has shifted since the issue was reported. The following two parts outline what OpenAI changed and what lawyers should still keep in mind going forward.
After the exposure was reported, OpenAI removed the “make this chat discoverable” option. OpenAI described the feature as a short-lived experiment and has been working with search engines to remove URLs that were surfaced.
But the cleanup is not instantaneous. Though new chats are no longer made discoverable, previously exposed links can still appear until they are explicitly de-indexed or they expire from the search engine’s cache.
New shared links should no longer be indexable. However, older links and cached copies may still appear in search results. Whether Google continues to show them depends on how the link behaves now (404 vs. still accessible), whether it was removed, and how search engines handle their next crawl cycle.
Because search engines retain indexed pages until they are explicitly removed or drop out of the cache, lawyers should assume some exposure may still exist until they check and clean up any shared links themselves.
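To make that check concrete, here is a minimal Python sketch for auditing a list of previously shared links. It assumes the reported `chatgpt.com/share/...` URL format; the example link and the verdict labels are illustrative, not part of any official tooling.

```python
import urllib.request
import urllib.error

def classify_status(code: int) -> str:
    """Map an HTTP status code to an exposure verdict for an audit list."""
    if code in (404, 410):
        return "removed"           # link is gone; request de-indexing of any cached copy
    if 200 <= code < 300:
        return "still accessible"  # content is live; delete the share link first
    return "check manually"        # redirects, rate limits, and other cases

def check_share_link(url: str) -> str:
    """Fetch a share URL and report whether it still resolves."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
    except urllib.error.URLError:
        return "check manually"    # DNS failure, timeout, or other network error

# Hypothetical example; a real audit would iterate over a list the firm compiles:
# print(check_share_link("https://chatgpt.com/share/example-id"))
```

A "removed" verdict still warrants a removal request for the cached search result, since a dead link and a de-indexed link are not the same thing.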
When ChatGPT conversations become public, the exposure is not just technical. It creates legal, ethical, and reputational consequences.
Many disclosures were not intentional. Lawyers, staff, or even clients used the share link feature without realizing it could make the chat publicly viewable. Some links were sent only for internal review or client sign-off, but became searchable because the discoverability setting was misunderstood. Misunderstanding a "discoverable" setting can be viewed as a failure to take reasonable steps, negating any defense of inadvertence.
Lawyers may assume that ChatGPT conversations are private, but when share links are involved, that assumption does not hold. If a shared ChatGPT conversation contains client names, facts, strategy, or legal advice, making that link public can amount to a waiver of privilege.
Even a partial disclosure—for example, a prompt that reveals a deal fact or litigation position—can weaken privilege protection. And when a link is public, control is lost. You cannot undo the fact that the information became available to anyone, including an opposing party.
Once a shared chat is indexed, opposing counsel can find it using simple search queries without a subpoena. If the content is relevant, it can be used in a briefing or as circumstantial evidence, even if it was not intended to be public. And because indexed pages are treated as material placed in the public domain, a court can treat the disclosure the same way it would treat any other published information.
If privileged or sensitive content leaks, client confidence drops immediately. Exposure can also invite ethics complaints or malpractice claims for failure to protect client information. If the leaked material is embarrassing or high-stakes, public coverage can harm the firm’s reputation in ways that are hard to reverse.
Firms should confirm whether exposure exists. This section walks through how to check for indexed chats and what to do if you find them.
For legal work, treat cleanup like any other record-sensitive action. This ensures you can show when action was taken if the issue is later questioned in discovery, complaints, or audits.
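As one sketch of that record-keeping, the snippet below appends a timestamped entry for each cleanup step to a CSV file. The file name, column names, and example values are hypothetical; any format works as long as it shows what was done and when.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical audit log; adjust the path and fields to your firm's conventions.
LOG_PATH = Path("share_link_cleanup_log.csv")

def log_cleanup_action(url: str, action: str, actor: str) -> None:
    """Append a timestamped record of a cleanup step, such as 'deleted share link'
    or 'submitted removal request', so the firm can show when action was taken."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "url", "action", "actor"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), url, action, actor])

log_cleanup_action("https://chatgpt.com/share/example-id", "deleted share link", "jdoe")
```

An append-only log with UTC timestamps is easy to produce later in discovery or an audit without reconstructing events from memory.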
Avoiding a repeat of the indexing incident requires more than cleanup. The following outlines policies and controls that should be in place before anyone uses ChatGPT on client matters.
Firms should block or prohibit the use of ChatGPT share links for any client or matter-related content. Where controls allow, disable the feature outright to remove the risk entirely. If collaboration is required, route drafts through internal systems or secure platforms rather than public share URLs.
Lawyers are right to be cautious in the wake of this indexing incident. Firms are reconsidering publicly available tools and evaluating whether AI copilots are private by default before deploying them across matters.
Unlike public chat tools, Spellbook runs in a controlled environment rather than on publicly shared URLs. Documents and data used during the drafting process in Spellbook are never exposed to search engines or third-party crawlers, meaning there's no indexing risk.
All documents, data, and prompts remain in Word, under your firm’s control, with no path for inadvertent publication or crawling. You get the speed of AI without the public-web behavior that made the indexing incident possible.
Policy alone is not enough. Train lawyers, staff, and vendors on the risk of using publicly shared links. Create a simple internal rule: “Do not use ‘Share’ for anything tied to a client or matter.” Then, make it a point to review AI usage policies on a set schedule and run periodic audits to confirm no shared links are still live.
Before you share AI output, remove client names, facts, and strategy. When possible, route content through internal review tools instead of public AI interfaces. And if you must share a document or link externally, add no-index controls so search engines cannot index it.
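For pages your firm hosts itself, the standard no-index signals are the robots meta tag and the equivalent `X-Robots-Tag` HTTP response header, both documented by the major search engines. A minimal example:

```html
<!-- In the page's <head>: tells compliant crawlers not to index this page
     or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Server-side alternative, sent as an HTTP response header
     (useful for PDFs and other non-HTML files):
     X-Robots-Tag: noindex, nofollow -->
```

Note that these directives only work on pages you control; they cannot be applied retroactively to a chat hosted on someone else's domain.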
Yes. If a shared ChatGPT conversation containing client names, case facts, strategy, or legal advice becomes publicly viewable or indexed, it can waive privilege and breach confidentiality. Once discoverable in search results, opposing counsel can use it as evidence that privileged information was placed in the public domain.
Yes. If a ChatGPT conversation is shared through a public link, and especially if it is indexed, it is treated as publicly available content. Even if the link was shared by mistake or was only meant for internal review, courts can treat the disclosure as publication and allow its use in litigation.
The highest-risk data includes contract drafts, negotiation strategy, client facts, internal firm discussions, and sensitive personal details. These may be sent to ChatGPT for convenience, but once shared through a public link, they become a liability.
Spellbook avoids the risk of exposure by running in Word, keeping work within your environment, and never generating public URLs that could be indexed or scraped.