Is it ethical for lawyers to use Perplexity? Yes, but its citation model poses a risk of misattribution and inaccuracy.
Perplexity links to sources in its answers, which makes outputs feel reliable. However, they may not be.
This article covers the compliance risks of using Perplexity AI, the four Model Rules that present the highest exposure, and the workflows that help lawyers remain compliant with their professional and ethical obligations.
[cta-1]
Perplexity’s risk profile is categorically different from that of other AI tools. Unlike ChatGPT (OpenAI) and Claude (Anthropic), Perplexity AI runs on large language models with a retrieval layer that pulls from the live web and attaches links to sources. That makes results feel verified when they are not.
Perplexity might attribute a dissent’s reasoning to the majority or cite a vacated opinion because it appeared in a recent news summary. A source may misstate a holding, cite a secondary article with no precedential authority, or surface a blog post as though it were binding. While Perplexity can still hallucinate, the risk shifts primarily from fabrication to misattribution and contextual inaccuracy.
Attorneys evaluating Perplexity must also account for the platform's unresolved data sourcing litigation. The New York Times filed a copyright suit against Perplexity in December 2025, alleging unauthorized crawling and reproduction of its reporting. Reddit filed a lawsuit in October 2025, accusing Perplexity and three data-scraping intermediaries of bypassing access controls on an industrial scale.
These cases raise direct questions about the provenance of the data that generates the citations attorneys are warned to verify.
No jurisdiction currently prohibits attorneys from using Perplexity AI. The table below summarizes guidance from the ABA, key state bars, and a landmark federal ruling.
Can lawyers use Perplexity responsibly under these standards? Yes, but only when they maintain competence, protect confidentiality, and treat every result as unverified until independently confirmed.
ABA Formal Opinion 512 governs the four compliance risks below, ordered from most to least severe.
Perplexity shows citations that link to web pages, but those pages may be blogs, news articles, or secondary summaries that courts reject as authoritative.
We’ve seen what happens when attorneys rely on AI-generated citations without verifying them. In Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023), attorneys submitted a brief containing AI-generated citations to nonexistent cases. Judge P. Kevin Castel sanctioned both attorneys and their law firm a total of $5,000 for acting in subjective bad faith.
Mata involved fabricated citations from ChatGPT (OpenAI) rather than misattributed ones from Perplexity, but the Model Rule 3.3 exposure is identical, and the court made clear that reliance on the AI tool is not a defense.
Every citation must be cross-referenced in Westlaw, LexisNexis, or an official database before any court filing.
On February 10, 2026, Judge Jed S. Rakoff ruled in United States v. Heppner (S.D.N.Y.) that 31 documents a defendant generated through consumer Claude (Anthropic) were not protected by attorney-client privilege or the work-product doctrine. The court based its decision on three grounds:
Judge Rakoff noted that enterprise tools with strict confidentiality guarantees "may present a materially different analysis."
Perplexity's standard tiers carry the same risk profile established in Heppner. Before any use involving client data, attorneys must audit the Perplexity API or the specific tier's agreements to determine whether they provide adequate protection.
For teams that need enterprise-grade confidentiality controls, Spellbook enforces Zero Data Retention policies with its AI partners.
The competence obligation under Model Rule 1.1 goes beyond understanding how Perplexity works. Attorneys must understand that Perplexity sources information from the live web, not verified legal databases such as Westlaw, LexisNexis, or Fastcase. Unpublished opinions, subscription-gated case law, and jurisdiction-specific statutory updates are all outside its indexing reach or paywalled.
While Perplexity can find a case, it cannot perform a citator check. It cannot reliably tell an attorney if a case has been overruled, vacated, or superseded—a core requirement of Rule 1.1 competence.
The evolution of Comment 8 to Model Rule 1.1 positions AI search competence as a mandatory professional skill, and Continuing Legal Education (CLE) programs increasingly reflect this expectation.
Partners and supervising attorneys are ethically responsible for ensuring that the conduct of associates and nonlawyer staff—including their use of Perplexity—aligns with the lawyer’s professional obligations.
If a subordinate’s unverified Perplexity output leads to a filing error or a confidentiality breach, the supervising attorney bears the disciplinary risk. Effective AI governance must go beyond a written policy; it requires active training and audit workflows to ensure every AI-generated citation is cross-referenced against a primary legal database.
[cta-2]
Attorneys who use Perplexity can maintain ethical and professional compliance by following a three-step workflow: research, verification, and handoff.
Use Perplexity for early-stage research and determine the authority level of each source it returns. Then, verify every citation against an official legal database before it enters client work. Once the research is confirmed, move the contract work into a purpose-built legal tool.
Under Model Rule 1.1, an attorney must understand that Perplexity retrieves public web content rather than authoritative legal research databases.
Use Perplexity for early-stage research and background information. Before acting on any result, determine whether it is a primary authority, a secondary summary, or a blog post with no precedential value.
Perplexity attaches citations to its outputs, which helps attorneys locate sources faster. Lawyers must verify all citations and legal assertions generated by Perplexity against official legal databases before use.
After research through Perplexity is complete and verified, contract work requires a different tool.
Spellbook runs in Microsoft Word, enabling attorneys to draft, review, redline, and benchmark without switching platforms. Its Thomson Reuters Practical Law integration grounds suggestions in vetted legal content, and the Library feature powers AI with a firm's own knowledge and precedents, meaning every suggestion reflects how the firm actually writes.
Does Perplexity generate reliable legal work product? That depends on what the attorney does with the output, not on the platform's capabilities.
Permissible (attorney treats outputs as unverified starting points):
High-Risk Non-Compliant Actions:
These gaps represent compliance violations for any contract attorney, in-house counsel, or lawyer relying on Perplexity's standard configuration for contract work. Spellbook closes each one inside the contract workflow.
If your team reviews contracts, Spellbook is built to handle the compliance obligations that Perplexity cannot meet. Try Spellbook.
Can lawyers use Perplexity? Yes, within the four-rule framework. No jurisdiction prohibits it, and existing Model Rules apply to Perplexity use without requiring new ethical standards.
The practical question is whether the verification burden outweighs the research utility for contract-heavy work. For drafting, review, redlining, and benchmarking, Perplexity creates friction at every stage.
Perplexity orients; Spellbook executes. Attorneys move contract drafting, review, and redlining into Spellbook, which runs in Microsoft Word and grounds its output in firm precedent.
Start a free trial with Spellbook.
Generally, no. There is no blanket "AI disclosure" rule in the Model Rules. However, disclosure obligations are triggered in three specific scenarios:
No. A Perplexity citation link is not a recognized legal authority. Perplexity indexes the live web, meaning it may surface a summary from a blog post, a retracted news article, or a vacated opinion as if it were binding law.
Under Model Rule 3.3 (Candor) and Rule 11 (Sanctions), attorneys have a non-delegable duty to ensure the accuracy of every citation. Many standing orders (e.g., District of Kansas Standing Order 26-01) now explicitly require attorneys to certify that every citation has been checked by a human against an official legal reporter or a sanctioned database such as Westlaw or LexisNexis before filing.
Not on standard or Pro tiers. While Perplexity Pro offers an improved user experience, its standard terms do not provide the "reasonable precautions" required by Model Rule 1.6 to prevent the waiver of attorney-client privilege.
In the February 2026 ruling in U.S. v. Heppner, the court held that using a consumer-grade AI tool with client data constitutes a disclosure to a third party, potentially waiving privilege because the platform’s terms allow for data collection and model training.
A firm-level policy should address mandatory training, acceptable use, citation verification responsibilities, and data tiering (Consumer vs. Enterprise). Under Model Rules 5.1 and 5.3, supervising attorneys are responsible for AI-related errors or data breaches by their staff.
To meet the "reasonable efforts" standard established in ABA Formal Opinion 512, a 2026-compliant policy must include:
Purpose-built legal AI tools like Spellbook reduce this supervisory burden by incorporating these efforts in the software, ensuring compliance is automated rather than manual.
For contract work, use Spellbook. The tools are not interchangeable. Perplexity is useful for drafting outlines and brainstorming clause language. A lawyer who uses both correctly uses Perplexity to orient to a legal question, then moves into Spellbook to draft, review, and benchmark a contract.
[cta-3]