
You didn’t start using AI because you wanted novelty. You started because legal work is piling up faster than you can handle. And yet, regardless of the hype, AI hasn’t consistently helped.
Sometimes AI produces useful results. Other times, it gives you a semi-polished mess that takes longer to fix than starting from scratch.
However, that inconsistency isn’t a technology problem. It’s a communication problem.
AI responds similarly to how a junior lawyer might if you gave them vague instructions, missing context, or unclear expectations. “Draft a motion.” “Review this contract.” “Summarize the law.” You know what happens next.
That gap is what we're closing today: effective AI prompting strategies that let AI handle the clerical and structural heavy lifting, freeing you to focus on the legal judgment and strategy that AI cannot replace. Thoughtful prompting is the key to accurate, useful legal work product on the first try (or the second).
You wouldn't just toss a junior associate a contract and say, "handle this." You'd walk them through how you want it done. AI needs the same level of clarity, if not more.
There's a reason most lawyers get mediocre results from AI. The way you ask matters just as much as what you ask. A research memo requires different instructions than a client email. A contract review needs a different framing than a motion draft. Once you understand strategic prompting, the quality of the output from AI improves dramatically.
Contract review is where vague prompts waste the most time, because AI is almost never told what risk means in this deal or which standards it should use. Without clear direction, AI defaults to surface-level summaries.
Ineffective Prompt: “Review this NDA and flag any issues.”
Strategic Prompt: “Review this NDA governed by New York law. Compare the NDA against our firm's attached 'Standard Mutual NDA Template' and flag any deviations in the Indemnification section. Summarize risks in a table with columns for clause, issue, and potential impact.”
The difference is intent. You define the contract type, the legal lens, and the output format. The result is something you can actually work with. As always, the final NDA requires attorney review. Strategic prompting streamlines contract review processes, but it can't decide what’s acceptable in your deal.
Asking AI to “find cases on X” invites overconfidence and under-contextualized answers, especially in unsettled areas of law. AI can be fast, but speed without control is dangerous. Give it the issue, jurisdiction, key facts, depth of analysis, and citation format. Ask it to flag uncertainties.
Ineffective Prompt: “Research employment non-competes in California.”
Strategic Prompt: “Analyze whether non-compete provisions in employment contracts are enforceable in California for software engineers in senior technical roles, focusing on post-2021 case law under Bus. & Prof. Code § 16600. Include any recent statutory changes, conflicting circuit authority, and practical enforceability considerations. Structure as an IRAC memo with case citations."
Every citation and holding must still be verified. Effective prompts simply reduce the distance between your question and the sources to check.
First drafts are AI's sweet spot, but only if you give it the right scaffolding. Specify the court, procedural posture, applicable rules, tone, and required sections of a document. Explain the strategic goal. Are you preserving arguments? Narrowing issues? Setting up a later filing?
Ineffective Prompt: “Draft a motion to dismiss.”
Strategic Prompt: “Draft a motion to dismiss under Rule 12(b)(6) for the Southern District of New York. Assume the complaint alleges breach of contract and fraud. Apply the Twombly/Iqbal standard and include sections for legal standard, argument, and conclusion. Reference the SDNY Local Rules regarding page limits and the specific Individual Practices of Judge [Name] regarding pre-motion letters.”
The more context you provide, the less you'll need to fix later. And as always, treat drafts as raw material: useful, but never final.
Client emails and correspondence need a lighter touch, but this is where generic AI outputs fall flattest. AI typically defaults to neutral corporate language unless you tell it otherwise, which can feel cold or misaligned in sensitive situations. A good prompt here should read less like legal analysis and more like emotional intelligence.
Ineffective Prompt: "Write an email updating the client on case status."
Strategic Prompt: "Draft a case status update email to general counsel at a tech startup (sophisticated client, prefers brief updates). Tone: professional and reassuring. Cover: (1) discovery deadline extended to May 15 by agreement, (2) we're preparing responses to interrogatories, (3) next steps include expert designation in June, (4) timeline still on track for fall mediation. Length: 3-4 paragraphs. Close with an offer to discuss a strategy call if needed."
Keep privilege in mind. Don't include confidential client information in prompts unless you're using a secure, firm-approved tool. Anonymize when you can. Every piece of client communication needs attorney review before it goes out. AI can't read the room the way you can.
Once basic prompting feels natural, advanced techniques will help you tackle the multi-layered analysis that defines sophisticated legal work. At this level, the goal is not speed alone, but depth, accuracy, and strategic insight in complex matters.
Chain-of-thought prompting asks the AI to reason step by step rather than pattern-matching to the most common outcome. It’s the equivalent of saying, “Don’t just give me the answer. Show me how you got there.” You will see where the logic goes off track (if it does) and can course-correct in follow-up prompts.
By instructing the AI to walk through the legal standard, apply the facts element by element, and consider counterarguments, you reduce the risk of shallow analysis. This technique structures complex legal arguments logically and supports evidence-based legal strategy.
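For readers who build their own prompts programmatically, here is a minimal sketch of how a chain-of-thought instruction might be assembled. The function name, prompt wording, and sample facts are all illustrative assumptions, not any particular tool's API:

```python
def build_cot_prompt(issue: str, facts: str) -> str:
    """Assemble a chain-of-thought prompt that forces step-by-step reasoning."""
    return (
        f"Legal issue: {issue}\n"
        f"Key facts: {facts}\n\n"
        "Do not jump to a conclusion. First state the governing legal standard, "
        "then apply the facts to each element one at a time, then raise the "
        "strongest counterargument, and only then give your conclusion."
    )

# Hypothetical example matter
prompt = build_cot_prompt(
    issue="Enforceability of a non-compete under Cal. Bus. & Prof. Code § 16600",
    facts="Senior software engineer; 12-month restriction; California employer",
)
```

The point is simply that the reasoning instructions live in the prompt itself, so every request made through this helper demands the same element-by-element analysis.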
Few-shot learning means showing AI at least two examples of what good output looks like before asking for new work. AI then mirrors what the firm already considers good work.
If you need case summaries in a specific format, provide examples from previous memos. If you want a contract clause drafted in your firm's style, paste two similar clauses from prior deals.
When everyone's prompts include the same examples of excellent work product, AI becomes a vehicle for spreading best practices. Not to mention improving legal writing quality and consistency across your entire team.
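As a sketch of how few-shot prompting works mechanically (the helper and placeholder summaries below are hypothetical), approved work product is simply prepended to the new request so the model has a format to mirror:

```python
def build_few_shot_prompt(examples: list[str], new_request: str) -> str:
    """Prepend approved work product so the model mirrors its format and style."""
    parts = ["Follow the format and style of these approved examples exactly.\n"]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}\n")
    parts.append(f"Now produce the following in the same format:\n{new_request}")
    return "\n".join(parts)

# Placeholder summaries standing in for real firm work product
approved = [
    "Case: [name] (2022). Holding: [holding]. Key facts: [facts]. Relevance: [why].",
    "Case: [name] (2023). Holding: [holding]. Key facts: [facts]. Relevance: [why].",
]
prompt = build_few_shot_prompt(approved, "Summarize the attached opinion in this format.")
```

Swapping in a shared set of firm-approved examples is what turns this from a personal trick into a consistency tool for the whole team.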
Legal work almost never gets done in one draft, and AI-assisted work isn’t any different. When the first output is close but not quite right, the instinct shouldn’t be to abandon it. It should be to refine it. Treat prompting as a conversation.
For example, "Good start, but focus more on the damages analysis and less on liability."
Narrow the scope. Correct assumptions. Ask the AI to expand one section, tighten another, or reframe an argument using a different standard. Over time, you’ll learn when refinement works and when it’s faster to start over with a cleaner prompt.
One of the easiest ways to strengthen legal work is to step outside your own position and stress-test it. Ask “How would opposing counsel attack this?” or “What’s a judge likely to be skeptical about?” Multi-perspective prompting accelerates case analysis and strategy development.
Instead of asking AI for a single answer, ask it to analyze the issue from multiple viewpoints, such as opposing counsel, the court, and the client. That will surface potential weaknesses, which can mean the difference between winning and losing.
Whether you're drafting a complaint at 5 p.m. or reviewing a commercial lease first thing Monday morning, strategic prompting practices will get you better results faster. Run through this list before assuming the AI is the problem.
If you spent twenty minutes crafting the perfect prompt for a motion to dismiss, why start from scratch next time? Building a personal prompt library turns one-off prompting wins into repeatable workflow improvements.
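A prompt library can be as simple as named templates with fill-in slots. The sketch below is one minimal way to do it; the template names and slot names are made up for illustration:

```python
# A minimal personal prompt library: named templates with fill-in slots.
PROMPT_LIBRARY = {
    "motion_to_dismiss": (
        "Draft a motion to dismiss under {rule} for the {court}. "
        "Apply the {standard} standard and include sections for legal "
        "standard, argument, and conclusion."
    ),
    "nda_review": (
        "Review this NDA governed by {law} law. Compare it against our "
        "standard template and summarize risks in a table with columns "
        "for clause, issue, and potential impact."
    ),
}

def render(name: str, **slots: str) -> str:
    """Fill a saved template's slots so last month's prompt works on this month's matter."""
    return PROMPT_LIBRARY[name].format(**slots)

prompt = render(
    "motion_to_dismiss",
    rule="Rule 12(b)(6)",
    court="Southern District of New York",
    standard="Twombly/Iqbal",
)
```

The payoff is that the twenty minutes of prompt-crafting happens once; afterward you only change the facts that differ from matter to matter.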
Using AI doesn’t change a lawyer’s professional and ethical obligations. It introduces new ways to test those obligations.
Using AI competently means understanding its limits. These tools can produce results that sound confident even when they're wrong. They can misstate the law, hallucinate citations, or gloss over nuance. That’s why every legal conclusion, citation, and factual assertion must be checked by a human lawyer before it reaches a client or a court.
Lawyers must understand where data goes and how it’s handled. Never use 'Public' or 'Consumer' AI models for client-identifiable data. Only use enterprise-grade tools with 'Opt-Out' data training policies. If a lawyer puts a client's trade secret into a public version of ChatGPT (the "Free" version), that data may be used to train future iterations of ChatGPT. If you wouldn’t disclose a detail to a third party, it doesn’t belong in a prompt.
Finally, transparency matters. How AI is used, how time is billed, and what efficiencies are passed on to clients are evolving questions. When the rules are unclear, the safest course remains the profession's oldest: exercise judgment, document your decisions, and err on the side of caution.
Everything this article covers (clarity, structure, iteration, judgment) is already built into Spellbook. While general-purpose AI tools force you to become a prompt engineer, Spellbook lets you stay a lawyer. It filters out noise so you can exercise your legal judgment where it matters most.
With Spellbook, you don’t start from a blank prompt. You start with tools designed for how lawyers actually think and draft, directly in Microsoft Word.
Curious how intelligent prompting can transform your legal workflows without the typical learning curve? Explore Spellbook and see what purpose-built legal AI can do.
No. Effective prompting is about clear communication, not coding. If you can explain what you want to a junior associate, you can prompt AI effectively. The skills you use every day translate directly into better prompts. Technical knowledge helps with advanced techniques, but the fundamentals are structured thinking applied to AI conversations.
You'll see immediate improvement by using basic techniques such as providing clearer instructions, better context, and specific output requirements. Within a week of intentional practice, most lawyers notice they get usable first drafts instead of starting over multiple times. The ongoing gains come from building your prompt library, learning what works for your specific practice area, and refining techniques through iteration.
No. AI prompting complements traditional research but doesn't replace it. AI accelerates the initial research phase, but you still need to verify citations, check for subsequent history, and apply independent legal judgment. Think of AI as a research assistant who works incredibly fast but still needs supervision.
Poor prompting wastes time and produces unreliable outputs that create more work than they save. Vague prompts generate generic results that require extensive editing. Missing context leads to irrelevant analysis. Worst case: poorly prompted AI produces confident-sounding but incorrect legal analysis that makes it into client work before anyone catches the error.
Track output quality (whether it’s usable on first try versus extensive revision needed), time savings (faster completion compared to manual work), consistency (similar tasks produce similar quality), and minimal editing required. If you spend less time reformatting and more time on substantive review, your prompts are working. If you get results you can build on rather than starting over, you're on the right track. Lawyers who track these metrics can see measurable improvement within 2-3 weeks of deliberate prompting practice.