Last reviewed on May 12, 2026.
The first AI policy at most firms gets written under pressure — usually after someone has used a consumer chatbot in a way the partners did not expect, and management decides it is time to have something in writing. The result tends to be a document that prohibits everything, gets pinned to the intranet, and is never read again.
A usable AI policy is different. It is short, opinionated, and tied to specific approved tools. It tells lawyers what they can do, not just what they cannot. This guide is a working framework for drafting that kind of policy. It assumes the firm has already made (or is about to make) at least one AI tool decision; the policy is what surrounds that decision.
This is operational guidance and not a substitute for advice from counsel in the relevant jurisdiction. Professional-conduct rules differ by regulator. The legal AI ethics framework covers the underlying principles.
Section 1 — Scope and who the policy covers
State plainly who the policy applies to. Most firms cover partners, associates, paralegals, business-services staff, and contract attorneys. Some extend it to outside counsel doing work the firm has staffed. Each group has different obligations under the firm's existing supervision and confidentiality rules, and the policy should reference those.
Be specific about what is in scope and what is not. A typical scope statement covers: generative AI assistants, AI features inside existing legal software (research, contract review, eDiscovery), and any external chatbot or model accessed through a personal account. It excludes well-understood predictive features that have been part of legal software for years (citation suggestions in research, auto-categorisation in document management) unless those features change materially.
Section 2 — Approved and prohibited tools
This is the section that matters most in practice. Lawyers want a list. Maintain one.
Approved tools
List the AI tools the firm has reviewed and contracted with, with a short note on what each is approved for. For example, "Tool X is approved for first-pass contract review on transactional matters. Outputs must be reviewed by a qualified lawyer before they leave the firm." A reader should be able to skim the list and know what they can use right now without escalating.
Prohibited tools
List the consumer-grade chatbots and free model APIs that staff are not permitted to use with client information. Use named products, not generic categories — "Do not use ChatGPT, Claude.ai, or Gemini consumer accounts with client matter information" is enforceable in a way that "do not use unauthorised AI tools" is not.
The grey-area path
For tools that have not been reviewed yet, give a clear process: who to email, what information to provide, and how long an answer takes. If approval is not built into the policy, lawyers default to either ignoring the policy or never trying anything new.
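Whichever way the firm keeps these three lists, one maintainable approach is a small structured register published separately from the policy text. The sketch below is one illustrative Python shape for such a register; every tool name, field, status, and contact address in it is invented for the example, not a recommendation.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    APPROVED = "approved"
    PROHIBITED = "prohibited"
    UNDER_REVIEW = "under review"

@dataclass
class ToolEntry:
    name: str                 # product name as staff know it
    status: Status
    approved_uses: list[str]  # empty unless status is APPROVED
    conditions: str           # e.g. "outputs reviewed by a qualified lawyer"
    review_contact: str       # who to email with grey-area questions
    last_reviewed: str        # ISO date of the firm's last vendor review

# Illustrative entries only; every value here is hypothetical.
REGISTER = [
    ToolEntry("Tool X", Status.APPROVED,
              ["first-pass contract review on transactional matters"],
              "outputs reviewed by a qualified lawyer before leaving the firm",
              "ai-review@firm.example", "2026-03-01"),
    ToolEntry("Consumer chatbot (personal account)", Status.PROHIBITED,
              [], "never with client matter information",
              "ai-review@firm.example", "2026-03-01"),
]

def lookup(name: str) -> ToolEntry | None:
    """Answer 'can I use this right now?' without an escalation."""
    return next((t for t in REGISTER if t.name.lower() == name.lower()), None)
```

Because the register lives outside the policy document, entries can be added or retired without re-opening the policy itself.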
Section 3 — Client confidentiality
This is the hardest section to codify, because the answers depend on the firm's data agreements with each AI vendor. The minimum content is:
- No client-confidential information is to be entered into a tool that has not been reviewed and approved under section 2.
- For approved tools whose contracts provide that customer data will not be used to train models, that protection can be relied on, but staff should still treat highly sensitive material (active litigation strategy, sealed material, live M&A data) with additional care.
- Where redaction or pseudonymisation is practical without losing the legal utility of the prompt, it should be the default; a minimal sketch of what that can look like follows this list.
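Where redaction is the default, even a crude substitution pass catches the most obvious identifiers before a prompt leaves the firm. The sketch below is a minimal illustration in Python, assuming the matter team keeps a mapping of real names to placeholders; real tooling needs much more than pattern substitution, and nothing here replaces the lawyer's own check of what is actually sent.

```python
import re

def pseudonymise(prompt: str, names: dict[str, str]) -> str:
    """Replace known party names with neutral placeholders before a
    prompt goes to an approved tool. `names` maps a real name to the
    placeholder that stands in for it (both hypothetical here)."""
    for real, placeholder in names.items():
        prompt = re.sub(re.escape(real), placeholder, prompt, flags=re.IGNORECASE)
    return prompt

# Matter-specific mapping, maintained alongside the matter file.
mapping = {"Acme Holdings": "Party A", "Jane Example": "Individual 1"}
print(pseudonymise(
    "Summarise the obligations of Acme Holdings to Jane Example.", mapping))
# -> "Summarise the obligations of Party A to Individual 1."
```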
Section 4 — Supervision and human review
Lawyers are responsible for AI-assisted work product to the same standard as work they drafted themselves. The policy should state this explicitly and describe what supervision looks like in practice:
- Any AI-generated output that leaves the firm — to a client, court, or counterparty — is read and verified by a qualified lawyer.
- Citations generated or summarised by an AI tool are checked against the underlying source. Citation hallucinations have been a recurring issue with general-purpose models, and they have surfaced in legal-specific products as well.
- Numerical claims (dollar amounts, percentages, dates) are checked against the source documents the model was given access to; a coarse sketch of such a check follows this list.
- Material produced for a regulator or tribunal carries the additional checks the firm normally applies to filings.
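None of these checks can be fully automated, but the numerical check lends itself to a coarse first pass ahead of the lawyer's read. The following Python sketch is illustrative only and assumes the source documents are available as plain text; it flags figures in a draft that appear in no source, and every flag still goes to a human.

```python
import re

NUMBER = re.compile(r"\$?\d[\d,]*(?:\.\d+)?%?")

def unsourced_figures(draft: str, sources: list[str]) -> list[str]:
    """Return figures in the draft that appear in no source document.
    A coarse screen only: it cannot judge context, units, or meaning."""
    found = set(NUMBER.findall(" ".join(sources)))
    return [n for n in NUMBER.findall(draft) if n not in found]

# Hypothetical example: the first figure is in the source, the second is not.
draft = "The indemnity cap is $1,200,000 and the deposit is $1,500,000."
sources = ["... an indemnity cap of $1,200,000 ..."]
print(unsourced_figures(draft, sources))  # -> ['$1,500,000']
```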
Section 5 — Disclosure to clients
This section depends on jurisdiction and on each client's own expectations. The policy should at minimum require:
- That engagement letters are reviewed and, where appropriate, updated to address AI use.
- That where a client has specifically asked about or restricted AI use on their matter, those instructions are recorded and followed.
- That where a client's outside-counsel guidelines or in-house policy address AI, the lawyer reviews them before starting work.
Some firms have adopted a standing position of disclosing material AI use to clients on every matter; others disclose on request. Both are defensible. The policy should state which the firm has chosen.
Section 6 — Billing
Where AI tools reduce the hours spent on a task, the firm needs a position on whether those savings flow through to the client. The two common positions are:
- Pass-through. The client is billed for the time actually spent, which is now lower. Hourly rates do not change.
- Value-based. The fee reflects the value of the work done, not the time it took. AI is an input cost.
Either is defensible. The policy should state the default position and the circumstances in which it changes (for example, where a client's outside-counsel guidelines specify a position); a worked example follows.
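The practical difference is easiest to see with numbers, so here is a deliberately simple worked example in Python; the hours, rate, and agreed fee are invented for illustration.

```python
def pass_through_fee(hours_spent: float, hourly_rate: float) -> float:
    """Bill the time actually spent; the AI saving flows to the client."""
    return hours_spent * hourly_rate

def value_based_fee(agreed_fee: float) -> float:
    """Bill the agreed value of the work; AI is an input cost."""
    return agreed_fee

# A task that once took 10 hours now takes 4 with an approved tool.
print(pass_through_fee(4, 500.0))  # 2000.0 -- the client keeps the saving
print(value_based_fee(4000.0))     # 4000.0 -- the firm keeps the efficiency gain
```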
Section 7 — Training
Lawyers cannot supervise tools they do not understand. The policy should require that anyone using an approved AI tool has completed the firm's training for that tool, and that the training is refreshed when material features change. Build a record of training completion; it is part of the firm's supervision posture.
Section 8 — Incidents and reporting
Define what an "AI incident" is and how it gets reported. Examples that should trigger reporting:
- A piece of AI-generated material went out with an error that should have been caught.
- Client information was inadvertently shared with an unapproved tool.
- A tool produced output that suggests a substantive failure (a hallucinated citation that made it into a filing, a misread material clause).
- A vendor reported a security incident affecting customer data.
The reporting path should be short and well known. Most firms route this through the general counsel's office or the equivalent, and a simple structured record (sketched below) keeps reports comparable.
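A short structured record makes incidents comparable across matters and easier to trend at the policy review. The sketch below is one hypothetical shape for it; the field names and categories are illustrative, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    reported_on: date
    reporter: str           # who raised it
    tool: str               # which product was involved
    matter_ref: str         # internal matter reference
    category: str           # e.g. "error shipped", "data exposure",
                            # "substantive failure", "vendor incident"
    description: str        # what happened, in plain language
    client_notified: bool = False
    remediation: str = ""   # steps taken once triaged

# Hypothetical example record.
incident = AIIncident(date(2026, 4, 2), "A. Associate", "Tool X",
                      "M-1234", "substantive failure",
                      "Hallucinated citation caught at partner review.")
```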
Section 9 — Review cadence
State when the policy gets reviewed. Annually is common; six-monthly is reasonable in the first two years of AI adoption when products and vendor practices are changing quickly. Name the owner who is accountable for the review.
What to leave out
Policies fail when they try to anticipate every situation. Resist the temptation to draft prohibitions for hypotheticals nobody has encountered. Resist the temptation to copy long sections from a vendor's marketing material about responsible AI. Resist the temptation to make the policy a substitute for a training programme — a policy is a baseline, not a tutorial.
A working AI policy fits on a few pages and points outwards to the specific procurement, training, and ethics documents that do the heavy lifting. If the policy gets longer than that, it is probably trying to be a textbook.
Common mistakes
- Drafting the policy without input from the lawyers who will be subject to it. Adoption is low when the policy reads as if written by people who do not do the work.
- Pinning a policy to a specific tool that gets retired six months later. Reference roles and behaviours rather than product names where you can; keep the approved-tools list separate so it can be updated without re-opening the policy.
- Treating the policy as a one-time deliverable. Vendor practices change, model capabilities change, regulator guidance changes. A static policy ages badly.
- Stopping at prohibition. A policy that only says "no" pushes adoption underground rather than preventing it.
Related reading
For the principles underlying the policy, see the legal AI ethics framework. For procurement decisions that feed the approved-tools list, see the AI vendor due-diligence checklist. For implementation that follows policy publication, see the AI implementation roadmap.