Building an AI Policy for a Law Firm

What goes in a usable, enforceable law-firm AI policy — and what to leave out so the document actually gets read.

Last reviewed on May 12, 2026.

The first AI policy at most firms gets written under pressure — usually after someone has used a consumer chatbot in a way the partners did not expect, and management decides it is time to have something in writing. The result tends to be a document that prohibits everything, gets pinned to the intranet, and is never read again.

A usable AI policy is different. It is short, opinionated, and tied to specific approved tools. It tells lawyers what they can do, not just what they cannot. This guide is a working framework for drafting that kind of policy. It assumes the firm has already made (or is about to make) at least one AI tool decision; the policy is what surrounds that decision.

This is operational guidance and not a substitute for advice from counsel in the relevant jurisdiction. Professional-conduct rules differ by regulator. The legal AI ethics framework covers the underlying principles.

Section 1 — Scope and who the policy covers

State plainly who the policy applies to. Most firms cover partners, associates, paralegals, business-services staff, and contract attorneys. Some extend it to outside counsel doing work the firm has staffed. Each group has different obligations under the firm's existing supervision and confidentiality rules, and the policy should reference those.

Be specific about what is in scope and what is not. A typical scope statement covers: generative AI assistants, AI features inside existing legal software (research, contract review, eDiscovery), and any external chatbot or model accessed through a personal account. It excludes well-understood predictive features that have been part of legal software for years (citation suggestions in research, auto-categorisation in document management) unless those features change materially.

Section 2 — Approved and prohibited tools

This is the section that matters most in practice. Lawyers want a list. Maintain one.

Approved tools

List the AI tools the firm has reviewed and contracted with, with a short note on what each is approved for. For example, "Tool X is approved for first-pass contract review on transactional matters. Outputs must be reviewed by a qualified lawyer before they leave the firm." A reader should be able to skim the list and know what they can use right now without escalating.

Prohibited tools

List the consumer-grade chatbots and free model APIs that staff are not permitted to use with client information. Use named products, not generic categories — "Do not use ChatGPT, Claude.ai, or Gemini consumer accounts with client matter information" is enforceable in a way that "do not use unauthorised AI tools" is not.

The grey-area path

For tools that have not been reviewed yet, give a clear process: who to email, what information to provide, and how long an answer takes. If approval is not built into the policy, lawyers default to either ignoring the policy or never trying anything new.

Section 3 — Client confidentiality

This is the hardest section of the policy to codify, because the answers depend on the firm's data agreements with each AI vendor. At a minimum, the policy should state which categories of client information may be entered into which approved tools, under which vendor agreement, and require that anything not covered by an agreement is escalated before use.

Section 4 — Supervision and human review

Lawyers are responsible for AI-assisted work product to the same standard as work they drafted themselves. The policy should state this explicitly and describe what supervision looks like in practice: review of AI outputs by a qualified lawyer before they leave the firm, verification of any cited authority against the primary source, and the same supervisory responsibility for work delegated to a tool as for work delegated to a junior lawyer.

Section 5 — Disclosure to clients

This section depends on jurisdiction and on each client's own expectations. The policy should at minimum require that lawyers check each client's engagement terms and outside-counsel guidelines for AI provisions before using an approved tool on the matter, and that any client question about the firm's AI use is answered accurately.

Some firms have adopted a standing position of disclosing material AI use to clients on every matter; others disclose on request. Both are defensible. The policy should state which the firm has chosen.

Section 6 — Billing

Where AI tools reduce the hours spent on a task, the firm needs a position on whether those savings flow through to the client. The two common positions are: bill only the time actually spent, so efficiency gains pass to the client; or move the affected work to fixed or value-based fees that price the output rather than the hours.

Either is defensible. The policy should state the default position and the circumstances in which it changes (for example, where a client's outside-counsel guidelines specify).

Section 7 — Training

Lawyers cannot supervise tools they do not understand. The policy should require that anyone using an approved AI tool has completed the firm's training for that tool, and that the training is refreshed when material features change. Build a record of training completion; it is part of the firm's supervision posture.

Section 8 — Incidents and reporting

Define what an "AI incident" is and how it gets reported. Examples that should trigger reporting: client confidential information entered into an unapproved tool; a material AI-generated error, such as a fabricated citation, discovered in work product after it left the firm; and a vendor notification of a security incident or a material change in data-handling practices.

The reporting path should be short and well known. Most firms route this through the general counsel's office or the equivalent.

Section 9 — Review cadence

State when the policy gets reviewed. Annually is common; six-monthly is reasonable in the first two years of AI adoption when products and vendor practices are changing quickly. Name the owner who is accountable for the review.

What to leave out

Policies fail when they try to anticipate every situation. Resist the temptation to draft prohibitions for hypotheticals nobody has encountered. Resist the temptation to copy long sections from a vendor's marketing material about responsible AI. Resist the temptation to make the policy a substitute for a training programme — a policy is a baseline, not a tutorial.

A working AI policy fits on a few pages and points outwards to the specific procurement, training, and ethics documents that do the heavy lifting. If the policy gets longer than that, it is probably trying to be a textbook.

Common mistakes

The recurring failures are predictable: a policy that prohibits everything and approves nothing, so it is ignored in practice; no grey-area approval path, so new tools are never reviewed; long passages lifted from vendor material about responsible AI; and an approved-tools list that goes stale between annual reviews.

Related reading

For the principles underlying the policy, see the legal AI ethics framework. For procurement decisions that feed the approved-tools list, see the AI vendor due-diligence checklist. For implementation that follows policy publication, see the AI implementation roadmap.