Last reviewed on May 12, 2026.
Most law firms can describe the AI tools they have bought. Far fewer can describe what those tools are returning. The gap is rarely intentional. It is what happens when procurement gets handed off to one team, adoption to another, and nobody owns the question of whether the investment is worth keeping.
This guide is a framework for closing that gap. It is deliberately concrete. The point is not to discuss ROI in the abstract but to give a firm a working method for answering the only question that matters at renewal time: is this tool still earning its seat licence?
Step 1 — Establish the baseline before you buy
The most common reason firms cannot calculate ROI is that they did not measure the workflow before introducing the tool. The vendor showed a slide of "average customer saves 40%," the firm bought it, and ten months later nobody can say what it replaced.
Before any pilot, write down the existing workflow in measurable terms.
- For a contract-review tool: contracts per month, average length, current review hours per contract, and the hourly rate of the people doing the review.
- For an eDiscovery platform: gigabytes ingested per matter, hours to first-pass review, and cost per gigabyte at current vendors.
- For a legal research tool: hours per week spent searching, the billable vs. non-billable split, and the cost of current research subscriptions being replaced.
This baseline is the only honest comparison point. Without it, every later number is a guess.
Step 2 — Categorise the savings
Once the tool is in use, savings show up in three different forms. Treat them separately because they have different reliability.
Direct cost reduction
Hours not billed to a matter because the work was done faster. This is the most defensible category because it shows up in the matter ledger. The number is straightforward: (baseline hours per task minus current hours per task) × volume × loaded cost of the person doing the work.
Direct reduction is usually smaller than vendors imply. Vendor case studies cite the best results, not the average. A realistic expectation for a mature AI workflow is a 20 to 50 percent reduction in time on the specific task the tool addresses, not on the matter as a whole.
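The direct-reduction formula above can be sketched as a small calculation. The figures below are illustrative placeholders, not drawn from any particular firm:

```python
def direct_savings(baseline_hours, current_hours, annual_volume, loaded_rate):
    """Direct cost reduction: hours saved per task x annual volume x loaded cost per hour."""
    return (baseline_hours - current_hours) * annual_volume * loaded_rate

# Illustrative figures only: a task dropping from 2.0 to 1.4 hours (a 30% cut,
# inside the realistic 20-50% band), run 400 times a year at a $200 loaded rate.
print(round(direct_savings(2.0, 1.4, 400, 200)))  # 48000
```

Note that the reduction applies to the specific task the tool addresses, so the volume figure should count instances of that task, not matters.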
Capacity redirection
Hours the tool frees up that get redirected to higher-value work instead of being saved. For most firms this is the bigger source of value. An associate who would have spent three hours summarising a deposition can instead spend those hours on the strategy memo a partner has been asking for.
Capacity redirection is harder to measure cleanly but worth attempting. Track what people do with the time they recovered. If the answer is "I'm not sure" or "more administrative work," that is a finding in itself.
Quality improvements and reduced rework
Fewer missed clauses caught at signing, fewer redactions missed on production, fewer citation errors caught at review. These rarely show up as a line item but they show up as fewer fire drills. Track them with simple counts: how many post-signing amendments did we need last year compared to this year? How many citation corrections did the client come back with?
Step 3 — Count the costs that get missed
Vendor pricing pages list a per-seat fee. Real cost includes more.
- Implementation services. Many platforms require professional services to integrate with the document management system or to build templates. These can rival the first year's licence fee.
- Training time. Every hour a lawyer spends learning the tool is an hour not billed. A reasonable estimate for a new platform is four to eight hours per user over the first month, plus ongoing top-up training as features change.
- Ongoing administration. Someone has to manage users, update templates, build new workflows, and handle vendor escalations. For a mid-sized firm that is often half an FTE.
- Underused seats. If the firm bought twenty licences and twelve are used regularly, the per-active-user cost is meaningfully higher than the headline price. Audit usage every quarter and right-size at renewal.
- Integration upkeep. Each integration with another system needs maintenance when either side changes. Budget for the occasional integration outage and the engineering work to fix it.
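The cost categories above roll up into one fully-loaded annual figure. A minimal sketch, assuming straight-line amortisation of implementation over three years; all figures are placeholders:

```python
def fully_loaded_annual_cost(licences, implementation, amortisation_years,
                             training_hours, users, loaded_rate, admin_cost):
    """Annual fully-loaded cost: licences + amortised implementation
    + training time for every user + ongoing administration."""
    return (licences
            + implementation / amortisation_years
            + training_hours * users * loaded_rate
            + admin_cost)

# Placeholder figures: $60k licences, $30k implementation over 3 years,
# 6 training hours for each of 15 users at $180/hour, $25k of admin time.
cost = fully_loaded_annual_cost(60_000, 30_000, 3, 6, 15, 180, 25_000)
print(round(cost))  # 111200
```

Underused seats do not appear as a separate line here: they show up when the same total is divided by active users rather than purchased seats.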
Step 4 — Build the ROI calculation
The basic formula is unsurprising:
Annual ROI = (Annual savings − Annual fully-loaded cost) ÷ Annual fully-loaded cost
Where annual savings combine the three categories from Step 2, and annual fully-loaded cost combines licences with every cost from Step 3.
Expressed this way, a positive number means the tool is paying for itself. A break-even tool produces zero. A negative number means the tool is consuming more than it returns, and either has to grow into its value or be retired.
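Expressed as code, the formula is a one-liner, and the sign of the result carries the interpretation above. The figures in the examples are illustrative:

```python
def annual_roi(annual_savings, annual_cost):
    """Annual ROI = (savings - fully-loaded cost) / fully-loaded cost."""
    return (annual_savings - annual_cost) / annual_cost

print(annual_roi(300_000, 150_000))  # 1.0  -> paying for itself (100% return)
print(annual_roi(150_000, 150_000))  # 0.0  -> break-even
print(annual_roi(100_000, 200_000))  # -0.5 -> consuming more than it returns
```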
Worked example
A 60-lawyer firm is one year into a contract-review platform. The pre-pilot baseline showed an average of 6 contracts per week per transactional associate, with each contract taking 2.5 hours of review. There are 10 transactional associates. Loaded cost per associate hour is $200.
Current state, one year in: review time per contract has dropped to 1.5 hours on average. Volume is unchanged. Direct savings: (2.5 − 1.5) × 6 × 50 weeks × 10 associates × $200 = $600,000 per year.
Capacity redirection: associates report that roughly half of the recovered time goes to higher-value drafting work that is billable, and half is absorbed by other administrative work. Counted conservatively, the billable half ($300,000 worth of recovered hours) is already captured in the direct-savings line, so adding it again would double-count. No separate capacity figure is included in this estimate.
Costs: $90,000 in annual licences, $40,000 in original implementation (amortised over three years ≈ $13,300/year), 8 hours of training × 10 associates × $200 = $16,000, and a quarter of an operations FTE at $120,000 fully loaded = $30,000. Total: roughly $149,300.
Annual ROI = (600,000 − 149,300) ÷ 149,300 = 3.02, or roughly 300%. The tool is comfortably paying for itself and would survive a renewal conversation.
If the same firm had only 3 transactional associates and 2 contracts per week each, direct savings would be $60,000 against the same $149,300 in costs. ROI would be negative. The tool is the same product; the question is whether the firm has the volume to justify it.
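The two scenarios can be reproduced directly. The figures are taken from the worked example above; the only assumption is that the $149,300 cost total carries over unchanged into the low-volume case:

```python
LOADED_RATE = 200      # $/hour, loaded cost per associate hour
ANNUAL_COST = 149_300  # licences + amortised implementation + training + admin
WEEKS = 50

def direct_savings(associates, contracts_per_week, hours_saved_per_contract):
    return hours_saved_per_contract * contracts_per_week * WEEKS * associates * LOADED_RATE

def roi(savings):
    return (savings - ANNUAL_COST) / ANNUAL_COST

# 10 associates, 6 contracts/week, review time down from 2.5 to 1.5 hours
big_firm = direct_savings(10, 6, 2.5 - 1.5)
print(big_firm, round(roi(big_firm), 2))      # 600000.0 3.02

# 3 associates, 2 contracts/week: same product, not enough volume
small_firm = direct_savings(3, 2, 2.5 - 1.5)
print(small_firm, round(roi(small_firm), 2))  # 60000.0 -0.6
```

The crossover is purely a function of volume: the break-even point sits where recovered hours times loaded rate equals the fully-loaded cost.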
Common mistakes
- Counting vendor-cited savings instead of measured savings. Vendor numbers are sales material. Measure your own.
- Ignoring training and admin time. These costs land on someone's calendar even if they never get invoiced.
- Conflating capacity redirection with cash savings. Capacity redirection only becomes cash if the recovered time gets billed. Track that separately.
- Comparing against the wrong baseline. The baseline is what the firm actually did before, not what the vendor thinks the industry average is.
- Setting up the calculation once and never revisiting. ROI changes as adoption matures and as the firm's volume mix changes. Re-run the numbers annually at minimum.
What to do if the numbers do not work
A negative ROI is not automatically a reason to cancel. The question is whether the trajectory is improving. In the first year of adoption, costs are highest (implementation, training, ramp) and savings are lowest (people are learning the tool). Many platforms only pay back from year two.
If by year two the numbers are still negative, look at three things in order. First, is the tool covering the right volume? Some products only work above a certain throughput. Second, is the firm using the features that drive the savings, or is it using ten percent of the platform? Third, is the workflow itself the limiter? Sometimes the tool is fine but the process upstream of it is generating low-quality inputs.
Cancellation is a legitimate outcome. There is no badge for keeping a platform the firm cannot make work. A clean exit at the end of year two, with the lessons documented, is better than a third year of negative ROI and accumulated workflow debt.
Closing checklist
- Did we measure the baseline workflow before buying?
- Are direct savings tracked separately from capacity redirection?
- Have we counted implementation, training, admin, and underused-seat costs?
- Is the ROI calculation re-run at least annually, by a specific named owner?
- Do we have a defined exit point if the trajectory does not improve?
Related reading
The AI Implementation Roadmap covers what to do before you measure ROI. The Legal AI Ethics Framework covers obligations that sit outside the cost calculation. For the tools themselves, browse the tools directory or use the comparison library for head-to-head reviews.