Here’s a question every law firm training director should sit with: when your AI tools aren’t working, how long does it take you to find out?
If the answer is "months," or "not sure," you are not alone. And according to a compelling new column by Olga V. Mack in Above the Law, that lag is one of the most expensive problems in legal AI adoption today.
Mack draws on empirical data from AI-supported classroom pilots at Product Law Hub, using an AI legal coach called Frankie, to make a case that should reframe how firms think about training entirely: the classroom doesn’t trail practice. It predicts it.
Why Lawyers Keep Using Tools That Don’t Work (and Learners Don’t)
In practice, lawyers are expert adapters. When a tool is clunky, they find workarounds. When they stop trusting it, they keep using it anyway, because switching feels riskier than tolerating the problem. The result: adoption metrics look fine while actual utility quietly collapses. No one raises a hand.
Learners in a training environment don’t have those constraints. No billable hours. No client waiting. If a tool doesn’t support their thinking, they disengage — and they say so. Mack’s pilots captured this in real time: when the AI behaved poorly, sessions shortened, follow-up questions dropped off, and feedback turned critical. The signal was immediate and honest.
That same tool, deployed in a firm without structured training, might have limped along for a year before anyone admitted it wasn’t working.
Disengagement Is a Training Signal, Not a People Problem
The most telling metric in Mack’s pilot data wasn’t wrong answers. It was disengagement. Learners stopping mid-session, skipping follow-up interactions, checking out without completing the work.
This is a crucial reframe for anyone designing legal technology training. When engagement drops, the instinct is often to question the learner — are they motivated? Do they see the value? But Mack’s data points in a different direction: disengagement is a diagnostic signal about the tool and the training design, not a referendum on the people using them.
Fast feedback loops — the kind that only structured training environments can generate — surface these failures in days, not quarters. That’s an enormous strategic advantage for firms willing to treat training seriously.
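To make the idea concrete, here is a minimal sketch of how a training team might flag the disengagement signals Mack describes (shortened sessions, fewer follow-up questions, incomplete work) from session logs. All field names, thresholds, and data here are hypothetical, for illustration only; they are not drawn from Mack's pilots or any particular platform.

```python
# Illustrative sketch: flagging disengagement signals in AI training session logs.
# Field names and thresholds are hypothetical, chosen only to show the pattern.

from dataclasses import dataclass

@dataclass
class Session:
    learner: str
    minutes: float   # how long the learner stayed in the session
    followups: int   # follow-up questions the learner asked
    completed: bool  # whether the learner finished the exercise

def disengagement_flags(sessions, baseline_minutes=30.0):
    """Return (learner, signals) pairs for sessions showing warning signs:
    a sharply shortened session, no follow-up questions, or incomplete work."""
    flags = []
    for s in sessions:
        signals = []
        if s.minutes < 0.5 * baseline_minutes:
            signals.append("short session")
        if s.followups == 0:
            signals.append("no follow-up questions")
        if not s.completed:
            signals.append("did not complete")
        if signals:
            flags.append((s.learner, signals))
    return flags

# Hypothetical log: one disengaged learner, one healthy session.
logs = [
    Session("A", 12.0, 0, False),
    Session("B", 35.0, 4, True),
]
print(disengagement_flags(logs))
```

The point of a sketch like this is not the code itself but the cadence: reviewing such flags weekly turns disengagement into an early diagnostic rather than a surprise discovered a year into a rollout.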
"The future of legal AI will be shaped by those who are willing to listen early, before the warning signs become too expensive to ignore."
Olga V. Mack, Above the Law, March 2026
What Great Legal AI Training Actually Does
Mack’s findings align with what we see at Savvy Training every day: the firms that get the most out of AI tools are the ones that invest in how people learn to use them, not just in whether they have seen them demonstrated.
Effective AI training in a legal context does four things:
- It builds judgment, not just familiarity. AI tools should augment legal reasoning, not replace it. Training that only covers feature navigation misses the point.
- It treats feedback as a design input. How learners respond to AI — what confuses them, what builds confidence, what causes them to disengage — is valuable data that should shape both the training and the tool.
- It surfaces problems early. A structured training environment is the cheapest place to discover that an AI behaves poorly under real-world conditions. Don’t skip that stage.
- It closes the loop quickly. Rapid feedback cycles (weeks, not quarters) allow firms to adjust before poor AI experiences harden into institutional skepticism.
Savvy Training has built its practice around the idea that technology training is not a soft skill. It’s a strategic function. Mack’s research makes that case in empirical terms. Firms that treat the training environment as a diagnostic and design tool will adopt AI faster, more effectively, and with fewer expensive surprises.