The risk in legal AI often appears after the model has finished. A draft summary sitting in a workspace is one thing. A clause amendment sent to a client is another. The system needs to understand that difference.
Last-mile liability is the risk that arises when output is relied on, sent, inserted into a live document or allowed to trigger action. The problem is not only a wrong generation. It is wrong use without evidence, review or authority.
Execution gates are the practical response: controls at the point of use, tied to destination and reliance, not only to the prompt that produced the text.
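To make the idea concrete, here is a minimal sketch of such a gate. All names here (Destination, OutputRecord, execution_gate) and the specific policy table are hypothetical illustrations, not a reference implementation: the point is that the requirements key off where the output is going and how it will be relied on, not off the prompt.

```python
from dataclasses import dataclass
from enum import Enum

class Destination(Enum):
    WORKSPACE_DRAFT = "workspace_draft"   # internal draft; low reliance
    LIVE_DOCUMENT = "live_document"       # inserted into a live matter file
    CLIENT_SEND = "client_send"           # leaves the firm; highest reliance

@dataclass
class OutputRecord:
    text: str
    reviewed_by: str | None = None    # human reviewer, if any
    sources_cited: bool = False       # evidence attached to the draft
    authorised_by: str | None = None  # who holds authority for this use

# Hypothetical gate policy: obligations scale with destination and reliance,
# not with how the text was generated.
REQUIREMENTS = {
    Destination.WORKSPACE_DRAFT: set(),
    Destination.LIVE_DOCUMENT: {"review"},
    Destination.CLIENT_SEND: {"review", "evidence", "authority"},
}

def execution_gate(record: OutputRecord, destination: Destination) -> list[str]:
    """Return unmet requirements; an empty list means the action may proceed."""
    required = REQUIREMENTS[destination]
    missing = []
    if "review" in required and record.reviewed_by is None:
        missing.append("human review")
    if "evidence" in required and not record.sources_cited:
        missing.append("cited evidence")
    if "authority" in required and record.authorised_by is None:
        missing.append("sign-off authority")
    return missing

# The same draft passes as a workspace note but is blocked from being sent.
draft = OutputRecord(text="Proposed amendment to clause 14.2...")
assert execution_gate(draft, Destination.WORKSPACE_DRAFT) == []
assert execution_gate(draft, Destination.CLIENT_SEND) == [
    "human review", "cited evidence", "sign-off authority"
]
```

The design choice worth noticing is that the gate evaluates the act of use, so an identical piece of text faces different controls depending on whether it stays in a workspace or is sent to a client.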
Framing governance as “after-the-fact compliance” misses where professional accountability actually attaches: at the moment of use.