Orchestrate.legal

Reference model

A shared vocabulary for structuring legal AI around work: a practical reference layer for design, governance and review.

Working manifesto

A compact set of design commitments that anchor the model.

  • Legal AI should be designed around coordinated work, not isolated answers.
  • The firm should own the routing layer between legal intent, model execution and human judgement.
  • Controls should sit at the point of reliance, not only at the point of generation.
  • Matter state should be treated as a live operational record, not a folder of documents.
  • Evaluation should measure rework, variance and degradation over time, not just first-pass accuracy.

What is orchestration?

Orchestration is the layer between legal intent and execution. It decides what work is being requested, what context is required, which path (model, tool, human, policy check) should run, whether the output may proceed, what is logged and how matter state updates. It is not a synonym for automation. It implies sequencing, coordination, judgement and control.

Primary flow

Demos, playbooks and writing all use this backbone; treat it as the shared spine for design and policy discussions.

  1. A legal task is requested.
  2. The system identifies the type of work.
  3. The system gathers the right context.
  4. A routing decision is made.
  5. The task is executed by a model, tool or human.
  6. The output is checked against the intended use.
  7. An execution gate determines whether it can proceed.
  8. The matter state is updated.
  9. The audit trail records the decision.
  10. Monitoring continues where needed.
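The steps above can be sketched as a single pass. This is a minimal illustration, not an implementation: every name here (classify, gather_context, route, gate) is hypothetical, and the policy logic is deliberately simplistic.

```python
# Illustrative sketch of the primary flow; all names are hypothetical.

def classify(task):                      # 2. identify the type of work
    return task["kind"]

def gather_context(task):                # 3. gather the right context
    return {"sensitivity": task.get("sensitivity", "routine")}

def route(kind, ctx):                    # 4. routing decision
    return "human" if ctx["sensitivity"] == "privileged" else "model"

def execute(path, task):                 # 5. model, tool or human runs
    return f"{path} draft for {task['kind']}"

def gate(output, ctx, destination):      # 6-7. check against intended use
    return destination != "client" or ctx["sensitivity"] != "privileged"

def run(task, matter_state, audit_trail, destination="internal"):
    kind = classify(task)
    ctx = gather_context(task)
    path = route(kind, ctx)
    output = execute(path, task)
    approved = gate(output, ctx, destination)
    if approved:
        matter_state["open_tasks"].remove(kind)          # 8. update matter state
    audit_trail.append({"task": kind, "path": path,      # 9. record the decision
                        "approved": approved})
    return output if approved else None
```

Note that the gate runs at the destination, not at generation: the same draft may proceed internally and be blocked on its way to a client.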

Concepts

Plain language definitions you can reuse in architecture reviews and policy drafts.

Task
The unit of work: an owner, inputs and a definition of done. Tasks chain; documents attach to them.
Context
What the model and reviewer may see: matter phase, sensitivity, prior decisions and retrieved sources.
Router
Policy that maps task attributes to path: model tier, environment, retrieval, human steps, cost controls and governance band.
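One way to make a routing policy inspectable is an ordered list of predicate/decision pairs with a conservative default. The attribute names and governance bands below are illustrative assumptions, not a prescribed schema.

```python
# Minimal first-match routing policy; attributes and bands are illustrative.
ROUTING_POLICY = [
    (lambda t: t["sensitivity"] == "privileged",
     {"path": "human", "model_tier": None, "governance_band": "restricted"}),
    (lambda t: t["kind"] == "clause_extraction",
     {"path": "model", "model_tier": "standard", "governance_band": "routine"}),
]
DEFAULT_ROUTE = {"path": "human_review", "model_tier": None,
                 "governance_band": "elevated"}

def route(task):
    """Return the first matching decision, else the conservative default."""
    for predicate, decision in ROUTING_POLICY:
        if predicate(task):
            return decision
    return DEFAULT_ROUTE
```

Keeping the policy as data rather than scattered conditionals means it can be versioned, diffed and referenced from the audit trail.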
Execution gate
A required pause: review, approval, block or audit before an output reaches a destination. Gates belong at the point of use.
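A gate keyed on destination rather than on generation might look like the following sketch; the destinations and outcomes are assumed labels, not a standard vocabulary.

```python
# Hypothetical gate at the point of use: the same output can pass
# internally and be blocked or escalated on its way out.
def execution_gate(destination, reviewed, sensitivity):
    if destination == "external" and not reviewed:
        return ("block", "external delivery requires human review")
    if sensitivity == "privileged" and destination != "matter_team":
        return ("escalate", "privilege check required")
    return ("proceed", None)
```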
Matter state
The live picture of phase, open tasks, pending decisions, obligations and risks, not a static file list.
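Treated as a live record, matter state can answer operational questions directly, such as which obligations are overdue. The field names below are assumptions for illustration.

```python
from dataclasses import dataclass, field
import datetime

# Illustrative matter state as a live record, not a file list.
@dataclass
class MatterState:
    phase: str
    open_tasks: list = field(default_factory=list)
    obligations: dict = field(default_factory=dict)   # due date -> description
    risks: list = field(default_factory=list)

    def overdue(self, today):
        """Obligation dates that have passed without being cleared."""
        return sorted(d for d in self.obligations if d < today)
```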
Audit trail
Who ran what, on which policy version, with which inputs, evidence and reviewer actions attached.
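One possible shape for a single audit entry, capturing the elements named above; the keys are illustrative, and in practice inputs might be hashed rather than stored inline.

```python
import datetime

# Hypothetical audit trail entry; key names are assumptions.
def audit_record(actor, task, policy_version, inputs, reviewer_action):
    return {
        "actor": actor,
        "task": task,
        "policy_version": policy_version,   # which routing policy applied
        "inputs": inputs,                   # or a content hash of them
        "reviewer_action": reviewer_action,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```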
Monitoring
Signals after deployment: drift, failure modes, rework rates and escalation paths.
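Two of these signals, rework rate and degradation against a baseline, reduce to simple arithmetic; this sketch assumes a per-output "revised" flag and an arbitrary tolerance, both illustrative.

```python
# Hypothetical monitoring signals over completed outputs.
def rework_rate(outputs):
    """Fraction of outputs revised after first delivery."""
    if not outputs:
        return 0.0
    return sum(1 for o in outputs if o["revised"]) / len(outputs)

def degraded(baseline_rate, current_rate, tolerance=0.1):
    """Flag when the current rework rate worsens beyond tolerance."""
    return current_rate - baseline_rate > tolerance
```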

Open questions

  • How should routing policies change across matter type, client terms and jurisdiction?
  • What evidence is enough before AI-generated work can leave the firm?
  • How can firms use multiple AI tools without fragmenting audit, policy and accountability?
  • When should matter state trigger automation, and when should it only guide human judgement?

See the Routing Simulator, Matter State Viewer and Execution Gates demos for concrete examples.