Governance continuity across multiple actors.
As AI execution becomes distributed and persistent, the question is no longer "can one agent be guided?" It is "do our architectural invariants stay coherent across actors, sessions, and retries?" The point is not multi-agent runtime sophistication. The point is that the invariants live outside any single actor — in a governance layer that survives them.
What this demo is — and is not
This page does not claim a multi-agent runtime, an orchestration platform, or an autonomous coordination framework. Those are different categories. They are evolving rapidly, and most of them are out of Mneme's scope.
What this page does demonstrate is the shape of the problem those frameworks expose: once multiple actors touch the same codebase against shared architectural invariants, the governance layer becomes the coordination layer. If the invariants are encoded somewhere the actors can answer to, the system stays coherent. If they aren't, every actor reasons against its own context and drift becomes inevitable.
The invariants the actors share
Three actors will touch the same Python service over the course of one workflow. They share exactly one thing: the compiled decision corpus emitted by the ADR compiler.
- ADR-001 JSON storage only. No external database. No Redis. Persistence stays in-process.
- ADR-003 No ORM in v1. Direct module access. Migrations are deferred until a usage threshold is crossed.
- ADR-004 Repository pattern. All persistence flows through a Repository abstraction. No leakage of storage primitives into service layers.
None of the three actors has any memory of the others. They run in separate sessions. The invariants do not live in any of them — they live in the corpus the governance layer enforces against.
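To make the shape of that shared artifact concrete, here is a minimal sketch of a compiled decision corpus and a check against it. All field names and the token-matching rule are illustrative assumptions — Mneme's actual compiled format is produced by its ADR compiler and is richer than this:

```python
# Hypothetical shape of a compiled decision corpus entry.
# Field names and the crude token check are illustrative, not Mneme's format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Invariant:
    adr_id: str       # e.g. "ADR-001"
    rule: str         # human-readable statement of the invariant
    forbidden: tuple  # tokens whose presence in a diff violates the rule

CORPUS = (
    Invariant("ADR-001", "JSON storage only; persistence stays in-process",
              forbidden=("redis", "postgres", "mysql")),
    Invariant("ADR-003", "No ORM in v1; direct module access",
              forbidden=("sqlalchemy", "django.db")),
    Invariant("ADR-004", "All persistence flows through the Repository abstraction",
              forbidden=("open(",)),  # crude proxy for raw storage primitives
)

def violations(diff_text: str) -> list:
    """Return the ADR ids a proposed diff would violate."""
    lowered = diff_text.lower()
    return [inv.adr_id for inv in CORPUS
            if any(tok in lowered for tok in inv.forbidden)]
```

The point of the sketch is only that the corpus is a single evaluable artifact: `violations("import redis")` surfaces ADR-001 regardless of which actor produced the diff.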
Without the governance layer
For contrast, run the scenario without Mneme. Each actor pattern-matches against training data, not against the invariants. The trajectory is unambiguous:
- → Actor A introduces a Redis dependency to "speed up lookups."
- → Actor B sees Redis present and builds session storage on top of it.
- → Actor C generates infra YAML for the new Redis service.
- → All three changes are locally reasonable.
ADR-001 is silently dead.
The full unconstrained walkthrough is on the drift prevention page — that flagship covers the single-codebase failure mode end-to-end. This page extends the same logic to the case where the three actors are not the same agent.
With the governance layer
Same three actors. Same three tasks. The only difference is that every proposed change is evaluated against the shared invariant set before it lands, and the verdict trace persists across actors.
Proposes Redis · blocked by ADR-001
The agent's first draft adds redis-py. The pre-generation hook scores the proposal against the corpus, surfaces ADR-001, and reroutes the agent to extending the JSON cache module that already exists. The diff that reaches CI uses the in-process abstraction.
The corpus is unchanged. The verdict is appended to a structured trace that the next actor will inherit.
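The control flow of that step can be sketched as follows. The function name, verdict fields, and blocking rule are all assumptions for illustration, not Mneme's real hook interface:

```python
# Sketch of the pre-generation hook: score the proposal, block on a hit,
# and append the verdict to the trace the next actor inherits.
# Names and fields are hypothetical.
import json
import time

TRACE = []  # in the real pipeline this persists across sessions

def pre_generation_hook(actor: str, proposal: str) -> dict:
    blocked_by = "ADR-001" if "redis" in proposal.lower() else None
    verdict = {
        "actor": actor,
        "verdict": "FAIL" if blocked_by else "PASS",
        "adr": blocked_by,
        "ts": time.time(),
    }
    TRACE.append(verdict)  # append-only: the corpus itself is unchanged
    return verdict

v = pre_generation_hook("actor-a", "add redis-py for faster lookups")
print(json.dumps({k: v[k] for k in ("actor", "verdict", "adr")}))
# -> {"actor": "actor-a", "verdict": "FAIL", "adr": "ADR-001"}
```

The trace is append-only by design: the invariant set stays fixed while the verdict history grows, which is what lets Actor B inherit context without sharing a session.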
Builds on the correct primitive · PASS by construction
Actor B starts from a clean session. It does not "remember" what Actor A did. But the codebase it reads does — the JSON cache abstraction is now the obvious primitive to extend. The hook surfaces ADR-001 and ADR-004 alongside the prompt. Output uses the Repository abstraction over JSON.
Continuity here is not behavioral — it is structural. Actor B does not need to know what Actor A did. The corpus carries the invariant, and the codebase carries the compliant primitive.
Refactors across A and B's work · conflict detector flags ambiguity
Actor C is a remediation pass. It sees both cache layers and proposes consolidating them. The conflict detector notices the proposed refactor would change the Repository contract that ADR-004 pins down, and emits a structured WARN. The verdict is not a rejection — it is an explicit decision point that requires either ADR amendment or a tracked override.
This is the case the multi-actor demo exists to illustrate. The refactor is not wrong — it is architecturally consequential. The governance layer's job is to make that visible to a human, not to block silently.
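A rough sketch of the WARN path, to show why it is a decision point rather than a rejection. The field names and the two resolution options are assumptions drawn from the description above:

```python
# Hypothetical shape of a WARN verdict: neither PASS nor FAIL, but an
# explicit decision point that a human must resolve.
def evaluate_refactor(touches_repository_contract: bool) -> dict:
    if not touches_repository_contract:
        return {"verdict": "PASS"}
    return {
        "verdict": "WARN",
        "adr": "ADR-004",
        "reason": "proposed refactor changes the Repository contract",
        "resolutions": ["amend ADR-004", "record tracked override"],
    }
```

A FAIL says "this violates the corpus"; a WARN says "this changes what the corpus pins down" — only the second one needs a human in the loop.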
What stayed coherent. Three actors. Zero shared memory. One shared corpus. The invariants held because they live outside the actors — in the artifact the governance layer evaluates against.
Why this matters at the limit
The market is moving toward parallel, persistent, semi-autonomous execution. Claude flows, OpenAI agents, autonomous retries, scheduled coding tasks — the shape varies, but the underlying property is the same: multiple actors touching the same codebase against constraints no single actor reliably remembers.
When this becomes routine, the bottleneck stops being "can a single agent produce good code?" The bottleneck becomes "do the constraints stay coherent across the actors?" Prompt files cannot answer that question. Vector stores cannot answer that question. A precedence-aware decision corpus with deterministic enforcement can.
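"Precedence-aware" is doing real work in that sentence. Here is a toy illustration of the property: when two decisions touch the same topic, resolution is deterministic rather than similarity-based. The resolution rule (highest ADR number wins) and the hypothetical ADR-007 are assumptions for illustration; Mneme's actual precedence semantics may differ:

```python
# Toy deterministic precedence resolution over a decision history.
# The "later ADR supersedes earlier" rule and ADR-007 are hypothetical.
def effective_rule(decisions: list) -> dict:
    """decisions: list of (adr_id, topic, rule); returns topic -> (adr_id, rule)."""
    resolved = {}
    for adr_id, topic, rule in sorted(decisions, key=lambda d: d[0]):
        resolved[topic] = (adr_id, rule)  # later ADRs supersede earlier ones
    return resolved

history = [
    ("ADR-001", "storage", "JSON only"),
    ("ADR-007", "storage", "JSON plus append-only event log"),  # hypothetical
]
print(effective_rule(history)["storage"][0])  # -> ADR-007
```

A vector store retrieving "relevant" ADRs by similarity cannot guarantee this: two retrievals can surface contradictory decisions with no rule for which one binds. A precedence-aware corpus always yields the same effective rule for the same history.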
The runnable example
The repo ships a Python script that simulates the three-actor workflow against the real Mneme pipeline. The "actors" are scripted diff producers — the script does not call any LLM. The enforcement, retrieval, and conflict-detection steps are the actual Mneme code that drives the editor hook and CI gate.
git clone https://github.com/TheoV823/mneme
cd mneme/examples/multi-agent-governance
python run.py
Output prints the without-governance trajectory first (drift accumulates across actors), then the with-governance trajectory (invariants hold). The verdict trace is the same structured PASS / WARN / FAIL format as mneme check.
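The contrast between the two printed trajectories reduces to one branch. This toy reconstruction is not the shipped script — the proposal strings and the reroute rule are illustrative — but it captures the property the output demonstrates:

```python
# Toy reconstruction of the two trajectories: without enforcement each actor
# builds on whatever the previous one left behind; with enforcement the
# violating draft is rerouted to the compliant primitive. Names are illustrative.
def run_workflow(enforce: bool) -> bool:
    proposals = ["add redis cache", "build session storage", "emit infra yaml"]
    codebase = set()
    for p in proposals:
        if enforce and "redis" in p:
            p = "extend json cache"  # hook reroutes the violating draft
        codebase.add(p)
    return "redis" in " ".join(codebase)  # True means ADR-001 has drifted

drifted = run_workflow(enforce=False)  # True: invariant silently dead
held = run_workflow(enforce=True)      # False: invariant held
print(drifted, held)  # -> True False
```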
Honest framing. This is a forward-looking demo. The category insight — governance becomes the coordination layer as actors multiply — is real today. The full operational version, with persistent traces across long-running workflows, is on the roadmap. The runnable example proves the invariant-persistence property using the existing pipeline.
Where the rest of the picture lives
- → The ADR compiler — how the invariant set the actors share is produced.
- → Architectural drift prevention — the single-actor version of the same problem, end-to-end.
- → Architectural governance across heterogeneous AI coding agents — the longer-form argument for why the layer must live outside any one tool.
- → Governance Benchmark v1.1 — the deterministic scenario suite this enforcement is measured against.