The language of AI-native governance
Infrastructure categories are defined by the teams building them. These are the concepts that shape how Mneme understands the problem of governing AI coding agents — and why most current approaches don't solve it.
Most of the terms on this page don't have industry-standard definitions yet. Architectural governance, governance before generation, decision continuity — these are concepts that the AI coding ecosystem is actively working out. The definitions here are Mneme's positions, not neutral encyclopedia entries.
Each concept page explains not just what the term means, but why the structural problem it names exists in AI-native software delivery — and why the solutions that feel intuitive (documentation, prompt engineering, code review) don't actually solve it.
If you're evaluating architectural governance tooling, these concepts are the vocabulary for comparing approaches. If you're building in this space, they're the framework for understanding where different systems sit in the stack.
→The system that encodes team decisions as machine-evaluable constraints and enforces them at the point of AI code generation — before review, before drift compounds.
→The principle that architectural constraints must be evaluated before the AI writes code. The enforcement point is the strategic variable: moving it upstream is what keeps violations from ever entering the codebase, instead of catching them after drift has compounded.
→A software delivery lifecycle designed from first principles for AI agents as primary code generators. The rate-limiting step has flipped — generation is no longer the bottleneck.
→The compound degradation in codebase coherence caused by AI agents producing code inconsistent with established decisions — uncorrected across sessions, agents, and time.
→Pre-registered, machine-evaluable assertions that define what a governance check must prove. The structural difference between measurable governance and governance you can only hope for.
→The property that architectural decisions remain enforced across agents, sessions, and time — regardless of which agent acts or what context it inherited. Prompt memory cannot provide it.
→The pipeline that converts documentation-form decisions into machine-evaluable constraint records. Parse → validate → resolve → emit → enforce. Compilation is the step that closes the documentation-to-enforcement gap.
→A governance check that produces the same verdict for the same inputs, always. Not a performance property — the precondition for governance auditability and improvement.
→A software development paradigm where AI agents are the primary code generators. When agents are first-class actors, architectural control stops being a cultural norm and becomes an operational engineering problem.
→The dedicated engineering platform layer that encodes, distributes, versions, and enforces architectural decisions across AI agents. Infrastructure — not process, not convention, not tooling bolt-on.
→The five-stage pipeline, DecisionRetriever field weights, top-K=3 selection, Layer 1/Layer 2 metrics, and why there are no embeddings.
→The three-tier model (documentation / prompt memory / decision memory), schema comparison, precedence resolution, and ADR import workflow.
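The compile-then-enforce flow these concepts describe can be sketched in a few lines. This is a hypothetical illustration, not Mneme's actual API or schema: the record fields, function names, and the import-ban rule are all assumed for the example. It shows the shape of the idea — a documentation-form decision compiled into a machine-evaluable record, then evaluated deterministically, so the same inputs always produce the same verdict.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConstraintRecord:
    """A machine-evaluable constraint emitted by the compile step (illustrative shape)."""
    rule_id: str
    forbidden_import: str   # module that generated code must not import
    scope: str              # path prefix the rule applies to

def compile_decision(decision: dict) -> ConstraintRecord:
    """Parse -> validate -> resolve -> emit. Enforcement happens separately, at generation time."""
    # parse/validate: a decision without these fields is not compilable
    for key in ("id", "forbidden_import", "scope"):
        if key not in decision:
            raise ValueError(f"decision missing field: {key}")
    # resolve/emit: normalize into an immutable, versionable record
    return ConstraintRecord(decision["id"], decision["forbidden_import"], decision["scope"])

def enforce(record: ConstraintRecord, file_path: str, source: str) -> bool:
    """Deterministic check: same record, path, and source always yield the same verdict."""
    if not file_path.startswith(record.scope):
        return True  # out of scope, constraint does not apply
    return f"import {record.forbidden_import}" not in source

rule = compile_decision(
    {"id": "ADR-012", "forbidden_import": "requests", "scope": "services/payments/"}
)
print(enforce(rule, "services/payments/client.py", "import requests\n"))  # False: violation
print(enforce(rule, "services/search/client.py", "import requests\n"))    # True: out of scope
```

Because the verdict is a pure function of the record and the code, it can be audited, replayed, and versioned — the property the deterministic-check and decision-continuity concepts above depend on.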
See these concepts in practice
The benchmark, the demos, and the open-source codebase are all running versions of the governance infrastructure described here.