How Mneme works, precisely
Technical deep dives into the pipeline — retrieval mechanics, scoring, decision memory, and the deliberate architectural choices behind deterministic governance. No embeddings, no ML, no approximations in the enforcement path.
Loads project_memory.json → scores decisions by field weights → injects the top-K=3 decisions into the prompt → checks the model's output against the retrieved constraints → emits PASS / FAIL / WEAK_RETRIEVAL. Same query, same corpus, same result, every time.
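The steps above can be sketched end to end. This is a deliberately simplified stand-in, not Mneme's API: the scorer here is naive title-token overlap (the real one is field-weighted), and the record fields, function name, and enforcement check are all illustrative assumptions.

```python
def govern(decisions: list[dict], query: str, model_output: str) -> str:
    """Deterministic pipeline sketch: score -> top-3 -> check -> verdict."""
    q = set(query.lower().split())

    def overlap(d: dict) -> int:
        # stand-in scorer: shared tokens between query and title only
        return len(q & set(d["title"].lower().split()))

    # sort by score descending, then id ascending, so ties break the
    # same way on every run (same query, same corpus, same result)
    ranked = sorted(decisions, key=lambda d: (-overlap(d), d["id"]))
    top = [d for d in ranked[:3] if overlap(d) > 0]
    if not top:
        return "WEAK_RETRIEVAL"  # nothing relevant enough to enforce
    # enforcement sketch: every retrieved constraint must appear honored
    for d in top:
        if d.get("constraint") and d["constraint"].lower() not in model_output.lower():
            return "FAIL"
    return "PASS"
```

The verdict is a pure function of the inputs: no sampling, no model in the enforcement path.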
The full DecisionRetriever walkthrough — tokenization, field weights (title×3.0, tags×2.5, constraint×1.5, content×1.0), tag boosting, top-K=3 selection, tie-break determinism, and why there are no embeddings. Includes Layer 1 vs. Layer 2 metric distinctions and WEAK_RETRIEVAL semantics.
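A field-weighted scorer with the weights named above can be sketched as follows. The tokenizer, record fields, and the id-based tie-break are assumptions about the shape of the real DecisionRetriever, not a copy of it.

```python
import re

# weights from the walkthrough: title x3.0, tags x2.5, constraint x1.5, content x1.0
FIELD_WEIGHTS = {"title": 3.0, "tags": 2.5, "constraint": 1.5, "content": 1.0}

def tokenize(text: str) -> list[str]:
    # lowercase alphanumeric tokens; no stemming, no embeddings
    return re.findall(r"[a-z0-9]+", text.lower())

def score(decision: dict, query: str) -> float:
    """Sum of per-field weights for each query token found in that field."""
    q = tokenize(query)
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        value = decision.get(field, "")
        if isinstance(value, list):  # tags arrive as a list of strings
            value = " ".join(value)
        vocab = set(tokenize(value))
        total += weight * sum(1 for t in q if t in vocab)
    return total

def top_k(decisions: list[dict], query: str, k: int = 3) -> list[dict]:
    # score descending, then id ascending: equal scores always break
    # the same way, so retrieval is reproducible across runs
    ranked = sorted(decisions, key=lambda d: (-score(d, query), d["id"]))
    return [d for d in ranked[:k] if score(d, query) > 0]
```

Because scoring is exact token matching over typed fields, the ranking is auditable by hand, which is the property embeddings would give up.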
The three-tier model — documentation (prose, wikis, ADR bodies), prompt memory (CLAUDE.md, rules files, RAG injection), and decision memory (typed schema with scope, status, precedence, constraint fields). Why documentation retrieval ends in suggestion. Why decision memory enables enforcement.
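A typed decision record might look like the sketch below. The four schema fields named above (scope, status, precedence, constraint) are from the model description; every value and any extra key here is invented for illustration and the real schema may differ.

```python
# hypothetical decision-memory record; values are illustrative only
decision = {
    "id": "dec-014",            # assumed: stable identifier for tie-breaks
    "title": "Use PostgreSQL for persistence",
    "tags": ["database", "persistence"],
    "scope": "backend",         # where the decision applies
    "status": "active",         # as opposed to superseded or deprecated
    "precedence": 10,           # assumed: higher wins when decisions conflict
    "constraint": "All persistence code must target PostgreSQL",
    "content": "Rationale and context, in prose.",
}
```

Unlike a wiki page, every field is machine-checkable, which is what lets retrieval end in enforcement rather than suggestion.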
Architectural Governance, Deterministic Enforcement, Architectural Compiler, Decision Continuity, and seven more. Systems-level explanations of why each concept exists in AI-native software delivery.
Read the source
DecisionRetriever, MemoryStore, ContextBuilder, and the benchmark harness are all open source under the MIT license.