Docs.
Reference for the CLI, the governance violations Mneme catches, supported languages, and the benchmark methodology.
Reference surfaces
Mneme HQ's documentation is intentionally narrow. We document the surfaces that engineers actually need to operate the governance layer: the CLI you run, the violations Mneme catches, the languages Mneme governs today, and the methodology behind the benchmark. Everything else lives in the source repository.
CLI reference → Commands, flags, and exit codes for mneme: list_decisions, add_decision, test_query, check, cursor generate, and benchmark. GitHub Actions and pre-commit CI patterns.

Governance violations → Twelve worked examples across architecture, workflow, security, dependency, and platform governance. Each shows the rule, the AI's offending output, and the structured flag Mneme emits.

Supported languages → Canonical coverage matrix: Tier 1 (Python, TypeScript, JavaScript) and Tier 2 (Go, Java, C#, Rust). Capabilities, limitations, and roadmap per language.

Benchmark methodology → How the v1.1 governance benchmark is measured: layered retrieval and enforcement scoring, structured-output verification, pre-registered thresholds, anti-gaming protocol.

How to read these docs
The CLI reference is the operational entrypoint: every governance action — listing decisions, adding them, running a check, generating Cursor rules, executing the benchmark — flows through one of the documented commands. Pair it with the governance violations doc to understand what the check command is actually surfacing in practice.
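To make the pairing concrete, here is a minimal sketch of consuming one of the structured flags the check command emits. The field names (`rule`, `category`, `file`, `line`, `decision_id`) and the blocking policy are illustrative assumptions, not Mneme's documented schema; see the CLI reference and governance violations docs for the actual output shape.

```python
import json

# Hypothetical flag from `mneme check` -- field names are assumed
# for illustration, not taken from Mneme's documented schema.
raw = json.dumps({
    "rule": "no-direct-db-access",
    "category": "architecture",
    "file": "api/handlers.py",
    "line": 42,
    "decision_id": "ADR-007",
})

flag = json.loads(raw)

def is_blocking(flag, blocking_categories=("security", "architecture")):
    """Example policy: fail CI only for the categories a team treats as blocking."""
    return flag["category"] in blocking_categories

print(is_blocking(flag))  # True: "architecture" is blocking under this example policy
```

A wrapper like this is one way to route structured flags into an exit code for the CI patterns the CLI reference documents.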
The supported languages page is the canonical coverage matrix — Tier 1 (Python, TypeScript, JavaScript) gets native depth; Tier 2 (Go, Java, C#, Rust) gets repository-level governance through architectural and dependency rules. Mneme is language-agnostic by design; that page documents where the depth lives today and what is on the roadmap.
The benchmark methodology page is a specification, not a results dashboard. It describes layered retrieval and enforcement scoring, structured-output verification, pre-registered thresholds, and the anti-gaming protocol: methodology before metrics. For essays on the underlying architectural-governance category, see the Insights hub; the source repository covers everything the docs do not.
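The pre-registration idea can be sketched in a few lines: thresholds are fixed before any run, and a result passes only if every layer clears its bar. The layer names and threshold values below are invented for illustration; they are not Mneme's published numbers.

```python
# Illustrative only: thresholds fixed ahead of time, then applied to a run.
# Layer names and values are assumptions, not Mneme's published benchmark figures.
PREREGISTERED = {"retrieval": 0.80, "enforcement": 0.70}

def passes(scores, thresholds=PREREGISTERED):
    # A run passes only if every layer meets or beats its pre-registered bar.
    return all(scores[layer] >= bar for layer, bar in thresholds.items())

print(passes({"retrieval": 0.86, "enforcement": 0.74}))  # True
print(passes({"retrieval": 0.86, "enforcement": 0.65}))  # False
```

The point of fixing thresholds up front is anti-gaming: the pass/fail line cannot be moved after the scores are known.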