The position

  • Copilot is a generator: it produces code suggestions inside VS Code and JetBrains IDEs, served by GitHub's hosted models
  • Mneme HQ is a governance layer: it records architectural decisions in a structured corpus (.mneme/project_memory.json) and enforces them where hooks exist and at the CI gate
  • The two operate at different layers — Copilot at the editor, Mneme above it
  • Teams using both get one authoritative answer to "What should our AI tools do here?" instead of per-tool prompt drift
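
The corpus schema is not spelled out in this document; as an illustration only, a pair of recorded decisions in .mneme/project_memory.json might look like the following (field names here are hypothetical, not Mneme's actual schema):

```json
{
  "decisions": [
    {
      "id": "DB-001",
      "rule": "Use Postgres, not Mongo",
      "scope": "persistence"
    },
    {
      "id": "DEP-002",
      "rule": "No openai package in this repo",
      "scope": "dependencies"
    }
  ]
}
```

The point of the structure is that it is both human-readable and machine-parseable: every downstream tool reads the same records.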

Why this matters

Most engineering orgs are no longer one-tool shops. An engineer might pair with Claude Code in the terminal in the morning, take Copilot suggestions inside VS Code after lunch, and review a Cursor-authored PR in the evening. Each tool ships with its own prompt-engineering surface — custom instructions, rules files, repo-level guidance. Without an upstream source of truth, each surface becomes another place for architectural decisions to drift.

The corpus is what survives. Tools change every quarter; architectural decisions like "we use Postgres, not Mongo" or "no openai package in this repo" need to outlast any single vendor's surface. Mneme HQ keeps those decisions in one versioned file your team owns — readable by humans, parseable by every coding tool, enforceable at CI.

What Mneme does and does not do for Copilot

What it does not do: Mneme does not hook into Copilot's generation in-flight. GitHub Copilot does not expose a third-party PreToolUse-style API that would let Mneme gate a suggestion before it reaches the editor. That kind of in-flight enforcement currently exists in Claude Code via the hook system — see the Claude Code integration.

What it does do: Mneme provides the upstream constraint corpus and the downstream CI gate. The corpus is the single source of architectural truth; the CI gate is the backstop that catches violations regardless of which tool produced the code — including Copilot.
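
As a sketch of what that backstop looks like in practice (the workflow below is illustrative — job and step names are assumptions; the GitHub Actions integration doc is the reference):

```yaml
# Illustrative workflow; see the GitHub Actions integration for the reference version.
name: mneme-gate
on: pull_request
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install mneme
      - run: mneme check   # fails the job if the PR diff violates a recorded decision
```

Because the gate runs on the diff rather than on any tool's output stream, it covers Copilot, Claude Code, Cursor, and hand-written changes identically.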

Three points of contact

  • 1. Corpus as context. The same project_memory.json that drives Claude Code hook enforcement can be exported as Cursor Rules and adapted into Copilot's existing context surfaces (custom instructions, repo-level guidance files). One source, multiple downstream consumers.
  • 2. CI gate, tool-agnostic. mneme check runs against every PR diff regardless of which tool produced the code. Copilot-authored changes that violate a recorded decision get blocked in GitHub Actions like any other PR. See the GitHub Actions integration for the reference workflow.
  • 3. One corpus, vendor-independent. The decision corpus is yours. When you swap a coding tool, switch IDEs, or add a new agent to the stack, the architectural decisions don't have to be re-described from scratch. Copilot is one consumer of the corpus; tomorrow's tool will be another.
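
The first point of contact — adapting the corpus into Copilot's existing context surfaces — amounts to a small export step. The sketch below assumes a hypothetical corpus shape (a "decisions" list with "id" and "rule" fields) and renders it as a repo-level instructions file; Mneme's actual export command and schema may differ.

```python
import json

# Hypothetical corpus shape; the real .mneme/project_memory.json schema may differ.
corpus = {
    "decisions": [
        {"id": "DB-001", "rule": "Use Postgres, not Mongo"},
        {"id": "DEP-002", "rule": "No openai package in this repo"},
    ]
}

def to_copilot_instructions(corpus: dict) -> str:
    """Render corpus decisions as a markdown instructions file
    (e.g. for a repo-level guidance file Copilot can read)."""
    lines = ["# Architectural decisions (generated from .mneme/project_memory.json)", ""]
    for d in corpus["decisions"]:
        lines.append(f"- [{d['id']}] {d['rule']}")
    return "\n".join(lines) + "\n"

print(to_copilot_instructions(corpus))
```

The design point is one-directional flow: the generated file is a build artifact of the corpus, never edited by hand, so a changed decision propagates to every tool's surface from the single source.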

Status

  • The decision corpus, mneme check, and the GitHub Actions reference workflow are available today via pip install mneme
  • Cursor Rules export is the current adapter pattern for any tool without a hook API — Copilot included
  • Direct Copilot integration (suggestion-time enforcement) requires GitHub to ship a third-party API; this is upstream of Mneme and not on our roadmap as a near-term deliverable
  • For teams using both Claude Code and Copilot today, the layered model below is the production pattern

Layered governance model

Layer 1: Generation-time enforcement via Claude Code hooks (where hooks exist)
Layer 2: Per-tool context (Cursor Rules, Copilot custom instructions) generated from the same corpus
Layer 3: CI gate via Mneme in GitHub Actions — the backstop that covers every tool

Copilot lives at Layer 2 today. The corpus drives the context Copilot can see; the CI gate catches what slips through. When and if GitHub ships a third-party suggestion API, the same corpus is ready to drive Layer 1 enforcement there as well — without re-describing decisions per tool.
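
Conceptually, the Layer 3 backstop reduces to scanning the PR diff for violations of recorded decisions. A minimal sketch of one such check — a banned-dependency rule in plain Python, not Mneme's implementation — under the assumption that decisions can be compiled down to patterns over added lines:

```python
import re

# Derived from a recorded decision such as "no openai package in this repo".
BANNED_IMPORTS = {"openai"}

def violations_in_diff(diff_text: str) -> list[str]:
    """Scan the added lines of a unified diff for imports of banned packages."""
    found = []
    for line in diff_text.splitlines():
        # Only added lines matter; skip the '+++' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        m = re.match(r"\+\s*(?:import|from)\s+([A-Za-z_]\w*)", line)
        if m and m.group(1) in BANNED_IMPORTS:
            found.append(line[1:].strip())
    return found

diff = """\
+import openai
+from requests import get
"""
print(violations_in_diff(diff))  # → ['import openai']
```

A real gate would of course cover more than import statements, but the shape is the same: decisions in, diff in, violations out — independent of which tool authored the change.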

FAQ

Does Mneme HQ hook directly into Copilot's suggestions in VS Code or JetBrains?
No. GitHub Copilot does not expose a PreToolUse-style API for third-party governance to gate generation in-flight. Mneme sits one layer above Copilot: it holds the architectural decision corpus that defines what your AI tools should and shouldn't do, and enforces those decisions wherever hook-level coverage exists (Claude Code today) and at the CI gate.
Then what's the practical integration between Mneme and Copilot?
Three points of contact. First, the same decision corpus that drives Claude Code hook enforcement can be exported as Cursor Rules and adapted as context Copilot can see (custom instructions, repo-level guidance files). Second, the mneme check CI gate runs against every PR diff regardless of which tool produced the code — so Copilot-authored changes that violate a recorded decision get blocked in GitHub Actions. Third, the corpus itself is the single source of truth: when a decision changes, every downstream surface updates from one file.
Why use Mneme HQ if our team is on Copilot rather than Claude Code?
Because the corpus, not the editor, is what survives. Engineers swap tools; teams add and drop AI assistants. The architectural decisions — what database we use, which dependencies are allowed, what patterns we accept — should not be locked inside any one vendor's prompt-engineering surface. Mneme HQ gives you a versioned, structured decision corpus that any current or future tool can read, with CI enforcement as the backstop. Copilot is one consumer; tomorrow's tool will be another.