Standards landscape

Where the standards for AI agent governance are forming

There is not yet a finalized standard for cross-tool agent governance. Three efforts, one government-led and two community-led, are the credible foundations for whatever eventually lands. This page tracks them honestly: what is published, what is still in progress, and how Mneme HQ's design aligns with the direction each is taking.

Why this page exists

Engineering leaders evaluating governance tooling reasonably want to know whether what they adopt today will be compatible with whatever standard emerges. Marketing claims of "standards alignment" are cheap; verifiable references to organizational sources are not. This page links only to primary sources: NIST, the Linux Foundation, and the open specifications themselves.

It also separates three things that often get conflated: standards that exist, standards that are forming, and what Mneme has actually contributed versus what we plan to contribute. We do not claim NIST endorsement, foundation membership we have not joined, or contributions we have not filed. Where the answer is "tracking, not yet engaged," we say so.

The three efforts worth tracking

Government · forming · U.S. NIST (CAISI)

NIST AI Agent Standards Initiative

Launched February 2026 by the Center for AI Standards and Innovation (CAISI) at NIST, with the stated aim of helping AI agents "interoperate smoothly across the digital ecosystem." Current scope is identity, authorization, and agent security; output-policy enforcement is adjacent and likely to follow. Concrete artifacts so far: a January 2026 RFI on securing AI agent systems (closed March 9, 2026), the NCCoE concept paper on AI agent identity and authorization (February 2026), and listening sessions on barriers to AI adoption in healthcare, finance, and education.

Open protocol · published · Spec 2025-11-25

Model Context Protocol (MCP)

An open, JSON-RPC-based protocol for exposing context, tools, and resources to AI clients. MCP does not specify a governance content format, but it is the substrate over which a structured decision store can be made queryable to any compliant agent. A decision corpus exposed as an MCP server is consumable by every MCP-aware client (Claude Code, Cursor's MCP support, custom SDK agents) without per-tool integration.
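
To make that concrete, the sketch below shows roughly what exposing a decision store over MCP could look like, using the official TypeScript SDK (@modelcontextprotocol/sdk). The tool name, its parameter, and the loadDecisions helper are hypothetical illustrations, not Mneme's actual API.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical helper: fetch the decision records that apply to a scope.
// The storage behind it is illustrative, not Mneme's actual schema.
async function loadDecisions(scope: string): Promise<string[]> {
  return [`[${scope}] Use the shared logger; no console.log in production code.`];
}

const server = new McpServer({ name: "decision-store", version: "0.1.0" });

// A single tool: return the governance decisions scoped to a path or module.
server.tool(
  "query_decisions",
  "Return the governance decisions that apply to a path or module",
  { scope: z.string().describe("Path or module the agent is working in") },
  async ({ scope }) => ({
    content: [{ type: "text", text: (await loadDecisions(scope)).join("\n") }],
  })
);

// stdio transport: any MCP-aware client can spawn this process and query it,
// which is what makes the corpus consumable without per-tool integration.
await server.connect(new StdioServerTransport());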

Community spec · published · Linux Foundation

AGENTS.md

A markdown convention for per-repo instructions that AI coding agents can read. Adopted across OpenAI Codex, Cursor, Aider, Factory's Droids, Google's Gemini CLI and Jules, Zed, and others; stewarded by the Agentic AI Foundation under the Linux Foundation. As a static-context format, AGENTS.md does not resolve precedence between conflicting decisions or enforce anything at the hook layer, but it is the most credible cross-vendor baseline that exists today for the static portion of the problem.
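
For reference, the convention itself is deliberately plain: a free-form markdown file at the repository root that agents read before acting. A short, invented example:

# AGENTS.md
## Build and test
- Run pnpm test before proposing any diff.
## Conventions
- Use the shared logger in src/lib/log.ts; do not add console.log calls.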

How Mneme HQ aligns with the direction

Mneme HQ is a structured decision store and pre-generation enforcement layer for AI coding. Its design predates these standards efforts, but the architectural choices align with the direction each is taking. The table below maps the alignment honestly.

Mneme × emerging standards
Standard | What it specifies | How Mneme aligns
NIST CAISI | Agent identity, authorization, security, and (forthcoming) governance scope. | Decision store is auditable, scoped, and emits structured records suitable for downstream identity-bound enforcement. Tracking the initiative; planning to engage with future RFIs as scope expands toward output policy.
MCP | JSON-RPC protocol for exposing context, tools, and resources to AI clients. | Mneme's decision store is designed to be exposed as an MCP server so any MCP-aware agent can query it. Active integration work; see the GitHub roadmap.
AGENTS.md | Markdown convention for per-repo instructions, vendor-portable. | Mneme treats AGENTS.md as a first-class export target: scoped, precedence-resolved decisions can be rendered to AGENTS.md for tools that read it, while the structured store remains the source of truth.
OAuth 2.0 / OIDC | Identity standards NIST is adapting to agent identities. | Out of scope for the governance store itself; deployment guidance follows whatever pattern the deploying organization already uses.

Design principles that pre-date the standards work

Mneme HQ was built on assumptions that the emerging standards happen to validate. Naming them explicitly makes the alignment legible, and gives buyers a basis on which to evaluate any future governance tool against the same criteria.

01

Tool-agnostic representation

Decisions are stored in a structured format that is not coupled to any one assistant's prompt convention. Markdown is an export, not the source of truth.
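
As an illustration of what structured, tool-agnostic storage means in practice, a decision record might carry fields like the ones below. This TypeScript shape is a hypothetical sketch, not Mneme's published schema.

// Hypothetical decision record; every field name here is illustrative.
interface DecisionRecord {
  id: string;                      // e.g. "ADR-014"
  statement: string;               // the decision itself, tool-agnostic prose
  scope: string[];                 // path globs the decision applies to
  precedence: number;              // higher wins when two decisions conflict
  status: "active" | "superseded"; // superseded records stay for the audit trail
  supersededBy?: string;           // id of the replacing decision, if any
}

Per-tool markdown, including AGENTS.md, is rendered from records like this; the records, not any rendering, are what gets versioned and queried.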

02

One canonical store, many readers

Every agent — interactive, async, third-party, in-house — queries the same store. Updates are made once, not fanned out across per-tool rule files.

03

Pre-generation injection, scoped

The relevant decisions for the current task are surfaced into whatever agent is running, in a format that agent can consume. Decisions are scoped, not dumped wholesale.
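
A minimal sketch of that scoping, reusing the hypothetical DecisionRecord shape above and the minimatch glob library: keep only active decisions whose scope matches the file being edited, then order by precedence so conflicts resolve deterministically.

import { minimatch } from "minimatch";

// Hypothetical: select the decisions relevant to the file an agent is editing.
function decisionsFor(path: string, all: DecisionRecord[]): DecisionRecord[] {
  return all
    .filter((d) => d.status === "active")
    .filter((d) => d.scope.some((glob) => minimatch(path, glob)))
    .sort((a, b) => b.precedence - a.precedence); // highest precedence first
}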

04

Post-generation enforcement at the seam

Generated diffs, from any agent, are checked against the same governance store before they are accepted. Enforcement lives at the file write, the commit, or the PR — not inside the model.
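
As a sketch of what enforcement at the commit seam could look like: a pre-commit hook that lists staged files, diffs each one, and exits non-zero on any violation. The checkDiff rule below is a hypothetical stand-in for a real query against the governance store; the git plumbing is standard.

import { execSync } from "node:child_process";

// Hypothetical check: replace with a lookup against the governance store.
function checkDiff(diff: string): string[] {
  return /^\+.*console\.log/m.test(diff)
    ? ["added console.log where an active decision forbids it"]
    : [];
}

const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

let failed = false;
for (const file of staged) {
  const diff = execSync(`git diff --cached -- "${file}"`, { encoding: "utf8" });
  for (const violation of checkDiff(diff)) {
    console.error(`${file}: ${violation}`);
    failed = true;
  }
}
// A non-zero exit aborts the commit, whichever agent produced the diff.
process.exit(failed ? 1 : 0);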

What Mneme has done, and what we plan to do

To be clear and verifiable, here is the current state of participation. This will be updated as it changes.

Done

Tracking the initiative and its published artifacts (the RFI, the NCCoE concept paper). AGENTS.md supported as a first-class export target for scoped, precedence-resolved decisions.

Planned

Engagement with future NIST RFIs as CAISI's scope expands toward output policy. Exposing the decision store as an MCP server; the integration work is tracked on the public GitHub roadmap.

Explicitly not claimed

NIST endorsement. Membership in any foundation we have not joined. Contributions we have not filed.

Watch the second pillar. For code-governance teams, the most relevant of the three efforts today is the open-protocol pillar, where MCP and similar protocols sit. NIST's published work so far centers on agent identity. If and when CAISI's scope expands toward behavioral and output policy, that is the moment to engage actively rather than track passively.

How to verify any of this

Every claim above links to a primary, organizational source: NIST, NCCoE, the MCP project, agents.md, OpenAI Codex documentation. Claims about Mneme link to the open-source repository or the public roadmap. If you find a claim on this page that you cannot trace to a primary source, that is a bug; please tell us and we will fix it or remove it.

Read the deep-dive

The full argument for why heterogeneous AI agents require a shared governance layer — with historical context from OCI, OpenTelemetry, and LSP — is in the article below.

Read the article →