Most discussion of AI coding governance focuses on a single surface — usually a single agent in a single editor. The unspoken assumption is that if the constraint works in that one place, governance is solved. It is not. The hard problem in real teams is not enforcing a constraint in one session; it is making sure the same constraint, with the same meaning, is enforced in every session, by every agent, on every developer's machine, and in CI. That property has a name: governance propagation.
## What governance propagation actually means
Propagation has three operational components, each of which must hold for the property to be useful.
### 1. Source identity
Every consumer reads from the same compiled corpus. Not "a copy of the corpus that was up-to-date last Tuesday." Not "the Cursor-specific export." Not "the CLAUDE.md that the team manually keeps in sync with the wiki." A single artifact, versioned, durable, and addressable. Source identity is the prerequisite for everything else: if two consumers read from two different sources, they will reach different verdicts, and that divergence is the absence of propagation.
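One way to make "a single artifact, versioned, durable, and addressable" concrete is content addressing: identify the compiled corpus by a digest of its canonical form, so any two consumers can prove they are reading the same source. This is a minimal sketch; `corpus_digest` and the record shape are illustrative assumptions, not a real API.

```python
import hashlib
import json

def corpus_digest(records: list[dict]) -> str:
    """Canonical digest of a compiled corpus: identical records yield an
    identical identity, regardless of which consumer computes it."""
    canonical = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

records = [
    {"id": "ADR-042", "scope": "services/billing/**", "rule": "no-direct-db-access"},
]

# Two consumers that load byte-identical records compute the same digest;
# a mismatch is detectable *before* either issues a verdict.
assert corpus_digest(records) == corpus_digest(list(records))
```

A consumer that logs this digest alongside every verdict gives you source identity for free: if two verdicts cite different digests, you have found the divergence, not merely suspected it.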
### 2. Semantic equivalence
Reading the same corpus is necessary but not sufficient. The consumers must interpret the corpus identically — the same scope syntax must select the same files; the same precedence rules must resolve conflicts the same way; the same predicate must produce the same verdict against the same input. Semantic equivalence is what governance propagation provides beyond simple replication. A YAML file shared between two tools that interpret it differently is two governance systems, not one.
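The practical fix for scope semantics is to make the predicate itself the shared artifact: every consumer calls one matcher rather than its editor's native glob engine. A minimal sketch, using the standard-library `fnmatch` as a stand-in for whatever glob dialect the corpus standardizes on:

```python
import fnmatch

def in_scope(path: str, pattern: str) -> bool:
    """The one scope predicate every consumer must call. The dialect it
    implements (here: fnmatch rules) is fixed by the corpus, not the tool."""
    return fnmatch.fnmatch(path, pattern)

# The same predicate yields the same verdict in every consumer:
assert in_scope("services/billing/api/db.py", "services/billing/**")
assert not in_scope("services/search/index.py", "services/billing/**")
```

The point is not which dialect wins; it is that exactly one dialect exists, and it lives with the corpus rather than with each tool.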
### 3. Update propagation
When the source changes, every consumer must see the change. The propagation latency must be bounded — ideally near-zero, since every minute that one consumer has stale rules is a minute the team has multiple effective governance regimes. In practice, this means consumers query the live corpus at decision time (pull) or are notified of updates (push), and never operate on cached snapshots that can silently desync.
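The pull model can be sketched in a few lines: the consumer reads the live corpus at decision time and attributes its verdict to the corpus version it read. The store, its API, and the rule shape are all hypothetical illustrations, not a real implementation.

```python
class CorpusStore:
    """Hypothetical live corpus store. Reads return version and records
    together, so a verdict is always attributable to one version."""
    def __init__(self):
        self._version = 0
        self._records = []

    def publish(self, records):
        self._records = list(records)
        self._version += 1

    def read(self):
        return self._version, list(self._records)

class Consumer:
    def __init__(self, store):
        self.store = store

    def decide(self, change: str):
        version, records = self.store.read()  # pull at decision time, no cache
        verdict = all(r["predicate"](change) for r in records)
        return version, verdict

store = CorpusStore()
store.publish([{"id": "ADR-007", "predicate": lambda c: "eval(" not in c}])
agent = Consumer(store)

v1, ok1 = agent.decide("subprocess.run(cmd)")   # passes under version 1
store.publish([{"id": "ADR-007", "predicate": lambda c: "subprocess" not in c}])
v2, ok2 = agent.decide("subprocess.run(cmd)")   # new rules apply immediately
```

Because the consumer never holds a snapshot, the window in which it can enforce a stale rule is exactly the duration of one decision.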
The property is "no second copy of the truth." Every export, sync script, or per-tool mirror is a future divergence event waiting to happen. Propagation is what you get when the corpus is the source — not a thing you sync from.
## Why propagation is hard in practice
The hardness is not in the concept. It is in the fact that the consumers of governance — AI coding agents — are heterogeneous and were not designed to share governance state.
- Each agent has its own configuration surface. Claude Code reads CLAUDE.md and hook configs. Cursor reads .cursorrules. Copilot reads copilot-instructions.md. JetBrains AI reads project-level config. CI reads its own config files. These surfaces don't share a schema or a query API.
- Each surface has its own interpretation of "context." A scope rule of `services/billing/**` might mean "include these files in the agent's context" in one tool and "apply this rule when modifying these files" in another. The semantics are not portable.
- Each tool has its own update model. Some watch files for changes; some load configs at session start and never reload; some require explicit reload. Without a unifying layer, the "same" rule can be live in one tool and stale in another at any moment.
Propagation, in this environment, is not a configuration problem — it is an engineering one. It requires a layer that absorbs the heterogeneity of consumer surfaces and presents a uniform query API on top of a single corpus. That layer is the governance infrastructure. Without it, the team is running multiple parallel governance systems and calling them one.
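The shape of that absorbing layer can be sketched as per-tool adapters that translate each surface's native question into one query against one corpus. The tool names are the real tools discussed above; the adapter functions and corpus shape are illustrative assumptions.

```python
# One corpus, one query API. Every adapter below delegates here.
GOVERNANCE_CORPUS = [
    {"id": "ADR-012", "scope": "services/billing/", "rule": "must use payments gateway"},
]

def query(path: str) -> list[dict]:
    """The single query API: which records govern this path?"""
    return [r for r in GOVERNANCE_CORPUS if path.startswith(r["scope"])]

def claude_code_hook(edited_file: str) -> list[str]:
    # Claude Code's hook surface, translated into a corpus query.
    return [r["rule"] for r in query(edited_file)]

def ci_check(changed_files: list[str]) -> dict[str, list[str]]:
    # CI's config surface, translated into the same corpus query.
    return {f: [r["rule"] for r in query(f)] for f in changed_files}
```

The adapters differ in shape because the surfaces differ in shape; the semantics do not differ, because only `query` holds any semantics.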
## Execution surfaces governance must propagate across
Most discussions of agent governance treat propagation as a property of the source tree: the code the agent writes must satisfy architectural constraints, and that is the whole problem. That is too narrow. Long-running, autonomous agents do not only write source code — they write everywhere the workflow touches. Each of the following is an execution surface where the same governance corpus must be evaluable, or the surface is ungoverned by default.
- Source code. The obvious surface: edits, additions, deletions across the working tree. Governance must evaluate diffs against the active decision graph and refuse changes that violate it.
- Branch names. Auto-generated by harnesses, often outside the team's branch taxonomy. Conventions that downstream tooling, release notes, and audit traceability depend on quietly stop working when ungoverned.
- Commit messages. Workflow-generated commits accumulate in history. Message conventions (conventional commits, sign-offs, scope prefixes) are part of the architecture of the repo and need the same enforcement.
- PR titles and descriptions. The squash-merge title is what lands on `main`. If the title taxonomy is the durable record of product decisions, it cannot be left to a model with no governance hook.
- Tags and releases. Tag policies built around durable milestones get diluted by operational commits and ephemeral checkpoints when tagging is delegated to automation without governance.
- CI workflow and pipeline config. Workflow files, runner definitions, secret references, and approval gates are written by agents the same way they write code — but their governance constraints (least privilege, allowlists) are stricter and less visible.
- Deployment artifacts. Manifests, container tags, infra-as-code, generated changelogs, and release announcements all carry organizational intent that ungoverned automation can silently violate.
- Generated configuration. Feature flags, routing rules, scaling policies, and integration configs are generated as code. They are rarely reviewed with the same rigor and almost never checked against architectural decisions.
- Agent-produced documentation. READMEs, ADR drafts, runbooks, inline comments. Drift in docs propagates faster than drift in code, because the next agent session reads docs as authoritative.
A governance layer that enforces ADR compliance in src/ but ignores the surrounding automation artifacts is governing a fraction of the agent's output. The interesting, expensive failures live exactly at the boundaries the agent crosses on its way to "shipping a change" — the branch, the title, the workflow file, the deployment manifest, the release note. The companion essay Harness Engineering Still Needs Governance develops the execution-surfaces argument in full.
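A check across these non-code surfaces can be sketched as one dispatcher evaluating corpus rules against whatever artifact the agent produced. The rule set below (a branch taxonomy, conventional-commit messages, ticket-prefixed PR titles) is an illustrative assumption about one team's conventions, not a fixed schema.

```python
import re

# Illustrative per-surface rules, compiled from the governance corpus.
SURFACE_RULES = {
    "branch":   re.compile(r"^(feat|fix|chore)/[a-z0-9-]+$"),
    "commit":   re.compile(r"^(feat|fix|chore)(\([a-z-]+\))?: .+"),
    "pr_title": re.compile(r"^\[[A-Z]+-\d+\] .+"),
}

def check(surface: str, value: str) -> bool:
    """Evaluate an agent-produced artifact against its surface's rule."""
    rule = SURFACE_RULES.get(surface)
    if rule is None:
        # An unknown surface is ungoverned by default, so fail closed.
        return False
    return bool(rule.match(value))

assert check("branch", "feat/governed-branch-names")
assert not check("commit", "wip stuff")  # rejected before it enters history
```

The `fail closed` branch is the interesting design choice: when a workflow invents a surface the corpus has never seen, the safe default is refusal, not silence.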
## The common misread: shared documents as propagation
The most common shortcut is to keep a single CLAUDE.md or .cursorrules file and commit it to the repo, then assume every tool reads it consistently. This is shared documentation, not shared governance.
The problem is that each tool ingests the file differently. Claude Code may inject it into every prompt; Cursor may use it as a hint; Copilot may ignore it depending on file size; CI may not read it at all. Even if every tool theoretically reads the same file, the operational consequence of reading it is tool-specific. Decisions encoded in prose are interpreted by each consumer model — and interpretation varies across consumers, across runs, across context windows. That is not propagation; that is parallel interpretation of a shared hint.
Real propagation requires structured records and a query API, not a shared document. A predicate that can be evaluated identically by every consumer — not a paragraph that each consumer reads in its own way.
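The difference between a paragraph and a predicate can be made concrete. A minimal sketch of a structured decision record whose verdict is computed, not interpreted; the field names and the single-parameter predicate are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    """A decision as data: scope plus a machine-evaluable predicate.
    No consumer 'reads' this in its own way; it computes one verdict."""
    id: str
    scope_prefix: str       # which files the rule governs
    forbidden_import: str   # the predicate's single parameter

    def verdict(self, path: str, source: str) -> bool:
        if not path.startswith(self.scope_prefix):
            return True  # out of scope: the record does not apply
        return f"import {self.forbidden_import}" not in source

rec = DecisionRecord("ADR-031", "services/billing/", "sqlalchemy")
assert rec.verdict("services/billing/ledger.py", "import requests")
assert not rec.verdict("services/billing/ledger.py", "import sqlalchemy")
```

Contrast this with the prose equivalent ("billing services should not talk to the database directly"), which each model will scope, weigh, and sometimes ignore differently on every run.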
## What propagation enables
When propagation holds, five downstream properties become possible — and none of them are achievable without it.
| Property | Without propagation | With propagation |
|---|---|---|
| Decision lifetime | Lives as long as the file is in sync | Enforced everywhere, immediately |
| Onboarding | Per-tool configuration, per-developer | One bootstrap; works in every agent |
| Auditability | Which file did this agent read? | Which compiled record applied? Same answer from every consumer |
| Drift | Inevitable — divergence is the default | Detectable — drift is a violation that surfaces |
| Multi-agent workflows | Each agent governed by its own subset | Workflow governed end-to-end by one corpus |
The connective property is that propagation makes governance composable. You can have a constraint that applies to a Claude Code session that hands off to a Cursor refactor that gets merged through CI — and the same architectural decision is enforced at every step, because every step queries the same corpus.
## Related concepts
- Multi-agent continuity — what propagation enables across time. Continuity is the temporal version of propagation; propagation is the spatial version of continuity.
- Precedence semantics — what every consumer must interpret identically for propagation to be meaningful. Without shared precedence rules, "the same corpus" produces different verdicts.
- Enforcement provenance — what propagation makes possible. When every consumer reports verdicts against the same corpus, you can trace any enforcement event back to a single, citable decision record.