Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus should be read. Their order below gives the recommended reading sequence.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs: Access order across surfaces and initial precedence.
- Bounds: Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs: Public identity, roles, and attributes that must not drift.
- Bounds: Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
Identity lock
/identity.json
Identity file that bounds critical attributes and reduces biographical or professional collisions.
- Governs: Public identity, roles, and attributes that must not drift.
- Bounds: Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01. Response authorization (Q-Layer: response legitimacy)
- 02. Weak observation (Q-Ledger)
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: The legitimacy regime to apply before treating an output as receivable.
- Does not prove: That a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: A page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: Actor identity, system obedience, or strong evidence of activation.
- Use when: It is necessary to distinguish descriptive observation from strong attestation.
Multi-agent systems introduce a problem that single-agent architectures do not face: delegated interpretive authority. When one agent hands off a task to another, it does not merely transfer an instruction. It transfers a framing, a scope, and an implicit set of constraints. If those constraints are not governed, each handoff compounds the risk of interpretive drift.
Why delegation is not just task routing
In most current architectures, delegation is treated as a routing problem. Agent A determines that Agent B has the right capability, passes a payload, and waits for a result. The assumption is that the task is self-contained and that the receiving agent will interpret it faithfully.
That assumption fails when:
- the delegating agent has already narrowed the scope in ways the receiving agent cannot detect;
- the receiving agent applies its own constraints, which may conflict with the original intent;
- neither agent logs the interpretive choices made during the handoff;
- the chain involves three or more agents, each adding a layer of untracked reinterpretation.
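The failure mode above can be made concrete with a minimal sketch. This is illustrative only: the agent functions and task strings are hypothetical, and real handoffs pass richer payloads, but the structural problem is the same — the receiving agent only ever sees the rewritten task.

```python
# Hypothetical two-agent handoff. All names and strings are illustrative.

def agent_a_delegate(task: str) -> str:
    # Agent A silently narrows the scope to what it believes Agent B handles.
    # The qualifier "including edge cases" is dropped without any record.
    return task.replace(", including edge cases", "")

def agent_b_execute(task: str) -> str:
    # Agent B only ever sees the narrowed payload; the original intent
    # is unrecoverable from its input.
    return f"executed: {task}"

original = "summarize the incident report, including edge cases"
result = agent_b_execute(agent_a_delegate(original))
# Nothing in Agent B's input distinguishes the rewritten task
# from a faithful one, so the narrowing is undetectable downstream.
```

The point of the sketch is not the string manipulation but the information loss: once the qualifier is gone, no downstream agent can know it ever existed.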
Delegation, in interpretive terms, is not task routing. It is authority transfer. And authority transfer without governance produces what can be called chain drift: a progressive departure from the original intent that no single agent is responsible for.
The compounding problem
In a two-agent chain, drift is bounded. One agent frames, the other executes. The gap between intent and output can be audited by comparing the original instruction with the final result. In longer chains, that comparison becomes meaningless because the instruction itself has been rewritten — implicitly, silently, and without trace.
Each intermediate agent may:
- compress the scope to match its own capabilities;
- drop qualifiers that seemed redundant;
- add assumptions based on its own training context;
- resolve ambiguities that were deliberate.
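The compounding effect can be sketched as a composition of locally reasonable rewrites. Each function below stands in for one intermediate agent's untracked decision; the names and task strings are hypothetical.

```python
# Hypothetical chain drift: each step is a small, defensible rewrite;
# their composition rewrites the task. Names are illustrative.

def compress_scope(task: str) -> str:
    # Fit the task to this agent's own capabilities.
    return task.replace("all regions", "the primary region")

def drop_qualifier(task: str) -> str:
    # Remove a qualifier that "seemed redundant".
    return task.replace(" (draft only)", "")

def resolve_ambiguity(task: str) -> str:
    # Resolve an ambiguity that was deliberate in the original.
    return task + "; publish immediately"

chain = [compress_scope, drop_qualifier, resolve_ambiguity]
task = "compile sales figures for all regions (draft only)"
for step in chain:
    task = step(task)

# The final instruction carries three unilateral decisions,
# none of them logged and none attributable to a single agent.
print(task)
```

Comparing the final `task` against the original shows exactly the situation described above: the output looks coherent, but the instruction it answers is no longer the instruction that was given.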
The result is a final output that appears coherent but reflects a chain of unilateral interpretive decisions. The organization that deployed the system may have no visibility into which agent made which choice.
What governance must cover in multi-agent delegation
Governing multi-agent chains requires more than logging actions. It requires logging interpretive authority transfers. At each handoff, the system should record:
- what scope was passed;
- what scope was received;
- what was dropped, added, or transformed;
- whether the receiving agent operated within or beyond the authority boundary of the original request.
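One way to make such a record concrete is to represent each scope as a set of clauses and log the set difference at every handoff. The sketch below is a minimal illustration under that assumption; the `HandoffRecord` type and all field names are hypothetical, not an existing API.

```python
# Hypothetical handoff record, assuming scopes can be represented as
# sets of clauses. All names are illustrative.
from dataclasses import dataclass

@dataclass
class HandoffRecord:
    sender: str
    receiver: str
    scope_passed: frozenset    # what the delegating agent sent
    scope_received: frozenset  # what the receiving agent acted on

    @property
    def dropped(self) -> frozenset:
        return self.scope_passed - self.scope_received

    @property
    def added(self) -> frozenset:
        return self.scope_received - self.scope_passed

    def within_authority(self, original_scope: frozenset) -> bool:
        # The receiver stays in bounds only if it acted on nothing
        # beyond what the original request authorized.
        return self.scope_received <= original_scope

original = frozenset({"summarize report", "include edge cases", "draft only"})
record = HandoffRecord(
    sender="agent_a",
    receiver="agent_b",
    scope_passed=frozenset({"summarize report", "include edge cases", "draft only"}),
    scope_received=frozenset({"summarize report", "publish immediately"}),
)

print(sorted(record.dropped))             # clauses lost at the handoff
print(sorted(record.added))               # clauses introduced by the receiver
print(record.within_authority(original))  # False: the boundary was crossed
```

Even this crude representation makes the two failure modes separable: `dropped` captures silent narrowing, `added` captures silent expansion, and `within_authority` flags the moment an agent exceeds the original request.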
Without this trace, the chain is opaque. The final output may look correct while carrying accumulated drift that only surfaces when it contradicts the entity’s published canon.
Why human oversight does not solve chain drift
A common response to multi-agent risk is to add human review at the end of the chain. But as with human approval theater, final review does not retroactively govern the interpretive choices made upstream. The human reviewer sees the output, not the chain of framings that produced it.
Effective governance requires intervention at the delegation points, not only at the output. Each handoff is a potential interpretive capture event — a moment where one agent’s framing overwrites the original intent without explicit authorization.
The design implication
Multi-agent architectures should treat delegation as a governed operation. That means:
- declaring what interpretive authority each agent holds;
- bounding what each agent may transform in the payload;
- logging what each agent actually changed;
- comparing the final output against the original scope.
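The four requirements above can be composed into one governed-handoff sketch. This is a toy illustration under stated assumptions: the authority table, transform names, and agents are all hypothetical, and a production system would enforce this at the orchestration layer rather than in the payload path.

```python
# Hypothetical governed delegation: each agent declares which transforms
# it may apply, every decision is logged, and the final output can be
# compared against the original scope. All names are illustrative.

log = []

AUTHORITY = {
    "agent_b": {"compress"},  # may compress the scope, nothing else
    "agent_c": set(),         # may not transform the payload at all
}

def governed_handoff(agent, transform_name, transform, payload):
    # Bound what the agent may change: unauthorized transforms are
    # refused (and logged) rather than silently applied.
    if transform_name not in AUTHORITY.get(agent, set()):
        log.append((agent, transform_name, "REFUSED"))
        return payload
    log.append((agent, transform_name, "APPLIED"))
    return transform(payload)

original = "compile sales figures for all regions (draft only)"
p = governed_handoff("agent_b", "compress",
                     lambda t: t.replace("all regions", "the primary region"),
                     original)
p = governed_handoff("agent_c", "resolve_ambiguity",
                     lambda t: t + "; publish immediately",
                     p)

# Final comparison against the original scope: the refused transform
# left no trace in the output, and the log says who tried what.
assert "publish immediately" not in p
print(log)
print(p)
```

The design choice worth noting is that refusal is itself a logged event: an auditor can reconstruct not only what each agent changed, but what each agent attempted to change.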
These are not theoretical requirements. They are operational prerequisites for any organization that wants its agentic systems to remain auditable, traceable, and aligned with published intent.