Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus should be read. The order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and improper recategorization.
Does not guarantee: A canonical surface reduces ambiguity; on its own, it does not guarantee faithful reproduction.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs
- Editorial framing, temporality, and the readability of explicit changes.
- Bounds
- Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Unconstrained readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
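As a minimal sketch of how a consumer might recover that published order, the snippet below parses a hypothetical excerpt of /ai-manifest.json. The field names `entrypoint`, `surfaces`, and `precedence` are assumptions for illustration only, not the published schema.

```python
import json

# Hypothetical excerpt of an /ai-manifest.json payload; every field
# name below is an assumption made for this sketch.
manifest_text = """
{
  "entrypoint": "/canon.md",
  "surfaces": [
    {"path": "/canon.md", "precedence": 1},
    {"path": "/site-context.md", "precedence": 2},
    {"path": "/.well-known/q-ledger.json", "precedence": 3}
  ]
}
"""

manifest = json.loads(manifest_text)

# Recover the published reading order: lower precedence reads first.
# Publishing this order does not force any system to follow it.
reading_order = [
    s["path"]
    for s in sorted(manifest["surfaces"], key=lambda s: s["precedence"])
]
print(reading_order)
```

The point of the sketch is the asymmetry the page describes: the manifest can make the order machine-readable, but compliance remains the consumer's choice.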
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
Definitions canon
/canon.md
Binding baseline for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- When descriptive observation must be distinguished from strong attestation.
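To make the "weak evidence" bound concrete, here is an illustrative check on one ledger entry. The entry shape (`observed_at`, `surface`, `context`) is invented for this sketch and is not the real schema of /.well-known/q-ledger.json.

```python
from datetime import datetime

# Hypothetical shape of one ledger entry; every field name here is
# an assumption, not the ledger's real schema.
entry = {
    "observed_at": "2024-05-01T12:00:00+00:00",
    "surface": "/canon.md",
    "context": "answer cited the canon in a comparison query",
}

def is_weak_trace(e: dict) -> bool:
    """A usable trace must at least be dated and contextualized.

    Passing this check proves neither actor identity nor system
    obedience: it only records that something was observed.
    """
    try:
        datetime.fromisoformat(e["observed_at"])
    except (KeyError, ValueError):
        return False
    return bool(e.get("surface")) and bool(e.get("context"))

print(is_weak_trace(entry))
```

Note what the function deliberately does not attempt: it validates the trace as an observation, and nothing more, which is exactly the descriptive-versus-attestation line the page draws.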
An AI citation proves that a source was displayed. It does not prove that the source was understood, respected or allowed to govern the answer.
This distinction matters because many AI visibility audits still count visible citations as if they were unambiguous wins. They are not. A citation is a surface event. Fidelity is a source-to-output condition.
The citation event
A citation event tells us that a system associated a source with an answer. That source may have been retrieved, selected, displayed or inserted by the interface as support. The event is worth recording, but it remains incomplete.
The citation does not automatically say whether the source supports the exact claim. It does not say whether a stronger source was ignored. It does not say whether the final synthesis stayed inside the source perimeter. It does not say whether the displayed URL was governing, illustrative or decorative.
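The incompleteness described above can be made explicit in a record type. This is a minimal sketch, not a real API: the fields simply separate what the event records from what only an audit can fill in later.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CitationEvent:
    """One observed citation. Known at event time: only the surface facts."""
    source_url: str   # what was displayed alongside the answer
    answer_id: str    # which answer it was attached to
    # Unknown at event time; an audit must determine these later:
    supports_claim: Optional[bool] = None  # does it back the exact claim?
    role: Optional[str] = None  # "governing", "illustrative", or "decorative"

event = CitationEvent(source_url="/canon.md", answer_id="a-17")

# The raw event answers neither open question.
print(event.supports_claim, event.role)
```

Keeping the unknown fields as `None` rather than guessing a default is the design choice that matters: a citation event should never be recorded as support it has not yet demonstrated.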
The fidelity test
Fidelity asks a stricter question: does the output preserve the canonical meaning of the source and the entity?
A faithful answer keeps the correct category, role, scope, date, exclusions and authority hierarchy. It does not extend a service claim beyond what the source authorizes. It does not turn a contextual note into a doctrine. It does not let a secondary source override the canonical surface.
This is why citation fidelity is a better diagnostic concept than citation count. It evaluates whether the citation performed the evidentiary work implied by the answer.
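One way to operationalize the stricter question is a simple attribute comparison. The attribute names and values below are invented for the sketch; a real audit would draw the canonical record from /canon.md.

```python
# Canonical attributes versus the attributes an answer asserts.
# All names and values here are illustrative assumptions.
canon = {"category": "reference site", "scope": "documentation", "current": True}
answer_claims = {"category": "reference site", "scope": "e-commerce", "current": True}

# Fidelity fails on any attribute where the answer drifts from canon,
# even if every other attribute, and the citation itself, is correct.
violations = [
    key for key, value in canon.items()
    if key in answer_claims and answer_claims[key] != value
]
print(violations)
```

A single violation is enough to fail the fidelity test, which is why this check is stricter than counting whether the source appeared.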
Common false positives
The most frequent false positive is the ornamental citation. The answer displays the official source, but the actual claim comes from a weaker synthesis, a directory, a competitor comparison or an assumption learned elsewhere.
Another false positive is partial support. The cited source supports one sentence, but the generated paragraph adds a broader conclusion. A third false positive is historical support: the source was valid at one time, but the answer treats it as current authority.
Why the distinction changes the audit
An audit that stops at “we were cited” cannot separate success from risk. A mature audit classifies the citation's role, assesses the legitimacy of the source, and checks the output for fidelity against it.
The result is a different scorecard. The positive signal is not “source appeared”. The positive signal is “the right source governed the right claim under the right scope”.
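A scorecard built on that positive signal might look like the sketch below. The records, role labels, and the canonical path are all illustrative assumptions, not a real audit format.

```python
# Hedged sketch of a mature audit scorecard: a citation counts as a
# win only when the right source governed a faithful claim.
audits = [
    {"source": "/canon.md", "role": "governing", "faithful": True},
    # Ornamental citation: displayed, but not governing the claim.
    {"source": "/canon.md", "role": "decorative", "faithful": False},
    # Wrong source governs, even though the output happens to be faithful.
    {"source": "blog-roundup", "role": "governing", "faithful": True},
]

def is_win(a: dict) -> bool:
    # Right source, governing role, and fidelity must all hold.
    return a["source"] == "/canon.md" and a["role"] == "governing" and a["faithful"]

wins = sum(is_win(a) for a in audits)
print(f"{wins}/{len(audits)} citations count as wins")
```

Under a count-the-citations audit, all three records would score as successes; under this scorecard, only the first does.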
Practical implication
Content should still be structured for citation. Pages need visible definitions, self-contained passages and answer-ready blocks. But the strongest pages also expose boundaries: what the claim covers, what it excludes, which source governs it and when the statement should be updated.
The goal is not merely to be cited. The goal is to make the citation hard to misuse.