Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs
- Editorial framing, temporality, and the readability of explicit changes.
- Bounds
- Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed, in the form of weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- To distinguish descriptive observation from strong attestation.
A citation count is not an audit. The useful unit is the relationship between a generated claim, a cited source and the authority that should govern it.
AI citation tracking is becoming a common visibility practice, but many reports still measure the wrong thing. They count whether a domain was cited, how often a URL appeared, or whether a brand was included in an answer. Those observations are useful, but they are not enough to diagnose legitimacy.
A source can appear often and still be used weakly. Another source can appear rarely but govern a decisive claim. A third-party directory can be cited instead of the official source. A citation can support part of a sentence while the generated synthesis goes beyond the cited evidence.
The minimum audit unit
A citation audit should not start with URLs. It should start with claims.
For each generated answer, identify:
- the claim being made;
- the source displayed or implied;
- the passage that appears to support the claim;
- the source that should govern the claim;
- the gap between the generated statement and the governing source.
Only then can the audit classify the citation.
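The per-claim audit record described above can be sketched as a minimal data structure. Field names here are illustrative and mirror the five audit steps; they are not a published schema, and the sample values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimAudit:
    """One generated claim and the evidence relationship around it.

    Fields follow the audit steps in the text; names are illustrative.
    """
    claim: str                         # the statement the answer makes
    displayed_source: Optional[str]    # source shown or implied by the answer
    supporting_passage: Optional[str]  # passage that appears to back the claim
    governing_source: str              # source that *should* govern the claim
    gap: str                           # difference between claim and governing source

# Hypothetical example: a directory is cited where the vendor should govern.
record = ClaimAudit(
    claim="Product X has supported feature Y since 2023.",
    displayed_source="https://third-party-directory.example/x",
    supporting_passage="X added Y in a 2023 release.",
    governing_source="https://vendor.example/changelog",
    gap="The answer generalizes a beta note into full support.",
)
```

Starting from the claim rather than the URL keeps the governing source and the gap as first-class fields, which is exactly what a URL-first report cannot express.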
Citation roles
The central variable is citation role. A citation may be governing, supporting, illustrative, ornamental, outdated or contradictory.
| Role | Audit meaning |
|---|---|
| Governing | the source legitimately constrains the claim |
| Supporting | the source helps but does not fully govern |
| Illustrative | the source gives context or an example |
| Ornamental | the citation is displayed but weakly connected |
| Outdated | the source was valid in another state or period |
| Contradictory | the source conflicts with the answer |
A report that counts all of those roles equally is not measuring citation quality. It is measuring citation visibility.
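The difference between visibility and quality can be made concrete with a role-aware count. This is a sketch: the role names come from the table above, while the metric functions are hypothetical, not a standard scoring method.

```python
from enum import Enum
from collections import Counter

class CitationRole(Enum):
    GOVERNING = "governing"
    SUPPORTING = "supporting"
    ILLUSTRATIVE = "illustrative"
    ORNAMENTAL = "ornamental"
    OUTDATED = "outdated"
    CONTRADICTORY = "contradictory"

def visibility_count(roles):
    """Naive metric: every citation counts the same."""
    return len(roles)

def quality_profile(roles):
    """Role-aware metric: break the same observations down by role."""
    return Counter(role.value for role in roles)

# Hypothetical observations of one source across several answers:
observed = [
    CitationRole.GOVERNING,
    CitationRole.ORNAMENTAL,
    CitationRole.ORNAMENTAL,
    CitationRole.OUTDATED,
]
# visibility_count says 4; quality_profile shows only one governing citation.
```

The same four observations produce a visibility score of four, but the profile shows that only one citation actually governs a claim.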
What must be tracked over time
A mature tracking audit should preserve, for each observation:
- the system, model, or product surface;
- the date, language, and location when relevant;
- the prompt variant and the answer;
- the displayed citations and implied sources;
- the cited passage;
- the citation role;
- the correction hypothesis.
The audit should also identify source substitution. This happens when a weaker or secondary source replaces the canonical source in the answer. Source substitution is more important than raw citation frequency because it shows who is governing the interpretation.
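Source substitution can be flagged with a simple check against a declared source hierarchy. The topic-to-domain mapping below is a hypothetical example, not a real registry.

```python
# Hypothetical canonical hierarchy: topic -> domain that should govern it.
CANONICAL = {
    "pricing": "vendor.example",
    "identity": "vendor.example",
}

def is_substitution(topic: str, cited_domain: str) -> bool:
    """True when a non-canonical domain is cited for a governed topic."""
    canonical = CANONICAL.get(topic)
    return canonical is not None and cited_domain != canonical

# A directory cited instead of the official source counts as substitution;
# the official source itself does not.
```

A frequency report would count both domains the same way; the substitution check records who is actually governing the interpretation.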
Stability matters
A single screenshot is not enough. AI-mediated answers vary by prompt phrasing, system, session, language and time. The question is not whether one answer cited one source once. The question is whether the source role persists across repeated observations.
This is where citation persistence and citation fidelity become stronger than a domain-level visibility score.
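Persistence across repeated observations can be sketched as the share of runs in which the source held a governing or supporting role rather than being merely visible. The role labels and the sample run are illustrative.

```python
def role_persistence(observations, strong_roles=("governing", "supporting")):
    """Fraction of repeated observations in which the source's role
    was governing or supporting, rather than merely visible or absent."""
    if not observations:
        return 0.0
    strong = sum(1 for role in observations if role in strong_roles)
    return strong / len(observations)

# Hypothetical: ten runs of the same prompt family across sessions and days.
runs = [
    "governing", "ornamental", "governing", "supporting", "governing",
    "absent", "governing", "ornamental", "governing", "supporting",
]
# Seven of ten runs show a strong role, so persistence is 0.7.
```

A single screenshot would report one of these ten outcomes; the persistence ratio reports whether the role survives variation in prompt, session, and time.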
Correction route
If the audit finds low citation frequency, the correction may involve access, ranking, fan-out coverage or extractability. If it finds ornamental citations, the correction may involve clearer evidence and stronger passage structure. If it finds source substitution, the correction may involve internal routes, canonical claims and source hierarchy.
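The diagnosis-to-correction routing above can be kept explicit so that each finding maps to its candidate corrections. The mapping is a direct sketch of the text, not an exhaustive taxonomy, and the finding keys are hypothetical labels.

```python
# Sketch of the correction routes named in the text; keys are illustrative.
CORRECTION_ROUTES = {
    "low_citation_frequency": [
        "access", "ranking", "fan-out coverage", "extractability",
    ],
    "ornamental_citations": [
        "clearer evidence", "stronger passage structure",
    ],
    "source_substitution": [
        "internal routes", "canonical claims", "source hierarchy",
    ],
}

def correction_hypotheses(finding: str) -> list:
    """Return candidate correction routes for an audit finding."""
    return CORRECTION_ROUTES.get(finding, ["needs manual review"])
```

Keeping the mapping explicit means every audit record can carry a correction hypothesis, closing the loop between observation and remediation.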
The AI citation readiness audit prepares the source. The AI citation tracking audit observes how systems use it. Both must be connected to proof of fidelity before the result can be treated as legitimate.