Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus should be read. Their order below gives the recommended reading sequence.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs: Editorial framing, temporality, and the readability of explicit changes.
- Bounds: Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs: Access order across surfaces and initial precedence.
- Bounds: Unconstrained readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
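As an illustration of consuming this surface, the sketch below fetches the manifest and lists its declared surfaces in their published order. This is a minimal sketch: the origin URL and the field names (`surfaces`, `precedence`, `path`) are assumptions made for the example, not the schema /ai-manifest.json actually publishes.

```python
import json
from urllib.request import urlopen

BASE = "https://example.org"  # placeholder origin, not a real endpoint

# Fetch the public manifest. Field names below are assumed for illustration.
with urlopen(f"{BASE}/ai-manifest.json") as resp:
    manifest = json.load(resp)

# Honor the published reading order instead of crawling surfaces freely.
for surface in sorted(manifest.get("surfaces", []),
                      key=lambda s: s.get("precedence", 0)):
    print(surface.get("precedence"), surface.get("path"))
```

The point of the sketch is the discipline, not the schema: read in the declared order, and treat anything outside it as exactly the kind of unconstrained reading this surface bounds.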
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: Actor identity, system obedience, or that activation actually occurred.
- Use when: You need to distinguish descriptive observation from strong attestation.
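To make the weak-evidence point concrete, here is a hypothetical shape for a single ledger entry. The field names and values are invented for illustration; the real schema is whatever /.well-known/q-ledger.json publishes.

```python
# Hypothetical ledger entry, invented for illustration only.
# Note what is absent: no actor identity, no proof that anything was obeyed.
entry = {
    "observed_at": "2025-01-15T09:30:00Z",  # dated
    "surface": "/site-context.md",          # contextualized
    "signal": "inferred-consultation",      # descriptive, not attested
    "confidence": "low",                    # weak by design
}
```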
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: The truth of a representation, the fidelity of an output, or, on its own, real steering.
- Use when: You need to compare windows, prioritize an audit, or document a before/after.
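A sketch of the comparison this layer enables, assuming each snapshot maps an indicator name to a number. The indicator names here are placeholders, not the published schema of /.well-known/q-metrics.json.

```python
def snapshot_delta(before: dict[str, float],
                   after: dict[str, float]) -> dict[str, float]:
    """Per-indicator change between two snapshots of the metrics surface."""
    return {k: after.get(k, 0.0) - before.get(k, 0.0)
            for k in set(before) | set(after)}

# A delta is a descriptive indicator: it shows that something moved,
# not that any representation is true or any output faithful.
print(snapshot_delta({"citation_rate": 0.12}, {"citation_rate": 0.17}))
```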
A citation-quality audit should answer one question before all others: did the displayed source actually govern the claim?
Counting citations is easy. Qualifying citations is harder and much more useful. A source that appears 20 times as an ornamental citation is less valuable than a source that appears 3 times and correctly governs the answer.
Step 1: preserve the observation
Record the system, interface, date, language, location when relevant, prompt, answer text, visible citations, and screenshots when necessary. Do not rely on memory. AI answers change, and citation displays are often unstable.
The observation should also preserve the user intent. A navigational prompt, a comparison prompt, a “best” prompt, a legal prompt, and a definition prompt do not create the same source requirements.
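A minimal sketch of an observation record covering the fields above; every field name is an illustrative choice, not a fixed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    system: str                  # answer engine consulted
    interface: str               # web UI, API, embedded widget, ...
    date: str                    # ISO date of the observation
    language: str
    prompt: str                  # preserved verbatim
    intent: str                  # navigational, comparison, "best", legal, definition
    answer_text: str
    location: str | None = None  # only when relevant
    citations: list[str] = field(default_factory=list)    # as displayed
    screenshots: list[str] = field(default_factory=list)  # when necessary
```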
Step 2: isolate the claim
Do not audit the answer as one block. Break it into claims. Identify which sentence the citation is supposed to support. Then ask whether the visible source contains the passage, evidence, or authority needed for that sentence.
Many citation failures appear only at this level. The source may support the topic, but not the exact claim.
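A sketch of claim-level bookkeeping, assuming the auditor records two separate judgments per claim; the names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    sentence: str                 # the exact sentence being audited
    displayed_source: str         # the citation shown for that sentence
    source_supports_topic: bool   # the loose test
    source_supports_claim: bool   # the strict test that usually fails

def claim_level_failures(claims: list[Claim]) -> list[Claim]:
    """Citations that match the topic but not the exact claim."""
    return [c for c in claims
            if c.source_supports_topic and not c.source_supports_claim]
```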
Step 3: classify the citation role
Use the AI citation quality matrix. A citation can be governing, supporting, illustrative, ornamental, outdated, contradictory, or insufficient.
This classification prevents overcounting. It also shows which corrections are editorial, which are technical, and which are governance problems.
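The seven roles, expressed as a closed vocabulary. Keeping the set closed is what prevents overcounting: an ornamental citation can never be tallied as governing. The enum is a sketch; the matrix itself defines the roles.

```python
from enum import Enum

class CitationRole(Enum):
    GOVERNING = "governing"        # the source actually drives the claim
    SUPPORTING = "supporting"
    ILLUSTRATIVE = "illustrative"
    ORNAMENTAL = "ornamental"      # displayed, but does no work
    OUTDATED = "outdated"
    CONTRADICTORY = "contradictory"
    INSUFFICIENT = "insufficient"
```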
Step 4: compare source hierarchy
Identify the source that should have governed the claim. It may be the official page, a definition, a policy, a current service page, a proof artifact, a third-party reference, or a local jurisdictional source.
Then compare it to the displayed source. If the displayed source is weaker, older, broader or derivative, the answer has a source hierarchy problem.
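A minimal check, assuming the hierarchy can be expressed as an ordered list from strongest to weakest. The ordering below is an illustrative assumption, not a published standard.

```python
# Strongest first; illustrative ordering only.
HIERARCHY = [
    "official-page", "definition", "policy", "current-service-page",
    "proof-artifact", "third-party-reference", "jurisdictional-source",
    "derivative",
]

def has_hierarchy_problem(displayed: str, governing: str) -> bool:
    """True when the displayed source ranks below the one that should govern."""
    return HIERARCHY.index(displayed) > HIERARCHY.index(governing)

print(has_hierarchy_problem("derivative", "official-page"))  # True
```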
Step 5: define the correction
Correction may require a clearer answer block, a stronger internal route, a better definition, a page update, a canonical replacement, a source-map correction or a governance-surface update in the appropriate repo.
The audit should not end with “good” or “bad”. It should name the correction path.
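One way to force that discipline is to require every fidelity gap to resolve to a named path. Both the gap labels and the paths below are illustrative, drawn from the options listed above.

```python
# Illustrative mapping from fidelity gap to correction path.
CORRECTION_PATHS = {
    "unclear-answer-block": "rewrite the answer block",
    "weak-internal-route": "strengthen the internal route",
    "missing-definition": "publish a better definition",
    "stale-page": "update the page",
    "wrong-canonical": "replace the canonical",
    "bad-source-map": "correct the source map",
    "governance-drift": "update the governance surface in the appropriate repo",
}

def name_correction(gap: str) -> str:
    """An audit finding ends with a path, never with just 'good' or 'bad'."""
    return CORRECTION_PATHS.get(gap, "escalate: no known correction path")
```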
Minimum output
A useful citation-quality audit should produce a table with prompt, system, date, answer claim, displayed source, citation role, stronger source, fidelity gap, risk, and recommended correction.
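A sketch of that output as CSV, one row per audited claim; the sample values are invented placeholders.

```python
import csv
import sys

COLUMNS = [
    "prompt", "system", "date", "answer_claim", "displayed_source",
    "citation_role", "stronger_source", "fidelity_gap", "risk",
    "recommended_correction",
]

writer = csv.DictWriter(sys.stdout, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow({  # invented placeholder row
    "prompt": "best X for Y",
    "system": "answer-engine-A",
    "date": "2025-01-15",
    "answer_claim": "X is certified for Y",
    "displayed_source": "third-party blog post",
    "citation_role": "ornamental",
    "stronger_source": "official certification page",
    "fidelity_gap": "source supports topic, not claim",
    "risk": "legal",
    "recommended_correction": "update page and source map",
})
```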
This is the difference between tracking citations and governing representation.