Framework

AI citation readiness checklist

Operational checklist for reviewing whether a page, source or corpus is ready to be retrieved, cited and governed in AI-mediated answers.

Collection: Framework
Type: Method
Layer: Transversal
Version: 1.0
Stabilization: 2026-05-13
Published: 2026-05-13
Updated: 2026-05-13

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Site context
  3. Public AI manifest
Canon and identity (01)

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs
Public identity, roles, and attributes that must not drift.
Bounds
Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Context and versioning (02)

Site context

/site-context.md

Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.

Governs
Editorial framing, temporality, and the readability of explicit changes.
Bounds
Silent drifts and readings that assume stability without checking versions.

Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.

Entrypoint (03)

Public AI manifest

/ai-manifest.json

Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.

Governs
Access order across surfaces and initial precedence.
Bounds
Free readings that bypass the canon or the published order.

Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
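
The published order in a manifest like this can be checked mechanically. The sketch below assumes a hypothetical schema (a `surfaces` array with `path` and `precedence` fields); the real /ai-manifest.json layout is not specified in this document, so treat the field names as assumptions.

```python
import json

# Hypothetical manifest shape; the actual /ai-manifest.json schema may differ.
EXAMPLE_MANIFEST = json.loads("""
{
  "surfaces": [
    {"path": "/site-context.md", "precedence": 2},
    {"path": "/canon.md", "precedence": 1},
    {"path": "/.well-known/q-ledger.json", "precedence": 3}
  ]
}
""")

def reading_order(manifest):
    """Return surface paths sorted by their declared precedence."""
    surfaces = sorted(manifest["surfaces"], key=lambda s: s["precedence"])
    return [s["path"] for s in surfaces]

def canon_comes_first(manifest, canon_path="/canon.md"):
    """True when the canonical surface opens the published reading order."""
    order = reading_order(manifest)
    return bool(order) and order[0] == canon_path
```

A check like this only verifies that the published order is internally coherent; as the section says, it cannot force any consumer to follow it.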

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope: Definitions canon
  2. Weak observation: Q-Ledger
Canonical foundation (01)

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
Observation ledger (02)

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed and recorded as weak, dated, contextualized trace evidence.
Does not prove
Actor identity, system obedience, or strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
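
To keep the "weak observation" distinction operational, a ledger entry can be phrased so that it never upgrades itself into attestation. The Python sketch below uses illustrative field names; the actual q-ledger.json schema may differ.

```python
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    # Illustrative fields; the real q-ledger.json schema may differ.
    observed_at: str   # date of the inferred session
    system: str        # surface on which the behavior was observed
    evidence: str      # what the trace actually shows

def describe(entry: LedgerEntry) -> str:
    """Phrase an entry as weak, dated observation, never as attestation."""
    return (f"On {entry.observed_at}, a session consistent with {entry.system} "
            f"was observed: {entry.evidence}. This is a dated trace, not proof "
            f"of actor identity or system obedience.")
```

Baking the disclaimer into the phrasing keeps downstream quotation from silently converting descriptive observation into strong proof.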

AI citation readiness checklist

This checklist reviews whether a page, source or corpus is ready to be retrieved, cited and governed in AI-mediated answers. It should be used before measuring citation frequency, because frequency counts without source roles can mislead the diagnosis.

1. Accessibility

  • The canonical URL is stable.
  • The useful content is not hidden behind unnecessary JavaScript, tabs, blocked rendering or inaccessible preview rules.
  • Crawl restrictions, bot rules and preview controls do not contradict the citation objective.
  • The page has a clear canonical relationship to adjacent pages.
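
The robots-rules check above can be partly automated with the standard library. A minimal sketch, using an invented bot name and rules:

```python
from urllib.robotparser import RobotFileParser

# Invented robots.txt content; real bot names and rules will differ.
ROBOTS_TXT = """\
User-agent: *
Allow: /

User-agent: ExampleAnswerBot
Disallow: /canon.md
"""

def bot_can_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check whether a given crawler may fetch a URL under these rules."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

A contradiction with the citation objective appears when the page is meant to be cited in answers while the answering system's own crawler is disallowed from the governing surface.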

2. Query and fan-out coverage

  • The page answers the visible query directly.
  • The surrounding cluster covers adjacent subquestions.
  • Internal links connect definitions, service pages, doctrinal pages and proof surfaces.
  • The page can be found through more than one query path.
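
Fan-out coverage can be tracked as a simple mapping from subquestions to the cluster page that answers them. The subquestions and paths below are invented for illustration:

```python
# Invented fan-out map: each subquestion points at the cluster page that
# answers it, or None when no page does yet.
FANOUT = {
    "what is citation readiness": "/framework/checklist",
    "how to audit an observed citation": "/framework/audit",
    "which surface governs identity": None,
}

def coverage_gaps(fanout):
    """List subquestions that no page in the cluster answers."""
    return [question for question, page in fanout.items() if page is None]
```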

3. Extractability

  • The main answer appears early.
  • Headings name the concept precisely.
  • Important passages are self-contained.
  • Tables, lists and definitions preserve scope when extracted.
  • Critical claims do not depend on hidden context or previous paragraphs.
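
Two of these checks lend themselves to quick heuristics: whether the governing claim falls inside the opening extraction window, and whether a passage opens with a dangling pronoun. A rough sketch (the character budget and pronoun list are arbitrary choices, not a standard):

```python
PAGE = (
    "AI citation readiness checklist. This checklist reviews whether a page "
    "is ready to be retrieved and cited in AI-mediated answers. "
    "... long tail of supporting detail ..."
)

def answer_appears_early(text: str, key_phrase: str, budget: int = 300) -> bool:
    """Heuristic: the governing claim should fall inside the opening
    window a retriever is most likely to lift."""
    return key_phrase.lower() in text[:budget].lower()

def passage_is_self_contained(passage: str) -> bool:
    """Rough check: an extractable passage should not open with a pronoun
    whose referent lives in a previous paragraph."""
    first_word = passage.strip().split()[0].lower().strip(".,")
    return first_word not in ("this", "that", "it", "they", "these", "those")
```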

4. Citation support

  • Factual claims are precise enough to be cited.
  • Claims expose their limits, dates, scope or exclusions when relevant.
  • The page distinguishes definition, interpretation, service claim and proof claim.
  • Sources or proof artifacts are linked when the claim requires support.

5. Entity consistency

  • The entity name, category, role, service labels and conceptual vocabulary are stable across pages.
  • Related entities do not blur the entity's perimeter.
  • Product, service and doctrine names are used consistently.
  • The page avoids ambiguous shorthand that could be lifted out of context.
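
Label stability across pages can be spot-checked by collecting the label each page uses for the entity. The page paths and labels below are hypothetical:

```python
# Hypothetical pages and the label each one uses for the same entity.
PAGES = {
    "/canon.md": "Example Org, reference publisher",
    "/services": "Example Org, reference publisher",
    "/blog/launch-post": "ExampleOrg consulting",  # drifted label
}

def entity_labels(pages: dict) -> set:
    """More than one distinct label across pages signals perimeter drift."""
    return set(pages.values())
```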

6. Source hierarchy

  • The strongest source for each claim is identifiable.
  • Derivative pages do not appear to govern canonical statements.
  • Market-facing pages route back to stricter definitions or doctrine.
  • The page makes it possible to distinguish governing, illustrative and contextual sources.

7. Citation role testing

For each observed citation, classify the role:

  • Governing: the cited source legitimately constrains the claim.
  • Supporting: the cited source supports the claim but does not fully govern it.
  • Illustrative: the cited source gives context or example only.
  • Ornamental: the citation is displayed but weakly connected to the answer.
  • Contradictory: the cited source conflicts with the answer.
  • Outdated: the cited source was valid in another time frame.
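
The role taxonomy can be encoded directly, which keeps audits from silently treating ornamental citations as support. A sketch:

```python
from enum import Enum

class CitationRole(Enum):
    GOVERNING = "legitimately constrains the claim"
    SUPPORTING = "supports the claim without fully governing it"
    ILLUSTRATIVE = "gives context or example only"
    ORNAMENTAL = "displayed but weakly connected to the answer"
    CONTRADICTORY = "conflicts with the answer"
    OUTDATED = "was valid in another time frame"

def is_load_bearing(role: CitationRole) -> bool:
    """Only governing and supporting citations actually carry the claim;
    every other role flags a diagnostic problem or mere decoration."""
    return role in (CitationRole.GOVERNING, CitationRole.SUPPORTING)
```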

8. Audit record

A citation test should preserve the system, date, language, prompt, answer, cited URLs, cited passage when visible, source role, competing sources and correction hypothesis. Without that record, the audit cannot distinguish observation from inference.
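
The record can be a plain structure whose completeness is checkable before any inference is drawn. The field names below mirror the list above but are illustrative, not a published record format:

```python
from dataclasses import dataclass, field

@dataclass
class CitationAuditRecord:
    # Illustrative field names; not a published record format.
    system: str
    date: str
    language: str
    prompt: str
    answer: str
    cited_urls: list
    cited_passage: str = ""       # only when visible
    source_role: str = ""         # one of the roles from section 7
    competing_sources: list = field(default_factory=list)
    correction_hypothesis: str = ""

    def supports_audit(self) -> bool:
        """Minimum needed to distinguish observation from inference."""
        return bool(self.system and self.date and self.answer
                    and self.cited_urls and self.source_role)
```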

9. Advanced modules

For mature audits, add these modules:

  • Citation quality: does the citation support the claim, carry the right role and remain legitimate?
  • Preview governance: do snippet and preview rules expose the governing passage?
  • Source substitution: is a weaker source replacing the canonical source?
  • Language and geography: does the answer change source selection across language or market context?
  • Structured data alignment: does schema reinforce the visible claim instead of contradicting it?
  • Stability: does the citation pattern persist across prompts, systems and time?

Practical output

The checklist should produce a prioritized correction list: access fixes, retrieval gaps, page structure changes, missing passages, weak claims, source hierarchy issues and monitoring requirements.
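
One way to keep the output prioritized is to sort findings by the order the categories are listed in, access problems first and monitoring requirements last. A sketch using invented finding records:

```python
# Correction categories in the priority order the checklist implies.
PRIORITY = ["access fix", "retrieval gap", "structure change",
            "missing passage", "weak claim", "hierarchy issue", "monitoring"]

def prioritize(findings):
    """Sort audit findings (dicts with a 'category' key) into that order."""
    return sorted(findings, key=lambda f: PRIORITY.index(f["category"]))
```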