Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and improper reclassification.
Does not guarantee: A canonical surface reduces ambiguity; on its own, it does not guarantee faithful reproduction downstream.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs
- Editorial framing, temporality, and the readability of explicit changes.
- Bounds
- Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Complementary artifacts
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
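The entrypoint above declares a precedence chain across surfaces. As a minimal sketch of how a reader could recover that order, here is a Python fragment that parses a hypothetical `ai-governance.json` payload and sorts its surfaces by declared precedence. The field names (`canonical_surfaces`, `path`, `precedence`) are illustrative assumptions, not the published schema.

```python
import json

# Hypothetical shape for /.well-known/ai-governance.json -- every field
# name here is an assumption, since the real schema is not shown on this page.
sample_entrypoint = json.loads("""
{
  "canonical_surfaces": [
    {"path": "/ai-manifest.json", "role": "ai-manifest",       "precedence": 3},
    {"path": "/canon.md",         "role": "definitions-canon", "precedence": 1},
    {"path": "/site-context.md",  "role": "site-context",      "precedence": 2}
  ]
}
""")

def reading_order(entrypoint: dict) -> list[str]:
    """Return surface paths sorted by declared precedence (lowest first)."""
    surfaces = entrypoint.get("canonical_surfaces", [])
    return [s["path"] for s in sorted(surfaces, key=lambda s: s["precedence"])]

print(reading_order(sample_entrypoint))
# -> ['/canon.md', '/site-context.md', '/ai-manifest.json']
```

Note that, as the page itself states, publishing this order does not force execution: the sketch recovers the declared sequence, nothing more.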
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
Definitions canon
/canon.md
Enforceable baseline for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- To distinguish descriptive observation from strong attestation.
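The distinction between weak observation and strong attestation can be made concrete. The sketch below filters hypothetical Q-Ledger entries, accepting a trace only when it is dated, contextualized, and explicitly marked weak; the field names (`observed_at`, `context`, `strength`) are illustrative assumptions, not the published ledger schema.

```python
# Hypothetical entries for /.well-known/q-ledger.json -- field names are
# assumptions for illustration, not the real schema.
ledger = [
    {"observed_at": "2024-05-01T10:00:00Z", "context": "inferred session", "strength": "weak"},
    {"observed_at": None, "context": "inferred session", "strength": "weak"},
]

def is_usable_trace(entry: dict) -> bool:
    """A trace counts as weak observation only when it is dated, contextualized,
    and explicitly marked weak. It never proves actor identity or obedience."""
    return (
        bool(entry.get("observed_at"))
        and bool(entry.get("context"))
        and entry.get("strength") == "weak"
    )

print([is_usable_trace(e) for e in ledger])
# -> [True, False]
```

The second entry is rejected because an undated observation cannot be replayed into an auditable timeline.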
SEO visibility, AI citability and interpretive fidelity
SEO visibility, AI citability and interpretive fidelity are related, but they are not the same layer.
SEO visibility asks whether a page can be found. AI citability asks whether a source or passage can be selected as support. Interpretive fidelity asks whether the generated answer preserves the canonical meaning, perimeter and authority of the source.
The mistake is to compress those three layers into a single visibility score. A page can rank without being cited. A page can be cited without governing the answer. A page can govern one claim and still fail to preserve the full perimeter of the entity.
Three layers, three failure modes
| Layer | Core question | Common success signal | Common failure |
|---|---|---|---|
| SEO visibility | Can the page be found? | ranking, impressions, indexing | ranking without source use |
| AI citability | Can the passage support an answer? | displayed citation, source reuse | citation without evidentiary force |
| Interpretive fidelity | Does the answer remain faithful? | correct scope, category and claim boundary | plausible but unauthorized synthesis |
A mature audit does not choose one layer against the others. It sequences them. First, make the source findable. Then make the passage recoverable. Then verify whether the answer remains legitimate.
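The sequenced audit described above can be sketched as a short-circuiting check: each layer is evaluated only after the previous one passes. The input keys (`indexed`, `extractable_passages`, `claims_within_perimeter`) are hypothetical audit inputs, not a real API.

```python
def audit(page: dict) -> str:
    """Sequence the three layers and stop at the first failing one.
    Keys are hypothetical audit inputs for illustration only."""
    if not page["indexed"]:
        return "fail: SEO visibility"        # the source is not findable
    if not page["extractable_passages"]:
        return "fail: AI citability"         # no passage can support an answer
    if not page["claims_within_perimeter"]:
        return "fail: interpretive fidelity" # answer exceeds the source's scope
    return "pass"

print(audit({"indexed": True,
             "extractable_passages": True,
             "claims_within_perimeter": False}))
# -> fail: interpretive fidelity
```

The ordering matters: a fidelity failure is only meaningful once findability and retrievability have been established, which is exactly why the audit sequences the layers rather than averaging them into one score.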
Why SEO visibility remains necessary
Search visibility remains upstream. If a page is inaccessible, absent from the cluster, poorly titled, structurally weak or semantically isolated, it is less likely to become a reliable source candidate.
But SEO visibility is not final authority. A page that wins a search result can still be a weak governing source. A directory can rank for a brand. A review page can rank for a product. A comparison page can rank for a service. None of those rankings automatically makes the page legitimate for the final claim.
Why AI citability changes the writing standard
AI-mediated answers often operate at the passage level. They reward pages that expose clear definitions, direct answers, stable headings, tables, dates, source links and self-contained claims.
This is why AI citation readiness focuses on accessibility, retrievability, extractability, citability and governability. Citability is not only a marketing target. It is a structural condition: the page must contain passages that can be safely selected as evidence.
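As a rough illustration of "self-contained claim", the heuristic below flags passages that open with an unresolved pronoun or lack a dated, sufficiently complete statement. The thresholds and patterns are purely illustrative assumptions, not a real extractability standard.

```python
import re

def looks_extractable(passage: str) -> bool:
    """Rough heuristic sketch: a passage is a better citation candidate when
    it is self-contained -- approximated here as not opening with a dangling
    pronoun, carrying a year, and being long enough to stand alone."""
    starts_with_pronoun = re.match(r"^(It|This|They|These)\b", passage) is not None
    has_date = re.search(r"\b(19|20)\d{2}\b", passage) is not None
    return not starts_with_pronoun and has_date and len(passage.split()) >= 8

print(looks_extractable(
    "The canon was last revised in 2024 and fixes identity and roles."))
# -> True
```

A passage such as "It changed." fails all three checks: selected alone, it carries no subject, no date, and no claim boundary.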
Why interpretive fidelity is the stricter test
Fidelity starts when citation is no longer enough. The audit must ask whether the answer preserves the correct entity category, service perimeter, exclusions, authority hierarchy and proof status.
A cited answer can still fail if the citation is ornamental, if the source is legitimate only for part of the claim, if a stronger source exists, or if the synthesis imports an outdated assumption from another source.
For that reason, the strongest diagnostic route is not “rank better, get cited more”. It is: rank where needed, structure for retrieval, classify citation role, verify source legitimacy, and measure proof of fidelity.
Practical route
Use this hub when a team asks one of three questions:
- “Why do we rank but not appear in AI answers?”
- “Why are we cited but not represented correctly?”
- “Why does an AI answer look sourced but remain indefensible?”
The reading path is: AI visibility audits, AI citation readiness, citation fidelity, source legitimacy and proof of fidelity.