Engagement decision
How to recognize when this axis should be engaged
Use this page as a decision aid. The objective is not only to understand the concept, but to identify the symptoms, framing errors, and use cases, and to know which surfaces to open in order to correct the right problem.
Typical symptoms
- AI systems cite competitors, directories or derivative sources instead of canonical pages.
- Pages rank in search but are absent, weakly cited or misused in AI answers.
- Citations appear but do not support the generated claim.
- Old URLs, outdated claims or secondary sources govern the answer.
Frequent framing errors
- Treating citation count as proof of answer legitimacy.
- Assuming that SEO ranking alone guarantees AI citation quality.
- Treating llms.txt as a ranking factor instead of a discovery route.
- Optimizing snippets without defining source hierarchy.
Use cases
- Preparing a site for AI-mediated answer systems.
- Diagnosing why strong pages are not cited or are cited weakly.
- Separating visibility, citation, source support and fidelity.
- Building a correction plan for canonical source reinforcement.
What gets corrected concretely
- Reposition answer-ready passages near the top of strategic pages.
- Create self-contained passages for important claims.
- Strengthen internal routes between service pages, definitions and proof surfaces.
- Classify citations by role, source strength and support quality.
Relevant machine-first artifacts
These surfaces bound the problem before detailed correction begins.
The governance files to open first and the useful evidence surfaces are detailed in the sections below; together they connect diagnosis, observation, fidelity, and audit.
Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs: public identity, roles, and attributes that must not drift.
- Bounds: extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs: editorial framing, temporality, and the readability of explicit changes.
- Bounds: silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs: access order across surfaces and initial precedence.
- Bounds: free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
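The published schema is not reproduced here; the sketch below is one hypothetical shape such an inventory could take, with every field name an illustrative assumption.

```json
{
  "entity": "example.org",
  "canonical_entrypoint": "/.well-known/ai-governance.json",
  "surfaces": [
    { "path": "/canon.md", "role": "definitions-canon", "precedence": 1 },
    { "path": "/site-context.md", "role": "site-context", "precedence": 2 },
    { "path": "/.well-known/q-ledger.json", "role": "evidence-ledger", "precedence": 3 }
  ],
  "note": "Publishes access order and initial precedence; does not force execution."
}
```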
Complementary artifacts
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
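A comparable hedged sketch of what a neutral entrypoint could declare; again, the field names are assumptions rather than the published format.

```json
{
  "governance_map": ["/ai-manifest.json", "/llms.txt"],
  "precedence_chain": ["/canon.md", "/site-context.md", "/ai-manifest.json"],
  "read_first": ["/canon.md"],
  "note": "Declares what to read and in which order; obedience is not guaranteed."
}
```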
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: descriptive observation must be distinguished from strong attestation.
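To make "weak, dated, contextualized trace" concrete, one hypothetical ledger entry could look like the sketch below; the structure is assumed, not quoted from the published ledger.

```json
{
  "session": "2025-01-17-0042",
  "observed_at": "2025-01-17T14:32:00Z",
  "inferred_agent": "unattributed-ai-crawler",
  "sequence": ["/.well-known/ai-governance.json", "/canon.md"],
  "strength": "weak",
  "note": "Observation only: no actor identity, no proof of activation."
}
```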
AI citation readiness audit
An AI citation readiness audit evaluates whether a site, corpus, page or entity is accessible, retrievable, extractable, citable and governable across AI-mediated answer systems.
It is not a promise of citation. It is a diagnostic method for identifying why a source can or cannot be selected, cited and used faithfully.
What the audit measures
The audit separates eight dimensions that are often confused in generic AI visibility reports.
| Dimension | Diagnostic question |
|---|---|
| Accessibility | Can relevant systems access the useful page, passage and source path? |
| Retrieval | Does the source appear across the main query and probable fan-out queries? |
| Extractability | Can the key passage be lifted without losing meaning or scope? |
| Citability | Is the claim specific, supported, current and safe to reuse? |
| Citation role | Is the source governing, illustrative, ornamental, outdated or contradictory? |
| Source hierarchy | Is the cited source the strongest legitimate source for the claim? |
| Fidelity | Does the generated answer preserve the canonical perimeter? |
| Stability | Does the pattern persist across prompts, systems, languages and time? |
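One hedged way to record the eight dimensions for a single audited page; the value scale, page path, and field names below are illustrative, not a normative scoring schema.

```json
{
  "page": "/services/ai-citation-audit",
  "accessibility": "pass",
  "retrieval": "partial",
  "extractability": "fail",
  "citability": "pass",
  "citation_role": "illustrative",
  "source_hierarchy": "substituted",
  "fidelity": "partial",
  "stability": "untested",
  "note": "Key passage sits below the fold; a directory page governs the answer."
}
```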
When this audit is useful
Use this audit when a site appears to have strong SEO fundamentals but remains weak in AI citations, when answer engines cite competitors or directories instead of canonical pages, when citations do not actually support the answer, or when the organization cannot distinguish a visibility problem from a fidelity problem.
The audit is also useful before publishing a major content cluster. It identifies whether the pages are structured as human-readable articles only, or whether they can also function as machine-first evidence surfaces.
What the output should include
A useful audit should not stop at screenshots. It should preserve prompts, systems, dates, languages, visible citations, implied sources, competing sources, page sections, missing claims, substituted authorities and observed answer variants.
The output should produce (a minimal observation record is sketched after this list):
- a query and fan-out map;
- an inventory of cited and implied sources;
- a classification of each citation role;
- a list of pages with weak extractability;
- a list of missing or misplaced answer-ready passages;
- a source hierarchy gap analysis;
- a correction plan for pages, definitions, service routes and proof surfaces.
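A minimal sketch of one observation record that could feed these deliverables, assuming hypothetical field names, example URLs, and an unnamed answer system:

```json
{
  "prompt": "what is an ai citation readiness audit",
  "system": "answer-engine-x",
  "date": "2025-01-17",
  "language": "en",
  "visible_citations": [
    { "url": "https://directory.example/ai-audits", "role": "substituted" }
  ],
  "implied_sources": ["https://example.org/definitions/ai-citation-readiness"],
  "expected_source": "https://example.org/services/ai-citation-audit",
  "variant": "answer cites a directory instead of the canonical service page"
}
```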
Boundaries and non-promises
This audit does not force external models to cite a source. It does not guarantee inclusion in AI Overviews, ChatGPT Search, Gemini, Perplexity, Bing or any other system. It does not replace AI citation tracking or proof of fidelity.
Its value is that it turns a vague citation problem into a concrete correction sequence: access, retrieval, extraction, source role, source hierarchy and answer legitimacy.
Additional audit modules
The readiness audit can be decomposed into specialized modules:
| Module | Purpose |
|---|---|
| Citation accessibility | Test access policy, rendering, preview control and passage visibility |
| Citation quality | Classify citation role, support strength, legitimacy and fidelity |
| Source substitution | Identify when a weaker source replaces the legitimate source |
| Stability and freshness | Separate current evidence requirements from persistent citation patterns |
| Language and geography | Test whether query language, market or jurisdiction changes the governing source |
| Structured data alignment | Verify that schema, visible content and source hierarchy do not conflict (sketch below) |
| AI-ready content blocks | Rebuild weak pages around extractable evidence units |
These modules can be used separately or as part of a complete citation readiness diagnostic.
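For the structured data alignment module, the check is that declared schema, visible content, and source hierarchy say the same thing. A hedged JSON-LD fragment for an aligned claim, with the organization name and description as placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "AI citation readiness audit",
  "provider": {
    "@type": "Organization",
    "name": "Example Org"
  },
  "description": "Diagnostic audit of accessibility, retrieval, extractability, citability and fidelity across AI-mediated answer systems."
}
```

If the visible page claims something the schema does not, or vice versa, the module flags the conflict.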
Related routes
- AI citation readiness hub
- AI citation readiness definition
- AI citation tracking audit
- Citability audit
- AI answer audit
- AI citation readiness checklist
Measurement extensions
For repeatable scoring, pair this audit with the AI citation audit scoring matrix. For query decomposition, use the fan-out query map. These resources separate citation frequency from source role, support quality, fidelity and stability.
Technical routes to include in the audit
The audit should explicitly test preview control, AI-ready structure, machine-first routing and citation fidelity. These concepts prevent the audit from stopping at generic “AI-friendly content” recommendations.
For implementation, read Robots, AI crawlers and citation accessibility and How to structure a page for AI citations without weakening governance.
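A hedged sketch of the technical checks one page record in the audit could carry. The robots directives and crawler names (GPTBot, ClaudeBot, PerplexityBot) follow common published conventions; every other field is an illustrative assumption.

```json
{
  "page": "/definitions/ai-citation-readiness",
  "robots_meta": "max-snippet:-1, max-image-preview:large",
  "ai_crawlers_allowed": ["GPTBot", "ClaudeBot", "PerplexityBot"],
  "answer_ready_block": true,
  "machine_routes_in": ["/ai-manifest.json", "/llms.txt"],
  "fidelity_reference": "/canon.md"
}
```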