Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus should be read. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restatement on its own.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs
- Editorial framing, temporality, and the readability of explicit changes.
- Bounds
- Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Q-Ledger JSON
/.well-known/q-ledger.json
Machine-first journal of observations, baselines, and versioned gaps.
- Governs
- The description of gaps, drifts, snapshots, and comparisons.
- Bounds
- Confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
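As an illustration only, a ledger entry of this kind might look like the sketch below. Every field name here is a hypothetical assumption; the published /.well-known/q-ledger.json remains the only authoritative structure.

```json
{
  "version": "1.2.0",
  "observations": [
    {
      "id": "obs-0042",
      "observed_at": "2024-11-03T14:20:00Z",
      "system": "example-answer-engine",
      "prompt_family": "brand-definition",
      "baseline": "canon.md#identity",
      "gap": {
        "severity": "moderate",
        "summary": "Answer assigns the entity to an adjacent category."
      }
    }
  ]
}
```

Even in this toy form, the entry records an effect (a dated, contextualized gap against a baseline), not a proof of representation.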
Complementary artifacts (2)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Q-Metrics JSON
/.well-known/q-metrics.json
Descriptive metrics surface for observing gaps, snapshots, and comparisons.
IIP report schema
/iip-report.schema.json
Observation surface that exposes logs, metrics, snapshots, or measurement protocols.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
- 03 Derived measurement: Q-Metrics
- 04 Audit report: IIP report schema
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed, in the form of weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- To distinguish descriptive observation from strong attestation.
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable
- That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove
- Neither the truth of a representation, the fidelity of an output, nor real steering on its own.
- Use when
- To compare windows, prioritize an audit, and document a before/after.
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
- Makes provable
- The minimal shape of a reconstructible and comparable audit report.
- Does not prove
- Neither private weights, internal heuristics, nor the success of a concrete audit.
- Use when
- On pages that discuss audit, probative deliverables, or opposable reports.
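For orientation, a minimal report shape compatible with that description could be sketched as follows. The property names and enum values are illustrative assumptions; the published /iip-report.schema.json is the only authoritative definition.

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Interpretation integrity report (illustrative sketch)",
  "type": "object",
  "required": ["scope", "metrics", "drifts"],
  "properties": {
    "scope": { "type": "string" },
    "metrics": {
      "type": "array",
      "items": { "type": "object" }
    },
    "drifts": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "taxonomy": { "type": "string" },
          "severity": {
            "enum": ["cosmetic", "commercial", "reputational", "legal"]
          }
        }
      }
    }
  }
}
```

A schema of this kind constrains the shape of a report so that two audits can be compared; it says nothing about whether any concrete audit succeeded.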
Complementary probative surfaces (2)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
Citations
/citations.md
Minimal external reference surface used to contextualize some concepts without delegating canonical authority to them.
Site context
/site-context.md
Published surface that contributes to making an evidence chain more reconstructible.
Route the audit before naming it
Use “Start here” when the symptom is ambiguous. The same AI output can involve visibility, retrieval, evidence, entity stability, memory or execution. The audit label should follow the failure layer, not the other way around.
AI search and interpretive audits
AI search and interpretive audits organize the diagnostic work that sits between market-facing AI visibility questions and deeper governance questions. The page exists because organizations rarely begin with a precise doctrinal problem. They begin with symptoms: a brand is absent, a competitor is recommended, a source is cited incorrectly, a generated answer uses the wrong category, or different systems produce incompatible answers.
Those symptoms can look similar from the outside. They are not equivalent. An absence problem, a citation problem, a representation problem, a recommendation problem, a source hierarchy problem and an answer legitimacy problem require different methods. This hub routes the symptom toward the appropriate audit path.
AI search audit vs interpretive audit
An AI search audit focuses on how AI-enabled search surfaces, answer engines and generated summaries expose, cite, rank, summarize, compare or recommend an entity. It is useful when the organization needs to know what appears in market-facing systems and how those appearances vary across prompts, languages, dates and competitors.
An interpretive audit goes deeper. It asks whether the answer is faithful to the canonical corpus, whether the sources are properly ordered, whether the response can be defended and whether a correction plan exists. It is less concerned with the mere fact of visibility and more concerned with the legitimacy of the meaning that becomes visible.
The two audit types should not be separated too rigidly. AI search produces the symptom. Interpretive governance explains whether the symptom is acceptable, risky, correctable or structurally induced.
Start with the symptom
If the problem is presence or absence, start with LLM visibility audit and AI search monitoring. The goal is to observe whether the entity appears across systems and whether the pattern is stable enough to interpret.
If the problem is a specific generated answer, start with AI answer audit. The audit should test whether the answer is supported, whether the cited sources actually justify the claim and whether the response respects answer legitimacy.
If the problem is brand meaning, use AI brand representation audit and representation gap audit. The question is not merely whether the brand appears. The question is whether the model assigns the right category, role, comparison set, differentiation and risk context.
If the problem is source behavior, use AI citation analysis and AI source mapping. These routes separate cited sources, structuring sources, canonical sources, derivative sources, missing sources and sources that create conflict.
If the problem is instability, use comparative audits and drift detection. Instability can appear across systems, prompts, languages, dates, geographies, interfaces or output formats. The aim is to separate normal variation from interpretive drift.
If the problem appears before publication, use pre-launch semantic analysis. This route is useful before launching a site, offer, product, documentation layer, doctrine, public figure page or AI-facing corpus.
If the problem carries legal, reputational, operational or decision risk, use interpretive risk assessment and independent reporting. The question becomes whether the output can be challenged, reconstructed, bounded and corrected.
The audit stack
A complete AI search and interpretive audit usually has seven layers.
1. Observation
Observation records what appeared, disappeared, changed, was cited, was recommended, was omitted or was misframed. It preserves prompts, systems, dates, languages, source windows, answer text and visible variations.
Observation should avoid premature explanation. A screenshot is not yet a diagnosis. It is an event.
2. Comparison
Comparison tests whether the symptom survives changes in prompt wording, language, system, date, user intent and comparison set. This layer prevents overreacting to a single output and helps identify whether the pattern is stable, intermittent or context-dependent.
A strong comparison layer does not merely collect more answers. It asks what changes when the question changes.
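The grouping pass behind the comparison layer can be sketched as a small Python function. The record keys (`intent`, `answer`) are assumptions made for this sketch, not a published interface, and a real comparison layer would also track language, date, and system dimensions separately.

```python
from collections import defaultdict

def stability_report(observations):
    """Group observed answers by question intent: an intent is 'stable'
    when every tested context produced the same answer, 'unstable'
    otherwise. Record keys are hypothetical for this sketch."""
    answers = defaultdict(set)
    for obs in observations:
        answers[obs["intent"]].add(obs["answer"])
    return {intent: "stable" if len(seen) == 1 else "unstable"
            for intent, seen in answers.items()}
```

The point of the sketch is the grouping key: varying the context while holding the intent constant is what turns a pile of answers into a comparison.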
3. Source mapping
Source mapping identifies which sources are cited, which sources are not cited but appear to structure the answer, which canonical sources are missing and which external sources may override the intended framing.
This is where an audit distinguishes citation from authority. A cited source may be weak. A non-cited source may still shape the answer. A canonical source may exist but fail to govern the response.
4. Canon comparison
Canon comparison evaluates the distance between the answer and the canonical corpus. It checks definitions, exclusions, scope, entity identity, service boundaries, risk language and claims that should not be inferred.
This layer connects the symptom to proof of fidelity, canon-output gap and interpretive fidelity.
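A deliberately naive version of that check can be written as term matching against canon-declared inclusions and exclusions. The `must_hold` and `must_not_appear` keys are hypothetical; real canon comparison requires semantic review, not substring matching.

```python
def canon_gaps(answer_text, canon):
    """Toy canon comparison: flag excluded phrasings that appear in the
    answer and required canonical terms that are missing from it.
    `canon` keys are assumptions for this sketch."""
    text = answer_text.lower()
    gaps = []
    for term in canon.get("must_not_appear", []):
        if term.lower() in text:
            gaps.append(("excluded claim present", term))
    for term in canon.get("must_hold", []):
        if term.lower() not in text:
            gaps.append(("canonical term missing", term))
    return gaps
```

Even this toy version illustrates the output the layer needs: a list of named, checkable gaps rather than a general impression that the answer "feels wrong".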
5. Authority ordering
Authority ordering asks which source should win when several plausible sources point in different directions. This is critical when public pages, old articles, external profiles, client materials, documentation, social posts and third-party summaries all compete to frame the same entity.
Without authority ordering, the audit can identify conflict but not resolve it.
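Once a precedence order is declared, resolution itself is mechanical, as in this sketch. The tier names and their ranking are hypothetical: the actual order must come from the governance surfaces (for example canon.md), not from code.

```python
# Hypothetical precedence: lower rank wins. The real order is a
# governance decision, not a property of this function.
PRECEDENCE = {
    "canonical": 0,
    "official_page": 1,
    "documentation": 2,
    "external_profile": 3,
    "third_party_summary": 4,
}

def resolve_conflict(sources):
    """Return the source that should govern the framing:
    highest-precedence tier first, most recent within a tier."""
    return min(sources, key=lambda s: (PRECEDENCE[s["tier"]], -s["year"]))
```

The hard part of authority ordering is agreeing on the `PRECEDENCE` table; applying it is the easy part.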
6. Risk qualification
Risk qualification determines whether the issue is cosmetic, commercial, reputational, legal, operational or procedural. A weak brand description is not the same as a false regulated claim, a misleading recommendation or an answer that could influence a decision.
This layer routes the audit toward interpretive risk, opposability and procedural validity when the stakes are higher.
7. Correction planning
Correction planning turns the diagnosis into an ordered set of actions. It may require strengthening a definition, clarifying a service page, adding source hierarchy, resolving entity collision, improving external references, correcting outdated language, reinforcing a hub or aligning governance artifacts.
The correction plan should state what can be changed directly, what requires external reinforcement and what must be monitored rather than promised.
Common diagnostic errors
The most common error is treating all audits as visibility audits. This produces reports that show presence, absence or mention variation without explaining meaning.
The second error is assuming that citation equals proof. A cited source can be outdated, derivative, partial or inconsistent with the claim it is used to support.
The third error is treating dashboard movement as correction. A metric can improve while the representation remains wrong. A visibility curve can rise while the answer still violates the canon.
The fourth error is ignoring the difference between market entry points and canonical governance terms. GEO, AI visibility and AI search optimization are useful labels. They are not sufficient diagnostic categories on their own.
What a useful audit should deliver
A useful audit should deliver a structured diagnosis that someone else can review. It should not depend only on the evaluator’s intuition.
The output should include:
- observed symptoms and affected systems;
- prompts, dates, languages and comparison conditions;
- cited, missing, structuring and conflicting sources;
- entity, category and competitor framing issues;
- canon-output gaps and unsupported claims;
- risk qualification;
- prioritized correction routes;
- next observation cycle.
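The deliverables above can be mirrored in a single reviewable structure, sketched here as a Python dataclass. The field names are assumptions for illustration, not a published report schema.

```python
from dataclasses import dataclass, field

@dataclass
class AuditReport:
    """Illustrative container for the audit deliverables; field names
    are hypothetical, not a published interface."""
    symptoms: list            # observed symptoms
    systems: list             # affected systems
    conditions: dict          # prompts, dates, languages, comparisons
    sources: dict             # cited / missing / structuring / conflicting
    framing_issues: list      # entity, category, competitor framing
    canon_gaps: list          # canon-output gaps, unsupported claims
    risk: str                 # cosmetic, commercial, reputational, legal...
    corrections: list = field(default_factory=list)  # prioritized routes
    next_cycle: str = ""      # when to observe again
```

Capturing the diagnosis in one explicit structure is what makes it reviewable by someone other than the evaluator.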
The strongest audits make the problem contestable. They allow the organization to say not only that an answer is bad, but why it is bad, what caused the issue and what should be reinforced first.
Relationship with market-facing audit pages
This hub is broader than AI visibility audits. The visibility hub is the market entry point for people searching by labels such as LLM visibility, GEO, citability and recommendability. This page is the routing layer that explains how those labels become evidence, source analysis, canon comparison and correction planning.
Use the visibility hub when the initial concern is discoverability, mention, citation or recommendation. Use this hub when the concern is diagnosis, audit method, response legitimacy or interpretive risk.
Non-promises
These audits do not promise ranking, citation, recommendation, traffic, future model behavior, model inclusion or immediate correction. They produce structured evidence, canonical diagnosis and correction priorities.
Their value is not that they guarantee an outcome. Their value is that they make a problem intelligible, defensible and governable.
Definitions attached to this hub
- LLM visibility audit
- AI answer audit
- AI brand representation audit
- Representation gap audit
- AI citation analysis
- AI source mapping
- Comparative audits
- Drift detection
- Pre-launch semantic analysis
- Interpretive risk assessment
- Independent reporting