Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs: Public identity, roles, and attributes that must not drift.
- Bounds: Extrapolations, entity collisions, and improper recategorization.
Does not guarantee: A canonical surface reduces ambiguity; it does not, by itself, guarantee that downstream systems reproduce it faithfully.
Site context
/site-context.md
Notice that describes the nature of the site, its reference function, and its non-transactional limits.
- Governs: Editorial framing, temporality, and the readability of explicit changes.
- Bounds: Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs: Access order across surfaces and initial precedence.
- Bounds: Unconstrained readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
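As an illustration only, a manifest of this kind could be modeled as below. Every field name in the sketch (version, entrypoint, surfaces, precedence) is an assumption chosen for readability, not the published schema of /ai-manifest.json.

```typescript
// Hypothetical shape for /ai-manifest.json; all field names are
// illustrative assumptions, not the published schema.
interface AiManifest {
  version: string;            // manifest version, so drift stays auditable
  entrypoint: string;         // canonical entrypoint to read first
  surfaces: Array<{
    path: string;             // published surface, e.g. "/canon.md"
    role: "canon" | "context" | "ledger" | "metrics";
    precedence: number;       // lower numbers are read first
  }>;
}

// Example instance reflecting the reading order described on this page.
const manifest: AiManifest = {
  version: "1.0.0",
  entrypoint: "/.well-known/ai-governance.json",
  surfaces: [
    { path: "/canon.md", role: "canon", precedence: 1 },
    { path: "/site-context.md", role: "context", precedence: 2 },
    { path: "/.well-known/q-ledger.json", role: "ledger", precedence: 3 },
    { path: "/.well-known/q-metrics.json", role: "metrics", precedence: 4 },
  ],
};
```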
Complementary artifacts
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
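To make the reading-order contract concrete, here is a minimal sketch of a client that consumes such an entrypoint. The GovernanceEntrypoint shape and the readGovernance helper are hypothetical; the real /.well-known/ai-governance.json defines its own schema.

```typescript
// Hypothetical shape for /.well-known/ai-governance.json; every field
// name here is an assumption for the sketch.
interface GovernanceEntrypoint {
  canon: string;        // highest-precedence surface
  precedence: string[]; // ordered chain of surfaces to read
  updated: string;      // ISO 8601 date of the last explicit change
}

// Minimal sketch of a client that honors the declared reading order.
// Reading in the published order is the whole point: never reorder or skip.
async function readGovernance(origin: string): Promise<string[]> {
  const res = await fetch(`${origin}/.well-known/ai-governance.json`);
  const entry = (await res.json()) as GovernanceEntrypoint;
  return entry.precedence.map((path) => `${origin}${path}`);
}
```

As the page notes, publishing this order does not force any system to follow it; the sketch only shows what honoring it would look like.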
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructable. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
- 03 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Binding reference base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: The reference corpus against which fidelity can be evaluated.
- Does not prove: That a system already consults it, or that an observed response stays faithful to it.
- Use when: Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: Actor identity, system obedience, or activation in any strong sense.
- Use when: You need to distinguish descriptive observation from strong attestation.
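For readability, a single ledger entry could look like the sketch below. The field names (observedAt, surface, signal, context, strength) are assumptions, not the published schema; the fixed "weak" strength encodes the point that the ledger records observation, not attestation.

```typescript
// Hypothetical shape for one entry in /.well-known/q-ledger.json; all
// field names are illustrative assumptions, not the published schema.
interface LedgerEntry {
  observedAt: string;   // ISO 8601 timestamp of the observation
  surface: string;      // surface whose consultation was inferred
  signal: string;       // what was observed, described neutrally
  context?: string;     // conditions under which it was observed
  strength: "weak";     // fixed on purpose: observation, not attestation
}

// Example entry: a dated, contextualized trace, nothing stronger.
const entry: LedgerEntry = {
  observedAt: "2025-01-15T09:30:00Z",
  surface: "/canon.md",
  signal: "inferred consultation preceding a generated answer",
  context: "session reconstructed from access patterns",
  strength: "weak",
};
```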
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: The truth of a representation, the fidelity of an output, or, on its own, real steering.
- Use when: You need to compare windows, prioritize an audit, or document a before/after.
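As a sketch of how such a derived layer is meant to be used, the helper below compares two hypothetical snapshots. The indicator names are invented for the example; only the comparison discipline matters, not the numbers themselves.

```typescript
// Sketch of a before/after comparison over two hypothetical q-metrics
// snapshots. Metrics here are descriptive indicators, not proof.
type Snapshot = Record<string, number>; // indicator name -> value

function diffSnapshots(before: Snapshot, after: Snapshot): Record<string, number> {
  const delta: Record<string, number> = {};
  for (const key of new Set([...Object.keys(before), ...Object.keys(after)])) {
    delta[key] = (after[key] ?? 0) - (before[key] ?? 0);
  }
  return delta;
}

// Example: a descriptive indicator moved between two windows.
diffSnapshots({ mentions: 3, citations: 1 }, { mentions: 5, citations: 1 });
// -> { mentions: 2, citations: 0 }
```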
Confirm that visibility is the right layer
Before selecting a market-facing audit, use the Start here page to separate visibility from representation, citation, recommendation, answer legitimacy and source hierarchy. This prevents a visibility audit from being asked to solve a proof, authority or correction problem.
AI visibility audits
AI visibility audits are the market-facing entry point for questions about whether an entity, brand, service, product, doctrine or source is visible in AI-generated answers. They cover practical labels such as LLM visibility, ChatGPT visibility, AI citations, GEO, AI search optimization, citability, recommendability, answer quality and AI brand representation.
This page does not treat those labels as interchangeable. A brand can be visible but misrepresented. A source can be cited but not authoritative. A company can appear in an answer without being recommended. A page can be indexed and still fail to govern the wording used by a model. The role of this hub is to help route a broad visibility question toward the correct diagnostic layer.
Use this page when the first question is simple, but the underlying problem is probably more complex:
- “Are we visible in ChatGPT or other AI answer systems?”
- “Why are competitors mentioned while we are absent?”
- “Why are we cited, but described incorrectly?”
- “Why does the answer use the wrong category, market or comparison set?”
- “Can AI systems recommend us responsibly?”
- “Do GEO metrics prove that the representation is correct?”
These questions are legitimate. They become risky when they are answered only through dashboards, screenshots or mention counts.
What an AI visibility audit actually tests
An AI visibility audit should not stop at presence or absence. Visibility is only the outer surface of the problem. A serious audit tests whether the visible answer is supported, stable, source-aware and governable.
A complete audit usually examines six layers.
First, it observes presence: whether the entity appears in answers, summaries, recommendations, comparisons or citations. This is the layer most people mean when they ask about AI visibility.
Second, it tests framing: how the entity is categorized, described, compared and positioned. This matters because a visible entity can still be placed in the wrong market, role, risk category or competitive set.
Third, it checks source behavior: which sources are cited, implied, retrieved, ignored or silently used to structure the answer. Citation is not proof of authority, but it is an important clue.
Fourth, it evaluates answer legitimacy: whether the answer stays within the evidence, respects the source hierarchy and avoids unauthorized synthesis. This is where answer legitimacy, source hierarchy and proof of fidelity become more important than visibility scores.
Fifth, it measures stability: whether the representation survives variations in prompt, date, language, system, model and context. A one-time answer is not the same thing as durable visibility.
Sixth, it defines correction priorities: which pages, definitions, service pages, source mappings, external references or governance artifacts should be reinforced before another measurement cycle.
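A sketch of how these six layers can be carried through a report: each finding is tagged with the layer it belongs to, so presence claims and legitimacy claims are never mixed. The type names below are assumptions for the sketch, not a published format.

```typescript
// The six audit layers from the prose above, as a typed checklist.
type AuditLayer =
  | "presence"        // does the entity appear at all?
  | "framing"         // category, description, comparison set
  | "source-behavior" // cited, implied, retrieved, or ignored sources
  | "legitimacy"      // does the answer stay within the evidence?
  | "stability"       // survives prompt, date, language, model changes
  | "correction";     // what to reinforce before the next cycle

// A finding is always tied to exactly one layer.
interface LayerFinding {
  layer: AuditLayer;
  observation: string;  // what was seen, stated descriptively
  evidenceRef?: string; // pointer into the evidence record, if any
}
```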
The main audit paths
Different symptoms require different audit paths. Treating every symptom as a generic visibility problem produces vague reports and weak correction plans.
Use an LLM visibility audit when the question is whether the entity appears across language models, prompts, systems or answer formats. The goal is to understand visibility patterns, not merely to count mentions.
Use an AI visibility audit when the question covers multiple visibility surfaces: AI search, answer engines, generated summaries, recommendations and comparison answers. This is the broadest service route.
Use an AI answer audit when the issue is a specific answer that may be wrong, misleading, overconfident, unsupported or impossible to defend. This route is closer to canon-output gap and interpretive fidelity than to marketing visibility.
Use an AI brand representation audit when the entity appears, but its meaning is distorted. Typical symptoms include wrong category assignment, weak differentiation, contamination by competitors, confusion with adjacent entities or reduction to an old positioning.
Use an AI citation tracking audit when the central issue is citation behavior: who is cited, what is cited, whether citations support the answer and whether the cited sources are canonical, derivative, outdated or merely convenient.
Use a Citability audit when the question is whether a source is structured, authoritative and clear enough to be cited. Citability is not the same as ranking. It depends on clarity, authority, source hierarchy, topical fit and resistance to ambiguity.
Use a Recommendability audit when the issue is whether an entity can be recommended responsibly. A recommendation requires more than visibility. It requires a defensible match between user need, source evidence, category, constraints and risk level.
Use a Generative engine optimization audit or an AI search optimization audit when the market language is GEO, AI SEO or AI search optimization. These audits are useful as entry points, but they should be routed toward evidence, canon, entity structure and answer quality rather than treated as score-chasing exercises.
Use a Brand visibility in ChatGPT audit when the question is specifically framed around ChatGPT. The diagnostic logic remains broader: a ChatGPT-specific symptom still has to be interpreted through source behavior, representation, answer legitimacy and correction discipline.
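The routing logic above can be condensed into a simple symptom-to-path table, sketched below. The symptom labels are shorthand invented for the example; the audit names match the service routes listed at the end of this page.

```typescript
// Illustrative routing table from symptom to audit path.
const auditRoutes: Record<string, string> = {
  "absent across models and prompts": "LLM visibility audit",
  "absent across answer surfaces": "AI visibility audit",
  "one specific answer is wrong": "AI answer audit",
  "visible but meaning distorted": "AI brand representation audit",
  "citations behave unexpectedly": "AI citation tracking audit",
  "source is hard to cite": "Citability audit",
  "cannot be recommended responsibly": "Recommendability audit",
  "framed as GEO or AI SEO": "Generative engine optimization audit",
  "framed around ChatGPT only": "Brand visibility in ChatGPT audit",
};
```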
Why visibility alone is insufficient
A visibility-only audit often tells an organization that it appears, does not appear, appears less than competitors or appears differently across prompts. That information can be useful, but it rarely explains the cause.
The real question is not simply whether the entity is present. The real question is what governs the answer when the entity becomes present.
A visibility score can hide several distinct problems:
- the entity is visible, but the answer uses an outdated source;
- the entity is cited, but the cited passage does not support the claim;
- the entity is mentioned, but placed in the wrong category;
- the entity is absent because stronger external sources dominate the framing;
- the model recommends a competitor because the comparison set is contaminated;
- the answer is fluent, but not defensible under source scrutiny;
- the correction has been published, but old assumptions still persist.
This is why the hub routes broad visibility language toward stricter concepts such as semantic integrity, interpretive observability, citability, recommendability and AI brand representation.
What evidence should be collected
A useful AI visibility audit preserves enough evidence to make the finding reconstructable. Screenshots alone are weak. They may show what happened, but they often fail to show why it happened or whether it can be reproduced.
The evidence layer should include the prompt, system or interface used, date, language, location when relevant, answer text, cited or implied sources, competing entities, visible omissions, category labels, comparison terms and any constraints given to the system. When multiple systems are compared, the audit should also preserve model or product names, answer conditions and observed differences.
The goal is not to pretend that every answer can be perfectly reconstructed. The goal is to make the diagnostic path explicit enough to separate observation from inference, proof from assumption and visibility from legitimacy.
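One way to hold that evidence is a structured record that captures the fields listed above, sketched here. All field names are illustrative assumptions; what matters is that the record makes the observation reconstructable and keeps observation separate from inference.

```typescript
// Sketch of an evidence record mirroring the fields named in the prose.
// Field names are assumptions; reconstructability is the goal.
interface EvidenceRecord {
  prompt: string;
  system: string;              // system or interface used
  model?: string;              // model or product name, when known
  date: string;                // ISO 8601
  language: string;
  location?: string;           // only when relevant
  answerText: string;
  citedSources: string[];      // cited or implied sources
  competingEntities: string[];
  omissions: string[];         // visible omissions
  categoryLabels: string[];
  comparisonTerms: string[];
  constraints?: string;        // any constraints given to the system
}
```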
What the output should produce
A mature audit should produce more than a visibility table. It should produce a map of what is happening and what should be corrected first.
At minimum, the output should identify:
- the primary visibility symptoms;
- the affected prompts, systems, entities and languages;
- the source patterns that appear to govern the answer;
- the gap between visible answer and canonical framing;
- the likely cause of absence, distortion, weak recommendation or unstable citation;
- the pages, definitions, hubs, source mappings or external signals that need reinforcement;
- the limits of the finding and the next observation cycle.
This is where the audit becomes operational. It stops being a screenshot collection and becomes a correction plan.
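As one possible shape for that correction plan, the sketch below mirrors the minimum output list above. Every field name is an assumption made for the example; the structure, not the naming, is the point.

```typescript
// Sketch of a minimal audit output, one field per item in the list above.
interface AuditOutput {
  symptoms: string[];                // primary visibility symptoms
  scope: {
    prompts: string[];
    systems: string[];
    entities: string[];
    languages: string[];
  };
  governingSourcePatterns: string[]; // sources that appear to steer the answer
  canonGap: string;                  // visible answer vs. canonical framing
  likelyCause: string;               // absence, distortion, weak recommendation...
  reinforcements: string[];          // pages, definitions, mappings to strengthen
  limits: string;                    // what the finding does not establish
  nextCycle: string;                 // when and what to observe next
}
```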
Limits and non-promises
This hub does not promise ranking, citation, inclusion in a model, ChatGPT visibility, recommendation, traffic, correction by third-party systems or stable future model behavior.
The purpose of an AI visibility audit is to make a representation problem observable, explainable and correctable. It can identify where the site, the source hierarchy, the canonical surfaces, the service pages, the entity signals or the external references are too weak. It cannot force an external model or search system to adopt a representation.
That distinction is important. The audit does not control the model. It controls the quality of the diagnosis and the discipline of the correction plan.
Service-facing routes
- LLM visibility audit
- AI visibility audit
- AI answer audit
- AI brand representation audit
- AI citation tracking audit
- Citability audit
- Recommendability audit
- Generative engine optimization audit
- AI search optimization audit
- Brand visibility in ChatGPT audit