Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. The order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
- 03 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- That a system already consults it, or that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove
- Actor identity, system obedience, or strong proof of activation.
- Use when
- To distinguish descriptive observation from strong attestation.
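As a concrete illustration, consuming such a ledger might look like the sketch below. The endpoint path comes from this page, but the payload fields (`sessions`, `observed_at`, `context`) are assumptions for illustration, not the published schema:

```python
import json
import urllib.request

def load_ledger(base_url):
    """Fetch the public ledger JSON from the well-known path."""
    with urllib.request.urlopen(base_url + "/.well-known/q-ledger.json") as resp:
        return json.load(resp)

def weak_traces(ledger):
    """Read each entry as weak, dated, contextualized trace evidence.

    Field names are assumed for illustration; nothing returned here
    proves actor identity, system obedience, or activation.
    """
    return [
        (entry.get("observed_at"), entry.get("context"))
        for entry in ledger.get("sessions", [])
    ]
```

The point of the sketch is the framing in `weak_traces`: the ledger yields dated observations to compare and challenge, never attestations.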
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable
- That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove
- The truth of a representation, the fidelity of an output, or real steering on its own.
- Use when
- To compare windows, prioritize an audit, and document a before/after.
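The before/after comparison can be sketched as a per-indicator delta. The snapshot structure assumed here (a flat mapping of indicator name to numeric value) is an illustration, not the q-metrics format:

```python
def snapshot_delta(before, after):
    """Compare two metric snapshots and report per-indicator change.

    Returns indicator -> (before, after, delta) for every indicator
    present in either snapshot; a value missing on one side is None
    and yields no delta. The result is descriptive only: a moving
    number does not by itself prove fidelity or steering.
    """
    deltas = {}
    for key in sorted(set(before) | set(after)):
        b, a = before.get(key), after.get(key)
        change = (a - b) if (a is not None and b is not None) else None
        deltas[key] = (b, a, change)
    return deltas
```

Keeping missing values explicit (rather than defaulting them to zero) preserves the distinction between "not observed" and "observed at zero" across windows.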
AI brand representation audit
AI brand representation audit is a structured audit of how AI systems reconstruct a brand’s identity, role, services, scope, limits, comparisons, exclusions and authority across generated answers.
This page is the canonical definition of AI brand representation audit on Gautier Dorval. It is a phase 13 market-bridge term: it captures a phrase people actually search for, then routes the work toward stricter concepts such as proof of fidelity, answer legitimacy, source hierarchy, canon-output gap and interpretive observability.
Short definition
AI brand representation audit names the market bridge between brand-visibility language and the stricter representation-gap doctrine.
It is useful as a search-facing and client-facing label. It is not sufficient as a governing doctrine. The audit label must remain subordinate to the canonical concepts that determine whether a visible, cited or recommended answer is actually faithful, bounded and defensible.
Search intent captured
This definition intentionally captures queries such as:
- AI brand representation audit
- AI brand perception audit
- brand representation in ChatGPT
- AI brand monitoring
The goal is not to chase these labels as isolated services. The goal is to make them legible entry points that lead to the correct diagnostic layer.
Common symptoms
- The brand is visible but reconstructed as a different type of actor.
- Services, capabilities or limits are extended beyond the canon.
- Comparisons preserve the wrong differentiators.
- The brand is cited but not understood in its declared perimeter.
What it is not
AI brand representation audit is not a promise of ranking, citation, recommendation, traffic, ChatGPT inclusion, model compliance or future stability. It also does not replace AI search monitoring or AI answer audit. Monitoring observes recurring outputs. An answer audit reviews specific answers. A market-bridge audit qualifies the business-facing symptom and routes it toward the right proof, canon and correction mechanisms.
Governance implication
The audit should produce a route, not only a score. A useful output identifies which canonical surface should govern the answer, which sources are cited or omitted, which claim class is unstable, whether the problem is visibility, citability, recommendability, brand representation, answer legitimacy or semantic architecture, and what correction path is realistic.
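One way to make "a route, not only a score" concrete is a structured record. Every field below is a hypothetical shape chosen for illustration, not a defined output format:

```python
from dataclasses import dataclass, field

@dataclass
class AuditRoute:
    """Structured audit output: routes the symptom instead of scoring it.

    Hypothetical shape; the fields mirror the questions a useful
    audit must answer.
    """
    governing_surface: str              # canonical surface that should govern the answer
    cited_sources: list = field(default_factory=list)
    omitted_sources: list = field(default_factory=list)
    unstable_claim_class: str = ""      # e.g. scope, services, comparisons
    problem_layer: str = ""             # visibility, citability, recommendability, ...
    correction_path: str = ""           # the realistic correction step, if any
```

A record like this keeps the score, if one exists, subordinate to the route: which surface governs, what is unstable, and what correction is realistic.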
Service-facing surface
For the service-facing page, see AI brand representation audit.
Related concepts
- LLM visibility
- Citability
- Recommendability
- AI search monitoring
- AI answer audit
- Proof of fidelity
- Canon-output gap
- Answer legitimacy
Phase 13 rule
Do not infer service readiness, commercial availability, pricing, ranking potential, citation probability, recommendation probability or correction success from the audit label alone. The label is an entry point. The governing work remains canon, evidence, source hierarchy, response legitimacy and correction discipline.
Reading guidance
Use AI brand representation audit as a market-facing entry point, not as a ranking promise. The term translates a visible business symptom into an auditable interpretive question: what is being cited, recommended, ignored, substituted, or misrepresented, and under which evidence conditions? A page or audit using this term should connect user-facing visibility to source hierarchy, canon-output gaps, proof of fidelity, and correction discipline.
What to verify
- Whether the observed answer names the right entity, service, source, or perimeter.
- Whether citation, visibility, or recommendation is supported by a reconstructable source path.
- Whether the output confuses market presence with interpretive authority.
- Whether the audit can separate a transient model answer from a stable representation pattern.
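The last check, separating a transient answer from a stable representation pattern, can be sketched as a recurrence threshold over repeated samples. The sampling shape and the 0.6 default are illustrative assumptions:

```python
def is_stable_pattern(observations, symptom, min_rate=0.6):
    """Return True when a symptom recurs often enough across sampled
    answers to count as a representation pattern, not a one-off output.

    observations: one set of symptom labels per sampled answer.
    symptom: the label being checked (e.g. "wrong_perimeter").
    min_rate: recurrence threshold; 0.6 is an illustrative default.
    """
    if not observations:
        return False
    hits = sum(1 for labels in observations if symptom in labels)
    return hits / len(observations) >= min_rate
```

The design choice is that a single occurrence never qualifies: stability is a claim about recurrence across samples, which is exactly what distinguishes a pattern worth correcting from model noise.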
Practical boundary
This concept should not be used to imply guaranteed inclusion in ChatGPT, Google AI Overviews, Perplexity, Claude, Gemini, or any other external system. It is a diagnostic surface. Its value comes from making the symptom readable, comparable, and correctable, not from promising that a third-party model will adopt the preferred representation.