
Representation gap audit: canonical definition

Canonical market-bridge definition of representation gap audit: diagnosis of the distance between canonical self-description and AI-mediated reconstruction.

Collection: Definition
Type: Definition
Version: 1.0
Stabilized: 2026-05-09
Published: 2026-05-09
Updated: 2026-05-09

Representation gap audit

Representation gap audit is a market-facing entry point into interpretive governance. It names a service, audit, or diagnostic layer that organizations can recognize before they have adopted the stricter vocabulary of canon, source hierarchy, proof, response legitimacy, and interpretive risk.

This page is the canonical definition of Representation gap audit on Gautier Dorval. It is part of phase 13: service, audit, and market bridge surfaces designed to connect real search demand to the doctrinal architecture built in the definitions, lexicons, frameworks, and machine-first artifacts.


Short definition

A representation gap audit identifies where generated answers diverge from the canonical identity, scope, offer, authority, exclusions, and intended category of an entity. It turns a vague complaint about being misunderstood into a mapped, testable, and correctable gap.

The important point is that the term is useful only while it remains routed: it may circulate as a search query, commercial label, service name, or dashboard category, but it must eventually connect to canonical evidence, authority boundaries, and correction logic.


What it is not

It is not a general content audit. It focuses on the distance between canon and external reconstruction, including the sources and mechanisms that create that distance.

The phase 13 rule is simple: a market label is not yet a governance regime. It becomes governable when the target, corpus, source hierarchy, trace, proof threshold, and correction pathway are explicit.


Common failure modes

  • fixing copy without identifying the structuring source
  • measuring visibility without measuring misrepresentation
  • forgetting exclusions and category boundaries
  • failing to track correction resorption

These failures occur when the organization stays at the level of visibility language instead of moving toward interpretive control. A weak audit sees that something happened. A strong audit explains why it happened, which source governed the result, what level of evidence exists, and which correction can be defended.


How it should be used

Use Representation gap audit as an entry surface, not as a terminal label. The audit or service should begin with the user-facing symptom, then route the case toward the appropriate governing layer:

  1. identify the observed answer, absence, citation, comparison, or recommendation;
  2. declare the canonical target and the expected perimeter;
  3. separate visibility, citability, recommendability, fidelity, authority, and risk;
  4. preserve the prompt, system, date, answer, sources, and interpretation trace;
  5. qualify the gap against the canonical corpus;
  6. recommend a correction that can be tracked over time.

This is why phase 13 connects service pages back to definitions rather than leaving them as marketing pages. The service label opens the file. The doctrine governs the diagnosis.


Governance implication

The governance implication is that service-facing audits must not become black-box opinions. They should produce evidence that can be challenged, repeated, compared, and connected to correction. Their value comes from the transition from symptom to proof.

For this site, Representation gap audit should be read together with the service audits and market entry points, the AI search and interpretive audits hub, and the underlying evidence layer.


Reading guidance

Use representation gap audit as a market-facing entry point, not as a ranking promise. The term translates a visible business symptom into an auditable interpretive question: what is being cited, recommended, ignored, substituted, or misrepresented, and under which evidence conditions? A page or audit using this term should connect user-facing visibility to source hierarchy, canon-output gaps, proof of fidelity, and correction discipline.

What to verify

  • Whether the observed answer names the right entity, service, source, or perimeter.
  • Whether citation, visibility, or recommendation is supported by a reconstructable source path.
  • Whether the output confuses market presence with interpretive authority.
  • Whether the audit can separate a transient model answer from a stable representation pattern.
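The four verification questions above can be expressed as named boolean checks over a single observation, so that an audit reports *which* check failed rather than a pass/fail verdict. This is a minimal sketch; the field names of the observation dictionary are assumptions invented for the example.

```python
# Illustrative sketch: each verification question becomes a named check.
def verify(observation: dict) -> list[str]:
    """Return the names of the verification checks that fail."""
    checks = {
        # Does the answer name the right entity, service, source, or perimeter?
        "names_right_entity": observation.get("entity_matches_canon", False),
        # Is the citation or recommendation backed by a reconstructable source path?
        "reconstructable_source_path": bool(observation.get("source_path")),
        # Is market presence being confused with interpretive authority?
        "authority_not_conflated": not observation.get("presence_as_authority", False),
        # Is this a stable pattern, or a one-off transient model answer?
        "stable_pattern": observation.get("seen_across_runs", 0) >= 2,
    }
    return [name for name, passed in checks.items() if not passed]

obs = {
    "entity_matches_canon": True,
    "source_path": [],            # no source path was preserved
    "presence_as_authority": False,
    "seen_across_runs": 3,        # recurred across repeated queries
}
print(verify(obs))  # ['reconstructable_source_path']
```

Returning the failing check names, rather than a single score, keeps the audit challengeable and repeatable in the sense described under "Governance implication".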

Practical boundary

This concept should not be used to imply guaranteed inclusion in ChatGPT, Google AI Overviews, Perplexity, Claude, Gemini, or any other external system. It is a diagnostic surface. Its value comes from making the symptom readable, comparable, and correctable, not from promising that a third-party model will adopt the preferred representation.