
Representation gap

Public hub that reframes 'visibility in AI' as a gap between the published brand and the brand reconstructed by AI systems.

Type: CollectionPage · Hub

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Canonical AI entrypoint
  3. Public AI manifest
#01 · Canon and identity

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs
Public identity, roles, and attributes that must not drift.
Bounds
Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not, on its own, guarantee faithful reproduction.
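As an illustration only, a canon surface of this kind could be organized along these lines. The headings and wording below are hypothetical, not the actual contents of /canon.md:

```markdown
# Canon

## Identity
- Fixed name, roles, and critical attributes that must not drift under synthesis.

## Negations
- Explicit "does not do / does not offer" statements that bound extrapolation.

## Divergence rules
- What prevails, and what readers should do, when a third-party source conflicts with this file.
```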

#02 · Entrypoint

Canonical AI entrypoint

/.well-known/ai-governance.json

Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.

Governs
Access order across surfaces and initial precedence.
Bounds
Free readings that bypass the canon or the published order.

Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
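As a hedged sketch, a neutral entrypoint of this kind might declare its governance map and precedence chain roughly as follows. Every field name here is an assumption; only the file paths cited on this page are reused:

```json
{
  "governance_map": [
    "/canon.md",
    "/ai-manifest.json",
    "/identity.json"
  ],
  "precedence": ["/canon.md", "/identity.json"],
  "read_first": "/canon.md",
  "note": "Publishes a reading order; it does not force execution or obedience."
}
```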

#03 · Entrypoint

Public AI manifest

/ai-manifest.json

Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.

Governs
Access order across surfaces and initial precedence.
Bounds
Free readings that bypass the canon or the published order.

Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
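To make the inventory idea concrete, a manifest extending the canonical entrypoint could be shaped roughly like the sketch below. The field names are hypothetical; only the paths named on this page are reused:

```json
{
  "extends": "/.well-known/ai-governance.json",
  "surfaces": ["/canon.md", "/identity.json", "/response-legitimacy.md"],
  "registries": [
    "/.well-known/q-ledger.json",
    "/.well-known/q-metrics.json"
  ],
  "modules": []
}
```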

Complementary artifacts (3)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

#04 · Canon and identity

Identity lock

/identity.json

Identity file that bounds critical attributes and reduces biographical or professional collisions.

#05 · Observability

Q-Ledger JSON

/.well-known/q-ledger.json

Machine-first journal of observations, baselines, and versioned gaps.

#06 · Observability

Q-Metrics JSON

/.well-known/q-metrics.json

Descriptive metrics surface for observing gaps, snapshots, and comparisons.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope · Definitions canon
  2. Response authorization · Q-Layer: response legitimacy
  3. Weak observation · Q-Ledger
  4. Derived measurement · Q-Metrics
#01 · Canonical foundation

Definitions canon

/canon.md

Binding reference base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
#02 · Legitimacy layer

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as admissible.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
#03 · Observation ledger

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed as weak, dated, contextualized trace evidence.
Does not prove
Neither actor identity, system obedience, nor strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
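As a purely illustrative sketch, a single weak-observation entry in such a ledger could carry the dated, contextualized shape below. All field names and values are hypothetical, not the actual /.well-known/q-ledger.json layout:

```json
{
  "entries": [
    {
      "observed_at": "2025-01-15T10:00:00Z",
      "context": "public session, unauthenticated",
      "observation": "canon surface appears consulted before answering",
      "strength": "weak",
      "baseline": "canon.md (versioned)",
      "gap": "role attribution extended beyond published scope"
    }
  ]
}
```

The "strength" marker matters: it keeps a descriptive observation from being read as strong attestation of system obedience.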
#04 · Descriptive metrics

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Makes provable
That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
Does not prove
Neither the truth of a representation, the fidelity of an output, nor real steering on its own.
Use when
To compare windows, prioritize an audit, and document a before/after.
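The before/after comparison described above can be sketched in Python. The snapshot shape, indicator names, and threshold are assumptions for illustration; the real /.well-known/q-metrics.json layout may differ:

```python
# Hedged sketch: comparing two descriptive metric snapshots.
# Snapshot shape, indicator names, and threshold are hypothetical.

def snapshot_deltas(before: dict, after: dict) -> dict:
    """Per-indicator difference between two snapshots.

    Indicators present in only one snapshot map to None, so a
    missing measurement is never silently treated as zero change.
    """
    keys = set(before) | set(after)
    return {
        k: (after[k] - before[k]) if k in before and k in after else None
        for k in sorted(keys)
    }

def flag_drift(deltas: dict, threshold: float) -> list:
    """Indicators whose absolute change exceeds the threshold.

    A flag here is a descriptive prioritization signal for an
    audit, not proof of unfaithful reconstruction on its own.
    """
    return [k for k, d in deltas.items() if d is not None and abs(d) > threshold]

# Example: two hypothetical observation windows of the same indicators.
w1 = {"scope_match": 0.92, "role_match": 0.88, "negation_retention": 0.75}
w2 = {"scope_match": 0.91, "role_match": 0.70, "negation_retention": 0.74}
deltas = snapshot_deltas(w1, w2)
flagged = flag_drift(deltas, threshold=0.05)
```

Keeping `None` for unmatched indicators reflects the page's own caution: an absent measurement documents nothing, in either direction.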
Complementary probative surfaces (1)

These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.

Report schema · Audit report

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
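To suggest what an instance conforming to such a report interface might contain, here is a hedged sketch. The property names and values are assumptions, not the published schema:

```json
{
  "$schema": "/iip-report.schema.json",
  "scope": ["identity", "offers", "limits"],
  "metrics": {
    "fidelity_indicator": 0.82
  },
  "drift_taxonomy": [
    { "type": "scope-extension", "severity": "medium" }
  ]
}
```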

Why this page exists

The market increasingly talks about “visibility in AI”, share of voice, citations, or presence in ChatGPT, Claude, Perplexity, or Google. Those signals are useful. They still describe only part of the problem.

The deeper issue sits elsewhere: AI systems do not merely reflect a brand, an offer, or an entity. They reconstruct a synthetic version of it.

That reconstruction may remain close to the canon, become partial, stretch beyond scope, or be silently requalified by third-party sources. This page makes that gap readable by focusing on the distance between the published version and the reconstructed one.

The false problem and the real problem

The false problem is easy to summarize:

  • am I mentioned;
  • am I cited;
  • how often do I appear;
  • which provider cites me the most.

The real problem is more demanding:

  • which version of my identity is being reconstructed;
  • which critical attributes are preserved, smoothed, or extended;
  • which source actually governs the final answer;
  • under which conditions that reconstruction remains stable or drifts.

In other words, an organization can be visible and still be badly defined, badly bounded, badly categorized, or reconstructed from a third party that exerts more influence than its own canonical source.

What “representation gap” means on this site

On this site, the representation gap is a public entry term.

It designates the difference between:

  • what an organization publishes about itself, its offers, its limits, and its scope;
  • what AI systems retain, recombine, infer, and repeat.

This term is intentionally more accessible than the canon-output gap, but it does not replace it. It provides an entry into the problem before redistributing it toward the stricter objects that actually govern it: proof of fidelity, authority boundary, interpretive SEO, and the representation gap audit.

Part of the market now reaches the same issue through the label AI Search Monitoring. On this site, that label is treated as a useful descriptive monitoring layer and then redistributed toward the more demanding problem of governed representation.

Another frequent entry point starts from the citations themselves: the official source appears, but the team can no longer tell whether it is actually being understood. That is the role of AI citation analysis and Being cited vs being understood on this site: making that transition readable without confusing citation with faithful understanding.

A third frequent dissociation appears when the official source is visible, yet a third party remains more structuring or more governing than that displayed source. That is the role of AI source mapping and Cited source vs structuring source vs governing source: making readable the dissociation between documentary visibility, structuring capacity, and the authority that actually prevails.

A fourth frequent dissociation appears when the official site becomes visible again, yet the external environment still provides the dominant category, comparison, or temporality. That is the role of Exogenous governance and Official site visible vs structuring third parties: making readable the intermediate moment where presence returns before precedence is actually restored.

Typical symptoms

The representation gap becomes readable when symptoms such as these appear:

  • the brand is present, but its service perimeter is extended beyond the canon;
  • the official site is cited, but limits, exclusions, or conditions disappear under synthesis;
  • an AI system attributes roles, expertise, or capabilities to the organization that were never published;
  • a third-party source, directory, comparator, or review page ends up defining the entity more strongly than the official source;
  • outputs remain plausible but differ enough from one system to another that no stable reconstruction can be presumed.

Why visibility is not enough

A visible source is not necessarily a governing source.

An explicit citation is not necessarily proof of fidelity.

A good answer on one query is not necessarily system-level stability.

This is precisely why the site maintains a doctrinal separation between:

  • visibility;
  • fidelity;
  • stability;
  • governability.

That boundary is stated explicitly in GEO metrics do not govern representation and extended in Interpretive auditability of AI systems.

What this page does not designate

The representation gap is not:

  • a mere sentiment or online reputation issue;
  • a synonym for presence in answers;
  • a standalone marketing score;
  • a ranking question in the classical sense;
  • a purely stylistic divergence.

It is a reconstruction gap.

Diagnosis therefore concerns how an entity, an offer, a relationship, or a perimeter is recomposed under machine reading.

When the gap becomes an audit matter

Moving to an audit becomes relevant when:

  • critical attributes are repeatedly distorted;
  • the same entity receives incompatible framings across systems;
  • a third party silently replaces the canonical source;
  • a local correction seems to produce little effect outside a favorable case;
  • the question is no longer “am I visible?” but “am I still being reconstructed correctly?”.

In those cases, the right entry point is often the representation gap audit, followed where needed by comparative audits, interpretive SEO, or interpretive governance.

To read this topic in order:

  1. The real problem is not visibility in AI, but the representation gap
  2. Representation gap vs canon-output gap
  3. Being cited vs being understood
  4. Cited source vs structuring source vs governing source
  5. Canon-output gap
  6. Proof of fidelity
  7. AI Search Monitoring vs representation governance
  8. AI citation analysis
  9. AI source mapping
  10. Exogenous governance
  11. Official site visible vs structuring third parties
  12. Representation gap audit

Conclusion

The important question is not only whether a brand appears in AI systems.

The important question is which version of itself AI systems fabricate when they speak on its behalf, summarize it, compare it, or recommend it.

It is that difference, and not raw presence alone, that this page calls the representation gap.