Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful reproduction on its own.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
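Consuming that declared reading order mechanically is straightforward. A minimal sketch, assuming a hypothetical shape for /.well-known/ai-governance.json (the field names here are invented for illustration; the published file defines the real structure):

```python
# Hypothetical entrypoint payload; real field names may differ.
entrypoint = {
    "canonical": "/canon.md",
    "precedence": [
        "/canon.md",
        "/.well-known/ai-governance.json",
        "/ai-manifest.json",
    ],
}

def reading_order(doc: dict) -> list[str]:
    """Return the declared surfaces, canon first, without duplicates.

    Publishing this order does not force execution or obedience; it
    only makes deviation from the declared order observable.
    """
    order = [doc["canonical"]]
    for surface in doc.get("precedence", []):
        if surface not in order:
            order.append(surface)
    return order

print(reading_order(entrypoint))
# → ['/canon.md', '/.well-known/ai-governance.json', '/ai-manifest.json']
```

The sketch only derives an order; whether any reader follows it remains an observation question, not a guarantee.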
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Complementary artifacts (3)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Identity lock
/identity.json
Identity file that bounds critical attributes and reduces biographical or professional collisions.
Q-Ledger JSON
/.well-known/q-ledger.json
Machine-first journal of observations, baselines, and versioned gaps.
Q-Metrics JSON
/.well-known/q-metrics.json
Descriptive metrics surface for observing gaps, snapshots, and comparisons.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
- 04 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable
- The legitimacy regime to apply before treating an output as receivable.
- Does not prove
- Neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when
- When a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- When it is necessary to distinguish descriptive observation from strong attestation.
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable
- That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove
- Neither the truth of a representation, the fidelity of an output, nor real steering on its own.
- Use when
- To compare windows, prioritize an audit, and document a before/after.
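Comparing windows reduces, at its simplest, to a per-indicator delta between dated snapshots. A minimal sketch, assuming a hypothetical shape for /.well-known/q-metrics.json snapshots (indicator names and structure invented for illustration):

```python
# Hypothetical snapshots; the real q-metrics structure may differ.
before = {"date": "2025-01-01", "indicators": {"scope_drift": 0.30, "role_drift": 0.10}}
after = {"date": "2025-02-01", "indicators": {"scope_drift": 0.18, "role_drift": 0.12}}

def deltas(a: dict, b: dict) -> dict:
    """Descriptive before/after deltas per shared indicator.

    A negative value means the indicator decreased between snapshots.
    This compares snapshots; it proves neither output fidelity nor
    real steering on its own.
    """
    shared = a["indicators"].keys() & b["indicators"].keys()
    return {k: round(b["indicators"][k] - a["indicators"][k], 4) for k in shared}

print(deltas(before, after))
```

Keeping the comparison descriptive matches the surface's stated scope: the deltas document a before/after, nothing more.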
Complementary probative surfaces (1)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
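A report claiming to follow this interface can be screened with a minimal structural check before full validation. A sketch using invented field names, not the actual contract published at /iip-report.schema.json:

```python
# Invented required top-level fields, for illustration only; the
# actual schema at /iip-report.schema.json defines the real contract.
REQUIRED = {"scope", "metrics", "drift_taxonomy"}

def missing_fields(report: dict) -> set:
    """Top-level keys the report lacks.

    An empty result means the report passes this minimal check, which
    is strictly weaker than validating against the full JSON Schema.
    """
    return REQUIRED - report.keys()

report = {"scope": "entity:example", "metrics": {"scope_drift": 0.2}}
print(missing_fields(report))  # → {'drift_taxonomy'}
```

In practice a JSON Schema validator would replace this check; the sketch only shows where such a gate sits in an audit pipeline.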
Why this page exists
The market increasingly talks about “visibility in AI”, share of voice, citations, or presence in ChatGPT, Claude, Perplexity, or Google. Those signals are useful. They still describe only part of the problem.
The deeper issue sits elsewhere: AI systems do not merely reflect a brand, an offer, or an entity. They reconstruct a synthetic version of it.
That reconstruction may remain close to the canon, become partial, stretch beyond scope, or be silently requalified by third-party sources. This page makes that gap readable by focusing on the distance between the published version and the reconstructed one.
The false problem and the real problem
The false problem is easy to summarize:
- am I mentioned;
- am I cited;
- how often do I appear;
- which provider cites me the most.
The real problem is more demanding:
- which version of my identity is being reconstructed;
- which critical attributes are preserved, smoothed, or extended;
- which source actually governs the final answer;
- under which conditions that reconstruction remains stable or drifts.
In other words, an organization can be visible and still be badly defined, badly bounded, badly categorized, or reconstructed from a third party that exerts more influence than its own canonical source.
What “representation gap” means on this site
On this site, the representation gap is a public entry term.
It designates the difference between:
- what an organization publishes about itself, its offers, its limits, and its scope;
- what AI systems retain, recombine, infer, and repeat.
This term is intentionally more accessible than the canon-output gap, but it does not replace it. It provides an entry into the problem before redistributing it toward the stricter objects that actually govern it: proof of fidelity, authority boundary, interpretive SEO, and the representation gap audit.
Part of the market now reaches the same issue through the label AI Search Monitoring. On this site, that label is treated as a useful descriptive monitoring layer and then redistributed toward the more demanding problem of governed representation.
Another frequent entry point starts from the citations themselves: the official source appears, but the team can no longer tell whether it is actually being understood. That is the role of AI citation analysis and Being cited vs being understood on this site: making that shift readable without confusing citation with faithful understanding.
A third frequent dissociation appears when the official source is visible, yet a third party remains more structuring or more governing than the displayed source. That is the role of AI source mapping and Cited source vs structuring source vs governing source: making that split readable: documentary visibility versus structuring capacity versus the authority that actually prevails.
A fourth frequent dissociation appears when the official site becomes visible again, yet the external environment still supplies the dominant category, comparison, or temporality. That is the role of Exogenous governance and Official site visible vs structuring third parties: making that intermediate moment readable, where presence returns before precedence is actually restored.
Typical symptoms
The representation gap becomes readable when symptoms such as these appear:
- the brand is present, but its service perimeter is extended beyond the canon;
- the official site is cited, but limits, exclusions, or conditions disappear under synthesis;
- an AI system attributes roles, expertise, or capabilities to the organization that were never published;
- a third-party source, directory, comparator, or review page ends up defining the entity more strongly than the official source;
- outputs remain plausible but differ enough from one system to another that no stable reconstruction can be presumed.
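The last symptom, plausible but divergent reconstructions, can be screened for mechanically. A rough sketch with invented example data: attribute sets an auditor might extract from three systems' answers about the same entity, flagging the attributes on which the reconstructions disagree:

```python
# Invented example extractions; not real system outputs.
reconstructions = {
    "system_a": {"role": "consultancy", "scope": "EU", "founded": "2015"},
    "system_b": {"role": "consultancy", "scope": "worldwide", "founded": "2015"},
    "system_c": {"role": "software vendor", "scope": "EU", "founded": "2015"},
}

def disputed_attributes(recs: dict) -> dict:
    """Attributes whose values differ across systems.

    Disagreement marks candidates for a representation gap, not proof
    of one: each value still has to be checked against the canon.
    """
    keys = set().union(*(r.keys() for r in recs.values()))
    return {
        k: {name: r.get(k) for name, r in recs.items()}
        for k in keys
        if len({r.get(k) for r in recs.values()}) > 1
    }

print(sorted(disputed_attributes(reconstructions)))  # → ['role', 'scope']
```

Note that agreement across systems is not fidelity either: all three could converge on the same wrong attribute, which is why the canon, not the consensus, remains the reference.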
Why visibility is not enough
A visible source is not necessarily a governing source.
An explicit citation is not necessarily proof of fidelity.
A good answer on one query is not necessarily system-level stability.
This is precisely why the site maintains a doctrinal separation between:
- visibility;
- fidelity;
- stability;
- governability.
That boundary is stated explicitly in GEO metrics do not govern representation and extended in Interpretive auditability of AI systems.
What this page does not designate
The representation gap is not:
- a mere sentiment or online reputation issue;
- a synonym for presence in answers;
- a standalone marketing score;
- a ranking question in the classical sense;
- a purely stylistic divergence.
It is a reconstruction gap.
Diagnosis therefore concerns how an entity, an offer, a relationship, or a perimeter is recomposed under machine reading.
When the gap becomes an audit matter
Moving to an audit becomes relevant when:
- critical attributes are repeatedly distorted;
- the same entity receives incompatible framings across systems;
- a third party silently replaces the canonical source;
- a local correction seems to produce little effect outside a favorable case;
- the question is no longer “am I visible?” but “am I still being reconstructed correctly?”.
In those cases, the right entry point is often the representation gap audit, followed where needed by comparative audits, interpretive SEO, or interpretive governance.
Recommended reading sequence
To read this topic in order:
- The real problem is not visibility in AI, but the representation gap
- Representation gap vs canon-output gap
- Being cited vs being understood
- Cited source vs structuring source vs governing source
- Canon-output gap
- Proof of fidelity
- AI Search Monitoring vs representation governance
- AI citation analysis
- AI source mapping
- Exogenous governance
- Official site visible vs structuring third parties
- Representation gap audit
Conclusion
The important question is not only whether a brand appears in AI systems.
The important question is which version of itself AI systems fabricate when they speak on its behalf, summarize it, compare it, or recommend it.
It is that difference, and not raw presence alone, that this page calls the representation gap.