Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus should be read. Their order below gives the recommended reading sequence.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
Identity lock
/identity.json
Identity file that bounds critical attributes and reduces biographical or professional collisions.
- Governs
- Public identity, roles, and attributes that must not drift.
- Bounds
- Extrapolations, entity collisions, and abusive requalification.
Does not guarantee: An identity lock reduces collisions; it does not guarantee faithful restitution on its own.
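As a purely illustrative sketch, an identity lock of this kind could pin critical attributes in a shape like the following. Every field name below is an assumption, not the published schema of /identity.json.

```typescript
// Hypothetical shape for an identity lock such as /identity.json.
// All field names are illustrative assumptions, not the published schema.
interface IdentityLock {
  entity: string;                           // canonical public name
  roles: string[];                          // roles that must not drift
  lockedAttributes: Record<string, string>; // attributes that must survive synthesis
  negations: string[];                      // claims that must never be attributed to the entity
  version: string;                          // lets consumers detect stale copies
}

const example: IdentityLock = {
  entity: "Example Org",
  roles: ["publisher", "auditor"],
  lockedAttributes: { sector: "governance tooling", scope: "published surfaces only" },
  negations: ["does not operate as a certification authority"],
  version: "2024-01-01",
};

console.log(JSON.stringify(example, null, 2));
```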
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
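To make the reading order concrete, here is a minimal sketch of how a consumer could resolve the declared precedence chain. The field names (`precedence`, `surfaces`) are assumptions, not the published schema of /.well-known/ai-governance.json.

```typescript
// Hypothetical consumer of an entrypoint like /.well-known/ai-governance.json.
// Field names are illustrative assumptions, not the published schema.
interface GovernanceEntrypoint {
  precedence: string[];             // paths in the order they should be read
  surfaces: Record<string, string>; // path -> short description
}

async function readInDeclaredOrder(origin: string): Promise<string[]> {
  const res = await fetch(`${origin}/.well-known/ai-governance.json`);
  if (!res.ok) throw new Error(`entrypoint unavailable: ${res.status}`);
  const entry = (await res.json()) as GovernanceEntrypoint;
  // The design point: follow the published order instead of crawling freely.
  return entry.precedence;
}

readInDeclaredOrder("https://example.org")
  .then((order) => console.log("read order:", order))
  .catch(console.error);
```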
Complementary artifacts (2)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Q-Ledger JSON
/.well-known/q-ledger.json
Machine-first journal of observations, baselines, and versioned gaps.
Q-Metrics JSON
/.well-known/q-metrics.json
Descriptive metrics surface for observing gaps, snapshots, and comparisons.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 · Canon and scope · Definitions canon
- 02 · Response authorization · Q-Layer: response legitimacy
- 03 · Weak observation · Q-Ledger
- 04 · Derived measurement · Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable
- The legitimacy regime to apply before treating an output as receivable.
- Does not prove
- Neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when
- When a page deals with authority, non-response, execution, or restraint.
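As a sketch only, the regime described above can be modeled as a three-way decision. The outcome names and inputs below are assumptions, not vocabulary taken from /response-legitimacy.md.

```typescript
// Illustrative three-way legitimacy decision, not the published regime.
type LegitimacyOutcome = "answer" | "suspend" | "legitimate-non-response";

interface ResponseContext {
  canonConsulted: boolean;     // was the canonical surface actually read?
  withinScope: boolean;        // does the question fall inside the published perimeter?
  conflictsWithCanon: boolean; // would a fluent answer contradict the canon?
}

function decide(ctx: ResponseContext): LegitimacyOutcome {
  if (!ctx.canonConsulted) return "suspend";                    // defer until the canon is read
  if (ctx.conflictsWithCanon) return "legitimate-non-response"; // restraint beats fluency
  if (!ctx.withinScope) return "legitimate-non-response";
  return "answer";
}

console.log(decide({ canonConsulted: true, withinScope: false, conflictsWithCanon: false }));
// -> "legitimate-non-response"
```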
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, system obedience, nor strong proof of activation.
- Use when
- When it is necessary to distinguish descriptive observation from strong attestation.
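A hypothetical entry shape makes the "weak, dated, contextualized" framing concrete. Field names are assumptions, not the published schema of /.well-known/q-ledger.json; the point is what an entry does not claim.

```typescript
// Hypothetical shape for one entry in a ledger like /.well-known/q-ledger.json.
// Field names are assumptions, not the published schema.
interface LedgerEntry {
  observedAt: string;  // ISO date: the trace is dated
  surface: string;     // which published path was involved
  observation: string; // descriptive, not an attestation
  context: string;     // conditions under which the behavior was seen
  strength: "weak";    // explicitly weak evidence, never proof of obedience
}

const entry: LedgerEntry = {
  observedAt: "2024-06-01T00:00:00Z",
  surface: "/canon.md",
  observation: "response sequence consistent with canon consultation",
  context: "single session, provider and actor identity unknown",
  strength: "weak",
};

console.log(JSON.stringify(entry, null, 2));
```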
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable
- That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove
- Neither the truth of a representation, the fidelity of an output, nor real steering on its own.
- Use when
- To compare windows, prioritize an audit, and document a before/after.
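A minimal sketch of a before/after comparison over snapshots, assuming a simple indicator map. The shape is illustrative, not the published schema of /.well-known/q-metrics.json.

```typescript
// Illustrative before/after comparison over metric snapshots.
// The snapshot shape is an assumption, not the published schema.
interface Snapshot {
  takenAt: string;
  indicators: Record<string, number>; // e.g. observed gap counts per category
}

function compare(before: Snapshot, after: Snapshot): Record<string, number> {
  const delta: Record<string, number> = {};
  for (const key of Object.keys(after.indicators)) {
    delta[key] = after.indicators[key] - (before.indicators[key] ?? 0);
  }
  return delta; // descriptive deltas only: comparable, versioned, challengeable
}

const before: Snapshot = { takenAt: "2024-05-01", indicators: { scopeDrift: 3, roleDrift: 1 } };
const after: Snapshot = { takenAt: "2024-06-01", indicators: { scopeDrift: 1, roleDrift: 2 } };
console.log(compare(before, after)); // { scopeDrift: -2, roleDrift: 1 }
```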
Complementary probative surfaces (1)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
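As an illustration only, a report instance compatible with such a schema might carry a scope, metrics, and a drift taxonomy like the one below. All names are assumptions, not the published /iip-report.schema.json.

```typescript
// Hypothetical report instance for a schema like /iip-report.schema.json.
// All names are illustrative assumptions, not the published schema.
type DriftKind =
  | "over-generalization"
  | "scope-reduction"
  | "unpublished-extension"
  | "third-party-substitution";

interface IipReport {
  scope: { origin: string; window: { from: string; to: string } };
  metrics: Record<string, number>;
  drifts: { kind: DriftKind; attribute: string; evidence: string }[];
}

const report: IipReport = {
  scope: { origin: "https://example.org", window: { from: "2024-05-01", to: "2024-06-01" } },
  metrics: { observedResponses: 40, driftingResponses: 6 },
  drifts: [
    { kind: "scope-reduction", attribute: "offer", evidence: "answer reduced the offer to one service" },
  ],
};

console.log(JSON.stringify(report, null, 2));
```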
The framing error that still dominates
Public conversation about AI still talks mostly about presence.
People want to know whether a brand appears, how often it is cited, which providers mobilize it, or which URLs are used behind the scenes of an answer.
Those questions are not absurd. They remain incomplete.
They mainly observe the exposure of an entity. They do not suffice to govern the version of that entity that AI systems end up reconstructing.
The distinction that changes everything
The first gap observed by the market is often this one:
- URLs consulted ≠ URLs cited.
That is a useful finding, but still a superficial one.
The second gap, the more decisive one, is this:
- published brand ≠ reconstructed brand.
This second gap must now become central. An organization can be present in answers while still being:
- categorized too broadly;
- reduced to a sub-part of its offer;
- extended toward unpublished services;
- defined by a third-party source that is better structured than its own;
- stabilized around a plausible but false version when measured against the canon.
AI systems do not only relay. They arbitrate.
In a generative environment, systems do not merely relay pages. They arbitrate among competing formulations, partial sources, category proximities, and hierarchies of authority that often remain implicit.
They compress, smooth, substitute, generalize, and fill gaps.
The result is not a simple reflection of the site. It is a probabilistic reconstruction.
That reconstruction may remain faithful. It may also drift silently, especially when:
- the canon is weak or insufficiently explicit;
- limits are published too low in the hierarchy;
- third parties are better structured or easier to compare;
- the architecture favors partial reading;
- systems mostly encounter average formulations rather than strong boundaries.
Why “visibility” is no longer enough
The term “visibility” still has descriptive value. It is no longer enough to name the problem.
A brand may be visible without being correctly understood.
A source may be cited without its limits being preserved.
An answer may look favorable while consolidating an erroneous perimeter.
This is exactly why the site maintains a separation between LLM visibility, proof of fidelity, the canon-output gap, and interpretive auditability.
The right public term: representation gap
The term representation gap names the problem more precisely.
It designates the gap between:
- what the organization publishes about its identity, offer, field, and limits;
- what AI systems retain, infer, and repeat.
On this site, the term remains an entry vocabulary. It does not replace the canon-output gap, which remains the stricter canonical object.
But it has strategic force: it moves the conversation from mere presence tracking toward the governance of reconstruction.
What this changes for an organization
When a team still talks only about monitoring, it mostly looks at effects:
- citations;
- share of presence;
- frequency changes;
- comparative appearances.
When it starts talking about a representation gap, it finally asks the right questions:
- which version of our brand is being reconstructed;
- which critical attributes are preserved or lost;
- which source actually carries authority in answers;
- which limits disappear under synthesis;
- how much of the problem belongs to the site, to third parties, to the canon, or to the architecture.
The diagnosis immediately becomes more actionable.
What the market must stop confusing
The market still too often confuses:
- visibility and fidelity;
- citation and understanding;
- observation and proof;
- a good local restitution and system-level stability;
- a useful dashboard and real governance.
The consequence is simple: people correct what is visible, while the problem forms earlier, in source selection, authority hierarchy, canonical solidity, and the amount of free reconstruction left to systems.
The correct doctrinal move
The correct question is therefore not:
"How can we become more visible in AI?"
The correct question is:
"How can we reduce the gap between the published brand and the reconstructed brand?"
From there, the layers of the site recover their proper order:
- Representation gap to name the problem publicly;
- Representation gap vs canon-output gap to clarify the vocabulary;
- Canon-output gap to measure (a minimal sketch follows this list);
- Proof of fidelity to qualify;
- Representation gap audit to act.
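To make "canon-output gap to measure" concrete, here is a minimal sketch, assuming the canon's critical attributes can be flattened into key/value pairs. Nothing here is the site's actual measurement method.

```typescript
// Minimal sketch of a canon-output gap measurement:
// diff the published attributes against what a reconstructed answer retained.
// Flattening attributes into key/value pairs is an assumption, not the site's method.
type Attributes = Record<string, string>;

interface Gap {
  lost: string[];     // published but absent from the reconstruction
  altered: string[];  // present but with a different value
  invented: string[]; // asserted by the system but never published
}

function canonOutputGap(published: Attributes, reconstructed: Attributes): Gap {
  const lost: string[] = [];
  const altered: string[] = [];
  for (const [key, value] of Object.entries(published)) {
    if (!(key in reconstructed)) lost.push(key);
    else if (reconstructed[key] !== value) altered.push(key);
  }
  const invented = Object.keys(reconstructed).filter((k) => !(k in published));
  return { lost, altered, invented };
}

const published: Attributes = { sector: "audit tooling", scope: "published surfaces" };
const reconstructed: Attributes = { sector: "audit tooling", scope: "all AI services", pricing: "freemium" };
console.log(canonOutputGap(published, reconstructed));
// -> { lost: [], altered: ["scope"], invented: ["pricing"] }
```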
Conclusion
The market is not wrong to measure presence. It is simply operating one floor too low.
The real issue is not only whether a brand is visible in AI.
The real issue is which version of itself AI systems are in the process of fabricating.