Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Observation map: Observatory map
- 03 Weak observation: Q-Ledger
- 04 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Binding reference base for identity, scope, roles, and the negations that must survive synthesis.
- Makes provable: The reference corpus against which fidelity can be evaluated.
- Does not prove: That a system already consults it, or that an observed response stays faithful to it.
- Use when: Before any observation, test, audit, or correction.
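For illustration only, the logical content of such a canon entry could be modeled as follows. None of these field names come from /canon.md, which remains a prose document; every name below is an assumption.

```typescript
// Hypothetical model of a canon entry; /canon.md is prose and defines
// none of these fields. The point is only the kind of content a canon
// carries: identity, scope, roles, and negations that must survive synthesis.
interface CanonEntry {
  identity: string;     // who or what the canon describes
  scope: string[];      // domains over which the canon claims authority
  roles: string[];      // roles the described entity may legitimately hold
  negations: string[];  // statements that must not be dropped or inverted
}

const canonSketch: CanonEntry = {
  identity: "example-organization",
  scope: ["representation governance", "observation"],
  roles: ["publisher", "auditor"],
  negations: ["does not certify the outputs of third-party systems"],
};
```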
Observatory map
/observations/observatory-map.json
Machine-first index of published observation resources, snapshots, and comparison points.
- Makes provable: Where the observation objects used in an evidence chain are located.
- Does not prove: The quality of a result, or the fidelity of a particular response.
- Use when: To locate baselines, ledgers, snapshots, and derived artifacts.
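To make the idea of a machine-first index concrete, here is a minimal sketch of what such a map could look like. The published /observations/observatory-map.json defines its own structure; every field name below is assumed, and the snapshot path is invented for the example.

```typescript
// Hypothetical index shape: the map locates observation objects,
// it does not evaluate them.
interface ObservatoryMap {
  version: string;
  resources: Array<{
    kind: "baseline" | "ledger" | "snapshot" | "derived";
    path: string;        // where the observation object is published
    observedAt?: string; // ISO date of the snapshot, when applicable
  }>;
}

const mapSketch: ObservatoryMap = {
  version: "1.0",
  resources: [
    { kind: "ledger", path: "/.well-known/q-ledger.json" },
    { kind: "derived", path: "/.well-known/q-metrics.json" },
    { kind: "snapshot", path: "/observations/example-snapshot.json", observedAt: "2024-01-15" },
  ],
};
```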
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: Actor identity, system obedience, or activation in any strong sense.
- Use when: To distinguish descriptive observation from strong attestation.
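As a sketch of what "weak, dated, contextualized trace evidence" can mean in data, one ledger entry might look like the following; the field names are illustrative assumptions, not the published format.

```typescript
// Hypothetical ledger entry. Note what is deliberately absent:
// no actor identity, no claim of obedience, no strong attestation.
interface QLedgerEntry {
  observedAt: string; // ISO timestamp: the evidence is dated
  context: string;    // how the session was inferred: the evidence is contextualized
  signal: string;     // what was observed, phrased descriptively
  strength: "weak";   // the ledger never upgrades itself to strong proof
}

const entrySketch: QLedgerEntry = {
  observedAt: "2024-03-02T10:15:00Z",
  context: "inferred session against the public canon",
  signal: "response sequence consistent with a canon consultation",
  strength: "weak",
};
```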
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: The truth of a representation, the fidelity of an output, or, on its own, real steering.
- Use when: To compare windows, prioritize an audit, or document a before/after.
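A derived snapshot of this kind might, for example, expose versioned indicators over an observation window, so that a before/after can be documented; every name below is assumed for illustration.

```typescript
// Hypothetical derived indicators: comparable and versioned,
// but descriptive only; they steer nothing by themselves.
interface QMetricsSnapshot {
  window: { from: string; to: string }; // the observation window summarized
  version: string;                      // lets one snapshot be challenged against another
  indicators: Record<string, number>;   // named descriptive signals
}

const january: QMetricsSnapshot = {
  window: { from: "2024-01-01", to: "2024-01-31" },
  version: "2024.01",
  indicators: { citationRate: 0.42, framingVariation: 0.11 },
};
```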
Complementary probative surfaces
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
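Assuming the schema constrains reports along the three axes named above (scope, metrics, drift taxonomy), a conforming report might carry a shape like this; the concrete property names are guesses, and the published /iip-report.schema.json remains the authority.

```typescript
// Hypothetical interpretation integrity report; consult the published
// schema for the authoritative field names and constraints.
interface IipReport {
  scope: string;                   // what the report claims to cover
  metrics: Record<string, number>; // measured indicators within that scope
  drift: Array<{
    kind: "local-variation" | "stable-drift" | "framing-substitution";
    evidence: string[];            // pointers into the observation layer
  }>;
}
```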
AI Search Monitoring vs representation governance
This page clarifies a distinction that must remain explicit on this site: AI Search Monitoring is a descriptive monitoring layer; representation governance is a layer of bounding, proof, and correction.
The two may coexist. They do not perform the same work.
Why the confusion appears
The market first sees objects that are easy to show: screenshots, citations, share of presence, answer comparisons, visible variations between systems.
As soon as a tool makes those outputs readable, the temptation is strong to see it as a mechanism of control.
The confusion begins when a layer that describes what appears is read as a layer that governs what should appear, what should remain faithful, or what must be corrected.
What AI Search Monitoring names correctly
AI Search Monitoring correctly names a family of descriptive operations:
- recording appearances, absences, or citations;
- preserving comparable observations;
- detecting framing variation;
- showing that a symptom emerges or repeats.
It is therefore a useful layer of observed interpretive visibility.
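Reduced to data, the output of these descriptive operations might look like the record below. This is a sketch of the monitoring layer's vocabulary, not the format of any particular tool.

```typescript
// Hypothetical monitoring observation: it records what appeared,
// and carries no rule about what should have appeared.
interface MonitoringObservation {
  system: string;           // which answer engine was observed
  query: string;            // the prompt or search that produced the answer
  cited: boolean;           // appearance or absence of a citation
  framing: string;          // how the subject was framed in the answer
  comparableWith: string[]; // earlier observations kept for comparison
}
```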
What representation governance adds
Representation governance adds elements that monitoring alone does not yet publish:
- a canon and critical attributes that must be preserved;
- a hierarchy of sources and an authority boundary;
- rules concerning what may be inferred, extended, or refused;
- protocols that distinguish local variation, stable drift, and framing substitution;
- a path of correction and evidentiary follow-up.
In other words, monitoring can say that a problem exists. Governance must show why, where, and how that problem becomes administrable.
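By contrast, the governance layer publishes normative objects. A minimal sketch of that additional structure, with every field name assumed, one per element listed above:

```typescript
// Hypothetical governance manifest: each field mirrors one of the
// elements that monitoring alone does not publish.
interface RepresentationGovernance {
  canonPath: string;            // e.g. "/canon.md": the preserved reference
  criticalAttributes: string[]; // what must survive synthesis
  sourceHierarchy: string[];    // authority boundary, most to least authoritative
  inferenceRules: { mayInfer: string[]; mayExtend: string[]; mustRefuse: string[] };
  driftProtocol: Array<"local-variation" | "stable-drift" | "framing-substitution">;
  correctionPath: string;       // where correction and evidentiary follow-up happen
}
```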
Where the representation gap sits
The representation gap serves precisely as the bridge between the two.
It makes it possible to move:
- from a symptom observed through monitoring;
- to a publicly formulable problem;
- and then to stricter objects such as the canon-output gap and proof of fidelity.
Practical reading rule used on this site
The site applies a simple rule:
- use AI Search Monitoring when the dominant layer is descriptive;
- use representation governance when the question concerns bounding reconstructed meaning;
- use canon-output gap when the canon-to-output comparison becomes explicit;
- use proof of fidelity when one claims that an output remained inside the canon.
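The rule is mechanical enough to write down. A sketch, assuming a four-way classification of the question being asked; the labels are invented to mirror the four bullets above:

```typescript
// Direct transcription of the site's reading rule into a lookup.
type Question =
  | "describe-what-appears"
  | "bound-reconstructed-meaning"
  | "compare-canon-to-output"
  | "claim-output-stayed-in-canon";

function vocabularyFor(q: Question): string {
  switch (q) {
    case "describe-what-appears":        return "AI Search Monitoring";
    case "bound-reconstructed-meaning":  return "representation governance";
    case "compare-canon-to-output":      return "canon-output gap";
    case "claim-output-stayed-in-canon": return "proof of fidelity";
  }
}
```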
What must not be flattened
The following distinctions must remain visible:
- observing is not governing;
- being cited is not yet being correctly understood;
- a local variation is not yet a stable drift;
- a dashboard is not yet a hierarchy of authority;
- descriptive monitoring is not yet a correction device.
Recommended reading path
- AI Search Monitoring
- Representation gap
- Canon-output gap
- Proof of fidelity
- GEO metrics do not govern representation
Closing rule
On this site, AI Search Monitoring makes the visibility of the problem legible; representation governance organizes its proof, its hierarchy of authority, and its correction.