Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. The order below makes the minimal evidence chain explicit.
- 01. Canon and scope: Definitions canon
- 02. Weak observation: Q-Ledger
- 03. Derived measurement: Q-Metrics
- 04. Audit report: IIP report schema
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: The reference corpus against which fidelity can be evaluated.
- Does not prove: That a system already consults it, or that an observed response stays faithful to it.
- Use when: Before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: Actor identity, system obedience, or strong proof of activation.
- Use when: It is necessary to distinguish descriptive observation from strong attestation.
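As a concrete illustration, the ledger can be read programmatically to list dated observations. This is a hedged sketch in Python: the field names (`sessions`, `observed_at`, `surface`, `evidence_level`) are invented for illustration, since the actual shape is defined by `/.well-known/q-ledger.json` itself.

```python
# Hypothetical sketch of reading a Q-Ledger snapshot. All field names
# below are assumptions for illustration; the published ledger file is
# the authoritative source of structure.
import json
from datetime import date

ledger_json = """
{
  "sessions": [
    {"observed_at": "2024-05-01", "surface": "/canon.md", "evidence_level": "weak"},
    {"observed_at": "2024-05-03", "surface": "/canon.md", "evidence_level": "weak"}
  ]
}
"""

def observed_sessions(raw: str, since: str) -> list[dict]:
    """Return dated session traces observed on or after `since` (ISO date)."""
    ledger = json.loads(raw)
    cutoff = date.fromisoformat(since)
    return [s for s in ledger.get("sessions", [])
            if date.fromisoformat(s["observed_at"]) >= cutoff]

recent = observed_sessions(ledger_json, "2024-05-02")
```

Note that each entry stays what the page says it is: a weak, dated trace, not an attestation of identity or activation.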
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: The truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: Comparing windows, prioritizing an audit, or documenting a before/after.
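The before/after use case above can be sketched as a snapshot diff. The indicator names (`citation_rate`, `drift_signals`) are invented examples; the real indicators are those published in `/.well-known/q-metrics.json`.

```python
# Hypothetical sketch of comparing two Q-Metrics snapshots. The metric
# names and values are assumptions for illustration only.
def metric_deltas(before: dict, after: dict) -> dict:
    """Per-indicator difference between two snapshots, for a before/after record."""
    keys = set(before) | set(after)
    return {k: after.get(k, 0.0) - before.get(k, 0.0) for k in sorted(keys)}

snapshot_a = {"citation_rate": 0.42, "drift_signals": 3}
snapshot_b = {"citation_rate": 0.55, "drift_signals": 1}
deltas = metric_deltas(snapshot_a, snapshot_b)
```

The delta is descriptive only: it can prioritize an audit, but, as stated above, it proves neither fidelity nor steering on its own.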
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
- Makes provable: The minimal shape of a reconstructible and comparable audit report.
- Does not prove: Private weights, internal heuristics, or the success of a concrete audit.
- Use when: A page discusses audit, probative deliverables, or opposable reports.
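A minimal shape check against the report interface might look like this. The three required keys are taken from the description above (scope, metrics, drift taxonomy); the authoritative contract remains `/iip-report.schema.json`, which a real validator would load instead of this hand-rolled check.

```python
# Sketch of a top-level shape check for an interpretation integrity
# report. The key names are assumptions inferred from this page, not
# the actual schema.
REQUIRED_KEYS = ("scope", "metrics", "drift_taxonomy")

def missing_fields(report: dict) -> list[str]:
    """Names of required top-level fields absent from the report."""
    return [k for k in REQUIRED_KEYS if k not in report]

report = {"scope": "/canon.md", "metrics": {}, "drift_taxonomy": []}
problems = missing_fields(report)
```

Passing the check establishes only the minimal shape, exactly as the surface claims: it says nothing about whether a concrete audit succeeded.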
Complementary probative surfaces (1)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
AI changelog
/changelog-ai.md
Public log that makes AI surface changes more dateable and auditable.
Canon-output gap
The canon-output gap is the distance between what the canon declares — truths, boundaries, negations, conditions — and what an AI system reconstructs in its answers. It measures interpretive distortion: an output can sound plausible while remaining incompatible with the canon.
This gap is a core diagnostic unit. It shifts the discussion away from opinion (“true / false”) toward a governable measurement (“compatible / incompatible with the canon”).
Definition
The canon-output gap covers the divergences between the canon — declared statements, conditions of validity, limits, governed negations — and the output — assertions, omissions, reformulations, inferences, and framings produced by the model. The gap can be produced by omission, extrapolation, substitution, recasting, contamination, or capture.
Why this matters in AI systems
- A response can sound “good” while remaining incompatible with the canon: plausibility is not fidelity.
- Smoothing hides the gap by removing conditions, limits, or exceptions without visible noise.
- Correction requires measurement; without a gap indicator, remediation remains empirical.
Types of canon-output gap
- Gap by omission: a condition exists in the canon but disappears in the output.
- Gap by extrapolation: the output exceeds the declared scope and crosses an authority boundary.
- Gap by substitution: the system silently replaces the canon with a secondary source.
- Gap by reframing: the output explains the concept in a dominant vocabulary incompatible with canonical distinctions.
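The four gap types can be carried as stable labels in audit tooling. This is a minimal sketch; the authoritative drift taxonomy is the one exposed by `/iip-report.schema.json`.

```python
# The gap taxonomy from this section as an enumeration, so audits can
# label divergences consistently. The value strings are assumptions.
from enum import Enum

class GapType(Enum):
    OMISSION = "omission"          # a canonical condition disappears
    EXTRAPOLATION = "extrapolation"  # output exceeds declared scope
    SUBSTITUTION = "substitution"  # canon silently replaced by a source
    REFRAMING = "reframing"        # incompatible dominant vocabulary
```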
Practical signals
- Conditions, limits, and exceptions never appear in the answer.
- Capabilities, rights, or promises are attributed without being declared.
- The answer varies strongly under rephrasing, which suggests unstable activation.
- The canon is cited, but the conclusion exceeds what the citation authorizes.
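The first signal above, conditions that never appear in the answer, can be sketched as a naive substring check. The canonical conditions here are invented examples, and literal matching is a deliberate simplification; a real audit would extract conditions from the canon and match them more robustly.

```python
# Illustrative sketch of one practical signal: gap by omission.
# Both the condition strings and the matching strategy are simplified
# assumptions for illustration.
def omitted_conditions(canon_conditions: list[str], output: str) -> list[str]:
    """Conditions declared in the canon that never appear in the output."""
    text = output.lower()
    return [c for c in canon_conditions if c.lower() not in text]

conditions = ["only within declared scope", "subject to review"]
answer = "The system applies the policy subject to review."
gaps = omitted_conditions(conditions, answer)
```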
What it is not
- It is not a mere stylistic difference.
- It is not necessarily an attack; the gap can be structural and unintentional.
- It is not only a retrieval issue; synthesis and inference can create the gap as well.
Minimum rule
ECS-1: any high-impact answer must minimize the canon-output gap by preserving canonical boundaries, conditions, and governed negations, and by producing a proof of fidelity. If the gap cannot be reduced under the declared conditions, the correct outcome is a legitimate non-response.
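The ECS-1 rule can be read as a release gate. This sketch assumes a scalar gap score and a declared threshold, both hypothetical; the point is only that an unreduced gap yields a legitimate non-response rather than a plausible answer.

```python
# Hedged sketch of ECS-1 as a gate. The scoring of the canon-output gap
# is out of scope here and assumed to be provided; threshold value is
# an invented placeholder.
NON_RESPONSE = "No faithful answer is available under the declared conditions."

def ecs1_gate(answer: str, gap_score: float, threshold: float = 0.2) -> str:
    """Release the answer only when the measured canon-output gap stays bounded."""
    return answer if gap_score <= threshold else NON_RESPONSE

ok = ecs1_gate("Bounded answer with conditions preserved.", 0.05)
refused = ecs1_gate("Plausible but drifting answer.", 0.6)
```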
Minimal governance implication
The canon-output gap is never just a stylistic variation. It is the measurable sign that the system has moved away from the declared source of truth. That is why gap measurement belongs to auditability, not merely to editorial comparison.
Typical forms of gap
A canon-output gap may take the form of omission, excessive compression, unsupported addition, shifted emphasis, or outright contradiction. The variety matters because different gaps call for different corrective responses.
Phase 3 adjacency: evidence, auditability, and measurement
This definition now belongs to the Phase 3 evidence-control layer. Its role is clarified by four canonical surfaces: evidence layer, interpretive auditability, Q-Ledger, and Q-Metrics.
The operational sequence is:
- interpretive evidence identifies what can support a challenge;
- reconstructible evidence packages the case for third-party review;
- the interpretation trace exposes the path;
- the canon-output gap measures the distance from canon;
- proof of fidelity tests whether the output remained bounded;
- interpretive observability monitors variation over time.
In this layer, canon-output gap should not be read as a loose evidence word. It is part of a chain that separates observation, measurement, reconstructability, auditability, and proof.