Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files: it is anchored to surfaces that make observation, traceability, fidelity, and audit reconstructible. The order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer (response legitimacy)
- 03 Weak observation: Q-Ledger
- 04 Attestation: Q-Attest protocol
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
Q-Attest protocol
/.well-known/q-attest-protocol.md
Optional specification that cleanly separates inferred sessions from validated attestations.
- Makes provable: the minimal frame required to elevate an observation toward a verifiable attestation.
- Does not prove: that an attestation endpoint exists, or that an attestation has already been received.
- Use when: a page deals with strong proof, operational validation, or separation between evidence levels.
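The separation the Q-Ledger and Q-Attest surfaces maintain can be sketched in code. This is a minimal, hypothetical sketch: the field names and the `evidence_level` helper are illustrative assumptions, not the actual schema of `/.well-known/q-ledger.json` or the Q-Attest protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferredSession:
    """Weak observation: dated, contextualized, never attributed to an actor."""
    observed_at: str   # ISO 8601 date of the observation
    surface: str       # e.g. "/canon.md"
    context: str       # free-text description of the observed session
    # Deliberately no actor field: a ledger entry never proves identity.

@dataclass
class ValidatedAttestation:
    """Strong evidence: a weak observation elevated through an attest step."""
    session: InferredSession        # the observation being elevated
    attested_by: str                # the validating party
    attested_at: str
    evidence_ref: Optional[str] = None  # pointer to supporting material

def evidence_level(record: object) -> str:
    """Name the evidence level explicitly before treating a record as proof."""
    if isinstance(record, ValidatedAttestation):
        return "attestation"
    if isinstance(record, InferredSession):
        return "observation"
    return "unknown"
```

The point of the sketch is the type boundary: nothing in it lets an `InferredSession` pass as an attestation, which is exactly the separation the two surfaces describe.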
Complementary probative surfaces (1)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
AI changelog
/changelog-ai.md
Public log that makes AI surface changes more dateable and auditable.
Interpretation trace
Definition
Interpretation trace is the minimum footprint that makes it possible to explain how an AI output was produced: which sources were mobilized, which rules or constraints were applied, and in which context the answer was generated.
The objective is not to open the model’s internal black box. The objective is to make interpretation auditable. A trace links an output to a canon, an authority boundary, and a set of response conditions.
Why it is critical in AI systems
When interpretation leaves no trace, outputs become difficult to contest. A response may sound coherent and still be impossible to attribute to any stable source hierarchy, perimeter, or rule set. In such conditions, error is not only factual. It becomes procedural.
Interpretation trace matters because it allows a human or system to reconstruct the path from source to answer. It reduces silent extrapolation, exposes conflicts of authority, and makes legitimate non-response easier to justify when the conditions are not met.
Interpretation trace vs citation
A citation names a source. An interpretation trace goes further. It explains how the source entered the answer, with which status, through which constraints, and under which decision logic.
An answer can contain citations and still lack interpretive traceability. That happens when sources are listed but their role in the output remains opaque.
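The gap between a citation and a trace entry can be made concrete. In this hypothetical sketch (the keys and values are illustrative, not a defined schema), the citation is just a name, while the trace entry records the source's status, the constraint applied, and its role in the output:

```python
# A citation only names a source.
citation = "/canon.md"

# A trace entry records how that source entered the answer.
trace_entry = {
    "source": "/canon.md",
    "status": "canon",                 # canon vs inference vs external
    "constraint": "definitions scope only",
    "role": "grounded the definition quoted in the answer",
}
```

An answer listing only `citation`-style references can still be interpretively opaque; the `status` and `role` fields are what make the source's contribution contestable.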
Practical indicators when no trace is available
When interpretation trace is missing, several symptoms tend to appear:
- the answer cites sources but does not distinguish canon from inference;
- conflicts between sources are smoothed over rather than exposed;
- the system cannot state why a non-response or clarification would have been legitimate;
- the output cannot be tied to an authority perimeter, a version, or a context window.
What interpretation trace is not
Interpretation trace is not full model introspection. It is not a hidden chain-of-thought dump. It is not a promise that every token can be reconstructed. And it is not equivalent to a legal attestation.
It is a minimum governance requirement: enough information to explain how a response was produced, bounded, and authorized.
Minimal rule (opposable formulation)
A governable AI output should make it possible to identify:
- the canonical source or sources used;
- the authority boundary that framed the answer;
- the response condition under which the output was authorized;
- the reason for abstention or clarification when a full answer was not legitimate.
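The four-part minimal rule above can be expressed as a checkable structure. This is a sketch under stated assumptions: `InterpretationTrace` and its field names are hypothetical illustrations of the rule, not an existing API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InterpretationTrace:
    canonical_sources: list[str]       # canonical source or sources used
    authority_boundary: str            # the boundary that framed the answer
    response_condition: str            # condition under which output was authorized
    answered_fully: bool = True
    abstention_reason: Optional[str] = None  # required when the answer was withheld

    def missing_elements(self) -> list[str]:
        """Return which parts of the minimal rule this trace fails to satisfy."""
        gaps = []
        if not self.canonical_sources:
            gaps.append("canonical source")
        if not self.authority_boundary:
            gaps.append("authority boundary")
        if not self.response_condition:
            gaps.append("response condition")
        if not self.answered_fully and not self.abstention_reason:
            gaps.append("abstention reason")
        return gaps
```

A trace with an empty `missing_elements()` result satisfies the opposable formulation; any non-empty result names exactly which identification the output fails to support.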
Example
A model summarizes a doctrinal page and concludes that a concept applies by default. A citation alone is not sufficient. An interpretation trace would show whether the concept was explicitly stated by the canon, inferred by the model, or stabilized through another authoritative layer.
Recommended internal links
- Proof of fidelity
- Canon-to-output gap
- Interpretive observability
- Legitimate non-response
See also
- Interpretation integrity audit
- Q-Layer
- Canon vs inference
Phase 3 adjacency: evidence, auditability, and measurement
This definition now belongs to the phase 3 evidence-control layer. Its role is clarified by four canonical surfaces: evidence layer, interpretive auditability, Q-Ledger, and Q-Metrics.
The operational sequence is:
- interpretive evidence identifies what can support challenge;
- reconstructable evidence packages the case for third-party review;
- interpretation trace exposes the path;
- canon-output gap measures the distance from canon;
- proof of fidelity tests whether the output remained bounded;
- interpretive observability monitors variation over time.
In this layer, interpretation trace should not be read as a loose evidence term. It is part of a chain that separates observation, measurement, reconstructability, auditability, and proof.