Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files: it is anchored to surfaces that make observation, traceability, fidelity, and audit reconstructible. The order of the surfaces below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
- 04 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong activation; it is not strong proof on its own.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
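The split this surface supports can be sketched in a few lines. The entry fields below are assumptions for illustration, not the published q-ledger.json schema:

```python
# Illustrative sketch: the field names are assumptions, not the
# published q-ledger.json schema. The point is the evidence split:
# inferred sessions stay weak, dated observation; only validated
# attestations (see the Q-Attest protocol) count as strong evidence.
ledger_entries = [
    {"observed_on": "2024-06-01", "kind": "inferred_session",
     "surface": "/canon.md"},
    {"observed_on": "2024-06-03", "kind": "validated_attestation",
     "surface": "/response-legitimacy.md"},
]

def split_by_strength(entries: list) -> tuple:
    """Separate weak observations from strong attestations so an
    audit never silently upgrades one into the other."""
    weak = [e for e in entries if e["kind"] == "inferred_session"]
    strong = [e for e in entries if e["kind"] == "validated_attestation"]
    return weak, strong

weak, strong = split_by_strength(ledger_entries)
print(len(weak), len(strong))  # 1 1
```

Keeping the two classes separate in the data, rather than in prose, is what lets a third party re-derive the same distinction later.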
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: the truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: comparing windows, prioritizing an audit, or documenting a before/after.
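A before/after comparison over two dated snapshots can be sketched as follows. The indicator names and file shape are assumptions, not the published q-metrics.json schema:

```python
# Hedged sketch: this snapshot shape is an assumption, not the
# published q-metrics.json schema. It shows how two dated windows
# of a derived-metrics file could be diffed as descriptive
# indicators (variation, not fidelity or steering).
snapshot_a = {
    "window": "2024-05",
    "indicators": {"citation_rate": 0.42, "scope_drift": 0.08},
}
snapshot_b = {
    "window": "2024-06",
    "indicators": {"citation_rate": 0.47, "scope_drift": 0.11},
}

def diff_indicators(before: dict, after: dict) -> dict:
    """Return per-indicator deltas between two snapshots.

    Deltas are descriptive: they make variation comparable and
    challengeable, nothing more.
    """
    shared = before["indicators"].keys() & after["indicators"].keys()
    return {
        k: round(after["indicators"][k] - before["indicators"][k], 4)
        for k in sorted(shared)
    }

print(diff_indicators(snapshot_a, snapshot_b))
```

Versioning each snapshot by window keeps the comparison dateable, which is what makes a before/after documentable rather than anecdotal.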
Complementary probative surfaces (3)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
Q-Attest protocol
/.well-known/q-attest-protocol.md
Optional specification that cleanly separates inferred sessions from validated attestations.
IIP report schema
/iip-report.schema.json
Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
AI changelog
/changelog-ai.md
Public log that makes AI surface changes more dateable and auditable.
Interpretive auditability
This page is the canonical definition of interpretive auditability inside the interpretive governance corpus.
Interpretive auditability is the capacity to examine, reconstruct, challenge, and qualify how an AI output moved from a canon or source set to a final answer under declared authority, evidence, version, and response conditions.
Short definition
Interpretive auditability is not the ability to inspect every internal model operation. It is the ability to make the interpretive path sufficiently visible for governance: what was authoritative, what was retrieved or observed, what was inferred, what was excluded, which response conditions applied, and whether the output can be defended against the canon.
A system, corpus, or response is interpretively auditable when a third party can review the relevant evidence without relying on style, confidence, or post-hoc narrative convenience.
Why it matters
AI systems do not only return facts. They compress, compare, select, omit, reframe, and sometimes arbitrate between sources. When that process leaves no reviewable footprint, an organization cannot distinguish a faithful response from a plausible one.
Interpretive auditability creates the conditions for correction. It allows a team to locate the gap, understand whether the issue comes from source hierarchy, retrieval, inference, smoothing, version drift, or missing response conditions, and decide whether to correct the canon, restrict the answer, or declare non-response.
Minimum auditability conditions
A response or system is interpretively auditable only if all of the following conditions are present:
- an identifiable canon or admitted source set;
- a declared authority boundary;
- a visible interpretation trace;
- a way to distinguish observation, inference, exclusion, and uncertainty;
- a reconstructable evidence package;
- a comparison mechanism for the canon-output gap;
- a record of version, date, and response conditions;
- a correction path when the interpretation fails.
Without these conditions, an audit becomes a stylistic review. It may describe a response, but it cannot govern it.
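The conditions above can be checked mechanically over an evidence package. This is a sketch under assumed field names, not a published interface:

```python
# Hedged sketch: the condition keys are illustrative assumptions,
# not a published schema. It shows how the minimum auditability
# conditions could be verified over an evidence package before an
# audit begins.
REQUIRED_CONDITIONS = [
    "canon_or_source_set",
    "authority_boundary",
    "interpretation_trace",
    "statement_typing",        # observation vs inference vs exclusion
    "evidence_package",
    "gap_comparison",
    "version_and_conditions",
    "correction_path",
]

def missing_conditions(evidence: dict) -> list:
    """Return the conditions absent from an evidence package.

    An empty result means the package is minimally auditable;
    anything missing reduces the audit to a stylistic review.
    """
    return [c for c in REQUIRED_CONDITIONS if not evidence.get(c)]

package = {"canon_or_source_set": "/canon.md", "authority_boundary": True}
print(missing_conditions(package))
```

A checklist like this does not make an audit good; it only blocks audits that could not be audits at all.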
What it is not
Interpretive auditability is not mere explainability. Explainability can describe why a system behaved in a certain way without establishing whether the final interpretation was legitimate.
It is not citation presence. A citation can exist while the answer exceeds the cited canon.
It is not a ranking metric. A page can be visible, cited, or recommended while remaining weakly auditable.
It is not a promise that every model step is transparent. The goal is not total model introspection. The goal is enough governance evidence to make the answer contestable.
Common failure modes
- the output cites sources but does not preserve their scope;
- the answer appears coherent but hides a conflict of authority;
- the trace records retrieval but not the inference made from it;
- evidence exists but cannot be reconstructed by a third party;
- metrics show visibility but not fidelity;
- old versions continue to influence outputs without version disclosure;
- the system cannot explain why it answered rather than refused.
Relation to observability and proof
Interpretive observability makes variation visible. Interpretive auditability makes a specific case reviewable and challengeable. Proof of fidelity then tests whether the output remained inside the canonical perimeter.
These concepts are sequential. Observation is not audit. Audit is not proof. Proof is not a generic metric.
Relation to the evidence layer
The evidence layer is the organized surface that connects auditability to canon, trace, Q-Ledger observations, Q-Metrics indicators, and correction artifacts.
Interpretive auditability depends on that layer. Without an evidence layer, auditability remains aspirational.
Operational rule
A high-impact AI answer should not be accepted as governable unless it is interpretively auditable. If the path from source to output cannot be reconstructed, challenged, or corrected, the response should be narrowed, qualified, or treated as non-opposable.
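A minimal sketch of this rule as a gate, with illustrative names only:

```python
# Hedged sketch of the operational rule; the evidence keys are
# assumptions, not a published interface. A high-impact answer is
# accepted as governable only when its interpretive path can be
# reconstructed, challenged, and corrected.
def governance_decision(evidence: dict, high_impact: bool) -> str:
    """Return how to treat an answer under the operational rule."""
    auditable = all(
        evidence.get(key)
        for key in ("reconstructable", "challengeable", "correctable")
    )
    if not high_impact or auditable:
        return "accepted"
    # The rule does not demand silence: the answer is narrowed,
    # qualified, or treated as non-opposable.
    return "narrow-qualify-or-non-opposable"
```

The gate deliberately leaves low-impact answers alone: the rule constrains what may be relied on, not what may be said.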