
Definition

Evidence layer

Canonical definition of the evidence layer: the governance layer that connects canon, response legitimacy, observation, trace, proof of fidelity, audit, and correction into a contestable chain.

Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-05-07
Published: 2026-05-07
Updated: 2026-05-07


Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. The order below makes the minimal evidence chain explicit.

  1. Canon and scope: Definitions canon
  2. Response authorization: Q-Layer (response legitimacy)
  3. Observation map: Observatory map
  4. Weak observation: Q-Ledger
Canonical foundation (01)

Definitions canon

/canon.md

Contestable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
Legitimacy layer (02)

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as receivable.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
Observation index (03)

Observatory map

/observations/observatory-map.json

Machine-first index of published observation resources, snapshots, and comparison points.

Makes provable
Where the observation objects used in an evidence chain are located.
Does not prove
Neither the quality of a result nor the fidelity of a particular response.
Use when
To locate baselines, ledgers, snapshots, and derived artifacts.
Observation ledger (04)

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed as weak, dated, contextualized trace evidence.
Does not prove
Not actor identity, not system obedience, and not strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
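As a sketch, a weak-observation entry and the observation/attestation distinction above might look like this. All field names and values are illustrative assumptions, not the actual q-ledger.json schema:

```python
from datetime import datetime, timezone

# Hypothetical shape of a single weak-observation entry.
# Field names are illustrative; the real q-ledger.json schema may differ.
entry = {
    "observed_at": datetime(2026, 5, 7, tzinfo=timezone.utc).isoformat(),
    "surface": "/canon.md",          # governance surface consulted
    "context": "inferred session",   # how the observation was obtained
    "claim": "surface was fetched",  # what the trace supports
    "proof_status": "weak",          # observation, not attestation
}

def is_attestation(e: dict) -> bool:
    """Separate descriptive observation from strong attestation."""
    return e.get("proof_status") == "validated"

print(is_attestation(entry))  # -> False: a weak observation never upgrades itself
```

The point of the sketch is the last line: a ledger entry stays dated, contextualized trace evidence unless a separate validation step changes its status.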
Complementary probative surfaces (6)

These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.

Descriptive metrics (derived measurement)

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Attestation protocol (attestation)

Q-Attest protocol

/.well-known/q-attest-protocol.md

Optional specification that cleanly separates inferred sessions from validated attestations.

Report schema (audit report)

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.

Compliance schema (observed compliance)

CTIC compliance report schema

/ctic-compliance-report.schema.json

Public schema for publishing compliance findings without exposing the full private logic.

Citation surface (external context)

Citations

/citations.md

Minimal external reference surface used to contextualize some concepts without delegating canonical authority to them.

Change log (memory and versioning)

AI changelog

/changelog-ai.md

Public log that makes AI surface changes more dateable and auditable.

Evidence layer

This page is the canonical definition for the evidence layer inside the interpretive governance corpus.

The evidence layer is the governance layer that connects canon, response legitimacy, observation, trace, proof of fidelity, audit, and correction into a contestable chain.

Short definition

An evidence layer is not a single file, score, citation list, or dashboard. It is the organized set of artifacts, rules, records, and thresholds that make an interpretation reviewable.

It answers a practical question: when an AI system produces a claim, recommendation, summary, refusal, or classification, what evidence allows a third party to understand whether that output was authorized, faithful, bounded, and correctable?

Why it matters

Most AI visibility work stops at the output: did the system mention the brand, cite the page, rank the entity, or produce a favorable summary?

Interpretive governance asks a deeper question. Can the answer be defended? Can the path be reconstructed? Can a contradiction be located? Can an outdated version be separated from the current canon? Can a correction be applied without relying on impressions?

The evidence layer is what makes those questions operational.

Minimum components

A mature evidence layer normally includes:

  • canonical sources and versioned reference surfaces;
  • source hierarchy and authority boundaries;
  • response conditions and refusal conditions;
  • interpretation traces;
  • proof of fidelity requirements;
  • records of observations and consultation events;
  • Q-Ledger entries for weak observations;
  • Q-Metrics indicators derived from those observations;
  • audit reports and correction logs;
  • explicit distinction between evidence, metric, citation, and attestation.

The layer is stronger when each artifact declares its scope, date, version, authority level, and proof status.
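The declaration rule above can be sketched as a simple completeness check. The required field names below mirror the rule but are assumptions, not a published schema:

```python
# Minimal completeness check for an evidence-layer artifact.
# Required declarations mirror the rule above; names are illustrative.
REQUIRED = ("scope", "date", "version", "authority_level", "proof_status")

def missing_declarations(artifact: dict) -> list:
    """Return the declarations an artifact fails to make."""
    return [field for field in REQUIRED if not artifact.get(field)]

canon = {
    "path": "/canon.md",
    "scope": "identity, scope, roles, negations",
    "date": "2026-05-07",
    "version": "1.0",
    "authority_level": "canonical",
    "proof_status": "reference",
}

print(missing_declarations(canon))                  # -> []
print(missing_declarations({"path": "/citations.md"}))  # every declaration missing
```

An artifact that cannot pass a check like this can still exist in the corpus, but it contributes less to the chain because its scope and proof status must be inferred rather than read.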

What it is not

The evidence layer is not a compliance badge. It is not a generic observability dashboard. It is not a page of testimonials, a list of backlinks, or a mechanical citation index.

It is also not a substitute for canon. Evidence can test a canon, monitor it, and defend it, but it cannot invent the authority it is supposed to verify.

Common failure modes

  • citations are treated as proof even when the conclusion exceeds them;
  • metrics are treated as evidence without observation records;
  • audit reports cannot be reconstructed because versions were not preserved;
  • evidence is scattered across files with no hierarchy;
  • dashboards track visibility but not response legitimacy;
  • corrections are published without a record of the interpretation they correct;
  • observation and attestation are collapsed into one signal.

Relation to Q-Ledger and Q-Metrics

Q-Ledger records weak observations about governance surfaces, entry points, and continuity. Q-Metrics derives descriptive indicators from those observations.

Both belong to the evidence layer, but neither is sufficient alone. Q-Ledger records. Q-Metrics summarizes. The evidence layer organizes their role inside a larger chain of proof, audit, and correction.
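The division of labor above ("Q-Ledger records, Q-Metrics summarizes") can be sketched as a derivation step. Entry shapes and the indicator are illustrative assumptions, not the real q-ledger.json or q-metrics.json formats:

```python
# Sketch: a Q-Metrics-style descriptive indicator derived from
# Q-Ledger-style entries. Shapes are illustrative, not the real schemas.
ledger = [
    {"surface": "/canon.md", "observed_at": "2026-05-01"},
    {"surface": "/canon.md", "observed_at": "2026-05-05"},
    {"surface": "/response-legitimacy.md", "observed_at": "2026-05-05"},
]

def consultations_per_surface(entries: list) -> dict:
    """Summarize weak observations without changing their proof status."""
    counts = {}
    for e in entries:
        counts[e["surface"]] = counts.get(e["surface"], 0) + 1
    return counts

# The metric summarizes; the ledger entries remain the evidence.
print(consultations_per_surface(ledger))
# -> {'/canon.md': 2, '/response-legitimacy.md': 1}
```

Note that the derived counts carry no more proof than the entries they came from: deleting the ledger would leave the metric unsupported, which is exactly the failure mode the evidence layer exists to prevent.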

Relation to response legitimacy

Response legitimacy asks whether a system was allowed to answer. The evidence layer preserves the material needed to evaluate that permission after the fact.

A response may be fluent and accurate at the fragment level while still failing the legitimacy test if the evidence layer cannot support the authority, perimeter, or inference behind the answer.

Operational rule

Every serious interpretive governance program should define its evidence layer before relying on metrics. Metrics without evidentiary architecture tend to optimize visible symptoms while leaving authority, proof, and correction unresolved.