Evidence layer

Cross-cutting hub that connects canon, observation, trace, proof of fidelity, audit, and correction into an explicit evidence chain.

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Q-Layer in Markdown
  3. Observatory map
Canon and identity (#01)

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs
Public identity, roles, and attributes that must not drift.
Bounds
Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Policy and legitimacy (#02)

Q-Layer in Markdown

/response-legitimacy.md

Canonical surface for response legitimacy, clarification, and legitimate non-response.

Governs
Response legitimacy and the constraints that modulate its form.
Bounds
Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Observability (#03)

Observatory map

/observations/observatory-map.json

Structured map of observation surfaces and monitored zones.

Governs
The description of gaps, drifts, snapshots, and comparisons.
Bounds
Confusion between observed signal, fidelity proof, and actual steering.

Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.

Complementary artifacts (3)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Observability (#04)

Q-Ledger JSON

/.well-known/q-ledger.json

Machine-first journal of observations, baselines, and versioned gaps.

Observability (#05)

Q-Metrics JSON

/.well-known/q-metrics.json

Descriptive metrics surface for observing gaps, snapshots, and comparisons.

Observability (#06)

Q-Attest protocol

/.well-known/q-attest-protocol.md

Published protocol that frames attestation, evidence, and the reading of observations.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope · Definitions canon
  2. Response authorization · Q-Layer: response legitimacy
  3. Observation map · Observatory map
  4. Weak observation · Q-Ledger
Canonical foundation (#01)

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
Legitimacy layer (#02)

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as receivable.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
Observation index (#03)

Observatory map

/observations/observatory-map.json

Machine-first index of published observation resources, snapshots, and comparison points.

Makes provable
Where the observation objects used in an evidence chain are located.
Does not prove
Neither the quality of a result nor the fidelity of a particular response.
Use when
To locate baselines, ledgers, snapshots, and derived artifacts.
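As an illustration only, locating resources through the map could look like the sketch below. The field names (`resources`, `kind`, `path`) and the sample entries are assumptions about the shape of /observations/observatory-map.json, not its published format; adapt them to the actual map.

```python
# Hypothetical sketch of consuming /observations/observatory-map.json.
# The field names ("resources", "kind", "path") and the sample entries
# are illustrative assumptions, not the published format.
import json

sample_map = json.loads("""
{
  "resources": [
    {"kind": "baseline", "path": "/observations/baseline-2024.json"},
    {"kind": "ledger",   "path": "/.well-known/q-ledger.json"},
    {"kind": "snapshot", "path": "/observations/snapshot-07.json"}
  ]
}
""")

def locate(observatory_map, kind):
    """Return the paths of all published resources of a given kind."""
    return [r["path"] for r in observatory_map.get("resources", [])
            if r.get("kind") == kind]

print(locate(sample_map, "baseline"))
# → ['/observations/baseline-2024.json']
```

The point of such an index is purely locational: it tells a consumer where evidence objects live, not what they prove.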
Observation ledger (#04)

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed, as weak, dated, contextualized trace evidence.
Does not prove
Neither actor identity, system obedience, nor strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
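To make the "weak, dated, contextualized" framing concrete, the sketch below treats ledger entries as dated traces that can be filtered but never upgraded into attestation. The entry fields (`observed_at`, `surface`, `note`) are illustrative assumptions, not the published q-ledger.json schema.

```python
# Hypothetical sketch: q-ledger.json entries as weak, dated traces.
# The fields ("observed_at", "surface", "note") are illustrative
# assumptions, not the published schema.
import json
from datetime import date

ledger = json.loads("""
[
  {"observed_at": "2024-05-02", "surface": "/canon.md",
   "note": "inferred consultation"},
  {"observed_at": "2024-06-11", "surface": "/response-legitimacy.md",
   "note": "inferred consultation"}
]
""")

def observed_since(entries, cutoff):
    """Keep only entries dated on or after the cutoff.

    The result is observation, not attestation: it says a behavior
    was seen, not that a system obeyed or that a response was faithful.
    """
    return [e for e in entries
            if date.fromisoformat(e["observed_at"]) >= cutoff]

recent = observed_since(ledger, date(2024, 6, 1))
print([e["surface"] for e in recent])
# → ['/response-legitimacy.md']
```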
Complementary probative surfaces (6)

These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.

Descriptive metrics · Derived measurement

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Attestation protocol · Attestation

Q-Attest protocol

/.well-known/q-attest-protocol.md

Optional specification that cleanly separates inferred sessions from validated attestations.

Report schema · Audit report

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
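As a minimal illustration of consuming such an interface, the sketch below checks a report for the three top-level areas named above. The key names (`scope`, `metrics`, `drifts`) are assumptions mirroring this description, not the actual contents of /iip-report.schema.json; a real consumer would validate against the published schema with a JSON Schema validator.

```python
# Hypothetical sketch: a minimal completeness check for an interpretation
# integrity report. The required keys are assumptions mirroring the prose
# above, not the actual /iip-report.schema.json; real validation would use
# a JSON Schema library against the published schema.
import json

REQUIRED_KEYS = ("scope", "metrics", "drifts")

def missing_keys(report):
    """Return the required top-level keys absent from a report."""
    return [k for k in REQUIRED_KEYS if k not in report]

report = json.loads('{"scope": "site-wide", "metrics": {}}')
print(missing_keys(report))
# → ['drifts']
```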

Compliance schema · Observed compliance

CTIC compliance report schema

/ctic-compliance-report.schema.json

Public schema for publishing compliance findings without exposing the full private logic.

Citation surface · External context

Citations

/citations.md

Minimal external reference surface used to contextualize some concepts without delegating canonical authority to them.

Change log · Memory and versioning

AI changelog

/changelog-ai.md

Public log that makes AI surface changes more dateable and auditable.

Why this page exists

The site already publishes doctrines on interpretive observability, proof of fidelity, the canon-output gap, and the interpretation integrity audit.

What was still missing was a simple assembly point: a page that shows how these objects line up into a coherent evidence regime.

In other words, governance files publish the conditions of reading. The evidence layer publishes the conditions of challengeability.

The minimal evidence chain

A serious evidence layer does not start with a score. It starts with an order.

  1. Canon: what is authoritative, and within what scope?
  2. Response legitimacy: when may a system answer, suspend, or refuse?
  3. Observation: what was actually seen or detected under declared conditions?
  4. Trace: which sources, rules, and window produced the observed state?
  5. Proof of fidelity: does the output still remain inside the canon?
  6. Audit: is the gap qualified, dated, versioned, and actionable?
  7. Correction: what changes, where, and how is resorption tracked?

As soon as one of these steps is missing, the chain weakens: one can still comment on an effect, but one can no longer hold it up as opposable proof.
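The ordering above can be sketched as a checklist in which a step only counts if every earlier step is present, so any gap downgrades the claim. The step names come from the list; the qualification labels and thresholds are illustrative assumptions, not a published scoring rule.

```python
# Hypothetical sketch: the seven-step chain as an ordered checklist.
# Step names come from the list above; the labels and thresholds are
# illustrative assumptions, not a published scoring rule.

EVIDENCE_CHAIN = [
    "canon",
    "response_legitimacy",
    "observation",
    "trace",
    "proof_of_fidelity",
    "audit",
    "correction",
]

def qualify(steps_present):
    """Return the strongest claim the provided steps support.

    The chain is ordered: a step only counts if every step
    before it is also present.
    """
    depth = 0
    for step in EVIDENCE_CHAIN:
        if step not in steps_present:
            break
        depth += 1
    if depth == len(EVIDENCE_CHAIN):
        return "opposable proof with tracked correction"
    if depth >= 5:
        return "auditable fidelity claim"
    if depth >= 3:
        return "dated observation (weak evidence)"
    return "commentary on an effect"

print(qualify({"canon", "response_legitimacy", "observation"}))
# → dated observation (weak evidence)

# A set missing "canon" collapses to commentary,
# however many later steps are present.
print(qualify({"observation", "trace", "audit"}))
# → commentary on an effect
```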

What each level makes possible

Canon and scope

The machine-first canon and the Q-Layer define the terrain. Without them, behaviors can still be observed, but what drifts cannot be clearly qualified.

Observation and derived measurement

Q-Ledger and Q-Metrics make some effects more visible. They do not, on their own, establish that an output is faithful.

This is exactly the line drawn in "Observation vs attestation: why Q-Ledger is deliberately weak" and in "GEO metrics do not govern representation".
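A derived metrics layer in this sense does nothing more than make snapshots comparable, as in the sketch below. The metric names are illustrative assumptions about what q-metrics.json might expose, not its published contents.

```python
# Hypothetical sketch: comparing two metric snapshots the way a derived
# layer like q-metrics.json might. The metric names are illustrative
# assumptions, not the published surface.

def snapshot_delta(before, after):
    """Per-metric deltas between two snapshots.

    Deltas describe variation; they do not, on their own,
    establish that any output is faithful to the canon.
    """
    return {k: after[k] - before[k] for k in before.keys() & after.keys()}

print(snapshot_delta({"gap_count": 4, "citations": 10},
                     {"gap_count": 6, "citations": 10}))
```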

Trace and fidelity

The interpretation trace reconstructs the path. Proof of fidelity shows that the path remains compatible with the canon.

The page "Proof of fidelity: why citation is no longer enough" explains why citation alone is not yet proof.

Audit and correction

The interpretation integrity audit protocol then turns these objects into diagnosis, correction planning, and versioned follow-up.

That articulation is extended by "Applied observability and published probative surfaces".

Reading the published evidence artifacts

The artifacts highlighted on this page do not all carry the same proof level.

What this layer is not

This page does not create certification, obedience guarantees, or performance promises.

It states a more modest and more useful frame:

  • do not confuse observation with attestation;
  • do not confuse citation with fidelity;
  • do not confuse metrics with proof;
  • do not confuse local proof with system-wide stability.

Recommended sequence: Canon → Q-Layer → Observations → Evidence layer → Audit → Correction.