Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Observation map: Observatory map
- 02 Weak observation: Q-Ledger
- 03 Derived measurement: Q-Metrics
- 04 Attestation: Q-Attest protocol
Observatory map
/observations/observatory-map.json
Machine-first index of published observation resources, snapshots, and comparison points.
- Makes provable: where the observation objects used in an evidence chain are located.
- Does not prove: the quality of a result or the fidelity of a particular response.
- Use when: you need to locate baselines, ledgers, snapshots, and derived artifacts.
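
As orientation only, the sketch below renders one plausible shape for an observatory-map entry as a Python dictionary serialized to JSON; the field names (resource, kind, published_at, snapshot_of) are hypothetical assumptions, not the published schema.

```python
import json

# Hypothetical shape for one entry in /observations/observatory-map.json.
# Field names are illustrative assumptions, not the published schema.
observatory_entry = {
    "resource": "/.well-known/q-ledger.json",  # where an observation object is located
    "kind": "ledger",                          # e.g. ledger, snapshot, baseline, derived
    "published_at": "2025-01-15",              # keeps the surface dateable
    "snapshot_of": None,                       # set when the entry is a comparison point
}

print(json.dumps(observatory_entry, indent=2))
```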
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, recorded as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: you need to distinguish descriptive observation from strong attestation.
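
The minimal sketch below illustrates what a weak, dated, contextualized trace could look like as a ledger entry; all field names and values are illustrative assumptions, not the actual Q-Ledger format.

```python
import json

# Hypothetical Q-Ledger entry: a weak, dated, contextualized trace of an
# observed consultation. It records that something was observed, not who
# acted, whether a system obeyed, or whether activation can be proven.
ledger_entry = {
    "observed_at": "2025-01-15T10:32:00Z",
    "surface": "/.well-known/q-metrics.json",
    "observation": "consultation_inferred",  # descriptive, not attested
    "evidence_level": "weak",                # keeps it distinct from strong attestation
    "context": "inferred session, request pattern only",
}

print(json.dumps(ledger_entry, indent=2))
```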
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: the truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: you need to compare windows, prioritize an audit, or document a before/after.
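
To make the before/after use concrete, the sketch below compares two hypothetical metric snapshots and reports only the deltas; the indicator names and values are invented for illustration.

```python
# Before/after comparison of two hypothetical Q-Metrics snapshots.
# Indicator names and values are invented; the output is a descriptive
# delta, not a claim about fidelity or steering.
before = {"entrypoint_compliance": 0.8, "continuity": 0.9}
after = {"entrypoint_compliance": 1.0, "continuity": 0.7}

for name in before:
    delta = after[name] - before[name]
    print(f"{name}: {before[name]} -> {after[name]} ({delta:+.2f})")
```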
Q-Attest protocol
/.well-known/q-attest-protocol.md
Optional specification that cleanly separates inferred sessions from validated attestations.
- Makes provable: the minimal frame required to elevate an observation toward a verifiable attestation.
- Does not prove: that an attestation endpoint exists, or that an attestation has already been received.
- Use when: a page deals with strong proof, operational validation, or the separation between evidence levels.
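
The sketch below illustrates the separation the protocol is meant to formalize, assuming a hypothetical validation record with a verified flag; it does not reproduce the actual protocol.

```python
from typing import Optional

# Sketch of the separation the protocol formalizes: an inferred session stays
# descriptive unless an explicit validation record elevates it. The record
# shape and the "verified" flag are assumptions made for illustration.
def evidence_level(observation: dict, validation: Optional[dict]) -> str:
    """Return the evidentiary level of an observation without silent upgrades."""
    if validation is not None and validation.get("verified") is True:
        return "attested"   # validated attestation (Q-Attest territory)
    return "inferred"       # weak observation (Q-Ledger territory)

print(evidence_level({"surface": "/.well-known/q-ledger.json"}, None))                # inferred
print(evidence_level({"surface": "/.well-known/q-ledger.json"}, {"verified": True}))  # attested
```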
Complementary probative surfaces (2)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
AI changelog
/changelog-ai.md
Public log that makes AI surface changes more dateable and auditable.
q-metrics.yml
/.well-known/q-metrics.yml
Published surface that helps make an evidence chain more reconstructible.
Q-Metrics
This page is the canonical definition for Q-Metrics inside the interpretive governance corpus.
Q-Metrics are descriptive indicators derived from Q-Ledger observations to make governance-surface continuity, consultation, drift, and rupture measurable without converting metrics into proof.
Short definition
Q-Metrics transform weak observations into comparable signals. They help answer questions such as: were expected governance entry points present, were they observed over time, did a rupture appear, did consultation escape the declared path, and did continuity improve or degrade?
They are useful because interpretive governance needs measurement. They are dangerous when treated as proof.
Why it matters
Many AI visibility dashboards measure effects: presence, mention, citation, rank, answer frequency, or share of voice. Those signals can be useful, but they do not govern representation.
Q-Metrics sit one level closer to governance. They describe the state of governance surfaces and observation continuity. They can help detect drift, weakness, or missing entry points before a broader interpretive failure stabilizes.
Minimum indicator families
A Q-Metrics layer can include the following families; a minimal computation sketch for two of them appears below:
- entrypoint compliance: whether expected governance files or routes are observed;
- continuity: whether observations persist across time and versions;
- freshness: whether the current surface is recent enough to govern the claim;
- escape rate: whether systems bypass declared governance surfaces;
- sequence fidelity: whether expected consultation or reading sequences remain intact;
- rupture markers: whether a previously observed artifact disappeared, changed, or became incoherent;
- gap indicators: whether observed outputs suggest increasing distance from canon.
These indicators remain descriptive unless a separate proof or attestation threshold is declared.
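
As a minimal sketch of how two of these families could be derived from ledger observations, assuming hypothetical entry-point paths, field names, and a fixed reference date:

```python
from datetime import date

# Hypothetical ledger excerpt: dated observations of governance entry points.
# Paths, field names, and the reference date are assumptions for illustration.
reference_date = date(2025, 1, 15)
expected_entrypoints = {"/.well-known/q-ledger.json", "/.well-known/q-metrics.json"}
observations = [
    {"surface": "/.well-known/q-ledger.json", "observed_on": date(2025, 1, 10)},
    {"surface": "/.well-known/q-metrics.json", "observed_on": date(2024, 11, 2)},
]

# Entrypoint compliance: share of expected entry points observed at least once.
observed = {o["surface"] for o in observations}
compliance = len(observed & expected_entrypoints) / len(expected_entrypoints)

# Freshness: days since the most recent observation of each entry point.
freshness = {
    path: (reference_date - max(o["observed_on"] for o in observations if o["surface"] == path)).days
    for path in observed
}

print(f"entrypoint compliance: {compliance:.0%}")  # descriptive indicator, not proof
print(f"freshness in days: {freshness}")
```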
What Q-Metrics is not
Q-Metrics is not proof of fidelity. It is not a compliance certification. It is not a universal visibility score. It does not prove that a model understood, used, or respected a source.
It reduces ambiguity around observed governance conditions. It does not replace an interpretation trace, reconstructible evidence, or proof of fidelity.
Common failure modes
- treating a good metric as proof of representation quality;
- optimizing the metric while the canon remains vague;
- measuring visibility without measuring answer legitimacy;
- comparing snapshots without preserving version context;
- using Q-Metrics as a substitute for audit where a material decision is at stake.
Relation to Q-Ledger
Q-Ledger is the observation base. Q-Metrics is derived from that base. If the ledger is weak, incomplete, or ambiguous, the metrics inherit that weakness.
This dependency is useful because it keeps measurement honest: a metric should remain tied to the observations from which it derives.
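
One way to keep that dependency explicit, sketched below with assumed identifiers and field names, is to have every derived metric carry the ledger entries it was computed from and to yield nothing when the observation base is empty.

```python
from typing import Optional

# Sketch of keeping a metric tied to its observation base: the derived value
# carries the ledger entry identifiers it came from, and no value is produced
# when the base is empty. Identifiers and field names are assumptions.
def derive_continuity(ledger_entries: list) -> Optional[dict]:
    if not ledger_entries:
        return None  # a weak or empty ledger yields no metric at all
    windows = {entry["window"] for entry in ledger_entries}
    return {
        "metric": "continuity",
        "value": len(windows),                                      # windows with observations
        "derived_from": [entry["id"] for entry in ledger_entries],  # provenance stays attached
    }

print(derive_continuity([{"id": "obs-001", "window": "2025-W02"},
                         {"id": "obs-002", "window": "2025-W03"}]))
print(derive_continuity([]))  # None: the metric inherits the ledger's weakness
```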
Relation to observability and auditability
Interpretive observability uses metrics, ledgers, and proof surfaces to make variation visible. Interpretive auditability asks whether the resulting case can be reconstructed, challenged, and corrected.
Q-Metrics supports both. It replaces neither.
Operational rule
Every Q-Metric should declare its observation source, window, evidentiary level, and limitation. A descriptive metric should remain descriptive unless it is explicitly connected to a reconstructible evidence package and proof threshold.
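
A minimal sketch of such a declaration, with assumed field names rather than a required schema, could look like this:

```python
import json

# One way to follow the rule: a metric never ships as a bare number, but as a
# record that declares its source, window, evidentiary level, and limitation.
# Field names are illustrative assumptions, not a required schema.
metric_record = {
    "metric": "entrypoint_compliance",
    "value": 1.0,
    "observation_source": "/.well-known/q-ledger.json",
    "window": "2025-01-01/2025-01-15",
    "evidence_level": "descriptive",
    "limitation": "does not prove fidelity, obedience, or activation",
}

print(json.dumps(metric_record, indent=2))
```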