Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Q-Metrics JSON
/.well-known/q-metrics.json
Descriptive metrics surface for observing gaps, snapshots, and comparisons.
- Governs: the description of gaps, drifts, snapshots, and comparisons.
- Bounds: confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
Q-Metrics YAML
/.well-known/q-metrics.yml
YAML projection of Q-Metrics for instrumentation and structured reading.
- Governs: the description of gaps, drifts, snapshots, and comparisons.
- Bounds: confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
Q-Ledger JSON
/.well-known/q-ledger.json
Machine-first journal of observations, baselines, and versioned gaps.
- Governs: the description of gaps, drifts, snapshots, and comparisons.
- Bounds: confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
Complementary artifacts (3)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Q-Ledger YAML
/.well-known/q-ledger.yml
YAML projection of the Q-Ledger journal for procedural reading or tooling.
Q-Attest protocol
/.well-known/q-attest-protocol.md
Published protocol that frames attestation, evidence, and the reading of observations.
Observatory map
/observations/observatory-map.json
Structured map of observation surfaces and monitored zones.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
1. Response authorization (Q-Layer: response legitimacy)
2. Observation map (Observatory map)
3. Weak observation (Q-Ledger)
4. Derived measurement (Q-Metrics)
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Observatory map
/observations/observatory-map.json
Machine-first index of published observation resources, snapshots, and comparison points.
- Makes provable: where the observation objects used in an evidence chain are located.
- Does not prove: the quality of a result, or the fidelity of a particular response.
- Use when: locating baselines, ledgers, snapshots, and derived artifacts.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: the truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: comparing windows, prioritizing an audit, or documenting a before/after.
Complementary probative surfaces (1)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
Q-Attest protocol
/.well-known/q-attest-protocol.md
Optional specification that cleanly separates inferred sessions from validated attestations.
Making governance measurable: Q-Metrics
Publishing governance files is necessary, but it is not sufficient. The operational question is straightforward: are the artifacts actually discovered, requested, and maintained in a stable way over time?
Q-Metrics is the measurement layer derived from Q-Ledger. Its role is to make discoverability, drift, and continuity signals readable across snapshots without pretending to certify intent or understanding.
From static file to observable signal
A static file is only a declaration until it leaves traces in the observable environment. Q-Metrics turns those traces into comparable indicators.
The key point is this: governance files publish reading conditions; Q-Metrics observes only the traces left by those conditions when they are reached, bypassed, or interrupted. To place the metric layer correctly, it helps to read Machine-first is not enough: why governance files change the reading regime, What each governance file actually does, and GEO metrics see the effect, not the conditions.
The objective is modest but crucial: determine whether the machine-first surface is being reached, how often it escapes the intended path, and whether the observed sequence remains compatible with the declared discovery model.
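The traces mentioned above can be sketched as structured access events. This is a minimal illustration only: the field names, paths, and event shape are assumptions for the sketch, not the published Q-Ledger schema.

```python
from dataclasses import dataclass

# Hypothetical shape of one observed access event; the fields are
# assumptions for illustration, not the published Q-Ledger schema.
@dataclass(frozen=True)
class AccessEvent:
    path: str      # resource requested, e.g. "/.well-known/q-metrics.json"
    ts: str        # ISO 8601 timestamp of the observation
    snapshot: str  # snapshot window the event belongs to

# A minimal observed sequence for one snapshot window (illustrative data).
events = [
    AccessEvent("/.well-known/q-metrics.json", "2025-01-10T09:00:00Z", "s1"),
    AccessEvent("/.well-known/q-ledger.json",  "2025-01-10T09:00:02Z", "s1"),
    AccessEvent("/robots.txt",                 "2025-01-10T09:00:05Z", "s1"),
]

MACHINE_FIRST = {"/.well-known/q-metrics.json", "/.well-known/q-ledger.json"}

# Was the machine-first surface reached at all in this window?
reached = any(e.path in MACHINE_FIRST for e in events)
print(reached)  # True
```

From event records of this kind, the three indicator families below can each be derived as a simple aggregate.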
Three indicator families (minimal core)
The public core of Q-Metrics is intentionally small. It is meant to make the baseline legible without exposing proprietary calibration logic.
1) Entrypoint compliance
Entrypoint compliance measures whether requests begin where the governance surface expects them to begin. A compliant sequence does not prove good interpretation, but it does show that discoverability starts from the intended machine-first gateways rather than from accidental or derivative locations.
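A minimal sketch of this indicator, assuming a sequence is a list of requested paths and that the declared entrypoints are the two well-known files (the sample sequences are invented):

```python
# Entrypoint compliance: share of observed sequences whose first request
# hits a declared machine-first gateway. Paths below are illustrative.
ENTRYPOINTS = {"/.well-known/q-metrics.json", "/.well-known/q-ledger.json"}

sequences = [
    ["/.well-known/q-metrics.json", "/observations/observatory-map.json"],
    ["/index.html", "/.well-known/q-metrics.json"],   # starts off-gateway
    ["/.well-known/q-ledger.json", "/.well-known/q-metrics.yml"],
]

def entrypoint_compliance(seqs, entrypoints):
    """Fraction of sequences that begin on a declared entrypoint."""
    compliant = sum(1 for s in seqs if s and s[0] in entrypoints)
    return compliant / len(seqs)

print(round(entrypoint_compliance(sequences, ENTRYPOINTS), 2))  # 0.67
```

Note that the second sequence does reach the machine-first surface, but not first; this is exactly the distinction the indicator captures.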
2) Escape rate
Escape rate tracks how often the observed sequence leaves the expected perimeter. An escape does not automatically mean failure. But repeated escapes indicate that the declared path is not stable enough, or that secondary surfaces are competing with the canonical route.
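A sketch under the same assumptions, where the "expected perimeter" is modeled as a set of paths (an illustrative choice, not the published definition):

```python
# Escape rate: fraction of requests in a sequence that leave the
# expected perimeter. The perimeter set is an assumption for the sketch.
PERIMETER = {
    "/.well-known/q-metrics.json",
    "/.well-known/q-ledger.json",
    "/observations/observatory-map.json",
}

def escape_rate(sequence, perimeter):
    """Share of requests that fall outside the declared perimeter."""
    if not sequence:
        return 0.0
    escapes = sum(1 for path in sequence if path not in perimeter)
    return escapes / len(sequence)

observed = [
    "/.well-known/q-metrics.json",
    "/index.html",                        # escape
    "/observations/observatory-map.json",
    "/blog/post-12.html",                 # escape
]
print(escape_rate(observed, PERIMETER))  # 0.5
```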
3) Sequence fidelity
Sequence fidelity checks whether the observed order of access remains compatible with the declared reading sequence. This is a continuity signal: it helps determine whether the ecosystem keeps traversing the machine-first surface in a coherent way from one snapshot to another.
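One way to sketch "compatible order" is as a subsequence check: the declared steps must appear in order within the observed run, with interleaved accesses tolerated. This is an assumption for illustration; the published definition of fidelity may be stricter or looser.

```python
# Sequence fidelity (sketch): the declared reading sequence must occur
# in order as a subsequence of the observed accesses.
def is_subsequence(declared, observed):
    """True if every declared step occurs, in order, within observed."""
    it = iter(observed)
    return all(step in it for step in declared)

declared = [
    "/.well-known/q-metrics.json",
    "/.well-known/q-ledger.json",
]
observed = [
    "/.well-known/q-metrics.json",
    "/robots.txt",                   # interleaved access is tolerated
    "/.well-known/q-ledger.json",
]
print(is_subsequence(declared, observed))  # True
```

The iterator idiom makes the check one pass: each `step in it` consumes the observed run up to the first match, so out-of-order steps fail.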
How to read these signals without over-interpreting them
Q-Metrics is a layer of observability, not a theory of truth. Good values do not prove doctrinal fidelity, identity stabilization, or lawful use. Weak values do not automatically prove failure either. They indicate that discoverability behavior should be reviewed in relation to the baseline and the archive.
The right reading is comparative and longitudinal. A single snapshot says little. A baseline plus later snapshots begins to show trends.
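The comparative reading can be sketched as a per-indicator delta against the baseline. The indicator names, values, and the idea of reporting raw deltas are illustrative assumptions, not the published comparison method.

```python
# Longitudinal reading (sketch): judge a snapshot against the baseline,
# not in isolation. All names and values below are illustrative.
baseline = {"entrypoint_compliance": 0.80, "escape_rate": 0.10}
snapshot = {"entrypoint_compliance": 0.65, "escape_rate": 0.25}

def drift(baseline, snapshot):
    """Per-indicator delta between a later snapshot and the baseline."""
    return {k: round(snapshot[k] - baseline[k], 2) for k in baseline}

print(drift(baseline, snapshot))
# {'entrypoint_compliance': -0.15, 'escape_rate': 0.15}
```

A single drift table still says little on its own; the trend only becomes readable once several snapshots are compared against the same baseline.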
What Q-Metrics does not replace
Q-Metrics does not replace:
- the doctrinal canon;
- the machine-first files themselves;
- the archive and baseline logic of Q-Ledger;
- auditability of outputs;
- interpretive governance at the level of answer legitimacy.
It is a narrow but useful layer: a way to measure whether discoverability leaves a visible, stable, and comparable trail.
Resources
Q-Metrics should be read with:
- the baseline observations page;
- the Q-Ledger archive logic;
- the runbook that explains how logs become snapshots and audit surfaces.
Read also
- Baseline observations: Q-Ledger and Q-Metrics
- Runbook and ops from log to snapshot
- Public baseline (phase 0)