Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Q-Metrics JSON
/.well-known/q-metrics.json
Descriptive metrics surface for observing gaps, snapshots, and comparisons.
- Governs: the description of gaps, drifts, snapshots, and comparisons.
- Bounds: confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
Q-Metrics YAML
/.well-known/q-metrics.yml
YAML projection of Q-Metrics for instrumentation and structured reading.
- Governs: the description of gaps, drifts, snapshots, and comparisons.
- Bounds: confusion between observed signal, fidelity proof, and actual steering.
Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
- 04 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: the truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: comparing windows, prioritizing an audit, or documenting a before/after.
Complementary probative surfaces (2)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
Q-Attest protocol
/.well-known/q-attest-protocol.md
Optional specification that cleanly separates inferred sessions from validated attestations.
AI changelog
/changelog-ai.md
Public log that makes AI surface changes more dateable and auditable.
Q-Metrics
Q-Metrics is a derived metrics layer built on Q-Ledger. Its purpose is to make discoverability, continuity, and drift around governance entrypoints measurable in a form that remains comparable from one snapshot to another.
Warning: non-normative, descriptive. These metrics describe an observed state. They do not constitute proof of compliance, attestation, or certification.
Why Q-Metrics exists
Publishing governance files is not enough. The operational question is whether those artifacts are actually discovered, consulted, and maintained with continuity over time. Q-Metrics condenses those signals without pretending to be stronger than observation itself.
Entrypoints
- JSON: /.well-known/q-metrics.json
- YAML: /.well-known/q-metrics.yml
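A consumer of the JSON entrypoint might read it as sketched below. This is a minimal, non-authoritative example: the sample document and every field name in it (`version`, `window`, `indicators`, and the three indicator keys) are assumptions for illustration; the authoritative schema is whatever the published /.well-known/q-metrics.json actually declares.

```python
import json

# Hypothetical Q-Metrics snapshot. Field names are illustrative assumptions,
# not the published schema.
SAMPLE = """
{
  "version": "1.0",
  "window": {"from": "2024-05-01", "to": "2024-05-31"},
  "indicators": {
    "entrypoint_compliance": 0.82,
    "escape_rate": 0.11,
    "sequence_fidelity": 0.97
  }
}
"""

def read_indicators(raw: str) -> dict:
    """Parse a Q-Metrics snapshot and return its indicator block."""
    doc = json.loads(raw)
    return doc["indicators"]
```

In practice the same function would be fed the body fetched from the well-known path; parsing is kept separate from transport so the snapshot can also be read from an archive.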
Minimum indicator families
- Entrypoint compliance: proportion of expected entrypoints observed as consulted.
- Escape rate: proportion of observations leaving the expected entrypoint surface.
- Sequence fidelity: continuity of chained snapshots and absence of archive breaks.
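The three indicator families can be sketched as simple functions over observed consultations. This is an illustrative model only: the `Observation` shape, the `EXPECTED_ENTRYPOINTS` set, and the snapshot-id chaining are hypothetical stand-ins for whatever Q-Ledger actually records.

```python
from dataclasses import dataclass

# Hypothetical expected entrypoint surface; the real set is declared by the
# governance files themselves.
EXPECTED_ENTRYPOINTS = {
    "/.well-known/q-metrics.json",
    "/.well-known/q-metrics.yml",
    "/.well-known/q-ledger.json",
    "/canon.md",
}

@dataclass
class Observation:
    path: str          # resource observed as consulted
    snapshot_id: int   # position in the snapshot chain

def entrypoint_compliance(obs: list) -> float:
    """Proportion of expected entrypoints observed at least once."""
    seen = {o.path for o in obs} & EXPECTED_ENTRYPOINTS
    return len(seen) / len(EXPECTED_ENTRYPOINTS)

def escape_rate(obs: list) -> float:
    """Proportion of observations outside the expected entrypoint surface."""
    if not obs:
        return 0.0
    outside = [o for o in obs if o.path not in EXPECTED_ENTRYPOINTS]
    return len(outside) / len(obs)

def sequence_fidelity(obs: list) -> float:
    """Share of consecutive snapshot ids with no break in the chain."""
    ids = sorted({o.snapshot_id for o in obs})
    if len(ids) < 2:
        return 1.0
    continuous = sum(1 for a, b in zip(ids, ids[1:]) if b - a == 1)
    return continuous / (len(ids) - 1)
```

Each function returns a ratio in [0, 1], which keeps the three families comparable across snapshots without implying anything stronger than observation.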
How to read the metrics
A higher compliance rate suggests effective discoverability. A higher escape rate may indicate exploration outside the expected entrypoints, indexing instability, or routing drift. A lower sequence fidelity signals a continuity or archive problem that should be investigated before any stronger interpretation is made.
Limits
- Q-Metrics inherits the limits of Q-Ledger: edge visibility, caching, restrictions, and sampling bias.
- A signal is not a proof. It must be read inside its regime and time window.
- This layer does not replace a stronger attestation mechanism when such a mechanism is required.
For the doctrinal distinction between a descriptive metrics layer and actual representation governance, see GEO metrics do not govern representation.
Governance role
Q-Metrics gives the ecosystem a public measurement layer without pretending to replace audit, canon, or proof. Its role is to make discoverability and continuity legible enough to support later interpretation.
How Q-Metrics should be interpreted
Q-Metrics should be read comparatively and longitudinally. A single value says little by itself. What matters is the relation between the baseline, later windows, and the declared machine-first path.
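The comparative reading above can be made concrete with a small drift helper. This is a sketch under assumptions: the indicator dictionaries and their keys are hypothetical, and only the window-to-window comparison logic is the point.

```python
def indicator_drift(baseline: dict, window: dict) -> dict:
    """Signed change of each shared indicator between two snapshots.

    Indicators present in only one snapshot are ignored, since a drift
    value for them would not be comparable.
    """
    shared = baseline.keys() & window.keys()
    return {k: round(window[k] - baseline[k], 4) for k in shared}
```

A positive drift on escape rate or a negative drift on sequence fidelity would then flag a window for closer audit, without the single value being treated as proof.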
Canonical definition linkage
For the canonical term definition, see Q-Metrics. This doctrine page explains the operational and machine-first metrics layer, while the definition fixes the term inside the interpretive-governance lexicon.
How Q-Metrics should be used
Q-Metrics should be read as interpretive indicators, not as vanity metrics. Their purpose is to detect whether a corpus, answer, or system is becoming more stable, more auditable, and more faithful to the canon. They are not a substitute for rankings, traffic, citations, or commercial conversion metrics.
Useful Q-Metrics usually compare expected interpretation with observed output. They may track canon-output gaps, unsupported inferences, source substitution, stale-state persistence, cross-system divergence, refusal quality, or proof completeness. The value comes from repeated observation, not from a single score.
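One of these comparisons, the canon-output gap, can be sketched in miniature. This is a deliberately naive model: real claim extraction from responses is hard, so here claims are plain strings and the function only shows the shape of the comparison, not an actual fidelity check.

```python
def canon_output_gap(canon_claims: set, output_claims: set) -> dict:
    """Split a response's claims into supported and unsupported
    against the canon, and report the unsupported share."""
    unsupported = output_claims - canon_claims
    return {
        "supported": sorted(output_claims & canon_claims),
        "unsupported": sorted(unsupported),
        "gap_rate": len(unsupported) / len(output_claims) if output_claims else 0.0,
    }
```

Tracked over repeated observations rather than read as a single score, the gap rate behaves like the other indicators: a descriptive signal that can prioritize an audit, not a proof of fidelity.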
Limits of measurement
No metric can eliminate interpretive judgment. A high score can hide the wrong assumption if the measurement target is poorly defined. A low score can be acceptable when the corpus intentionally refuses to answer. Metrics must therefore be tied to response conditions, source hierarchy, and proof of fidelity.
Reading rule
This doctrinal note on Q-Metrics should be read as a positioning surface within the interpretive governance corpus. It does not replace the canonical definitions or the operational frameworks. It explains why a distinction matters, where the doctrine draws a boundary, and what kind of error becomes more likely when that boundary is ignored.
The reader should separate three levels. First, the conceptual level: what this page names or refuses to name. Second, the procedural level: what a system, organization, or evaluator would need to check before relying on a response. Third, the evidence level: what would make the interpretation reconstructible, contestable, and corrigible. A doctrinal page is strongest when it keeps those three levels visible rather than collapsing them into a persuasive formulation.
Use in the corpus
Use this page as a bridge between definitions, frameworks and observations. It can guide a reading path, justify why a framework exists, or explain why a response should be bounded, refused or audited. It should not be treated as a runtime instruction, a guarantee of model behavior or a substitute for evidence. If a response based on this doctrine cannot show which source was used, which inference was allowed and which uncertainty remained unresolved, the doctrine remains a reading principle rather than an operational control.