GEO metrics see the effect, not the conditions

A GEO metric observes a downstream effect. It does not publish the reading conditions that make that effect more or less probable.

Collection: Article
Type: Article
Category: AI governance
Published: 2026-03-25
Updated: 2026-03-25
Reading time: 6 min

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Canonical AI entrypoint
  2. Definitions canon
  3. Identity lock
Entrypoint #01

Canonical AI entrypoint

/.well-known/ai-governance.json

Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.

Governs
Access order across surfaces and initial precedence.
Bounds
Unconstrained readings that bypass the canon or the published order.

Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
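
For illustration, a minimal sketch of the shape such an entrypoint could take. Every field name below is an assumption; the published file defines its own schema.

    # Hypothetical shape for /.well-known/ai-governance.json; the field
    # names are illustrative assumptions, not the published schema.
    entrypoint = {
        "version": "1.0",
        "read_first": ["/canon.md"],
        "precedence": [
            "/canon.md",
            "/identity.json",
            "/dualweb-index.md",
        ],
    }

    # A reader can honor the declared order; nothing forces it to.
    for surface in entrypoint["precedence"]:
        print("next surface to read:", surface)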

Canon and identity #02

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs
Public identity, roles, and attributes that must not drift.
Bounds
Extrapolations, entity collisions, and unwarranted requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Canon and identity #03

Identity lock

/identity.json

Identity file that bounds critical attributes and reduces biographical or professional collisions.

Governs
Public identity, roles, and attributes that must not drift.
Bounds
Extrapolations, entity collisions, and unwarranted requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
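
For illustration only, a minimal sketch of what an identity lock could contain; the keys below are assumptions, not the file's actual schema.

    # Hypothetical shape for /identity.json; all keys are illustrative
    # assumptions rather than the file's published schema.
    identity = {
        "name": "Example Entity",
        "locked_attributes": {       # critical attributes that must not drift
            "role": "author",
            "domain": "example.org",
        },
        "not_to_be_confused_with": [  # known biographical or professional collisions
            "Example Entity (homonym, unrelated sector)",
        ],
    }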

Complementary artifacts (3)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Observability #04

Q-Ledger JSON

/.well-known/q-ledger.json

Machine-first journal of observations, baselines, and versioned gaps.
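
For illustration, a single entry of such a journal could look like the sketch below; every key is an assumption rather than the published schema.

    # Hypothetical shape for one entry of /.well-known/q-ledger.json;
    # the keys are illustrative assumptions.
    ledger_entry = {
        "observed_at": "2026-03-20T10:12:00Z",  # dated, contextualized trace
        "surface": "/canon.md",
        "observation": "canonical role restated without drift",
        "baseline": "v3",                       # versioned point of comparison
        "evidence_level": "inferred",           # weak observation, not attestation
    }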

Observability #05

Q-Metrics JSON

/.well-known/q-metrics.json

Descriptive metrics surface for observing gaps, snapshots, and comparisons.
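
As an illustration, a snapshot on that surface might take a shape like the sketch below. The field names are assumptions, though discoverability, escape, and continuity are dimensions this corpus itself names later on.

    # Hypothetical shape for /.well-known/q-metrics.json; field names
    # are illustrative assumptions, not the published schema.
    snapshot = {
        "window": "2026-03",
        "indicators": {
            "discoverability": 0.72,  # descriptive signal, not a fidelity score
            "escape": 0.11,           # outputs leaving the published frame
            "continuity": 0.64,       # attribute stability across outputs
        },
    }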

Entrypoint #06

Dual Web index

/dualweb-index.md

Canonical index of published surfaces, precedence, and extended machine-first reading.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Definitions canon (canon and scope)
  2. Q-Layer: response legitimacy (response authorization)
  3. Q-Ledger (weak observation)
  4. Q-Metrics (derived measurement)
Canonical foundation #01

Definitions canon

/canon.md

Enforceable baseline for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
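
A minimal sketch of what evaluating fidelity against the canon can mean in practice; the canonical excerpt, the restated output, and the matching rule below are all illustrative assumptions.

    # Sketch: the canon supplies reference values; fidelity is evaluated by
    # comparing an observed restatement against them. Illustrative only.
    canonical = {"role": "author", "domain": "example.org"}
    restated = {"role": "editor", "domain": "example.org"}  # observed output

    gaps = {k: (v, restated.get(k)) for k, v in canonical.items()
            if restated.get(k) != v}
    print("fidelity gaps:", gaps)  # {'role': ('author', 'editor')}
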
Legitimacy layer #02

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as receivable.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
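
A minimal sketch of those three regimes reduced to a decision function; the inputs it consults are assumptions about what such a surface would specify.

    # Sketch of the three regimes: answer, suspend, legitimate non-response.
    # The decision inputs are illustrative assumptions.
    def legitimacy_regime(in_scope: bool, grounded: bool) -> str:
        if not in_scope:
            return "non-response"  # declining is the legitimate outcome here
        if not grounded:
            return "suspend"       # defer until the answer can be grounded
        return "answer"

    print(legitimacy_regime(in_scope=True, grounded=False))  # -> suspend
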
Observation ledger #03

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed as weak, dated, contextualized trace evidence.
Does not prove
Neither actor identity, system obedience, nor strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
Descriptive metrics #04

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Makes provable
That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
Does not prove
Neither the truth of a representation, the fidelity of an output, nor real steering on its own.
Use when
To compare windows, prioritize an audit, and document a before/after.
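
A minimal sketch of that before/after use, reusing the hypothetical indicator names from the snapshot sketch above; the numbers are invented for illustration.

    # Sketch: comparing two descriptive windows. Indicator names reuse the
    # hypothetical ones above; the values are invented for illustration.
    before = {"discoverability": 0.61, "escape": 0.18, "continuity": 0.57}
    after = {"discoverability": 0.72, "escape": 0.11, "continuity": 0.64}

    for name in before:
        print(f"{name}: {before[name]:.2f} -> {after[name]:.2f} "
              f"({after[name] - before[name]:+.2f})")
    # The deltas describe movement; on their own they do not prove that a
    # governance change caused it, nor that outputs became more faithful.
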
Complementary probative surfaces (2)

These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.

Attestation protocol (Attestation)

Q-Attest protocol

/.well-known/q-attest-protocol.md

Optional specification that cleanly separates inferred sessions from validated attestations.
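
A minimal sketch of the separation the protocol makes explicit, assuming a hypothetical record shape with a kind field.

    # Sketch: inferred sessions and validated attestations must never be
    # merged into one evidence level. The record shape is an assumption.
    records = [
        {"id": "s-101", "kind": "inferred"},  # weak, reconstructed trace
        {"id": "a-007", "kind": "attested"},  # validated under the protocol
    ]

    by_kind = {"inferred": [], "attested": []}
    for record in records:
        by_kind[record["kind"]].append(record["id"])
    print(by_kind)  # the two evidence levels stay separately addressable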

Report schema (Audit report)

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
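
As a sketch, a report instance covering the three parts named above; the field names are assumptions, not the published schema.

    # Hypothetical instance of a report meant to follow /iip-report.schema.json;
    # every field name is an illustrative assumption.
    iip_report = {
        "scope": ["/canon.md", "/identity.json"],
        "metrics": {"fidelity_gaps": 2, "windows_compared": 3},
        "drift_taxonomy": ["attribute substitution", "entity collision"],
    }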

Editorial Q-Layer charter
Assertion level: methodological clarification + doctrinal reframing
Scope: the exact place of GEO metrics in relation to canon, governance, and observability
Negations: this text does not disqualify observation, dashboards, or comparative measurement
Immutable attributes: a metric sees a downstream probabilistic effect; it does not publish the conditions that produce that effect


The right shift

The GEO debate is often framed incorrectly. Metrics are asked whether a representation is good, faithful, stable, or governable, and people are then surprised when the answers remain weak.

The reason is simple: a GEO metric first observes a downstream effect. It observes appearances, citations, frequencies, gaps, proximity, or absence. It does not publish the reading conditions that make that effect more or less probable.

What metrics actually see

Metrics mostly see output traces:

  • a name that circulates;
  • an attribute that returns;
  • a formulation that holds;
  • a competitor that takes its place;
  • a frequency shift;
  • a recurring drift.

Those are useful signals. But they remain downstream signals.

What they do not directly see

They do not directly see:

  • the canon that sets the reference;
  • the AI use policy that bounds response legitimacy;
  • the machine-first visibility doctrine that articulates readability, documentation, and governance;
  • the files that publish reading order, identity, exclusions, recurring errors, and non-goals.

In other words, they do not first see the reading regime. They see the traces it leaves behind when it works well or badly.

The actual steering chain

The doctrinally serious chain is not:

metric → truth of representation

The serious chain is rather:

canon → machine-first architecture → governance files → observed outputs → metrics
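
The asymmetry can be made concrete in a minimal sketch: the reading conditions appear in the signature of the upstream step, never in the metric's. All names below are illustrative.

    # Sketch of the chain: conditions shape outputs upstream; the metric
    # consumes outputs only, so conditions never appear in its signature.
    def produce_outputs(reading_conditions: dict) -> list:
        # Upstream, probabilistic: conditions make certain traces more likely.
        return [f"trace shaped by {reading_conditions['canon']}"]

    def metric(observed_outputs: list) -> int:
        # Downstream: counts traces; it cannot see the conditions themselves.
        return len(observed_outputs)

    print(metric(produce_outputs({"canon": "/canon.md"})))  # effect only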

That is exactly why "GEO metrics do not govern representation" insists on the difference between visibility, fidelity, stability, and governability.

Why this distinction matters strategically

When this chain is forgotten, people correct what is visible instead of correcting what produces the visible. They act on the dashboard, not on the reading conditions.

Conversely, once one understands that metrics see the effect and not the conditions, steering becomes coherent again:

  • the canon is published more clearly;
  • surfaces are more clearly prioritized;
  • governance files are reinforced;
  • outputs are then observed to see whether they become more compatible with that frame.

Where Q-Metrics actually sits

Q-Metrics illustrates that distinction well. The metric layer may describe discoverability, escape, and continuity. It does not, by itself, attest the fidelity of a reconstruction.

To understand why, one has to reread "Making governance measurable: Q-Metrics" in light of two upstream texts: "Machine-first is not enough: why governance files change the reading regime" and "What each governance file actually does".

The right question

The wrong question is: how many times am I cited?

The right question is: which reading conditions have I published, and what traces do they leave in outputs?

From there, the metric becomes useful again. It stops pretending to replace doctrine. It returns to being an observational layer.