Expertise

Drift detection

Service-facing expertise entry for drift detection: detecting when variation becomes meaningful divergence from canon, baseline, or declared response regime across time, systems, or releases.

Collection: Expertise
Type: Expertise
Domain: drift-detection

Engagement decision

How to recognize that this axis should be mobilized

Use this page as a decision page. The objective is not only to understand the concept, but to identify the symptoms, framing errors, use cases, and surfaces to open in order to correct the right problem.

Typical symptoms

  • The same question produces materially different answers over time.
  • A correction appears to work briefly, then weakens or reverses.
  • Different systems preserve vocabulary while shifting scope or authority.
  • Variation becomes frequent enough that teams can no longer tell what is baseline and what is drift.

Frequent framing errors

  • Treating every variation as drift without a declared baseline.
  • Looking only at outputs while ignoring canon, scope, and response conditions.
  • Assuming that a correction is complete because one snapshot improved.
  • Confusing observability dashboards with proof of fidelity.

Use cases

  • Monitoring a corrected corpus after publication.
  • Tracking post-release or post-rebrand stability.
  • Detecting recurring drift across systems, prompts, or languages.
  • Prioritizing correction before drift hardens into debt.

What gets corrected concretely

  • Publication of a baseline and test window.
  • Separation between benign variance and canon-relevant drift.
  • Monitoring of correction lag and recurrence.
  • Escalation from observation to audit when thresholds are crossed.
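The correction loop above can be sketched as a minimal check. This is an illustrative sketch, assuming a hypothetical baseline map of canonical answers and an arbitrary similarity threshold; neither the names nor the threshold come from this site's published surfaces:

```python
from difflib import SequenceMatcher

# Hypothetical baseline: question family -> canonical answer (illustrative only).
BASELINE = {
    "who-governs-identity": "The definitions canon fixes identity, roles, and negations.",
}

# Below this similarity, a variation is treated as canon-relevant drift rather
# than benign variance. The 0.6 threshold is an assumption, not doctrine.
DRIFT_THRESHOLD = 0.6

def classify(question: str, observed: str) -> str:
    """Separate benign variance from canon-relevant drift against a declared baseline."""
    canonical = BASELINE.get(question)
    if canonical is None:
        # Without a declared baseline, a variation cannot be called drift.
        return "no-baseline"
    similarity = SequenceMatcher(None, canonical.lower(), observed.lower()).ratio()
    if similarity >= DRIFT_THRESHOLD:
        return "benign-variance"
    return "drift"  # candidate for escalation from observation to audit
```

The point of the sketch is the ordering: declare the baseline first, then classify, then escalate; a score computed without a baseline is only noise.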

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Q-Ledger JSON
  3. Q-Metrics JSON

Canon and identity #01

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs: Public identity, roles, and attributes that must not drift.
Bounds: Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Observability #02

Q-Ledger JSON

/.well-known/q-ledger.json

Machine-first journal of observations, baselines, and versioned gaps.

Governs: The description of gaps, drifts, snapshots, and comparisons.
Bounds: Confusion between observed signal, fidelity proof, and actual steering.

Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.

Observability #03

Q-Metrics JSON

/.well-known/q-metrics.json

Descriptive metrics surface for observing gaps, snapshots, and comparisons.

Governs: The description of gaps, drifts, snapshots, and comparisons.
Bounds: Confusion between observed signal, fidelity proof, and actual steering.

Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.

Complementary artifacts (1)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Context and versioning #04

AI changelog

/changelog-ai.md

Log of governance, identity, and machine-first surface changes.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Weak observation: Q-Ledger
  2. Derived measurement: Q-Metrics
  3. Audit report: IIP report schema
  4. Memory and versioning: AI changelog

Observation ledger #01

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
Does not prove: Actor identity, system obedience, or strong proof of activation.
Use when: It is necessary to distinguish descriptive observation from strong attestation.

Descriptive metrics #02

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
Does not prove: The truth of a representation, the fidelity of an output, or real steering on its own.
Use when: Comparing windows, prioritizing an audit, or documenting a before/after.

Report schema #03

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.

Makes provable: The minimal shape of a reconstructible and comparable audit report.
Does not prove: Private weights, internal heuristics, or the success of a concrete audit.
Use when: A page discusses audit, probative deliverables, or opposable reports.

Change log #04

AI changelog

/changelog-ai.md

Public log that makes AI surface changes more dateable and auditable.

Makes provable: That a probative state can be placed back into an explicit version trajectory.
Does not prove: The effective absorption of a drift, or third-party consultation of the change.
Use when: A page deals with snapshots, rectification, withdrawal, or supersession.

Drift detection

This page captures a service-facing label. On this site, “drift detection” means detecting when variation becomes a meaningful divergence from canon, baseline, or declared response regime.

The label is useful, but only if drift is not reduced to vague “model changes” or raw dashboard volatility.

What counts as drift here

Not every difference is drift.

A difference becomes drift when it matters with respect to:

  • the canonical perimeter;
  • the preserved authority source;
  • the response conditions that should have bounded the answer;
  • the baseline or release state that should still govern.

Drift detection therefore belongs with interpretive observability, canon-output gap, interpretive debt, and interpretive sustainability.

When this entry point becomes useful

Drift detection becomes especially useful when:

  • a correction must be monitored after publication;
  • a rebrand, merger, or perimeter change has just been released;
  • cross-model answers remain unstable even though the site looks coherent;
  • recurring errors keep returning after partial fixes.

Detection requires a baseline

On this site, drift detection is never treated as a free-floating score.

It requires at minimum:

  • a declared baseline;
  • a test window or recurrence window;
  • a question family;
  • a declared canon or prevailing source hierarchy;
  • a distinction between observation and proof.

Without that discipline, one only accumulates noise.
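The minimum discipline above can be written down as a declared record before any measurement starts. A minimal sketch follows; the class and field names are illustrative assumptions, not fields of this site's published JSON surfaces:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DriftBaseline:
    """Declared preconditions for calling a variation 'drift' (illustrative)."""
    baseline_id: str        # the declared baseline or release state
    window_start: date      # start of the test / recurrence window
    window_end: date        # end of the window
    question_family: str    # which family of questions is being tracked
    canon_source: str       # prevailing source hierarchy, e.g. "/canon.md"

    def covers(self, observed_on: date) -> bool:
        """An observation only counts if it falls inside the declared window."""
        return self.window_start <= observed_on <= self.window_end
```

An observation dated outside the declared window is excluded explicitly, rather than silently folded into the comparison.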

Typical outputs

A drift detection engagement typically points toward:

  • a baseline and a comparison window;
  • a classification of stable variance versus actual drift;
  • recurrence and correction-lag signals;
  • an escalation path toward audit where needed;
  • a correction priority order.
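The recurrence and correction-lag signals listed above can be derived from dated observations with two small helpers. The observation format here is an assumption for illustration, not drawn from the published surfaces:

```python
from datetime import date

def correction_lag_days(drift_observed: date, correction_published: date) -> int:
    """Days between a drift first being observed and its correction being published."""
    return (correction_published - drift_observed).days

def recurs(observations: list[date], correction_published: date) -> bool:
    """A drift recurs if it is observed again after its correction was published."""
    return any(seen > correction_published for seen in observations)
```

A long lag or a recurrence after correction is exactly the kind of signal that justifies escalating from observation toward audit.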

What this label does not replace

Drift detection does not replace:

  • the canon;
  • proof of fidelity;
  • correction governance;
  • release discipline.

It is an upstream visibility function. It tells us that a divergence matters. It does not, by itself, settle what should prevail.

Doctrinal map

On this site, “drift detection” redistributes toward:

Back to the map: Expertise.