
Drift detection: canonical definition

Drift detection is defined here as a canonical concept covering AI interpretation, authority, evidence, and response legitimacy.

Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-05-09
Published: 2026-05-09
Updated: 2026-05-09

Drift detection

Drift detection is a market-facing entry point into interpretive governance. It names a service, audit, or diagnostic layer that organizations can recognize before they have adopted the stricter vocabulary of canon, source hierarchy, proof, response legitimacy, and interpretive risk.

This page is the canonical definition of Drift detection on Gautier Dorval. It is part of phase 13: service, audit, and market bridge surfaces designed to connect real search demand to the doctrinal architecture built in the definitions, lexicons, frameworks, and machine-first artifacts.


Short definition

Drift detection is the process of detecting and qualifying changes in how an AI system reconstructs an entity, brand, doctrine, offer, source, or concept over time. It becomes valuable when it distinguishes noise from stable-state drift, interpretive drift, source substitution, and canonical erosion.

The important point is that the term is useful only when it remains routed. It may begin as a search query, a commercial label, a service label, or a dashboard category, but it must eventually connect to canonical evidence, authority boundaries, and correction logic.


What it is not

It is not a volatility chart, a sentiment tracker, or a generic alert. Drift detection must ask whether the change affects meaning, authority, recommendation, scope, or defensibility.

The phase 13 rule is simple: a market label is not yet a governance regime. It becomes governable when the target, corpus, source hierarchy, trace, proof threshold, and correction pathway are explicit.


Common failure modes

  • flagging every variation as drift
  • ignoring the baseline canonical state
  • measuring only appearance while missing framing degradation
  • failing to separate model variation from corpus-driven change

These failures occur when the organization stays at the level of visibility language instead of moving toward interpretive control. A weak audit sees that something happened. A strong audit explains why it happened, which source governed the result, what level of evidence exists, and which correction can be defended.
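The first two failure modes, flagging every variation as drift and ignoring the baseline, can be made concrete with a small sketch. Everything below is an illustrative assumption: the function name, the persistence threshold, and the three labels are not part of any defined methodology. The idea is simply that a deviation only counts as drift when an alternative reconstruction persistently displaces the declared baseline, rather than appearing once.

```python
from collections import Counter

def classify_drift(baseline: str, observations: list[str],
                   persistence_threshold: float = 0.6) -> str:
    """Label repeated reconstructions of one entity against a declared baseline.

    - 'stable' : the baseline still dominates the observations
    - 'noise'  : deviations exist but no alternative is persistent
    - 'drift'  : one alternative reconstruction has become dominant
    """
    counts = Counter(observations)
    baseline_share = counts.get(baseline, 0) / len(observations)
    if baseline_share >= persistence_threshold:
        return "stable"
    top, top_count = counts.most_common(1)[0]
    if top != baseline and top_count / len(observations) >= persistence_threshold:
        return "drift"
    return "noise"

# Four of five samples still match the baseline: stable, not drift.
print(classify_drift("canonical", ["canonical", "canonical", "variant",
                                   "canonical", "canonical"]))
# → stable
```

A real audit would compare structured answers rather than strings, but the design choice stands: drift is a claim about persistence against a baseline, not about any single divergent output.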


How it should be used

Use Drift detection as an entry surface, not as a terminal label. The audit or service should begin with the user-facing symptom, then route the case toward the appropriate governing layer:

  1. identify the observed answer, absence, citation, comparison, or recommendation;
  2. declare the canonical target and the expected perimeter;
  3. separate visibility, citability, recommendability, fidelity, authority, and risk;
  4. preserve the prompt, system, date, answer, sources, and interpretation trace;
  5. qualify the gap against the canonical corpus;
  6. recommend a correction that can be tracked over time.
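The routing steps above imply a minimal evidence record, especially step 4 (preserve the prompt, system, date, answer, and sources). A sketch of such a record follows; the field names and example values are hypothetical assumptions, not a defined schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InterpretationTrace:
    """Hypothetical evidence record for one drift-detection case."""
    prompt: str                 # exact query issued to the system (step 1)
    system: str                 # model or product that answered
    observed_on: date           # when the answer was captured
    answer: str                 # the reconstructed output
    sources: list[str] = field(default_factory=list)  # citations surfaced
    canonical_target: str = ""  # declared entity under audit (step 2)
    qualified_gap: str = ""     # finding against the canonical corpus (step 5)
    correction: str = ""        # trackable correction recommended (step 6)

trace = InterpretationTrace(
    prompt="What is Acme known for?",
    system="example-llm",
    observed_on=date(2026, 5, 9),
    answer="Acme is a logistics provider.",
    sources=["https://example.com/profile"],
)
print(trace.system)
```

Keeping the record explicit is what lets the resulting diagnosis be challenged, repeated, and compared over time, rather than remaining a black-box opinion.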

This is why phase 13 connects service pages back to definitions rather than leaving them as marketing pages. The service label opens the file. The doctrine governs the diagnosis.


Governance implication

The governance implication is that service-facing audits must not become black-box opinions. They should produce evidence that can be challenged, repeated, compared, and connected to correction. Their value comes from the transition from symptom to proof.

For this site, Drift detection should be read together with the service audits and market entry points, the AI search and interpretive audits hub, and the underlying evidence layer.


Reading guidance

Use drift detection as a bounded interpretive term for AI representation. The page should help a reader decide when the concept applies, when it does not apply, and which neighboring concepts should be consulted before drawing a conclusion.

What to verify

  • Whether the concept is being used as a precise diagnostic term or as a generic label.
  • Whether the statement remains inside the canon and the declared perimeter.
  • Whether the output preserves uncertainty, source hierarchy, and response conditions.
  • Whether an adjacent concept would describe the situation more accurately.

Practical boundary

This concept should not be isolated from the rest of the corpus. It works best when read with the definitions, frameworks, observations, and service pages that clarify its evidence requirements and operational limits.