Glossary: drifts and interpretive inertia

Type: Lexicographic index

Associated canonical definitions: Interpretive hallucination, Interpretive governance, Interpretive debt, Interpretive sustainability

Conceptual version: 1.0

Stabilization date: 2026-02-20

This page groups the phenomena that degrade the fidelity of interpretations produced by AI systems (LLMs, generative engines, agents, RAG pipelines) when meaning drifts, is smoothed over, or freezes.
These phenomena are not isolated “bugs”: they result from a probabilistic reconstruction of meaning, fed by partial signals, successive aggregations, and unstable contexts.

Each entry links to a canonical definition (if it exists), a framework (if applicable), and related pages for deeper understanding.

Terms in the “drifts and inertia” family

Interpretive hallucination

Production of a plausible response that is not verifiably anchored in sources, often stabilized by form rather than by evidence.

Interpretive smoothing

Tendency of an AI system to erase rough edges, nuances, and negations in order to fit a concept into a standardized category.

Interpretive inertia

Persistence of a prior interpretation, even after modification of sources, due to the progressive stabilization of an algorithmic “truth”.

Interpretive remanence

Reappearance of an old interpretation in certain contexts, even when a correction seems established elsewhere.

Interpretive tail

Intermediate phase in which a correction propagates unevenly: some outputs are corrected while others remain frozen or ambiguous.

State drift

Divergence between a real state (price, availability, policy, status) and the state returned by an AI system, when an update fails to propagate.
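
As a rough illustration only (not part of the canonical definition), state drift can be pictured as a mismatch between a value read from the system of record and the value an AI layer keeps serving. The field names, values, and helper below are hypothetical; a minimal sketch, assuming the real state and the served state can both be read as simple records.

```python
from datetime import datetime, timezone

# Hypothetical example: the system of record has moved on,
# but the state served by an AI layer (cache, index, or model snapshot) has not.
source_of_truth = {"price": 49.90, "updated_at": datetime(2026, 2, 18, tzinfo=timezone.utc)}
ai_returned = {"price": 59.90, "as_of": datetime(2026, 1, 5, tzinfo=timezone.utc)}

def detect_state_drift(truth: dict, served: dict) -> bool:
    """Return True when the served state no longer matches the real state."""
    return truth["price"] != served["price"]

if detect_state_drift(source_of_truth, ai_returned):
    lag = source_of_truth["updated_at"] - ai_returned["as_of"]
    print(f"State drift detected: update has not propagated for {lag.days} days")
```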

Compliance drift

Progressive gap between expected constraints (canon, rules, response conditions) and observed outputs, despite an apparently stable documentary base.


Recommended links

Next page: Glossary: canon, authority, non-response