Glossary of interpretive governance

Type: Index (glossary)

Conceptual version: 1.0

Stabilization date: 2026-02-20

This glossary is a structured map of observable phenomena on a web interpreted by AI systems.
It organizes concepts, risks, mechanisms, and operational frameworks around a central principle: the governance of meaning.

Each section below is a thematic entry point.
It links to canonical definitions, applicable frameworks, and doctrinal pages that help stabilize an interpretation over time.


1. Drifts and interpretive inertia

Phenomena of degradation, instability, or rigidity of meaning in the responses generated by AI systems.


2. Canon, authority, and non-response

Legitimacy boundaries: what a model can infer, what it must refuse, and how to arbitrate authority conflicts.


3. Evidence, audit, and observability

Measurement, traceability, and version discipline: making an interpretation enforceable rather than merely plausible.


4. Capture, contamination, and collisions

Signal warfare, semantic dominance, and entity confusion in open environments.


5. Agentic systems, RAG, and environments

Application surfaces for interpretive governance: open web, closed environments, agentic systems, RAG pipelines.


6. Sustainability, debt, and correction

The real cost of maintaining a canonical truth over time: interpretive debt, correction budget, and version discipline.


7. Interpretive risk (historical)

The first mapping of risks linked to hallucinations, attribution errors, and distortions of meaning.


How to use this glossary

  • To understand a specific concept: consult its page in /definitions/.
  • To apply a method: open the associated framework in /frameworks/.
  • To situate a phenomenon within doctrine: consult /doctrine/.