Visual schema
From term to framework
Definitions stabilize the vocabulary before doctrine, frameworks, and operational usage.
- Canonical term: name without ambiguity.
- Scope: delimit what the term covers.
- Doctrine: connect the term to the doctrinal frame.
- Framework: make it applicable inside a system.
- Usage: mobilize it in posts, cases, and audits.
Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs: access order across surfaces and initial precedence.
- Bounds: free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs: access order across surfaces and initial precedence.
- Bounds: free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs: public identity, roles, and attributes that must not drift.
- Bounds: extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
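To make the reading-order idea above concrete, here is a minimal sketch of how a client could follow a governance entrypoint of this kind. The field names (`surfaces`, `role`, `precedence`) are illustrative assumptions, not the published schema: only the actual /.well-known/ai-governance.json is canonical.

```python
# Hypothetical sketch of a governance entrypoint, with assumed field names.
# The real schema is whatever the published file declares.
entrypoint = {
    "surfaces": [
        {"path": "/.well-known/ai-governance.json", "role": "entrypoint"},
        {"path": "/ai-manifest.json", "role": "manifest"},
        {"path": "/canon.md", "role": "canon"},
    ],
    # In a conflict, later surfaces here would defer to earlier ones.
    "precedence": ["/canon.md", "/ai-manifest.json"],
}

def reading_order(doc):
    """Return surface paths in the declared order (the recommended reading sequence)."""
    return [s["path"] for s in doc["surfaces"]]

print(reading_order(entrypoint))
```

Note that, exactly as the "Does not guarantee" clauses state, such a file only publishes an order; nothing in the sketch forces a reader to honor it.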
Complementary artifacts (2)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Identity lock
/identity.json
Identity file that bounds critical attributes and reduces biographical or professional collisions.
Claims registry
/claims.json
Registry of published claims, their scope, and their declarative status.
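As a rough illustration of what "published claims, their scope, and their declarative status" could look like as data, here is a hypothetical single entry. The keys (`claim`, `scope`, `status`) are assumptions for illustration; the published /claims.json defines the real schema.

```python
# Hypothetical shape for one entry of a claims registry; key names are assumed.
claim = {
    "claim": "This registry is declarative, not an operational method.",
    "scope": "site-wide",
    "status": "declarative",
}

def is_declarative(entry):
    # A declarative claim announces a bound; it does not promise a result.
    return entry["status"] == "declarative"

print(is_declarative(claim))
```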
Definitions and canonical concepts
This page serves as a public registry of canonical definitions used in the interpretive governance doctrine developed by Gautier Dorval.
It lists the primary conceptual references that govern term usage on this site and that aim to frame machine interpretation when these terms are encountered.
This registry constitutes neither an operational method nor a promise of results. It exists to reduce ambiguity by declaring stable conceptual perimeters.
The canonical entity graph is published here: /entity-graph.jsonld
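For readers unfamiliar with the format, a JSON-LD entity graph is a keyed structure of typed nodes. The fragment below is a minimal, hypothetical sketch in that spirit; the property choices (schema.org `Person`, `@graph`) are assumptions, and only the published /entity-graph.jsonld is the canonical source.

```python
import json

# Minimal, hypothetical JSON-LD-style entity graph. Property choices are
# illustrative; the published /entity-graph.jsonld is canonical.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Person",
            "@id": "#gautier-dorval",
            "name": "Gautier Dorval",
        }
    ],
}

print(json.dumps(entity_graph, indent=2))
```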
Quick navigation
- Observable phenomena (field)
- Authority, limits, and non-response
- Evidence, audit, and observability
- Governances and architecture
- Application contexts
Observable phenomena (field)
- Interpretive invisibilization
- Interpretive collision
- Interpretive capture
- Interpretive inertia
- State drift
- Interpretive smoothing
- Interpretive remanence
- Citation persistence
- Neighborhood contamination
- Interpretive trail
Authority, limits, and non-response
- Authority boundary
- Surviving authority
- Authority Governance (Layer 3)
- Authority conflict
- Legitimate non-response
- Canonical silence
- Governed negation
- Response conditions
- Interpretive hallucination
- Interpretability perimeter
Evidence, audit, and observability
- Interpretive evidence
- Reconstructable evidence
- Proof of fidelity
- Interpretation trace
- Canon-output gap
- Interpretation integrity audit
- Interpretive observability
- Semantic calibration
- Compliance drift
- Interpretive debt
- Interpretive sustainability
- Version power
- Canonical fragility
Governances and architecture
- Interpretive governance
- Endogenous governance
- Distributed interpretive authority governance
- Exogenous governance
- External coherence graph
- Memory governance
- SSA-E + A2 + Dual Web
- Semantic compression
- AI disambiguation
- Interpretive SEO
Application contexts
- Agentic
- Non-agentic systems
- Post-semantics (thinking & reasoning) vs interpretive governance
- Interpretive SEO vs Entity SEO vs GEO vs AEO
Market and bridge vocabulary
Some terms now circulate as easier entry labels for the same family of problems. On this site, they are captured explicitly and then redirected toward the doctrinal canon.
- Semantic integrity: readable entry label for stability of meaning under interpretation.
- Semantic accountability: bridge term for assumable meaning under proof, authority, and response conditions.
- LLM visibility: broad label requalified through structural visibility, citability, and recommendability.
- Delegated meaning: bridge expression for reconstructed meaning that no longer remains directly anchored to canon.
- Interpretive evidence: broader evidentiary family for how meaning was formed, bounded, and challenged.
- Reconstructable evidence: evidence packaged well enough for third-party reconstruction and later review.
Recommended clarifications:
- Semantic integrity vs interpretation integrity
- LLM visibility vs citability vs recommendability
- Delegated meaning vs silent delegation of authority
- Interpretive evidence vs proof of fidelity
Recently published definitions
- Citation persistence
- Surviving authority
- Interpretive evidence
- Reconstructable evidence
- Proof of fidelity
- Interpretation trace
- Canon-output gap
- Interpretation integrity audit
Recently captured risk, chain, and reporting vocabulary
These terms are now also captured through service-facing expertise pages. On this site, they remain operational entry points that redistribute toward Interpretive risk, Interpretive governance for AI agents, the Evidence layer, and Proof of fidelity.
In this section
Canonical definition of citation persistence: when a deleted, retracted, corrected, or superseded source continues to influence AI outputs through citations, rankings, profiles, summaries, and other secondary artifacts.
Canonical definition of surviving authority: when a source, reprise, archive, ranking, or secondary artifact keeps framing answer reconstruction as if it still held primacy.
Bridge definition of delegated meaning: a situation in which meaning is reconstructed by synthesis from dispersed signals rather than directly preserved from canon.
Bridge definition of interpretive evidence: the broader evidentiary family that makes a reading, synthesis, or answer contestable without confusing evidence with proof of fidelity.
Bridge definition of LLM visibility: a broad public term for presence or mobilizability in LLM outputs, requalified on this site through structural visibility, citability, and recommendability.
Bridge definition of reconstructable evidence: evidence packaged well enough that a third party can reconstruct the corpus, scope, version, and observed state without depending on narrative convenience.
Bridge definition of semantic accountability: the capacity to justify, delimit, and assume reconstructed meaning under explicit authority, proof, and response conditions.
Bridge definition of semantic integrity: a useful public term for meaning stability under AI interpretation, treated on this site as an entry point toward interpretation integrity.
Canonical definition of distributed interpretive authority governance: a multisite framework that explicitly assigns doctrinal, institutional, commercial, product, and probative authority roles across one ecosystem.
Canonical definition of structural visibility: the capacity of a source to become mobilizable as a framing, definition, or stabilization surface inside an AI response, even without dominating the initial query match.
Canonical definition of proof of fidelity: the minimum evidence required to show that an AI output remains faithful to the canon rather than merely plausible.
Canonical definition of early machine visibility: the capacity of a governed, documented, and technically sound site to be understood, extracted, and recommended by AI systems before strong classical organic performance has been consolidated.
Exogenous governance designates all methods aimed at reducing contradictions, ambiguity, and conflicts in external sources used by AI systems to reconstruct an entity.
External Authority Control (EAC): canonical definition within interpretive governance, semantic architecture, and AI systems.
Agentic designates an execution mode where an AI system plans, sequences, and executes actions based on an objective, often over multiple steps, with varying autonomy.
AI disambiguation designates all methods aimed at stabilizing entity identification by search engines and generative AI, reducing confusions, semantic collisions, and erroneous attributions.
Canonical fragility designates the vulnerability of a declared truth when its authority depends on too narrow an anchoring: a single page, format, access path, or signal type.
Canonical silence designates a governed state where the absence of information in the canon is not a gap to fill, but an explicit bound: the system must not produce a statement beyond what is declared.
Compliance drift designates the phenomenon where an AI system produces responses increasingly incompatible with declared rules, policies, or constraints, without explicit canon change.
Endogenous governance designates all mechanisms by which an entity canonizes, stabilizes, and makes enforceable its own truth within its surfaces, so AI can activate it without depending on external interpretations.
The external coherence graph designates the mapping of public signals that frame how an entity is interpreted by AI systems in the open web.
Governed negation designates a canonical property where an entity, corpus, or system explicitly declares what is not true, not covered, or must not be inferred.
The interpretability perimeter designates the exact zone where an AI system can produce a legitimate interpretation from a given corpus, without crossing the authority boundary.
Interpretive capture designates the phenomenon where an actor or signal set manages to impose a framing in AI systems, making the produced interpretation oriented, stable, and dominant.
An interpretive collision occurs when an AI system fuses, confuses, or mixes two distinct entities, concepts, or reference frames because their signals are too close or ambiguous.
Interpretive debt designates the cumulative liability produced when approximations on high-impact information are repeated, reformulated, and stabilized by automated interpretation systems.
Primary canonical definition of interpretive governance: the mechanism by which the interpretation space of a site, entity, or corpus is explicitly bounded to limit plausible but erroneous AI inferences.
An interpretive hallucination is the production of a plausible but false statement, generated or reconstructed by a probabilistic system, then presented with a form of certainty.
Interpretive inertia designates an AI system's resistance to modifying an already stabilized interpretation, even after canon correction or clarification.
Interpretive invisibilization designates the phenomenon where information is present and accessible, but does not exist in AI-generated responses because it is not selected or activated.
Interpretive observability designates the capacity to measure, detect, and attribute interpretation variations produced by an AI system, to monitor canonical truth stability.
Interpretive remanence designates the persistence of an old interpretation in AI outputs, even after the canon has been corrected, clarified, or updated.
Interpretive SEO designates the discipline that aims to stabilize how inference systems interpret, infer, and attribute meaning from a site, entity, and content.
Canonical clarification of the relations, overlaps, and distinctions between interpretive SEO, Entity SEO, GEO, and AEO, positioning each discipline by role, action level, and purpose.
Interpretive smoothing designates AI's tendency to erase specificities, nuances, exceptions, or paradoxes of a concept in order to fit it into a standardized, more frequent, and easier-to-synthesize category.
Interpretive sustainability designates the property of an information system such that the meaning of high-impact information remains bounded, stable, and correctable over time.
The interpretive trail designates the transitory state where a canonical correction begins producing effects, but incompletely, irregularly, or contextually.
Legitimate non-response designates a governed output where an AI system does not respond because the question exceeds the interpretability perimeter or crosses the authority boundary.
Neighborhood contamination designates the phenomenon where an entity's interpretation is altered by the semantic proximity of neighboring content, to the point where AI attributes properties from the environment rather than the canon.
Non-agentic systems designate AI systems that produce an output without planning and executing a tool-driven action sequence oriented toward an objective.
Canonical clarification of relations and distinctions between post-semantic thinking, post-semantic reasoning, and interpretive governance applied to generative AI systems.
Response conditions designate explicit prerequisites determining if an AI system can respond, how it must respond, and in which cases it must produce a legitimate non-response.
Semantic calibration designates all actions aimed at aligning, tuning, and stabilizing the correspondence between a canonical truth and how an AI system interprets and returns it.
Semantic compression designates the mechanism by which a generative system condenses a complex informational space into a shorter, coherent, and statistically plausible formulation.
SSA-E + A2 + Dual Web designates a doctrinal implementation standard for interpretive governance, aimed at stabilizing entities, reducing ambiguity, and bounding machine interpretation.
State drift designates the divergence between the actual state of dynamic information and the state returned by an AI system, which responds as if the state were stable when it has changed.
Version power designates an entity's capacity to make a given canonical version prevail in AI systems, and to make previous versions explicitly obsolete, traceable, and not activatable by default.
The authority boundary designates the explicit limit between what a system can infer and what it is legitimate to present as authorized, official, or applicable.
An authority conflict arises when two or more sources claim legitimate authority on the same point but produce incompatible statements. Without arbitration, the correct output may be legitimate non-response.
Authority Governance (Layer 3) designates the adjacent governance regime that bounds executable authority when an interpretive output becomes an action-bearing input.
Memory governance: doctrinal extension applied to stateful systems (agents, advanced RAG, persisted memories) to prevent inferences from fossilizing into facts.
Canonical definition of the canon-output gap: the distance between what the canon states and what an AI system reconstructs. Gap types, practical symptoms, and the minimum rule based on proof of fidelity.
Canonical definition of the interpretation integrity audit: a formal procedure, an opposable report, a snapshotted corpus, an evidence chain, and conditional validity.
Canonical definition of interpretation trace: the minimum footprint that makes an AI output understandable, auditable, and contestable without depending on style or post-hoc narrative.
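The interpretation trace described above is, in essence, a minimal auditable record. The sketch below illustrates one possible shape for such a record; every field name is a hypothetical assumption for illustration, not a published schema.

```python
from datetime import datetime, timezone

# Hypothetical minimal interpretation-trace record. Field names and values
# are illustrative assumptions, not a published schema.
trace = {
    "corpus_version": "canon snapshot identifier",   # which canon state was read
    "question_scope": "within interpretability perimeter",
    "output_claim": "restated from canon, no extrapolation",
    "observed_at": datetime.now(timezone.utc).isoformat(),
}

REQUIRED = {"corpus_version", "question_scope", "output_claim", "observed_at"}

def is_auditable(record):
    # A trace is auditable only if every required field is present and non-empty;
    # style and post-hoc narrative play no role in the check.
    return REQUIRED <= set(record) and all(record[k] for k in REQUIRED)

print(is_auditable(trace))
```

The point of the check is the one the definition makes: auditability depends on the footprint itself, not on how the output is narrated afterward.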