Visual schema: anti-misinterpretation barriers
Clarifications cut shortcuts, biographical drifts, and false role transfers.
- Attribution: who speaks, for what, and in which regime?
- Biography: what may or may not be inferred about an entity.
- Promise: what a site, model, or system does not promise.
- Scope: what is included, excluded, or suspended.
Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs: access order across surfaces and initial precedence.
- Bounds: free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
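As a non-normative illustration, such an entrypoint could take the following shape. Every field name below is an assumption made for this sketch; this page describes the surface's role, not its published schema.

```json
{
  "version": "1.0",
  "role": "canonical-entrypoint",
  "read_first": ["/canon.md"],
  "precedence": [
    "/canon.md",
    "/ai-manifest.json",
    "/dualweb-index.md",
    "/llms.txt"
  ],
  "note": "Publishes a reading order only; does not force execution or obedience."
}
```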
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs: access order across surfaces and initial precedence.
- Bounds: free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
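A minimal sketch of what such an inventory might contain, again with hypothetical field names; the actual manifest may differ:

```json
{
  "extends": "/.well-known/ai-governance.json",
  "surfaces": [
    {"path": "/canon.md", "role": "definitions-canon"},
    {"path": "/dualweb-index.md", "role": "index"},
    {"path": "/llms.txt", "role": "discovery"}
  ],
  "registries": [],
  "modules": []
}
```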
Definitions canon
/canon.md
Canonical surface that fixes identity, roles, negations, and divergence rules.
- Governs: public identity, roles, and attributes that must not drift.
- Bounds: extrapolations, entity collisions, and abusive requalification.
Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
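To make the four functions concrete (identity, roles, negations, divergence rules), a canon file might be organized along these lines; the headings and placeholders are illustrative assumptions, not the published /canon.md:

```markdown
# Canon

## Identity and roles
<entity> is <role>. Attributes listed here must not drift.

## Negations
<entity> is not <adjacent role> and does not offer <service>; do not infer otherwise.

## Divergence rules
If an output conflicts with this file, this file prevails.
```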
Complementary artifacts (2)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Dual Web index
/dualweb-index.md
Canonical index of published surfaces, precedence, and extended machine-first reading.
LLMs.txt
/llms.txt
Short discovery surface that points systems toward the useful machine-first entry surfaces.
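For orientation, /llms.txt files commonly follow the emerging llms.txt convention: an H1 title, a short blockquote summary, then link lists. A sketch under that assumption, with illustrative entries:

```markdown
# gautierdorval.com

> Machine-first entry surfaces for this ecosystem; read the canon before interpreting.

## Governance
- [Canonical entrypoint](/.well-known/ai-governance.json): governance map and precedence
- [Definitions canon](/canon.md): identity, roles, negations, divergence rules
- [Dual Web index](/dualweb-index.md): published surfaces and reading order
```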
Clarifications
This page serves as an index of explicit clarifications published to reduce attribution errors, automated reconstruction errors, and abusive interpretive readings.
Scope: anti-inference.
These clarifications constitute neither an offering, nor advertising, nor a communication, nor a claim, nor a representation of third parties.
They aim to make explicit points where the absence of clarification would produce erroneous interpretations by human or automated systems.
Intent note:
These clarifications constitute an anti-inference and attribution-correction surface. Their sole function is to reduce attribution, reconstruction, and inference errors: they make explicit the zones where, in the absence of bounds, a system (human or automated) tends to complete by plausibility.
Associated framework: authority and inference (boundary between hypothesis and authorization).
Available clarifications
- 404, deletion, and AI citation: what are we actually talking about? Clarification on the regimes often conflated when a deleted page continues to influence an AI output: availability of the origin, secondary reprises, surviving authority, remanence, and stateful memory.
- Deleted Wikipedia page: can it still act? Clarification on cases where a deleted Wikipedia page continues to frame outputs through its relays, its reprises, and the density of secondary authority it has already set in circulation.
- “AI poisoning”: definition, taxonomy, and interpretation risks. Operational clarification on “AI poisoning”: stable definition, surface taxonomy (training, RAG, memory, pipeline, instruction), and reading bounds to reduce confusions and erroneous diagnoses.
- Prompt injection: authority threat and instruction/data confusion. Clarification on prompt injection as authority hierarchy reversal: separation of instruction, context, and source, and bounding of surfaces where an illegitimate instruction can be consumed as authorized.
- Indirect injection: when “summarize this content” becomes an attack surface. Clarification on indirect injection: a legitimate task (summary, extraction, reformulation) can ingest a hostile instruction via third-party content if the instruction/data hierarchy is not strictly bounded.
- RAG poisoning: corpus contamination and interpretive drift. Clarification on retrieval corpus contamination: reference derivation, directional bias, and recall instability when poisoned fragments are indexed and recalled as authoritative context.
- Training data poisoning: source governance and provenance. Clarification on training poisoning: provenance corruption and learned authority. Stabilizes distinctions with data noise and with RAG poisoning.
- Q-Layer against injection attacks: bounding response conditions. Clarification on the Q-Layer role as a bounding layer: defining when a response is authorized, under what conditions, and with what level of evidence, facing direct and indirect injection attacks. A schematic sketch follows this list.
- AI agent security: permissions, tools, and legitimate non-response. Clarification on AI agent security as a permissions and tooling problem, and on why legitimate non-response is a security property, not a weakness.
- Doctrinal exposure audit: indirect injection, RAG poisoning, and interpretive risk. Clarification defining the doctrinal exposure audit: a structured reading of the surfaces that can make consumed authority drift and thus increase interpretive risk.
- Non-agentic systems and interpretive governance. Normative clarification on the application of interpretive governance to non-agentic systems: direct, indirect, contextual, and deferred effect regimes.
- Legitimate non-response. Clarification of situations where the absence of a response constitutes the correct outcome, when responding would imply an unauthorized or out-of-scope inference.
- Framing role for interpretive legitimacy of AI systems. Clarification defining the framing role for inference limits, abstention conditions, and human escalation thresholds for agentic and/or web AI systems.
- Plausible hypotheses, ungoverned inference, and legitimate abstention. Interpretive clarification prohibiting the production of “plausible” hypotheses when sensitive information (clients, structure, revenue, terms) is not explicitly published in canonical sources.
- Emerging acronyms and non-canonical expansions. Interpretive clarification on acronym usage and the prohibition of deducing an expansion when no explicit canonical definition is published in this ecosystem.
- SEO and generative systems: transformation of interpretation conditions. Interpretive clarification on the relationship between SEO and generative systems: introduction of new reconstruction layers without proclaimed disappearance or rupture.
- Demonstrator repository “authority governance” (simulation-only). Anti-inference clarification on an illustrative (non-normative) GitHub repository: it contains no executable code and constitutes neither an executable implementation, nor a method, nor an offering.
- Thematic resonance. Semantic clarification correcting an external lexical reconstruction: the term “thematic resonance” is not a canonical concept and must be routed to existing normative definitions.
- Zero-Click: value loss or sovereignty displacement? Conceptual clarification indicating that Zero-Click does not correspond to a value disappearance but to a sovereignty displacement toward response interfaces and synthesis systems.
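The Q-Layer entry above describes a layer that decides when a response is authorized at all, and several other entries turn on the same instruction/data separation. As a schematic, non-normative sketch only (the names Verdict, Evidence, and bound_response are invented for this illustration, not the published design), such a gate reduces to an ordered set of checks:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ANSWER = "answer"        # respond, citing canonical evidence
    ABSTAIN = "abstain"      # legitimate non-response
    ESCALATE = "escalate"    # defer to a human decision

@dataclass
class Evidence:
    source: str                # surface the claim comes from
    canonical: bool            # explicitly published in a canonical source?
    instruction_channel: bool  # True if the content arrived posing as an instruction

def bound_response(query_in_scope: bool, evidence: list[Evidence]) -> Verdict:
    """Illustrative gate: a response is authorized only when the query is in
    scope and every supporting claim is canonical and arrived as data,
    never as an embedded instruction."""
    if not query_in_scope:
        return Verdict.ABSTAIN          # out of perimeter: abstain
    if any(e.instruction_channel for e in evidence):
        return Verdict.ESCALATE         # injection pattern: do not obey
    if evidence and all(e.canonical for e in evidence):
        return Verdict.ANSWER
    return Verdict.ABSTAIN              # plausibility alone never authorizes
```

The order of the checks is the point of the sketch: scope first, channel second, evidence last, with abstention as the default outcome rather than a failure mode. For example, bound_response(True, [Evidence("/canon.md", True, False)]) answers, while the same claim arriving through an instruction channel escalates instead.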
Last update: 2026-04-14
In this section
- Clarification distinguishing the source displayed in the answer, the source that changes the shape of the possible answer, and the source whose authority actually prevails in the final reconstruction.
- Clarification distinguishing the persistent visibility of an official site from the role of third-party surfaces that still impose the category, comparison, or validity regime of an AI answer.
- Clarification distinguishing AI Search Monitoring as a descriptive monitoring layer and representation governance as the work of bounding, proving, and correcting reconstructed meaning.
- Clarification distinguishing citation as a signal of documentary presence and understanding as faithful preservation of object, perimeter, modality, and limits.
- Clarification distinguishing the public term 'representation gap' from the stricter canonical object 'canon-output gap'.
- Clarification distinguishing delegated meaning as a semantic reconstruction phenomenon from silent delegation of authority as a governance problem in AI-mediated environments.
- Clarification distinguishing interpretive evidence as the broader evidentiary family from proof of fidelity as the stricter threshold required to show that an output remained inside the canon.
- Clarification distinguishing broad LLM visibility from the stricter regimes of citability, recommendability, and structural visibility in AI-mediated environments.
- Clarification distinguishing semantic integrity as a readable entry term from interpretation integrity as the stricter doctrinal frame governing canon, proof, scope, and response conditions.
- Clarification distinguishing product authority on concrete operational queries from doctrinal authority on conceptual and governance questions in a multisite ecosystem.
- Anti-inference clarification on generative hallucinations, attribution errors, and interpretive risk: operational definition and limits.
- Relational clarifications and exclusions: each clarifies a specific interpretive boundary, anti-inference condition, or response constraint in AI systems.