Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions for reading the corpus. The order below gives the recommended reading sequence.
EAC registry
/.well-known/eac-registry.json
Normative registry for admissibility of external authorities in the open web.
- Governs: admissible relations, receivable authorities, and conflict arbitration.
- Bounds: abusive merges, copied authority, and unqualified silent arbitration.
Does not guarantee: Describing a graph or registry does not make an exogenous source endogenous truth.
Admissible exogenous claims
/eac-claims.json
Surface that bounds receivable families of exogenous claims.
- Governs: admissible relations, receivable authorities, and conflict arbitration.
- Bounds: abusive merges, copied authority, and unqualified silent arbitration.
Does not guarantee: Describing a graph or registry does not make an exogenous source endogenous truth.
EAC conflicts
/eac-conflicts.json
Surface for exogenous conflict arbitration and its resolution conditions.
- Governs: admissible relations, receivable authorities, and conflict arbitration.
- Bounds: abusive merges, copied authority, and unqualified silent arbitration.
Does not guarantee: Describing a graph or registry does not make an exogenous source endogenous truth.
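For illustration only, the three surfaces above could be collected by a reader at their declared paths. The paths come from this page; the fetcher is caller-supplied and no schema is assumed for the contents, consistent with the caveat that describing a registry does not make it endogenous truth.

```python
# Paths declared on this page; the host and transport are left to the caller.
SURFACES = [
    "/.well-known/eac-registry.json",
    "/eac-claims.json",
    "/eac-conflicts.json",
]

def fetch_surfaces(fetch):
    """Collect each governance surface via a caller-supplied fetcher.

    `fetch(path)` should return the parsed JSON document found at `path`.
    No schema is assumed: the documents are kept as opaque JSON values.
    """
    return {path: fetch(path) for path in SURFACES}
```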
Complementary artifacts (3)
These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.
Claims registry
/claims.json
Registry of published claims, their scope, and their declarative status.
Q-Layer in Markdown
/response-legitimacy.md
Canonical surface for response legitimacy, clarification, and legitimate non-response.
Q-Layer in YAML
/response-legitimacy.yaml
Structured Q-Layer projection for systems that prefer YAML.
Authority, inference, and decisional drift in AI systems
AI systems produce responses that can be perceived as reliable, coherent, and useful. However, this apparent stability masks a frequent structural confusion among interpreting, inferring, and authorizing.
This confusion is not merely theoretical. When not explicitly governed, it leads to decisional drift: probabilistic responses are received as legitimate opinions, hypotheses become implicit recommendations, and neutral formulations acquire an authority they were never mandated to exercise.
Interpretation and inference: a fundamental asymmetry
Interpreting consists in reformulating or contextualizing available information. Inferring consists in extrapolating beyond that information, filling gaps with probabilistic hypotheses.
AI models are structurally optimized for inference. In the absence of explicit constraints, they favor plausible completion over suspension of judgment. This property becomes problematic when inference is no longer distinguished, in the response, from what is observed or attested.
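The distinction above can be made operational by marking inference explicitly in the response. This is a minimal sketch, assuming an illustrative two-value status (`"attested"` / `"inferred"`) that is not part of any published surface:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    status: str  # "attested" | "inferred" -- illustrative labels, not a published schema

def render(segments):
    """Render a response with inference explicitly marked, so that
    probabilistic completion is never presented as observation."""
    parts = []
    for s in segments:
        prefix = "[inferred] " if s.status == "inferred" else ""
        parts.append(prefix + s.text)
    return " ".join(parts)
```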
When inference becomes authority
Drift does not occur when the AI is wrong, but when its inference is interpreted as a legitimate position. A response can be factually prudent while producing a normative effect: recommending, orienting, dissuading, or implicitly validating a decision.
This phenomenon is accentuated by the conversational style, linguistic fluency, and synthesis capacity of models, which give probabilistic constructions the appearance of an established judgment.
When external authority is poorly qualified
In the open web, drift does not only come from content inference. It also comes from poor qualification of the authority of external sources. A visible, redundant, or apparently credible source can be treated as authoritative without having been explicitly qualified.
This is where External Authority Control (EAC) intervenes: before an inference relies on an external source, EAC bounds which exogenous authorities can actually constrain interpretation. It does not transform popularity into legitimacy, nor content relocalization into endogenous truth.
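A minimal sketch of the gate EAC describes: before an exogenous source may constrain interpretation, it is checked against an explicitly qualified registry. The registry shape, the host, and the relation names here are hypothetical placeholders, not the published `/.well-known/eac-registry.json` schema:

```python
# Hypothetical registry shape; the real schema of
# /.well-known/eac-registry.json is not reproduced here.
REGISTRY = {
    "admissible_authorities": {
        "standards.example.org": {"relations": ["cites", "defers_to"]},
    }
}

def may_constrain(source_host: str, relation: str, registry: dict = REGISTRY) -> bool:
    """True only if the source is an explicitly qualified authority
    and the requested relation is admissible for it.

    Visibility, redundancy, or apparent credibility play no role:
    an unlisted source never constrains interpretation.
    """
    entry = registry["admissible_authorities"].get(source_host)
    return entry is not None and relation in entry["relations"]
```

Note the asymmetry: the default answer is refusal, and only an explicit registry entry grants constraining power, which is the opposite of treating popularity as legitimacy.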
Implicit authority and diffuse responsibility
When an AI emits an implicit recommendation without explicit mandate, authority is displaced without being assumed. Responsibility becomes diffuse: neither the system, nor the user, nor the initial source can clearly be held responsible for the act or decision that follows.
Rigorous interpretive governance therefore imposes a clear boundary between what can be inferred and what can be authorized. Without this boundary, AI acts as an undeclared decisional intermediary.
Governing non-decision
Limiting the authority of an AI system does not mean making it useless. It means explicitly defining the conditions under which it must abstain: insufficient data, normative ambiguity, high potential impact, or exceeding the declared perimeter.
In these situations, the legitimate response may be a refusal, a request for clarification, or a recommendation of human recourse. Non-decision is not a system failure. It is a condition of interpretive hygiene.
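The abstention conditions above can be sketched as a simple guard. The condition names, the impact scale, and the response modes are illustrative assumptions, not part of any published surface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Query:
    has_sufficient_data: bool
    is_normatively_ambiguous: bool
    potential_impact: str          # "low" | "medium" | "high" -- illustrative scale
    within_declared_perimeter: bool

def legitimate_response_mode(q: Query) -> Optional[str]:
    """Return an abstention mode, or None when answering is legitimate.

    Mirrors the four abstention conditions named above: perimeter
    overrun, insufficient data, normative ambiguity, high impact.
    """
    if not q.within_declared_perimeter:
        return "refusal"                 # exceeding the declared perimeter
    if not q.has_sufficient_data:
        return "request_clarification"   # insufficient data
    if q.is_normatively_ambiguous:
        return "request_clarification"   # normative ambiguity
    if q.potential_impact == "high":
        return "human_recourse"          # high potential impact
    return None                          # answering is legitimate
```

Returning `None` is the point: non-decision is an explicit, testable outcome of the guard, not a failure path.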
Related pages
- Doctrine
- External Authority Control (EAC)
- Q-Layer
- Interpretive governance
- Agentic: governing AI that acts
Anchoring
This page does not constitute an offering, nor a method, nor a promise. It describes a structural phenomenon and the conditions of a governed response to it.