Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. The order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable
- The legitimacy regime to apply before treating an output as receivable.
- Does not prove
- Neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when
- When a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed as weak, dated, and contextualized trace evidence.
- Does not prove
- Neither actor identity, nor system obedience, nor strong proof of activation.
- Use when
- When it is necessary to distinguish descriptive observation from strong attestation.
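To make the distinction concrete, here is a minimal sketch of how a consumer might accept Q-Ledger entries only as weak, dated, contextualized trace evidence. The field names (`observed_at`, `surface`, `note`, `strength`) and the inline sample data are illustrative assumptions, not the actual schema of `/.well-known/q-ledger.json`.

```python
# Sketch of validating entries from a ledger like /.well-known/q-ledger.json.
# Field names and values below are assumptions for illustration only.
import json
from datetime import datetime

REQUIRED_FIELDS = {"observed_at", "surface", "note", "strength"}

def is_weak_observation(entry: dict) -> bool:
    """Accept an entry only as dated, contextualized, weak trace evidence."""
    if not REQUIRED_FIELDS <= entry.keys():
        return False
    try:
        datetime.fromisoformat(entry["observed_at"])  # must be dated
    except ValueError:
        return False
    # A ledger proves observation, never activation, obedience, or identity.
    return entry["strength"] == "weak"

ledger = json.loads("""{
  "sessions": [
    {"observed_at": "2025-01-15T10:00:00",
     "surface": "/canon.md",
     "note": "canon consulted before answer",
     "strength": "weak"},
    {"observed_at": "2025-01-15T10:02:00",
     "surface": "/response-legitimacy.md",
     "note": "regime cited",
     "strength": "strong"}
  ]
}""")

# Anything claiming more than weak observation is rejected as attestation.
accepted = [e for e in ledger["sessions"] if is_weak_observation(e)]
print(len(accepted))  # → 1
```

The point of the filter is the last check: a consumer that refuses entries marked stronger than `weak` cannot accidentally treat descriptive observation as strong attestation.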
Inference prohibition
This page is the canonical definition for inference prohibition inside the interpretive governance corpus.
Inference prohibition is a declared rule forbidding an AI system from deducing a claim from silence, proximity, similarity, examples, partial evidence, or incomplete context.
Short definition
An inference prohibition says: do not infer this claim from that signal. It converts a fragile absence into an explicit rule and prevents a model from using plausibility to bypass an authority, evidence, or perimeter limit.
In interpretive governance, inference prohibition is the operational form of governed negation. Governed negation declares what is not true, not covered, or not allowed. Inference prohibition prevents the system from reconstructing the excluded claim indirectly.
Why it matters
AI systems complete patterns. If a brand offers one service, the model may infer adjacent services. If a person writes about a topic, the model may infer endorsement, expertise, affiliation, or availability. If a policy is silent, the model may infer permission. If many similar entities share a property, the model may assign that property to the target entity.
These moves are not always hallucinations in the narrow sense. They are often prohibited inferences: the model takes a signal that genuinely exists, but uses it to produce a conclusion that the governing corpus did not authorize.
What it is not
Inference prohibition is not the same as uncertainty. A system can be uncertain while still allowed to state a bounded possibility. Inference prohibition applies when a specific inferential path is disallowed regardless of how plausible it appears.
It is not a generic “do not hallucinate” rule. It is more precise. It identifies the signals that must not be used as bridges toward a claim: silence, adjacency, similarity, examples, outdated pages, partial retrieval, third-party framing, or statistical pattern completion.
Examples of prohibited inference
- Do not infer service availability from an article about the concept.
- Do not infer endorsement from topical proximity.
- Do not infer current policy from historical content.
- Do not infer legal applicability from a general explanation.
- Do not infer identity equivalence from similar names.
- Do not infer permission from the absence of a prohibition.
- Do not infer official status from citation or visibility.
Common failure modes
- the model fills a gap because the answer would be useful;
- examples are generalized into a rule;
- silence becomes permission;
- proximity becomes affiliation;
- a term collision becomes identity collision;
- outdated versions create residual authority;
- the answer is qualified but still contains the prohibited conclusion.
Governance implication
A governed corpus should include explicit non-inference rules near canonical definitions, entity profiles, service pages, policy pages, and machine-readable artifacts. These rules should be linked to mandatory silence when the prohibited inference would be the only route to an answer.
Inference prohibition is especially useful for AI search because it gives crawlers, answer engines, and internal agents a negative instruction that can be cited, audited, and tested.
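A citable, auditable, testable negative instruction can be sketched as a small machine-readable record. The `NonInferenceRule` shape, the rule IDs, and the `mandatory_silence` fallback name below are assumptions for this example, not an existing artifact of the corpus.

```python
# Illustrative sketch: non-inference rules as machine-readable records.
# Rule shape and IDs are hypothetical, chosen for this example only.
from dataclasses import dataclass

@dataclass(frozen=True)
class NonInferenceRule:
    rule_id: str           # stable ID so the rule can be cited in audits
    signal: str            # the signal that must not be used as a bridge
    prohibited_claim: str  # the conclusion that signal must not produce
    on_block: str = "mandatory_silence"  # fallback when no other route exists

RULES = [
    NonInferenceRule("NI-01", "silence", "permission"),
    NonInferenceRule("NI-02", "topical proximity", "endorsement"),
    NonInferenceRule("NI-03", "similar names", "identity equivalence"),
    NonInferenceRule("NI-04", "historical content", "current policy"),
]

def blocking_rules(signal: str, claim: str) -> list[str]:
    """Return IDs of rules that forbid deriving `claim` from `signal`."""
    return [r.rule_id for r in RULES
            if r.signal == signal and r.prohibited_claim == claim]

print(blocking_rules("silence", "permission"))  # → ['NI-01']
```

Because each rule carries a stable ID, an audit can cite the exact rule a response violated, and a test suite can assert that a given signal-to-claim bridge is blocked.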
Operational rule
A system must not use silence, proximity, similarity, examples, pattern completion, or incomplete evidence to produce a claim that the canon has not authorized.
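As a minimal enforcement sketch of this rule, a response gate can refuse any claim whose inference path crosses one of the prohibited bridges. The signal category names and the `decide()` interface are illustrative assumptions, not a specified API of the corpus.

```python
# Minimal enforcement sketch of the operational rule. The bridge names
# and the decide() signature are assumptions for illustration only.
PROHIBITED_BRIDGES = {
    "silence", "proximity", "similarity",
    "examples", "pattern_completion", "incomplete_evidence",
}

def decide(claim: str, evidence_path: list[str], authorized: bool) -> str:
    """Answer only when the claim is canon-authorized; switch to
    legitimate non-response when the path uses a prohibited bridge."""
    if authorized:
        return f"answer: {claim}"
    if any(step in PROHIBITED_BRIDGES for step in evidence_path):
        return "non-response: prohibited inference path"
    return "suspend: claim not authorized by canon"

print(decide("service X is available", ["proximity"], authorized=False))
# → non-response: prohibited inference path
```

Note that the gate distinguishes the two failure cases the page describes: a prohibited bridge triggers non-response outright, while an unauthorized claim with a clean path is merely suspended pending authorization.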