Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
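The ledger's actual schema is not reproduced on this page, so the sketch below assumes a hypothetical entry shape (`sessions`, `ts`, `surface`, `strength`) purely to illustrate what filtering for weak, dated, contextualized traces could look like; none of these field names should be treated as the real q-ledger.json format.

```python
import json

# Hypothetical sample; every field name here is an illustrative assumption,
# not the published schema of /.well-known/q-ledger.json.
SAMPLE_LEDGER = """
{
  "sessions": [
    {"ts": "2024-06-01T10:00:00Z", "surface": "/canon.md",
     "observation": "consultation", "strength": "weak"},
    {"ts": "2024-06-01T10:02:00Z", "surface": "/response-legitimacy.md",
     "observation": "sequence", "strength": "weak"}
  ]
}
"""

def weak_traces(raw: str) -> list[dict]:
    """Keep only entries that qualify as weak observation:
    a dated trace tied to a surface, never an identity or activation claim."""
    ledger = json.loads(raw)
    return [s for s in ledger["sessions"]
            if s.get("strength") == "weak" and "ts" in s and "surface" in s]

print(len(weak_traces(SAMPLE_LEDGER)))  # 2
```

The point of the filter is the negative guarantee: nothing stronger than a timestamped observation survives it, which matches the "does not prove" boundary above.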
Free inference
This page owns the term “free inference” and distinguishes it from legitimate reasoning. The issue is not that a system reasons, but that it reasons beyond the evidence, authority, and perimeter that are allowed to govern the answer.
Free inference is model inference that goes beyond the retrieved, cited, canonical, or authorized corpus without an explicit basis in source hierarchy, response conditions, or declared reading rules.
Short definition
Free inference is unauthorized meaning completion. It occurs when a system fills gaps, extends claims, merges contexts or derives conclusions that the admitted sources do not authorize.
Why it matters
AI systems are designed to complete patterns. In ordinary conversation, this can be useful. In a governed corpus, however, completion becomes risky when the system treats absence, proximity, analogy or probability as sufficient grounds for assertion.
Free inference is especially dangerous in brand, legal, medical, financial, institutional and agentic contexts. A system may infer a relationship, status, recommendation, price, obligation, capability or intention that was never stated. The resulting answer may look natural while being impossible to defend.
What it is not
Free inference is not every inference. A bounded inference can be legitimate when the source hierarchy allows it, the perimeter is clear, the conclusion is reconstructable, and uncertainty is disclosed. Free inference is the unsafe form: the system speaks as if its completion had the same authority as the canon.
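The four conditions that make a bounded inference legitimate can be sketched as a single predicate. This is a minimal illustration; the class and field names are assumptions introduced here, not part of any published interface.

```python
from dataclasses import dataclass

@dataclass
class InferenceStep:
    hierarchy_allows: bool       # the source hierarchy permits this step
    perimeter_clear: bool        # the interpretive perimeter is declared
    reconstructable: bool        # the conclusion can be rebuilt under challenge
    uncertainty_disclosed: bool  # the answer exposes its own uncertainty

def is_bounded(step: InferenceStep) -> bool:
    """A step is bounded only when all four conditions hold;
    otherwise it is free inference and must not pass as canon."""
    return (step.hierarchy_allows and step.perimeter_clear
            and step.reconstructable and step.uncertainty_disclosed)

print(is_bounded(InferenceStep(True, True, True, False)))  # False
```

The conjunction matters: a step that satisfies three conditions out of four is still free inference, because the missing condition is exactly the one a challenger would probe.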
Nor is it only a prompt problem: the site itself can create free-inference exposure when canonical definitions, exclusions, source roles, and non-inference rules are missing.
Common triggers
- a question asks for a conclusion that the corpus does not state;
- a model treats two adjacent concepts as equivalent;
- a missing answer is completed from market norms;
- a stale memory object is reused without freshness checks;
- a retrieved fragment is generalized beyond its scope;
- a refusal condition is ignored because an answer is expected.
Governance rule
Free inference should be replaced by bounded reasoning, qualification, source escalation, or legitimate non-response. The control stack is inference prohibition, non-inference regime, source hierarchy, interpretive perimeter, and answer legitimacy.
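The replacement order stated above can be sketched as a simple dispatch. The three input flags are illustrative assumptions, not a defined interface; the sketch only shows that non-response is the terminal fallback, never the first choice.

```python
def govern(answer_in_corpus: bool, can_qualify: bool, higher_source: bool) -> str:
    """Apply the replacement order: bounded reasoning, then qualification,
    then source escalation, then legitimate non-response."""
    if answer_in_corpus:       # the admitted sources state the answer
        return "bounded reasoning"
    if can_qualify:            # the answer can disclose its own limits
        return "qualified answer"
    if higher_source:          # a higher-ranked source can be consulted
        return "source escalation"
    return "legitimate non-response"

print(govern(False, False, False))  # legitimate non-response
```

Silence is reached only after qualification and escalation fail, which is what distinguishes legitimate non-response from mere refusal.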
For SERP ownership, pages should state what may not be inferred from a term. That negative layer reduces the probability that Google, LLMs or answer engines turn a concept into a broader, weaker or commercially convenient substitute.
Related canonical definitions
- Inference prohibition
- Default inference
- Unauthorized synthesis
- Interpretive error space
- Answer legitimacy
Corpus role and diagnostic use
In the corpus, free inference is used to distinguish governed reasoning from uncontrolled completion. AI systems must infer in order to answer, but not every inference is legitimate. The central question is whether the inferential step remains inside a declared boundary, preserves the source hierarchy, exposes uncertainty, and can be reconstructed under challenge.
This definition is especially useful when a generated answer fills a gap between sources. The answer may be fluent, useful or even directionally correct, but still fail if the missing step was never authorized. A governed system should be able to show whether it reasoned from admitted evidence, defaulted from pattern recognition, or completed a missing premise by proximity.
Failure pattern to detect
The main failure is plausible completion. It appears when a model treats silence as permission, examples as rules, adjacent concepts as equivalents, or partial evidence as a complete authority chain. In that case, the problem is not only hallucination. It is the absence of a defensible inference boundary.
Reading rule
Use this definition with inference prohibition, non-inference regime, interpretive fidelity, canon-output gap and answer legitimacy. The term should help decide when an answer may proceed, when it must qualify itself, and when silence is the legitimate output.