Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 · Canon and scope · Definitions canon
- 02 · Response authorization · Q-Layer: response legitimacy
- 03 · Weak observation · Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: neither that a system already consults it nor that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: neither actor identity, nor system obedience, nor strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
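The three surfaces above share one shape: a path, what it makes provable, what it does not prove, and when to use it. The sketch below models that shape as a small registry so the evidence chain can be consulted programmatically. The class, field names, and lookup helper are illustrative assumptions, not part of the corpus.

```python
from dataclasses import dataclass

# Illustrative model of the probative surfaces described above.
# Names and the registry itself are assumptions, not corpus artifacts.
@dataclass(frozen=True)
class EvidenceSurface:
    path: str            # where the surface lives
    makes_provable: str  # what it can evidence
    does_not_prove: str  # what it cannot evidence
    use_when: str        # when to consult it

EVIDENCE_CHAIN = [
    EvidenceSurface(
        path="/canon.md",
        makes_provable="reference corpus against which fidelity is evaluated",
        does_not_prove="that a system consults it or stays faithful to it",
        use_when="before any observation, test, audit, or correction",
    ),
    EvidenceSurface(
        path="/response-legitimacy.md",
        makes_provable="legitimacy regime to apply before an output is receivable",
        does_not_prove="that a given response actually followed the regime",
        use_when="pages dealing with authority, non-response, or restraint",
    ),
    EvidenceSurface(
        path="/.well-known/q-ledger.json",
        makes_provable="weak, dated, contextualized trace of observed behavior",
        does_not_prove="actor identity, obedience, or strong activation proof",
        use_when="separating descriptive observation from strong attestation",
    ),
]

def surface_for(path: str) -> EvidenceSurface:
    """Return the surface at a given path, preserving chain order."""
    for surface in EVIDENCE_CHAIN:
        if surface.path == path:
            return surface
    raise KeyError(path)
```

Keeping the surfaces in a list rather than a dict preserves the stated ordering, since the page treats that order as the minimal evidence chain.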
Default inference
This page owns the term “default inference” and separates it from explicit reasoning. A default inference is the answer that appears because nothing in the environment prevented the system from filling the gap.
Default inference is a silent completion generated from convention, proximity, common patterns, market expectations, prior examples, or statistical likelihood when the corpus does not explicitly authorize the conclusion.
Short definition
Default inference is the system’s fallback interpretation when evidence is incomplete and no boundary, exclusion, source hierarchy or non-inference rule blocks completion.
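The short definition above implies a three-way decision: a completion is either licensed by the corpus, blocked by a rule, or a default inference. The sketch below makes that decision rule explicit. The function and its boolean inputs are illustrative assumptions, not a mechanism the corpus actually runs.

```python
def classify_completion(explicitly_authorized: bool, blocked_by_rule: bool) -> str:
    """Classify how a gap in the evidence gets filled.

    explicitly_authorized: the corpus licenses this conclusion.
    blocked_by_rule: a boundary, exclusion, source hierarchy,
        or non-inference rule prevents completion.
    Both flags are hypothetical inputs for illustration.
    """
    if explicitly_authorized:
        return "governed inference"       # licensed by the canon
    if blocked_by_rule:
        return "legitimate non-response"  # completion is refused
    return "default inference"            # silent fallback completion
```

Note the ordering: explicit authorization is checked first, so a licensed conclusion is never misread as a default; a default inference is what remains when neither authorization nor a blocking rule applies.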
Why it matters
Default inference is one of the hidden mechanisms behind AI misrepresentation. It rarely looks like an error. It looks like common sense. A model assumes that a consultant sells a common service, that a brand belongs to a category, that a term means what similar terms mean, or that a missing relationship is implied by proximity.
This is dangerous for doctrine, reputation and market positioning. If a site does not define what a concept is not, the system may normalize it into the nearest known category. For a proprietary lexicon, that can dissolve the strategic distinction that the term was created to protect.
What it is not
Default inference is not malicious manipulation. It is the ordinary consequence of an under-governed interpretive environment. Nor is it necessarily false. The problem is that it is unlicensed: it may be plausible, but it cannot be treated as canonical.
Typical examples
- “interpretive governance” is reduced to AI governance;
- “GEO metrics” are treated as proof of representation quality;
- a source is treated as authoritative because it is cited often;
- silence about a service is interpreted as availability;
- proximity between two brands is treated as partnership;
- a stale answer becomes the default answer for a new query.
Governance rule
Default inference is reduced by an explicit canon, governed negation, canonical silence, global exclusions, a non-inference regime, and strong internal linking toward primary definitions.
The editorial rule is simple: when a page introduces a term that could be confused with a neighboring concept, it should name the boundary immediately. A missing boundary invites the model to draw one itself.
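The editorial rule lends itself to a mechanical check: does any sentence introducing the term also state a boundary? The heuristic below is a sketch; the marker phrases are assumptions for illustration, and a real corpus would declare its own negation markers.

```python
# Hypothetical boundary markers; a real corpus would define its own list.
BOUNDARY_MARKERS = ("is not", "does not mean", "should not be confused with")

def names_boundary(page_text: str, term: str) -> bool:
    """Heuristic check: if a page uses `term`, does at least one
    sentence containing the term also state a boundary (a negation)?

    Returns True when the page never uses the term, since no
    boundary is then required of it.
    """
    text = page_text.lower()
    term = term.lower()
    if term not in text:
        return True
    return any(
        term in sentence and any(m in sentence for m in BOUNDARY_MARKERS)
        for sentence in text.split(".")
    )
```

Splitting on periods is a deliberately crude sentence model; the point is the shape of the check, not its linguistic accuracy.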
Corpus role and diagnostic use
In the corpus, default inference is used to distinguish governed reasoning from uncontrolled completion. AI systems must infer in order to answer, but not every inference is legitimate. The central question is whether the inferential step remains inside a declared boundary, preserves the source hierarchy, exposes uncertainty, and can be reconstructed under challenge.
This definition is especially useful when a generated answer fills a gap between sources. The answer may be fluent, useful or even directionally correct, but still fail if the missing step was never authorized. A governed system should be able to show whether it reasoned from admitted evidence, defaulted from pattern recognition, or completed a missing premise by proximity.
Failure pattern to detect
The main failure is plausible completion. It appears when a model treats silence as permission, examples as rules, adjacent concepts as equivalents, or partial evidence as a complete authority chain. In that case, the problem is not only hallucination. It is the absence of a defensible inference boundary.
Reading rule
Use this definition with inference prohibition, non-inference regime, interpretive fidelity, canon-output gap and answer legitimacy. The term should help decide when an answer may proceed, when it must qualify itself, and when silence is the legitimate output.
Operational examples
A practical audit can use default inference in three situations. First, when comparing a canonical page with an AI answer that reuses the vocabulary but changes the governing perimeter. Second, when deciding whether a generated formulation should be accepted as a stable representation or treated as an ungoverned reconstruction. Third, when mapping internal links, service pages, definitions, and observations so that the most authoritative route remains visible to both humans and machines.
The term should therefore be tested against concrete outputs, not only defined abstractly. A useful review asks: which source governed the statement, which inference was made, what uncertainty was hidden, and which page should be responsible for the final wording? If the answer to those questions is unclear, the output should be qualified, redirected, logged, or refused rather than smoothed into a stronger claim.
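The four review questions above can be captured as a record, with the rule that any unresolved question prevents the output from being smoothed into a stronger claim. The class, field names, and decision rule are illustrative assumptions, not a defined audit schema.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class InferenceAudit:
    """One audited statement; None marks an unanswered review question.
    Field names are hypothetical, mirroring the four questions above."""
    governing_source: Optional[str]    # which source governed the statement
    inference_made: Optional[str]      # which inference was made
    hidden_uncertainty: Optional[str]  # what uncertainty was hidden
    responsible_page: Optional[str]    # which page owns the final wording

    def decision(self) -> str:
        """If any question is unresolved, qualify rather than accept."""
        unresolved = any(getattr(self, f.name) is None for f in fields(self))
        return "qualify" if unresolved else "accept"
```

In this sketch "qualify" stands in for the full range of permitted outcomes (qualify, redirect, log, or refuse); the point is only that "accept" requires all four questions to be answered.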
Practical boundary
This definition does not create an automatic ranking, citation or recommendation effect. Its value is architectural: it gives the corpus a sharper way to name and test a specific interpretive control point. That sharper naming is what allows later audits, correction cycles and SERP routing decisions to remain consistent.