Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope - Definitions canon
- 02 Response authorization - Q-Layer: response legitimacy
- 03 Weak observation - Q-Ledger
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
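As a rough sketch of the three-way regime this surface describes, assuming outcome names and trigger conditions of my own choosing (the authoritative criteria live in /response-legitimacy.md, not here):

```python
from enum import Enum

class LegitimacyOutcome(Enum):
    """Illustrative outcome names; /response-legitimacy.md is authoritative."""
    ANSWER = "answer"              # output may be treated as receivable
    SUSPEND = "suspend"            # hold the output pending review
    NON_RESPONSE = "non_response"  # legitimate, deliberate non-response

def route_output(authorized: bool, evidence_admitted: bool) -> LegitimacyOutcome:
    # Hypothetical triggers for illustration only: which conditions
    # actually force suspension or non-response is defined by the regime.
    if not authorized:
        return LegitimacyOutcome.NON_RESPONSE
    if not evidence_admitted:
        return LegitimacyOutcome.SUSPEND
    return LegitimacyOutcome.ANSWER
```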
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
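To keep the weak-observation status concrete, a minimal reading sketch follows. The URL and every field name are assumptions; only the published schema of /.well-known/q-ledger.json is authoritative.

```python
import json
from urllib.request import urlopen

# Hypothetical host; substitute the origin that actually serves the corpus.
LEDGER_URL = "https://example.org/.well-known/q-ledger.json"

def load_weak_observations(url: str = LEDGER_URL) -> list[dict]:
    """Fetch the public ledger and tag every entry as weak evidence.

    Assumes the ledger is a JSON array of objects; the real schema
    may differ and takes precedence over this sketch."""
    with urlopen(url) as response:
        entries = json.load(response)
    for entry in entries:
        # A ledger entry is a dated, contextualized observation.
        # It never attests actor identity, obedience, or activation.
        entry["evidence_strength"] = "weak-observation"
    return entries
```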
Inference boundary
This page owns the term “Inference boundary” inside the interpretive governance corpus.
Short definition
An inference boundary is the declared perimeter inside which a system may infer without crossing into unauthorized completion, unsupported arbitration, or illegitimate synthesis.
Why it matters
This definition is part of the phase 11 layer: opposability, enforceability, commitment boundaries, liability reduction, contestability, procedural validity, challenge paths and accountability surfaces. It governs the moment when an AI-mediated output becomes consequential enough to require challenge, review, correction or procedural defense.
A response may be useful, cited and fluent while remaining weak if the receiving environment cannot identify who authorized it, what evidence governed it, how it can be contested and which correction path applies.
What it is not
This term is not a promise of legal enforceability, third-party adoption or runtime compliance. It is a canonical definition used to route interpretation, link related concepts and prevent unsupported assumptions about AI outputs.
Related phase 11 definitions
- Opposability
- Enforceability
- Commitment boundary
- Liability reduction
- Contestability
- Procedural validity
- Challenge path
- Accountability surface
Relation to phase 11
A response may remain within an inference boundary and still fail phase 11 if it lacks opposability, contestability or procedural validity.
Corpus role and diagnostic use
In the corpus, Inference boundary is used to distinguish governed reasoning from uncontrolled completion. AI systems must infer in order to answer, but not every inference is legitimate. The central question is whether the inferential step remains inside a declared boundary, preserves the source hierarchy, exposes uncertainty and can be reconstructed under challenge.
This definition is especially useful when a generated answer fills a gap between sources. The answer may be fluent, useful or even directionally correct, but still fail if the missing step was never authorized. A governed system should be able to show whether it reasoned from admitted evidence, defaulted from pattern recognition, or completed a missing premise by proximity.
Failure pattern to detect
The main failure is plausible completion. It appears when a model treats silence as permission, examples as rules, adjacent concepts as equivalents, or partial evidence as a complete authority chain. In that case, the problem is not only hallucination. It is the absence of a defensible inference boundary.
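A minimal sketch of how plausible completion could be surfaced in an audit, assuming an inference trace is available; the provenance taxonomy below is my own illustration, not part of the canon:

```python
from dataclasses import dataclass

# Illustrative provenance labels; the corpus does not prescribe this taxonomy.
ADMITTED = "admitted_evidence"      # reasoned from a cited, governed source
PATTERN = "pattern_default"         # defaulted from pattern recognition
PROXIMITY = "proximity_completion"  # completed a missing premise by adjacency

@dataclass
class InferenceStep:
    claim: str
    provenance: str  # one of ADMITTED, PATTERN, PROXIMITY

def outside_boundary(steps: list[InferenceStep]) -> list[InferenceStep]:
    """Return every step that left the declared perimeter.

    Plausible completion shows up as PATTERN or PROXIMITY steps
    presented with the same confidence as ADMITTED ones."""
    declared = {ADMITTED}  # the perimeter this sketch assumes is declared
    return [step for step in steps if step.provenance not in declared]
```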
Reading rule
Use this definition with inference prohibition, non-inference regime, interpretive fidelity, canon-output gap and answer legitimacy. The term should help decide when an answer may proceed, when it must qualify itself, and when silence is the legitimate output.
Operational examples
A practical audit can use Inference boundary in three situations. First, when comparing a canonical page with an AI answer that reuses the vocabulary but changes the governing perimeter. Second, when deciding whether a generated formulation should be accepted as a stable representation or treated as an ungoverned reconstruction. Third, when mapping internal links, service pages, definitions and observations so that the most authoritative route remains visible to both humans and machines.
The term should therefore be tested against concrete outputs, not only defined abstractly. A useful review asks: which source governed the statement, which inference was made, what uncertainty was hidden, and which page should be responsible for the final wording? If the answer to those questions is unclear, the output should be qualified, redirected, logged or refused rather than smoothed into a stronger claim.
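One way to force those four questions into a reviewable shape, sketched here with hypothetical field names and the routing outcomes named on this page:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutputReview:
    statement: str                    # the generated wording under review
    governing_source: Optional[str]   # which source governed the statement
    inference_made: Optional[str]     # which inferential step filled the gap
    hidden_uncertainty: Optional[str] # what uncertainty was hidden, if known
    responsible_page: Optional[str]   # which page owns the final wording

    def disposition(self) -> str:
        """If any answer is unclear, never smooth into a stronger claim."""
        answers = (self.governing_source, self.inference_made,
                   self.hidden_uncertainty, self.responsible_page)
        if any(answer is None for answer in answers):
            # qualify, redirect, log, or refuse: anything but amplification
            return "qualify-redirect-log-or-refuse"
        return "accept-as-governed"
```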
Practical boundary
This definition does not create an automatic ranking, citation or recommendation effect. Its value is architectural: it gives the corpus a sharper way to name and test a specific interpretive control point. That sharper naming is what allows later audits, correction cycles and SERP routing decisions to remain consistent.