Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope · Definitions canon
- 02 Response authorization · Q-Layer: response legitimacy
- 03 Evidence artifact · common-misinterpretations.json
Definitions canon
/canon.md
Binding reference base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: neither that a system already consults it nor that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
common-misinterpretations.json
/common-misinterpretations.json
Published surface that contributes to making an evidence chain more reconstructible.
- Makes provable: part of the observation, trace, audit, or fidelity chain.
- Does not prove: it is neither total proof, nor an obedience guarantee, nor an implicit certification.
- Use when: a page needs to make its evidence regime explicit.
Inferred authority
Inferred authority designates authority reconstructed by an AI system from indirect, incomplete, ambiguous, or unstable signals when explicit authority boundaries are missing or not retained.
It is not always wrong. It is structurally weaker than defined authority because it depends on cues that may not survive extraction, retrieval, ranking, or summarization.
Signals that often produce inferred authority
An AI system may infer authority from:
- domain reputation;
- frequency of mention;
- stylistic confidence;
- recency signals;
- proximity between entities;
- third-party summaries;
- popularity or citation density;
- apparent expertise without declared perimeter.
These signals may help retrieval. They should not silently become governing authority.
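The separation above can be sketched in code. The sketch below is purely illustrative and assumes nothing from the canon: the signal names, the `Source` structure, and both functions are hypothetical. It shows one way to let the listed signals influence retrieval ranking while keeping governing authority tied to an explicit declaration.

```python
from dataclasses import dataclass, field

# Hypothetical signal set: these may boost retrieval ranking, but must
# never, on their own, promote a source to governing authority.
RETRIEVAL_ONLY_SIGNALS = {
    "domain_reputation", "mention_frequency", "stylistic_confidence",
    "recency", "entity_proximity", "third_party_summary",
    "citation_density", "apparent_expertise",
}

@dataclass
class Source:
    name: str
    signals: dict = field(default_factory=dict)  # signal name -> strength
    defined_authority: bool = False              # explicit authority boundary declared

def retrieval_score(source: Source) -> float:
    """Signals legitimately influence how a source is ranked for retrieval."""
    return sum(v for k, v in source.signals.items() if k in RETRIEVAL_ONLY_SIGNALS)

def is_governing(source: Source) -> bool:
    """Authority comes only from an explicit declaration, never from score."""
    return source.defined_authority

canon = Source("canon.md", {"recency": 0.2}, defined_authority=True)
blog = Source("derivative-blog", {"domain_reputation": 0.9, "citation_density": 0.8})

assert retrieval_score(blog) > retrieval_score(canon)  # blog ranks higher for retrieval
assert is_governing(canon) and not is_governing(blog)  # but only the canon governs
```

The point of the two separate functions is that no accumulation of retrieval signal strength ever changes the result of `is_governing`: the two questions are answered from different fields.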
Risk
The central risk is plausible displacement. The generated answer may appear reasonable while the authority that should govern it has moved to a weaker source, a derivative source, an outdated fragment, or the model’s own synthesis.
Minimal rule
Inferred authority must remain subordinate to defined authority, the source hierarchy, the authority boundary, and the Q-Layer suspension rules.
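The subordination rule can be sketched as a precedence check. Everything below is a hypothetical illustration, not an interface defined by this framework: the tier names and the `resolve` function are assumptions. The sketch encodes two behaviors from the rule: a Q-Layer suspension overrides any candidate, and an inferred-only candidate yields legitimate non-response rather than governing an answer.

```python
from enum import IntEnum

# Hypothetical precedence tiers; a lower value means stronger authority.
class AuthorityTier(IntEnum):
    DEFINED = 0    # explicit authority boundary (e.g. a canon file)
    HIERARCHY = 1  # position in a declared source hierarchy
    INFERRED = 2   # reconstructed from indirect signals only

def resolve(candidates, q_layer_suspended: bool):
    """Pick the governing source, or None for legitimate non-response."""
    if q_layer_suspended:
        return None  # suspension rules take precedence over any candidate
    name, tier = min(candidates, key=lambda c: c[1])
    if tier == AuthorityTier.INFERRED:
        return None  # inferred authority alone cannot govern a response
    return name

sources = [("summary-fragment", AuthorityTier.INFERRED),
           ("canon.md", AuthorityTier.DEFINED)]
assert resolve(sources, q_layer_suspended=False) == "canon.md"
assert resolve([("summary-fragment", AuthorityTier.INFERRED)],
               q_layer_suspended=False) is None
assert resolve(sources, q_layer_suspended=True) is None
```

Returning `None` instead of falling back to the strongest inferred candidate is the design choice that matters here: it prevents the plausible displacement described above, where a weaker source quietly becomes the one governing the answer.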