Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Response authorization: Q-Layer: response legitimacy
- 03 Weak observation: Q-Ledger
Definitions canon
/canon.md
Binding reference base for identity, scope, roles, and the negations that must survive synthesis.
- Makes provable
- The reference corpus against which fidelity can be evaluated.
- Does not prove
- Neither that a system already consults it nor that an observed response stays faithful to it.
- Use when
- Before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable
- The legitimacy regime to apply before treating an output as receivable.
- Does not prove
- Neither that a given response actually followed this regime nor that an agent applied it at runtime.
- Use when
- When a page deals with authority, non-response, execution, or restraint.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable
- That a behavior was observed as weak, dated, contextualized trace evidence.
- Does not prove
- Neither actor identity, nor system obedience, nor strong proof of activation.
- Use when
- When it is necessary to distinguish descriptive observation from strong attestation.
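As an illustration of the distinction between weak observation and strong attestation, the sketch below shows what one ledger entry might look like. The field names (`observed_at`, `surface`, `evidence_strength`, `context`) are assumptions for illustration, not the published schema of `/.well-known/q-ledger.json`.

```python
import json
from datetime import datetime, timezone

# Hypothetical shape of one Q-Ledger entry. Field names are illustrative
# assumptions, not the actual schema of /.well-known/q-ledger.json.
entry = {
    "observed_at": datetime(2025, 1, 15, tzinfo=timezone.utc).isoformat(),
    "surface": "/canon.md",                      # which surface was consulted
    "evidence_strength": "weak",                 # descriptive observation only
    "context": "inferred session, no actor identity",
}

def is_strong_attestation(record: dict) -> bool:
    """A ledger entry is dated, contextualized trace evidence;
    by construction it is never strong proof of activation."""
    return False

print(json.dumps(entry, indent=2))
print(is_strong_attestation(entry))  # → False
```

The point of the sketch is the invariant, not the schema: whatever fields the real ledger carries, a reader may treat an entry as a weak, dated observation and nothing more.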
Interpretive perimeter
This page is the canonical definition for interpretive perimeter inside the interpretive governance corpus.
The interpretive perimeter is the boundary inside which an AI system may interpret, infer, summarize, answer, qualify, or refuse without exceeding declared authority.
Short definition
An interpretive perimeter defines what is inside and outside authorized interpretation. It does not merely describe the topic of a page or the scope of a project. It states the limits under which meaning may be reconstructed and the point at which a system must stop, qualify, escalate, or remain silent.
In interpretive governance, the perimeter protects an entity, corpus, doctrine, policy, offer, or person from being extended by proximity. It prevents a model from saying more than the canon authorizes because the surrounding corpus appears coherent.
Why it matters
AI systems rarely announce that they crossed a boundary. They move from retrieved fragments to implied conclusions, from examples to general rules, from adjacent topics to assumed coverage, and from silence to plausible completion. That movement can create risk even when each individual fragment is true.
The interpretive perimeter turns that movement into an auditable object. It asks: what is this system allowed to interpret? What is it allowed to infer? Which parts of the corpus are canonical, contextual, historical, commercial, experimental, excluded, or not applicable? Where does a legitimate answer become an unauthorized extension?
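The questions above can be sketched as a minimal classification check. The status labels are the ones named in the text; the file paths, function name, and default-to-excluded policy are illustrative assumptions, not a prescribed implementation.

```python
# Evidence statuses named in the text. CANONICAL is the only class a
# system may treat as authorized doctrine in this sketch.
CANONICAL = "canonical"
CONTEXTUAL = "contextual"
HISTORICAL = "historical"
COMMERCIAL = "commercial"
EXPERIMENTAL = "experimental"
EXCLUDED = "excluded"
NOT_APPLICABLE = "not_applicable"

# Hypothetical corpus map; only /canon.md is real in the source page.
corpus_status = {
    "/canon.md": CANONICAL,
    "/blog/2019-retrospective.md": HISTORICAL,
    "/pricing.md": COMMERCIAL,
}

def may_ground_claim(source: str) -> bool:
    """Only canonical surfaces may ground an authorized claim; an
    unknown source defaults to excluded, i.e. outside the perimeter."""
    return corpus_status.get(source, EXCLUDED) == CANONICAL

print(may_ground_claim("/canon.md"))    # → True
print(may_ground_claim("/pricing.md"))  # → False
print(may_ground_claim("/unknown.md"))  # → False
```

Note the design choice: an unlabeled source is treated as excluded rather than contextual, so coherence with adjacent material can never promote it into the perimeter.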
For SERP ownership, the term also clarifies the difference between a page that discusses a topic and a page that governs a concept. Internal links should point here when the term itself is introduced, then route toward Authority boundary, Response conditions, or Inference prohibition when the operational control is more specific.
What it is not
The interpretive perimeter is not a generic topical scope. A topical scope says what a text is about. An interpretive perimeter says what may be concluded from it.
It is also not the same as an authority boundary. The authority boundary separates what can be inferred from what can be presented as authorized. The interpretive perimeter defines the field inside which interpretation may occur at all.
It is not a refusal policy by itself. Refusal is one possible consequence of crossing the perimeter. A system may also qualify, downgrade, expose conflict, cite the missing authority, or route to mandatory silence.
Common failure modes
- a model treats examples as exhaustive rules;
- a service page is used to infer internal policy;
- a historical article is treated as current doctrine;
- a related concept is merged into the target concept;
- silence is interpreted as permission;
- a general statement is applied to a commitment context;
- a response remains fluent after leaving the authorized corpus.
These failures are not always hallucinations. They are often perimeter failures: the answer may be plausible and partially sourced while still exceeding the governing boundary.
Governance implication
A governed corpus should make interpretive perimeters explicit at the level of entity, concept, page type, evidence status, source class, version, and commitment context. The perimeter should be reinforced by internal links, canonical definitions, negative statements, and machine-readable artifacts.
For AI answers, the perimeter should be tested before synthesis. If the answer requires a claim beyond the perimeter, the system should not fill the gap by coherence. It should qualify the answer, expose the missing authority, or refuse.
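The test-before-synthesis rule can be sketched as a pre-flight check that maps perimeter coverage to the three outcomes named above. The function and claim identifiers are hypothetical; the logic is only a sketch of the stated policy, not a reference implementation.

```python
from enum import Enum

class Outcome(Enum):
    ANSWER = "answer"    # every required claim is inside the perimeter
    QUALIFY = "qualify"  # partial coverage: answer, expose missing authority
    REFUSE = "refuse"    # no authorized basis at all

def check_before_synthesis(required_claims: set, authorized_claims: set) -> Outcome:
    """Test the perimeter before synthesis; never fill gaps by coherence."""
    inside = required_claims & authorized_claims
    if inside == required_claims:
        return Outcome.ANSWER
    if inside:
        return Outcome.QUALIFY
    return Outcome.REFUSE

print(check_before_synthesis({"c1", "c2"}, {"c1", "c2", "c3"}))  # → Outcome.ANSWER
print(check_before_synthesis({"c1", "c2"}, {"c1"}))              # → Outcome.QUALIFY
print(check_before_synthesis({"c1"}, set()))                     # → Outcome.REFUSE
```

The check runs before any text is generated: a fluent partial answer routes to QUALIFY with the missing authority exposed, never to silent completion.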
Operational rule
A response may not use proximity, similarity, silence, examples, or partial evidence to extend meaning beyond the declared interpretive perimeter.