Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope (Definitions canon)
- 02 Response authorization (Q-Layer: response legitimacy)
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Layer: response legitimacy
/response-legitimacy.md
Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.
- Makes provable: the legitimacy regime to apply before treating an output as receivable.
- Does not prove: that a given response actually followed this regime, or that an agent applied it at runtime.
- Use when: a page deals with authority, non-response, execution, or restraint.
Glossary: opposability, enforceability, and procedural accountability
This family groups the concepts that decide whether an AI-mediated output can be assumed, challenged, corrected or treated as procedurally valid after it becomes consequential.
The central rule is simple: a plausible answer is not automatically an opposable answer. Search engines, LLMs, RAG systems and agents can produce fluent outputs that remain weak when a user, organization, regulator, client or affected party asks who authorized the answer and how it can be contested.
Canonical terms
- Opposability
- Enforceability
- Commitment boundary
- Liability reduction
- Contestability
- Procedural validity
- Challenge path
- Accountability surface
Operational sequence
- Detect whether the output crosses a commitment boundary.
- Test whether the output is opposable under the declared authority and evidence.
- Check procedural validity before treating the answer as usable in a consequential context.
- Preserve contestability through traces, source roles and visible uncertainty.
- Provide a challenge path when the output may affect a person, organization or institutional position.
- Apply liability reduction through bounded answers, escalation, refusal, and correction resorption.
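The operational sequence above can be sketched as a gating function. This is an illustrative sketch only: the class, field names, and disposition strings are assumptions introduced for this example, not part of the canon.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Output:
    """Illustrative stand-in for an AI-mediated output under review."""
    text: str
    crosses_commitment_boundary: bool    # step 1: does it bind anyone?
    opposable: bool                      # step 2: defensible under declared authority and evidence?
    procedurally_valid: bool             # step 3: was the right process followed?
    traces: list = field(default_factory=list)  # step 4: contestability material
    challenge_path: Optional[str] = None        # step 5: route for contesting the output

def gate(output: Output) -> str:
    """Walk the sequence in order and return a disposition, not a verdict."""
    if not output.crosses_commitment_boundary:
        return "low-stakes: usable without the full chain"
    if not output.opposable:
        return "refuse or escalate: not defensible under declared authority"
    if not output.procedurally_valid:
        return "suspend: process check failed"
    if not output.traces or output.challenge_path is None:
        return "bound the answer: contestability is incomplete"
    return "usable in a consequential context"
```

The ordering matters: a commitment-boundary check comes first because the rest of the chain (step 6, liability reduction) only applies to outputs that can bind someone.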
Relation to previous layers
This layer sits after inference control. Phase 10 asks whether reasoning, arbitration or completion was legitimate. Phase 11 asks whether the resulting output can be defended, challenged or assumed once it circulates.
It should be read with answer legitimacy, proof of fidelity, interpretive auditability, Q-Ledger, source hierarchy and agentic response conditions.
How to read this lexical family
This family moves from interpretation to institutional consequence. It asks whether a response can be relied on, challenged, defended, assumed or treated as procedurally valid. This is a higher threshold than being useful, fluent, sourced or technically executable.
Opposability concerns whether an answer can stand in a context of dispute. Enforceability concerns whether it can support a consequential rule or obligation. Procedural validity asks whether the right process was followed. The challenge path and accountability surface make it possible to inspect and contest the output.
Typical misreadings
The first mistake is to treat model confidence as procedural validity. A confident answer can still be inadmissible if it lacks authority, evidence, source hierarchy, response conditions or a valid path of challenge.
The second mistake is to treat enforceability as a legal claim only. In this corpus, enforceability is broader: it concerns whether a response can be treated as valid for a workflow, institution, contract-like process, compliance review, internal decision or agentic execution.
Use in audit and routing
Use this family when an AI output may affect a decision, obligation, record, user, client, institution or workflow. The audit should ask what would happen if the answer were challenged and what evidence would be available to defend or reverse it.
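The two audit questions, what happens if the answer is challenged and what evidence is available to defend or reverse it, can be phrased as a minimal record check. The record fields and gap messages below are hypothetical names chosen for illustration under the assumptions of this family.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AuditRecord:
    """Minimal evidence assumed available if an output is later challenged (illustrative)."""
    declared_authority: Optional[str]   # who authorized the answer
    source_roles: List[str]             # which sources played which role
    uncertainty_noted: bool             # was uncertainty made visible
    reversal_procedure: Optional[str]   # how the answer could be corrected or withdrawn

def challenge_readiness(record: AuditRecord) -> List[str]:
    """Return the gaps a challenger would expose; an empty list means defensible on paper."""
    gaps = []
    if record.declared_authority is None:
        gaps.append("no declared authority: who authorized the answer?")
    if not record.source_roles:
        gaps.append("no source roles: citation without a role is not proof")
    if not record.uncertainty_noted:
        gaps.append("uncertainty was not made visible")
    if record.reversal_procedure is None:
        gaps.append("no path to defend or reverse the answer")
    return gaps
```

An empty gap list does not prove the output was legitimate; it only shows that the evidence needed to contest or defend it would exist.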
For routing, this family supports answer legitimacy, commitment boundary, liability reduction, agentic response conditions and interpretation trace pages. Its role is procedural: it defines the conditions under which an output can be assumed.
How to use this glossary family
This glossary family should be read as a conceptual map, not as a replacement for the individual canonical definitions. Its role is to show how the terms grouped under opposability, enforceability, and procedural accountability relate to one another and why they should not be collapsed into a single generic idea.
A useful reading starts with the failure pattern. Ask what kind of mistake the family helps prevent: confusing visibility with authority, retrieval with legitimacy, citation with proof, persistence with current validity, action with authorization, or coherence with fidelity. The definitions then become routing surfaces. They help decide which page should be primary, which page should support it, and which concept should remain separate.
Boundary of the family
The family does not prove that a model, search engine or agent follows these distinctions. It provides the vocabulary required to test whether they do. In practice, it should be used with observations, audits, source hierarchies and proof discipline. Without those layers, a glossary can name a risk but cannot show whether the risk has occurred.