
Definition

Interpretive fidelity

Interpretive fidelity is a canonical concept covering AI interpretation, authority, evidence, and response legitimacy.

Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-05-09
Published: 2026-05-09
Updated: 2026-05-09

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope (Definitions canon)
  2. Response authorization (Q-Layer: response legitimacy)
  3. Weak observation (Q-Ledger)

Canonical foundation #01

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
Legitimacy layer #02

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as receivable.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
Observation ledger #03

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed as weak, dated, contextualized trace evidence.
Does not prove
Neither actor identity, system obedience, nor strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.
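The page does not publish the ledger's schema. As a minimal sketch of how a consumer might treat such a ledger, the entry shape below is assumed: the field names `entries`, `observed_at`, `surface`, and `session_hint` are illustrative inventions, not the actual `/.well-known/q-ledger.json` format. The point the code makes is the one the text makes: each entry is dated, contextualized trace evidence, never strong attestation.

```python
import json
from datetime import datetime, timezone

# Hypothetical ledger document; the field names are illustrative
# assumptions, not the published q-ledger.json schema.
raw = """
{
  "entries": [
    {"observed_at": "2026-05-09T10:00:00+00:00",
     "surface": "/canon.md",
     "session_hint": "inferred"}
  ]
}
"""

def weak_observations(document: str):
    """Yield (timestamp, surface) pairs from a ledger document.

    Each pair is descriptive observation only: it records that a
    consultation was observed at a given time, and proves neither
    actor identity nor that the system obeyed the consulted surface.
    """
    for entry in json.loads(document)["entries"]:
        ts = datetime.fromisoformat(entry["observed_at"])
        yield ts.astimezone(timezone.utc), entry["surface"]

for ts, surface in weak_observations(raw):
    print(ts.date(), surface)
```

Keeping the read path this narrow mirrors the "does not prove" boundary: nothing in the ledger is promoted to an identity claim or an obedience claim.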

Interpretive fidelity

This page owns the term “interpretive fidelity” and distinguishes it from stylistic similarity or factual overlap. A response can reuse correct facts and still fail to preserve the intended meaning.

Interpretive fidelity is the degree to which an output preserves the canonical meaning, boundaries, authority, exclusions, evidence and response conditions of the source it claims to represent.

Short definition

Interpretive fidelity is faithfulness to governed meaning, not only to isolated facts.

Why it matters

AI systems often produce summaries that are factually adjacent but interpretively weaker. They may preserve keywords while losing the authority boundary, remove the refusal condition, generalize a definition, merge neighboring concepts, or compress the canon into a familiar category.

This is why proof of fidelity matters. A citation does not prove fidelity by itself. A cited source can still be misread, overextended or used to support a conclusion that the source never authorized.

What it is not

Interpretive fidelity is not the same as accuracy. Accuracy asks whether a statement is true. Fidelity asks whether the response preserved the role, perimeter and meaning of the source. It is also not a tone match. A response may sound aligned while removing the critical boundaries that made the canon defensible.

Failure modes

  • the output preserves the term but changes its category;
  • the answer cites the canon but generalizes beyond it;
  • the response removes exclusions because they are inconvenient;
  • a refusal condition is rewritten as a limitation;
  • a framework is summarized as a service promise;
  • a doctrine is compressed into a market label.
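Some of the failure modes above can be approximated by crude rule-level checks. The sketch below is a hedged illustration only: the canon record fields (`term`, `category`, `exclusions`, `refusal_condition`) and the substring heuristics are hypothetical, and a real test would use the proof-of-fidelity procedures this page names rather than string matching.

```python
# Minimal sketch: flag candidate fidelity failures by comparing a
# hypothetical canon record against an output. The record fields and
# heuristics are illustrative assumptions, not a published procedure.

def fidelity_flags(canon: dict, output: str) -> list[str]:
    flags = []
    text = output.lower()
    # Failure mode: the term is preserved but its category changes.
    if canon["term"].lower() in text and canon["category"].lower() not in text:
        flags.append("term kept, category possibly changed")
    # Failure mode: exclusions are silently dropped.
    for exclusion in canon["exclusions"]:
        if exclusion.lower() not in text:
            flags.append(f"exclusion missing: {exclusion}")
    # Failure mode: the refusal condition is rewritten or removed.
    if canon["refusal_condition"].lower() not in text:
        flags.append("refusal condition not preserved")
    return flags

canon = {
    "term": "interpretive fidelity",
    "category": "definition",
    "exclusions": ["not accuracy", "not tone match"],
    "refusal_condition": "suspend when authority is unclear",
}
output = "Interpretive fidelity is a service that guarantees accurate answers."
print(fidelity_flags(canon, output))
```

The example output above trips every check: the term survives, but the category, both exclusions, and the refusal condition are gone, which is exactly the "framework summarized as a service promise" failure.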

Governance rule

Interpretive fidelity should be tested through proof of fidelity, canon-output gap, interpretive evidence, reconstructible evidence, and interpretation trace.

For SERP ownership, fidelity requires canonical pages to be explicit about what the term means, what it excludes, which page owns the term, and which support pages apply or contextualize it.

Phase 11 adjacency: opposability, enforceability and procedural accountability

This definition now routes consequential-use questions toward opposability, enforceability, commitment boundary, liability reduction, contestability, procedural validity, challenge path and accountability surface.

The Phase 11 layer matters when an output stops being merely informational. If a response can shape a decision, promise, refusal, exception, remedy, public position, or agentic action, it must remain assumable, contestable, and corrigible under a declared procedure.

Corpus role and diagnostic use

In the corpus, interpretive fidelity is used to distinguish governed reasoning from uncontrolled completion. AI systems must infer in order to answer, but not every inference is legitimate. The central question is whether the inferential step remains inside a declared boundary, preserves the source hierarchy, exposes uncertainty, and can be reconstructed under challenge.

This definition is especially useful when a generated answer fills a gap between sources. The answer may be fluent, useful or even directionally correct, but still fail if the missing step was never authorized. A governed system should be able to show whether it reasoned from admitted evidence, defaulted from pattern recognition, or completed a missing premise by proximity.

Failure pattern to detect

The main failure is plausible completion. It appears when a model treats silence as permission, examples as rules, adjacent concepts as equivalents, or partial evidence as a complete authority chain. In that case, the problem is not only hallucination. It is the absence of a defensible inference boundary.

Reading rule

Use this definition with inference prohibition, non-inference regime, canon-output gap, and answer legitimacy. The term should help decide when an answer may proceed, when it must qualify itself, and when silence is the legitimate output.