Definition

Arbitration

Canonical definition of arbitration: the mechanism by which a system chooses between, exposes, or refuses to decide among competing interpretations, sources, or response paths.

Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-05-09
Published: 2026-05-09
Updated: 2026-05-09

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope: Definitions canon (#01)
  2. Response authorization: Q-Layer: response legitimacy (#02)
  3. Weak observation: Q-Ledger (#03)
Canonical foundation #01

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable
The reference corpus against which fidelity can be evaluated.
Does not prove
Neither that a system already consults it nor that an observed response stays faithful to it.
Use when
Before any observation, test, audit, or correction.
Legitimacy layer #02

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable
The legitimacy regime to apply before treating an output as receivable.
Does not prove
Neither that a given response actually followed this regime nor that an agent applied it at runtime.
Use when
When a page deals with authority, non-response, execution, or restraint.
Observation ledger #03

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable
That a behavior was observed as weak, dated, contextualized trace evidence.
Does not prove
Neither actor identity, system obedience, nor strong proof of activation.
Use when
When it is necessary to distinguish descriptive observation from strong attestation.

Arbitration

This page owns the term “arbitration” as an interpretive governance concept. It does not refer primarily to legal arbitration, but to the decision mechanism that handles competing readings, sources, claims or response paths.

Arbitration is the governed process by which a system chooses, ranks, qualifies, exposes or refuses between competing interpretations, sources, versions, claims or possible outputs.

Short definition

Arbitration is the response-control mechanism that decides what governs when several plausible meanings, sources or answer paths compete.

Why it matters

AI systems constantly arbitrate. They decide which source to trust, which meaning to preserve, which contradiction to ignore, which uncertainty to expose and which formulation to output. When this process is not governed, the system still arbitrates, but silently.

Silent arbitration is a major cause of manufactured coherence. The answer becomes smooth because the system has hidden the conflict. It may select the most frequent formulation, the latest retrieved fragment, the most semantically central source or the easiest summary, even when the canonical source should have prevailed.

What it is not

Arbitration is not ranking alone. Ranking orders materials. Arbitration determines how competing materials affect the final answer. It is also not simply confidence scoring. A high-confidence output can still be illegitimate if the wrong source or wrong perimeter governed the answer.

Arbitration outcomes

A governed arbitration layer can produce several outcomes:

  • one source prevails under the source hierarchy;
  • the answer is qualified because sources conflict;
  • the response exposes uncertainty instead of smoothing it;
  • the system asks for clarification;
  • the system refuses because the conflict cannot be resolved;
  • the output is escalated to a more authoritative layer.

Governance rule

Arbitration must be explicit when source conflict, perimeter conflict, version conflict, memory conflict or commitment boundary conflict is present. The governing stack is source hierarchy, authority ordering, authority conflict, response conditions and answer legitimacy.

For SERP ownership, arbitration means assigning one primary page to each term and making support pages route toward it. A site that does not arbitrate its own pages invites external systems to arbitrate them differently.

Corpus role and diagnostic use

In the corpus, Arbitration is used to distinguish governed reasoning from uncontrolled completion. AI systems must infer in order to answer, but not every inference is legitimate. The central question is whether the inferential step remains inside a declared boundary, preserves the source hierarchy, exposes uncertainty and can be reconstructed under challenge.

This definition is especially useful when a generated answer fills a gap between sources. The answer may be fluent, useful or even directionally correct, but still fail if the missing step was never authorized. A governed system should be able to show whether it reasoned from admitted evidence, defaulted from pattern recognition, or completed a missing premise by proximity.

Failure pattern to detect

The main failure is plausible completion. It appears when a model treats silence as permission, examples as rules, adjacent concepts as equivalents, or partial evidence as a complete authority chain. In that case, the problem is not only hallucination. It is the absence of a defensible inference boundary.

Reading rule

Use this definition with inference prohibition, non-inference regime, interpretive fidelity, canon-output gap and answer legitimacy. The term should help decide when an answer may proceed, when it must qualify itself, and when silence is the legitimate output.

Operational examples

A practical audit can use Arbitration in three situations. First, when comparing a canonical page with an AI answer that reuses the vocabulary but changes the governing perimeter. Second, when deciding whether a generated formulation should be accepted as a stable representation or treated as an ungoverned reconstruction. Third, when mapping internal links, service pages, definitions and observations so that the most authoritative route remains visible to both humans and machines.

The term should therefore be tested against concrete outputs, not only defined abstractly. A useful review asks: which source governed the statement, which inference was made, what uncertainty was hidden, and which page should be responsible for the final wording? If the answer to those questions is unclear, the output should be qualified, redirected, logged or refused rather than smoothed into a stronger claim.
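The review questions above can be expressed as a minimal gate. The question wording is taken from the text; the function name, the answer format, and the two result labels are hypothetical conveniences for the sketch.

```python
# Review questions taken from the text; labels below are illustrative.
REVIEW_QUESTIONS = (
    "Which source governed the statement?",
    "Which inference was made?",
    "What uncertainty was hidden?",
    "Which page is responsible for the final wording?",
)


def review(answers: dict[str, str]) -> str:
    """If any review question lacks a clear answer, do not smooth the output."""
    unresolved = [q for q in REVIEW_QUESTIONS if not answers.get(q, "").strip()]
    if unresolved:
        return "qualify-or-refuse"
    return "may-proceed"
```

Used this way, the gate encodes the reading rule directly: an output only proceeds when every governing question has a concrete answer; otherwise it is qualified, redirected, logged, or refused.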

Practical boundary

This definition does not create an automatic ranking, citation or recommendation effect. Its value is architectural: it gives the corpus a sharper way to name and test a specific interpretive control point. That sharper naming is what allows later audits, correction cycles and SERP routing decisions to remain consistent.