Authority conflict

Type: Canonical definition

Conceptual version: 1.1

Stabilization date: 2026-03-02

An authority conflict designates a situation in which two or more sources claim legitimate authority over the same point yet produce incompatible statements. In an AI system, this conflict creates a major risk: an invented “coherent” synthesis, an arbitrary selection between sources, or ungoverned extrapolation.

In interpretive governance, an authority conflict is not a difference of opinion. It is a governance event: without an arbitration rule, the correct output may be a legitimate non-response.


Definition

An authority conflict occurs when at least two sources:

  • are considered authorized within the interpretability perimeter;
  • bear on the same object (same entity, same rule, same period, same perimeter);
  • and produce mutually incompatible propositions.

On the open web, this authorization cannot be presumed. An external source can enter an authority conflict only once its admissibility has been qualified, notably via External Authority Control (EAC).
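The three qualification criteria above can be sketched as a simple check. This is an illustrative sketch only: the names `Claim`, `eac_admissible`, and `is_authority_conflict` are assumptions introduced here, not part of any defined API, and string inequality is used as a crude stand-in for real proposition incompatibility:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    source: str            # identity of the claiming source
    obj: str               # the object the claim bears on (entity, rule, period, perimeter)
    proposition: str       # the statement made about that object
    eac_admissible: bool   # has admissibility been qualified (e.g. via EAC)?

def is_authority_conflict(a: Claim, b: Claim) -> bool:
    """Two claims form an authority conflict in the strong sense only if
    both sources are admissible, they bear on the same object, and their
    propositions are incompatible."""
    return (
        a.eac_admissible and b.eac_admissible    # both authorized in the perimeter
        and a.obj == b.obj                       # same object
        and a.proposition != b.proposition       # incompatible propositions (crude proxy)
    )

x = Claim("registry_A", "company:123/legal_name", "Acme SARL", True)
y = Claim("registry_B", "company:123/legal_name", "Acme SAS", True)
z = Claim("blog_C", "company:123/legal_name", "Acme Inc.", False)

print(is_authority_conflict(x, y))  # True: admissible, same object, incompatible
print(is_authority_conflict(x, z))  # False: blog_C not yet qualified under EAC
```

Note that the third claim is excluded not because it is wrong, but because its admissibility has not been qualified; it does not yet count as an authority conflict in the strong sense.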


Why this is critical in AI systems

  • The model smooths: it can merge two truths into an undeclared average.
  • The model arbitrates: it implicitly chooses a source based on popularity or style signals.
  • The model invents: it fabricates a “reasonable” synthesis that exists nowhere.

What an authority conflict is not

  • It is not a nuance of style. It is an incompatibility between propositions.
  • It is not a divergence outside admissibility. If a source is not receivable under EAC, it does not yet constitute an authority conflict in the strong sense.
  • It is not an invitation to synthesize. The correct output may be abstention.
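The abstention principle can be sketched as a resolution policy. The names `ArbitrationRule` and `resolve` are hypothetical; the only point illustrated is that, absent a declared arbitration rule, the governed output on conflicting propositions is abstention rather than synthesis:

```python
from typing import Callable, Optional

# A hypothetical arbitration rule: picks one proposition among conflicting
# ones, or returns None if it cannot legitimately decide.
ArbitrationRule = Callable[[list[str]], Optional[str]]

def resolve(propositions: list[str],
            rule: Optional[ArbitrationRule] = None) -> Optional[str]:
    """Governed resolution: never average, never guess.
    Without an arbitration rule, the correct output is abstention (None)."""
    distinct = sorted(set(propositions))
    if len(distinct) <= 1:
        return distinct[0] if distinct else None  # no conflict to resolve
    if rule is None:
        return None  # legitimate non-response: no ungoverned synthesis
    return rule(distinct)

print(resolve(["Acme SARL", "Acme SAS"]))   # None: conflict, no rule -> abstain
print(resolve(["Acme SARL", "Acme SARL"]))  # Acme SARL: no conflict
```

The design choice is that abstention is the default branch, not an error path: the caller must supply an explicit rule before any arbitration happens.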

Recommended internal links