Doctrinal note: this text is read through External Authority Control (EAC), the layer that qualifies the admissibility of external authorities in interpretive reconstruction. See EAC: minimum doctrinal decisions · EAC doctrine.

The problem is not only the absence of information. In a web interpreted by AI systems, the major risk can be the opposite: too much authority over the same semantic zone. Two sources may appear credible, stable, and widely repeated while still producing incompatible statements. Without governance, the AI system compensates through fusion, smoothing, or arbitrary selection. This is authority conflict.

Operational definition

Authority conflict: a situation in which two or more sources claim legitimacy over the same statement scope while declaring incompatible truths, forcing the system to arbitrate implicitly.

Why it is dangerous

  • Fusion: the AI system blends both truths and produces a hybrid synthesis.
  • Oscillation: the response varies depending on the prompt, the language, or the context.
  • Capture: a dominant secondary source displaces the primary source.
  • Interpretive debt: the longer the conflict persists, the more the ecosystem rigidifies around an erroneous version.

Typology of authority conflicts

1) Version conflict

Two documents describe the same object, but at two different moments in time: an older version and a newer one.

2) Scope conflict

Both sources discuss the same theme, but not the same scope: region, product, channel, or date.

3) Interpretive conflict

Source A describes, source B prescribes, and the AI system confuses the regimes: fact versus norm.

4) Institutional conflict

Two “official” authorities coexist: organizations, standards bodies, associations, or public authorities.

5) Exogenous conflict

The primary source is contradicted by aggregators, wikis, media, or dominant secondary listings.

Rapid diagnostic

  1. Define the conflicting statement: which exact sentence diverges?
  2. Identify the sources: who says what, where, and with which date?
  3. Qualify the type of conflict: version, scope, norm, institutional, or exogenous.
  4. Measure perceived authority: frequency of reuse, position in the external graph, citations.
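The four diagnostic steps above can be sketched as a small data model. This is an illustrative sketch only: the names `ConflictType`, `SourceClaim`, and `ConflictReport` are hypothetical and not part of the EAC doctrine itself, and reuse frequency stands in as a crude proxy for perceived authority.

```python
from dataclasses import dataclass
from enum import Enum

class ConflictType(Enum):
    # Step 3: the five conflict types from the typology above.
    VERSION = "version"
    SCOPE = "scope"
    INTERPRETIVE = "interpretive"   # fact versus norm
    INSTITUTIONAL = "institutional"
    EXOGENOUS = "exogenous"

@dataclass
class SourceClaim:
    source: str           # who says it
    statement: str        # what exactly is claimed
    location: str         # where it is published
    date: str             # with which date
    reuse_count: int = 0  # frequency of reuse in the external graph

@dataclass
class ConflictReport:
    statement: str                # step 1: the exact diverging sentence
    claims: list[SourceClaim]     # step 2: who says what, where, when
    conflict_type: ConflictType   # step 3: qualify the type

    def perceived_authority(self) -> dict[str, int]:
        # Step 4: a rough measure of perceived authority per source,
        # here reduced to reuse frequency for illustration.
        return {c.source: c.reuse_count for c in self.claims}
```

Making the conflict explicit in a structure like this is what turns implicit arbitration into a governable object: each divergence has an identified statement, named sources, a type, and a measurable authority signal.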

Arbitration rules (interpretive governance)

1) Set an authority boundary

  • Declare which document is canonical for which scope.
  • Define the limits: version, date, region, product, channel.

2) Govern the conflict through versioning

  • Make chronology explicit: “since,” “until,” “effective from.”
  • Maintain a changelog or a “current state” page.

3) Reduce scope ambiguity

  • Make the conditions explicit to prevent the AI system from overgeneralizing.

4) Enable legitimate non-response when necessary

  • If the conflict cannot be resolved, the correct output may be a non-response or a conditional response.

5) Act exogenously

  • Correct secondary sources wherever possible.
  • Reinforce the citability and evidentiary visibility of the canonical source.
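Rules 1, 2, and 4 can be combined in one minimal sketch: declare which document is canonical for which scope and period, and return a non-response when no declared boundary covers the query. The names `CanonicalBoundary` and `arbitrate` are hypothetical, chosen for illustration under the assumption that scopes are keyed by region and ISO-formatted dates.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CanonicalBoundary:
    document: str        # rule 1: which document is canonical...
    region: str          # ...and for which scope
    effective_from: str  # rule 2: chronology made explicit ("effective from")
    effective_until: Optional[str] = None  # open-ended if None

def arbitrate(boundaries: list[CanonicalBoundary],
              region: str, date: str) -> Optional[str]:
    """Return the canonical document for a region and date, or None.

    Rule 4: if no declared boundary covers the query, the correct
    output is a legitimate non-response (None), not a guess.
    """
    for b in boundaries:
        in_region = b.region == region
        started = b.effective_from <= date  # ISO dates compare lexicographically
        not_ended = b.effective_until is None or date <= b.effective_until
        if in_region and started and not_ended:
            return b.document
    return None  # legitimate non-response
```

The design point is that refusal is encoded as a first-class outcome: the function never falls back to fusion or arbitrary selection when the declared boundaries do not resolve the query.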

FAQ

Can an AI system arbitrate an authority conflict correctly?

Sometimes, but without an explicit framework, arbitration remains implicit: arbitrary selection, fusion, or smoothing.

What is the best strategy when the conflict persists?

Make the scope and the version explicit, reinforce the citability of the canonical source, and authorize legitimate non-response when the evidence is insufficient.

How is this related to exogenous governance?

Authority conflicts often emerge in the external graph (aggregators, wikis, media, listings, and secondary citations), which is precisely the terrain that exogenous governance acts on.