Interpretive collision

An interpretive collision is the phenomenon in which an AI system fuses, confuses, or mixes two distinct entities, concepts, or reference frames because their signals (names, descriptions, semantic neighborhood, attributes) are too similar or too ambiguous.

An interpretive collision does not always produce a spectacular hallucination. It often produces a synthesis hallucination: a response that appears “coherent” but is composed of elements belonging to different objects.


Definition

Interpretive collision is a situation where:

  • two distinct objects (entities, products, concepts, frameworks) emit similar signals;
  • the AI system cannot keep them separate;
  • and the output results from a fusion (mixed attributes) or a substitution (one object replaces the other).

Interpretive collision is a central risk of exogenous governance (the open web) and a routing risk in closed environments (RAG, agentic workflows).
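The routing risk in a RAG setting can be sketched concretely: when two distinct entities produce nearly identical embeddings, the retriever cannot reliably tell them apart. The entity names and embedding vectors below are purely illustrative assumptions, not real data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings for two distinct organizations
# that share the acronym "ACM" -- values are illustrative only.
docs = {
    "ACM (computing society)": [0.81, 0.40, 0.12, 0.05],
    "ACM (consulting firm)":   [0.79, 0.43, 0.10, 0.08],
}
query = [0.80, 0.41, 0.11, 0.06]  # a user query that just says "ACM"

# Both documents score almost identically against the query, so the
# retriever has no reliable basis for keeping the two entities apart:
# an interpretive collision at routing time.
for name, vec in sorted(docs.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{name}: {cosine(query, vec):.4f}")
```

Both similarity scores land within a fraction of a percent of each other, which is the signal-proximity condition the definition above describes.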


Common collision types

  • Identity collision: two entities bearing a similar name (or identical acronym).
  • Concept collision: a specific concept conflated with a generic category.
  • Relational collision: two related entities whose hierarchy is poorly formalized.
  • Temporal collision: an outdated entity (former name, deprecated version) that still persists in outputs.
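Identity and temporal collisions share a detectable surface symptom: entity names whose string-level signals are nearly identical. A minimal screening sketch, assuming a hypothetical entity registry and an illustrative similarity threshold:

```python
from difflib import SequenceMatcher

# Hypothetical entity names -- the registry and the 0.8 threshold
# are illustrative assumptions, not a calibrated detector.
entities = [
    "Acme Cloud Platform",
    "Acme Cloud Platforms",        # identity-collision candidate
    "Borealis Framework v1",       # temporal-collision candidate
    "Borealis Framework v2",
]

def collision_candidates(names, threshold=0.8):
    """Flag pairs whose names are close enough to risk an
    identity or temporal collision."""
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

for a, b, score in collision_candidates(entities):
    print(f"{score}: {a!r} <-> {b!r}")
```

String similarity only flags candidates; deciding whether a flagged pair is one entity, two entities, or two versions of the same entity still requires the disambiguating attributes the bullet list above points to.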

Recommended internal links