Neighborhood contamination
Neighborhood contamination designates the phenomenon where the interpretation of an entity or concept is altered by the semantic proximity of surrounding content (dominant categories, co-occurrences, adjacent entities), to the point where an AI system attributes to the subject properties that belong primarily to its environment, not to its canon.
In an interpreted web, meaning is not determined solely by what you declare, but by what surrounds you. Neighborhood contamination is therefore a major mechanism of interpretive invisibilization and capture.
Definition
Neighborhood contamination is the situation where:
- a subject A has a clear canon;
- but its semantic neighborhood (B, C, D) is denser, more repeated, or more dominant;
- and the AI system projects onto A attributes, intentions, categories, or explanations drawn from that neighborhood.
The result is an interpretation that is “statistically coherent” but canonically false.
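The mechanism can be sketched as a toy numerical model. Everything below is invented for illustration: the labels, the "meaning vectors", and the repetition counts are assumptions, not a description of any real system. The point is only that a repetition-weighted pooling of a subject with a denser neighborhood yields a reading dominated by the neighborhood.

```python
# Toy model of neighborhood contamination. All labels, vectors,
# and repetition counts are invented for illustration.

LABELS = ["framework", "certification", "training"]

canon_A = [1.0, 0.0, 0.0]          # subject A's declared canon: a framework

# Neighboring content (B, C, D) leans toward "certification",
# and each neighbor is repeated several times across the corpus.
neighbors = [
    ([0.1, 0.9, 0.0], 3),          # (meaning vector, repetition count)
    ([0.0, 1.0, 0.0], 4),
    ([0.2, 0.7, 0.1], 3),
]

def pooled_reading(canon, neighborhood):
    """Blend the canon with its neighborhood, weighted by repetition."""
    total = list(canon)
    weight = 1                      # the canon counts once
    for vec, reps in neighborhood:
        for i, value in enumerate(vec):
            total[i] += value * reps
        weight += reps
    return [v / weight for v in total]

reading = pooled_reading(canon_A, neighbors)
inferred = LABELS[reading.index(max(reading))]
print(inferred)  # "certification": statistically coherent, canonically false
```

The canon unambiguously says "framework", yet the pooled reading lands on "certification" because the neighborhood is denser and more repeated, which is exactly the failure mode described above.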
Common contamination forms
- Categorical contamination: your concept is reframed into a standard category (e.g. “framework” assimilated to “certification”).
- Homonymy contamination: proximity to a better-known entity that shares the same name.
- Dominant discourse contamination: a current or school imposes its vocabulary around your subject.
- Co-occurrence contamination: systematic association with a partner, competitor, or practice that redefines perception.
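Co-occurrence contamination, the last form above, can be made measurable with a simple count. The sketch below uses a hypothetical entity name and an invented four-document corpus (both are assumptions for illustration): it tallies which terms appear alongside the subject and surfaces the dominant neighbor that is redefining perception.

```python
# Hypothetical co-occurrence check: count which terms appear alongside
# the subject in a small invented corpus. A single dominant neighboring
# term signals co-occurrence contamination risk.
from collections import Counter

SUBJECT = "acme-framework"   # hypothetical entity name

corpus = [
    "acme-framework audit certification compliance",
    "certification body reviews acme-framework deployments",
    "acme-framework certification checklist",
    "acme-framework design principles",
]

co_counts = Counter()
for doc in corpus:
    words = doc.split()
    if SUBJECT in words:
        # every other word in the document co-occurs with the subject
        co_counts.update(w for w in words if w != SUBJECT)

dominant, freq = co_counts.most_common(1)[0]
share = freq / sum(co_counts.values())
print(dominant, round(share, 2))  # certification 0.27
```

Here "certification" accounts for the largest share of the subject's co-occurrences, so a statistical reader will tend to fold the subject into that category regardless of its declared canon.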