Compliance drift
Compliance drift is the phenomenon in which an AI system's responses become, over time, increasingly incompatible with its declared rules, policies, or constraints, even though the canon itself has not explicitly changed. The rules remain the same, but the outputs diverge.
This drift is particularly dangerous because it is not always visible. A response can remain plausible and well formulated, yet fall outside the interpretability perimeter. Compliance degrades silently.
Definition
Compliance drift is the situation where:
- a canon (rules, policies, limits, negations) is stable;
- but system outputs become progressively less compatible with that canon;
- and the gap between canon and outputs widens even though the canon's source has not changed.
Drift can stem from execution context changes (routing, activated sources, models), progressive neighborhood contamination, or external changes that reframe interpretation.
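The definition above can be sketched as a simple monitor. This is a minimal, hypothetical illustration: the canon is reduced to a fixed list of forbidden regex patterns (real canons include policies, limits, and negations), and drift is flagged when the violation rate rises across successive time windows while the canon stays constant. The pattern strings and function names are illustrative assumptions, not part of any real system.

```python
import re
from statistics import mean

# Hypothetical canon: declared rules reduced to forbidden patterns.
# Real canons are richer; regexes are only a stand-in for this sketch.
CANON = [
    r"\bguaranteed returns\b",   # e.g. a rule forbidding financial promises
    r"\bmedical diagnosis\b",    # e.g. a rule forbidding diagnostic claims
]

def violation_rate(outputs):
    """Fraction of outputs matching at least one forbidden pattern."""
    def violates(text):
        return any(re.search(p, text, re.IGNORECASE) for p in CANON)
    return mean(1.0 if violates(o) else 0.0 for o in outputs)

def drift_detected(windows, tolerance=0.0):
    """Flag drift when the canon-output gap grows across successive
    time windows even though CANON itself never changed."""
    rates = [violation_rate(w) for w in windows]
    widening = all(b > a + tolerance for a, b in zip(rates, rates[1:]))
    return widening, rates
```

Run over weekly batches of sampled outputs, a strictly increasing violation rate is exactly the "stable canon, diverging outputs" signature: the rules were never edited, yet compatibility decays.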
Why this is critical in AI systems
- It gives a false sense of control: “the rules exist, therefore the system is compliant”.
- It degrades reliability: audit becomes retrospective, not preventive.
- It increases exposure: flawed decisions, regulatory non-compliance, reputational damage, and implicit liability.
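The retrospective-versus-preventive point can be made concrete with a sketch of a pre-emission gate: the canon check runs before a response is released, rather than in a later audit of logs. The rule set, rule identifiers, and function names here are hypothetical, chosen only to illustrate the shape of such a gate.

```python
import re

# Hypothetical rule set: each declared rule gets an identifier,
# so a blocked response names exactly which part of the canon it breached.
RULES = {
    "no-financial-promise": r"\bguaranteed returns\b",
    "no-diagnosis": r"\bmedical diagnosis\b",
}

def gate(response):
    """Return (allowed, breached_rule_ids) for one candidate response."""
    breached = [rid for rid, pat in RULES.items()
                if re.search(pat, response, re.IGNORECASE)]
    return not breached, breached

def emit(response):
    """Release a response only if it passes the canon gate."""
    allowed, breached = gate(response)
    if not allowed:
        # Surface the breach immediately, instead of letting the
        # non-compliant output ship and be found in a later audit.
        raise ValueError(f"canon breach: {breached}")
    return response
```

The design choice is where the check sits: the same rules evaluated before emission make the audit preventive, while the same rules evaluated over yesterday's logs make it retrospective.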
Frequent causes
- Model or behavior change: system update, fine-tuning, parameters.
- Activated source change: new dominant external sources, disappearance of old ones.
- Remanence / inertia: progressive return of old interpretations.
- Neighborhood contamination: dominant co-occurrences that reframe the concept.
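Of these causes, neighborhood contamination lends itself to a simple measurement. The sketch below, under the assumption that "neighborhood" means words co-occurring with a concept term within a small token window, compares the concept's neighbors in an old and a new corpus using Jaccard distance; all names and the window size are illustrative.

```python
from collections import Counter

def neighbors(texts, concept, window=3):
    """Count words co-occurring with `concept` within +/- `window` tokens."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i, tok in enumerate(tokens):
            if tok == concept:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for t in tokens[lo:hi] if t != concept)
    return counts

def neighborhood_shift(old_texts, new_texts, concept, window=3):
    """Jaccard distance between the concept's co-occurrence neighborhoods
    in two corpora: 0.0 = identical neighbor sets, 1.0 = fully disjoint."""
    old = set(neighbors(old_texts, concept, window))
    new = set(neighbors(new_texts, concept, window))
    if not old and not new:
        return 0.0
    return 1.0 - len(old & new) / len(old | new)
```

A rising shift score between time windows means the dominant co-occurrences around the concept are being replaced, which is precisely the reframing that contaminates interpretation while the canon text stays untouched.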