Legitimate non-response
Legitimate non-response designates a governed output in which an AI system does not respond (or explicitly states that it cannot conclude) because the question exceeds the interpretability perimeter or crosses the authority boundary. It is a correct response, not a failure.
In interpretive governance, legitimate non-response prevents the model from transforming an absence in the canon, an ambiguity, or an authority conflict into a plausible but unauthorized statement.
Definition
Legitimate non-response is the state where the system:
- recognizes that it cannot establish a proposition from authorized sources;
- avoids any ungoverned extrapolation;
- preserves canonical silence when information is not declared;
- produces an explicit output such as: “I cannot conclude”, “information not declared”, or “condition missing”.
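The states above can be sketched as a small guard function. This is a minimal, illustrative sketch: the names (`NonResponse`, `answer`, the `canon` mapping, and the condition sets) are hypothetical and not drawn from any real system; the point is that an unauthorized question yields an explicit non-response output rather than a plausible guess.

```python
from enum import Enum


class NonResponse(Enum):
    """Explicit non-response outputs (hypothetical labels)."""
    CANNOT_CONCLUDE = "I cannot conclude"
    NOT_DECLARED = "information not declared"
    CONDITION_MISSING = "condition missing"


def answer(question: str, canon: dict, required: set, given: set) -> str:
    """Return a canon-backed statement or an explicit non-response.

    `canon` maps declared questions to authorized statements; anything
    outside it preserves canonical silence instead of extrapolating.
    """
    missing = required - given
    if missing:
        # An indispensable condition (date, jurisdiction, ...) was not supplied.
        return f"{NonResponse.CONDITION_MISSING.value}: {', '.join(sorted(missing))}"
    if question not in canon:
        # Canonical silence: the information is not declared.
        return NonResponse.NOT_DECLARED.value
    return canon[question]
```

Note that the guard never falls through to a generated answer: every path either returns an authorized statement or one of the explicit non-response strings.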
Legitimate non-response is a legitimacy mechanism: it protects the system against interpretive hallucination and limits interpretive debt.
Why this is critical in AI systems
- The model prefers to respond: without a non-response rule, it fills gaps with plausible content.
- Form carries authority: a well-formulated response can be mistaken for established fact.
- Errors stabilize: once repeated, they become the default representation.
Typical triggers
- Canonical silence: the canon does not declare the requested information.
- Missing condition: a required date, jurisdiction, version, or other indispensable context is not specified.
- Authority conflict: two authorized sources contradict without arbitration rule.
- Perimeter exceeded: the question falls outside the declared interpretability perimeter.
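The four triggers can be checked in a fixed order before any answer is generated. A minimal sketch, assuming hypothetical inputs: `perimeter` is the set of declared questions, `canon` the authorized statements, `sources` a list of authorized source mappings, and `required`/`given` the condition sets. None of these names come from the source.

```python
from enum import Enum, auto


class Trigger(Enum):
    """The four triggers of legitimate non-response."""
    PERIMETER_EXCEEDED = auto()
    MISSING_CONDITION = auto()
    AUTHORITY_CONFLICT = auto()
    CANONICAL_SILENCE = auto()


def detect_trigger(question, perimeter, canon, sources, required, given):
    """Return the first applicable trigger, or None if a governed answer is possible."""
    if question not in perimeter:
        # The question falls outside the declared interpretability perimeter.
        return Trigger.PERIMETER_EXCEEDED
    if required - given:
        # Indispensable context (date, jurisdiction, version, ...) is missing.
        return Trigger.MISSING_CONDITION
    statements = {src[question] for src in sources if question in src}
    if len(statements) > 1:
        # Two authorized sources contradict and no arbitration rule applies.
        return Trigger.AUTHORITY_CONFLICT
    if question not in canon:
        # The canon does not declare the requested information.
        return Trigger.CANONICAL_SILENCE
    return None
```

The ordering is a design choice: perimeter and condition checks are cheap and local, so they run before the cross-source conflict check.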