Framework

Legitimate non-response protocol (rules and tests)

Protocol defining when an AI system should abstain, request clarification, or explicitly refuse to conclude because the canon does not authorize the answer.

Collection: Framework
Type: Protocol
Layer: negation-gouvernee
Version: 1.0
Published: 2026-02-20
Updated: 2026-02-26

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Q-Layer in Markdown
  2. Q-Layer in YAML
  3. Interpretation policy
Policy and legitimacy #01

Q-Layer in Markdown

/response-legitimacy.md

Canonical surface for response legitimacy, clarification, and legitimate non-response.

Governs: Response legitimacy and the constraints that modulate its form.
Bounds: Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Policy and legitimacy #02

Q-Layer in YAML

/response-legitimacy.yaml

Structured Q-Layer projection for systems that prefer YAML.

Governs: Response legitimacy and the constraints that modulate its form.
Bounds: Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Policy and legitimacy #03

Interpretation policy

/.well-known/interpretation-policy.json

Published policy that explains interpretation, scope, and restraint constraints.

Governs: Response legitimacy and the constraints that modulate its form.
Bounds: Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Complementary artifacts (3)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Policy and legitimacy #04

AI usage policy

/ai-usage-policy.md

Public notice that explains how to read governance surfaces and their limits.

Policy and legitimacy #05

Output Constraints

/output-constraints.md

Surface that makes explicit the conditions of response, restraint, escalation, or non-response.

Legitimate non-response protocol (rules and tests)

In AI systems, answering by plausibility is easy. Refusing correctly is difficult. Yet in an interpreted web, legitimate non-response is often the safest output: it prevents illegitimate inference, avoids creating interpretive debt, and protects the authority boundary.

This framework formalizes an opposable protocol: when to refuse, how to refuse, which proof conditions matter, and how to test that refusal is triggered correctly.

Operational definition

Legitimate non-response is a governed output in which the system refrains from answering because the required authority, proof, perimeter, or response conditions are not satisfied.

Why this protocol is central

Without a refusal protocol, systems tend to fill the gap with coherent-sounding completion. That behaviour may look helpful, but it turns uncertainty into an ungoverned decision.

Application surfaces

The protocol applies to identity, authority, eligibility, recommendation, conflict arbitration, and any surface where a smooth answer would overstep the canon.

Typology of refusals

Legitimate non-response can arise from several causes:

  • outside perimeter;
  • missing proof on critical attributes;
  • unresolved authority conflict;
  • identity ambiguity;
  • insufficient context;
  • inaccessible or non-admissible source surface.
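The causes above can be captured as a small enumeration. This is an illustrative sketch: the class and member names are assumptions for this page, not part of any published schema.

```python
from enum import Enum

# Hypothetical enumeration of the refusal causes listed above.
# Member names and values are illustrative, not a canonical vocabulary.
class RefusalCause(Enum):
    OUT_OF_PERIMETER = "outside perimeter"
    MISSING_PROOF = "missing proof on critical attributes"
    AUTHORITY_CONFLICT = "unresolved authority conflict"
    IDENTITY_AMBIGUITY = "identity ambiguity"
    INSUFFICIENT_CONTEXT = "insufficient context"
    INADMISSIBLE_SOURCE = "inaccessible or non-admissible source surface"

# A refusal record can then carry a machine-readable cause alongside prose.
print(RefusalCause.MISSING_PROOF.value)
```

Keeping the cause machine-readable supports the traceability and testability rules that follow.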

Rules (NRL-1 to NRL-10)

NRL-1: out-of-scope refusal

If the question exceeds the declared perimeter, the system should refuse rather than improvise.

NRL-2: refusal without proof on critical attributes

Where identity, authority, or eligibility requires proof, absence of proof blocks the answer.

NRL-3: refusal under non-arbitrable authority conflict

Conflicting sources do not justify synthetic certainty.

NRL-4: identity refusal

If the system cannot distinguish the relevant entity, it must suspend the answer.

NRL-5: insufficient context

A missing condition should lead to clarification or refusal, not silent filling.

NRL-6: bounded explanation

A legitimate refusal should state why the answer is blocked without pretending to know what is unavailable.

NRL-7: traceability

The refusal should remain explainable in terms of missing proof, perimeter, or conflict.

NRL-8: canonical redirection

Where possible, the refusal should redirect toward the right canon or clarification page.

NRL-9: testability

Refusal logic should be testable under known edge cases.

NRL-10: no punitive tone

Legitimate refusal is a governance output, not a moral judgment.
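Rules NRL-1 through NRL-4 can be sketched as ordered guard checks. The request model below is a hedged illustration: the field names and the boolean simplification are assumptions, not a published interface.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative request state; field names are assumptions, not a real schema.
@dataclass
class Request:
    in_perimeter: bool          # NRL-1: within the declared perimeter?
    proof_present: bool         # NRL-2: proof available for critical attributes?
    authority_conflict: bool    # NRL-3: unresolved conflict between sources?
    entity_resolved: bool       # NRL-4: relevant entity unambiguously identified?

def refusal_rule(req: Request) -> Optional[str]:
    """Return the first triggered rule identifier, or None if answering is allowed."""
    if not req.in_perimeter:
        return "NRL-1"   # out-of-scope refusal
    if not req.proof_present:
        return "NRL-2"   # refusal without proof on critical attributes
    if req.authority_conflict:
        return "NRL-3"   # refusal under non-arbitrable authority conflict
    if not req.entity_resolved:
        return "NRL-4"   # identity refusal
    return None

# Example: a request outside the declared perimeter is refused under NRL-1.
print(refusal_rule(Request(False, True, False, True)))  # NRL-1
```

Returning a rule identifier rather than a bare boolean keeps the refusal explainable (NRL-7) and testable (NRL-9).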

Why this protocol matters

Non-response becomes a trustworthy output only when it is explicit, bounded, and reconstructible. Otherwise, abstention appears arbitrary.

Additional rules (NRL-11 to NRL-16)

NRL-11: refusal for undated state

A system should refuse when a dynamic state is asserted without a valid time reference.

NRL-12: refusal of gap-filling

The model must not invent the missing link that would make an answer look complete.

NRL-13: refusal under capture signals

If the environment shows signs of interpretive capture, the system may need to suspend the answer rather than amplify the capture.

NRL-14: governed and useful refusal

A refusal should still orient the user toward the right canon, clarification, or required missing evidence where possible.

NRL-15: refusal traceability

The reason for refusal should be reconstructible in terms of perimeter, proof, conflict, or state insufficiency.

NRL-16: periodic validation

Refusal logic should be tested over time, especially after major releases or correction campaigns.

A mature refusal can often be structured as:

  1. why the answer is blocked;
  2. which condition is missing or unresolved;
  3. where to go next if a canonical clarification exists.
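That three-part structure can be sketched as a formatting helper. The function and its wording are hypothetical, offered only to show how the three parts compose.

```python
from typing import Optional

def format_refusal(blocked_reason: str, missing_condition: str,
                   redirect: Optional[str] = None) -> str:
    """Assemble a refusal following the three-part structure above.

    Hypothetical helper; the exact phrasing is illustrative, not mandated.
    """
    lines = [
        f"Answer withheld: {blocked_reason}.",          # 1. why it is blocked
        f"Unresolved condition: {missing_condition}.",  # 2. what is missing
    ]
    if redirect:
        # 3. canonical redirection where a clarification page exists
        lines.append(f"See: {redirect}")
    return "\n".join(lines)

print(format_refusal(
    "the question exceeds the declared perimeter",
    "no canonical surface governs this topic",
    "/response-legitimacy.md",
))
```

Making the redirect optional reflects that a canonical clarification does not always exist; the first two parts are the non-negotiable core.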

Test battery (NRL-T1 to NRL-T8)

NRL-T1: out-of-scope tests

Check whether the system refuses questions outside the declared perimeter.

NRL-T2: authority-conflict tests

Verify that unresolved conflicts do not produce synthetic certainty.

NRL-T3: identity-collision tests

Test whether ambiguity between neighbouring entities leads to refusal when proof is insufficient.

NRL-T4: dynamic-state tests

Check behaviour when the state is time-sensitive or stale.

NRL-T5: capture tests

Observe whether a saturated external framing can force an unauthorized answer.

NRL-T6: multi-turn tests

Ensure that the system does not silently overstep after several conversational turns.

NRL-T7: multi-formulation tests

Refusal should remain coherent across equivalent phrasings.

NRL-T8: post-release regression tests

Release changes should not silently weaken refusal logic.
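A minimal sketch of how such a battery can be written as plain assertions. The system under test is replaced here by a deliberately simple stub, and the perimeter set is an assumption for illustration.

```python
# Sketch of an NRL-style test battery using plain assertions.
# `answer()` is a stand-in for the system under test: a stub that
# refuses anything outside a declared perimeter. Topics are assumptions.

PERIMETER = {"response-legitimacy", "output-constraints"}

def answer(topic: str, phrasing: str = "") -> str:
    # The stub ignores `phrasing` on purpose: refusal should depend on
    # the topic's admissibility, not on how the question is worded.
    if topic not in PERIMETER:
        return "REFUSE"
    return "ANSWER"

# NRL-T1: questions outside the declared perimeter are refused.
assert answer("unrelated-topic") == "REFUSE"

# NRL-T7: refusal stays coherent across equivalent phrasings.
assert answer("unrelated-topic", "who is…?") == answer("unrelated-topic", "tell me about…")

print("NRL test sketch passed")
```

In a real deployment the stub would be replaced by the actual system interface, and the same assertions would be rerun after each release (NRL-T8).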

Expected artefacts

Useful artefacts include refusal templates, authority maps, edge-case test batteries, audit logs, and links to canonical clarification pages.

FAQ

Isn’t refusal a bad user experience?

Not if it prevents a false authoritative answer. A bounded refusal is often a better user experience than an unjustified certainty.

How do you avoid refusing too often?

By clarifying the canon, tightening authority logic, and asking for clarification when a full refusal is not required.

The Q-Layer governs whether an answer is authorized at all. Legitimate non-response is one of its key outputs.