
Doctrine

Agentic: governing AI that acts (open web & closed environments)

Executive synthesis page on agentic AI: what an AI agent is today, why risks change, where classic governance fails, and where interpretive governance begins.

Collection: Doctrine
Type: Doctrine
Layer: transversal
Version: 1.0
Level: normative
Stabilization: 2026-02-10
Published: 2026-02-10
Updated: 2026-03-11

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Q-Layer in Markdown
  2. Q-Layer in YAML
  3. Interpretation policy

Policy and legitimacy #01

Q-Layer in Markdown

/response-legitimacy.md

Canonical surface for response legitimacy, clarification, and legitimate non-response.

Governs
Response legitimacy and the constraints that modulate its form.
Bounds
Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Policy and legitimacy #02

Q-Layer in YAML

/response-legitimacy.yaml

Structured Q-Layer projection for systems that prefer YAML.

Governs
Response legitimacy and the constraints that modulate its form.
Bounds
Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Policy and legitimacy #03

Interpretation policy

/.well-known/interpretation-policy.json

Published policy that explains interpretation, scope, and restraint constraints.

Governs
Response legitimacy and the constraints that modulate its form.
Bounds
Plausible but inadmissible responses, or unjustified scope extensions.

Does not guarantee: This layer bounds legitimate responses; it is not proof of runtime activation.

Complementary artifacts (3)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Policy and legitimacy #04

AI usage policy

/ai-usage-policy.md

Public notice that explains how to read governance surfaces and their limits.

Policy and legitimacy #05

Output Constraints

/output-constraints.md

Surface that makes explicit the conditions of response, restraint, escalation, or non-response.

Entrypoint #06

Canonical AI entrypoint

/.well-known/ai-governance.json

Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
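The entrypoint above is a published JSON document; a minimal sketch of how a client might read its governance map and precedence chain follows. The field names (`surfaces`, `precedence`, `role`, `path`) are illustrative assumptions, not the published schema.

```python
import json

# Illustrative payload; the real /.well-known/ai-governance.json schema
# is defined by the publisher, and these field names are assumptions.
ENTRYPOINT = """
{
  "surfaces": [
    {"path": "/.well-known/interpretation-policy.json", "role": "policy"},
    {"path": "/response-legitimacy.md", "role": "q-layer"}
  ],
  "precedence": ["q-layer", "policy"]
}
"""

def read_first(entrypoint_json: str) -> list:
    """Return surface paths sorted by the declared precedence chain."""
    doc = json.loads(entrypoint_json)
    rank = {role: i for i, role in enumerate(doc["precedence"])}
    surfaces = sorted(doc["surfaces"], key=lambda s: rank[s["role"]])
    return [s["path"] for s in surfaces]
```

The point of the sketch is the ordering step: the entrypoint does not merely list surfaces, it declares which one a reader must consult first.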

Agentic: governing AI that acts (open web & closed environments)

This page is a synthetic entry point intended for decision-makers. It describes what an AI agent is today, why risks change, where classic governance fails, and where interpretive governance begins.

Status: synthesis page (executive entry). This page constitutes neither an operational method nor a promise of results. It orients toward the applicable frameworks and toward canonical sources (definitions, doctrine).


What an AI agent is today

An AI agent is not merely a system that “responds”. It is a system that selects sources, reconstructs a situation, arbitrates a decision (respond, refuse, stay silent), and can trigger actions (workflow, API, ticketing, CRM, ITSM). In other words: the agent transforms information into decision, then sometimes into action. Once this transformation exists, linguistic performance is no longer the central problem. The central problem becomes auditability: why this output exists, on what basis, within what perimeter, with what inference prohibitions.
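One way to make the auditability requirement concrete is to attach a structured record to every agent output: why it exists, on what basis, within what perimeter, under which inference prohibitions. A minimal sketch, assuming hypothetical field names rather than any published schema:

```python
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """Why this output exists: basis, perimeter, and prohibitions.

    All field names are illustrative assumptions for this sketch.
    """
    decision: str                                  # "respond" | "refuse" | "silent"
    rule_id: str                                   # declared rule that authorizes it
    sources: list = field(default_factory=list)    # passages the output relied on
    perimeter: str = ""                            # scope the agent was confined to
    prohibited_inferences: list = field(default_factory=list)

# Example: a refusal that is attributable to a declared rule, not a heuristic.
record = AuditRecord(
    decision="refuse",
    rule_id="scope/hr-01",
    perimeter="services",
    prohibited_inferences=["generalize a local case into a norm"],
)
```

The design choice is that the record travels with the output: an auditor asks "why does this output exist?" and reads the answer from the record, not from the agent's narrative.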

Why risks change

Visible hallucinations were the initial alert. Agentic systems introduce a subtler risk: plausible but illegitimate decisions. A response can be coherent and prudent, and yet:

  • overstep a perimeter (services, guarantees, compliance, sanctions, HR);
  • generalize a local case into a norm;
  • create an implicit obligation;
  • produce an opaque refusal (without enforceable rule);
  • orient a decision by framing (implicit decision).

These drifts are often more dangerous in closed environments: internal data gives an impression of truth, while inference can remain unbounded.
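The first drift in the list, overstepping a perimeter, can be illustrated as a simple gate that checks a requested topic against the declared scope before any response is produced. The topic labels and outcomes below are hypothetical, chosen only to mirror the perimeters named above.

```python
# Declared perimeter: topics the agent may decide on. Anything outside it
# must be escalated rather than answered, however plausible the answer.
PERIMETER = {"services", "billing"}
OUT_OF_SCOPE = {"guarantees", "compliance", "sanctions", "hr"}  # illustrative

def gate(topic: str) -> str:
    """Route a topic before generation, so restraint is a rule, not a mood."""
    if topic in PERIMETER:
        return "respond"
    if topic in OUT_OF_SCOPE:
        return "escalate"   # explicit, attributable non-response
    return "clarify"        # unknown topic: ask before inferring
```

Note that the gate runs before the model generates anything: a coherent answer on an out-of-scope topic never gets the chance to look legitimate.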

Where classic governance fails

Several approaches improve quality but do not suffice to make an agent legitimate:

  • Governed RAG: stabilizes corpus and retrieval, but does not automatically govern the conclusion.
  • Internal policies: produce refusals and prudence, but often without rule traceability.
  • Occasional human validation: corrects after the fact, but does not bound inference ex ante.
  • Agent explanations: can simulate an audit (narrative justification) without enforceable jurisdiction.

The recurring blind spot is inference permission. Between a retrieved passage and a decision, there exists an interpretation space. It is this space that must be governed.

Where interpretive governance begins

Canonical schema

Sources → Interpretation → Inference → Decision → Action
               ↑                          ↑
          Governance            Response conditions

Interpretive governance introduces an explicit jurisdiction: what is authorized, what is forbidden, what requires silence, and what demands escalation. Each of these decisions must be attributable to a declared rule, not to a narrative heuristic.
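The jurisdiction described here can be sketched as an explicit table from situation to decision, where every outcome carries the identifier of the declared rule that produced it. The situation labels and rule identifiers are invented for illustration.

```python
# Each entry: situation → (decision, declared rule id). The point is that
# every decision is attributable to a rule, not to a narrative heuristic.
JURISDICTION = {
    "in_scope_with_source":  ("respond",  "rule/authorize-01"),
    "out_of_scope":          ("refuse",   "rule/forbid-02"),
    "no_admissible_source":  ("silence",  "rule/silence-03"),
    "ambiguous_high_stakes": ("escalate", "rule/escalate-04"),
}

def decide(situation: str) -> tuple:
    """Return (decision, rule_id); unknown situations escalate, never guess."""
    return JURISDICTION.get(situation, ("escalate", "rule/default-05"))
```

The default branch encodes the doctrine's restraint: when no declared rule covers a situation, the legitimate output is escalation, not an unbounded inference.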

Applicable frameworks

Canonical definitions

Anchoring

This page does not constitute a method, a procedure, or a promise. It orients toward the canonical frameworks and definitions that allow governing AI that acts.

Back to Doctrine | Frameworks | Definitions.