Agentic: governing AI that acts (open web & closed environments)

Type: Doctrinal principle

Conceptual version: 1.0

Stabilization date: 2026-02-10

This page is a summary entry point intended for decision-makers. It describes what an AI agent is today, why the risks change, where classic governance fails, and where interpretive governance begins.

Status:
Synthesis page (executive entry point). This page constitutes neither an operational method nor a promise of results. It points toward the applicable frameworks and toward the canonical sources (definitions, doctrine).


What an AI agent is today

An AI agent is not merely a system that “responds”. It is a system that selects sources, reconstructs a situation, arbitrates between outcomes (respond, refuse, stay silent), and can trigger actions (workflows, APIs, ticketing, CRM, ITSM).

In other words: the agent transforms information into a decision, and sometimes into an action. Once this transformation exists, linguistic performance is no longer the central problem. The central problem becomes auditability: why this output exists, on what basis, within what perimeter, and under which inference prohibitions.
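As a purely illustrative sketch (every name and field below is hypothetical, not a standard), an auditable decision record would carry exactly those four elements alongside the output itself:

    # Illustrative sketch in Python; all identifiers are invented.
    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        """Audit trace for one agent output: why it exists, on what basis,
        within what perimeter, under which inference prohibitions."""
        output: str                   # what the agent produced
        rule_id: str                  # declared rule that authorized the output
        sources: list[str]            # basis: identifiers of the retrieved passages
        perimeter: str                # scope the decision is allowed to cover
        prohibitions: list[str] = field(default_factory=list)  # inferences explicitly forbidden

    record = DecisionRecord(
        output="Refund approved under warranty clause 4.2",
        rule_id="WARRANTY-REFUND-01",
        sources=["kb://warranty/clause-4.2"],
        perimeter="consumer-warranty",
        prohibitions=["generalize-to-b2b-contracts"],
    )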

Why risks change

Visible hallucinations were the initial alert. Agentic systems introduce a subtler risk: plausible but illegitimate decisions. A response can be coherent and prudent, and yet:

  • overstep a perimeter (services, guarantees, compliance, sanctions, HR);
  • generalize a local case into a norm;
  • create an implicit obligation;
  • produce an opaque refusal (without enforceable rule);
  • orient a decision by framing (implicit decision).

These drifts are often more dangerous in closed environments: internal data gives an impression of truth, while inference can remain unbounded.

Where classic governance fails

Several approaches improve quality but do not suffice to make an agent legitimate:

  • Governed RAG: stabilizes the corpus and retrieval, but does not automatically govern the conclusion.
  • Internal policies: produce refusals and prudence, but often without traceability to a rule.
  • Occasional human validation: corrects after the fact, but does not bound inference ex ante.
  • Agent explanations: can simulate an audit (narrative justification) without enforceable jurisdiction.

The recurring blind spot is inference permission. Between a retrieved passage and a decision, there exists an interpretation space. It is this space that must be governed.
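A minimal sketch of what governing that space could mean (the scopes, conclusion types, and rule entries below are invented for illustration): the step from a retrieved passage to a conclusion proceeds only if an explicit permission covers it, so a coherent-sounding generalization is blocked by default.

    # Illustrative sketch in Python; scopes and conclusion types are invented.
    # An inference is permitted only if a declared rule links the source scope
    # to the conclusion type; anything undeclared is denied by default.
    ALLOWED_INFERENCES = {
        ("warranty-clause", "individual-case-ruling"),  # apply a clause to one case
        # ("warranty-clause", "general-policy") is deliberately absent: forbidden
    }

    def inference_permitted(source_scope: str, conclusion_type: str) -> bool:
        return (source_scope, conclusion_type) in ALLOWED_INFERENCES

    assert inference_permitted("warranty-clause", "individual-case-ruling")
    assert not inference_permitted("warranty-clause", "general-policy")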

Where interpretive governance begins

Canonical schema

Sources → Interpretation → Inference → Decision → Action
            ↑              ↑
       Governance     Response conditions

Interpretive governance introduces an explicit jurisdiction: what is authorized, what is forbidden, what requires silence, and what demands escalation. Each of these decisions must be attributable to a declared rule, not to a narrative heuristic.
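Purely as an illustration (the predicates and rule identifiers are invented), such a jurisdiction reduces to a closed set of rulings, each returned together with the declared rule that produced it:

    # Illustrative sketch in Python; rules and identifiers are invented.
    # Every ruling names the declared rule behind it, so no outcome rests
    # on a narrative heuristic; the default is an explicit deny.
    from enum import Enum

    class Ruling(Enum):
        AUTHORIZED = "authorized"
        FORBIDDEN = "forbidden"
        SILENCE = "silence"        # the agent must not answer
        ESCALATION = "escalation"  # a human must decide

    JURISDICTION = [
        # (predicate over the request, ruling, declared rule id)
        (lambda q: "sanction" in q, Ruling.ESCALATION, "HR-ESCALATE-02"),
        (lambda q: "warranty" in q, Ruling.AUTHORIZED, "WARRANTY-01"),
        (lambda q: "legal" in q,    Ruling.SILENCE,    "LEGAL-SILENCE-03"),
    ]

    def adjudicate(question: str) -> tuple[Ruling, str]:
        for predicate, ruling, rule_id in JURISDICTION:
            if predicate(question.lower()):
                return ruling, rule_id
        return Ruling.FORBIDDEN, "DEFAULT-DENY"

    print(adjudicate("Can we extend the warranty for this client?"))
    # -> (<Ruling.AUTHORIZED: 'authorized'>, 'WARRANTY-01')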

Applicable frameworks

Canonical definitions

Anchoring

This page does not constitute a method, a procedure, or a promise. It points toward the canonical frameworks and definitions needed to govern AI that acts.

Back to Doctrine | Frameworks | Definitions.