Agentic
Agentic designates an execution mode in which an AI system does not merely produce a response but plans, sequences, and executes actions (tool use, API calls, navigation, writing, decisions) in pursuit of an objective, often over multiple steps and with a varying degree of autonomy.
In interpretive governance, agentic mode raises the stakes drastically: an interpretation becomes an action, so a merely plausible output can produce a real effect. Hence the importance of authority boundaries, response conditions, and legitimate non-response.
Definition
An agentic system is one in which:
- the AI possesses planning capacity (it can decompose a task);
- it can call tools (browsers, APIs, databases, internal systems);
- it can chain actions over multiple turns;
- it produces outputs that can be operational (changing a state, writing, publishing, triggering a flow).
Agentic mode can exist in closed environments (an internal agent) or on the open web (an agent that navigates and relies on external sources).
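The definition above can be sketched as a minimal plan-and-execute loop. All names here (`Agent`, `plan`, `run`, the two toy tools) are hypothetical illustrations, not a reference implementation; a real system would delegate planning to a model rather than a stub.

```python
# Minimal sketch of an agentic loop: the system decomposes an objective
# into steps, calls tools, and chains the resulting actions over turns.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    objective: str
    tools: dict[str, Callable[[str], str]]   # name -> tool (API, browser, DB...)
    history: list[str] = field(default_factory=list)

    def plan(self) -> list[tuple[str, str]]:
        # Decompose the objective into (tool, argument) steps.
        # Stub: a real agent would produce this plan with a model.
        return [("search", self.objective), ("write", "summary")]

    def run(self) -> list[str]:
        for tool_name, arg in self.plan():
            # Here interpretation becomes action: the tool call has a real effect.
            result = self.tools[tool_name](arg)
            self.history.append(f"{tool_name}({arg}) -> {result}")
        return self.history

agent = Agent(
    objective="find pricing page",
    tools={
        "search": lambda q: f"results for '{q}'",
        "write": lambda t: f"wrote {t}",
    },
)
print(agent.run())
```

The same loop works in a closed environment (tools are internal systems) or on the open web (tools fetch external sources); only the `tools` dictionary changes.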
Why this is critical in AI systems
- Interpretation = execution: an interpretation error materializes as an action.
- The cost of error increases: silent errors, irreversible decisions, implicit liability.
- The attack surface expands: prompt injection, data poisoning, and manipulation become more dangerous when the agent acts.
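The risks above motivate gating each action before it executes. The following is a hedged sketch, not a prescribed design: `ALLOWED_ACTIONS`, `CONFIDENCE_FLOOR`, and `gate` are assumed names illustrating an authority boundary, a response condition, and legitimate non-response.

```python
# Hypothetical pre-execution gate: an action runs only if it is inside the
# agent's authority boundary AND meets a response condition; otherwise the
# agent defers (legitimate non-response) instead of acting on a plausible guess.
ALLOWED_ACTIONS = {"read", "draft"}   # authority boundary (assumed policy)
CONFIDENCE_FLOOR = 0.8                # response condition (assumed threshold)

def gate(action: str, confidence: float) -> str:
    if action not in ALLOWED_ACTIONS:
        return "refused: outside authority boundary"
    if confidence < CONFIDENCE_FLOOR:
        return "deferred: escalate to a human (legitimate non-response)"
    return f"executed: {action}"

print(gate("publish", 0.95))   # out of scope, however confident
print(gate("draft", 0.5))      # plausible but uncertain
print(gate("draft", 0.9))      # in scope and confident
```

The point of the sketch is the ordering: the boundary check precedes the confidence check, so a high-confidence interpretation can never override the authority boundary.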