Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions under which the corpus is read. The order below gives the recommended reading sequence.
Canonical AI entrypoint
/.well-known/ai-governance.json
Neutral entrypoint that declares the governance map, precedence chain, and the surfaces to read first.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
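As a concrete illustration, such an entrypoint might look like the following. This is a hedged sketch only: the field names (`precedence`, `read_first`, `notes`) are illustrative assumptions, not a published schema.

```json
{
  "version": "1.0",
  "precedence": [
    "/.well-known/ai-governance.json",
    "/ai-manifest.json",
    "/llms.txt"
  ],
  "read_first": ["/ai-manifest.json"],
  "notes": "Declares a reading order only; it does not enforce execution or obedience."
}
```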
Public AI manifest
/ai-manifest.json
Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.
- Governs
- Access order across surfaces and initial precedence.
- Bounds
- Free readings that bypass the canon or the published order.
Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.
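A manifest extending that entrypoint could be sketched as follows; again, the structure and keys (`surfaces`, `registries`, `modules`, `extends`) are illustrative assumptions rather than a defined format.

```json
{
  "extends": "/.well-known/ai-governance.json",
  "surfaces": [
    { "path": "/llms.txt", "role": "discovery" },
    { "path": "/docs/", "role": "corpus" }
  ],
  "registries": [],
  "modules": []
}
```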
LLMs.txt
/llms.txt
Short discovery surface that points systems toward the most useful machine-first entry surfaces.
- Governs
- Discoverability, crawl orientation, and the mapping of published surfaces.
- Bounds
- Incomplete readings that ignore structure, routes, or the preferred markdown surface.
Does not guarantee: A good discovery surface improves access; it is not sufficient on its own to govern reconstruction.
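A minimal discovery surface in the common llms.txt convention might read as follows; the site name, summary, and paths are placeholders, not content from this corpus.

```markdown
# Example Site

> One-line summary of what the site offers and who it serves.

## Docs

- [Getting started](https://example.com/docs/start.md): preferred machine-first entry point
- [Reference](https://example.com/docs/reference.md): structured surface inventory
```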
A crawler and an agent do not read the web in the same way.
The crawler seeks resources, follows links, extracts text, discovers signals, and feeds an index. It can ignore part of the interface if the documentary structure remains accessible.
The agent must transform an intention into a path. It does not only need to know that a page exists. It must determine where to go, which element triggers which action, which state is current, which field must be filled, and which result is expected.
Three verbs, three architectures
The crawler discovers. The model synthesizes. The agent executes.
These three verbs do not require the same architecture.
To discover, the site needs internal linking, sitemaps, canonicals, stable URLs, titles, and accessible content.
To synthesize, it needs clear entities, perimeters, definitions, source hierarchies, evidence, exclusions, and formulations that limit drift.
To execute, it needs named actions, visible states, native buttons, associated labels, understandable errors, explicit confirmations, and a stable layout.
The agentic web does not cancel the first two layers. It extends them.
HTML becomes a grammar again
In many modern sites, HTML has been treated as a mere rendering medium. Abstract JavaScript components, generic wrappers, and client-attached behavior have sometimes replaced native semantics.
For an agent, that abstraction can become costly. An action carried by a poorly named element or opaque component forces the system to reconstruct intention from secondary signals: position, color, icon, proximity, or neighboring text.
The new grammar of the web requires the code to carry the intention. A button should be a button. A link should be a link. A field should have a label. An error should be associated with the faulty value. A modal should declare its title and manage focus.
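That grammar can be made concrete with standard HTML semantics; a minimal sketch, where the ids and text are illustrative.

```html
<!-- A native button, not a styled div: role, name, and activation come for free -->
<button type="submit">Place order</button>

<!-- A field with an associated label, and an error tied to the faulty value -->
<label for="email">Email address</label>
<input id="email" type="email" aria-invalid="true" aria-describedby="email-error">
<p id="email-error">Enter a valid email address.</p>

<!-- A dialog that declares its title; focus management still belongs in script -->
<dialog aria-labelledby="confirm-title">
  <h2 id="confirm-title">Confirm your request</h2>
</dialog>
```

Each of these elements surfaces its role, name, and state in the Accessibility Tree without any inference from position, color, or neighboring text.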
The role of the Accessibility Tree
The Accessibility Tree becomes central because it exposes roles, names, states, and relationships. It is less decorative than visual rendering and more intentional than an overloaded DOM.
This does not mean accessibility should be instrumentalized for machines. The opposite is true. An interface that is more accessible to humans is often more explicit for agents precisely because it reduces role and state ambiguity.
The new audit unit
The audit unit is no longer only the page. It is the path.
An agent does not only ask: does this page contain the information? It asks: can I reach the objective? That changes the analysis. We must test offer discovery, service comparison, form completion, request confirmation, error correction, and exit from blocking states.
The page becomes a step in an action chain.
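The shift from page to path can be expressed as a small audit harness; a hedged sketch, where the step names and the check functions are illustrative assumptions, not an actual audit tool.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    """One step in an agent's action chain."""
    name: str
    check: Callable[[], bool]  # True when the step is reachable and completable

def audit_path(steps: List[Step]) -> List[str]:
    """Walk the path in order; report the first blocking step, if any."""
    failures = []
    for step in steps:
        if not step.check():
            failures.append(step.name)
            break  # a blocked step blocks everything after it
    return failures

# Illustrative path: discover an offer, fill a form, confirm, correct an error
path = [
    Step("discover offer", lambda: True),
    Step("complete form", lambda: True),
    Step("confirm request", lambda: False),  # simulated blocking state
    Step("correct error", lambda: True),
]
print(audit_path(path))
```

The point of the sketch is the unit of evaluation: the result of the audit is not a per-page score but the first step at which the chain breaks.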
Strategic consequence
The move from crawler to agent does not make SEO obsolete. It makes it incomplete.
SEO worked on the ability of a system to find and understand. The agentic web works on the ability of a system to understand and act. Between the two, the central discipline becomes interface interpretability.
That is why agentic navigability should become a distinct layer in technical audits of modern sites.