
Start here: choose your reading path

Choose the right path through interpretive governance, AI visibility, entity stability, evidence, RAG governance, and agentic control.


Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Content inventory
  2. Route map
  3. serp-ownership.json
Discovery and routing #01

Content inventory

/site-content-index.json

Machine-first inventory of the pages, articles, and surfaces published on the site.

Governs: Discoverability, crawl orientation, and the mapping of published surfaces.
Bounds: Incomplete readings that ignore structure, routes, or the preferred markdown surface.

Does not guarantee: A good discovery surface improves access; it is not sufficient on its own to govern reconstruction.
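As a purely illustrative sketch (this page does not publish the schema of /site-content-index.json, so every field name below is a hypothetical), a machine-first inventory of this kind might look like:

```json
{
  "generated": "2024-01-01",
  "surfaces": [
    {
      "route": "/definitions/interpretive-governance",
      "type": "definition",
      "lang": "en",
      "preferred_surface": "/definitions/interpretive-governance.md"
    },
    {
      "route": "/expertise/ai-visibility-audit",
      "type": "service",
      "lang": "en",
      "preferred_surface": "/expertise/ai-visibility-audit.md"
    }
  ]
}
```

The point of such a structure is exactly what the card states: it orients crawling and maps published surfaces, but listing a page does not govern how its content is reconstructed.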

Discovery and routing #02

Route map

/site-route-map.json

Map of FR ↔ EN routes and their published alignments.

Governs: Discoverability, crawl orientation, and the mapping of published surfaces.
Bounds: Incomplete readings that ignore structure, routes, or the preferred markdown surface.

Does not guarantee: A good discovery surface improves access; it is not sufficient on its own to govern reconstruction.
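A FR ↔ EN route map could plausibly take a shape like the following (again, the actual schema of /site-route-map.json is not shown here, so these keys are assumptions for illustration only):

```json
{
  "routes": [
    {
      "en": "/en/definitions/interpretive-governance",
      "fr": "/fr/definitions/gouvernance-interpretative",
      "canonical": "en"
    },
    {
      "en": "/en/expertise/ai-visibility-audit",
      "fr": "/fr/expertise/audit-visibilite-ia",
      "canonical": "en"
    }
  ]
}
```

Declaring the alignment explicitly keeps a crawler or model from guessing which French and English pages are translations of each other.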

Artifact #03

serp-ownership.json

/serp-ownership.json

Published machine-first governance surface.

Governs: Part of the corpus reading conditions.
Bounds: An inference zone that would otherwise remain implicit.

Does not guarantee: System obedience. Publishing the file does not, on its own, make systems follow it.

Complementary artifacts (1)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Policy and legitimacy #04

LLM intent map

/llm-intent-map.json

Surface that makes explicit the conditions of response, restraint, escalation, or non-response.
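The four outcomes the card names (response, restraint, escalation, non-response) suggest a mapping from query classes to response conditions. A hypothetical fragment of /llm-intent-map.json, with all field names assumed rather than taken from the real file, might read:

```json
{
  "intents": [
    {
      "query_class": "definition",
      "condition": "respond",
      "governing_surface": "/definitions/interpretive-governance"
    },
    {
      "query_class": "pricing-commitment",
      "condition": "escalate",
      "escalate_to": "/contact"
    },
    {
      "query_class": "speculative-comparison",
      "condition": "non-response"
    }
  ]
}
```

The value of making this explicit is that restraint and refusal become declared conditions rather than behaviors a system has to infer.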

Start here

This site is not meant to be read as a linear blog. It is a corpus: definitions stabilize terms, frameworks explain operating models, expertise pages translate those models into audits, observations document symptoms, and clarifications prevent plausible misreadings.

Use this page as the first routing layer. It does not replace the Definitions, Glossary, Frameworks, Expertise, or SERP ownership map. It tells you which door to open first depending on the problem you are actually trying to solve.

The practical rule is simple: start with the problem, not with the vocabulary. A visibility issue, a citation issue, a representation issue, a retrieval issue, an authority issue, and an agentic execution issue do not require the same path through the corpus.

If you are new to interpretive governance

Start with Interpretive governance, then read Interpretive risk, Answer legitimacy, Source hierarchy, and Proof of fidelity.

This path explains the central thesis of the site: AI-mediated environments do not merely repeat indexed content. They reconstruct meaning, assign authority, compress sources, resolve ambiguity, and sometimes produce answers that are fluent but not defensible.

After those definitions, move to Frameworks if you want operating models, or to AI search and interpretive audits if you want to understand how the doctrine becomes an audit problem.

What this path prevents

It prevents the first common mistake: reducing the entire corpus to SEO, visibility, hallucination, or prompt engineering. Those topics matter, but they are not the full category. Interpretive governance is about the conditions under which an answer, citation, recommendation, action, or refusal can be considered legitimate.

If your problem is AI visibility

Start with AI visibility audits, then compare LLM visibility, Citability, Recommendability, AI search monitoring, and AI brand representation.

Then move to the service pages that match the symptom: AI visibility audit, LLM visibility audit, AI answer audit, AI brand representation audit, or AI citation tracking audit.

This path is for teams that ask questions such as: Why are competitors appearing in AI answers? Why is our brand visible but described incorrectly? Why are we cited without being understood? Why do some systems recommend us while others ignore us?

Reading rule

Do not treat visibility as proof of representation. A brand can appear in an AI answer while still being categorized incorrectly, framed by a third party, associated with an old state, or cited without governing the response.

If your problem is entity stability

Start with Semantic architecture, Entity disambiguation, Entity collision, Semantic contamination, Framing stability, and Cross-system coherence.

Then read the Semantic architecture and entity stability glossary and the category Semantic architecture.

This path is useful when a person, brand, service, doctrine, product, or organization is being mixed with adjacent entities, older roles, nearby categories, directories, competitors, or weak external summaries.

Practical boundary

Entity stabilization does not mean forcing every model to say the same thing. It means reducing avoidable ambiguity, making canonical relationships easier to reconstruct, and preventing weak neighbors from becoming the dominant frame.

If your problem is evidence and auditability

Start with Evidence layer, then read Interpretive evidence, Reconstructable evidence, Interpretation trace, Canon-output gap, Proof of fidelity, Interpretive observability, and Interpretive auditability.

This path is for situations where a response must be challenged, defended, documented, compared, or corrected. It is less concerned with whether an answer looks plausible and more concerned with whether the path from source to output can be reconstructed.

What to verify

Look for the gap between the canonical source, the retrieved or cited source, the structuring source, and the final answer. A visible citation is not enough. The question is whether the cited material actually governs the response.

If your problem is RAG or retrieval

Start with RAG governance, Retrieval control, Documentary chain, Source admission, Retrieval provenance, and Chunk authority.

Then read RAG governance, retrieval, and inference control and RAG governance vs interpretive governance.

This path is for teams working with retrieval-augmented systems, internal knowledge bases, AI assistants, documentation corpora, or controlled source environments.

Reading rule

Retrieval is not legitimacy. A document can be retrieved, cited, and relevant while still being insufficient to authorize a final answer. RAG governance controls the documentary chain; interpretive governance controls whether the answer can be produced, qualified, refused, or escalated.

If your problem is agentic execution

Start with Agentic risk, Delegated action, Tool-mediated authority, Execution boundary, Agentic response conditions, and Cross-layer transactional coherence.

Then read Interpretive governance for AI agents, Agentic risk matrix, and Enforceable response conditions for AI agents.

This path is for situations where a model does more than answer: it may trigger tools, route requests, write records, update states, execute transactions, or pass instructions to another agent.

Practical boundary

Tool access is not execution authority. A system may be technically able to act without being authorized to act. Agentic governance therefore separates answer legitimacy from execution legitimacy.

If your problem is memory, correction, and persistence

Start with Memory governance, Interpretive remanence, Interpretive inertia, Version power, Stale-state handling, and Correction resorption.

This path is useful when an old representation survives after a correction, an outdated version continues to influence AI outputs, or an answer keeps repeating a state that is no longer current.

Reading rule

Persistence is not current authority. A statement that survives in memory, search results, third-party summaries, cached outputs, or old citations must still be checked against the current canon and response conditions.

If you need a service path

Use the Expertise hub if you already know the kind of intervention you need. Use AI visibility audits if the problem begins with search, citation, recommendation, visibility, or brand representation. Use AI search and interpretive audits if the problem involves answer quality, legitimacy, evidence, source hierarchy, or system behavior.

For a scoped request, use the Contact page and include the target entity, URLs, AI systems observed, examples of outputs, decision context, and the consequence of the misrepresentation.

If you need to understand what owns a query

Use the SERP ownership map. It explains which page should own a definition query, which page should own a service query, and which surfaces are supporting routes.

This matters because the corpus is large. Without explicit routing, a definition, a framework, a service page, a glossary family, and a blog article could all compete for the same query. The ownership map prevents that by assigning role, intent, and support relationships.
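The role, intent, and support relationships described above could be expressed per query in a structure like this (an illustrative sketch only; the real serp-ownership.json schema is not reproduced on this page):

```json
{
  "query": "interpretive governance definition",
  "intent": "definition",
  "owner": "/definitions/interpretive-governance",
  "supporting": [
    "/glossary/interpretive-governance",
    "/frameworks/interpretive-governance"
  ]
}
```

With one declared owner per query and the other surfaces marked as support, a definition page, a glossary entry, and a framework page stop competing for the same intent.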

How to move from one path to another

Most real problems cross more than one layer. An AI visibility issue can become an evidence issue once the team needs to prove why a response is wrong. An entity stability issue can become a memory issue when an outdated representation survives after the canonical source has been corrected. A RAG issue can become an authority issue when the retrieved fragment is relevant but not authorized to govern the final answer.

Use the first path to name the primary failure mode, then move laterally. If the problem begins with visibility, start with the market-facing audit path, then move to proof of fidelity and source hierarchy. If the problem begins with a wrong answer, start with evidence and answer legitimacy, then move to representation gap or semantic architecture. If the problem begins with an agent acting too freely, start with execution boundary, then move back to response conditions and source hierarchy.

This lateral movement is intentional. The corpus is not a taxonomy where every page belongs to only one box. It is an interpretive architecture where the same symptom can be examined through visibility, authority, proof, memory, retrieval, or execution depending on the consequence of the output.

What not to do

Do not use the glossary as the first page if the problem is operational. The glossary is useful when you need to compare terms, but it can overwhelm a new reader if the failure mode is not already named. Do not use a service page as the first page if the problem is conceptual. The service page translates a problem into an intervention; it does not replace the canonical definition. Do not use a blog article as the first page if the problem requires proof. Articles often capture movement and diagnosis; proof requires trace, evidence, comparison, and auditability.

Minimum route for a first audit

For a first audit, use this order: describe the symptom, identify the affected entity, collect examples of AI outputs, identify the systems observed, separate visibility from representation, compare outputs against the canonical source, then decide whether the problem is mainly market visibility, answer legitimacy, source hierarchy, semantic stability, memory, RAG, or agentic execution. Only after that should the audit label be chosen.
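The intake order above can be captured as a simple record before any audit label is chosen. This template is a suggestion, not a published artifact of the site; every field name and value is a hypothetical example:

```json
{
  "symptom": "brand described with an outdated category",
  "entity": "Example Corp",
  "output_examples": ["answer-2024-03-01.txt", "answer-2024-03-04.txt"],
  "systems_observed": ["assistant A", "search engine B"],
  "visibility_vs_representation": "visible but misframed",
  "canonical_source": "/about",
  "candidate_layers": ["answer legitimacy", "semantic stability"],
  "audit_label": null
}
```

Leaving `audit_label` empty until the comparison against the canonical source is done enforces the rule stated above: the label is chosen last, not first.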

A final reading rule

Do not start everywhere. Choose the path that matches the failure mode. If the issue is visibility, start with visibility. If the issue is evidence, start with evidence. If the issue is authority, start with authority. If the issue is execution, start with agentic control. The corpus becomes easier to read once the problem layer is named correctly.