Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. The order below makes the minimal evidence chain explicit.
- 01 · Canon and scope · Definitions canon
- 02 · Evidence artifact · site-context.md
- 03 · Evidence artifact · ai-manifest.json
- 04 · Evidence artifact · ai-governance.json
Definitions canon
/canon.md
Enforceable baseline for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
site-context.md
/site-context.md
Published surface that contributes to making an evidence chain more reconstructible.
- Makes provable: its place in the observation, trace, audit, or fidelity chain.
- Does not prove: total proof, an obedience guarantee, or implicit certification.
- Use when: a page needs to make its evidence regime explicit.
ai-manifest.json
/ai-manifest.json
Published surface that contributes to making an evidence chain more reconstructible.
- Makes provable: its place in the observation, trace, audit, or fidelity chain.
- Does not prove: total proof, an obedience guarantee, or implicit certification.
- Use when: a page needs to make its evidence regime explicit.
ai-governance.json
/.well-known/ai-governance.json
Published surface that contributes to making an evidence chain more reconstructible.
- Makes provable: its place in the observation, trace, audit, or fidelity chain.
- Does not prove: total proof, an obedience guarantee, or implicit certification.
- Use when: a page needs to make its evidence regime explicit.
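The four-step chain above can be modeled as an ordered registry, so that an audit script can report which probative surfaces are actually published. This is a minimal sketch: only the paths and their ordering come from this page; the registry structure and the `chain_status` helper are illustrative assumptions, not a prescribed implementation.

```python
# Minimal evidence chain from this page, in the declared order.
# The registry shape and the function below are illustrative assumptions;
# only the paths and their ordering come from the page itself.
EVIDENCE_CHAIN = [
    ("Definitions canon", "/canon.md"),
    ("Evidence artifact", "/site-context.md"),
    ("Evidence artifact", "/ai-manifest.json"),
    ("Evidence artifact", "/.well-known/ai-governance.json"),
]


def chain_status(published_paths):
    """Report, in chain order, which probative surfaces are published.

    A missing early link weakens everything downstream, so the first
    gap in the chain is also returned explicitly (or None if complete).
    """
    status = [(path, path in published_paths) for _, path in EVIDENCE_CHAIN]
    first_gap = next((path for path, ok in status if not ok), None)
    return status, first_gap


# Example: a site that publishes the canon and the manifest but not the rest.
status, gap = chain_status({"/canon.md", "/ai-manifest.json"})
```

The point of returning the first gap, rather than a bare boolean, is that the chain is ordered: a complete tail cannot compensate for a missing head.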
Complementary probative surfaces (2)
These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.
entity-graph.jsonld
/entity-graph.jsonld
Published surface that contributes to making an evidence chain more reconstructible.
llms.txt
/llms.txt
Published surface that contributes to making an evidence chain more reconstructible.
Reading conditions
This page is the canonical definition of reading conditions within the canon, corpus, and machine readability layer of interpretive governance.
Reading conditions are the explicit rules, priorities, limits, exclusions, and source-ordering constraints that govern how a corpus should be read before it is summarized, cited, recommended, or acted upon.
Why it matters
They prevent systems from treating every sentence as equally general, equally current, or equally binding. Reading conditions define whether a page is canonical, contextual, evidentiary, speculative, operational, or excluded.
In AI search, retrieval-augmented generation, autonomous browsing, and agentic reading, a corpus is not interpreted only through its visible prose. It is interpreted through routes, files, metadata, exclusions, entity relations, sitemap placement, and internal links. The term reading conditions names one part of that documentary control layer.
The strategic function is therefore not cosmetic. The concept helps prevent systems from flattening doctrine, service language, proof artifacts, and observations into the same authority level. It also gives search engines a clearer canonical page to associate with the term rather than forcing them to choose between a hub, a category, a blog article, and a machine artifact.
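The authority levels named above (canonical, contextual, evidentiary, speculative, operational, excluded) can be made concrete as a small resolution policy. A hedged sketch: the level names come from this page, but their relative ordering below and the `resolve` helper are illustrative assumptions.

```python
# Authority levels named on this page. Their relative ordering below
# (most to least binding) is an illustrative assumption, as is resolve().
AUTHORITY = ["canonical", "evidentiary", "operational", "contextual", "speculative"]
EXCLUDED = "excluded"


def resolve(pages):
    """Return the most authoritative non-excluded page for a term.

    `pages` maps page name -> authority level. Excluded pages never
    answer, no matter how relevant they look to a retriever.
    """
    eligible = {name: lvl for name, lvl in pages.items() if lvl != EXCLUDED}
    if not eligible:
        return None
    return min(eligible, key=lambda name: AUTHORITY.index(eligible[name]))


# Example: the definition page wins even though other pages exist.
best = resolve({
    "blog-post": "contextual",
    "definition-page": "canonical",
    "draft-notes": "excluded",
})
```

Returning `None` when only excluded pages match is deliberate: under reading conditions, "no answer" is a legitimate outcome, not a failure.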
What it is not
They are not stylistic guidance, not a disclaimer pasted at the bottom of a page, and not a legal shield detached from the corpus structure.
This distinction matters because machine-readable governance can create false confidence. A structured file, a definition page, or a graph relation should never be treated as proof that external systems comply with the intended reading. It only makes the intended reading more explicit, testable, and auditable.
Common failure modes
- a support article is read as a method;
- an exclusion is ignored because it is outside the retrieved chunk;
- a model answers from proximity instead of hierarchy;
- a category page silently overrides a definition page.
These failures are typical when the human corpus and the machine-first corpus evolve separately. They increase interpretive risk because models can still produce coherent answers while violating the source hierarchy or ignoring exclusions.
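The failure modes above share one pattern: proximity wins over hierarchy. A minimal sketch of the opposite policy, assuming retrieved chunks carry an authority rank and a similarity score; the dict shape and the tuple-sort policy are illustrative assumptions, not this page's prescribed method.

```python
def rank_chunks(chunks):
    """Order retrieved chunks by authority first, similarity second.

    Each chunk is a dict with 'text', 'authority' (int rank, 0 = most
    binding), 'similarity' (float), and 'excluded' (bool). Excluded
    chunks are dropped before ranking, so an exclusion cannot be
    overridden by a high similarity score.
    """
    eligible = [c for c in chunks if not c["excluded"]]
    # Sort key: lower authority rank wins; similarity only breaks ties.
    return sorted(eligible, key=lambda c: (c["authority"], -c["similarity"]))


# Example: the support article is the closest match, but the definition
# page outranks it; the excluded note never competes at all.
ranked = rank_chunks([
    {"text": "support article", "authority": 2, "similarity": 0.95, "excluded": False},
    {"text": "definition page", "authority": 0, "similarity": 0.80, "excluded": False},
    {"text": "deprecated note", "authority": 0, "similarity": 0.99, "excluded": True},
])
```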
Governance implication
Every strategic corpus should expose reading conditions in human pages and machine artifacts. When reading conditions are absent, systems default to plausibility, proximity, and retrieval convenience.
For SERP ownership, the same principle applies: the canonical page should receive descriptive links, appear in the definitions registry, be discoverable from the glossary, and be reinforced by machine-first artifacts without competing against them.
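One way to keep the human corpus and the machine-first corpus from evolving separately is a drift check run whenever either side changes. A minimal sketch, assuming each side's declared canonical paths are available as a set; the function name and the two-sided result shape are hypothetical.

```python
def reading_conditions_drift(human_registry, machine_manifest):
    """Return canonical paths declared on only one side of the corpus.

    Both arguments are sets of paths. A non-empty result means the
    human pages and the machine artifacts have diverged, which is
    exactly when models fall back to plausibility and proximity.
    """
    return {
        "human_only": human_registry - machine_manifest,
        "machine_only": machine_manifest - human_registry,
    }


# Example: each side declares one path the other does not.
drift = reading_conditions_drift(
    {"/canon.md", "/glossary/reading-conditions"},
    {"/canon.md", "/ai-manifest.json"},
)
```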
Related canonical definitions
- Machine-first canon
- Source hierarchy
- Authority ordering
- Global exclusions
- Non-inference regime
- Response conditions
- Answer legitimacy