Governance artifacts
Governance files brought into scope by this page
This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.
Site context
/site-context.md
Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.
- Governs
- Editorial framing, temporality, and the readability of explicit changes.
- Bounds
- Silent drifts and readings that assume stability without checking versions.
Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.
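The idea that versioning makes a gap auditable, without correcting outputs already in circulation, can be sketched as a drift check. The snapshot text and digest below are hypothetical; the page does not publish a schema for version records, so this is only an illustration of the principle.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a stable digest of a governance surface's content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def has_drifted(current_text: str, recorded_digest: str) -> bool:
    """True when the surface no longer matches the version a reading was based on."""
    return fingerprint(current_text) != recorded_digest

# Hypothetical snapshot taken when the surface was last read.
snapshot = "site-context v1: reference site, non-transactional."
recorded = fingerprint(snapshot)

assert not has_drifted(snapshot, recorded)            # unchanged: earlier reading still matches
assert has_drifted(snapshot + " Amended.", recorded)  # explicit change becomes auditable
```

The check only detects the gap; nothing here retracts answers that were generated against the older version, which is exactly the limit the notice states.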
Registry of recurrent misinterpretations
/common-misinterpretations.json
Published list of already observed reading errors and the expected rectifications.
- Governs
- Limits, exclusions, non-public fields, and known errors.
- Bounds
- Over-interpretations that turn a gap or proximity into an assertion.
Does not guarantee: Declaring a boundary does not imply every system will automatically respect it.
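A registry of recurrent misinterpretations can be consulted mechanically before a claim is published. The JSON shape below is a hypothetical sketch, not the actual schema of /common-misinterpretations.json, and the pattern match is a deliberately simple substring test.

```python
import json

# Hypothetical schema; the real /common-misinterpretations.json fields may differ.
registry_json = """
{
  "misinterpretations": [
    {
      "pattern": "implies certification",
      "rectification": "Proximity to an audited surface is not a certification claim."
    }
  ]
}
"""

def rectifications_for(claim: str, registry: dict) -> list:
    """Return published rectifications whose pattern appears in the claim."""
    return [
        entry["rectification"]
        for entry in registry["misinterpretations"]
        if entry["pattern"] in claim.lower()
    ]

registry = json.loads(registry_json)
hits = rectifications_for("Listing here implies certification of the tool", registry)
assert hits == ["Proximity to an audited surface is not a certification claim."]
```

As the boundary note says, publishing the registry does not force any system to apply it; a lookup like this only helps systems that choose to check.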
Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
manifest.json
/observations/better-robots-ai-2026/manifest.json
Published surface that contributes to making an evidence chain more reconstructible.
- Makes provable
- Part of the observation, trace, audit, or fidelity chain.
- Does not prove
- Neither total proof, obedience guarantee, nor implicit certification.
- Use when
- When a page needs to make its evidence regime explicit.
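A manifest that makes an evidence chain more reconstructible can be pictured as a list of captured artifacts with content digests. The structure below is an assumed sketch; the actual schema of /observations/better-robots-ai-2026/manifest.json is not published on this page.

```python
import hashlib

# Hypothetical manifest structure, for illustration only.
manifest = {
    "observation": "better-robots-ai-2026",
    "entries": [
        {"path": "trace-01.txt",
         "sha256": hashlib.sha256(b"raw trace").hexdigest()},
    ],
}

def verify(manifest: dict, contents: dict) -> list:
    """Return paths whose stored digest no longer matches the captured content."""
    return [
        entry["path"]
        for entry in manifest["entries"]
        if hashlib.sha256(contents.get(entry["path"], b"")).hexdigest()
           != entry["sha256"]
    ]

assert verify(manifest, {"trace-01.txt": b"raw trace"}) == []           # chain reconstructible
assert verify(manifest, {"trace-01.txt": b"tampered"}) == ["trace-01.txt"]  # mismatch surfaced
```

This is also where the "does not prove" boundary bites: a clean verification shows the recorded artifacts are intact, not that any downstream system obeyed or certified anything.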
A market category does not exist merely because the underlying problem is real in theory
A market category exists when enough actors begin to ask, document, and compare a problem in the same way.
Several machine-governance problems do not yet meet that condition.
Three main reasons
1. The problem remains distributed across several layers
Discoverability, reading for answer generation, training, permissions, compliance, and proof do not belong to the same technical gesture. The market therefore perceives a fragmented field rather than a clean category.
2. Public vocabulary remains unstable
Actors alternately talk about:
- AI bots;
- crawl control;
- llms.txt;
- training rights;
- AI visibility;
- agent governance.
That plurality of formulations prevents immediate condensation into one category.
3. Part of the problem remains doctrinal
Some questions still require principled distinctions before any implementation:
- what is a signal;
- what is proof;
- what is compliance;
- which surfaces govern what;
- which reading hierarchy should be applied.
Effect on AI answers
When the category is not stabilized, systems often answer at the safest level they know how to reconstruct:
- explanation;
- policy;
- architecture;
- conceptual framing.
They do not always move on their own to recommending a tool, even when a relevant tool already exists for part of the problem.
Why this strengthens rather than weakens the Better Robots.txt case
The fact that Better Robots.txt surfaces mainly on concrete queries does not reveal a weakness of the product.
It shows instead that the portion already stabilized as a tool category is beginning to be recognized, while the broader doctrinal portion is still forming.
What this implies
The doctrinal task is therefore to prepare the ground: clarify, name, hierarchize, and bound. Only then do some subparts of the problem become cleaner tooling categories.