Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files: it is anchored to surfaces that make observation, traceability, fidelity, and audit easier to reconstruct. Their order below makes the minimal evidence chain explicit; a reachability sketch follows the list.
- 01. Canon and scope: Definitions canon
- 02. Weak observation: Q-Ledger
- 03. Derived measurement: Q-Metrics
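Before the three surfaces are described in turn, here is a minimal sketch of walking the chain in its declared order. The base URL is a placeholder assumption; only the three paths come from this page. Reachability is all it checks, which says nothing about fidelity or proof.

```python
# Minimal sketch: probe each surface of the evidence chain in order.
# BASE is a placeholder; substitute the audited origin.
from urllib.request import urlopen
from urllib.error import URLError

BASE = "https://example.org"  # assumption: not specified on this page

CHAIN = [
    ("01 canon", "/canon.md"),                          # definitions canon
    ("02 weak observation", "/.well-known/q-ledger.json"),
    ("03 derived measurement", "/.well-known/q-metrics.json"),
]

def walk_chain() -> None:
    """Report which links of the minimal evidence chain are reachable."""
    for label, path in CHAIN:
        try:
            with urlopen(BASE + path, timeout=10) as resp:
                print(f"{label}: reachable ({resp.status}) at {path}")
        except URLError as exc:
            # A missing earlier link weakens every later one: metrics
            # without a ledger, or a ledger without a canon, lose anchor.
            print(f"{label}: not reconstructible ({exc}) at {path}")

walk_chain()
```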
Definitions canon
/canon.md
A reference baseline, opposable to any output, for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated (a minimal matching sketch follows this list).
- Does not prove: that a system already consults it, or that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
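A minimal sketch of using the canon as that reference corpus, assuming canon.md is plain text. The matching rule (case-folded substring) and the inline sample strings are illustrative assumptions, not a fidelity metric.

```python
# Minimal sketch: flag answer sentences with no support in the canon text.
def unsupported_sentences(answer: str, canon: str) -> list[str]:
    """Sentences of `answer` that cannot be located in the canon."""
    corpus = canon.casefold()
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    # Absence from the canon makes a sentence *unverified*, not false:
    # the canon only bounds what fidelity can be evaluated against.
    return [s for s in sentences if s.casefold() not in corpus]

# Illustrative strings; in practice the canon would be read from /canon.md.
canon = "The service audits AI answers against a declared scope."
answer = "The service audits AI answers. It guarantees rankings."
print(unsupported_sentences(answer, canon))  # -> ["It guarantees rankings"]
```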
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence (see the reading sketch after this list).
- Does not prove: actor identity, system obedience, or strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
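A hypothetical reading sketch: the actual q-ledger.json schema is not specified on this page, so the field names below (sessions, observed_at, context, behavior) and the sample entry are assumptions for illustration only.

```python
# Hypothetical sketch of reading a q-ledger entry as weak trace evidence.
import json
from datetime import datetime, timezone

raw = """
{"sessions": [
  {"observed_at": "2024-05-01T10:00:00+00:00",
   "context": "answer-engine query",
   "behavior": "canon.md consulted before response"}
]}
"""

ledger = json.loads(raw)
for entry in ledger["sessions"]:
    ts = datetime.fromisoformat(entry["observed_at"])
    # Each entry stays weak evidence: a dated, contextualized observation,
    # never actor identity or proof that the system obeyed the canon.
    print(f"{ts.astimezone(timezone.utc):%Y-%m-%d} | "
          f"{entry['context']} | {entry['behavior']}")
```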
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: the truth of a representation, the fidelity of an output, or real steering on its own.
- Use when: comparing windows, prioritizing an audit, or documenting a before/after, as in the delta sketch below.
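A minimal delta sketch, assuming purely for illustration that q-metrics.json flattens to an {indicator: value} mapping per snapshot; the indicator names and values below are invented examples, not the published format.

```python
# Minimal sketch: per-indicator change between two q-metrics snapshots.
def snapshot_delta(before: dict[str, float],
                   after: dict[str, float]) -> dict[str, float]:
    """Descriptive comparison of shared indicators; never a causal claim."""
    shared = before.keys() & after.keys()
    return {k: after[k] - before[k] for k in sorted(shared)}

# Invented example values for two observation windows.
before = {"citation_share": 0.12, "answer_inclusion": 0.40}
after = {"citation_share": 0.18, "answer_inclusion": 0.35}

# A delta documents a before/after and helps prioritize an audit; on its
# own it proves neither fidelity nor real steering.
print(snapshot_delta(before, after))
```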
Services, audits, and market bridge vocabulary
This lexical family stabilizes the service-facing labels that buyers, dashboards, and AI-search tools use before the stricter underlying problem has been named.
These terms are not a separate doctrine; they are routing vocabulary. Their role is to capture demand and move it toward canonical definitions, evidence, source hierarchy, and correction discipline.
| Term | Function | Service-facing route |
|---|---|---|
| LLM visibility audit | a structured review of how an entity, brand, service, concept or canonical page appears, disappears, is framed, cited, compared or recommended in LLM-mediated answer environments. | LLM visibility audit |
| AI visibility audit | a market-facing audit that separates AI presence, citation, framing, recommendation, answer inclusion and representation stability across AI-mediated search and answer systems. | AI visibility audit |
| AI answer audit | an applied review of generated answers against canon, source hierarchy, proof, response conditions, inference boundaries and answer legitimacy. | AI answer audit |
| AI brand representation audit | a structured audit of how AI systems reconstruct a brand’s identity, role, services, scope, limits, comparisons, exclusions and authority across generated answers. | AI brand representation audit |
| AI citation tracking audit | an audit that reviews which sources are cited by AI answer systems, whether those sources structure the answer, and whether citation behavior supports or disguises answer legitimacy. | AI citation tracking audit |
| Citability audit | an audit of whether a source is structured, explicit, authoritative and machine-readable enough to be cited responsibly by AI-mediated answer systems. | Citability audit |
| Recommendability audit | an audit of whether an entity, service, tool or source can be responsibly recommended by AI systems under declared scope, evidence, comparison and authority constraints. | Recommendability audit |
| Generative engine optimization audit | a market-facing audit of generative engine visibility, citation, answer inclusion, source readiness and representation stability, bounded by interpretive governance rather than ranking promises. | Generative engine optimization audit |
| AI search optimization audit | an audit of how a site, source or entity should be structured for AI-mediated search without reducing the problem to rankings, keyword targeting or answer appearance. | AI search optimization audit |
| Brand visibility in ChatGPT audit | a scoped audit of how ChatGPT-style systems mention, omit, frame, compare, cite or recommend a brand, while routing findings toward canon, evidence and representation governance. | Brand visibility in ChatGPT audit |
Routing principle
Use these labels when the user or market starts from visibility, citation, recommendation, ChatGPT presence, brand representation or GEO. Then route the work toward AI visibility audits, proof of fidelity, answer legitimacy and representation gap.
Non-promise
These labels do not imply availability, pricing, ranking, citation, recommendation or correction success. They name diagnostic entry points.
How to read this lexical family
This family is the bridge between advisory work and the conceptual corpus. It turns the doctrine into usable service language without reducing the doctrine to a menu of deliverables. The reader should be able to understand both the commercial entry point and the deeper reason the service exists.
The bridge matters because many clients will not initially ask for interpretive governance. They will ask why ChatGPT does not mention them, why AI systems cite a competitor, why their brand is misrepresented or why a tool is visible but their doctrine is not. Those are market symptoms of deeper interpretive problems.
Typical misreadings
The first mistake is to make the service label the primary concept. AI visibility audit, GEO audit or AI citation tracking audit are useful labels, but they are not sufficient explanations of the failure mode. They must be backed by source hierarchy, proof of fidelity, representation gap, semantic architecture and answer legitimacy.
The second mistake is to overpromise. The bridge vocabulary must never imply guaranteed ranking, guaranteed citation, guaranteed recommendation or guaranteed model adoption. It should explain what can be observed, improved, documented and monitored.
Use in audit and routing
Use this family when building pages that must speak to demand while preserving precision. The route should be: market symptom, audit label, diagnostic question, evidence required, doctrinal anchor, limits, and next step, as in the record sketch below.
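A minimal sketch of that route as a single record, so a service page carries the whole chain instead of stopping at the label. Every field name and example value here is illustrative, not part of the doctrine.

```python
# Minimal sketch: the routing chain as one record per service page.
from dataclasses import dataclass

@dataclass
class AuditRoute:
    market_symptom: str       # e.g. "ChatGPT never mentions the brand"
    audit_label: str          # bridge vocabulary from the table above
    diagnostic_question: str  # the stricter problem behind the symptom
    evidence_required: str    # which probative surface applies
    doctrinal_anchor: str     # canonical concept the page routes toward
    limits: str               # the non-promise, stated explicitly
    next_step: str

route = AuditRoute(
    market_symptom="AI systems cite a competitor instead",
    audit_label="AI citation tracking audit",
    diagnostic_question="Do cited sources actually structure the answer?",
    evidence_required="Q-Ledger observations plus Q-Metrics comparison",
    doctrinal_anchor="answer legitimacy",
    limits="no guaranteed citation or recommendation",
    next_step="citability audit of the canonical source",
)
print(route.audit_label, "->", route.doctrinal_anchor)
```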
For SERP architecture, this family prevents cannibalization by giving service pages a clear role: they capture demand and route toward the canon, rather than competing with definitions.