Evidence layer
Probative surfaces brought into scope by this page
This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.
- 01 Canon and scope: Definitions canon
- 02 Weak observation: Q-Ledger
- 03 Derived measurement: Q-Metrics
Definitions canon
/canon.md
Opposable base for identity, scope, roles, and negations that must survive synthesis.
- Makes provable: the reference corpus against which fidelity can be evaluated.
- Does not prove: neither that a system already consults it nor that an observed response stays faithful to it.
- Use when: before any observation, test, audit, or correction.
Q-Ledger
/.well-known/q-ledger.json
Public ledger of inferred sessions that makes some observed consultations and sequences visible.
- Makes provable: that a behavior was observed, as weak, dated, contextualized trace evidence.
- Does not prove: neither actor identity, system obedience, nor strong proof of activation.
- Use when: it is necessary to distinguish descriptive observation from strong attestation.
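The distinction the ledger enforces can be sketched in code. The real /.well-known/q-ledger.json schema is not specified on this page, so every field name below (`entries`, `observed_at`, `kind`, `context`) is an illustrative assumption, not the actual format.

```python
import json
from datetime import datetime

# Hypothetical ledger payload; field names are assumptions for illustration.
LEDGER_JSON = """
{
  "entries": [
    {"observed_at": "2024-05-01T10:00:00+00:00",
     "surface": "/canon.md",
     "kind": "weak-observation",
     "context": "inferred session"},
    {"observed_at": "2024-05-03T09:30:00+00:00",
     "surface": "/canon.md",
     "kind": "attestation-claim",
     "context": "unverified"}
  ]
}
"""

def weak_observations(raw: str) -> list[dict]:
    """Keep only dated, contextualized weak observations.

    The ledger records what was observed, not what a system obeyed,
    so anything claiming to be stronger than a weak observation
    is filtered out rather than promoted to attestation.
    """
    kept = []
    for entry in json.loads(raw).get("entries", []):
        if entry.get("kind") != "weak-observation":
            continue  # not descriptive trace evidence
        datetime.fromisoformat(entry["observed_at"])  # must carry a valid date
        if entry.get("context"):
            kept.append(entry)
    return kept

traces = weak_observations(LEDGER_JSON)
print(len(traces))  # → 1: the attestation-style entry is excluded
```

The point of the filter is the negative case: an entry that presents itself as proof of activation is dropped, because the ledger only supports descriptive observation.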
Q-Metrics
/.well-known/q-metrics.json
Derived layer that makes some variations more comparable from one snapshot to another.
- Makes provable: that an observed signal can be compared, versioned, and challenged as a descriptive indicator.
- Does not prove: neither the truth of a representation, the fidelity of an output, nor real steering on its own.
- Use when: comparing windows, prioritizing an audit, or documenting a before/after.
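A before/after comparison between two metric snapshots can be sketched as below. The real /.well-known/q-metrics.json schema is not given on this page, so the `version` and `indicators` fields and the indicator names are assumptions used only to show the shape of the comparison.

```python
# Hypothetical snapshots of /.well-known/q-metrics.json for two windows.
before = {"version": "2024-04",
          "indicators": {"citation_rate": 0.12, "scope_drift": 0.30}}
after = {"version": "2024-05",
         "indicators": {"citation_rate": 0.18, "scope_drift": 0.22}}

def snapshot_delta(a: dict, b: dict) -> dict:
    """Per-indicator delta between two versioned snapshots.

    The result is a descriptive comparison only: it documents movement
    between windows, it does not prove that a correction caused it.
    """
    keys = set(a["indicators"]) | set(b["indicators"])
    return {
        k: round(b["indicators"].get(k, 0.0) - a["indicators"].get(k, 0.0), 6)
        for k in sorted(keys)
    }

delta = snapshot_delta(before, after)
# delta documents the before/after: citation_rate up, scope_drift down
```

Keeping the delta versioned and challengeable (rather than reading it as proof of steering) is what keeps the derived layer inside the "does not prove" boundary above.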
Recommendability audit
Recommendability audit is an audit of whether an entity, service, tool or source can be responsibly recommended by AI systems under declared scope, evidence, comparison and authority constraints.
This page is the canonical definition of Recommendability audit on Gautier Dorval. It is a phase 13 market-bridge term: it captures a phrase people actually search for, then routes the work toward stricter concepts such as proof of fidelity, answer legitimacy, source hierarchy, canon-output gap and interpretive observability.
Short definition
Recommendability audit names a market bridge for teams that want AI systems to recommend them without overextending claims, use cases or commitments.
It is useful as a search-facing and client-facing label. It is not sufficient as a governing doctrine. The audit label must remain subordinate to the canonical concepts that determine whether a visible, cited or recommended answer is actually faithful, bounded and defensible.
Search intent captured
This definition intentionally captures queries such as:
- recommendability audit
- AI recommendation audit
- recommended by ChatGPT
- AI recommendation readiness
The goal is not to chase these labels as isolated services. The goal is to make them legible entry points that lead to the correct diagnostic layer.
Common symptoms
- The entity is mentioned but not recommended when relevant.
- Recommendations appear with the wrong use case or buyer profile.
- AI systems overpromise scope, availability or suitability.
- Comparisons do not preserve decision-critical boundaries.
What it is not
Recommendability audit is not a promise of ranking, citation, recommendation, traffic, ChatGPT inclusion, model compliance or future stability. It also does not replace AI search monitoring or AI answer audit. Monitoring observes recurring outputs. An answer audit reviews specific answers. A market-bridge audit qualifies the business-facing symptom and routes it toward the right proof, canon and correction mechanisms.
Governance implication
The audit should produce a route, not only a score. A useful output identifies which canonical surface should govern the answer, which sources are cited or omitted, which claim class is unstable, whether the problem is visibility, citability, recommendability, brand representation, answer legitimacy or semantic architecture, and what correction path is realistic.
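The "route, not only a score" requirement can be made concrete as a data structure. The page prescribes what an audit output should identify, not a schema, so the field names below are assumptions that simply mirror the list in the paragraph above.

```python
from dataclasses import dataclass

# Illustrative only: fields mirror what a useful audit output identifies.
@dataclass
class AuditRoute:
    governing_surface: str       # which canonical surface should govern the answer
    cited_sources: list          # sources the observed answer actually cites
    omitted_sources: list        # sources it should cite but does not
    unstable_claim_class: str    # e.g. scope, availability, suitability
    problem_layer: str           # visibility | citability | recommendability | ...
    correction_path: str         # realistic next step, not a promised outcome

route = AuditRoute(
    governing_surface="/canon.md",
    cited_sources=["/.well-known/q-ledger.json"],
    omitted_sources=["/canon.md"],
    unstable_claim_class="scope",
    problem_layer="recommendability",
    correction_path="align service scope statements with the definitions canon",
)
```

A plain score would collapse these distinctions; the routed form records which layer is failing and which surface should govern it, which is what makes the correction path auditable.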
Service-facing surface
For the service-facing page, see Recommendability audit.
Related concepts
- LLM visibility
- Citability
- Recommendability
- AI search monitoring
- AI answer audit
- Proof of fidelity
- Canon-output gap
- Answer legitimacy
Phase 13 rule
Do not infer service readiness, commercial availability, pricing, ranking potential, citation probability, recommendation probability or correction success from the audit label alone. The label is an entry point. The governing work remains canon, evidence, source hierarchy, response legitimacy and correction discipline.
Reading guidance
Use Recommendability audit as a market-facing entry point, not as a ranking promise. The term translates a visible business symptom into an auditable interpretive question: what is being cited, recommended, ignored, substituted, or misrepresented, and under which evidence conditions? A page or audit using this term should connect user-facing visibility to source hierarchy, canon-output gaps, proof of fidelity, and correction discipline.
What to verify
- Whether the observed answer names the right entity, service, source, or perimeter.
- Whether citation, visibility, or recommendation is supported by a reconstructable source path.
- Whether the output confuses market presence with interpretive authority.
- Whether the audit can separate a transient model answer from a stable representation pattern.
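The last check, separating a transient model answer from a stable representation pattern, can be sketched as a recurrence test over repeated observations. The answers and the 60% threshold below are invented for illustration; nothing on this page fixes either.

```python
from collections import Counter

# Hypothetical repeated observations of the same question across sessions.
observations = [
    "recommends entity A for use case X",
    "recommends entity A for use case X",
    "recommends entity B for use case X",
    "recommends entity A for use case X",
]

def stable_pattern(obs, threshold=0.6):
    """Return the dominant answer only if it recurs above a threshold.

    A single divergent answer is treated as transient noise; stability
    is claimed only when one pattern dominates the observation window.
    """
    if not obs:
        return None
    answer, count = Counter(obs).most_common(1)[0]
    return answer if count / len(obs) >= threshold else None

# Three of four observations agree (75% >= 60%), so a stable pattern
# is reported; an evenly split window would return None instead.
dominant = stable_pattern(observations)
```

The design choice worth noting is that the function returns None rather than a best guess when no answer dominates: an audit that cannot establish recurrence should report instability, not manufacture a pattern.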
Practical boundary
This concept should not be used to imply guaranteed inclusion in ChatGPT, Google AI Overviews, Perplexity, Claude, Gemini, or any other external system. It is a diagnostic surface. Its value comes from making the symptom readable, comparable, and correctable, not from promising that a third-party model will adopt the preferred representation.