AI citation tracking audit: what must actually be measured

A citation count is not an audit. The useful unit is the relationship between a generated claim, a cited source, and the authority that should govern it.

Collection: Article
Type: Article
Category: AI governance
Published: 2026-05-13
Updated: 2026-05-13
Reading time: 3 min

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the corpus reading conditions. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Site context
  3. Public AI manifest

Canon and identity (#01)

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs: Public identity, roles, and attributes that must not drift.
Bounds: Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Context and versioning (#02)

Site context

/site-context.md

Notice that qualifies the nature of the site, its reference function, and its non-transactional limits.

Governs: Editorial framing, temporality, and the readability of explicit changes.
Bounds: Silent drifts and readings that assume stability without checking versions.

Does not guarantee: Versioning makes a gap auditable; it does not automatically correct outputs already in circulation.

Entrypoint (#03)

Public AI manifest

/ai-manifest.json

Structured inventory of the surfaces, registries, and modules that extend the canonical entrypoint.

Governs: Access order across surfaces and initial precedence.
Bounds: Free readings that bypass the canon or the published order.

Does not guarantee: This surface publishes a reading order; it does not force execution or obedience.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope: Definitions canon
  2. Weak observation: Q-Ledger

Canonical foundation (#01)

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable: The reference corpus against which fidelity can be evaluated.
Does not prove: Neither that a system already consults it nor that an observed response stays faithful to it.
Use when: Before any observation, test, audit, or correction.

Observation ledger (#02)

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable: That a behavior was observed, as weak, dated, contextualized trace evidence.
Does not prove: Neither actor identity, system obedience, nor strong proof of activation.
Use when: You need to distinguish descriptive observation from strong attestation.


AI citation tracking is becoming a common visibility practice, but many reports still measure the wrong thing. They count whether a domain was cited, how often a URL appeared, or whether a brand was included in an answer. Those observations are useful, but they are not enough to diagnose legitimacy.

A source can appear often and still be used weakly. Another source can appear rarely but govern a decisive claim. A third-party directory can be cited instead of the official source. A citation can support part of a sentence while the generated synthesis goes beyond the cited evidence.

The minimum audit unit

A citation audit should not start with URLs. It should start with claims.

For each generated answer, identify:

  1. the claim being made;
  2. the source displayed or implied;
  3. the passage that appears to support the claim;
  4. the source that should govern the claim;
  5. the gap between the generated statement and the governing source.

Only then can the audit classify the citation.
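As a sketch, the five elements of the minimum audit unit could be captured in one record per claim. The field names and the sample values below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimAudit:
    """One claim-level audit record; field names are illustrative."""
    claim: str                          # 1. the claim being made
    cited_source: Optional[str]         # 2. the source displayed or implied
    supporting_passage: Optional[str]   # 3. the passage that appears to support the claim
    governing_source: str               # 4. the source that should govern the claim
    gap: Optional[str]                  # 5. gap between the statement and the governing source

# Hypothetical example: a third-party directory cited instead of the official source.
record = ClaimAudit(
    claim="Product X supports feature Y",
    cited_source="https://example.com/third-party-directory",
    supporting_passage="X lists Y among its features",
    governing_source="https://example.com/official-docs",
    gap="directory cited instead of official source",
)
```

Starting from claims rather than URLs means the record is keyed by `claim`, and the cited and governing sources are attributes of it, not the other way around.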

Citation roles

The central variable is citation role. A citation may be governing, supporting, illustrative, ornamental, outdated, or contradictory.

| Role | Audit meaning |
| --- | --- |
| Governing | the source legitimately constrains the claim |
| Supporting | the source helps but does not fully govern |
| Illustrative | the source gives context or an example |
| Ornamental | the citation is displayed but weakly connected |
| Outdated | the source was valid in another state or period |
| Contradictory | the source conflicts with the answer |

A report that counts all of those roles equally is not measuring citation quality. It is measuring citation visibility.
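The gap between visibility and quality can be made concrete by weighting citations by role instead of counting them equally. The weights below are an assumption for illustration, not a standard scale:

```python
from enum import Enum

class CitationRole(Enum):
    GOVERNING = "governing"
    SUPPORTING = "supporting"
    ILLUSTRATIVE = "illustrative"
    ORNAMENTAL = "ornamental"
    OUTDATED = "outdated"
    CONTRADICTORY = "contradictory"

# Illustrative weights: only governing/supporting contribute to quality;
# a contradictory citation counts against the answer.
QUALITY_WEIGHT = {
    CitationRole.GOVERNING: 1.0,
    CitationRole.SUPPORTING: 0.5,
    CitationRole.ILLUSTRATIVE: 0.0,
    CitationRole.ORNAMENTAL: 0.0,
    CitationRole.OUTDATED: 0.0,
    CitationRole.CONTRADICTORY: -1.0,
}

def visibility(observed_roles):
    """Raw citation count: what naive reports measure."""
    return len(observed_roles)

def quality(observed_roles):
    """Role-weighted score: what a citation audit should measure."""
    return sum(QUALITY_WEIGHT[role] for role in observed_roles)

obs = [CitationRole.ORNAMENTAL, CitationRole.ORNAMENTAL, CitationRole.GOVERNING]
print(visibility(obs), quality(obs))  # 3 1.0
```

Two sources with identical visibility can land far apart on the quality axis, which is exactly the distinction a role-blind report erases.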

What must be tracked over time

A mature tracking audit should preserve, for each observation:

  - the system, model, or product surface;
  - date;
  - language;
  - location, when relevant;
  - prompt variant;
  - answer;
  - displayed citations;
  - implied sources;
  - cited passage;
  - citation role;
  - correction hypothesis.

The audit should also identify source substitution. This happens when a weaker or secondary source replaces the canonical source in the answer. Source substitution is more important than raw citation frequency because it shows who is governing the interpretation.
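A substitution check can be reduced to comparing the cited sources against the canonical one. The domain names below are hypothetical, and real comparisons would need normalization (subdomains, redirects) that this sketch omits:

```python
def detect_substitution(cited_sources, canonical_source):
    """Return the sources standing in for the canonical one, or None.

    If the canonical source is cited, there is no substitution.
    Otherwise every cited source is a candidate substitute.
    """
    cited = set(cited_sources)
    if canonical_source in cited:
        return None
    return sorted(cited) or None

# The claim is governed by official-docs.example, but the answer
# cites a directory and a blog instead.
print(detect_substitution(
    ["directory.example", "blog.example"], "official-docs.example"
))  # ['blog.example', 'directory.example']
```

Tracked over time, this flag shows who is actually governing the interpretation, which raw frequency cannot.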

Stability matters

A single screenshot is not enough. AI-mediated answers vary by prompt phrasing, system, session, language, and time. The question is not whether one answer cited one source once. The question is whether the source role persists across repeated observations.

This is where citation persistence and citation fidelity become stronger than a domain-level visibility score.
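One minimal way to express persistence is the fraction of repeated observation runs in which the source held a given role. This is a sketch; the function name and role strings are assumptions:

```python
from collections import Counter

def role_persistence(observed_roles, target_role="governing"):
    """Fraction of observation runs in which the source held the target role.

    `observed_roles` holds one role string per run; a single screenshot
    is a list of length one, so its persistence score is all-or-nothing.
    """
    if not observed_roles:
        return 0.0
    counts = Counter(observed_roles)
    return counts[target_role] / len(observed_roles)

# Five runs of the same prompt variant over time.
runs = ["governing", "supporting", "governing", "ornamental", "governing"]
print(role_persistence(runs))  # 0.6
```

A domain-level visibility score collapses these runs into one number per domain; persistence keeps the per-run role information that makes the trend auditable.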

Correction route

If the audit finds low citation frequency, the correction may involve access, ranking, fan-out coverage or extractability. If it finds ornamental citations, the correction may involve clearer evidence and stronger passage structure. If it finds source substitution, the correction may involve internal routes, canonical claims and source hierarchy.
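The three findings above map to distinct correction tracks, which could be encoded as a simple routing table. The finding keys and track labels are hypothetical shorthand for the cases just described:

```python
# Hypothetical routing table from audit finding to candidate correction tracks.
CORRECTION_ROUTES = {
    "low_citation_frequency": ["access", "ranking", "fan-out coverage", "extractability"],
    "ornamental_citations": ["clearer evidence", "stronger passage structure"],
    "source_substitution": ["internal routes", "canonical claims", "source hierarchy"],
}

def correction_route(finding):
    """Return the candidate correction tracks for a finding; KeyError if unknown."""
    return CORRECTION_ROUTES[finding]

print(correction_route("source_substitution"))
# ['internal routes', 'canonical claims', 'source hierarchy']
```

Keeping the routing explicit forces each finding to be classified before a correction is attempted, rather than fixing frequency when the real problem is substitution.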

The AI citation readiness audit prepares the source. The AI citation tracking audit observes how systems use it. Both must be connected to proof of fidelity before the result can be treated as legitimate.