
When the cited source is not the governing source

An official source may appear inside an AI answer while still losing the framing, comparison, or limits that actually govern the final synthesis.

Collection: Article
Type: Article
Category: semantic architecture
Published: 2026-04-15
Updated: 2026-04-15
Reading time: 8 min

Governance artifacts

Governance files brought into scope by this page

This page is anchored to published surfaces that declare identity, precedence, limits, and the conditions for reading the corpus. Their order below gives the recommended reading sequence.

  1. Definitions canon
  2. Identity lock
  3. Q-Ledger JSON
Canon and identity #01

Definitions canon

/canon.md

Canonical surface that fixes identity, roles, negations, and divergence rules.

Governs: Public identity, roles, and attributes that must not drift.
Bounds: Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.

Canon and identity #02

Identity lock

/identity.json

Identity file that bounds critical attributes and reduces biographical or professional collisions.

Governs: Public identity, roles, and attributes that must not drift.
Bounds: Extrapolations, entity collisions, and abusive requalification.

Does not guarantee: A canonical surface reduces ambiguity; it does not guarantee faithful restitution on its own.
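
As an illustration only, here is a minimal sketch of the shape such an identity file could take. The field names below are assumptions for the example, not the published shape of /identity.json.

```ts
// Hypothetical shape for an identity lock file; the real /identity.json
// may use different fields. Illustrative values only.
interface IdentityLock {
  entity: string;                            // canonical display name
  version: string;                           // date or version of this lock
  lockedAttributes: Record<string, string>;  // critical attributes that must not drift
  negations: string[];                       // explicit "is not" statements
  disambiguation?: string;                   // separation from colliding entities
}

const identityExample: IdentityLock = {
  entity: "Example Brand",
  version: "2026-04-15",
  lockedAttributes: { legalName: "Example Brand SAS", primaryRole: "software vendor" },
  negations: ["not a marketing agency", "not affiliated with Example Corp"],
  disambiguation: "Distinct from the homonymous retailer.",
};
```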

Observability #03

Q-Ledger JSON

/.well-known/q-ledger.json

Machine-first journal of observations, baselines, and versioned gaps.

Governs: The description of gaps, drifts, snapshots, and comparisons.
Bounds: Confusion between observed signal, fidelity proof, and actual steering.

Does not guarantee: An observation surface documents an effect; it does not, on its own, guarantee representation.
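
As a hedged sketch only, one plausible shape for a single ledger entry; the field names are assumptions, not the published format of /.well-known/q-ledger.json.

```ts
// Hypothetical entry shape for an observation ledger; the real
// /.well-known/q-ledger.json may differ. Illustrative only.
interface QLedgerEntry {
  observedAt: string;      // ISO date of the observation
  surface: string;         // which answer surface was sampled
  probe: string;           // the question or query used for the probe
  citedSources: string[];  // URLs visible in the rendered answer
  baseline: string;        // canon version the observation is read against
  gap?: string;            // versioned description of the observed drift, if any
}

const ledgerExample: QLedgerEntry = {
  observedAt: "2026-04-10",
  surface: "generative answer engine",
  probe: "what does Example Brand do?",
  citedSources: ["https://example.com/canon.md"],
  baseline: "canon 2026-03",
  gap: "category imported from a third-party directory",
};
```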

Complementary artifacts (2)

These surfaces extend the main block. They add context, discovery, routing, or observation depending on the topic.

Observability #04

Q-Metrics JSON

/.well-known/q-metrics.json

Descriptive metrics surface for observing gaps, snapshots, and comparisons.
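
A minimal sketch of what one derived snapshot could look like, assuming fields for a comparison window and a couple of descriptive indicators; none of these names come from the published /.well-known/q-metrics.json.

```ts
// Hypothetical snapshot shape for a derived metrics surface; the real
// /.well-known/q-metrics.json may expose different indicators.
interface QMetricsSnapshot {
  window: { from: string; to: string }; // observation window being summarized
  sampleSize: number;                   // number of observed answers in the window
  citationRate: number;                 // share of answers citing the official source
  framingMatchRate: number;             // share whose framing matches the canon
  deltaFromPrevious?: number;           // change in framing match versus the prior snapshot
}
```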

Policy and legitimacy #05

Citations

/citations.md

Surface that makes explicit the conditions of response, restraint, escalation, or non-response.

Evidence layer

Probative surfaces brought into scope by this page

This page does more than point to governance files. It is also anchored to surfaces that make observation, traceability, fidelity, and audit more reconstructible. Their order below makes the minimal evidence chain explicit.

  1. Canon and scope · Definitions canon
  2. Response authorization · Q-Layer: response legitimacy
  3. Weak observation · Q-Ledger
  4. Derived measurement · Q-Metrics
Canonical foundation #01

Definitions canon

/canon.md

Opposable base for identity, scope, roles, and negations that must survive synthesis.

Makes provable: The reference corpus against which fidelity can be evaluated.
Does not prove: That a system already consults it, or that an observed response stays faithful to it.
Use when: Before any observation, test, audit, or correction.
Legitimacy layer #02

Q-Layer: response legitimacy

/response-legitimacy.md

Surface that explains when to answer, when to suspend, and when to switch to legitimate non-response.

Makes provable: The legitimacy regime to apply before treating an output as receivable.
Does not prove: That a given response actually followed this regime, or that an agent applied it at runtime.
Use when: A page deals with authority, non-response, execution, or restraint.
Observation ledger #03

Q-Ledger

/.well-known/q-ledger.json

Public ledger of inferred sessions that makes some observed consultations and sequences visible.

Makes provable: That a behavior was observed, in the form of weak, dated, contextualized trace evidence.
Does not prove: Actor identity, system obedience, or strong proof of activation.
Use when: You need to distinguish descriptive observation from strong attestation.
Descriptive metrics #04

Q-Metrics

/.well-known/q-metrics.json

Derived layer that makes some variations more comparable from one snapshot to another.

Makes provable: That an observed signal can be compared, versioned, and challenged as a descriptive indicator.
Does not prove: The truth of a representation, the fidelity of an output, or real steering, on its own.
Use when: Comparing windows, prioritizing an audit, or documenting a before/after.
Complementary probative surfaces (1)

These artifacts extend the main chain. They help qualify an audit, an evidence level, a citation, or a version trajectory.

Report schema · Audit report

IIP report schema

/iip-report.schema.json

Public interface for an interpretation integrity report: scope, metrics, and drift taxonomy.
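
As a sketch under assumptions, one way a report conforming to such a schema could be typed; the property names echo the scope, metrics, and drift taxonomy named above, but they are illustrative, not the published /iip-report.schema.json.

```ts
// Hypothetical report shape; the real /iip-report.schema.json may differ.
interface IIPReport {
  scope: {
    entity: string;        // the entity whose interpretation is audited
    surfaces: string[];    // answer surfaces covered by the report
  };
  metrics: Record<string, number>; // descriptive indicators, e.g. framing match rate
  drifts: Array<{
    kind: string;          // drift class, e.g. "category" or "boundary" (assumed taxonomy)
    evidence: string[];    // pointers to ledger entries supporting the finding
  }>;
}
```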

An AI answer may display the right source and still remain badly governed.

That is one of the most frequent reading traps: a team sees its official site, notices that the citation is present, and then assumes that real authority has been preserved.

That shortcut is reassuring. It does not suffice.

Inside a generative answer, the cited source is not always the source that structured the synthesis. And the source that structured the synthesis is not always the source whose authority ultimately governs the perimeter, exclusions, or modality of the answer.

Three roles that must be separated

To read the scene correctly, three roles must be distinguished.

1. The cited source

This is the source visible in the final rendering. It serves as the apparent support, link, or reference available to the user.

2. The structuring source

This is the source that changes the shape of the possible answer. It imposes a category, comparison angle, regime of validity, or relationship between entities. It may remain invisible.

3. The governing source

This is the source whose authority actually prevails over perimeter, limits, exclusions, and modality. It is the source that bounds the answer.

When those three roles converge, reading is simple. When they diverge, an answer may seem correctly sourced while remaining badly framed.

How the dissociation appears

The dissociation is not an exotic accident. It appears as soon as a system arbitrates between several heterogeneous fragments.

A few mechanisms return again and again:

  • a third-party source imposes the category through which the brand is read;
  • a comparator imposes a comparison logic stronger than the canonical definition;
  • an archive or former state keeps imposing a dominant temporality;
  • a short, stable listing simplifies the object better than a richer but costlier official page;
  • a source becomes structuring on a second hop without appearing in the final rendering.

That is exactly why visibility alone is not enough. It still does not say which source truly holds the frame.

Typical case: the official site appears, but the third party governs

The most misleading case is simple.

The official site is indeed cited. Yet the answer reuses the category of a directory, the angle of a comparator, or the implicit limit of a third-party listing.

The user sees the official source. They believe everything is fine.

In reality, the official source no longer occupies the decisive role. It confirms the apparent object, but it no longer governs the answer.

The diagnosis then becomes more demanding: one must ask not only “who is cited?”, but “who imposed the shape of the synthesis?” and “who ultimately decided the boundaries?”.

Why this diagnosis changes the correction

If the cited source is always treated as the governing source, the wrong place is often corrected.

The official page is rewritten. More content is added. Sections that were already correct are made denser. Yet the real lever may sit elsewhere:

  • in an undeclared source hierarchy;
  • in poorly aligned editable third parties;
  • in an archive that was never downgraded;
  • in a structuring surface clearer than the canon itself;
  • in an authority boundary that was never made explicit.

In other words, correction does not always consist in publishing more. It often consists in reassigning source roles.

What monitoring sees, and what it does not yet see

An AI Search Monitoring setup can see that a source appears, disappears, or returns more often. AI citation analysis can show that the official source is cited while a doubtful framing persists.

Those layers are useful. They remain insufficient as long as the real distribution of roles is not read.

That is precisely the role of AI source mapping: to distinguish the cited source, the structuring source, and the governing source inside the same family of answers.
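
As an illustration of that role-based reading, here is one way a mapping record could be shaped; the names are hypothetical, not an established format.

```ts
// Illustrative only: one way to record the three roles per family of answers.
type SourceRole = "cited" | "structuring" | "governing";

interface SourceMapping {
  answerFamily: string;   // e.g. a recurring question about the brand
  assignments: Array<{
    url: string;          // the source in question
    roles: SourceRole[];  // which of the three roles it actually holds
  }>;
}

// A divergent case: the official site is cited, but a directory
// both structures and governs the synthesis.
const mappingExample: SourceMapping = {
  answerFamily: "what does Example Brand do?",
  assignments: [
    { url: "https://example.com/canon.md", roles: ["cited"] },
    { url: "https://directory.example/brand", roles: ["structuring", "governing"] },
  ],
};
```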

Where the problem joins the representation gap

A brand may therefore be visible and still badly reconstructed.

It may be cited and still badly bounded.

It may even be backed by the right source while still losing, inside synthesis, the definition of its own perimeter.

At that level, one is no longer speaking only about citation. One is speaking about the representation gap.

The official source appears, but the version retained by the system is no longer quite the right version of the entity.

Practical rule

The practical rule is simple:

  • do not conclude too quickly from a visible citation;
  • ask which source structured the answer;
  • then ask which source actually governs its boundaries;
  • launch correction only after that role-based reading.

Only from that point onward can a proof of fidelity or a representation gap audit be correctly framed.
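
Building on the hypothetical SourceMapping sketch above, a minimal gate could make that ordering explicit: correction proceeds only once every role has been assigned.

```ts
// Illustrative gate over the hypothetical SourceMapping above: correction
// is only launched once the role-based reading is complete.
function readyForCorrection(mapping: SourceMapping): boolean {
  const assigned = (role: SourceRole) =>
    mapping.assignments.some((a) => a.roles.includes(role));
  // All three roles must have been explicitly assigned before correcting.
  return assigned("cited") && assigned("structuring") && assigned("governing");
}
```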

Conclusion

A displayed source does not suffice to prove that it is authoritative.

In generative environments, the decisive question is not only “who is cited?”, but “who structures?” and “who governs?”.

Until that dissociation is made explicit, many answers will look well sourced while remaining doctrinally unsound.