Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-05-13
Published: 2026-05-13
Updated: 2026-05-13
Known-source risk
Known-source risk is the risk that an AI system relies on a source, URL, entity description or page role because it believes it already knows it, even when the live corpus has changed.
The risk can produce stale citations, reconstructed URLs, outdated categories, obsolete claims, or misplaced authority. Correcting it may require deactivating old authority, strengthening canonical routing, and running proof-of-fidelity checks across systems.
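As a minimal sketch of how such a check could work, the snippet below classifies a cited URL against a live canonical map. The map, the retired-URL set, and all URLs are illustrative assumptions, not real endpoints or an established API.

```python
# Hypothetical sketch: flag citations that rely on remembered rather than
# live sources. CANONICAL and RETIRED are assumed, illustrative data.

CANONICAL = {
    "pricing": "https://example.com/pricing/",  # current canonical URL
}
RETIRED = {
    "https://example.com/old-pricing.html",  # deactivated old authority
}

def classify_citation(topic: str, cited_url: str) -> str:
    """Classify a cited URL as canonical, stale, or reconstructed."""
    if cited_url == CANONICAL.get(topic):
        return "canonical"
    if cited_url in RETIRED:
        return "stale"          # old authority the system still believes in
    return "reconstructed"      # URL likely rebuilt from memory, not the corpus

print(classify_citation("pricing", "https://example.com/old-pricing.html"))  # stale
```

A real check would also resolve redirects and compare against the live corpus rather than a static map.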
Interpretive remanence designates the persistence of an old interpretation in AI outputs, even after the canon has been corrected, clarified, or updated.
Canonical definition of proof of fidelity: the minimum evidence required to show that an AI output remains faithful to the canon rather than merely plausible.
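The definition above can be sketched as a two-part test, under the assumption that the minimum evidence is (a) the claim is grounded in the canonical text and (b) the citation routes to the canonical URL. Function and parameter names are hypothetical.

```python
def proof_of_fidelity(output_claim: str, cited_url: str,
                      canon_text: str, canonical_url: str) -> bool:
    """Minimum evidence that an AI output is faithful to the canon:
    the claim must appear in the canonical text (grounded, not merely
    plausible) and the citation must route to the canonical URL."""
    grounded = output_claim.lower() in canon_text.lower()
    routed = cited_url == canonical_url
    return grounded and routed

canon = "The plan costs 20 euros per month."
print(proof_of_fidelity("costs 20 euros",
                        "https://example.com/pricing/",
                        canon,
                        "https://example.com/pricing/"))  # True
```

Exact substring matching is a deliberate simplification; production checks would need semantic matching, but the grounded-and-routed structure is the point of the sketch.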
An audit service evaluates whether a site, corpus, page, or entity is accessible, retrievable, extractable, citable, and governable in AI-mediated answers.
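The five audit criteria could be modeled as a simple record whose overall verdict requires every dimension to pass. This is an assumed data shape for illustration, not the audit service's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AuditResult:
    """One result per audited page or entity, one flag per criterion."""
    accessible: bool    # can AI systems reach the resource at all?
    retrievable: bool   # does it come back from retrieval/indexing?
    extractable: bool   # can its content be parsed into usable text?
    citable: bool       # can it be attributed with a stable canonical URL?
    governable: bool    # can its canon be corrected and the correction stick?

    def passed(self) -> bool:
        """The audit passes only if every criterion holds."""
        return all(vars(self).values())

result = AuditResult(True, True, True, True, False)
print(result.passed())  # False: not governable, so the audit fails
```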