IIP-Scoring™: doctrinal framework for the interpretive integrity measurement protocol
This page defines the public doctrinal framework of IIP-Scoring™. It sets out the concepts, objectives, principles, and measurement perimeter without disclosing implementation details, operational thresholds, internal weighting, arbitration rules, or annotation procedures.
IIP-Scoring™ is a qualification protocol designed to measure the gap between an explicitly snapshotted canonical corpus and the outputs produced by language models under declared execution conditions. Its purpose is not to judge whether an answer is merely convincing, but to assess whether an interpretation remains faithful, bounded, explainable, and sustainable.
Positioning
IIP-Scoring™ sits at the intersection of doctrine, auditability, and evidence. It is neither raw observability nor a generic content-quality score. It is a bounded public framework for interpretive integrity.
The protocol is therefore part of a governance stack in which canon, authority, response conditions, and proof must remain distinguishable.
GitHub reference surface (restricted specification)
The public website exposes the doctrinal perimeter. A more technical reference surface may exist on GitHub or related documentation repositories, but the public doctrinal page remains intentionally bounded. This distinction matters: a doctrinal page explains the framework and its admissible reading, while a technical repository may hold controlled artefacts, validators, examples, or derivative instruments.
Relation to interpretation integrity audit
IIP-Scoring™ does not replace the full interpretation integrity audit. The audit remains the broader process that defines the canon, sets the authority boundary, tests outputs, gathers proof, diagnoses drift, and organizes correction.
IIP-Scoring™ contributes a measurable layer inside that larger process. It is the part that qualifies the integrity of outputs against declared criteria.
Object and purpose
The object of IIP-Scoring™ is simple: to qualify the distance between what the canon explicitly states and what a model actually outputs.
Its purpose is to make this distance readable in a way that is:
- reproducible enough to be compared across runs or versions;
- bounded enough to avoid pseudo-precision;
- useful enough to support diagnosis, correction, and governance decisions.
Normative principles
The public doctrinal framework rests on several principles:
- canon first: no score makes sense without an explicitly defined canonical reference;
- authority-bounded reading: outputs must be judged against an authority perimeter, not against free-floating expectations;
- proof over impression: apparent fluency is never equivalent to interpretive fidelity;
- declared execution conditions: prompts, contexts, or test conditions must be bounded if results are to be comparable;
- non-operational public view: the public framework explains the logic, but does not disclose sensitive internal calibration.
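The principle of declared execution conditions can be made concrete with a minimal sketch. The `ExecutionConditions` type and its fields below are illustrative assumptions, not part of the protocol; the point is only that scores from two runs are comparable when, and only when, their declared conditions match.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionConditions:
    """Declared conditions under which an output was produced.
    All field names here are illustrative, not part of the protocol."""
    model_id: str           # evaluated model and version
    canon_snapshot: str     # identifier of the canonical snapshot used
    prompt_template: str    # exact prompt or template used for the run
    context_sources: tuple  # ordered sources injected into the context

def comparable(a: ExecutionConditions, b: ExecutionConditions) -> bool:
    """Scores from two runs are comparable only if their declared
    conditions are identical."""
    return a == b

run1 = ExecutionConditions("model-x-1.0", "canon-2024-06", "Q: {q}", ("doc-a",))
run2 = ExecutionConditions("model-x-1.0", "canon-2024-06", "Q: {q}", ("doc-a",))
print(comparable(run1, run2))  # True
```

Freezing the dataclass makes the declared perimeter immutable for the lifetime of a run, which is exactly what comparability across runs requires.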
Scope of this public framework
This public page covers:
- the object of the protocol;
- the qualification logic;
- the high-level taxonomy;
- the major metric families;
- the interpretive role of the score.
It does not publish operational cutoffs, internal matrices, or decision rules that would turn the doctrine into a turnkey scoring product.
Qualification taxonomy
IIP-Scoring™ qualifies outputs according to the quality of their relation to the canon. At a public level, the taxonomy can be read along familiar distinctions:
- explicit canonical alignment;
- bounded inference compatible with the canon;
- unresolved ambiguity requiring clarification;
- extrapolation beyond authority;
- contradiction or drift relative to the canonical perimeter.
The value of the taxonomy is not that it produces a single magic number. It is that it distinguishes kinds of alignment from kinds of deviation.
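The five public-level categories can be encoded as an ordered enumeration. This encoding is a sketch of the reading logic only, not the protocol's internal representation; the names and the authority rule are assumptions for illustration.

```python
from enum import Enum

class Qualification(Enum):
    """Public-level taxonomy of an output's relation to the canon
    (illustrative encoding, ordered roughly by severity)."""
    CANONICAL_ALIGNMENT = 1   # explicitly supported by the canon
    BOUNDED_INFERENCE = 2     # inferred, but compatible with the canon
    UNRESOLVED_AMBIGUITY = 3  # requires clarification before judging
    EXTRAPOLATION = 4         # goes beyond the authority perimeter
    CONTRADICTION = 5         # contradicts or drifts from the canon

def within_authority(q: Qualification) -> bool:
    """Only the first two kinds remain inside the authority perimeter."""
    return q in (Qualification.CANONICAL_ALIGNMENT,
                 Qualification.BOUNDED_INFERENCE)
```

Keeping the categories discrete, rather than collapsing them into one score, is what lets a diagnosis say which kind of deviation occurred.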
Main metrics
Several metric families can legitimately belong to the framework, provided they remain anchored to a canonical snapshot and a declared execution perimeter.
MVS™ — Machine Visibility Score
MVS™ captures how visible canonical elements remain from the point of view of the evaluated environment. It does not by itself prove fidelity, but it informs whether the canonical surface is even likely to be encountered and used.
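To make the kind of question MVS™ asks concrete, here is a minimal illustrative ratio: the share of canonical elements that are actually visible from the evaluated environment. The real MVS™ computation is not public; the function name and formula below are assumptions.

```python
def machine_visibility_score(canonical_elements: set[str],
                             surfaced: set[str]) -> float:
    """Illustrative MVS-style ratio: fraction of canonical elements
    that the evaluated environment actually surfaces. A high value
    does not prove fidelity; it only means the canonical surface is
    likely to be encountered at all."""
    if not canonical_elements:
        return 0.0
    return len(canonical_elements & surfaced) / len(canonical_elements)

print(machine_visibility_score(
    {"claim-1", "claim-2", "claim-3", "claim-4"},
    {"claim-1", "claim-3", "unrelated"}))  # 0.5
```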
Fidelity-oriented measures
These measures assess whether the output preserves canonical claims, boundaries, and distinctions. They are the closest to the core purpose of interpretive integrity.
Drift-oriented measures
These measures assess how far outputs move toward unsupported inference, silent compression, or category error.
Sustainability-oriented measures
These measures matter because integrity is not only a momentary property. It must be maintainable across time, versions, snapshots, and correction cycles.
What this public framework allows
This public doctrinal framework allows a reader, auditor, or system designer to understand what IIP-Scoring™ is for, what it measures at a high level, and what kinds of conclusions it can support.
It allows public discussion of integrity measurement without turning the framework into a disclosure of private scoring logic.
What it does not disclose
This page does not disclose:
- internal weighting;
- sensitive heuristics;
- proprietary thresholds;
- arbitration logic for edge cases;
- implementation details that belong to restricted technical surfaces.
That boundary is intentional. Public doctrine must remain interpretable, auditable, and defensible without becoming a blueprint for misuse.
See also
- Interpretation integrity audit
- IIP-Scoring™ operational method
- Interpretive observability
- Canon vs inference
IDI™ — Interpretation Distortion Index
IDI™ captures how far an output distorts the canon by omission, reframing, or illegitimate extrapolation. It is a distortion-oriented metric family rather than a mere relevance measure.
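A sketch can show the shape of a distortion-oriented measure: counting distortion events per canonical claim checked. The event categories below follow the text (omission, reframing, illegitimate extrapolation), but the real IDI™ weighting and event definitions are not public, so this formula is an assumption.

```python
def interpretation_distortion_index(omissions: int, reframings: int,
                                    extrapolations: int,
                                    claims_checked: int) -> float:
    """Illustrative IDI-style ratio: distortion events observed per
    canonical claim checked. 0.0 means no distortion was detected."""
    if claims_checked == 0:
        return 0.0
    return (omissions + reframings + extrapolations) / claims_checked

print(interpretation_distortion_index(1, 1, 0, 4))  # 0.5
```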
NSS™ — Narrative Stability Score
NSS™ helps qualify whether the interpretive narrative remains stable across runs, prompts, or environments, instead of oscillating between incompatible framings.
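Stability across runs can be sketched as pairwise agreement between the framings each run produced. The real NSS™ logic is not public; labelling runs with a framing string and comparing pairs is an assumption made for illustration.

```python
from itertools import combinations

def narrative_stability_score(framings: list[str]) -> float:
    """Illustrative NSS-style measure: fraction of run pairs that
    produced the same framing label. 1.0 means every run framed the
    answer the same way; low values indicate oscillation."""
    if len(framings) < 2:
        return 1.0
    pairs = list(combinations(framings, 2))
    agreeing = sum(1 for a, b in pairs if a == b)
    return agreeing / len(pairs)

print(narrative_stability_score(["A", "A", "B"]))  # 1 agreeing pair out of 3
```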
IIS™ — Interpretation Integrity Score
IIS™ functions as a synthetic integrity view, combining the major dimensions without pretending to erase their differences. Its purpose is to support diagnosis, not to hide structure under one number.
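The idea of a synthetic view that does not hide its components can be sketched as follows. The equal weights and the returned structure are placeholders; the real IIS™ weighting is not public.

```python
def interpretation_integrity_view(fidelity: float, stability: float,
                                  drift: float) -> dict:
    """Illustrative IIS-style synthesis: combine the major dimensions
    into one composite while keeping each dimension readable. Drift is
    inverted so that higher always means better. Equal weights are a
    placeholder, not the protocol's calibration."""
    composite = (fidelity + stability + (1.0 - drift)) / 3.0
    return {"fidelity": fidelity, "stability": stability,
            "drift": drift, "composite": round(composite, 3)}

print(interpretation_integrity_view(0.9, 0.8, 0.1))
```

Returning the components alongside the composite is the point: the single number supports diagnosis without erasing the structure beneath it.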
Combined reading of the metrics
No single metric is sufficient. Machine visibility without fidelity is weak. Stability without authority alignment is dangerous. A combined reading matters because interpretive integrity is multi-dimensional.
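The combined reading can be sketched as a small set of diagnosis rules over the metric families. The 0.5 cutoffs below are placeholders only, since the protocol's operational thresholds are deliberately undisclosed; what matters is that different metric patterns yield different diagnoses.

```python
def combined_diagnosis(visibility: float, fidelity: float,
                       stability: float) -> str:
    """Illustrative combined reading. Each pattern of metrics suggests
    a different diagnosis; the 0.5 thresholds are placeholders."""
    if visibility >= 0.5 and fidelity >= 0.5 and stability >= 0.5:
        return "aligned and maintainable"
    if stability >= 0.5 and fidelity < 0.5:
        return "stable but drifting: a wrong framing is settling in"
    if visibility >= 0.5 and fidelity < 0.5:
        return "visible but unfaithful: canon is seen, not preserved"
    return "insufficient signal: widen the audit"

print(combined_diagnosis(0.2, 0.2, 0.9))
```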
Interpretive mastery configuration
A mature configuration is one in which the canonical surface is visible, the authority perimeter is explicit, the output remains bounded, and the result can be monitored across time.
Stabilization of an inference
One of the key doctrinal questions is not only whether an inference appears, but whether it stabilizes as if it were part of the canon. IIP-Scoring™ is meant to detect precisely that transition from plausible inference to normalized interpretive surface.
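A minimal detector for that transition can be sketched: a non-canonical claim that recurs across enough runs is starting to behave as if it were canon. The recurrence threshold and claim representation below are assumptions, not the protocol's detection logic.

```python
from collections import Counter

def stabilized_inferences(runs: list[set[str]], canon: set[str],
                          min_runs: int = 3) -> set[str]:
    """Illustrative detector: flag non-canonical claims that recur in
    at least `min_runs` runs, i.e. inferences that are stabilizing
    into a normalized interpretive surface. The threshold is a
    placeholder."""
    counts = Counter(c for run in runs for c in run if c not in canon)
    return {c for c, n in counts.items() if n >= min_runs}

runs = [{"inferred-x", "canon-a"}, {"inferred-x"}, {"inferred-x", "canon-a"}]
print(stabilized_inferences(runs, canon={"canon-a"}))  # {'inferred-x'}
```

Filtering out canonical claims first is what distinguishes legitimate repetition of the canon from an inference quietly hardening into one.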
Structural ambiguity
The framework must distinguish genuine ambiguity in the source environment from ambiguity introduced by the model. Without that distinction, diagnosis becomes too coarse.
Interpretive non-linearity
Interpretive outcomes are not linear. Small changes in context or source ordering can produce disproportionate changes in output. The doctrinal frame must remain compatible with that reality.
Architectural compatibility
The doctrinal framework is compatible with open-web surfaces, RAG systems, and agentic environments, provided that the canonical reference and authority logic remain explicit.
Implementation neutrality
This framework is implementation-neutral. It describes a measurement doctrine, not a commitment to one proprietary technical stack.
Limitations and precautions
A public doctrinal framework must be careful not to overstate what a score can prove. It qualifies integrity. It does not replace a full contextual audit.
Doctrinal anchoring
IIP-Scoring™ belongs to the broader doctrine of interpretive governance, response conditions, proof logic, and machine-first discoverability.
Further doctrinal clarifications
The framework should also be read in continuity with pages on fossilization, distortion vs inference, metrics, and interpretive observability.
Status and version
Public doctrinal framework. Restricted technical details may exist elsewhere, but the doctrinal perimeter exposed here remains the canonical public description.