When AI “understands” a site, it is not reading one page in isolation; it is computing an entity from repeated attributes, relations, boundaries, and authority signals gathered across the site's whole content graph.

What the phenomenon looks like

The page remains important, but only as one surface among many. What the model finally reconstructs is closer to an entity state than to a page summary: a set of stable assumptions about what the thing is, what it can do, who can speak for it, and where its perimeter ends.

Why it happens

Generative systems need compact objects to answer efficiently. They therefore transform distributed documentation into entity-like representations that can travel across prompts, comparisons, and recommendation contexts.
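As a rough intuition for this compression, the process can be sketched as a voting scheme: attributes asserted consistently across many surfaces survive into the entity representation, while one-off or contradictory claims are dropped. This is a toy model, not how any production system actually works; all names and the support threshold are illustrative assumptions.

```python
from collections import Counter, defaultdict

def reconstruct_entity(pages, min_support=2):
    """Toy sketch: build a compact entity from repeated attribute claims.

    Each page is a dict of attribute claims. An attribute survives
    compression only if the same value is asserted on at least
    `min_support` surfaces (hypothetical threshold).
    """
    votes = defaultdict(Counter)
    for page in pages:
        for attr, value in page.items():
            votes[attr][value] += 1
    entity = {}
    for attr, counter in votes.items():
        value, count = counter.most_common(1)[0]
        if count >= min_support:  # repeated claim survives compression
            entity[attr] = value
    return entity

pages = [
    {"type": "payments API", "region": "EU", "support": "24/7"},
    {"type": "payments API", "region": "EU"},
    {"type": "analytics suite"},  # inconsistent one-off claim, dropped
]
print(reconstruct_entity(pages))
# → {'type': 'payments API', 'region': 'EU'}
```

Note how the single-page claims ("24/7 support", "analytics suite") never reach the entity: this is the sense in which distributed documentation is transformed into a compact object that can travel across prompts and comparisons.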

Why it matters

If the entity computation is wrong, improving one page may not be enough. The organization has to govern the graph of attributes and relations from which that entity is being synthesized.

What must be governed

  • Design pages as contributors to an entity model, not as isolated editorial assets.
  • Stabilize the attributes that must survive compression across the whole graph.
  • Measure success at the level of reconstructed entity fidelity, not only page performance.