Source: Whitepaper v5 Section 05.5, Risk Register CON 03.
Problem: Gravity turns the system into a confirmation machine by Year 3: consistency → gravity → retrieval → reinforcement. Novel insights are, by definition, low-consistency and are systematically suppressed.
Proposed: novelty_score (Layer 2, float 0.0–1.0). High when: low edge_count (isolated principle), recent date_added, low cosine similarity to the nearest cluster centroid. Dual retrieval modes: exploitation (gravity-weighted, for decisions) vs. exploration (novelty-weighted, for research). Weekly auto-exploration sweep: surface the 10 highest-novelty principles per collection.
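A minimal sketch of how the three signals could combine into a single novelty_score. Everything beyond the spec's named fields (edge_count, date_added, the centroid comparison) is an assumption: the cap of 20 edges, the 90-day recency half-life, and the equal weighting are placeholders, not decided parameters.

```python
import math
from datetime import datetime, timezone

def cosine_sim(a, b):
    """Plain cosine similarity over two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def novelty_score(edge_count, date_added, embedding, centroids,
                  now=None, max_edges=20, half_life_days=90.0):
    """Combine isolation, recency, and centroid distance into [0, 1].

    max_edges and half_life_days are hypothetical tuning knobs,
    not values from the spec.
    """
    # Isolation: fewer edges -> higher score, capped at max_edges.
    isolation = 1.0 - min(edge_count, max_edges) / max_edges
    # Recency: exponential decay of score with node age.
    now = now or datetime.now(timezone.utc)
    age_days = max((now - date_added).total_seconds() / 86400.0, 0.0)
    recency = 0.5 ** (age_days / half_life_days)
    # Distance: 1 minus similarity to the nearest cluster centroid.
    nearest_sim = max((cosine_sim(embedding, c) for c in centroids),
                      default=0.0)
    distance = 1.0 - max(nearest_sim, 0.0)
    # Equal weights are a placeholder; tune per collection.
    return (isolation + recency + distance) / 3.0
```

Exploitation vs. exploration retrieval then reduces to which score ranks the candidate set: gravity for decisions, novelty_score for research sweeps.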
Research: How do we compute novelty efficiently at 100K nodes? Nearest-cluster distance requires cluster centroids to be updated on each ingestion. Batch-update nightly? Or approximate with edge_count as a proxy?
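One alternative to the nightly batch job raised above: maintain each centroid as a running mean, updated in O(dim) on every ingestion, so nearest-cluster distance is always current. This is a sketch of that option only; the class name and API are hypothetical, and it does not address re-clustering when cluster membership itself drifts.

```python
class ClusterCentroid:
    """Running-mean centroid, updated incrementally per ingestion
    instead of recomputed in a nightly batch (hypothetical design)."""

    def __init__(self, dim):
        self.n = 0              # number of member embeddings seen
        self.vec = [0.0] * dim  # current mean vector

    def add(self, embedding):
        # Incremental mean: c_new = c + (x - c) / (n + 1).
        self.n += 1
        self.vec = [c + (x - c) / self.n
                    for c, x in zip(self.vec, embedding)]
```

If even this is too costly at 100K nodes, the edge_count proxy from the note avoids vector math entirely, at the price of conflating "isolated" with "novel".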
v3 2026-04-05 Q Mapped to whitepaper sections
v2 2026-04-05 Q Imported SPEC-038 from model_specifications_v2.html
v1 2026-04-05 Q Created spec: SPEC-038: Novelty Score