Topological Cognition
We model intelligence as a topological invariant of high-dimensional manifolds — a property that persists through deformation, the way a monopole's charge persists through any closed surface.
FL.01
A research laboratory studying intelligence as a fundamental field
— rendered visible through the geometry of its own forces.
Six divergent inquiries leave the singularity. Each is a research line — a path through which the laboratory studies how thought, matter, and computation interact with the underlying field.
FL.01
A class of probabilistic models inspired by particles exceeding light speed in a medium — emitting structured radiation that we treat as latent signal in noisy observation.
FL.02
Long-range associative memory as a continuum field. We derive its dynamics from variational principles — recall is a geodesic across attention's curvature.
FL.03
Generative architectures whose internal trajectories — like charged particles in a vapor — leave traces revealing the hidden geometry of their conditioning.
FL.04
Decomposing model behavior into eigenmodes of the loss landscape. Aligning a system means tuning its emission spectrum to match a desired distribution of outcomes.
FL.05
From a single hypothesis — that intelligent agency is quantized in discrete commitments — we recover constraints on agent multiplicity and their information-theoretic charges.
FL.06
Excerpts from internal preprints. Margin notes are unreviewed; they remain as they were written, in the corner of the page, in graphite.
We treat each token of a context window as a test charge and derive the resulting potential surface analytically. Models trained against a low-rank approximation of this surface generalize to held-out distributions of structured reasoning tasks.
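The idea above can be sketched numerically: treat each token embedding as a unit test charge, evaluate an inverse-distance potential on a set of query points, and obtain the low-rank surface by truncated SVD of the query-token interaction matrix. Everything here — the function names, the Coulomb-like kernel, the softening constant — is an illustrative assumption, not the preprint's actual derivation.

```python
import numpy as np

def potential_surface(token_embs, grid, eps=1e-3):
    # Each token embedding acts as a unit test charge; the potential at a
    # grid point is the sum of softened inverse-distance contributions.
    d = np.linalg.norm(grid[:, None, :] - token_embs[None, :, :], axis=-1)
    return (1.0 / (d + eps)).sum(axis=1)

def low_rank_surface(token_embs, grid, rank=4, eps=1e-3):
    # Low-rank approximation: truncated SVD of the grid-token
    # interaction matrix K, then sum the reconstructed charges.
    d = np.linalg.norm(grid[:, None, :] - token_embs[None, :, :], axis=-1)
    K = 1.0 / (d + eps)
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    K_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return K_r.sum(axis=1)
```

With rank equal to the number of tokens the approximation is exact; smaller ranks trade accuracy for the smoother surface the excerpt trains against.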
Each expert in a sparse MoE network carries a winding number describing how its routing region wraps the input manifold. Networks with a non-zero net charge exhibit measurably more stable behavior under distributional shift.
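In two dimensions, a charge of this kind reduces to the standard winding number of a closed curve — how many signed times an expert's routing boundary wraps a reference point. The sketch below computes that discrete invariant by accumulating wrapped angle increments; it is a generic construction, not the preprint's definition over the full input manifold.

```python
import numpy as np

def winding_number(loop, point=(0.0, 0.0)):
    # Winding number of a closed 2D polyline around `point`:
    # accumulate angle increments, wrapped to (-pi, pi], and
    # divide by 2*pi. Integer-valued when the loop avoids the point.
    v = np.asarray(loop, dtype=float) - np.asarray(point)
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.concatenate([ang, ang[:1]]))
    d = (d + np.pi) % (2 * np.pi) - np.pi  # wrap each increment
    return int(round(d.sum() / (2 * np.pi)))
```

A boundary that encloses the point once gives +1, its reversal -1, and a boundary that does not enclose it gives 0 — the "net charge" of the excerpt is the sum of such integers over experts.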
A short observation. Reverse-mode autodiff requires a metric on activation space; the choice of metric is the choice of an arrow of time. Most metrics in current use are isotropic. We do not believe time is isotropic.
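The metric-dependence is easy to exhibit: under a metric G on activation space, the steepest-descent direction is G⁻¹ applied to the Euclidean gradient, so an isotropic metric leaves the gradient unchanged while an anisotropic one rescales the flow axis by axis. A minimal sketch, with illustrative names:

```python
import numpy as np

def metric_gradient(euclidean_grad, G):
    # Steepest-descent direction under metric G: the gradient one
    # obtains from reverse-mode autodiff is the Euclidean (isotropic)
    # one; the metric reshapes it via G^{-1}.
    return np.linalg.solve(G, euclidean_grad)

g = np.array([1.0, 1.0])
iso = metric_gradient(g, np.eye(2))              # identity metric: unchanged
aniso = metric_gradient(g, np.diag([1.0, 4.0]))  # anisotropic: per-axis rescaling
```

The choice of G is exactly the degree of freedom the note gestures at: two trainers with the same loss but different metrics follow different trajectories through activation space.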
A decoding strategy in which only tokens whose log-probability exceeds a phase-velocity threshold contribute to the emitted distribution. Output reads as more deliberate at fixed entropy.
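Under the assumption that the "phase-velocity threshold" acts as a fixed log-probability cutoff, the rule can be sketched as follows; the always-keep-the-argmax guard is our addition so the distribution is never empty, not something the excerpt specifies.

```python
import numpy as np

def thresholded_distribution(logits, log_threshold):
    # Keep only tokens whose log-probability exceeds the cutoff,
    # zero out the rest, and renormalize. The argmax token is
    # always retained so at least one token survives.
    logp = logits - np.logaddexp.reduce(logits)  # log-softmax
    mask = logp > log_threshold
    mask[np.argmax(logp)] = True
    p = np.where(mask, np.exp(logp), 0.0)
    return p / p.sum()
```

Because probability mass from the excluded tail is redistributed over the survivors, samples concentrate on high-confidence tokens, which is one plausible reading of "more deliberate at fixed entropy."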
A formalism in which alignment failures are characterized by their detection threshold rather than their magnitude. Many small, individually undetectable misalignments compose; we provide a bound on the worst-case composition.
The lines converge again at infinity.
Whatever we have learned, the field continues outward.