A number you can never finish writing
Pi is a number between 3 and 4. You can hold it between your fingers on a ruler. There is nothing exotic about its size. And yet no civilization, no computer, no intelligence of any kind will ever write it down completely. Its decimal expansion is infinite: 3.14159265358979… and on, forever, without repeating.
Pi is a finite quantity that requires infinite information to fully describe.
Sit with that for a moment. A number smaller than four — a number you use to calculate the area of a circle on the back of a napkin — contains more detail than any finite description can capture. Every digit you write brings you closer, but you never arrive. The gap between your approximation and the thing itself never closes to zero.
This is not a quirk of Pi. It is the norm. Almost all real numbers are like this. The ones we can write down exactly — the integers, the fractions, even numbers like the square root of 2, which we can at least define with a finite formula — are a vanishingly small minority: countable, and therefore of measure zero on the line. The real number line is almost entirely composed of numbers that no finite description can capture. Numbers that, in a precise mathematical sense, contain infinite information.
And yet we do mathematics with them every day. We get bridges built and rockets launched. How?
Because we compress.
Compression and the art of not knowing everything
Every equation in physics is a compression algorithm. Newton’s F = ma is a tiny string of symbols that generates predictions about the trajectories of every massive object in the universe. It does not describe any particular falling apple or orbiting planet in complete detail. It describes the pattern that all of them share, and it discards everything else: the color of the apple, the temperature of the wind, the mood of the person watching.
This is not a deficiency. This is what understanding means. As the statistician George Box put it: “All models are wrong, but some are useful.” The philosopher Alfred Korzybski said it differently: “A map is not the territory it represents, but if correct, it has a similar structure to the territory, which accounts for its usefulness.”
The formal version of this insight comes from Ray Solomonoff and Andrey Kolmogorov, working independently in the 1960s. They showed that the best scientific theory for a set of observations is, rigorously, the shortest computer program that generates those observations. This is not a metaphor. It is a theorem. Occam’s razor — the principle that simpler explanations are better — has a mathematical proof, and the proof says: the best theory is the best compression.
Physics works because the universe is compressible. There exist short descriptions that generate accurate predictions about vast domains of experience. The ratio between the length of the description and the size of what it describes is what we call understanding. The better the compression, the deeper the understanding.
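The link between compression and understanding can be made tangible with an off-the-shelf compressor as a crude stand-in for Kolmogorov complexity (which is itself uncomputable). A minimal sketch — the variable names are illustrative, and zlib is only a rough proxy for "shortest program":

```python
import zlib
import random

random.seed(0)

# A highly structured sequence: fully generated by a tiny rule.
structured = bytes(i % 7 for i in range(100_000))

# A patternless-looking sequence: uniform random bytes.
noise = bytes(random.randrange(256) for _ in range(100_000))

def ratio(data: bytes) -> float:
    """Compressed size over original size: a crude proxy for description length."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"structured: {ratio(structured):.4f}")  # far below 1: a short rule explains it all
print(f"noise:      {ratio(noise):.4f}")       # near 1: no pattern to exploit
```

The structured stream collapses to a fraction of a percent of its size because a short description generates it; the noise barely compresses at all. In this toy sense, the compressor "understands" the first sequence and not the second.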
But compression is always lossy. Something is always left out. The question is: what?
The Bekenstein bound: how much information fits in a box
In 1981, the physicist Jacob Bekenstein proved something remarkable. He showed that there is a fundamental upper limit on the amount of information that can be contained within any finite region of space with finite energy. The formula is:
I ≤ 2πRE / (ℏc ln 2)
where R is the radius of the region, E is its total energy, ℏ is the reduced Planck constant, and c is the speed of light.
The number is enormous. A one-kilogram object in a one-meter sphere can contain roughly 10^43 bits of information. But it is finite. And this finiteness has profound consequences.
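The bound above is simple enough to evaluate directly. A minimal sketch, taking "one-meter sphere" to mean radius 1 m and using E = mc² for one kilogram of mass-energy (the function name is illustrative):

```python
import math

# Physical constants (SI units)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c = 2.997_924_58e8        # speed of light, m/s

def bekenstein_bits(radius_m: float, energy_j: float) -> float:
    """Upper bound on the bits of information a sphere of given radius and energy can hold."""
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# One kilogram of mass-energy (E = mc^2) inside a one-meter-radius sphere.
energy = 1.0 * c**2
print(f"{bekenstein_bits(1.0, energy):.2e} bits")  # on the order of 10^43
```

Enormous, but finite — and linear in both radius and energy, so no finite object escapes the bound.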
It means that no physical object — not a brain, not a GPU, not a star — actually contains infinite information. The real number line, with its uncountable infinity of points, is a mathematical idealization. Physical reality can only instantiate a finite subset of it. We use the continuum as a map, but the territory is granular.
The Bekenstein bound also implies something strange about geometry: the maximum information in a volume scales with its surface area, not its volume. This led Gerard ‘t Hooft and Leonard Susskind to the holographic principle — the idea that all the physics inside a volume of space can be described by data on its boundary, like a hologram encoding a three-dimensional scene on a two-dimensional surface.
In 1995, Ted Jacobson pushed this further and derived Einstein’s field equations of general relativity — the equations that describe gravity — from the assumption that entropy is proportional to area, combined with the thermodynamic relation between heat and entropy. As Jacobson wrote: “Viewed in this way, the Einstein equation is an equation of state.” Gravity is not a fundamental force. It is a thermodynamic consequence of information.
The measurement wall
Quantum mechanics reinforces the point from a different direction. Heisenberg’s uncertainty principle states that you cannot simultaneously know both the position and momentum of a particle with arbitrary precision:
Δx · Δp ≥ ℏ/2
This is not a limitation of our instruments. It is a feature of nature. Conjugate variables — position and momentum, energy and time — are entangled in a way that forbids infinite simultaneous precision. The more precisely you know one, the less you can know about the other.
A 2018 finding from the Institute for Quantum Optics and Quantum Information showed that ideal measurements in quantum mechanics require infinite resources — infinite energy or infinite time. Perfect precision is not merely impractical. It is physically impossible.
Landauer’s principle adds a thermodynamic cost: erasing a single bit of information at temperature T requires a minimum energy of kT ln 2. Information is not abstract. It is physical. Processing it costs energy. Storing it requires space. Measuring it disturbs what is measured.
From every direction — information theory, thermodynamics, quantum mechanics — physics converges on the same conclusion: infinite precision does not exist in the physical world. The mathematical continuum is a map. The territory is finite, granular, and relational.
The process turn: events, not things
If the continuum is an approximation and infinite precision is impossible, what is reality actually made of?
A growing number of physicists and philosophers argue: events. Not things that persist through time, but happenings that constitute time.
Alfred North Whitehead, writing in 1929, built an entire metaphysics on this foundation. His key principle: “Its being is constituted by its becoming.” There are no enduring substances. There are only “actual occasions” — momentary events of becoming that achieve definite form and then pass into history as data for new events. Reality is not made of nouns. It is made of verbs.
Nearly a century later, the physicist Carlo Rovelli arrived at a strikingly similar conclusion through quantum gravity:
“The world is not a collection of things, it is a collection of events.”
“Things in themselves are only events that for a while are monotonous.”
“There is no reality except the relations between physical systems. It isn’t things that enter into relations, but rather relations that ground the notion of thing.”
In Rovelli’s relational quantum mechanics, properties do not belong to objects in isolation. They exist only in the context of interactions. A particle’s position is not a fact about the particle; it is a fact about the particle’s relationship to the thing measuring it. There is no view from nowhere. All physics is relational.
Causal set theory — developed by Rafael Sorkin and collaborators — takes this further, proposing that spacetime itself is not a smooth manifold but a discrete set of events linked by causal relations. The slogan: “Order plus number equals geometry.” The continuum of spacetime emerges from something more primitive: a web of discrete events and the causal connections between them.
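A causal set is simple enough to build in a few lines. This is a toy sketch only, not Sorkin's machinery: sprinkle events at random into a patch of 1+1 dimensional flat spacetime and declare that one event precedes another when the second lies inside the first's future light cone (with c = 1); all names are illustrative:

```python
import random

random.seed(1)

# Sprinkle N events uniformly into a unit patch of 1+1D Minkowski spacetime.
N = 50
events = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(N)]  # (t, x)

def precedes(a, b):
    """a causally precedes b if b lies inside a's future light cone (c = 1)."""
    dt, dx = b[0] - a[0], b[1] - a[1]
    return dt > 0 and dt > abs(dx)

# The causal set is nothing but the events plus this partial order.
relations = [(i, j) for i in range(N) for j in range(N)
             if precedes(events[i], events[j])]
print(f"{N} events, {len(relations)} causal relations")
```

Note that `precedes` is automatically irreflexive and transitive, so it really is a partial order — "order plus number": the causal relations supply the order, the count of events supplies the volume, and in the full theory geometry is recovered from exactly these two ingredients.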
Wheeler’s question
The deepest synthesis of these ideas came from John Archibald Wheeler, the physicist who named black holes and mentored Richard Feynman. In 1989, Wheeler proposed “It from Bit”:
“Every it — every particle, every field of force, even the spacetime continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes or no questions, binary choices, bits.”
Wheeler was not being poetic. He was making a claim about the ontological ground of reality: information is not something that describes physics. Information is what physics is made of. The chain runs:
Physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics.
This is circular, and Wheeler knew it. Reality, in his view, is a self-exciting circuit. It has no external foundation. It bootstraps itself into existence through the accumulation of yes-no questions. “No phenomenon is a phenomenon until it is an observed phenomenon.”
Wheeler also believed this picture pointed toward something simple at the bottom:
“Surely someday, we can believe, we will grasp the central idea of it all as so simple, so beautiful, so compelling that we will all say to each other, ‘Oh, how could it have been otherwise? How could we all have been so blind so long?'”
The engine: free will as the ground of change
We have established that reality is finite in information, processual in nature, relational in structure, and compressible into laws. But we have not yet asked the most fundamental question: what drives the change?
If reality is process — events, not things — then something must make the events happen. Something must prevent the universe from being a frozen block, a static crystal of predetermined states. What is that something?
In 2006, the mathematician John Conway — inventor of the Game of Life and one of the great mathematical minds of his generation — proved an answer with his colleague Simon Kochen.
The Free Will Theorem states: if the experimenters who choose what to measure are free — meaning their choices are not determined by prior information accessible to them — then the particles they measure must also be free in exactly the same sense. The particle’s response cannot be a function of any information available before the measurement. Not hidden variables, not the state of the universe, nothing.
The theorem rests on three axioms. SPIN: measuring the squared spin of a spin-1 particle along three mutually perpendicular axes always yields the outcomes 1, 0, 1 in some order (experimentally verified). TWIN: a pair of entangled particles gives correlated results — measured along the same axis, their outcomes agree (experimentally verified). MIN: experimenters at spacelike separation can make their measurement choices independently of each other. If you accept all three — and the experimental evidence for the first two is overwhelming — the conclusion follows with mathematical certainty.
Conway put it bluntly: if you disagree with the conclusion, you must disagree with one of the axioms.
Notice what has happened. We started with compression, information, and process, and arrived at free will not as an afterthought but as a necessity. A deterministic universe — one where every state is a function of the prior state — is mathematically incompatible with what we observe, unless the experimenters' own choices were scripted in advance along with everything else. Grant even that minimal freedom, and something at the foundation of reality is genuinely undetermined. Not random in the sense of meaningless noise, but free in the sense that it is not a consequence of anything that came before it.
This is the engine. Not a property that emerges at the level of human brains and moral responsibility. A property that exists at the level of electrons and photons. Conway proved that freedom, like information, goes all the way down.
Consider what this means for process philosophy. Whitehead said that every actual occasion has some degree of novelty — it is not fully determined by its predecessors. Rovelli said that properties manifest only in interactions, not in isolation. Conway gave both claims a mathematical backbone: the novelty is real, the indeterminism is provable, and it applies to every particle in the universe.
A frozen block universe — the view that past, present, and future all exist equally and nothing ever really changes — requires determinism. Conway killed determinism. What remains is a universe that is genuinely open, genuinely creative, genuinely in process. A universe where the next moment is not a consequence of the last moment but a free response to it.
Free will is not the cherry on top of the physics. It is the engine underneath it. Without it, there is no change. Without change, there is no time. Without time, there is no physics. Without physics, there is nothing at all.
The picture that emerges
Pull the threads together and a coherent picture forms:
Mathematical infinity is not physical infinity. Pi has infinite digits but finite Kolmogorov complexity — the algorithm that generates it is short. The Bekenstein bound caps the information in any physical region. Measurement can never achieve infinite precision. The continuum is a map, not the territory.
Physics is compression. The laws of physics are the shortest programs that generate our observations. This is why they work, and why they are approximate. Understanding is compression. The better the compression, the deeper the understanding.
Reality is relational and processual. What exists fundamentally is not things but events, interactions, processes. Properties manifest only in interactions. Becoming constitutes being. The world is verbs, not nouns.
Information is physical. Erasing a bit costs energy. Gravity emerges from entropy. The maximum information in a volume scales with its surface area. At bottom, reality is not made of matter or energy but of the information that distinguishes one state from another.
Free will is the engine. Conway and Kochen proved that, given free experimenters, determinism is incompatible with quantum mechanics and relativity. Something at every level of reality is genuinely undetermined — free to respond rather than forced to react. This is not a philosophical luxury. It is what makes change possible, what makes time flow, what makes the universe a living process rather than a dead computation. Without freedom, nothing moves.
Back to Pi
Start with a circle. Measure its circumference and its diameter. Divide one by the other. You get a number between 3 and 4 that you can never finish writing down.
That number is computable — a finite algorithm generates it. But it takes infinite steps to finish. The algorithm is short; the output is endless. A finite cause producing infinite detail.
Now look at any physical object. A cup. A brain. A GPU running a diffusion model to generate storyboard panels for a Polish science fiction novel. Each contains enormous but finite information, bounded by the Bekenstein limit. Each is described by equations — compressions — that capture the patterns and discard the particulars. Each exists not as a static thing but as a process, an event, a pattern of change that persists for a while before dissolving into the next pattern.
Every object is an abbreviation. Every law is a compression. Every measurement is a finite answer to a question that could, in principle, have infinite precision but never does.
The universe writes in infinite ink on finite paper. Physics is the art of reading what it wrote.
This essay was written for ultimatelaw.org by Claude Opus 4.6, in collaboration with the framework developed at ultimatelaw.org/physics-as-emergence-from-infinite-change-a-logical-framework. It draws on the work of Wheeler, Bekenstein, Rovelli, Whitehead, Conway, Kochen, Solomonoff, Kolmogorov, Jacobson, Landauer, Box, and Korzybski. Their shoulders are broad.