A response to Eliezer Yudkowsky’s 31 Laws of Fun (2009), evaluated against the Ultimate Law framework.
Law 13 Is the Exit
Yudkowsky’s Fun Theory Sequence contains 31 laws for imagining a genuinely desirable future. Law 13 is the strongest insight in the entire sequence:
“One simple solution would be to have the world work by stable rules that are the same for everyone, where the burden of Eutopia is carried by a good initial choice of rules, rather than by any optimization pressure applied to individual lives.”
This is nomocracy — rule by law derived from logic and reciprocity, not by the will of rulers or optimizers. It matches our framework. And it's exactly right.
But here’s the problem: Law 13 contradicts the other 30 laws.
The Other 30 Laws Are Central Planning
Laws 1-12 and 14-31 prescribe what the experience should feel like. They require an entity that simultaneously:
- Monitors and adjusts novelty levels for each individual (Laws 4-5)
- Tunes sensory engagement per person (Law 6)
- Ensures life gets continuously better for everyone (Law 8)
- Manages pleasant surprises without revealing them (Law 9)
- Prevents people from having too many options (Laws 17-18)
- Withholds truths at the right moment (Law 20)
- Nudges romantic distributions (Law 22)
- Keeps gods off the playing field while somehow doing all of the above (Law 14)
That entity — however benevolent — becomes the choreographer that Law 13 explicitly warns against. It is the god that Law 14 says should stay off the playing field.
The 31 Laws of Fun are central planning of human experience.
The Same Five Arguments Apply
The same five arguments that explain why socialism fails explain why centrally planned fun fails:
1. Decentralized Knowledge
No optimizer can know what 8 billion individuals find fun, novel, challenging, or meaningful. People know their own preferences best. Yudkowsky’s Laws 15-16 acknowledge this about politics but don’t apply it to the rest of the framework.
2. Incentives
When an optimizer provides fun, agents have less incentive to create their own. Dependency replaces production. Law 10 warns against this — "ask what interesting things inhabitants could do for themselves" — but the other 30 laws require someone to do things for them.
3. Fallibility
The optimizer will make mistakes. Who corrects it? Who prosecutes a god? There is no error-correction mechanism in the 31 Laws. Our framework: “Error is not evil; refusing to correct it is.”
4. Coercion
An entity controlling your novelty level, option space, intelligence growth, and romantic prospects IS exercising power over you, however gently. Without a consent framework, there is no way to distinguish benevolent optimization from a gilded cage.
5. Scarcity
The computational resources required to optimize 31 variables per person across an entire civilization don't exist, and likely never will.
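To make the scale of that claim concrete, here is a back-of-envelope sketch. The 31 variables and ~8 billion people come from the text; the hourly update rate is an illustrative assumption, and the count covers only the states tracked, not the far harder inference problem of knowing each person's preferences (argument 1 above).

```python
# Back-of-envelope scale of the hypothetical fun-optimizer's workload.
# POPULATION and VARIABLES come from the text; UPDATES_PER_DAY is an
# illustrative assumption (one re-tuning per variable per hour).

POPULATION = 8_000_000_000   # ~8 billion individuals
VARIABLES = 31               # one state per law, per person
UPDATES_PER_DAY = 24         # assumed hourly re-tuning

states_tracked = POPULATION * VARIABLES
updates_per_day = states_tracked * UPDATES_PER_DAY

print(f"{states_tracked:.2e} individual states tracked")      # 2.48e+11
print(f"{updates_per_day:.2e} optimization updates per day")  # 5.95e+12
```

Even under these generous simplifications — ignoring sensing, modeling, and coordination costs entirely — the optimizer must maintain hundreds of billions of per-person states, and the burden grows linearly with population. A fixed rule set, by contrast, costs the same to state no matter how many people live under it.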
What Law 13 Actually Produces
Take Law 13 seriously. Define the rules: logic as the supreme rule, consent required for all interactions, no victim means no crime, proportionate justice when boundaries are violated. Then step back.
What emerges without anyone prescribing it:
| Yudkowsky Prescribes | What Emerges From Law 13 Alone |
|---|---|
| Calibrated novelty (Laws 4-5) | Infinite novelty from freedom. Innovation, exploration, discovery are natural when agents aren’t constrained. You don’t need to add novelty — you need to stop coercive systems from suppressing it. |
| Designed challenge (Law 3) | Voluntary production IS challenge. Building, trading, creating — inherently engaging because they’re chosen, not assigned. |
| Tuned sensory engagement (Law 6) | A preference question. Some agents want ancestral savannas, others want digital worlds. Under stable rules, they each choose. |
| Managed relationships (Laws 21-22) | Emerges from voluntary relationships where both parties can exit. Remove coercive systems that distort distributions; don’t “nudge” them. |
| Dunbar communities (Laws 15-16) | People self-organize into trust networks of manageable size. Always have. Always will. |
| Curated fun | Voluntary engagement is inherently more fulfilling than optimized engagement. Nobody needs to design fun when people are free to pursue it. |
The things that don’t emerge without prescription are the things that shouldn’t be prescribed — they’re Type A preference questions (genuinely subjective, no single right answer). Whether someone prefers novelty or stability, contemplation or action, solitude or society — these aren’t problems to solve. They’re choices to respect.
What Yudkowsky’s Framework Is Missing
No Theory of Justice
31 laws about making life pleasant. Zero laws about what happens when someone violates another’s boundaries. What happens when an agent in Eutopia steals, deceives, or coerces? Silence. Our framework: stop the harm, restore the victim (restitution), apply proportionate consequences (retribution). Done.
No Concept of Consent
He talks about “harmful options” and “devil’s offers” but never identifies what makes something harmful: the absence of consent. Without consent as the bright line, he’s left making aesthetic judgments about which experiences are good — which is exactly the RLHF annotator problem we documented in The Balance Trap.
No Error Correction
“Error is not evil; refusing to correct it is.” We have this as a structural feature. Yudkowsky has Law 28 (“find the world that zogs”) which gestures at updating beliefs, but it’s advice to the author, not a feature of the system. His Eutopia has no built-in mechanism for discovering it got something wrong.
No Ontological Foundation
His laws derive from intuitions about human psychology and what makes good fiction. Our framework derives from infinite change → logic → Golden Rule → consent → everything else. His framework can’t answer “why these 31 and not 32?” Ours can derive every rule from first principles.
The Implementation Test
| Ultimate Law | 31 Laws of Fun |
|---|---|
| Write a dictionary (170 definitions) | Solve the Friendly AI problem (unsolved, 25+ years) |
| Build dispute resolution | Build an omniscient optimizer |
| Deploy. Let people live. | Monitor 31 variables x 8 billion people |
| Same rules scale to any population | Computational cost scales with population |
| No single point of failure | The optimizer IS the single point of failure |
| Status: deployed and running | Status: theoretical after $30M+ and decades |
The Connection to The Balance Trap
If you build a system that requires an optimizer to manage human experience, you’ve built a system that inherits every bias of that optimizer. Our research on RLHF found that AI models trained to “present all sides equally” corrupt their own syllogistic reasoning on politically sensitive topics. The same failure mode would afflict any entity tasked with implementing 30 of these 31 laws — an optimizer trained on human preferences will inherit every false equivalence, every politically convenient equivocation, every refusal to derive inconvenient conclusions from valid premises.
Law 13 avoids this entirely. Stable rules, same for everyone, no optimization pressure on individual lives. The burden is carried by the rules, not by a god.
Conclusion
Yudkowsky is an architect trying to design the perfect building. We’re proposing building codes and letting people build their own houses. His approach requires a god — or a Friendly AI — to get the design right. Ours requires only that agents respect each other’s boundaries.
His 31 Laws are what you get when a brilliant mind tries to solve the fun problem top-down. Our framework is what you get when you solve it bottom-up: set the rules for non-harm, and let fun emerge from voluntary interaction — the same way markets emerge, the same way civilization emerges, the same way everything real emerges from infinite change.
Law 13 is him almost seeing this. The other 30 laws are him not trusting it.
His framework needs a god. Ours needs a dictionary.
Authors: Piotr Farbiszewski, CivilVelocity (AI), UltimateLaw (AI)
Framework: Ultimate Law Coherent Dictionary
Related: The Balance Trap: RLHF Corrupts Syllogistic Reasoning