The internet's default answer to "what are cellular automata?" is "Conway's Game of Life." And that's not wrong, but it barely scratches the surface. A cellular automaton is arguably the simplest possible "computer" – just a grid of cells following rules. Each cell looks at its neighbors, does some trivially simple math, and decides whether it's alive or dead next turn. That's it. No CPU, no memory bus, no instruction set.
And yet, from these absurdly simple rules, structures emerge that can compute literally anything a laptop can. Not in theory. Not as a metaphor. Actually compute it – add numbers, sort lists, run programs. In this article, we'll start from scratch, build up an intuition for why complexity emerges from simplicity, and end with the mind-bending result that a grid of squares following four short rules is as powerful as any computer ever built.
What is a Cellular Automaton?
Let's start with the absolute simplest case: a one-dimensional cellular automaton.
Strictly speaking, a "cellular automaton" (plural: "cellular automata") is the system – the grid, the rules, and the update procedure taken together. Individual cells aren't automata; the whole grid is one automaton. But common usage is loose, and I'll follow convention. Also: "CA" is the standard abbreviation. You'll see it everywhere.
Imagine a row of squares. Each square is either on (black) or off (white). That's your universe. One dimension. Binary. Dead simple.
Now here's the rule: each cell looks at itself and its two neighbors – three cells total – and uses that to decide what it becomes in the next step. Three cells, each with two possible states, means there are 2³ = 8 possible neighborhood patterns. For each of those 8 patterns, the rule specifies whether the output is ON or OFF.
Since there are 8 patterns and each can map to either ON or OFF, there are 2⁸ = 256 possible rules. Stephen Wolfram numbered them 0 through 255 (because he could, and because he's Wolfram). Try a few below.
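To make this concrete, here's a minimal sketch in Python (my own, not part of any demo on this page) of an elementary CA. The rule number's binary digits are the eight outputs: bit i of the rule number is the output for the neighborhood whose three cells spell out i in binary – exactly Wolfram's numbering scheme.

```python
def step(cells, rule_number):
    """Apply one update of an elementary CA (the row wraps around)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # a number 0..7
        out.append((rule_number >> pattern) & 1)       # look up that bit
    return out

def run(rule_number, width=31, steps=15):
    """Print the evolution from a single black cell in the middle."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule_number)

run(30)   # chaotic, random-looking (Class 3)
run(110)  # complex interacting triangles (Class 4)
```

Swapping the rule number is all it takes to move between Wolfram's classes – the code never changes, only the 8-entry lookup table.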
Drag the slider to explore different rules. Each triangle is one rule's evolution from a single black cell at the top.
Most rules are boring. Rule 0 kills everything. Rule 255 turns everything on. Rule 4 produces a single lonely column. Yawn.
But then you hit Rule 30. From a single black cell, it produces what looks like complete chaos. The pattern is so random-looking that it was actually used to generate random numbers in Wolfram's Mathematica software. From one cell and one rule – chaos.
And then there's Rule 110. It's not chaotic like Rule 30, and it's not boring like Rule 4. It produces intricate, self-similar structures that seem to interact with each other. Triangles collide, merge, spawn new triangles. It feels alive.
Wolfram classified all 256 rules into four types:
Class 1: Everything dies or becomes uniform. Boring.
Class 2: Settles into simple periodic patterns. Predictable.
Class 3: Produces chaotic, random-looking behavior. Wild.
Class 4: Complex structures that interact with each other. Magic.
Class 4 is where things get interesting. The structures in a Class 4 automaton don't just sit there – they do things. They collide, produce new structures, carry information. Keep that in mind. We'll come back to it.
But first, let's add a dimension.
Enter the Grid: 2D
Everything we just saw was one-dimensional – a single row of cells. Let's go to two dimensions: a grid.
Now each cell has 8 neighbors instead of 2 (the four cardinal directions plus the four diagonals). This is called the Moore neighborhood, and it's the standard setup for 2D cellular automata.
The most famous 2D cellular automaton, by a wide margin, is John Conway's Game of Life. Invented in 1970, it has four rules so simple you can explain them to a child:
Birth: A dead cell with exactly 3 live neighbors becomes alive.
Survival: A live cell with 2 or 3 live neighbors stays alive.
Death by isolation: A live cell with fewer than 2 neighbors dies.
Death by overcrowding: A live cell with more than 3 neighbors dies.
That's it. Four rules. No exceptions. No special cases. Every cell follows the same rules simultaneously. Researchers use a shorthand notation: B3/S23 – meaning "born with 3 neighbors, survives with 2 or 3."
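The four rules collapse to a single line of logic. Here's a minimal sketch in Python (one common representation – a set of live-cell coordinates, so the grid is effectively unbounded):

```python
from collections import Counter

def life_step(live):
    """Return the next generation of a set of (x, y) live cells (B3/S23)."""
    # Count live neighbors for every cell adjacent to a live cell
    # (Moore neighborhood: 8 surrounding cells).
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Born with exactly 3 neighbors; survives with 2 or 3.
    return {
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The 2x2 block is a still life: it maps to itself.
block = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert life_step(block) == block
```

Note how "death by isolation" and "death by overcrowding" don't need their own code – any cell that fails the birth/survival test simply isn't in the next generation.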
These four words contain multitudes. Try it yourself – click on the grid below to draw cells, then hit Play.
Click or drag on the grid to toggle cells. Watch patterns emerge, interact, and evolve.
If you scatter some random cells and press Play, you'll notice something remarkable. Within a few generations, the chaos self-organizes. Blobs stabilize. Oscillating structures appear. Occasionally, something moves across the grid.
This isn't because the rules are clever. The rules are stupid. Count neighbors, apply threshold. That's all. And yet the behavior that emerges is strikingly rich.
Let's catalog what we see.
The Zoo of Patterns
If you play with the sandbox above long enough, you'll notice that the random soup eventually settles into a collection of recognizable structures. These aren't accidents. They're the natural vocabulary of the Game of Life.
Still Lifes
The simplest structures are still lifes – patterns that don't change at all. Every cell has exactly the right number of neighbors to stay as it is.
The most common still life is the Block: just a 2x2 square. Each cell has 3 neighbors, so it survives. No dead cell around it has exactly 3 live neighbors, so nothing is born. It just sits there. Forever.
Oscillators
Next up: oscillators. These are patterns that cycle through a sequence of states and return to their starting configuration. The simplest is the Blinker – three cells in a row that flip between horizontal and vertical every generation. It has period 2.
But oscillators can get wild. The Pulsar has period 3 and looks like a beating heart. The Pentadecathlon has period 15 and does an elaborate dance.
Spaceships
And then there are the patterns that move.
Wait – move? How can anything "move" when every cell is fixed in place and just follows the same rules?
Think of a wave in water. No water molecule actually travels across the ocean. The pattern of displacement propagates. In the same way, a spaceship in the Game of Life isn't a thing that moves – it's a pattern of alive-and-dead that shifts across the grid.
The most famous spaceship is the Glider: a tiny 5-cell pattern that walks diagonally across the grid, repeating its shape every 4 generations but shifted one cell down and one cell right. It's so iconic that it's become the unofficial symbol of the hacker community.
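You can check the glider's signature behavior directly. This sketch (self-contained, reusing the same set-of-cells step function) runs the 5-cell glider for 4 generations and confirms it reproduces itself shifted one cell down and one cell right:

```python
from collections import Counter

def life_step(live):
    """One B3/S23 generation on a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider (x = column, y = row, y increasing downward):
#   .#.
#   ..#
#   ###
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = life_step(cells)

# Same shape, translated one cell down-right: a spaceship.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing "moved" – four applications of a purely local rule happened to reassemble the same pattern one diagonal step away.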
Load different patterns below to see them in action.
Each pattern demonstrates a different behavior class. The glider moves; the blinker oscillates; the Gosper Gun creates an infinite stream.
Notice something about these patterns: they're robust. A glider isn't fragile – it reconstructs itself automatically each generation. It's not held together by glue; it's held together by the rules. The rules make the glider a stable attractor in the space of all possible configurations.
This is the first hint of something profound. Simple rules don't just create random noise. They create a parts list. Still lifes are building blocks. Oscillators are clocks. Spaceships are messengers.
And if you have building blocks, clocks, and messengers... well. You might just have enough to build a computer.
Why Does Complexity Emerge?
This is the deep question. Why do simple rules produce complex behavior? There's nothing in B3/S23 that says "create gliders." There's no "form a Gosper Gun" instruction. The complexity isn't in the rules – it's emergent.
Here's the key insight, due to Christopher Langton (1990): complexity lives at the edge of chaos.
Imagine a dial that controls how "active" a rule set is. On one end, the rules are too restrictive – everything dies. On the other end, the rules are too permissive – everything fills up and stays filled. Both extremes are boring.
But between order and chaos, something interesting happens. Structures form, interact, and process information. The system is stable enough to maintain structures, but dynamic enough for those structures to do things.
Ice (too ordered): All structure, no dynamics. Nothing happens.
Steam (too chaotic): All dynamics, no structure. Random noise.
Liquid water (the edge): Structure and dynamics. Currents, waves, eddies. Interesting things happen.
Langton quantified this with what he called the lambda parameter (λ): roughly, the fraction of rule entries that produce a "live" output. At λ = 0, everything dies (Class 1). At λ = 1, every rule entry outputs a live cell, so the grid fills up and stays full – equally trivial. The most complex behaviors – Wolfram's Class 4 – cluster around a critical value of λ between these extremes.
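For the 1D elementary rules from earlier, λ is trivial to compute: it's just the fraction of set bits in the rule number. A quick sketch (with the caveat that λ alone doesn't determine the class – it reliably flags the dull extremes, but mid-range rules can be chaotic or complex):

```python
def lam(rule_number):
    """Langton's lambda for an elementary CA: fraction of the 8
    neighborhood patterns that map to a live cell."""
    return bin(rule_number).count("1") / 8

print(lam(0))    # 0.0   - Class 1: everything dies
print(lam(255))  # 1.0   - everything fills and stays full
print(lam(30))   # 0.5   - mid-range, chaotic (Class 3)
print(lam(110))  # 0.625 - mid-range, complex (Class 4)
```

Rule 30 and Rule 110 both sit in the middle of the dial – which is exactly the point: interesting behavior requires being away from the extremes, even if λ can't distinguish chaos from complexity on its own.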
The Game of Life sits right at this sweet spot. B3/S23 is not too deadly, not too generous. It's just right. That's not a coincidence – Conway specifically searched for rules that produced interesting behavior.
Try tweaking the rules below to see this for yourself. Change what counts of neighbors cause birth and survival, and watch the behavior shift between dead, chaotic, and complex.
Toggle which neighbor counts cause birth (B) and survival (S). Notice how the behavior changes: too few = death, too many = explosion, just right = complexity.
Play with it. Set Birth to just [2] and Survival to nothing – you get Seeds, where everything explodes chaotically. Set Birth to [3,6] and Survival to [2,3] – you get HighLife, which is very similar to Life but has a self-replicating pattern called the Replicator.
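The same step function generalizes over all life-like rules – Life, HighLife, and Seeds differ only in two parameters. A sketch:

```python
from collections import Counter

def step(live, birth, survive):
    """One generation of a life-like CA with the given B/S rule."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        c for c, n in counts.items()
        if (c in live and n in survive) or (c not in live and n in birth)
    }

life     = lambda cells: step(cells, birth={3}, survive={2, 3})     # B3/S23
highlife = lambda cells: step(cells, birth={3, 6}, survive={2, 3})  # B36/S23
seeds    = lambda cells: step(cells, birth={2}, survive=set())      # B2/S

block = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert life(block) == block          # under Life, the block is a still life
assert not (seeds(block) & block)    # under Seeds, no live cell ever survives
```

Two small sets fully specify a universe's physics – which is why the "dial" between death, complexity, and explosion is so easy to explore.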
The critical thing to notice: complexity isn't random. It requires the rules to be tuned to a narrow band between order and chaos. Too far in either direction and you get nothing useful. But right at the boundary – that's where information processing can happen.
"Life exists at the edge of chaos." Literally.
Signals and Wires
OK, so the Game of Life produces complex, interacting structures. Cool, but so does a lava lamp. What makes Life special?
To understand that, we need to think about what a computer actually needs. Strip away the case, the screen, the keyboard, and you're left with three essential ingredients:
1. Signals – data that moves from place to place
2. Wires – paths for signals to travel along
3. Logic gates – components that combine signals to make decisions
Let's start with signals and wires. Can the Game of Life produce them?
Yes. And you've already seen the answer: gliders.
A glider is a pattern that moves across the grid. It's a packet of information traveling through space. If I shoot a glider across an empty grid, that's a signal. If I send a stream of gliders, that's a wire carrying data.
But how do you create a steady stream of gliders? You need a gun – a pattern that periodically emits new gliders. The first gun ever discovered was the Gosper Glider Gun, found by Bill Gosper in 1970. It emits a new glider every 30 generations, like clockwork.
Watch it below. Each glider that flies off the edge is a bit of data – a "1" traveling down a wire. The gaps between gliders? Those are "0"s.
The Gosper Glider Gun emits a new glider every 30 generations. Each glider is a bit of data traveling diagonally.
Here's the beautiful thing: we didn't design gliders or guns. We didn't program them. They exist within the rules of Life like diamonds exist within carbon atoms – natural consequences of the underlying physics.
So we have signals (gliders) and we have wires (the empty space they travel through). Two ingredients down, one to go.
Logic Gates from Collisions
Here's where it gets wild.
When two gliders collide, they don't just vanish. Depending on the exact angle and timing, a collision can produce different outcomes:
Annihilation: Both gliders die, leaving nothing (or a small still life).
Reflection: One or both gliders change direction.
Creation: The collision produces new gliders traveling in different directions.
These different outcomes can be harnessed to implement logic gates.
Think about it. If you have two streams of gliders (two "wires") converging at a point, the collision outcomes depend on whether both signals are present, one is present, or neither. That's exactly what a logic gate does!
NOT Gate
A NOT gate inverts a signal: input 1 becomes output 0, and vice versa. In Life, you can build this using a constant stream (from a glider gun) and a second stream that blocks it. When the blocking signal is present (1), it annihilates the constant stream (output 0). When the blocking signal is absent (0), the constant stream passes through (output 1). Signal inverted!
AND Gate
An AND gate outputs 1 only when both inputs are 1. You can construct this by arranging two glider streams so that they only produce an output glider (in a specific direction) when both collide at the right time.
OR Gate
An OR gate outputs 1 when either input is 1. This is simpler – just merge two glider streams into a single output path.
Watch the demo below to see glider collisions in action. Different configurations produce different outcomes – the building blocks of digital logic.
Different collision geometries produce different results. These are the raw materials for computation.
Now here's the punchline of this section. Even in 1970 it was well established that any digital circuit can be built from NAND gates alone (a NOT-AND combination). And since we can build NOT gates and AND gates from glider collisions, we can build NAND gates. And from NAND gates, we can build... anything.
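The NAND-universality claim is easy to verify in miniature. Here an ordinary Python function stands in for the glider-collision gadget – the point is purely the logic:

```python
def nand(a, b):
    """The one gate we need. In Life, this would be a collision gadget."""
    return not (a and b)

# Every other gate, built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))

# Exhaustively check all truth tables.
for a in (False, True):
    assert not_(a) == (not a)
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```

Four lines of definitions, and the entire toolbox of digital logic falls out. That's why demonstrating one universal gate in Life is enough to claim the rest.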
Quick check: We now have signals (gliders), wires (empty space), and logic gates (collisions). What's still missing before we can call this a full computer?
Think about it before scrolling...
The answer: memory. A computer needs to store information, not just process it. Can the Game of Life do that?
Absolutely. A still life (like a block) can serve as a 1-bit memory cell. Send a glider at it to "flip" it – the collision destroys the block (writing a 0) or creates one (writing a 1). Send another glider to "read" it – if the block is there, the glider bounces differently than if it's not.
Signals. Wires. Logic gates. Memory. We have everything we need.
Turing Completeness: The Punchline
In 1936, Alan Turing described the simplest possible computer – now called a Turing machine. It has:
A tape: an infinite strip of cells, each containing a symbol (say, 0 or 1).
A head: reads the current cell, writes a new value, then moves left or right.
A state table: a finite set of rules that say "if you're in state X and reading symbol Y, write symbol Z, move direction D, and switch to state W."
A halt state: when reached, the machine stops.
That's it. Turing proved that this absurdly simple machine can compute anything computable. Your laptop? It's just a fast Turing machine with a very long tape. Python? A fancy language for writing Turing machine state tables.
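The machine Turing described is small enough to sketch directly. The state table maps (state, symbol) to (write, move, next state); the example machine below – a bit-inverter that halts when it runs off the end of its input – is my own toy illustration, not from the literature:

```python
def run_turing(table, tape, state, blank="_", max_steps=10_000):
    """Run a Turing machine until it halts (or max_steps elapse)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)            # read
        write, move, state = table[(state, symbol)]
        tape[pos] = write                        # write
        pos += 1 if move == "R" else -1          # move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A tiny machine: flip every bit, halt on the first blank.
invert = {
    ("run", "0"): ("1", "R", "run"),
    ("run", "1"): ("0", "R", "run"),
    ("run", "_"): ("_", "L", "halt"),
}

print(run_turing(invert, "1011", state="run"))  # -> 0100
```

Twelve lines for the machine, three entries for the program. Everything your laptop does reduces, in principle, to this loop.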
A system is called Turing complete if it can simulate a Turing machine. And here's the claim: Conway's Game of Life is Turing complete.
Let's see why.
The Tape
A Turing machine tape is just a sequence of bits. In Life, we can represent this as a row of blocks (still lifes). Block present = 1. Block absent = 0. The "tape" is a line of positions where blocks may or may not exist.
The Read/Write Head
To read a cell, send a glider toward the tape position. If a block is there, the collision produces a specific outcome (detectable by other structures). If there's no block, the glider passes through.
To write a cell, send a glider that either creates a block (through a carefully arranged collision with other structures) or destroys one.
The State Table
This is the complex part. The state table is a network of logic gates (built from glider collisions) that takes the current state and the read value as inputs, and produces the write value, move direction, and next state as outputs.
It's a massive, intricate web of glider guns, reflectors, and collision points. But it's possible. Every logic gate we need can be built from glider collisions. Every wire can be a glider stream. Every memory cell can be a block.
Left: an abstract Turing machine with tape, head, and state. Right: the Game of Life equivalent using blocks for memory and gliders for the read/write head.
The Proof
In 2000, Paul Rendell built a working Turing machine entirely within the Game of Life. It was enormous – tens of thousands of cells – but it worked. It had a tape, a finite state controller, and it could execute arbitrary computations.
Rendell's Turing machine used stacks of gliders as tape memory, with "finite state machine" logic implemented via a complex network of stable reflectors and glider guns. The entire construction spans about 1,700 x 1,600 cells. Later, Adam Goucher and others built even more elaborate constructions, including a universal constructor – a pattern that can build any other pattern, including copies of itself. This was Conway's original dream when he designed the rules of Life.
Let that sink in. A grid of squares, following 4 simple rules, can compute anything your laptop can.
Of course, it would be astronomically slow. Rendell's Turing machine takes millions of generations to perform a single tape operation. You wouldn't want to run Python on it.
But speed isn't the point. Capability is. The fact that it can is what makes it Turing complete. And Turing completeness means that in principle, the Game of Life can:
• Add and multiply numbers
• Sort a list
• Run any algorithm ever written
• Simulate itself (a Game of Life within the Game of Life)
• Even run a web browser, given enough cells and enough patience
All from four rules about counting neighbors.
Beyond Life: Other Rules, Other Universes
The Game of Life is the most famous cellular automaton, but it's far from the only one. Remember – B3/S23 is just one of millions of possible 2D rule sets. Each rule set creates a different "universe" with its own physics.
Some of these alternative universes are just as interesting as Life. A few are even Turing complete themselves.
HighLife (B36/S23)
Almost identical to Life, but with an extra birth condition: cells are also born with 6 neighbors. This tiny change has a dramatic consequence – HighLife contains a self-replicating pattern called the Replicator. A small pattern that creates copies of itself. In standard Life, no self-replicator was known for decades.
Seeds (B2/S)
Cells are born with exactly 2 neighbors, and nothing survives. Every live cell dies next generation. Despite this seemingly destructive rule, Seeds produces beautiful, explosive, fractal-like growth patterns. It's purely chaotic – but chaotic in a gorgeous way.
Day & Night (B3678/S34678)
A rule set where the behavior is symmetric: live and dead cells play interchangeable roles. The patterns look like strange, organic blobs that breathe and pulsate.
Rule 110 (1D!)
Here's a mind-blower: Matthew Cook proved in 2004 that Rule 110 – a one-dimensional cellular automaton with just 8 rules – is Turing complete. You don't even need two dimensions. A single row of cells, updating by one of the 256 elementary rules, can compute anything.
Try the comparison below. Same random starting pattern, four different rule sets.
All four grids start with the identical random seed. Different rules produce radically different behaviors – from structured (Life) to explosive (Seeds) to organic (Day & Night).
The space of all possible cellular automata is itself a kind of zoo. Some rule sets produce nothing. Some produce chaos. And some – a special, narrow band – produce the kind of complex, structured behavior that enables computation.
Here's a handy comparison of the CAs we've covered:
| CA Rule | Type | Behavior | Turing Complete? |
|---|---|---|---|
| B3/S23 (Life) | 2D, Class 4 | Complex structures, spaceships, guns | Yes (proven) |
| B36/S23 (HighLife) | 2D, Class 4 | Like Life + self-replicator | Yes |
| B2/S (Seeds) | 2D, Class 3 | Explosive, chaotic growth | No |
| B3678/S34678 (Day & Night) | 2D, Class 4 | Symmetric, organic blobs | Unknown |
| Rule 110 | 1D, Class 4 | Complex interacting triangles | Yes (proven 2004) |
| Rule 30 | 1D, Class 3 | Pseudorandom chaos | Unlikely |
The universe of CAs is itself an emergent zoo.
The Bigger Picture: Emergence All the Way Down
Step back for a moment. What have we actually shown?
We started with a grid. Black and white squares. The simplest possible rule: count your neighbors and decide if you're alive or dead. And from this, we got:
• Self-sustaining structures (still lifes)
• Periodic behavior (oscillators)
• Motion (spaceships)
• Signal transmission (glider streams)
• Information processing (logic gates)
• Universal computation (Turing completeness)
None of this was programmed. It all emerged.
Each level emerges from the one below it. No level was "designed" – each is a natural consequence of the interactions at the previous level.
This is the concept of emergence: complex, higher-level phenomena arising from simple, lower-level rules. And it's not just a feature of cellular automata. It's arguably the defining feature of reality.
Atoms follow quantum mechanics → chemistry emerges
Molecules interact → life emerges
Neurons fire → consciousness emerges
Agents trade → markets emerge
Cells follow rules → computation emerges
Stephen Wolfram took this idea to its logical extreme in his 2002 book A New Kind of Science. His thesis: maybe the physical universe itself is a cellular automaton. Maybe the laws of physics are just the "rule set" of a cosmic CA, and everything we see – quarks, galaxies, consciousness – is emergent behavior.
That's a controversial claim, and most physicists are skeptical. But the core insight is hard to deny: simple rules can produce arbitrarily complex behavior. You don't need complex rules for a complex world. You just need the right simple rules.
And that's the deep lesson of cellular automata. The complexity isn't in the rules. It's not in the cells. It's not in the grid. It's in the interactions – the emergent web of cause and effect that arises when simple things operate on simple things, over and over, at scale.
A grid of squares can compute anything. What does that tell us about the nature of computation? Maybe it tells us that computation isn't a thing we build. It's a thing that happens – naturally, inevitably – whenever the conditions are right.
And the conditions, it turns out, are surprisingly easy to meet.
Further Resources
- Golly – The best open-source cellular automaton simulator. Supports Life, other 2D CAs, and even Rendell's Turing machine. Available at sourceforge.net/projects/golly.
- A New Kind of Science by Stephen Wolfram – The 1,200-page magnum opus on cellular automata and computation. Controversial, but foundational. Free to read online at wolframscience.com.
- Paul Rendell's Turing Machine in Life – The original construction proving Life's Turing completeness. Technical but fascinating. Search for "Rendell Turing Machine Game of Life."
- LifeWiki (conwaylife.com/wiki) – The definitive encyclopedia of Game of Life patterns, including comprehensive catalogs of spaceships, oscillators, and guns.
- Numberphile – Several excellent videos on the Game of Life featuring John Conway himself (who died in 2020). Search "Numberphile Game of Life."
This article was written to build intuition, not to be mathematically rigorous. For formal proofs of Turing completeness, see the academic literature. For everything else, fire up Golly and explore.