The Hidden Mathematics of Everything

Why elephants live longer than mice, cities are faster than towns, and companies always die

If I told you that by knowing just one number — say, an animal's weight — you could predict its heart rate, its lifespan, how much food it needs per day, and even how fast it walks... you'd probably think I was making it up.

But it's true. And it's not just animals.

The same hidden mathematics governs how cities grow, how companies die, and why a whale's heart beats once every 10 seconds while a shrew's beats 1,000 times a minute.

Geoffrey West and his colleagues at the Santa Fe Institute discovered something remarkable: virtually every measurable property of living things, cities, and companies follows a breathtakingly simple pattern. A power law. And the exponent is almost always a simple multiple of ¼.

In this article, we'll explore these scaling laws — but we'll go deep enough that, by the time you finish, you won't just know about scaling laws. You'll be able to use them to predict things you've never measured.

I.

The ¾ Power Law

Here's one of the most jaw-dropping graphs in all of science.

In 1932, a Swiss biologist named Max Kleiber plotted the metabolic rate (how much energy an animal burns per day) against its body mass for a range of mammals. He expected a messy scatter. What he got was a nearly perfect straight line — on a log-log plot.

That line had a slope of ¾.

This means that metabolic rate scales as body mass raised to the power of 0.75. In math-speak: B = B₀ × M^(3/4), where B is metabolic rate, M is mass, and B₀ is a constant.

Kleiber's Law: Metabolic Rate vs Body Mass

Each dot is a mammal species. On a log-log plot, they fall on a beautiful straight line with slope ¾. Kleiber expected 2/3 (surface area). He got 3/4. Nobody could explain why — for 65 years.

Now, why is this remarkable? Let's think about what you might expect.

If metabolic rate were simply proportional to mass (i.e., the exponent was 1), then an elephant weighing 10,000 times more than a mouse would need 10,000 times more food. That's the "naive" guess.

But with a ¾ exponent, the elephant only needs about 1,000 times more food. That's a 10× savings!

This is the magic of sublinear scaling: bigger organisms are more efficient. Per gram of body weight, an elephant burns far less energy than a mouse. It's as if nature gives a "bulk discount" for being large[1].
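The elephant-versus-mouse arithmetic takes two lines to check (a minimal sketch; the constant B₀ cancels in the ratio, so only the mass ratio matters):

```python
mass_ratio = 10_000                  # elephant ≈ 10,000× the mass of a mouse

naive_food = mass_ratio ** 1.0       # linear guess: 10,000× the food
kleiber_food = mass_ratio ** 0.75    # Kleiber's ¾ law: only 1,000× the food

print(round(kleiber_food), round(naive_food / kleiber_food, 1))  # → 1000 10.0
```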

Why not 2/3? 🤔

Before Kleiber, the prevailing theory was that metabolic rate should scale with surface area — which goes as M^(2/3). The logic: animals lose heat through their skin, so metabolic rate should match surface area. This was Rubner's "surface law" from 1883.

Kleiber's data clearly showed 3/4, not 2/3. The difference seems small, but when later data extended the plot across 27 orders of magnitude (from bacteria to whales), the ¾ slope was unmistakable. The surface area theory was wrong.

It took until 1997 for West, Brown, and Enquist to explain why it's 3/4. We'll get there in the next section.

And it's not just metabolic rate. Kleiber's ¾ law turned out to be the tip of a massive iceberg. Virtually every measurable biological rate and time follows a power law with an exponent that's a simple multiple of ¼:

  - Metabolic rate: M^(3/4)
  - Heart rate: M^(−1/4)
  - Lifespan: M^(1/4)
  - Time between heartbeats, time between breaths, time to reach maturity: all M^(1/4)

Always quarter-powers. Always. Across mammals, birds, fish, plants, even single-celled organisms. Over 27 orders of magnitude in mass.

So the obvious question is: why? Why quarter powers? Why not third powers, or fifth powers, or some messy decimal?

II.

Why Quarter Powers?

For 65 years after Kleiber, the ¾ exponent was an empirical mystery. Everyone could see it in the data, but nobody could derive it from first principles.

Then, in 1997, a physicist named Geoffrey West teamed up with two biologists — James Brown and Brian Enquist — and cracked it. Their explanation is one of the most elegant in all of biology.

The key insight: life is powered by networks.

Every living organism — from a tiny shrew to a blue whale — needs to deliver resources (oxygen, glucose, nutrients) to every single cell in its body. It does this through branching distribution networks: your circulatory system, your respiratory system, a tree's vascular system.

Fractal Branching: From Aorta to Capillaries

A blood vessel network branches like a fractal — each level splits into smaller and smaller vessels. The aorta (left) feeds arteries, which feed arterioles, which feed capillaries. This self-similar branching pattern is the key to understanding quarter-power scaling.

West, Brown, and Enquist (often called "WBE") realized that these networks must obey three constraints:

  1. Space-filling: The network must reach every cell in the body. No dead zones allowed — every cubic millimeter of tissue needs a capillary nearby.
  2. Invariant terminal units: The smallest units of the network (capillaries) are the same size in a mouse as in a whale. A red blood cell is a red blood cell, regardless of the animal it's in.
  3. Energy minimization: Evolution has optimized these networks to minimize the energy required to pump resources through them. Hearts don't waste energy.

Here's the beautiful part: when you work out the mathematics of a branching network that satisfies all three constraints, the ¼ exponent falls out automatically[2].

The math, briefly 🤓

In a 3D organism, the network branches hierarchically. At each level, vessels split into n smaller branches. The ratio of vessel radii between levels follows a precise geometric series.

When you combine the three constraints mathematically, you find that the branching ratio forces the total number of capillaries to scale as M^(3/4). Since each capillary serves the same volume of tissue, the metabolic rate — which depends on how many capillaries deliver oxygen — also scales as M^(3/4).

The "quarter" comes from the fact that we live in 3 spatial dimensions, and the network adds an effective fourth dimension. So the exponent is 3/(3+1) = 3/4. In a hypothetical 2D organism, it would be 2/3. In 4D, it would be 4/5.

It's not a coincidence. It's not a statistical artifact. It's geometry.


(This is loosely analogous to how Einstein's relativity adds time as a fourth dimension to space, except here the "extra dimension" is the hierarchical depth of the fractal network.)

What makes this theory so powerful is its universality. The same math applies whether the network carries blood (animals), water and nutrients (plants), or air (lungs). Any organism with a fractal distribution network — which is essentially all complex life — will follow quarter-power scaling.

And here's where it gets really fun. If this theory is right, we should be able to predict specific biological quantities from body mass alone. Let's try it.

III.

Predict an Animal's Heart Rate

Heart rate scales as M^(−1/4). This means bigger animals have slower hearts. The formula is approximately:

Heart Rate ≈ 241 × M^(−0.25) beats per minute

where M is body mass in kilograms. Try it out — slide the mass slider and watch the heart beat in real time.

[Interactive: an animated heart plus a mass slider running from 0.025 kg (mouse) to 150,000 kg (whale). At the human setting (~70 kg) the readouts show a heart rate of ~72 bpm, a lifespan of ~39 years, a metabolic rate of ~80 watts, and ~1.5 billion lifetime heartbeats.]

Slide the mass slider from mouse to whale. Notice how the heart rate plummets — but the total lifetime heartbeats barely change. Every mammal gets roughly 1.5 billion heartbeats. Spend them fast (mouse) or slow (whale).

Did you catch that? The most astonishing prediction: every mammal gets approximately the same number of heartbeats in its lifetime — about 1.5 billion.

A shrew's heart races at 1,000 beats per minute, and it lives about 2 years. A blue whale's heart beats 6 times a minute, and it lives about 70 years. Do the math: despite their vast difference in size, both land within a factor of two of the same total, on the order of 1.5 billion heartbeats.

It's as if each mammal is born with a fixed "heartbeat budget." Burn through it quickly (like a mouse), and you live fast and die young. Spend it slowly (like a whale), and you get a long, languid life[3].

Humans are cheaters 🏃‍♂️

If you run the numbers, a 70 kg mammal "should" live about 35-40 years and have a resting heart rate in the 70-80 bpm range. The heart rate is spot on — but modern humans live to 75-80! We've roughly doubled our "natural" lifespan through medicine, sanitation, nutrition, and social cooperation.

We're the only mammal to significantly beat the scaling prediction. We've essentially cheated the mathematics of biology.

This "invariant number of heartbeats" is a direct consequence of the quarter-power scaling. Heart rate goes as M^(−1/4), lifespan goes as M^(+1/4). Multiply them together: M^(−1/4) × M^(+1/4) = M^0 = constant. The mass dependence cancels out perfectly.
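You can watch the cancellation happen numerically. A minimal sketch using the approximate coefficients quoted in this article (241 for heart rate and 11.8 for lifespan; both are rough fits, not exact constants):

```python
# Quarter-power predictions from body mass alone. The coefficients 241 and
# 11.8 are the rough fits quoted in this article, not exact constants.
def heart_rate_bpm(mass_kg):
    return 241 * mass_kg ** -0.25    # heart rate scales as M^(-1/4)

def lifespan_years(mass_kg):
    return 11.8 * mass_kg ** 0.25    # lifespan scales as M^(+1/4)

MINUTES_PER_YEAR = 365.25 * 24 * 60

for name, mass_kg in [("mouse", 0.025), ("human", 70), ("blue whale", 150_000)]:
    beats = heart_rate_bpm(mass_kg) * lifespan_years(mass_kg) * MINUTES_PER_YEAR
    print(f"{name:>10}: {beats / 1e9:.2f} billion lifetime heartbeats")
# Every row prints the same ~1.50 billion: M^(-1/4) × M^(+1/4) = M^0.
```

Changing the mass by seven orders of magnitude changes the individual factors enormously, but their product never moves.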

The same trick works for breaths. Every mammal takes about 200 million breaths in its life, regardless of size.

This raises a profound philosophical question about the experience of time...

IV.

The Pace of Life

Here's a thought experiment that might rearrange your brain a little.

A mouse lives about 2-3 years. An elephant lives about 60-70 years. From our human perspective, the mouse's life seems tragically short, and the elephant's impressively long.

But what about from the mouse's perspective?

Biological Time: Same Lifetime, Different Clocks

[Figure: two clocks side by side. Mouse: ~1,000 bpm, ~2 years, ~1.5 billion heartbeats. Elephant: ~30 bpm, ~65 years, ~1.5 billion heartbeats. Same biological lifetime.]
Both clocks tick the same number of times. The mouse's clock just runs about 30× faster. From a "biological time" perspective, both creatures experience equally full lives.

All biological times scale as M^(1/4). Time between heartbeats, time between breaths, time to reach maturity, time to reproduce — they all slow down with body size in exactly the same way.

This means that from the internal perspective of an animal — measured in heartbeats or breaths rather than clock seconds — every mammal lives roughly the same length of life. A mouse processes its environment, makes decisions, and experiences its world at a much higher "frame rate" than an elephant.

Geoffrey West draws an analogy to Einstein's time dilation. In special relativity, time slows down as you approach the speed of light. In biology, time slows down as you approach the size of a whale. Size is the "speed of light" of the biological world.

Even walking speed scales with body mass — larger animals walk faster. But when you measure walking speed in "body lengths per heartbeat" rather than meters per second, all mammals walk at the same pace. Everyone is walking one biological step at a time.

This is spooky. It's as if nature is running the same software on different hardware, just clocked at different speeds[4].

Why do we "feel" like we live longer than mice? 🤔

Great question. It's because our perception of time is set by our own biological clock, not by some absolute reference. We experience 75 years at "human speed" — which feels long to us. A mouse experiences 2 years at "mouse speed" — which presumably feels just as long to the mouse.

Of course, comparing subjective experience across species is philosophically fraught. But the invariance of biological time is real and measurable.

So: organisms follow elegant, universal scaling laws rooted in the geometry of their internal networks. Everything scales. Everything is predictable. Life is mathematically beautiful.

But then West turned his attention to something that isn't alive in the biological sense — but arguably has a life of its own.

He turned his attention to cities.

And everything changed.

V.

Cities: Where Scaling Gets Weird

When Geoffrey West first started looking at city data, he expected to find the same sublinear scaling he'd found in biology. After all, cities have distribution networks too — roads, power lines, water pipes, internet cables. A bigger city is, in some sense, just a bigger organism.

He was half right. And the half that was wrong turned out to be far more interesting.

Cities do show sublinear scaling — but only for infrastructure. Roads, gas stations, power lines, water pipes: they all scale with an exponent of about 0.85. Double the population, and you only need about 85% more roads. This is economies of scale, and it's exactly what you'd expect from a network-based argument[5].

The 15% infrastructure bonus 🏗️

This is a real, measurable savings. When a city doubles in size, it doesn't need to double its infrastructure. It saves about 15% on everything — roads, pipes, gas stations, electrical cables. This is why, per capita, New York City is more resource-efficient than a small town. Living together is efficient.

Specifically, the data show infrastructure scales as Population^(0.85) — remarkably consistent across countries and cultures.

But then West looked at socioeconomic quantities — wages, patents, GDP, number of restaurants, creative professionals, crime, disease, pollution — and found something completely different.

They scale superlinearly. The exponent is about 1.15.

City Scaling: Infrastructure vs. Socioeconomic Output

[Interactive: a population slider from 10,000 (small town) to 15,000,000 (megacity).]
Double the population and watch what happens. Infrastructure grows LESS than double (saving ~15%). But socioeconomic output grows MORE than double (a ~15% bonus). The good and bad scale together — you can't have the patents without the crime.

This is staggering. Double a city's population and, per capita, you get roughly 15% more of everything socioeconomic:

  - ~15% higher wages and GDP
  - ~15% more patents and creative professionals
  - ~15% more restaurants
  - but also ~15% more crime, disease, and pollution

The good and the bad scale together. You can't cherry-pick. A city that's 15% more creative is also 15% more criminal. A city that's 15% wealthier also has 15% more AIDS cases. West calls this "the unity of the urban phenomenon."

This is fundamentally different from biology. No living organism shows superlinear scaling. In biology, bigger always means slower and more efficient. In cities, bigger means faster and more intense.

Why? Because the "network" driving city scaling isn't a physical infrastructure network — it's a social network. The scaling comes from human interactions. When you pack more people together, the number of possible interactions grows faster than the population. And it's interactions — conversations, collaborations, transactions, arguments — that drive economic output, innovation, and, yes, crime.

West puts it vividly: "A city is a particle accelerator for humans." Cram people together, speed up their interactions, and creative output increases — along with everything else.
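Both exponents are easy to play with directly. A minimal sketch (β values as quoted in this section; the doubling scenario is illustrative):

```python
BETA_INFRA = 0.85    # roads, pipes, cables: sublinear
BETA_SOCIO = 1.15    # wages, patents, crime: superlinear

def scaled(pop_ratio, beta):
    # If Y ∝ Population^beta, multiplying population by pop_ratio
    # multiplies Y by pop_ratio**beta.
    return pop_ratio ** beta

print(round(scaled(2, BETA_INFRA), 2))   # → 1.8  (less than double: economies of scale)
print(round(scaled(2, BETA_SOCIO), 2))   # → 2.22 (more than double: the urban bonus)
```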

VI.

The Pace of Urban Life

Here's something you've probably noticed intuitively but never put a number on: people in big cities walk faster.

It's not your imagination. In the 1970s, psychologist Marc Bornstein measured pedestrian walking speeds in cities around the world. He found a systematic pattern: the bigger the city, the faster the walking speed. And in 2007, a much larger study confirmed the same systematic speed-up with city size.

This is remarkable. Walking speed isn't determined by infrastructure or policy. Nobody is told to walk faster in Tokyo. It emerges spontaneously from the increased pace of social and economic interactions.

Walking Speed vs. City Population

The stick figure walks faster as the city grows. Everything in a city accelerates with size — not just walking, but transaction times, business formation, even the spread of disease.

And it's not just walking speed. The pace of everything in a city accelerates with size: transactions complete faster, businesses form (and fail) faster, even diseases spread faster.

Remember how in biology, bigger meant slower? The pace of life decreased with size? In cities, it's the exact opposite. Bigger means faster. The whole city speeds up.

West argues this is because cities are driven by social networks, not physical distribution networks. And social networks have a fundamentally different geometry — one that leads to increasing returns rather than decreasing ones.

But superlinear scaling has a dark side. And to see it, we need to look at the one thing that's supposed to grow but actually behaves like an organism rather than a city.

VII.

Companies: Born to Die

Here's a question that should keep CEOs up at night: why do cities almost never die, but companies almost always do?

Think about it. Athens is 3,400 years old. Rome is 2,700 years old. London has been continuously inhabited for nearly 2,000 years. These cities have survived wars, plagues, fires, economic collapses, and the fall of empires.

Meanwhile, the average lifespan of a company on the Fortune 500 list is about 10.5 years (counted from when they enter the list)[6]. Half of all publicly traded companies die within a decade. Of the Fortune 500 companies in 1955, 88% are gone.

Some notable corporate deaths 💀

Kodak (founded 1888, bankrupt 2012): invented the digital camera, then was killed by it. Sears (founded 1893, bankrupt 2018): once the Amazon of its day. Blockbuster (founded 1985, bankrupt 2010): had a chance to buy Netflix for $50 million. Pan Am (founded 1927, bankrupt 1991): once the world's largest airline.

The pattern is eerily consistent. Companies grow, optimize, calcify, and die.

Company Survival Rate Over Time

The survival curve for publicly traded companies looks like the decay of a radioactive isotope. The "half-life" is about 10.5 years. No matter how big or dominant a company seems, the math is relentless.

West's explanation is elegant and devastating: companies scale like organisms, not like cities.

When you analyze company data, you find sublinear scaling — just like biology. Revenue and profits grow with company size, but sublinearly (exponent ~0.9): each doubling of headcount brings less than double the output. Companies enjoy economies of scale — but at a cost.

That cost is innovation. Per capita innovation (measured by R&D output, patents per employee, new products per dollar spent) decreases as companies grow. The bigger the company, the less innovative each person becomes.

Why? Because companies, unlike cities, develop rigid hierarchies and bureaucratic networks. They optimize for efficiency — standardizing processes, reducing variance, eliminating redundancy. This makes them great at doing what they already do, but terrible at adapting to change.

"Cities keep reinventing themselves," West writes. "Horse and buggy gave way to automobiles, which are giving way to ride-sharing. But companies calcify around their core business. They fight the last war."

The contrast is stark:

A city is an open platform that continuously generates new ideas. A company is a closed system that eventually runs out of them.

This brings us to the deepest insight of West's work — the universal mathematics of growth itself.

VIII.

The Universal Growth Curve

West and his collaborators derived a remarkably simple equation that governs growth in all three domains — organisms, cities, and companies. It looks like this:

dM/dt = a·M^β − b·M

Translation: the rate of growth equals "resources generated" (which scales as M^β) minus "resources spent on maintenance" (which scales linearly with size).

The magic is in the exponent β. And this one parameter determines everything about how a system grows.

The Universal Growth Curve

Organisms grow fast, then plateau — reaching a stable adult size. Companies grow similarly but eventually decline and die. Cities never stop growing — but they require ever-faster cycles of innovation to sustain themselves.

When β < 1 (sublinear — organisms and companies): maintenance costs eventually catch up with resource generation. Growth slows, plateaus, and stops. The system reaches a stable "adult" size. For organisms, this is healthy maturity. For companies, it often precedes decline and death.

When β > 1 (superlinear — cities): resource generation always outpaces maintenance. Growth never stops. The system keeps getting bigger, faster, and more intense. But there's a terrifying catch[7].

The finite-time singularity 🤯

When β > 1, the growth equation predicts something alarming: the system reaches infinite size in finite time. Obviously, this can't happen in reality. What it means is that the current paradigm of growth must break down — requiring a major innovation or paradigm shift to reset the clock.

West argues this is why cities need ever-faster cycles of innovation. The time between major paradigm shifts (fire → agriculture → industrialization → digital) keeps shrinking. Each innovation resets the clock, but the treadmill speeds up every time.

Mathematically, this means we need infinite innovation in finite time. West warns that this may be unsustainable.

Superlinear growth requires the pace of innovation to accelerate — you need new paradigm shifts faster and faster. The time between major innovations keeps shrinking: millennia between fire and agriculture, centuries between the printing press and industrialization, decades between electricity and computers, years between the internet and AI.

West calls this "the treadmill of accelerating growth." The treadmill keeps speeding up, and you have to keep running faster just to stay in place. Miss one cycle of innovation, and the system crashes.

This is the deep mathematical reason why cities feel so relentless, why the pace of modern life keeps increasing, and why "disruption" is the defining word of our era. It's not a cultural choice — it's a mathematical necessity of superlinear scaling.
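You can see both regimes by integrating the growth equation numerically. A minimal sketch (the parameters a, b, the starting sizes, and the blow-up cap are illustrative choices, not fitted values):

```python
def grow(beta, a=1.0, b=0.5, m=0.01, dt=0.01, steps=20_000, cap=1e6):
    """Forward-Euler integration of dM/dt = a*M**beta - b*M.
    Returns (final size, elapsed time); stops early if M blows past cap."""
    t = 0.0
    for _ in range(steps):
        m += (a * m ** beta - b * m) * dt
        t += dt
        if m > cap:
            break
    return m, t

# Sublinear (organisms, companies): growth stalls at M* = (a/b)**(1/(1 - beta)).
m_sub, _ = grow(beta=0.75)               # here M* = 2**4 = 16
# Superlinear (cities): size passes ANY finite cap in finite time.
m_sup, t_blow = grow(beta=1.15, m=1.0)

print(round(m_sub, 1))           # → 16.0 (a stable "adult" size)
print(m_sup > 1e6, t_blow < 20)  # → True True (finite-time blow-up)
```

With β = 0.75 the maintenance term catches up and growth plateaus at a fixed "adult" size; with β = 1.15 the same equation races past the cap in a few time units, the finite-time singularity in miniature.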

IX.

Putting It All Together

Let's step back and appreciate the big picture. Three domains — organisms, cities, and companies — and one equation with one parameter (β) explains the fundamental behavior of all three.

| Property | Organisms | Cities | Companies |
| --- | --- | --- | --- |
| Scaling exponent (β) | ~0.75 (sublinear) | ~1.15 (superlinear) | ~0.85–0.9 (sublinear) |
| Growth pattern | Fast, then plateau | Accelerating, unbounded | Fast, then plateau, then decline |
| Pace of life with size | Slows down | Speeds up | Slows down |
| Efficiency with size | More efficient | More efficient (infrastructure) | More efficient |
| Innovation with size | N/A | Increases per capita | Decreases per capita |
| Death | Inevitable | Extremely rare | Almost certain |
| Network type | Fractal distribution | Social interaction | Hierarchical |

The deep unity is striking. Whether it's a shrew or a whale, a village or a megacity, a startup or a Fortune 500 company — the same mathematical framework applies. The exponent β is the only thing that changes, and it determines everything: growth trajectory, pace of life, and ultimate fate.

Now, let's see if you can use what you've learned to make predictions yourself.

QUIZ 1: A newly discovered mammal species weighs 50 kg. Its heart rate hasn't been measured yet. What's your best guess?

Using Heart Rate ≈ 241 × M^(−0.25):

241 × 50^(−0.25) = 241 × 0.376 ≈ 91 bpm

For reference, a similarly-sized dog has a resting heart rate of about 70–120 bpm. The scaling law nails it! We also get a lifespan estimate of ~30 years (11.8 × 50^(0.25)).

QUIZ 2: City A has twice the population of City B. How many more patents does City A produce per year?

Patents scale superlinearly with exponent ~1.15:

2^(1.15) ≈ 2.22× as many patents

Not just double — but more than double! That extra 22% is the "superlinear bonus" of urban agglomeration. The same math applies to wages, GDP, restaurants, and unfortunately, crime.

QUIZ 3: A city triples its population through migration. What happens to per-capita income?

Total wages scale as Pop^(1.15), so per-capita wages scale as Pop^(0.15):

3^(0.15) ≈ 1.18× → a ~18% raise for everyone

Everyone in the city gets richer — not just the newcomers. This is why urbanization is one of the most powerful forces for economic growth in human history. (But remember: crime per capita also goes up ~18%.)
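All three quiz answers can be checked in a few lines (using the approximate 241 coefficient and the exponents quoted above):

```python
quiz1 = 241 * 50 ** -0.25   # Quiz 1: heart rate of a 50 kg mammal, in bpm
quiz2 = 2 ** 1.15           # Quiz 2: patent multiplier when population doubles
quiz3 = 3 ** 0.15           # Quiz 3: per-capita income multiplier when population triples

print(round(quiz1), round(quiz2, 2), round(quiz3, 2))  # → 91 2.22 1.18
```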

Pretty powerful, right? One equation, a handful of exponents, and you can predict heart rates, city economics, and corporate lifespans.

Of course, scaling laws aren't perfectly precise — they describe averages across many systems. Any individual animal, city, or company can deviate. But the deviations themselves are interesting: they tell you which systems are outperforming or underperforming their "expected" behavior given their size[8].

The power of residuals 📊

When a city produces more patents than its population predicts, that tells you something real — it has an unusually innovative environment. When a company grows slower than its size predicts, it may be in trouble.

West argues that the deviations from scaling laws are just as informative as the laws themselves. Scaling laws give you the baseline; residuals tell you what's unique.

Controversies remain — some researchers argue the exponents are closer to 2/3 than 3/4 for biology, or that city scaling exponents vary more by metric than West suggests. But the broad framework of universal power-law scaling remains robust and productive.

Geoffrey West ends his book with a warning. The accelerating treadmill of urban innovation is not sustainable indefinitely. At some point, the pace of required paradigm shifts becomes impossibly fast. We may already be feeling this — the sense that the world is changing faster than we can adapt.

The mathematics of scaling doesn't tell us what to do about it. But it tells us clearly what the constraints are. And understanding the constraints is always the first step toward wisdom.

X.

Further Resources

This explainer is based on the work of Geoffrey West, James Brown, Brian Enquist, Luís Bettencourt, and colleagues at the Santa Fe Institute. Any errors are the author's own. Scaling law research is an active field with ongoing debates — the numbers presented here are approximations meant to convey the core ideas.