Two economists sit across from each other at a table. Between them lies a pile of money. The rules are simple. Each player can either take the pile and end the game, or pass it to the other player. Here’s the catch: every time someone passes, the pile doubles. But if you pass and your opponent takes on the next turn, you get nothing while they walk away with everything.
This is the Centipede game, named for the way its decision tree sprawls across a page like the legs of an insect. Game theorists love it. Not because it reveals some profound truth about cooperation or trust, but because it exposes something far more unsettling: the gap between what we call rational and what actually makes sense.
The Logic That Eats Itself
Start at the end of the game. Imagine there are just two turns left. The pile has grown substantially through mutual cooperation. Now it’s your turn. Should you pass?
The answer, according to game theory, is obvious. Look ahead one move. If you pass, your opponent faces the final decision. They can take the doubled pile, or pass and settle for a slightly smaller share in some predetermined final split. Taking gives them more. So they’ll take. Knowing this, you should take now rather than pass and get nothing.
Now step back one turn. Your opponent faces the same calculation. They know that if they pass to you, you’ll take for the exact reason just described. So they should take now instead.
Step back again. And again. This chain of reasoning, called backward induction, unravels the entire game. Each player, reasoning about what the other will do, concludes that cooperation is pointless. The rational move at every single decision point is to take immediately.
The first player should grab the money on turn one. The game should end before it begins. Both players walk away with almost nothing when they could have built the pile into something worth having. This is what game theorists call the subgame perfect Nash equilibrium. It’s also completely absurd.
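The unraveling argument can be made concrete with a short sketch. The payoff numbers here are my own illustrative choices, not fixed by the game itself: the pile starts at 2 and doubles on every pass, whoever takes keeps 80% of the current pile, and if the last mover passes, the final pile is split evenly.

```python
def solve_centipede(rounds=6, start_pile=2.0, taker_share=0.8):
    """Backward induction on a two-player Centipede game.

    Illustrative payoffs: the pile doubles on each pass; taking gives
    the mover taker_share of the current pile and the rest to the other
    player; if the last mover passes, the final pile is split evenly.
    Returns the equilibrium move at each turn, first to last.
    """
    # Value of reaching "everyone passed": the final pile split evenly.
    final_pile = start_pile * 2 ** (rounds - 1)
    v_mover = v_other = final_pile / 2
    moves = []
    for turn in range(rounds, 0, -1):
        pile = start_pile * 2 ** (turn - 1)
        take_mover = taker_share * pile
        take_other = (1 - taker_share) * pile
        # Passing hands the move over: this turn's mover becomes next
        # turn's "other" player, so their continuation value is v_other.
        if take_mover >= v_other:
            moves.append("take")
            v_mover, v_other = take_mover, take_other
        else:
            moves.append("pass")
            v_mover, v_other = v_other, v_mover  # roles swap
    moves.reverse()
    return moves

print(solve_centipede())  # every turn: "take", the game unravels
```

However long you make the game, the induction reaches the same verdict: take at every node, so the first mover takes immediately.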
When Smart People Do Dumb Things
Here’s what actually happens when humans play the Centipede game. They cooperate. Not always, and not forever, but far more than theory predicts. The pile grows. Players pass multiple times. Someone eventually defects, but usually not on the first turn. Sometimes not even on the second or third.
Put economists in a lab and they’ll still cooperate. These are people who teach backward induction to undergraduates. They know the theory cold. They understand that their opponent understands it too. And yet they pass.
This isn’t ignorance or irrationality in the usual sense. Players aren’t making calculation errors. They’re making a different calculation entirely, one that weighs factors the standard model ignores. The orthodox solution assumes that players only care about maximizing their own money. But that’s not quite right. People care about fairness. They care about being the kind of person who cooperates. They care about not looking like a jerk.
More importantly, they’re reasoning about reasoning itself. Each player knows the backward induction argument. Each player also knows that if both players follow it, both lose. The logic is impeccable and the conclusion is terrible. Something has to give.
The Rationality Trap
The Centipede game sits at the intersection of two uncomfortable truths. First, individual rationality can produce collective absurdity. Second, everyone knowing this doesn’t fix the problem.
Consider what happens when both players understand the paradox. They both know that blind adherence to backward induction leaves money on the table. They both want to cooperate, at least for a while. But how long?
This is where the game gets truly devious. If you’re sure your opponent will cooperate for several rounds, you should cooperate too. But if you’re sure they’ll cooperate, you should also consider defecting just before they expect you to. And if they’re reasoning the same way about you, then maybe you should defect even earlier. The mutual knowledge that cooperation would be better doesn’t eliminate the incentive to defect. It just moves the question around.
Game theorists sometimes call this the “surprise exam paradox” of strategic interaction. A teacher announces that there will be a surprise exam next week. Students reason that it can’t be on Friday, because if they reach Friday without an exam, it won’t be a surprise. So it must be earlier. But then it can’t be Thursday either, by the same logic. The reasoning continues backward until the students conclude there can be no surprise exam at all. Then the teacher gives the exam on Wednesday, and everyone is surprised.
The Centipede game works the same way. The logic says defect immediately. But if everyone follows that logic, there’s a huge opportunity for someone who defects at turn two instead. Unless everyone is thinking that. The game creates a hall of mirrors where each level of strategic reasoning undermines the previous one.
Trust Without Foundations
What players actually do in the Centipede game suggests something radical about cooperation. It doesn’t require trust in the usual sense. Trust implies a belief that the other person will do the right thing. But in the Centipede game, there is no right thing. The incentive to defect is always present and always obvious.
Instead, cooperation emerges from something more pragmatic: the recognition that both players benefit from pretending the endgame doesn’t exist. At least for a while. This is trust as mutual suspension of disbelief rather than confidence in character.
Early in the game, the potential gains from continued cooperation are large relative to the immediate payoff from defecting. Defecting on turn one means taking a pittance when you could have a feast. So players pass. Not because they trust each other, but because the alternative is visibly stupid.
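The tradeoff at any given turn can be put in numbers. Assume, purely for illustration, that the pile doubles on each pass and whoever takes keeps 75% of it; neither figure comes from the text.

```python
def stakes_of_passing(pile, taker_share=0.75):
    """Illustrative payoffs only: the pile doubles on each pass and the
    taker keeps taker_share of it. Compare taking now against the two
    ways a single pass can play out."""
    take_now = taker_share * pile                    # grab the pile immediately
    they_take_next = (1 - taker_share) * (2 * pile)  # you pass, they take
    you_take_after = taker_share * (4 * pile)        # both pass, then you take
    return take_now, they_take_next, you_take_after

print(stakes_of_passing(10))  # (7.5, 5.0, 30.0)
```

Passing risks dropping from 7.5 to 5 in exchange for a shot at 30. Early on, when the absolute amounts are small and many doublings remain, refusing that gamble is the visibly stupid choice.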
As the game progresses, the calculus shifts. The pile gets bigger, but fewer doublings remain, so the upside of continued cooperation shrinks relative to what is already on the table. The fear that your opponent might defect first grows stronger. Eventually someone pulls the trigger. The question is when.
This pattern appears everywhere in the real world. Business partnerships that should logically never form still happen. Long term projects that could collapse at any moment still get completed. Nations with no formal enforcement mechanism still honor treaties. The shadow of the endgame is always there, but people act as if it isn’t. Until suddenly they can’t anymore.
The Problem of Common Knowledge
One of the strangest features of the Centipede game is what happens when you add more players or more rounds. You might think that a longer game would create more opportunities for cooperation. After all, the immediate gains from defecting early become relatively smaller. But the opposite often happens.
The longer the game, the harder it becomes to sustain cooperation. This seems backward. But it makes sense once you consider the role of common knowledge. For cooperation to work, players need to believe their opponent will cooperate for at least a few rounds. But they also need to believe their opponent believes they will cooperate. And so on.
Each additional round of mutual belief makes cooperation more fragile. The tower of reasoning gets taller and more unstable. A tiny doubt at any level can cascade downward and collapse the whole structure. Maybe your opponent thinks you might defect early. Or maybe they think you think they might think you might defect early. Any crack in the common knowledge leads back to immediate defection.
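A toy calculation (my own construction, not a standard model) shows how fast the tower degrades. Suppose each required level of mutual belief independently holds with probability 0.95; sustaining cooperation through round n then needs all n levels to hold at once.

```python
def tower_holds(rounds, level_reliability=0.95):
    """Toy model of fragile common knowledge: cooperation through round n
    requires n nested levels of mutual belief, each assumed to hold
    independently with probability level_reliability."""
    return [round(level_reliability ** n, 3) for n in range(1, rounds + 1)]

print(tower_holds(10))
# round 1 holds with chance 0.95, but round 10 only about 0.599:
# a modest doubt at each level compounds quickly
```

The independence assumption is doing a lot of work here, but the qualitative point survives: each extra story added to the tower multiplies in another chance of collapse.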
This is why the Centipede game is such a useful model for understanding real world cooperation failures. The logic that destroys cooperation isn’t about mistrust or greed. It’s about higher order beliefs. About beliefs about beliefs about beliefs. The substantive question of “should we cooperate” gets swallowed by the recursive question of “do we both know that we both know that we both know we should cooperate.”
Escape Routes That Don’t Quite Work
Game theorists have proposed various solutions to the Centipede paradox. None of them are fully satisfying.
One approach involves bounded rationality. Maybe players don’t actually perform infinite backward induction. Maybe they only think ahead a few steps, which would naturally lead to more cooperation. This is probably true as a descriptive matter. But it doesn’t resolve the theoretical puzzle. The question isn’t whether real people follow game theory. The question is whether they should.
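One way to see how limited look-ahead rescues cooperation is a crude simulation. The horizon rule below is my own simplification, not a model from the literature: each player runs backward induction only over their personal horizon, and whenever the end of the game lies beyond it, they treat passing as safe.

```python
def play_with_horizons(rounds, horizon_a, horizon_b):
    """Simulate a Centipede game between two boundedly rational players.
    Player A moves on odd turns, player B on even turns. A player takes
    only once the end of the game is within their look-ahead horizon;
    otherwise they pass. (A deliberately crude model of bounded reasoning.)

    Returns the turn at which someone takes, or None if everyone passes.
    """
    for turn in range(1, rounds + 1):
        horizon = horizon_a if turn % 2 == 1 else horizon_b
        turns_left = rounds - turn
        # Within the horizon, backward induction applies and says "take".
        if turns_left < horizon:
            return turn
    return None

# Full horizons reproduce the equilibrium: immediate defection.
print(play_with_horizons(10, horizon_a=10, horizon_b=10))  # prints 1
# Short horizons sustain cooperation deep into the game.
print(play_with_horizons(10, horizon_a=3, horizon_b=2))    # prints 9
```

Players who see the whole tree defect at once; players who see only a few steps ahead pass for most of the game, which is roughly the pattern the lab experiments produce.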
Another approach appeals to other regarding preferences. If players care about fairness or their opponent’s payoff, the incentive to defect immediately disappears. Cooperative outcomes become sustainable. The problem is this just shifts the question. Why would players have these preferences? If caring about fairness is itself a strategy, we’re back where we started.
Some theorists invoke reputation effects. In repeated interactions, players might cooperate in the Centipede game to build a reputation for cooperation in future games. This works but requires a very specific setup. The Centipede game itself can’t be truly repeated, or it just becomes a longer Centipede game with the same backward induction problem.
Perhaps the most honest response is that the Centipede game reveals a genuine limitation in how we model rationality. The standard framework assumes common knowledge of rationality: everyone is rational, everyone knows everyone is rational, everyone knows everyone knows everyone is rational, and so on to infinity. This assumption does a lot of work in game theory. It’s also completely ridiculous.
No real person engages in infinite regress. Rationality in practice means something much more modest: considering likely outcomes, making reasonable inferences, adjusting based on experience. When you define rationality this way, the paradox softens. Cooperation in the Centipede game stops looking like a mistake and starts looking like good judgment.
What the Game Actually Teaches
The Centipede game is often presented as a cautionary tale about the limits of rational self interest. Look what happens when everyone pursues their own advantage. They all end up worse off. The lesson seems to be that we need something beyond rationality. Ethics, social norms, institutions that override individual incentives.
But there’s another reading. Maybe the game shows that our concept of rationality is too narrow. Real rationality involves more than calculating optimal strategies in artificially simplified scenarios. It involves dealing with uncertainty, managing coordination problems, and navigating the gap between theoretical models and messy reality.
When players cooperate in the Centipede game, they’re not being irrational. They’re recognizing that backward induction, while logically valid, rests on assumptions that don’t hold. The assumption that your opponent will definitely follow backward induction. The assumption that they assume you will follow it. The assumption that these mutual assumptions are stable and common knowledge.
This suggests a different way to think about game theory paradoxes: treat them less as proof that people are irrational and more as stress tests of the assumptions baked into the model.
Living in the Middle Rounds
Most of life happens in the middle rounds of a Centipede game. The endgame is visible but not immediate. Cooperation is risky but potentially rewarding. The incentive to defect is always present but not yet overwhelming.
This is the natural state of most relationships and institutions. Marriages, business partnerships, political alliances, international treaties. They all involve sequential decisions where parties could defect for short term gain but mostly don’t. Not because defection is impossible or because some external force prevents it. But because both parties prefer to act as if the endgame is further away than it really is.
The Centipede game suggests this is neither fully rational nor fully irrational. It’s something else: a pragmatic response to the reality that perfect rationality is self defeating.
This might be the real lesson. Rationality isn’t about always following every logical chain to its end. Sometimes it’s about knowing when to stop reasoning. When to accept cooperation that theory says should unravel. When to trust even though trust makes no sense.
The game where everyone loses shows us something true about winning. Sometimes the only way to win is to stop playing the way you’re supposed to play.
To leave money on the table early so there’s more money to take later. To pass when logic says take. To cooperate even though you both know, and both know that both know, that cooperation can’t possibly work.
And somehow it does work. Not perfectly. Not permanently. But enough. The pile grows. Value gets created. Both players end up better off than they would have been if they’d listened to the theory.
Maybe that’s the most rational thing of all.


