Picture two economists at a dinner party. One believes the stock market will crash next month. The other thinks it will soar. They debate for hours, each presenting data, charts, and carefully constructed arguments. They part ways still disagreeing, each convinced of their position.
According to Robert Aumann’s famous theorem, if these economists are truly rational and have the same information, this scenario is mathematically impossible.
Aumann won the Nobel Prize in Economics in 2005, partly for work that included a deceptively simple paper from 1976. The paper ran just five pages. Its conclusion upended how we think about disagreement itself. The theorem states that two rational agents with common knowledge of each other’s beliefs cannot agree to disagree. They must converge on the same conclusion.
The implications sound absurd at first. Look around. Rational people disagree constantly. Scientists debate climate models. Doctors disagree on treatment plans. Investors make opposite bets on the same stock. Are they all irrational? Is Aumann wrong?
Neither. The truth lives in the gap between mathematical purity and messy reality.
The Game Theory Foundation
Game theory treats decision making as a kind of strategic game. Players have information, beliefs, and strategies. They update their beliefs based on new information. Rational players, by definition, process information correctly and update their beliefs according to Bayes’ theorem.
Aumann asked a simple question within this framework. If two rational players know what each other believes, and they know that they both know, and they know that they both know that they both know, continuing infinitely, what happens to their beliefs?
The answer shocked the economics community. The beliefs must be identical.
Think of it like two calculators working on the same problem. If they both function correctly and have the same inputs, they must produce the same output. Aumann showed that rational minds work the same way when they share information completely.
The proof itself uses the mathematics of partitions and common knowledge. Each person divides the world into possible states. When they learn something, they eliminate certain states. Common knowledge means both people know something, know that the other knows, know that the other knows they know, and so on without end.
Under these conditions, disagreement becomes logically impossible. The very fact that someone rational disagrees with you contains information. If you are also rational, you must update your belief based on their disagreement. They must do the same. This process continues until beliefs align perfectly.
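This back-and-forth can be simulated directly. The sketch below follows the announcement protocol studied by Geanakoplos and Polemarchakis: each agent announces a posterior, the other eliminates states inconsistent with that announcement, and the process repeats until the announcements match. The four-state world, the partitions, and the event are illustrative assumptions chosen to make the dynamics visible.

```python
from fractions import Fraction

def dialogue(states, part_a, part_b, event, true_state, max_rounds=20):
    """Alternating posterior announcements between two agents with a
    uniform common prior, until consecutive announcements agree."""
    # Each agent's possibility set at each state: initially the partition cell.
    K = {"A": {s: frozenset(c) for c in part_a for s in c},
         "B": {s: frozenset(c) for c in part_b for s in c}}

    def posterior(agent, state):
        poss = K[agent][state]
        return Fraction(len(poss & event), len(poss)) if poss else None

    history, speaker, listener = [], "A", "B"
    for _ in range(max_rounds):
        q = posterior(speaker, true_state)
        history.append((speaker, q))
        # The listener keeps only states at which the speaker would announce q.
        K[listener] = {s: frozenset(t for t in K[listener][s]
                                    if posterior(speaker, t) == q)
                       for s in states}
        if len(history) >= 2 and history[-1][1] == history[-2][1]:
            break  # the two sides now announce the same posterior
        speaker, listener = listener, speaker
    return history

# Toy world: four equally likely states; the event of interest is {1, 4}.
states = [1, 2, 3, 4]
hist = dialogue(states,
                part_a=[{1, 2}, {3, 4}],
                part_b=[{1, 2, 3}, {4}],
                event={1, 4},
                true_state=1)
for who, q in hist:
    print(who, q)  # announcements: A 1/2, B 1/3, A 1/2, B 1/2
```

The agents start apart (1/2 versus 1/3), but each announcement leaks information, and within a few rounds the announcements coincide, exactly as the theorem demands.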
The Dinner Party Paradox
Return to those economists. Why can they still disagree after sharing their analyses?
The answer reveals what Aumann’s theorem actually proves. The theorem requires common knowledge. Not just shared information. Common knowledge represents something far more demanding.
Imagine trying to establish common knowledge in practice. You tell your colleague your belief. Now you both know your belief. But do you both know that you both know? Maybe your colleague wasn’t paying attention. Or maybe they heard you but doubt you meant it seriously. Or perhaps they question whether you understood their skeptical expression.
For true common knowledge, every possible doubt must vanish at every level of recursion. You must both know. Both know that both know. Both know that both know that both know. This tower continues infinitely.
The economists at dinner probably shared data. They definitely shared conclusions. But did they share everything? Every assumption behind their models? Every tiny probability estimate? Every reason they weighted certain factors more than others? Every life experience that shaped their intuitions?
Almost certainly not. Information asymmetry lurked somewhere in their exchange.
The Bayesian Machine
Aumann’s theorem assumes rational agents update beliefs using Bayes’ theorem. This assumption deserves scrutiny.
Bayes’ theorem provides a mathematical rule for updating probabilities. When new evidence appears, a rational agent should adjust their beliefs in a precise way. The theorem tells you exactly how much to update based on the strength of the evidence and your prior beliefs.
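The update rule itself fits in a few lines. The numbers below are illustrative assumptions, not data: a 30% prior in a crash, and a signal three times as likely if a crash is coming.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) = P(E | H) * P(H) / P(E), with P(E) via total probability."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Prior of 0.3 in a crash; the observed signal has likelihood 0.6 if a
# crash is coming and 0.2 if not, so the posterior rises to 0.5625.
posterior = bayes_update(0.3, 0.6, 0.2)
print(posterior)
```

The point is the precision: given the prior and the likelihoods, the posterior is not a matter of judgment. Two agents who agree on the inputs must agree on the output.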
Humans struggle with Bayesian updating. People overweight recent information. They cling to initial beliefs too strongly. They make basic probability errors.
But Aumann’s theorem addresses ideal rationality, not human psychology. The theorem describes what perfectly rational agents must do, not what flesh and blood people actually do.
This creates a strange situation. The theorem is mathematically true but practically false. Two idealized reasoning machines cannot disagree. Two humans almost always can.
The gap between theory and practice reveals something important about disagreement itself. Most disagreements stem from hidden information or reasoning errors, not from genuinely irreconcilable rational perspectives.
Information Cascades and Market Madness
Game theory reveals a counterintuitive implication. If everyone knows about Aumann’s theorem and believes everyone else is rational, traders should never disagree about prices.
Consider a stock trading at $100. One trader thinks it should be $150 based on their analysis. Another thinks it should be $50. Both traders know the current price reflects all available public information. Both are rational.
Under Aumann’s framework, this scenario cannot arise. The very fact that someone rational is willing to sell at $100 tells the buyer that the seller knows something. A rational seller would only sell if they believed the price was too high. A rational buyer should incorporate this information and update their belief downward. Similarly, the seller should update upward upon seeing a rational buyer willing to pay $100.
This updating process should continue until they agree on the price. The market price itself should reflect perfect agreement among rational traders.
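A stylized no-trade calculation makes the seller's-willingness argument concrete. The numbers are illustrative assumptions: the asset is worth either 50 or 150 with equal probability, and the seller privately knows which.

```python
from fractions import Fraction

values = [Fraction(50), Fraction(150)]  # equally likely asset values
price = Fraction(100)

# The buyer's naive estimate ignores who is on the other side of the trade.
unconditional = sum(values) / len(values)

# But a rational informed seller only sells when the asset is worth less
# than the price, so conditioning on the seller agreeing changes everything.
selling = [v for v in values if v < price]
given_trade = sum(selling) / len(selling)

print(unconditional, given_trade)  # naive estimate 100, but only 50 given a trade
```

Conditioning on the other side's willingness to trade destroys the buyer's expected gain, which is why, among purely rational traders with a common prior, the trade never happens in the first place.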
Real markets obviously don’t work this way. Buyers and sellers exist simultaneously. Someone always thinks prices will rise while others think they will fall.
The explanation returns to information and rationality assumptions. Traders have different information. Some traders are not perfectly rational. Market prices reflect this messy reality, not theoretical ideal types.
But the theorem offers a valuable lens. When markets show extreme disagreement, we can ask why. Is new information fragmenting beliefs? Are some participants trading on emotion rather than analysis? Is someone systematically wrong about how to process information?
The Common Prior Assumption
Aumann’s theorem makes another subtle assumption. Both agents must start with the same prior beliefs before receiving any information.
This assumption hides enormous complexity. Prior beliefs represent everything someone thinks before seeing specific evidence. These priors come from life experience, education, cultural background, and fundamental intuitions about how the world works.
Two people might agree on all the facts yet disagree on conclusions because they started with different priors. A doctor trained in Western medicine might evaluate a treatment differently than one trained in traditional medicine, even seeing identical patient data. Their fundamental frameworks differ.
Aumann’s theorem says that if rational agents share the same priors and gain common knowledge of their posteriors, they must agree. The posterior is just the prior updated with new information.
But identical priors represent a heroic assumption. People grow up in different environments. They learn different things. They have different fundamental intuitions.
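How much the prior matters is easy to demonstrate. In the sketch below (the numbers are illustrative assumptions), two agents see the same evidence, update by the same correct rule, and still land far apart because they started apart.

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

evidence_lr = 3.0  # the same data for both agents, favoring H three to one

print(update(0.5, evidence_lr))  # prior 0.5 -> posterior 0.75
print(update(0.1, evidence_lr))  # prior 0.1 -> posterior 0.25
```

Both updates are flawless Bayesian reasoning. The residual disagreement is entirely the priors' doing, which is exactly the loophole the common prior assumption closes.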
Game theorists sometimes defend this assumption by arguing that rational agents should converge on correct priors over time. If one prior better predicts reality, rational agents should adopt it. But this defense just pushes the problem back. How long does convergence take? What if the environment keeps changing?
The common prior assumption reveals that Aumann’s theorem describes a very specific type of disagreement. Not all disagreement. Only disagreement between agents who start from the same fundamental worldview and then acquire common knowledge of each other’s updated beliefs.
This narrow scope doesn’t diminish the theorem’s importance. It clarifies exactly what rational disagreement means.
The Paradox of Experts
Academic fields thrive on disagreement. Economists debate policy prescriptions. Physicists argue over interpretations of quantum mechanics. Historians contest the causes of major events.
These experts know each other’s arguments. They read the same papers. They attend the same conferences. By many measures, they share common knowledge of their respective positions.
Yet they continue disagreeing, often for decades. How does this square with Aumann’s theorem?
Several explanations emerge. First, experts rarely have truly common knowledge. Reading someone’s paper differs from knowing exactly what they think and why. Academic debate happens through publications, conferences, and conversations, but these mechanisms fall short of the infinite recursion Aumann requires.
Second, experts might have different priors rooted in their training or early research. A physicist who spent years studying one interpretation of quantum mechanics likely developed different fundamental intuitions than one who studied a competing interpretation.
Third, some experts might not be fully rational in the technical sense. They might weight evidence incorrectly. They might cling to positions for emotional or professional reasons. They might make subtle logical errors in complex domains.
Fourth, and most interestingly, experts might rationally disagree about what counts as evidence. This represents a deeper disagreement about methodology or epistemology. If two scientists disagree about whether computer simulations constitute valid evidence, they can rationally reach different conclusions even with the same data.
The expert disagreement paradox suggests that Aumann’s theorem works better as an ideal than a prediction. It shows us what perfect rationality with perfect information sharing would look like. Real disagreement persists because we fall short of these ideals.
Aumann’s theorem offers no immediate practical applications. You cannot force people to agree by invoking mathematical proofs. Knowing the theorem doesn’t make you more rational or better at updating beliefs.
But the theorem clarifies thinking about disagreement in powerful ways.
When encountering disagreement, ask whether the disagreement is rational. Do both parties have the same information? Have they truly shared their reasoning? Do they start from compatible priors? Are they both reasoning correctly?
Often the answer to one or more questions is no. This insight helps locate the source of disagreement. Maybe more information sharing would help. Maybe one party is making a reasoning error. Maybe the disagreement stems from incompatible fundamental assumptions.
The theorem also suggests humility. If someone rational disagrees with you, their disagreement contains information. You should update your belief at least somewhat. This applies especially to expert disagreement. When qualified experts disagree, they might have different information or different priors. Dismissing expert views as simply wrong ignores the information their disagreement conveys.
The theorem functions like a mirror. It reflects back an image of what perfect agreement would look like. The gap between the mirror and reality shows us exactly where imperfections lie. This makes Aumann’s theorem profoundly useful despite being practically inapplicable. It provides a reference point. A standard. A way to think clearly about why people disagree and what disagreement means.
The theorem won Aumann the Nobel Prize not because it solved practical problems but because it changed how economists think about information, belief, and rationality. It showed that game theory could illuminate basic questions about human reasoning.
Two economists can debate stock prices all evening and part ways still disagreeing. Aumann’s theorem doesn’t make this impossible. It explains exactly why it happens. And in that explanation lies its enduring value.