Nash's Invention of Non-Cooperative Game Theory (1949-50)
"The entire edifice of game theory rests on two theorems: von Neumann’s min-max theorem of 1928 and Nash’s equilibrium theorem of 1950" - Nasar (1998)
In the last few years I’ve written extensively on the collaboration between Oskar Morgenstern (1902-77) and John von Neumann (1903-57) on the development of cooperative game theory, the theory of coalition formation with external enforcement of behavior (read: contract theory). Their joint work, a twelve-hundred-page manuscript, became the book Theory of Games and Economic Behavior*, published by Princeton University Press in 1944 (see essay below). The book summarizes the two authors’ efforts to construct a “systematic theory of rational human behavior by focusing on games as simple settings for the exercise of human rationality” (Nasar, 1998).
As Nasar writes, “when the book appeared in 1944, von Neumann’s reputation was at its peak. It got the kind of public attention — including a breathless front-page story in the New York Times — that no other densely mathematical work had ever received” (Nasar, 1998). The timing, as Morgenstern had predicted, was perfect. World War II drew to a close in 1945, and new states, new borders and new, very powerful weapons unleashed “a search for systematic attacks of all sorts of problems in a wide variety of fields” (Nasar, 1998).
The only issue was that von Neumann and Morgenstern’s book didn’t actually present any new solutions, only reformulations of old ones, such as von Neumann’s 1928 “minimax theorem” (see essay below).
More so than anything else, TGEB was thus a collection of well-known problems from the social sciences expressed in a new and exciting way for mathematicians. Economists remained skeptical, indeed fairly uninterested in the new theory. This was perhaps for a few reasons. The mathematical sophistication required to embrace the work was certainly an obstacle. Classically trained economists in the 1940s were unaccustomed to mathematics beyond simple calculus and statistics. As anyone who has picked up the book will have experienced, TGEB as a text is substantial, indeed overwhelming. Additionally, researchers are not exactly known to move swiftly and abandon paradigms and expertise at their whim — why should any economist have dropped the foundation their career was built on in order to embrace a new “game theory” that “can’t even solve a game like chess”, as Jacob Viner, the chairman of the Princeton economics department, pointed out. “What good is it, since economic life is far more complicated than chess?”.
Finally, as Leonard (1992) points out, Morgenstern’s vocal antagonism towards economics as a field of research did not ingratiate him with his peers. As I’ve written about before, the economist Morgenstern was pessimistic about the methods of economics. Indeed, throughout the late 1920s and 1930s he grew increasingly enchanted with mathematics, even attending the 1928 International Congress of Mathematicians, where Hilbert presented his famous Probleme der Grundlegung der Mathematik (see essay below).
The issue, as later expressed by economist Paul Samuelson (1915-2009), was that although Morgenstern made great claims, “he himself lacked the mathematical wherewithal to substantiate them. Moreover [Morgenstern] had the irksome habit of always invoking the authority of some physical scientist or another”.
“The essence of von Neumann and Morgenstern’s message was that economics was a hopelessly unscientific discipline whose leading members were busily peddling solutions to pressing problems of the day — such as stabilizing employment — without the benefit of any scientific basis for their proposals. The fact that much of economic theory had been dressed up in the language of calculus struck them as “exaggerated” and a failure. This was not, they said, because of the “human element” or because of poor measurement of economic variables. Rather, they claimed, “Economic problems are not formulated clearly and are often stated in such vague terms as to make mathematical treatment a priori appear hopeless because it is quite uncertain what the problems really are.”
— Excerpt, A Beautiful Mind* (Nasar, 1998)
Thus, despite the fanfare and attention, the contribution of von Neumann and Morgenstern’s work, at least among economists, remained rather limited. That is, until another young mathematician read their work and began formulating his own, “rival approach”.
The Theory of Games in Fine Hall (1949-50)
Mathematician John Forbes Nash (1928-2015) first came to Princeton University as a graduate student in 1948, four years after von Neumann and Morgenstern’s book was published. Nash had graduated with simultaneous B.S. and M.S. degrees in mathematics from the Carnegie Institute of Technology in the spring of that year and been accepted to graduate programs in mathematics at Harvard, Chicago and Princeton. After being offered a John S. Kennedy Fellowship of $1,150 per year (about $15,000 in 2024) by the chairman of the math department Solomon Lefschetz (1884-1972), Nash accepted Princeton’s offer.
“We like to catch promising men when they are young and open-minded”
- Excerpt from Solomon Lefschetz’s letter to Nash
Nash was 20 years old when he first came to Princeton. At that point, Albert W. Tucker (1905-95), a former Ph.D. student of Lefschetz, was the head game theorist in Princeton’s math department, as von Neumann was formally affiliated with the nearby Institute for Advanced Study and Morgenstern with Princeton’s economics department. Although Nash did not enter Princeton intending to study game theory, his interest in the topic grew as he spent time in the common room of the math department in Fine Hall, where (following the influx of European émigrés in the 1930s) “one game or another has always dominated” (Nasar, 1998). Chess, Go, Kriegspiel and various other board games were among the mathematicians’ favorites.
“A number of his fellow students remember thinking that Nash spent all of his time at Princeton in the common room playing board games. Nash, who had played chess in high school, played both go and Kriegspiel, the latter frequently with Steenrod or Tukey.”
- Excerpt, A Beautiful Mind* (Nasar, 1998)
The Game of Hex (1949)
In his second semester at Princeton, Nash surprised his fellow graduate students and professors by introducing a clever game of strategy, later marketed by Parker Brothers as “Hex”, known in the common room at Princeton simply as “Nash” or “John” (see essay below).
Hex is played using black and white Go stones on a rhombus-shaped board consisting of a 14x14 grid of hexagonal tiles. The board’s two pairs of opposite edges are colored black and white, respectively. Two players alternate placing pieces inside the hexagonal cells — much like in Go. Once a piece is placed, it can never be moved. The goal of each player is to construct an unbroken chain of stones of their color from one of their edges of the board to the opposite edge.
The game is simple and mathematically interesting because it cannot end in a draw: no matter how the board is filled with stones, exactly one of the players will have a connected chain of their stones linking their side of the board with its opposite. This observation, made by Nash around 1949, is now known as the “Hex theorem”. The result had previously been observed by Danish mathematician Piet Hein (1905-96) in 1942, when he introduced the same game at the Niels Bohr Institute under the name “Polygon”. Although Nash insisted that his discovery of the game was made independently of Hein’s, his proof was not published until 1952, when he wrote it up for a RAND technical report entitled Some Games and Machines for Playing Them. The proof showed that on symmetrical boards the first player has a winning strategy, but, crucially, not what that strategy is (the argument is non-constructive, much as with chess and Go). Although Nash is generally co-credited with Hein for the invention (including by his biographer Sylvia Nasar), there is some debate about the origins of Nash’s version1.
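For the curious, the no-draw property is easy to check by brute force on a tiny board. The sketch below is my own illustration, not Nash’s argument: the 3x3 board size, the coordinate convention and the edge assignments are assumptions chosen for convenience, but the script does verify that however such a board is filled, exactly one player ends up with a connecting chain.

```python
# Brute-force check of the "Hex theorem" on a tiny 3x3 board: however the
# board is filled with black and white stones, exactly one player has an
# unbroken chain connecting their two edges. (Illustrative sketch only.)
from itertools import product

N = 3  # board size; the commercial game uses much larger boards (e.g. 14x14)

def neighbors(r, c):
    """Hexagonal adjacency on a rhombus-shaped board."""
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]:
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield r + dr, c + dc

def connects(board, player, starts, is_goal):
    """Depth-first search: does `player` own a chain from a start edge to a goal edge?"""
    stack = [s for s in starts if board[s] == player]
    seen = set(stack)
    while stack:
        cell = stack.pop()
        if is_goal(cell):
            return True
        for nb in neighbors(*cell):
            if board[nb] == player and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return False

for colors in product("BW", repeat=N * N):
    board = {(r, c): colors[r * N + c] for r in range(N) for c in range(N)}
    black = connects(board, "B", [(0, c) for c in range(N)], lambda x: x[0] == N - 1)
    white = connects(board, "W", [(r, 0) for r in range(N)], lambda x: x[1] == N - 1)
    assert black != white  # exactly one player connects: no draws
print("Hex theorem verified by exhaustion on the 3x3 board.")
```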
What about Non-Cooperation?
Despite the lack of interest from economists, by the time of Nash’s arrival at Princeton von Neumann and Morgenstern’s book had attracted quite a following in the math department. As Nasar writes, “Kuhn and Gale were always talking about von Neumann and Morgenstern’s book”, referring to his fellow graduate students Harold Kuhn (1925-2014) and David Gale (1921-2008). No doubt, the students must have been aware of the book’s shortcomings, including von Neumann’s incomplete theory of non-cooperative games with more than two players and its emphasis on zero-sum games. Intrigued by the “apparent wealth of interesting, unsolved problems”, Nash too was attracted to game theory, and by 1950 he had become a regular in Tucker’s weekly seminar devoted to the topic. The first speaker at the seminar was John von Neumann himself.
Nash’s first result in game theory, indeed his first publication, was the solution to an old and unsolved problem in economics. His paper describes a bargaining situation where two individuals have the opportunity for mutual benefit, but no action taken by one of the individuals (without consent) can unilaterally affect the well-being of the other (see essay below) — a precursor to his later Nobel Prize-worthy result.
The paper was published in the prestigious mathematical economics journal Econometrica in 1950, quite an accomplishment for a graduate student with only a single undergraduate course in economics. In contrast to von Neumann and Morgenstern’s treatment of two-person, zero-sum games (which ignores situations where the players’ payoffs do not sum to zero), Nash’s paper derives values for what each player can expect to obtain in such two-person, non-zero-sum bargaining games:
“Nash’s theory assumes that both sides’ expectations about each other’s behavior are based on the intrinsic features of the bargaining situation itself. The essence of a situation that results in a deal is “two individuals who have the opportunity to collaborate for mutual benefit in more than one way.” How they will split the gain, he reasoned, reflects how much the deal is worth to each individual.
He started by asking the question, What reasonable conditions would any solution — any split — have to satisfy? He then posed four conditions and, using an ingenious mathematical argument, showed that, if his axioms held, a unique solution existed that maximized the product of the players’ utilities. In a sense, his contribution was not so much to “solve” the problem as to state it in a simple and precise way so as to show that unique solutions were possible.”
- Excerpt, A Beautiful Mind* (1998)
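To make the idea of maximizing the product of the players’ utilities concrete, here is a minimal sketch. The money-splitting setup, the linear utilities and the grid search below are illustrative assumptions of mine, not details from Nash’s 1950 paper.

```python
# A minimal, illustrative sketch of the Nash bargaining solution: among all
# feasible splits of a surplus, pick the one maximizing the product of the
# players' utility gains over their disagreement ("no deal") payoffs.
# The linear utilities and disagreement points are hypothetical choices.

def nash_bargaining_split(surplus=1.0, d_a=0.0, d_b=0.0, steps=10_000):
    """Grid-search the split of `surplus` maximizing (u_a - d_a) * (u_b - d_b)."""
    best_share, best_product = None, float("-inf")
    for i in range(steps + 1):
        share_a = surplus * i / steps          # player A's share of the surplus
        u_a, u_b = share_a, surplus - share_a  # linear utilities in money
        product = (u_a - d_a) * (u_b - d_b)    # the "Nash product"
        if product > best_product:
            best_share, best_product = share_a, product
    return best_share, surplus - best_share

if __name__ == "__main__":
    # Symmetric case: both walk away with 0 if talks fail -> even split.
    print(nash_bargaining_split(d_a=0.0, d_b=0.0))   # ~ (0.5, 0.5)
    # Asymmetric case: A's outside option is worth 0.4 -> A gets a larger share.
    print(nash_bargaining_split(d_a=0.4, d_b=0.0))   # ~ (0.7, 0.3)
```

In the symmetric case the solution is the even split; giving player A a better outside option (a higher disagreement payoff) shifts the split in A’s favor, which is exactly the sense in which the solution reflects how much the deal is worth to each individual.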
Exactly when Nash came up with the idea for this result remains a matter of debate. In his introduction to The Essential John Nash (2002), his friend Kuhn wrote: “It is my recollection that it had been sent to von Neumann during Nash’s first year as a graduate student and that Nash made an appointment to remind von Neumann of its existence. In this scenario, it had been written at Carnegie Tech as a term paper in the only course in economics that Nash ever took”, adding however that “Nash’s current memory differs from mine; in a luncheon with Roger Myerson in 1995, he expressed the opinion that he had written the paper after his arrival at Princeton.” Regardless of its origins, not only did the paper solve an important problem, it also demonstrated to economists (in one of their most prestigious journals) that the methods of the newly introduced game theory were indeed of value to them.
Generalizing the Minimax Theorem
Recall that the origins of von Neumann and Morgenstern’s book, indeed the entire field, was a 1928 paper by von Neumann proving the so-called minimax or “min-max” theorem that:
Every two-person, zero-sum game has a solution (an equilibrium point).
von Neumann’s result is an existence proof showing that two-person, zero-sum games with finitely many pure strategies have solutions (equilibria) under both “maximin” and “minimax” strategies, which are identical (Dimand & Dimand, 1996). To find such solutions, each player computes their security level: the minimum expected payoff each of their mixed strategies could yield, maximized over all such strategies (the maximum of all such minima, hence “maximin”). von Neumann’s proof shows that the best strategy for both players is to use this maximin criterion (the two players’ security levels sum to zero). Neither gets more than their security level, as this would leave the other with less (and vice versa). This is where von Neumann and Morgenstern’s analysis of such situations ends.
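As a concrete illustration of how a security level can be computed, the sketch below poses the row player’s maximin problem as a small linear program. This is a standard textbook formulation rather than anything from von Neumann’s 1928 proof, and the Matching Pennies payoff matrix and the use of SciPy are my own illustrative choices.

```python
# Compute the row player's security level (game value) and maximin mixed
# strategy for a two-person, zero-sum game by linear programming.
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(payoffs):
    """`payoffs[i][j]` is the row player's payoff for row i vs. column j
    (the column player receives the negative). Returns (value, strategy)."""
    A = np.asarray(payoffs, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy) and v (the game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                       # maximize v  <=>  minimize -v
    # For every column j:  v - sum_i x_i * A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one.
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

value, strategy = maximin_strategy([[1, -1], [-1, 1]])  # Matching Pennies
print(value, strategy)  # ~0.0 and ~[0.5, 0.5]
```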
The issue, which was reflected in the lackluster response among economists to von Neumann and Morgenstern’s work, is that very few interesting problems in economics feature players whose payoffs sum to zero. As such, in almost every real-world situation, applying the maximin criterion is not the best strategy! The best strategy is to play whatever strategy is the best response to the other player’s maximin strategy (knowing that this best response coincides with one’s own maximin strategy only when the game is zero-sum). The best strategy, in a non-zero-sum game, is thus the best response to the other player’s best strategy, who in turn should reason similarly. “Although players act at the same time, in ignorance of other players’ current actions, each is forced to think about the fact that there are other players who in turn are similarly aware.” (Dixit & Nalebuff, 1991). As Nasar writes, “Such circular reasoning would seem to have no conclusion”.
Nash’s brilliant insight was to make this seeming paradox of “best response” the very basis for a generalization of von Neumann’s minimax theorem to games which are not zero-sum. “As a minimum requirement for a pair of strategies to be a candidate for the solution of a two-person game, he required that each strategy be a best reply to the other. Such a pair of strategies, nowadays called a Nash equilibrium, is basic to noncooperative game theory. An authoritative game book cannot possibly recommend a strategy pair as the solution to a game unless the pair is a Nash equilibrium” (Binmore, 1996). Informally, Nash’s theorem thus states:
A strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy.
That is, in a two-person game, a pair of strategies constitutes a Nash equilibrium if player A’s choice is optimal given player B’s choice, and B’s choice is optimal given player A’s choice. Neither player can single-handedly change their strategy to obtain a better outcome. Crucially, neither player knows what strategy the other will choose, but each acts solely on the basis of their own interests, given their knowledge of the other players’ interests. The finding generalizes to n players.
In the table above, the strategies and payoffs of a two-person game are shown. Player A can choose between the strategies Top and Bottom. If player A chooses Top, he will receive a payoff of 2 if player B chooses Left and 0 if player B plays Right. If player A chooses Bottom, he will receive a payoff of 0 if player B plays Left, and 1 if player B plays Right. Thus, player A’s optimal choice depends on what he thinks player B will do (Varian, 2006, p. 506).
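A short brute-force check makes the definition above concrete. Player A’s payoffs follow the example just described; because the text only lists A’s payoffs, player B’s payoffs here are an assumed, illustrative filling-in, chosen so that the game has two pure-strategy equilibria.

```python
# Find pure-strategy Nash equilibria of a two-player game by brute force.
from itertools import product

# payoffs[(row, col)] = (payoff to A, payoff to B); B's payoffs are assumed.
payoffs = {
    ("Top", "Left"): (2, 1), ("Top", "Right"): (0, 0),
    ("Bottom", "Left"): (0, 0), ("Bottom", "Right"): (1, 2),
}
rows, cols = ["Top", "Bottom"], ["Left", "Right"]

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating from (row, col)."""
    a_payoff, b_payoff = payoffs[(row, col)]
    a_best = all(payoffs[(r, col)][0] <= a_payoff for r in rows)  # A can't do better
    b_best = all(payoffs[(row, c)][1] <= b_payoff for c in cols)  # B can't do better
    return a_best and b_best

equilibria = [cell for cell in product(rows, cols) if is_nash(*cell)]
print(equilibria)  # [('Top', 'Left'), ('Bottom', 'Right')]
```

Note that this little game has two Nash equilibria, (Top, Left) and (Bottom, Right), a foretaste of the multiplicity issue discussed in the next section.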
von Neumann’s (Best) Response
Somewhat surprisingly, von Neumann’s reaction to Nash’s innovation was indifference. “That’s trivial, you know. That’s just a fixed-point theorem”, he is said to have stated to the young Nash, then 21 years old (Nasar, 1998). Game theorist Kenneth G. Binmore (1940-) later speculated as to why von Neumann and Morgenstern didn’t formulate the generalization themselves. “They certainly understood that the minimax theorem is true for two-person, zero-sum games if and only if each player’s maximin strategy is a best reply to the maximin strategy of his opponent.” One reason might be that von Neumann saw Nash’s observation as too trivial to be of interest. Typically, real-world strategic situations (including Nash’s bargaining problem) have multiple Nash equilibria, which raises the question of which solution one should choose. For two-person, zero-sum games the answer is irrelevant, because all Nash equilibria are equally satisfactory. “I think von Neumann and Morgenstern saw that the same is not true in general and therefore said nothing at all rather than saying something they perceived as being unsatisfactory. […] A player needs positive reasons for choosing one strategy rather than another, and a rational player has a positive reason for choosing his maximin strategy in a two-person, zero-sum game: it guarantees his security level.”
Nash first shared his idea with his fellow graduate student David Gale, who immediately recognized its relevance. “He had a concept that generalized to disarmament”, he later stated. Tucker, Nash’s thesis advisor, was surprised that Nash had produced another result so quickly, but less enthusiastic about the result itself. “Whether or not this was of any interest to economists wasn’t known”.
Nash’s idea resulted in three journal papers, in addition to his Ph.D. thesis. The three articles contain three different proofs of the existence of Nash equilibria. The first, entitled ‘Equilibrium Points in N-person Games’ (1950b), consists of a note Nash and Gale drafted for the Proceedings of the National Academy of Sciences. The second, ‘Non-Cooperative Games’ (1951), was published in the Annals of Mathematics Vol. 54 (2). In ‘Two-person Cooperative Games’ (1953), published in Econometrica 21, Nash extended his work on the bargaining problem (Nash, 1950a) to a wider class of situations in which threats can play a role (Kuhn et al, 2002).
In 1994, the result earned Nash the Nobel Memorial Prize in Economic Sciences.
“When I was a fledgling mathematician, with the opportunity to talk to world-class mathematicians, I noticed something that still puzzles me. I could tell the difference between hard problems and easy problems, but the great mathematicians at whose feet I sat seemed to find it no easier to evaluate a standard integral than to invent a totally new way of thinking about the world. In brief, although I agree that it was important to economists that someone should show them how to use such tools as the Kakutani fixed point theorem, I continue to think that the really important contribution that Nash made to noncooperative game theory was to provide a conceptual framework for the subject whose creation may have seemed an effortless first step to him, but looked like an unbridgeable gulf to his predecessors”.
- Excerpt, “Introduction”. In: Essays on Game Theory* (1996)
Related Privatdozent Essays
About
At the time of the writing of this essay, the Privatdozent newsletter goes out to 11,774 subscribers via Substack.
References
Binmore, K. 1996. ‘Introduction’. In: Nash, J.F. Essays on Game Theory*. Edward Elgar Publishing.
Dimand, M.A. & Dimand, R.W. 1996. The History of Game Theory, Volume I: From the Beginnings to 1945. Routledge.
Dixit, A.K. & Nalebuff, B.J. 1991. Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life*. W.W. Norton & Company.
Leonard, R. J. 1992. ‘Creating a Context for Game Theory’. In: Weintraub, E.R. (ed.) Towards a History of Game Theory*. Duke University Press.
Kuhn, H., 2002. Introduction. In: The Essential John Nash*. Princeton University Press.
Nasar, S. 1998. A Beautiful Mind*. Simon & Schuster. New York, NY.
Varian, H.R. 2006. Intermediate Microeconomics: A Modern Approach*. W.W. Norton & Company. New York, NY.
* This is an Amazon Affiliate Link
1. It seems that Danish physicist Aage Bohr had brought Hein’s version of the game to Princeton in the early 1940s, and thus that Nash might have heard of or seen the game prior to his re-discovery.