
# John F. Nash Jr.'s Game of "Hex" (1949)

### “Oh, by the way, what was it that they were playing?” asked von Neumann. “Nash,” answered Tucker, allowing the corners of his mouth to turn up ever so slightly.

By the late 1940s, the favorite pastime of faculty and graduate students in Fine Hall at Princeton University was board games, including the famous Go and chess, as well as the less famous Kriegspiel. During this time, ‘beautiful mind’ John Forbes Nash Jr. (1928-2015) invented his own game, “Nash”, later marketed by Parker Brothers under the name “Hex”. This essay tells the story of its invention.

**Princeton University (1948–51)**

Nash entered graduate school when he was 20 years old, three years after leaving his hometown of Bluefield, West Virginia. At the time, Princeton’s math department was filled with brilliant minds, led by Solomon Lefschetz (1884-1972), who jointly with Ralph Fox (1913-73) and Norman Steenrod (1910-71) headed the foremost topology research group in the country. Emil Artin (1898-1962) led algebra. Lefschetz’s student Albert W. Tucker (1905-95) led the group researching the newly developed game theory, inspired by the 1944 book *Theory of Games and Economic Behavior* by John von Neumann (1903-57) and economist Oskar Morgenstern (1902-77).

Princeton’s math department in the 40s and 50s has since become somewhat legendary in mathematical circles. As Sylvia Nasar recounted in 1998: *“Fine Hall is, I believe, the most luxurious building ever devoted to mathematics,”* [according to] one European émigré. [...] *“A country club for math, where you could take a bath.”*

"Its cornerstone contains a lead box with copies of works by Princeton mathematicians and the tools of the trade — two pencils, one piece of chalk, and, of course, an eraser. Designed by Oswald Veblen [...] it was meant to be a sanctuary that mathematicians would be ‘loath to leave’. The dim stone corridors that circled the structure were perfect for both solitary pacing and mathematical socializing. The nine ‘studies’ — not offices! — for senior professors had carved paneling, hidden file cabinets, blackboards that opened like altars, oriental carpets, and massive, overstuffed furniture. [...] Each office was equipped with a telephone and each lavatory with a reading light. Its well-stocked third-floor library, the richest collection of mathematical journals and books in the world, was open twenty-four hours a day. Mathematicians with a fondness for tennis (the courts were nearby) didn’t have to go home before returning to their offices — there was a locker room with showers."

— Excerpt, *A Beautiful Mind* by Sylvia Nasar (1998)

John Nash was part of the clique of mathematicians and graduate students advancing the nascent discipline of game theory under Tucker. It’s fair to say that they did so in the purest mathematical sense, i.e. generally uninterested in relating their research to applications in the real world. Economist Martin Shubik (1926-2018), a student of Morgenstern and a personal friend of Nash, later described:

"The graduate students and faculty in the mathematics department interested in game theory were both blissfully unaware of the attitude in the economics department, and even if they had known it, they would not have cared. The contrast of attitudes between the economics department and the mathematics department was stamped on my mind soon after arriving at Princeton. The former projected an atmosphere of dull business-as-usual conservatism of a middle league conventional Ph.D. factory; there were some stars but no sense of excitement or challenge. The latter was electric with ideas and the sheer joy of the hunt. Psychologically they dwelt on different planets. If a stray ten-year-old with bare feet, no tie, torn blue jeans and an interesting theorem had walked into Fine Hall at tea time, someone would have listened. When von Neumann gave his seminar on his growth model, with few exceptions, the serried ranks of Princeton Economics could scarce forbear to yawn."

— Excerpt, *Finding Equilibrium* by Düppe and Weintraub (2014, p. 94)

The head of the group, Tucker, would go on to supervise virtually all of the future top game theorists at Princeton, including David Gale (1921-2008) and 2012 Nobel laureate Lloyd Shapley (1923-2016), in addition to, of course, Nash.

## The Game of “Nash”

Nash is played on an n×n rhombus-shaped grid of n² hexagonal spaces (typically 14×14), using Go stones of black and white. The two pairs of opposite board edges are likewise coloured black and white. Two players alternate placing pieces inside the hexagonal spaces — much like in Go. Once a piece is played, it can never be moved. The goal of each player is to construct a connected chain of his or her stones from one edge of the board to the opposite edge.

Nash was the first to prove that Hex cannot end in a draw. This non-trivial result is called the “Hex theorem”. He didn’t publish the proof, but in 1952 included it in a RAND technical report entitled *Some Games and Machines for Playing Them*. The proof was later shown (by Gale) to be equivalent to the celebrated Brouwer fixed-point theorem:
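The rules and the no-draw theorem can be made concrete in code. Below is a minimal sketch (my own illustration, not from Nash’s report): the board is an n×n array with the standard six-neighbour hex adjacency, Black (‘B’) aims to connect the top and bottom edges, and White (‘W’) the left and right. Filling boards at random and checking connections then illustrates the Hex theorem empirically: a completely filled board always has exactly one winner — never both, never neither.

```python
import random
from collections import deque

# The six neighbours of a hex cell (r, c) on a rhombus-shaped board.
NEIGHBOURS = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]

def has_won(board, player):
    """True if `player` ('B' or 'W') owns a connected chain between their
    two edges: 'B' connects top to bottom, 'W' connects left to right."""
    n = len(board)
    if player == 'B':
        starts = [(0, c) for c in range(n) if board[0][c] == 'B']
        done = lambda r, c: r == n - 1
    else:
        starts = [(r, 0) for r in range(n) if board[r][0] == 'W']
        done = lambda r, c: c == n - 1
    seen, queue = set(starts), deque(starts)
    while queue:
        r, c = queue.popleft()
        if done(r, c):
            return True
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen
                    and board[nr][nc] == player):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Empirical check of the Hex theorem: on a completely filled board,
# exactly one of the two players has a winning chain.
random.seed(1)
n = 14
for _ in range(100):
    cells = ['B'] * (n * n // 2) + ['W'] * (n * n - n * n // 2)
    random.shuffle(cells)
    board = [cells[i * n:(i + 1) * n] for i in range(n)]
    assert has_won(board, 'B') != has_won(board, 'W')
print("100 random full boards: exactly one winner each time")
```

The breadth-first search simply follows stones of one colour from one of the player’s edges and asks whether it can reach the other.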

The Brouwer fixed-point theorem (Brouwer, 1910). Let K be a topological space homeomorphic to a compact, convex subset of Rⁿ and let f ∈ C(K, K). Then f has at least one fixed point.

A fixed point of f is a point x for which f(x) = x. We can illustrate the meaning of the theorem informally for the case of the plane by imagining two pieces of grid paper of equal size with coordinate systems on them: lay one of the pieces of paper flat on a table. Lay the other on top and crumple it without tearing it, so that it does not reach outside the flat one. There will then be at least one coordinate point of the crumpled paper that lies directly above its corresponding point on the flat one. An analogous statement is that if you take an ordinary map of a country and lay it out on a table inside that country, there will always be a “You Are Here” point on the map which lies directly above the point in the country it represents. In three dimensions, a consequence of the Brouwer fixed-point theorem is that no matter how much you stir a cocktail in a glass (or a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you stirred.
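For a concrete one-dimensional instance, consider the continuous map f(x) = cos(x), which sends the compact, convex interval [0, 1] into itself, so Brouwer’s theorem guarantees a fixed point. The theorem asserts existence only; for this particular map (my own example, chosen for convenience), simple repeated iteration also happens to converge to the fixed point, roughly 0.739:

```python
import math

# f is continuous and maps the compact, convex interval [0, 1] into
# itself, so Brouwer's theorem guarantees a point with f(x) = x.
f = math.cos

# Brouwer asserts existence only; for this particular map, repeated
# iteration also converges to the fixed point (~0.739085).
x = 0.5
for _ in range(100):
    x = f(x)

print(x, abs(f(x) - x) < 1e-9)
```

Note that convergence of iteration is a property of this particular map, not something Brouwer’s theorem promises in general.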

Brouwer’s fixed-point theorem is one among many theorems concerning fixed points, but it is particularly well known due to its use in many fields of mathematics, including John von Neumann’s famous early treatise on general equilibrium theory (von Neumann, 1937).

The theorem is also worth mentioning because it was used in Nash’s first proof of the Nash equilibrium, for which he was eventually awarded the 1994 Nobel Memorial Prize in Economic Sciences. Its generalization, the Kakutani fixed-point theorem, was later used by Nash in a more elegant proof of the same result.

In the interest of completeness, the Kakutani fixed-point theorem states:

The Kakutani fixed-point theorem (Kakutani, 1941). Let S be a non-empty, compact and convex subset of some Euclidean space Rⁿ, and let φ: S → 2ˢ be a set-valued function on S with a closed graph and the property that φ(x) is non-empty and convex for all x ∈ S. Then φ has a fixed point.

To understand its properties, think for instance of a set-valued function φ defined on the closed interval [0, 1] that maps a point x to the closed interval [1 − x/2, 1 − x/4]. A fixed point of such a map is a point x with x ∈ φ(x), and by Kakutani’s theorem φ must have one. Plotting the graph of φ together with the line y = x, every point where the line crosses the graph is a fixed point; here these form the whole interval [2/3, 4/5], so there are infinitely many fixed points in this particular case.
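A short numerical check of this example (the grid resolution and tolerance are arbitrary choices of mine): scanning [0, 1] for points with x ∈ φ(x) recovers the interval of fixed points, [2/3, 4/5].

```python
# The set-valued map from the example: phi(x) = [1 - x/2, 1 - x/4] on [0, 1].
def phi(x):
    return (1 - x / 2, 1 - x / 4)

EPS = 1e-12  # tolerance for floating-point comparisons at the endpoints

# A fixed point of a set-valued map is a point x with x in phi(x).
fixed = [x for x in (i / 10000 for i in range(10001))
         if phi(x)[0] - EPS <= x <= phi(x)[1] + EPS]

print(min(fixed), max(fixed))  # ~2/3 and ~4/5: the interval [2/3, 4/5]
```

The endpoints agree with the algebra: x ≥ 1 − x/2 gives x ≥ 2/3, and x ≤ 1 − x/4 gives x ≤ 4/5.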

In addition to his fixed-point proof that Hex cannot end in a draw, Nash also provided a *reductio ad absurdum* existence proof (1949) that the first player in Hex on a board of any size has a winning strategy. Crucially, however, the proof gives no indication of a correct strategy for play. Nasar writes the following about this discovery:

"One morning in late winter 1949, Nash literally ran into the much shorter, wiry Gale on the quadrangle inside the Graduate College. “Gale! I have an example of a game with perfect information,” he blurted out. “There’s no luck, just pure strategy. I can prove that the first player always wins, but I have no idea what his strategy will be. If the first player loses at this game, it’s because he’s made a mistake, but nobody knows what the perfect strategy is.”"

— Excerpt, *A Beautiful Mind* by Sylvia Nasar (1998)

The same argument applies to a number of other games and has come to be called the strategy-stealing argument. Here is a highly condensed, informal version of Nash’s proof for Hex:

Hex first-player winning strategy existence proof (Nash, ~1949)

1. Either the first or the second player must win; therefore there must be a winning strategy for either the first or the second player.

2. Let us assume that the second player has a winning strategy.

3. The first player can now adopt the following defense: he makes an arbitrary move. Thereafter he plays the winning second-player strategy assumed above. If, in playing this strategy, he is required to play on the cell where his arbitrary move was made, he makes another arbitrary move. In this way he plays the winning strategy with one extra piece always on the board.

4. This extra piece cannot interfere with the first player's imitation of the winning strategy, for an extra piece is always an asset and never a handicap. Therefore the first player can win.

5. Because we have now contradicted our assumption that there is a winning strategy for the second player, we are forced to drop this assumption.

6. Consequently, there must be a winning strategy for the first player.
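Nash’s argument is non-constructive, but on very small boards the conclusion can be checked directly by exhaustive search. The sketch below (my own illustration, not Nash’s) solves empty boards of size 1×1 to 3×3 with a memoized minimax, using the convention that ‘B’ moves first and connects top to bottom, and confirms that the first player wins each time:

```python
from collections import deque
from functools import lru_cache

# The six neighbours of a hex cell (r, c) on a rhombus-shaped board.
NEIGHBOURS = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]

def has_won(board, player, n):
    """`board` is a flat length-n*n string; 'B' connects the top and
    bottom rows, 'W' the left and right columns."""
    if player == 'B':
        starts = [(0, c) for c in range(n) if board[c] == 'B']
        done = lambda r, c: r == n - 1
    else:
        starts = [(r, 0) for r in range(n) if board[r * n] == 'W']
        done = lambda r, c: c == n - 1
    seen, queue = set(starts), deque(starts)
    while queue:
        r, c = queue.popleft()
        if done(r, c):
            return True
        for dr, dc in NEIGHBOURS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen
                    and board[nr * n + nc] == player):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

@lru_cache(maxsize=None)
def winner(board, to_move, n):
    """Winner ('B' or 'W') under best play; by the Hex theorem there
    are no draws, so one of the two is always returned."""
    other = 'W' if to_move == 'B' else 'B'
    for i, cell in enumerate(board):
        if cell != '.':
            continue
        child = board[:i] + to_move + board[i + 1:]
        # The mover wins if some move either completes a chain now or
        # leads to a position the opponent cannot win.
        if has_won(child, to_move, n) or winner(child, other, n) == to_move:
            return to_move
    return other

for n in (1, 2, 3):
    print(n, winner('.' * (n * n), 'B', n))  # first player 'B' wins each time
```

Brute force of this kind only reaches tiny boards; for realistic sizes the winning strategy remains unknown, exactly as Nash told Gale.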

## Hex

In Fine Hall, Nash’s game was simply referred to as “Nash” or “John”. The latter name, according to Martin Gardner (1914-2010), referred to the fact that the game could be played on hexagonal bathroom tiles. Unbeknownst to the mathematicians there, the game had been introduced in Europe a few years prior by Danish mathematician Piet Hein (1905-96), who brought it to the Niels Bohr Institute. There is little indication that Nash could have known of Hein’s prior invention, despite accusations to the contrary by Gardner in 1957 (Hayward & Toft, 2019). Hein first called his game ‘con-tac-tix’, but it became known in Denmark as ‘polygon’.

In 1952, following Nash’s proofs that (1) the game cannot end in a draw and (2) the first player always has a winning strategy, albeit an unknown one, the game was introduced by Parker Brothers in the United States. Around the same time, mathematician Claude Shannon (1916-2001) and electrical engineer E. F. Moore (1925-2003) constructed an analogue Hex-playing machine with resistors for edges and lightbulbs for vertices (Shannon, 1953).

I hope you enjoyed this week’s newsletter! As regular readers will note, I am experimenting with shorter, more frequent stories rather than the regular long-form format. Please let me know if you have a strong preference for either.

Thank you, as always, for being a subscriber.

Sincerely,

Jørgen

## Related *Privatdozent* Essays

John von Neumann’s Minimax Theorem, March 26th 2021

Oskar Morgenstern’s Transformation, August 27th 2021

The Unparalleled Genius of John von Neumann, May 19th 2021

The Duties of John von Neumann’s Assistant in the 1930s, July 11th 2021

Turing Uncomputability, August 13th 2021

## About

The *Privatdozent* newsletter currently goes out to 8,058 subscribers via Substack.

## References

**Düppe, T. & Weintraub, E.R. 2014.** *Finding Equilibrium*. Princeton University Press.

**Hayward, R.B. & Toft, B. 2019.** *Hex, Inside and Out: The Full Story*. CRC Press.

**Nasar, S. 1998.** *A Beautiful Mind*. Simon & Schuster.

**Shannon, C. 1953.** Computers and Automata. *Proceedings of the Institute of Radio Engineers* **41**(10): 1234–41.

**von Neumann, J. 1937.** ‘Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes’ in Menger, K. (ed.), *Ergebnisse eines Mathematischen Kolloquiums*. Vienna.

** This essay contains Amazon Affiliate links*
