
In the early 1930s, the provincial town of Princeton, New Jersey, was already well on its way to becoming a hotbed of mathematical exploration. Although plans for an Institute for Advanced Study (IAS) had been in the making for a while, 1930 would be the year of its founding and 1933 the year of its opening. Until 1938, “the Institute” was housed in the same building as the mathematics department of nearby Princeton University, then called Fine Hall (now Jones Hall). On its permanent faculty “of unrivalled prestige” were “the last great universal mathematician” Hermann Weyl (1885-1955) and an elite group of topologists, including Oswald Veblen (1880-1960), James Alexander (1888-1971) and Marston Morse (1892-1977). Topologist Solomon Lefschetz (1884-1972) was there as well, although formally affiliated with Princeton University, as was Veblen’s former Ph.D. student Alonzo Church (1903-95), who had been on the university’s faculty since 1929.
In addition, there were various European émigrés associated with the IAS, Princeton University, or both. Albert Einstein (1879-1955) and John von Neumann (1903-1957) were among the first permanent faculty members at the IAS. Einstein first came to Princeton in 1932, while von Neumann had been a visiting professor at the university from 1930 to 1933, when he was offered his IAS professorship.
Eugene Wigner (1902-95) was recruited at the same time as von Neumann and stayed in Princeton until 1936. Their mutual acquaintances Wolfgang Pauli (1900-58) and Niels Bohr (1885-1962) also held visiting professorships, the former in 1935 and the latter in 1938-39. Logician Kurt Gödel (1906-78) first went to Princeton as a visiting fellow in 1933-34. Due partly to his developing mental illness, he returned to Austria in 1934 and visited the IAS again only briefly in 1935. Following several stays in mental institutions, Gödel was invited back for the academic year 1938-39 by von Neumann. He stayed from October 1938 to June 1939 before finally emigrating to assume a faculty position at the IAS in 1940. Alan Turing (1912-54) first arrived in 1936, seeking to obtain a Ph.D. at Princeton under the supervision of Church. He initially intended to stay for a year, but the offer of a Procter Visiting Fellowship in 1937 persuaded him to remain until 1938. In addition to Turing, Church also supervised the Ph.D. students Stephen C. Kleene (1909-94) and J. Barkley Rosser (1907-89), who both graduated in 1934.
Origins of the Entscheidungsproblem (1928)
Turing was 23 years old when, in 1935, he attended a University of Cambridge course entitled “Foundations of Mathematics”. The mathematician giving the course was Max Newman (1897-1984), and among its topics was the Entscheidungsproblem (“decision problem”): the question of “whether it is possible to provide a decision procedure that ‘allows one to decide the validity of a sentence’” (Soare, 2013). The question had been formally posed by David Hilbert (1862-1943) and his doctoral student Wilhelm Ackermann (1896-1962) as a challenge to the mathematical community. The history of the problem, however, dates back to Gottfried Leibniz (1646-1716).

After successfully constructing one of the first mechanical calculators (the “Stepped Reckoner”), Leibniz in the early 1700s began working on a more general machine that could manipulate symbols in order to determine the truth values of mathematical statements, a conceptual machine later referred to as a “philosopher’s stone” by logician Heinrich Behmann (1891-1970). For the answers of such a machine to be trustworthy, the foundations of mathematics would need to be provably robust, that is, free of paradoxes and inconsistencies. The establishment of such a foundation was the goal of Hilbert’s program: to ground all mathematical theories in a finite and complete set of axioms and to prove that these axioms were consistent.
In continuation of this ‘formalist’ program to provide a robust foundation for mathematics, Hilbert at the 1928 International Congress of Mathematicians in Bologna formulated three challenges, the third of which became known as the Entscheidungsproblem. Hilbert’s three challenges were (as restated by Stephen Hawking in 2005):
To prove that all true mathematical statements could be proven, that is, the completeness of mathematics.
To prove that only true mathematical statements could be proven, that is, the consistency of mathematics, and:
To prove the decidability of mathematics, that is, the existence of a decision procedure to decide the truth or falsity of any given mathematical proposition.
Or, in Hilbert’s own words:
“Es soll ein Verfahren gefunden werden, welches es ermöglicht, über die Gültigkeit eines gegebenen logischen Ausdrucks durch endlich viele Operationen zu entscheiden.”
which translates to:
"A procedure is to be found which allows one to decide the validity of a given logical expression through a finite number of operations."
At one point Hilbert called the challenge the “fundamental problem of mathematical logic”. As a formalist, he no doubt believed that such a decision procedure (an algorithm) existed, and hence that, in the spirit of Leibniz, a machine could one day be built to solve all mathematical problems. This despite the fact that the so-called “naive” set theory of Georg Cantor (1845-1918) had by this point been shown to be riddled with paradoxes (see essay below).
Indeed, as late as 1922, Hilbert described how mathematics had its reputation “lost due to the paradoxes of set theory” (Hilbert, 1922). The reception of his challenge by the mathematical community in 1928 was thus generally positive. Moses Schönfinkel (1888-1942) thanked Hilbert for giving mathematicians “the courage and boldness to dare to touch” this “problem of ‘solving all problems’”. Already the next year, Schönfinkel and Paul Bernays (1888-1977) presented a first (partial) solution for the case in which formulae containing at most two individual variables are considered. In the following years, partial solutions for other fragments of the predicate calculus were presented by László Kalmár (1905-76), Frank Ramsey (1903-30) and Jacques Herbrand (1908-31). The latter two both died tragically young before the general case was settled.
Enter Kurt Gödel
The 22-year-old Kurt Gödel was in the audience in Bologna when Hilbert presented both the Entscheidungsproblem (the third challenge) and the ‘completeness question’ for first-order logic (the first challenge).[1] The latter question was also included in Hilbert and Ackermann’s introduction to first-order logic in their book Grundzüge der theoretischen Logik* (“Principles of Mathematical Logic”). There they ask:
"Are the axioms of a formal system sufficient to derive every statement that is true in all models of the system?"
This problem became the topic of Gödel’s doctoral work, which he completed within a year at age 23. His dissertation, supervised by Hans Hahn (1879-1934), presented what is now known as the completeness theorem, establishing a correspondence between semantic truth and syntactic provability in first-order logic—a fundamental result which in 1930 earned Gödel both his Ph.D. and a publication by the Vienna Academy of Sciences. His result, in line with Hilbert’s program, showed the great man and his disciples (including Bernays, Ackermann, von Neumann and others) that there was indeed hope for a future in which mathematical questions could be settled through the use of language and syntax, where “anything true in all models is provable”, perhaps in an automated fashion by a machine akin to that envisioned by Leibniz. Hilbert himself had reservations, especially with regard to the “higher domains” of mathematics such as set theory.
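In modern notation (not Gödel’s own 1930 formulation), the result can be stated in a single line: for first-order logic, semantic consequence and syntactic derivability coincide.

```latex
% Gödel's completeness theorem for first-order logic, in modern notation:
% a sentence \varphi is true in every model of a theory T
% precisely when it can be derived from T in the proof calculus.
\[
  T \models \varphi \quad\Longleftrightarrow\quad T \vdash \varphi
\]
```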
This hope provided by Gödel, though, would not last long. Presenting at the Second Conference on the Epistemology of the Exact Sciences in Königsberg later that same year, Gödel demonstrated an inherent flaw in Hilbert’s program by showing that any consistent formal system F rich enough to express elementary arithmetic is incomplete: there will always be statements in F which can neither be proved nor disproved in F, regardless of which and how many axioms F is derived from. As Soare (2013) recounts, von Neumann (there as a representative of Hilbert’s program) happened to be in the audience when Gödel, then 24 years old, took the podium to present his result. Von Neumann immediately recognized that Hilbert’s program was over.
The Entscheidungsproblem, however, remained open.
What is a decision procedure?
Before the Entscheidungsproblem could be settled, mathematicians first needed to agree on what, mathematically, the “decision procedure” Hilbert and Ackermann asked for actually was. As Martin Davis (1928-2023) argued (van Heijenoort, 1967):
“Suppose we are given a "calculational procedure" that consists of 1. A set of axioms; and 2. A logical conclusion written in first-order logic. Gödel’s doctoral dissertation proved that [Boolean logic is] complete […] in the sense that every valid formula is provable”
The Entscheidungsproblem asks whether there might exist a general “calculational procedure” that could determine, in a purely mechanical fashion, whether a given conclusion can be derived from its premises. If such a procedure did exist, as Davis pointed out, "In principle, an algorithm for [the] Entscheidungsproblem would have reduced all human deductive reasoning to brute calculation".
Correspondence (1932-34)
Among the logicians working to solve the Entscheidungsproblem in the early 1930s was Veblen’s former Ph.D. student Alonzo Church. In two papers published in 1932-33, Church presented a system of logic (a “formalism”) out of which what is now referred to as the λ-calculus would later emerge. Although there are few records of Gödel’s activities in this period, we know that Gödel wrote Church in June of 1932 to pose two questions about the system of logical postulates that Church had proposed in his two papers (Dawson, 2005):
How is it possible to prove absolute existence statements (such as the Axiom of Infinity) in his system; and
Assuming the system to be consistent, couldn’t its primitive notions be interpreted in type theory or set theory?
Church responded a few weeks later with an outline of a proof of the Axiom of Infinity. At the time, he appears to have been under the impression that his new system of logic might escape the implications of Gödel’s incompleteness results. This was because he “thought that Gödel[‘s] […] theorem[s] might be found to depend on peculiarities of type theory”, and thus that if one devised a new system, one might avoid the “unfortunate restrictiveness” that Gödel’s incompleteness results had demonstrated. Unfortunately, as Church’s Ph.D. students Kleene and Rosser would show, Church’s system of logic was indeed provably inconsistent.[2] The result forced Kleene to rewrite his dissertation. As Dawson (2005) notes, by the time of Gödel’s arrival in the fall of 1933, Kleene and Rosser’s result—sometimes referred to as the Kleene–Rosser paradox—was already “circulating among the logicians in Princeton”.

Around the same time as Gödel and Church were corresponding, Gödel was working on a class of functions now called primitive recursive functions. Gödel had introduced the concept in his 1931 paper, building on the notion of recursive functions previously studied by, among others, Richard Dedekind (1831-1916), Giuseppe Peano (1858-1932) and Thoralf Skolem (1887-1963), as well as by Hilbert and Ackermann themselves (Davis, 1982). In the period February to May 1934, Gödel lectured on the concept at the Institute for Advanced Study. Kleene and Rosser recorded notes from the lectures, which have been preserved. An important takeaway from the lectures is Gödel’s remark that primitive recursive functions “have the important property that, for each given set of values of the arguments, the value of the function can be computed by a finite procedure”. To this remark, he added the “suggestive” footnote (Davis, 1982)[3]:
“The converse seems to be true, if besides [primitive] recursions … recursions of other forms (e.g., with respect to two variables simultaneously) are admitted. This cannot be proved, since the notion of finite computation is not defined, but it serves as a heuristic principle.”
Gödel knew that the primitive recursive functions of his 1931 paper did not include all computable functions (the Ackermann function, for instance, is computable but not primitive recursive) and so would be an insufficient foundation for a general definition of effective calculability, and hence for settling the Entscheidungsproblem. To account for this, Gödel in the spring of 1934 expanded on a concept introduced by Jacques Herbrand to define the Herbrand-Gödel recursive functions (now known simply as recursive functions), a broader class than the ‘primitive’ recursive functions of his 1931 paper, and one that would play a central role in resolving the question.
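To make the schema concrete, here is a minimal sketch in Python (my own illustration, not drawn from Gödel’s lectures or from any source cited here). A primitive recursive definition supplies a base case g and a step case h; the bounded loop that unfolds them is the kind of ‘finite procedure’ Gödel’s remark alludes to.

```python
# A minimal sketch (an illustration, not Gödel's notation) of the primitive
# recursion schema: given a base function g and a step function h, define
#   f(x, 0)     = g(x)
#   f(x, n + 1) = h(x, n, f(x, n))

def primitive_recursion(g, h):
    """Return the function f obtained from g and h by primitive recursion."""
    def f(x, n):
        acc = g(x)                 # f(x, 0) = g(x)
        for i in range(n):         # unfold f(x, i + 1) = h(x, i, f(x, i))
            acc = h(x, i, acc)
        return acc
    return f

# Addition: add(x, 0) = x and add(x, n + 1) = add(x, n) + 1
add = primitive_recursion(lambda x: x, lambda x, i, acc: acc + 1)

# Multiplication: mul(x, 0) = 0 and mul(x, n + 1) = mul(x, n) + x
mul = primitive_recursion(lambda x: 0, lambda x, i, acc: add(acc, x))

assert add(3, 4) == 7
assert mul(3, 4) == 12
```

The loop is always bounded in advance by the argument n, which is why a function like Ackermann’s, whose recursion depth cannot be bounded this way, escapes the schema.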
Church’s λ-calculus
Although Church’s new system had failed to escape the implications of Gödel’s incompleteness results, the notation he had developed for it survived unscathed (Davis, 1982). In his notation, the λ-operator (pronounced “lambda”) converts an expression containing free variables into one which denotes a function. Thus, although his students Kleene and Rosser ended Church’s hopes for an ambitious new formalism devoid of incompleteness, their work did give rise to a demonstrably consistent subsystem, now referred to as the λ-calculus (Church, 1935).
Church’s λ-calculus was thus the first formalism that made it possible to define a class of functions coinciding with those that are “effectively calculable” (i.e., those that can be evaluated by means of an algorithm, such as a computer program), and thereby also provided one of the first definitions of “computability”:
First definition of computability (Church, 1934)
A function is effectively calculable if and only if it is λ-definable.
Church and Kleene had been working together on λ-definability since 1931. By 1934, they had shown that all the usual number-theoretic functions were λ-definable.
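To give a flavour of what λ-definability means, here is a small sketch using Python’s lambda expressions (an illustration of the idea, not Church and Kleene’s actual notation): a natural number n is represented as the ‘Church numeral’ that applies a function n times, and arithmetic on numerals becomes pure function application.

```python
# A sketch of λ-definable arithmetic via Church numerals in Python lambdas.
# The numeral for n is the higher-order function  λf. λx. f(f(... f(x)))  (n times).

zero = lambda f: lambda x: x                        # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))     # apply f one extra time

add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # m + n applications
mul = lambda m: lambda n: lambda f: m(n(f))                   # m * n applications, by composition

def to_int(n):
    """Decode a Church numeral by counting how many times it applies its argument."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
four = succ(three)

assert to_int(add(three)(four)) == 7
assert to_int(mul(three)(four)) == 12
```

Showing that all the usual number-theoretic functions could be encoded this way was not trivial; the predecessor function, in particular, was famously hard to λ-define.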
Church and Gödel first met in person in early 1934, when the latter was visiting the IAS for the first time. In a letter to Kleene dated November 29th 1935, Church gives his account of their encounter (Davis, 1982; Copeland & Shagrir, 2013):
“In regard to Gödel and the notions of recursiveness and effective calculability, the history is the following. In discussion [sic] with him the notion of lambda-definability, it developed that there was no good definition of effective calculability. My proposal that lambda-definability be taken as a definition of it he regarded as thoroughly unsatisfactory.”
The latter sentence has been the topic of much debate ever since.
After attending Gödel’s 1934 lectures at the IAS, Church and Kleene eventually revised their definition of computability to be based on Herbrand-Gödel recursive functions (Soare, 2013):
Second definition of computability (Church, 1936). A function on the positive integers is effectively calculable if and only if it is recursive.
The two would later be able to show that the λ-definable functions of their first definition and the Herbrand-Gödel recursive functions of their second were formally equivalent. Gödel’s opinion of Church’s thesis would change over time, and by 1938 he embraced it as a “mathematically satisfactory definition of computability” (Copeland & Shagrir, 2013). His change of heart was likely due to the work of another young logician, whom he never actually met.
Turing’s View
Unfortunately, by the time Turing arrived in Princeton in September of 1936, Gödel had returned to Vienna. Turing was there to pursue a Ph.D. with Church as his advisor. Before he set foot in America, however, Turing had written a paper showing that the solution to the Entscheidungsproblem was negative: no decision procedure can exist that “allows one to decide the validity of a sentence” (Soare, 2013). As George Dyson recounted in his book Turing’s Cathedral* (2012):
“After a full year of work, Turing gave Newman a draft of his paper in April of 1936. ‘Max's first sight of Alan's masterpiece must have been a breathtaking experience, and from this day forth Alan became one of Max's principal protégés’”
The page proofs for Turing’s new paper ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ arrived in Princeton on October 3rd, 1936. The paper was published on November 12th in the Proceedings of the London Mathematical Society. This seminal paper introduced what is now known as the Turing machine—a theoretical model of computation that captures the limits of what can be computed, in a way entirely different from the approach being developed by Church and Kleene:
Turing's definition of computability (1936). A function is intuitively computable (effectively computable) if and only if it is computable by a Turing machine; that is, an automatic machine (a-machine).
Though equivalent in power to Church’s formalism, Turing's model abstracted the process of computation in such a way that it could be analyzed mathematically. His abstraction gave a clear standard for distinguishing between problems that computers can and cannot solve (see essay below).
It was Turing’s assertion that any function which can be computed by mechanical means can be computed by a Turing machine. The claim is now known as the 'Church-Turing thesis', since the two approaches have been proven equivalent, in terms of their power of expression, with regard to computability.
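To make the idea of an a-machine concrete, here is a minimal simulator sketched in Python (my own illustration; the increment machine and its transition table below are hypothetical examples, not Turing’s): a finite table of transitions reads and writes symbols on an unbounded tape, one cell at a time.

```python
# A minimal sketch (an illustration, not Turing's own formalism) of an
# "a-machine": a finite transition table operating on an unbounded tape.
# The example machine below increments a binary number written on the tape.

def run_turing_machine(transitions, tape, state="start", accept="halt", max_steps=10_000):
    """Run a one-tape Turing machine given as a dict:
    (state, symbol) -> (new_state, symbol_to_write, head_move in {-1, 0, +1})."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank ' '
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(cells[i] for i in sorted(cells)).strip()
        symbol = cells.get(head, " ")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    raise RuntimeError("machine did not halt within max_steps")

# Hypothetical example: increment a binary number (head starts at the leftmost digit).
increment = {
    ("start", "0"): ("start", "0", +1),   # scan right to the end of the number
    ("start", "1"): ("start", "1", +1),
    ("start", " "): ("carry", " ", -1),   # fell off the right edge: begin carrying
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, keep carrying left
    ("carry", "0"): ("halt",  "1", 0),    # 0 + carry = 1, done
    ("carry", " "): ("halt",  "1", 0),    # ran past the left edge: write a leading 1
}

assert run_turing_machine(increment, "1011") == "1100"   # 11 + 1 = 12
```

Everything the machine “knows” is contained in its finite transition table; the unbounded tape is what allows such a simple device, in principle, to carry out any computation at all.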
We do not know precisely when Gödel became aware of Turing’s paper, but it is safe to assume that it happened in 1936-37. At some point around this time (possibly in 1938), Gödel wrote of the work:
“[Turing] has shown that the computable functions defined in this way are exactly those for which you can construct a machine with a finite number of parts which will do the following thing. If you write down any number n₁, …, nᵣ on a slip of paper and put the slip into the machine and turn the crank, then after a finite number of turns the machine will stop and the value of the function for the argument n₁, …, nᵣ will be printed on the paper” (Copeland & Shagrir, 2013, p. 10).
The precise nature of Gödel’s admiration for Turing’s solution over Church’s is the topic of Jack Copeland and Oron Shagrir’s chapter Turing versus Gödel on Computability and the Mind in the wonderful 2013 book Computability: Turing, Gödel, Church, and Beyond*. The nuances of this debate are plentiful. I tend towards the interpretation Gödel himself presented in a 1946 Princeton lecture, in which he stated:
“Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing’s computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen.”
Indeed, despite his own mastery of formalist methods, Gödel preferred the solution that did not depend on, and was not limited by, a (generally arbitrarily) chosen formal system, but rather one of a more abstract and mechanical nature, which for him was likely a healthier inclination. As Gödel wrote in his 1951 Gibbs lecture:
“The greatest improvement was made possible through the precise definition of the concept of finite procedure, which plays a decisive role in these [incompleteness] results. There are several different ways of arriving at such a definition, which, however, all lead to exactly the same concept. The most satisfactory way, in my opinion, is that of reducing the concept of finite procedure to that of a machine with a finite number of parts, as has been done by the British mathematician Turing.” — Gödel (1951, p. 304-305)
Epilogue
“There is of course no such theorem [that the Entscheidungsproblem has a positive solution] and this is very fortunate, since if there were we should have a mechanical set of rules for the solution of all mathematical problems, and our activities as mathematicians would come to an end.” — G.H. Hardy (1929)
In addition to Dawson’s book Logical Dilemmas* about the life of Gödel, for this topic I highly recommend Copeland, Posy and Shagrir’s book Computability*.
I hope you have a wonderful weekend!
Best,
Jørgen
About
At the time of writing, the Privatdozent newsletter goes out to 11,747 subscribers via Substack.
References
Copeland, J. (2004). The Essential Turing*. Oxford University Press.
Copeland, J. & Shagrir, O. (2013). Turing versus Gödel on Computability and the Mind*. In: Copeland, J., Posy, C. & Shagrir, O. (eds.), Computability: Turing, Gödel, Church, and Beyond. MIT Press.
Davis, M. (1982). Why Gödel Didn't Have Church's Thesis. Information and Control, 54(1-2), pp. 3-24.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I (“On Formally Undecidable Propositions of Principia Mathematica and Related Systems”). Monatshefte für Mathematik und Physik.
Gödel, K. (1964). Postscriptum.
Hawking, S. (ed.) (2007). God Created the Integers: The Mathematical Breakthroughs That Changed History*. Hachette UK.
Sigmund, K. (2017). Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science. Basic Books. New York, NY. p. 215.
Soare, R.I. (2013). Interactive Computing and Relativized Computability.
Van Heijenoort, J. (1967). From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (Vol. 9)*. Harvard University Press.
* This is an Amazon Affiliate Link
1. Oddly, also present in Bologna during Hilbert’s address was the economist Oskar Morgenstern, who later in life would become Gödel’s “protector” and one of his closest friends. Read more here: https://www.privatdozent.co/i/40211398/a-fascination-with-mathematics
2. The Kleene/Rosser proof is long and intricate (Curry, 1941), but essentially they show that Church’s system is inconsistent in the sense that every formula which can be expressed in its notation can also be proved, and thus that false statements can be proved as well.
3. Despite its suggestive tone, Gödel, responding to an inquiry by Martin Davis in 1965, denied that his 1934 paper anticipated what would later become the Church-Turing thesis ("a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine").