Does Gödel preclude a workable ToE?

The answer is no, because although a "Theory of Everything" means a computational method of describing any situation, it does not allow you to predict the eventual outcome of the evolution infinitely far into the future; it only lets you plod along, predicting the outcome little by little as you go.

Gödel's theorem is a statement that it is impossible to predict the infinite time behavior of a computer program.

Theorem: Given any precise way of producing statements about mathematics, that is, given any computer program which spits out statements about mathematics, this computer program either produces falsehoods, or else does not produce every true statement.

Proof: Given the program "THEOREMS" which outputs theorems (it could be doing deductions in Peano Arithmetic, for example), write the computer program SPITE to do this:

  1. SPITE prints its own code into a variable R.
  2. SPITE runs THEOREMS, scanning the output for the statement "R does not halt".
  3. If SPITE ever finds this statement, it halts.

If you think about it, the moment THEOREMS says "R does not halt", it is really asserting "SPITE does not halt", because R is SPITE's own code, and then SPITE halts, making THEOREMS into a liar. So if THEOREMS only outputs true theorems, SPITE does not halt, and THEOREMS never proves it. There is no way around it, and it is really trivial.
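Here is a minimal sketch of the construction in Python; the names theorems() and spite() are my own illustrative stand-ins, and the quine step is faked with inspect.getsource rather than written out as a genuine quine.

```python
import inspect

def theorems():
    """Stand-in for THEOREMS: enumerates statements of a formal system forever.
    A real version would output deductions of Peano Arithmetic."""
    while True:
        yield "2 + 2 = 4"  # placeholder theorem stream

def spite():
    # Step 1: put this program's own code into the variable R.
    R = inspect.getsource(spite)
    target = "The program " + repr(R) + " does not halt"
    # Step 2: run THEOREMS and scan its output.
    for statement in theorems():
        # Step 3: if THEOREMS ever asserts that R does not halt, halt at once,
        # which makes THEOREMS wrong.
        if statement == target:
            return
```

If THEOREMS is sound, it can never print the target statement, so spite() runs forever; and precisely for that reason THEOREMS can never prove that it runs forever.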

Its reputation for being complicated is due to the following properties of the logic literature:

Anyway, what I presented is the entire proof of Gödel's theorem, using a modern translation of Gödel's original 1931 method. For a quick review of other results, and for more details, see this MathOverflow answer: https://mathoverflow.net/a/72151/36526.

As you can see, Gödel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time. Physicists do not expect to figure out the eventual behavior of arbitrary systems. What they want to do is give a computer program which will follow the evolution of any given system to finite time.
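To illustrate the contrast, here is a minimal sketch (my own example, with made-up parameters) of the kind of prediction physics actually aims for: stepping a system forward to a finite time, with no claim about its fate as $t \to \infty$.

```python
def evolve(state, step, dt, t_final):
    """Follow a system's evolution to a finite time by repeated small steps."""
    t = 0.0
    while t < t_final:
        state = step(state, dt)  # one finite-time update; says nothing about t -> infinity
        t += dt
    return state

# Example: a damped oscillator stepped forward with the Euler method.
def euler_step(state, dt, k=1.0, gamma=0.1):
    x, v = state
    return (x + v * dt, v - (k * x + gamma * v) * dt)

final_state = evolve((1.0, 0.0), euler_step, dt=0.01, t_final=10.0)
```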

A ToE is like the instruction set of the universe's computer. It doesn't tell you what the output is, only what the rules are. A ToE would be useless for predicting the future; or rather, it is no more useful for prediction than Newtonian mechanics, statistics, and the occasional bit of quantum mechanics already are for the day-to-day world. But it is extremely important philosophically, because when you find it, you have understood the basic rules, and there are no more surprises down beneath.


Incorporating Comments

There were comments on this answer which I will incorporate here, since comments are only supposed to be temporary and some of the observations are useful.

Hilbert's program was an attempt to establish that set theoretic mathematics is consistent using only finitary means. There is an interpretation of Gödel's theorem that goes like this: finitary reasoning is whatever can be formalized in Peano Arithmetic (or a weaker fragment), Gödel's theorem shows that no such system can prove even its own consistency, and therefore the consistency of set theory can never be established by finitary means, so Hilbert's program is dead.

This interpretation is false, and does not reflect Hilbert's point of view, in my opinion. Hilbert left the definition of "finitary" open. I think this was because he wasn't sure exactly what should be admitted as finitary, although I think he was pretty sure of what should not be admitted as finitary:

  1. No real numbers, no analysis, no arbitrary subsets of ${\Bbb Z}$. Only axioms and statements expressible in the language of Peano Arithmetic.
  2. No structure which you cannot realize explicitly and constructively, the way you can an integer. So no uncountable ordinals, for example.

Unlike his followers, he did not say that "finitary" means "provable in Peano Arithmetic", or "provable in primitive recursive Arithmetic", because I don't think he believed this was strong enough. Hilbert had experience with transfinite induction, and its power, and I think that he, unlike others who followed him in his program, was ready to accept that transfinite induction proves more theorems than just ordinary Peano induction.

What he was not willing to accept was axioms based on a metaphysics of set existence, things like the Powerset axiom and the Axiom of Choice. These two axioms produce structures which not only violate intuition, but are not obviously grounded in experience, so that they cannot be verified by intuition.

Those that followed Hilbert interpreted finitary as "provable in Peano Arithmetic" or a weaker fragment, like PRA. Given this interpretation, Gödel's theorem kills Hilbert's program. But this interpretation is crazy, given what we know now.

Hilbert wrote a book on the foundations of mathematics after Gödel's theorem (the Grundlagen der Mathematik, written with Bernays), and I wish it were translated into English, because I don't read German. I am guessing that he says in there what I am about to say here.


What Finitary Means

The definition of finitary is completely obvious today, after 1936. A finitary statement is a statement about computable objects, things that can be represented on a computer. This is equivalent to saying that a finitary statement is a proposition about integers which can be expressed (not necessarily proved) in the language of Peano Arithmetic.

This includes integers, finite graphs, text strings, symbolic manipulations (basically, anything that Mathematica handles), and it includes ordinals too. You can represent the ordinals up to $\epsilon_0$, for example, using a text-string encoding of their Cantor normal form.
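To make this concrete, here is a minimal sketch of one possible encoding (my own choice, not the only one) of ordinals below $\epsilon_0$ in Cantor normal form, as nested lists a computer can store and compare.

```python
# An ordinal below epsilon_0 in Cantor normal form,
#   w^a1 * c1 + w^a2 * c2 + ...  with a1 > a2 > ... and positive integers ci,
# is encoded as a list of (exponent, coefficient) pairs, largest exponent first,
# where each exponent is itself such a list.

ZERO  = []                                   # 0
ONE   = [(ZERO, 1)]                          # w^0 * 1 = 1
OMEGA = [(ONE, 1)]                           # w^1 * 1 = w
W2_PLUS_3 = [([(ZERO, 2)], 1), (ZERO, 3)]    # w^2 + 3

def compare(x, y):
    """Compare two Cantor-normal-form ordinals; return -1, 0, or 1."""
    for (ax, cx), (ay, cy) in zip(x, y):
        e = compare(ax, ay)
        if e != 0:
            return e
        if cx != cy:
            return -1 if cx < cy else 1
    # All shared terms agree, so the ordinal with more terms is the larger one.
    if len(x) != len(y):
        return -1 if len(x) < len(y) else 1
    return 0

assert compare(OMEGA, W2_PLUS_3) == -1       # w < w^2 + 3
```

Transfinite induction up to $\epsilon_0$ is then a statement about these finite strings: every strictly decreasing sequence of them terminates.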

The ordinals which can be fully represented by a computer are limited by the Church-Kleene ordinal, which I will call $\Omega$. This ordinal is relatively small in traditional set theory, because it is countable, and so easily exceeded by $\omega_1$ (the first uncountable ordinal), by $\omega_\Omega$ (the Church-Kleene-th uncountable ordinal), and by the ordinal of a huge cardinal. But it is important to understand that every ordinal which can be computationally represented is less than $\Omega$.

So when you are doing finitary mathematics, meaning that you are talking about objects you can represent on a machine, you should be restricting yourself to ordinals less than the Church-Kleene ordinal. The following section argues that this is no restriction at all, since ordinals below the Church-Kleene ordinal can establish the consistency of any system.


Ordinal Religion

Gödel's theorem is best interpreted as follows: given any consistent (indeed omega-consistent) axiomatic system S, you can make it stronger by adding the axiom "consis(S)". There are several ways of making the system stronger, and some of them are not simply related to this extension, but consider this one.

Given any system and a computable ordinal, you can iterate this strengthening process up to the ordinal, so there is a map from ordinals to consistency strength. This implies the following: as you march up the computable ordinals, you produce stronger and stronger systems.
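Written out, the iteration is the standard one: starting from a system $S$ and a notation for a computable ordinal, define

$$S_0 = S, \qquad S_{\alpha+1} = S_\alpha + \mathrm{consis}(S_\alpha), \qquad S_\lambda = \bigcup_{\alpha<\lambda} S_\alpha \ \text{ for limit ordinals } \lambda,$$

so each computable ordinal $\alpha$ you can notate yields a theory $S_\alpha$ of greater consistency strength.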

It is natural to assume the following: as you approach the Church-Kleene ordinal, you eventually resolve every question of arithmetic.

Further, the consistency proofs can often be carried out in constructive logic just as well, so really: approaching the Church-Kleene ordinal resolves every question of arithmetic by constructive means.

This is not a contradiction with Gödel's theorem, because generating an ordinal sequence which approaches $\Omega$ cannot be done algorithmically; it cannot be done on a computer. Further, any computable ordinal you actually reach is not really philosophically much closer to Church-Kleene than where you started, because there is always infinitely more structure left undescribed.

So $\Omega$ knows all and proves all, but you can never fully comprehend it. You can only get closer by a series of approximations which you can never precisely specify, and which are always somehow infinitely inadequate.

You can believe that this is not true, that there are statements that remain undecidable no matter how close you get to Church-Kleene, and I don't know how to convince you otherwise, other than by pointing to longstanding conjectures that could have been absolutely independent, but fell to sufficiently powerful methods. To believe that a sufficiently strong formal system resolves all questions of arithmetic is an article of faith, explicitly articulated by Paul Cohen in "Set Theory and the Continuum Hypothesis". I believe it, but I cannot prove it.


Ordinal Analysis

So given any theory, like ZF, one expects that there is a computable ordinal which can prove its consistency. How close have we come to doing this?

We know how to prove the consistency of Peano Arithmetic: this can be done in PA, in PRA, or in Heyting Arithmetic (constructive Peano Arithmetic), using only the axiom that $\epsilon_0$ is well-founded, i.e., that every decreasing sequence of ordinals below $\epsilon_0$ terminates (this is Gentzen's 1936 consistency proof).

This means that the proof-theoretic ordinal of Peano Arithmetic is $\epsilon_0$. That tells you that Peano Arithmetic is consistent, because it is manifestly obvious that $\epsilon_0$ is an ordinal, so all its countdowns terminate.
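In the usual notation, with $\mathrm{TI}(\epsilon_0)$ standing for transfinite induction up to $\epsilon_0$, the result reads

$$\mathrm{PRA} + \mathrm{TI}(\epsilon_0) \ \vdash\ \mathrm{consis}(\mathrm{PA}),$$

and $\epsilon_0$ is the smallest ordinal that works, which is what it means for $\epsilon_0$ to be the proof-theoretic ordinal of PA.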

There are constructive set theories whose proof-theoretic ordinals are similarly well understood; see here: "Ordinal analysis: Theories with larger proof theoretic ordinals".

To go further requires an advance in our systems of ordinal notation, but there is no limitation in principle to establishing the consistency of set theories as strong as ZF by computable ordinals which can be comprehended.

Doing so would complete Hilbert's program: it would remove any need for an ontology of infinite sets in doing mathematics. You can disbelieve in the set of all real numbers, and still accept the consistency of ZF, or of inaccessible cardinals (using a bigger ordinal), and so on up the chain of theories.


Other interpretations

Not everyone agrees with the sentiments above. Some people view undecidable propositions, like those provided by Gödel's theorem, as somehow having a random truth value, not determined by anything at all, so that they are absolutely undecidable. This makes mathematics fundamentally random at its foundation. This point of view is often advocated by Chaitin. In this point of view, undecidability is a fundamental limitation on what we can know about mathematics, and so it bears a resemblance to a popular misinterpretation of Heisenberg's uncertainty principle, which considers it a limitation on what we can know about a particle's simultaneous position and momentum (as if these were hidden variables).

I believe that Gödel's theorem bears absolutely no resemblance to this misinterpretation of Heisenberg's uncertainty principle. The preferred interpretation of Gödel's theorem is that every sentence of Peano Arithmetic is still true or false, not random, and each such sentence (or its negation) should be provable in a strong enough reflection of Peano Arithmetic. Gödel's theorem is no obstacle to us eventually knowing the answer to every question of mathematics.

Hilbert's program is alive and well, because it seems that countable ordinals less than $\Omega$ resolve every mathematical question. This means that if some statement is unresolvable in ZFC, it can be settled by adding a suitable chain of axioms of the form "ZFC is consistent", "ZFC+consis(ZFC) is consistent" and so on, transfinitely iterated up to a countable computable ordinal, or similarly starting with PA, or PRA, or Heyting arithmetic (perhaps by iterating up the theory ladder using a different step-size, like adding transfinite induction to the limit of all provably well-ordered ordinals in the theory).

Gödel's theorem does not establish undecidability, only undecidability relative to a fixed axiomatization, and its very construction produces a new axiom which should be added to strengthen the system. This is an essential ingredient in ordinal analysis, and ordinal analysis is just Hilbert's program as it is called today. Generally, everyone gets this wrong except the handful of remaining people in the German school of ordinal analysis. But this is one of those things that can be fixed by shouting loud enough.


Torkel Franzén

There are books about Gödel's theorem which are more nuanced, but which I think still get it not quite right. Greg P says, regarding Torkel Franzén:

I thought that Franzen's book avoided the whole 'Goedel's theorem was the death of the Hilbert program' thing. In any case he was not so simplistic and from reading it one would only say that the program was 'transformed' in the sense that people won't limit themselves to finitary reasoning. As far as the stuff you are talking about, John Stillwell's book "Roads to Infinity" is better. But Franzen's book is good for issues such as BCS's question (does Godel's theorem resemble the uncertainty principle).

My response: finitary means computational, and a consistency proof just needs an ordinal of sufficient complexity.

Greg P responded:

The issue is then what 'finitary' is. I guess I assumed it excluded things like transfinite induction. But it looks like you call that finitary. What is an example of non-finitary reasoning then?

When the ordinal is not computable, that is, when it is the Church-Kleene ordinal or bigger, the reasoning is infinitary. If you use the set of all reals, or the powerset of $\Bbb Z$ as a set with discrete elements, that's infinitary. Ordinals which can be represented on a computer are finitary, and this is the point of view that I believe Hilbert pushes in the Grundlagen, but it's not translated.