
Gödel's second incompleteness theorem (GSIT), informally stated, says:

For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, if T includes a statement of its own consistency then T is inconsistent.

Here's a defense of logical positivism (LP), stated by Ron Maimon:

Axioms don't need verification, they are of the nature of definitions of terms. [...] you know exactly what they mean when they say verification principle, and you know very well that they don't expect to verify the verification principle.

As best as I can understand, LP claims that LP is consistent and complete. But it seems subject to GSIT, given that it includes (i) "basic arithmetical truths" and (ii) "certain truths about formal provability". Does this mean that LP is either inconsistent or incomplete? My take on GSIT is that either there's a way to know things without needing (i–ii), or we will have to forever appeal to bigger and bigger systems to 'know' the smaller ones are valid. And yet, LP seems to claim that nothing bigger than itself will ever be required. Am I misapplying GSIT?


As background, see the accepted answer to Math.SE's Consistency of Peano axioms (Hilbert's second problem)?:

We cannot prove that Peano axioms (PA) is a consistent theory from the axioms of PA. We can prove the consistency from stronger theories, e.g. the Zermelo-Fraenkel (ZF) set theory. Well, we could prove that PA is consistent from PA itself if it was inconsistent to begin with, but that's hardly helpful.

It seems to me that LP is an attempt to say that one doesn't need metaphysics—a system behind the system—and yet, it seems that LP itself is a metaphysic. It is claimed that LP is complete (I interpret "all meaningful statements exist within LP" as a claim to completeness) and the attempt is allegedly made from within LP. And yet when one attempts to do that from within e.g. the Peano axioms, one finds it is impossible, provably so by Gödel. I take this to mean that one can only 'know' LP's consistency/completeness from outside LP. And yet this is exactly what LP contests: that there exists nothing meaningful outside of it.

  • There are some really significant problems with Logical Positivism in the versions developed by Schlick, Neurath, Carnap, et al. And their work interacted in interesting ways with Gödel's. However, I think this is not a productive direction—to think of Logical Positivism as a theory. It is certainly not a formal proof system, so I would not agree that "LP claims that LP is consistent and complete." Consistency and completeness are features of proof systems. Some Logical Positivists tried to develop proof systems, most notably Carnap, but it would be more productive to look at that directly.
    –  ChristopherE
    Dec 7, 2013 at 0:17
  • If LP merely lacks "certain truths about formal provability", then it could still fall prey to the first incompleteness theorem. I'm not sure I buy the tack that "LP is not a formal system and thus neither GSIT nor GFIT applies"; it almost seems to be a semantic maneuver to try and get LP to still arbitrate what is true and what is not (under the guise of what is 'meaningful'), without this arbitration being open to criticism. Furthermore, surely the claim that LP is the only way to know things is a claim to completeness?
    –  labreuer
    Dec 7, 2013 at 0:52
  • Completeness is a property of some sets of logical rules. LP is not a set of logical rules. Logical Positivists would also never claim that LP is the only way to know things, because they wouldn't claim that it is a way to know things in the first place. Observation and reasoning (i.e. science) is the way to know things.
    –  ChristopherE
    Dec 7, 2013 at 18:18
  • Formal systems are all about proving theorems (truths) from axioms (presuppositions). Am I wrong in thinking that LP says that all truths can be known from a restricted set of axioms, since truth is 'meaningful' and cannot be derived from 'meaningless' axioms? Compared to someone who rejects LP, LP seems to eliminate items from both the truths and presuppositions category, by calling them 'meaningless'. Whether LP is a formal system or not, it states that some formal systems are meaningless, and thus ultimately makes claims about formal systems.
    –  labreuer
    Dec 7, 2013 at 18:30
  • "Am I wrong in thinking that LP says that all truths can be known from a restricted set of axioms?" Yes. At the heart of logical positivism is denying this claim. Scientific truth comes from observations, not axioms.
    –  ChristopherE
    Dec 7, 2013 at 18:35
  • My apologies, I'm being a bit sloppy with my terms. I'm using 'axiom' in this sense. Perhaps that helps?
    –  labreuer
    Dec 7, 2013 at 18:39
  • So, I want to be encouraging, but whenever you use your own, potentially-idiosyncratic definitions for terms, you have to be super careful when you try to connect your ideas framed in terms of them with other people's ideas. Right in the first sentence of that post, this is framed in terms that Logical Empiricists/Positivists would vigorously reject: "A fact is a perception of reality." So once you've framed your ideas with common terms defined unusually, it's extremely difficult to say how/whether they connect with other people's.
    –  ChristopherE
    Dec 8, 2013 at 19:56
  • I suppose I will have to read up more on LP.
    –  labreuer
    Dec 8, 2013 at 22:32
  • A great place to start with the original texts is A.J. Ayer's edited collection Logical Positivism (1959, Free Press). For a secondary source, you might look at the Minnesota Studies volume Origins of Logical Empiricism.
    –  ChristopherE
    Dec 8, 2013 at 23:45

1 Answer


Here's a quick proof of Gödel's theorem, which can be made as formal as you like:

Theorem: Given any computable deductive system S (with a condition which is best stated afterward), write a computer program GODEL to:

  1. Print its code into a variable R (computer programs can do that)
  2. Deduce all consequences of S (a computer can do that, by assumption)
  3. Halt if you see a proof that "R does not halt".

Then S cannot consistently prove that "GODEL does not halt". Informally, GODEL looks for "I do not halt" and halts if it finds it. The construction is basically identical to the informal statement, and the self-reference is by an obvious construction which is an exercise for first-year programming students.
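Step 1 is the only part that looks magical, and it is just a quine. A minimal sketch in Python (illustrative only: the proof-enumeration of steps 2–3 is not shown, and `own_source` is a hypothetical name, not part of the proof):

```python
# Step 1 of GODEL: a program that obtains its own source code in a
# variable. The standard trick is a template string formatted with its
# own repr; %r inserts the quoted template, %% becomes a literal %.
def own_source():
    body = 'def own_source():\n    body = %r\n    return body %% body'
    return body % body

R = own_source()
# R now holds the exact three-line source text of own_source. The full
# GODEL program would scan the theorems of S for "R does not halt" and
# halt upon finding such a proof.
```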

The hidden assumption on S is that if a program P halts after a finite number of steps, then S will follow it step by step, and prove that it halts after a finite number of deductions. This is the main condition in Gödel's theorem.

That's it, that's the proof. It is obvious that if GODEL halts, S is inconsistent (it proved something false), and if S is inconsistent, it proves any theorem, including "GODEL does not halt", at which point GODEL halts. So "GODEL halts" iff "S is inconsistent", which means "GODEL does not halt" is equivalent to "S is consistent"; since S cannot consistently prove "GODEL does not halt", it cannot consistently prove its own consistency, and that's Gödel's second theorem.

What it means is that a closed computable system of deduction, with no uncomputable method of producing new axioms, cannot produce all mathematical truths; in particular, it cannot prove its own formal consistency.

This is not a barrier to formalization of mathematics as much as a tool. What it says is that starting with a simple theory S, say PRA, you can add "S is consistent" to produce S+1, then "S+1 is consistent" to produce S+2. After you run out of integers, you have an infinite list of theories, and you can enumerate all the theorems proved by all the theories and take the union, and this is the theory "S+ omega". The iteration continues up over all countable computable ordinals, and this produces higher theories, which eventually complete mathematics.
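The finite stages of this tower can be sketched as a naming scheme (illustrative only: `Con(...)` here is just a label, not an actual arithmetized consistency statement, and the step to S+omega would take the union of everything below it):

```python
# Build names for the tower S, S+1, S+2, ..., where each stage adjoins
# the consistency statement of the stage before it. These are just
# names for theories, not proof systems.
def consistency_tower(base, steps):
    tower = [base]
    for _ in range(steps):
        tower.append(f"{tower[-1]} + Con({tower[-1]})")
    return tower

tower = consistency_tower("PRA", 2)
# tower[1] is "PRA + Con(PRA)"
# tower[2] is "PRA + Con(PRA) + Con(PRA + Con(PRA))"
```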

The only non-computable thing is naming higher and higher computable ordinals. This cannot be done algorithmically, but it can probably be done evolutionarily, using a random number generator. A true random number generator produces an uncomputable number. The goal is to convert this uncomputable number into a scheme for naming countable ordinals that converge onto the Church–Kleene ordinal with probability 1 as you get more random digits. I don't know how to do this, but I don't know it's false either. This is a completion of Hilbert's program in a sense--- it shows you how to step up to get higher and higher theories.

That's enough about Gödel's theorem.

Logical positivism is something else. It is first the claim that formal languages can describe all of experience, that formal first-order predicate logic is sufficiently strong to speak about anything. This is true, because of Turing universality: computers can simulate anything, and anything you can run on a computer you can describe in first-order logic over the integers.

It is also the claim that the understanding of the world is founded first on direct experience, and that all claims must be reduced to experience, and verified or falsified based on experience. Those claims which cannot be reduced to sensory experience are neither true nor false, they are moot.

The way I personally made this statement precise as a child (which, by the way, was also the very last time I ever got confused on any of this) is not by formalizing reduction to sense experience the way Carnap tried to do, but rather by defining the notion of positivistic equivalence classes. Logical theories make deductions, and this is equivalent to computational models. So two computational models are different when their predictions for sense impressions are different. They are equivalent when their predictions for sense impressions are the same. The positivist rule is then to treat any two equivalent theories as fundamentally identical in content, and to not consider the question of whether model A or model B is the true one as meaningful. It is no more meaningful than asking whether electrodynamics is in Coulomb gauge or in Landau gauge; the question is meaningless, because the sense experience predicted either way is the same.

This does not require you to formally build everything from the ground up starting with sentences about observations, it requires you only to compare formal models using their observational consequences, and when these consequences are identical, to freely switch between either model without considering the two conceptions as different. I do this all the time, all physicists learn to do it, it's the "change of gauge" in gauge theory, or the "change of coordinate systems" in General Relativity, or the "change of basis" in quantum mechanics. If you ask a physicist "which gauge?", "which coordinates?", "which basis?" they laugh. The question is obviously meaningless.
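A toy version of this identification, under the simplifying assumption that a "model" is nothing but a function from inputs to predicted observations (all names here are illustrative):

```python
# Two models fall in the same positivistic equivalence class when they
# predict the same observations on every input we can test.
def equivalent(model_a, model_b, inputs):
    return all(model_a(x) == model_b(x) for x in inputs)

# Analogue of a change of gauge: two different-looking formulas with
# identical predictions everywhere we look.
expanded = lambda x: x * x + 2 * x + 1
factored = lambda x: (x + 1) ** 2

equivalent(expanded, factored, range(1000))          # same class
equivalent(expanded, lambda x: x * x, range(1000))   # different classes
```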

This keeps the basic principle of Mach and Carnap, gives it a formal meaning, gives it precise examples from physics (so that you can't say it's nonsense) and avoids making a formal separator between sensory sentences and non-sensory sentences. All you need is to be able to compare a sensory prediction, a deduction from the formal model, to a sensory input, something which you can program a computer to do, so there is no mystery here.

Further it allows evolution of models, as ones that conflict with data are rejected, ones that match data survive, and ones that don't differ with respect to data are identified with one another. You can implement evolution on the models, and then you can model cognition (poorly).

Technically it also requires an Occam's razor, so that you throw away complicated arbitrary models for simpler ones. You could state this formally in terms of the length of the program involved, but you have to be careful to think of it in terms of the "ultimate program", meaning an idealized total predictive model underneath, like your personal theory of everything in life, not in terms of the specific subprogram for any one particular phenomenon. If your smoke alarm consistently goes off every day for 10 years when you cook fish, the simplest program isn't "alarm will go off forever", because the simplest total model includes the possibility that your battery will one day run out of charge, even though you haven't seen it happen yet. You know what I mean.
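One crude way to make "length of the program" concrete, as a sketch only: approximate description length by compressed size. True Kolmogorov complexity is uncomputable, so a compressor gives merely an upper bound, but it already separates a lawlike model from an arbitrary rule table:

```python
import random
import zlib

# Stand-in for "program length": the compressed size of a model's
# description. A compressor only upper-bounds the true complexity.
def description_length(description: bytes) -> int:
    return len(zlib.compress(description))

random.seed(0)
regular = b"if cooking fish then alarm; battery decays slowly; " * 40
noisy = bytes(random.randrange(256) for _ in range(len(regular)))

# The razor prefers the regular, lawlike description: it compresses far
# better than incompressible noise of the same raw size.
description_length(regular) < description_length(noisy)
```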

That's positivism. It's evolution in the head. It dismisses as meaningless the metaphysical questions in the same way Carnap says, and in a sense, it does reduce everything to sense experience, because if you have a data structure encoding the sense experience that allowed you to evolve the theory, you can describe the evolution from the rejected or accepted experience, and you can do the Carnap manipulations to turn everything into sentences about the sense experience along with sentences that describe the algorithms you are evolving at this particular point.

Anyway, it's not a deep idea today, after Turing, when people know what a computer is. It's just something one sorts out as a child, when one thinks back on early childhood, on how one made sense of the world, and asks "how the heck did I do that?" It's just what Mach's positivism was all about.

The statements are of the nature of definitions of terms: what does it mean to understand something? What does it mean to have a meaningful question? It just defines the underlying computational processes involved in understanding meaning. These are not things which one can verify in any way, because they define precisely things that have no precise definition otherwise. In the same way that you can't ask "How do you prove that F=ma from Newtonian mechanics?", you can't ask "How do you prove that the verification principle is valid?" in positivism; it's an axiom, and it defines what the word "understanding" means, precisely and computationally. Once you define this, then you can ask about verifying other statements, statements about the world.

It's also not hard to understand, compared to any nontrivial theorem, or any real science. It's just a founding philosophy. I also don't claim it's any extension of Carnap, because Carnap already understood all the consequences: the elimination of metaphysics, the justification of induction, the computational idea of mind, and so on. It's just an explanation with some hindsight.

  • Thanks for the extensive response! Your "it requires you only to compare formal models using their observational consequences" reminds me of Pragmatism more than LP. But isn't "observational consequences" overly restrictive? I can think of behavioral consequences as well as cognitive consequences of accepting one metaphysic over another. Life is about more than just observing reality; it also includes altering reality and thinking/feeling about reality.
    –  labreuer
    Dec 13, 2013 at 19:18
  • Any prescriptive statement about behavior translates into observations in the future--- namely the behavior! The behavioral structure may be from a metaphysics, sure, but the equivalence classes state that two metaphysics which prescribe the same behavior are equivalent in every sense. So "faith in a supernatural God that created the universe" is equivalent to "faith in a humanistic God that emerges from human struggle", and even "faith in no God, but acting according to the prescriptions of ethics which are compatible with humanistic God, superrationally"....
    –  Ron Maimon
    Dec 14, 2013 at 20:14
  • You just identify any two observationally equivalent metaphysical positions, and that includes behaviors, feelings, cognitive stuff, anything you can observe. You can observe feelings by asking someone or probing their brain, you can observe thoughts by asking someone, or probing their brain. It's still observationally relevant. The gauge principle here, that two frameworks are equivalent when their sensory consequences are the same is all that one is using, and it is exactly what Mach was saying and Carnap was trying to formalize.
    –  Ron Maimon
    Dec 14, 2013 at 20:16
  • That's useful to know; I have not gotten that idea from my exposure to LP. I wonder, though, whether this just defines a type of equivalence. For example, while two programs might generate the same output, it may be much easier to adapt one to producing new output. So they wouldn't be equivalent in terms of extensibility.
    –  labreuer
    Dec 15, 2013 at 3:21
  • @labreuer: Yes, exactly, this is why positivism is important, it allows you to switch frameworks quickly and easily because each formulation is different for giving you new ideas. I am giving you modern physicist positivism, it's a physicist philosophy, and the physicists continue to make it more precise with examples and fine tuning.
    –  Ron Maimon
    Dec 18, 2013 at 14:20
  • What philosophy kept people from doing what you claim LP does for physics?
    –  labreuer
    Dec 18, 2013 at 17:36
  • @labreuer: I am not sure I understand the question. All philosophy before positivism makes the claim that metaphysical questions are meaningful, and then the answers to these questions that people have in their head prevents them from taking another point of view, at least not without first rejecting all the metaphysical baggage from before. A positivist has no problem believing two completely contradictory metaphysical positions simultaneously, as long as the observations are the same in either one. You toggle between them like a physicist switches a gauge in electromagnetism.
    –  Ron Maimon
    Dec 19, 2013 at 1:19
  • I somehow seemed to have completely adopted this ability to switch, without also adopting the meaningfulness axiom. Or is the meaningfulness axiom absolutely required?
    –  labreuer
    Dec 19, 2013 at 17:59
  • If you are switching the answers to meaningful questions, aren't you concerned then about which answer is correct? Positivism means you don't admit that there is a correct answer, and you switch freely because there is no invariant meaning there.
    –  Ron Maimon
    Dec 20, 2013 at 1:48
  • I don't ever expect to hit 'the' correct answer, except in formal systems. Outside of them, I only expect to be able to become continually 'less wrong'.
    –  labreuer
    Dec 20, 2013 at 2:42
  • You say that first order predicate logic can describe all of experience, and that this is so because "computers can simulate anything." But a computer cannot always simulate itself. If it could, then it could solve the halting problem -- couldn't it? Could you clarify your reasoning on this point?
    –  senderle
    Apr 30, 2014 at 1:58
@senderle: A computer can simulate itself, up to resource limitations: it simulates a smaller-memory, smaller-running-time version of itself. If you try to solve the halting problem by simulating, you end up just testing whether any given program halts, which will give you a "yes" answer with definiteness, but never any "no" answers, except for finite running time. The ability to simulate itself (or any other computer) is the central property of computation; it's Turing universality. The ability to simulate anything else is the Church–Turing thesis.
    –  Ron Maimon
    Apr 30, 2014 at 4:12
  • I see what you're saying and I get what Turing universality is. It just seems to me that there's a conflation of two different concepts here. Being able to compute any computable algorithm doesn't seem to be the same thing as being able to simulate any computer. I know that's often how it's expressed (in terms of "simulation") but that seems like deceptive language.
    –  senderle
    Apr 30, 2014 at 13:12
  • @senderle: It's the opposite of deceptive, it is correct. Any computer can simulate any other computer, and this is what Turing universality means. The reason you get confused is because of the undecidability results, which appear in the limit of infinite running time. It is undecidable for one computer to make definite statements about its own (or any other computer's) behavior at infinite running time, at least if it claims to do so for all programs. This is impossible, because by running long enough, you can spite the prediction, and this is Turing's theorem on undecidability.
    –  Ron Maimon
    May 1, 2014 at 12:19
  • OK, so you're saying that there's no way to create an algorithm that will prevent a computer from simulating itself in finite time -- that does clarify your reasoning. Thanks!
    –  senderle
    May 1, 2014 at 14:38
  • @senderle: I didn't quite say that, that's technically false, given limitations of memory--- to simulate a computer with 8 gigs, you need 8.0000001 gigs (or however much your simulator takes). It becomes true if you tack on an extra few kilobytes, or if your computer isn't using all its memory, so it's morally true.
    –  Ron Maimon
    May 2, 2014 at 2:09
