
Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up

Comment 90 by raptur

RE: Goedel's Incompleteness Theorem

Both posts have been a little off. Do people mind if I take a break from lurking for a bit? ;)

When you work in a symbolic logic, there are two major systems in play. One is the _semantics_, which tells you how to determine whether a particular sentence is _entailed_ by (i.e. must be true given) a set of sentences you are presuming to be true, called axioms. This set may consist only of tautologies like "if P is true then P is true". For example, you may accept as true:

(1) All Men Are Mortal
(2) Socrates Is A Man

and then you might consider:

(3) Socrates Is Mortal

The semantics of first-order logic, which is the simplest logic needed to capture this argument, consists of set-theoretic machinery for demonstrating that it is not possible for (3) to be false while (1) and (2) are true. So we say that (1) and (2) _entail_ (3).
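To make "entail" a bit more concrete, here is a quick Python sketch (my own toy illustration; the function name `entails` and the whole setup are just mine, nothing standard). It brute-forces every way of interpreting Man and Mortal over a few small finite domains containing 'socrates' and checks that no interpretation makes (1) and (2) true while making (3) false. Real entailment quantifies over all models, not just small finite ones, so treat this purely as a demo of the idea.

```python
from itertools import product

# Brute-force check of semantic entailment for this one example:
# does every interpretation of Man and Mortal that makes
#   (1) forall x: Man(x) -> Mortal(x)
#   (2) Man(socrates)
# true also make (3) Mortal(socrates) true?

def entails(domain, socrates="socrates"):
    for man_bits in product([False, True], repeat=len(domain)):
        for mortal_bits in product([False, True], repeat=len(domain)):
            man = dict(zip(domain, man_bits))        # who counts as a Man
            mortal = dict(zip(domain, mortal_bits))  # who counts as Mortal
            ax1 = all((not man[d]) or mortal[d] for d in domain)  # (1)
            ax2 = man[socrates]                                   # (2)
            if ax1 and ax2 and not mortal[socrates]:              # (3) fails
                return False  # counter-model found: no entailment
    return True

# Try a few small domains that contain socrates.
for extra in range(3):
    domain = ["socrates"] + ["other%d" % i for i in range(extra)]
    print(domain, entails(domain))   # prints True every time
```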

The other is the _derivational system_, which tells you what sorts of transformations you are allowed to perform on sentences to get new sentences. So (1) and (2), in quasi-formal terms, would look like:

(1') (forall x)(if Man(x) then Mortal(x))
(2') Man(socrates)

One of the derivational rules of first-order logic (universal instantiation) says that you can chop off a (forall x) and replace every occurrence of 'x' (a _variable_) with some _name_. If we do this to (1'), replacing 'x' with 'socrates', we get:

(4) if Man(socrates) then Mortal(socrates)

Another derivational rule (modus ponens) says that if you have an 'if-then' statement, and you also have its 'if' part, then you may write down its 'then' part. (2') is exactly the 'if' part of (4), giving us:

(5) Mortal(socrates)

Which is just the quasi-formal way of saying "Socrates Is Mortal." So we say that (1') and (2') _prove_ (5).
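Here is the derivational side as another toy Python sketch (again my own illustration; `instantiate` and `modus_ponens` are just hypothetical helper names). Formulas are plain nested tuples, and the two rules are literally just symbol-shuffling functions that never look at what 'Man' or 'Mortal' mean, which is the point:

```python
# Formulas as nested tuples:
#   ('forall', 'x', body), ('if', p, q), ('Man', 'socrates'), ...

def instantiate(formula, name):
    """Universal instantiation: from ('forall', x, body), derive body with x replaced by name."""
    assert formula[0] == 'forall'
    _, var, body = formula
    def subst(f):
        if isinstance(f, tuple):
            return tuple(subst(part) for part in f)
        return name if f == var else f
    return subst(body)

def modus_ponens(conditional, antecedent):
    """From ('if', p, q) together with p, derive q."""
    assert conditional[0] == 'if' and conditional[1] == antecedent
    return conditional[2]

axiom1 = ('forall', 'x', ('if', ('Man', 'x'), ('Mortal', 'x')))  # (1')
axiom2 = ('Man', 'socrates')                                     # (2')

step4 = instantiate(axiom1, 'socrates')   # (4): if Man(socrates) then Mortal(socrates)
step5 = modus_ponens(step4, axiom2)       # (5): Mortal(socrates)
print(step5)                              # ('Mortal', 'socrates')
```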

Notice, however, that it is not immediately obvious that these two systems have anything to do with each other. The semantics just tells you whether groups of sentences are logically consistent, but does not say anything about obtaining a larger group of logically consistent sentences. The derivational system, on the other hand, just tells you how to move symbols around in certain sorts of ways to get new sentences, without regard to what they actually mean. So how do we know that if something is _provable_ from a (finite) set of sentences, then it is _entailed_ by that set? Likewise, how do we know that if something is _entailed_ by a set of sentences, then it is _provable_?

The first question is answered by doing soundness proofs. If a particular logic is not sound, it is basically worthless, because it is possible to prove things that are false using it. Whenever people propose new kinds of logics, soundness proofs are the first thing they do. The second question is answered by doing completeness proofs: given a finite set of axioms, is it possible to mechanically derive every statement that is necessarily true? Sentential logic and first-order predicate logic are, in fact, complete.

What Goedel demonstrated is that higher-order predicate logic is _not_ complete (more generally, neither is any formal system rich enough to express arithmetic). So he did not prove logic wrong or anything like that: higher-order logic is still sound, so everything you do manage to derive will be true (presuming your axioms are true). You just are not guaranteed to be able to derive everything that is true. If we were, it would be possible in principle to write a computer program that would run day and night, solving all of mathematics automatically through a brute-force breadth-first search.
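Continuing the earlier sketch (this reuses `instantiate`, `axiom1`, and `axiom2` from above, and is just as much a toy as before), here is that brute-force breadth-first idea as a forward-chaining loop. For a sound and complete logic, a search like this would in principle eventually reach every entailed sentence; the incompleteness result means no such guarantee exists once you go beyond first-order logic.

```python
def derive_round(known, names):
    """One breadth-first round: apply both rules to every sentence derived so far."""
    new = set()
    for f in known:
        if f[0] == 'forall':                  # universal instantiation with every known name
            for name in names:
                new.add(instantiate(f, name))
        if f[0] == 'if' and f[1] in known:    # modus ponens when the 'if' part is already known
            new.add(f[2])
    return new - known

known = {axiom1, axiom2}
names = {'socrates'}
while True:
    fresh = derive_round(known, names)
    if not fresh:          # nothing new this round: the search has saturated
        break
    known |= fresh

print(('Mortal', 'socrates') in known)   # True: derived purely mechanically
```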

Does this make sense?

Wed, 09 Jan 2008 22:31:00 UTC | #104667