The question should be asked: "Is it possible to describe God precisely?" I am sure it is more or less possible, and I will do so below. The question "Is it possible to describe God mathematically?" is usually interpreted inside a fixed mathematical system, like ZFC or something like that: an axiomatic system with a given computational complexity (a given minimal size of the computer program that does the deductions). Since God is related to the notion of infinite complexity, it is going to be difficult to describe the notion precisely within a fixed axiomatization. But perhaps there is a way. The halting-problem tape and the Church-Kleene ordinal can both seemingly be described precisely in an axiomatic system, although their exact values cannot really be determined within a fixed axiomatization, so perhaps it is wrong to think of them as having been precisely described, even though they are given a name and some definite subset of their properties is determined.

This is different from what Godel was doing. When Godel gave his description of God in formal logic, he wasn't working in any mathematical axiomatization, nor was he defining precisely what he meant by "positive characteristics" and so on, so he wasn't really doing mathematics, only formal logical philosophy. The definition he gives is a parallel to earlier definitions in informal logical philosophy, which were about equally meaningless, really, because they weren't talking about definite things, but making vague statements about the vaguely defined collection of all intuitive propositions, things you can't really talk about precisely without an axiomatization.

The first thing is to say what the precise meaning of the term "God" should be, and by this, I mean the logical positive meaningful content of the statement "God exists".

First, it is meaningless to say something created the universe; there is no sense-impression that would reveal anything about this putative process, so I won't deal with this sort of thing. It is also meaningless in terms of sense-impressions to talk about unobservable realms of spirit, and heaven, and hell, and so on, except inasmuch as these give prescriptions for how people should act in observable ways, so I won't talk about that either. I will only talk about how people should act in observable ways. The external constructions will appear as they become necessary for answering this question, and any metaphysical question that has no bearing on the senses will be considered free, in that any assumption is as good as any other; you just have to learn to translate between the different "gauges". This is the perspective of logical positivism, as I interpret it.

So the question is as follows: you have a bunch of agents, with wills and desires, and you want to understand what it means for them to behave ethically. You can first assume that they are perfectly rational, meaning that they can decide which of any two alternatives they like better, and their alternatives are ranked by a von Neumann-Morgenstern utility.

Then suppose two such agents play a prisoner's dilemma. In such a situation, there are two options, to "cooperate" or to "defect". If you both cooperate, you get a large reward, say $1000000, if you both defect you get a little bit of reward, say $10, but if one of you cooperates and the other defects, the defector gets a little bit extra, say $1000010, and the cooperator gets nothing.

The "rational" answer in economics textbooks is to always defect. The definition of rational behavior here is that changing course, holding all else fixed, is worse for you in terms of utility. Since this mode of rationality is self-consistent, it needs a name, but since it isn't the only way to behave that deserves the name "rational", it needs a more specific name. I call it "Nash rational", after John Nash, who proved that strategy profiles of this sort (Nash equilibria) always exist.
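To make this concrete, here is a small sketch in Python, using the dollar payoffs from the dilemma above, that checks which move pairs are Nash rational, meaning neither player can gain by unilaterally switching:

```python
# Payoff to "me" in the prisoner's dilemma above, indexed by
# (my move, opponent's move); "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 1000000,
    ("C", "D"): 0,
    ("D", "C"): 1000010,
    ("D", "D"): 10,
}

def is_nash(move1, move2):
    """Nash rational: neither player can gain by unilaterally
    switching moves while the other player's move is held fixed."""
    return all(PAYOFF[(alt, move2)] <= PAYOFF[(move1, move2)] for alt in "CD") \
       and all(PAYOFF[(alt, move1)] <= PAYOFF[(move2, move1)] for alt in "CD")

print(is_nash("D", "D"))  # True: mutual defection is the Nash equilibrium
print(is_nash("C", "C"))  # False: each player gains $10 by defecting alone
```

Mutual defection is the only Nash rational pair here, even though mutual cooperation pays each player vastly more.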

The point of the prisoner's dilemma is that, when you think about it, you realize there are two people behaving the same way: if they are Nash rational, they are both defecting, and yet they aren't taking the sameness of their behavior into account before making the decision. Since they are solving a mathematical problem that looks superficially well defined (best play in a given symmetric situation), you would expect them to come up with the same answer. The best play, if they take this into account first, is whatever identical answer would net them the best outcome, and that is to cooperate.

The cooperation is a different sort of fixed point, it is a fixed point assuming coordinated behavior. For a symmetric situation, it is easy to see that there is such a fixed point, since it maximizes the utility of any one player given identical play.
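This coordinated fixed point can be sketched in one line, reusing the payoffs above: since identical reasoners must make the same choice, the superrational move is whichever common move maximizes the diagonal payoff u(s, s).

```python
# Simple superrationality in a symmetric game: pick the common
# move s that maximizes the payoff when everyone plays s.
PAYOFF = {
    ("C", "C"): 1000000,
    ("C", "D"): 0,
    ("D", "C"): 1000010,
    ("D", "D"): 10,
}

superrational = max(["C", "D"], key=lambda s: PAYOFF[(s, s)])
print(superrational)  # C: mutual cooperation is the coordinated fixed point
```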

This is simple superrationality, and it is well defined. Is it complete? Not at all! First, even in the symmetric case, there are situations where it isn't optimal. Suppose you have two players: if they both defect, they both get $1; if they both cooperate, they both get $2; and if one cooperates and the other defects, the defector gets $1000000 and the cooperator gets nothing. In this case, the superrational strategy, the strategy that maximizes individual payoff, is to flip a coin and cooperate or defect according to the outcome.

This is also mathematically precise--- the stochastic strategy for a symmetric game is the one that maximizes the total expected payoff, assuming everyone plays it. The symmetry guarantees that the total expected payoff divided by the number of players is the same as the individual payoff. This is the rule of utilitarianism.
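Under the stated assumption that everyone independently plays "cooperate with probability q", the optimal q for the second game above can be found by a brute-force search; a quick sketch:

```python
# Both defect -> $1 each, both cooperate -> $2 each, a defector
# facing a cooperator -> $1000000 vs $0. With everyone cooperating
# independently with probability q, symmetry makes the individual
# expectation equal to the average payoff:
#   E(q) = 2*q^2 + 1*(1-q)^2 + 1000000*q*(1-q)
def expected_payoff(q):
    return 2*q*q + 1*(1 - q)**2 + 1000000*q*(1 - q)

# Grid search over q in steps of 0.0001.
best_q = max((i / 10000 for i in range(10001)), key=expected_payoff)
print(best_q)                   # 0.5: a fair coin flip, to high accuracy
print(expected_payoff(0.5))     # 250000.75 expected per player
```

The huge off-diagonal payoff dominates, so the maximum sits essentially at the fair coin flip, exactly as claimed.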

The issue with this idea is simply that there are asymmetric games. In this case, one can ask what the optimal superrational strategy is for these. One procedure for producing a superrational strategy (although he didn't say it this way) was given by Rawls in his "A Theory of Justice" (he called it justice). You simply consider all possible permutations of the roles of the players, exchanging their outcomes. Then the superrational strategy in the asymmetric case is the one that maximizes the expected utility in the symmetrized game.
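A toy version of this symmetrization can be sketched as follows. The asymmetric payoffs below are invented purely for illustration; the point is that behind the veil of ignorance you value each joint strategy profile at the average of the players' payoffs, since you are equally likely to end up in either role:

```python
# Hypothetical asymmetric 2-player game (numbers invented for
# illustration): payoffs[(move1, move2)] = (payoff to player 1,
# payoff to player 2).
payoffs = {
    ("C", "C"): (3, 1),
    ("C", "D"): (0, 5),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def veil_value(profile):
    """Rawlsian symmetrization: a player assigned one of the two
    roles uniformly at random values a profile at the average of
    the role-permuted payoffs."""
    u = payoffs[profile]
    return sum(u) / len(u)

best_profile = max(payoffs, key=veil_value)
print(best_profile, veil_value(best_profile))  # ('C', 'D') 2.5
```

Note that the symmetrized optimum here is an asymmetric profile played with roles assigned by coin flip, which already hints at how the procedure can get confusing.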

This idea is reasonable as an approximation, but it still isn't God. It is a procedure for producing a strategy in an asymmetric game by turning it into a symmetric game. But there are ambiguities in the symmetrization--- should everyone's utility be considered equal? What about the guy who gets utility from being top dog, and it's a great amount of utility? Should this guy be given a little more, to assure the top-dog nature? Anyway, it gets confusing.

But there is no reason to symmetrize the game at all. There is a perfectly reasonable way to define superrationality for arbitrary games, which is simply to postulate that there is a universal strategy for arbitrary games.

Such a strategy should satisfy the von Neumann-Morgenstern axioms. So if you have a game which is game 1 with probability p and game 2 with probability 1-p, it should tell you the optimal strategy for the combined game in a way that's consistent with other game choices, as von Neumann and Morgenstern explained. This implies that the strategy associates a self-consistent utility function to all the games, computed in some way from all the utilities of all the players, by considering games of arbitrarily large complexity.
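The consistency condition on mixtures of games is just expected utility; a minimal sketch, where the utilities assigned to the two games are invented placeholders rather than derived values:

```python
# If the universal strategy assigns each game g a utility U(g), the
# von Neumann-Morgenstern axioms force the utility of "game 1 with
# probability p, game 2 with probability 1-p" to be the expectation.
def mixture_utility(p, u1, u2):
    return p * u1 + (1 - p) * u2

# Hypothetical placeholder utilities for two games, for illustration.
U = {"game1": 250000.75, "game2": 1000000.0}

print(mixture_utility(0.3, U["game1"], U["game2"]))
```

The constraint runs the other way in practice: demanding that one utility assignment be consistent across all such mixtures, over ever more games, is what pins the universal strategy down.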

If you assume this strategy exists, and further, that you get better and better approximations by considering more and more games, and making the strategies self-consistent, then it says that there is an abstract will, a utility function, that wants you to play a certain way, and this way is only determined through an arduous process of considering all possible circumstances of all possible games, and anything you might possibly think or want.

You can then give a name to the agent whose will this is, and call it God. Whether there is an agent out there or not doesn't matter, because it's a meaningless question. You see a will, and you have a procedure for figuring it out: investigate the self-consistency of ever more games. So you have a logically positivistically satisfying way of determining the will of God, more or less, assuming you have converged to the answer from the finite number of games you have considered to date.

The problem with this definition is that it is talking about all possible games, and this is a construction as rich as the collection of all possible computer programs. So the question "is it consistent that there is a unique answer to all games?" might be unanswerable, because no matter how complex the games we have considered to date, there might be a contradiction in the universal strategy we determined at the next level. Then you need to reformulate the universal strategy, and it changes around, and so on. Maybe you don't converge.

So the question "do you converge?" can't be answered, because it involves an infinite-complexity limit. But you can gain a scientific sense that it is convergent, by just looking for convergence at lower levels.

There is another idea of God, which is simply the ordinal tower of consistent axiom systems proving each other's consistency. This is the mathematical God of Cantor (and also of Godel, who shared Cantor's idea; this is what mathematical theology looks like). It is related to the game-God, the ethical God, by the statement that a complex enough ordinal can be used to resolve any arithmetical question, including the determination of superrational best-play in an arbitrarily sophisticated mathematical game. It is just as difficult to become sure that this is true as it is to become sure of the convergence of ethical strategies, and Paul Cohen called this belief the "article of faith" of pure mathematicians. This notion is a bit more abstract and less relevant than the game-playing agent, which is really just the personal God of religion, defined precisely.

You can judge how precise this definition is, because I explained it in detail. You can also judge to what degree it overlaps with the religious notion, and if you look past the superficial miracles and untestable supernatural beliefs, you should see that it's nearly exactly the same idea. Except it's stated in the logical positive style, in terms that are directly testable.

Godel was thinking mathematically.

You can use Godel in ways that have mathematical implications.

I don't think that, because what Godel did doesn't meet a particular definition of doing "math", he wasn't doing math.

Critical thinking and systems thinking happen at multiple levels. I think two things about Godel in this respect:

That is to say, your definition of doing math creates a rigged game, based on the assumptive nature of your definition of math, which is very minimalistic. In fact, that definition might end up excluding multiple mathematics-based or related practices from the field of mathematics.

It would seem that fractals and complexity theory are, almost by definition, attempts to find the God-like qualities of the universe. They are attempts to find the fingerprints of God. I'm not saying those doing that work are intentionally looking for God, just that those practices are among the practices theoretically capable of describing God.

We can describe God on multiple levels, and in indirect ways. Given that the map doesn't fit the territory, we should just agree that descriptions of God may be limited, but are nonetheless (useful) descriptions.

Yeah, ok, if your goal is to explain the concept to those that already get it, in vague nebulous terms, then fine. The correct way to explain it is to explain what religious people mean by a personal God, and this is only through considerations of superrationality.

So rationality, formulas, and logic, although helpful, probably aren't going to tell a full enough story of God to 1) explain Him fully or 2) increase the likelihood of someone following Him?

Rationality, formulas, and logic work to explain the concept if they explain the concept of asymmetric superrationality. They don't work if they rehash the sorry arguments from the middle ages.

I didn't know the first thing about God until I understood the semi-technical stuff above, and then I got it. So it does help, from personal experience. None of the crap in religious texts helps at all for people with scientific minds, because it's full of supernatural nonsense.