Would you like to suggest alternative approaches?
The computer is under-utilized. With a simple course on programming, first in a simple high-level language, then in assembly, then in C, then in Lisp, and a graphical library capable of plotting points for the student, you automatically force a person to learn a big chunk of practical mathematics simply because it is needed for programming the computer--- if you want to draw a circle, you must know the equation of a circle, or a parametrization. If you want to simulate a planet orbiting the sun, you must know how to turn the equations of motion into an algorithm; you have to understand differential equations. If you wish to compute an integral numerically, you must know what it means.
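As a minimal sketch of the first two examples (pure Python, no plotting library; the function names, the choice of units GM = 1, and the use of symplectic Euler as the time-stepping scheme are my own choices, not the only way to do it):

```python
import math

# Parametrize the unit circle: x = cos(t), y = sin(t).
# To draw it, you must already know this parametrization.
def circle_points(n):
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Turn Newton's equations of motion into an algorithm: symplectic Euler
# for a planet around the sun, in units where GM = 1, so a = -r / |r|^3.
def orbit(x, y, vx, vy, dt, steps):
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= x / r3 * dt   # update velocity from the acceleration
        vy -= y / r3 * dt
        x += vx * dt        # then update position from the new velocity
        y += vy * dt
    return x, y
```

Starting on a circular orbit (x = 1, y = 0, vx = 0, vy = 1), the simulated planet stays at radius close to 1, which is one way a student discovers that the choice of integration scheme matters.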
Some of the problems are subtle. If you want to do a flood-fill algorithm, you will find out how to decide algorithmically whether a point is interior to a region. The methods in this case were discovered only in the early 20th century. In order to represent manifolds on a computer, you need to use various discrete forms, each of which is important--- combinatorial simplicial complexes, algebraic equations, wavelets.
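A flood fill makes the interior/exterior question concrete: a cell is interior to the region containing the seed exactly when the fill can reach it without crossing the boundary. A minimal sketch on a grid (the function name and grid encoding are my own):

```python
from collections import deque

# Mark every cell reachable from `start` without crossing a `boundary`
# cell. The cells that get marked are precisely the ones interior to
# the region containing the seed point.
def flood_fill(grid, start, fill, boundary):
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] not in (fill, boundary):
            grid[r][c] = fill
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```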
The introduction of formal logic to computer-trained people is relatively effortless, because people use logical operations in computers all the time, and become familiar with boolean algebras. The quantifier calculus is the only new thing which is not common to other computing tasks, and it's not so hard. The logical deduction algorithms can be used to do mechanical reasoning, which is the main point of computers, and the proofs of the elementary logical theorems are so trivial given a computer, they are self-evident and hardly need proving. The other astonishment is how easy it is to find unsolved problems using a computer. Kleene algebras, cellular automata, the Collatz conjecture--- these are all extremely difficult problems where it is trivial to see the content on a computer.
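The Collatz conjecture is the clearest case: its entire content fits in a few lines of code, yet nobody can prove the loop below always terminates. A sketch (the function name is my own):

```python
# Collatz map: n -> n/2 if n is even, 3n + 1 if n is odd.
# The conjecture says every positive starting value reaches 1;
# this loop terminating for all n is exactly the open problem.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

A student can immediately compute, for example, that 27 takes 111 steps to reach 1, and start hunting for patterns.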
There are certain superficial problems with this approach, because it is essentially an approach to what is called "constructive mathematics" or "formalist mathematics", in the sense that all the mathematical objects are concrete and representable. This is not a restriction, as we know how to make a concrete model of any axiom system algorithmically, step by step, in principle, using Gödel's completeness theorem, but philosophically, people generally don't think of this model as being exactly what the axioms are describing--- we tend to think of the universe of enormous sets and real numbers, rather than the universe of formal symbols for sets and real numbers, since the second thing is intrinsically countable, while our intuition for mathematical real numbers is that they are intrinsically uncountable.
But the only thing we can manipulate on the computer is the computational representations, the symbols. So with computers, a young person is basically made to learn Hilbert's point of view, the formalist point of view. This then makes certain results difficult to accept, the ones that have stood in the way of Hilbert's program. It also sounds Soviet: it is in opposition to what is thought of as the transcendent view of the Platonic universe.
But I believe this philosophy is correct, and just as transcendent as the usual philosophy, with the advantage of being absolute: it is not riddled with undecidable questions about the continuum as the usual view is. We have a good understanding of what the use of infinity in mathematics is all about today: it's about introducing bigger and bigger ordinals, to allow more and more complex axiomatic methods. There is no need to have a mystical view of it, because these ordinals never need to be uncountable, and in fact, it is likely that they never need to even be non-computable--- the ordinals below the Church-Kleene ordinal are probably sufficient to prove the consistency of any system, no matter how strong (although this is not yet a theorem). The non-computable and uncountable ordinals are simply tricks for extending the naming convention for ordinals to higher places than you can reach by ordinary naming conventions, because the new ordinals which are uncountable just collapse, in a countable model, to larger countable ordinals than what you could represent originally.
I think it is important to learn how to use a formal system of reasoning early. It really doesn't matter which one; the point is that you know what a rigorous proof is supposed to look like when you see it.
The computer is not a panacea--- there are aspects of mathematics that are difficult to express in the primitive computational languages we have today, things like ordinals, uncountable sets, abstract groups and algebras, geometrical arguments that involve homotopies that need to be visualized, basically everything mathematicians find interesting. The formalisms can be put on a computer (everything can); it just hasn't been done in an efficient way. A computational education can spur the development of these computational tools.
For grade-school math, the computer, especially if augmented by some higher level languages with support for advanced functions, is extremely useful for providing both motivation and intuition. So much so, that with a computer, a child will basically learn all of grade school math independently in a very short time, and will just be bored in school.
I agree that calculators are not useful. Calculators are not computers; they provide only a limited amount of intuition, because they perform only the most basic tasks, with a human bias as to which tasks are most interesting. The computer is universal.
Generally, I don't think education is so much of a problem anymore, because young students can learn mathematics by themselves with an internet connection. If there is something missing today, it certainly won't be missing in a decade or two.