Traditional Computing Theory

The Nuprl logic IS an axiomatic theory of computation. Among its primitive notions are those of computable function and decidable predicate. Its reflection mechanism provides a basis for the theory of computational complexity, which we are striving to integrate usefully as an aspect of any theorem with computational meaning. Nuprl also provides a domain theory that is adequate to give mathematical meaning to clean programming languages, or subsets of them, such as Scheme, Pure Lisp, and ML.
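
As a rough illustration of how these primitive notions appear type-theoretically (the display below uses generic constructive type-theory notation, not Nuprl's concrete syntax), a computable function from A to B is simply a member of the function type, and a decidable predicate P on A is witnessed by a member of a disjunction asserted at every argument:

    f \in A \rightarrow B
    d \in \forall x{:}A.\; P(x) \vee \neg P(x)

Under the propositions-as-types reading, such a witness d determines, for each x, which disjunct holds, and that is what decidability means here.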

To the extent that modern computing theory is divided into complexity and semantics (Volumes A and B respectively of the Handbook of Theoretical Computer Science), we can see Nuprl as a formal theory that can unify the two branches.

Scientific Theory of Computing

But there is a broader idea of "computing theory" that we also strive to address: theory in the sense of a physical theory such as Quantum Mechanics, QED, or Classical Mechanics. We want to consider a theory of computation and information as it explains phenomena within a unified approach to science.

Consider this example of computing theory in practice. In the design of Nuprl we want to use decision procedures to prune subtrees of a proof by either finding a proof or discovering a counterexample. In principle there are decision procedures for the theory known as Presburger Arithmetic and for the theory of Real Closed Fields. However, we do not implement these decision procedures in Nuprl because we know that they are too expensive to be useful. How do we know this? Complexity theory tells us that any decision procedure for Presburger Arithmetic requires, in the worst case, running time at least doubly exponential in the length of the formula. We don't know that this happens for "most formulas," but it happens for enough of them that it is not worth coding the algorithm.
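
To make that growth rate concrete, here is a small OCaml sketch (the constant c is purely illustrative and is not the constant from the lower-bound proof) that prints roughly how many decimal digits a step count of the form 2^(2^(c*n)) has for modest formula lengths n:

    (* Illustrative only: digit count of a doubly exponential step bound
       2^(2^(c*n)), computed via log10, for a made-up constant c. *)
    let () =
      let c = 0.1 in
      List.iter
        (fun n ->
          (* log10 of 2^(2^(c*n)) is 2^(c*n) * log10 2, i.e. the digit count *)
          let digits = (2. ** (c *. float_of_int n)) *. log10 2. in
          Printf.printf "n = %4d: about %.3g decimal digits in the step count\n"
            n digits)
        [100; 200; 400; 800]

Even with this modest constant, a formula of length 100 already forces a step count with roughly 300 decimal digits, far beyond what any physical machine can execute.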

Generally we know that "asymptotic complexity theory" is scientifically correct in that it can predict when a problem is too hard for an algorithmic solution or when an algorithm is too inefficient to be useful. As with all applied mathematics, there is a question about why it applies; the "unreasonable effectiveness of mathematics" is a theme that people have long pondered. In this case, the applied mathematics is linked to computers by a law of information science: the law is that certain abstract models of computing faithfully model what is feasible to compute in the real world with devices of a certain structure.

There are other examples of type theory as the basis for a scientific account of information and computing. If we consider the chain of reasoning that tells us how to launch a rocket to land on Mars, it starts with astronomy, say the laws of celestial mechanics; then mathematics refines the differential equations to be solved. After that there is numerical analysis telling us how to approximate a solution to these equations. Then comes a very unscientific step, and an embarrassing one: we write the numerical program in a high-level programming language, like Java, and compile it. This produces machine code for some physical digital computer. From here on the story again gets precise and scientific. We know how the circuits work all the way down to the quantum physics of the digital devices. But at the level of the software, there is a large gap in the chain of science. We do not know why a compiler is correct. Most of them are not correct. We cannot prove that the compiler actually produces code that yields numbers that approximate the solution. To do that, we need to invoke the scientific laws of information science. We need to state a computer model, a virtual machine. Then we need to prove that the compiled code executes in such a way that it preserves the mathematical semantics of the algorithm used to approximate the differential equation. These last steps can be done in type theory. Indeed the whole chain of reasoning could be formalized in type theory and related to the physical world by laws of device physics, celestial mechanics, computer science, and mathematics. The first three theories are scientific.
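
Here is a minimal sketch of that semantic-preservation claim, written in OCaml rather than in type theory and deliberately toy-sized (none of this is Nuprl code): a tiny expression language stands in for the numerical algorithm, a three-instruction stack machine stands in for the virtual machine, and the property we would want to prove, here only spot-checked on one example, is that running the compiled code yields exactly the value the mathematical semantics assigns.

    (* Toy instance of compiler correctness: exec (compile e) [] = [eval e]. *)
    type exp = Num of int | Add of exp * exp | Mul of exp * exp

    (* the mathematical semantics of an expression *)
    let rec eval = function
      | Num n -> n
      | Add (a, b) -> eval a + eval b
      | Mul (a, b) -> eval a * eval b

    (* the machine model: a stack machine with three instructions *)
    type instr = Push of int | IAdd | IMul

    let rec exec prog stack =
      match prog, stack with
      | [], _ -> stack
      | Push n :: rest, _ -> exec rest (n :: stack)
      | IAdd :: rest, x :: y :: s -> exec rest (y + x :: s)
      | IMul :: rest, x :: y :: s -> exec rest (y * x :: s)
      | _ -> failwith "stack underflow"

    (* the compiler from expressions to machine code *)
    let rec compile = function
      | Num n -> [Push n]
      | Add (a, b) -> compile a @ compile b @ [IAdd]
      | Mul (a, b) -> compile a @ compile b @ [IMul]

    (* spot-check the preservation property on one example *)
    let () =
      let e = Add (Num 1, Mul (Num 2, Num 3)) in
      assert (exec (compile e) [] = [eval e]);
      Printf.printf "semantics: %d, machine: %d\n"
        (eval e) (List.hd (exec (compile e) []))

Proving the equation exec (compile e) [] = [eval e] for all expressions e, rather than testing it, is exactly the kind of theorem that can be stated and proved in type theory; scaling the statement up to a real language and a real machine is what closes the gap described above.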

We can imagine additional scenarios, such as using Formalized Mathematics and models of cognition to explain why students find a topic hard to understand. We might be able to show that a certain topic in mathematics is difficult because it relies on a great deal of information and consists of long chains of inference. We might even prove that a certain teacher's approach to this material is flawed because it omits a key idea needed for an efficient presentation of the results. This kind of explanation is about the physical world, about learning and cognitive performance. Yet the core of the explanation relies on a model of cognition and detailed knowledge about a particular physical knowledge base. The size and structure of this knowledge base will play a key role in the precise explanation.

Technical Contributions

Semantics and Domain Theory

Our contributions to domain theory and semantics are discussed in the section called Type Theory, Set Theory and Domain Theory.

Computability Theory

Some people think that logic and the theory of computation show that we cannot accomplish our goal of formalizing working mathematics. They cite Gödel's theorem or results on the size of proofs in formal systems. This is ironic because part of what we are trying to do is formalize the very theories of computing and logic that they are misapplying. One of our contributions, then, is to circumscribe the Gödel theorems and computability theorems and show that those results do not say that the program of Formalized Mathematics is impossible. They show only that Hilbert's particular use of formal mathematics to provide a foundation for mathematics cannot succeed as he imagined.

Complexity Theory

One of the new areas we are currently working on is the expression of computational complexity in type theory. In addition to the work of Constable just cited, he and Karl Crary have presented a way to formalize a feasible notion of complexity inside Nuprl using partial reflection. Ralph Benzinger is also showing how to analyze the complexity of extracted programs at the metalevel. These results will appear in his Master's Thesis.