Cross-links between Nuprl and Type Theory, Set Theory and Domain Theory

Origins

The logic of Nuprl is a constructive type theory, but its types include those of partial objects, so it is also a domain theory. Its primitives are rich enough to build a substantial set theory on top of the type theory, one equivalent in expressive power to ZFC. This section first looks back to the origins of type theory and then traces its development through domain theory to Nuprl.

The first treatise of Aristotle's classical work on logic, the Organon, is the Categories; it is Aristotle's type theory. He clearly introduces the notion that some assertions make sense only when their subjects are of the right type. In translation he says,

The differentiae of genera which are different and not subordinate one to the other are themselves different in kind. For example, animal and knowledge: footed, winged, aquatic, two-footed, are differentiae of animal, but none of these is a differentia of knowledge.

In Nuprl terms, Aristotle is saying that if the underlying types (genera) are different, say A is not equal to B, then no subtype of A, say {x:A|P}, is the same as any subtype of B, say {y:B|Q}. He goes on to say in the next sentences that if A is a subtype of B (a subordinate genus of B), then we can predicate P of both A and B objects.
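
To make the point concrete, here is a minimal sketch in Lean notation, standing in for Nuprl's set type {x:A|P}; the names SmallNat and ShortList are ours, chosen only for illustration.

    -- A subtype carries its underlying type (its genus) with it, so subtypes
    -- of different genera are different types, as Aristotle observes.
    def SmallNat  : Type := {n : Nat // n < 10}               -- a differentia of Nat
    def ShortList : Type := {l : List Bool // l.length < 10}  -- a differentia of List Bool
    -- No SmallNat is a ShortList: the two subtypes live over different types.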

Martin-Löf and the Nuprl designers followed Aristotle in first laying down some properties of language, e.g., of terms, before stating the typing principles. Aristotle thus begins the Categories by talking about homonyms, synonyms, individuals, compound expressions, predicates ("of a subject"), and typings ("in a subject").

Aristotle's logic introduced many other fundamental ideas. For example, the third treatise of the Organon, the Prior Analytics, is a study of demonstration, including various rules of inference (essentially what is now called monadic predicate calculus). But it did not deal with paradoxes such as that of Epimenides the Cretan, who said that all Cretans are liars; the claim thus came from one of their own poets. This is essentially the question of the truth of the statement "This statement is false"; in this form it is known as the Eubulides paradox.

This paradox was a challenge to common sense that everyone could understand, but it did not seem to affect the careful language of science and mathematics; Russell remarked that people considered it more of a joke than a problem. But in June of 1901 Bertrand Russell discovered his famous paradox lying at the heart of a logic that was intended to provide a basis for all of mathematical reasoning. Just before he and Whitehead started Principia Mathematica, he showed that it is contradictory to talk about the set of all sets that do not contain themselves; the discovery is called Russell's Paradox. Let R = {X | not (X in X)}. Is R in R? If it is, then it is not; if it is not, then it is. We have a formalized version of this in Nuprl.
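
The propositional core of the argument can be checked mechanically. Below is a minimal sketch in Lean notation (it is not the Nuprl formalization mentioned above): no proposition can be equivalent to its own negation, and "R in R" would be exactly such a proposition under unrestricted comprehension.

    -- Russell's argument in miniature: a proposition equivalent to its own
    -- negation yields a contradiction.
    theorem no_self_negating_prop (p : Prop) (h : p ↔ ¬p) : False :=
      have hnp : ¬p := fun hp => (h.mp hp) hp
      hnp (h.mpr hnp)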

In June 1902, having spent a year trying to understand and resolve the paradox, Russell communicated it to Gottlob Frege, because it also revealed a contradiction in Frege's just-completed foundational logic for all of mathematics, his Grundgesetze der Arithmetik (Basic Laws of Arithmetic). Now the Liar Paradox and others took on some urgency, and Russell devoted the next decade of his life to working on this problem.

In his 1908 paper, Mathematical Logic as Based on the Theory of Types, Russell laid out a careful theory to contain the paradoxes. It became the basis of the logic of Principia Mathematica that he and Alfred North Whitehead published in 1910. This paper opens with the sentence,

The following theory of symbolic logic recommended itself to me in the first instance by its ability to solve certain contradictions, of which the one best known to mathematicians is Burali-Forti's concerning the greatest ordinal.

Russell went on to enumerate seven paradoxes, starting with that of Epimenides; then the one he discovered (in two forms); then Berry's paradox about the least number not nameable; another about the least indefinable ordinal; then the Richard paradox; and finally the Burali-Forti paradox. Kleene's famous book Introduction to Metamathematics also begins with an extensive discussion of these paradoxes.

Russell's solution is to invoke a new basic principle of logic and apply it to designing a type theory that enforces the principle. The key point for us is that Nuprl basically adopts Russell's solution in its modern form. Here is how Russell put the matter.

...thus "I am lying" becomes "It is not true of all propositions that either I am not affirming them or they are true;" in other words, "It is not true for all propositions p that if I affirm p, p is true." The paradox results from regarding this statement as affirming a proposition, which must therefore come within the scope of the statement. This, however, makes it evident that the notion of "all propositions" is illegitimate; for otherwise, there must be propositions (such as the above) which are about all propositions and yet cannot, without contradiction, be included among the propositions they are about.

.... Whatever we suppose to be the totality of propositions, statements about this totality generate new propositions which, on pain of contradiction, must lie outside the totality. It is useless to enlarge the totality, for that equally enlarges the scope of statements about the totality. Hence there must be no totality of propositions, and "all propositions" must be a meaningless phrase.

This leads us to the rule: "Whatever involves all of a collection must not be one of the collection," or, conversely: "If, provided a certain collection had a total, it would have members only definable in terms of that total, then the said collection has no total."(1)

The hierarchy of types

A type is defined as the range of significance of a propositional function, that is, as the collection of arguments for which the said function has values. Whenever an apparent variable occurs in a proposition, the range of values of the apparent variable is a type, the type being fixed by the function of which "all values" are concerned. The division of objects into types is necessitated by the reflexive fallacies which otherwise arise. These fallacies, as we saw, are to be avoided by what may be called the "vicious-circle principle," that is, "no totality can contain members defined in terms of itself." This principle, in our technical language, becomes: "Whatever contains an apparent variable must not be a possible value of that variable." Thus whatever contains an apparent variable must be of a different type from the possible values of that variable; we will say that it is of a higher type. Thus the apparent variables contained in an expression are what determine its type.

He introduces the notions underlying first-order, second-order, and higher-order logic in this way.

...We may define an individual as something destitute of complexity; it is then obviously not a proposition, since propositions are essentially complex. Hence in applying the process of generalization to individuals we run no risk of incurring reflexive fallacies.

Elementary propositions together with such as contain only individuals as apparent variables we will call first-order propositions. These form the second logical type. We can thus form new propositions in which first-order propositions occur as apparent variables. These we will call second-order propositions; these form the third logical type. Thus, for example, if Epimenides asserts "all first-order propositions affirmed by me are false," he asserts a second-order proposition; he may assert this truly, without asserting truly any first-order proposition, and thus no contradiction arises.

The above process can be continued indefinitely. The (n + 1)th logical type will consist of propositions of order n, which will be such as contain propositions of order n - 1, but of no higher order, as apparent variables....

Alternatives

Martin-Löf type theory is based on Russell's insight, but instead of Russell's complex system of ramified types it is based on an open-ended sequence of universes, U1, U2, U3, ..., that stratify the concept of type. The Martin-Löf form is called a predicative type theory, in contrast to the impredicative forms such as full second-order logic, either classical or constructive (called the theory of species in the constructive case). Nuprl follows this line as well.
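
As a small illustration, here is the shape of such a universe hierarchy written in Lean notation, standing in for the universes U1, U2, U3, ... of Martin-Löf type theory and Nuprl.

    -- Each universe is a member of the next one up, never of itself.
    #check (Type 0 : Type 1)   -- the first universe lives in the second
    #check (Type 1 : Type 2)   -- and so on, open-endedly
    -- `#check (Type 0 : Type 0)` is rejected: no universe contains itself,
    -- which is how the stratification blocks the vicious circle.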

One alternative to this Russellian stratification is a system that allows a single universe of all types (or all sets). Russell would say that such an object is too big, but a contrary view (held by Quine, for example) is that having a set of all sets or a type of all types is legitimate as long as we do not permit forming all subsets. If there is some restriction on which subsets can be formed, say by requiring a stratified predicate to define the subset, then no contradictions will result. Quine proposed a system called New Foundations, NF, based on this idea. There are other such set theories, including one proposed by Alonzo Church in 1974.

In the case of "pure logic," the idea of a single large type that violates Russell's principle appears in the notion of second-order logic. It is possible to quantify over the type of all propositions to form statements like Forall P. P. This statement is itself a proposition, so it can be instantiated with itself, a clear violation of the vicious-circle principle and a clear case of an impredicative notion of proposition.
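
A sketch of this impredicativity in Lean notation (whose Prop is impredicative); falsum is our illustrative name for Forall P. P.

    -- Forall P. P quantifies over all propositions and is itself a
    -- proposition, so it can be instantiated with itself.
    def falsum : Prop := ∀ P : Prop, P

    example (h : falsum) : falsum := h falsum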

Alonzo Church proposed a classical type theory based on this impredicative notion of proposition in his paper A Formulation of the Simple Theory of Types, Journal of Symbolic Logic, 5, 1940, pp. 56-68. This is the logical basis of the HOL system. The theory is anything but "simple" from a philosophical or foundational point of view; it is roughly comparable in strength to Zermelo set theory, Z.

The constructive version of second-order type theory allows a fully impredicative notion of proposition, which at first sight seems to violate the idea of construction, since such propositions cannot be built up systematically from simpler ones. This theory can be generalized to Intuitionistic Higher-Order Logic, say IHOL.

Propositions as Types and Realizability of Propositions

Type theory opened the door to another fundamental idea in logic: the notion that propositions are types which, if true, are inhabited by proofs. This conception of mathematical truth had already been articulated clearly by L.E.J. Brouwer in 1908. According to Brouwer's Intuitionist philosophy, a proposition is true only if we can prove that it is. His notion of proof was not that of a logician's formal proof, but once Heyting formalized Intuitionistic logic, it became a challenge to also provide a precise sense in which there is a mathematical object that proves a true formula.
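
A one-line sketch of the propositions-as-types reading, in Lean notation: a true proposition is an inhabited type, and a proof is an inhabitant.

    -- The lambda term on the right is itself the proof object inhabiting
    -- the proposition A ∧ B → B.
    example (A B : Prop) : A ∧ B → B := fun h => h.2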

Gödel and Kleene approached this problem at nearly the same time. Kleene discovered the concept of realizability in terms of recursive functions. Gödel used the primitive recursive functionals of finite type, his system T, to provide a semantics for Intuitionistically true formulas.

Other notions of realizability were later discovered. Haskell Curry noticed in 1954 that propositional formulas could be realized by compositions of his S, K, and I combinators. Bill Howard generalized this idea to the predicate calculus in 1968. At about the same time, N. G. de Bruijn described his Automath project and its series of languages, which used the lambda calculus to code proofs and provide realizers. Läuchli also sought a semantics for Intuitionistic logic and was led to realizers.
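
Curry's observation can be stated in a few lines of Lean notation: the types of the K and S combinators are exactly the two axiom schemes of the implicational propositional calculus, so the combinators themselves realize those propositions.

    -- K realizes A → (B → A); S realizes (A → B → C) → (A → B) → (A → C).
    def K {A B : Prop}   : A → B → A := fun a _ => a
    def S {A B C : Prop} : (A → B → C) → (A → B) → A → C :=
      fun f g a => f a (g a)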

Per Martin-Löf understood from Howard, and Dana Scott understood from de Bruijn, that these realizability concepts were central to a semantics for Intuitionistic mathematics and that this semantics should drive the design of formal systems. They each proposed foundational systems in the early 70s, but Scott had second thoughts, which he expressed in his paper.

At essentially the same time, Jean-Yves Girard discovered a class of typed lambda terms that served as realizers for the "Theory of Species," a formalism he called System F. He proved the remarkable theorem that his typed lambda terms are strongly normalizable, which provided a consistency proof for the Theory of Species. The same class of lambda terms was independently discovered by John Reynolds as a model of typed programming languages.
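
For a glimpse of the terms Girard and Reynolds studied, here is the simplest System F term, the polymorphic identity, written in Lean notation as a stand-in for System F; polyId is our illustrative name.

    -- A System F term may abstract over types as well as values.
    def polyId : (A : Type) → A → A := fun _ a => a

    #eval polyId Nat 3        -- 3
    #eval polyId String "hi"  -- "hi"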

Domain Theory

The theories of Martin-Löf and Girard used only total functions in their semantics. But in computer science, the notion of a partial function was basic. The standard universal programming languages like Algol and Lisp allowed these partial functions. Indeed Lisp could be seen as a programming language based on the untyped lambda calculus with its unrestricted use of the Y combinator. In the early 70s Dana Scott set out to provide a mathematical semantics for this class of functions. Basically he started with the realizers and tried to find a type theory. His approach is now known as Domain Theory and is a flourishing branch of mathematics and computer science.
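
The following sketch, again in Lean notation, shows the kind of partial object such a semantics must account for: unrestricted recursion (the programming-language analogue of the Y combinator) defines functions that need not terminate on every input; search is our illustrative name.

    -- Diverges whenever p never returns true, so it denotes a partial function.
    partial def search (p : Nat → Bool) (n : Nat) : Nat :=
      if p n then n else search p (n + 1)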

Technical Contributions

Nuprl draws heavily on the semantic theories of Per Martin-Löf, which were inspired by the Curry-Howard correspondence. We showed how to apply these ideas to a very rich type theory, one adequate as a foundation for computer science, and we showed how to use the full power of the untyped lambda calculus as realizers, including the Y combinator. In this setting we were also able to provide types for partial functions in the manner of Dana Scott. This allowed us to reconcile Domain Theory and type theory, as we felt was necessary to provide a foundation for computer science.


  1. When I say that a collection has no total, I mean that statements about all its members are nonsense. Furthermore, it will be found that the use of this principle requires the distinction of all and any considered in Section II.