Logical Investigations, with the Nuprl Proof Assistant

5. First-Order Logic: All and Exists


Finally, we add two more logical operators called quantifiers to the propositional calculus to form what we call First-Order Logic (FOL). These quantifiers are all (∀) and exists (∃), as in saying "for all numbers there exists another number which is larger."

With the addition of quantifiers to our logic, we now introduce the topic of dependency. A logical statement that is dependent on a variable is called a predicate, and it can be thought of as a function which returns a proposition. For example, (x < 5) is a predicate; when we input a value for x, such as the number 2, the result is a proposition for which we may or may not be able to find evidence. We write P(x) to represent a predicate P dependent on a variable x.

We also need to introduce the notion of types. When we write logical statements with quantifiers, we need to declare what types the variables belong to. For the predicate (x < 5) to make sense, x could be declared as a natural number, the type represented by the symbol ℕ. For example, we could write the statement ∃x:ℕ. (x < 5), read as "there exists a natural number x such that x is less than 5." In the rules below and the theorems that follow, we use a general predicate P(x) and declare a non-specific type T for x. Thus P is a function from the type T to the type of propositions, ℙ.
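As a concrete illustration, here is a minimal sketch in Lean, used only as familiar stand-in notation (the document's own system is Nuprl; the name lessThanFive is illustrative). Lean's Nat and Prop play the roles of ℕ and ℙ:

  -- A predicate is a function from a type into propositions: Nat → Prop.
  def lessThanFive (x : Nat) : Prop := x < 5

  -- Instantiating it at 2 yields the proposition 2 < 5, for which evidence can be found.
  example : lessThanFive 2 := by
    show 2 < 5   -- unfold the definition of the predicate
    decide       -- 2 < 5 is decidable, so the evidence can be computed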


The Universal Quantifier: ∀

To say that we have evidence for the logical statement (∀x:T. P(x)) means that we have evidence for P(x) for any x in the type T. In other words, we can pick an arbitrary x from the type T and have evidence for the proposition P(x). In the following proof rules, we'll see that the rules for the universal quantifier are very similar to those for implication; in fact, universal quantification can be thought of as dependent implication.

  1. Construction:

    H ⊢ ∀x:T. P(x) by λ(x. ___ )

    H, (x : T) ⊢ P(x) by ___

    We can prove that a predicate P(x) holds for all x by picking an arbitrary x and showing evidence for P(x). We are essentially proving that (x:T ⇒ P(x)), and the evidence term will be a function. The completed rule takes the following form, where p(x) is evidence for P(x):

    H ⊢ ∀x:T. P(x) by λ(x. p(x))

    H, (x : T) ⊢ P(x) by p(x)

    In Nuprl, this rule is applied using the tactic D 0 to decompose the conclusion.



  2. Application:

    H, f : (∀x:T. P(x)) ⊢ G by ___

    H, f : (∀x:T. P(x)) ⊢ T by ___
    H, f : (∀x:T. P(x)), (x : T), (v : P(x)) ⊢ G by ___

    In order to use evidence for (∀x:T. P(x)) to prove a goal G, we must provide evidence for an element in T, and then show evidence for G assuming evidence for P(x). The evidence for P(x) comes from instantiating f with the evidence we found for T. The completed application rule takes the following form:

    H, f : (∀x:T. P(x)) ⊢ G by g (t, f (t))

    H, f : (∀x:T. P(x)) ⊢ T by t
    H, f : (∀x:T. P(x)), (x : T), (v : P(x)) ⊢ G by g (x, v)

    In Nuprl, we instantiate the universal quantifier using the tactic SimpleInstHyp ⌈x⌉ #, where x is an element in T and # is the index of the hypothesis containing the universal quantifier.
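To connect these two rules to the evidence terms they build, here is a minimal sketch in Lean, used only as stand-in notation for the constructive reading (the concrete statements and the lemma Nat.le_succ are illustrative, not part of the Nuprl development):

  -- Construction: evidence for a universal statement is a function taking an
  -- arbitrary x to evidence for P(x); this is the λ(x. p(x)) of the rule above.
  example : ∀ x : Nat, x ≤ x + 1 :=
    fun x => Nat.le_succ x

  -- Application: given evidence f for a universal statement, instantiate it at
  -- a particular element (here 7), as in the completed rule's f(t).
  example (f : ∀ x : Nat, 0 ≤ x) : 0 ≤ 7 :=
    f 7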


The Existential Quantifier: ∃

Constructive logics interpret the existential quantifier, ∃, as asserting that we can construct an object such that some property holds of it. The constructive meaning of this logical operator is not only that something exists, but that we can construct it. In constructive or intuitionistic FOL (iFOL), we need to specify how to build objects so that we know how to prove existential statements. The term constructive logic comes from this idea of constructing objects that witness a claim of existence.

Evidence for the logical statement (∃x:T. P(x)) therefore consists of a pair, where the first item in the pair is some constructed evidence (x:T), and the second item shows how that evidence satisfies P(x). As we see below, the proof rules for the existential quantifier are very similar to the rules for conjunction. We can think of existence as a form of dependent conjunction.

  1. Construction:

    H ⊢ ∃x:T. P(x) by < ___ , ___ >

    H ⊢ T by ___
    H ⊢ P(t) by ___

    In order to prove (∃x:T. P(x)), we need to construct some evidence t in T and prove P(t). In Nuprl, once we have evidence (t:T), we can use the tactic InstConcl [⌈t⌉] to instantiate it as our evidence for the goal. We will then be required to prove P(t). The completed rule takes the following form, where p(t) is evidence for P(t):

    H ⊢ ∃x:T. P(x) by < t, p(t) >

    H ⊢ T by t
    H ⊢ P(t) by p(t)


  2. Application:

    H, f : (∃x:T. P(x)) ⊢ G by ___

    H, (x : T), (v : P(x)) ⊢ G by ___

    If we know (∃x:T. P(x)), then we have evidence both for an object x in T and for P(x). In Nuprl, the tactic D i, where i is the index of the hypothesis, will decompose the existential statement into two separate hypotheses. The completed rule takes the following form, where g(x, v) is evidence for the goal G:

    H, f : (∃x:T. P(x)) ⊢ G by let x, v = f in g(x, v)

    H, (x : T), (v : P(x)) ⊢ G by g(x, v)
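Analogously, here is a minimal Lean sketch of the evidence terms for these two rules (again stand-in notation; the concrete statements and the lemma Nat.le_of_lt are illustrative): construction builds the pair < t, p(t) >, and application takes such a pair apart.

  -- Construction: evidence for an existential statement is a pair of a witness
  -- and evidence about that witness, the < t, p(t) > of the rule above.
  example : ∃ x : Nat, x < 5 :=
    ⟨2, by decide⟩                   -- witness 2, together with evidence that 2 < 5

  -- Application: decompose the existential evidence into a witness x and
  -- evidence hx, then use them, as in let x, v = f in g(x, v).
  example (h : ∃ x : Nat, x < 5) : ∃ x : Nat, x ≤ 5 :=
    match h with
    | ⟨x, hx⟩ => ⟨x, Nat.le_of_lt hx⟩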



Transformations to Prenex Normal Form

Formulas of first-order logic can sometimes be converted to logically equivalent formulas in which all of the quantifiers appear first. For instance, ((∃x. P(x)) ⇒ C) is equivalent to ∀x. (P(x) ⇒ C), where C is a closed formula, i.e. there are no occurrences of free variables. For simplicity, we omit type declarations. In the case of classical first-order logic, every formula can be effectively converted to at least one equivalent formula with all quantifiers at the front. We will examine this transformation and apply it to analyzing equivalent variants of implications of the following form, where A(x,y) and B(u,v) are decidable relations:


∃y.∀x. A(x,y) ⇒ ∃v.∀u. B(u,v)

This form of statement was analyzed carefully by Bishop in his article "Mathematics as a numerical language" [Bis70]. He shows how this implication can be interpreted as the following prenex formula:


∀y.∃v.∀u.∃x. (A(x,y) ⇒ B(u,v))

He calls this form of implication numerical implication. We can derive this from general theorems about moving quantifiers past operators, and we can also provide Bishop's informal explanation as a series of transformations that are less general and depend on the fact that the relations A and B are decidable, e.g. ∀x,y. (A(x,y) ∨ ∼A(x,y)).
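The decidability assumption itself has constructive content. A minimal Lean sketch (stand-in notation; the instance-style hypothesis is Lean's way of packaging the decision procedure and is an assumption added here):

  -- If each A(x,y) comes with a decision procedure, then the stated form
  -- ∀x,y. (A(x,y) ∨ ∼A(x,y)) has evidence: run the procedure and tag the result.
  example {S T : Type} (A : S → T → Prop) [∀ x y, Decidable (A x y)] :
      ∀ (x : S) (y : T), A x y ∨ ¬ A x y :=
    fun x y => if h : A x y then Or.inl h else Or.inr h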

In classical logic, the following four rules of equivalence for implication can be used to explain Bishop's transformation. We list them in order of decreasing constructivity. Rules 1 and 2 are constructively valid in both directions (⇒ and ⇐), but rules 3 and 4 require additional assumptions in order to prove the forward direction constructively. We will describe these additional assumptions later when showing the proofs.

In these rules, we use a general predicate P(x), omitting the type declaration of x for simplicity. C is a closed proposition, i.e. there are no occurrences of free variables.


Transformation Rules for Implication:

  1. (C ⇒ (∀x. P(x))) ⇔ ∀x.(C ⇒ P(x))

  2. ((∃x. P(x)) ⇒ C) ⇔ ∀x.(P(x) ⇒ C)

  3. (C ⇒ (∃x. P(x))) ⇔ ∃x.(C ⇒ P(x))

  4. ((∀x. P(x)) ⇒ C) ⇔ ∃x.(P(x) ⇒ C)
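As a sanity check on the fully constructive cases, here is a minimal Lean sketch of Rule 2 (stand-in notation; the remaining rules are examined in the Proofs section below):

  -- Rule 2: ((∃x. P(x)) ⇒ C) ⇔ ∀x. (P(x) ⇒ C), valid constructively in both directions.
  example {T : Type} (P : T → Prop) (C : Prop) :
      ((∃ x, P x) → C) ↔ (∀ x, P x → C) :=
    ⟨fun h x px => h ⟨x, px⟩,       -- forward: package x and px into a pair
     fun h hex => hex.elim h⟩       -- backward: take the pair apart and use h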


Using these transformations, we can analyze Bishop's example as follows:


Step 1: Move the quantifier for y, using transformation rule 2

∃y.∀x. A(x,y) ⇒ ∃v.∀u. B(u,v) ⇔ ∀y.(∀x. A(x,y) ⇒ ∃v.∀u. B(u,v))


Step 2: Move the quantifier for v, using transformation rule 3

∀y.(∀x. A(x,y) ⇒ ∃v.∀u. B(u,v)) ⇔ ∀y.∃v.(∀x. A(x,y) ⇒ ∀u. B(u,v))


Step 3: Move the quantifier for u, using transformation rule 1

∀y.∃v.(∀x. A(x,y) ⇒ ∀u. B(u,v)) ⇔ ∀y.∃v.∀u.(∀x. A(x,y) ⇒ B(u,v))


Step 4: Move the quantifier for x, using transformation rule 4

∀y.∃v.∀u.(∀x. A(x,y) ⇒ B(u,v)) ⇔ ∀y.∃v.∀u.∃x.(A(x,y) ⇒ B(u,v))



Explaining Bishop's Numerical Implication

As shown above, we can apply the transformation rules to analyze the meaning of (∃y.∀x. A(x,y) ⇒ ∃v.∀u. B(u,v)) and obtain the stronger formula ∀y.∃v.∀u.∃x. (A(x,y) ⇒ B(u,v)).

From the constructive meaning of the quantifiers, we also have evidence for the following formula:


∃v.∃x.∀y.∀u. (A(x(y,u), y) ⇒ B(u, v(y)))

where for a domain D the function v maps D to D and x maps (D × D) to D; both are effectively computable. We will examine the exact structure of these functions after we justify the series of transformations to prenex normal form. We will compare our explanation of the transformation to prenex form with Bishop's explanation.


Here are Bishop's transformations [Bis70] to explain his idea of numerical implication based on Gödel's Dialectica interpretation [Göd58].

Bishop claims that mathematics is primarily concerned with statements of the following form, where A(x,y) is decidable:

(1) ∃y:T. ∀x:S. A(x,y)

Now consider the following implication constructed from statements of this form:

(2) ∃y:T. ∀x:S. A(x,y) ⇒ ∃v:T'. ∀u:S'. B(u,v)

Again, A(x,y) and B(u,v) are decidable over the types T, S, T', and S'. We could take all of these types to be the natural numbers ℕ or the integers ℤ or even the rationals ℚ. We could consider other types as well, but for simplicity we consider those with decidable equality, as is the case for the options we mentioned plus lists, trees, graphs, and so forth over these types.
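For instance, in Lean notation (as a stand-in), equality on such base types is decidable by existing instances, and decidability lifts to lists over them:

  -- Equality is decidable on Nat and Int, and this lifts to lists over them.
  example (x y : Nat) : Decidable (x = y) := inferInstance
  example (xs ys : List Int) : Decidable (xs = ys) := inferInstance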

Bishop claims that "pure thought" translates the form of (2) into this form:

(3) ∀y:T. (∀x:S. A(x,y) ⇒ ∃v:T'. ∀u:S'. B(u,v))

Now Bishop says to fix y as in the following statement and consider how v is constructed by some computational procedure under the hypothesis ∀x:S. A(x,y):

(4) ∀x:S. A(x,y) ⇒ ∃v:T'. ∀u:S'. B(u,v)

He notes that we can make the construction uniform in y even when we don't have the hypothesis ∀x:S. A(x,y). So we can actually prove the following:

(5) ∃v:T'. (∀x:S. A(x,y) ⇒ ∀u:S'. B(u,v))

This logically implies:

(6) ∃v:T'. ∀u:S'. (∀x:S. A(x,y) ⇒ B(u,v))

Bishop now claims that (2) is equivalent to the following:

(7) ∀y:T. ∃v:T'. ∀u:S'. (∀x:S. A(x,y) ⇒ B(u,v))

To analyze this more deeply, we fix values of y, v, and u and consider the inner implication:

(8) (∀x:S. A(x,y)) ⇒ B(u,v)

Recall that B(u,v) is decidable, and its decision depends only on a finite number of instances of the hypothesis, say A(x1, y), A(x2, y), ..., A(xn, y), because the computable function on the evidence term for (∀x:S. A(x,y)) respects extensional equality on the evidence terms. So the proof actually comes from the following Boolean expression:

(9) A(x1,y) ∧ A(x2,y) ∧ ... ∧ A(xn,y) ⇒ B(u,v)

In Boolean logic, when either proposition P or proposition Q is decidable, (P ⇒ Q) ⇔ (∼P ∨ Q). This property was addressed earlier in the exercises as Theorem 16. Since A(x,y) is decidable, we can use Boolean logic on the implication in (9) and conclude that the deduction has the form ∼(A(x1, y) ∧ A(x2, y) ∧ ... ∧ A(xn, y)) ∨ B(u,v). By one of DeMorgan's laws in Boolean logic, this reduces to (∼A(x1, y) ∨ ∼A(x2, y) ∨ ... ∨ ∼A(xn, y)) ∨ B(u,v). This is equivalent to ∼A(xk, y) ∨ B(u,v) for some k. As stated in the exercises as Theorem 14, this implies the following:

(10) A(xk,y) ⇒ B(u,v)
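The Boolean fact used in this step can be checked directly. A minimal Lean sketch, here with P decidable (the case with Q decidable is similar; Lean is stand-in notation for the exercise referred to above):

  -- When P is decidable, (P ⇒ Q) ⇔ (∼P ∨ Q) holds constructively:
  -- decide P and place the result in the appropriate disjunct.
  example (P Q : Prop) [Decidable P] : (P → Q) ↔ (¬P ∨ Q) :=
    ⟨fun h => if hp : P then Or.inr (h hp) else Or.inl hp,
     fun h hp => h.elim (fun hnp => absurd hp hnp) id⟩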

So we actually know the following:

(11) ∃xi:S. (A(xi,y) ⇒ B(u,v))

Now restoring the other quantifiers we get:

(12) ∀y:T. ∃v:T'. ∀u:S'. ∃xi:S. (A(xi,y) ⇒ B(u,v))

Finally, Bishop uses the Axiom of Choice to obtain the final assertion. One form of the Axiom of Choice is to state it as (∀x:S. ∃y:T. R(x,y)) ⇒ ∃f:(S → T). ∀x:S. R(x, f(x)). The computable function f comes from the extract of the constructive proof of ∃y:T. R(x,y). Using the Axiom of Choice, Bishop arrives at the final assertion:

(13) ∃v:(T → T'). ∃x:((T × S') → S). ∀y:T. ∀u:S'. (A(x(y,u), y) ⇒ B(u, v(y)))
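The constructive content behind this use of the Axiom of Choice is that existential evidence already contains its witness. A minimal Lean sketch, modelling Nuprl's ∃ by a dependent pair (PSigma), since Lean's built-in ∃ hides its witness; the name choiceFromEvidence is illustrative:

  -- When ∃ is read as a dependent pair (as in Nuprl), the choice function f is
  -- the first projection of the evidence, and the proof that f works is the second.
  def choiceFromEvidence {S T : Type} {R : S → T → Prop}
      (h : ∀ x : S, PSigma (fun y : T => R x y)) :
      PSigma (fun f : S → T => ∀ x : S, R x (f x)) :=
    ⟨fun x => (h x).1, fun x => (h x).2⟩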


Proofs

On the pages that follow, we show proofs for each of the transformation rules for implication. Carrying out complete proofs of the equivalence claims is very instructive, especially as a way of exploring the extra assumptions that are needed to make the transformations constructive. We list these as simple propositions, one for each direction of the equivalence.



Transformation Rules for Disjunction and Conjunction

In classical logic, we can also transform disjunction and conjunction to prenex normal form, with the following set of rules:

  1. ((∃x. P(x)) ∨ C) ⇔ ∃x.(P(x) ∨ C)

  2. ((∀x. P(x)) ∨ C) ⇔ ∀x.(P(x) ∨ C)

  3. ((∃x. P(x)) ∧ C) ⇔ ∃x.(P(x) ∧ C)

  4. ((∀x. P(x)) ∧ C) ⇔ ∀x.(P(x) ∧ C)

To prove these rules constructively, some of the transformations require additional assumptions. We leave these proofs as exercises, one for each direction of each equivalence. We provide extra assumptions as necessary for each theorem.
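As a warm-up for these exercises, here is a minimal Lean sketch of Rule 3 above, which goes through with no extra assumptions (stand-in notation; the other rules are left to the exercises):

  -- Rule 3: ((∃x. P(x)) ∧ C) ⇔ ∃x. (P(x) ∧ C); the evidence is just regrouped pairs.
  example {T : Type} (P : T → Prop) (C : Prop) :
      ((∃ x, P x) ∧ C) ↔ (∃ x, P x ∧ C) :=
    ⟨fun h => h.1.elim (fun x px => ⟨x, px, h.2⟩),    -- regroup ⟨⟨x, px⟩, c⟩ as ⟨x, ⟨px, c⟩⟩
     fun h => h.elim (fun x pc => ⟨⟨x, pc.1⟩, pc.2⟩)⟩ -- and back again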


