# Coq Talks

Mondays from 1 to 2 PM in 5331 Computer Sciences

### Days

Here go any notes about what to do for an upcoming day or, if the day is over, any interesting things that came up.

#### 2008 August 25

This was the very last day of the seminar. We constructed a new induction principle for `trm` which followed the structure of `term` but where the recursion was on a bound for the depth rather than on the structure of the `trm` itself.

Check out our proof of the induction fact and our proof that local closure is decidable.

#### 2008 August 18

Today we talked about proofs of the form `forall e : trm, term e -> Q e` where `Q` is some predicate on `e`. If we attempt to proceed by induction on `e` we are generally doomed. In dealing with the abstraction subcase we arrive at a proof context like this:

```
e : trm
H : term e -> Q e
==================
term (trm_abs e) -> Q (trm_abs e)
```
We can transform the goal to this:
```
e : trm
H : term e -> Q e
H0 : term (trm_abs e)
=====================
Q (trm_abs e)
```
We would like to use `H` to prove that `Q e` holds. But in order to do that, we must show that `term e` holds. However, if we invert `H0` we will instead get the following subgoal:
```
e : trm
H : term e -> Q e
H0 : term (trm_abs e)
L : vars
H1 : forall x, x \notin L -> term (e ^ x)
=========================================
Q (trm_abs e)
```
We can certainly pick a fresh variable, say `x`, and use `H1` to prove that `term (e ^ x)` holds, as in the following:
```
e : trm
H : term e -> Q e
H0 : term (trm_abs e)
L : vars
H1 : forall x, x \notin L -> term (e ^ x)
x : var
Fr : x \notin (L \u fv e)
H2 : term (e ^ x)
=========================
Q (trm_abs e)
```
However, `H2` is not useful for proving `Q e`, since what is really needed to prove the premise of `H` is `term e`. Moreover, if `Q` itself yields an inductive expression that includes cofinite quantification, proving `Q e` may be useless. What may really need to be proved is `forall x, x \notin L -> Q (e ^ x)`, and that cannot be obtained by doing induction on the `trm`.

In general, as we have seen before, when confronted with a goal of this form it is best to proceed by induction on the hypothesis `term e`. In that case, the subgoal corresponding to the abstraction case is:

```
L : vars
t1 : trm
H : forall x, x \notin L -> term (t1 ^ x)
H0 : forall x, x \notin L -> Q (t1 ^ x)
=======================================
Q (trm_abs t1)
```
Generally, this approach yields much more useful hypotheses for the Penn implementation, where most inductive definitions use a cofinitely quantified constructor for the case corresponding to `trm_abs`.
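
For concreteness, the cofinite abstraction rule of `term` looks roughly like this (reconstructed from memory of STLC_Core_Definitions.v, so details may differ slightly):

```
Inductive term : trm -> Prop :=
  | term_var : forall x,
      term (trm_fvar x)
  | term_abs : forall L t1,
      (forall x, x \notin L -> term (t1 ^ x)) ->
      term (trm_abs t1)
  | term_app : forall t1 t2,
      term t1 ->
      term t2 ->
      term (trm_app t1 t2).
```

Note that `term_abs` quantifies over all `x` outside some set `L`, which is exactly why induction on `term` hands us the hypotheses about `t1 ^ x` that induction on `trm` cannot.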

Consider trying to prove the really simple fact that it is decidable whether a term is locally closed. We express this fact as follows:

```
forall e : trm, term e \/ ~ term e
```
The obvious approach is to do induction on `e`. It turns out that this will always fail. This file shows our failed attempt to prove this goal.

The solution is to write our own induction principle for `trm` rather than accepting the standard induction principle generated by Coq. Our new induction principle should be about `trm` but should emulate the structure of the induction principle for `term`.

We began to construct a new induction principle. First we decide on the type of the principle, then we begin to prove it. We find that our first attempt fails, because our proof does not obey Coq's strong syntactic restriction on fixpoints, which ensures, rather conservatively, that all fixpoints terminate. This file shows our failed attempt to prove this goal.

One solution to this problem is to devise a measure which will decrease at every step, and will satisfy Coq's syntactic restriction. A good measure is the depth of the `trm`.

```
Fixpoint depth (e : trm) : nat :=
  match e with
  | trm_bvar n    => 1
  | trm_fvar x    => 1
  | trm_abs t1    => S (depth t1)
  | trm_app t1 t2 => S (max (depth t1) (depth t2))
  end.
```
Next week, we'll talk about how we use this depth measure to devise an induction principle which Coq will accept and which will allow us to prove our target fact `forall e : trm, term e \/ ~ term e`.
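
The reason depth succeeds where structural recursion fails is that opening with a variable does not change the depth, so the bound still decreases when we go under a binder. A hedged statement of that key fact (the lemma name is my own invention; the argument order follows `open_rec` in STLC_Core_Definitions.v):

```
Lemma depth_open_rec_var : forall x e k,
  depth (open_rec k (trm_fvar x) e) = depth e.
```

This holds because opening only replaces a `trm_bvar` leaf by a `trm_fvar` leaf, and both have depth 1.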

#### 2008 August 4

Today we constructed a proof of preservation for the STLC. Just as with progress, we do induction on the typing derivation. Also, we saw that if we use the induction tactic carelessly, i.e., without setting up the goal correctly, we'll end up with subgoals that we won't be able to prove. There are three rules for typing derivations and the general approach is the following:

• typing_var --- a variable cannot be reduced, so we use inversion to eliminate this subgoal
• typing_abs --- an abstraction cannot be reduced, so we use inversion to eliminate this subgoal. There are evaluation orders in which an abstraction can be reduced, and this step would be different in those cases, but in the call-by-value semantics which we are using there is no rule for reducing an abstraction.
• typing_app --- an application can always be reduced. There are three cases:
  • red_beta --- this is the actual substitution step and the only really interesting one. We solve this goal by expressing opening with a term as a substitution applied to the result of opening with a variable, and using the fact that typing is preserved by substitution when the substituted term has the same type as the variable it replaces.
  • red_app_1 --- here we use our induction hypothesis for the left-hand side, since that is the side that is reduced
  • red_app_2 --- here we use our induction hypothesis for the right-hand side, since that is the side that is reduced
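
For reference, the statement we are proving has roughly this shape (hedged: the names `typing` and `red` and the exact form are as I recall them from STLC_Core_Definitions.v):

```
Definition preservation := forall E t t' T,
  typing E t T ->
  red t t' ->
  typing E t' T.
```

The induction is on the derivation of `typing E t T`, with `red t t'` inverted in each case as described above.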

Check out our three attempts to prove preservation, the third one being the successful one.

Last time we developed a proof of progress for the simply typed lambda calculus. This time we'll develop a proof of preservation, exploiting the lemmas in STLC_Core_Infrastructure.

#### 2008 June 30

Today we constructed a proof of progress for the STLC, learning a few new tactics on the way. Progress is proved by induction not on the term, but on the typing derivation, i.e., on the proof that the term is well typed. There are three rules for typing derivations and we must prove a different thing for each rule:

• typing_var --- a variable is neither a value nor can it be reduced. Therefore, for this case, we must find some hypothesis that implies False so that we can eliminate the goal. We do this by using the fact that a variable cannot be bound in an empty environment, and thereby prove a contradiction.
• typing_abs --- we know that the only term that can be typed with this rule is an abstraction, and abstractions are values, so we must prove that the term typed by this rule is a value.
• typing_app --- we know that the only term that can be typed with this rule is an application and that an application can always be reduced.
It's also important to note that we must prove progress in the empty environment. If, instead, we used an arbitrary environment then it would be impossible to eliminate the typing_var case and our goal would be unprovable.
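
The statement itself looks roughly like this (hedged: the names `typing`, `value`, `red`, and `empty` are as I recall them from STLC_Core_Definitions.v):

```
Definition progress := forall t T,
  typing empty t T ->
  value t \/ exists t', red t t'.
```

Note the `empty` environment baked into the statement, which is what lets us kill the typing_var case.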

We also noted that essential equalities may seem to disappear in the course of an induction proof. We use the gen_eq tactic to assert these equalities before we do the induction step. Some might argue that the induction tactic is flawed and that it should be able to discover and assert these equalities automatically or perhaps that a variant should be introduced that does these assertions automatically. That seems possible and desirable.

We used two new tactics, inversion and subst. inversion is a powerful tactic which splits the goal into subgoals, as many as there are constructors for the inductive type of the term being inverted. Each subgoal is populated with the hypotheses required to build a term of that inductive type for that constructor. It then eliminates any of these subgoals that have a discriminable equality among the hypotheses. subst uses fundamental properties of equality to perform substitutions according to equality hypotheses. It clears any equality hypotheses that are changed by substitution into reflexive statements, e.g., x = x.
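
A toy example, independent of the STLC files, showing both tactics in action (the `even` predicate here is my own, not from the development):

```
Inductive even : nat -> Prop :=
  | even_O  : even 0
  | even_SS : forall n, even n -> even (S (S n)).

Lemma even_inv : forall n, even (S (S n)) -> even n.
Proof.
  intros n H.
  (* inversion sees that only even_SS can derive H, and produces
     a hypothesis even n0 together with the equality n0 = n *)
  inversion H.
  (* subst rewrites with the equality and clears it *)
  subst. assumption.
Qed.
```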

We observed that progress is a much easier property to prove than preservation. We completed our proof using really basic tactics. The only domain specific lemma we needed was typing_regular which allows us to prove that if a term is well-typed in some environment then the environment is well-formed and the term is locally closed. Preservation is the hard one because we must show that typing is preserved by substitution.

Check out our three attempts to prove progress, the third one being the successful one.

Last week we talked a lot about typing environments and the definitions of typing schemas and reduction rules. This week we'll apply ourselves to the basics of proving progress and preservation. We'll confront progress first, since it's by far the less difficult.

To prepare you should:

• Read Section 9.3 of Types and Programming Languages. You want to get an overview of the way the proofs of progress and preservation are constructed.
• Take a look at Lib_Tactic.v. This is a big repository of more or less miscellaneous tactics. Look especially at skip, very useful for skipping subcases of your proof and completely unsafe, and introv, a pleasant convenience tactic which it would be nice to see make its way into the Coq standard library. This tactic works like intros except that it allows you to give names to all variables that are not named in the goal. gen_eq is also useful. It allows you to establish an equality hypothesis that may otherwise disappear when doing induction.
• Read the definition of typing_regular in STLC_Core_Infrastructure.v. This is a very useful lemma that allows us to reason from the fact that a term is well-typed to properties about the term and the typing environment.
• Read Section 8.5.2 in the Reference Manual. This section is about the unfold tactic. This tactic replaces the specified term with its definition and then does some further reductions.
• Read Section 8.7.2 in the Reference Manual. This section is about the destruct tactic. We've used the destruct tactic before. This time you should pay particular attention to the first variants, which uses intro patterns. These patterns allow you to specify names for the new hypotheses generated by the destruct tactic.
• Read Section 8.8.1 in the Reference Manual. This section is about the rewrite tactic. It allows you to rewrite subterms to other subterms as long as the subterms are proven equal.
• Read Section 8.8.7 in the Reference Manual. This section is about the subst tactic which allows you to exploit equalities.
• Read Section 8.9.3 in the Reference Manual. This section is about the discriminate tactic which allows you to throw out subgoals when there is an equality hypothesis which is false.
• Read Section 8.10 in the Reference Manual. This section is about the inversion tactic. This is a powerful tactic that generates multiple subgoals for each way in which the given hypothesis can be supported and eliminates some subgoals which involve obvious contradictions.

The following lemmas are used only in the proof of preservation---which we may not be able to cover entirely. If you're still enthusiastic, read the comments associated with the following lemmas, which are proved in STLC_Core_Infrastructure.v or STLC_Core_Soundness.v. Also read any definitions on which they depend, e.g., fv and subst.

• subst_intro --- this lemma allows us to rewrite opening with a term as a substitution operation applied to the result of opening with a variable
• typing_subst --- this lemma allows us to argue about preservation of types under substitution. Note that substitution here is a pretty straightforward operation, not nearly as troublesome as the one discussed in textbooks.

#### 2008 June 23

Today we discussed the Metatheory_Env module. This module models an environment, i.e., a very general sort of mapping from variables (elements of type var) to values of any type. For proving properties of type soundness we are generally interested in environments that map variables to types (elements of type typ). The definitions in Metatheory_Env are quite straightforward and implemented pretty simply in terms of lists. We discussed a few commands that can be used to find the proofs you need; these are Locate, SearchAbout, and SearchRewrite. We did a number of exercises to become familiar with the environment module.
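
A minimal sketch of those search commands, using a standard-library relation rather than anything from Metatheory_Env:

```
SearchAbout le.        (* every lemma whose statement mentions le *)
SearchRewrite (_ + 0). (* rewriting lemmas whose left-hand side matches _ + 0 *)
Locate "+".            (* where the + notation is defined *)
```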

A useful element of Metatheory_Var is var_default. It's good if you want a concrete variable to insert into your proofs, as using var_fresh is a bit cumbersome for simple exercises.

Typing and reduction in this implementation follow the TAPL book quite closely. The unusual part is to be found in the typing schema for abstractions. Also, the typing environment may not have duplicate bindings, a reasonable restriction in a proof scenario where we may always select fresh variables.

Check out the exciting things we proved about environments.

Check out the exciting things we proved about typings and reductions.

Last time we studied some interesting tactics that allow to generate variables that are fresh in any given context. This time we'll discuss typing and reduction rules for the STLC and also how to go about proving preservation and progress.

To prepare you should:

• Read Metatheory_Env.v through the end of the Relations section. This should give you an understanding of the structure of environments and their properties.
• Read all of STLC_Core_Definitions.v. Pay particular attention to the definitions of typing, value, red, progress and preservation.
• In the Reference Manual read Section 8.10. This section is about the inversion tactic. It is not necessary to read about every variant of the tactic. Do read the description of simple inversion. Read Section 10.5 as well but desist if it doesn't seem to be making any sense.

#### 2008 June 2

Today we discussed the Metatheory_Var module. This module handles generation of fresh variables with respect to an arbitrary set. We inspected the extracted OCaml code to get an intuition about the general structure of var_fresh. Essentially, var_fresh finds the maximum element in a set and then increments it by one. The maximum element is found recursively, by taking the maximum of the head of the set, viewed as a list, and the maximum of the tail. This is not the most efficient way to find a fresh variable. It would be possible to just keep a global counter and increment it each time a fresh variable is generated. However, such an approach, being less functional, is less easy to prove correct.
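
A toy model of the idea, written in Coq rather than the extracted OCaml (this is my own illustration, not the actual library code, which works over an abstract set type):

```
Require Import Arith List.

(* Maximum of a list of naturals; the empty list has maximum 0. *)
Fixpoint max_list (l : list nat) : nat :=
  match l with
  | nil => 0
  | x :: xs => max x (max_list xs)
  end.

(* A "fresh" element: one more than anything already in the list. *)
Definition fresh_nat (l : list nat) : nat := S (max_list l).
```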

Of course, being able to find a variable that is fresh with respect to a given set is only half the problem. When in the middle of the proof, we must inspect everything in the proof context that may contain variables. These include variables, sets of variables, terms (which contain free variables), and environments (which are a mapping from variables to types).

The tactic gather_vars_with is the crucial tactic for inspecting the proof environment and forming the union of sets of variables. test_pick_fresh_filter is an illustration of its use. Some interesting observations about the Ltac language that are illustrated by gather_vars_with are:

• it looks like ML but backtracks rather like Prolog
• `match goal with` means match against the entire proof context, including the hypotheses
• `H : ?S |- _` matches any list of hypotheses and conclusions. It can match the same list of hypotheses in multiple ways, shortest match first, and will continue trying matches until they all fail
• `constr:` is an annotation indicating that the Ltac term it modifies must be treated as a Coq term --- in the example, `F` is applied to `H` and the application fails if the types are incorrect
• `context [FH]` is a matching pattern that succeeds if `FH` can be found anywhere within the matched term
• `fail` allows you to modify the behavior of the backtracking mechanism
We stepped through this tactic in the debugger to get a feel for its behavior.

beautify_fset throws out empty sets and reassociates the remaining sets in a set union.

pick_fresh_gen illustrates the use of fresh which gives you a fresh variable and optionally takes a string to use as the variable prefix. pick_fresh_gen itself takes a set and a variable name and makes a new variable that is fresh with respect to the given set and places it in the proof context.

Finding a new variable that is fresh with respect to all variables in the context and placing that variable and a proof of freshness in the context is handled by pick_fresh, which must be defined anew for every metatheoretic proof. This is because it depends on gather_vars which is defined in terms of functions from, e.g., terms to sets. These functions vary depending on the particular calculus about which proofs are developed.

Once a fresh variable and a proof of freshness have been placed in the context one generally wants to use them to prove something. A lot of basic lemmas and tactics for that are in Metatheory_Fresh. We skip these until they're better motivated and move on to talk about environments and soundness.

Today we just exercised these tactics and didn't manage to prove anything interesting.

Last time we learned some things about Coq's handling of sets and about proving existential properties. This week we will be discussing the Penn implementation of variables, generating sets of already existing variables, and generating new variables that are fresh with respect to sets of variables.

To prepare you should:

• Read the definitions of var_freshes and the functions on which it depends in metatheory_Var.ml, the ML file extracted from the library Metatheory_Var. Construct a mental model of how fresh variables are generated. You do not need to follow dependencies outside the metatheory_Var.ml file.
• Read Sections 9.3 and 9.4 in the reference manual. These sections are about specifying your own tactics and about running tactics in the tactic debugger.
• Read the definition of fv in STLC_Core_Definitions.v. fv finds the set of free variables in a term.
• If you generally use CoqIDE make sure that you can also run coqtop and load the necessary Penn files. Unfortunately, the Coq tactic debugging facility does not function at all in CoqIDE.
• Read Section 4.3 in the reference manual. This section describes the various forms of reduction that the Coq engine performs.
• Read Section 8.5.4 in the reference manual. This section is about the simpl tactic. This tactic reduces a Coq term to another term according to a specific evaluation procedure.

#### 2008 April 28

Today we discussed the Coq libraries used to express properties about sets. The Coq standard library defines an interface, FSetInterface, that gives the signatures of all sorts of lemmas that are true of sets. Also in the standard library is FSetList, which gives a concrete representation of sets based on ordered lists. This is defined by means of a module. The elements of the set must conform to the module type OrderedType. The Penn implementation of sets is built on top of the standard library implementation.

The Penn implementation also defines (Lib_FinSet.v) some abbreviations for set operations and special sets. `L \union L'` forms the union of `L` and `L'`. `{{x}}` is the singleton set containing `x`. `{}` is the empty set.

Now that we are able to express sets it is pretty easy to build proofs that abstractions are terms. Check out the proofs we built about abstractions. Note that we used the change tactic to replace one term with another with which it was interconvertible. Also, the `@` prefix allows turning off the elision of implicit arguments. This is useful when using apply. Finally, we noticed that we can use the empty set as the argument to term_abs in all our proofs of well-formedness.
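
A tiny illustration of change on convertible terms (my own example, not from the development):

```
Lemma change_demo : 1 + 1 = 2.
Proof.
  (* 1 + 1 computes to 2, so the goal is convertible to 2 = 2 *)
  change (2 = 2).
  reflexivity.
Qed.
```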

We also discussed proofs about existence. Since this is constructive logic, we prove the existence of something that satisfies some property by giving a witness and a proof that that witness satisfies the property. ex is the inductive definition that inhabits Prop; sig is the inductive definition that inhabits Set. A convenient tactic for supplying a witness is the exists tactic.
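
A minimal existential proof in this style (my own example):

```
Lemma exists_positive : exists n : nat, 0 < n.
Proof.
  (* supply the witness ... *)
  exists 1.
  (* ... then prove it satisfies the property; 0 < 1 unfolds to 1 <= 1 *)
  unfold lt. apply le_n.
Qed.
```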

Even though ex has only one constructor it cannot be opened up when proving a goal in the type Set.

The sig type is really useful to express a function return type and some property that the return argument must satisfy. When the function is extracted the proof goes away, and only the computation of the result value remains.

Check out the simple existential facts we proved today.

Last week we discussed the Penn implementation of locally nameless representation. This week we'll discuss that further, as well as proofs of existence properties.

To prepare you should:

• Check out item 47 in the Coq Faq. This is about the exists tactic, used for specifying the witness in an existential proof.
• Read Section 8.3.11 in the reference manual. This section is about the change tactic.
• Read Sections 2.7.7 and 2.7.8 in the reference manual. These sections discuss how to specify an argument that is implicit.
• Load Metatheory into your proof environment. Then extract the variable part of the metatheory and the libraries it depends on to ML code using `Recursive Extraction Library Metatheory_Var.` This will generate a whole lot of files.
• Take a look at Lib_FinSet.v and familiarize yourself with the notation for special sets and set operations defined in this file.
• Note that Lib_FinSet is a library about finite sets, not an implementation. The actual implementation is in Lib_FinSetImpl.v. This in turn relies on FSetList which is a part of the Coq standard library. Take a look at the documentation for this library. Pay particular attention to the definitions of special sets and set operations.
• Take a look at some of the functions in metatheory_Var.ml. We'll be discussing generating new variables soon, you should look at the Variables module in metatheory_Var.ml to get a general outline of how that is done.

#### 2008 April 21

Today we discussed the Penn implementation of locally nameless representation but ignored cofinite quantification. The basic facts are the following:

• trm is an inductive type of the sort that we are quite familiar with.
• The trm_fvar constructor takes an argument of type var. We'll talk more about vars later.
• It is possible to construct a trm which is meaningless. For example, trm_abs (trm_bvar 1) contains an index which is not bound to any enclosing lambda. To deal with this we define a new predicate term which has a proof exactly when a trm is meaningful.
• term is straightforward except for the abstraction case. A proof that an abstraction is a term requires a function as one of the arguments to the term_abs constructor. This is unfamiliar and makes things a good deal more complicated. The motivation is cofinite quantification which we have not yet discussed.
• open is defined in terms of open_rec. open_rec is a simple recursive function. `k === l` is an abbreviation for `Peano_dec.eq_nat_dec k l`. `Peano_dec.eq_nat_dec` is a function with type `forall m n : nat, {m = n} + {m <> n}`. If we apply this function to `k` and `l` we get a value with type `{k = l} + {k <> l}`. We can match against this value; in `open_rec` the `if` shorthand is used.
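
For reference, open_rec and open look roughly like this (reconstructed from memory of STLC_Core_Definitions.v, so details may differ slightly):

```
Fixpoint open_rec (k : nat) (u : trm) (t : trm) {struct t} : trm :=
  match t with
  | trm_bvar i    => if k === i then u else trm_bvar i
  | trm_fvar x    => trm_fvar x
  | trm_abs t1    => trm_abs (open_rec (S k) u t1)   (* cross a binder *)
  | trm_app t1 t2 => trm_app (open_rec k u t1) (open_rec k u t2)
  end.

Definition open t u := open_rec 0 u t.
```

The index is bumped when passing under an abstraction, so `open t u` replaces exactly the indices that refer to the outermost binder.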

Sometimes it is useful to declare things without defining them. To declare a variable with type var use Parameter v : var.

The info tactic is nice. To see what auto is doing in a particular case try info auto.

Check out the exciting things we proved today.

Last week we talked about locally nameless representation. This week we'll discuss the Penn implementation of it.

We'll also discuss proving existence.

To prepare you should:

• Read the definition of open_rec and the definition of open in STLC_Core_Definitions.v.
• Check out item 74 in the Coq Faq. This is about the info tactical. Next time you want to know what the auto tactic is doing, try using info auto instead.
• Read Section 1.3.1 in the Reference Manual. This section is about declarations. Sometimes you want to declare that you have something of a certain type without actually defining it. Hypothesis, Axiom, Variable and Parameter are all good for that, but there are certain conventions that you will be expected to adhere to when using them.
• Check out item 47 in the Coq Faq. This is about the exists tactics used for specifying the witness in an existential proof.

#### 2008 April 14

Today we finished up our proof with a result type in the Set universe. The proof extracted to a function that is just like our original function. Check out the things we proved today.

We used the refine tactic to build up the proof that we were then going to extract. The idea of using refine to make explicit the computational parts of your proof and then to use automation to fill in the rest is a very appealing one.
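
A minimal example of the refine-then-automate style (my own, much simpler than what we proved today):

```
Lemma conj_by_refine : forall A B : Prop, A -> B -> A /\ B.
Proof.
  intros A B HA HB.
  (* supply the computational skeleton, leaving holes for the proofs *)
  refine (conj _ _).
  exact HA.
  exact HB.
Qed.
```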

We noticed that we can use if statements within the refine tactic.

When we use tactics we have less direct control over the structure of our proof. Today, though, our extracted function was just the same as the one constructed with refine. We looked at the proof itself and saw that the proof body was somewhat different. Since we used induction, the induction principle for nat was applied to some functions. The extraction facility does some reductions on the Coq term before it extracts to the ML code.

We talked about the drawbacks of writing formal proofs using de Bruijn indices or a nominal representation. On the one hand you get into some difficult details of shifting, on the other hand you get into some difficult details of renaming. One approach is to syntactically distinguish bound from free variables. There are many ways to do that, but we focus on locally nameless representation where free variables are represented by names and bound variables are represented by de Bruijn indices.

Read Sections 8.2.2 and 10.1. These sections are about the refine tactic.

Download and compile this version of the Penn solution to the POPLmark challenge.

Take a look at STLC_Core_Definitions.v. This is a definition of the simply typed lambda calculus using a locally nameless representation. In particular, you'll want to look at the definitions of typ, trm and term. You might also want to read Section 2 and Section 3 through 3.1 in Engineering Formal Metatheory.

#### 2008 April 7

Today we talked about some more variants of or in the Set universe, sumbool and sumor. We noticed that sumbool is an analogue to the boolean type of ML, while sumor is an analogue to the option type.

We can write functions that return elements of type bool or we can write analogous functions that return elements of type sumbool. The functions with return type sumbool have a richer specification, yet when they're extracted their type is extracted to bool.
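
A small example of the two styles side by side (my own; the names are hypothetical):

```
(* The ML-style boolean test. *)
Definition is_zero (n : nat) : bool :=
  match n with 0 => true | S _ => false end.

(* The sumbool version: the result carries a proof either way,
   yet it extracts to an ML function returning bool. *)
Definition is_zero_dec (n : nat) : {n = 0} + {n <> 0}.
Proof.
  destruct n.
  left; reflexivity.
  right; discriminate.
Defined.
```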

Check out the exciting things we proved today.

Read Section 2.2.2 in the reference manual if you haven't already. This section is about if expressions, which are really just a shorthand for a certain restricted type of match expressions.

Read Section 6.2.3 and Chapter 18 through Section 18.1. These are both about the Extraction command.

Read Section 8.7.2. This section is about the destruct tactic. We have already used the destruct tactic in its simplest form. However, the argument to destruct is not restricted to a variable name. It can be any term that can be constructed from the variables in the context.

#### 2008 March 24

Today we discussed and, or, and not. Since the core hints database includes the constructors for and and or, auto is able to solve conjunctions and disjunctions of goals which it can solve individually. However, when a conjunction or disjunction is one of the hypotheses, and the proof requires one of the parts of the hypothesis, auto will not succeed. tauto handles proofs that require destructing conjunctions or disjunctions. But tauto treats every proposition as a black box, and will fail if it does not prove the goal. The intuition tactic uses the decision procedure of tauto but then solves the resulting subgoals with the tactic on which it is parameterized. Unfortunately, intuition will sometimes leave subgoals unsolved but transformed.
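
A goal that defeats auto but falls to tauto, because the proof must destruct a conjunction hypothesis (my own example):

```
Lemma and_commutes : forall P Q : Prop, P /\ Q -> Q /\ P.
Proof.
  intros P Q H.
  (* auto fails here: it will not take H apart. tauto will. *)
  tauto.
Qed.
```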

Check out the exciting things we proved about and and or.

and and or have some relatives in the Set universe: sum resembles or closely, and prod resembles and closely. We'll put them to some use next time.

Read Section 8.12.3 in the reference manual if you haven't already. This section is about the tauto tactic.

Read Section 8.12.4 in the reference manual if you haven't already. This section is about the intuition tactic.

SPRING BREAK

#### 2008 March 10

Today we talked about the discriminate tactic which fully automates proving anything from a hypothesis that two terms are equal when actually they are not.
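A minimal use of discriminate (my own example): from an equation between terms built with different constructors, any goal follows.

```
Lemma absurd_eq : forall P : Prop, 0 = 1 -> P.
Proof.
  intros P H.
  (* 0 and S 0 start with different constructors, so H is absurd *)
  discriminate H.
Qed.
```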

We also showed how easy it is to build inductive definitions that are uninhabitable but that do not look just like the inductive definition for False. These definitions are just as good for proving anything as False is though. Best to avoid them if we want to prove anything useful.
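For instance, this definition is uninhabited even though it looks nothing like False (my own example):

```
(* To build a strange you would already need a strange. *)
Inductive strange : Prop :=
  | mk_strange : strange -> strange.

Lemma strange_anything : forall P : Prop, strange -> P.
Proof.
  intros P H.
  (* the single case comes with an induction hypothesis proving P *)
  induction H. assumption.
Qed.
```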

The Coq site has a pdf version of the reference manual. I find it more congenial to use this than the web-based version.

Check out the exciting things we proved today.

Read Section 8.9.3 in the reference manual. This section is about the discriminate tactic.

#### 2008 March 3

Today we talked about the `;` tactic separator which is often more useful than the `.` separator. A common idiom when interacting with Coq is to precede the `;` with a well-chosen and powerful tactic and to follow it with auto. So, for example, `induction n; auto` is a great tactic if `induction n` will generate multiple subgoals all of which can be solved with auto.
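
The idiom in one line (my own example; both subgoals of the induction are in auto's reach via the le constructors in the core hints database):

```
Lemma le_O_n' : forall n : nat, 0 <= n.
Proof.
  (* base case 0 <= 0 by le_n; step case 0 <= S n by le_S and the IH *)
  induction n; auto.
Qed.
```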

We also discussed the pattern, elim, and induction tactics, all of which are useful for facilitating induction. The destruct tactic is useful when you do not wish to do recursion but do wish to match against each constructor of an inductive type. Check out the exciting things we proved using these tactics.

We also discussed True, False, and =. True is nice because it can always be proved. False is nice because you can use a proof of False to prove anything. = is nice because you can use a proof that two unequal things are equal to prove False. It is dangerous to generate inductive types that, like False, cannot be inhabited. Check out the cool things we proved about these very important inductive definitions.
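
The two ends of the spectrum in miniature (my own examples):

```
(* True has the single constructor I, so it is always provable. *)
Lemma truth : True.
Proof. exact I. Qed.

(* False has no constructors, so destructing a proof of it
   leaves no subgoals at all. *)
Lemma from_false : forall P : Prop, False -> P.
Proof. intros P H. destruct H. Qed.
```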

Read Section 8.5.7 in the reference manual. This section is about the pattern tactic. Just skim.

Read Section 8.7.1 in the reference manual if you haven't read it already. This section is about the induction tactic. Just skim it---there's no need to dwell on every possible variation of the tactic. Do read at least a little bit about the elim tactic.

Read Section 8.7.2 in the reference manual. This section is about the destruct tactic. You use the destruct tactic when you want to match against the constructors of an inductive type, but you do not want to do recursion. Just skim.

Read Section 9.1 and the first page of Section 9.2 in the reference manual. Chapter 9 introduces the tactic language. We will not have occasion to use a whole lot of it yet, but the sequence operator, ;, is really useful.

#### 2008 Feb 25

Today we used our induction principle to prove `forall n, 0 <= n`.

We talked about the apply tactic, and how the auto tactic uses the apply tactic and the intro tactic to solve goals.
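
What auto does behind the scenes, done by hand with apply (my own example):

```
Lemma zero_le_two : 0 <= 2.
Proof.
  (* le_S : forall n m, n <= m -> n <= S m *)
  apply le_S.  (* goal becomes 0 <= 1 *)
  apply le_S.  (* goal becomes 0 <= 0 *)
  apply le_n.  (* le_n : forall n, n <= n *)
Qed.
```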

We experimented with the apply and auto tactics by proving several things we've proved already, but much faster.

It was observed that one cannot use auto with nat_ind: nat_ind is a higher-order function and is thus rejected by the auto tactic.

We'll talk about induction next time.

Check out the exciting things we proved today.

Read Section 8.2.1 in the reference manual. This is about the simple tactic exact. Ignore eexact; it's unpopular.

Read Section 8.3.5 in the reference manual. This section discusses the intro tactic, which we have used before. You will see that, like many other tactics, it has a number of variations. Just skim this section.

Read Section 8.3.6 in the reference manual. This section is about the apply tactic. Just focus on the apply tactic; we'll cover eapply later.

Read Section 8.7.1 in the reference manual. This section is about the induction tactic. Just skim it---there's no need to dwell on every possible variation of the tactic.

Read Section 8.13.2 in the reference manual.

Read Section 11.1 in the reference manual. This section is about defining your own notations. Don't bother if you're not that interested in notations.

#### 2008 Feb 18

Today we discussed Coq's type system a bit more. In particular, Coq has both dependent types and universal types but uses the keyword 'forall' for both. Also, 'forall' is not the universal quantification of classical logic, but the more restrictive proof-building function of constructive logic. Typing of the case statement in Coq is not as restrictive as in ML: the type of the case can depend on the value matched against.
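
For example (our own toy definition), the return type of a match can mention the value being matched:

```coq
(* The type of the result depends on b itself:
   bool in the true branch, nat in the false branch.
   This would be ill-typed in ML. *)
Definition dep (b : bool) : if b then bool else nat :=
  match b return (if b then bool else nat) with
  | true => false
  | false => 0
  end.
```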

Ben pointed out that you can use utf8 characters for funky math symbols if you want to. There is a small file of already-mapped patterns that you can load. It is not in the default load path of Coq, so you must either add it to the path or specify the full path name of the file. If you've used a godi installation, the file is at /opt/godi/lib/coq/ide/utf8.

Ben complained because he could not map the symbol '=>' to the symbol he wanted. Mulhern speculated that this was because it was not part of an inductively defined type. This was wrong: consider that 'forall' pops up as a utf8 symbol even though it is part of the CoC and not an inductive type. These notations are just a pattern-matching device. In the words of the reference manual, they bind a syntactic expression to a term. So you can match against a function definition, but not against the character string '=>'. Here's an example, not using utf8 characters.

Notation " 'lambda' x : T, Z" := (fun (x : T) => Z) (at level 200, x ident, t ident) : type_scope.

Unfortunately, you're really pushing the expressiveness of notations if you try to do the same thing for match statements.

Check out the exciting things we proved today.

Read Section 1 of "Constructive Logic for All". The first part of the paper goes into a bit of detail about the distinction between intuitionistic logic and classical logic but is not very formal.

Read Section 4 through 4.1 of the Coq Reference Manual.

#### 2008 Feb 11

Check out the exciting things we proved today.

Today we observed that inductive definitions like eq and le are really just a way to package up axioms and place them in our environment. In this case, ```le_n : forall n : nat, n <= n``` and ```le_S : forall n m : nat, n <= m -> n <= S m``` are the two axioms about le provided by the core part of the Coq standard library.
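
This is essentially how the standard library packages those two axioms: they are just the constructors of an inductive family.

```coq
(* The two "axioms" le_n and le_S are the constructors of le. *)
Inductive le (n : nat) : nat -> Prop :=
  | le_n : le n n
  | le_S : forall m : nat, le n m -> le n (S m).
```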

Also, -> and forall are part of the Calculus of Constructions (CoC), while other things that might seem equally fundamental, like eq, are not; they are instead defined as inductive types.

The Curry-Howard isomorphism is important here; in the world of logic -> is implication, but in the world of programs it is the function type.

Our intuition about implication carries over from classical propositional logic: assigning the value T to A is a lot like finding a proof of A, and assigning the value F to A is a lot like finding a proof of A -> False.
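
Under that reading, negation is just implication into False, and modus ponens is literally function application; a sketch (the names neg and mp are ours):

```coq
(* "A is false" is rendered constructively as A -> False. *)
Definition neg (A : Prop) : Prop := A -> False.

(* A proof of A -> B is a function from proofs of A to proofs
   of B, so modus ponens is just application. *)
Lemma mp : forall A B : Prop, (A -> B) -> A -> B.
Proof. intros A B f a. exact (f a). Qed.
```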

Read Section 1.3 (The Vernacular) of the Coq Reference Manual. There are five sub-sections. Your reading strategy should be as follows. Start each sub-section and read for as long as you are interested, but at least until you grasp the basic syntax of the command discussed in that subsection. Then go on to the next sub-section. You can skip the part about the Function command entirely.

Read Section 30.5 of "Types and Programming Languages" by Benjamin Pierce. Note that in Coq, the symbol Π is represented by the keyword "forall" often abbreviated as ∀. Sorry about that, you'll get used to it.

#### 2008 Feb 4

Check out the exciting things we proved today.

• Make sure that you have a working CoqIde or, at least, coqtop.
• Bring something to write with and on.