Here go any notes about what to do for an upcoming day or, if the day is over, any interesting things that came up.

Today, we enriched the types of the functions we developed last time using
the **Program** tactic and ran into some subtleties with set equality.
Check out the more richly specified functions we
developed today.
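The functions themselves are in the linked development; as a minimal sketch of what **Program** buys us (a hypothetical **double_bound** function, not one from today), consider:

```coq
Require Import Program Omega.

(* Program lets us ascribe a rich subset type to an ordinary body
   and defers the logical content to a proof obligation *)
Program Definition double_bound (n : nat) : {m : nat | n <= m} := n + n.

(* the remaining obligation is exactly the logical part: n <= n + n *)
Next Obligation.
  omega.
Qed.
```

Extraction would erase the proof component, leaving just the addition.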

Last time, we discussed writing recursive programs on sets. This time,
we'll look into more sophisticated uses of the **Program** tactic. To
prepare, review the proofs and do the exercise from last
week's talk.

This time, we learned how to write recursively defined functions on sets. Check out the fixpoint functions we built today.
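As a small sketch of the style (a hypothetical **set_sum** over the **FSetAVL** implementation, not necessarily one of today's functions):

```coq
Require Import FSets FSetAVL OrderedTypeEx.

(* finite sets of naturals, implemented as AVL trees *)
Module NatSet := FSetAVL.Make Nat_as_OT.

(* fold lets us define recursive computations over a set *)
Definition set_sum (s : NatSet.t) : nat :=
  NatSet.fold plus s 0.

Example set_sum_3 :
  set_sum (NatSet.add 1 (NatSet.add 2 NatSet.empty)) = 3.
Proof. reflexivity. Qed.
```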

Last time we looked at the **FSets** libraries. This time, we'll
examine some aspects of modules in greater depth using **FSets** as an
example. To prepare, come up with five different goals that can be solved
entirely by the **fsetdec** tactic. This file
contains a simple module and an uninspiring example.
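For a sense of what **fsetdec** can discharge, here is one goal of the requested kind (assuming the **FSetDecide.Decide** functor applied to an AVL instance over naturals):

```coq
Require Import FSets FSetAVL OrderedTypeEx FSetDecide.

Module NatSet := FSetAVL.Make Nat_as_OT.
Module Import Dec := FSetDecide.Decide NatSet.

(* a set-theoretic fact dischargeable entirely by fsetdec *)
Example in_add : forall (s : NatSet.t) (x : nat),
  NatSet.In x (NatSet.add x s).
Proof. intros. fsetdec. Qed.
```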

Today we did an overview of the **FSets** libraries.

Last time we finished our discussion about coinductive datatypes. This
time, we'll talk about modules. To prepare, complete the exercise from
last week's talk and read the documentation of the
**FSetInterface** library.

Today we developed some more difficult proofs using coinductive datatypes.

Last time we developed some infinite structures and proofs. This time, we'll examine some of the difficulties in computing over infinite structures. To prepare, do the exercises in last week's notes.

Today we talked about infinite structures and proofs.
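A minimal sketch of the kind of structures involved (streams, with a hypothetical **ones**):

```coq
(* streams: infinitely deep data, built with CoInductive *)
CoInductive stream (A : Type) : Type :=
| Cons : A -> stream A -> stream A.

(* a corecursive definition must be productive: each step
   immediately produces a constructor *)
CoFixpoint ones : stream nat := Cons nat 1 ones.

(* we can still observe any finite prefix of an infinite structure *)
Definition head {A : Type} (s : stream A) : A :=
  match s with
  | Cons x _ => x
  end.
```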

Last time, we were still working with finite structures. This time, we'll start working with infinite structures. To prepare, do the exercise from last week's talk and finish reading the chapter on infinite data structures from ML for the Working Programmer.

This time we wrote a dependently typed certified ceiling function in Coq.
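The ceiling function itself is in the linked development; as a smaller illustration of the same style (a hypothetical certified predecessor, not the ceiling function), the subset type packages the result together with a proof about it:

```coq
(* the return type records exactly how the result relates to n *)
Definition pred_cert (n : nat) (H : n <> 0) : {m : nat | n = S m}.
Proof.
  destruct n as [| m].
  - exfalso. apply H. reflexivity.  (* the n = 0 case is impossible *)
  - exists m. reflexivity.
Defined.
```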

Last time we dabbled in existence proofs. This time, we'll try constructing more ambitious certified programs. To prepare, look over the notes from last week.

This time we proved some properties using existence. Check out the exciting things we proved today.
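The shape of such proofs is simple: name a witness, then prove the property of it. A tiny sketch (a hypothetical statement, not one from today):

```coq
Theorem four_is_double : exists n : nat, n + n = 4.
Proof.
  exists 2.     (* supply the witness *)
  reflexivity.  (* prove the property of the witness *)
Qed.
```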

Last time we worked on decision procedures for decidable properties. This time, we'll work on existence. To prepare, review the notes and do the exercises from last week's talk. Also, read Section 6.2 from "Certified Programming with Dependent Types" and at least attempt Exercise 1 from Section 6.6.

Today we recognized that evenness is a decidable property, so our procedure can return a definite answer even in the case that n is not even. Check out our new decision procedures.
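One way to phrase such a procedure (a sketch that decides both n and S n at once, so the induction hypothesis is strong enough; not necessarily the version we wrote):

```coq
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

(* decide evenness for n and S n simultaneously *)
Lemma even_dec_pair :
  forall n, ({even n} + {~ even n}) * ({even (S n)} + {~ even (S n)}).
Proof.
  induction n as [| n [IH IHS]].
  - split.
    + left. constructor.
    + right. intro H. inversion H.
  - split.
    + exact IHS.
    + destruct IH as [He | Hne].
      * left. constructor. assumption.
      * right. intro H. inversion H. contradiction.
Defined.

(* the definite answer: a proof of even n, or a refutation *)
Definition even_dec (n : nat) : {even n} + {~ even n} :=
  fst (even_dec_pair n).
```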

Last time we talked about inversion and used it to construct decision procedures that prove negative things. This time, we'll talk about decision procedures for decidable properties. To prepare, review the proofs and tactics from last week's talk and do the exercise at the bottom. Read Section 10.5 in the Coq Reference Manual. Read and work through the examples in Section 4.4 in "Certified Programming with Dependent Types" and do Exercise 1 in Section 4.6.

Today we talked about the inversion tactic, and why we need it. Check out the things we proved using inversion.
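A minimal example of why inversion is needed (assuming the usual two-constructor definition of evenness):

```coq
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

(* no constructor can build a proof of even 1; inversion
   turns that case analysis into a closed proof *)
Theorem one_not_even : ~ even 1.
Proof.
  intro H. inversion H.
Qed.
```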

Last time we compared different ways of writing decision procedures. This time, we'll discuss what to do when we can't proceed by induction. To prepare, review the proofs from the two previous lectures.

Today we worked on a different way of writing our own decision procedures and compared it with the way we learned in the previous week. We also compared the performance of different approaches. Check out our decision procedures from today.

Last week we worked on building decision procedures using the notations
in **MoreSpecif**. This week we'll experiment with proofs by reflection.
To prepare, read Section 11.1 in "Certified Programming with Dependent Types" and review
the proofs from last week's talk.
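A deliberately tiny sketch of the reflective style (the goal is trivial, but the pattern is the point: a soundness lemma turns the proposition into a boolean test, and the kernel computes the test):

```coq
Require Import Arith.

Goal 2 + 2 = 4.
Proof.
  (* beq_nat_true : forall n m, beq_nat n m = true -> n = m *)
  apply beq_nat_true.
  reflexivity.  (* Coq computes beq_nat (2 + 2) 4 *)
Qed.
```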

Today we discussed writing decision procedures using annotations that enforce a distinction between two parts of the procedure: the trunk and branches, which we lay out ourselves, and the proof leaves, which we discharge using automation. The parts we lay out explicitly are exactly those preserved by the extraction facility; the parts we automate are the ones extraction elides. Check out our decision procedures from today.
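A tiny sketch of that division of labor (a hypothetical **zero_dec**, not one of today's procedures):

```coq
Definition zero_dec (n : nat) : {n = 0} + {n <> 0}.
Proof.
  (* the trunk: our case analysis and choice of branch *)
  destruct n; [left | right];
  (* the leaves: proofs that extraction will elide *)
  congruence.
Defined.
```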

Last week we wrote our first decision procedure in the CoC. This week, we'll study a particular approach to writing such decision procedures. To prepare, review the functions and proofs from last week's talk.

Today we discussed writing our own decision procedures in the CoC. Check out the exciting things we proved today.

Today we talked about equality. Check out the things we proved about equality.
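The basic move with an equality hypothesis is rewriting. A tiny sketch (restating a standard lemma):

```coq
Theorem eq_sym' : forall (A : Type) (x y : A), x = y -> y = x.
Proof.
  intros A x y H.
  rewrite H.    (* replace x by y in the goal *)
  reflexivity.
Qed.
```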

Last time we finished talking about conjunctions and disjunctions. This time we'll discuss the properties of equality and its uses. To prepare, review the tactics and definitions introduced in last week's talk.

Today we discussed OR in a little more detail and introduced some new tactics that handle goals with conjunctions and disjunctions, or really any inductive types that are isomorphic to AND and OR. Check out the exciting things we proved about OR.

Last time we discussed the logical connectives AND and OR. This time,
we'll discuss OR in a little more depth and introduce some new tactics that
help us solve goals which contain conjunctions and disjunctions a little more
rapidly than we have hitherto. To prepare, review the tactics and definitions
introduced in last week's talk. Also, locate in the Coq
standard library an inductive definition that is like AND in that a goal
specified with that inductively defined type, as
**True /\ 0 <= 0** is for AND, can be made to progress using the **split**
tactic.
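For reference, the example goal mentioned above goes through with **split** like this:

```coq
Goal True /\ 0 <= 0.
Proof.
  split.          (* one subgoal per conjunct *)
  - exact I.      (* the proof of True *)
  - apply le_n.   (* 0 <= 0 by reflexivity of <= *)
Qed.
```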

Today we discussed the logical connectives, AND and OR. Check out the interesting things we proved.

Last time we discussed automation. This time, we'll discuss some logical connectives. To prepare you should (1) review the commands and tactics from last week's talk, (2) use the minimal structural induction principle on natural numbers to write a function that takes a natural number and returns the sum of all the numbers up to and including that number, (3) state and briefly attempt to prove complete induction on natural numbers, (4) read Section 23.3 of "Types and Programming Languages" by Benjamin Pierce.
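One possible shape for exercise (2) (a hypothetical **sum_to** built directly from **nat_rec**; try it yourself before peeking):

```coq
Definition sum_to : nat -> nat :=
  nat_rec (fun _ => nat)            (* result type at every index *)
          0                         (* sum up to 0 *)
          (fun n acc => S n + acc). (* sum up to S n from sum up to n *)

Example sum_to_4 : sum_to 4 = 10.
Proof. reflexivity. Qed.
```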

This week we discussed simple automation using the **auto** tactic.
Check out the boring old proofs we proved with so much
less effort using **auto**. We also investigated a few last things about
structural induction principles, available here.
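The flavor of **auto** in one small example (assuming the usual definition of evenness): once the constructors are registered as hints, auto chains them on its own.

```coq
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

(* register the constructors as hints *)
Hint Constructors even.

Theorem even_4 : even 4.
Proof.
  auto.  (* applies even_SS twice, then even_O *)
Qed.
```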

Last week we discussed minimal and maximal induction principles. Next week we'll introduce some simple automation. To prepare, read the discussion of modus ponens on Wikipedia. Also, read Section 1.3.5 (Statement and proofs) and Chapter 7 (Proof Handling) of the Coq reference manual. Sections 7.1.6 through 7.2.6 mostly apply to interactions with coqtop, and can be skipped. Also, read Chapter 4 (Calculus of Inductive Constructions) through Section 4.1.

Today we discussed in detail the minimal and maximal induction principles corresponding to our definition of evenness. We proved each one using the other. Proving the minimal induction principle using the maximal induction principle is easy. Proving the maximal induction principle using the minimal induction principle is more challenging. Moreover the resulting proof object is large and complicated. Check out the things we proved today.
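For reference, the minimal principle is the one Coq generates automatically from the definition (assuming the usual two constructors):

```coq
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

Check even_ind.
(* even_ind
   : forall P : nat -> Prop,
     P 0 ->
     (forall n : nat, even n -> P n -> P (S (S n))) ->
     forall n : nat, even n -> P n *)
```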

Last time we developed some proofs about evenness of natural numbers. Today, we'll discuss the difference between maximal and minimal induction principles further.

To prepare you should read the proofs from last week's lecture.

Today we constructed some proofs about proofs of evenness of natural numbers.

Last week we discussed recursive types. This week we'll talk further about induction principles.

To prepare review the induction principles that we've discussed and proved in previous talks.

Today we finished discussing simple variant types and discussed recursive types, particularly the iso-recursive treatment.

Last week we discussed the dynamic semantics of terms that inhabit variant types. This week, we'll discuss the static semantics of variant types and semantics of recursive types.

To prepare, review our
proofs from last week and
grab a copy of "Types and Programming Languages" on your way to the seminar.
Also, bring a version of the **Zdouble** function, expanded to a calculus
of records and variants, and including let statements.

Today we finished our discussion of **False** and explored how handy
**False** is, since with it we can prove whatever we want. We also discussed
how easy it is to come up with other inductive definitions with which we can
prove arbitrary goals. Check out the things we proved about
**True** and **False**.
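The central fact in one line:

```coq
(* from a proof of False, any goal follows *)
Theorem false_implies_anything : forall P : Prop, False -> P.
Proof.
  intros P H.
  destruct H.  (* zero constructors, so zero cases remain *)
Qed.
```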

We also discussed variant types in considerable detail, and the relation of Coq's non-recursive, non-dependent inductive definitions to these types.

Last time we talked about structural induction principles, how they are
derived and how to use them. This time we'll look at a technical issue in
induction---the typing of **match** statements. We'll cover only typing of
**match** statements that use non-recursive, non-dependent types, but we'll move up
to recursive and dependent types later.

To prepare you should read Sections 11.4 - 11.8 of "Types and Programming Languages" by Benjamin Pierce and review last week's exercises.

Today, we discussed structural induction principles for **nat**, how to
use them and how to prove them. We also discussed the inductively defined types
**True** and **False**, their structural induction principles, and what
good these are. Check out our proofs about **nat** from
today.
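A representative proof in that style (a standard fact, restated here for illustration):

```coq
Theorem plus_n_O' : forall n : nat, n + 0 = n.
Proof.
  induction n as [| n IH].
  - reflexivity.                     (* base case: 0 + 0 = 0 *)
  - simpl. rewrite IH. reflexivity.  (* step case: uses the IH *)
Qed.
```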

Last time we worked on some simple proofs by induction. This time, we'll discuss structural induction principles in greater depth. To prepare, review the tactics and commands from the previous lecture and read their descriptions in the Coq reference manual. Also, read your choice of Well-founded induction by Stephan Falke or Notes from a lecture on Well-founded Induction by Andrew Myers.

Today we wrote a lot of simple recursive functions and realized that we are doing proofs by structural induction. Check out the things we proved today. We also discussed some of Coq's search mechanisms for locating already proven lemmas in libraries that have been loaded into the Coq top level. Finally, we briefly discussed well-founded induction and structural induction.

Last time we developed simple proofs of implications. This time, we'll begin developing proofs by induction, i.e., writing recursive programs. For today, review the proofs and commands from the previous lecture and read their descriptions in the Coq reference manual.

NO SEMINAR

Today we introduced some new tactics, **refine** and **apply**, which
allow us to build a proof in steps, rather than supply it all at once, as we
must with **exact**. We also discussed the type system of the CoC in a
little more detail, and built some simple proofs of implications, introducing a
few simple tactics as we went along. Check out the
exciting things we proved today.
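A small sketch of the contrast (a standard implication, not one of today's theorems):

```coq
(* apply works backwards from the goal one step at a time *)
Theorem imp_trans : forall P Q R : Prop,
  (P -> Q) -> (Q -> R) -> P -> R.
Proof.
  intros P Q R HPQ HQR HP.
  apply HQR. apply HPQ. exact HP.
Qed.

(* refine supplies a partial proof term, with a hole to fill later *)
Theorem imp_trans' : forall P Q R : Prop,
  (P -> Q) -> (Q -> R) -> P -> R.
Proof.
  refine (fun P Q R HPQ HQR HP => HQR _).
  exact (HPQ HP).
Qed.
```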

For today, make sure you have access to the documentation for the Coq standard libraries for version 8.2. Also, review the proofs and commands used in last week's lecture and read their descriptions in the Coq reference manual. Many of the commands have numerous variants, but it is not useful to read about every variant. Just get the gist and move on. I prefer using the PDF version of the Coq reference manual. It's hyperlinked and much easier on the eyes than the HTML version.

A quick summary of what was discussed today:

- Constructive and Classical Logic
- The Curry-Howard Isomorphism
- Dependent Types

We also interacted a good deal with Coq system and developed a few simple proofs.

I'll be conducting a seminar on formal proof development using the proof assistant Coq. Toward the end of the semester we should begin proving facts about the simply typed lambda calculus using the Penn PL club infrastructure. The seminar is tentatively scheduled for Mondays, 2:30 - 4:00, starting the 15th, not the 8th of September. If you would like to attend, but find that this is a conflict, please let me know and I'll look into changing the time.

To prove a theorem in Coq is to generate a proof object, evidence for the fact that we are proving. A proof object is a program (see the Wikipedia article on the Curry-Howard correspondence for why this is so). What makes proving theorems different from writing programs in a more typical language is that Coq is based on the Calculus of Co-Inductive Constructions, an extremely flexible and powerful type system.

We will be proving all sorts of facts during the meetings. I don't present cookbook solutions for the proofs; you will have to work out your proofs using the skills and facts I've taught. This can sometimes take a while, which is why the meetings are an hour and a half long. You should bring a laptop with Coq 8.2 installed.

You should be able to do the following steps before the seminar begins:

- Open CoqIDE
- Select Queries -> Print
- In the box provided, enter "nat"
- See the result of the command and not an error message

I will begin at a very basic level which should be accessible to undergraduate students. At least some experience in programming in ML or Haskell and a basic knowledge of Peano arithmetic would be useful but is not absolutely necessary.

This seminar is ungraded and free! You can quit any time you decide it's too hard, not terribly interesting or merely not the best use of your time and I won't be in the least offended.

Please contact me if you are interested or have any questions.