Functional Languages and Structural Induction


Overview

Structural induction is a technique for proving properties of recursively defined data structures (such as lists and trees), and properties of the operations defined on those structures.

Structural induction is especially useful when the operation is defined in an applicative (functional) language. Therefore, we will start with a brief review of functional languages and some of their important features.

Functional Languages

Examples of functional languages (also called applicative languages) include Lisp, Scheme, ML, and Haskell.

In general, (pure) functional languages have the following characteristics: there are no side effects (a program is just a set of function definitions plus an expression to be evaluated), and therefore there is referential transparency (an expression always evaluates to the same value, so a name bound to a value can be replaced by that value wherever it appears). Most functional languages have (most of) the following features: recursion rather than iteration as the primary control construct, higher-order functions (functions are first-class values), anonymous functions, and pattern matching.

Syntax

In this section we present the syntax that we'll use for the remainder of this set of notes, including our discussion of structural induction. This is essentially the syntax of the extended ISWIM language used by Burstall in his paper on structural induction.

Function definitions

We'll assume that a slightly different syntax is used to define non-recursive and recursive functions:

Non-recursive:
let <fn-name> ( <args> ) = <expression>
Recursive:
let rec <fn-name> ( <args> ) = <expression>
A non-recursive function f may not include calls to f in the expression that defines the function body; a recursive function may.
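For example (square and length are illustrative functions of our own choosing, assuming the usual list operations isEmpty and tail):

   let square(x) = x * x

   let rec length(L) =
      if isEmpty(L) then 0
      else 1 + length(tail(L))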

Other uses of "let"

The keyword let can also be used to bind names to values and to expose parts of a compound object. Here are two examples of the former:

   let PI = 3.1415 in ...

   let max = maxInList(L) in ...

The first means that whenever PI is found in ..., it can be replaced by 3.1415, and similarly for max in the second. In the second example, it is convenient to use "max" rather than the longer expression "maxInList(L)"; furthermore, because we have referential transparency, the implementation of this let can store and reuse the result of the call to maxInList, rather than repeating the call.

Note that one way to think about these uses of let is as "syntactic sugar" for (a more readable version of) function application:

   let PI = 3.1415 in ...

is actually

   (lambda((PI), ...)) (3.1415)

i.e., the body of the let is the body of an anonymous function, and the bound value is the argument to which that function is applied.
Here is an example of the use of let to expose parts of a compound object:

   let (q, r) = intDiv(x, y) in ...

where intDiv is a function that returns a pair of values (the quotient and the remainder). This use of let avoids having to define and use "selector" functions (e.g., first and second) that take apart the pair.

if-then-else

We will use the following syntax:

   if <expression> then <expression> else <expression>

case statements with pattern match

Case statements can be used instead of nested if-then-else statements. For example, the statement:

           
   if isEmpty(L) then E1
   else let h = head(L) and t = tail(L) in E2
can be expressed as:
  case(L){
     nil:       E1
     cons(h,t): E2
  }
or
  case(L){
     nil:  E1
     h::t: E2
  }
which are more readable.

data

We will assume that we have the usual integer and boolean literals and operators, as well as lists, using nil to mean the empty list, and using "::" or "cons" to add a new element to the front of a list. For example, given list L, both of the following expressions evaluate to the list that starts with a 2 and has L as its tail:

   2::L
   cons(2, L)

We will also assume that types (including recursive types) can be defined by specifying the name of the type, and a set of one or more constructors. For example:
          tree == empty              /* nullary operator */
                | leaf(int)          /* unary operator   */
                | node(tree, tree)   /* binary operator  */
In a function, if t is a tree, a case may be used to determine what the root constructor is:
           case(t) {
              empty:  ....
              leaf(i):  .....
              node(lt, rt):  ....
           }
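For example, here is an illustrative function (not one of the running examples in these notes) that counts the leaves of a tree, with one case per constructor:

   let rec countLeaves(t) =
      case(t) {
         empty:        0
         leaf(i):      1
         node(lt, rt): countLeaves(lt) + countLeaves(rt)
      }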

Examples

Below are two examples of short functions written in our functional language. The first is a function that concatenates two lists:

   let rec concat(L1, L2) =
      case(L1) {
         nil:  L2
         x::L: x::concat(L,L2)
      }
To see how this function works, let's consider what happens if we call concat on the lists a::b::nil and c::nil. Here's the sequence of "reductions"; each time we replace a call to concat with its result:

   concat(a::b::nil, c::nil)
   => a::concat(b::nil, c::nil)
   => a::b::concat(nil, c::nil)
   => a::b::c::nil

Our second example function reverses the items in a list:
   let rec reverse(L) =
      case(L) {
         nil:    nil
         x::L: concat(reverse(L), x::nil)  
      }
Note that the variable L is used twice in this code: once as the formal parameter of reverse, and again in the pattern x::L. This may make the code a bit confusing (i.e., it is perhaps not ideal stylistically), but it works fine. Each <pattern>:<expression> pair forms its own scope, and the variables used in the pattern are bound to new values within that scope. So the formal parameter L represents the whole list, but the L used in the last line of the code represents just the tail of that list (and it is that tail that is passed to the recursive call).

Here's a trace of the call reverse(a::b::c::nil):

   reverse(a::b::c::nil)
   => concat(reverse(b::c::nil), a::nil)
   => concat(concat(reverse(c::nil), b::nil), a::nil)
   => concat(concat(concat(reverse(nil), c::nil), b::nil), a::nil)
   => concat(concat(concat(nil, c::nil), b::nil), a::nil)
   => concat(concat(c::nil, b::nil), a::nil)
   => concat(c::b::nil, a::nil)
   => c::b::a::nil


TEST YOURSELF #1

Question 1.

Write a function member that returns true iff a given item is in a given list.

Question 2.

Write a function union that returns the "union" of two lists (a list that includes every element in either list, but with no duplicates).

solution


Accumulating parameters

The version of reverse given above is not very efficient. For a list of length N, it requires N applications of concat, with first arguments of length N-1, N-2, ..., 1, 0 (and second arguments of length 1). The time for concat is linear in the length of its first argument, so the total work is proportional to (N-1) + (N-2) + ... + 1 + 0 = N(N-1)/2, and the time for reverse is O(N²).

We can write a better version of reverse that is linear in the length of the given list by using a trick called accumulating parameters. The idea is to use an extra parameter that "accumulates" the answer. Here's the new code:

      let reverse(L) = rev2(L,nil)  // 2nd param is answer so far

      let rec rev2(L,A) =
         case(L) {
            nil:  A   // when all of the list is processed the answer is A
            x::L: rev2(L, x::A)
         }
Now let's see what happens when we call this new version of reverse. If the initial call is reverse(a::b::c::nil), we get:

   reverse(a::b::c::nil)
   => rev2(a::b::c::nil, nil)
   => rev2(b::c::nil, a::nil)
   => rev2(c::nil, b::a::nil)
   => rev2(nil, c::b::a::nil)
   => c::b::a::nil

And the list is reversed in linear time!

Higher-order functions

As mentioned in the overview, functional languages usually support higher-order functions; i.e., they treat functions as first-class objects that can be created, passed as parameters, and returned as function results.

One common use of higher-order functions is to write a function with a function parameter that is applied to all elements of some data structure; i.e., rather than writing one function for each operation you want to perform, you write a single function, and call it many times, each time passing a different operation.

For example, suppose I want a function that adds 1 to every item in a list of integers, and I want a function that converts every lower-case string in a list to upper case. I could write the following two functions (that are specific to those two tasks):

    let rec addToEach(L) =
      case(L) {
        nil:     nil
        x::Tail: (x+1)::addToEach(Tail)
      }

    let rec lCaseToUcase(L) =
      case(L) {
        nil:     nil
        x::Tail: uc(x)::lCaseToUcase(Tail)
      }
Or I could write a single function:
    let rec map(L, f) =
      case(L) {
        nil:     nil
        x::Tail: f(x)::map(Tail, f)
      }
and use it to define the other two:
    let addToEach(L) = map(L, succ)
    let lCaseToUcase(L) = map(L, uc)
Functional languages usually also support anonymous functions; for example, if there is no "succ" function, we could write:
    let addToEach(L) = map(L, lambda((x), x+1))

And anonymous functions can be used to create new functions dynamically; for example, I can create a function named compose that composes two given functions (the first a function of one argument, and the second a function of two arguments):

    let compose(f, g) = lambda((x,y), f(g(x,y)))
Now I can use compose, for example, as follows, to define a function overlap that, when applied to two lists, neither of which contains duplicates, returns true iff the intersection of the two lists is non-empty:
    let rec hasDups(L) = ...
    let rec concat(L1, L2) = ...
    let overlap = compose(hasDups, concat) in overlap((1::2::3::nil), (4::3::2::5::nil))
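To see why this definition of overlap works, here is roughly how the call above unfolds (the intermediate list shown is simply the concatenation of the two argument lists):

    overlap((1::2::3::nil), (4::3::2::5::nil))
    => hasDups(concat((1::2::3::nil), (4::3::2::5::nil)))
    => hasDups(1::2::3::4::3::2::5::nil)
    => true      // the concatenation has duplicates (2 and 3), so the lists overlap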

It is interesting to think about the types of these functions:
hasDups: list → boolean
concat: (list X list) → list
compose: ((α1 → α2) X ((α3 X α4) → α1)) → ((α3 X α4) → α2)
overlap: (list X list) → boolean

Note that the type of overlap is a new one (different from the types of all of the previously defined functions)!

Another common example of the use of higher-order functions is the reduce function, which converts a list of values to a single value. (In the Burstall paper, it is called lit.) Here's the definition:

    let rec reduce(L, f, b) =
        // L is a list,
        // f is a binary function, and
        // b is the "base value", to be used when f is applied to the
        //   last element of the list
      case(L) {
        nil:     b
        x::Tail: f(x, reduce(Tail, f, b))
      }
To understand reduce, think of f as an operator (e.g., +), and think of reduce as putting that operator between each pair of adjacent list elements, with the base value b at the end: x1 f x2 f ... f xn f b. For example, if the list is 1::2::3::4::nil, the operator is +, and b is 0, then we'd have 1 + 2 + 3 + 4 + 0, which is the same evaluated right-to-left as left-to-right, namely 10.

In fact, f needs to be a function (essentially, the prefix version of the operator you want) rather than an infix operator symbol; an example is given below.
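For example, here is one way to use reduce to sum a list of integers (the names plus and sumList are just for illustration):

    let plus(x, y) = x + y
    let sumList(L) = reduce(L, plus, 0)

or, using an anonymous function:

    let sumList(L) = reduce(L, lambda((x,y), x+y), 0)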


TEST YOURSELF #2

Question 1.

Write a function has0 that has one list parameter L, and that uses reduce to determine whether the value zero is in L. (Note that the function you pass as the second argument to reduce needs to be of type: (int X boolean) → boolean.)

Question 2.

Write a function member that has two parameters, an item x and a list L, and that uses reduce to determine whether x is in L. (Hint: Make use of an anonymous function in defining the function to be passed to reduce as its second argument.)

solution


Parameter-passing modes

Recall that for lambda expressions there are 2 standard reduction orders: normal order reduction (NOR), which always reduces the leftmost-outermost redex first, and applicative order reduction (AOR), which always reduces the leftmost-innermost redex first (i.e., evaluates arguments before substituting them).

As we mentioned when we defined NOR and AOR, they correspond to call-by-name and call-by-value parameter passing in functional languages. (Note: Although procedural languages often allow programmers to specify modes for each of a function's formal parameters--in particular, value vs reference--functional languages do not usually provide a similar mechanism for a choice between name and value parameters. Instead, that is either specified by the definition of the language itself, or there may be a particular implementation--a compiler or interpreter--that implements the language using call-by-name semantics, and another that implements it using call-by-value semantics.)

It is worth noting that there are advantages and disadvantages to both approaches:

  1. If evaluation of an actual parameter causes non-termination, and that parameter is not used in a particular instance of a function call, then call-by-name semantics leads to a terminating program, while call-by-value does not.
  2. If evaluation of an actual parameter is very expensive, and that parameter is not used in a particular instance of a function call, then call-by-name semantics is more efficient than call-by-value.
  3. If evaluation of an actual parameter is very expensive, and that parameter is used many times in a particular instance of a function call, then call-by-name semantics is less efficient than call-by-value.
  4. The implementation of call-by-name is usually more complicated than the implementation of call-by-value.
In a functional language with no side effects, points 3 and 4 can be addressed in the implementation. Problem 3 can be solved by using call-by-need instead of call-by-name: call-by-need evaluates an actual parameter the first time it is used, then reuses that value rather than recomputing it again and again. Problem 4 can be solved whenever it can be determined (by a static analysis called strictness analysis) that a particular parameter will always be evaluated (that the function is strict in that parameter). In that case, the parameter can be passed by value without affecting the behavior of the program (thus avoiding the complications of implementing call-by-name).

Here are some examples to illustrate the idea of strictness:

  1. let f(x, y) = if x>0 then x else y
  2. let f(x, y) = x + y
In example 1, function f is strict in x (because the condition x > 0 is always evaluated), but f is not strict in y. In example 2, function f is strict in both x and y (because the plus operator requires that both arguments be evaluated).
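To connect this back to points 1 and 2 above, assume a divergent function of our own for illustration:

   let rec loop(n) = loop(n+1)

The call f(1, loop(0)) (using the f from example 1) returns 1 under call-by-name, since y is never needed, but fails to terminate under call-by-value, since loop(0) is evaluated before f's body is entered. It would therefore be safe for an implementation to pass x by value (f is strict in x), but not y.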

Lazy constructors and infinite objects

Call-by-name and call-by-value are sometimes referred to as lazy and eager evaluation. Those terms are also used for constructors (like cons for lists, or leaf and node for the trees defined above). An eager constructor requires that all of its arguments be fully evaluated, while a lazy constructor does not. One advantage of a lazy constructor is that we can make use of (conceptually) infinite objects. For example, here's a function that creates an infinite list:

   let rec from(x) = x::from(x+1)

If cons (::) is not lazy, then any application of from diverges. But if cons is lazy, then from(x) simply creates a list whose head is x and whose tail is from(x+1), which remains unevaluated until it is used.

Here's an example of a use of the from function: a function sum can be used to add up the first few values of the infinite list from(1) (say, the values 1 through 3). If we assume that function parameters are passed by value, but constructors are lazy, then such a program works as follows: the call from(1) builds a single list cell whose head is 1 and whose tail is the unevaluated call from(2); each later cell of the list is built only when sum actually asks for the tail, so only as much of the (conceptually infinite) list is ever built as is actually needed.

Below is another example that uses an infinite list created by function from. The function sumPrimes sums the first n prime numbers, making use of the function sum mentioned above, as well as two auxiliary functions, filter(n,L) and sieve(L); a sketch of possible definitions follows.
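Here is a minimal sketch of what sum, filter, sieve, and sumPrimes might look like (the exact signatures, and the use of mod, head, and tail, are assumptions made for illustration):

      let rec sum(n, L) =                // add up the first n elements of L
         if n == 0 then 0
         else head(L) + sum(n-1, tail(L))

      let rec filter(n, L) =             // remove from L every multiple of n
         case(L) {
            nil:  nil
            x::T: if (x mod n) == 0 then filter(n, T)
                  else x::filter(n, T)
         }

      let rec sieve(L) =                 // keep only those elements of L that are
         case(L) {                       // not multiples of earlier elements
            nil:  nil
            x::T: x::sieve(filter(x, T))
         }

      let sumPrimes(n) = sum(n, sieve(from(2)))

With these definitions, a call like sumPrimes(3) demands only a finite prefix of from(2), even though the list is conceptually infinite.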


TEST YOURSELF #3

Trace the call sumPrimes(3).

solution


Structural Induction

Recall that structural induction is a technique for proving properties involving recursive data structures like lists and trees. I think that the best way to think about structural induction is as a proof by induction on the height of the data structure's abstract-syntax tree. When you think about it that way, it is very similar to standard proofs by induction, which involve showing that some property P holds for all values of n greater than or equal to zero; i.e.: ∀ n ≥ 0, P(n).

To prove this by induction we must:

  1. Prove the base case: show that P(0) holds.
  2. Prove the inductive case: show that
      (∀ v in [0..n], P(v)) ⇒ P(n+1)
    i.e., show that if the property holds for all values up to n, then it must hold for n+1, too. Since we've shown that it holds for n = 0, this means that it holds for n = 1, which means that it holds for n = 2, etc.

For structural induction, we want to show that some property P holds for every possible instance of our recursive data structure. We use the same basic approach:

  1. Prove one or more base cases, one for each possible non-recursive instance of the data structure (e.g., for an empty list, or for a leaf node of a tree). These are the cases for structures whose abstract-syntax trees have nullary operators at their roots (are of height 0), or have non-recursive operators at their roots (these will generally be of height 1).
  2. Prove that if the property holds for all data structures whose abstract-syntax trees are of height less than or equal to n, then it must hold for all data structures whose abstract-syntax trees are of height n+1.
Step two (the inductive step) involves one case for each recursive operator; i.e., we must show that the induction holds for all possible ways to build up a "taller" data structure by combining shorter ones.

Some example proofs

In the proof of the Church-Rosser Theorem, we saw one example of structural induction: we used it to show that the "walk" relation on lambda expressions has the diamond property. The base case involved non-recursive var expressions, and the inductive cases involved expressions built using apply or lambda.

We can also use structural induction to prove properties of code that operates on a recursive data structure. For example, here is the definition of the concat function again (which operates on lists):

   let rec concat(L1, L2) =
      case(L1) {
         nil:  L2
         x::L: x::concat(L,L2)
      }

Example 1: Prove that for all lists L: concat(L, nil) = L
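Here is a sketch of how such a proof goes, using the two cases of the definition of concat:

Base case (L = nil): concat(nil, nil) = nil, by the nil case of concat.

Inductive case (L = x::T): assume (the induction hypothesis) that concat(T, nil) = T. Then

   concat(x::T, nil) = x::concat(T, nil)   (by the x::L case of concat)
                     = x::T                (by the induction hypothesis)

so the property holds for x::T as well.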

Example 2: Prove that for all lists L1, L2, L3: concat(L1, concat(L2, L3)) = concat(concat(L1, L2), L3)
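Again, a sketch of the argument, by structural induction on L1:

Base case (L1 = nil): both sides reduce to concat(L2, L3), since concat(nil, X) = X for any list X.

Inductive case (L1 = x::T): assume concat(T, concat(L2, L3)) = concat(concat(T, L2), L3). Then

   concat(x::T, concat(L2, L3)) = x::concat(T, concat(L2, L3))     (def. of concat)
                                = x::concat(concat(T, L2), L3)     (induction hypothesis)
                                = concat(x::concat(T, L2), L3)     (def. of concat, read right-to-left)
                                = concat(concat(x::T, L2), L3)     (def. of concat, read right-to-left)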


TEST YOURSELF #4

Given the following definition of reverse:

   let rec reverse(L) =
      case(L) {
         nil:     nil
         x::Tail: concat(reverse(Tail), x::nil)  
      }
Prove, using structural induction on L1, that for all lists L1, L2: reverse(concat(L1, L2)) = concat(reverse(L2), reverse(L1))

solution


OK, now how about proving something useful, and for which the proof requires some work (discovering and proving an additional lemma). Recall that we made reversing a list more efficient by using an accumulating parameter:

      let reverse(L) = rev2(L,nil)  // 2nd param is answer so far

      let rec rev2(L,A) =
         case(L) {
            nil:     A   // when all of the list is processed the answer is A
            x::Tail: rev2(Tail, x::A)
         }
How do we know that this definition is correct; i.e., that it always produces the same results as the original definition of reverse? We don't until we prove that, using structural induction.

Example 3: Prove that for all lists L: reverse(L) = rev2(L, nil)
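A sketch of one way the proof can go, using the following lemma relating rev2 to the original reverse and concat:

   Lemma: for all lists L and A: rev2(L, A) = concat(reverse(L), A)

The lemma is proved by structural induction on L.

Base case (L = nil): rev2(nil, A) = A, and concat(reverse(nil), A) = concat(nil, A) = A.

Inductive case (L = x::T): assume rev2(T, A') = concat(reverse(T), A') for every list A'. Then

   rev2(x::T, A) = rev2(T, x::A)                                     (def. of rev2)
                 = concat(reverse(T), x::A)                          (induction hypothesis)

   concat(reverse(x::T), A) = concat(concat(reverse(T), x::nil), A)  (def. of reverse)
                            = concat(reverse(T), concat(x::nil, A))  (Example 2: associativity)
                            = concat(reverse(T), x::A)               (def. of concat, twice)

so the two sides are equal. Given the lemma, reverse(L) = concat(reverse(L), nil) (by Example 1) = rev2(L, nil) (by the lemma).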

Double Induction

Both example proof 2 and the lemma we used for example proof 3 involved more than one list, yet we were able to do the proofs using induction on just one of those lists. That worked because only function concat has two list parameters, and it only "dissects" (does a case on) the first one.

One way to think about what we did (and what we need to do in general to prove properties involving two lists) is to think of the "proof space" as a graph:

                |
              2 |
  length(L2)    |
              1 |
                |
              0 +----------------
                0  1  2  3 ...
                   
                   length(L1)
In general, our goal will be to show that some property P holds for all lengths greater than or equal to zero for each list; i.e., P(L1, L2) holds at every point on the graph. To do this, we need a proof that:
  1. shows that P holds for some specific set of points, and
  2. shows that P(x,y) ⇒ P(x', y') for some values of x' and y' that allow us to cover all points in the graph starting from the set covered by step 1.
For our example lemma, step 1 (the base case) showed that the property of interest held when length(L1) was 0, and length(L2) was arbitrary. That corresponds to the (infinite) set of points along the y axis. Step 2 (the induction step) showed that P(n, m) ⇒ P(n+1, m); i.e., if the property holds when length(L1) = n and length(L2) is arbitrary, then it must also hold when length(L1) = n+1 and length(L2) is arbitrary. That lets us take one "sideways" step in the graph. Since the base case covered the entire y axis, sideways steps are sufficient to cover the whole graph.

For some code involving multiple data structures, we'll need several base cases. For example, we might show that the property holds when both lists are empty, and when one is empty and the other is non-empty. That corresponds to showing that the property holds along both axes of the graph. Then the induction step can show:

   P(n, m) ⇒ P(n+1, m+1)

i.e., that we can take one "diagonal" step (to the right and up) from any point. Since the base cases covered both the x and the y axes, diagonal steps allow us to cover the whole graph.

To illustrate this approach, consider the following eq function, which tests whether its two list parameters contain the same items in the same order:

    let rec eq(L1, L2) =
        case (L1) {
          nil: case (L2) {
                 nil:  true
                 y::L: false
               }
          x::Tail1: case (L2) {
                      nil:      false
                      y::Tail2: if (x == y) then eq(Tail1, Tail2)
                                else false
                    }
        }
Suppose we want to prove that eq(L1, L2) = eq(L2, L1). If we try using induction on just L1, we can't even prove the base case. We can, however, do the proof using the three base cases and the one induction step described above.
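A sketch of the case analysis (writing x::Tail1 and y::Tail2 for non-empty L1 and L2, as in the code):

Base case 1 (L1 = nil, L2 = nil): eq(nil, nil) = true = eq(nil, nil).

Base case 2 (L1 = nil, L2 = y::Tail2): eq(nil, y::Tail2) = false, and eq(y::Tail2, nil) = false.

Base case 3 (L1 = x::Tail1, L2 = nil): symmetric to base case 2; both calls return false.

Induction step: assume eq(Tail1, Tail2) = eq(Tail2, Tail1). For L1 = x::Tail1 and L2 = y::Tail2:

   eq(x::Tail1, y::Tail2) = if (x == y) then eq(Tail1, Tail2) else false
   eq(y::Tail2, x::Tail1) = if (y == x) then eq(Tail2, Tail1) else false

Since x == y exactly when y == x, and eq(Tail1, Tail2) = eq(Tail2, Tail1) by the induction hypothesis, the two results are equal. This is exactly the "diagonal" step: it takes us from the point (length(Tail1), length(Tail2)) to the point (length(Tail1)+1, length(Tail2)+1).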

Burstall's Paper

For some other examples of structural induction, including a proof of correctness for a very simple compiler (one that just compiles individual expressions), see Proving properties of programs by structural induction by R. M. Burstall. Below is an outline of how Burstall's compiler works, and what the proof of correctness involves.

The proof involves definitions of the following:

  1. The compiler itself: a function named comp of type expression → mprogram, where an mprogram is a list of instructions.
  2. The intended semantics of expressions, defined by a function called val of type expression → value (Burstall uses the term "item" instead of "value"). Function val is defined using four auxiliary functions: valueof, which converts literals to their values; varof and varvalue, which together convert an identifier to its value; and funcof, which converts an operator to the corresponding function.
  3. The semantics of the compiler's target machine defined by a function called mpval of type (mprogram X initial-stack) → final-stack. Function mpval is defined using auxiliary function do, which defines how to execute one instruction.
Given these definitions, the proof shows that compiling an expression and executing the resulting list of instructions on the target machine yields the result defined by the intended semantics (where "yields" means "leaves at top-of-stack"); i.e., the proof shows that the following diagram commutes:
                     comp
               e ------------------> mp
               |                     |
        val    |                     | mpval
               |                     |
               v                     v
               
               v ------------------> s'
                     load
i.e., that the value produced by compiling (applying function comp) and executing (applying function mpval), which corresponds to the path
                ------------------->
                                    |
                                    |
                                    |
                                    v
produces the same result as the intended semantics (applying function val, and pushing the result onto the stack), which corresponds to the path
             |
             |
             |
             v
              --------------------->
The proof is by structural induction on the expression.
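To make the setup concrete, here is a small sketch in the notation of these notes (not Burstall's actual definitions, which handle literals, identifiers, and arbitrary operators) of a compiler for expressions built from integer literals and a single + operator; exp, instr, pushconst, and add are names made up for this illustration:

      exp   == lit(int)                /* integer literal                 */
             | plus(exp, exp)          /* addition expression             */

      instr == pushconst(int)          /* push a constant on the stack    */
             | add                     /* pop two values, push their sum  */

      let rec comp(e) =                // exp -> mprogram (a list of instructions)
         case(e) {
            lit(i):       pushconst(i)::nil
            plus(e1,e2):  concat(comp(e1), concat(comp(e2), add::nil))
         }

For this toy version, the commuting-diagram property says that, for every expression e and every stack s, running comp(e) on s leaves val(e) on top of s; the proof is by structural induction on e, with a base case for lit and an inductive case for plus (which uses the induction hypothesis twice, once for each subexpression).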

Summary

In this set of notes we have covered the following topics:

Functional Languages

Structural Induction