Recursion


Contents

    Introduction
    How Recursion Really Works
    Recursion vs Iteration
    Using Mathematical Induction to Prove the Correctness of Recursive Code
    Summary


Introduction

Recursion is a programming technique in which a method calls itself in order to solve a problem.

A method is recursive if it can call itself, either directly (the method contains a call to itself) or indirectly (the method calls a second method, which in turn calls the first one, perhaps through a longer chain of calls).
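For example (a sketch; the method names countDown, methodA, and methodB are made up for illustration and are not taken from these notes):

    // Direct recursion: the method calls itself.
    public static void countDown( int k ) {
        if (k == 0) return;
        System.out.println(k);
        countDown( k - 1 );
    }

    // Indirect recursion: methodA calls methodB, which calls methodA again.
    public static void methodA( int k ) {
        if (k == 0) return;
        methodB( k - 1 );
    }

    public static void methodB( int k ) {
        System.out.println(k);
        methodA( k );
    }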

You might wonder about several issues: How can a method that calls itself ever stop? What actually happens at runtime when a recursive call is made? And when is recursion a better choice than iteration? These questions are addressed in the sections below.

One way to think about recursion: each call to a recursive method creates a new "clone" of the method. Every clone runs the same code but has its own copy of the parameter. When a clone reaches a recursive call, it pauses at that point (its "marker") and a new clone starts running; when the new clone returns, the paused clone resumes at the line just after its marker.

Here's an example of a simple recursive method:
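The code below is a sketch consistent with the discussion that follows; the line-number comments match the line references used in the text (the exact formatting of the original method is an assumption):

    public static void printInt( int k ) {
        if (k == 0) {              // line 1: base-case test
            return;                // line 2
        }                          // line 3
        System.out.println( k );   // line 4: print k
        printInt( k - 1 );         // line 5: the recursive call (the clone's "marker")
        return;                    // line 6: the line after the marker
    }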

If the call printInt(2) is made, three "clones" are created (counting the original call): one with k == 2, one with k == 1, and one with k == 0.

The original call causes 2 to be output, and then a recursive call is made, creating a clone with k == 1. That clone executes line 1: the if condition is false; line 4: prints 1; and line 5: makes another recursive call, creating a clone with k == 0. That clone just returns (goes away) because the "if" condition is true. The previous clone executes line 6 (the line after its "marker") then returns, and similarly for the original clone.

Now let's think about what we have to do to make sure that recursive methods work correctly. First, consider the following recursive method:
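The original method is not shown here; the sketch below (named badPrint, as in the text, with an assumed body) illustrates the problem being described: there is no condition under which the method returns without making another recursive call.

    public static void badPrint( int k ) {
        System.out.println( k );
        badPrint( k - 1 );   // always executed -- the recursion never stops
    }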

Note that a runtime error will occur when the call badPrint(2) is made (in particular, an error message like "java.lang.StackOverflowError" will be printed, and the program will stop). This is because there is no code that prevents the recursive call from being made again and again and .... and eventually the program runs out of memory (to store all the clones). This is an example of an infinite recursion. It inspires:

*** RECURSION RULE #1 ***

Every recursive method must have a base case -- a condition under which it returns without making a recursive call -- so that the recursion can stop.

Here's another example; this version does have a base case, but the call badPrint2(2) will still cause an infinite recursion:
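Again the original code is not shown; the sketch below is one plausible version of badPrint2 (its body is an assumption). It has a base case (k == 0), but the recursive call moves away from that case, so the call badPrint2(2) never reaches it:

    public static void badPrint2( int k ) {
        if (k == 0) return;      // base case -- but it is never reached for k > 0
        System.out.println( k );
        badPrint2( k + 1 );      // moves away from the base case: 2, 3, 4, ...
    }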

This inspires:
*** RECURSION RULE #2 ***

Every recursive method must make progress toward its base case -- for example, by calling itself with an argument that is closer to the base-case value -- so that the base case is eventually reached.


TEST YOURSELF #1

Consider the method printInt, repeated below.
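The sketch of printInt reconstructed in the introduction is repeated here for convenience:

    public static void printInt( int k ) {
        if (k == 0) {
            return;
        }
        System.out.println( k );
        printInt( k - 1 );
        return;
    }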

Does it obey recursion rules 1 and 2? Are there calls that will lead to an infinite recursion? If yes, how could it be fixed?

solution


How Recursion Really Works

This is how method calls (recursive and non-recursive) really work: every time a method is called, an activation record (AR) is pushed onto the runtime stack. The AR holds, among other things, the return address -- the place in the caller's code where execution will resume when the called method returns. When the method returns, its AR is popped off the stack.

Example (no recursion):
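The program below is a sketch consistent with the line numbers used in the discussion that follows (the class name and exact layout are assumptions): printChar occupies lines 1 - 3, and the body of main occupies lines 6 - 9.

    public class StackExample {

        public static void printChar( char ch ) {    // line 1
            System.out.println(ch);                  // line 2
        }                                            // line 3
                                                     // line 4 (blank)
        public static void main(String[] args) {     // line 5
            char c = 'a';                            // line 6
            printChar(c);                            // line 7: first call
            c = 'b';                                 // line 8: assignment
            printChar(c);                            // line 9: second call
        }
    }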

Consider the runtime stack of activation records, first as it would be just before the first call to printChar (just after line 6), and then as it would be while printChar is executing (lines 1 - 3). Note that the return address stored in printChar's AR is the place in main's code where execution will resume when printChar returns.

When the first call to printChar returns, the top activation record is popped from the stack and the main method begins executing again at line 8. After executing the assignment at line 8, a second call to printChar is made (line 9), and a new AR for printChar is pushed.

The two calls to printChar return to different places in main because different return addresses are stored in the ARs that are pushed onto the stack when the calls are made.

Now let's consider recursive calls:
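A sketch of the recursive program being discussed (the class name is an assumption; the line-number comments match the references in this section, and differ from the numbering used in the introduction's sketch):

    public class StackExample2 {

        public static void printInt( int k ) {    // line 1
            if (k == 0) return;                   // line 2: base case
            System.out.println(k);                // line 3: print, then recurse
            printInt( k - 1 );                    // line 4: recursive call
        }                                         // line 5: nothing left to do after the call

        public static void main(String[] args) {
            printInt(2);
        }
    }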

Trace the runtime stack as this program executes: each call to printInt pushes a new activation record, with its own value of k and its own return address.

Note that all of the recursive calls have the same return address -- after a recursive call returns, the previous call to printInt starts executing again at line 5. In this case, there is nothing more to do at that point, so the method would, in turn, return.

Note also that each call to printInt with k > 0 prints the current value of k before making its recursive call, so the output of the program is: 2 1.

Now consider a slightly different version of the printInt method:
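A sketch of the modified version (the only change from the version above is that the print statement now comes after the recursive call):

    public static void printInt( int k ) {    // line 1
        if (k == 0) return;                   // line 2: base case
        printInt( k - 1 );                    // line 3: recursive call comes first
        System.out.println(k);                // line 4: print happens after the call returns
    }                                         // line 5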

Now what is printed as a result of the call printInt(2)? Because the print statement comes after the recursive call, it is not executed until the recursive call finishes (i.e., printInt's activation record will have line 4 -- the print statement -- as its return address, so that line will be executed only after the recursive call finishes). In this case, that means that the output is: 1 2 (instead of 2 1, as it was when the print statement came before the recursive call). This leads to the following insight:

*** UNDERSTANDING RECURSION ***

Code that comes before a recursive call executes before that call is made (on the way "down"); code that comes after a recursive call executes only after the call returns (on the way "back up").


TEST YOURSELF #2
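The printTwoInts method referred to in this exercise is not reproduced in these notes. The sketch below is one plausible version (its body is an assumption, chosen to match the method's name): it prints its argument once before and once after the recursive call.

    public static void printTwoInts( int k ) {
        if (k == 0) return;
        System.out.println("Print 1: " + k);   // before the recursive call
        printTwoInts( k - 1 );
        System.out.println("Print 2: " + k);   // after the recursive call
    }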

What is printed as a result of the call printTwoInts(3)?

solution


Recursion vs Iteration

Now let's think about when it is a good idea to use recursion and why. In many cases there will be a choice: many methods can be written either with or without using recursion.

Here are two examples where recursion makes the code a little bit clearer, though in the second case the efficiency problem makes recursion a bad choice.

Factorial

Factorial can be defined as follows:

    N! = N * (N-1) * (N-2) * ... * 2 * 1, and 0! = 1

or, recursively:

    N! = 1 if N == 0
    N! = N * (N-1)! if N > 0

(Note that factorial is undefined for negative numbers.)

The first definition leads to an iterative version of factorial:
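A sketch of the iterative version (the parameter and return types are assumptions):

    public static int factorial( int N ) {
        int result = 1;
        for (int k = N; k > 1; k--) {   // multiply N * (N-1) * ... * 2
            result *= k;
        }
        return result;                  // note: returns 1 when N == 0
    }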

The second definition leads to a recursive version, sketched below. The recursive version is shorter and mirrors the mathematical definition, but every call involves some overhead, so it is slightly less efficient than the iterative version. Because the recursive version causes an activation record to be pushed onto the runtime stack for every call, it is also more limited than the iterative version: it will fail with a "stack overflow" error for large values of N.
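A sketch of the recursive version, consistent with how it is described in the induction section below (when N == 0 it returns 1; otherwise it returns N * factorial(N-1)):

    public static int factorial( int N ) {
        if (N == 0) return 1;            // base case: 0! is 1
        return N * factorial( N - 1 );   // N! = N * (N-1)!
    }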


TEST YOURSELF #3

Question 1:
Draw the runtime stack, showing the activation records that would be pushed as a result of the call factorial(3) (using the recursive version of factorial). Just show the value of N in each AR (don't worry about the return address). Also indicate the value that is returned as the result of each call.

Question 2:
Recall that (mathematically) factorial is undefined for a negative number. What happens as a result of the call factorial(-1) for each of the two versions (iterative and recursive)? What do you think really should happen when the call factorial(-1) is made? How could you write the method to make that happen?

solution


Fibonacci

Fibonacci can be defined as follows:

    fib(1) = 1
    fib(2) = 1
    fib(N) = fib(N-1) + fib(N-2) for N > 2

Fibonacci can be programmed either using iteration or using recursion; both versions are sketched below (iterative first). Tracing the recursive version for the initial call fib(6) shows why it is so much slower than the iterative version.
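Sketches of the two versions (the method names and types are assumptions; the methods are given different names here so they can coexist in one class):

    // Iterative version: build up from the two base cases.
    public static int fibIterative( int N ) {
        if (N <= 2) return 1;
        int prev = 1, cur = 1;           // fib(1) and fib(2)
        for (int k = 3; k <= N; k++) {
            int next = prev + cur;
            prev = cur;
            cur = next;
        }
        return cur;
    }

    // Recursive version: a direct translation of the definition.
    public static int fib( int N ) {
        if (N <= 2) return 1;                  // base cases: fib(1) = fib(2) = 1
        return fib( N - 1 ) + fib( N - 2 );
    }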

Note: one reason the recursive version is so slow is that it is repeating computations. For example, fib(4) is computed twice, and fib(3) is computed 3 times.

Given that the recursive version of fibonacci is slower than the iterative version, is there a good reason for using it? The answer may be yes: because the recursive solution is so much simpler, it is likely to take much less time to write, debug, and maintain. If those costs (the cost for programming time) are more important than the cost of having a slow program, then the advantages of using the recursive solution outweigh the disadvantage, and you should use it! If the speed of the final code is of vital importance, then you should not use the recursive fibonacci.

Using Mathematical Induction to Prove the Correctness of Recursive Code

If you like math, you may enjoy thinking about how to use mathematical induction to prove that recursive code is correct. If not, feel free to skip this section.

When using induction to prove a theorem, you need to show:

  1. that the base case (usually n=0 or n=1) is true
  2. that case k implies case k+1

It is sometimes straightforward to use induction to prove that recursive code is correct. Let's consider how to do that for the recursive version of factorial. First we need to prove the base case: factorial(0) = 0! (which, by definition, is 1). The correctness of the factorial method for N=0 is obvious from the code: when N==0 it returns 1.

Now we need to show that if factorial(k) = k!, then factorial(k+1) = (k+1)!. Looking at the code we see that for N != 0, factorial(N) = (N)*factorial(N-1). So factorial(k+1) = (k+1)*factorial(k). By assumption, factorial(k) = k!, so factorial(k+1) = (k+1)*(k!). By definition, k! = k*(k-1)*(k-2)*...*3*2*1. So (k+1)*(k!) = (k+1)*k*(k-1)*(k-2)*...*3*2*1, which is (by definition) (k+1)!, and the proof is done.
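The chain of equalities in the inductive step can also be written compactly (this just restates the argument above in standard notation):

    \begin{align*}
    \mathrm{factorial}(k+1) &= (k+1) \cdot \mathrm{factorial}(k) && \text{(from the code, since } k+1 \neq 0\text{)} \\
                            &= (k+1) \cdot k!                    && \text{(by the inductive hypothesis)} \\
                            &= (k+1)!                            && \text{(by the definition of factorial)}
    \end{align*}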

Note that we've shown that the factorial method is correct for all values of N greater than or equal to zero (because our base case was N=0). We haven't shown anything about factorial for negative numbers.

Summary