UW-Madison
Computer Sciences Dept.

International Collegiate Programming Contest

Dynamic Programming

This session focuses on dynamic programming. If you are not familiar with the relevant concepts and algorithms, or you need a refresher, you can use the resources below.

Problem Set (#5)

  1. Bee
  2. Hippity Hopscotch (NOTE: make sure you print a blank line between cases)
  3. Dividing Coins
  4. Longest Palindrome (NOTE: empty strings are valid inputs to this problem!)
  5. Forming Quiz Teams
  6. The Mailbox Manufacturers Problem
  7. Party at Hali-Bula (NOTE: the end of the input description should say the last line of the file contains a single 0)

Note: All problems expect I/O to be done using standard input and standard output. Each solution must be written in a single file. If you are programming in Java, the application class must be named "Main".

Resources

Dynamic Programming

Dynamic programming is another classical programming paradigm, closely related to divide-and-conquer. Both are based on recursion: an instance of a problem is reduced to one or more simpler instances of the same problem. A divide-and-conquer approach implements this recursion in a straightforward way, whereas in dynamic programming special care is given to make sure the number of different subproblems that arise in the recursion remains small, and that each of them is solved at most once. Keeping the number of subproblems small often involves discerning some additional structure in the instances that the recursion generates. If the straightforward divide-and-conquer strategy takes too much time, you should try to find additional structure and give dynamic programming a shot.

To ensure that no subproblem is solved more than once, you can use memoization: keep track of the subproblems already solved, and before solving one, first check whether its answer has been recorded. Alternatively, you can often arrange the subproblems from simplest to most complicated and solve all of them in that order, up to the one you are interested in. Many dynamic programming solutions store subproblem answers in an array, which can be 1D, 2D, or higher-dimensional. This bottom-up approach is often slightly faster than memoization, but memoization is useful when the space of possible subproblems is large and it is difficult to predict or organize which subproblems will be needed. Either approach typically requires a lot of memory, more so than straightforward divide-and-conquer, but avoids divide-and-conquer's potential duplication of work.
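As a sketch of the two styles (using a standard illustration, not one of the session problems), here is counting the ways to climb a staircase taking steps of size 1 or 2, written both top-down with memoization and bottom-up with a 1D table; the function names are ours, chosen for illustration:

```python
from functools import lru_cache

# Top-down memoization: recurse exactly as in divide-and-conquer, but cache
# results so each subproblem is solved at most once.
@lru_cache(maxsize=None)
def ways_memo(n):
    """Number of ways to climb n stairs taking 1 or 2 steps at a time."""
    if n <= 1:
        return 1
    return ways_memo(n - 1) + ways_memo(n - 2)

# Bottom-up tabulation: solve subproblems from simplest to most complicated,
# storing each answer in an array before it is needed.
def ways_table(n):
    dp = [0] * (n + 1)
    dp[0] = 1
    if n >= 1:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

Both versions run in linear time; the naive recursion without the cache would take exponential time.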

If you're only interested in the optimal value of the objective function (rather than an actual solution that realizes that value), the code for dynamic programming is often simpler and the memory overhead can be reduced.
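For instance, the idea behind "Dividing Coins" (splitting coins into two piles so the pile sums are as close as possible) needs only the optimal value, so a 1D boolean table of reachable pile sums suffices, with no bookkeeping to reconstruct the actual piles. This is a sketch under that reading of the problem, not a full contest solution:

```python
def min_difference(coins):
    """Minimum possible difference between the sums of two piles of coins.
    Only the optimal value is computed, so a 1D reachability table over
    pile sums up to total//2 is enough -- no solution reconstruction."""
    total = sum(coins)
    reachable = [False] * (total // 2 + 1)
    reachable[0] = True
    for c in coins:
        # Iterate downward so each coin is used at most once (0/1 knapsack).
        for s in range(total // 2, c - 1, -1):
            if reachable[s - c]:
                reachable[s] = True
    best = max(s for s in range(total // 2 + 1) if reachable[s])
    return total - 2 * best
```

Tracking an actual optimal split would require either a 2D table or parent pointers, which is exactly the memory overhead the value-only version avoids.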

A classic, simple example of the difference between divide-and-conquer and dynamic programming is computing Fibonacci numbers. Naively recursing on the formula F(n) = F(n-1) + F(n-2) takes exponential time, but if we keep an array of the values F(1), F(2), F(3), and so on, the time to compute F(n) drops to linear: it takes only one addition once the previous values are known. This also has the advantage that we then know all the earlier Fibonacci numbers, so multiple queries cost only as much as computing the largest one.
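The tabulated version can be sketched as follows (using the common convention F(0) = 0, F(1) = 1, and returning the whole table so that later, smaller queries are free):

```python
def fib_table(n):
    """Compute F(0)..F(n) bottom-up: one addition per entry, linear time,
    versus the exponential time of the naive recursion."""
    f = [0] * (max(n, 1) + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f  # the full table, so every F(k) with k <= n is now known
```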

For more information on dynamic programming, and some examples, see these slides. The Wikipedia page on dynamic programming also has a good description.

Classic Dynamic Programming Problems

There are several classic problems that every well-versed ICPC contestant should be familiar with, including longest common subsequence, edit distance, the 0/1 knapsack problem, longest increasing subsequence, coin change, and matrix chain multiplication.
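Longest common subsequence is one such classic; a minimal sketch of the standard 2D table, where dp[i][j] is the LCS length of the first i characters of one string and the first j of the other:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS of the two prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop the last character of one string or the other.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

This runs in O(nm) time and space; the space can be cut to O(m) with a rolling row if only the length is needed, echoing the value-only observation above.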

 