International Collegiate Programming Contest

Dynamic Programming
This session focuses on dynamic programming.
If you are not
familiar with the relevant concepts and algorithms, or you need a refresher,
you can use the resources below.
Problem Set (#3)

Train Sorting

A Walk Through The Forest

Collecting Beepers

Prince and Princess

Winning Streak
Note: All problems expect I/O to be done using standard input and standard
output. The solution to each problem must be written in a single file.
If you are programming in Java, the application class should be called
"Main".
Dynamic Programming
Dynamic programming is another classical programming paradigm, closely
related to divide-and-conquer. Both are based on recursion: an instance
of a problem is reduced to one or more simpler instances of the same
problem. A divide-and-conquer approach implements this recursion in
a straightforward way, whereas in dynamic programming special care is
given to make sure the number of different subproblems that arise in
the recursion remains small, and that each of them is solved at most
once. Keeping the number of subproblems small often involves discerning
some additional structure in the instances that the recursion generates.
If the straightforward divide-and-conquer strategy takes too much time,
you should try to find additional structure and give dynamic programming
a shot.
To ensure that no subproblem is solved more than once, you can use
memoization: keep track of the subproblems already solved, and before
solving one, first check whether its solution has been recorded.
Alternatively, you can often arrange the subproblems in order from
simpler to more complicated, and solve all of them in that order up to
the one you are interested in. Many dynamic programming solutions store
the solutions to subproblems in an array, which can be 1D, 2D, or more
complex. The bottom-up approach is often slightly faster than
memoization, but memoization is useful when the space of possible
subproblems is large and it is difficult to predict or organize the
subproblems needed. Either approach typically requires a lot of memory,
more so than the straightforward divide-and-conquer approach, but avoids
the duplicated work that divide-and-conquer can incur.
If you're only interested in the optimal value of the objective function
(rather than an actual solution that realizes that value), the code
for dynamic programming is often simpler and the memory overhead can be
reduced.
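As an illustration of the top-down memoized pattern, here is a minimal sketch in Python (the coin-change problem and all names here are invented for the example; it is not tied to any of the problems above):

```python
# Hypothetical example: minimum number of coins needed to make `amount`
# from the given denominations, solved top-down with memoization.
def min_coins(coins, amount):
    memo = {}  # maps a subproblem (remaining amount) to its solution

    def solve(rest):
        if rest == 0:
            return 0
        if rest in memo:            # subproblem already solved: reuse it
            return memo[rest]
        best = float('inf')
        for c in coins:
            if c <= rest:
                best = min(best, 1 + solve(rest - c))
        memo[rest] = best           # record the solution before returning
        return best

    ans = solve(amount)
    return ans if ans != float('inf') else -1

print(min_coins([1, 3, 4], 6))  # 2 (use 3 + 3)
```

Without the memo dictionary this recursion would revisit the same remaining amounts exponentially many times; with it, each amount is solved once.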
A classic, simple example of the difference between divide-and-conquer
and dynamic programming is computing Fibonacci numbers.
Naively computing F(n) by recursion on the formula F(n) = F(n-1) + F(n-2)
takes exponential time, but if we keep an array of
all values F(1), F(2), F(3), and so on, computing F(n) becomes much
cheaper, since it takes only one addition once the
previous values are known. This also has the advantage that we end up
knowing all the earlier Fibonacci numbers, so multiple queries take only
as much time as computing the largest one.
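The contrast can be sketched in a few lines of Python (an illustrative sketch; the function names are our own):

```python
# Naive divide-and-conquer: exponential time, since each F(k) is
# recomputed many times by the overlapping recursive calls.
def fib_naive(n):
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming: fill an array with F(1)..F(n) in order, so each
# value costs one addition. Linear time overall, and every earlier
# Fibonacci number remains available for later queries.
def fib_table(n):
    f = [0] * (max(n, 2) + 1)
    f[1] = f[2] = 1
    for k in range(3, n + 1):
        f[k] = f[k - 1] + f[k - 2]
    return f  # f[k] = F(k) for 1 <= k <= n

print(fib_table(10)[10])  # 55
```

Note the two functions compute the same values; only the amount of repeated work differs.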
For more information on dynamic programming, and some examples, see
these slides.
This Wikipedia page
also has a good description of dynamic programming.
Classic Dynamic Programming Problems
There are several classic problems that every well-versed ICPC contestant should be familiar with. Here is a list of them:
Lowest Common Ancestor
On a rooted tree, it is sometimes required to compute the Lowest Common Ancestor (LCA) of two nodes p and q:
the deepest node in the tree that is an ancestor of both p and q.
There are many approaches to this problem, but the simplest and most common, binary lifting, stores the 2^j-th ancestor of each node, for j ranging from 0 to log N.
These tables can then be used to binary search for the LCA. The method requires O(n log n) preprocessing, both in time and space, and O(log n) time per LCA query.
Read about this approach, as well as others, in the following
topcoder tutorial.
Though the LCA algorithm is simple enough to remember, we have a working implementation here.
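For concreteness, here is a sketch of binary lifting in Python (our own illustrative code, not the implementation linked above; the class and parameter names are invented for the example):

```python
# Binary lifting LCA on a rooted tree.
# up[j][v] is the 2^j-th ancestor of node v (the root is its own parent).
class LCA:
    def __init__(self, n, root, children):
        self.LOG = max(1, n.bit_length())
        self.depth = [0] * n
        self.up = [[root] * n for _ in range(self.LOG)]
        # iterative DFS to set depths and immediate parents
        stack = [root]
        while stack:
            v = stack.pop()
            for c in children[v]:
                self.depth[c] = self.depth[v] + 1
                self.up[0][c] = v
                stack.append(c)
        for j in range(1, self.LOG):        # O(n log n) preprocessing
            for v in range(n):
                self.up[j][v] = self.up[j - 1][self.up[j - 1][v]]

    def query(self, p, q):                  # O(log n) per query
        if self.depth[p] < self.depth[q]:
            p, q = q, p
        # lift p to the same depth as q
        diff = self.depth[p] - self.depth[q]
        for j in range(self.LOG):
            if diff >> j & 1:
                p = self.up[j][p]
        if p == q:
            return p
        # lift both as high as possible while keeping them distinct;
        # they end up just below their LCA
        for j in reversed(range(self.LOG)):
            if self.up[j][p] != self.up[j][q]:
                p = self.up[j][p]
                q = self.up[j][q]
        return self.up[0][p]

# Example tree: 0 is the root, with children 1 and 2; 1 has children 3 and 4.
t = LCA(5, 0, [[1, 2], [3, 4], [], [], []])
print(t.query(3, 4))  # 1
print(t.query(3, 2))  # 0
```

Filling the ancestor table row by row is itself a small dynamic program: the 2^j-th ancestor is obtained by composing two jumps of 2^(j-1).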
