International Collegiate Programming Contest
Dynamic Programming
This session focuses on dynamic programming. 
If you are not
familiar with the relevant concepts and algorithms, or you need a refresher,
you can use the resources below.
 
 Problem Set
- Cent Savings
- Train Sorting
- A Walk Through The Forest
- Collecting Beepers
- Circle of Leaf
- Winning Streak
 
Note: All problems expect I/O to be done using standard input and standard
output. The solution to each problem must be written in a single file.
If you are programming in Java, the application class should be called
"Main".
 
Resources
 
 Dynamic Programming
Dynamic programming is another classical programming paradigm. Like divide-and-conquer, it is based on recursion: an instance 
of a problem is reduced to one or more simpler instances of the same 
problem. In divide-and-conquer, the reduced instances are significantly smaller, which guarantees a small recursion tree. In a dynamic program, the reduced
instances may be just a little smaller, and the recursion tree may be large, but we make sure that the number of distinct instances in the recursion tree remains small and that each instance is solved at most once. The approach has many applications, in particular in optimization. It often helps to think of
an exhaustive search or backtracking process and discern additional structure in the instances that arise throughout the process.
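As a tiny illustration of this point (a standard textbook example, not one of the session's problems), consider naive recursive Fibonacci: the recursion tree has exponentially many nodes, yet it contains only n + 1 distinct instances.

```java
public class NaiveFib {
    static long calls = 0; // counts nodes in the recursion tree

    // Naive recursion: exponentially many calls,
    // but only the n + 1 distinct instances fib(0), ..., fib(n).
    static long fib(int n) {
        calls++;
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(30));  // 832040
        System.out.println(calls);    // 2692537 calls for only 31 distinct instances
    }
}
```

Solving each of those 31 distinct instances once, instead of every time it reappears in the tree, is exactly what dynamic programming does.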
 
To ensure that no subproblem is solved more than once, you can use
memoization: keep a table of solved subproblems, and before solving one,
first check whether its solution is already in the table. Alternatively,
you can often arrange the subproblems in order from simpler to more
complicated, and solve all of them in that order up to the one you are
interested in. Many dynamic programming solutions use an array, which
may be 1D, 2D, or higher-dimensional, to store the solutions to
subproblems. The iterative approach is often slightly faster than
memoization, but memoization is useful when the space of possible
subproblems is large and it is difficult to predict or organize the
subproblems needed. Either approach typically requires a lot of memory,
but the iterative approach sometimes lets you save space by reusing the
storage for solutions to instances that are no longer needed.
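Both styles can be sketched on the Fibonacci example above; this is a minimal illustration, with the memo table initialized to -1 to mark unsolved instances.

```java
import java.util.Arrays;

public class FibDP {
    // Top-down memoization: solve each distinct subproblem at most once.
    static long[] memo;

    static long fibMemo(int n) {
        if (n <= 1) return n;
        if (memo[n] != -1) return memo[n]; // already solved: reuse
        return memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
    }

    // Bottom-up: fill the table from simpler to more complicated subproblems.
    static long fibTable(int n) {
        long[] dp = new long[Math.max(2, n + 1)];
        dp[0] = 0;
        dp[1] = 1;
        for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
        return dp[n];
    }

    public static void main(String[] args) {
        int n = 50;
        memo = new long[n + 1];
        Arrays.fill(memo, -1); // -1 marks "not yet solved"
        System.out.println(fibMemo(n));  // 12586269025
        System.out.println(fibTable(n)); // 12586269025, computed iteratively
    }
}
```

Either version runs in O(n) time, compared to the exponential naive recursion.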
 
If you're only interested in the optimal value of the objective function 
(rather than an actual solution that realizes that value), the code 
for dynamic programming is often simpler and the memory overhead can be
reduced.
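For example, in the classic 0/1 knapsack problem, each row of the 2D table depends only on the previous row, so when only the optimal value is needed a single 1D array suffices. A sketch of this space-saving idea (the instance in main is made up for illustration):

```java
public class Knapsack {
    // 0/1 knapsack, value-only: dp[c] holds the best value achievable
    // with capacity c. Only the previous row of the conceptual 2D table
    // is ever read, so one 1D array is reused for all rows. Iterating
    // the capacity downwards ensures each item is used at most once.
    static long knapsack(int[] weight, int[] value, int capacity) {
        long[] dp = new long[capacity + 1];
        for (int i = 0; i < weight.length; i++)
            for (int c = capacity; c >= weight[i]; c--)
                dp[c] = Math.max(dp[c], dp[c - weight[i]] + value[i]);
        return dp[capacity];
    }

    public static void main(String[] args) {
        int[] w = {3, 4, 5}, v = {4, 5, 6}; // made-up instance
        System.out.println(knapsack(w, v, 7)); // items 0 and 1 fit: value 9
    }
}
```

Note the trade-off: after the reused row is overwritten, the information needed to reconstruct which items were chosen is gone; recovering an actual solution requires keeping more of the table.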
 
For more information on dynamic programming, and some examples, see
these slides.
This Wikipedia page
also has a good description of dynamic programming.
 
 Classic Dynamic Programming Problems
There are several classic problems that every well-versed ICPC contestant should be familiar with. Here is a list of them:
 
 
 Lowest Common Ancestor
On a rooted tree (the notion also extends to DAGs), it is sometimes required to compute the lowest common ancestor (LCA) of two nodes p and q:
the deepest node in the tree that is an ancestor of both p and q.
There are many approaches to this problem, including one known as binary lifting, which requires only O(log n) time per LCA query after a preprocessing phase that takes O(n log n) time. Read about binary lifting in
this tutorial and see
here for other approaches.
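A minimal sketch of binary lifting on a rooted tree (assuming nodes numbered 0 to n-1 and an adjacency-list representation): up[k][v] stores the 2^k-th ancestor of v, computed by a DP over k, with the root as its own parent.

```java
import java.util.*;

public class LCA {
    // Binary lifting: up[k][v] is the 2^k-th ancestor of v.
    // Preprocessing is O(n log n); each query is O(log n).
    int LOG, n;
    int[][] up;
    int[] depth;

    LCA(List<List<Integer>> adj, int root) {
        n = adj.size();
        LOG = 1;
        while ((1 << LOG) < n) LOG++;
        up = new int[LOG + 1][n];
        depth = new int[n];
        dfs(adj, root, root); // root is its own parent
        for (int k = 1; k <= LOG; k++)
            for (int v = 0; v < n; v++)
                up[k][v] = up[k - 1][up[k - 1][v]]; // DP over ancestor levels
    }

    void dfs(List<List<Integer>> adj, int v, int parent) {
        up[0][v] = parent;
        for (int u : adj.get(v))
            if (u != parent) { depth[u] = depth[v] + 1; dfs(adj, u, v); }
    }

    int query(int p, int q) {
        if (depth[p] < depth[q]) { int t = p; p = q; q = t; }
        int diff = depth[p] - depth[q];
        for (int k = 0; k <= LOG; k++)          // lift p to q's depth
            if ((diff >> k & 1) == 1) p = up[k][p];
        if (p == q) return p;
        for (int k = LOG; k >= 0; k--)          // lift both just below the LCA
            if (up[k][p] != up[k][q]) { p = up[k][p]; q = up[k][q]; }
        return up[0][p];
    }

    public static void main(String[] args) {
        // Made-up tree: 0 is the root, children 1 and 2; 1 has children 3 and 4.
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        LCA lca = new LCA(adj, 0);
        System.out.println(lca.query(3, 4)); // 1
        System.out.println(lca.query(3, 2)); // 0
    }
}
```

For very deep trees, the recursive DFS should be replaced by an iterative one to avoid stack overflow.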
 
   