International Collegiate Programming Contest
Greedy Algorithms and Dynamic Programming
This session focuses on greedy algorithms and dynamic programming. If you are
not familiar with the relevant concepts and algorithms, or need a refresher,
you can use the resources below.
Problem Set
- Cutting Sticks
- Shoemaker's Problem
- Stacking Boxes
- The Poor Giant
- e-Coins
- Advanced Causal Measurements
- The Grand Dinner
- Bachet's Game
A PDF containing all the problems is available here.
Note: All problems expect I/O to be done using standard input and standard
output. The solution to each problem must be written in a single file.
If you are programming in Java, the application class should be called
"Main".
Resources
Greedy Algorithms
Many computational problems can be cast in the following format: You are
given a system consisting of a number of components that each can be set
in various ways. Your goal is to set them in such a way that a given
objective is optimized. A greedy approach to such a problem iterates over
the components in a well-chosen order, and sets them in a way that is
locally optimal (according to a well-chosen metric), rather than
trying to figure out what will work long-term. Most objectives are too
complex for such a myopic approach to yield a global optimum. But if
it does, then the resulting algorithm is often very efficient.
Designing a greedy algorithm boils down to determining the order in which
to consider the components and figuring out the right metric. In doing so,
it is important to verify that this combination actually leads to a correct
solution: ideally by reasoning about the algorithm, or else by developing
appropriate test cases.
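As a concrete example, consider activity selection: given a set of
intervals, choose as many pairwise non-overlapping ones as possible. Here
the order is "by finishing time" and the metric is "earliest finish first",
a combination whose correctness can be established by an exchange argument.
The following Java sketch (the input data is hypothetical) shows how little
code a greedy solution needs:

import java.util.Arrays;

public class ActivitySelection {
    // Greedy sketch: among the remaining intervals, always take the one
    // that finishes earliest and is compatible with what was taken so far.
    static int maxNonOverlapping(int[][] intervals) {
        // Order: consider intervals by increasing finishing time (the metric).
        Arrays.sort(intervals, (a, b) -> Integer.compare(a[1], b[1]));
        int count = 0;
        int lastEnd = Integer.MIN_VALUE;
        for (int[] iv : intervals) {
            if (iv[0] >= lastEnd) { // locally optimal: earliest-ending compatible interval
                count++;
                lastEnd = iv[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] demo = { {1, 3}, {2, 5}, {4, 7}, {6, 8} }; // hypothetical input
        System.out.println(maxNonOverlapping(demo));       // prints 2: {1,3} and {4,7}
    }
}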
Examples of problems for which a greedy approach works include shortest
paths in graphs with nonnegative weights (Dijkstra's algorithm, which makes
its decisions based on minimum tentative distance) and minimum spanning trees
(Prim's algorithm, which makes its decisions based on minimum edge weight).
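For reference, here is a minimal sketch of Dijkstra's algorithm using a
priority queue; the adjacency-list representation is an assumption of this
sketch, and all edge weights must be nonnegative for the greedy choice to
be valid:

import java.util.*;

public class Dijkstra {
    // adj.get(u) holds {v, w} pairs: an edge u -> v of nonnegative weight w.
    static long[] shortestPaths(List<List<int[]>> adj, int source) {
        int n = adj.size();
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[source] = 0;
        // Greedy metric: always settle the unvisited vertex whose tentative
        // distance is minimum; entries are {distance, vertex}.
        PriorityQueue<long[]> pq = new PriorityQueue<>(Comparator.comparingLong(e -> e[0]));
        pq.add(new long[] {0, source});
        while (!pq.isEmpty()) {
            long[] top = pq.poll();
            long d = top[0];
            int u = (int) top[1];
            if (d > dist[u]) continue; // stale entry, vertex already settled
            for (int[] e : adj.get(u)) {
                long nd = d + e[1];
                if (nd < dist[e[0]]) {
                    dist[e[0]] = nd;
                    pq.add(new long[] {nd, e[0]});
                }
            }
        }
        return dist;
    }
}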
More examples and info on greedy algorithms can be found on
these slides
and in this Topcoder tutorial.
Dynamic Programming
Dynamic programming is another classical programming paradigm, closely
related to divide-and-conquer. Both are based on recursion: an instance
of a problem is reduced to one or more simpler instances of the same
problem. A divide-and-conquer approach implements this recursion in
a straightforward way, whereas in dynamic programming special care is
given to make sure the number of different subproblems that arise in
the recursion remains small, and that each of them is solved at most
once. Keeping the number of subproblems small often involves discerning
some additional structure in the instances that the recursion generates.
If the straightforward divide-and-conquer strategy takes too much time,
you should try to find additional structure and give dynamic programming
a shot.
To ensure that no subproblem is solved more than once, you can use
memoization: keep track of which subproblems have been solved, and before
solving one, first check whether its solution has already been recorded.
Alternatively, you can often arrange the subproblems in order from simpler
to more complicated, and solve all of them in that order up to the one you
are interested in; many dynamic programming solutions store the answers in
an array, which can be 1D, 2D, or more complex. This bottom-up approach is
often slightly faster than memoization, but memoization is useful when the
space of possible subproblems is large and it is difficult to predict or
organize which subproblems will be needed. Either approach typically
requires a lot of memory, more so than straightforward divide-and-conquer,
but avoids the latter's potential duplication of work. If you are only
interested in the optimal value of the objective function (rather than an
actual solution that realizes that value), the code for dynamic programming
is often simpler and the memory overhead can be reduced.
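To illustrate the memoized flavor, here is a Java sketch built around a
made-up recurrence, f(0) = 1 and f(n) = f(n/2) + f(n/3) with integer
division. An array indexed by n is hopeless when n can be around 10^18,
but the recursion only ever reaches a few thousand distinct values, so a
hash map is a natural memo:

import java.util.HashMap;
import java.util.Map;

public class MemoSketch {
    static Map<Long, Long> memo = new HashMap<>();

    // Hypothetical recurrence chosen for illustration: f(0) = 1,
    // f(n) = f(n/2) + f(n/3) with integer division. Only values of the
    // form n/(2^a * 3^b) are ever reached, so even for n near 10^18
    // there are only a few thousand distinct subproblems.
    static long f(long n) {
        if (n == 0) return 1;
        Long cached = memo.get(n);  // already solved? reuse the answer
        if (cached != null) return cached;
        long result = f(n / 2) + f(n / 3);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(f(1_000_000_000_000_000_000L)); // instant despite huge n
    }
}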
A classic, simple example of the difference between divide-and-conquer
and dynamic programming is computing Fibonacci numbers.
Recursively computing F(n) directly from the formula F(n) = F(n-1) + F(n-2)
takes exponential time, because the same values are recomputed over and
over. But if we keep an array of all values F(1), F(2), F(3), and so on,
computing F(n) takes only linear time, since each entry needs just one
addition once the previous values are known. This also has the advantage
that we then know all the earlier Fibonacci numbers, so answering multiple
queries takes only as much time as computing the largest one.
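A minimal Java sketch of both versions (note that F(n) overflows a long
beyond n = 92):

public class Fibonacci {
    // Divide-and-conquer directly on the recurrence: exponential time,
    // because F(n-2) is recomputed in both branches, F(n-3) even more often.
    static long fibSlow(int n) {
        if (n <= 1) return n;
        return fibSlow(n - 1) + fibSlow(n - 2);
    }

    // Dynamic programming: solve subproblems in order F(0), F(1), ..., F(n),
    // storing each once. Linear time, and after one call the table answers
    // every smaller query for free.
    static long[] fibTable(int n) {
        long[] f = new long[Math.max(n + 1, 2)];
        f[0] = 0;
        f[1] = 1;
        for (int i = 2; i <= n; i++) {
            f[i] = f[i - 1] + f[i - 2];
        }
        return f;
    }

    public static void main(String[] args) {
        System.out.println(fibTable(50)[50]); // prints 12586269025
    }
}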
For more information on dynamic programming, and some examples, see
these slides
and this Topcoder tutorial.
This Wikipedia page
also has a good description of dynamic programming.