CS 830: Randomness and Derandomization
Fall 2002



Course Description

You have probably heard about the recent breakthrough in computer science: the discovery of an efficient deterministic algorithm for testing primality. This course is about the paradigms underpinning that and several other discoveries, namely randomness and derandomization.
Over the past few decades, randomness has become one of the most pervasive tools in computer science. It is widely used in algorithm design, complexity theory, cryptography, and network protocols. Randomized solutions often perform better than the best known deterministic solutions, or are simpler and more elegant to implement. As was the case for primality testing until August 4, the quest for efficient deterministic solutions typically remains open. A promising approach is derandomization: reducing the need for randomness, or eliminating it completely, at a marginal cost in performance.
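To make the flavor of such algorithms concrete, here is a minimal sketch (not part of the course materials) of the Miller-Rabin test, the classic randomized algorithm for primality; the function name and the number of rounds are illustrative choices.

    import random

    def is_probable_prime(n, rounds=20):
        # Illustrative sketch of the Miller-Rabin test (not from the course).
        # Returns False if n is certainly composite, and True if n is prime
        # except with error probability at most 4**(-rounds).
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = 2**s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)   # a random candidate witness
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                 # a proves n composite
        return True

Each independent round drives the error probability down by a factor of four; the recent breakthrough shows how to reach certainty deterministically, at polynomial cost.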
The first part of this course will be devoted to the various uses of randomness in computer science. Although our treatment cannot be exhaustive, we will cover paradigms from all application areas.
The second part will be on derandomization and the related question of how to obtain the randomness we need. A randomized procedure usually assumes access to a large supply of unbiased, uncorrelated random bits. These may be hard to obtain physically. We will cover two constructs that come to the rescue: pseudorandom generators and extractors. Pseudorandom generators are deterministic algorithms that stretch a short seed of truly random bits into a long sequence of "pseudorandom" bits, which the randomized procedure cannot distinguish from truly random ones. They allow us to reduce the number of truly random bits needed. Extractors are algorithms that extract almost uniformly distributed bits from sources of biased and correlated bits. They allow us to run randomized procedures when we only have access to imperfect physical sources of randomness.
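As a toy illustration of the extraction idea (again not part of the course materials), here is von Neumann's classical trick, which produces perfectly unbiased bits from independent tosses of a coin with unknown bias; the extractors studied in the course handle far more general weak sources.

    import random

    def von_neumann_extract(bits):
        # Illustrative sketch of von Neumann's trick (not from the course).
        # Read the input two bits at a time: the pair 01 emits 0, the pair
        # 10 emits 1, and 00/11 are discarded. For any bias p we have
        # Pr[01] = Pr[10] = p*(1-p), so each output bit is uniform.
        out = []
        it = iter(bits)
        for a, b in zip(it, it):
            if a != b:
                out.append(a)
        return out

    # Example: a coin that shows 1 with probability 0.8.
    source = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
    print(von_neumann_extract(source)[:20])

Note that this trick relies on the input bits being independent; coping with correlated sources is where the real work on extractors begins.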
The precise balance between the two parts of the course will depend on the interests of the students.


Text

There is no required text. The recommended text is Randomized Algorithms by Motwani and Raghavan. Lecture notes will be made available from the course web page.


Prerequisites

Elementary probability theory (at the level of Math 240), algorithms (CS 577), and basic complexity theory (the parts of CS 520 dealing with the model of a Turing machine, and the classes P and NP). Sufficient mathematical maturity can substitute for the latter two prerequisites.


Course Work

  • Scribes (20%). Writing lecture notes for 1-3 lectures (depending on enrollment) and typing them up in LaTeX using the guidelines provided.

  • Homework (30%). There will be 3 assignments. The problems will be challenging.

  • Final paper (50%). There will be no exams. Instead, you will be asked to read one or more articles on a specific research topic and write a paper based on your reading. The list of topics and articles to choose from will be available by the 5th week of the semester. You are also welcome to suggest a topic of your own.


References

The following books are on reserve in Wendt library:
  • Motwani and Raghavan, Randomized Algorithms, QA274 M68 1995. (2-hour loan)

  • Alon and Spencer, The Probabilistic Method, QA164 A46 1992. (3-day loan)

  • Goldreich, Modern Cryptography, Probabilistic Proofs, and Pseudorandomness, QA76.9 A25 G64 1999. (3-day loan)


dieter@cs.wisc.edu