Section#4: Parallel complexity theory - P-completeness
(CS838: Topics in parallel computing, CS1221, Thu, Jan 28, 1999, 8:00-9:15 a.m.)


Contents

  1. Highly parallel reduction
  2. P-completeness
  3. The basic P-complete problem
  4. Examples of other P-complete problems
We have seen that NC is a subset of P, but, as in NP-completeness theory, whether P = NC is an open problem, likely as difficult as its famous predecessor P = NP. The situation is very similar, and so are the techniques used to deal with it. Let us demonstrate these similarities in more detail.

Suppose many researchers have worked hard trying to find a highly parallel algorithm for some polynomial problem K, and suppose they have all failed. This leads to a general, though unproven, consensus that K is sequential in its essence. Suppose there is another polynomial problem K' which we want to parallelize, but after some effort we give up, since no highly parallel solution is in sight, and we begin to suspect that K' is also inherently sequential. Unfortunately, the small amount of work invested in parallelizing K' is rather poor evidence that K' really cannot be parallelized. But suppose we manage to show that K can be reduced to K' (i.e., K can be solved by an algorithm that makes calls to K') and that this reduction is itself a highly parallel algorithm in the sense explained in Section 3, i.e., an NC algorithm. This supports our hypothesis that K' is likely inherently sequential, since otherwise K would become highly parallel as well, contradicting the overwhelming experience with K accumulated so far. However, there is still a small chance that K (and consequently K') does have a highly parallel solution and we have simply been unlucky not to find it yet. A much more convincing argument that K' is inherently sequential is to show that every known hard-to-parallelize polynomial problem can be reduced to K' in a highly parallel way. This is exactly the idea of P-completeness.

Highly parallel reduction

Let us start with an informal example. Consider the following four maximum-flow problems: MFV (compute the value of a maximum flow in a given network), MFB(i) (decide whether bit i of that value is 1), MFT(k), and MFP (find a maximum flow pattern, i.e., a flow on every edge). Parallel reductions allow us to compare these problems from the viewpoint of highly parallel feasibility. It follows that MFB(i) is highly parallel iff MFV is highly parallel. This approach can be generalized.

Lemma

The computation of any function g: {0,1}* -> {0,1}* induces associated bitwise decision problems g_i: "Is bit i of g(x) equal to 1?" If the length of g(x) is polynomially bounded in the length of x, then computing g and deciding its bits have the same parallel complexity.
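
One direction of this equivalence is immediate: deciding a single bit is no harder than computing g. For the other direction, here is a minimal sketch, assuming a hypothetical bit oracle g_bit(x, i) and a known length bound on g(x) (both names are illustrative assumptions, not part of the lecture). All bit queries are mutually independent, so a parallel machine can issue them simultaneously.

    # A minimal sketch: reconstructing g(x) from its bitwise decision problems.
    # The oracle g_bit(x, i) answers "is bit i of g(x) equal to 1?" and
    # length_bound(x) is an assumed bound on the length of g(x).
    from concurrent.futures import ThreadPoolExecutor

    def compute_g(x, g_bit, length_bound):
        n = length_bound(x)
        # The n oracle calls are independent of each other, so a parallel
        # machine can issue them all at once (simulated here with threads).
        with ThreadPoolExecutor() as pool:
            bits = list(pool.map(lambda i: g_bit(x, i), range(n)))
        return ''.join('1' if b else '0' for b in bits)

    # Toy usage: g reverses its input string, so |g(x)| = |x|.
    print(compute_g('1011', lambda x, i: x[::-1][i] == '1', len))   # prints 1101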

However, MFT(k) and MFV are likely to differ in parallel complexity. Consider the natural reduction of MFV to MFT(k) that makes m successive calls to MFT(k), each call using the answer of the previous one.

This works fine sequentially, but these m calls cannot be done in parallel. No matter how efficiently MFT(k) can be solved in parallel and no matter how many processors we have, we must perform all m calls one after another, since each of them depends on the result of the preceding one. Consequently, even if MFT(k) had a highly parallel solution, this reduction of MFV (and therefore of MFB(i) as well) to MFT(k) is inherently sequential and would not yield one. This could be an indication that MFV is inherently sequential even if MFT(k) is not, i.e., that the two problems really do differ in parallel complexity.
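
To make the dependency chain concrete, here is a hedged, schematic sketch of one typical shape of such a reduction: recovering an m-bit value by adaptive calls to a yes/no oracle. The oracle below is a toy stand-in, not an actual MFT(k) solver; the point is only that the argument of each call is computed from the previous answers, so the calls cannot be overlapped however cheap each one is.

    # Schematic sketch: recovering an m-bit integer by adaptive oracle calls
    # ("is the value at least k?").  The calls form a dependency chain of
    # length about m, so they cannot run in parallel.
    def recover_value(oracle, m):
        lo, hi = 0, 2 ** m - 1               # the value has at most m bits
        while lo < hi:                       # each iteration depends on the
            mid = (lo + hi + 1) // 2         # outcome of the previous one
            if oracle(mid):                  # "is the value at least mid?"
                lo = mid
            else:
                hi = mid - 1
        return lo

    secret = 42                              # toy stand-in for a flow value
    print(recover_value(lambda k: secret >= k, m=8))   # prints 42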

Consider now the MFP problem. It is not a functional problem: there may be many flow patterns that qualify as solutions, and successive calls to MFP may return different ones. Therefore, a parallel reduction of a problem to MFP might give different answers on different runs (compare with the reduction of MFV to MFB(i) above). The usual approach is to require some additional property that makes the solution unique, such as being the lexicographically least one, which converts the problem into a functional one.

Let us now generalize the notion of reducibility. It should not come as a surprise that the highly parallel reduction used intuitively in the example above is called NC-reduction.

Definition

A language L1 in P is NC-reducible to a language L2 in P, L1<=nc L2, if there is a function f:{0,1}*->{0,1}* such that for all x in {0,1}* (x in L1 iff f(x) in L2) and f can be computed by an NC algorithm.

Hence, if we are able to show that L1<=nc L2, we have shown that

  1. if L2 is in NC, then L1 is in NC, or, equivalently,
  2. if L1 is not in NC, then L2 is not in NC.
Therefore, we can
  1. either prove an NC upper bound for L2, which gives an NC upper bound for L1,
  2. or prove a polynomial (parallel-time) lower bound for L1, which gives the same lower bound for L2.
Of course, if L1<=nc L2 and L2<=nc L1, then L1 and L2 have the same parallel feasibility. This is the basis for defining the class of P-complete languages/problems.

P-completeness

Definition

A language L is P-hard under NC-reducibility if L'<=nc L for every L' in P.

Informally, a P-hard language L is as hard to decide in parallel as any problem in P.

Definition

A language L is P-complete if it is in P and it is P-hard.

Hence, L is itself in P, and every other problem in P has at most the parallel complexity of L. Similarly, we can define

Definition

A function optimization problem K is FP-complete if it is in FP and it is P-hard.

As in the sequential complexity theory, we have two basic observations.

Lemma

The class NC is closed under the relation of NC-reduction: If L1<=nc L2 and L2 in NC, then L1 in NC.
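
The proof is simply composition: run the NC reduction and then the NC algorithm for L2 on its output. A schematic sketch, with reduce_f and decide_L2 as hypothetical stand-ins for those two NC algorithms:

    # Schematic sketch of the closure argument: reduce_f is an NC algorithm
    # computing the reduction f, decide_L2 is an NC algorithm deciding L2.
    # Their composition decides L1.
    def decide_L1(x, reduce_f, decide_L2):
        y = reduce_f(x)         # NC: polylog time, polynomially many processors
        return decide_L2(y)     # NC again, on an input of polynomial size

Since an NC reduction produces an output of polynomial size, feeding it into the NC algorithm for L2 keeps the whole composition within polylogarithmic time and a polynomial number of processors.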

Lemma

The class of P-complete problems is closed under the relation of NC-reduction: If L2 in P and there exists P-complete L1 such that L1<=nc L2, then L2 is P-complete.

Corollary:
If any P-complete problem is in NC, then NC=P. If any P-complete problem provably has no NC algorithm, then no P-complete problem is in NC. There is strong evidence that the second case is true, even though no proof has been found so far. There is a striking similarity between the structure of NP from the viewpoint of sequential feasibility and the structure of P from the viewpoint of highly parallel feasibility.

To show that L is P-complete, we have to show that L is in P, and then we again have two choices: either show directly that every language in P is NC-reducible to L, or show that some already known P-complete language is NC-reducible to L.

The reason behind the P-completeness of a problem is usually an unavoidable data dependency within its solution, which prevents processors from working on many parts of the problem simultaneously; that is why these problems are called inherently sequential.

The basic P-complete problem

One of the historically first P-complete problems, which plays the same role in parallel complexity theory as the SATISFIABILITY problem plays in sequential NP-completeness theory, is the Circuit Value Problem (CVP): given a Boolean circuit together with values of its inputs, decide whether the output of the circuit is 1.

Theorem

[Ladner75] CVP is P-complete under NC-reduction.
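
The obvious algorithm evaluates the gates one by one in topological order, exactly following the data dependencies in the circuit. A minimal sketch, assuming a simple list encoding of the circuit (the encoding is an illustrative choice, not the one from the formal definition):

    # Minimal sketch of the obvious sequential CVP algorithm.  A circuit is a
    # list of gates in topological order; gate i is one of ('INPUT', b),
    # ('NOT', j), ('AND', j, k), ('OR', j, k) with j, k < i.
    def circuit_value(circuit):
        val = []                                  # val[i] = value of gate i
        for gate in circuit:                      # follows the data dependencies
            op = gate[0]
            if op == 'INPUT':
                val.append(gate[1])
            elif op == 'NOT':
                val.append(not val[gate[1]])
            elif op == 'AND':
                val.append(val[gate[1]] and val[gate[2]])
            else:                                 # 'OR'
                val.append(val[gate[1]] or val[gate[2]])
        return val[-1]                            # value of the output gate

    # Example: (x AND y) OR (NOT x) with x = 1, y = 0
    print(circuit_value([('INPUT', True), ('INPUT', False),
                         ('AND', 0, 1), ('NOT', 0), ('OR', 2, 3)]))   # prints False

A parallel machine can evaluate all gates of the same depth simultaneously, but for a general circuit the depth may be polynomial in its size, and no general way of shortcutting this chain of dependencies is known.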

This problem has several variants, which are still P-complete.


Examples of other P-complete problems

Graph theory

Combinatorial optimization

Logic

Formal languages

Algebra

Computational Geometry

Acknowledgements

Most of the material included in these lecture notes was inspired by the textbook R. Greenlaw et al., Limits to Parallel Computation, Oxford University Press, 1995, ISBN 0-19-508591-4.

Last modified: Thu Jan 28 by Pavel Tvrdik