Research on Computational Complexity Theory
Dieter van Melkebeek


Given a problem, can we realistically solve it using computers? Complexity theory aims to answer this question by describing how many resources we need to compute the solution as a function of the problem size. Typical resources include time on sequential and parallel architectures, and memory space. As we want to abstract away from details of input representation and specifics of the computer model, we end up with classes of problems which we can solve within certain robust resource bounds. We often focus on decision problems, or equivalently, on recognizing the corresponding languages of yes instances. Some of the most well-studied complexity classes are:

Research in complexity theory boils down to determining the relationships between these classes - inclusions and separations. In the above list, we know that every class contains the previous one; for each of these inclusions, strictness is a major unresolved conjecture. A similar situation holds for the polynomial-time hierarchy, a tower of complexity classes within PSPACE, the first two levels of which are P and NP. Complexity theorists try to show that this tower does not collapse at any level.

One often tackles such questions by singling out the hardest problems in a complexity class, and concentrating on their properties. We consider a problem A hard for a complexity class if we can efficiently reduce every problem in the complexity class to A, i.e., if we can efficiently solve any problem in the class when given access to an oracle providing solutions to instances of A. If A itself also belongs to the class, we call A complete for the class. Several complexity classes C have complete problems under various reducibility notions. If the efficiency of the reductions is comparable to that of another complexity class C', which is known to be included in C, separating C from C' is equivalent to showing that a complete problem for C falls outside of C'.
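In symbols, writing ≤_r for the reducibility notion at hand, this standard terminology (not specific to any particular class) reads:

    \begin{align*}
    A \text{ is hard for } \mathcal{C} \text{ under } \le_r \ &\iff\ \forall B \in \mathcal{C}:\ B \le_r A, \\
    A \text{ is complete for } \mathcal{C} \text{ under } \le_r \ &\iff\ A \in \mathcal{C} \text{ and } A \text{ is hard for } \mathcal{C} \text{ under } \le_r.
    \end{align*}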

The above description implicitly refers to the classical deterministic model of computation. However, the same questions can be asked about models that allow the use of randomness or quantum phenomena. In fact, much of contemporary complexity theory studies the relative power of those various models.

Here is a summary of the separation and inclusion results I have obtained thus far. More explanations follow below.

Separations

I have developed concrete time-space lower bounds for NP-complete problems like satisfiability and closely related problems on deterministic, randomized, and quantum machines.

Under the hypothesis that the polynomial-time hierarchy does not collapse, I have also established the first tight kernelization lower bounds for NP-complete problems like satisfiability and vertex cover.

I have worked on time and space hierarchies within complexity classes that are not known to have complete problems.

I have developed lower bound techniques in the comparison-query model.

Some of my earlier work can be cast within the generic framework of trying to separate complexity classes by finding structural properties that one class has but the other does not. This is the case for my study of the notion of locality in finite model theory, as well as for my thesis work on various properties of complete problems, which are discussed in the section on structural properties below.

Inclusions

On the inclusion side, much of my work falls into the area of pseudo-randomness and derandomization, in particular for verification processes known as Arthur-Merlin protocols.

I study the connection between derandomization and lower bounds, and have developed some implications for the area of meta complexity.

I also obtained unconditional derandomization results, including results on isolation and on polynomial identity testing, for which I initiated a systematic analysis of hitting set generators.

In the quantum setting, I developed time-space efficient simulations, partly motivated by the quest for time-space lower bounds.


Time-Space Lower Bounds

Complexity classes are defined using robust resource bounds like polynomial time. Separating P from NP is equivalent to showing that satisfiability -- the problem of deciding whether there is a setting of the variables in a given propositional formula that makes it true -- cannot be solved in polynomial time. A more modest goal is to establish nontrivial lower bounds on the running time of any algorithm for satisfiability.
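For concreteness, here is a minimal brute-force decision procedure for satisfiability of formulas in conjunctive normal form - a sketch in Python, with the clause representation chosen just for illustration. It runs in time exponential in the number of variables; the questions above ask how much better any algorithm can do.

    from itertools import product

    def satisfiable(clauses, num_vars):
        # A clause is a list of nonzero integers: literal v means "variable v is true",
        # literal -v means "variable v is false" (DIMACS-style convention).
        for assignment in product([False, True], repeat=num_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    # (x1 or x2) and (not x1 or x2) and (x1 or not x2) is satisfied by x1 = x2 = True.
    print(satisfiable([[1, 2], [-1, 2], [1, -2]], num_vars=2))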

In joint work with L. Fortnow, R. Lipton, and A. Viglas, I established such lower bounds for algorithms that use little memory space. We showed that any random-access machine deciding satisfiability in subpolynomial space has to run for at least n^1.618 time on formulas of length n. More generally, we obtained a time-space lower bound for satisfiability: For any constant a less than the golden ratio we determined a positive constant b such that every algorithm for satisfiability that runs in time at most n^a has to use space at least n^b. [paper] The exponent of the golden ratio has been improved subsequently. Currently, the best known exponent is about 1.801. See the survey paper I wrote for more information.
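Using the standard notation DTISP(t, s) for the class of problems solvable simultaneously in time t and space s, the lower bound can be restated as:

    \forall a < \phi \ \exists b > 0:\ \mathrm{SAT} \notin \mathrm{DTISP}(n^a, n^b),
    \qquad \text{where } \phi = \tfrac{1+\sqrt{5}}{2} \approx 1.618.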

Together with R. Raz, I proved a pure time lower bound for satisfiability but on a more restricted model of computation: Any deterministic Turing machine with one d-dimensional work tape and random access to the input that solves satisfiability has to run for at least n^a steps, where a is any constant less than ((d+2)/(d+1))^{1/2}. [paper]

All of the above lower bounds hold for virtually any of the known NP-complete problems. In a paper with S. Diehl and R. Williams, we established somewhat weaker lower bounds for solving coNP-complete problems like the language of propositional tautologies on nondeterministic machines. [paper]

In the randomized setting with two-sided error (i.e., when both false positives and false negatives are allowed), time-space lower bounds for satisfiability are still wide open - it is consistent with current knowledge that satisfiability has an algorithm that runs simultaneously in linear time and logarithmic space! In a paper with S. Diehl, we did manage to show nontrivial time-space lower bounds on such machines for somewhat harder problems, namely the equivalents of satisfiability in the second and higher levels of the polynomial-time hierarchy. [paper]

In the quantum setting, we do not even know of nontrivial time-space lower bounds for any level of the polynomial-time hierarchy, but T. Watson and I showed such results for the so-called counting hierarchy instead of the polynomial-time hierarchy. [paper]


Kernelization Lower Bounds

Many hard computational problems contain a parameter k other than the input size that has a large impact on the computational complexity but in practice only takes on small values. A good example is the vertex cover problem, where one seeks a subset of at most k vertices of a given graph that hits every edge of the graph. The trivial algorithm runs in time O(n^k), but there exist algorithms that take time O(n^c + f(k)), where c is some constant and f an arbitrary function - a running time that is typically much better than that of the trivial algorithm.

One way to achieve such running times is through kernelization: Reduce the given instance in time polynomial in n to an equivalent instance whose size is bounded by some function g of the parameter k only, and then run a brute-force algorithm on the reduced instance. In order to obtain good parameterized algorithms, the functions f and g should not behave too badly, which motivates the quest for kernels of small size g(k).
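As an illustration of the idea, here is a sketch in Python of the classical Buss-style kernelization for vertex cover, which produces an equivalent instance with at most k^2 edges; the adjacency-dictionary representation and the function name are assumptions made for this example.

    def buss_kernel(graph, k):
        # graph: dict mapping each vertex to the set of its neighbors (undirected).
        # Returns False if no vertex cover of size at most k exists, or else an
        # equivalent reduced instance (kernel, remaining budget) with <= k^2 edges.
        g = {v: set(nbrs) for v, nbrs in graph.items()}
        while True:
            # A vertex of degree > k must be in every vertex cover of size <= k;
            # otherwise all of its more than k neighbors would have to be chosen.
            v = next((u for u in g if len(g[u]) > k), None)
            if v is None:
                break
            if k == 0:
                return False
            for u in g[v]:
                g[u].discard(v)
            del g[v]
            k -= 1
        # Isolated vertices never help cover an edge; drop them.
        g = {v: nbrs for v, nbrs in g.items() if nbrs}
        edges = sum(len(nbrs) for nbrs in g.values()) // 2
        if edges > k * k:
            return False  # each remaining vertex has degree <= k, so k vertices cover <= k^2 edges
        return g, k       # brute force on this kernel takes time depending on k only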

For the vertex cover problem, the best known kernelizations yield graphs with O(k^2) edges. H. Dell and I gave evidence that one cannot do better: The existence of a kernel with O(k^{2-ε}) edges would collapse the polynomial-time hierarchy. Similar results hold for other NP-complete problems. For example, for k ≥ 3, k-CNF formulas cannot be sparsified in polynomial time below the trivial bound of O(n^k) bits while preserving satisfiability, unless the polynomial-time hierarchy collapses. [paper]

In fact, the results can be cast more generally as (conditional) lower bounds on the communication cost in the following two-player communication protocol to decide certain languages L: Alice holds the entire input x but is polynomially bounded; Bob is computationally unbounded but does not know any part of x; their goal is to cooperatively decide whether x belongs to L at small cost, where the cost measure is the number of bits of communication from Alice to Bob. As another application, under the hypothesis that the polynomial-time hierarchy does not collapse, we show the optimality of the size of the known probabilistically checkable proofs (PCPs) for the satisfiability problem on k-CNF formulas.


Pseudo-Randomness and Derandomization

Randomness has become a standard tool in various areas of computer science: sequential, parallel, and distributed computing, cryptography, etc. It often allows us to come up with efficient and simple solutions. However, from a complexity theoretic point of view, the power of randomization remains unclear. One fundamental question is whether we can simulate randomized computations deterministically without a significant blowup in running time, and the same can be asked about memory space.

A celebrated result in this context is the deterministic polynomial-time primality test by M. Agrawal, N. Kayal, and N. Saxena, which was obtained by derandomizing a randomized polynomial-time primality test based on polynomial identity testing. The above question asks whether the same can be done for arbitrary randomized computations. The consensus within the complexity community suggests a positive answer, although an inherent exponential slowdown is not yet ruled out (the BPP versus EXP problem). The known generic results are all conditional: Under certain complexity theoretic hypotheses regarding the existence of hard problems, we can construct pseudo-random generators that allow us to simulate efficient randomized computations by efficient deterministic computations. The hypotheses have been simplified over the years but they are yet to be removed.
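One prototypical instantiation, due to Impagliazzo and Wigderson, trades a circuit lower bound for a complete derandomization of randomized decision procedures:

    \exists L \in \mathrm{E} = \mathrm{DTIME}\big(2^{O(n)}\big) \text{ requiring Boolean circuits of size } 2^{\Omega(n)}
    \ \Longrightarrow\ \mathrm{BPP} = \mathrm{P}.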

Conditional Results and Equivalences

I have looked at the role of randomness in bounded round interactive proof systems known as Arthur-Merlin protocols. Under certain hardness conditions, Arthur (the verifier) can use a pseudo-random generator to reduce the number of random bits he needs. The strongest hardness condition yields complete derandomizations. The weakest one provided the first evidence that any language with a bounded round interactive proof system, and graph nonisomorphism in particular, has subexponential size proofs: If not, the polynomial-time hierarchy collapses. [paper]

Together with A. Klivans, I applied the same derandomization technique to various other randomized processes. [paper]

The above applications all involve time-efficient derandomization based on circuit lower bounds for certain uniform time-bounded complexity classes. We developed similar results for the space-bounded setting, involving branching program lower bounds for uniform space-bounded complexity classes. [paper]

The technique also played a crucial role in the breakthrough construction of extractors by L. Trevisan, which allows us to run randomized algorithms reliably with imperfect sources of randomness.

In the time-bounded settings of both randomized decision procedures as well as Arthur-Merlin protocols, the circuit lower bound hypotheses are known to be equivalent to the existence of pseudo-random generators that yield derandomizations of the sought-after efficiency. However, there may be other ways to obtain such derandomizations than through pseudo-random generators. Such a generic type of derandomization is often referred to as white-box, in contrast to the black-box type of derandomization through the direct application of pseudo-random generators. In joint work with N. Mocelin Sdroievski, we present near-equivalences between white-box derandomization of Arthur-Merlin protocols and hardness on almost all inputs of nondeterministic multibit functions against such protocols.

We also establish a full-fledged equivalence between a constructive version of the hardness condition against Arthur-Merlin protocols that go through a compression phase, and the existence of targeted pseudo-random generators, which are generators that are only required to work on a given instance. O. Goldreich showed the equivalence between white-box derandomization and targeted pseudo-random generators in the context of randomized decision processes. A corresponding equivalence in the context of Arthur-Merlin protocols remains open. [paper]

For randomized decision procedures, even efficient simulations on nondeterministic machines remain open. Similarly, for Arthur-Merlin protocols, efficient simulations on nondeterministic machines with access to an oracle for NP remain open. Such simulations are referred to as mild derandomizations. In joint work with B. Aydinlioglu, we investigated the equivalence of mild white-box derandomization and circuit lower bounds, both in the setting of randomized decision procedures and in that of Arthur-Merlin protocols. [paper] For the setting of mild derandomization of Arthur-Merlin protocols, N. Mocelin Sdroievski and I established an equivalence between white-box derandomization and targeted pseudo-random generators. [paper]

Another relaxation of the notion of derandomization is to only require the simulations to be correct on most instead of (almost) all inputs of a given length. We refer to this relaxation as typically correct derandomization. In joint work with J. Kinne and R. Shaltiel, we describe a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. The typically-correct derandomizations are more efficient than the everywhere-correct ones that follow from the same pseudo-random generator. Moreover, in some settings, like for randomized decision processes, we obtain typically-correct derandomizations from hypotheses that are seemingly weaker than for everywhere-correct derandomization. [paper]
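Formally, following the description above, a seed-extending pseudorandom generator is one whose output begins with a verbatim copy of its seed:

    G : \{0,1\}^d \to \{0,1\}^n, \qquad G(s) = (s, E(s)) \ \text{ for some } E : \{0,1\}^d \to \{0,1\}^{n-d},

with G required to be pseudorandom in the usual sense against the relevant class of distinguishers.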

Unconditional Results

Since the derandomization of primality testing via polynomial identity testing (PIT), the latter problem has become a focal point of the derandomization community. The problem consists of deciding whether a polynomial identity holds. More precisely, we are given an arithmetic circuit or formula, and wish to know whether all the coefficients of the underlying formal polynomial vanish. PIT has a very natural randomized algorithm, but despite its simplicity no efficient deterministic algorithm for PIT is known, at least not for general circuits or formulas. In fact, V. Kabanets and R. Impagliazzo showed that derandomizing PIT for general circuits/formulas implies circuit/formula lower bounds that have been open for 50+ years. With S. Aaronson, I obtained a simpler proof that yields somewhat stronger results and scales better. [paper]
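The natural randomized algorithm just evaluates the given circuit at random points: by the Schwartz-Zippel lemma, a nonzero polynomial of total degree d over a field of size p vanishes at a uniformly random point with probability at most d/p. A minimal sketch in Python, assuming black-box evaluation access to the circuit modulo a prime:

    import random

    def probably_identically_zero(poly, num_vars, degree, trials=20, prime=2**61 - 1):
        # poly(point, prime) is assumed to evaluate the circuit at `point` modulo `prime`.
        # Per trial, a nonzero polynomial of total degree `degree` evaluates to zero
        # with probability at most degree / prime (Schwartz-Zippel).
        for _ in range(trials):
            point = [random.randrange(prime) for _ in range(num_vars)]
            if poly(point, prime) != 0:
                return False   # a nonzero evaluation certifies the polynomial is nonzero
        return True            # zero at every sampled point: identically zero w.h.p.

    # Hypothetical usage: (x + y)^2 - (x^2 + 2xy + y^2) is the zero polynomial.
    f = lambda v, p: ((v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)) % p
    print(probably_identically_zero(f, num_vars=2, degree=2))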

Considerable progress has been made on deterministic PIT algorithms for restricted classes of arithmetic circuits or formulas, in particular for constant-depth formulas. With M. Anderson and I. Volkovich, we looked at a different type of restriction, obtained by bounding the number of times a variable can occur in the formula, without restricting the depth. For multilinear formulas in which every variable occurs only a constant number of times, we obtained a deterministic polynomial-time identity test. [paper]

The common approach towards derandomizing polynomial identity testing looks at a particular restricted class C of arithmetic formulas or circuits, and designs low-degree substitutions on a small set of fresh variables that leave any nonzero polynomial in C nonzero. Such substitutions are referred to as a hitting set generator G for C. The approach effectively reduces the number of variables, and yields derandomizations whose efficiency mainly depends on the number of fresh variables.
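In symbols, a hitting set generator for a class C is a tuple of polynomials in a few fresh variables that preserves nonzeroness for every member of C:

    G = (g_1, \dots, g_n) : \mathbb{F}^t \to \mathbb{F}^n \ (t \ll n), \qquad
    \forall p \in \mathcal{C} \setminus \{0\}:\ p\big(g_1(y_1,\dots,y_t), \dots, g_n(y_1,\dots,y_t)\big) \not\equiv 0.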

With I. Hu and A. Morgan, I initiated a different, more analytic approach towards derandomizing polynomial identity testing. Rather than focusing on specific classes C of interest, we propose to study common generators G of interest and characterize exactly the set of polynomials that G fails to hit. The set has the mathematical structure of an ideal; we refer to it as the vanishing ideal of G. We execute the program for a generator G that has turned out to be useful for several classes C, including the class of multilinear constant-read formulas from our earlier work. We describe its vanishing ideal in terms of a small generating set (Groebner basis) and develop a structured membership test for the multilinear part of the vanishing ideal (inspired by a connection to alternating algebra). Along the way we show that, even though polynomial substitutions are in principle sufficient in the context of derandomizing polynomial identity testing, it nevertheless helps to consider rational substitutions as well. [paper]

In the context of the BPP versus EXP problem, R. Santhanam and I proved the first unconditional, albeit small, improvement over the trivial deterministic simulation of randomized time t: We showed how to simulate randomized multitape Turing machines operating in time t on deterministic Turing machines operating in time o(2^t). [paper]

In the space-bounded setting, nontrivial unconditional derandomization results have been known for some time. In particular, N. Nisan showed how to simulate randomized logspace by a deterministic algorithm that runs in polynomial time and space Θ(log^2 n), and M. Saks and S. Zhou established a deterministic algorithm that runs in time n^{Θ(log^{0.5} n)} and space Θ(log^{1.5} n). In joint work with J.-Y. Cai and V. Chakaravarthy, I obtained a family of deterministic simulations whose time and space complexities interpolate between these earlier results. [paper]

In joint work with G. Prakriya we designed a pseudorandom generator geared towards derandomizing the isolation lemma in space-bounded settings that is computable in logspace and has seed length O((log n)^{1.5}). We used it to show that every language in NL can be accepted by a nondeterministic machine that runs in polynomial time and O((log n)^{1.5}) space, and has at most one accepting computation path on every input. [paper]
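For reference, the isolation lemma of Mulmuley, Vazirani, and Vazirani states that small random weights isolate a unique minimum-weight set; derandomizing it means producing such weights with few (or no) random bits:

    \emptyset \neq \mathcal{F} \subseteq 2^U,\ |U| = n,\ w(u) \text{ chosen independently and uniformly from } \{1, \dots, 2n\}
    \ \Longrightarrow\ \Pr\Big[\text{the set } S \in \mathcal{F} \text{ minimizing } w(S) = \textstyle\sum_{u \in S} w(u) \text{ is unique}\Big] \ge \tfrac{1}{2}.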


Meta Complexity

The term meta complexity refers to the complexity of computational problems that are themselves about computational complexity. A typical example is computing Kolmogorov complexity. The Kolmogorov complexity of a string is the length of its shortest description; various complexity restrictions on the descriptions lead to various notions of Kolmogorov complexity.
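For a fixed universal machine U, the plain and time-bounded variants can be written as follows (a standard formalization):

    K_U(x) = \min\{\, |p| \ :\ U(p) = x \,\}, \qquad
    K^t_U(x) = \min\{\, |p| \ :\ U(p) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\}.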

The study of meta complexity connects to the search for computational problems of intermediate complexity. We know that if P differs from NP, then there are problems in NP that are not in P but are not NP-complete either. However, almost all of the known natural problems in NP have been categorized as either in P or else NP-complete. There are only a handful of natural candidate NP-intermediate problems, including factoring and graph isomorphism. A similar situation holds for complexity classes like PSPACE. In these contexts, sets of strings of high Kolmogorov complexity (for various Kolmogorov measures) have been proposed as natural candidates for languages that are not complete for the class they naturally live in.

Contrary to common belief, in joint work with E. Allender, H. Buhrman, M. Koucky, and D. Ronneburger, I showed that these sets are complete under reductions such as polynomial-time zero-error randomized Cook reductions in the case of PSPACE. The result hinges on an interpretation of known hardness-based pseudorandom generator constructions as connecting two notions of "random strings": a string picked (uniformly) at random, and a string without a short description. [paper]

Several follow-up works obtained similar results based on the same connection. In joint work with E. Allender, J. Grochow, C. Moore, and A. Morgan, we developed a fundamentally different approach inspired by the interactive proof system for the complement of Graph Isomorphism (GI). It yields a randomized reduction with zero-sided error from GI to a Kolmogorov variant of the Minimum Circuit Size Problem (MCSP): Given a Boolean string x and an integer s, does there exist a Boolean circuit of size at most s that has x as its truth-table? The use of GI in the statement can be replaced by any isomorphism problem for which the underlying group satisfies some elementary properties. Instantiations include Linear Code Equivalence, Permutation Group Conjugacy, and Matrix Subspace Conjugacy. [paper]

Another meta-problem is the so-called language compression problem, which asks for succinct descriptions of the strings in a language A such that the strings can be efficiently recovered from their description when given access to A. Let A^{=n} denote the set of strings in A of length n. Information theoretically, we need k = log |A^{=n}| bits to be able to describe every possible string in A^{=n}, and we can essentially realize that lower bound: Describe a string in A^{=n} by its length n and its lexicographic index in A^{=n}. However, it may take exponential time to recover a string from that description. H. Buhrman, T. Lee, and I studied how close one can get to the information theoretic lower bound with efficient reconstruction schemes. Without any additional help, it turns out one cannot get close at all: We exhibit a language A for which n - k - O(log n) bits are needed, even for randomized schemes. With the help of an untrusted party, though, we show how to achieve the information theoretic lower bound up to an additive term of O((k^{1/2} + log n) log n) using a deterministic scheme, and up to an additive term of O(log^3 n) using a randomized scheme. [paper]
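The information-theoretically optimal description mentioned above is easy to write down; the catch is that decoding enumerates A^{=n}, which in general takes time exponential in n even when membership in A is easy to test. A sketch in Python, with `in_A` standing in for a membership test:

    from itertools import product

    def encode(x, in_A):
        # Describe x by its length and its lexicographic index among the length-|x|
        # strings in A; the index takes about log|A^{=n}| bits.
        n = len(x)
        index = sum(1 for bits in product('01', repeat=n)
                    if in_A(''.join(bits)) and ''.join(bits) < x)
        return n, index

    def decode(n, index, in_A):
        # Recover the string by enumerating A^{=n} in lexicographic order -
        # up to 2^n membership tests, hence exponential time in general.
        members = (''.join(bits) for bits in product('01', repeat=n)
                   if in_A(''.join(bits)))
        for i, y in enumerate(members):
            if i == index:
                return y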


Hierarchies

Some of the most fundamental results in computational complexity are time hierarchies - that we can solve more decision problems on some model of computation when given a bit more of some resource. Such hierarchies have been established for any reasonable model of computation that is syntactic, i.e., for which there exists a computable enumeration of all machines in the model. The latter property strongly correlates with the existence of complete problems in the corresponding polynomial-time complexity classes. The standard deterministic and nondeterministic models of computation are all syntactic; the known hierarchy theorems imply the existence of infinite time hierarchies within classes like P and NP.

However, several relevant models of computation are provably not syntactic. Examples include semantically defined models like randomized or quantum machines with two-, one-, or zero-sided error. For instance, there does not exist a computable enumeration of all randomized machines that satisfy the promise of two-sided error. This relates to the fact that there is no known complete problem for BPP, the class of all languages which have polynomial-time randomized decision procedures with two-sided error.

To date, no nontrivial hierarchy theorem is known for any non-syntactic model. In joint work with K. Pervyshev, I established a time hierarchy for any reasonable model of computation with one bit of advice: For any rationals c and d such that 1 ≤ c < d, there exists a language that can be decided in time n^d with one bit of advice but not in time n^c with one bit of advice. In particular, we obtained infinite time hierarchies within the polynomial-time complexity classes corresponding to the randomized and quantum models listed above with one bit of advice. [paper]

J. Kinne and I investigated the corresponding problem for space rather than time. [paper]


Interactive Proof Systems

Interactive proof systems (IPSs) are protocols in which one or more all-powerful provers try to convince a randomized computationally limited verifier of some fact. They are strongly related to probabilistically checkable proof systems (PCPs), in which a randomized resource bounded verifier checks a given proof. Both concepts have led to breakthrough inapproximability statements for NP-hard optimization problems.

The intriguing results on IPSs and PCPs of the early nineties got me interested in computational complexity theory. They also brought me to the University of Chicago, where a lot of the initial work on them was done, e.g., the conception of Arthur-Merlin games, equivalent to bounded round interactive proof systems.

As mentioned, one aspect I investigated was the role of randomness in Arthur-Merlin games. I obtained some conditional derandomization results, e.g., that every language with a bounded round interactive proof system has subexponential size proofs unless the polynomial-time hierarchy collapses. This provided the first strong evidence that graph nonisomorphism has subexponential size proofs of membership. And, as also mentioned above, I showed that there are no shorter PCPs for the satisfiability of k-CNF formulas than currently known, unless the polynomial-time hierarchy collapses.


Query Complexity

The number of input accesses required to solve a problem represents a lower bound on its running time. The measure has an information-theoretic rather than a computational flavor and seems more manageable. In several models, strong - often even tight - lower bounds are known, including the comparison-query model, the standard model for problems like sorting and selection.

In joint work with I. Hu and A. Morgan, I developed two novel lower bound techniques for a generic problem in the comparison-query model. Both techniques can be described in terms of the Cayley graph of the symmetric group with adjacent-rank transpositions as the generating set. One technique represents a discrete analog of a classical technique in algebraic complexity due to M. Ben-Or. The other technique represents an analog of sensitivity arguments in Boolean complexity. [paper]

Using our techniques we derive almost-tight lower bounds for a problem for which earlier techniques failed completely: Given a rooted tree and a ranking of its leaves, what is the minimum number of inversions of the leaves that can be attained by ordering the tree? The problem originates in mathematical psychology and has the evaluation of the Mann-Whitney statistic for detecting differences between distributions as a special case. [paper]

In the quantum setting, T. Hayes, S. Kutin, and I worked on the query complexity of the majority function. [paper]


Structural Properties

Locality

In descriptive complexity, logics over finite structures are used to characterize complexity classes, and the expressibility of the logic determines the class. One potent tool for proving inexpressibility results is the notion of locality. Informally, a logical query is local if it can be answered by looking only at small, localized parts of the structure. Once a logic is shown to be local, one approach to prove that a particular query cannot be expressed in that logic is to show that the query is not local. For example, each first-order query over graphs only depends on the neighborhoods of the arguments up to some constant distance. On the other hand, the query checking whether two given vertices are connected cannot be answered by considering only the neighborhoods up to some constant distance. Thus, one can conclude that the connectivity problem is not expressible in first-order logic.
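One standard way to make this precise is Gaifman locality: a query Q is local with radius r if its answer on a tuple depends only on the isomorphism type of the tuple's radius-r neighborhood in the Gaifman graph of the structure. In symbols (with N_r denoting the induced substructure on all elements at distance at most r from the tuple):

    N_r^{\mathfrak{A}}(\bar a) \cong N_r^{\mathfrak{A}}(\bar b)
    \ \text{(via an isomorphism mapping } \bar a \text{ to } \bar b\text{)}
    \ \Longrightarrow\ \big(\bar a \in Q(\mathfrak{A}) \iff \bar b \in Q(\mathfrak{A})\big).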

M. Anderson, L. Segoufin, N. Schweikardt, and I fully characterized the level of locality of a logic that captures the complexity class AC0 of all languages that can be decided by nonuniform families of polynomial-size constant-depth circuits. The key step exploits the known lower bounds for parity on constant-depth circuits to obtain a tight upper bound for the locality of queries expressible in that logic. Our results also have applications to the study of regular languages. [paper]

Sparseness

Sparseness of complete or hard languages constitutes a well-studied candidate property to differentiate between complexity classes. It particularly interests complexity theorists because of its connections to nonuniform complexity and to isomorphism questions.

A language is sparse if it contains no more than polynomially many of the exponentially many instances of each length. Showing that P has no sparse hard language under logarithmic space computable reductions would separate L from P.

I established the logical completeness of this approach for reductions that can ask a bounded number of queries: If L differs from P then there exists no sparse hard language for P under logarithmic space reductions with a bounded number of oracle queries. The proof works for various other classes as well, e.g., for L versus NL, and for NC1 versus L. Another instantiation states that no sparse hard language for NP exists under polynomial-time randomized reductions with a bounded number of queries unless randomness allows us to solve any NP problem in polynomial time. [paper]

Together with M. Ogihara, I also wrote a survey paper on sparse hard languages for P.

The existence of optimal proof systems for tautologies turns out to be related to the existence of complete languages for the class of sparse NP languages. In joint work with H. Buhrman, L. Fortnow, and S. Fenner, we investigate the existence of such complete languages. [paper]

A language is polynomial-time reducible to a sparse language iff it has polynomial-size circuits. In joint work with L. Antunes and L. Fortnow, we found an alternative characterization based on the notion of computational depth. [paper]

Autoreducibility

Autoreducibility defines the most general type of efficient reduction of a problem to itself. A problem is autoreducible if we can solve a given instance in polynomial time when allowed to ask an oracle the solution to any other instance.
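In symbols, a standard way to state this: a language A is autoreducible if there exists a polynomial-time oracle Turing machine M such that

    \forall x:\ M^{A}(x) = A(x) \quad \text{and} \quad M \text{ on input } x \text{ never queries } x \text{ itself}.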

Joint work with H. Buhrman, L. Fortnow, and L. Torenvliet established that large complexity classes like doubly exponential space have complete languages that are not autoreducible, whereas the complete languages of smaller classes like exponential time all share the property of autoreducibility. The specific results we obtained yielded alternate proofs of known separations. We also showed that settling the question for doubly exponential time either way would imply major new separations: It would either separate P from PSPACE, and NL from NP, or else the polynomial-time hierarchy from EXP. [paper]

Resource Bounded Measure

Resource bounded measure formalizes the notions of scarceness and abundance within complexity classes like EXP. It allows us to express statements like "only a few" or "most" languages in EXP have a certain property. We can use it as a tool for separating complexity classes, e.g., by showing that one class only has a few complete languages and the other one has a lot. This approach seems particularly suited for separating the complexity class BPP from EXP. We know that BPP is contained in EXP. The BPP versus EXP problem asks whether equality holds.

Within the formalism of resource bounded measure, most languages turn out to be hard for BPP under relatively powerful reductions. On the other hand, H. Buhrman and I proved that EXP and several other subclasses only have a few complete or hard languages under weaker reducibilities. A narrow gap between the power of the reductions remains, and bridging it would separate BPP from EXP and from certain other classes. We also showed that EXP only has a few complete languages under polynomial time reductions that make all their queries in a nonadaptive way unless the polynomial-time hierarchy collapses. [paper]

Another approach for settling the BPP versus EXP problem, similar in spirit to the probabilistic method of combinatorics, tries to show that BPP is a small subclass of EXP. I proved the logical completeness of this strategy, i.e., if BPP differs from EXP then it is a small subclass of EXP. As a by-product, I obtained the first nontrivial example of a class for which the equivalent of the 0-1 Law in resource bounded measure holds. [paper]

One can view resource bounded measure as a restriction of classical Lebesgue measure that preserves several nice properties like additivity and monotonicity. Whether invariance under permutations also carries over remains open. H. Buhrman, K. Regan, D. Sivakumar, M. Strauss, and I investigated the resource bounded equivalent of permutation invariance, and showed that it holds if strong pseudo-random generators exist. Autoreducibility plays a crucial role in this context. We argued that if resource bounded permutation invariance holds, then autoreducibility is rare within EXP. Using our results about the connection between autoreducibility and completeness, we showed that the latter implies that BPP differs from EXP - yet another way of tackling the BPP versus EXP problem. [paper]


Quantum Computing

In the last few decades, we have seen some surprising results on the power of a computer model that tries to exploit quantum physical phenomena to gain computational power. Shor developed a polynomial-time quantum algorithm for factoring and computing discrete logarithms. Grover showed that O(n^{1/2}) queries to a quantum oracle are sufficient to find an entry in an unsorted database with n elements.

T. Watson and I developed a quantum model of computation that is particularly suitable for the study of computations with simultaneous time and space bounds. We also established two time-space efficient simulations of quantum computations: (1) a time-space efficient version of the Solovay-Kitaev simulation of quantum computations by quantum computations with an arbitrary but fixed set of quantum gates, and (2) a time-space efficient simulation of quantum computations on classical randomized machines with unbounded error. The latter plays a critical role in the time-space lower bounds we obtained for quantum machines solving complete problems in the counting hierarchy. [paper]

I also worked on hierarchies within BQP, the class of all languages that have polynomial-time quantum algorithms with two-sided error.

With T. Hayes and S. Kutin, I developed some quantum black-box algorithms for computing the majority function that provably outperform any classical algorithm. [paper] Note that Grover's algorithm can be interpreted as a procedure for computing the OR function in a quantum black-box model.

I am very interested in the relationship between BQP and the polynomial-time hierarchy.


dieter@cs.wisc.edu