Instance-Wise Hardness and Refutation versus Derandomization for Arthur-Merlin Protocols


Abstract

A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits.

Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting-sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. Follow-up works managed to obtain full equivalences in the BPP setting by exploiting a compression property of classical pseudo-random generator constructions. In particular, Chen, Tell and Williams (FOCS'23) showed that derandomization of BPP is equivalent to constructive lower bounds against algorithms that go through a compression phase.

In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function $f$ computable by a nondeterministic algorithm that runs in time $n^a$. We show that if every Arthur-Merlin protocol that runs in time $n^c$ for $c = O(\log^2 a)$ can only compute $f$ correctly on finitely many inputs, then AM is in NP. We also obtain equivalences between constructive lower bounds against Arthur-Merlin protocols that go through a compression phase and derandomization of AM via targeted generators. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations.
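Stated schematically, the example result above reads as follows (this is only a restatement of the preceding sentence, with $f$ length-preserving and computable in nondeterministic time $n^a$, and $\Pi$ a placeholder symbol for an Arthur-Merlin protocol):

$$\Bigl(\forall\, \text{AM protocols } \Pi \text{ running in time } n^{c},\ c = O(\log^2 a):\ \Pi \text{ computes } f \text{ correctly on only finitely many inputs}\Bigr)\ \Longrightarrow\ \mathsf{AM} \subseteq \mathsf{NP}.$$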

As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM.
