Average-Case Fine-Grained Hardness, Part I

I recently finished my qualifying exam, for which I surveyed worst-case to average-case reductions for problems in NP. As part of the survey, I reviewed recent results on average-case hardness in P (a.k.a. average-case fine-grained hardness). I would like to give an overview of some of these results in a series of blog posts, starting with some motivation.

Why do we care about average-case fine-grained hardness?

(i) We hope to explain why we struggle to find faster algorithms for problems in P, such as those arising in DNA sequencing, even on large random datasets.

(ii) Average-case hard problems in P are useful for cryptography. For example, constructing one-way functions (i.e., functions that are easy to compute but hard to invert) based on worst-case complexity assumptions such as NP \not\subseteq BPP has been a long-standing open question. In this context, “easy” means polynomial-time computable. If we instead take “easy” to mean computable in some fixed polynomial time, say O(n^{100}), then a natural alternative question is whether we can construct such one-way functions based on worst-case fine-grained complexity assumptions. Another example is proof of work [DN92]. When a miner tries to mine a cryptocurrency, the miner is asked to solve a random puzzle that is average-case hard but still tractable, and then needs to prove, through an efficient protocol, that they have indeed solved the puzzle. An interesting and more recent follow-up is proof of useful work [BRSV17b], which proposes that instead of wasting computing power on a random puzzle that comes from nowhere, we can first reduce a computational task of practical interest to multiple random puzzles and then ask the miner to solve those puzzles.

The most common approach for proving average-case fine-grained hardness is arguably the worst-case to average-case reduction, i.e., reducing an arbitrary instance to a number of random instances whose distribution is polynomial-time samplable. Before I give concrete examples, I want to describe a general recipe for designing such worst-case to average-case reductions. (Some readers might notice that step 3 of the recipe is the same as the argument for proving that the permanent is random self-reducible [L91], which essentially uses local decoding of Reed-Muller codes.)

General recipe for worst-case to average-case reductions:

  1. Choose our favorite hard problem L:\{0,1\}^n\to\mathbf{Z}_{\ge 0} in P.
  2. Construct a low-degree (degree-d) polynomial f_{L} on \mathbf{F}_{p}^n such that f_{L}(x)=L(x) for all x\in\{0,1\}^n.
  3. To solve L on a worst-case input x\in\{0,1\}^n: sample a uniformly random y\in\mathbf{F}_{p}^n, compute f_{L}(x+t_1 y),\dots,f_{L}(x+t_{d+1} y) for distinct nonzero t_1,\dots,t_{d+1} using the average-case solver (note that each x+t_{i} y is uniformly random), interpolate the univariate polynomial g(t):=f_{L}(x+t y) from these points, and output g(0), which equals f_{L}(x)=L(x). (Interpolation can be replaced by decoding algorithms for Reed-Solomon codes to tolerate larger errors of the average-case solver.) A small sketch of this step appears right after the list.
  4. (The above steps already show that evaluating f_{L} on d+1 uniformly random inputs is as hard as solving L on a worst-case input.) If we want to show that L itself is average-case fine-grained hard, it suffices to give a reduction from computing f_{L}(x) for x\in\mathbf{F}_{p}^n back to solving L.
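
To make step 3 concrete, here is a minimal Python sketch of the interpolation argument. The names f_L and avg_solver and the calling convention are placeholders of mine, not from any of the cited papers; the sketch also assumes p > d+1 and, for simplicity, that the average-case solver answers correctly on the d+1 random points it is given.

```python
import random

def worst_to_average(x, d, p, avg_solver):
    """Recover f_L(x) for a worst-case x in {0,1}^n from d+1 calls to an
    average-case evaluator of f_L at uniformly random points of F_p^n.
    Assumes p > d+1 and that avg_solver answers these calls correctly."""
    n = len(x)
    # Step 3: pick a uniformly random direction y in F_p^n.
    y = [random.randrange(p) for _ in range(n)]
    # Evaluate f_L on the line x + t*y at distinct nonzero t = 1, ..., d+1;
    # each such point is itself uniformly distributed over F_p^n.
    ts = list(range(1, d + 2))
    vals = [avg_solver([(xi + t * yi) % p for xi, yi in zip(x, y)]) % p
            for t in ts]
    # Lagrange-interpolate the univariate polynomial g(t) = f_L(x + t*y)
    # (degree <= d) at the point t = 0, since g(0) = f_L(x) = L(x).
    g0 = 0
    for i, ti in enumerate(ts):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if j != i:
                num = num * (0 - tj) % p
                den = den * (ti - tj) % p
        g0 = (g0 + vals[i] * num * pow(den, -1, p)) % p
    return g0
```

If the solver errs on a small constant fraction of inputs, one can query more points on the line and replace plain interpolation with Reed-Solomon decoding, exactly as noted in step 3.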

Notice that the above general recipe reduces a worst-case instance to d+1 random instances in step 3, and thus we want d to be small (say n^{o(1)}) so that it does not blow up the total runtime significantly. Typically, step 4 also blows up the runtime, and sometimes the blow-up depends on d. For all the problems (or techniques) in this series, I will explicitly quantify the runtime blow-up in step 4 (if there is any) and explain how small we want d to be (if the blow-up depends on d).

Now let us go through a concrete example. Consider one of the flagship problems in fine-grained complexity, the orthogonal vectors problem (OV): given X=\{x_1,\dots,x_n\} and Y=\{y_1,\dots,y_n\}, where each x_i,y_i\in\{0,1\}^{d'} for d'\in\omega(\log n)\cap n^{o(1)}, decide whether there exist x_i\in X, y_j\in Y such that \langle x_i,y_j\rangle=0. It is known that OV has no O(n^{2-\varepsilon})-time algorithm for any constant \varepsilon>0 assuming the strong exponential time hypothesis (SETH) [W05], and there is a generalization of OV called k-OV that has no O(n^{k-\varepsilon})-time algorithm under the same assumption [W18].
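
As a baseline, the trivial algorithm simply checks all n^2 pairs in O(n^2 d') time; the statements above say that, under SETH, we essentially cannot beat it. A one-line sketch:

```python
def has_orthogonal_pair(X, Y):
    """Naive O(n^2 d') check: X, Y are lists of 0/1 vectors of dimension d'."""
    return any(all(xt * yt == 0 for xt, yt in zip(x, y)) for x in X for y in Y)
```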

Polynomial evaluation. Given an OV instance X,Y with d'=n^{o(1)}, we construct a degree-2d' polynomial f_{\textrm{OV}}(X,Y):=\sum_{i,j\in[n]}\prod_{t\in[d']}(1-x_{i,t}y_{j,t}) over \mathbf{F}_{p}^{2nd'} for p>n^2, where x_{i,t} denotes the t-th coordinate of x_i\in X and y_{j,t} the t-th coordinate of y_j\in Y. Observe that f_{\textrm{OV}} simply enumerates all pairs of vectors and counts the number of orthogonal pairs in the OV instance, and counting is obviously at least as hard as decision. Using the aforementioned general recipe, it follows immediately that evaluating f_{\textrm{OV}} requires n^{2-o(1)} time on average assuming a randomized version of SETH. This result was shown in [BRSV17a]; analogously, they constructed a polynomial for k-OV, which implies an average-case time hierarchy assuming randomized SETH.
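
For concreteness, here is a minimal sketch of f_{\textrm{OV}} (the function name is mine): on a 0/1 instance with p>n^2 it returns exactly the number of orthogonal pairs, and on arbitrary points of \mathbf{F}_p^{2nd'} it is the degree-2d' extension that the recipe asks us to evaluate at random inputs.

```python
def f_OV(X, Y, p):
    """Evaluate f_OV over F_p; X, Y are lists of n vectors with d' entries each
    (0/1 entries for a genuine OV instance, arbitrary field elements otherwise)."""
    total = 0
    for x in X:
        for y in Y:
            term = 1
            for xt, yt in zip(x, y):
                # each factor (1 - x_{i,t} * y_{j,t}) has degree 2
                term = term * (1 - xt * yt) % p
            total = (total + term) % p
    return total
```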

However, the polynomial evaluation problem is algebraic. Next, let us consider a natural combinatorial problem: counting t-cliques in an n-vertex graph (for simplicity, think of t as a large constant). This problem has worst-case complexity n^{\Theta(t)} assuming ETH [CHKX06].

Counting t-cliques. It was first shown in [GR18] that there is an \widetilde{O}(n^2)-time reduction from counting t-cliques in an arbitrary graph to counting t-cliques, with error probability <\frac{1}{4}, on a polynomial-time samplable random graph. The proof also follows our general recipe. First, we construct a degree-\binom{t}{2} polynomial f_{t\textrm{-clique}}(X):=\sum_{\textrm{size-}t\, T\subseteq [n]}\prod_{i<j\in T} X_{i,j} over \mathbf{F}_{p}^{n^2} for p>n^t, where X is the adjacency matrix. Observe that f_{t\textrm{-clique}} counts t-cliques by enumerating each size-t subset of vertices and checking whether it forms a t-clique. It remains to work out step 4 of the general recipe. This was done in [GR18] via a gadget reduction, running in \widetilde{O}(p^{t^2}n) time, from evaluating f_{t\textrm{-clique}} on Y\in\mathbf{F}_{p}^{n^2} to counting t-cliques in a graph on \widetilde{O}(p^{t^2}n) vertices.
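
For concreteness, here is a brute-force sketch of f_{t\textrm{-clique}} (names are mine); it runs in roughly n^t time, so it only serves to pin down the polynomial, not to compute it efficiently. On a 0/1 adjacency matrix with p>n^t it returns the number of t-cliques.

```python
from itertools import combinations

def f_t_clique(X, t, p):
    """Evaluate f_{t-clique} over F_p; X is an n x n symmetric matrix with
    entries in F_p (0/1 entries for a genuine adjacency matrix)."""
    n = len(X)
    total = 0
    for T in combinations(range(n), t):       # every size-t vertex subset
        term = 1
        for i, j in combinations(T, 2):       # product over pairs i < j in T
            term = term * X[i][j] % p
        total = (total + term) % p
    return total
```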

Although this gadget reduction is nice, I will not explain it here, because later works [BBB19, DLW20] show that if the polynomial constructed in step 2 has certain structure, then there is a general technique for reducing the evaluation of this polynomial back to the original problem (at the cost of requiring a smaller error probability from the average-case solver), which I will discuss in the next post. Finally, let me point out an issue in the previous paragraph: p is too large for the gadget reduction to be useful! We need p>n^t (note that n^t is a trivial upper bound on the number of t-cliques) so that f_{t\textrm{-clique}} indeed outputs the number of t-cliques, but the gadget reduction takes \widetilde{O}(p^{t^2}n) time, and moreover, we do not know how to find a prime >n^t in \widetilde{O}(n^2) time. This issue was handled in [GR18] using the Chinese remainder theorem. Specifically, we choose O(t\log n) distinct primes p_i, each upper bounded by O(t\log n), whose product is >n^t (the existence of such primes follows from the asymptotic distribution of the primes). Then we apply the whole reduction described so far to evaluate f_{t\textrm{-clique}}(X)\bmod p_i for each p_i, and finally use the Chinese remainder theorem to recover f_{t\textrm{-clique}}(X). (We can use Chinese remaindering with errors [GRS99] to handle a larger error probability.)
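
The final CRT step can be sketched as follows; this assumes the residues f_{t\textrm{-clique}}(X)\bmod p_i have already been obtained via the reduction above with no errors (i.e., ignoring the Chinese-remaindering-with-errors variant [GRS99]).

```python
def crt_combine(residues, primes):
    """Recover the t-clique count from its residues modulo pairwise distinct
    primes whose product exceeds n^t (a trivial bound on the count)."""
    M = 1
    for p in primes:
        M *= p
    total = 0
    for r, p in zip(residues, primes):
        Mi = M // p
        total += r * Mi * pow(Mi, -1, p)   # pow(Mi, -1, p) = inverse of Mi mod p
    return total % M
```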

Hopefully, the above examples give you a flavor of worst-case to average-case reductions for fine-grained hard problems. As promised, in the next post I will discuss the technique of [BBB19, DLW20], which automates step 4 of the general recipe by requiring additional structural properties of the polynomial constructed in step 2.

Acknowledgements. I would like to thank my quals committee, Aviad Rubinstein, Tselil Schramm, and Li-Yang Tan, for their valuable feedback on my quals talk.


1 Comment on Average-Case Fine-Grained Hardness, Part I

  1. Looks very interesting, but quite involved. Simple question: Is the average case hardness for k-XOR known? Or are there bounds? I am not an expert, so I am thinking of memory and time complexity, assuming that over space {0,1}^d word operations such as XOR have constant complexity.


