The Fine-Grained and Parallel Complexity of Andersen's Pointer Analysis
Pointer analysis is one of the fundamental problems in static program analysis. Given a set of pointers, the task is to produce a useful over-approximation of the memory locations that each pointer may point to at runtime. The most common formulation is Andersen’s Pointer Analysis (APA), defined as an inclusion-based set of m pointer constraints over a set of n pointers. Scalability is extremely important, as points-to information is a prerequisite to many other components in the static-analysis pipeline. Existing algorithms solve APA in O(n^2 · m) time, and it has long been conjectured that the problem admits no truly sub-cubic algorithm, though a proof has so far remained elusive. It is also well known that APA can be solved in O(n^2) time under certain sparsity conditions that hold naturally in some settings. Beyond these simple bounds, the complexity of the problem has remained poorly understood.
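To make the constraint formulation concrete, the following is a minimal sketch (ours, not the paper’s) of the textbook inclusion-based worklist solver, assuming the four classic Andersen constraint forms a = &b, a = b, a = *b, and *a = b; the function and constraint-tuple names are illustrative, not an API from the paper.

```python
from collections import defaultdict, deque

def solve_apa(constraints):
    """Naive worklist solver for Andersen-style inclusion constraints.

    Constraints are tuples (illustrative encoding):
      ("addr", a, b)   for a = &b   (a may point to b)
      ("copy", a, b)   for a = b    (pts(a) includes pts(b))
      ("load", a, b)   for a = *b
      ("store", a, b)  for *a = b
    """
    pts = defaultdict(set)     # pointer -> set of pointed-to locations
    copies = defaultdict(set)  # src -> {dst}: pts(dst) includes pts(src)
    loads = defaultdict(set)   # b -> {a} for each constraint a = *b
    stores = defaultdict(set)  # a -> {b} for each constraint *a = b
    work = deque()

    def add_edge(src, dst):
        # Record an inclusion edge; schedule propagation of known facts.
        if dst not in copies[src]:
            copies[src].add(dst)
            if pts[src]:
                work.append(src)

    for kind, a, b in constraints:
        if kind == "addr":
            pts[a].add(b)
            work.append(a)
        elif kind == "copy":
            add_edge(b, a)
        elif kind == "load":
            loads[b].add(a)
            work.append(b)
        else:  # "store"
            stores[a].add(b)
            work.append(a)

    # Iterate to a fixpoint; this naive scheme is O(n^2 · m) in the worst case.
    while work:
        p = work.popleft()
        for t in set(pts[p]):
            for a in loads[p]:   # a = *p and p -> t  gives  pts(a) ⊇ pts(t)
                add_edge(t, a)
            for b in stores[p]:  # *p = b and p -> t  gives  pts(t) ⊇ pts(b)
                add_edge(b, t)
        for dst in copies[p]:
            if not pts[p] <= pts[dst]:
                pts[dst] |= pts[p]
                work.append(dst)
    return pts
```

For example, on the constraints of the program a = &x; b = &a; c = *b, the solver derives that c may point to x.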
In this work we draw a rich fine-grained and parallel complexity landscape of APA, and present both upper and lower bounds. First, we establish an O(n^3) upper bound for general APA, improving over O(n^2 · m) since n = O(m). Second, we show that even on-demand APA (“may a specific pointer a point to a specific location b?”) has an Ω(n^3) (combinatorial) lower bound under standard complexity-theoretic hypotheses. This formally establishes the long-conjectured “cubic bottleneck” of APA, and shows that our O(n^3)-time algorithm is optimal. Third, we show that under mild restrictions, APA is solvable in Õ(n^ω) time, where ω < 2.373 is the matrix-multiplication exponent. It is believed that ω = 2 + o(1), in which case this bound becomes quadratic. Fourth, we show that, even under such restrictions, the on-demand problem has an Ω(n^2) lower bound under standard complexity-theoretic hypotheses, and hence our algorithm is optimal when ω = 2 + o(1). Fifth, we study the parallelizability of APA and establish lower and upper bounds: (i) in general, the problem is P-complete, and hence unlikely to be parallelizable, whereas (ii) under mild restrictions, the problem is in NC, and hence parallelizable. Our theoretical treatment formalizes several insights that can lead to practical improvements in the future.
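As a rough illustration of why matrix multiplication is relevant here (our sketch, not the paper’s construction or its notion of “mild restrictions”): in the special case where only address-of and copy constraints appear, solving APA amounts to Boolean transitive closure of the copy graph, which fast matrix multiplication computes in Õ(n^ω) time. The names below are illustrative, and numpy’s product merely stands in for a fast Boolean matrix-multiplication routine.

```python
import numpy as np

def copy_only_apa(n, addr, copy_edges):
    """Transitive-closure view of APA restricted to a = &b and a = b constraints.

    addr[i, j] = True if there is a constraint i = &j.
    copy_edges[i, j] = True if there is a constraint i = j (pts(i) ⊇ pts(j)).
    Each product below is one of the O(log n) squaring rounds of Boolean
    transitive closure; with a fast multiplication routine the total cost
    is Õ(n^ω), though numpy's product is not itself sub-cubic.
    """
    reach = copy_edges | np.eye(n, dtype=bool)
    for _ in range(max(n - 1, 1).bit_length()):
        step = (reach.astype(np.int64) @ reach.astype(np.int64)) > 0
        reach = reach | step
    # pts(i) is the union of the addr-sets of all pointers reachable from i.
    return (reach.astype(np.int64) @ addr.astype(np.int64)) > 0
```

Once loads and stores enter the picture, new inclusion edges depend on the points-to sets being computed, which is precisely what makes the general problem harder and motivates the paper’s restricted Õ(n^ω) algorithm and matching Ω(n^2) lower bound.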