The 2025 FOCS Test of Time Awards, awarded annually, recognize papers published in the Proceedings of the Annual IEEE Symposium on Foundations of Computer Science. This is the seventh annual award. The 2025 Test of Time Awards target papers presented at the FOCS conferences of 1995, 2005, and 2015. Following guidance from the FOCS Steering Committee, the award committee, while focusing on the target years, will also consider nominations for exceptional papers from other years. (Please see https://tc.computer.org/tcmf/focs-test-time-award/ for more details, including award winners from previous years.)
This remarkable paper introduced a fundamental primitive in cryptography: how to query a database while fully preserving the privacy of the query. The work defined the notion of private information retrieval (PIR) and gave the first protocols that allow users to retrieve items from distributed databases without revealing which item is being retrieved, thereby founding a new subfield at the intersection of cryptography, information theory, and complexity theory. Its definitions and constructions have shaped decades of research, enabled practical variants, and remain a cornerstone of privacy-preserving data access.
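For intuition, the simplest way to see the primitive is a toy two-server scheme: the client sends one server a random subset of indices and the other server the same subset with the desired index flipped; XORing the two one-bit answers recovers the desired bit, while each (non-colluding) server sees only a uniformly random query. The Python sketch below illustrates only this functionality and privacy guarantee; it is not one of the paper's protocols, whose point is to achieve far lower communication.

```python
import secrets

def client_queries(n, i):
    """Pick a uniformly random subset S of {0,...,n-1}; query server 1 with S
    and server 2 with S symmetric-difference {i}. Each query on its own is a
    uniform subset, so neither server learns anything about i."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}  # flip membership of the desired index
    return s1, s2

def server_answer(db_bits, query):
    """A server returns the XOR of the requested database bits."""
    ans = 0
    for j in query:
        ans ^= db_bits[j]
    return ans

def reconstruct(a1, a2):
    """All bits except the i-th cancel, leaving the desired bit."""
    return a1 ^ a2

# Toy usage: retrieve bit 5 without either server learning the index.
db = [1, 0, 1, 1, 0, 1, 0, 0]
q1, q2 = client_queries(len(db), 5)
assert reconstruct(server_answer(db, q1), server_answer(db, q2)) == db[5]
```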
This paper introduced the adversarial multi-armed bandit problem as well as (what later became known as) the contextual bandit problem, deviating from the online decision-making tradition of making stochastic assumptions about the sequence of payoffs. The paper provided the first algorithms achieving sub-linear regret for these problems, along with a tight lower bound on the smallest attainable regret, introducing key algorithmic and lower-bound techniques that continue to inspire contemporary developments. By showing that robust performance guarantees are feasible even when payoffs are chosen by an adversary, the paper has had a profound influence on the theory and practice of algorithms in settings ranging from the benign to the worst-case, including reinforcement learning, online convex optimization, game theory, and multi-agent learning. Meanwhile, contextual bandits now constitute the algorithmic underpinnings of personalization in online content recommendations and display advertising, and are increasingly being applied in health care.
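A minimal sketch of an exponential-weights strategy in the spirit of the paper's adversarial-bandit algorithm is given below; the exploration parameter, reward range, and update rule shown here follow a standard textbook variant rather than the paper's exact formulation.

```python
import math
import random

def exp3(K, T, get_reward, gamma=0.1):
    """Exponential-weights strategy for adversarial bandits (Exp3-style).
    get_reward(t, arm) returns a reward in [0, 1] chosen by the environment,
    which may be fully adversarial."""
    weights = [1.0] * K
    total_reward = 0.0
    for t in range(T):
        w_sum = sum(weights)
        probs = [(1 - gamma) * w / w_sum + gamma / K for w in weights]
        arm = random.choices(range(K), weights=probs)[0]
        reward = get_reward(t, arm)
        total_reward += reward
        est = reward / probs[arm]          # importance-weighted estimate of the reward
        weights[arm] *= math.exp(gamma * est / K)
        m = max(weights)                   # rescale to avoid numerical overflow
        weights = [w / m for w in weights]
    return total_reward

# Toy adversary: arm 0 pays only in even rounds, arm 1 only in odd rounds.
payoff = lambda t, arm: 1.0 if (t + arm) % 2 == 0 else 0.0
print(exp3(K=2, T=1000, get_reward=payoff))
```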
This paper introduced a generalization of the classical online bipartite matching problem that models questions in online advertising. The algorithm it developed, as well as the notion of a trade-off–revealing LP used in its design, has had a significant impact on the theory and practice of electronic commerce.
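As a rough illustration of the kind of budget-aware allocation rule studied in this line of work, the sketch below assigns each arriving query to the bidder maximizing its (budget-capped) bid scaled by a discount that depends on the fraction of budget already spent; the specific discount function and data layout are illustrative assumptions, not taken from the paper.

```python
import math

def allocate(budgets, query_bids):
    """Budget-aware online allocation: each arriving query goes to the eligible
    bidder maximizing min(bid, remaining budget) * (1 - exp(spent_fraction - 1)),
    which discounts bidders who have already spent much of their budget."""
    spent = {a: 0.0 for a in budgets}
    revenue = 0.0
    for bids in query_bids:            # bids: {advertiser: bid for this query}
        best, best_score = None, 0.0
        for a, bid in bids.items():
            remaining = budgets[a] - spent[a]
            if remaining <= 0:
                continue
            frac = spent[a] / budgets[a]
            score = min(bid, remaining) * (1 - math.exp(frac - 1))
            if score > best_score:
                best, best_score = a, score
        if best is not None:
            charge = min(bids[best], budgets[best] - spent[best])
            spent[best] += charge
            revenue += charge
        # queries with no eligible bidder go unallocated
    return revenue, spent

# Toy stream: two advertisers with equal budgets, three queries.
budgets = {"a1": 10.0, "a2": 10.0}
queries = [{"a1": 4.0, "a2": 3.0}, {"a1": 4.0}, {"a1": 2.0, "a2": 5.0}]
print(allocate(budgets, queries))
```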
This paper solved an important question both in the theory of hardness of approximation and in the geometric theory of metric embeddings. The paper shows super-constant integrality gaps for the basic SDP relaxation of unique games, even when the relaxation is enhanced with triangle-inequality constraints. To date, this is still the strongest known program in the SDP or SoS hierarchy for which unique games are provably hard. On top of that, the result has an interesting geometric interpretation, showing that embeddings of “negative type” metrics into ℓ_1 must incur non-constant distortion, thus disproving a conjecture of Goemans and Linial.
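For reference, a metric d is of negative type if √d embeds isometrically into ℓ_2; the geometric consequence can be phrased as follows (a standard paraphrase, with the quantitative bound as commonly cited rather than quoted from the paper).

```latex
% A metric d on a set X is of negative type if sqrt(d) embeds isometrically
% into l_2, i.e., there exist vectors v_x with d(x,y) = ||v_x - v_y||^2.
% The Goemans--Linial conjecture asserted that every such metric embeds into
% l_1 with distortion bounded by a universal constant; the paper exhibits
% n-point negative-type metrics (X_n, d_n) whose l_1 distortion grows with n:
\[
  c_1(X_n, d_n) \;\ge\; \Omega\bigl((\log\log n)^{c}\bigr)
  \quad\text{for some constant } c > 0,
\]
% where c_1 denotes the least distortion of any embedding into l_1.
```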
This paper provided a breakthrough result in computational learning theory: an efficient learning algorithm for the class of halfspaces (linear separators) that could tolerate adversarial label noise, under reasonable distributional assumptions. Prior to this work, there were almost no positive results for learning interesting concept classes with noise, except in the most benign of noise models. Moreover, connections to cryptography suggested that efficiently learning halfspaces with adversarial label noise might be impossible in the distribution-free setting. Kalai, Klivans, Mansour, and Servedio overcame this barrier, showing that under a wide class of distributions (e.g., log-concave distributions), halfspaces can be learned efficiently even in the presence of adversarial label noise. The work contributed to a fundamental shift in the field’s perspective, leading to an outpouring of new positive results for learning geometric concepts in more challenging noise models and with relaxed distributional assumptions.
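The adversarial-label-noise (agnostic) guarantee referred to here takes the following standard form, stated schematically rather than as it appears in the paper: the learner must compete with the best halfspace up to an additive ε.

```latex
% Agnostic learning of halfspaces over a distribution D on R^n x {-1,+1}:
% with high probability the learner outputs a hypothesis h satisfying
\[
  \Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr]
  \;\le\;
  \min_{w \in \mathbb{R}^n}\;
  \Pr_{(x,y)\sim D}\bigl[\operatorname{sign}(\langle w, x\rangle)\neq y\bigr]
  \;+\;\varepsilon,
\]
% i.e., error at most opt + eps, where opt is the error of the best halfspace;
% no assumption is made on how the observed labels deviate from that halfspace.
```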
This paper made an important conceptual contribution by demonstrating that succinct public-key functional encryption suffices to construct indistinguishability obfuscation, which is widely viewed as the “universal cryptographic primitive,” capable of yielding nearly any cryptographic functionality. Its techniques shaped subsequent work, enabling simpler constructions and influencing a generation of follow-up research.
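For context, indistinguishability obfuscation requires that obfuscations of any two functionally equivalent circuits of the same size be computationally indistinguishable; a standard formulation (included for reference, not quoted from the paper) is:

```latex
% iO is an efficient algorithm that, on input a security parameter and a
% circuit C, outputs a circuit computing the same function as C, such that
% for all equal-size, functionally equivalent circuits C_0, C_1 and every
% efficient distinguisher A,
\[
  \Bigl|\,
  \Pr\bigl[A\bigl(i\mathcal{O}(1^{\lambda}, C_0)\bigr)=1\bigr]
  -
  \Pr\bigl[A\bigl(i\mathcal{O}(1^{\lambda}, C_1)\bigr)=1\bigr]
  \,\Bigr|
  \;\le\; \mathrm{negl}(\lambda).
\]
```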
This paper separates the logarithm of the partition number from the deterministic communication complexity of a function, resolving a long-standing open problem and, moreover, stimulating a flurry of work on lifting theorems of various kinds. Lifting, first introduced by Raz and McKenzie, is the idea of first proving a separation in the setting of query complexity and then “lifting” it to communication complexity.
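Schematically, a lifting theorem composes an outer function f with a small two-party gadget g and transfers query-complexity lower bounds to communication complexity; a typical deterministic statement (with the gadget and constants varying across works) reads:

```latex
% D^dt denotes deterministic decision-tree (query) complexity and D^cc
% deterministic communication complexity; g is a gadget on b-bit inputs
% (e.g., an index gadget with b = Theta(log n)).
\[
  \mathrm{D^{cc}}\!\bigl(f \circ g^{\,n}\bigr)
  \;=\;
  \Theta\bigl(\mathrm{D^{dt}}(f)\cdot b\bigr).
\]
```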
This paper showed how to solve convex programs in the cutting-plane framework using a (near-)linear number of calls to a separation oracle while performing a (near-)cubic number of arithmetic operations. This result both improved upon and unified prior methods for solving this classic problem, yielding faster algorithms for submodular minimization, matroid intersection, and other combinatorial optimization problems.
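To make the framework concrete, the sketch below runs a basic ellipsoid-style cutting-plane loop against a separation oracle; it illustrates only the interface (oracle calls returning violated halfspaces) and is far weaker than the paper's method, with all names and parameter choices here being illustrative.

```python
import numpy as np

def cutting_plane(oracle, n, R=10.0, eps=1e-6, max_iter=10_000):
    """Ellipsoid-style cutting-plane method for a feasibility problem:
    maintain an ellipsoid containing the feasible set; each separation-oracle
    call either certifies the current center as feasible or returns a violated
    halfspace a^T y <= b, which is used to shrink the ellipsoid."""
    x = np.zeros(n)                  # ellipsoid center
    A = (R ** 2) * np.eye(n)         # ellipsoid {y : (y - x)^T A^{-1} (y - x) <= 1}
    for _ in range(max_iter):
        cut = oracle(x)              # None if x is feasible, else (a, b) with a.x > b
        if cut is None:
            return x
        a, _b = cut
        Aa = A @ a
        denom = float(np.sqrt(a @ Aa))
        if denom < eps:
            break                    # ellipsoid is numerically flat: give up
        g = Aa / denom
        x = x - g / (n + 1)          # standard central-cut ellipsoid update
        A = (n * n / (n * n - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(g, g))
    return None

# Toy feasibility problem: find a point with x_i >= 1 for all i (n = 3).
def oracle(x):
    for i in range(len(x)):
        if x[i] < 1.0:
            a = np.zeros(len(x)); a[i] = -1.0   # violated constraint -x_i <= -1
            return a, -1.0
    return None

print(cutting_plane(oracle, n=3))
```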