Speaker: Roger Myerson, University of Chicago
Title: Dual Reduction and Elementary Games with Senders and Receivers
Abstract: Consider the incentive constraints that define the incentive-compatible mechanisms of a senders-receivers game. Duals of these linear constraints form Markov chains on the senders’ type sets and the receivers’ action sets. The minimal nonempty absorbing sets of these Markov chains can be interpreted as the types and actions in a dual-reduced game. Any incentive-compatible mechanism of a dual-reduced game corresponds to an equivalent incentive-compatible mechanism for the original game. We say that a game is elementary if all nontrivial incentive constraints can be satisfied as strict inequalities in incentive-compatible mechanisms. Any senders-receivers game can be reduced to an elementary game by iterative dual reduction. The set of games with transitive dual solutions is dense.
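As a rough illustration of the Markov-chain step (not Myerson's construction itself), the sketch below computes the minimal nonempty absorbing sets, i.e. the closed communicating classes, of a finite Markov chain given by a transition matrix. In dual reduction, such classes, computed on each sender's type set and each receiver's action set, become the types and actions of the dual-reduced game. The transition matrix in the example is a hypothetical stand-in.

    # Minimal sketch: minimal nonempty absorbing sets of a finite Markov chain.
    import numpy as np
    import networkx as nx

    def minimal_absorbing_sets(P):
        """P: row-stochastic numpy array; returns the closed communicating classes."""
        n = P.shape[0]
        G = nx.DiGraph()
        G.add_nodes_from(range(n))
        G.add_edges_from((i, j) for i in range(n) for j in range(n) if P[i, j] > 0)
        closed = []
        for scc in nx.strongly_connected_components(G):
            # a communicating class is absorbing iff no transition leaves it
            if all(j in scc for i in scc for j in G.successors(i)):
                closed.append(sorted(scc))
        return closed

    # Hypothetical example: states {0, 1} form a closed class; state 2 is transient.
    P = np.array([[0.5, 0.5, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.3, 0.3, 0.4]])
    print(minimal_absorbing_sets(P))  # [[0, 1]]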
Speaker: Irit Dinur, Weizmann Institute
Title: Expanders and PCPs: Emergence from Local to Global
Abstract:
Probabilistically checkable proofs (PCPs) offer a way of verifying NP proofs with high confidence using small, independent, local checks, thereby demonstrating a form of ‘computational emergence’: the global correctness of a PCP proof emerges from these local checks. While each check only reveals a tiny piece of the proof, the design of the PCP ensures that these pieces collectively reflect the correctness of the whole. PCPs are thus NP proofs cast into a distributed form, where local checks collectively verify global correctness.
Expander graphs have been used within PCP constructions, naturally yielding fault tolerance at low overhead. However, true local-to-global emergent behavior is exhibited far more strongly by high-dimensional expanders than by classical expanders.
In this talk, we will describe several phenomena of local-to-global emergence in high-dimensional expanders, and highlight their applications to error-correcting codes and PCPs.
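As a toy illustration of local checking (not a PCP construction, and far weaker), the hypothetical sketch below verifies a claimed proper 3-coloring of a graph by repeatedly inspecting one random edge: each check sees only two vertices, yet a coloring that violates many edges is rejected with high probability. The graph and colorings are made-up examples.

    # Toy sketch: random local checks on edges of a hypothetical graph.
    import random

    def local_check(edge, coloring):
        """Accept iff the two endpoints of a single edge receive different colors."""
        u, v = edge
        return coloring[u] != coloring[v]

    def verify(edges, coloring, num_checks=20):
        """Run independent local checks on random edges; accept if all pass."""
        return all(local_check(random.choice(edges), coloring) for _ in range(num_checks))

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
    good = {0: 'r', 1: 'g', 2: 'b', 3: 'r'}   # proper coloring: every check passes
    bad = {0: 'r', 1: 'r', 2: 'r', 3: 'r'}    # every edge violated: checks reject
    print(verify(edges, good), verify(edges, bad))  # True False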
Speaker: Christos Papadimitriou, Columbia University
Title: Computing with Dynamical Systems
Abstract: TBA
Speaker: Rajeev Alur, University of Pennsylvania
Title: Specification-guided Reinforcement Learning
Abstract: Reinforcement learning (RL) aims to learn optimal policies for accomplishing tasks in unknown environments, and has recently been successful in robotics applications. Traditionally, the task is encoded as a local, state-based reward function, which becomes cumbersome for long-horizon goals. An appealing alternative is to encode the task as a high-level logical specification, opening up the direction of RL from logical specifications. In this talk, I will review both theoretical results and practical tools in this emerging area.
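As a minimal, hypothetical sketch of the contrast in the abstract (not the framework or tools from the talk), the following encodes a two-stage task, "reach region A, then region B", as a small specification monitor; the learner's state is the pair (observation, monitor stage), and reward is given only for progress along the specification rather than via a hand-tuned long-horizon reward. The region predicates and toy trajectory are assumptions for illustration.

    # Hypothetical sketch: a two-stage specification monitor used as a reward signal.
    def spec_monitor(stage, obs, in_A, in_B):
        """Advance the spec automaton: stage 0 = 'reach A', 1 = 'reach B', 2 = done."""
        if stage == 0 and in_A(obs):
            return 1
        if stage == 1 and in_B(obs):
            return 2
        return stage

    def spec_reward(stage, next_stage):
        """Reward only for progress along the specification, not for raw states."""
        return 1.0 if next_stage > stage else 0.0

    # Usage inside an RL loop: the learner treats (obs, stage) as its state, so a
    # standard RL algorithm can run on the product environment.
    in_A = lambda obs: obs > 5       # hypothetical region predicates
    in_B = lambda obs: obs < -5
    stage = 0
    for obs in [0, 3, 6, 2, -6]:     # a toy trajectory
        nxt = spec_monitor(stage, obs, in_A, in_B)
        r = spec_reward(stage, nxt)
        stage = nxt
    print(stage)  # 2: the trajectory satisfies 'reach A, then B'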