Mathematical logic was developed in an effort to provide formal foundations for mathematics. In this quest, which ultimately failed, logic begat computer science, yielding both computers and theoretical computer science. But logic then proved a disappointment as a foundation for computer science, since almost all decision problems in logic are either unsolvable or intractable.

Starting in the mid-1970s, however, there has been a quiet revolution in logic in computer science, and problems that are theoretically undecidable or intractable were shown to be quite feasible in practice. This talk traces the rise, fall, and rise of logic in computer science, and describes several modern applications of logic to computing, including databases, hardware design, and software engineering.

We show a directed and robust analogue of a Boolean isoperimetric-type theorem of Talagrand. As an application, we give a monotonicity testing algorithm that makes O(√n/ε²) non-adaptive queries to a function f, always accepts a monotone function, and rejects a function that is ε-far from being monotone with constant probability.
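To make the testing model concrete, here is a minimal sketch of the classical non-adaptive *edge tester* for monotonicity over the Boolean hypercube. It makes O(n/ε) queries, not the O(√n/ε²) of the algorithm above (which relies on the directed isoperimetric theorem and directed random walks); the function name and trial count are illustrative choices.

```python
import random

def edge_tester(f, n, eps, trials=None):
    """Non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.

    Samples random hypercube edges (x, y), where x and y differ in one
    coordinate i with x_i = 0 and y_i = 1, and rejects if f(x) > f(y).
    This simple tester needs O(n/eps) queries; it is NOT the talk's
    O(sqrt(n)/eps^2) algorithm, just an illustration of the model.
    """
    if trials is None:
        # Enough random edges to catch an eps-far function w.h.p.
        trials = int(4 * n / eps)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # violated edge found: f is not monotone
    return True  # a monotone function is always accepted

# Majority is monotone, so it is always accepted; the "anti-dictator"
# f(x) = 1 - x_0 violates every edge in coordinate 0 and is rejected w.h.p.
majority = lambda x: int(sum(x) > len(x) // 2)
anti_dictator = lambda x: 1 - x[0]
```

Note the tester has one-sided error, matching the guarantee in the abstract: monotone functions are never rejected.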

Joint work with Dor Minzer (Tel Aviv University) and Subhash Khot (NYU).

The evaluation of algorithms based on worst-case analysis has often been criticized as too pessimistic and as failing to provide meaningful comparisons. Instance optimality is a proposed way around this: an algorithm is "instance optimal" if, for every input, its computational cost matches the cost of the best algorithm on that particular input (to within some constant factor).

To describe the setting properly, we need to specify the set of algorithms and the corresponding costs. In many scenarios there is no hope of achieving such performance, but since the turn of the century a number of domains have been suggested in which instance-optimal algorithms exist, for instance in databases, computational geometry, and data structures. In this talk I will describe the notion, outline a few such examples, and discuss the possibility of obtaining a general theory of such algorithms.
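The flavor of instance optimality can be illustrated with adaptive set intersection, a classic example in this literature. The sketch below (an illustrative implementation, not any specific published algorithm) intersects two sorted lists with galloping search: on "easy" instances (e.g., nearly disjoint ranges) it skips over large blocks in logarithmically many steps, whereas a fixed linear merge always pays for every element.

```python
from bisect import bisect_left

def intersect_sorted(a, b):
    """Intersect two sorted lists using galloping (doubling) search.

    The cost adapts to the instance: when long runs of one list fall
    between consecutive elements of the other, they are skipped in
    O(log run-length) steps instead of being scanned one by one.
    """
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            # Gallop forward in a to bracket the first element >= b[j],
            # then binary-search inside the bracketed range.
            step = 1
            while i + step < len(a) and a[i + step] < b[j]:
                step *= 2
            i = bisect_left(a, b[j], i, min(i + step, len(a)))
        else:
            step = 1
            while j + step < len(b) and b[j + step] < a[i]:
                step *= 2
            j = bisect_left(b, a[i], j, min(j + step, len(b)))
    return out
```

On the instance `a = [1..1000]`, `b = [1000]`, the galloping version performs O(log 1000) comparisons, while a plain merge performs about 1000; this per-instance accounting is exactly what the instance-optimality framework formalizes.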

Decisions taken by an agent in a multi-agent system depend on the agent's local information. A new formulation of the connection between knowledge and action in multi-agent systems allows new insights into the design of such systems. This talk will discuss how a theory of what agents know about the world and about knowledge of others can help in the design and analysis of distributed computer systems.

A great deal of effort has been made to reduce the risk of spurious scientific discoveries, from the use of holdout sets and sophisticated cross-validation techniques to procedures for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of science: the theory mostly assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered. Science, by contrast, is by definition an adaptive process, in which data are shared and re-used, and hypotheses and new studies are generated on the basis of data exploration and previous outcomes.

Surprisingly, the challenges of adaptivity can be addressed using insights from differential privacy, a field of study built around a definition of privacy tailored to statistical data analysis. As a corollary we show how to safely reuse a holdout set a great many times without undermining its power of "correctness protection," even when hypotheses and computations are chosen adaptively. Armed with this technique, the analyst is free to explore the data ad libitum, generating and evaluating hypotheses, verifying results on the holdout, and backtracking as needed.
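The mechanism behind the reusable holdout can be sketched as follows. This is a simplified illustration of the "Thresholdout" idea of Dwork et al.: answer each query from the training set unless the training and holdout estimates disagree noticeably, and in that case release only a noised holdout value. The helper names and parameter values here are illustrative, not the paper's exact choices.

```python
import random
import statistics

def laplace(scale):
    """Sample Laplace(0, scale) noise as a random-sign exponential."""
    return random.choice([-1, 1]) * random.expovariate(1 / scale)

def make_thresholdout(train, holdout, threshold=0.04, scale=0.01):
    """Simplified sketch of a Thresholdout-style reusable holdout.

    Each query asks for the mean of a function phi over the data.  If
    the training and holdout means agree to within a (noisy) threshold,
    the training value is returned and the holdout reveals nothing new;
    otherwise only a noised holdout mean is released.
    """
    def query(phi):
        t = statistics.mean(phi(x) for x in train)
        h = statistics.mean(phi(x) for x in holdout)
        if abs(t - h) > threshold + laplace(scale):
            # Disagreement suggests the analyst is overfitting the
            # training set, so reveal only a noisy holdout value.
            return h + laplace(scale)
        return t  # agreement: answering from the training set is safe
    return query
```

Because each answer either ignores the holdout or perturbs it with noise, the holdout's information leaks out very slowly, which is what allows it to be reused across many adaptively chosen queries.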

Joint work with Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toni Pitassi and Aaron Roth.

King Solomon, the wisest of all men, is being challenged by a lowly mouse to play a prediction game. Who will have the upper hand?

Evolutionary distances between pairs of aligned DNA sequences (representing species or genes) are very useful for reconstructing phylogenetic (evolutionary) trees. Obtaining good estimates of these distances is critical for accurate tree reconstruction. Traditionally, evolutionary distances are identified with evolutionary time, the expected number of substitutions (mutations) per site, which is estimated from the aligned sequences under the assumed evolutionary model.

When different types of substitutions (e.g., transitions and transversions) occur at different rates, estimates of their counts have different statistical properties. Due to the finite length of the aligned sequences, it is hard to estimate the expected number of fast substitution types when the sequences are (time-wise) far apart, a phenomenon known as "site saturation". It is similarly hard to estimate the expected number of slow substitution types when the sequences are close. This suggests that for estimating long distances we should use a distance function based on slow mutation types, and for estimating small distances we should use a distance function based on fast mutation types. This is what the adaptive distances method aims at.
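The saturation phenomenon is easy to see numerically with the simplest substitution model, Jukes-Cantor, whose distance formula is d = -(3/4) ln(1 - 4p/3) for an observed fraction p of differing sites. The code below is a small illustration of this standard formula, not the adaptive distances method itself.

```python
import math

def jc_distance(p):
    """Jukes-Cantor distance: expected substitutions per site, given the
    observed fraction p of differing sites between two aligned sequences.

    As p approaches its saturation value 3/4, the estimate diverges and
    becomes extremely sensitive to p, illustrating why fast-mutating
    substitution types are unreliable for estimating large distances.
    """
    if p >= 0.75:
        return math.inf  # saturated: the distance cannot be estimated
    return -0.75 * math.log(1 - 4 * p / 3)

# For small p the distance is close to p itself (fast types are
# informative); near p = 0.75 the estimate blows up ("site saturation").
```

A tiny change in the observed p near saturation (say, from 0.73 to 0.74) changes the estimated distance enormously, which is why the adaptive distances approach switches to slow substitution types for distant sequence pairs.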

I will present the algebraic properties of the evolutionary models which enable the use of adaptive distances, and then report theoretical and experimental results which demonstrate the advantages of this method. Time permitting, some related extensions and applications of this method will be presented.

The talk is self-contained, and does not assume any prior knowledge of evolutionary models or phylogenetic reconstruction.

Based on papers with I. Gronau, I. Yavneh, D. Doerr and Y. Damti.

A video of Prof. Shlomo Moran's presentation is available.