December 31, Thursday
13:00 – 14:00
Probability Estimation over Large Alphabets
Computer Science seminar
Lecturer : Prof. Alon Orlitsky
Affiliation : University of California, San Diego, ECE & CSE
Location : 202/37
Host : Prof. Shlomi Dolev
Many applications call for estimating probabilities of rare,
even previously unseen, events. We briefly describe the problem's
theory, applications to classification and data compression,
relation to works by Fisher, Shakespeare, Laplace, Good, Turing,
Hardy, Ramanujan, and Shannon, and recent constructions of
asymptotically optimal estimators. The talk is self-contained and
based on work with P. Santhanam, K. Viswanathan, J. Zhang, and others.
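As a small illustrative sketch of the classical Good-Turing idea the abstract alludes to (a generic textbook estimator, not the speaker's constructions): the total probability of all symbols never seen in a sample can be estimated by N1/N, the fraction of the sample made up of symbols seen exactly once.

```python
from collections import Counter

def missing_mass(sample):
    """Good-Turing estimate of the total probability of unseen symbols:
    N1 / N, where N1 counts symbols appearing exactly once in the sample."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

print(missing_mass("abracadabra"))  # 'c' and 'd' are singletons -> 2/11
```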
December 30, Wednesday
11:00 – 12:00
Relations in algebraic complexity
Computer Science seminar
Lecturer : Amir Yehudayoff
Affiliation : Princeton, NJ
Location : 37/202
Host : Amos Beimel
We will discuss complexity in worlds with a weak algebraic structure. We will start with a brief introduction to algebraic complexity, and explain Valiant's definitions of the algebraic analogs of P and NP. We will then explore these notions in weak algebraic structures which are not necessarily commutative or even associative. It turns out that even in such a world, the permanent is NP-complete.
We also consider hardness in these weak computational worlds: first, we show that the non-commutative complexity of the permanent is related to the classical sum-of-squares problem. We also show that in a non-associative world, NP is strictly stronger than P.
Joint work with Pavel Hrubes and Avi Wigderson.
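For readers unfamiliar with the permanent: it is the determinant's permutation expansion with the signs dropped, and even its naive evaluation (a toy sketch for orientation, not anything from the talk) hints at why it is the canonical candidate hard polynomial.

```python
from itertools import permutations

def permanent(M):
    """Naive permanent: the determinant's expansion over permutations,
    without the alternating signs. Runs in n! time; no efficient
    algorithm is known, in line with its conjectured hardness."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= M[i][sigma[i]]
        total += p
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```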
December 29, Tuesday
12:00 – 14:00
The Randomized k-Server Conjecture (Online Algorithms meet Linear Programming)
Computer Science seminar
Lecturer : Niv Buchbinder
Affiliation : Cambridge, MA
Location : 37/202
Host : Chen Keasar
The k-server problem is one of the most central and well-studied problems in competitive analysis and is considered by many to be the "holy grail" problem in the field. In the k-server problem, there is a distance function d defined over an n-point metric space and k servers located at the points of the metric space. At each time step, an online algorithm is given a request at one of the points of the metric space, and it is served by moving a server to the requested point. The goal of an online algorithm is to minimize the total sum of the distances traveled by the servers so as to serve a given sequence of requests. The k-server problem captures many online scenarios, and in particular the widely studied paging problem.
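To make the online setting concrete, here is a minimal sketch on a one-dimensional line metric, using the naive greedy rule "move the nearest server". This is purely illustrative; greedy is known not to be competitive in general, which is precisely why the problem is hard.

```python
def serve_greedy(servers, requests):
    """servers: list of starting positions on the line; requests: points.
    Greedily moves the nearest server and accumulates distance traveled."""
    total = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)  # cost of moving the chosen server
        servers[i] = r                # the server now sits at the request
    return total

print(serve_greedy([0.0, 10.0], [1.0, 9.0, 1.0, 9.0]))  # cost 2.0
```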
A twenty-year-old conjecture states that there exists a k-competitive online algorithm for any metric space. The randomized k-server conjecture states that there exists a randomized O(log k)-competitive algorithm for any metric space.
While major progress has been made in the past 20 years on the deterministic conjecture, little is known about the randomized conjecture.
We present a very promising primal-dual approach for the design and analysis of online algorithms. We survey recent progress towards settling the k-server conjecture achieved using this new framework.
December 28, Monday
14:00 – 16:00
Analyzing Data Structures with Forbidden 0-1 Matrices
Computer Science seminar
Lecturer : Seth Pettie
Affiliation : Department of EECS, University of Michigan, Ann Arbor
Location : 202/37
Host : Dr. Michael Elkin
In this talk I'll exhibit a simple method for analyzing dynamic data
structures using various forbidden substructure theorems. The idea is
to (1) transcribe the behavior of the data structure as some kind of
combinatorial object, (2) show that the object does not contain some
forbidden substructure, and (3) bound the size of any such object
without this substructure. I'll show how to analyze path
compression-based data structures using forbidden 0/1 submatrices,
then discuss some extremal problems on 0-1 matrices.
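For concreteness, here is a minimal sketch (not from the papers) of the path-compression operation whose amortized behavior such forbidden-submatrix arguments can bound: each find re-points every node on the traversed path directly at the root, flattening the tree.

```python
def find(parent, x):
    """Union-find 'find' with path compression."""
    root = x
    while parent[root] != root:   # first pass: locate the root
        root = parent[root]
    while parent[x] != root:      # second pass: compress the path
        parent[x], x = root, parent[x]
    return root

parent = {1: 1, 2: 1, 3: 2, 4: 3}  # a chain 4 -> 3 -> 2 -> 1
find(parent, 4)
print(parent)                      # {1: 1, 2: 1, 3: 1, 4: 1}
```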
This talk covers results from two papers to appear in SODA 2010.
December 23, Wednesday
12:00 – 13:30
Hardness in Games
Graduate seminar
Lecturer : Inbal Talgam
Affiliation : Weizmann Institute of Science
Location : 202/37
Host : Graduate Seminar
In 1951, John Nash proved that every game has an equilibrium, i.e. a set of strategies such that no player has an incentive to change her strategy.
Nash's proof is non-constructive in nature, since it relies on Brouwer's fixed point theorem.
This raises the following question - given a game, what is the computational complexity of finding a Nash equilibrium?
In this talk I will give an introduction to the area of Computing Equilibria in Algorithmic Game Theory
and give an overview of a breakthrough result suggesting that the problem of finding a Nash equilibrium is hard: Daskalakis et al. prove it is complete for the complexity class PPAD.
The proof is based on a reduction from fixed point problems to finding equilibria,
using small “gadget games” as means of performing arithmetic computations.
No prior knowledge is assumed.
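As a toy warm-up (an assumed illustration, not material from the talk), Nash's definition can be checked directly on matching pennies, whose unique equilibrium has both players mixing 50/50; what PPAD-hardness addresses is finding such equilibria in general games.

```python
# Matching pennies: the row player wants the coins to match, the column
# player wants a mismatch. payoff[(a, b)] = (row payoff, column payoff).
payoff = {(0, 0): (1, -1), (0, 1): (-1, 1), (1, 0): (-1, 1), (1, 1): (1, -1)}

def expected(p, q, who):
    """Expected payoff for player `who` (0 = row, 1 = column)
    under mixed strategies p and q."""
    return sum(p[a] * q[b] * payoff[(a, b)][who] for a in (0, 1) for b in (0, 1))

p = q = [0.5, 0.5]  # the unique Nash equilibrium: both players mix 50/50
for a in (0, 1):    # no pure-strategy deviation improves either player
    assert expected([1 - a, a], q, 0) <= expected(p, q, 0) + 1e-9
    assert expected(p, [1 - a, a], 1) <= expected(p, q, 1) + 1e-9
print("50/50 vs 50/50 is a Nash equilibrium of matching pennies")
```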
References:
C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The Complexity of Computing a Nash Equilibrium. SIAM Journal on Computing, 1:195-259, 2009.
K. Etessami and M. Yannakakis. On the Complexity of Nash Equilibria and Other Fixed Points. In the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2007.
11:00 – 12:00
Recent results in geometric modeling and point processing
Computer Science seminar
Lecturer : Andrei Sharf
Affiliation : Center of Visual Computing, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences, Shenzhen, China
Location : 202/37
Host : Dr. Jihad El-sana
Most 3D shapes are nowadays acquired using range scanning devices. Currently, scanners are capable of capturing complex shapes, large urban scenes and, lately, even motion. The initial representation of the shape consists of several properly transformed depth images, resulting in a point sampling of the surface. Typically, scan data suffer from missing parts, noise in point coordinates and orientation, outliers, and non-uniformly sampled regions. Without prior assumptions and user intervention, the reconstruction problem is ill-posed; an infinite number of surfaces pass through or near the data points. One of today's principal challenges is the development of robust point processing and reconstruction techniques that deal with the inherent inconsistencies in the acquired data set.
In my talk I will present recent advances in geometric modeling, processing and reconstruction of point data. I will describe a deformable model for watertight manifold reconstruction. The model yields a correct topology interpretation of the reconstructed shape and allows topology control to a certain extent. Next, I will present a topology-aware interactive reconstruction technique: topological ambiguities in the data are automatically detected, and user interaction is used to consolidate the topology reconstruction. I will then present a space-time technique for the reconstruction of moving and deforming objects, in which the motion of the object is described as an incompressible flow of matter; this overcomes deficiencies in the acquired data such as persistent occlusions, errors and even entirely missing frames. Finally, motivated by recent advances in sparse signal reconstruction, I will present a "lower-than-L2" minimization scheme for sparse reconstruction. The sparsity principle gives rise to a novel global reconstruction paradigm for sharp point set surfaces which is robust to noise.
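As a hedged toy illustration of "lower-than-L2" minimization (a generic iteratively-reweighted-least-squares sketch, not the paper's reconstruction algorithm): penalizing residuals with an exponent p < 2 makes a fit robust to outliers, the same sparsity principle that the talk applies to sharp surface reconstruction.

```python
import numpy as np

def irls_fit(x, y, p=0.8, iters=30, eps=1e-6):
    """Fit y ~ a*x + b minimizing sum |residual|^p, p < 2, via IRLS:
    repeatedly solve a weighted least-squares problem with weights
    |r|^(p-2), so large residuals (outliers) get small weight."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
        r = y - A @ coef
        w = (np.abs(r) + eps) ** (p - 2)  # reweight for the next round
    return coef

x = np.linspace(0, 1, 20)
y = 2 * x + 1
y[3] += 5                  # one gross outlier
print(irls_fit(x, y))      # close to [2, 1]; plain least squares is pulled off
```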
December 22, Tuesday
12:00 – 14:00
Data sparsity and non-local reasoning in NLP
Computer Science seminar
Lecturer : Lev-Arie Ratinov
Affiliation : University of Illinois
Location : 37/202
Host : Yefim Dinitz
Two of the most serious and fundamental problems in Natural Language Processing are data sparsity and non-local reasoning. For example, consider the task of identifying people, locations and organizations in the following text (taken from Reuters):
"BLINKER BAN LIFTED. Dutch forward Reggie Blinker had his indefinite suspension lifted by FIFA on Friday and was set to make his Sheffield Wednesday comeback on Saturday
Blinker missed his club's last two games after FIFA slapped a worldwide ban on him for appearing to sign contracts for both Wednesday and Udinese
"
Many of the words, like 'Udinese' and 'Sheffield', are rare words that are unlikely to appear in the training data. On the other hand, the words 'Blinker' and 'Wednesday' in this text refer to a player and to a soccer team, and successfully identifying them as such requires global understanding of the text.
In this talk I will discuss algorithms for reducing data sparsity and making non-local decisions in NLP. The running example will be the task of named entity recognition.
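A tiny sketch of the sparsity problem on the Reuters example (the training vocabulary below is a hypothetical toy, purely for illustration): the named entities are exactly the words a small vocabulary is least likely to contain, so a tagger gets no lexical evidence for them.

```python
# Hypothetical toy training vocabulary (assumption for illustration only).
train_vocab = {"the", "ban", "was", "lifted", "by", "fifa", "on", "friday"}

test = "Dutch forward Reggie Blinker had his suspension lifted by FIFA".split()
unseen = [w for w in test if w.lower() not in train_vocab]
print(unseen)  # the entity words ('Reggie', 'Blinker', ...) are all unseen
```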
Bio.
Lev Ratinov received his BSc and MSc from BGU, now he's a Phd student in University of Illinois at Urbana-Champaign. He has published at ACL, AAAI, and gave a tutorial at EACL
December 16, Wednesday
12:00 – 13:30
A taste of Computational Biology: constrained sequence alignment
Graduate seminar
Lecturer : Tamar Pinhas
Affiliation : BGU
Location : 202/37
Host : Graduate Seminar
This talk offers a glimpse into the field of Computational Biology, specifically through the topic of constrained sequence alignment and its applications.
Introducing constraints into alignment processes facilitates improved speed and allows for fine-tuning of the alignment results.
Constraints are usually a formulation of a priori knowledge regarding the expected alignment result and often originate from biological studies.
We survey different types of constraints and the central approaches to constrained sequence alignment, and discuss several algorithms and their applications.
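As a baseline for what the constraints modify, here is a minimal unconstrained Needleman-Wunsch score computation (a standard textbook sketch; constrained variants essentially restrict which cells of this dynamic-programming table may be used).

```python
def align_score(s, t, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score by dynamic programming."""
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap            # align a prefix of s against gaps
    for j in range(1, m + 1):
        D[0][j] = j * gap            # align a prefix of t against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # substitute/match
                          D[i - 1][j] + gap,       # gap in t
                          D[i][j - 1] + gap)       # gap in s
    return D[n][m]

print(align_score("ACGT", "ACT"))  # 2: three matches, one gap
```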
December 9, Wednesday
12:00 – 13:30
Secret Sharing and Non-Shannon Information Inequalities
Graduate seminar
Lecturer : Ilan Orlov
Affiliation : BGU
Location : 202/37
Host : Graduate Seminar
A secret-sharing scheme is a mechanism for sharing data among a set of participants
such that only pre-defined authorized subsets of participants can reconstruct the data,
while any other subset has absolutely no information on the data.
The talk will have two parts.
In the first part, I will give some background on secret sharing schemes
and introduce several simple schemes.
In addition, I will discuss the connection between secret sharing and information theory.
(This part is really simple)
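One of the simple schemes the first part has in mind could look like the following n-out-of-n XOR scheme (a standard textbook sketch, not necessarily the one presented): all n shares together reconstruct the secret, while any n-1 of them are jointly uniform and reveal nothing.

```python
import secrets

def share_xor(secret: int, n: int, nbits: int = 32):
    """n-out-of-n XOR secret sharing: pick n-1 random shares and set the
    last one so that the XOR of all shares equals the secret."""
    shares = [secrets.randbits(nbits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

shares = share_xor(0xCAFE, 3)
assert reconstruct(shares) == 0xCAFE  # all 3 shares recover the secret
```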
In the second part I will briefly show some results from my thesis.
Specifically, the result from the paper "Secret Sharing and Non-Shannon Information Inequalities" by Amos Beimel and Ilan Orlov [TCC 2009]. The abstract of the paper follows.
The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is $2^{O(n)}$, where $n$ is the number of participants in the access structure. It is a long-standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz (J. Cryptology 97), who proved that there exist access structures with $n$ participants such that the size of the share of at least one party is $n/\log n$ times the secret size. Csirmaz's proof uses Shannon information inequalities, which were the only information inequalities known when Csirmaz published his result. On the negative side, Csirmaz proved that by only using Shannon information inequalities one cannot prove a lower bound of $\omega(n)$ on the share size. In the last decade, a sequence of non-Shannon information inequalities was discovered. This raises the hope that these inequalities can help in improving the lower bounds beyond $n$. However, in this paper we show that all the inequalities known to date cannot prove a lower bound of $\omega(n)$ on the share size.
December 8, Tuesday
12:00 – 14:00
Approximating the Statistics of various Properties in Randomly Weighted Graphs
Computer Science seminar
Lecturer : Yuval Emek
Affiliation : School of Electrical Engineering, Tel Aviv University
Location : 202/37
Host : Dr. Michael Elkin
Consider the setting of randomly weighted graphs, namely, graphs whose edge weights are chosen independently according to probability distributions with finite support over the non-negative reals. In this setting, weighted graph properties typically become random variables, and we are interested in computing their statistical features. Unfortunately, this turns out to be computationally hard for some weighted graph properties, even though computing the properties themselves in the traditional setting of algorithmic graph theory is tractable. For example, there are well-known efficient algorithms that compute the diameter of a given weighted graph; yet computing the expected diameter of a given randomly weighted graph is #P-hard even if the edge weights are identically distributed.
In this work, we define a family of weighted graph properties and show that for each property in this family, the problem of computing the $k$-th moment (and in particular, the expected value) of the corresponding random variable in a given randomly weighted graph $G$ admits a fully polynomial-time randomized approximation scheme (FPRAS) for every fixed $k$. This family includes fundamental weighted graph properties such as the diameter of $G$, the radius of $G$ (with respect to any designated vertex), and the weight of a minimum spanning tree of $G$.
Joint work with Amos Korman and Yuval Shavitt.
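To make the statistical setting concrete, here is a naive Monte Carlo sketch (emphatically not the FPRAS of the paper) that estimates the expected minimum-spanning-tree weight of a tiny randomly weighted graph by sampling edge weights.

```python
import random

# Toy randomly weighted graph: a triangle on vertices {0, 1, 2}, each edge
# weight drawn independently as 1 or 2 with probability 1/2 each.
values, probs = [1.0, 2.0], [0.5, 0.5]

def mst_weight(w):
    # On a triangle, any two edges form a spanning tree,
    # so the MST consists of the two cheapest edges.
    return sum(sorted(w)[:2])

def estimate_expected_mst(trials=100_000):
    total = 0.0
    for _ in range(trials):
        w = [random.choices(values, probs)[0] for _ in range(3)]
        total += mst_weight(w)
    return total / trials

print(estimate_expected_mst())  # converges to 21/8 = 2.625
```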
December 7, Monday
14:00 – 17:00
Phylogenetic Tree Reconstruction with Insertions and Deletions
Computer Science seminar
Lecturer : Avinatan Hassidim
Affiliation : MIT
Location : 202/37
Host : Eitan Bachmat
Phylogenetic trees represent evolutionary history by showing the course of evolution from some extinct common ancestor to today's species. These trees can reveal connections between different species, teach us about extinct species, and help us understand the development of viruses and other organisms with high mutation rates.
The most common way of reconstructing phylogenetic trees is based on sequencing DNA from different species and using similarities between sequences to infer the tree. This requires some model of how DNA evolves between an ancestor species and its descendants. The simplest model assumes that mutations are i.i.d. and that each mutation is a substitution, and under this model there are provably good reconstruction algorithms. However, recent studies show that insertion and deletion mutations are a serious cause of reconstruction errors and cannot be ignored.
We present the first efficient algorithm for tree reconstruction when the mutations can be substitutions, insertions and deletions. The algorithm uses a clustering based approach, which builds small parts of the tree, and glues them together. In addition, the algorithm outputs a global alignment of the DNA sequences, which respects the evolutionary history.
Based on joint works with Alex Andoni, Mark Braverman, Costis Daskalakis and Sebastien Roch.
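A minimal sketch of the substitution-only model mentioned above (illustrative only; the talk's point is precisely that insertions and deletions break this fixed-alignment picture): simulate i.i.d. substitutions along one branch and recover the branch length with the classical Jukes-Cantor correction.

```python
import math
import random

BASES = "ACGT"

def mutate(seq, p):
    """i.i.d. substitution model: each site mutates to a uniformly random
    *other* base with probability p, independently."""
    return "".join(
        random.choice([b for b in BASES if b != c]) if random.random() < p else c
        for c in seq)

def jc_distance(s, t):
    """Jukes-Cantor distance estimate from the fraction of mismatching sites."""
    f = sum(a != b for a, b in zip(s, t)) / len(s)
    return -0.75 * math.log(1 - 4 * f / 3)

anc = "".join(random.choice(BASES) for _ in range(10_000))
child = mutate(anc, 0.1)
print(jc_distance(anc, child))  # roughly 0.107 for p = 0.1
```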
December 2, Wednesday
12:00 – 13:30
Interfaces for Scheduling Resources in Embedded Systems
Graduate seminar
Lecturer : Dr. Gera Weiss
Affiliation : CS, BGU
Location : 202/37
Host : Graduate Seminar
Modern software engineering heavily relies on clearly specified interfaces for
separation of concerns among designers implementing components and programmers
using those components. The interface of a component describes the
functionality and constraints on the correct usage in a succinct manner. The
need for interfaces is evident for assembling complex systems from components,
but even more so in modern control applications where a network of sensors and
actuators is deployed to achieve complex control objectives. The notion of an
interface for a control component must incorporate information about timing and
resource needs, information that is not represented in current programming
interfaces.
In the talk, I'll propose formal languages as an interface for resource sharing
and timing. For example, imagine two components that share some resource (e.g.
CPU), allocated in a time-triggered manner. With the proposed interface, each
component specifies a formal language over the alphabet $\{0,1\}$ of "safe
schedules", i.e., if $w = w(1), w(2), \dots$ is in this language, and the component
gets the resource in every time slot $t$ for which $w(t) = 1$, then the component
meets its performance requirements. The main advantage of this interface is
that it allows modular analysis of each component; the constraints can then be
combined using language intersection.
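A toy sketch of checking a candidate time-triggered schedule against two components' "safe schedule" languages. Here simple sliding-window conditions stand in for the formal languages (the actual framework works with regular and omega-languages and intersects automata); the schedule word and window sizes are assumptions for illustration.

```python
def meets(w, owner, window, need=1):
    """Check: in every `window` consecutive slots of schedule word w,
    the component owning symbol `owner` gets the resource >= `need` times."""
    return all(w[i:i + window].count(owner) >= need
               for i in range(len(w) - window + 1))

w = "101101101101"       # candidate schedule: '1' -> component A, '0' -> B
print(meets(w, "1", 2))  # True: A is served in every window of 2 slots
print(meets(w, "0", 3))  # True: B is served in every window of 3 slots
```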
I'll present theoretical results concerning applications of the above approach
to software control systems (where controllers are implemented by a software
that shares resources). I'll also introduce a Real-Time-Java based tool, called
RTComposer, that supports the proposed methodology. In RTComposer, scheduling
specifications can be given as periodic tasks, or using temporal logic, or as
omega-automata, or as stability requirements. Components can be added dynamically,
and non-real-time components are allowed. The benefits of the approach will be
demonstrated and applications to wireless sensor/actuator networks based on the
WirelessHART protocol and to distributed control systems based on the Controller
Area Network (CAN) bus (used, e.g., in automotive applications) will be
discussed.