Our first TOCA-SV meeting of the academic year is coming up next Friday in the Tresidder Oak Lounge at Stanford. Like last year, it will host the Motwani Colloquium. We have reserved some parking (see instructions below) on a first-come, first-served basis, so come early. Also see the schedule and abstracts below.

**Schedule**:

11:10-11:50 Dean Doron, Stanford University, *Nearly Optimal Pseudorandomness from Hardness*

11:50-12:30 Barna Saha, UC Berkeley, *Algorithms for Fast Sequence Comparison*

12:30-1:30 lunch

1:30-3:30 Student poster session

4:15-5:30 Motwani Colloquium, Ronitt Rubinfeld, MIT and Tel Aviv University, *Local Computation Algorithms*

**Abstracts**

*Size-free generalization bounds for convolutional neural networks*

We prove bounds on the generalization error of convolutional networks. The bounds are characterized in terms of the training loss, the number of parameters, the Lipschitz constant of the loss, and the distance of the initial and final weights. The bounds are independent of the number of pixels in the input, as well as the width and height of hidden feature maps. These are the first bounds for DNNs with such guarantees. We present experiments with CIFAR-10, varying hyperparameters of a deep convolutional network, comparing our bounds with practical generalization gaps.

- Dean Doron, Stanford University

*Nearly Optimal Pseudorandomness from Hardness*

Existing techniques for derandomizing algorithms based on circuit lower bounds yield a large polynomial slowdown in running time. We show that assuming exponential lower bounds against nondeterministic circuits, we can convert any randomized algorithm running in time T to a deterministic one running in time nearly T^2. Under complexity-theoretic assumptions, such a slowdown is nearly optimal.

In this talk I will concentrate on the role of error-correcting codes in these techniques. We will see which properties of error-correcting codes are useful for constructing pseudorandomness primitives sufficient for derandomization, where they fall short of achieving a better slowdown, and how we can overcome that.

Based on joint work with Dana Moshkovitz, Justin Oh, and David Zuckerman.

- Barna Saha, UC Berkeley

*Algorithms for Fast Sequence Comparison*

There are many basic problems over sequences: computing the edit distance between two sequences, detecting how RNA sequences fold, finding the distance between a sequence and a language (a collection of sequences), and measuring how unbalanced a parenthesis sequence is. They have natural applications in various areas including computational biology, natural language processing, compiler optimization, and data cleaning. The last few years have seen a plethora of results in which new, faster algorithms have been developed for these problems, with various trade-offs between approximation (quality of solution) and running time (scalability). In this talk, I will present some of the tools developed in my work on these problems. Depending on time and audience choices, we will cover one or more of the following.

(i) A random-walk-based technique for measuring how unbalanced a parenthesis sequence is, which has also been useful for streaming computation of edit distance.

(ii) A dependent rounding technique that gives a nearly optimal result in nearly optimal time for estimating the distance between a sequence and a language, and which can also be used to speed up other polynomial-time problems, such as all-pairs shortest path computation on graphs.

(iii) An amnesic dynamic programming technique that gives a significantly faster approximation algorithm for RNA folding, and which can also be useful for many dynamic-programming-based problems that have the Lipschitz property (most sequence problems do).

(iv) A shortest-path elimination technique that gives sublinear-time algorithms for estimating edit distance in certain regimes of edit distance.

The audience is encouraged to pick their favorite and convey it to the speaker at the talk, or beforehand by email.
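For readers unfamiliar with the baseline these results improve on: the classical algorithm for edit distance is a quadratic-time dynamic program (Wagner–Fischer), and it is this quadratic barrier that the approximation and sublinear-time results above work around. A minimal sketch (not from the talk, just the textbook baseline):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program for edit distance."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from a[:0] to every prefix of b
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                            # delete a[i-1]
                cur[j - 1] + 1,                         # insert b[j-1]
                prev[j - 1] + (a[i - 1] != b[j - 1]),   # match or substitute
            )
        prev = cur
    return prev[n]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (two substitutions and one insertion). On genomic-scale inputs this quadratic cost is prohibitive, which motivates the faster approximate and sublinear-time algorithms in the talk.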

- Ronitt Rubinfeld, MIT and Tel Aviv University

*Local Computation Algorithms*

Consider a setting in which the inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input? We survey recent work in the model of "local computation algorithms," which, for a given input, supports queries by a user to the values of specified bits of a legal output. The goal is to design local computation algorithms such that very little of the input needs to be seen in order to determine the value of any single bit of the output. Though this model describes sequential computations, techniques from local distributed algorithms have been extremely important in designing efficient local computation algorithms. In this talk, we describe results on a variety of problems for which sublinear time and space local computation algorithms have been developed, with special focus on finding maximal independent sets and sparse spanning graphs.
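To make the query model concrete, here is a toy sketch (an illustration in the spirit of the model, not the speaker's construction) of a local query for maximal independent set: assign each vertex a random rank, and declare a vertex in the MIS iff no lower-ranked neighbor is in the MIS. Answering a single query only explores a small neighborhood of the queried vertex, rather than computing the whole output. The graph representation and `seed` parameter are my own assumptions for the sketch:

```python
import random

def rank(v: int, seed: int = 0) -> float:
    # Deterministic pseudorandom rank per vertex, so repeated
    # queries are consistent with one fixed global output.
    return random.Random(hash((v, seed))).random()

def in_mis(graph: dict[int, list[int]], v: int, seed: int = 0) -> bool:
    """Query one bit of the output: is v in the greedy-by-rank MIS?

    Recurses only into neighbors of strictly lower rank, so each
    query touches a small part of the input (in expectation, for
    bounded-degree graphs) instead of the whole graph.
    """
    for u in graph[v]:
        if rank(u, seed) < rank(v, seed) and in_mis(graph, u, seed):
            return False  # a lower-ranked neighbor claimed the MIS slot
    return True
```

Every vertex answers consistently with one global maximal independent set: the answers form an independent set (no edge has both endpoints answering yes) that is maximal (every "no" vertex has a "yes" neighbor).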
