Next week: TOCA-SV + Motwani Colloquium

Our first TOCA-SV meeting of the academic year is coming up next Friday in Tresidder Oak Lounge, Stanford. Like last year, it will host the Motwani Colloquium. We have reserved some parking (see instructions below) on a first-come, first-served basis, so come early. The schedule and abstracts are below.


There are 35 reserved spaces in Tresidder Lot (L-39) on November 15, 2019. Our space numbers will be 14-48; see the map for their location. Posted signs will read "Reserved for TOCA-SV CS Workshop."

To avoid a citation, you must provide vehicle information to obtain permission to park in the designated reserved area. Use: See instructions.

*The Chrome and Firefox browsers are recommended and are most compatible with the system.


  • Hanie Sedghi, Google AI
Size-free generalization bounds for convolutional neural networks
We prove bounds on the generalization error of convolutional networks. The bounds are characterized in terms of the training loss, the number of parameters, the Lipschitz constant of the loss, and the distance of the initial and final weights. The bounds are independent of the number of pixels in the input, as well as the width and height of hidden feature maps. These are the first bounds for DNNs with such guarantees.  We present experiments with CIFAR-10, varying hyperparameters of a deep convolutional network, comparing our bounds with practical generalization gaps.
  • Dean Doron, Stanford University
Nearly Optimal Pseudorandomness from Hardness

Existing techniques for derandomizing algorithms based on circuit lower bounds yield a large polynomial slowdown in running time. We show that assuming exponential lower bounds against nondeterministic circuits, we can convert any randomized algorithm running in time T to a deterministic one running in time nearly T^2. Under complexity-theoretic assumptions, such a slowdown is nearly optimal.

In this talk, I will concentrate on the role of error-correcting codes in these techniques. We will see which properties of error-correcting codes are useful for constructing pseudorandomness primitives sufficient for derandomization, where they fall short of achieving a better slowdown, and how we can overcome that.

Based on joint work with Dana Moshkovitz, Justin Oh, and David Zuckerman.
