Workshop Abstract

Deep learning has driven dramatic performance advances on numerous difficult machine learning tasks in a wide range of applications. Yet, its theoretical foundations remain poorly understood, with many more questions than answers. For example: What are the modeling assumptions underlying deep networks? How well can we expect deep networks to perform? When a certain network succeeds or fails, can we determine why and how? How can we adapt deep learning to new domains in a principled way?

While some progress has been made recently towards a foundational understanding of deep learning, most theory work has been disjointed, and a coherent picture has yet to emerge. Indeed, the current state of deep learning theory is like the fable “The Blind Men and the Elephant”.

The goal of this workshop is to provide a forum where theoretical researchers of all stripes can come together not only to share reports on their individual progress but also to find new ways to join forces towards the goal of a coherent theory of deep learning. Topics to be discussed include:

  • Statistical guarantees for deep learning models
  • Expressive power and capacity of neural networks
  • New probabilistic models from which various deep architectures can be derived
  • Optimization landscapes of deep networks
  • Deep representations and invariance to latent factors
  • Tensor analysis of deep learning
  • Deep learning from an approximation theory perspective
  • Sparse coding and deep learning
  • Mixture models, the EM algorithm, and deep learning

In addition to invited and contributed talks by leading researchers from diverse backgrounds, the workshop will feature an extended poster/discussion session and a panel discussion on which combinations of ideas are most likely to move the theory of deep learning forward and which might lead to blind alleys.

Confirmed Speakers

Sanjeev Arora (Princeton University)
Stefano Soatto (University of California at Los Angeles)
Kamalika Chaudhuri (University of California at San Diego)
Jeremias Sulam (Johns Hopkins University)
Emily Fox (University of Washington)
Judy Hoffman (Georgia Institute of Technology)
Zachary C. Lipton (Carnegie Mellon University)
Irina Higgins (DeepMind)

Tentative Schedule

Saturday 8 December 2018

8:30am-8:40am Opening remarks
Session 1: Moderator – Richard Baraniuk
8:40am-9:20am Plenary talk 1
9:20am-9:50am Invited talk 1
9:50am-10:10am Contributed talk 1
10:10am-10:30am Contributed talk 2
10:30am-10:50am Coffee break
Session 2: Moderator – Animashree Anandkumar
10:50am-11:30am Plenary talk 2
11:30am-12:00pm Invited talk 2
12:00pm-1:30pm Lunch break
Session 3: Moderator – Ankit Patel
1:30pm-2:10pm Plenary talk 3
2:10pm-2:40pm Invited talk 3
2:40pm-3:00pm Contributed talk 3
3:00pm-3:50pm Poster session
Session 4: Moderator – Nhat Ho
3:50pm-4:30pm Plenary talk 4
4:30pm-5:00pm Invited talk 4
5:00pm-5:30pm Breakout session
5:30pm-6:25pm Panel discussion
6:25pm-6:30pm Closing remarks

Call for Papers and Submission Instructions

We invite researchers to submit anonymous extended abstracts of up to 4 pages (including the abstract, but excluding references). No specific formatting is required. Authors may use the NIPS style file, or any other style, as long as they use a standard font size (11pt) and margins (1in).

Submissions are handled through the EasyChair system. Please note that at least one coauthor of each accepted paper will be expected to attend the workshop in person to present a poster or give a contributed talk.

Papers can be submitted at the address:

Important Dates

  • Submission deadline: 11:59 pm CST, Friday October 12th
  • Acceptance notification: Friday October 26th
  • Camera-ready submission: Friday November 30th
  • Workshop: Saturday December 8th


Organizers

Richard G. Baraniuk
Stephane Mallat
Anima Anandkumar
Ankit B. Patel
Nhat Ho

Please email us with any questions.