Compressing Variational Bayes - Stephan Mandt
Deep learning has achieved unprecedented performance gains at the expense of dramatic data processing and storage costs. A recent trend demands a different type of machine learning: small models that fit on low-powered devices, neural compression algorithms that outperform their classical counterparts in rate-distortion performance, and approximate inference algorithms that help quantify a model's sensitivity to discretization of its parameters or data. We view resource-efficient deep learning through the lens of approximate Bayesian inference, covering three aspects of the problem: storage-efficient deep learning, data-efficient deep learning, and runtime-efficient Bayesian deep learning. Our proposed models and algorithms will be tested and applied in a range of domains, including image and video compression, text analysis, and the natural sciences.

Sep 23, 2020 05:00 PM in Paris


Speakers

Stephan Mandt
Assistant Professor, University of California, Irvine
Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He held previous postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation, a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, IBM, and Qualcomm.