Yixin Wang - Representation Learning: A Causal Perspective
Representation learning constructs low-dimensional representations that summarize the essential features of high-dimensional data such as images and text. Ideally, such a representation should efficiently capture non-spurious features of the data. It should also be disentangled, so that we can interpret which feature each of its dimensions captures. However, these desiderata are often defined only intuitively and are challenging to quantify or enforce.

In this talk, we take a causal perspective on representation learning. We show how the desiderata of representation learning can be formalized using counterfactual notions, enabling metrics and algorithms that target efficient, non-spurious, and disentangled representations of data. We discuss the theoretical underpinnings of these algorithms and illustrate their empirical performance in both supervised and unsupervised representation learning.
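
As a toy illustration of the kind of counterfactual reasoning the abstract alludes to (not the metrics or algorithms presented in the talk or the paper), the sketch below builds a synthetic dataset with one causal and one spurious feature, fits a linear representation, and probes its non-spuriousness by intervening on the spurious feature alone. The setup and all names are assumptions made purely for illustration.

```python
# Illustrative sketch only: a counterfactual-style probe of spuriousness on synthetic
# data. This is NOT the algorithm from the talk or the paper (arXiv:2109.03795).
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: the label y is caused by x_core; x_spur merely correlates with y.
x_core = rng.normal(size=n)
y = 2.0 * x_core + 0.1 * rng.normal(size=n)
x_spur = y + 0.5 * rng.normal(size=n)
X = np.column_stack([x_core, x_spur])

# A one-dimensional linear "representation"/predictor fit by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(features):
    return features @ w

# Counterfactual-style probe: intervene on the spurious feature alone (here by
# permuting it, which breaks its association with y) and measure how much the
# prediction moves. A representation relying only on the causal feature barely reacts.
X_cf = X.copy()
X_cf[:, 1] = rng.permutation(X_cf[:, 1])
shift = np.mean(np.abs(predict(X) - predict(X_cf)))
print(f"mean |prediction change| under intervention on the spurious feature: {shift:.3f}")
```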

This is joint work with Michael Jordan.

https://arxiv.org/abs/2109.03795

Mar 2, 2022 05:00 PM in Paris

Speakers

Yixin Wang
LSA Collegiate Fellow, University of Michigan
Yixin Wang is an LSA Collegiate Fellow in Statistics and an assistant professor of Statistics (as of Fall 2022) at the University of Michigan. She works in the fields of Bayesian statistics, machine learning, and causal inference. Previously, she was a postdoctoral researcher with Professor Michael Jordan at the University of California, Berkeley. She completed her PhD in statistics at Columbia, advised by Professor David Blei, and her undergraduate studies in mathematics and computer science at the Hong Kong University of Science and Technology. Her research has received several awards, including the INFORMS Data Mining Best Paper Award, the Blackwell-Rosenbluth Award from the junior section of ISBA, student paper awards from the ASA Biometrics and Bayesian Statistics Sections, and the ICSA conference young researcher award.