Interpreting Bayesian Deep Neural Networks Through Variable Importance - Sarah Filippi
While the success of deep neural networks is well established across a variety of domains, our ability to explain and interpret these methods is limited. Unlike previously proposed local methods, which try to explain particular classification decisions, we focus on global interpretability and ask a generally applicable, yet understudied, question: given a trained model, which input features are the most important? In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable interactions into our proposed feature ranking. Here, we extend the recently proposed "RelATive cEntrality" (RATE) measure of Crawford et al. (2018) to the Bayesian deep learning setting. Given a trained network, RATE applies an information-theoretic criterion to the posterior distribution of effect sizes to assess feature significance. Importantly, unlike competing approaches, our method does not require tuning parameters, which can be costly and difficult to select.
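To give a flavour of the information-theoretic criterion involved, the sketch below computes a RATE-style ranking from a multivariate normal approximation to the posterior over effect sizes: for each feature j, it measures the KL divergence between the posterior of the remaining effect sizes conditioned on that feature's effect being zero and their unconditional posterior, then normalizes. This is a minimal illustration under that Gaussian assumption, not the speakers' implementation; all function names are ours.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) between multivariate normals."""
    k = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def rate_ranking(mu, cov):
    """RATE-style feature importances from a Gaussian posterior N(mu, cov)
    over effect sizes. Returns nonnegative weights summing to one; larger
    values indicate features whose removal most disturbs the posterior."""
    p = len(mu)
    kld = np.zeros(p)
    for j in range(p):
        idx = np.delete(np.arange(p), j)
        mu_rest = mu[idx]
        cov_rest = cov[np.ix_(idx, idx)]
        cross = cov[np.ix_(idx, [j])]          # Sigma_{-j,j}, shape (p-1, 1)
        # Gaussian conditioning of beta_{-j} on beta_j = 0
        mu_cond = mu_rest - (cross[:, 0] / cov[j, j]) * mu[j]
        cov_cond = cov_rest - cross @ cross.T / cov[j, j]
        kld[j] = gaussian_kl(mu_cond, cov_cond, mu_rest, cov_rest)
    return kld / kld.sum()
```

Note that with a diagonal posterior covariance every divergence is zero: the partial covariance structure between effect sizes is exactly what carries the interaction information the abstract emphasizes.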

Nov 18, 2020 05:00 PM in Paris

This webinar has ended and registration is closed. If you have any questions, please contact the webinar host: David Rohde.