
Mingyuan Zhou - Adaptive Diffusion-based Deep Generative Models
Deep generative models (DGMs), such as generative adversarial networks (GANs) and diffusion-based probabilistic models, have achieved impressive results in synthesizing high-resolution photorealistic images. In this talk, I will describe two adaptive diffusion-based strategies for building DGMs that are not only stable to train but also fast at generation. While a GAN can quickly synthesize an image by propagating a noise vector through its generator, it is known to suffer from training instability and mode collapse. By contrast, a diffusion-based probabilistic model typically trains more stably and provides better mode coverage; however, it is notoriously slow to sample from, as it must pass through the same U-Net hundreds or even thousands of times to iteratively refine its generation. To address the issues of diffusion models, we have developed truncated diffusion probabilistic models, which incorporate adversarial training to substantially shorten the reverse diffusion chain while maintaining high generation quality. To address the issues of GANs, we have developed adaptive diffusion to optimize the operating conditions of the GAN discriminator and induce model- and domain-agnostic differentiable data augmentation, boosting the performance of several recently proposed GANs. I will present example results on data generation to demonstrate the working mechanisms and advantages of adaptive diffusion.
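The sampling-cost argument in the abstract can be illustrated with a toy sketch (not the speaker's actual method): the reverse diffusion chain applies the same denoising network at every step, so generation cost grows linearly with the number of steps, and truncating the chain directly reduces the number of network calls. The `toy_denoiser` below is a purely hypothetical stand-in for the U-Net.

```python
def toy_denoiser(x, t):
    # Hypothetical stand-in for the U-Net denoiser: nudges the sample
    # toward the data manifold (here, simply toward zero). A real model
    # would predict and remove noise conditioned on the timestep t.
    return 0.99 * x

def reverse_diffusion(x, num_steps):
    # Standard reverse chain: the SAME network is invoked once per step,
    # which is why sampling cost scales linearly with num_steps.
    calls = 0
    for t in reversed(range(num_steps)):
        x = toy_denoiser(x, t)
        calls += 1
    return x, calls

noise = 1.0  # initial sample drawn from pure noise (scalar for illustration)

_, full_calls = reverse_diffusion(noise, num_steps=1000)  # full chain
_, trunc_calls = reverse_diffusion(noise, num_steps=50)   # truncated chain
```

In the truncated regime, the shortened chain no longer starts from pure noise, so (as the abstract notes) an adversarially trained component is needed to bridge the gap while preserving generation quality.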

Jul 12, 2022 05:00 PM in Paris

The webinar is over and registration is closed. If you have any questions, please contact the webinar host, Maxime VONO.