L15- Finding Good Discrete Priors for VAEs

Recent advances in deep generative modeling replace the continuous latent vector representation with discrete latent representations and show improved performance in generating new data. Most of these models, such as VQ-VAE, VQ-GAN, and DreamerV2 (RL), are based on Variational Autoencoders (VAEs). In the VAE framework, the user must choose a prior over the latent variables; in the continuous setting, a simple Gaussian prior is usually chosen. In discrete latent spaces, however, choosing a prior is not trivial and may decide the fate of the generative model. In this project, we will investigate different ways to model the prior and compare them to a baseline algorithm. The students will build their own generative model and try to improve its generative capabilities over the baseline, gaining hands-on experience in deep learning, unsupervised learning, and deep generative models.
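As a minimal illustration of where the prior enters, the ELBO of a VAE with a categorical latent contains a KL term between the encoder's posterior over the discrete codes and the chosen prior. The sketch below (NumPy; the posterior values and the codebook size `K` are hypothetical, and a uniform categorical prior stands in for the baseline) shows how that term is computed; a learned prior would replace the uniform vector with probabilities fitted to the data.

```python
import numpy as np

def kl_categorical(q, p):
    """KL(q || p) between two categorical distributions over K codes."""
    q = np.clip(q, 1e-12, 1.0)  # avoid log(0)
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(q * (np.log(q) - np.log(p))))

K = 8  # hypothetical codebook size
# Hypothetical encoder posterior over the K discrete codes for one input.
q = np.array([0.50, 0.20, 0.10, 0.05, 0.05, 0.04, 0.03, 0.03])

# Baseline choice: a uniform prior over the codes.
uniform_prior = np.full(K, 1.0 / K)

# KL penalty this input contributes to the ELBO under the uniform prior.
print(kl_categorical(q, uniform_prior))
```

A prior that better matches the aggregate posterior yields a smaller KL penalty, which is one concrete way different prior choices affect the model's objective.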