Variational Autoencoder | Introduction and Workshop

  • Published: 9 Sep 2024
  • A variational autoencoder (VAE) is a more advanced version of the autoencoder. Instead of encoding each input directly to a single latent vector, the encoder outputs the parameters (mean and variance) of a Gaussian distribution over the latent space, which gives a more general representation of those latent vectors. In certain situations this lets it generate data better than a GAN. In this video I try to walk through a basic introduction to VAEs, how to build them in R, and how they are used in research; a minimal code sketch of the core idea follows the links below.
    References
    towardsdatasci...
    www.tensorflow...
    keras.io/examp...
    Slides
    docs.google.co...
    Script
    github.com/bra...
    Email: liquidbrain.r@gmail.com
    Website: www.liquidbrai...
    Patreon: / liquidbrain
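    A minimal sketch of the core idea described above, written against the classic keras interface in R (this is not the video's script; the 784-unit input, layer sizes, and variable names are illustrative assumptions):

      library(keras)

      original_dim     <- 784L   # e.g. a flattened 28x28 image (assumption)
      intermediate_dim <- 256L
      latent_dim       <- 2L

      # Encoder: instead of a single latent vector, output the mean and
      # log-variance of a Gaussian over the latent space
      x         <- layer_input(shape = original_dim)
      h         <- layer_dense(x, intermediate_dim, activation = "relu")
      z_mean    <- layer_dense(h, latent_dim)
      z_log_var <- layer_dense(h, latent_dim)

      # Reparameterisation trick: z = mean + sd * epsilon, with epsilon ~ N(0, 1)
      sampling <- function(args) {
        z_mean    <- args[, 1:latent_dim]
        z_log_var <- args[, (latent_dim + 1):(2 * latent_dim)]
        epsilon   <- k_random_normal(shape = k_shape(z_mean))
        z_mean + k_exp(z_log_var / 2) * epsilon
      }
      z <- layer_concatenate(list(z_mean, z_log_var)) %>% layer_lambda(sampling)

      # Decoder: map the sampled latent vector back to the input space
      decoder_h   <- layer_dense(units = intermediate_dim, activation = "relu")
      decoder_out <- layer_dense(units = original_dim, activation = "sigmoid")
      x_decoded   <- decoder_out(decoder_h(z))

      vae <- keras_model(x, x_decoded)

      # Training minimises the reconstruction error plus a KL term,
      # -0.5 * sum(1 + z_log_var - z_mean^2 - exp(z_log_var)),
      # which pulls the latent Gaussian towards a standard normal.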

Comments • 3

  • @yt.abhibhav · 1 year ago

    A comment. In line 172, runif() is actually "r uniform" and not "run if". @20:07
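    For anyone else who misread it the same way: runif() is base R's "random uniform" generator, part of the r<dist> family alongside rnorm(), rbinom(), and so on. A quick check in the console:

      set.seed(42)                    # make the draws reproducible
      runif(3)                        # three draws from Uniform(0, 1)
      runif(3, min = -1, max = 1)     # three draws from Uniform(-1, 1)
      rnorm(3)                        # compare: "r normal", draws from N(0, 1)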

  • @yt.abhibhav · 1 year ago

    Thanks a lot for this :)
    I have a question: can I increase the size of the latent variable in a VAE? More precisely, how should we interpret the latent variable when it has more than two dimensions? Does it mean we are forcing a new parameter into the model? If yes, what does that parameter reflect?
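    Mechanically, the latent size is just the width of the encoder's mean and log-variance output layers, so increasing it adds more coordinates to the latent Gaussian; what each coordinate ends up encoding is a separate question. A minimal keras-in-R sketch (the input size and layer names are illustrative assumptions, not from the video):

      library(keras)

      latent_dim <- 8L                                      # grow the latent space from 2 to 8 dimensions

      inputs    <- layer_input(shape = 784L)                # assumed flattened input
      h         <- layer_dense(inputs, 256L, activation = "relu")
      z_mean    <- layer_dense(h, latent_dim)               # now 8 means
      z_log_var <- layer_dense(h, latent_dim)               # now 8 log-variances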

    • @LiquidBrain · 1 year ago

      To be honest, a VAE is more or less a black box in how it compresses the data. As far as my limited knowledge goes, I don't think there is an easy way to understand what it is actually doing with your data.