Vector Quantized VAEs

  • Published: 26 Oct 2024

Comments • 9

  • @K3pukk4
    @K3pukk4 1 year ago

    What a legend!

  • @jonathanyang2359
    @jonathanyang2359 3 years ago +1

    Thanks! I don't attend this institution, but this was an extremely clear lecture :)

  • @LyoshaTheZebra
    @LyoshaTheZebra 3 years ago

    Thanks for explaining that! Great job. Subscribed!

  • @sdfrtyhfds
    @sdfrtyhfds 3 years ago +1

    Do you train the pixel CNN on the same data and just not update the VAE weights while training?

    • @davidmcallester4973
      @davidmcallester4973 3 years ago +3

      Yes, the vector quantization is held constant as the pixel CNN is trained.
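
To make the answer above concrete, here is a minimal PyTorch-style sketch of the two-stage setup: the trained VQ-VAE (encoder plus codebook) is frozen, and only the PixelCNN prior over the discrete codes is updated, using the same data as stage one. The names `vqvae`, `pixelcnn`, and `train_loader`, and the helper methods `encoder` and `quantize`, are assumptions for illustration, not an API from the lecture.

```python
import torch
import torch.nn.functional as F

# Stage 1 is done: freeze the VQ-VAE so the codebook (the symbol
# vocabulary) stays fixed while the prior is trained.
vqvae.eval()
for p in vqvae.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(pixelcnn.parameters(), lr=3e-4)

for images, _ in train_loader:           # same dataset used to train the VQ-VAE
    with torch.no_grad():
        z_e = vqvae.encoder(images)      # continuous latents, (B, D, H, W)
        codes = vqvae.quantize(z_e)      # nearest-codebook indices, (B, H, W)
    logits = pixelcnn(codes)             # (B, K, H, W) logits over K symbols
    loss = F.cross_entropy(logits, codes)  # autoregressive NLL of the code map
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```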

  • @sdfrtyhfds
    @sdfrtyhfds 3 years ago

    Also, what if you skip the quantization during inference? Would you still get images that make sense?

    • @davidmcallester4973
      @davidmcallester4973 3 years ago +3

      Do you mean "during generation"? During generation you can't skip the quantization because the pixel CNN is defined to generate the quantized vectors (the symbols).

    • @sdfrtyhfds
      @sdfrtyhfds 3 years ago +1

      @davidmcallester4973 I guess that during generation it wouldn't make much sense; I was thinking more in the direction of interpolating smoothly between two different symbols.
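
The interpolation idea in this thread can be sketched as follows: take the quantized latent maps of two images and feed the decoder a convex combination of them, skipping the re-quantization step. `vqvae.encode_quantized` and `vqvae.decoder` are hypothetical helpers, and since the decoder has only ever been trained on exact codebook vectors, the intermediate points may or may not decode to sensible images.

```python
import torch

with torch.no_grad():
    z_a = vqvae.encode_quantized(img_a)      # (1, D, H, W), exact codebook vectors
    z_b = vqvae.encode_quantized(img_b)
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps=8):
        z = (1 - alpha) * z_a + alpha * z_b  # generally lies off the codebook
        frames.append(vqvae.decoder(z))      # decode without snapping back to a symbol
```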

  • @bernhard-bermeitinger
    @bernhard-bermeitinger 3 years ago +2

    Thank you for this video; however, please don't call your variable ŝ 😆 (or at least don't say it out loud)