Autoencoders | Deep Learning Animated

  • Published: 24 Dec 2024

Comments • 79

  • @Clammer999
    @Clammer999 4 months ago +16

    To me, the critical and effective way of educating and enlightening is the step-by-step reasoning coupled with powerful animations. This video has certainly achieved that. Thanks so much!

    • @Deepia-ls2fo
      @Deepia-ls2fo  4 months ago

      Thank you for your comment !

    • @mevinkoser8446
      @mevinkoser8446 5 days ago

      I have to argue that there is a fundamental difference between educating and enlightening.

  • @CELLPERSPECTIVE
    @CELLPERSPECTIVE 6 months ago +22

    Legendary algorithm pull. I love educational content like this. Road to 1M!

  • @yadavadvait
    @yadavadvait 4 months ago +5

    this channel is a hidden gem!

  • @adamskrodzki6152
    @adamskrodzki6152 2 months ago

    Amazing how high quality your videos are. Hope you will have many more subscribers soon. This quality definitely deserves it.

  • @pritamswarnakar6855
    @pritamswarnakar6855 1 month ago

    I am sharing this and will encourage people in my contacts to subscribe, as explaining these "not so common" topics with this much ease is really an art, and these efforts of yours deserve a great amount of respect and appreciation.

  • @minhazulislam3048
    @minhazulislam3048 6 days ago

    Wishing good luck to this new channel

  • @farzinnasiri1084
    @farzinnasiri1084 24 days ago +1

    this explanation was exactly what I needed... I was having a hard time understanding the concept

  • @alexmattyou
    @alexmattyou 1 month ago

    I can visualize autoencoders better now. Keep doing animations.
    My brain just encodes animation data easily
    And I need to decode them in exam paper / seminar

  • @Marauder13
    @Marauder13 6 months ago +3

    Awesome video and animations bro. It's so amazing!! Keep doing more videos, I'll stay tuned!

    • @Deepia-ls2fo
      @Deepia-ls2fo  6 months ago

      Thank you, I'm not planning on stopping yet :)

  • @mostafasayahkarajy508
    @mostafasayahkarajy508 4 months ago +3

    Thank you very much for your videos. I am waiting for the next one about the VAE.

    • @Deepia-ls2fo
      @Deepia-ls2fo  4 months ago

      Thanks, hope I can post it this month :)

  • @HenrikVendelbo
    @HenrikVendelbo 3 months ago +1

    Excellent pace and choice of words. A video on UNET would be great

  • @MutexLock
    @MutexLock 13 days ago

    Awesome video! Thank you for your perfect explanation!!

  • @gabberwhacky
    @gabberwhacky 4 months ago +1

    Clear and concise explanations, awesome!

  • @bromanned7069
    @bromanned7069 6 months ago

    This channel is so underrated. Amazing explanation!

  • @MuhammadIsamil
    @MuhammadIsamil 2 months ago +1

    Perfect tone and pace. Thanks.

  • @beowolx
    @beowolx 1 month ago

    wow, that was such a good video! Thanks for that

  • @gama3181
    @gama3181 4 months ago

    I just found your channel and fell in love with it. Thank you!

    • @Deepia-ls2fo
      @Deepia-ls2fo  4 months ago +1

      Thanks for the kind words !

  • @thmcass8027
    @thmcass8027 5 months ago

    Thanks for making such an intuitive and insightful video! Can't wait for more content from this channel!

  • @Tothefutureand
    @Tothefutureand 4 months ago

    Very good and easy to understand content, I love when channels like yours make hard concepts that easy to understand.

  • @CuriousK7
    @CuriousK7 4 months ago

    Awesome content.❤ The reasoning and intricate animation are mindblowing. Eagerly waiting for VAE video 😊

  • @YarinM
    @YarinM 6 months ago +1

    Nice explanation, but I think two key aspects are missing (maybe planned to show up in later videos):
    1. The connection to transformers.
    2. The fact that latent space lets you make two models speak the same language (like the idea of CLIP and how it's used in DALL-E).

    • @Deepia-ls2fo
      @Deepia-ls2fo  6 months ago

      Hi, thank you for the feedback! Indeed these aspects are very important in modern architectures, but I feel like I would need to introduce a lot of other concepts to get there.
      It's definitely something I'll treat in future videos.

  • @ArashNasr
    @ArashNasr 6 months ago

    This video is both informative and visually appealing. Thanks!

  • @ananthakrishnank3208
    @ananthakrishnank3208 5 months ago

    Thank you! You made it lucid.

    • @Deepia-ls2fo
      @Deepia-ls2fo  5 months ago

      Thank you for your comment !

  • @dontdiediy7630
    @dontdiediy7630 6 months ago

    Good job man! Nice graphical representations. Easy to follow.

  • @EigenA
    @EigenA 1 month ago

    Great video

  • @aryankashyap7194
    @aryankashyap7194 5 months ago

    Great video! Waiting for the one on VAEs and other topics

    • @Deepia-ls2fo
      @Deepia-ls2fo  5 months ago

      @@aryankashyap7194 Thanks, it will probably be up before the end of the summer :)

  • @CodeSlate
    @CodeSlate 6 months ago +1

    Great content, hope you can get more exposure!

  • @Canbay12
    @Canbay12 4 months ago

    Incredibly good content. Keep up the good work!

  • @MutigerBriefkasten
    @MutigerBriefkasten 5 months ago

    Perfect animated and well explained. Thank you 👍 subscribed 😊

  • @SheikhEddy
    @SheikhEddy 4 months ago +1

    What would you do if you wanted to find a middle between two points in latent space if simple interpolation produces garbage results?

    • @Deepia-ls2fo
      @Deepia-ls2fo  4 months ago +2

      Thanks for the comment, in fact a simple interpolation is perfectly fine when your latent space is "in order".
      It should have some properties, like being somewhat continuous, which a simple autoencoder does not impose. VAEs, however, do have such a latent space.
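The idea in the reply above can be sketched in a few lines. This is a hypothetical illustration, not code from the video: `z_a` and `z_b` stand in for the latent codes of two encoded images, and the decoder is omitted.

```python
import numpy as np

# Hypothetical latent codes for two inputs (in practice these would come
# from an encoder network, e.g. z_a = encoder(image_a)).
z_a = np.array([0.2, -1.3, 0.7])
z_b = np.array([1.1, 0.4, -0.5])

def interpolate(z1, z2, t):
    """Linear interpolation in latent space: t=0 gives z1, t=1 gives z2."""
    return (1 - t) * z1 + t * z2

# The "middle" point between the two codes. In a plain autoencoder,
# decoding this may produce garbage; a VAE's regularized (continuous)
# latent space makes such midpoints decode to plausible outputs.
midpoint = interpolate(z_a, z_b, 0.5)
print(midpoint)
```

If simple interpolation still fails, the usual fix is not a fancier interpolation but a better-structured latent space (e.g. training a VAE), as the reply suggests.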

  • @ShadArfMohammed
    @ShadArfMohammed 4 months ago

    Thanks for this wonderful content.

  • @DavidW.-is3wb
    @DavidW.-is3wb 6 months ago

    Could you make a video on common dimensionality reduction methods like PCA and projection (linear discriminants), etc.? I've always been interested in when one should be applied but not another. Anyways, nice video, very underrated! Deserves more exposure! T^T

    • @Deepia-ls2fo
      @Deepia-ls2fo  6 months ago +1

      Thank you ! Yep, that's the plan for the very next video: it will be an explanation of how several visualization methods work, probably covering PCA, t-SNE and UMAP.

  • @samson6707
    @samson6707 6 months ago

    Great video. I knew about encoders from the transformer model, where the optimization criterion for embedding is the output of the decoder for the classification/generation task, measured by e.g. cross-entropy loss, and I know about word2vec, where the optimization criterion is dot-product similarity of co-occurring words. I did not know that in autoencoders the optimization criterion is minimizing the loss over reconstructing the original input. Nice.

  • @stormaref
    @stormaref 4 months ago

    Nice video, keep it up

  • @GabrieleBandino
    @GabrieleBandino 5 months ago

    Great video! Are you planning on releasing the code used for it?

    • @Deepia-ls2fo
      @Deepia-ls2fo  5 months ago

      Thank you ! Yes, I'll make a GitHub page for the channel; I'll put the link in the description when it's done.

  • @rishidixit7939
    @rishidixit7939 1 month ago

    For applications like domain adaptation and image colorization, what does the loss function look like for an autoencoder? Also, you said that the MSE loss is used, but in that case a trivial solution exists where the image is copied pixel by pixel and the network learns nothing. How is that problem taken care of?

    • @Deepia-ls2fo
      @Deepia-ls2fo  1 month ago

      @@rishidixit7939 Hi, I'm not familiar with those two tasks, but for image colorization an MSE would probably do just fine?
      As for preventing the network from simply copying the image pixel by pixel, we have the bottleneck layer! Remember that this layer has far fewer neurons than there are pixels, so you can't just "copy" the values :)
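The bottleneck argument in the reply above can be made concrete with a toy sketch (my own illustration, not from the video): the shapes alone make a pixel-by-pixel copy impossible, because only two numbers survive the encoder. Weights here are random, i.e. untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undercomplete autoencoder: 8 input "pixels", a 2-neuron bottleneck.
W_enc = rng.normal(size=(2, 8))   # encoder weights: 8 -> 2
W_dec = rng.normal(size=(8, 2))   # decoder weights: 2 -> 8

x = rng.normal(size=8)            # input image, flattened
z = np.tanh(W_enc @ x)            # latent code: only 2 values survive
x_hat = W_dec @ z                 # reconstruction from the bottleneck

# The MSE reconstruction loss that training would minimize. Because z has
# fewer dimensions than x, the identity map x_hat = x is unreachable, so
# the network must learn a compressed representation instead.
mse = np.mean((x - x_hat) ** 2)
print(z.shape, mse)
```

Training would adjust `W_enc` and `W_dec` by gradient descent on `mse`; the bottleneck, not the loss, is what rules out the trivial copy.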

  • @StreetPhilosophyTV
    @StreetPhilosophyTV 6 months ago

    Great work!

  • @suryavaraprasadalla8511
    @suryavaraprasadalla8511 10 days ago

    The audio is out of sync with the video around 4:09!

  • @sharjeel_mazhar
    @sharjeel_mazhar 4 months ago

    Can you make a video on RNN and its variants?

    • @Deepia-ls2fo
      @Deepia-ls2fo  4 months ago +1

      Hi Sharjeel thanks for your comment !
      RNN and other auto-regressive models are definitely on my to-do list. :)

  • @samson6707
    @samson6707 6 months ago +1

    8:02 Principal Component Analysis? 😉

    • @andywub
      @andywub 6 months ago

      or t-SNE/UMAP

  • @Aften_ved
    @Aften_ved 1 month ago

    4:10 Latent Space.

  • @kyohmin
    @kyohmin 1 month ago

    The GOAT

  • @ashkankarimi4146
    @ashkankarimi4146 6 months ago

    Please create more videos!

  • @carsx7824
    @carsx7824 6 months ago

    Do you use an AI voiceover? Great video btw

    • @Deepia-ls2fo
      @Deepia-ls2fo  6 months ago

      Thank you ! Indeed the voiceover is generated by an AI, but it is my own voice that I cloned. I'm using ElevenLabs. Did that annoy you or take you out of the video? :(

  • @wobb_
    @wobb_ 6 months ago +1

    My name jeff.