Brilliant explanations and huge work on the video! Really appreciated.
This is so beautifully done! What you emphasize is why the methods work, based on data-internal correlation, not just how the machines described, DAE and VAE, calculate step by step. Thank you!!
Very intuitive explanation, thank you Luis for making this topic easy to understand.🙂
Thank you so much! So clear and helpful! Really great job.
Brilliant explanation
Really beautiful explanations & examples, very well explained with animations.
So good. Thank you.
Love your videos, I was just about to look into this subject!
Definitely the best!!
Very nice explanation for beginners!
Thank you! :)
Great video!
As always your video is a masterpiece !
Thank you so much, glad you liked it! :)
amazing explanation, thank you
Thanks, that was really helpful !!!
super nice! Thanks a lot!
Thank you, glad you liked it!
🙂Excellent!
great as usual!
Thank you Samir!
Make videos on Graph Neural Networks, please.
Thank you for your great video. Just a quick note that at 13:08, we get the value of -1 and not 1.
Yikes, you're right! Thank you for the correction!
I understand how a VAE is trained in an unsupervised way. How is a DAE trained? Is it done in the same way?
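For anyone wondering the same thing, here is the rough idea: a DAE is also trained without labels, in a self-supervised way. The input is corrupted with noise and the network is trained to reconstruct the clean original. A minimal PyTorch sketch (the architecture, sizes, and noise level are illustrative assumptions, not taken from the video):

```python
# Minimal denoising-autoencoder training step (illustrative MLP; sizes are made up).
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(64, 784)                     # stand-in for a batch of clean, flattened images
noisy = clean + 0.3 * torch.randn_like(clean)   # corrupt the input with Gaussian noise

recon = model(noisy)            # the network only ever sees the noisy version...
loss = loss_fn(recon, clean)    # ...but is scored against the clean original
loss.backward()
optimizer.step()
```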
3:25 states that VAEs are good at generating high resolution images. In some other videos, I found that this method is not so good for high resolution images.
Total nit:
2:47 "The shrunk data is called the latent space" --> The shrunken data is called the latent (a vector), which exists in whatever d-dimensional latent space you have 👍, right?
Hi, great video, how do you make your amazing illustrations and animations ?
Hello there Mr. Serrano, this is a great, intuitive video, but I have a question. At 28:26, I believe the mean and variance may vary from input to input, right? So doesn't that mean that the purple and green distributions should become dissimilar for each sample inserted? Thank you, Mr. Serrano.
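In case it helps: yes, the encoder outputs a different mean and variance for each input, but the KL term in the loss penalizes every per-sample Gaussian for straying from the standard normal, so they are all pulled toward the same anchor rather than drifting apart. A small sketch, assuming the usual closed-form Gaussian KL and made-up numbers:

```python
# Each input x gets its own mu(x), sigma(x), but the KL term pulls every per-sample
# Gaussian toward N(0, 1), keeping the distributions close to a common anchor.
import torch

mu = torch.tensor([[0.4, -0.2], [1.1, 0.3]])        # hypothetical means for two inputs
log_var = torch.tensor([[-0.1, 0.2], [0.5, -0.3]])  # hypothetical log-variances

# Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims, per sample
kl_per_sample = 0.5 * torch.sum(mu**2 + log_var.exp() - 1.0 - log_var, dim=1)
print(kl_per_sample)  # one KL penalty per input; training drives these toward 0
```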
30:05 has a typo on the graph. q3 should be 0.4 and not 0.3, in order to sum to 1 and also to be in line with the calculations (where 0.4 is used). Just a note :) Great content by the way!
Ahh you're right, thank you so much for the correction! I can't change the video but will add a comment below.
you're saving my graduate research lmao
@SerranoAcademy Where did the first idea/intuition occur for the researcher who proposed that equal diagonal values have the property to denoise? You are just building on the idea; it would be better if you could share more information on where it came from.
At 25:00, shouldn't it be a "large/small loss" instead of a "large/small loss function"? I'm not really sure if I understood this right.
million claps from me
2:14