BRAVO! No one has ever explained the diffusion model in such an easy way, with all the details.
Thank you so much for your kind words! This makes my day!
This is truly a great tutorial video, so well-made. I can't believe it covers so many things in only 17 minutes.
Thanks a lot! Happy that you enjoyed the video!
Thank you for your great work removing the need of the audience to know much prior knowledge before they could enjoy your video. For example, you mentioned maximum likelihood and explain what it is immediately. It is such a challenge to straighten all these in a 17-minute video, but you did a great work. Thank you!
Glad that you liked it! Appreciate your kind words! This made my day!
Incredible explanation with so much detail packed into so little time. Looking forward to more of these!
Thanks, Ayush! Glad that you like it!
This is the best video on diffusion models; I can't even imagine how you were able to distill this much info into 17 minutes.
Glad it was helpful! Thanks a lot!
I can only dream that you were my PhD advisor. This is so nicely explained!
Thank you so much!
Thanks for your efforts in making such a high-quality video!
I like the way you break down such complex ideas in a concise manner and visualize them intuitively and elegantly. I wish I'd had this video six months ago, lol.
Thanks for your kind words! It was a fun video to make, and I also learned a lot about diffusion models through the process.
Thank you for such a great video with all the steps and equations explained so clearly! I was looking for the referenced papers to dive deeper and found them in the video description! I've learned so much through the video! Your students are so lucky to have such a dedicated instructor!
Thanks so much for your kind words!
You are a true educator! Great video!
Thank you so much! Glad that you like the video.
Thanks a lot for the videos! I've been self-studying diffusion models on the side for a few months now and this is the only video I've seen that gives an in-depth yet intuitive explanation of the math.
Glad it was helpful!
I'm building my own diffusion model myself. This is the best breakdown and visualization of the mathematics and implementation. Well done.
Thank you! This comment just made my day!
@Jia-Bin Huang We want to maximize likelihood and also minimize KL divergence so that we can "maximize" the similarity between the two distributions. It is stated the other way round between timestamps 1:19 and 1:21.
Yes! You are right! Maximize likelihood -> Minimize KL divergence -> Maximize similarity between the two distributions.
I got confused with too many negations. :-P
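For anyone following along, the chain rests on a standard identity (nothing specific to the video):

  KL(q || p_theta) = E_{x~q}[log q(x)] - E_{x~q}[log p_theta(x)]

The first term does not depend on theta, so

  argmin_theta KL(q || p_theta) = argmax_theta E_{x~q}[log p_theta(x)]

That is, minimizing the KL divergence to the data distribution q is exactly maximizing the expected log-likelihood, which makes p_theta as similar to q as possible.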
Seriously one of the best educational videos I've ever watched.
Thank you so much!
Very comprehensive and precise. Thanks. Also thanks for covering Tweedie's formula and simplifying the score-based model; that is the most convoluted part in most papers. Looking forward to demystified NeRFs from you!
Glad it was helpful!
Thank you so much for your contribution. It's a tutorial that made diffusion clear to me as a beginner.
You are welcome. Glad it was helpful!
Just one minute into the video, and you know it's extremely well done. Thanks for the video!
Glad you liked it! Thanks so much for the comment!
Really enjoyed watching this video and learned a lot. Hoping for more such videos in the future.
Will do! Stay tuned! 😊
Awesome post, Jiang, thank you so much for the great job!
Anyway, a small comment/question on your video (without too much importance, I assume). At minute 5:56 you say, from a direct derivation of formula (7) in the paper "Denoising Diffusion Probabilistic Models", that mu^hat_t(x_t, x_0) lies on the line joining x_0 and x_t. While this is approximately true for "normal" beta_t schedules, I think the estimated mean, as a function of x_0 and x_t, need not lie exactly on that line, since in general the respective multipliers of x_0 and x_t in that equation need not add up to one.
In fact, with "normal" scheduling, this sum seems to move progressively away from 1 as t increases, so although mu_t remains a simple linear combination of x_t and x_0, it drifts progressively (if only slightly) off this line.
Would you agree with this observation?
Greetings, and again, congratulations on the video and thank you very much for explaining the inner workings of diffusion models to us!
Thank you so much for your comment! You are right! It won't lie exactly on the line when the multipliers don't add up to one.
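If anyone wants to check this numerically, here is a minimal sketch, assuming the standard linear schedule from the DDPM paper (beta from 1e-4 to 0.02, T = 1000); the coefficients of x_0 and x_t in formula (7) sum to slightly less than 1 as t grows:

    import numpy as np

    # Coefficients of x_0 and x_t in the posterior mean (DDPM, formula (7)),
    # assuming the standard linear schedule: beta from 1e-4 to 0.02, T = 1000.
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    for t in [1, 10, 100, 500, 999]:          # 0-indexed timesteps
        ab_t = alpha_bars[t]
        ab_prev = alpha_bars[t - 1]
        coef_x0 = np.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
        coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
        print(t, coef_x0 + coef_xt)           # drifts slightly below 1 as t grows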
Awesome video, hope I'm smarter when I try to rewatch it in 3 months ;)
Glad you liked it! Let me know if you have questions.
Thank you for making such a high quality video! It's very helpful for me to understand the diffusion model!
You're very welcome! Happy that it was helpful!
Great explanation, Jia-Bin! Thanks!
Thanks, Emre!
OK, this is the best video explanation of diffusion models I've seen. An ideal ratio between simplification and depth☺👏
Glad it was helpful! Thank you so much for your kind words!
I agree. The author must have carefully chosen the most efficient way to cut into the complex concept hierarchy, and every single word, to achieve that efficiency.
I appreciated the explanation of conditional generation. Nice job!
Thanks so much! Glad that you like it.
What a timing 🙌 needed this explanation so bad... thanks ✌️
Glad it helps! Thanks a lot!
Top-quality video, I would say. Congratulations! 🎉 More like this would be awesome!
Thank you! Will do!
Best video on diffusion!!
Great! Glad that it’s helpful!
Great video! At 1:21 it should be maximizing the similarity between the two distributions, or minimizing the distance between them.
Thanks for pointing this out! Yes, you are right! It should be *maximizing* the similarity between the two distributions.
Great video, well explained! It leaves a lot of things for me to explore.
Thank you so much!
Amazing work! Thank you for sharing 😀
Thank you! Cheers!
Great explanation
Glad it was helpful!
3:55 Isn't it that we drop the first term because it doesn't depend on θ? q(x_T|x_0) is just an approximation of the true N(0, I).
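For reference, the usual ELBO decomposition (eq. (5) in the DDPM paper, if I recall it right) is

  L = E_q[ D_KL(q(x_T|x_0) || p(x_T))
        + sum_{t>1} D_KL(q(x_{t-1}|x_t, x_0) || p_theta(x_{t-1}|x_t))
        - log p_theta(x_0|x_1) ]

The first (prior matching) term contains no theta, so it can indeed be dropped from the training objective; with a well-chosen noise schedule, q(x_T|x_0) is also very close to N(0, I).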
Great content!
Thanks a lot! Glad you like it!
Like this video so much! It is quite helpful for learning the math behind it, with a lot of humor and fun, as vital as the Gaussian is to diffusion. I wonder what the distribution of Professor Huang's humor is. Thanks for making this video.
Cool! Glad you enjoyed it!
BRO YOU ARE EPIC
Thank you thank you!
Great video
Thank you!
Shout out to NCTU alumni! Great video with so many sound effects and good visualizations and metaphors!
Just wish there were more references for the derivations in the math part, as it's still a bit hard to follow even though I paused the video so many times, haha.
Noted! Thanks a lot for the comment!
Thank you for the great step-by-step explanation. Can you share any good resources and insights for implementing diffusion on my own custom images?
Hi! No problem. I think Hugging Face's Diffusers library probably has the best resources. Check it out: huggingface.co/docs/diffusers/en/index
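To make that concrete, here is a minimal sketch of a single DDPM training step with the Diffusers library; the model size, the schedule, and the random tensor standing in for a batch of your own images are all placeholder choices:

    import torch
    import torch.nn.functional as F
    from diffusers import UNet2DModel, DDPMScheduler

    model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
    scheduler = DDPMScheduler(num_train_timesteps=1000)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    images = torch.randn(8, 3, 64, 64)  # placeholder: a batch of your images scaled to [-1, 1]

    noise = torch.randn_like(images)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (images.shape[0],))
    noisy = scheduler.add_noise(images, noise, timesteps)   # forward diffusion q(x_t | x_0)

    optimizer.zero_grad()
    pred = model(noisy, timesteps).sample                   # the network predicts the added noise
    loss = F.mse_loss(pred, noise)                          # the simple DDPM objective
    loss.backward()
    optimizer.step()

The unconditional image generation training example in the Diffusers docs fills in the data loading and the full training loop.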
I still need to get my head around the math! But like everyone else said, amazing video!!
One question: how do you imagine a distribution of high-resolution images? Would it be like a point in a high-dimensional space, where the coordinates are the intensities of its pixels? And from a high-dimensional noise vector, we move toward a vector on the dataset distribution?
Thanks, looking forward to future videos!
Thanks for the question. I agree that it's kind of difficult to imagine the distribution of images, as it's high-dimensional. For a grayscale 100x100 image, we are talking about a 10,000-dimensional space! And you are right, the "coordinate" in each dimension indicates the intensity of a particular pixel. Diffusion models learn to predict vectors in this space so that we can iteratively push some random noise toward the regions of this high-dimensional space where samples look like real images from the dataset.
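A tiny illustration of that picture (plain NumPy, with a random array standing in for a real image):

    import numpy as np

    image = np.random.rand(100, 100)   # stand-in for a grayscale 100x100 image
    point = image.flatten()            # one point in "image space"
    print(point.shape)                 # (10000,): one coordinate per pixel intensity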
Thanks for the work! If I want to recover x from y = Hx + n (i.e., I have a noisy observation y of x) using diffusion, what should be done? What literature do you know of that has tackled similar problems?
Thanks for the question. Diffusion models have been applied to various image restoration tasks.
The earliest work is probably this one: arxiv.org/pdf/2011.13456 (see Section 5), where they perform restoration conditioned on a noisy/masked image using an unconditionally trained model.
You can also directly train a model for image restoration if you have paired examples. See a recent work here: arxiv.org/abs/2303.11435
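To sketch the general recipe those works build on: alternate an unconditional denoising update with a data-consistency step on y = Hx + n. Here denoise_step is a placeholder for whatever reverse-diffusion update your sampler uses, and H is the known measurement matrix:

    import numpy as np

    def guided_restore(y, H, denoise_step, T=1000, step_size=1.0):
        # Reverse diffusion with a data-consistency correction (rough sketch).
        x = np.random.randn(H.shape[1])        # start from pure noise
        for t in reversed(range(T)):
            x = denoise_step(x, t)             # unconditional prior step (placeholder)
            # Gradient step on ||Hx - y||^2 pulls x toward the measurements.
            x = x - step_size * H.T @ (H @ x - y)
        return x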
My like comes with the 5th Symphony (9:39) 😸🎶
Oh my! Finally someone noticed that! (I spent a lot of time making that, lol)
Can you tell me which topics I need to master to understand the notation?
I believe that some basics of probability would be sufficient to understand the notation.
I think there should be a \nabla log q(x_t) instead of \nabla log p(x_t) in the score matching part.
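For context, denoising score matching is usually set up with the score of the noised data distribution q as the target:

  s_theta(x_t) ≈ \nabla_{x_t} log q(x_t)

trained by minimizing

  E_t E_{x_0, x_t} || s_theta(x_t) - \nabla_{x_t} log q(x_t | x_0) ||^2

where, for the Gaussian forward process q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I),

  \nabla_{x_t} log q(x_t | x_0) = -(x_t - sqrt(alpha_bar_t) x_0) / (1 - alpha_bar_t)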
I have a question: are all the distributions mentioned distributions of continuous variables, since we're using integrals here?
Good question! Yes, the integrals assume continuous variables. I think there have been some developments on discrete variational autoencoders and diffusion models; those methods can deal with discrete variables.
Wish I could hear what you say:
0:36 "this stickholder"?
0:43 "hyber we do not know"
1:13 "just the cadirabigdes"
and so on
You can see the full script by turning on the subtitles/CC. Hope this helps.
@jbhuang0604 I will try, thanks!
Awesome explanation
Thank you!