As I was about to go and generate the avocado armchair, I heard you say no avocado armchair. My disappointment is immeasurable and my day is ruined.
Imagine, our day was ruined too! 😭
Why can't it generate that iconic chair? Paradise lost. We miss those simpler times of that junior AI
🤣🤣
nice and clear
Thank you so much! ☺️
Sorry, the upload seems buggy. Re-uploading did not help. I'll wait to see if this gets better over time.
Did you try turning it off and on again? 🤖
This video offers one of the best explanations for classifier-free guidance.
This is the only video that goes into how OpenAI used text/tokens in combination with the diffusion model in order to achieve such results. That was very helpful.
Thanks!
Thanks a lot! 😀
At 3:55, the two "2"s in "227" are written differently - I have never seen anyone other than myself do this! Cheers, Letitia. Great video.
Amazing production quality! Here we go!!
love your video so much! lots of helpful intuition 🌻🌻💮Thanks ms. coffee bean a lot
Wow what a difference a few months make. Dall-E 2 in April, Midjourney in July, and Stable Diffusion in August.
Hi from the future 😊.
very well explained. love the intuition / comparison piece. send my regards to ms coffee bean :D
Thanks! Ms. Coffee Bean was so happy to read this. :)
Thank you for the first effective high-level explanation of Diffusion I've found. Truly, I do not know how I went so long in this space not knowing about your channel.
Something not stated in the video is that Diffusion Models are WAY easier to train than GANs.
Although it requires you to code the forward and backward diffusion procedures, training is rather stable, which is more gratifying.
Might release a tutorial on training diffusion models on a toy-ish dataset in the near future :)
Great point, thanks! 🎯
Paste the tutorial in the comments, when ready! 👀
That would be a great tutorial! Mine doesn't want to learn MNIST 😅
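For anyone who wants to try a toy version at home while waiting for the tutorial, here is a minimal sketch of one noise-prediction training step, assuming `model(x_t, t)` is any small network (e.g. a tiny U-Net) that returns a tensor shaped like its input. This is an illustrative sketch, not the GLIDE code.

```python
# Minimal sketch of one DDPM-style training step (noise prediction).
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # simple linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product = \bar{alpha}_t

def training_step(model, x0, optimizer):
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))                       # a random timestep per image
    noise = torch.randn_like(x0)                        # the target the network must predict
    ab = alphas_bar[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise    # forward (noising) process in closed form
    loss = F.mse_loss(model(x_t, t), noise)             # plain MSE on the predicted noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

No adversarial game, no discriminator: just an MSE loss, which is a big part of why training feels so much more stable than with GANs.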
WHY is noise added to a perfect image? And why do we reverse it? To get a clear image? We already had a clear image at the beginning.
This video fails to explain it.
@@taseronify Because we train the model on existing images where we know what they should look like. Then, starting from new noise, the model generates new images at test time.
Hi, just curious to know: has any tutorial come up yet?
Love your channel! Cat videos get millions of views. Your videos might get in the thousands of views, but they have a huge impact by explaining high level concepts to people who can actually use them. Please keep up your exceptional work
Wow, thank you! Funny, I was thinking about my videos vs. cat videos very recently in a chat with Tim and Keith from MLST. I remember that part was not recorded. It's nice to read that you had the same thought. :)
This was soo informative. And the humour was spot on!
You explained the CFG so well. I was trying to wrap my head around it for a while!
I love Yannic, but boy do I like your articulate presentation? I think I do
Wow, thanks! I love Yannic too! :)
Nice high-level summary. Thanks!
Just found your channel yesterday and I'm loving it! Way to go !
Glad we found you. 😜
I was waiting for this Leticia, love your channel, thank you
Just started playing around with disco diffusion. This is the best explanation I've found and I love the coffee bean character. Subbed.
Welcome to the coffee drinkers' club! ☕
@@AICoffeeBreak ☕
Thanks, the content on your channel is really well thought out and wonderfully conceived. I really hope the channel grows, and am quite sure Mr RUclips will favour a channel dedicated to the architecture that underpins his existence 😀 I spent some time during lockdown going through many chapters of Penrose's The Road to Reality (one of the best and most difficult books I've ever read) with nothing but calc 1 to 3 and some linear algebra under my belt. I'm very interested in studying ML in my free time as many of the ideas are informed by physics. Thanks again for your educational content, the quality is top notch.
Wonderful review - not only does it capture the essential information, but it is also interspersed with some very good humor. Look forward to more from you!
You always have the best, clear and concise explanations on these topics
Thanks! ☺️
I don't think so. I did not understand why noise is added to a perfect image.
What is achieved by adding noise?
Can anyone explain it please?
@@taseronify We train the model on existing images where we know what they should look like. Then, starting from new noise, the model generates new images at test time.
This is an excellent teaching session. I learned a great deal. Thank you.
I do not personally need another avocado armchair as that is all we ever sit on now in my house. It turns out that avocados are not ideal for chair construction. When the avocado becomes fully ripe the chair loses its furniture-like qualities.
I would like to know whether the smaller, released version of GLIDE is at least useful for understanding the GLIDE architecture and getting a feel for what GLIDE can do.
Haha, your middle line cracked me up.
Regarding your last question, the answer is rather no. Scale enables some capabilities that small data and models simply do not show.
Nice explanation! You got my subscription!
Nice to see you! :)
you can feel this is serious stuff by the workout background music.
Super interesting topic and a very clear video considering how many complex aspects were involved.
14:20 I wonder what GLIDE predicts here on the branch which inputs just noise without the text (at least at the first iteration?).
RE: music. Cannot leave the impression we are talking about unimportant stuff, lol. 😅
RE: Prediction without text from just noise. I think the answer is: something. Like, anything, but always depending on the noise that was just sampled. Different noise => different generations. Being the first step out of 150, this would mean that it basically adds here and there pieces of information that can crystallize in the remaining 149 iterations.
Very nice video! I'm working with flow-based models atm and also came across Lilian Weng's blog post, which is superb. I feel like diffusion models and flow-based models share some similarities. In fact, all generative models share similarities :D
Great explanation. Thank you.
Astoundingly well-explained!
Hehe, thanks! Astoundingly positively impactful comment. ☺️
I'm here to speculate Ms Coffee Bean knew the existence of DALLE 2... Convenient timing...
🤫
Amazing video. All concepts are explained so clearly. "Teeeeeext!" That notation made me laugh. It seems that the classifier-free guidance technique they are using could be used in a lot of other cases where multimodality is required.
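For reference, the classifier-free guidance trick from the video amounts to running the model twice per step, once with the text and once without, and then extrapolating past the text-conditioned prediction with a guidance scale s > 1 (this is the formulation from the GLIDE paper, written from memory):

$$\hat{\epsilon}_\theta(x_t \mid \text{text}) = \epsilon_\theta(x_t \mid \varnothing) + s\,\big(\epsilon_\theta(x_t \mid \text{text}) - \epsilon_\theta(x_t \mid \varnothing)\big)$$

Nothing in this formula is image-specific, which is why it transfers so easily to other conditional generation setups.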
I finally understand diffusion! (Not really, but more so than before)
Very nice video! It's nice to see Diffusion models getting more attention. It seems the coolest AI generated art is all coming from diffusion models these days.
What a great explainer video! Thanks for sharing 🙂
Thanks for the feedback! ☺️
classifier-free guidance is explained well. Thank you
Glad it was helpful!
Great video :)
One small mistake I would like to point out is at 6:30, where the example with the extra arrow is in fact a Markovian structure (Markov random field), but not a chain :)
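To spell out what "chain" means here: in the DDPM forward process each noisy version depends only on the one right before it, so the whole corruption process factorizes as (standard DDPM notation, written from memory):

$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\big)$$

An extra arrow that skips a step breaks exactly this factorization, which is the point the comment above is making.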
Thanks so much for this video and your channel. I really appreciate your explanations, I'm coming at this topic from the art side rather than the technical side so having these concepts explained is very helpful. For the last month I've been producing artwork with Disco Diffusion and it's really a revolution in my opinion. Let me know if you'd like to use any future videos and I can send you a selection.
Hey, write me an email or tell me your Twitter handle.
Nice video! very instructive!
Glide you liked it! 😅
"Is this a weird hack? - Yes, it is!"
Great, as always
At 13:14 - Is this 'clip guided diffusion' done by adding a term to the loss function or via a different method?
It's done by adding an extra term to the generated image during inference. This extra term is computed as the gradient of CLIP's output (the image-text similarity) with respect to the image. It's a bit like DeepDream, if you are old enough to know about it.
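Roughly, that nudge could look like this at each sampling step (hypothetical names, not the actual OpenAI implementation; it only assumes some CLIP-like image encoder and a precomputed text embedding):

```python
# Sketch of CLIP guidance at one sampling step: nudge the current noisy
# sample in the direction that increases the CLIP image-text similarity.
# `clip_image_encoder`, `text_embedding` and `guidance_scale` are
# illustrative stand-ins.
import torch

def clip_guidance_nudge(x_t, text_embedding, clip_image_encoder, guidance_scale=100.0):
    x = x_t.detach().requires_grad_(True)
    image_embedding = clip_image_encoder(x)                          # embed the (noisy) image
    sim = torch.cosine_similarity(image_embedding, text_embedding).sum()
    grad = torch.autograd.grad(sim, x)[0]                            # d(similarity) / d(image)
    return x_t + guidance_scale * grad                               # the "extra term" that gets added
```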
I love your videos! I also love how many comments you respond to...it makes it feel more like a community than other ML channels
The idea of generating globally coherent images via a U-Net is pretty cool - the global image attention part is weird, I'll have to look into it more lol.
From DALL-E 2 it seems another advantage of diffusion models is that they can be used to edit images, because they can modify existing images somehow
Hey, thanks! Yes, we totally forgot to mention how editing can be done: basically, you limit the diffusion process to only the area you want to have edited. The rest of the image is left unchanged.
@@AICoffeeBreak yeah, I agree, your videos are awesome! I just found your channel and it covers so many recent papers! I'm watching a bunch of your videos hahah
And is global image attention covered in some other video?
Thanks for the content!
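Regarding the editing answer above, here is a rough sketch of what "limit the diffusion process to the edited area" can look like at every denoising step (illustrative names; `alphas_bar` is the same cumulative schedule as in the training sketch earlier, and this is a simplification, not the GLIDE inpainting code):

```python
# Sketch of mask-based editing (inpainting) during sampling: after each
# denoising step, overwrite the region we do NOT want to change with an
# appropriately noised version of the original image, so only the masked
# area is actually generated.
import torch

def keep_unedited_region(x_t, x_original, mask, t, alphas_bar):
    ab = alphas_bar[t]
    noised_original = ab.sqrt() * x_original + (1.0 - ab).sqrt() * torch.randn_like(x_original)
    # mask == 1 where edits are allowed, 0 where the original must be kept
    return mask * x_t + (1.0 - mask) * noised_original
```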
Very nice video!
Thank you! Cheers!
Is it possible to create an embedding of an input image using a diffusion model? If the way to do it is to add noise, does the embedding still have interesting properties? I would not think so
Maybe I lack imagination, but I also do not think so. The neural net representations just capture the noise diff, which is not really an image representation.
@@AICoffeeBreak I have another question: is the network used during the denoising part (predicting the noise to remove it) the same at every noise level, or are there N different models, one for each level of noise?
The same model for each step. :)
@@AICoffeeBreak Just for information: there is indeed no single latent space, because the sampling procedure is stochastic. That is why some people proposed a deterministic approach to produce samples from the target distribution, DDIM (denoising diffusion implicit models), which does not require retraining the DDPM but only changes the sampling algorithm, and allows the concept of a latent space and an encoder for diffusion models.
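For readers who want the formula: the deterministic (η = 0) DDIM update, written from memory, first estimates the clean image and then jumps to the previous noise level without adding fresh noise, which is what lets the initial noise act like a latent code:

$$\hat{x}_0 = \frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}}, \qquad x_{t-1} = \sqrt{\bar\alpha_{t-1}}\,\hat{x}_0 + \sqrt{1-\bar\alpha_{t-1}}\,\epsilon_\theta(x_t, t)$$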
How does the U-Net become a Markov chain if there are skip connections?
Can you explain this? I didn't get it exactly
It's not the U-Net that is Markov, but the succession of steps, where at each step you apply a U-Net (or something else).
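A minimal sketch of what that succession of steps looks like in code: the Markov chain is the outer loop over timesteps, and the same network is called at every step, only conditioned on a different t (which also answers the "same model for each step" question above). This uses plain DDPM-style ancestral sampling with a simplified variance choice and the `betas` / `alphas_bar` schedule from the training sketch earlier; it is illustrative, not the GLIDE sampler.

```python
# Minimal sketch of the reverse (denoising) chain.
import torch

@torch.no_grad()
def sample(model, shape, T, betas, alphas_bar):
    x = torch.randn(shape)                                     # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch)                                # one network call per step
        alpha_t = 1.0 - betas[t]
        mean = (x - betas[t] / (1.0 - alphas_bar[t]).sqrt() * eps) / alpha_t.sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise                     # simplified sigma_t = sqrt(beta_t)
    return x
```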
very helpful thank you
Amazing video. Any recommendations for Python code to implement this model with a custom dataset?
I absolutely love your channel and the explanations you provide, thanks for all the great work you put into these videos!
But here I don't fully get the intuition behind the step-wise denoising:
At step T we ask the network to predict the noise from step T-1, correct?
But the noise at step T-1 is indistinguishable from the noise at step T-2, T-3, ... T-n, no?
Let's say we add some random noise only twice: img = (img + noise_1) + noise_2
It seems like a non-identifiable problem! I can imagine we could train the network to predict (noise_1 + noise_2),
but it should be physically impossible to predict which pixels were corrupted by noise_1 and which by noise_2?
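For what it's worth, in the standard DDPM setup the network is never asked to untangle the individual increments. Because sums of Gaussians are Gaussian, the forward process has a closed form in which all the added noise collapses into one variable, and that total is the training target:

$$x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, \mathbf{I})$$

So at step t the network predicts the accumulated ε (the analogue of noise_1 + noise_2), and the reverse update only needs that total, never the separate pieces.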
Letitia, do you have a channel Discord?
Another great video, as usual!
This little bean mutant of yours always puts a smile on my face ☺
Is it possible that it is actually an AI?
For example, a transformer that converts language information into the facial expressions of the animated bean.
That would be so cool 😎
I have a question: I am looking for training methods that are not based on backpropagation.
Specifically, I want to avoid running backwards through the NNW again after the forward pass.
Do you know of any algorithms like this?
Already 2^10 * Thanks in advance 😄
you should do an update video now that dalle 2 and imagen are out and people are hyping them up
We already have a video on Imagen. 😅
Imagen video. ruclips.net/video/xqDeAz0U-R4/видео.html
And a DALL-E 2 secret language video. ruclips.net/video/MNwURQ9621k/видео.html
I would like to give 1000 likes to this video!
Question: Can the NOISE ( input ) be used as a SEED to be highly-deterministic with the diffusion models outputs? (Assuming the trained model (PT or w/e) is the same?)
What is the intuition behind the classifier-free approach?
Does the diffusion model essentially work like Chuck Close’s art method, while CLIP actually finds the requisite parts that are to be put together to create the crazy images? Also, how do you even get an invite to Imagen or Dall-E to test this beyond all the possibly rigged samples they have up.
Can anybody recommend a video that describes how this works in less technical terms? Like explain it to an art major?
bless your soul!
Thanks for this amazing video. Do you know any online course where we can practice with training diffusion models?
thx
You lost me from time to time, but I think I have an overview now. I wish you had explained better how diffusion models decide what they are going to use in their construction of the image. Sure, it goes from noise to image, but if I use Ken Perlin's noise, it doesn't have any image component in it. So how does the diffusion model suck image information out of it?
Teeeeext!
😂
Can someone please explain how exactly this model was inspired by non-equilibrium thermodynamics?
Well I went to have a look at the Glide Text2im. To say I am not impressed would be an understatement. My prompt was "girl with short blonde hair, cherry blossom tattoos, pencil sketch". What did I get back, after 20 minutes? A crude drawing of 2 giraffes. And the one on the left is barely recognisable.
Damn you like Neffex ❤
Neffex is like 10 % of my life.
"open" AI
i'm sorry,
😁 you can decode the architecture of the Meta Llama 3 model
„Open“ai
Why is everything a chain lately... kind of laughable...
i like you
I like you, too
I'm so stupid
Terrible explanation
We really don't need that coffee bean jumping around in the video.
Give me vocal isolation.
Spleeter and uvr are nice, but if image stuff can work this well, apply it to music.
Gogogo