Diffusion models explained. How does OpenAI's GLIDE work?

  • Published: 7 Jan 2025

Comments • 116

  • @Mrbits01
    @Mrbits01 2 years ago +54

    As I was about to go and generate the avocado armchair, I heard you say no avocado armchair. My disappointment is immeasurable and my day is ruined.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +6

      Imagine, our day was ruined too! 😭

    • @johnvonhorn2942
      @johnvonhorn2942 2 years ago +1

      Why can't it generate that iconic chair? Paradise lost. We miss those simpler times of that junior AI

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      🤣🤣

  • @alexvass
    @alexvass 1 year ago +2

    nice and clear

  • @AICoffeeBreak
    @AICoffeeBreak  2 years ago +7

    Sorry, the upload seems buggy. Re-uploading did not help. I'll wait to see if this gets better over time.
    Did you try turning it off and on again? 🤖

  • @LecrazyMaffe
    @LecrazyMaffe 1 year ago +4

    This video offers one of the best explanations for classifier-free guidance.

  • @ElieAtik
    @ElieAtik 2 years ago +3

    This is the only video that goes into how OpenAI used text/tokens in combination with the diffusion model in order to achieve such results. That was very helpful.

  • @MakerBen
    @MakerBen 2 years ago +2

    Thanks!

  • @balcaenpunch
    @balcaenpunch 2 years ago +4

    At 3:55, in "227", the two "2"s are written differently - I have never seen anyone other than myself do this! Cheers, Letitia. Great video.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 2 years ago +6

    Amazing production quality! Here we go!!

  • @r00t257
    @r00t257 2 years ago +3

    Love your video so much! Lots of helpful intuition 🌻🌻💮 Thanks a lot, Ms. Coffee Bean!

  • @phizc
    @phizc 1 year ago +2

    Wow, what a difference a few months make. DALL-E 2 in April, Midjourney in July, and Stable Diffusion in August.
    Hi from the future 😊.

  • @HangtheGreat
    @HangtheGreat 2 years ago +2

    Very well explained. Love the intuition/comparison piece. Send my regards to Ms. Coffee Bean :D

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      Thanks! Ms. Coffee Bean was so happy to read this. :)

  • @Nex_Addo
    @Nex_Addo 2 years ago +6

    Thank you for the first effective high-level explanation of Diffusion I've found. Truly, I do not know how I went so long in this space not knowing about your channel.

  • @CristianGarcia
    @CristianGarcia 2 years ago +52

    Something not stated in the video is that Diffusion Models are WAY easier to train than GANs.
    Although it requires you to code the forward and backward diffusion procedures, training is rather stable, which is more gratifying.
    Might release a tutorial on training diffusion models on a toy-ish dataset in the near future :)
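
    A minimal sketch of what such a training loop can look like (PyTorch-style; the model signature, the linear beta schedule, and all names here are assumptions for illustration, not code from the paper):

    ```python
    import torch
    import torch.nn.functional as F

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # cumulative products of (1 - beta_t)

    def training_step(model, x0, optimizer):
        """One DDPM-style step: noise the images, ask the model for the noise back."""
        b = x0.shape[0]                                     # x0: (B, C, H, W) batch
        t = torch.randint(0, T, (b,))                       # a random timestep per image
        eps = torch.randn_like(x0)                          # the noise to be predicted
        a_bar = alpha_bars[t].view(b, 1, 1, 1)
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps  # closed-form forward diffusion
        loss = F.mse_loss(model(x_t, t), eps)               # regress the added noise
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```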

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +8

      Great point, thanks! 🎯
      Paste the tutorial in the comments, when ready! 👀

    • @MultiCraftTube
      @MultiCraftTube 2 years ago +5

      That would be a great tutorial! Mine doesn't want to learn MNIST 😅

    • @taseronify
      @taseronify 2 years ago +2

      WHY is noise added to a perfect image? And why do we reverse it? To get a clear image? We already had a clear image at the beginning.
      This video fails to explain it.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +7

      @@taseronify Because we train the model on existing images where we know what they should look like. Then, with new noise, the model generates new images at test time.
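
      A rough sketch of that second part (plain DDPM sampling, reusing the imports and schedule names from the training sketch above; an illustration, not GLIDE's exact code):

      ```python
      @torch.no_grad()
      def sample(model, shape):
          """Start from pure noise and denoise step by step: new noise => new image."""
          x = torch.randn(shape)                  # fresh noise seeds a brand-new image
          for t in reversed(range(T)):
              eps_hat = model(x, torch.full((shape[0],), t))
              a, a_bar = 1.0 - betas[t], alpha_bars[t]
              # remove a small amount of the predicted noise (DDPM posterior mean)
              x = (x - betas[t] / (1 - a_bar).sqrt() * eps_hat) / a.sqrt()
              if t > 0:
                  x = x + betas[t].sqrt() * torch.randn_like(x)  # re-inject some noise
          return x
      ```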

    • @RishiRaj-hu9it
      @RishiRaj-hu9it 1 year ago

      Hi, just curious to know if any tutorial has come up?

  • @jonahturner2969
    @jonahturner2969 2 years ago +25

    Love your channel! Cat videos get millions of views. Your videos might only get thousands of views, but they have a huge impact by explaining high-level concepts to people who can actually use them. Please keep up your exceptional work!

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +5

      Wow, thank you! Funny, I was thinking about my videos vs. cat videos very recently in a chat with Tim and Keith from MLST. I remember that part was not recorded. It's nice to read that you had the same thought. :)

  • @alexandrupapiu3310
    @alexandrupapiu3310 2 years ago +2

    This was so informative. And the humour was spot on!

  • @OP-yw3ws
    @OP-yw3ws 1 year ago +2

    You explained the CFG so well. I was trying to wrap my head around it for a while!

  • @samanthaqiu3416
    @samanthaqiu3416 2 years ago +5

    I love Yannic, but boy, do I like your articulate presentation? I think I do.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      Wow, thanks! I love Yannic too! :)

  • @alfcnz
    @alfcnz 2 years ago +3

    Nice high-level summary. Thanks!

  • @amirarsalanrajabi5171
    @amirarsalanrajabi5171 2 years ago +2

    Just found your channel yesterday and I'm loving it! Way to go!

  • @emiliomorales2843
    @emiliomorales2843 2 years ago +5

    I was waiting for this, Letitia. Love your channel, thank you!

  • @_tgwilson_
    @_tgwilson_ 2 years ago +1

    Just started playing around with disco diffusion. This is the best explanation I've found and I love the coffee bean character. Subbed.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      Welcome to the coffee drinkers' club! ☕

    • @_tgwilson_
      @_tgwilson_ 2 years ago +1

      @@AICoffeeBreak ☕
      Thanks, the content on your channel is really well thought out and wonderfully conceived. I really hope the channel grows, and I am quite sure Mr YouTube will favour a channel dedicated to the architecture that underpins his existence 😀 I spent some time during lockdown going through many chapters of Penrose's The Road to Reality (one of the best and most difficult books I've ever read) with nothing but calc 1 to 3 and some linear algebra under my belt. I'm very interested in studying ML in my free time, as many of the ideas are informed by physics. Thanks again for your educational content; the quality is top notch.

  • @Vikram-wx4hg
    @Vikram-wx4hg 2 years ago +1

    Wonderful review - not only does it capture the essential information, it is also interspersed with some very good humor. Looking forward to more from you!

  • @klarietakiba1445
    @klarietakiba1445 2 years ago +3

    You always have the best, clearest, and most concise explanations of these topics.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      Thanks! ☺️

    • @taseronify
      @taseronify 2 years ago

      I don't think so. I did not understand why noise is added to a perfect image.
      What is achieved by adding noise?
      Can anyone explain it, please?

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      @@taseronify We train the model on existing images where we know what they should look like. Then, with new noise, the model generates new images at test time.

  • @RalphDratman
    @RalphDratman 2 years ago +3

    This is an excellent teaching session. I learned a great deal. Thank you.
    I do not personally need another avocado armchair, as that is all we ever sit on now in my house. It turns out that avocados are not ideal for chair construction. When the avocado becomes fully ripe, the chair loses its furniture-like qualities.
    I would like to know whether the smaller, released version of GLIDE is at least useful for understanding the GLIDE architecture and getting a feel for what GLIDE can do.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      Haha, your middle line cracked me up.
      Regarding your last question, the answer is rather no. Scale enables some capabilities that small data and models simply do not show.

  • @daesoolee1083
    @daesoolee1083 2 years ago +2

    Nice explanation! You got my subscription!

  • @marcocipriano5922
    @marcocipriano5922 2 years ago +4

    You can feel this is serious stuff from the workout background music.
    Super interesting topic and a very clear video, considering how many complex aspects were involved.
    At 14:20 I wonder what GLIDE predicts on the branch that inputs just noise without the text (at least at the first iteration?).

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +4

      RE: music. Cannot leave the impression we are talking about unimportant stuff, lol. 😅
      RE: prediction without text from just noise. I think the answer is: something. Like, anything, but always depending on the noise that was just sampled. Different noise => different generations. Being the first step out of 150, this means it basically adds pieces of information here and there that can crystallize in the remaining 149 iterations.
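
      For reference, the two branches are then blended with the classifier-free guidance rule; a minimal sketch (the guidance scale w and the model's cond argument are assumed names):

      ```python
      def cfg_eps(model, x_t, t, text_emb, w=3.0):
          """Classifier-free guidance: extrapolate from the unconditional
          noise prediction toward the text-conditioned one."""
          eps_uncond = model(x_t, t, cond=None)     # the "just noise, no text" branch
          eps_cond = model(x_t, t, cond=text_emb)   # the text-conditioned branch
          return eps_uncond + w * (eps_cond - eps_uncond)
      ```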

  • @DeepFindr
    @DeepFindr 2 years ago +7

    Very nice video! I'm working with flow-based models atm and also came across Lilian Weng's blog post, which is superb. I feel like diffusion models and flow-based models share some similarities. In fact, all generative models share similarities :D

  • @undergrad4980
    @undergrad4980 2 years ago +3

    Great explanation. Thank you.

  • @JosephRocca
    @JosephRocca 2 years ago +4

    Astoundingly well-explained!

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +5

      Hehe, thanks! Astoundingly positively impactful comment. ☺️

  • @ArjunKumar123111
    @ArjunKumar123111 2 years ago +5

    I'm here to speculate that Ms. Coffee Bean knew about the existence of DALL-E 2... Convenient timing...

  • @Micetticat
    @Micetticat 2 years ago +9

    Amazing video. All concepts are explained so clearly. "Teeeeeext!" That notation made me laugh. It seems that the classifier-free guidance technique they are using could be applied in a lot of other cases where multimodality is required.

  • @tripzero0
    @tripzero0 2 years ago +3

    I finally understand diffusion! (Not really, but more so than before.)

  • @Mutual_Information
    @Mutual_Information 2 years ago +10

    Very nice video! It's nice to see Diffusion models getting more attention. It seems the coolest AI generated art is all coming from diffusion models these days.

  • @Yenrabbit
    @Yenrabbit 2 years ago +4

    What a great explainer video! Thanks for sharing 🙂

  • @muhammadwaseem_
    @muhammadwaseem_ 11 months ago +1

    Classifier-free guidance is explained well. Thank you!

  • @shahaffinder5355
    @shahaffinder5355 2 years ago +1

    Great video :)
    One small mistake I would like to point out is at 6:30, where the example with the extra arrow is in fact a Markovian structure (Markov random field), but not a chain :)

  • @spacemanchris
    @spacemanchris 2 years ago +5

    Thanks so much for this video and your channel. I really appreciate your explanations; I'm coming at this topic from the art side rather than the technical side, so having these concepts explained is very helpful. For the last month I've been producing artwork with Disco Diffusion and it's really a revolution, in my opinion. Let me know if you'd like to use any in future videos and I can send you a selection.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +5

      Hey, write me an email or tell me your Twitter handle.

  • @theaicodes
    @theaicodes 2 years ago +4

    Nice video! Very instructive!

  • @sophiazell9517
    @sophiazell9517 2 years ago +2

    "Is this a weird hack? - Yes, it is!"

  • @gergerger53
    @gergerger53 2 years ago +4

    Great, as always

  • @declan6052
    @declan6052 3 months ago +1

    At 13:14 - is this "CLIP-guided diffusion" done by adding a term to the loss function or via a different method?

    • @AICoffeeBreak
      @AICoffeeBreak  3 months ago +1

      It's done by adding an image to the generated image during inference. This extra added image is computed via the gradient with respect to CLIP's output. It's a bit like Deep Dream, if you are old enough to know about it.
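
      A rough sketch of that idea (not GLIDE's exact code; the CLIP-style encoder API and the scale are assumptions):

      ```python
      def clip_guidance_image(x_t, clip_model, text_features, scale=100.0):
          """Compute the extra 'gradient image' that nudges the sample
          toward higher CLIP image-text similarity."""
          x = x_t.detach().requires_grad_(True)
          image_features = clip_model.encode_image(x)   # assumed CLIP-style encoder
          sim = torch.cosine_similarity(image_features, text_features).sum()
          grad = torch.autograd.grad(sim, x)[0]         # d(similarity) / d(image)
          return scale * grad                           # added to the sample during inference
      ```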

  • @Neptutron
    @Neptutron 2 years ago +7

    I love your videos! I also love how many comments you respond to... it makes it feel more like a community than other ML channels.
    The idea of generating globally coherent images via a U-Net is pretty cool - the global image attention part is weird, I'll have to look into it more lol.
    From DALL-E 2 it seems another advantage of diffusion models is that they can be used to edit images, because they can modify existing images somehow.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +4

      Hey, thanks! Yes, we totally forgot to mention how editing can be done: basically, you limit the diffusion process to only the area you want to have edited. The rest of the image is left unchanged.
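
      A minimal sketch of that masked editing (reusing the schedule names from the training sketch above; denoise_step is a hypothetical helper for one reverse-diffusion step):

      ```python
      def inpaint_step(model, x_t, x_known, mask, t):
          """Diffuse only where mask == 1; pin the rest to the original image."""
          eps = torch.randn_like(x_known)
          a_bar = alpha_bars[t]
          # the original image, brought to the current noise level t
          x_known_t = a_bar.sqrt() * x_known + (1 - a_bar).sqrt() * eps
          x_t = denoise_step(model, x_t, t)  # one reverse step (hypothetical helper)
          return mask * x_t + (1 - mask) * x_known_t
      ```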

    • @RfMac
      @RfMac 2 years ago +2

      @@AICoffeeBreak Yeah, I agree, your videos are awesome! I just found your channel and it covers so many recent papers! I'm watching a bunch of your videos hahah
      And is global image attention covered in some other video?
      Thanks for the content!

  • @alexijohansen
    @alexijohansen 2 years ago +4

    Very nice video!

  • @Youkouleleh
    @Youkouleleh 2 years ago +4

    Is it possible to create an embedding of an input image using a diffusion model? If the way to do it is to add noise, does the embedding still have interesting properties? I would not think so.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +3

      Maybe I lack imagination, but I also do not think so. The neural net representations just capture the noise diff, which is not really an image representation.

    • @Youkouleleh
      @Youkouleleh 2 years ago +1

      @@AICoffeeBreak I have another question: is the network used during the denoising part (predicting the noise in order to remove it) the same at every noise level, or are there N different models, one for each noise level?

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      The same model for each step. :)

    • @Youkouleleh
      @Youkouleleh 2 years ago +1

      @@AICoffeeBreak Just for information: there is indeed no single latent space, because the sampling procedure is stochastic. That is why some people proposed a deterministic approach for producing samples from the target distribution, DDIM (denoising diffusion implicit models), which does not require retraining the DDPM but only changes the sampling algorithm, and enables the concept of a latent space and an encoder for diffusion models.
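
      For anyone curious, the deterministic DDIM update (the η = 0 case) can be sketched like this, reusing the schedule names from the sketches above:

      ```python
      @torch.no_grad()
      def ddim_step(model, x_t, t, t_prev):
          """Deterministic DDIM update: same starting noise => same image,
          which is what makes a latent-space view possible."""
          eps_hat = model(x_t, torch.tensor([t]))
          a_t, a_prev = alpha_bars[t], alpha_bars[t_prev]
          x0_hat = (x_t - (1 - a_t).sqrt() * eps_hat) / a_t.sqrt()       # predicted clean image
          return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_hat  # no fresh noise injected
      ```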

  • @bhuvaneshs.k638
    @bhuvaneshs.k638 2 years ago +1

    How does a U-Net become a Markov chain if there are skip connections?
    Can you explain this? I didn't get it exactly.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +2

      It's not the U-Net that is Markov, but the succession of steps, where at each step you apply a U-Net or something else.

  • @theeFaris
    @theeFaris 2 years ago +4

    Very helpful, thank you!

  • @Sutirtha
    @Sutirtha 2 years ago +2

    Amazing video. Any recommendations for Python code to implement this model on a custom dataset?

  • @marcinelantkowski662
    @marcinelantkowski662 2 years ago +4

    I absolutely love your channel and the explanations you provide; thanks for all the great work you put into these videos!
    But here I don't fully get the intuition behind the step-wise denoising:
    At step T we ask the network to predict the noise from step T-1, correct?
    But the noise at step T-1 is indistinguishable from the noise at step T-2, T-3, ... T-n, no?
    Let's say we add some random noise only twice: img = (img + noise_1) + noise_2
    It seems like a non-identifiable problem! I can imagine we could train the network to predict (noise_1 + noise_2),
    but it should be physically impossible to predict which pixels were corrupted by noise_1 and which by noise_2?
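
    For what it's worth, standard DDPM training sidesteps exactly this: the network is never asked to separate noise_1 from noise_2. A sum of Gaussians is again Gaussian, so the whole forward process collapses into a single closed-form jump from x_0 to x_t, and the model only predicts the total accumulated noise. A tiny self-contained sketch (the per-step alphas are assumed values):

    ```python
    import torch

    # Two noising steps collapse into one Gaussian jump, so the model never
    # has to tell noise_1 and noise_2 apart, only their combined effect.
    x0 = torch.randn(3, 64, 64)   # a stand-in "image"
    a1, a2 = 0.99, 0.98           # assumed per-step alphas
    n1, n2 = torch.randn_like(x0), torch.randn_like(x0)

    x1 = a1**0.5 * x0 + (1 - a1)**0.5 * n1
    x2 = a2**0.5 * x1 + (1 - a2)**0.5 * n2

    # Same distribution via one jump: x2 ~ sqrt(a1*a2)*x0 + sqrt(1 - a1*a2)*eps
    a_bar = a1 * a2
    x2_direct = a_bar**0.5 * x0 + (1 - a_bar)**0.5 * torch.randn_like(x0)
    print(x2.std().item(), x2_direct.std().item())  # statistically indistinguishable
    ```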

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 2 years ago +2

    Letitia, do you have a channel Discord?

  • @Jupiter-Optimus-Maximus
    @Jupiter-Optimus-Maximus 1 month ago

    Another great video, as usual!
    This little bean mutant of yours always puts a smile on my face ☺
    Is it possible that it is actually an AI?
    For example, a transformer that converts language information into the facial expressions of the animated bean.
    That would be so cool 😎
    I have a question: I am looking for training methods that are not based on backpropagation.
    Specifically, I want to avoid running backwards through the network again after the forward pass.
    Do you know of any algorithms like this?
    Already 2^10 * thanks in advance 😄

  • @core6358
    @core6358 2 years ago +1

    You should do an update video now that DALL-E 2 and Imagen are out and people are hyping them up.

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      We already have a video on Imagen. 😅

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      Imagen video. ruclips.net/video/xqDeAz0U-R4/видео.html

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      And a DALL-E 2 secret language video. ruclips.net/video/MNwURQ9621k/видео.html

  • @RfMac
    @RfMac 2 years ago +2

    I would like to give this video 1000 likes!

  • @adr3000
    @adr3000 2 years ago

    Question: Can the NOISE (input) be used as a SEED to make the diffusion model's outputs highly deterministic? (Assuming the trained model (.pt or whatever) is the same?)

  • @aungkhant502
    @aungkhant502 2 years ago

    What is the intuition behind the classifier-free approach?

  • @Imhotep397
    @Imhotep397 2 years ago

    Does the diffusion model essentially work like Chuck Close's art method, while CLIP actually finds the requisite parts that are to be put together to create the crazy images? Also, how do you even get an invite to Imagen or DALL-E to test this beyond all the possibly rigged samples they have up?

  • @ithork
    @ithork 2 years ago

    Can anybody recommend a video that describes how this works in less technical terms? Like, explain it to an art major?

  • @lewingtonn
    @lewingtonn 2 years ago

    bless your soul!

  • @aifirst9478
    @aifirst9478 2 years ago

    Thanks for this amazing video. Do you know of any online course where we can practice training diffusion models?

  • @chainonsmanquants1630
    @chainonsmanquants1630 2 years ago +2

    thx

  • @peterplantec7911
    @peterplantec7911 2 years ago

    You lost me from time to time, but I think I have an overview now. I wish you had better explained how diffusion models decide what they are going to use in constructing the image. Sure, it goes from noise to image, but if I use Ken Perlin's noise, it doesn't have any image component in it. So how does the diffusion model pull image information out of it?

  • @BlissfulBasilisk
    @BlissfulBasilisk 2 years ago +5

    Teeeeext!

  • @bgspss
    @bgspss 2 years ago

    Can someone pls explain how exactly this model was inspired by non-equilibrium thermodynamics?

  • @DuskJockeysApps
    @DuskJockeysApps 10 months ago

    Well, I went to have a look at the GLIDE text2im demo. To say I am not impressed would be an understatement. My prompt was "girl with short blonde hair, cherry blossom tattoos, pencil sketch". What did I get back, after 20 minutes? A crude drawing of 2 giraffes. And the one on the left is barely recognisable.

  • @sohambit9393
    @sohambit9393 14 days ago

    Damn, you like Neffex ❤
    Neffex is like 10% of my life.

  • @lendrick
    @lendrick 2 years ago +4

    "open" AI

  • @hoami8320
    @hoami8320 6 months ago

    I'm sorry, 😁 can you decode the architecture of Meta's Llama 3 model?

  • @julius4858
    @julius4858 2 years ago +1

    „Open“ai

  • @DazzlingAction
    @DazzlingAction 2 years ago +2

    Why is everything a chain lately... kind of laughable...

  • @jadtawil6143
    @jadtawil6143 2 years ago +2

    I like you

    • @DerPylz
      @DerPylz 2 years ago +1

      I like you, too

  • @stumby1073
    @stumby1073 2 years ago +1

    I'm so stupid

  • @davide0965
    @davide0965 1 month ago

    Terrible explanation

  • @ujjwaljain6416
    @ujjwaljain6416 2 years ago

    We really don't need that coffee bean jumping around in the video.

  • @diarykeeper
    @diarykeeper 2 years ago

    Give me vocal isolation.
    Spleeter and UVR are nice, but if image stuff can work this well, apply it to music.
    Gogogo