LCM LoRA = Speedy Stable Diffusion!

  • Published: 13 May 2024
  • Yes, you read it right - a LoRA which actually helps you generate images FASTER! In just 1 SECOND (4 steps) you can generate a 1024x1024 image, PLUS - being a LoRA - it works with any model, IMG2IMG, Inpainting, ControlNet, etc, etc!
    Works with Stable Diffusion 1.5, SDXL and SSD-1B.
    Update! In A1111 (which has no native LCM sampler yet), you can lower the LoRA strength to 0.5 for better generations on SDXL!
    Enjoy :)
    Huggingface Blog Post:
    huggingface.co/blog/lcm_lora
    LCM Collection:
    huggingface.co/collections/la...
    SSD-1B - Up to 60% faster than base SDXL and uses less VRAM!
    • Generate up to 60% fas...
    Workflows:
    github.com/nerdyrodent/AVeryC...
    Stable Diffusion - Face + Pose + Clothing - NO training required!
    • Stable Diffusion - Fac...
    == More Stable Diffusion Stuff! ==
    * Installing Python for MS Windows Beginners - • Anaconda - Python Inst...
    * Make an ANIMATED Stable Diffusion generated avatar! - • Create your own animat...
    * Dreambooth Playlist - • Stable Diffusion Dream...
    == Chapters ==
    0:00 LCM LoRA Introduction
    2:45 LCM LoRA for any SD1.5 checkpoint!
    5:00 LCM LoRA for any SD1.5 checkpoint (A1111)
    6:42 LCM LoRA for any SDXL checkpoint!
    8:58 LCM LoRA for any SSD-1B checkpoint!
    10:36 LCM LoRA with AnimateDiff!
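    The recipe above maps directly onto Hugging Face diffusers, as the linked blog post describes: load any SDXL checkpoint, attach the LCM LoRA, swap in the LCM scheduler, and sample 4 steps at low guidance. A minimal sketch, assuming the `latent-consistency/lcm-lora-sdxl` weights from the blog post; imports are kept inside the function so the snippet loads even without the libraries installed:

```python
# Sketch of the LCM-LoRA recipe: any SDXL checkpoint + the LCM LoRA
# + LCMScheduler, 4 steps, low CFG.
BASE_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
LCM_LORA = "latent-consistency/lcm-lora-sdxl"
NUM_STEPS = 4   # the "1 second" generations from the video
GUIDANCE = 1.0  # LCM wants CFG around 1-2, not the usual 7+

def generate(prompt: str):
    # Local imports: needs `pip install torch diffusers` plus a CUDA GPU.
    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
    # Swap the default scheduler for the LCM one.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    # Being a LoRA, this attaches to any SDXL checkpoint.
    pipe.load_lora_weights(LCM_LORA)
    pipe.to("cuda")
    return pipe(prompt, num_inference_steps=NUM_STEPS,
                guidance_scale=GUIDANCE).images[0]
```

    The same pattern covers the SD 1.5 and SSD-1B variants mentioned in the video by swapping in the matching base model and LoRA repo (`lcm-lora-sdv1-5`, `lcm-lora-ssd-1b`); the low guidance_scale matters, since the LCM sampler is tuned for low CFG.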
  • Science

Comments • 92

  • @banzai316
    @banzai316 6 months ago +4

    As good students, we sit in the front row, learning from Nerdy Rodent! Woot woot! 🤭 Looking good.

  • @ratside9485
    @ratside9485 6 months ago +38

    That's truly a significant advancement, and it will likely be refined in the future to produce even better results. Imagine these Lora/AIs becoming faster. We could develop AI filters that transform entire games into photorealism, including old classics. This might happen sooner than we think.

    • @NerdyRodent
      @NerdyRodent  6 months ago +6

      Yup. They’ve only been out a couple of days now, so I expect loads of stuff in the next 48 hours 😆

    • @ratside9485
      @ratside9485 6 months ago

      @@NerdyRodent Have you already tested the new Nvidia plugin TensorRT for Stable Diffusion? It is supposed to double the speed.

  • @asciikat2571
    @asciikat2571 6 months ago +1

    Loving your deep dives into this world man!

  • @Lamson777
    @Lamson777 6 months ago +2

    I was just looking for a way to use it in A1111, and you are the first one to talk about it. Thank you!

  • @jacekfr3252
    @jacekfr3252 6 months ago +4

    Already having fun with those LoRAs. Anyway, I'm in love with your tutorials. They are so entertaining, love your work!

  • @jason-sk9oi
    @jason-sk9oi 5 months ago +2

    Truly amazing 👏

  • @terbospeed
    @terbospeed 6 months ago +3

    Between this and SSD-1B, Stable Diffusion is hitting it out of the park right now..

  • @bjornharms159
    @bjornharms159 6 months ago +4

    Many thanks for this tip.
    I wanted to try out a short animation yesterday with ControlNet Tile, LineArt and TemporalNet (auto1111). A single render image took 25 minutes. With this LoRA it was only 5 minutes per render. So yes, it reduces the render time for video enormously. There is not much difference when generating images, at least on my Mac M1 with 8 GB RAM.

  • @Mike..
    @Mike.. 6 months ago

    Very cool! Thanks for sharing

  • @arizonaphotolife3342
    @arizonaphotolife3342 6 months ago

    This changes everything. If there is a generation site that allows this lora, your credits will go quite the distance :D thanks for the video.

  • @luke2642
    @luke2642 6 months ago +10

    Good video! LCM definitely works well at 3 steps, cfg 1.5 and DPM++ SDE Karras. However, even in auto1111 without the LoRA, if you set the cfg really low (like 1, 2, 3), lots of samplers work at four steps. For four steps, you can try DPM2 at cfg 1.5, DPM2 a at cfg 1.5, DPM++ 3M SDE at cfg 1.5, DPM2 Karras at cfg 2, DPM++ 2S a Karras at cfg 2, DPM++ SDE Karras at cfg 3. They obviously get better with more steps, but they're usable at 4 or 5 steps.

    • @amafuji
      @amafuji 6 months ago

      Those are 2-step samplers: they do two steps for every step you ask for. That's why they're slower.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      I’m sure A1111 will get proper LCM support soon!
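      The low-CFG trick from this thread can be sketched in diffusers terms too, since A1111's sampler names map onto diffusers scheduler settings. A hedged sketch, assuming a hypothetical SD 1.5 checkpoint; `sde-dpmsolver++` with Karras sigmas is diffusers' rough equivalent of A1111's "DPM++ SDE Karras", and no LCM LoRA is loaded here:

```python
# Sketch of few-step sampling WITHOUT the LCM LoRA: a DPM++ SDE Karras
# style scheduler at low CFG stays usable at 4-5 steps.
BASE_MODEL = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint choice
FEW_STEPS = 4
LOW_CFG = 1.5

def sample(prompt: str):
    # Local imports: needs `pip install torch diffusers` plus a CUDA GPU.
    import torch
    from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

    pipe = DiffusionPipeline.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
    # diffusers' spelling of A1111's "DPM++ SDE Karras" sampler:
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config,
        algorithm_type="sde-dpmsolver++",
        use_karras_sigmas=True,
    )
    pipe.to("cuda")
    return pipe(prompt, num_inference_steps=FEW_STEPS,
                guidance_scale=LOW_CFG).images[0]
```

      As the comment notes, results improve with more steps; the cfg values quoted per sampler are the commenter's A1111 findings, not diffusers defaults.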

  • @promptmuse
    @promptmuse 6 months ago +1

    Fantastic video my friend !

  • @Kelticfury
    @Kelticfury 5 months ago +1

    Mind blown.

  • @ShamanicArts
    @ShamanicArts 6 months ago

    Yo, awesome video! Do you have a link to the comparison graph you made? Can't find it on your GitHub.

  • @alpaykasal2902
    @alpaykasal2902 6 months ago

    WHAT???? can't wait to try!

    • @petec737
      @petec737 6 months ago

      Skip it, you'd only be wasting your time. The quality is trash at best.

  • @xmattar
    @xmattar 6 months ago +5

    Ur voice is crazy smooth
    What prompt did u use for it?

    • @NerdyRodent
      @NerdyRodent  6 months ago +3

      Rodent gentlemen with a curly moustache 😉

  • @excido7107
    @excido7107 6 months ago +1

    Is the LCM AnimateDiff workflow the same? I can't seem to find that one on your GitHub :)

  • @gamebugfinder
    @gamebugfinder 6 months ago +3

    Have you tried FreeU? With that and UniPC at even 3 steps, I'd say I'm getting equivalent or even better results, even without the LCM LoRA. Of course, I did fiddle with the FreeU settings.

  • @user-wr2cd1wy3b
    @user-wr2cd1wy3b 5 months ago

    Do you need 12 GB of VRAM to use IP-Adapter? I can't seem to use it on 8 GB.

  • @timeTegus
    @timeTegus 6 months ago +1

    Nice

  • @xilix
    @xilix 6 months ago

    Is that bike for sale anywhere near Massachusetts, mayhaps?

  • @TR-707
    @TR-707 6 months ago +1

    Even regular SDXL seems to only work well for me between 8-12 steps and low control

  • @meyou7041
    @meyou7041 2 months ago

    I'm so new to this. I don't understand how to use it in Automatic1111. Do I just use it as a LoRA? What weight do I set it to?

  • @jibcot8541
    @jibcot8541 6 months ago +3

    Has anyone else put the LCM SDXL Lora into the Lora folder in automatic1111, but then it still doesn't show up in the SDXL Lora list to select with the rest of the Loras?

    • @Snaaaaaaaaaaaaake
      @Snaaaaaaaaaaaaake 6 months ago

      Yep, only visible in 1.5 models, strangely

  • @tokatoofficial
    @tokatoofficial 6 months ago

    Hi guys, it seems like I can't just run it on Mac. Do you know what changes I need to make, or any resources? I couldn't find any on HF or GitHub.

  • @knightride9635
    @knightride9635 6 months ago +1

    Great video. Last week SDXL was taking me 2 min+ to generate one image, now 17 sec...

  • @vesper8
    @vesper8 6 months ago

    Is it normal that I can't see the LoRA in my list of LoRAs in A1111? I downloaded the SDXL LCM LoRA and I'm using it with an SDXL model. Many other LoRAs show up but not the LCM one, even after I refreshed and restarted A1111 many times.

  • @wboumans
    @wboumans 6 months ago +1

    Ow boy, neat

  • @catthing3398
    @catthing3398 6 months ago +2

    I'm attempting to use the workflow you provided, but I cannot find the ModelSamplingDiscrete node, where is it from?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      You’ve got an old version of ComfyUI

    • @catthing3398
      @catthing3398 6 months ago

      @@NerdyRodent Thank you so much!

    • @MrPlasmo
      @MrPlasmo 6 months ago

      @@NerdyRodent I have the same issue. Updated Comfy via the Manager, but still can't find the ModelSamplingDiscrete node... is there a different way to update Comfy without completely reinstalling?

    • @MrPlasmo
      @MrPlasmo 6 months ago

      Never mind... there was a bug that would not let me update, and I had to do a complete reinstall. I can see all the new stuff now.

  • @a.k.amatsu4876
    @a.k.amatsu4876 6 months ago

    The future is here!

  • @rhouvus7250
    @rhouvus7250 5 months ago

    Is there a source for the grid you are showing in the video?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      It's just an x/y grid of steps vs samplers from A1111

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 6 months ago +2

    Actually, this might be pretty good when it comes to video creation, which usually takes a lot of time.
    Also, does this LoRA work with upscaling?

  • @Satscape
    @Satscape 6 months ago

    Just been playing with this, and my potato 4 GB card is magically transformed! Which is bad for my health, as I won't be able to go for long walks while it renders any more. 😁

  • @bellsTheorem1138
    @bellsTheorem1138 6 months ago +2

    I'm not sure I'm downloading the right files. Is it the pytorch_lora_weights.safetensors file? Why are all three named the same?

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      Yes, as mentioned in the video, they all have the same name

    • @bellsTheorem1138
      @bellsTheorem1138 6 months ago

      @@NerdyRodent Sorry I missed that. Thank you :)
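    Since all three LCM LoRAs ship under the identical filename (pytorch_lora_weights.safetensors), one way to avoid the confusion in this thread is to rename each file on download. A small sketch using `huggingface_hub`; the `local_name` scheme and the `models/Lora` destination are illustrative conventions, not anything official:

```python
import shutil
from pathlib import Path

# All three repos ship the same "pytorch_lora_weights.safetensors",
# so give each one a distinct local name derived from its repo id.
LCM_REPOS = {
    "sd15": "latent-consistency/lcm-lora-sdv1-5",
    "sdxl": "latent-consistency/lcm-lora-sdxl",
    "ssd-1b": "latent-consistency/lcm-lora-ssd-1b",
}

def local_name(repo_id: str) -> str:
    """Distinct filename, e.g. 'lcm-lora-sdxl.safetensors'."""
    return repo_id.rsplit("/", 1)[-1] + ".safetensors"

def fetch(variant: str, dest: str = "models/Lora") -> Path:
    # Local import: needs `pip install huggingface_hub` and network access.
    from huggingface_hub import hf_hub_download

    cached = hf_hub_download(LCM_REPOS[variant], "pytorch_lora_weights.safetensors")
    target = Path(dest) / local_name(LCM_REPOS[variant])
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, target)  # copy out of the HF cache under the new name
    return target
```

    Pointing `dest` at an A1111 `models/Lora` folder would make all three show up side by side under readable names.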

  • @ProzacgodAI
    @ProzacgodAI 5 months ago

    What about riffusion?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      Interesting idea - give it a go and let me know!

  • @wagmi614
    @wagmi614 6 months ago +3

    AnimateDiff LCM next, with the new SDXL version

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Ooo, nice!

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 6 months ago

      Which sdxl new version? Is it the one mentioned at the beginning of the video?

  • @dantepowerrr
    @dantepowerrr 6 months ago

    Anyone else getting very blurry images using this? I mean, it's fast, that's nice, but about 90% of the images I made using the LoRA were very blurry. Does this only happen on my side?
    I'm using SD 1.5 on A1111, btw.

    • @TheGalacticIndian
      @TheGalacticIndian 6 months ago

      Same problem here. I suppose fresh AUTOMATIC1111 install might solve the problem... or not.

  • @EdgardMello
    @EdgardMello 6 months ago

    On Macs it only works with the CPU so far, so these LCM LoRAs are no good there :(

    • @EdgardMello
      @EdgardMello 6 months ago

      58 s for 4 steps on a MacBook Pro 14" with 16 GB RAM (2020)

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Is that 10x faster like they say?

  • @email7919
    @email7919 6 months ago

    What gpu do you use?

    • @NerdyRodent
      @NerdyRodent  6 months ago

      It’s a 3090 so if you’ve got a 40 series card expect it to fly!

  • @LouisGedo
    @LouisGedo 6 months ago +1

    👋

  • @carlingo3191
    @carlingo3191 6 months ago +4

    I like your tutorials. I can't stand the Comfy elitism that's all around all of a sudden... but at least you managed to (almost) avoid taking a dig at those who prefer simplicity.

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      I switched to start with as SDXL just failed on A1111, now I’m addicted 😆

    • @carlingo3191
      @carlingo3191 6 months ago +1

      @@NerdyRodent Yea, totally understandable. The developer was hired by SAI, and it definitely caters to the coder/developer types. I have no prob with it, and honestly was never all in on A1111, but I'm betting "Automatic" himself is probably the type of person who'd use Comfy and piss on A1111 and those who still use it... maybe that's why there haven't been any pushed/public updates in over a month now :-/ The Comfy dude's app tag line is a bit cocky too... :-p
      Anyhow, back to trying to figure out AnimateDiff, ha :)

    • @NerdyRodent
      @NerdyRodent  6 months ago

      @@carlingo3191 heh. If you get any decent AnimateDiff stuff let me know as I’m playing around with that too!

  • @tetsuooshima832
    @tetsuooshima832 6 months ago +1

    lol, I tried the SSD-1B-A1111 model in auto1111 (obviously); it took almost 4 min to load, generation was slow (around 25 sec/it) and very ugly, haha. Bad idea, don't do it guys xD No issue in ComfyUI.

  • @yashpatidar.8506
    @yashpatidar.8506 6 months ago

    Is it possible to use LCM with IP-Adapter?

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      Just like I show in the video you mean?

  • @SouleroOfficial
    @SouleroOfficial 3 months ago

    Love your videos! However, my pictures always come out grainy and without any detailed background after following your example.

  • @neofuturist
    @neofuturist 6 months ago

    First 🐭🐭

  • @666Counterforce
    @666Counterforce 6 months ago +6

    Doesn't make a difference with my 3070 and Automatic1111

    • @generalawareness101
      @generalawareness101 6 months ago

      My automatic1111 doesn't even see it in the browser.

    • @666Counterforce
      @666Counterforce 6 months ago

      @@generalawareness101 Did you press "refresh" in the LoRA menu?

    • @generalawareness101
      @generalawareness101 6 months ago

      @@666Counterforce Surely did.

  • @aerofrost1
    @aerofrost1 6 months ago

    *cries in 1050 ti*
    You get 9 iterations per 1 second, I get 1 iteration per 9 seconds lol.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      So down to around 36 seconds, nice!

  • @kallamamran
    @kallamamran 6 months ago +1

    Tried it... The loss in quality is not worth the time you save

  • @user-pc7ef5sb6x
    @user-pc7ef5sb6x 6 months ago

    I see no difference: 14 seconds, 5 steps, 1024x1024, with LCM and without. GPU: 3060, 12 GB VRAM.

  • @petec737
    @petec737 6 months ago +2

    Generating TRASH quality 10x faster lol

  • @macronomicus
    @macronomicus 6 months ago +1

    This is crazy, isn't it? The rest of my PC is new, but my olde 980 Ti is still kicking! It still does all the games I play at 60 fps 1080p, and now gens SD in mere seconds. I can't even use xformers, because even though I compiled a wheel it was slower than without, as others also reported. So I've been crawling with 20 to 40 second gens for 512x768 images; now it's single-digit seconds, 3 to 7 depending on various factors. Crazy!