A look at the NEW ComfyUI Samplers & Schedulers!

  • Published: 28 Jun 2024
  • A whole bunch of updates went into ComfyUI recently, and with them we get a selection of new samplers such as EulerCFG++ and DEIS, as well as the new GITS scheduler. See them all in action, then try it yourself at home!
    Want to support the channel?
    / nerdyrodent
    * DEIS, GITS, iPNDM - github.com/zju-pi/diff-sampler
    * CFG++ - arxiv.org/abs/2406.08070
    == Learn More Stuff! ==
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Installing ComfyUI for Beginners - • How to Install ComfyUI...
    * ComfyUI Workflows for Beginners - • ComfyUI Workflow Creat...
    * Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
    * Make an Animated, Talking Avatar - • Create your own animat...
    * Make A Consistent Character in ANY pose - • Reposer = Consistent S...
  • Science
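For readers new to the samplers mentioned in the description: classifier-free guidance (CFG) is the mechanism the new CFG++ samplers revisit. The CFG++ formulation itself is in the linked paper; the NumPy sketch below only shows the standard CFG baseline it modifies, with toy scalar values standing in for the model's denoising predictions.

```python
import numpy as np

def cfg_combine(uncond, cond, scale):
    """Standard classifier-free guidance (CFG): move the model's
    denoising prediction from the unconditional output toward the
    conditional one, amplified by `scale`."""
    return uncond + scale * (cond - uncond)

# Toy scalar "predictions" just to show the arithmetic:
uncond = np.array([0.2])
cond = np.array([0.8])
guided = cfg_combine(uncond, cond, 7.0)
print(guided)  # the 0.6 gap is amplified 7x on top of the unconditional value
```

At high scales the guided prediction moves far outside the range of either input, which is one source of the over-saturation that CFG++-style methods aim to tame.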

Comments • 38

  • @rogerc7960 4 days ago +18

    Open source software for the win

    • @NerdyRodent 4 days ago +1

      Woo! Yeah!

    • @wakegary 4 days ago +1

      I hope to live in a world where Open Source lives on and blackboxes are still a thing. It's fun when life still has a little mystery. Especially science/tech.

  • @Ethan_Fel 4 days ago +1

    I generally use AYS, but for unsampling, GITS can get away with 2×6 steps (6 unsampling, 6 sampling) at 1080×1920 without any diffusion LoRA, and the image is nearly identical to 20-30 steps with a regular scheduler or 10 steps with AYS.

  • @swannschilling474 4 days ago +1

    Another exquisite and much-needed piece of content to enjoy!! 😊

  • @eyaura. 4 days ago +4

    You need to release an LP with those AI songs.

  • @bgtubber 4 days ago +3

    Interesting. Which samplers are better? I'm a bit of a newbie. I've been using DPM++ 2M Karras and Euler/Euler Ancestral in ComfyUI with the regular KSampler node. Should I switch to using these new samplers and schedulers and what would the benefits be? Any speed or quality improvements? I couldn't understand from the video. I'm mostly doing img2img stuff with controlnets and IP-adapters rather than generating stuff from scratch. Would these benefit this use case?

    • @Ethan_Fel 4 days ago +2

      AYS and GITS converge at a small number of steps without relying on LCM, Hyper, etc. AYS at 10 steps is very good.

    • @bgtubber 4 days ago +2

      @Ethan_Fel Ok, I've just tested this. In my limited time trying it out, it looks like using GITSScheduler with DPMPP 2M vs. not using it gives me a 50-90% speedup at the same quality. I get a similar speedup with AlignYourStepsScheduler (AYS) too. Neat!

    • @Ethan_Fel 4 days ago

      @bgtubber Yeah, both schedulers are great for speed.
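As background for the scheduler comparison in this thread: in ComfyUI, a scheduler simply emits a decreasing list of noise levels (sigmas) for the sampler to step through, and GITS/AYS pick those levels so that far fewer steps suffice. A minimal sketch of the classic Karras-style schedule they are being compared against is below; the formula follows Karras et al., and the default `sigma_min`/`sigma_max` values are common Stable Diffusion figures used here only for illustration, not ComfyUI's exact internals.

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Karras-style noise schedule: n noise levels from sigma_max
    down to sigma_min, with spacing warped by the exponent rho so
    that steps cluster at low noise."""
    ramp = np.linspace(0.0, 1.0, n)
    inv_rho = 1.0 / rho
    min_r, max_r = sigma_min ** inv_rho, sigma_max ** inv_rho
    return (max_r + ramp * (min_r - max_r)) ** rho

sigmas = karras_sigmas(10)
print(sigmas)  # strictly decreasing, from sigma_max down to sigma_min
```

Schedulers like GITS replace this fixed analytic spacing with step positions optimized against the diffusion trajectory, which is why they can hold quality at step counts where a generic schedule degrades.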

  • @Nasrat597 4 days ago +4

    Thanks for the new video. As usual, gold af.
    EDIT: rofl, nice outro

  • @mordokai597 4 days ago +11

    Idk why, but the algorithm has been hating on you lately... I haven't been recommended one of your videos in "a rat's age". That's like "a dog's age", but nerdier xP

    • @fureytha 4 days ago +5

      @mordokai597 There is a subscribe button.

    • @mordokai597 4 days ago +4

      @fureytha No **** Sherlock... I've been subscribed for like 2 years, which is why it's weird I haven't had a video show up in my recommendations for like a month.

    • @eucharistenjoyer 4 days ago +1

      @mordokai597 Same thing around here. Actually, I'm experiencing the same with content from other AI creators.

    • @digital_down 4 days ago +1

      Same here, weird

    • @jasonhemphill8525 4 days ago +1

      Same here

  • @superlucky4499 4 days ago

    How do you get your noodle colors so nice?

  • @michail_777 4 days ago

    Thank you. I have a question: is it possible to add the HighRes-Fix Script to the Custom Sampler? I know you can connect a second KSampler and then the HighRes-Fix Script, but I'd like to be able to do it directly.

  • @lokitsar5799 3 days ago +1

    Hoping that when you said it went up to 11, you were giving a nod to Spinal Tap.

  • @FusionDeveloper 4 days ago

    Maybe you should add a link to what you show at 0:20? It's found via the 2nd link you have in the description.

  • @Art0691p 4 days ago +1

    Thanks for the great, detailed video - but I think we're getting overwhelmed with choices that don't seem to offer a substantial 'reward'. In other words, there's not much difference between them for all that extra time fiddling about with different settings :) I can see the use case for extra speed in video generation - but far less so for static image generation.

    • @NerdyRodent 4 days ago +3

      I’m up for anything that gives me fewer tails 😉

    • @Art0691p 3 days ago

      @NerdyRodent That's a very niche use case :) Keep up the great videos.

  • @andrejlopuchov7972 3 days ago +1

    What about video generation? Nothing faster than LCM for now, is there?

    • @NerdyRodent 3 days ago +1

      Haven’t tried gits with video yet. So many things to try 😃

  • @LouisGedo 4 days ago +1

    👋

  • @mptest7461 4 days ago +4

    Nice, but at the same time I'm scared, because these are just new options to explore by trial and error. So let's spend another half a year generating images with different parameters, then another half a year previewing and picking them. I'm afraid some concept is lost in this nightmare. We needed a tool to make images quickly based on what we think; AI text interpretation and image generation was supposed to do that. But observing all the communication, tools, videos and discussions around it, I see countless hours spent worldwide on trying to deal with its weaknesses.

    Of course the AI/ML direction is desirable, but I believe the future is moving into the 3D domain, because that reflects our world: deep integration of AI with a 3D engine, physics, collision detection, etc. Instead of spending hundreds of hours trying to fix AI artifacts, maybe it's better to spend them on literally manual creation of part of a 3D model for Blender, and then let AI arrange the models in the scene, etc. Combine 3D "thinking" with the 2D approach we have now in AI generators for backgrounds and textures. Take the "woman on grass" example from SD3 Medium: if there were just a customizable, parametrizable 3D model to be posed and placed in the environment by AI, following rules of physics rather than billions of parameters learned from 2D images that only indirectly reflect the mapping from the 3D world to a 2D image, then I believe we could avoid "body horrors" and many other artifacts.

    • @VioFax 2 days ago

      3Dfauna on HF kinda does this. I don't know how to run it, though. Read their paper; you might be interested.

  • @flisbonwlove 4 days ago +1

    A Samplers & Schedulers Xmas!!!! Noice 🙌👌