Exploring ComfyUI Latent Noise Control with Unsampler!

  • Published: 18 Dec 2024

Comments • 111

  • @AlistairKarim
    @AlistairKarim 1 year ago +9

    Dude, you're awesome. Somehow, I learn about all the most exciting stuff from your channel first.

    • @NerdyRodent
      @NerdyRodent  1 year ago +2

      Glad you enjoy the stuffs 😀

  • @ttul
    @ttul 11 months ago +2

    Give the Iterative Mixing Sampler a go too. It’s a more faithful unsampler, using the actual LDM algorithm to generate the noised sequence (see the Batch Unsampler node).
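
    For context, the closed-form forward (noising) process the comment refers to can be sketched in a few lines. This is a minimal illustration assuming the standard DDPM formulation; the function and schedule names are illustrative, not the Batch Unsampler node's actual code:

        import torch

        def batch_unsample(x0: torch.Tensor, alphas_cumprod: torch.Tensor) -> torch.Tensor:
            """Generate the noised latent sequence x_1..x_T from a clean latent x0
            using the closed-form forward process:
                x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
            """
            noised = []
            for alpha_bar in alphas_cumprod:
                eps = torch.randn_like(x0)  # fresh Gaussian noise at each step
                noised.append(alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps)
            return torch.stack(noised)

        # Toy usage: a 4x64x64 latent with a linear beta schedule
        betas = torch.linspace(1e-4, 0.02, 1000)
        sequence = batch_unsample(torch.randn(4, 64, 64), torch.cumprod(1.0 - betas, dim=0))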

  • @jcboisvert1446
    @jcboisvert1446 1 year ago +1

    Thanks

  • @wikidude
    @wikidude 1 year ago +8

    The Corridor guys used the i2i Alternative test to make their first RPS animation. It's a really powerful tool. Glad it made it to comfy in a better implementation.

    • @mich_elle_x
      @mich_elle_x 1 year ago +1

      They used EbSynth as well.

    • @wikidude
      @wikidude 1 year ago

      @@mich_elle_x Not for the first one, IIRC. They did use DaVinci deflicker, though.

  • @dkamhaji
    @dkamhaji 1 year ago +2

    Love it! Can this be applied to AnimateDiff/IPAdapter workflows?

  • @danilsi6431
    @danilsi6431 1 year ago +3

    This channel is always fantastically entertaining 😌 And thanks for putting together the cool stuff on GitHub

  • @swannschilling474
    @swannschilling474 1 year ago +5

    Sweet!! I completely forgot that there had been something like that already a while ago...this one seems to be a lot more powerful! 😊

  • @Afr0man4peace
    @Afr0man4peace 1 year ago +1

    Hi, thanks for this video. I will test this with my new realism SDXL models

  • @bzikarius
    @bzikarius 11 months ago

    Okay, got this, thanks. It works pretty well, though the model impacts the style.

  • @djivanoff13
    @djivanoff13 1 year ago +3

    Why do I have a smaller image at the output? How can I increase it?

  • @bobbyboe
    @bobbyboe 11 months ago

    Excellent, thanks! Finally someone who uses notes to document information inside the workflow - very helpful! I didn't know these notes, although I was already looking around for something like that - perfect! I am using SDXL models in a similar workflow, and it can help to "melt" a cut-out figure into a new environment: unsampling the rough composite automatically made by blending 2 images, then resampling, using "collage of a..." as the initial prompt and "photo of a..." when resampling. If you have some alternative ideas for my "melting a figure into a new background" process, I am always interested, as I try to optimize it. The idea is to change the figure while maintaining the same environment, but have the AI integrate that figure into the background seamlessly.

  • @AC-zv3fx
    @AC-zv3fx 1 year ago +1

    Wow! Since when does it exist? Is it something new? It seems so effective! Great video!

    • @NerdyRodent
      @NerdyRodent  1 year ago

      It’s been out a while, but I had the pack installed for a different node and only just started playing with this specific one as I’ve been playing with noise a lot recently

  • @ДиДи-м3ю
    @ДиДи-м3ю 1 year ago +2

    It would be interesting to see a similar workflow for SDXL models

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      You can change the models to SDXL ones and you'll be good to go :)

    • @jeffbull8781
      @jeffbull8781 1 year ago +2

      @@NerdyRodent Weird, when I run this with SDXL it generates total garbage; it works fine with 1.5... I wonder why?

  • @JanKowalski-ie6nw
    @JanKowalski-ie6nw 1 year ago +1

    Hello, could you make a video about DreamCraft3D, an image-to-3D method that came out a few days ago?

  • @hatuey6326
    @hatuey6326 1 year ago +1

    just awesome thanks !!!

  • @dkamhaji
    @dkamhaji 1 year ago +2

    Also - where do the CN CLIPs get their inputs from? I'm trying to recreate this without the Everywhere nodes, which cause conflicts on my setup :) I'm getting an error at the KSampler stage when the ControlNet modules are turned on, and I have both pos/neg prompts receiving from the model node's CLIP_Out. Is that wrong? I'm not sure what else could be erroring. If I connect the first prompts to the KSampler conditioning instead, it does work. Something about the ControlNet prompts...

  • @HestoySeghuro
    @HestoySeghuro 1 year ago

    Very cool. Will use this for the enhancing workflow I'm developing... does it work for XL?

  • @FrancisHerding
    @FrancisHerding 1 year ago

    How does the 1st step work, where you are not using the Output Control section? What are the pos & neg inputs for the KSampler if the Output Control section has been bypassed? It still works in your video even though it seemingly shouldn't.

  • @ceegeevibes1335
    @ceegeevibes1335 7 months ago

    cool. love unconventional things like this! thanks

  • @Copperpot5
    @Copperpot5 11 months ago +1

    As shit as I am w/ Comfy, and as resistant as I've been to using it, your vids are the only ones I actively go looking for if I need to figure something out with it. Sooo even though I'll likely not find myself comfy w/ Comfy, I figure you earned my $5 a month (+ing your Patreon after commenting)... you're always positive in the comments and break these things down pretty well w/o making unnecessarily long videos. Keep at it. -J

    • @NerdyRodent
      @NerdyRodent  11 months ago +1

      Thanks! I was resistant to comfy to start with as well, but now it does seem comfy 😆

  • @HasanAslan
    @HasanAslan 1 year ago +1

    Worked with LCM with 4 steps, 8 steps in the final one. Good stuff.

    • @NerdyRodent
      @NerdyRodent  1 year ago

      Nice! Was going to test that too 😀

  • @sneedtube
    @sneedtube 1 year ago +2

    Another masterpiece dropped boys 💥

  • @attashemk8985
    @attashemk8985 1 year ago +1

    Thanks a lot for the reminder, it's hard to remember all the SD possibilities)

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      I know right… so many things to test and try!

  • @ruuuuudooooolph
    @ruuuuudooooolph 1 year ago +3

    I don't understand why I am getting weird colors and the image looks incomplete. I am not seeing any errors either. Can someone share the original workflow via GitHub?

  • @hleet
    @hleet 1 year ago +3

    Impressive. Does this replace IPAdapter nodes? I'm not fond of IPAdapter, way too many nodes to use it, you never know which model to load, and it happens to crash a lot :/

  • @JavierGarcia-td8ut
    @JavierGarcia-td8ut 1 year ago +1

    I can't find the workflow on your GitHub, is it not uploaded yet?

  • @geoffphillips5293
    @geoffphillips5293 2 months ago

    Just to save people spending time on this: I couldn't get any decent results with SDXL. I'd always get completely different images out from what went in. But that's with the Unsampler; with other methods that are out there, things aren't so bad, and the ControlNet stuff is helpful.

  • @polystormstudio
    @polystormstudio 1 year ago +3

    I don't think you posted the workflow. The last one in the list is from 3 days ago "SDXL_Reposer_Basic.png"

    • @Gh0sty.14
      @Gh0sty.14 1 year ago +2

      It's there. ctrl+f and search Unsampler and you'll find it.

    • @ceiridge
      @ceiridge 1 year ago +7

      It's the Renoiser.png

  • @niroknox
    @niroknox 1 month ago

    Dude, I learn so much from your videos; I have already created 2 music videos with the stuff I learned from you. Thank you so much for doing this!
    I gotta know - is this really your voice and accent?

    • @NerdyRodent
      @NerdyRodent  1 month ago

      No, I’m not actually British but am in fact a space rodent from Alpha Centauri!

    • @niroknox
      @niroknox 1 month ago

      If this is AI, I have to know which model you used for this voice/accent

    • @niroknox
      @niroknox 1 month ago

      Also, is there a way to commission you (budget is there)? Let me know if you do consulting; I would love to chat.

  • @contrarian8870
    @contrarian8870 7 months ago

    @NerdyRodent Is this specific workflow on your Github? I can't identify it by name...

    • @NerdyRodent
      @NerdyRodent  7 months ago

      Yup, the unsampler one is there

  • @eucharistenjoyer
    @eucharistenjoyer 1 year ago

    Amazing stuff. Have you by any chance developed this :P? It seems to have flown under the radar of most channels.
    Not related, but since you're extremely knowledgeable: I'm not sure if you have done any video showing the "CFG Rescale" node, but do you know how it works?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Dynamic thresholding I covered a while back for A1111 in "AMAZING A1111 Stable Diffusion Extensions You Might Have Missed!"
      ruclips.net/video/tP5yy6A4GJw/видео.html - it’s basically that
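
      For reference, CFG Rescale (from the paper "Common Diffusion Noise Schedules and Sample Steps Are Flawed") rescales the classifier-free-guidance output so its standard deviation matches that of the conditional prediction, then blends the rescaled and original outputs. A minimal sketch of the idea; the names here are illustrative, not the node's actual code:

          import torch

          def cfg_rescale(pos: torch.Tensor, cfg: torch.Tensor, phi: float = 0.7) -> torch.Tensor:
              """Rescale the CFG output to match the conditional prediction's std,
              then blend the rescaled and original outputs by phi."""
              dims = list(range(1, pos.ndim))  # per-sample std over all non-batch dims
              rescaled = cfg * (pos.std(dim=dims, keepdim=True) / cfg.std(dim=dims, keepdim=True))
              return phi * rescaled + (1.0 - phi) * cfg  # partial blend avoids over-flattened images

          # cfg is typically neg + scale * (pos - neg) from classifier-free guidance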

  • @JKG-777
    @JKG-777 1 year ago

    Thanks for the video. Where do I get the ControlNet LoRAs from?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      The resources section has links to the Stability AI Control-LoRAs, SD models and more!

    • @JKG-777
      @JKG-777 1 year ago

      @@NerdyRodent I found them. Thanks.

  • @ddiva1973
    @ddiva1973 1 year ago

    Can you do the Animorphs book cover transformation?

  • @RonnieMirands
    @RonnieMirands 1 year ago

    Another great workflow for free. Amazing! I am getting an error on Zoe Depth Map; just bypassing this node, it works. Maybe it's not installed?

  • @Kikoking-y9b
    @Kikoking-y9b 6 months ago

    I have a question:
    Was it a mistake that you connected the input of the Unsampler from the CLIPTextEncode instead of from the ControlNet output?
    I tried working with your workflow to make a face look angry. The output always had high contrast, which made it look burned, until I changed the positive and negative inputs of the Unsampler.
    You had them connected from the CLIPTextEncode directly to the Unsampler.
    Now I connected the Apply ControlNet output to the Unsampler, and it became very good and normal, without any contrast.

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Good spot 😉

    • @Kikoking-y9b
      @Kikoking-y9b 6 months ago

      @@NerdyRodent Thanks for your reply. Can I ask you another question? I want to learn more things like the Unsampler, to change face details or make changes in the latent space, rather than things like inpainting.
      Do you know what I should learn?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Sounds as if you’d probably like refacer then! Refacer - Painting to Realistic (and Vice-Versa) in ComfyUI
      ruclips.net/video/r7Iz8Ps7R2s/видео.html

    • @Kikoking-y9b
      @Kikoking-y9b 6 months ago

      @@NerdyRodent you are great 😃 thanks 🙏

  • @ffq-ym4wr
    @ffq-ym4wr 3 months ago

    Do you have a download address for the Control-LoRA models?

  • @ДиДи-м3ю
    @ДиДи-м3ю 1 year ago

    Try using a "reference" model + "Canny" for SDXL models. This will give you much more interesting results than the older models (with high "cfg"). Try taking a reference image from the street and writing "snow" in the prompt text...

    • @NerdyRodent
      @NerdyRodent  1 year ago

      Thanks for the tip!

    • @ДиДи-м3ю
      @ДиДи-м3ю 1 year ago

      @@NerdyRodent In principle, you don’t need to redo anything in your workflow (for SDXL); just replace the old models with the new T2I SDXL Line & Depth ones (I checked - everything works well)

  • @boricuapabaiartist
    @boricuapabaiartist 1 year ago +4

    I laughed uncontrollably after the last image generation at the end of the video. Was that still using epic realism, or one of your custom models? That smile had some Grinch vibes too

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      That’s my girlfriend! Also yes, epic realism there 😉

  • @MeMine-zu1mg
    @MeMine-zu1mg 11 months ago

    I get a huge error at the ksampler Advanced node that starts off "mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)"
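
    That error pattern is typically a model mismatch: 77x2048 is SDXL-style text conditioning (77 tokens x 2048 channels), while a 768x320 weight matches an SD 1.5 cross-attention projection, so the workflow is most likely feeding SDXL CLIP output into a 1.5 UNet or ControlNet (or vice versa). A tiny demo reproducing the same failure:

        import torch

        cond = torch.randn(77, 2048)  # SDXL text embedding: 77 tokens x 2048 channels
        to_k = torch.randn(768, 320)  # SD 1.5 cross-attention weight: 768 -> 320
        cond @ to_k  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)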

  • @SixFt12
    @SixFt12 11 months ago

    Zoe-DepthMapPreprocessor and LineArtPreprocessor failed to load and fail to import when using the manager to install missing custom nodes. Is there an alternative for these nodes? How do I download them if there are alternatives? Thanks for any help on this.

    • @NerdyRodent
      @NerdyRodent  11 months ago

      You can drop me a dm on www.patreon.com/NerdyRodent 😀

  • @patheticcoder4081
    @patheticcoder4081 1 year ago

    What's the advantage of adding noise with the KSampler and giving it another prompt?

  • @vindyyt
    @vindyyt 1 year ago

    Am I dumb, or is the only .json workflow in your link for the QR monster, and the rest are just .png files? I can't figure out where to download the ComfyUI workflows

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      You can scroll down to find the workflows 😀

  • @AvizStudio
    @AvizStudio 1 year ago +2

    What's the difference from regular image-to-image?

    • @JustFeral
      @JustFeral 1 year ago +4

      Did you watch the full video?

    • @vintagegenious
      @vintagegenious 1 year ago +1

      @AvizStudio From their GitHub:
      "This node does the reverse of a sampler. It calculates the noise that would generate the image given the model and the prompt."
      Img2img would just generate from the original image, while here you recover the noise that would generate that image; the point is to be able to do variations of that image

    • @AvizStudio
      @AvizStudio 1 year ago

      @@vintagegenious
      Hmm OK interesting

    • @AvizStudio
      @AvizStudio 1 year ago

      ​@vintagegenious
      Is that equivalent to "guessing the seed number of a given picture"? Pretending the picture was generated?

    • @vintagegenious
      @vintagegenious 1 year ago

      ​@@AvizStudio The seed decides the noise you add to the input image latent (with 0.0 denoise you keep only the input image, and with 1.0 it's all noise, so txt2img). Here it gives you the latent, not the seed, so you can consider it as finding the best input-image latent, denoise and seed. I'm not sure if for every latent there exists a seed that would give that latent; if that's the case, then you are right.
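
      To make the quoted GitHub description concrete: unsampling is essentially DDIM inversion - running the deterministic sampler update backwards so the image's latent is walked step by step towards noise, which can then be resampled with a new prompt. A minimal sketch assuming an epsilon-prediction model; eps_model is a stand-in, not the node's actual code:

          import torch

          @torch.no_grad()
          def ddim_invert(x0, eps_model, alphas_cumprod, timesteps):
              """Walk a clean latent x0 back towards noise by running the
              deterministic DDIM update in reverse (timesteps increase)."""
              x = x0
              for t_prev, t in zip(timesteps[:-1], timesteps[1:]):
                  a_prev, a = alphas_cumprod[t_prev], alphas_cumprod[t]
                  eps = eps_model(x, t_prev)  # model's noise prediction at the current step
                  x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()  # predicted clean latent
                  x = a.sqrt() * x0_pred + (1 - a).sqrt() * eps  # re-noise to the next, noisier step
              return x  # approximate "starting noise" for the image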

  • @NeOBRINFO
    @NeOBRINFO 1 year ago

    I can't find this workflow at the link you provided?

  • @AmirBechouch
    @AmirBechouch 1 year ago

    Is there any way to get ComfyUI running on Linux using ROCm 4.0? I believe this is the latest supported version for my RX 580

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I’ve not got an AMD card, but my guess is that it should work just fine! How to Install ComfyUI in 2023 - Ideal for SDXL!
      ruclips.net/video/2r3uM_b3zA8/видео.html

  • @hatuey6326
    @hatuey6326 1 year ago

    It's strange: I don't have the resolution in the Zoe node, so I have an error! I've downloaded the model and still have the error

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Check the troubleshooting section for info on how to fix your local installation. 90% of the time you’ll need to update all 😊

  • @KINGLIFERISM
    @KINGLIFERISM 1 year ago +3

    Was working on exactly this. This is why this rodent... is the man. Now about the work... a lot of contrast needs to be removed.

  • @Herman_HMS
    @Herman_HMS 1 year ago

    My images with ControlNet are coming out extremely polarized and overexposed, even with low strength and negative prompts on a standard 1.5 model. Any advice on how to fix it?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Prompts certainly help for me! Things like “dark” or “high contrast” in +ve, or like I show in the video with -ve prompting

    • @Herman_HMS
      @Herman_HMS 1 year ago

      @@NerdyRodent will try, thanks for reply!

  • @betortas33
    @betortas33 1 year ago

    Does this work with SDXL?

  • @jimdelsol1941
    @jimdelsol1941 1 year ago

    Thanks !

  • @kallamamran
    @kallamamran 7 months ago

    My Unsampler only generates a black image

    • @kallamamran
      @kallamamran 7 months ago

      Changed the sampler. Now it works

  • @Moony_ultimate
    @Moony_ultimate 1 year ago

    Is there a way to use the Unsampler in Automatic1111?

    • @bgtubber
      @bgtubber 5 months ago

      I think "Noise Inversion" in "Tiled Diffusion" does something similar. Look it up. It's in the img2img tab.

  • @generalawareness101
    @generalawareness101 1 year ago

    Works, but regarding the mtp notes thing: it downloaded the missing node, and I could no longer use ComfyUI as it was unresponsive until I deleted it.

  • @bizarreadventurejojos5379
    @bizarreadventurejojos5379 1 year ago +1

    It's a great video, but my result image was 512 x 768 without any error, and it was not upscaled to a higher resolution when I input a 512 x 768 image using your workflow. I don't know how you can input a lower resolution and then output a higher resolution? You said your image was automatically upscaled to 1136 x 1440; I don't know why I can't do that. 😅 thanks

  • @AIWarper
    @AIWarper 11 months ago

    Pro tip: you can create this node yourself using the SamplerCustom node that is native to Comfy.
    It also allows for more customization

  • @LouisGedo
    @LouisGedo 1 year ago

    👋

  • @CoreyJohnson193
    @CoreyJohnson193 1 year ago

    I think you're a great teacher... sort of. I like to build these myself, so the .json or a better explanation of the nodes is necessary. It's frustrating getting the abridged version when I would like more in-depth instructions. Please find time to break these down like other ComfyUI RUclipsrs.

    • @NerdyRodent
      @NerdyRodent  1 year ago +2

      You can indeed save the workflow image provided as a .json file if you like! What is it specifically about the Unsampler node that you'd like to know? It basically does just what I show in the video (and as its name suggests!). As for building your own, check out my ComfyUI Essentials video - ruclips.net/video/VM9snsuoqBc/видео.html

    • @MrSporf
      @MrSporf 1 year ago

      I think you're an amazing teacher. Please keep doing them exactly like you are now, as those long-winded ones are frustrating. Keeping them focused and clear like you do is much better, and thank you for the workflow

    • @CoreyJohnson193
      @CoreyJohnson193 1 year ago

      @@MrSporf Some people need additional support. Luckily, I crafted a better workflow after I realized there were too many nodes on screen. It is able to do everything and requires fewer connections between nodes, using "efficient" nodes instead of the typical ones.

    • @MrSporf
      @MrSporf 1 year ago

      @@CoreyJohnson193 Good for you, well done. I guess you didn't need that extra support after all. Also, the workflow images are much better than the JSON files because you can actually see what is going on in the workflow.

    • @CoreyJohnson193
      @CoreyJohnson193 1 year ago

      @@MrSporf Why not just have both?

  • @TickleMeTimbers
    @TickleMeTimbers 1 year ago

    Honestly she looks completely different, it's not even the same face at all. Everything is very disproportionate from her mouth to her nose to her eyes and basically everything else. It doesn't look like the same person except at a distance if you squint.

    • @tonikunec
      @tonikunec 1 year ago

      You gotta be blind mate... I've got the workflow and it does an excellent job.

    • @TickleMeTimbers
      @TickleMeTimbers 1 year ago

      @@tonikunec Does it do a better job than on the girl shown in the video thumbnail? Because if not, then I can safely say you, sir, are the blind one.