Exploring ComfyUI Latent Noise Control with Unsampler!

  • Published: 16 Dec 2023
  • Unsampler is an awesome ComfyUI node that changes an image back into noise! Why would you want to do that? Well, it’s a fun way to change things and yet still keep much of the original image. Turn paintings into photos, change hair or eye colour, give people different expressions - all with short, simple prompts 😀
    == Links ==
    Workflow: github.com/nerdyrodent/AVeryC...
    == More Stable Diffusion Stuff! ==
    * Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * One image Gets You a Consistent Character in ANY pose - • Reposer = Consistent S...
    == Want to support the channel? ==
    / nerdyrodent
    :)
  • Science
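
The unsample-then-resample idea from the description can be sketched in a few lines. This is a toy, hypothetical illustration of the underlying maths (a deterministic DDIM-style update run backwards to recover noise, then forwards again), not the actual ComfyUI Unsampler code; the schedule, step count, and noise predictor are all invented for the demo:

```python
import math

# Toy 1-D sketch of the unsample -> resample idea behind the Unsampler
# node (illustration only, NOT the ComfyUI implementation).
T = 50
# Cosine noise schedule, kept strictly positive so sqrt() never hits zero.
alpha_bar = [math.cos((t / (T + 1)) * math.pi / 2) ** 2 for t in range(T + 1)]

def eps_model(x, t):
    # Stand-in for a trained U-Net's noise prediction. A constant makes the
    # round trip exact; a real model's x-dependence makes it approximate.
    return 0.3

def ddim_step(x, t_from, t_to):
    # Deterministic DDIM update between arbitrary timesteps:
    # predict x0, then re-noise it to the target timestep.
    a_f, a_t = alpha_bar[t_from], alpha_bar[t_to]
    eps = eps_model(x, t_from)
    x0_pred = (x - math.sqrt(1 - a_f) * eps) / math.sqrt(a_f)
    return math.sqrt(a_t) * x0_pred + math.sqrt(1 - a_t) * eps

x0 = 0.7                      # the "image" latent
x = x0
for t in range(T):            # unsample: image -> noise
    x = ddim_step(x, t, t + 1)
for t in range(T, 0, -1):     # resample: noise -> image
    x = ddim_step(x, t, t - 1)
print(round(abs(x - x0), 6))  # 0.0 - the round trip reproduces the image
```

Changing the prompt (i.e. the noise predictor) only for the resampling pass is what gives edits that still keep much of the original image.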

Comments • 101

  • @AlistairKarim · 4 months ago +9

    Dude, you're awesome. Somehow, I learn about all the most exciting stuff from your channel first.

    • @NerdyRodent · 4 months ago +2

      Glad you enjoy the stuffs 😀

  • @swannschilling474 · 4 months ago +5

    Sweet!! I completely forgot that there was something like that a while ago... this one seems to be a lot more powerful! 😊

  • @wikidude · 4 months ago +7

    The Corridor guys used the i2i Alternative test to make their first RPS animation. It's a really powerful tool. Glad it made it to comfy in a better implementation.

    • @mikhailpetrovich8657 · 4 months ago

      They used EbSynth as well.

    • @wikidude · 4 months ago

      @@mikhailpetrovich8657 Not for the first one, IIRC. They did use the DaVinci deflicker though.

  • @sneedtube · 4 months ago +2

    Another masterpiece dropped boys 💥

  • @Afr0man4peace · 4 months ago +1

    Hi, thanks for this video. I will test this with my new realism SDXL models.

  • @hatuey6326 · 4 months ago +1

    Just awesome, thanks!!!

  • @ttul · 3 months ago +2

    Give the Iterative Mixing Sampler a go too. It’s a more faithful unsampler, using the actual LDM algorithm to generate the noised sequence (see the Batch Unsampler node).

  • @HasanAslan · 4 months ago +1

    Worked with LCM with 4 steps, 8 steps in the final one. Good stuff.

    • @NerdyRodent · 4 months ago

      Nice! Was going to test that too 😀

  • @Copperpot5 · 4 months ago +1

    As shit as I am w/ Comfy, and as resistant as I've been to using it, your vids are the only ones I actively go looking for if I need to figure something out with it. Sooo even though I'll likely never find myself comfy w/ Comfy, I figure you earned my $5 a month (+ing your Patreon after commenting)... you're always positive in the comments and break these things down pretty well w/o making unnecessarily long videos. Keep at it. -J

    • @NerdyRodent · 4 months ago +1

      Thanks! I was resistant to comfy to start with as well, but now it does seem comfy 😆

  • @bobbyboe · 4 months ago

    Excellent, thanks! Finally someone who uses notes to document information inside the workflow - very helpful! I didn't know about these notes, although I was already looking around for something like that - perfect! I am using SDXL models in a similar workflow, and it can help to "melt" a cut-out figure into a new environment... unsampling the "rough" composite made automatically by blending two images, then resampling, using "collage of a..." as the initial prompt and "photo of a..." when resampling. If you have alternative ideas for my "melting a figure into a new background" process, I am always interested, as I'm trying to optimize it. The idea is to change the figure while maintaining the same environment, but have the AI integrate that figure into the background seamlessly.

  • @bzikarius · 3 months ago

    Okay, got this, thanks - it works pretty well, though the model impacts the style.

  • @danilsi6431 · 4 months ago +3

    This channel is always fantastically entertaining😌 And thanks for putting together the cool stuff on git

  • @ceegeevibes1335 · 15 days ago

    cool. love unconventional things like this! thanks

  • @dkamhaji · 4 months ago +2

    Love it! Can this be applied to AnimateDiff/IPAdapter workflows?

  • @KINGLIFERISM · 4 months ago +3

    Was working on exactly this. This is why this rodent... is the man. Now about the work... a lot of contrast needs to be removed.

  • @attashemk8985 · 4 months ago +1

    Thanks a lot for the reminder, it's hard to remember all the SD possibilities :)

    • @NerdyRodent · 4 months ago +1

      I know right… so many things to test and try!

  • @user-rj7ks6ik2s · 4 months ago +2

    It would be interesting to see a similar workflow for SDXL models.

    • @NerdyRodent · 4 months ago +1

      You can change the models to SDXL ones and you'll be good to go :)

    • @jeffbull8781 · 4 months ago +2

      @@NerdyRodent Weird, when I run this with SDXL it generates total garbage; it works fine with 1.5... I wonder why?

  • @HestoySeghuro · 4 months ago

    Very cool. I'll use this for the enhancing workflow I'm developing... does it work for XL?

  • @boricuapabaiartist · 4 months ago +4

    I laughed uncontrollably after the last image generation at the end of the video. Was that still using epic realism, or one of your custom models? That smile had some Grinch vibes too

    • @NerdyRodent · 4 months ago +1

      That’s my girlfriend! Also yes, epic realism there 😉

  • @djivanoff13 · 4 months ago +3

    Why do I have a smaller image at the output? How can I increase it?

  • @jimdelsol1941 · 4 months ago

    Thanks !

  • @RonnieMirands · 4 months ago

    Another great workflow for free. Amazing! I am getting an error on the Zoe Depth Map. Just bypassing this node, it works. Maybe it's not installed?

  • @AC-zv3fx · 4 months ago +1

    Wow! Since when does it exist? Is it something new? It seems so effective! Great video!

    • @NerdyRodent · 4 months ago

      It’s been out a while, but I had the pack installed for a different node and only just started playing with this specific one as I’ve been playing with noise a lot recently

  • @jcboisvert1446 · 4 months ago +1

    Thanks

  • @FrancisHerding · 4 months ago

    How does the first step work, where you are not using the Output Control section? What are the pos & neg inputs for the KSampler if the Output Control section has been bypassed? It still works in your video even though it seems like it shouldn't.

  • @JanKowalski-ie6nw · 4 months ago +1

    Hello, could you make a video about DreamCraft 3D, an image-to-3D method that came out a few days ago?

  • @eucharistenjoyer · 4 months ago

    Amazing stuff. Have you by any chance developed this :P? It seems to have flown under the radar of most channels.
    Not related, but since you're extremely knowledgeable: I'm not sure if you have done any video showing the "CFG Rescale" node, but do you know how it works?

    • @NerdyRodent · 4 months ago +1

      Dynamic thresholding I covered a while back for A1111 in - AMAZING A1111 Stable Diffusion Extensions You Might Have Missed!
      ruclips.net/video/tP5yy6A4GJw/видео.html - it’s basically that

  • @ExplicityDesigns · 4 months ago +2

    Sorry, but in the first half of the video you mention the image getting upscaled... how do you do that with Unsampler?

  • @hleet · 4 months ago +3

    Impressive. Does this replace IPAdapter nodes? I'm not fond of IPAdapter; way too many nodes to use it, you never know which model to load, and it happens to crash a lot :/

  • @ruuuuudooooolph · 4 months ago +2

    I don't understand why I am getting weird colors and the image looks incomplete. I am not seeing any errors either. Can someone share the original workflow on GitHub?

  • @AIWarper · 4 months ago

    Pro tip: you can create this node yourself using the SamplerCustom node that is native to Comfy.
    It also allows for more customization.

  • @dkamhaji · 4 months ago +2

    Also - where do the CN CLIPs get their inputs from? I'm trying to recreate this without the Everywhere nodes, which cause conflicts on my setup :) I'm getting an error at the KSampler stage when the ControlNet modules are turned on, and I have both pos/neg prompts receiving from the model node's Clip_Out. Is that wrong? I'm not sure what else could be erroring. If I connect the first prompts to the KSampler conditioning instead, it does work. Something about the ControlNet prompts...

  • @ddiva1973 · 4 months ago

    Can you do the Animorph book cover transformation?

  • @JavierGarcia-td8ut · 4 months ago +1

    I can't find the workflow on your GitHub; is it not uploaded yet?

  • @NeOBRINFO · 4 months ago

    I can't find this workflow at the link you provided?

  • @patheticcoder4081 · 4 months ago

    What's the advantage of adding noise with the KSampler and giving it another prompt?

  • @mauidudejuno · 4 months ago

    Thanks for the video. Where do I get the ControlNet LoRAs from?

    • @NerdyRodent · 4 months ago +1

      The resources section has links to the stabilityai control loras, sd models and more!

    • @mauidudejuno · 4 months ago

      @@NerdyRodent I found them. Thanks.

  • @SixFt12 · 4 months ago

    Zoe-DepthMapPreprocessor and LineArtPreprocessor fail to load and fail to import when using the Manager to install missing custom nodes. Is there an alternative for these nodes? How do I download them if there are alternatives? Thanks for any help on this.

    • @NerdyRodent · 4 months ago

      You can drop me a dm on www.patreon.com/NerdyRodent 😀

  • @Moony_ultimate · 4 months ago

    Is there a way to use Unsampler in Automatic1111?

  • @youssefLKHL · 4 months ago

    Is there any way to run ComfyUI on Linux using ROCm 4.0? I believe that's the latest supported version for my RX 580.

    • @NerdyRodent · 4 months ago

      I’ve not got an AMD card, but my guess is that should work just fine! How to Install ComfyUI in 2023 - Ideal for SDXL!
      ruclips.net/video/2r3uM_b3zA8/видео.html

  • @polystormstudio · 4 months ago +3

    I don't think you posted the workflow. The last one in the list is from 3 days ago "SDXL_Reposer_Basic.png"

    • @Gh0sty.14 · 4 months ago +2

      It's there. ctrl+f and search Unsampler and you'll find it.

    • @ceiridge · 4 months ago +6

      It's the Renoiser.png

  • @hatuey6326 · 4 months ago

    It's strange: I don't have the resolution in the Zoe node, so I get an error! I've downloaded the model and still have the error.

    • @NerdyRodent · 4 months ago +1

      Check the troubleshooting section for info on how to fix your local installation. 90% of the time you’ll need to update all 😊

  • @contrarian8870 · 20 days ago

    @NerdyRodent Is this specific workflow on your GitHub? I can't identify it by name...

    • @NerdyRodent · 19 days ago

      Yup, the unsampler one is there

  • @betortas33 · 4 months ago

    Does this work with SDXL?

  • @user-rj7ks6ik2s · 4 months ago

    Try using a "reference" model + "Canny" for SDXL models. This will give you much more interesting results than older models (with high "cfg"). Try taking a reference image from the street and writing "snow" in the prompt text...

    • @NerdyRodent · 4 months ago

      Thanks for the tip!

    • @user-rj7ks6ik2s · 4 months ago

      @@NerdyRodent In principle, you don't need to redo anything in your workflow (for SDXL); just replace the old models with the new T2I SDXL Line & Depth ones (I checked - everything works well).

  • @MeMine-zu1mg · 4 months ago

    I get a huge error at the KSampler Advanced node that starts off with "mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)".
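
Those matrix shapes point to mixed model families: SDXL text embeddings are 2048-dimensional, while an SD1.5 U-Net's cross-attention projection expects 768-dimensional context (hence the 768x320 weight). A minimal sketch reproducing the same shape failure, assuming nothing about the specific workflow:

```python
import numpy as np

# SDXL-style conditioning: 77 tokens, 2048-dim (two text encoders concatenated)
text_emb = np.zeros((77, 2048))
# SD1.5 cross-attention projection: expects 768-dim context vectors
w_sd15 = np.zeros((768, 320))

try:
    text_emb @ w_sd15  # inner dimensions 2048 vs 768 do not match
except ValueError as e:
    print("shape mismatch:", e)
```

The usual fix is to make sure the checkpoint, CLIP, and any ControlNet/LoRA models in the workflow all come from the same family (all SD1.5 or all SDXL).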

  • @Herman_HMS · 4 months ago

    My images with ControlNet are coming out extremely polarized and overexposed, even with low strength and negative prompts on a standard 1.5 model. Any advice on how to fix it?

    • @NerdyRodent · 4 months ago +1

      Prompts certainly help for me! Things like “dark” or “high contrast” in +ve, or like I show in the video with -ve prompting

    • @Herman_HMS · 4 months ago

      @@NerdyRodent will try, thanks for reply!

  • @vindyyt · 4 months ago

    Am I dumb, or is the only .json workflow in your link the one for the QR monster, and the rest are just .png files? I can't figure out where to download the ComfyUI workflows.

    • @NerdyRodent · 4 months ago +1

      You can scroll down to find the workflows 😀

  • @AvizStudio · 4 months ago +2

    What's the difference from regular image-to-image?

    • @JustFeral · 4 months ago +4

      Did you watch the full video?

    • @vintagegenious · 4 months ago +1

      @AvizStudio From their GitHub:
      "This node does the reverse of a sampler. It calculates the noise that would generate the image given the model and the prompt."
      Img2img would just take the original image to generate from, while here you take the noise that would generate that image; the point is to be able to do variations of that image.

    • @AvizStudio · 4 months ago

      @@vintagegenious
      Hmm OK interesting

    • @AvizStudio · 4 months ago

      @vintagegenious Is that equivalent to "guessing the seed number of a given picture"? Pretending the picture was generated?

    • @vintagegenious · 4 months ago

      @@AvizStudio The seed decides the noise you add to the input image latent (with 0.0 denoise you keep only the input image, and with 1.0 you have only noise, so txt2img). Here it gives you the latent, not the seed, so you can think of it as finding the best input image latent, denoise and seed. I'm not sure whether for every latent there exists a seed that would give that latent; if there is, then you are right.

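The distinction drawn in this thread can be shown numerically. This is a toy, hypothetical sketch (not ComfyUI code; `x0`, `a`, and the "recovered" noise value are invented): img2img starts from the image blended with seed-dependent random noise, while unsampling computes the one noise value that denoises back to exactly this image:

```python
import math
import random

x0 = 0.7          # the "image" latent
a = 0.5           # alpha_bar at the chosen denoise strength

def img2img_start(seed):
    # img2img starting point: blends in RANDOM noise, so it depends on the seed
    random.seed(seed)
    noise = random.gauss(0.0, 1.0)
    return math.sqrt(a) * x0 + math.sqrt(1 - a) * noise

def unsampled_start(eps):
    # unsampler starting point: eps is recovered by inversion, not drawn
    return math.sqrt(a) * x0 + math.sqrt(1 - a) * eps

eps_recovered = 0.25                         # pretend inversion result
xt = unsampled_start(eps_recovered)
x0_rec = (xt - math.sqrt(1 - a) * eps_recovered) / math.sqrt(a)

print(img2img_start(1) == img2img_start(2))  # False: seed-dependent start
print(abs(x0_rec - x0) < 1e-9)               # True: exact reconstruction
```

So unsampling hands you the specific noisy latent (not a seed), which is why prompt edits during resampling stay close to the original image.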
  • @kallamamran · 8 days ago

    My Unsampler only generates a black image

    • @kallamamran · 8 days ago

      Changed the sampler. Now it works

  • @LouisGedo · 4 months ago

    👋

  • @bizarreadventurejojos5379 · 4 months ago +1

    It's a great video, but my result image was 512 x 768 without any error and didn't upscale to a higher resolution when I input a 512 x 768 image using your workflow. I don't know how you can input a lower resolution and then output a higher resolution? You said your image was automatically upscaled to 1136 x 1440; I don't know why I can't do that. 😅 Thanks

  • @generalawareness101 · 4 months ago

    Works, but as for the mtp notes thing: it downloaded the missing node and I could no longer use ComfyUI, as it was unresponsive until I deleted it.

  • @Sandy5of5 · 4 months ago +7

    Morgan would be a much better candidate, but he's too smart to ruin his life.

    • @amafuji · 4 months ago +1

      He's good at pretending to be God. He would be excellent at pretending to be president too

  • @CoreyJohnson193 · 4 months ago

    I think you're a great teacher... sort of. I like to build these myself, so the .json or a better explanation of the nodes is necessary. It's frustrating getting the abridged version when I would like more in-depth instructions. Please find time to break these down like other ComfyUI YouTubers.

    • @NerdyRodent · 4 months ago +2

      You can indeed save the workflow image provided as a .json file if you like! What is it specifically about the Unsampler node that you'd like to know? It basically does just what I show in the video (and as its name suggests!). As for building your own, check out my ComfyUI Essentials video - ruclips.net/video/VM9snsuoqBc/видео.html

    • @MrSporf · 4 months ago

      I think you're an amazing teacher. Please keep doing them exactly like you are now; the long-winded ones are frustrating. Keeping them focused and clear like you do is much better, and thank you for the workflow.

    • @CoreyJohnson193 · 4 months ago

      @@MrSporf Some people need additional support. Luckily, I crafted a better workflow after I realized there were too many nodes on screen. It is able to do everything and requires fewer connections between nodes by using "efficient" nodes instead of the typical ones.

    • @MrSporf · 4 months ago

      @@CoreyJohnson193 good for you, well done. I guess you didn't need that extra support after all. Also, the workflow images are much better than the json files because you can actually see what is going on in the workflow.

    • @CoreyJohnson193 · 4 months ago

      @@MrSporf Why not just have both?

  • @TickleMeTimbers · 4 months ago

    Honestly she looks completely different, it's not even the same face at all. Everything is very disproportionate from her mouth to her nose to her eyes and basically everything else. It doesn't look like the same person except at a distance if you squint.

    • @tonikunec · 4 months ago

      You gotta be blind mate... I've got the workflow and it does an excellent job.

    • @TickleMeTimbers · 4 months ago

      @@tonikunec does it do a better job than the girl shown in the video thumbnail? Because if not, then I can safely say you sir are the blind one.