This trick is incredible for adding details: you can crank the middle sampler's step count and CFG to add tons of noise, then use the final sampler to remove it.
Plays really nicely with SDXL Lightning. It's like a pseudo add-detailer.
Thanks for sharing!
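For anyone who wants to try this, here is a minimal sketch of the three KSampler Advanced setups as I understand the trick. All the concrete numbers (25 total steps, the CFG values) are my own assumptions for illustration, not values from the video:

```python
# Sketch of the three-sampler detail trick (illustrative numbers only).
# All three nodes share the same total step count so they sample the
# same noise schedule; only the start/end windows and CFG differ.
total_steps = 25

sampler_1 = dict(add_noise="enable",  cfg=8.0,  steps=total_steps,
                 start_at_step=0, end_at_step=4,            # runs steps 0..3
                 return_with_leftover_noise="enable")

sampler_2 = dict(add_noise="disable", cfg=12.0, steps=total_steps,  # cranked CFG
                 start_at_step=5, end_at_step=6,            # runs step 5 once; step 4 is skipped
                 return_with_leftover_noise="enable")

sampler_3 = dict(add_noise="disable", cfg=8.0,  steps=total_steps,
                 start_at_step=5, end_at_step=total_steps,  # runs step 5 again, then 6..24
                 return_with_leftover_noise="disable")
```

As I read the comment above, raising the middle sampler's step count shifts its window toward a noisier part of the schedule, and the higher CFG amplifies that extra "detail noise"; the final sampler then cleans it up.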
Hi Dr.lt.data, thank you for adding my node to the Manager ❤ and thank you for all the awesome tools, workflows, videos, and all the unseen work you do. You are awesome. 🙏
You can download ComfyUI from here:
github.com/comfyanonymous/ComfyUI
And the workflow from here:
github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Experimental/workflow/ksampler_advanced.png
How much would you charge to take the SD extension multidiffusion-upscaler-for-automatic1111 and turn it into a ComfyUI node? The current node made for ComfyUI doesn't function well and can't even work with more than one ControlNet.
Thank you for sharing. How can I bring in photos from outside (load external images)?
I also noticed that it gives a nice touch to images - especially with SDXL - if you split the sampling process, beginning with a higher and ending with a lower CFG. Would be great if a sampler could do that on its own..
Not quite sure if I understand the 3-sampler sequence correctly. Correct me if I'm wrong:
Sampler1: 0 1 2 3
Sampler2: 5
Sampler3: 5 6 7 8 ... 24
right?
That's correct. It skips sampling step 4 and instead samples step 5 twice :)
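Until a sampler can schedule CFG on its own, the high-to-low CFG split mentioned above can be approximated the same way: chain KSampler Advanced nodes over consecutive step windows with a descending cfg. A rough sketch with illustrative numbers:

```python
# Descending-CFG split across chained KSampler Advanced nodes:
# same total schedule, back-to-back step windows, falling CFG.
total_steps = 24
stages = [
    dict(start_at_step=0,  end_at_step=8,           cfg=10.0),  # early steps: high CFG
    dict(start_at_step=8,  end_at_step=16,          cfg=7.0),
    dict(start_at_step=16, end_at_step=total_steps, cfg=4.0),   # late steps: low CFG
]
```

Only the first stage would enable add_noise; the later stages pass the leftover noise through, exactly as in the three-sampler trick.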
Your doubts come from the misleading name of the field in KSampler Advanced: "end_at_step". "at" implies that the specified step will be executed (as with the preceding field "start_at_step"). This field should really be called "end_before_step".
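Put differently, the executed steps form a half-open range: start_at_step is inclusive, end_at_step is exclusive. A quick sanity check against the sequence above:

```python
def executed_steps(start_at_step: int, end_at_step: int) -> list[int]:
    # start_at_step is inclusive; end_at_step is exclusive,
    # which is why "end_before_step" would be the clearer name.
    return list(range(start_at_step, end_at_step))

print(executed_steps(0, 4))   # [0, 1, 2, 3]     -> Sampler1
print(executed_steps(5, 6))   # [5]              -> Sampler2
print(executed_steps(5, 25))  # [5, 6, ..., 24]  -> Sampler3
```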
Shhhh oh man, we have been using this method for a bit. It’s part of the secret sauce 😅
Hey, thanks for what you're doing. I have a question, if you can help me, please. I generate using AnimateDiff with an LCM model (with an input video and input images for the background and character) and use your Detailer nodes. The problem is that if I set the IPAdapter weight to 1, the background and the character turn out well, but the face constantly flickers. And if I set the IPAdapter weight to 0.75, the face turns out fine, but the background constantly drifts.
Is it possible, after the first generation (1st KSampler), to load a new AnimateDiff model so that, with the help of your Detailer, it processes only the face? I tried this, but the 2nd AnimateDiff model placed after the first KSampler conflicts with the first AnimateDiff model placed before it, and the whole face gets distorted. If this could be done somehow, I could set the denoise in the "Detailer for AnimateDiff SEGS" node higher than 0.25 and the face would probably be generated better. And if I do everything with one AnimateDiff model and set the denoise in the "Detailer for AnimateDiff SEGS" node above 0.30, it starts generating the whole image on the face. After the first KSampler, one could also connect new Prompt nodes for the face only and a different SD model.
If you know how to help me with this, please tell me.
Hello!
First of all, I'd like to personally thank you for your incredible work! Your tools are amazing to work with and very intuitive, even for me as a newbie to ComfyUI. Could you help me with one problem though?
Speaking of tools, I'm trying to get familiar with FaceDetailer (pipe) using bbox. With images of a single person I didn't notice any problems, but when I generate a picture with multiple characters, FaceDetailer has trouble detecting multiple faces, even the obvious ones. Sometimes it gets it right, sometimes it doesn't. In the pictures where it doesn't, I have to set the threshold score insanely low, like 0.01, to make FaceDetailer detect the second or third face (and sometimes even that doesn't help). I'm aware I could fix it manually by painting the masks, but I want to understand why it's behaving like that. Would really appreciate your help.
First, check the results detected by BboxDetector or SimpleDetector using SEGS Preview. That's the starting point for diagnosis.
github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/detectors.md
@drltdata Hello! Thank you for your response. I've done that, following the instructions in your video: ruclips.net/video/4IjplfhDU60/видео.html
Unfortunately I still have this problem. Bbox still has trouble detecting a face even at threshold 0.01, while segm detects it at threshold 0.50. Maybe it's a bug? Could I message you by email and send you screenshots of my problem? Oh, and an important note: I observed that when I fed that problematic, partly face-detailed image through once more, it detected the missing face and corrected it. But it had to be done by loading that image again, NOT by running two FaceDetailers one after another in a single run. Thank you again for your help; I await your response.
I'm also wondering: can LoRAs and checkpoints affect FaceDetailer to the extent of causing this problem?
I love all videos about ComfyUI.
Can this somehow be applied to the Detailer nodes?
Should the seeds in the second and third samplers be equal?
Is there any rule for setting the seeds?
When using KSampler Advanced, if add_noise is disabled, the seed has no effect.
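Conceptually (a simplified sketch of the idea, not ComfyUI's actual implementation), the seed is only ever read when fresh noise is generated, so disabling add_noise makes it irrelevant:

```python
import torch

def initial_latent(latent: torch.Tensor, seed: int, add_noise: bool) -> torch.Tensor:
    # With add_noise disabled, the incoming (leftover-noise) latent is
    # passed through untouched and the seed is never consumed.
    if not add_noise:
        return latent
    gen = torch.Generator().manual_seed(seed)
    return latent + torch.randn(latent.shape, generator=gen)
```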
Is it enough to just have two KSamplers, where the 1st ends at the 4th step and the 2nd starts at the 5th? Or is there some particular function to the middle KSampler?
Simply skipping isn't enough to sufficiently denoise, which can result in a noisy output image. So the setup is to skip step 4 and instead do step 5 twice.
@drltdata I see, thanks for clearing that up.
Interesting find !
#NeuraLunk ;)