The variety of upscaling methods can be quite confusing, but this approach seems relatively simple compared to others with numerous complex nodes. It's also quite ingenious. I always support your efforts.
This is very awesome your nodes are time savers.
I do a lot of img2img tiled upscaling and often want to add detail (usually landscapes and architecture). What would be very advantageous would be a way to automatically interrogate each tile. If the prompt is too descriptive and the denoise is strong enough to add detail, the sampler will attempt to create a fractal of the whole image in each tile. If automatic tile interrogation isn't possible, then a manual prompt per tile would be helpful. Perhaps this is already possible and I haven't found it yet? Any workflow advice would be much appreciated. PS: Thanks for all you do!
I'm also considering a method of creating context from a certain range of tiles, based on IPAdapter.
The only concern is that the sampling cost could become much higher. However, I'm considering it from the perspective of providing an option for the best quality.
If you want to apply prompts to tiles manually, you could use the SEGS Filter to decompose all the tiles and run a separate detailer pipeline, but this would be a very painful workflow.
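For the manual per-tile prompt route, here is a minimal plain-Python sketch of the idea: split the tile SEGS into single-tile groups (what the SEGS Filter does in the graph) and feed each group to its own detailer with its own prompt. The node names come from this thread; the data layout and functions below are illustrative stand-ins, not ComfyUI's actual API.

```python
from dataclasses import dataclass

@dataclass
class Seg:
    crop_region: tuple  # (x1, y1, x2, y2) of the tile in the source image
    label: str

def split_tiles(segs):
    """Mimics chaining SEGS filters: one single-SEG group per tile,
    so each tile can feed a separate detailer pipeline."""
    return [[s] for s in segs]

def detail_tile(tile, prompt):
    # In the real graph this would be a detailer node whose positive
    # conditioning comes from a per-tile text encode.
    return f"detail {tile[0].crop_region} with prompt {prompt!r}"

tiles = split_tiles([Seg((0, 0, 512, 512), "tile"),
                     Seg((512, 0, 1024, 512), "tile")])
prompts = ["mossy stone wall, fine texture", "cloudy sky, soft light"]
for tile, prompt in zip(tiles, prompts):
    print(detail_tile(tile, prompt))
```

As the reply says, this scales painfully: every tile that needs a custom prompt becomes another branch in the graph.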
@@drltdata Thanks for your reply. IPAdapter makes sense to narrow the scope to a range of tiles, using a section of the input image as the reference for guidance. Personally I am shooting for the best quality and already run 10+ minute jobs on my RTX 4090 doing tiled sampling and upscaling at higher resolutions.
For now I am using your regional-prompt-by-color workflow, changed to accept multiple hand-painted masks where I can specify areas for prompting (grass, sky, trees, etc.) and add/enhance detail that way (see the sketch below), then tiled upscaling with style prompts only. It works, but is less than ideal. Tile ControlNet helps, but isn't available for SDXL.
I do "remastering" work where I look to improve on low-quality inputs, but it's absolutely vital to maintain the integrity of the underlying design layout and the look of materials. It's always a dance between freedom and constraint.
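The multi-mask regional prompting described above maps roughly onto ComfyUI's core "Conditioning (Set Mask)" and "Conditioning (Combine)" nodes. A minimal sketch, with hypothetical mask filenames and an illustrative set_mask stand-in rather than the nodes' real code:

```python
# Hypothetical hand-painted masks paired with per-region prompts.
regions = {
    "grass_mask.png": "lush green grass, macro detail",
    "sky_mask.png": "dramatic cloudy sky, soft light",
    "trees_mask.png": "dense oak foliage",
}

def set_mask(conditioning, mask, strength=1.0):
    # Stands in for ComfyUI's "Conditioning (Set Mask)" node: the prompt
    # only applies where the mask is painted.
    return {"cond": conditioning, "mask": mask, "strength": strength}

# One masked conditioning per region; in the graph these would be merged
# with "Conditioning (Combine)" before the sampler.
combined = [set_mask(prompt, mask) for mask, prompt in regions.items()]
print(f"{len(combined)} regional conditionings to combine")
```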
@@3DArtistree I added a node called "IPAdapter Apply (SEGS)" to the Impact Pack last night.
However, to use this node, you need to update the Inspire Pack.
I plan to upload a usage video soon, but I’m letting you know in advance.
This video means a lot to me. Thanks.
Thank you, I am a bit upset I did not find this video earlier. I spent hours creating color masks... for human and background, to separate them for generation, and you always had a node for that issue 😅
You can download ComfyUI from here:
github.com/comfyanonymous/ComfyUI
And extension from here:
github.com/ltdrdata/ComfyUI-Impact-Pack
Workflow (you can drag & drop this image):
github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/MakeTileSEGS_upscale.png
Hi, thank you for your great work... but can you please show an example of using the SEGS detailer with FLUX?
@@APCOAG
You can use `Negative Placeholder` like this.
github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/flux-detailer.png
@@drltdata I was asking about the SEGS detailer, not the face detailer, but I will try the same idea. Thank you a lot for your time. BTW, you are a hero 😍
@@APCOAG That is a universal method for Impact and Inspire sampling nodes.
Just use placeholder instead of negative prompt. That's all you need.
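A minimal sketch of the placeholder trick, assuming the point is simply that the detailer's required negative input gets occupied by a dummy value (FLUX-style sampling ignores negative conditioning). The function names below are illustrative, not the Inspire Pack's actual code:

```python
def negative_placeholder():
    # Stands in for the `Negative Placeholder` node's output: a dummy
    # value wired where an encoded negative prompt would normally go.
    return {"placeholder": True}

def detailer(positive, negative):
    # Only the positive prompt steers sampling; the placeholder merely
    # satisfies the node's required negative input.
    neg = "ignored (placeholder)" if negative.get("placeholder") else negative
    return f"sampling with positive={positive!r}, negative={neg}"

print(detailer("sharp, highly detailed portrait", negative_placeholder()))
```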
Thank you for the tutorial. Question: if we want to upscale the output of MagicAnimate, can I use the Detailer for AnimateDiff node, since this Detailer node doesn't take batch images?
how does it compare to other upscale methods?
Is there a way for gendered upscaling using this SEGS method? I saw your video on gendered refinement but am wondering what nodes to use to get it to work for upscaling
You can compose a workflow that sends gender-segregated SEGS to MakeTileSEGS for processing.
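A rough sketch of that composition, assuming each detection arrives already tagged with a gender label (as in the gendered-refinement setup): filter the SEGS by label, then hand each group to MakeTileSEGS for its own detailer pass. These are plain-Python stand-ins, not the nodes' real implementations:

```python
from dataclasses import dataclass

@dataclass
class Seg:
    bbox: tuple  # (x1, y1, x2, y2) of the detected subject
    label: str   # e.g. "man" / "woman" from the upstream classifier

def filter_by_label(segs, label):
    # Mimics a SEGS label filter: keep only detections tagged `label`.
    return [s for s in segs if s.label == label]

def make_tile_segs(segs, tile_size=512):
    # Placeholder for MakeTileSEGS: tiles covering each kept subject.
    return [f"{tile_size}px tiles over {s.bbox}" for s in segs]

detections = [Seg((0, 0, 400, 800), "woman"), Seg((500, 0, 900, 800), "man")]
for label in ("woman", "man"):
    # Each label's tiles would feed a detailer with its own model/prompt.
    print(label, "->", make_tile_segs(filter_by_label(detections, label)))
```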
@@drltdata I'll work on that!
fantastic! Thank you very much!
The workflow from this video is not working as it did before; now it blurs and hides all the details of the image. It seems like something changed in ComfyUI.
Thanks. I've tried this for upscaling up to 4K, but then I started getting artifacts. Do you have recommended settings that might help?
What do you recommend if the person detector is no good for the subject of the image? A monster for example. SAM detector/manual masking instead, then "mask to SEGS" node, then "make tile SEGS" node?
Yup. You can do it.
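A minimal sketch of that chain, with illustrative array math rather than the nodes' actual implementations: a hand-painted (or SAM-generated) mask becomes a SEG bounding box, and overlapping tiles are sliced around it for detailing.

```python
import numpy as np

def mask_to_segs(mask):
    # Rough stand-in for "MASK to SEGS": bounding box of nonzero pixels.
    ys, xs = np.nonzero(mask)
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]

def make_tile_segs(segs, tile=512, overlap=64):
    # Placeholder for "Make Tile SEGS": overlapping tiles across each SEG.
    tiles = []
    step = tile - overlap
    for x1, y1, x2, y2 in segs:
        for ty in range(y1, y2, step):
            for tx in range(x1, x2, step):
                tiles.append((tx, ty, tx + tile, ty + tile))
    return tiles

mask = np.zeros((1024, 1024), dtype=np.uint8)
mask[100:900, 200:800] = 1  # hand-painted monster mask
print(len(make_tile_segs(mask_to_segs(mask))), "tiles to detail")
```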
"mask_irregularity"、"irregular_mask_mode"--I can't find these two options in the node, and I have updated to the latest version, is my update method wrong?
The latest version is V4.66.4. If yours is not, your update method is wrong.
@@drltdata But I just deleted the entire ComfyUI-Impact-Pack and re-cloned it from GitHub, and it still doesn't work.
One question... could this be used without prompting (again, lol)? And also... can we do this in environments where there are no characters? Just landscapes.
You can utilize IPAdapter.
@@drltdata could you elaborate?
@@HestoySeghuro Create a basic_pipe that is connected to Detailer, using the IPAdapter made from the original image.
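A hedged sketch of that wiring, which also answers the prompt-free landscape question above: the original image, applied through IPAdapter, replaces text prompts as the guidance for each tile. The basic_pipe tuple mirrors the Impact Pack convention (model, clip, vae, positive, negative); the function names here are hypothetical:

```python
def apply_ipadapter(model, reference_image, weight=0.7):
    # Stands in for an IPAdapter apply node: returns a model whose
    # sampling is steered by the reference image's features.
    return f"{model}+ipadapter({reference_image}, w={weight})"

def make_basic_pipe(model, clip="clip", vae="vae", positive="", negative=""):
    # Empty prompts: the IPAdapter reference replaces text guidance.
    return (model, clip, vae, positive, negative)

model = apply_ipadapter("sdxl_base", "original_landscape.png")
basic_pipe = make_basic_pipe(model)
print("feed this basic_pipe to the Detailer:", basic_pipe)
```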
Where can I find this workflow?
@@josemariagala2588 You can find it in my comment.
@@drltdata thanks a lot!!! ☺️
@@drltdata How can I control the refinement parameters? I mean, if I ask it to create, for example, an Asian woman, it makes her Caucasian.
@@josemariagala2588 You have to control that via the model and the prompt.
@@drltdata where do I put the model?
Do you think I can use this in architecture? I really have trouble correctly placing cars in the garage.
I recommend this video
ruclips.net/video/g9JXx4ik_rc/видео.html
Best workflow, thanks!
Thanks for sharing.
The face is different 😢
Yes, indeed. This example was deliberately upscaled using a different model.
If you want to process the face to follow the original, you'll need to use a different approach specifically tailored for the facial region.
Is it possible to run a character LoRA into the detailer node to maintain consistency? It's not working for me.
@@___x__x_r___xa__x_____f______ The detailer node where LoRA will be applied should be separated, and LoRA should only be applied to the basic_pipe on that specific detailer.
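A sketch of that LoRA isolation, with illustrative names rather than ComfyUI's actual API: the LoRA patches a separate model/clip pair that only the character's detailer sees, so it cannot bleed into the other detailers.

```python
def load_lora(model, clip, lora_name, strength=1.0):
    # Stands in for a LoRA loader: returns patched model/clip copies.
    return (f"{model}+{lora_name}@{strength}",
            f"{clip}+{lora_name}@{strength}")

base_model, base_clip = "sdxl_base", "sdxl_clip"

# Character detailer: a basic_pipe built from the LoRA-patched pair.
lora_model, lora_clip = load_lora(base_model, base_clip,
                                  "my_character.safetensors")  # hypothetical
character_pipe = (lora_model, lora_clip, "vae", "photo of mychar", "blurry")

# Background/tile detailer: the untouched pair, so the character LoRA
# cannot leak into tiles that don't contain the character.
background_pipe = (base_model, base_clip, "vae", "detailed scenery", "blurry")

print(character_pipe)
print(background_pipe)
```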
@@drltdata thank you DrLt 👍