Who is this youtube hero coming to our rescue ? :D Very nice!
Is there a way to use combined mode, but bring the masks as close to each other as possible for the Detailer? In my case, I have masks for the head, left hand, right hand, and both legs. I want to inpaint those areas, but the masks are very far away from each other, which makes the combined image very large. I want to bring all those masks close together, combine them into one image, upscale/inpaint, and then use a special mapping to paste the refined parts back into the original image.
I added `SEGS Merge` node to Impact Pack.
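Conceptually, the idea in the question is something like the following rough NumPy sketch: pack the bounding boxes of distant masks side by side into a compact canvas, refine that small canvas, then copy each patch back to its original location. Function names and the left-to-right packing strategy here are hypothetical illustrations, not the actual `SEGS Merge` implementation.

```python
import numpy as np

def pack_crops(masks):
    """Pack the bounding boxes of several distant masks side by side.

    Returns the compact canvas size plus, for each mask, its original
    bbox and its offset inside the packed canvas. (Illustrative only;
    `SEGS Merge` may pack regions differently.)
    """
    placements = []
    x_cursor, canvas_h = 0, 0
    for mask in masks:
        ys, xs = np.nonzero(mask)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        placements.append(((y0, x0, y1, x1), (0, x_cursor)))
        x_cursor += x1 - x0          # place crops left to right
        canvas_h = max(canvas_h, y1 - y0)
    return (canvas_h, x_cursor), placements

def paste_back(original, refined, placements):
    """Copy each refined patch back to its original image location."""
    out = original.copy()
    for (y0, x0, y1, x1), (py, px) in placements:
        out[y0:y1, x0:x1] = refined[py:py + (y1 - y0), px:px + (x1 - x0)]
    return out
```

With a head mask in one corner and a leg mask in the opposite corner, the packed canvas stays roughly the size of the two crops combined instead of the whole frame.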
Thank you for your constant sharing. I have a problem not directly related to this video, but rather an error encountered during the most regular SVD workflow; could you please look into it? It said "TypeError: unsupported operand type(s) for *=: 'int' and 'NoneType'", and I cannot solve it no matter what I try.
github.com/comfyanonymous/ComfyUI/issues/2048
Check FreeU_Advanced
@drltdata Thank you, Dr! Happy new year to you!
You can download from ComfyUI from here:
github.com/comfyanonymous/ComfyUI
And extension from here:
github.com/ltdrdata/ComfyUI-Impact-Pack
Workflow: (you can drag drop this image)
github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/mask_to_segs.png
basically crop_factor & bbox_fill both provide more context to the detailer, correct?
They are different concepts. crop_factor provides wider surrounding context for the inpainting area, while bbox_fill simply determines whether the mask, which is the target for inpainting, should be fully filled in as a bbox shape.
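The distinction can be sketched like this (a simplified illustration, not the Impact Pack's actual code, which may clamp and round differently):

```python
import numpy as np

def expand_bbox(bbox, crop_factor, img_h, img_w):
    """crop_factor: widen the crop region around a detection bbox so
    the sampler sees surrounding context during inpainting."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * crop_factor, (y1 - y0) * crop_factor
    return (max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(img_w, int(cx + w / 2)), min(img_h, int(cy + h / 2)))

def apply_bbox_fill(mask, bbox):
    """bbox_fill: replace the (possibly irregular) detection mask with
    a solid rectangle covering the whole bbox."""
    filled = mask.copy()
    x0, y0, x1, y1 = bbox
    filled[y0:y1, x0:x1] = 1.0
    return filled
```

So crop_factor changes how much of the image is cropped around the target, while bbox_fill changes the shape of the inpainting target itself.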
omg, this is so useful, thank you so much
Is there a way to turn a black-and-white image into a mask, and still combine the mask and the targeted image to turn it into SEGS?
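The first half of this is essentially thresholding: a grayscale image becomes a binary mask, which can then feed a Mask To SEGS-style conversion. A minimal sketch (illustrative only, not the exact behavior of ComfyUI's image-to-mask nodes):

```python
import numpy as np

def image_to_mask(image, threshold=0.5):
    """Threshold a grayscale image (H x W, values in 0..1) into a
    binary mask: white areas become the inpainting target."""
    return (image >= threshold).astype(np.float32)
```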
How do I use the face detailer on just 1 character in the scene?
You need to use Detailer instead of FaceDetailer.
ruclips.net/video/9GSQlxZFrLI/видео.html
Thank you. Apart from the functionality that comes with this node, is there any difference when using this compared to using masks directly? Basically what is the advantage or differences between masks and segments?
Masks only mask; segments segment by object, say hair, face, clothes, etc., into color groups, I think.
SEGS is designed to convey additional information generated by detection to the Detailer. This includes crop area, bbox information, mask details, what has been detected, and the confidence level of the detection.
The target in the Detailer is not the mask itself but rather comprehensive information about the areas to inpaint, where the mask is just one of the supplementary attributes.
Originally, SEGS-based nodes operated on detection results, but "Mask To SEGS" and "SEGS to Mask List" play a bridging role: they make the Detailer nodes useful for manual inpainting and let them work with a variety of mask-based nodes.
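Roughly, one SEGS element carries a record like the following, and "Mask To SEGS" wraps a plain mask into that richer form. Field names here are illustrative; the Impact Pack's actual SEG tuple differs in detail.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SEG:
    """Approximation of what one SEGS element conveys to the Detailer."""
    cropped_mask: np.ndarray   # mask patch inside the crop region
    crop_region: tuple         # (x0, y0, x1, y1) of the crop in the image
    bbox: tuple                # tight bbox of the detection
    label: str                 # what was detected, e.g. "face"
    confidence: float          # detector score; 1.0 for manual masks

def mask_to_segs(mask, label="mask"):
    """Bridge a plain mask into a one-element SEGS-like list."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.min(), ys.min()
    x1, y1 = xs.max() + 1, ys.max() + 1
    # For a manual mask, the crop region simply equals the bbox.
    return [SEG(mask[y0:y1, x0:x1], (x0, y0, x1, y1),
                (x0, y0, x1, y1), label, 1.0)]
```

This is why the Detailer can treat a hand-drawn mask and a detector hit uniformly: both arrive as SEGS, with the mask as just one attribute.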
Thanks a lot for the detailed explanation!👍
Is it possible to get XY coordinates of each seg?
Let's say I place several small point masks, and I want to get their coordinates in order to position PNG images and then send them to the KSampler.
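Since each seg carries a bbox/mask, its center can be recovered directly from the mask pixels. A minimal sketch of that computation (my own helper, not an Impact Pack node):

```python
import numpy as np

def seg_centroids(masks):
    """Return the (x, y) centroid of each mask, e.g. to position PNG
    overlays before sending the result to a sampler."""
    coords = []
    for mask in masks:
        ys, xs = np.nonzero(mask)
        coords.append((float(xs.mean()), float(ys.mean())))
    return coords
```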
Hello, thank you for your work. I have a question/suggestion. While animating through AnimateDiff/Deforum, I noticed that if the character stands far away from the camera, the face is generated very poorly. Sometimes increasing the resolution helps, but not always. As I understand it, with the new tools it is possible to select the face separately, zoom in, and generate a great image of the face. Is it possible to somehow let the person generating designate a moving object/character, so that your tool/node would constantly follow, for example, a face? With each frame, the selected "field" would shift along with the character's face, as marked in the first frame. I'm not a programmer, but as far as I've seen, there are already programs that constantly track a selected "field"/"object" on the canvas.
Thanks again and happy new year!
I haven't uploaded a video about the updated nodes yet, but a new feature called masking_mode has been added to the `Simple Detector for AnimateDiff (SEGS)` node. If you set masking_mode to 'Combine neighboring frames' or 'Don't combine', the mask will move accordingly.
@drltdata Thank you for your reply. Yesterday, within an hour after I wrote the comment, I found the nodes "Detailer For AnimateDiff SEGS" / "Simple Detector for AnimateDiff (SEGS)", and all the others that enlarge the generated image :))))
Thanks
awesome!
i like segs fr