ComfyUI-Impact-Pack: Tutorial #13 MASK to SEGS

  • Published: 18 Dec 2024

Comments • 24

  • @stephantual
    @stephantual 11 months ago +1

    Who is this YouTube hero coming to our rescue? :D Very nice!

  • @hoangucmanh299
    @hoangucmanh299 2 months ago

    Is there a way to use combined mode, but bring the masks as close to each other as possible for use with the detailer? In my case, I have masks for the head, left hand, right hand, and both legs. I want to inpaint those areas, but the masks are very far away from each other, which makes the image size very large. I want to bring all those masks close to each other, combine them into one image, upscale/inpaint, and then use a special mapping to paste the refined parts back onto the original image.

    • @drltdata
      @drltdata  2 months ago

      I added a `SEGS Merge` node to the Impact Pack.

  • @唐华汉
    @唐华汉 11 months ago

    Thank you for your constant sharing. I have a problem not directly related to this video, but rather an error encountered during the most regular SVD workflow; could you please look into it? It says "TypeError: unsupported operand type(s) for *=: 'int' and 'NoneType'", and I cannot solve it no matter what I try.

    • @drltdata
      @drltdata  11 months ago

      github.com/comfyanonymous/ComfyUI/issues/2048
      Check FreeU_Advanced

    • @唐华汉
      @唐华汉 11 months ago

      Thank you, dr! Happy new year to you! @@drltdata

  • @drltdata
    @drltdata  11 months ago +3

    You can download ComfyUI from here:
    github.com/comfyanonymous/ComfyUI
    And the extension from here:
    github.com/ltdrdata/ComfyUI-Impact-Pack
    Workflow (you can drag and drop this image):
    github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/mask_to_segs.png

  • @provi1085
    @provi1085 3 months ago

    basically crop_factor & bbox_fill both provide more context to the detailer, correct?

    • @drltdata
      @drltdata  3 months ago

      They are different concepts. crop_factor provides a wider surrounding context for the inpainting area. However, bbox_fill simply determines whether the mask, which is the target for inpainting, should be fully filled in as a bbox shape.
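
A minimal Python sketch of one way to read those two parameters. The helper names and logic below are made up for illustration and are not the Impact Pack implementation: crop_factor scales the detected bbox outward before cropping, while bbox_fill swaps the detected mask shape for a filled rectangle.

```python
# Illustrative sketch only, not the Impact Pack source.

def expand_crop_region(bbox, crop_factor, image_w, image_h):
    """Grow the detection bbox by crop_factor so the detailer sees more
    surrounding context, clamped to the image borders."""
    x1, y1, x2, y2 = bbox
    w, h = (x2 - x1) * crop_factor, (y2 - y1) * crop_factor
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return (max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(image_w, int(cx + w / 2)), min(image_h, int(cy + h / 2)))

def apply_bbox_fill(mask, bbox, bbox_fill):
    """If bbox_fill is enabled, replace the detected mask shape with a fully
    filled rectangle covering the bbox; mask is a 2D numpy-style (H, W) array."""
    if not bbox_fill:
        return mask
    x1, y1, x2, y2 = bbox
    filled = mask.copy()
    filled[y1:y2, x1:x2] = 1.0
    return filled

# Example: a 100x100 bbox with crop_factor 3.0 becomes a 300x300 crop region.
print(expand_crop_region((200, 200, 300, 300), 3.0, 1024, 1024))  # (100, 100, 400, 400)
```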

  • @johnriperti3127
    @johnriperti3127 11 months ago +1

    omg, this is so useful, thank you so much

  • @AndysTV
    @AndysTV 11 months ago

    Is there a way to turn a black-and-white image into a mask and still combine the mask and the targeted image to turn it into SEGS?

  • @hyukev
    @hyukev 5 months ago

    How do I use the face detailer on just 1 character in the scene?

    • @drltdata
      @drltdata  5 months ago

      You need to use Detailer instead of FaceDetailer.
      ruclips.net/video/9GSQlxZFrLI/видео.html

  • @DanielPartzsch
    @DanielPartzsch 11 months ago

    Thank you. Apart from the functionality that comes with this node, is there any difference when using this compared to using masks directly? Basically, what are the advantages or differences between masks and segments?

    • @the_one_and_carpool
      @the_one_and_carpool 11 months ago

      Masks only mask; segments segment by object, say hair, face, clothes, etc., into color groups, I think.

    • @drltdata
      @drltdata  11 months ago +1

      SEGS is designed to convey additional information generated by detection to the Detailer. This includes crop area, bbox information, mask details, what has been detected, and the confidence level of the detection.
      The target in the Detailer is not the mask itself but rather comprehensive information about the areas to inpaint, where the mask is just one of the supplementary attributes.
      Originally, there were nodes based on SEGS that operated on detection, but "MASK to SEGS" and "SEGS to Mask List" play a bridging role, enabling the Detailer nodes to be useful for manual inpainting and to utilize a variety of mask-based nodes.

    • @DanielPartzsch
      @DanielPartzsch 11 months ago

      Thanks a lot for the detailed explanation!👍
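
To make the description above concrete, here is a minimal Python sketch of the kind of per-detection record SEGS conveys (crop area, bbox, mask, detected label, confidence). The field names and layout are illustrative assumptions; the actual Impact Pack type may differ.

```python
from dataclasses import dataclass
from typing import Any

# Illustrative only: a minimal record mirroring the information described above,
# not the real Impact Pack SEG definition.

@dataclass
class Seg:
    cropped_image: Any    # crop of the source image around the detection
    cropped_mask: Any     # mask inside that crop -- just one supplementary attribute
    crop_region: tuple    # (x1, y1, x2, y2) area handed to the Detailer
    bbox: tuple           # (x1, y1, x2, y2) raw detection bbox
    label: str            # what was detected, e.g. "face"
    confidence: float     # detection confidence

# A SEGS value then bundles the original image shape with a list of such records:
# segs = ((height, width), [seg1, seg2, ...])
```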

  • @eveekiviblog7361
    @eveekiviblog7361 2 months ago

    Is it possible to get XY coordinates of each seg?

    • @eveekiviblog7361
      @eveekiviblog7361 2 months ago

      Let's say I place several small point masks, and I want to get their coordinates in order to transpose PNG images and then send them to the KSampler...
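
A rough sketch of how per-segment coordinates could be read out, assuming the illustrative SEGS layout sketched earlier (original shape plus a list of segments, each exposing a bbox as (x1, y1, x2, y2)). These names are assumptions, not something confirmed in this thread.

```python
# Hedged sketch: works against the illustrative Seg record above, not a
# guaranteed Impact Pack API.

def seg_centers(segs):
    """Return the (x, y) center of every segment's bbox."""
    _shape, seg_list = segs
    return [((x1 + x2) / 2, (y1 + y2) / 2)
            for (x1, y1, x2, y2) in (seg.bbox for seg in seg_list)]
```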

  • @michail_777
    @michail_777 11 months ago

    Hello, thank you for your work. I have a question/suggestion. While animating with AnimateDiff/Deforum, I noticed that if the character stands far away from the camera, the face is generated very poorly. Sometimes increasing the resolution helps, but not always. As I understand it, with the new tools it is possible to select the face separately, zoom in, and generate a great image of the face. Is it possible to somehow make it so that the person generating can designate a moving object/character, and your tool/node would constantly follow, for example, a face? Then with each frame the selected "field" would keep shifting with the character's face, as marked in the first frame. I'm not a programmer, but as far as I've seen, there are already programs that constantly track a selected "field"/"object" on the canvas.
    Thanks again and happy new year!

    • @drltdata
      @drltdata  11 months ago

      I haven't uploaded a video about the updated nodes yet, but a new feature called masking_mode has been added to the `Simple Detector for AnimateDiff (SEGS)`. If you set masking_mode to 'Combine neighboring frames' or 'Don't combine', the mask will move accordingly.

    • @michail_777
      @michail_777 11 months ago

      @@drltdata Thank you for your reply. Yesterday, within an hour after I wrote the comment, I found the nodes "Detailer For AnimateDiff SEGS" / "Simple Detector for AnimateDiff (SEGS)", and all the others that enlarge the generated image :))))
      Thanks

  • @kdesign1579
    @kdesign1579 11 months ago

    awesome!

  • @AAKVII
    @AAKVII 9 months ago

    i like segs fr