ComfyUI-Impact-Pack - Workflow: Upscaling with Make Tile SEGS

  • Published: 18 Dec 2024

Comments • 48

  • @KKsinsa 11 months ago

    The variety of upscaling methods can be quite confusing, but this approach seems relatively simple compared to others with numerous complex nodes. It's also quite ingenious. I always support your efforts.

  • @impactframes 11 months ago +1

    This is very awesome; your nodes are time savers.

  • @3DArtistree 10 months ago +1

    I do a lot of img2img tiled upscaling and often want to add detail (usually landscapes and architecture). What would be very advantageous would be a way to automatically interrogate each tile. If the prompt is too descriptive and the denoise is strong enough to add detail, the sampler will attempt to create a fractal of the whole image in each tile. If not automatic tile interrogation, then a manual prompt per tile would be helpful. Perhaps this is already possible and I haven't found it yet? Any workflow advice would be very appreciated. PS Thanks for all you do!

    • @drltdata 10 months ago +2

      I’m also considering a method of creating a context based on a certain range of tiles, based on IPAdapter.
      The only concern is that the sampling cost could become much higher. However, I’m considering it from the perspective of providing an option for the best quality.
      If you want to apply prompts to tiles manually, you could use the SEGS Filter to decompose all the tiles and run a separate detailer pipeline for each, but this would be a very painful workflow.
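
      Roughly, that manual route in pseudocode (not a runnable script; the quoted names are real Impact Pack nodes, but these Python-style wrappers are hypothetical and only show the data flow):

          tiles = make_tile_segs(image)                      # "Make Tile SEGS"
          for i, seg in enumerate(split_segs(tiles)):        # e.g. "SEGS Filter (range)", one tile at a time
              image = detailer_for_each(image, [seg],        # separate detailer pass per tile
                                        positive=per_tile_prompts[i])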

    • @3DArtistree 10 months ago

      @@drltdata Thanks for your reply. IPAdapter makes sense for narrowing the scope to a range of tiles, using a section of the input image as the reference for guidance. Personally I am shooting for the best quality and already run 10+ minute jobs on my RTX 4090 doing tiled sampling and upscaling at higher resolutions.
      For now I am using your regional-prompt-by-color workflow, changed to accept multiple hand-painted masks where I can specify areas for prompting (grass, sky, trees, etc.) and add/enhance detail that way, then tiled upscaling with style prompts only. It works, but is less than ideal. Tile ControlNet helps, but isn't available for SDXL.
      I do "remastering" work where I look to improve on low-quality inputs, but it's absolutely vital to maintain the integrity of the underlying design layout and the look of materials. It's always a dance between freedom and constraint.

    • @drltdata 10 months ago +1

      @@3DArtistree I added a node called "IPAdapter Apply (SEGS)" to the Impact Pack last night.
      However, to use this node, you need to update the Inspire Pack.
      I plan to upload a usage video soon, but I’m letting you know in advance.

  • @steveyy3567 4 months ago

    This video means a lot to me. Thanks.

  • @silas-dd5ll 7 months ago

    Thank you, I am a bit upset I did not find this video earlier. I spent hours creating color masks ... for the human and background to separate them for generation, and you always had a node for that issue 😅

  • @drltdata 11 months ago +7

    You can download ComfyUI from here:
    github.com/comfyanonymous/ComfyUI
    And the extension from here:
    github.com/ltdrdata/ComfyUI-Impact-Pack
    Workflow (you can drag and drop this image):
    github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/MakeTileSEGS_upscale.png

    • @APCOAG 4 months ago +1

      Hi, thank you for your great work, but can you please show an example of using the SEGS detailer with FLUX?

    • @drltdata 4 months ago +1

      @@APCOAG
      You can use `Negative Placeholder` like this.
      github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/flux-detailer.png

    • @APCOAG 4 months ago

      @@drltdata Although I was asking about the SEGS detailer, not the face detailer, I will try the same idea. Thank you a lot for your time. BTW, you are a hero 😍

    • @drltdata 4 months ago +1

      @@APCOAG That is a universal method for Impact and Inspire sampling nodes.
      Just use the placeholder instead of a negative prompt. That's all you need.
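
      A minimal wiring sketch in pseudocode (`Negative Placeholder` is the real node; the wrapper calls here are hypothetical and only illustrate the connections):

          # FLUX has no usable negative conditioning, so the detailer
          # receives a placeholder instead of an encoded negative prompt.
          positive = clip_text_encode(clip, "a detailed face")
          negative = negative_placeholder()                  # `Negative Placeholder` node
          basic_pipe = (model, clip, vae, positive, negative)
          result = detailer_for_each(image, segs, basic_pipe=basic_pipe)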

  • @santoshgsk3213 10 months ago

    Thank you for the tutorial. Question: if we want to upscale the output of MagicAnimate, can I use the Detailer for AnimateDiff node, since this Detailer node doesn't take batch images?

  • @Cyrecok 10 months ago +1

    How does it compare to other upscale methods?

  • @ndro7068 5 months ago

    Is there a way to do gendered upscaling using this SEGS method? I saw your video on gendered refinement but am wondering what nodes to use to get it to work for upscaling.

    • @drltdata 5 months ago

      You can compose a workflow that sends gender-segregated SEGS to MakeTileSEGS for processing.
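
      Roughly, in pseudocode (hypothetical wrappers; the detector, filter, and tiler correspond to real Impact Pack nodes):

          person_segs = detect_segs(image)                       # person/face detector
          woman_segs = segs_filter(person_segs, label="woman")   # gender-segregated SEGS
          tiles = make_tile_segs(image, filter_segs=woman_segs)  # "Make Tile SEGS" over those regions only
          image = detailer_for_each(image, tiles, basic_pipe=basic_pipe)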

    • @ndro7068 5 months ago

      @@drltdata I'll work on that!

  • @jacekfr3252 11 months ago

    fantastic! Thank you very much!

  • @APCOAG 7 months ago

    The workflow from this video is no longer working as it did before; now it blurs and hides all the details of the image. It seems like something changed in ComfyUI.

  • @DanielPartzsch 11 months ago

    Thanks. I've tried this for upscaling up to 4K, but then I started getting artifacts. Do you have recommended settings that might help?

  • @TicklyLucario 11 months ago

    What do you recommend if the person detector is no good for the subject of the image, a monster for example? SAM detector/manual masking instead, then the "Mask to SEGS" node, then the "Make Tile SEGS" node?

    • @drltdata 11 months ago

      Yup. You can do it.
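
      That chain in pseudocode (hypothetical wrappers around the real nodes):

          mask = sam_detect(image)                          # SAM detector, or a hand-painted mask
          segs = mask_to_segs(mask)                         # "Mask to SEGS"
          tiles = make_tile_segs(image, filter_segs=segs)   # "Make Tile SEGS" limited to the mask
          image = detailer_for_each(image, tiles, basic_pipe=basic_pipe)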

  • @DamingoLeo 11 months ago

    "mask_irregularity"、"irregular_mask_mode"--I can't find these two options in the node, and I have updated to the latest version, is my update method wrong?

    • @drltdata 11 months ago

      The latest version is V4.66.4. If yours is not, your update method is wrong.

    • @DamingoLeo 11 months ago

      @@drltdata But I just deleted the entire ComfyUI-Impact-Pack and re-cloned it from GitHub, and it still doesn't work.

  • @HestoySeghuro 11 months ago

    One question... could this be used without prompting (again, lol)? And also... can we do this in environments where there are no characters, just landscapes?

    • @drltdata 11 months ago

      You can utilize IPAdapter.

    • @HestoySeghuro 10 months ago

      @@drltdata could you elaborate?

    • @drltdata 10 months ago

      @@HestoySeghuro Create a basic_pipe that is connected to the Detailer, using an IPAdapter made from the original image.
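
      In pseudocode (hypothetical wrappers; the point is that IPAdapter patches the model, so the text prompts can stay empty):

          model_ip = ipadapter_apply(model, image=original_image)     # image-based guidance from the source
          basic_pipe = (model_ip, clip, vae, empty_cond, empty_cond)  # empty/blank prompts
          image = detailer_for_each(upscaled_image, tiles, basic_pipe=basic_pipe)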

  • @josemariagala2588 3 months ago

    Where can I find this workflow?

    • @drltdata 3 months ago +1

      @@josemariagala2588 You can find it in my comment.

    • @josemariagala2588 3 months ago

      @@drltdata thanks a lot!!! ☺️

    • @josemariagala2588 3 months ago

      @@drltdata How can I control the refinement parameters? I mean, if I ask it to create, for example, an Asian woman, it makes her Caucasian.

    • @drltdata 3 months ago

      @@josemariagala2588 You have to control that via the model and the prompt.

    • @josemariagala2588 3 months ago

      @@drltdata Where do I put the model?

  • @raphaellfms 11 months ago

    Do you think I can use this in architecture? I really have trouble correctly placing cars in the garage.

    • @drltdata 11 months ago

      I recommend this video:
      ruclips.net/video/g9JXx4ik_rc/видео.html

  • @SelfErkek-qb8nc 11 months ago

    Best workflow, thanks!

  • @Kentel_AI 11 months ago

    Thanks for sharing.

  • @MushroomStickMan 10 months ago

    The face is different 😢

    • @drltdata 10 months ago

      Yes, indeed. This example was deliberately upscaled through a different model.
      If you want the face to follow the original, you'll need to use a different approach specifically tailored to the facial region.

    • @___x__x_r___xa__x_____f______ 8 months ago

      Is it possible to run a character LoRA through the detailer node to maintain consistency? It’s not working for me.

    • @drltdata 8 months ago +1

      @@___x__x_r___xa__x_____f______ The detailer node where the LoRA will be applied should be separated, and the LoRA should only be applied to the basic_pipe of that specific detailer.
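
      In pseudocode (hypothetical wrappers; only the detailer that handles the character sees the LoRA):

          model_l, clip_l = load_lora(model, clip, "character_lora.safetensors", 1.0, 1.0)
          face_pipe = (model_l, clip_l, vae, positive, negative)       # LoRA-patched pipe
          image = face_detailer(image, face_segs, basic_pipe=face_pipe)
          # every other sampler/detailer keeps the original, LoRA-free basic_pipe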

    • @___x__x_r___xa__x_____f______ 8 months ago

      @@drltdata thank you DrLt 👍