ComfyUI Tutorial+Workflow: Inpainting only on masked area, fast outpainting, and seamless blending

  • Published: 23 Sep 2024
  • This tutorial presents novel nodes and a workflow that allow fast seamless inpainting, outpainting, and inpainting only on a masked area in ComfyUI, similar to the AUTOMATIC1111 feature but with extra flexibility.
    The main advantages of inpainting only in a masked area with these nodes are:
    - It's much faster than sampling the whole image.
    - It enables setting the right amount of context from the image for the prompt to be more accurately represented in the generated picture.
    - It enables upscaling before sampling in order to generate more detail, then stitching back in the original picture.
    - It enables downscaling before sampling if the area is too large, in order to avoid artifacts such as double heads or double bodies.
    - It enables forcing a specific resolution (e.g. 1024x1024 for SDXL models).
    - It doesn't modify the unmasked part of the image, not even passing it through VAE encode and decode.
    - The nodes take care of good blending. (A conceptual sketch of the crop-and-stitch idea follows below.)
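    As a rough illustration of what the crop and stitch steps amount to, here is a minimal NumPy sketch. The helper names are hypothetical and this is not the nodes' actual code; the real nodes additionally handle context masks, rescaling, and blending.

```python
import numpy as np

def crop_with_context(image, mask, context_pixels=64):
    # Take the mask's bounding box plus a context margin. Only this
    # region is ever VAE-encoded and sampled, which is why it's fast
    # and why the rest of the image stays bit-identical.
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - context_pixels, 0)
    y1 = min(ys.max() + context_pixels + 1, image.shape[0])
    x0 = max(xs.min() - context_pixels, 0)
    x1 = min(xs.max() + context_pixels + 1, image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, y1, x0, x1)

def stitch(image, sampled_crop, box):
    # Paste the sampled crop back; pixels outside the box are untouched.
    y0, y1, x0, x1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = sampled_crop
    return out
```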
    The Inpaint Crop and Stitch nodes can be downloaded using ComfyUI-Manager, just look for "Inpaint-CropAndStitch".
    The example workflow featured in this video can be downloaded from github.com/lqu...
    I hope you like them!! Subscribe for more :)
    The song playing in the background is "The last dial-up handshake", from the album "Last Transmission" of my music project "Elezeta". Check it here: • The last dial-up hands...
    #comfyui #stablediffusion #genai #inpainting #tutorial #workflow

Comments • 75

  • @marcinchaciej
    @marcinchaciej 1 month ago +5

    This should be default in Comfy. You made a great, great contribution, and we all really appreciate it.

  • @NywlMac
    @NywlMac 14 days ago +1

    THANK YOU!! I was looking for a tutorial so I could understand how the nodes in an inpaint workflow work, and all the ones I found were already those HUUGE workflows. This one is simple and works, and now that I understand how it works, I can improve it as I please. Great tutorial!

  • @Daralima.
    @Daralima. 3 months ago +3

    These nodes make inpainting in Comfy super convenient and easy to adjust to your needs. Thank you!

  • @Krashl
    @Krashl 3 months ago +5

    Thanks for the work! I use these nodes all the time for inpainting, but I was missing some features like custom sizing and blurring the mask (because this node creates hard mask borders by default), and the abrupt transition was visible in generations. I am very glad that the recent update has improved all of this.

    • @elezetamusic
      @elezetamusic  3 months ago

      Awesome!! I'm glad that you found it useful. Thanks a lot for commenting.

  • @user-yg4qo9zg3u
    @user-yg4qo9zg3u 2 months ago +3

    You are a wonderful person, thank you for sharing! This is what I've been looking for! Just perfect! 🔥

    • @KevinScandinavia
      @KevinScandinavia 2 months ago

      Yeah no kidding, and including the json file was just the icing on the cake

  • @RemiStardust
    @RemiStardust 1 month ago +2

    Excellent work! This is something I really wanted! Very simple workflow, and it's easy to stitch the updated area back into the original image!
    Thank you!

  • @ronnykhalil
    @ronnykhalil 25 days ago +1

    So glad I came across this in a Reddit thread. Super helpful.

  • @treksis
    @treksis 2 months ago +2

    Thank you so much, you saved my life. I tried to copy AUTOMATIC1111's inpainting myself; this is much better.

  • @matthallett4126
    @matthallett4126 14 days ago +1

    This is a great solution, I love it. Thank you so much!

  • @NeonSparks
    @NeonSparks 2 months ago +1

    This is exactly what I have been looking for! Amazing, can't wait to try this. Thanks

    • @elezetamusic
      @elezetamusic  2 months ago

      @@NeonSparks thank you!! Enjoy!

  • @derrickpang4304
    @derrickpang4304 1 month ago +2

    This is exactly what I need. Thank you so much!!!!!

  • @Pyugles
    @Pyugles 3 months ago +3

    Thank you so much! This is a fantastic set of nodes, and great tutorials on using them! You've definitely gotten a sub from me!

  • @EmeranceLN13
    @EmeranceLN13 2 months ago +2

    Subbed! Thank you for such a straightforward explanation!

  • @subashchandra9557
    @subashchandra9557 1 month ago +1

    Now I just need to know how to do the Fill/Original/Latent Noise/Latent Nothing options in ComfyUI!

  • @MarcSpctr
    @MarcSpctr 3 months ago +6

    seriously great work 👍🏻

  • @jonjoni518
    @jonjoni518 9 days ago +1

    fantastic!!!

  • @Milo_Estobar
    @Milo_Estobar 2 months ago +1

    Thank you for your contribution 👍🏽 (App extension creator + Tutorial)...

  • @mahilkr
    @mahilkr 3 months ago +1

    Hi @Elezeta, excellent work! How can I generate multiple variations of a stitch? Currently, it only works with the repeater set to a value of 1.

    • @elezetamusic
      @elezetamusic  3 months ago

      Hey, please provide more details on what you want to do. If you want to generate multiple images, you could enqueue the job multiple times or, in ComfyUI's advanced options, set a higher batch number. If you want to use a repeater node, you'd have to repeat both the image and the masks (I'm not sure if the repeater node can do masks).
      Not sure if I answered your question. If not, please let me know what you mean by "it only works...". Does it give an error?

  • @李云-f1b
    @李云-f1b 1 month ago +1

    Very good, thank you very much

  • @Vigilence
    @Vigilence 18 days ago +2

    I used this node successfully with a regular mask. However, if I use the invert mask option with the previous mask and the same base image, the inpainted image is much lower in quality. The same settings are used as in the successful inpaint, so I'm not sure what the issue is.
    Can you confirm, by testing with the latest ComfyUI, that the invert mask option is working properly?

    • @elezetamusic
      @elezetamusic  18 days ago +1

      @@Vigilence Hi, this is because the inpainting area is much larger if you do the whole image, so the node downscales it to fit the target resolution, and therefore you lose resolution.
      A good solution is to apply a detailer afterwards. I'm figuring out whether I could implement some node that does that detailing for very large images in a way that's easy to set up with these nodes.
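      To make the resolution loss concrete, a hypothetical worked example (the image size, target size, and fit rule are illustrative assumptions, not from the video):

```python
# Why an inverted (whole-image) mask loses detail: the crop becomes the
# whole image, which must be scaled down to the sampling resolution.
image_w, image_h = 3072, 2048    # hypothetical source image
target_w, target_h = 1024, 1024  # hypothetical forced-size sampling target

scale = min(target_w / image_w, target_h / image_h)
print(f"Sampling happens at about {scale:.0%} of the original resolution")  # ~33%
```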

  • @skycladsquirrel
    @skycladsquirrel 3 months ago +1

    Perfect. Thank you. Subscribed!

  • @Vigilence
    @Vigilence 27 days ago +2

    I have a photo of a woman. I created a mask for the ear and also masked some of the area below it so I can add an earring. However, when inpainting, no earring is added. Any tips?

    • @elezetamusic
      @elezetamusic  27 days ago +1

      @@Vigilence try setting a context mask with the whole face and write a prompt such as "a woman with earrings". That probably works

    • @Vigilence
      @Vigilence 27 days ago +1

      @@elezetamusic I will try! Ty

  • @MrPer4illo
    @MrPer4illo 1 month ago +1

    Thank you for the video. Can this approach be implemented for animatediff (animate only certain areas of an image)?

    • @elezetamusic
      @elezetamusic  1 month ago +1

      Totally, I am aware of some users using it for that purpose.

  • @fromloveandlifestory
    @fromloveandlifestory 2 months ago +1

    Thank you!!! Please create a node that can import a high-resolution image and split it into specified sections. Then it should run the img2img (inpaint) process on each section and finally combine all the split sections back into a complete image. The goal is to be able to process high-resolution images without having to manually split them in Photoshop and edit each section individually.

    • @elezetamusic
      @elezetamusic  2 months ago +1

      @@fromloveandlifestory there is a tiled sampler for that!

    • @fromloveandlifestory
      @fromloveandlifestory 2 months ago

      Could you please share a similar workflow with me? Thank you very much!

  • @joelandresnavarro9841
    @joelandresnavarro9841 3 months ago +1

    Hello 🙋🏻‍♂️, you could consider improving the compatibility of these nodes with the comfyui-photoshop extension (NimaNzrii).
    When sending to Photoshop, it says that the image and the area are not the same size. I don't know if it's a problem with your node or rather with the extension for PSD.

    • @elezetamusic
      @elezetamusic  3 months ago

      Hi, I do not have Photoshop and can't install that extension. For anything related to Photoshop integration, you should reach out to whoever developed that node. Cheers!

  • @natlrazfx
    @natlrazfx 3 months ago +1

    brilliant, thank you so much

  • @CemilAL
    @CemilAL 3 months ago +3

    good job 👍

  • @henryphillips6167
    @henryphillips6167 2 months ago +1

    How do we modify this workflow to work with premade masks? I would also like to use another image as the fill for the masked areas. Could you detail how I would go about this?

    • @elezetamusic
      @elezetamusic  2 months ago +1

      @@henryphillips6167 I get what you're asking, but this is out of the scope of this video and these nodes. I'd suggest you keep learning ComfyUI and you'll eventually figure out how to do this. Sorry, I can't offer tailored support for ComfyUI.

  • @dfhdgsdcrthfjghktygerte
    @dfhdgsdcrthfjghktygerte 3 months ago +2

    I want to erase something from skin and flood-fill it with just one color that matches the surroundings. Is this possible? When I try to use the "skin" or "color" prompt, it inserts faces or random stuff into the masked area.

    • @elezetamusic
      @elezetamusic  3 months ago +1

      Hi! Extend the context area enough so you can see where the skin is (e.g. an arm, a leg, whatever) and then type in "an arm", "a leg", or even extend the context area further to show there's a person and type a prompt like "a person". That will give the sampler enough context to fill in the gap seamlessly.

    • @elezetamusic
      @elezetamusic  3 months ago +1

      Also, use an inpainting model; they work much better and don't add random stuff in the masked area.

  • @PanKazik
    @PanKazik 1 month ago +1

    Very nice tutorial. The only problem I have is that regardless of the model I use (I tried a few inpainting models), the result is always a black square in place of the mask. Any idea why that happens? (I am using Comfy on a Mac.)

    • @elezetamusic
      @elezetamusic  1 month ago

      @@PanKazik Do inpainting models work for you without my nodes? If not, then the issue is unrelated to the nodes and I don't think I can help. If they work without my nodes, I'd suggest loading the workflow from GitHub and trying it with different models without changing anything. That should work. I also use a Mac. There's an issue with some Mac updates where the GPU only generates black images, but it would affect all models, not only inpainting. Check if that's the case.

    • @PanKazik
      @PanKazik 1 month ago

      @@elezetamusic I tried a simple inpainting workflow and it worked. I also found the cause of the problem: the number of steps. Anything over 16 produced a black square. With that in mind, everything works flawlessly. Thanks for your response :)

  • @Kikoking-y9b
    @Kikoking-y9b 3 months ago +1

    Hi, very nice node, thank you a lot.
    I have a question: how can I do upscaling before sampling? What does that mean, and how do I do it?
    I'm not sure if you mean what I think: cutting an area around a face, upscaling only the face, and stitching the upscaled face back.
    Can you help me?

    • @elezetamusic
      @elezetamusic  3 months ago +1

      If you set the mode to ranged size or forced size, the cropped image is automatically upscaled (or downscaled) to fit that resolution. Then you sample on it, and during stitching it is returned to the original size.
      So you don't have to worry; the node takes care of it for you! You can check by previewing the cropped image and inspecting its size.
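      As a sketch of that round trip (hypothetical names; `sampler` stands in for the sampling pass that the real workflow wires up as nodes):

```python
from PIL import Image

def sample_at_forced_size(crop: Image.Image, sampler, target=(1024, 1024)):
    # Rescale the crop to the forced resolution before sampling...
    original_size = crop.size
    resized = crop.resize(target, Image.LANCZOS)
    sampled = sampler(resized)  # generate detail at the target resolution
    # ...then return it to the original size for stitching.
    return sampled.resize(original_size, Image.LANCZOS)
```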

  • @831digital
    @831digital 3 months ago +1

    Instead of manually painting the mask in, do you have an example of this working with a detector that generates masks? This would make it more useful for animation.

    • @elezetamusic
      @elezetamusic  3 months ago

      No, but you can easily put it together :) give it a go!

    • @831digital
      @831digital 3 months ago

      @@elezetamusic I tried doing it with SEGS, but SEGS resizes the video and throws an error when trying to feed the mask back to the original. If it's super easy, please share an example.

  • @hphector6
    @hphector6 2 months ago +1

    I have an issue where the mask is still very visible, like a gray mask over the inpainted area.

    • @elezetamusic
      @elezetamusic  2 months ago

      @@hphector6 Hi, I'd bet that you're using VAE Encode (for Inpainting) with a denoise lower than 1. If you want to use a denoise lower than 1, use InpaintModelConditioning instead of VAE Encode (for Inpainting).

    • @hphector6
      @hphector6 2 months ago

      @@elezetamusic I was using InpaintModelConditioning as per the default workflow. I think it might have been the model I'm using, which is a Pony variant; no issues with epic real xl.

  • @Damian151614
    @Damian151614 2 months ago +1

    Do you know why I get mismatched colors? I set denoise to 0.00 on purpose because I wanted to get the same output as the input image, but somehow I get faded colors in the masked area.

    • @elezetamusic
      @elezetamusic  2 months ago +1

      @@Damian151614 That's the encoding and decoding process of the VAE. A different VAE may reproduce colors more accurately for that specific image.

    • @Damian151614
      @Damian151614 2 months ago

      @@elezetamusic It was a model problem. I tested other models (with and without a baked-in VAE) and only that one caused problems.

  • @ARAI96969
    @ARAI96969 3 months ago +1

    Thank you so much for this node! Before this, I had never found a way to inpaint one area without sending the whole image through VAE encoding and spoiling the overall quality. This is a godsend!!
    But I do have one query: which settings should I use if I wish to inpaint a large area (for example, 1/4 of the whole image, masking a whole character) and change the character entirely to another LoRA character, thus creating two unique characters interacting?
    Because if I mask a large area, the output is usually very bad: it lacks detail and is distorted. Do I upscale the area, downscale it, or perhaps increase the size of the mask?
    Thanks if you have any tips. I wish to keep my workflow simple and avoid using segmentation and auto-detection to mask and repaint characters to my LoRA; I prefer to choose and mask them myself for more control.

    • @ARAI96969
      @ARAI96969 3 months ago +1

      Hi, do you have any tips for inpainting larger areas? I used your settings but it generates distortions; any advice on good settings would be greatly appreciated.

    • @elezetamusic
      @elezetamusic  3 months ago +1

      @@ARAI96969 Well, for larger areas I'd suggest inpainting the whole area first, then detailing the key areas with several passes. You could also consider a tiled sampler.
      There's no magic solution for sampling at higher resolutions with high detail in a single fast pass.

  • @InaKilometrosX1TUBO
    @InaKilometrosX1TUBO 1 month ago

    Hi, thanks for your videos. I have an error; maybe you could help me:
    -
    Prompt outputs failed validation
    UnetLoaderGGUF:
    - Value not in list: unet_name: 'None' not in [ ]

    • @elezetamusic
      @elezetamusic  1 month ago

      @@InaKilometrosX1TUBO this doesn't seem related to my nodes but to other nodes that I don't know. Sorry, can't help

  • @davoodice
    @davoodice 2 months ago +1

    Thank you

  • @imonutiy
    @imonutiy 6 days ago

    Looks like blend_pixels just adds more pixels to the context, so it is the same as the first option.

    • @elezetamusic
      @elezetamusic  6 days ago +1

      @@imonutiy Not really! Blend adds more pixels to have enough context to blend, but it also does a gradual blend of the newly generated area into the original image, so that the transition is less abrupt.
      You see more context because it is required for blending.
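      A minimal sketch of that kind of gradual blend (hypothetical, not the node's actual code), assuming float32 HxWxC images in [0, 1] and a binary HxW mask:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feathered_stitch(original, generated, mask, blend_pixels=16):
    # Soften the binary mask edge so the generated content fades into
    # the original image instead of switching over at a hard border.
    soft = gaussian_filter(mask.astype(np.float32), sigma=blend_pixels / 3.0)
    soft = np.clip(soft, 0.0, 1.0)[..., None]  # broadcast over channels
    return soft * generated + (1.0 - soft) * original
```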

    • @imonutiy
      @imonutiy 5 days ago

      @@elezetamusic Thank you for the answer. By the way, what are the other options for inpainting/sketching nodes in ComfyUI? It feels really wonky compared to the web UI.