How to Inpaint FLUX with ComfyUI. BEST Workflows including Flux-Fill, ControlNet and LoRA.

  • Published: 30 Jan 2025

Comments • 48

  • @NextTechandAI
    @NextTechandAI  4 months ago +6

    UPDATE - the NEW Flux-Fill model for the Model-Conditioning Inpainting workflow (see the related chapter in the video) is available here: huggingface.co/black-forest-labs/FLUX.1-Fill-dev/tree/main (put it in your ComfyUI\models\diffusion_models and use the updated workflow from my patreon). GGUF models are here: huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/tree/main (put them in your ComfyUI\models\unet). Most likely you will need to update your ComfyUI.
    --
    UPDATE - the ControlNet-Inpaint-Beta-Model is available (use instead of Alpha): huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta/tree/main
    --
    Which FLUX Inpainting workflow do you prefer?
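    The file placement from the pinned update can be sketched as a short shell snippet. This is a minimal sketch assuming a default ComfyUI directory layout and that the downloaded files sit in the current directory; the GGUF filename is a hypothetical example (use whichever quantization you actually downloaded):

    ```shell
    # Sketch: placing the Flux-Fill models for ComfyUI.
    # Assumes the current directory contains the downloaded files and the
    # ComfyUI checkout; the GGUF filename below is a hypothetical example.

    # Full-precision Fill model -> models/diffusion_models
    mkdir -p ComfyUI/models/diffusion_models
    mv flux1-fill-dev.safetensors ComfyUI/models/diffusion_models/

    # GGUF-quantized Fill model -> models/unet
    mkdir -p ComfyUI/models/unet
    mv flux1-fill-dev-Q4_K_S.gguf ComfyUI/models/unet/
    ```

    After moving the files, restart ComfyUI (or refresh the node list) so the loaders pick up the new models.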

    • @anqipang6970
      @anqipang6970 2 months ago

      For the new Flux-Fill model: Error occurred when executing UNETLoader:
      Error(s) in loading state_dict for Flux:
      size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).

    • @NextTechandAI
      @NextTechandAI  2 months ago

      @anqipang6970 Please use the new workflow from my patreon; using the UNETLoader is wrong. Moreover, most likely you need to update your ComfyUI.

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA 6 days ago +1

    You're the best! It would be interesting to see Flux inpainting with an accelerator combined with a LoRA from you!

    • @NextTechandAI
      @NextTechandAI  6 days ago +1

      Interesting idea, thanks for the hint.

    • @K-A_Z_A-K_S_URALA
      @K-A_Z_A-K_S_URALA 6 days ago +1

      Very cool that you share so much information, and all of it free for people! Respect to you!.. That's a rarity... Respect from Russia, bro!

  • @xdivby0
    @xdivby0 1 month ago

    Awesome video, I love how you don't say there is "one perfect" workflow but instead show us the individual relevant parts of workflows you like. Keep the videos up!

    • @NextTechandAI
      @NextTechandAI  1 month ago +1

      Such differentiated feedback is rare, thank you for that!

  • @gaita99
    @gaita99 4 months ago

    Great tutorial! Thank you so much for sharing this valuable knowledge, and it's truly admirable that you're offering these workflows for free. It really makes a difference!

    • @NextTechandAI
      @NextTechandAI  4 months ago

      Thank you for the recognition and the very motivating feedback!

  • @Kozitaju
    @Kozitaju 3 months ago

    Wow, fantastic job trying all these methods!!! Thank you very much and congratulations!!

    • @NextTechandAI
      @NextTechandAI  3 months ago

      Such motivating feedback - thanks a lot!

  • @BloodyBubbsi
    @BloodyBubbsi 21 days ago

    harter Akzent ("strong accent") 😄

    • @NextTechandAI
      @NextTechandAI  21 days ago

      Usually my accent is called "krass", which I prefer over "hart". Nevertheless, I'm happy you enjoyed the video.

    • @BloodyBubbsi
      @BloodyBubbsi 21 days ago

      @@NextTechandAI Gladly, we're fond of the good Berlin dialect ("jerne, der jute Berlina Dialekt sei uns holdt")

  • @devnull_
    @devnull_ 3 months ago +1

    Excellent summary of all the options. You are one of the few who actually showed real results. You earned one sub.

    • @NextTechandAI
      @NextTechandAI  3 months ago +1

      Thank you very much for your feedback and the sub!

  • @xyzxyz324
    @xyzxyz324 3 months ago +2

    Thank you for your great effort to share with the community for free! I greatly appreciate it. But as a suggestion, please do not include any music while connecting nodes; it really breaks one's attention. It's not a documentary or a music video clip, just a tutorial where people focus on learning something. Thank you again, and keep up the good work.

    • @NextTechandAI
      @NextTechandAI  3 months ago

      Thank you for your honest feedback. I'll take this into account in the next videos.

  • @eledah9098
    @eledah9098 2 months ago

    Great vid! Are you planning to make one with the new Flux tools as well?

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Thanks a lot!
      I've already updated the description for this video and added a new workflow for Flux Fill. Regarding Depth and Canny I'm not sure, as we already have several good solutions, including Union Pro for Flux, which I've covered in the Flux ControlNet video. I'm very keen on the new Redux model, but it doesn't seem to work the way I had hoped. Anyhow, that's currently the best candidate for a video about the Flux tools.

  • @folkeroRGC
    @folkeroRGC 2 months ago

    Great tutorial, thanks. How can we use inpainting with two LoRAs for different characters?

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Thanks a lot. First inpaint the left character, in a second step inpaint the right one.

  • @ChanhDucTuong
    @ChanhDucTuong 3 months ago

    Thank you very much. I was thinking about comparing different inpainting techniques, and your video is just what I need. What do you think about cropping the inpainting area, upscaling it separately, inpainting it, and then stitching it back? There is a Crop&Stitch node for that, or we can do it manually, but I'm not sure if those could work with your ControlNet workflow.

    • @NextTechandAI
      @NextTechandAI  3 months ago +1

      Thanks a lot for your feedback.
      Interesting, I didn't know these two nodes. Looks like by using them we can get something similar to 'masked only' in A1111.
      I don't think you need ControlNet for this. Not sure regarding upscaling, but usually it's a good idea to do at least a 2x after inpainting to blur the contours.

  • @ic4roswings
    @ic4roswings 3 months ago

    Can't make it change my images. IDK what I'm doing wrong; I use the exact workflow.

  • @chan8946
    @chan8946 3 months ago

    Thank you for the nice video! I was wondering about your thoughts on applying this to background generation. Do you think the inpainting can be done with consideration of the non-masked region?

    • @NextTechandAI
      @NextTechandAI  3 months ago

      Thanks for your feedback! Yes, that's possible. In the 'grow mask with blur' node you have to toggle 'flip_input' to 'true'; then the non-masked region is newly generated with the prompt.

  • @TECHTOUR
    @TECHTOUR 9 days ago

    Can we use the official Flux Canny as a ControlNet?

    • @NextTechandAI
      @NextTechandAI  9 days ago

      The workflow is a bit different, especially the conditioning node. I think there is even an official workflow from Black Forest Labs. Nevertheless, if you've understood the workflows in this video, you can easily follow the workflows for the Flux tools.

    • @TECHTOUR
      @TECHTOUR 9 days ago

      @@NextTechandAI I get it, but you used ControlNet Apply, whereas the official Canny model loads in the "Load Diffusion Model" node; I'm finding it hard to combine them.

  • @dconcorde6677
    @dconcorde6677 3 months ago

    Thanks for the video! Does differential diffusion work only with inpainting? Will it somehow impact the image when used with img2img?

    • @NextTechandAI
      @NextTechandAI  3 months ago

      Thanks for your feedback! I have only used differential diffusion for inpainting and only know it in this context.

  • @petertremblay3725
    @petertremblay3725 1 month ago

    Unfortunately I don't use ComfyUI anymore, since the new Flux model I use is 10 times faster in Forge.

    • @NextTechandAI
      @NextTechandAI  1 month ago

      Interesting. Which GPU and which Flux model are you using? Maybe some quantized model?

    • @petertremblay3725
      @petertremblay3725 1 month ago

      @@NextTechandAI It's a recently merged model, ''unet\fluxFusionV24StepsGGUFNF4_V2GGUFQ5KS.gguf'', and I have an RTX 3060 12 GB. For some reason I'm not aware of, in ComfyUI it's very slow with the same CLIP and setup. I also use the turbo_Alpha LoRA; in Forge an 800x600 image takes about 20 to 24 seconds, while in ComfyUI it takes close to 4 minutes.

    • @NextTechandAI
      @NextTechandAI  1 month ago

      @@petertremblay3725 That's strange, indeed. I can only guess that it requires certain runtime parameters or the workflow has to be optimized for the model. I'm not aware of any model running that much faster in Forge compared to ComfyUI. Anyhow, if Forge is a good solution for you then this is the way to go.

    • @petertremblay3725
      @petertremblay3725 1 month ago

      @@NextTechandAI I am currently reading about it, and it seems that indeed many users mention the speed being way better in Forge. I cannot post the Reddit link since YouTube will not let me post links. My last generation comparison is Forge: 22 sec and Comfy: 148 sec.

  • @raven1439
    @raven1439 3 months ago

    Hi, as of recently it is possible to use AMD GPU PyTorch ROCm natively under Windows via WSL2. Could you do performance tests in ComfyUI comparing this native solution versus the ZLUDA translator?

    • @NextTechandAI
      @NextTechandAI  3 months ago

      I would like to, but AMD behaves like AMD again: only the 7000 series is supported.

    • @raven1439
      @raven1439 3 months ago

      @@NextTechandAI This method doesn't work for you? HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py

    • @NextTechandAI
      @NextTechandAI  3 months ago

      @@raven1439 That's not the problem. I tried it a while ago; the GPU call just got stuck. I then read about several cases where the 6800 is not (yet?) supported by the driver.

  • @generalawareness101
    @generalawareness101 2 months ago

    Text eludes me with inpainting in Flux.

    • @NextTechandAI
      @NextTechandAI  2 months ago

      What do you mean?

    • @generalawareness101
      @generalawareness101 2 months ago

      @@NextTechandAI I mean, I have spent over 2 days trying to get it to work and it will not. I have gone through various YT creators' workflows and forget it. Ironically, I actually had XL almost do it, while the Flux one next to it (from a creator) could not. Dev. I even tried your workflow, to no avail.

    • @NextTechandAI
      @NextTechandAI  2 months ago

      @@generalawareness101 I still don't know what exactly didn't work for you, but in general you have to give the text enough space. Similar to finger inpainting, the new area to be inpainted needs to be large enough to actually accommodate 5 fingers.

    • @generalawareness101
      @generalawareness101 2 months ago

      @@NextTechandAI I gave it 1/4, 1/2, 3/4 of the images. I tried everything.

  • @PyrateGFXProductions
    @PyrateGFXProductions 4 months ago

    🙏