UPDATE - NEW Flux-Fill-Model for Model-Conditioning Inpainting workflow (see related chapter in the video) is available here: huggingface.co/black-forest-labs/FLUX.1-Fill-dev/tree/main (put it in your ComfyUI\models\diffusion_models and use the updated workflow from my patreon) and GGUF models here: huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/tree/main (put in your ComfyUI\models\unet). Most likely you need to update your ComfyUI.
--
UPDATE - the ControlNet-Inpaint-Beta-Model is available (use instead of Alpha): huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta/tree/main
--
Which FLUX Inpainting workflow do you prefer?
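For anyone unsure where the files from the update above go, here is a minimal sketch of the folder layout; the file names are only examples and depend on which variant you download from the linked repos:

ComfyUI\models\diffusion_models\flux1-fill-dev.safetensors    <- full Fill model (black-forest-labs repo)
ComfyUI\models\unet\flux1-fill-dev-Q8_0.gguf                  <- one of the GGUF quantizations (YarvixPA repo)

After placing the files, restart ComfyUI and pick the model in the updated workflow.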
For the new Flux-Fill-Model: Error occurred when executing UNETLoader:
Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
@anqipang6970 Please use the new workflow from my patreon; using the UNETLoader is wrong. Moreover, you most likely need to update your ComfyUI.
The best, fellow! It would be interesting to take a look at Flux inpainting with an accelerator in conjunction with a LoRA from you!
Interesting idea, thanks for the hint.
You're very cool for giving so much information, and all of it free for people! Respect to you!.. That's a rarity... Respect from Russia, bro!
Awesome video, I love how you don't say there is "one perfect" workflow but instead show us the individual relevant parts of workflows you like. Keep the videos up!
Such nuanced feedback is rare, thank you for that!
Great tutorial! Thank you so much for sharing this valuable knowledge, and it's truly admirable that you're offering these workflows for free. It really makes a difference!
Thank you for the recognition and the very motivating feedback!
Wow, fantastic job in trying all these methods!!! Thank you very much and congratulations!!
Such motivating feedback - thanks a lot!
Strong accent ("harter Akzent") 😄
Usually my accent is called "krass", which I prefer over "hart". Nevertheless, I'm happy you enjoyed the video.
@@NextTechandAI Gladly, may the good Berlin dialect be dear to us (written in Berlin dialect).
Excellent summarization of all the options. You are one of the few who actually showed real results. You've earned a sub.
Thank you very much for your feedback and the sub!
Thank you for your great effort to share with the community for free! I greatly appreciate it. But as a suggestion, please do not include music over the node-connecting parts; it really breaks one's attention. It's not a documentary or a music video clip, just a tutorial where people want to focus on learning something. Thank you again, and keep up the good work.
Thank you for your honest feedback. I'll take this into account in the next videos.
Great vid! Are you planning to make one with the new Flux tools as well?
Thanks a lot!
I've already updated the description for this video and added a new workflow for Flux Fill. Regarding Depth and Canny I'm not sure, as we already have several good solutions, including Union Pro for Flux, which I've covered in the Flux ControlNet video. I'm very keen on the new Redux model, but it doesn't seem to work the way I had hoped. Anyhow, that's currently the best candidate for a video about the Flux tools.
Great tutorial, thanks. How can we use inpainting with two LoRAs for different characters?
Thanks a lot. First inpaint the left character, then in a second step inpaint the right one.
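A rough sketch of that two-pass approach, assuming each LoRA is only active during its own pass:

Pass 1: load LoRA A -> mask the left character -> inpaint with the prompt for character A
Pass 2: take the result -> load LoRA B -> mask the right character -> inpaint with the prompt for character B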
Thank you very much. I was thinking about comparing different inpainting techniques, so your video is just what I need. What do you think about cropping the inpainting part, upscaling it separately, inpainting it and then stitching it back? There is a Crop&Stitch node for that, or we can do it manually, but I'm not sure whether those would work with your ControlNet workflow.
Thanks a lot for your feedback.
Interesting, I didn't know these two nodes. It looks like by using them we can get something similar to 'masked only' in A1111.
I don't think you need ControlNet for this. Not sure regarding upscaling, but usually it's a good idea to do at least a 2x after inpainting to blur the contours.
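For reference, a rough sketch of the crop-and-stitch idea discussed above; the node names follow the community 'crop and stitch' pack and may differ in your installation:

Load Image + mask -> InpaintCrop   (cuts out the masked region, optionally rescaled)
                  -> usual inpainting chain (conditioning, KSampler, VAE Decode)
                  -> InpaintStitch  (pastes the inpainted crop back into the original image)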
Can't make it change my images. IDK what I'm doing wrong, I use the exact workflow.
Thank you for the nice video! I was wondering about your thoughts on applying this to background generation. Do you think the inpainting can be done with consideration of the non-masked region?
Thanks for your feedback! Yes, that's possible. In the 'grow mask with blur' node you have to toggle 'flip_input' to 'true'; then the non-masked region is newly generated with the prompt.
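As a rough sketch of the settings mentioned above (field names as they appear in the community 'Grow Mask With Blur' node; they may vary between versions):

Grow Mask With Blur
  expand:      e.g. 20   (grow the mask outward)
  blur_radius: e.g. 10   (soften the mask edge)
  flip_input:  true      (invert the mask, so the non-masked region is regenerated instead of the subject)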
Can we use the official Flux Canny as a ControlNet?
The workflow is a bit different, especially the conditioning node. I think there is even an official workflow from Black Forest Labs. Nevertheless, if you've understood the workflows of this video, you can easily follow the workflows for the Flux tools.
@@NextTechandAI I get it, but you used Apply ControlNet, while the official Canny model loads in the "Load Diffusion Model" node; I'm finding it hard to combine them using that.
Thx for the video! Does differential diffusion work only with inpainting? Will it somehow impact the image when used with img2img?
Thanks for your feedback! I have only used differential diffusion for inpainting and only know it in that context.
Unfortunately I don't use ComfyUI anymore, since the new Flux model I use is 10 times faster in Forge.
Interesting. Which GPU and which Flux model are you using? Maybe some quantized model?
@@NextTechandAI It's a recently merged model, "unet\fluxFusionV24StepsGGUFNF4_V2GGUFQ5KS.gguf", and I have an RTX 3060 12 GB. For some reason I'm not aware of, it's very slow in ComfyUI with the same CLIP and setup. I also use the turbo_Alpha LoRA, and in Forge an 800x600 image takes about 20 to 24 seconds, while in ComfyUI it takes close to 4 min.
@@petertremblay3725 That's strange, indeed. I can only guess that it requires certain runtime parameters or the workflow has to be optimized for the model. I'm not aware of any model running that much faster in Forge compared to ComfyUI. Anyhow, if Forge is a good solution for you then this is the way to go.
@@NextTechandAI I am currently reading about it, and it seems that indeed many users mention the speed being way better in Forge; I cannot post the Reddit link since YouTube will not let me post links. My last generation comparison is Forge: 22 sec and Comfy: 148 sec.
Hi, as of recently it is now possible to use AMD GPU PyTorch ROCm natively under Windows via WSL2. Could you do performance tests in ComfyUI comparing this native solution versus the ZLUDA translator?
I would like to, but AMD behaves like AMD again: only the 7000 series is supported.
@@NextTechandAI This method doesn't work for you? HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
@@raven1439 That's not the problem. I tried it a while ago, the GPU call just got stuck. I then read about several cases that the 6800 is not (yet?) supported by the driver.
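For anyone with a supported RDNA2 card who still wants to try the override mentioned above, a minimal sketch, assuming a ROCm build of PyTorch is already installed in the active environment (the 10.3.0 value targets RDNA2/gfx1030-class GPUs):

cd ComfyUI
HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py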
Text eludes me with inpainting in Flux.
What do you mean?
@@NextTechandAI I mean, I have spent over 2 days trying to get it to work and it will not. I have gone through various YT creator workflows and forget it. Ironically, I actually had XL almost do it, while the Flux one next to it (from a creator) could not. Flux Dev. I even tried your workflow, to no avail.
@@generalawareness101 I still don't know what exactly didn't work for you, but in general you have to give the text enough space. Similar to finger inpainting, the new area to be inpainted needs to be large enough to actually accommodate 5 fingers.
@@NextTechandAI I gave it 1/4, 1/2, 3/4 of the images. I tried everything.
🙏