💥The secret of easy Flux inpainting in ComfyUI - forget about Stable Diffusion
- Published: 5 Oct 2024
- In this video, I’ll be showing you a simple workflow for Flux inpainting in ComfyUI. You can even combine this inpainting method with the optimized GGUF models that I covered in my previous videos to achieve faster execution and higher-quality results. If you haven’t installed these models yet, make sure to check out the previous tutorial, where I explain how to download and set up GGUF models in ComfyUI.
---
In this tutorial, I’ll walk you through the inpainting process using Flux with an easy-to-follow workflow, perfect for customizing images. You’ll be able to effortlessly change clothes, hairstyles, or other elements in your pictures with a simple brush tool, eliminating the need for complicated steps found online.
Subscribe to my channel so you don't miss the next videos 👇🏻
/ @jockerai
In this video:
How to install and set up three essential custom nodes for ComfyUI.
A full breakdown of each node’s function and how to connect them for inpainting.
How to switch between default Flux models and optimized GGUF models for better performance on lower-end systems.
A step-by-step guide to masking and applying changes to images using Flux, including tweaks for blending and smoothing edges.
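The "blending and smoothing edges" tweak from the list above boils down to feathering the mask: a hard 0/1 mask leaves a visible seam, while a blurred mask fades the inpainted region into the original. Here is a minimal pure-Python sketch of that idea on a 1-D mask (real workflows use a blur node; this is an illustration, not ComfyUI's code):

```python
# Sketch of edge feathering: soften a hard 0/1 mask so the inpainted
# region blends into the original image instead of showing a hard seam.
# Pure-Python box blur for illustration; ComfyUI nodes use a Gaussian blur.

def feather(mask, radius=2):
    """Box-blur a 1-D mask of 0.0/1.0 values; returns softened values."""
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = mask[lo:hi]
        out.append(sum(window) / len(window))
    return out

hard = [0, 0, 0, 1, 1, 1, 0, 0, 0]
soft = feather(hard, radius=1)
# values near the 0/1 boundary become fractional, giving a gradual transition
```

A larger radius widens the transition band, which hides seams but also lets the new content bleed further into untouched areas.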
---
Links:
Download the new text encoder for Flux: huggingface.co...
Learn how to set up ComfyUI: • Install FLUX locally i...
Installing GGUF models for Flux: • Install Flux locally i...
Download workflow: drive.google.c...
---
Thanks for watching! If this video helped, don’t forget to give it a thumbs up. Make sure to subscribe to the channel and hit the notification bell so you won’t miss any of my upcoming tutorials. Have any questions? Drop them in the comments below, and I’ll do my best to help!
Click the link below to stay up to date with the latest tutorials about AI 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
Five golden stars. Thank you!
@@CsokaErno thank you ✨😍
Thanks for the workflow, it worked nicely
Waiting for more :)
You're welcome bro
You are amazing!!! A great teacher. You will explode on the internet!!!
Thank you so much, that was an uplifting comment ✨❤
Great detail and I liked that you showed how to build the workflow! Well done! 😀
@@GenoG thank you mate✨😉
Great video, thank you very much. Just a note: many content creators share their workflows with their viewers. This is helpful; it doesn't mean viewers won't watch, like, and comment on the videos, it just shows the content creator's appreciation for the time viewers give to watching them. I enjoy the video itself and how you create the workflow; there is always something new to learn in this process. I will build the workflow, but it would be nice if you would share it too. Thanks again!!
@@baheth3elmy16 Thanks for sharing your thoughts. OK, I'll add the workflow file in the description ✨
Excellent! Can you do one for background remover and lighting?
Nice work!
@@motion_time Are you a Persian speaker?
@@Jockerai Yes
And it's really exciting for me that an Iranian has become so professional in a new technology
I really like your work
By the way, we're looking for a specialist for a job opening in AI, and ComfyUI in particular
Let me know if you have free time
@@motion_time Please send me a message on Telegram: @graphixm
Hi, thanks for the tutorial, this works great. How does the diffusion model know the position of the shirt (and other inpainted things) without any ControlNet like OpenPose?
@@erans Inpainting is one of the examples of img2img. You brush some areas of an image, but the AI still scans the whole image to understand what it is about, then applies changes only to the brushed areas.
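The idea in that reply can be sketched as a masked composite: the model sees the whole image for context, but the final result keeps original pixels wherever the mask is 0 and takes generated pixels wherever it is 1. A toy single-channel example (an illustration of the principle, not ComfyUI's internal code):

```python
# Illustration of the img2img idea behind inpainting: the model uses the
# whole image for context, but the output is composited so only masked
# pixels change. Toy 1-channel pixel lists, not ComfyUI internals.

def composite(original, generated, mask):
    """Blend per pixel: mask=1.0 takes the generated value, 0.0 keeps the original."""
    return [g * m + o * (1.0 - m) for o, g, m in zip(original, generated, mask)]

orig = [10, 20, 30, 40]
gen  = [99, 99, 99, 99]
mask = [0.0, 1.0, 1.0, 0.0]  # only the middle two pixels are brushed
print(composite(orig, gen, mask))  # → [10.0, 99.0, 99.0, 40.0]
```

With a feathered mask (fractional values), the same formula produces the smooth transition at the brush edges.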
Just need a LORA node ;)
Yes, it can be added, which is covered in the "Multi-LoRA" video ;) : ruclips.net/video/-Xf0CggToLM/видео.html
@@Jockerai 🤙
Can you please make a tutorial about how to use a Flux LoRA model trained on Fal in a locally installed ComfyUI? A model trained on Fal doesn't resemble the subject when using the LoRA in ComfyUI, even with the trigger word.
I haven't tested Fal-trained LoRAs yet. But you can use different nodes to test that. Watch my video titled Flux Multi-LoRA
It's working but I have a problem: it's so slow on Flux Dev fp8, and it only happens with inpainting (around 16 min). When I do txt2img it's 40 seconds. Am I doing something wrong? My GPU is a Radeon RX 7900 XT
@@Hecbertgg I will make a video tomorrow and it's even faster
Prompt outputs failed validation
UnetLoaderGGUF:
- Value not in list: unet_name: 'flux1-dev-Q4_K_S.gguf' not in []
DualCLIPLoaderGGUF:
- Required input is missing: clip_name1
- Required input is missing: clip_name2
VAELoader:
- Required input is missing: vae_name
What is the solution to this problem?
Make sure you download all the models you need and place them in the proper locations. Then select them in every node in ComfyUI
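The validation errors above usually mean the files aren't in the folders ComfyUI scans. A small sketch of a sanity check, assuming the typical ComfyUI layout (`models/unet` for GGUF UNets, `models/clip` for text encoders, `models/vae` for VAEs) and the filename from the error message; adjust the base directory and names to your own install:

```python
# Sanity-check that the files the validation error complains about exist
# where ComfyUI looks for them. The directory layout below is the typical
# ComfyUI convention and the .gguf name is taken from the error above.
from pathlib import Path

def check_models(comfy_dir):
    """Return {description: exists?} for the paths the error message needs."""
    base = Path(comfy_dir)
    expected = {
        "UnetLoaderGGUF: models/unet/flux1-dev-Q4_K_S.gguf":
            base / "models" / "unet" / "flux1-dev-Q4_K_S.gguf",
        "DualCLIPLoaderGGUF: models/clip/ (clip_name1, clip_name2)":
            base / "models" / "clip",
        "VAELoader: models/vae/ (vae_name)":
            base / "models" / "vae",
    }
    return {desc: path.exists() for desc, path in expected.items()}

for desc, ok in check_models("ComfyUI").items():
    print(("OK      " if ok else "MISSING ") + desc)
```

If a path shows as MISSING, download the model (or move it) and then reselect it in the node's dropdown; an empty dropdown list (`not in []`) means ComfyUI found no files at all in that folder.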
I am a beginner, so please excuse me if I ask a stupid question. In your workflow I can paint on the foreground object. But as soon as I try to paint on the background, e.g. a bottle with glasses, nothing happens and I don't get an error message. Am I doing something wrong? I would be very grateful for an answer
@@wolfgangterner7277 It's totally OK to ask questions, feel free to do so.
What do you mean by nothing happens?
@@Jockerai
If I try to create a bottle with two glasses and I have painted a mask on the background, nothing changes in my picture. It only works if I paint on the foreground object, e.g. changing the color of a jacket
@@wolfgangterner7277 You have to try changing the prompt or increasing the Flux guidance, and try multiple times to get the right result
thanks for the tip, now everything works
@@wolfgangterner7277 Happy to hear that 🤩😉
Does a partial denoise work? Like say, 0.70?
Yes, in the Basic Guider node you can set a lower denoise, but 0.7 is very low and the prompt probably won't work well. Set it around 0.85-1.0
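The reason low denoise values weaken the prompt: with denoise d, the sampler effectively runs only the last d-fraction of its schedule, so most of the original image is kept and the model has few steps in which to follow the prompt. A simplified arithmetic sketch (an illustration of the principle, not ComfyUI's scheduler code):

```python
# How partial denoise maps to sampler steps: with denoise d, roughly only
# the last d-fraction of the schedule is run, so low d preserves more of
# the original image and limits how much the prompt can change.
# Simplified arithmetic sketch, not ComfyUI's scheduler code.

def effective_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of steps actually denoised for a given strength."""
    return round(total_steps * denoise)

for d in (0.70, 0.85, 1.0):
    print(f"denoise={d}: {effective_steps(20, d)} of 20 steps")
```

At denoise 0.70 only about 14 of 20 steps run, which explains why big edits like a new garment often need 0.85 or higher.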
Hi, I just want to know what your PC specs are. I'm about to buy a new laptop with an RTX 4050; how much time do you think it will take to generate an image?
@@rishabhp1762 The time to generate an image depends on many factors like size, models, LoRAs etc., but in general it takes about 88 seconds with the Q8 GGUF Flux model for a 1024×1024 image on the RTX 3060 12GB that I have.
@@Jockerai ok thank you
@@rishabhp1762 you're welcome
Whatever you do, do not get anything less than 12GB vram
If someone needs to make a comic with ComfyUI, what workflow should they use? To capture the characters separately?
You need an appropriate prompt for that. Use the phrase "character sheet" in your prompt
@@Jockerai Is there any good workflow for that? I'm desperately looking around trying to find one
@@nothing228 you can use my workflow in this video : ruclips.net/video/txDFK-RcUq4/видео.html
and use this LoRA for comic : civitai.com/models/210095/the-wizards-vintage-comic-book-cover
1:50 The KJ nodes pack seems conflicted 🤔
@@technicusacity Yes, I know. I've updated all of my custom nodes and some conflicts still remain. It doesn't cause any disruption to our work with ComfyUI
@@Jockerai Just a bit annoying. Sadly ComfyUI doesn't indicate which modules the conflict arose in. The workflow works strangely: I tried to describe a plane flying over the city, but the result was disappointing. The plane was drawn, but the merging of the original image and the background under the mask doesn't occur. A blimp, however, was successfully inserted 🙄
Lol, is it pronounced "G-G-U-F", not "goof"? Think of PNG :D lol. First time I've heard this.
Spelling out 4 letters is much harder than just saying "goof" 🤩🤩😎 Although there's no specific rule for pronouncing abbreviations... ;)