ComfyUI EP04 : (Smart) Inpaint with ComfyUI [Stable Diffusion]
- Published: 14 May 2024
- "Want to master inpainting in ComfyUI and make your AI Images pop? 🎨
Join me in this video where I'll take you through not just one, but THREE ways to create inpaint masks! 🖌️
1. Crafting the mask area by hand, DIY style
2. Automatically detecting the mask region in a selected area with Impact Pack Custom Nodes
3. Creating the mask using text prompts through ClipSeg Custom Nodes magic
And wait, there's more! I'll also share some insider secrets on modifying masks using Masquerade Custom Nodes. 🎭
You won't want to miss this! Hit that like and subscribe button and dive into the creative world of AI Image Generation with me! 🔥"
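The Masquerade-style mask tricks teased above (growing, shrinking, blurring a mask) boil down to simple array operations. Here is a minimal, hypothetical numpy sketch of a "grow mask" operation; it is an illustration of the idea, not the actual Masquerade node code:

```python
import numpy as np

def grow_mask(mask: np.ndarray, pixels: int = 1) -> np.ndarray:
    """Dilate a binary mask by `pixels`, using 4-neighbour shifts."""
    out = mask.astype(bool).copy()
    for _ in range(pixels):
        shifted = out.copy()
        shifted[1:, :] |= out[:-1, :]   # spread down
        shifted[:-1, :] |= out[1:, :]   # spread up
        shifted[:, 1:] |= out[:, :-1]   # spread right
        shifted[:, :-1] |= out[:, 1:]   # spread left
        out = shifted
    return out
```

Growing a mask a few pixels before inpainting is a common way to give the sampler room to blend the new content into the surrounding image.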
You can view the method to install ComfyUI Manager here :
• ComfyUI EP03 - Part4/4...
Content
=======
00:00 - Intro
02:12 - 1) Manual Inpaint
09:52 - 2) SAM Auto Detection
15:00 - Need of Masquerade Nodes
18:04 - 3) ClipSeg
20:32 - Conclusion
I understand English is not your first language; despite that, your videos were the best ones for explaining how to install and use ComfyUI nodes, etc. Showing and using examples and going through the steps is a great help. Thank you very much, you have helped me understand with less pain in the process.
Thx a lot
AGREED!!!! This bloke is good!!!
Thank you so much for the step-by-step of how it works. Keep up the great work!
Thanks for showing off some of the extra nodes and how they work. Automatic masking is so cool.
Thank you for showing the several methods of masking and inpainting!
Thank you, lots of great ways to inpaint in ComfyUI !
Thank you, finally a step by step comfyui that is easy to understand without a lot of nodes tangling together, Liked and subscribed
Thanks a bunch for sharing this! Hope you'll make more videos about ComfyUI! Can't wait to see more from you.
Thx, I will create many more videos about ComfyUI for sure!
Thankyou for the tutorial, easy to understand 👍👍👍👍
Good stuff, keep it up! We love it!
Very well explained. Keep it up man!
Very informative and helpful for a comfy UI beginner. Keep up your good work. 😄
Thx, i will continue creating good comfyui clips
It realy helps! Thanks.
Excellent!
Thank you for sharing
Good stuff! Thanks!
Thank you a lot for your explanation and comparison it helps a lot to understand.
Do you think Clipseg can be used in batch images?
Such a great clip, practicing from easy to progressively harder. I kept repeating it until I had it memorized. Thank you.
Thank you, I learnt so many things.
Glad it was helpful!
Awesome work. Keep it up. I hope to see a future video on how to create different poses using the same face and body
Thx for your request
Thank you very much for the video, it's exactly what I was looking for.
fantastic thanks!
Glad you like it!
Excellent. 👍
Thx ^^
Thank you!
You're welcome
Great job krub. 👍👍👍
Thanks 👍
Thanks for the video..hope you will make many more comfyui videos
Sure!
Thank you for all your videos, they are very helpful. I have a problem: when I use Set Latent Noise Mask, nothing happens even if I use the highest noise values. Do you know why that could be? I can only get results when I use VAE Encode.
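For context: Set Latent Noise Mask does not re-encode anything, it only tells the sampler where its changes are kept, so at low denoise the original latent still dominates inside the mask and it can look like "nothing happens". A simplified numpy sketch of that per-region blend (an illustration of the concept, not ComfyUI's actual implementation):

```python
import numpy as np

def masked_blend(sampled: np.ndarray, original: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Keep the sampler's output inside the mask, the original latent outside."""
    return mask * sampled + (1.0 - mask) * original
```

With mask values of 1.0 the sampled latent passes through untouched; with 0.0 the original is preserved, which is why the rest of the image stays identical.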
Great video, Thanks!
You're welcome!
clipseg is so awesome
thx man
Thank you ever so much for all the clips na kub. I hardly subscribe to a channel nowadays, but I subbed to yours kub....
Thx a lot! Are you the same person as @MunkTVDOMUNK? I'm also a fan of that channel!
Thank you so much. I’m munk kub 🙏🏼🙏🏼
Want to learn more about upscaling kub.
@@munkmegtube oh, i also plan that next episode will be the upscale topic. Matches your requirement perfectly!
Great tutorial, thanks!
Do you know how to inpaint a specific image to the tshirt?
can you also use this with incremental_images or is this mainly for just a single image use case?
Wow, very good tutorial.
Glad that you like it
Thanks
Thx! Before your excellent tutorial I tried the "VAE Encode (for Inpainting)" node a lot but got little out of it.
Nice tutorial. How do you inpaint better hands?
Subscribed from Cambodia
Thank you for your content. HOW can i put a logo (or any image) on her shirt?
Absolutely awesome, thank you!
There's much more awesome stuff than this; I'll teach it bit by bit.
@@AIAngelGallery Can't wait ❤
Excellent video, thanks! When I inpaint an image generated by the default workflow using copy/paste clipspace, inpainting is fast; but when I inpaint a new image loaded from disk, it is slower, because every time I queue the prompt it "requests to load base model" again. Is there a way to avoid reloading the model every time when using an image from disk as the source?
New person question: when you have all of this hooked up to your main workflow and you hit "Queue Prompt", it runs through the whole thing instead of going to the mask subroutine, screwing everything up. I didn't see you bypass anything, so how do you avoid that?
Here I am, teacher!
Feel free to ask if you have any questions.
There are a couple of other alternative nodes to inpainting that I've found, like ComfyI2I and some custom nodes in Impact Pack. Do you think they're worth trying out or are they about the same?
I'm looking to do auto face and hands fixing, there are facedetailer nodes specifically in other node packs but it looks like clipseg as described here kinda just works already. Do you think it makes sense to use clipseg for that or should i use a more specialized node?
The Impact Pack Detailer nodes are super powerful. I will cover them in a future episode because they are very complex.
Hi, please help. You added graphics to the t-shirt using a prompt: you typed "flower" and it changed to a flower. But what if I have an existing graphics image? How can I add my existing graphic to the t-shirt?
The "Manager" button is not present in my UI. Do you need to enable it somewhere, or have I downloaded the wrong version?
Thanks, bro
Hello, is it possible to turn a 2D character into a PSD file, with each body part as a separate image layer? How can we do that?
Great vid, but what if I want to create masks for a batch of images?
You can do that with the Load Image Batch node in the WAS Node Suite: github.com/WASasquatch/was-node-suite-comfyui
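If you prefer to prepare masks outside ComfyUI, batch masking is also just a loop over images. A minimal sketch using a plain luminance threshold; the threshold value and the in-memory arrays are arbitrary illustrations, not what ClipSeg or SAM actually do:

```python
import numpy as np

def batch_masks(images, threshold=128):
    """Build one binary mask (0/255) per image via a luminance threshold."""
    masks = []
    for img in images:                  # each img: (H, W, 3) uint8 array
        lum = img.mean(axis=2)          # crude grayscale
        masks.append((lum > threshold).astype(np.uint8) * 255)
    return masks
```

Each resulting array can be saved as a grayscale image and fed into a Load Image (as Mask) node.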
What if I want to put an external image on the shirt?
Kind gentleman, can you show how to edit parts of the body, please? Using inpaint? The shape of arms, legs, fingers, or even the face. Is this possible?
Could you make a tutorial on how to swap faces with ComfyUI? What I mean is to change the face in a photo to a face taken from another.
Hello. Thanks in advance!
Thx for your request
I've watched every episode; they are excellent and very interesting. May I ask:
1. For ordinary users, is ComfyUI easier to use than Automatic1111? (I was just about to install 1111 when I found this clip.)
3. Can it train LoRA?
I already answered in the other comment.
Sometimes inpainting through Set Latent Noise Mask generates very poor results compared to VAE Encode (for Inpainting). Poor meaning: it stays inside the mask, but what it generates is not logically good or desired, whereas VAE Encode (for Inpainting) seems to "understand" better what is intended. Have you seen this, or do you have advice? In your example at 6:51, Set Latent Noise Mask works great, so I don't know why I sometimes get poor results, unless it is the model I am using. Do some models work poorly with inpainting, and can you share your experience on that? Thank you for your channel.
Does this work for SDXL?
Can I have the workflows for all the methods mentioned here, please?
Manual masking is the way to go for personal images, since most of them are JPEGs and don't have enough data in them; auto/smart masking will make a mess of them. Use ClipSeg/SAM for your own generations. Like someone else in the comments, I'd like to see a face-swap tutorial in ComfyUI. Thank you.
I understand that ClipSeg and SAM can look at any image regardless of format, so JPEG shouldn't matter (except that JPEG quality is worse)?
@@AIAngelGallery Well, tbh, on medium-quality images they can't figure out depth or much of anything. You select/type a tree in the background, and they select half of the right arm of a person far away from the tree.
How to make a model with a wet t-shirt? Prompts don't seem to work very well. Thanks.
You can use this LoRA (watch EP07 to learn how to use it):
civitai.com/models/17391/wet-t-shirt-lora
I set up the nodes exactly like yours, but mine also runs the image generation at the top. I want it to run only the inpaint part. How do I do that? Why does your video run only the inpaint?
If the image is larger than the model's training resolution, you will get tiled and/or strange results that don't fit properly in the mask. For this, it's best to run the "Load Image" node through "Image scale to side", set the side length to the model's trained size (for example 1024 or 512), and then use this scaled image as your image source.
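The "Image scale to side" step described above is just an aspect-preserving resize. A minimal sketch of the size computation (the rounding choice is an assumption, not necessarily what the node does):

```python
def scale_to_side(width: int, height: int, side: int = 1024):
    """Return (w, h) scaled so the longer side equals `side`, keeping aspect ratio."""
    scale = side / max(width, height)
    return round(width * scale), round(height * scale)
```

For example, a 2048x1024 image targeted at side 1024 becomes 1024x512, which matches what an SDXL-era model expects to see.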
I am having a problem with the automatic one. Will this solve it 100%?
How do I use an image as a mask? ComfyUI's masking is not perfect. Let's say you use Photoshop to create a perfect mask, then import the alpha into ComfyUI to mask the image and regenerate that part of it. How do you do that? Can someone help?
Please teach how to use LoRA. I'm following the channel; it's really great!
I'll teach it in the next episode.
This guy is living the future, on the 13th of august 2566 to be exact 😆
How do you even manage to set your system time that far into the future?
BTW: Great video, thanks 🙂
Oh, it's the Buddhist Era year: 2023 + 543 = 2566.
oh lol kk
@@AIAngelGallery why is it like that?
@@saltyseadog4719 Because not all countries in the world use the Gregorian calendar. There are many calendar systems, and Buddhist countries use the Buddhist Era (BE) calendar.
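The calendar conversion discussed in this thread is a fixed offset, which a one-line helper makes concrete:

```python
def be_to_ce(be_year: int) -> int:
    """Buddhist Era = Common Era + 543, so subtract 543 to convert back."""
    return be_year - 543
```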
Can I inpaint to change the shirt the model is wearing into a shirt I already have photos of?
You should do that in a photo editor first, then use the result to inpaint again with not too much denoise.
Could you tell me the full name of the checkpoint used in this clip? The names overlap on screen, so I can't read it. (If you can't say, that's fine.)
It's Epicrealism. Details are in EP01, the channel's first clip.
Ah, thank you 🙏@@AIAngelGallery
I have a problem: when I inpaint, it somehow makes the entire image more contrasty and removes detail, and if I keep copying the result and inpainting again, it keeps getting darker and the quality keeps worsening. How can I fix this, and why does it happen?
Does it get darker only in the inpainted area, or in the whole picture?
@@AIAngelGallery The whole picture. Only the mask area gets regenerated, but the whole picture gets this contrast increase. I set it up exactly like you did.
@@AIAngelGallery I think it was the VAE I was using.
Hi, are these nodes SDXL compatible?
Nevermind. It all worked alright. Great tutorial man. Very much appreciated! Clipseg and SAM are great!
Can LoRA be used with ComfyUI?
Yes, and there is a node for loading LoRA too.
@@AIAngelGallery Got it working, thank you.
@@AIAngelGallery In ComfyUI, what should I do if I want to use more than one model? 🙏🏼🥹 Also still waiting for the upscale episode; I've tried many methods and none came out well.
@@munkmegtube Please wait a bit; I'm busy with my main job (I'm a trainer teaching Excel and Power BI).
Hello my Indian friend.
I have been working with ComfyUI for a week and learning a lot about how to do things with it, but I didn't understand why the UI does what it does... now I found your video and you teach it so well! Thx, greetings from Germany 😊
How do I press Queue Prompt so that it runs only one KSampler? When I press it, both of them run.
Press Ctrl+M to mute or Ctrl+B to bypass the nodes you don't want.
Why does the KSampler node in your video display a realtime preview image? Mine does not 😐 How can I enable this? *EDIT:* Found the solution! Edit the "run_nvidia_gpu.bat" file and add "--preview-method auto" at the end. It will show the steps in the KSampler panel, at the bottom ☺
You can also set it in the manager (easier)
@@AIAngelGallery Nice, thx! Now all I wish for is some kind of "switch" or "toggle" node to choose which workflow is processed when I click "Queue Prompt". Right now it always processes BOTH workflows (inpainting the chosen image and creating a completely new one).
@@MikevomMars If the result would be the same (same fixed seed), ComfyUI will not regenerate that part.
@@MikevomMars The Comfyroll custom nodes have a switch node; take a look.
Any Thai users here?
How to outpaint in ComfyUI?
A great tutorial. I'm having trouble loading Impact Pack. A note in Manager says: "please edit the impact-pack.ini file in the ComfyUI-Impact-Pack directory and change 'mmdet_skip = True' to 'mmdet_skip = False'." I can't find the impact-pack.ini file. Can you help? Thank you.
It's in the "ComfyUI-Impact-Pack" folder under custom_nodes. However, I've never needed to edit that file.
To use Manager and Impact Pack properly, you should update the main ComfyUI and Manager using the "git pull" command.
Does the value of 1234 in the seed have a function? What happens if I change the seed number?
The result will change