Join the conversation on Discord discord.gg/gggpkVgBf3
You can now support the channel and unlock exclusive perks by becoming a member:
pixaroma ruclips.net/channel/UCmMbwA-s3GZDKVzGZ-kPwaQjoin
Check my other channels:
www.youtube.com/@altflux
www.youtube.com/@AI2Play
Thank you. I have been waiting for this in-painting and out-painting video for a long time. There are many tutorials available on RUclips, but understanding the method and implementing it is very important. You have explained it in a very simple way.
thanks uday 🙂
Thanks for all your hard work! I really appreciate the effort you put into your tutorials.
Thank you so much for your support 🙂
Very well done tutorial series, very easy to follow. I also recommend Stability Matrix if anyone does not want to set up ComfyUI this way.
That white tshirt issue bugged me, and since I'm not an Adobe user, I stumbled into a different route. Using the same mask, I used the prompt "nude" and the red shirt was gone. Copying that across I used "white tshirt" and that worked! Also... "black hat on head" worked without leaving comfy... I got lucky with that prompt. Thanks for another great workflow.
Simple, accessible and understandable! You are the best at delivering complex information! Thank you and don't stop!
Thank you 😊
Great tutorial! You made everything so clear and easy to follow - thanks for breaking Inpainting down so well!
Thank you 😊
Wonderful tutorial once again.
I like that you explained denoise and also demonstrated Photoshop techniques at the end! I will try it out once I get some free time…
thank you 🙂
Me, on the other hand, I prefer no PS here. Or, if there is any, without using the AI features in it, so that we can also use GIMP as an alternative. The magic of this channel is its focus on ComfyUI and the freedom it gives us to do what we want, under our control. I moved away from Leonardo and the like looking for that. Besides, PS is the exact opposite of free (in both senses). If this channel gears more toward PS, it will lose me. There are other fantastic channels out there, and this one would lose its edge by losing its focus.
It was worth the wait; you never truly disappoint.
Thank you
I was hoping you'd cover inpainting! Now, please do IP Adapter :)
Great episode! Thank you very much!!!!
Thank you for another fantastic episode!
I've been waiting on this one. Thanks so much! I can't wait to give it a try
Love the tutorial details and it's easy to understand
New interesting stuff ❤
very impressive and informative thank you very much
Love your tutorials.
awesome!
For the color problem and fine-tuning denoise, just add a ControlNet (depth is a good choice) with the cropped image as reference. This allows a higher denoise (even 1) on the sampler without deforming or transfiguring the subject.
thank you, I will give it a try :)
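Not the ComfyUI node wiring itself, but if you want to preview what such a depth reference looks like before feeding it to a depth ControlNet, here is a minimal Python sketch. It assumes the transformers library with the Intel/dpt-large depth model; the file names are hypothetical placeholders.

```python
from PIL import Image
from transformers import pipeline

# Build a depth-estimation pipeline (model choice is just an example).
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

# Use the cropped inpaint region as the reference image (hypothetical file name).
cropped = Image.open("cropped_inpaint_region.png")
result = depth_estimator(cropped)

# The pipeline returns a grayscale PIL image under "depth"; this is the kind of
# map a depth ControlNet would use to keep the subject's shape while allowing
# a high denoise value on the sampler.
result["depth"].save("depth_reference.png")
```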
Thanks! Very clear and valuable inpaint tutorial. While trying a similar workflow, using the Differential Diffusion node in combination with InpaintModelConditioning seems to give better integration results than InpaintModelConditioning alone. Additionally, if this node refuses to inpaint the shirt, try using another inpaint model (such as BrushNet).
thank you
Hi. Thanks for this valuable series on ComfyUI. I would suggest an episode on how to train a LoRA in ComfyUI: which workflow to use depending on the checkpoint, and the best checkpoints you may suggest we try... Keep going and thanks.
Thanks, yes, people keep asking me. I just didn't find an easy way to do it locally, only online, and that is not free. The methods I tried are harder to install and give errors that I can't explain to the community how to fix, since I am a designer, not a coder. So I have been waiting for an easy solution; FluxGym can work sometimes but still gives errors sometimes.
Exciting as always! Could you share the Inpainting and Outpainting workflow so that I don't have to rush from the video?
It's on Discord in the pixaroma-workflows channel; the link to Discord is in the header of the channel or in the video description.
@@pixaroma Oh, I found it, thank you!
This is the direct link; mention pixaroma there if you still can't find it: discord.com/channels/1245221993746399232/1270589667359592470/1300813045127188591
Thank you for the great tutorial! How do you do workflow tabs in ComfyUI?
Go to Settings (the gear wheel) and search for workflow, look for Opened workflows position, and choose Topbar instead of Sidebar.
great video
thank you 🙂
Add a depth ControlNet to make the generation more similar, especially in Flux.
Fooocus inpaint is the king I think
I think I saw someone using a Fooocus model in ComfyUI, but I'm not sure.
Thank you very much for all the information and how easy it is to follow. What PC configuration do you have, and what generation times do you get?
RTX 4090 with 24 GB of VRAM, 128 GB of RAM. It depends on the model, between 3-15 seconds; SDXL is fast and Flux takes about 14-15 seconds to generate an image.
@@pixaroma Thanks for the info. 15 seconds is really fast
When are you going to do an IMG2VIDEO tutorial???? :)
When there is a good video model. So far the video models I have seen are not really usable compared with Kling AI, for example, which creates decent video.
Thx! But what’s the version of your comfy ui?
If I go to Manager and scroll down on the right I see this: ComfyUI: 2797[770ab2](2024-10-29)
Manager: V2.51.8
As for the release, it is v0.2.5
@@pixaroma Thx, but is it the .exe, the ComfyUI Desktop V1? That's what I'm waiting for.
@@sollmasterdoodle No, that is still in beta from what I know, and if you are not on the list you don't have access to it yet.
@@pixaroma Thx a lot to you! Perfect video!
Is there a Comfy node that lets you paint on the image like in Forge?
I don't know of one, but there are so many nodes, I am sure there are some that let you do that.
How do we include the new Flux Canny and Depth LoRA with the inpainting workflow? How do we combine the pix-to-pix instruct and the inpainting conditioning nodes?
You can connect the LoRA just like I did at the end of episode 24, but I remember I tried something and it didn't work quite as expected, maybe because the LoRA needs 10 for Flux guidance and inpainting didn't need that much, or maybe I didn't put in the right settings.
@@pixaroma Yes, I was confused about which latent would go to the KSampler. Is it the one from inpaint conditioning or the one from pix-to-pix? See if you can figure it out.
@@CharlesPrithviRaj Ah, I see what you mean; that's probably why I didn't get the result I expected, I think I skipped one of the inpaint nodes :)) I didn't test it, but you can try one of these nodes, LatentAdd or LatentBlend: take the latents that come from both inpaint nodes into one of those nodes, and from that node connect to the KSampler, so you combine both latents. In theory it should work, but as I said I didn't test it; let me know if it works.
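For anyone trying that suggestion, here is a rough Python sketch of what those two nodes do with the latents. It assumes the usual ComfyUI latent dict holding a "samples" tensor and that both latents have the same shape; the real nodes also handle shape mismatches and extra blend modes.

```python
import torch

def latent_add(latent_a: dict, latent_b: dict) -> dict:
    # LatentAdd: element-wise sum of the two latent tensors.
    return {"samples": latent_a["samples"] + latent_b["samples"]}

def latent_blend(latent_a: dict, latent_b: dict, blend_factor: float = 0.5) -> dict:
    # LatentBlend: linear interpolation; blend_factor = 1.0 keeps only latent_a,
    # 0.0 keeps only latent_b. The blended result is what you would feed to the KSampler.
    samples = torch.lerp(latent_b["samples"], latent_a["samples"], blend_factor)
    return {"samples": samples}
```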
Can you tell me how I can change the color of a garment with such a small workflow? But without changing the garment! Only the color.
It's hard to keep things intact; I would probably just change the color in Photoshop and use image-to-image with low denoise to better blend the colors.
In the introduction part of the video, the female picture appears in motion. How can I do that, what should I research?
I used image-to-video on the platform klingai.com/. So far I didn't find a free option that works as well locally; that's why I used an online platform.
After upgrading to the new interface, I can't tell where my workflows are being saved to. Is there some way to tell?
You can click on Export and put it in any folder you want: go to the top left corner, choose Workflow, then Export, and pick a folder; with the workflow open you can open it again from there. If you save it with Save or Save As, it will go into the workflows folder; the path is something like this: ComfyUI_windows_portable\ComfyUI\user\default\workflows
is there a way to increase the batch size?
From queue you have batch size 1, you increase that number
@@pixaroma But in the queue that's batch count, not batch size. I tried to add a "repeat latent batch" node but the stitch node does not like it.
I don't know of any; I usually just use that batch and check back in a few minutes, or use the increment queue. So only if you find a node that does that.
@@pixaroma okay thank you anyways. 👍
And what about the *Controlnet Inpaint BETA* for FLUX?
I didn't try it yet; since it is Flux + ControlNet it will take extra time to generate because it loads an extra model, but I will play with it to see if I can get better results.
Flux chin sighted. 😅😊
😁 can be fixed with sdxl Inpaint 😂
Excuse my amateur question, but how can I batch more images?
It depends. You can use a load-image node from iTools, for example, that lets you take images from a folder, or you can use the batch option next to Queue to run the workflow multiple times, depending on what you need to do. Check episode 15 if you want to load a folder with images.
@@pixaroma I don't think I'm understanding. So for example, if I want to get 10 results for different glasses on the same face, where do I put the batch size?
@@bubuububu It's hard to explain in text; it would have been much easier to show a screenshot on Discord. If you have the new interface with the floating QUEUE button, it has a number next to it, like 1; you can increase that number to 10 and it will run the workflow 10 times. So if your seed is not fixed (usually it is set to randomize), it will run 10 times, each time with a different seed, so you get different results, and then it will stop. If you have the old interface with the QUEUE PROMPT button, under it you have a checkbox, Extra Options, and there you have batch count, which defaults to 1, so you can put 10 there and run it.
@@pixaroma Thanks a lot. I knew I was missing something basic 😅
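If you ever want the same multi-run behavior from a script instead of the UI, here is a minimal sketch using ComfyUI's HTTP API (the /prompt endpoint on the default 127.0.0.1:8188 address). The file name and run count are placeholder assumptions; the workflow must be exported in API format, and the seed is randomized per run because in the exported JSON it is a fixed number.

```python
import json
import random
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address

def queue_workflow(api_json_path: str, times: int = 10) -> None:
    # Queues the same API-format workflow several times, like batch count in the UI.
    with open(api_json_path) as f:
        prompt = json.load(f)
    for _ in range(times):
        # Randomize every "seed" input so each run produces a different result.
        for node in prompt.values():
            if "seed" in node.get("inputs", {}):
                node["inputs"]["seed"] = random.randint(0, 2**31 - 1)
        data = json.dumps({"prompt": prompt}).encode("utf-8")
        req = urllib.request.Request(f"{SERVER}/prompt", data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

queue_workflow("inpaint_workflow_api.json", times=10)  # hypothetical file name
```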
Friend, I have an annoying problem: I just can't find the Florence2 image prompt node.
There are two; I use ComfyUI-Florence2 by kijai, you can find it in the Manager. This is their page: github.com/kijai/ComfyUI-Florence2. Look at how I use it in episode 11: ruclips.net/video/yutYU97Bj7E/видео.htmlsi=EZ2yHl6-7tsbGapX
@@pixaroma thank you man
When I did this tutorial, it didn't work (5:52). I realized after much fiddling around, using different loaded images and different masks, that it won't work if the mask is not continuous. I couldn't make it work when I had a mask on the far right edge and a separate one on the far left edge. WEIRD.
I usually do one selection. On the Inpaint Crop node there is an option called fill_mask_holes; maybe try turning it off to see if it helps, and if not, maybe it is some bug.
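As a rough illustration of what a fill_mask_holes style option does (not the node's actual code): hole filling closes gaps inside a single masked region, but it does not merge two separate regions, like one on the far left and one on the far right of the image. A minimal sketch with scipy:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# A masked square with a hole punched in the middle.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
mask[3:5, 3:5] = False

filled = binary_fill_holes(mask)
print(filled[3, 3])   # True: the enclosed hole gets filled
print(filled[0, 0])   # False: areas outside the region stay unmasked
```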
Thanks for not loading this up with a bunch of BS
I am trying to do single-transform inpainting with Flux 1.1 dev trained LoRAs and the same model for the inpainting. The model is trained on materials, and when I run it with a mask, the whole picture changes instead of just the masked area. I might be missing a node, but I just want the masked area to be changed, and it always changes the whole picture and puts artifacts in it.
I already have LoRA strength at 1 and I need to play with it a lot, but I have not been able to control the masked area or the picture, so weird! The setup you have does not have a LoRA in it, which would be effective; does anyone have a setup with a LoRA?
I didn't try it with lora yet
@@pixaroma Would you be interested in colaborating? I need to get it done! good experience too, we have like 700 loras trained waiting to be used!!
@@jessedbrown1980 I am checking the message on Discord now.
Man, Adobe is cooked...
😁