Thank you! Inpainting has always given me anxiety. You healed me. Thank you.
Having experimented with various ways of using inpainting in ComfyUI, I have to admit this tutorial was the best one, with great results. I'm new to this platform and I do struggle with custom nodes. Glad to have found this channel, though.
thanks for the comment! glad to hear!
I feel like there are a ton of ways to do inpainting in Comfy; it's just a matter of what you prefer, and I assume they all give pretty good results.
Thanks for providing the workflow! Massive Like
thx for providing the comment 😇
@@PixelEasel you're a true G, ninja. you got Discord?
wow, finally someone willing to share their knowledge. very rare. thank you sir, you've earned a new subscriber
Welcome aboard!
it's free software and there are so many resources if you LOOK.
subbed for the free workflow... nice video too!!
thanks! good to know!
me too, to the point!
I appreciate the simplicity of this workflow, and also how to-the-point you are. Tutorial videos should be more like this.
thanks 😊
@@PixelEasel I just used this workflow last week. Still works.
Hey, I really appreciate the effort you put into this tutorial; great content! That said, the TTS voiceover could use some tweaks for better clarity and engagement. TTS systems, like the ones on ElevenLabs, need scripts that are carefully formatted to produce natural-sounding inflection.
Here's a tip: try feeding your script into an LLM like ChatGPT and ask it to rewrite the text phonetically for TTS purposes. You can prompt it to use ellipses, commas, and full stops strategically to create natural pauses and emphasis. For technical terms and shortcuts (like 'CTRL+SHIFT+X'), rewrite them phonetically (e.g., 'Control Shift Ex') to ensure they're spoken correctly. Words like 'inpainting' might need to be split, like 'in... painting,' to sound more accurate.
Lastly, please bring back the full stops! They're not just grammar rules; they're essential for making spoken language easier to follow. Almost every language uses them for a reason. Trust me, these tweaks will make your content not just good but great! There's nothing more disheartening than someone who clearly knows their stuff sabotaging their own content with poor delivery.
Is it possible to run the second KSampler node only, for the img2img part, and not the first one? This workflow is not practical if the first KSampler node generates multiple images. I just recently started ComfyUI and have been using the A1111 web UI.
yep. you can do the second pass only for the img2img
@@PixelEasel Got it! I just learned about CTRL+M to activate/deactivate a node.
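(Ctrl+B to bypass is handy too; as far as I can tell, a muted node just doesn't run, while a bypassed node passes its inputs straight through to the next node.)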
Changing part of an area is different from removing part of an area, so the two actions require different workflows, correct?
yes. when you remove something, you need to fill the area with something else
Thanks. Simple and easy.
thanks!
When running your workflow I get this error: KSampler
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 256, 256] to have 4 channels, but got 16 channels instead
I had this error. The problem was that I was using an SDXL checkpoint with an incompatible VAE. I fixed it by switching the VAE to an SDXL one. (The 16 channels in the error are the giveaway: SD1.5/SDXL latents have 4 channels, while 16-channel latents come from a newer VAE family like SD3 or Flux.)
At 1:00, maybe change "VAE Encode" to "VAE Encode (for inpainting)" and ALSO attach the MASK output from the "Load Image" node to the "VAE Encode (for inpainting)" node.
You will now have 2 wires coming out of the Load Image "MASK" output.
Otherwise it may make no changes, or almost no changes, as in my situation.
The problem with this, though, is that if I set the denoise to less than 1.00 it sees the mask as gray, and I get a gray mouth or gray whatever.
you should use only one of them
@@PixelEasel one of what?
Thank you for the tutorial. May I know how we can erase a mask that we accidentally messed up?
right click
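in the mask editor, I mean; painting with the right mouse button erases, and if I remember right there's also a Clear button to wipe the whole mask and start over.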
Can you add a LoRA to the bottom workflow, or does it interfere with the inpainting?
Aaah, never mind, I see the copy-paste already makes the LoRA connections!
this workflow is good only for small changes. I'm working on another one for large areas
Well done!
thanks!
thanks 😊!
How are you running the queue without the other KSampler firing? Are you bypassing it but just not showing that step in the video? I see so many videos where it looks like they're just clicking the queue button and it ignores the entire top half of the workflow.
If you use a fixed seed and don't change anything else, it's the same output, so the KSampler does not fire. ComfyUI caches node outputs and skips any node whose inputs haven't changed.
What is the LoRA that you use? And... is the LCM sampler good for inpainting? Thanks.
it's the LCM LoRA, that's why I use the LCM sampler. and you're more than welcome 🙏
@@PixelEasel I see! Thank you :) Will find out more about that LoRA.
What if I just want to use a previously generated image? Just load it and inpaint directly; I don't want to generate a new image.
you can upload any image
This is a very easy-to-understand guide. Thank you. But to make it really easy and repeatable, you have to add the VAE and checkpoint models yourself, because the first run of your workflow shows a lot of errors. And for some reason the first run generated some plate out of nowhere, which pretty much confused me.
thanks. I will update the workflow
Hello,
I downloaded the VAE file and put it in the vae folder, but it gave me an error again.
Does it have to use the LoRA?
what browser is that?
is this possible with Flux dev? thanks
Thanks for the tutorial! I had no idea I could inpaint in ComfyUI! Does this work with high-res pics/Pony XL? I just wanted to adjust a character's face, but I keep getting something that looks low-res... kind of reminds me of what happens when you inpaint in a1111 without selecting "only masked". I dunno...
thanks for commenting! Maybe this workflow will suit you better ruclips.net/video/_LuPU5woth0/видео.html
@@PixelEasel Just went through the tutorial, and I'll admit it looks pretty daunting. I've only been on CUI for less than a week. That said, from the way you explained the problem there, it definitely sounds like just what I need. I'll try to implement it. I hope you don't mind if I ask follow-up questions (perhaps on the new tutorial?). Thanks for directing me there!
you're more than welcome!
Great video. @pixeleasel Is it possible for the Load Image node in the second workflow to automatically pick up the image generated/changed by the first, without copying and pasting via clipspace like in your video?
I want the OUTPUT of the FIRST flow to become the INPUT of the SECOND flow.
(excuse the caps, they're there to make it easier to read/understand) :)
simple. just connect the output directly to the vae encode
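for example, assuming the default node names: wire the IMAGE output of the first flow's VAE Decode into the pixels input of the second flow's VAE Encode. you'll still need a mask from somewhere, e.g. a Load Image node used only for its MASK output.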
Thank you for the quick answer. Always easy when you know how!! Keep up the great work
0:54 How did you load the output into the Load Image node so fast?
probably copy paste
how to get rid of this error?
Error occurred when executing VAEEncode:
Could not allocate tensor with 2147483648 bytes. There is not enough GPU video memory available!
I would try a different VAE... but it seems you don't have enough memory
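btw, 2147483648 bytes is exactly 2 GB, so the encoder is trying to allocate a single 2 GB tensor. if you run ComfyUI from the standard repo, launching it with the low-VRAM flag (python main.py --lowvram) sometimes helps on smaller cards.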
@@PixelEasel yes, this is a strange problem, because even though I have 8 GB of VRAM, I can see only 1024 MB in the ComfyUI console. And it's not just me; it seems to be a common VRAM problem.
Great video - is it possible to close a specific eye?
Or get the model to wink with a specific eye, for example, the left eye?
you'll need to use the mask to choose the eye you want to close, and if it doesn't work with this workflow you can try this one ruclips.net/video/_LuPU5woth0/видео.html
@PixelEasel I'll give it a go. Thanks for the suggestion. I can make one eye shut, but not wink.
Hello bro, I'm new at this.
I have the following errors. Can you help me please?
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'None' not in []
LoraLoader:
- Required input is missing: lora_name
How do I download the checkpoint you load?
you can find it on civitai
@@PixelEasel how do I install it in ComfyUI on Colab?
I got bad results, maybe try to update the guide
this workflow is good for small changes. I'm working on an inpainting workflow for bigger changes; I'll upload it in the near future
when changing face expressions it is better to use other methods since different expressions change the whole face, not just parts of it
I agree, can you share what the other methods are?
you can try this workflow ruclips.net/video/VwEcGIBwsyw/видео.html
This isn't working for me. :( I think I've set everything up exactly as you have, but I cannot find "pytorch_lora_weights_SD.safetensors", I can only find "pytorch_lora_weights.safetensors". Does that even matter? I'm not getting an error message, It's just not changing the image at all. :( I have the denoise set to a higher value, also - still nothing. :(
might as well use midjourney, too many errors and frustrations
thx
Really good video. Unfortunate that your AI voice is so bad.
thx. working on it... I think it's getting better in the last videos! I'll try to make it better and better
I am sorry, I have to stop watching your channel. Nothing works... not even this simple one... I think you are very skilled, but you are very bad at explaining things. Not to mention the AI voice. Even with an AI voice, you could still make it more comprehensible... but you chose to squeeze this into 5 minutes...
outdated content
you can try this method for inpainting ruclips.net/video/_LuPU5woth0/видео.html
horrible voice generator
sorry to hear! still working on it
Text to speech? Immediate downvote...
Plzz make a video about how to install ComfyUI with checkpoints & LoRAs
will make one soon!!!
getting "- Value not in list: lora_name: 'pytorch_lora_weights_SD.safetensors' not in []" error is there a link to aquire this currently useing picx model
it's the name of the LCM LoRA. If you didn't install the model yet, you can find it here:
huggingface.co/latent-consistency/lcm-lora-sdv1-5/tree/main
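assuming a default install, put the file in ComfyUI/models/loras. the download is named pytorch_lora_weights.safetensors, so if the workflow expects pytorch_lora_weights_SD.safetensors you can just rename the file to match and refresh the node.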