perfect tutorial without unnecessary blah blah THANKS!
TIP: After installing the nodes, you can download his workflow from the description and skip to 4:33.
I found reducing conditioning to 2.2 really helps keep the inpaint in line with the image aesthetic.
thanks
This works great for me! Looks to be my go-to inpainting method for now. And you are my go-to channel for ComfyUI workflows! Don't change a thing.
Great job. I usually don't like non-verbal tutorials, but yours worked very well; at 0.5x speed it was even easier to follow. Still, I'm looking forward to when you add the voice-over. I have subscribed. Thank you.
This was wildly helpful. Easy to setup and worked the first time.
That's fantastic. I've just started learning Comfy, and the speed at which you work and the relaxing background music really help my brain not to freak out. Great video! :)
Thanks, I will be adding voice over to the videos soon
@@CgTopTips Good, very good. One question: the same workflow but adding a LoRA?
You are the best!!! Your videos are very useful and informative. I'm a beginner and I figured it out right away, though I had to think a little about where to put which models. The photos are very realistic. THANKS!!!
Surprising. I'll test after work.
Wow, I learned a lot, and I even feel better from the music. Am I the only one that goes kinda crazy trying to figure this stuff out at times? Especially training LoRAs. The nice music is very, very welcome 👍 Liked, subbed and saved
I tried this in the beta version of Photoshop and got a different face as a result. This workflow is noticeably better.
Great stuff! I replicated it, but with the standard checkpoint loader and KSampler using the Flux checkpoint.
This workflow with the new smaller version of the model would be great
Thank you, that is simply great!
I followed everything shown in the video step by step and it all worked perfectly! Are there any resources you can recommend to learn more about what each node does and why it goes where it does? There is quite a lot of information out there, but it's a bit difficult to know which is the most appropriate as there is too much!!! Thanks for the work you are doing and for how detailed your videos are :D
This is the sort of tutorial that I like. A fucking 10-year-old can follow this tutorial. Excellent.
Anyone else getting an Out of Memory error on the Sampler Custom Advanced node when processing? I'm running an RTX 4090 and 128GB RAM. Both CUDA and PyTorch are updated, and I've tried the fp8 vs fp16 Flux models and the NF4 node. I'm hunting in the Git forums, but haven't found a solution.
same here
Very useful tutorial, thanks. Could we use this method or image-to-image for manipulating objects, like a desk or a ladder?
Please help, I see this issue: "Invalid background color: image"
A new entry called "background_color" has been added to the "Load & Resize" node. It has the value "image". Simply delete this value.
Awesome! Great job!
Thanks bro
Can I ask what the Differential Diffusion node is for? Also, by any chance do you know why nobody seems to be using the fp8_e5m2 dtype for the model?
DiffDiff is a form of inpainting that sets a denoise strength per pixel depending on the mask you pass it.
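For anyone curious, here's a rough conceptual sketch in plain PyTorch of that per-pixel idea (this is illustration only, not the ComfyUI node's actual code; the function name and tensor shapes are made up): at each sampling step, pixels whose mask value is below the current threshold get reverted to the original latent, so low mask values change little and high mask values get fully regenerated.

```python
import torch

# Rough conceptual sketch of Differential Diffusion's per-pixel strength
# (illustration only, NOT the ComfyUI node's implementation).
def blend_step(denoised, original, mask, step, total_steps):
    # The threshold rises as sampling progresses; pixels whose mask value
    # is still below it are reverted to the original latent.
    threshold = (step + 1) / total_steps
    keep_new = (mask >= threshold).float()
    return keep_new * denoised + (1.0 - keep_new) * original

# Toy usage with random 16-channel "latents" and a soft mask
denoised = torch.randn(1, 16, 64, 64)
original = torch.randn(1, 16, 64, 64)
soft_mask = torch.rand(1, 1, 64, 64)
print(blend_step(denoised, original, soft_mask, step=5, total_steps=20).shape)
```

(In the real method the original latent is also re-noised to the current step before blending, which is what makes soft mask edges transition smoothly into the untouched area.)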
It seems like a deleted mask isn't removed from the inpaint properly. After the clothes swap, you swap her hair with the mask only on the hair, but the clothes swap too. Same with the face swap: her hair changes, but this time the clothes are okay. Bug? Great tutorial btw.
I experience the same problem. No matter what I do, the ganguro makeup reappears in all later images 😂 Do you have a fix to clear the deleted masks for good? I think the mask becomes part of InpaintModelConditioning and stays there, but I don't know how to repair this. Thank you for the great tutorial!
It constantly throws an error: "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 16, 135, 102] to have 4 channels, but got 16 channels instead." What does it mean?
Which workflow on your page is it? It's confusing because they're named differently than in the YT video. Good stuff, thanks!
Great content. Please share the haircut sheet as well. Thanks
Ok, now I know what ganguro makeup is :)
Whoever taught Flux what ganguro makeup looks like needs to make a trip to Japan.
LoadAndResizeImage, ImpactGaussianBlurMask and Anything Everywhere are missing. How/where do I download these?
'diffusion_pytorch_model.safetensors' does not support control_type 'inpaint/outpaint'. ........ What does this mean? I've tried the file with the original name and with your recommended name change.
I was so pumped when this one came out, but so sad it didn't work for me because this error appeared: Error occurred when executing LoadAndResizeImage: Invalid background color: image. I changed the background color setting to the masking color, but it gave a bad result. I hope this will work on Flux models.
A new entry called "background_color" has been added to the "Load & Resize" node. It has the value "image". Simply delete this value.
Incredible!
Would it be nice/possible to adjust this to a generic process, where you have a face, an expression, a hairdo, a body, an outfit, a pose, a background and combine all of them in a final image?
Would such a flow be very complicated, and would it keep all elements consistent?
I don't have the (Load and Resize Image) node installed. What do I have to install for it to appear? My native language is Spanish, so excuse my way of writing.
How does this work without a ControlNet model like OpenPose to recognize which way to put the new clothes on?
Thank you very much for the video. In my opinion, a simple inpaint (like fixing a small area in an image) should not require so many nodes. Aren't there any combined nodes, at least to reduce the complexity of the generation process?
How can I use the newest Schnell fp8 model with this? Rename the file to .sft and copy it to the unet folder?
Can we use IPAdapter for the face with Flux?
I don’t think so but there is a controlnet model available
Unfortunately, Flux doesn't support ControlNet and IPAdapter yet
@@CgTopTips Check out the Canny ControlNet model by XLabs AI
@@CgTopTips okay I understand
@@colinfransch6054 There is a ControlNet now. However, many people have been having errors, and it's only one type of control: Canny. It's alpha, for the Dev model only. I'm sure more will be coming soon. AI comes at you fast. I don't think I can link it here, but a search should find it. I couldn't get it to work personally.
The same workflow but adding a LoRA?
May I know what the difference is between InpaintModelConditioning and Set Latent Mask? I tried both and can't see an obvious difference.
The difference lies in the types of inputs and outputs for each node, and in this workflow, we used InpaintModelConditioning to achieve the desired results.
I can't see the workflow anywhere in the description... can someone point me out?
I loved the tutorial. Nice pace and flow of building the workflow. I have managed to copy the exact workflow, but somehow it's not working. I get the same image duplicated on the output, with minor artefacts showing that it did see the mask, but it didn't actually change the clothes or anything. Does anyone have any ideas?
It works fine, but is there a better way to generate a single person in the photo without the AI cutting them off by making them too big?
Try photoshop :)
How do I show this manager for custom nodes? Mine doesn't have it.
Great tutorial. The only thing that I miss is being able to make a batch instead of a single image. I tried putting in an empty latent node, but it did not work.
Thanks!
I tried to reproduce your great results at 5:48, and all the hair comes out "small", meaning it's just around the skull. Even when I prompt "big hair, 1980s hair," or some of the hairstyles you have, it all just crops to the head. Is there some trick to getting it big? I kept making my mask bigger and bigger, and no love..... thanks!
awesome
Thanks, I will soon make a video about combining IPAdapter and Flux
@@CgTopTips Yes, I am already trying with InstantID and Flux Dev, so far unsuccessfully. I will wait for your video. Thanks
I can't find the custom nodes in the manager
LoadAndResizeImage
Invalid background color: image
You rock!
Excellent hard work, but it's only useful for users with a technical background
How do I use GGUF models?
thanks bud
Is the 23GB justified? Or is the old technique of using SDXL good enough... has anyone compared?
24GB VRAM is good; more VRAM, more speed
@@CgTopTips So wait, why not go for an AMD APU with XDNA? You'd use Ryzen AI and ONNX from the Hugging Face x AMD collab to convert CUDA models to AMD models. On a Windows machine it allocates system RAM as VRAM dynamically. The biggest bottleneck for the AI economy is crushed 😍
@@ickorling7328 it's slow though
Missing Nodes:
-Anything Everywhere
-ImpactGaussianBlurMask
-LoadAndResizeImage
And I can't find them in the Manager.
0:15 You need to install those 3 modules in the manager.
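If the Manager doesn't list them, you can usually clone the node packs straight into ComfyUI/custom_nodes and restart. Here's a rough sketch of that; the repo URLs are my guess at where these three nodes come from (KJNodes, Impact Pack, cg-use-everywhere), so double-check them against the video before cloning:

```python
import pathlib
import subprocess

# Hedged sketch: manual install of the three missing node packs by cloning
# them into the custom_nodes folder. The repo URLs are assumptions -- verify
# they match the packs used in the video before running.
CUSTOM_NODES = pathlib.Path("ComfyUI/custom_nodes")
REPOS = [
    "https://github.com/kijai/ComfyUI-KJNodes",           # LoadAndResizeImage (assumed source)
    "https://github.com/ltdrdata/ComfyUI-Impact-Pack",    # ImpactGaussianBlurMask (assumed source)
    "https://github.com/chrisgoringe/cg-use-everywhere",  # Anything Everywhere (assumed source)
]

for url in REPOS:
    subprocess.run(["git", "clone", url], cwd=CUSTOM_NODES, check=True)
print("Done - restart ComfyUI so the new nodes are picked up.")
```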
I love Flux but this looks far too complicated for the average person.
It's not for the average person. The average person would use ChatGPT and pay $20 a month for DALL-E or however much Adobe Firefly costs.
The non-average people are very happy about it.
There are different tools; you can also pay for a subscription and get everything already configured
Happy to hear it
You have definitely not seen a complicated ComfyUI workflow yet, then
Is it possible to upload a mask? I need to change the background behind people.
Yes, but you need to make some changes in the workflow. Since you only want to change the background of an image, the simplest way is to select it manually
All good and fun, but the VRAM reqs. need to go down.
If I wanted to add a LoRA, where would I add it?
I can't enter this black screen to do the workflow
Make sure all the node connections are correct. You can send a screenshot of your workflow to me via email
First run using the model on a 4090ti, 16GB VRAM: 10 minutes to load and process..... Second run, once the model was loaded: 59 seconds.... Still testing...... weird results so far... The only difference I can see compared to other inpainting methods such as BrushNet and PowerPaint is that you get text with this model....
The first time is always slow because it's loading the model, and depending on your HDD/SSD/NVMe, that will take time. In my case, if I use the NVMe it takes just a few minutes; if I use my HDD it takes a looooong time... I have a 4090 as well.
I keep getting a black image at the end :(
Make sure you've selected the appropriate models for each node according to the video
The most important part missing is styles
I will soon make a video about combining IPAdapter and Flux
I will wait for it. I want to try copying a face onto a body with inpainting and IPAdapter. Is it possible?
Does ComfyUI charge you by the hour?
Can anyone share the workflow JSON file?
Please find it in the description (OpenArt link)
This is probably a good tutorial and all, but let's take a moment to realize how unnecessarily complicated ComfyUI is in comparison to Automatic1111. I mean, all the stuff you have to set up beforehand..
Awesome 👍
We've got no inpaint model yet?!
Good job. But it shows why I hate ComfyUI. You need to understand all those connections perfectly and exactly what each node does. Yes, it is more powerful than Automatic1111, but certainly not more user-friendly.
I suggest that you follow the workflows for a few months to gradually become familiar with the applications of the nodes
@@CgTopTips Thanks. I have been using it for a while now and I know the nodes and what to connect, but it can be difficult for someone who is just learning. Good video for those learning. I just started testing Flux. Your example worked perfectly.
Need to use automatic mask nodes
They come out looking quite rubbery... I still prefer Stable Diffusion at a professional level
why not just publish the workflow so we can download it?
I have uploaded all the workflows to the Openart website; the link is in the description
Please share your workflow!
Please check the description section
@@CgTopTips Oh, yes, thanks!
Too difficult, give me a one-button solution.
Heard the music, the music stressed me.. goodbye.
Rocket science
We all know what people are gonna do
what ever do you mean? like make photos of cats? Yes everyone will make cat photos!
a tutorial without a voice over is a worthless tutorial
nah, it's fine.
It would be nice to first list the hardware requirements, how much VRAM you need, etc.
And links to the exact models so we don't end up with a dead workflow.
Mine didn't work; I got:
'🔥 - 20 Nodes not included in prompt but is activated'
model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
model_type FLOW
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded partially 3668.486622619629 3667.77197265625 0
Unloading models for lowram load.
0 models unloaded.
Requested to load TAESD
Loading 1 new model
loaded completely 0.0 9.32717514038086 True
Unloading models for lowram load.
1 models unloaded.
Loading 1 new model
loaded completely 0.0 9.32717514038086 True
Requested to load Flux
Loading 1 new model
loaded partially 3508.182622619629 3505.5234985351562 0
0%| | 0/20 [00:00