By the way, there is Flux Tools support for SwarmUI and SDNext. Fingers crossed that Forge adds it soon once its updates are done!
This is what JackDainzh answered (I don't know who he is):
It's not a ControlNet model; it's an entirely separate model that is designed to generate, not to guide. The issue is that, with the current img2img implementation for Flux, there is no way to guide the model with, say, Instruct pix2pix's Image CFG Scale slider, because it doesn't affect anything at the moment (I forced it visible in the UI when using Flux). Because Flux's conditioning is different from that of regular SD models, it gets skipped.
Implementing guidance from the img2img tab would mean rewriting the whole backend engine, and I have no idea how long that would take: maybe months, or maybe one day.
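To illustrate his point (a rough conceptual sketch only, not Forge's actual backend code; `model` and `cond` are placeholders): with regular SD and Instruct pix2pix, the CFG sliders blend several model predictions at sampling time, whereas Flux-dev folds a single guidance value into the conditioning itself, so an Image CFG slider has nothing to act on.

```python
# Conceptual sketch only: `model` and `cond` are placeholders, not real
# Forge/ComfyUI objects. It just shows why an "Image CFG Scale" slider
# has no effect with Flux.
import torch

def sd_image_cfg(model, x, cond, img_cond, uncond, cfg_scale, image_cfg_scale):
    # Instruct pix2pix / classic SD: three predictions are blended at
    # sampling time, so both sliders visibly change the result.
    e_uncond = model(x, uncond)
    e_img = model(x, img_cond)
    e_full = model(x, cond)
    return (e_uncond
            + image_cfg_scale * (e_img - e_uncond)
            + cfg_scale * (e_full - e_img))

def flux_distilled_guidance(model, x, cond, guidance):
    # Flux-dev: the guidance value is embedded into the conditioning and
    # the model is called once, so there is no second prediction to
    # blend against and a CFG-style slider gets skipped.
    cond = dict(cond, guidance=torch.tensor([guidance]))
    return model(x, cond)
```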
Thank you, I just tried outpainting and inpainting. Truly amazing quality!
Indeed! I haven't seen any major issues yet, but I'm still testing. Impressive so far, especially with outpainting. 👍
These are so cool! Can't wait to dive into these! Thanks for sharing the info! 🙌🙌
@@SouthbayJay_com appreciate it bro! Have fun! 🙌🏼
I finally got to do the inpainting I needed from Flux. On 12GB VRAM with the full 'fill' model it was a lot quicker than I expected. That's the only one I've tried so far, but with how well it worked I'm looking forward to the others.
@@Elwaves2925 Good to know it can run on 12GB VRAM. Have you tried the FP8, and if so, is it any faster?
@@MonzonMedia Haven't had a chance to try the FP8 version as I didn't know it existed until your video. I will be trying it later.
Can't wait. I found Flux less effective for design than SDXL.
Been waiting for such things .. thanks for sharing ❤❤
You’re welcome 😊
Thanks. Do the Redux next, then Depth, then Canny
Welcome! Redux is pretty cool! Will likely do it next, then combine the 2 controlnets in another video.
Great, thanks. By the way, you can hide the noodles by pressing the eye icon at the bottom right of your screen.
Indeed! I do like to use the straight ones, though I switch to spline when I need to see where everything is connected. 😊 👍
As always, your videos are a welcome view. Favor to ask: since things come to Comfy so fast, could you add a sound bite when things aren't quite ready for Forge?
Yeah, I normally do but forgot this time, although I did post a pinned comment noting that there is support for other platforms like SDNext and SwarmUI.
We are waiting for lllyasviel to add it to Forge.
🙏😊 I do see some action on the github page and no other "delays" posted. Fingers crossed my friend!
❤ we are all waiting ❤
Ya did it again, great job as usual 😊😊
The only trouble is I've been using Fooocus for in/outpainting and now you've made me want to try it in ComfyUI, grrrrr lol.
I gave up on Comfy because an update broke the LLM generator I was using in it. Come to mention it, I can't even remember the name of the generator now... damn 🤣🤣🤣🤣
😊 It does take some time to get used to; I have a love/hate relationship with ComfyUI hehehe. But it is worth knowing, especially since it gets all the latest features quickly. At the very least, just learn how to use drag-and-drop workflows and install any missing nodes. That's pretty much all most people need to know.
Redux will be fun for MJ to deal with.
Hey my friend! Nice to see you here! I haven't used MJ in a while but there is a lot you can do locally compared to MJ's features, plus way more models to choose from. Hope all is well with you. 👍
@@MonzonMedia My MJ sub will end this year and I won't go back. Vizcom became so powerful, and Rubbrband is also shaping up really well.
Thanks for the vid! Do you know if the flux.1-fill-dev (23GB) version is an extended version of the original flux.1-dev, or a whole new thing, so you have to install both?
Welcome! Typically in/outpainting models are just trained differently but are based on the original model.
@@MonzonMedia Got it! thanks!
This will be a good series of videos
Indeed! Already working on the next one. Good to hear from ya bud!
❤❤❤❤
Thank you! 😊
When painting the mask, what effect do the different colors (black, white, negative) and opacity have on the outcome? Does the resulting inpaint change at all depending on which color/opacity you choose?
It's just visual preference; it has no effect on the outcome.
Great to follow. Updated Comfy but it's returning this:
RuntimeError: Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
By the way, depth and canny would be the best to see.
Context? What were you doing? What are your system specs?
I managed to pull a ComfyUI update and now it works. Still seeing outlines on the outpainting. Thanks for the reply. I'm on Windows with an RTX 4090.
Cool! Yeah, always update when new features come out. If you're seeing seams when outpainting, try increasing the feathering or doing 1-2 sides at a time. Results can vary.
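Roughly what the feathering is doing at the seam (just a conceptual Pillow sketch, not the actual node code, and it assumes the original has already been padded to the outpainted canvas size):

```python
# Hedged sketch of seam feathering: blend the outpainted result back
# over the (padded) original using a blurred mask so the transition is
# gradual instead of a hard edge.
from PIL import Image, ImageFilter

def blend_with_feather(original: Image.Image, outpainted: Image.Image,
                       mask: Image.Image, feather_px: int = 40) -> Image.Image:
    # mask: white where new content was generated, black where the
    # original pixels should be kept. All three images share one size.
    soft_mask = mask.convert("L").filter(ImageFilter.GaussianBlur(feather_px))
    # A larger feather_px means a wider, softer transition at the seam.
    return Image.composite(outpainted, original, soft_mask)
```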
Yes, maybe side by side would work better. Another point: I'm trying to manage background switching for a car, but the results are still awful with Flux.
@@LucaSerafiniLukeZerfini The crop and stitch inpaint node might be better for that, but the Redux model can also do it. I'll be posting a video on Redux soon.
Me again lol 😊
Have you tried the "DMD2" SDXL models yet? Not that many about, but wow are they impressive. Prompt adherence is about the same as Flux schnell, but the image quality is really good. They say 4-8 steps, but a 12-step DMD2 image gives better results imo.
Then again, I am getting old now and my eyes aren't as good as they used to be... that's my excuse 🤣🤣
Not yet but I remember reading about it on Reddit. Thanks for the reminder!
And what I don't like about this new fill model is that it seems to work on the actual pixels without enlarging the painted area the way we did in Automatic1111. As a result we get low detail and crappy quality if the masked object wasn't that big.
That has more to do with the platform you are using. For example, Fooocus and Invoke AI have methods where, when inpainting is used, the inpainted area is generated at the model's native resolution. I can't recall if there is a ComfyUI node that does that, but I'm pretty sure there is. Might make a good video topic. 👍
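For reference, the "crop and stitch" idea looks roughly like this (a sketch only; `inpaint_fn` stands in for whatever Flux fill call you use, and the square resize is kept simple on purpose):

```python
# Sketch of inpainting a small masked region at the model's native
# resolution, then stitching it back: crop, upscale, inpaint, downscale,
# paste. `inpaint_fn` is a placeholder for the actual model call.
from PIL import Image

def inpaint_at_native_res(image: Image.Image, mask: Image.Image,
                          inpaint_fn, native: int = 1024, context_px: int = 64):
    mask = mask.convert("L")
    # Bounding box of the masked area, padded with some surrounding context.
    left, top, right, bottom = mask.getbbox()
    box = (max(left - context_px, 0), max(top - context_px, 0),
           min(right + context_px, image.width), min(bottom + context_px, image.height))
    crop, crop_mask = image.crop(box), mask.crop(box)

    # Inpaint the crop at the model's preferred resolution (square here
    # for simplicity; a real node would keep the aspect ratio).
    up = crop.resize((native, native), Image.LANCZOS)
    up_mask = crop_mask.resize((native, native), Image.NEAREST)
    result = inpaint_fn(up, up_mask)

    # Scale the result back down and stitch it into the original image.
    image.paste(result.resize(crop.size, Image.LANCZOS), box[:2])
    return image
```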
I'll go play Project Zomboid, I recommend it
Ooohhh, will check it out! I finally played Final Fantasy VII Remake! 😬😊 Loved it!
Thanks for the guide, is it possible to run Redux with low VRAM?
How low? The Redux model itself is very small, only 129MB, so if you have a low-VRAM GPU just use the GGUF Flux models and you should be good to go! Runs great on my 3060 Ti (8GB VRAM) with the Q8 GGUF model.
@@MonzonMedia Thank you, do we need another workflow for GGUF models?
@@MonzonMedia What's the size of the Q8?
Some people keep several functions or "workflows" on one desktop and turn them on and off as needed. Others keep separate workflows on completely different desktops or use them one by one. Is there a convenient function in ComfyUI that allows you to switch between different workflows, as if you had several desktops open and could choose the one that suits what you are doing?
The new ComfyUI has a workflow panel on the left that lets you select your saved or recently used workflows. Alternatively, there is a fairly new tool I've been trying out called Flow that has several pre-designed workflows. The downside is that you can't save custom workflows yet, but I hear that option is coming soon. I'll be doing a video on it soon. Other than that, yeah, it really is a personal thing as to what works best for you.
Is there a way to include LoRA models for inpainting?
Not sure what you mean. Do you want to use a LoRA to inpaint? It doesn't work that way.
Could you please do a video on crop and stitch with Flux tool inpainting?
Yes of course! Will be doing it on my next inpainting video 👍
@MonzonMedia Thank you so much. Your tutorials are really good and easy to follow.
Huh. What is the difference between the XLabs and Shakker-Labs canny/depth ControlNets? Why is this one special? We already have two of them. Someone please explain.
From what I recall they have two: the Union Pro ControlNet (6GB), which is an all-in-one with multiple ControlNets (pretty decent but it still needs more training), and a separate depth model that is 3GB, whereas this one is only 1.2GB. I've yet to do side-by-side comparisons though. It was the same with SDXL: we'll keep getting ControlNets from the community until one is trained better. Keep in mind that ControlNet for Flux is still very new.
How do I get Flux to inpaint text? I have tried everything; all I want is to take an image and have Flux add the text it generates over it.
The same way you would prompt for it: just state in your prompt something like "text saying _________" and inpaint the area where you want it to show up.
@@MonzonMedia Tried doing that for a few days and it just never worked. I could say a lake, or an army, or whatever, and that it would do, but never the text. Stumped.
I tried both the big model and the FP8. Nothing but really BAD results. I don't know why. I'm using 8GB of VRAM.
All I get is random noise around the outpainted areas, and the original image is changed mostly to noise.
Also, should it take 6 to 10 minutes for one image?
I'm going to do a follow-up video on inpainting. What is shown here is very basic and sometimes doesn't give the best results. There are a couple of other nodes that will help you get better results. Stay tuned!
@@MonzonMedia Thanks for your reply. I'll be watching. Keep up the good work as usual.
Man, I started watching you at Easy Diffusion.
Whoa! That's awesome! 😁 I appreciate the support since then and now.
When I try the outpainting workflow, the pictures come out all pixelated, especially the added part. What am I doing wrong? I'm using the same parameters, and denoise is already at 1.
Thank you for your videos by the way, you should have way more subs!
@@Xenon0000000 appreciate the support and kind words. Are you using a high flux guidance? 20-30 works for me.
@@MonzonMedia I left it at 30, I'll try changing that parameter too, thank you.
I have a much better workflow that I'll be sharing with you all soon that gives better results. Hope to post it some time tomorrow (Wed).
With inpainting, it disturbs the composition of the image and the results are not that good. The same goes for outpainting: the final images are distorted at the edges and lose details. I'm using the FP8 model.
I've had a good experience so far with both inpainting and outpainting. Make sure you are increasing the Flux guidance. There are other methods of inpainting that should help preserve the original composition, which I will cover soon.
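For anyone running the fill model outside ComfyUI, here's roughly what that higher guidance looks like with the diffusers FluxFillPipeline (a sketch with placeholder file names and prompt; parameters are from memory, so double-check them against the diffusers docs):

```python
# Hedged sketch: the "Flux guidance" from the video maps to
# guidance_scale here, and the fill model wants it much higher than
# typical txt2img values. File names and prompt are placeholders.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")   # original picture
mask = load_image("mask.png")     # white = area to inpaint

result = pipe(
    prompt="a wooden park bench",
    image=image,
    mask_image=mask,
    guidance_scale=30,            # roughly the 20-30 range mentioned above
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```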
SwarmUI tutorial please
Working on it! 😊
You have 8GB VRAM? I have 8GB VRAM and it crashes on all non-GGUF models. How are you able to load an 11GB model on 8GB of VRAM? 😮
Yup, it runs in low-VRAM mode and offloads the rest to system RAM. Should be the same for you. It's not super fast, but Flux models take roughly a minute depending on the model I'm using and the size of the image. How much system RAM do you have?
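If you're scripting Flux with diffusers instead of ComfyUI, the rough equivalent of low-VRAM mode is CPU offloading (a sketch, assuming a recent diffusers release; prompt and settings are just examples):

```python
# Hedged sketch: fit a Flux model on a small GPU by streaming weights
# between system RAM and VRAM. Slower than keeping everything on the
# GPU, but it avoids out-of-memory crashes.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # offload layers to system RAM as needed

image = pipe(
    "a photo of a red fox in the snow",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("fox.png")
```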
Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]). What can I do?
Same here!
Did you do an update?
I had the same problem. My DualCLIPLoader type was set to "sdxl" instead of "flux"... maybe that helps haha