Found the solution for "After loading the ControlNet Preprocessor node, in the preprocessor selection item there are only two things to choose from: 'tile' and 'tile'": you need to install "ComfyUI's ControlNet Auxiliary Preprocessors" from the Manager. After that, it lists all the ControlNet preprocessors.
That worked. Thanks!
thank you for adding that. I pinned your comment
@@OlivioSarikas thanks to you for your hard work my friend!
I've installed ComfyUI's ControlNet Auxiliary Preprocessors but can't find the node "ControlNet Preprocessor".
Thanks a lot!
Short and sweet, thanks a lot. Looking forward to see more workflows with this ControlNet from you.
Thank you for the tutorial, btw it works with Forge UI as well!
thank you for letting me know :)
Where did you put the file ? :o
@Olivio Sarikas, please do another follow-on, with that typical, useful, slow-paced, detailed, informative, valuable explanation style of yours!
I'm most impressed by this ControlNet model being 2.51GB for SDXL, when the openpose SDXL model I downloaded two days ago was over 5GB by itself.
But even while using the union model, ComfyUI still downloads the individual ControlNet models, like Zoe Depth (1.4GB), and saves them into \\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\lllyasviel\Annotators\
@@Teardropbrut Isn't that just a preprocessor?
Thank you for this information, it helps me streamline and organize my workflows. In large, complex workflows, rgthree's "Normal map Controlnet" node is interesting. For example, it can be used to remove the ControlNet after the sampler to speed up the process when scaling up.
I like all of your videos, I will always be your fan, thanks for everything.
China's No.1 author Xinsir, yet another Chinese giant bringing light to the AI community.
Looks promising, but most of it doesn't work. Still a lot of bugs/problems to work out.
Have you tried the Depth Anything v2 preprocessor? If not, you should check it out.
The depth map is way cleaner
Nice, I'll use this one. I just checked with an existing Depth Anything node (not sure if it's v2) and it was way cleaner than the one in the vid.
It is great.
@@Bicyclesidewalk I actually ended up going down the TensorRT rabbit hole and just set it up with v2. It's fast, but I think it also introduces weird noise at crazy settings like 2 or 3 ControlNet strength, while I didn't notice extreme corruption with even v1.
Yeah, just confirmed: v2 without TensorRT is really smooth and nice, even with Base and fp16. The speed increase from the TensorRT version is negligible, probably only useful for small-res vids.
Does it work with Forge? If so, how?
If one day the other interfaces stop being supported, I will cry. Comfy looks absolutely terrifying, like a giant equation full of logarithms. Sorry for my English.
It's not as complex as it looks nowadays. With the ComfyUI Manager, all you have to do is load an image into the browser window and it will automatically load the workflow, and you can install the missing extensions automatically. You only need to restart Comfy, and then you can hit the ground running just by inserting new prompts into the workflow.
@@tripleheadedmonkey6613 That's not exactly true, though. Sometimes that works well, but in my own experience (and it's why I don't like Comfy) the Manager doesn't always install the correct (or up-to-date) nodes, and even if it does, there can still be errors throughout the workflow. That may be down to the workflow being outdated or whatever, but it gives you almost no information on how to fix it. I've spent hours scouring the internet trying to fix some errors/missing nodes.
@@Elwaves2925 90% of the time the issue is that ComfyUI itself needs an update in order to work with the nodes, or the workflow is no longer supported due to updates.
The amount of times that a node or extension actually requires follow up beyond that is negligible and for most users can be ignored fully.
@@tripleheadedmonkey6613 Not in my experience. It's been closer to 90% needing follow up, either due to nodes, errors or the workflow and that's after updating everything. 🙂
@@Elwaves2925 That's really unlucky. I've been using it for a year and had barely any problems. Other people have had problems, but most of the time it's resolved in a few minutes by reinstalling.
There are some exceptions, but usually this is because of manually installing things system wide which conflict with the python embed folder.
For me this might be the best Comfy news I could ask for. Many many many many thanks!!!
It would be very helpful to have an introduction explaining the point of using it and the difference between the old CNet models and this one. Saving disk space? Better quality?
I don't see it being better quality. From what I understand, it seems to be more about saving space and streamlining the process so you don't have to select the correct (and different) model each time. I might be wrong, but that's how I understand it.
@@Elwaves2925 It would be interesting to test whether one could mix different ControlNet types this way. Could you, for example, alpha a pose over the top of a depth map and have the ControlNet still work for the most part? This might work better than layering multiple ControlNets sequentially. I wish I had the time to test it, but ComfyUI has a habit of just... disappearing the rest of your day once you start fiddling around, so I'll refrain for now :D
@@NevelWong No idea and I barely use Comfy at all, except for a couple of things but it would be interesting to see if someone else tries it.
Great video! Always great to save more hard drive space I say!
Amazing tutorial, Olivio, thank you!! Can I ask why you are using the Juggernaut v9 model and not another one? What is the difference there?
i love your videos maan ♥♥
Thanks!!!!
Very interesting, thanks.
On the huggingface website there is now a promax model as well. I don't know what's the difference with that model, but can i just try it like the original model? Does it work the same way?
Thanks!
Hey Olivio! Did you see the new licence for SD3??
Thank you, this is amazing! It can certainly simplify a lot of workflows.
Oli is AI mad
Does it work for both SD1.5 and SDXL checkpoints?
This is awesome! Thanks Olivio, you're the best 🙌🏽🙌🏽
What do you mean 'comfy only'? It works just fine with the preprocessors I tested in Forge Webui.
He doesn't say "Comfy only", he says "as always, it's working with ComfyUI." I like Olivio, but he's too biased towards Comfy to even mention whether it works with other UIs. Good to know it works with Forge though, as that's what I use, so cheers for that info.
I get colorful pics when I use it in Forge with Pony models. Any solution?
@@twelvyvisionz7637 There's a few possibilities but without knowing your exact settings there's no way to be certain.
1) Check your VAE, is it the right one, do you even need one?
2) Try different samplers/schedulers. Some work, some don't. Euler A usually works and DPM++ 2M SDE Karras works for me.
3) Do you have the score numbers in place? If not, find a Pony-generated image with a prompt and copy them across to your prompt. You can save that as a style for next time.
4) Check your CFG scale and steps. Use ones you find in prompts for Pony images as a guide.
5) If you use a refiner, try it without.
Hope that helps. 🙂
Awesome news/showcase video!
Thanks for keeping it tight and informative.
Are the results of this any good? The current ControlNet models for SDXL are mediocre at best.
Awesome
finally you are back with valuable vids!
Wow!
Finally a new video 😊
Wow! 😮👍
Why do I get this error with the same connections as you?
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
Amazing.
I tried it in A1111. When processing img2img, this universal model gave a better and more accurate picture than lineart and canny combined!
Not bad for a start.
PS: However, the generation time has increased. Most likely because my 10GB of video memory isn't enough.
Amazing
Does it work nicely with SDXL 1.5?
Thanks for the video, I like how easy you make the videos. Could you please share your ComfyUI workflow?
I was gonna ask the same. Can we get the workflow, please?
Thanx 4 the good news, and this makes the ControlNet handling very easy. TOP TOP TOP
That's very clear, thank you
this is really cool
The Promax version adds Inpaint and Tile to the extensive list of controls. I've tested it on ComfyUI and got a few errors, so I'm going to use the Union ControlNet while waiting to see if the Promax version evolves for ComfyUI.
Is this working with openpose correctly for everyone else? I am getting the stick person with blurry background as the final generated image.
Exactly, I'm getting the same for openpose, it just doesn't work. I've spent hours on it... Tried it in all three UIs (Comfy, Forge and Auto) and all kinds of models and settings. None of the ControlNets in Union work for inpainting either. Either I'm doing something wrong, or there are still a lot of bugs to be ironed out.
use 512x512, that works fine
For those of you who have tried it, how does it compare to SD1.5's ControlNet 1.1? Particularly in lineart.
Does it work with normals??? Because we've never had an SDXL normal CN before.
When I do OpenPose, I get this error with the preprocessor any idea why?
AV_ControlNetPreprocessor
list index out of range
Olivio, you mean this model fits all preprocessors? Oddly enough, I often use the depth map, normalbae or tile model with the openpose preprocessor. This yields some very interesting results.
But that's for animation.
Thanks Olivio, let's see what model it is.
After loading the ControlNet Preprocessor node, in the preprocessor selection item there are only two things to choose from: "tile" and "tile". :(
same
This for pony models would be great.
Do you have a 101 video that walks through ComfyUI? Also, what does a ControlNet do?
He has some videos on both, but if you can't find what you need there, there are plenty of others on YT.
You do NOT need the preprocessor; you can just plug your source image in directly to get going fast.
Of course, for more control you may want to use the preprocessor node.
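To make the distinction concrete, here's a rough sketch of what a preprocessor actually produces, using Pillow's built-in edge filter as a simplified stand-in (real canny/lineart preprocessors are more sophisticated):

```python
from PIL import Image, ImageFilter, ImageOps

def simple_edge_control(src: Image.Image) -> Image.Image:
    """Rough stand-in for what a lineart/canny preprocessor does.

    A preprocessor just turns an ordinary image into a control image
    (here: a grayscale edge map). If your source image already IS a
    control image (a pose skeleton, a depth map, an edge sketch),
    you can plug it into the ControlNet directly and skip this step.
    """
    return ImageOps.grayscale(src).filter(ImageFilter.FIND_EDGES)
```

So "plugging the source image in directly" only makes sense when the image already looks like the control signal you want; otherwise the preprocessor builds that signal for you.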
Sir, why does everyone like moving to ComfyUI instead of WebUI?
Some people like it because its customisation allows for a lot of power, but despite what they say, it also brings complexity and a whole bunch of error solving with it. An imported workflow doesn't always work out of the box, even with the Manager installed. If you don't need that power, the other UIs are still fantastic.
Dear Olivio, how did you make that decent animation at the end?
Picture me begging for an A1111 example! Please?
Seconded
According to someone on Reddit, it just works directly with A1111 already. You just put the model in the ControlNet models folder, and then you can choose it as the model in the ControlNet extension in the UI and use any preprocessor with it. I haven't tried it yet, but that's apparently the case based on a Reddit post 🤷♂
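For anyone who wants to script that step, a minimal sketch (the folder layout and file names are assumptions based on a standard A1111 install; adjust the paths to yours):

```python
import shutil
from pathlib import Path

def install_controlnet_model(downloaded: Path, webui_root: Path) -> Path:
    """Copy a downloaded union model into A1111's ControlNet models folder.

    Assumes the standard layout: <webui_root>/models/ControlNet.
    The target file name is just a readable choice, not required.
    """
    dest_dir = webui_root / "models" / "ControlNet"
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Rename on copy so the model is easy to spot in the UI dropdown.
    dest = dest_dir / "controlnet-union-sdxl-1.0.safetensors"
    shutil.copy2(downloaded, dest)
    return dest
```

After restarting (or refreshing the model list in the ControlNet extension), the file should show up in the model dropdown.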
Switch to ComfyUI already, it's superior in everything.
@@nonsohel Except in that it's node/graph based.
@@nonsohel Hey, at 71 I'm pretty happy that I'm able to run A1111, Forge, Fooocus, etc. ComfyUI is just hard for me to figure out.
How is the animation at the end made? Mixamo character + motion?
is it working with Automatic1111?
Will it do the nifty QR-Code controlnet?
Me too: "After loading the ControlNet Preprocessor node, in the preprocessor selection item there are only two things to choose from: 'tile' and 'tile'." Have I missed something? Thanks, Olivio!
Hey! There's a new file now, diffusion_pytorch_model_promax. Is it any better/different than diffusion_pytorch_model?
You could check for the news on their original Hugging Face info page!
The model with the suffix "promax" in its name was designed for inpainting work.
So I guess we won't see a ControlNet for Pony anytime soon.
There are some that work, though, although a bit janky. The problem is Pony, IIRC, but the author is improving the dataset for the next version. But idk, there's a lot of hate towards him on that side of the spectrum. We should really start supporting astralyte, otherwise corps might just take over and the last thing we'll have for anime is Pony 6.
Actually, we already do. There's a model called 'anytest'. It can be used like various ControlNet models, just like this one, and you don't need to choose a preprocessor. It works with Pony, Animagine, and SDXL, and can reproduce even very complex poses without getting confused. The articles are only available in Japanese, though.
@@AhKaj-jw8of Do you mean AnyText? I don't see an AnyTest, and AnyText seems to only do text.
@@chadwick3593 No, the name 'anytest' is correct. The basic usage is extremely simple and works with 1111 and Comfy, but if you want to go a little more advanced, you'll need to translate the GitHub page EasySdxlWebUI and other articles into English. I can't paste the URL because YouTube deletes comments with links.
Pony does things I can't replicate with any other checkpoint.
Does it work with A1111?
X2
@@miguelarce6489 what X2?
@@cubapawlac X3
@@makam2089 ??? Answer the full sentence please. I don’t understand…
It should and can work, since it works in Forge; they are not very different.
any difference for the promax.safetensors?
Hi Olivio, on my ComfyUI, as I included new nodes, I keep getting the "reconnecting" error that never recovers, so I have to restart. Have you had this issue?
Haven't seen that yet on my PC. Best you ask in the ComfyUI Discord. Also check if you get any error messages.
How necessary are all the multi modal control images (depth, stick figure, etc). Do you need to provide all those images as the input to the new control net?
No, I just showed them as examples. You only need one (or as many ControlNets as you want to use at one time).
Am I crazy, or is it just downloading all the different ControlNet checkpoints into a temp folder and still using one file each? This feels gimmicky.
Also, can someone explain to me what the promax version is and what the difference is?
I've tried ComfyUI, but the quality of the images is pretty bad compared to Automatic1111 or Forge. It seems to be slightly blurrier and more pixelated, and the colours are a bit more washed out. Has anyone else noticed that?
Olivio, I haven't played with A1111 for a long while, because the interface is getting buggy and the temp/cache folder is getting very big and eats up all my drive space. Is it safe to delete that folder in My Documents and download everything again?
Yes, but move all the models to a new folder so you don't have to download them again.
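A rough sketch of that backup step (the folder names are assumptions from a typical A1111 `models/` layout; adjust them to whatever you actually have before wiping anything):

```python
import shutil
from pathlib import Path

def back_up_models(webui_root: Path, backup_dir: Path) -> None:
    """Move the big model folders out before deleting an A1111 install.

    Folder names below are common A1111 subfolders, listed as an
    assumption; check your own models/ directory first.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    for sub in ("Stable-diffusion", "ControlNet", "Lora", "VAE"):
        src = webui_root / "models" / sub
        if src.is_dir():
            # shutil.move keeps the folder name under the backup dir.
            shutil.move(str(src), str(backup_dir / sub))
```

After reinstalling, move the folders back (or point the new install at them with A1111's command-line path options).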
Long time no see, Olivio. Missed you so much, man... Did you get married? 😉😊😄
not yet ;)
Is it me, or is the preprocessor node from this vid outputting 8-bit depth maps in MiDaS mode? I get crazy banding in the depth-map-augmented output.
I get this error - Error occurred when executing ControlNetLoader:
Error(s) in loading state_dict for ControlNet:
size mismatch for task_embedding: copying a param with shape torch.Size([8, 320]) from checkpoint, the shape in current model is torch.Size([6, 320]).
size mismatch for control_add_embedding.linear_1.weight: copying a param with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1536]).
How do I save an image in JPG format in ComfyUI? Please help me.
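ComfyUI's stock Save Image node writes PNG, so one simple option is a post-process over the output folder. A sketch using Pillow (the folder path is whatever your own output directory is):

```python
from pathlib import Path
from PIL import Image

def pngs_to_jpg(output_dir: Path, quality: int = 95) -> list:
    """Convert every PNG in a folder to JPG alongside it.

    Assumes output_dir is your ComfyUI output folder; custom save
    nodes from node packs can also write JPG directly.
    """
    converted = []
    for png in sorted(output_dir.glob("*.png")):
        jpg = png.with_suffix(".jpg")
        # JPG has no alpha channel, so flatten to RGB first.
        Image.open(png).convert("RGB").save(jpg, "JPEG", quality=quality)
        converted.append(jpg)
    return converted
```

Note that JPG drops the workflow metadata ComfyUI embeds in its PNGs, so keep the PNGs if you want to reload workflows from images later.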
Does anyone have an idea why the "ControlNet Preprocessor" doesn't see any of the installed preprocessors? I just have "tile" listed.
is there any controlnet for sd3?
TensorRT support ?
There is one also for 1.5?
Would anyone know how to use multiple ControlNets? Still new to Comfy, and we could stack them in A1111.
Does it work for the seg model from 1.5?
The ControlNet preprocessor gives "list index out of range" for a reason I don't see. Anyone have a solution?
does this work for a1111 and tile for image enhancement like sd?
Dang, trying to get it to work. Everything goes fine until it gets to the KSampler. Hope someone else has this issue and finds a way around it.
Are you sure you're using an XL model?
@@OlivioSarikas Ah, that was it! I was being silly. Thank you!
Now the workflows will be simpler, it’s just one file to deal with
Hi, I can't join the Discord, can you update the link?
Why hasn't CivitAI lifted the ban on SD3 yet? 48 hours have passed. I'm sure someone has already made LoRAs and models; where are they?
They're likely running the newer license by lawyers to make sure of everything first. Give them time, 48 hours is not long enough especially as it's been over a weekend.
Isn't Automatic1111 deprecated?
Hello, does it work on InvokeAI too? Has someone tested it?
How do I install this in WebUI Forge?
How to change the clothes?
when a1111? 😢
According to another comment it works right now.
FU AND COMFY UI
According to other comments it works in A1111 and Forge, so possibly others too.
I stopped watching when he said ComfyUI. Thank you and no thanks.
Please make a tutorial on how to upscale a video in Automatic1111.
Off topic but I figured you would know. For Project Odyssey, do they only accept animations or is themed lookbook acceptable?
Thanks!
Does it work with ROCm on Linux?