I somehow can't get the ControlNet tab to appear. I did exactly what you did and tried reinstalling everything about three times. 2:59 is what I'm talking about.
On my way home, just saw this and can't wait to get there asap, thanks for another great video +1
Good luck! 😉
Can't wait for them to be Automatic1111 compatible!
One day… 😉
I mostly use the scribble model in 1.5. Currently I draw a mask on the loaded image and convert the mask to an image to feed into canny/depth, for lack of a scribble model in SDXL.
Absolutely incredible!
When I try to turn only the text into ice cream with a white background, it makes the text colorful but doesn't turn its structure into ice cream. If I adjust the strength + end percent, it generates ice cream in the background, but the text still isn't in an ice-cream style. Can you tell me what I'm doing wrong? :)
That workflow is really good! I just wanted to use the Lora Stacker Efficient Node with it, but I have no idea how to integrate it. A big problem with Comfy today is integrating custom nodes from different providers, since everyone is creating their own custom loaders and pipelines for their own workflows. I think Comfy still needs out-of-the-box solutions to at least enable autocomplete for embeddings and multiple LoRAs without the hassle of creating multiple nodes for it.
I'm confused by the ControlNet Preprocessors repo, which is now archived, and the comfyui_controlnet_aux repo; both show up in ComfyUI Manager and both can be installed.
Yeah, the new one came out in the middle of making the video so feel free to go with that one instead! It’s marked as a work in progress though, so watch out for issues. Basically at the moment you can go with either.
OK, I just installed the regular preprocessors and got it working, thanks. The template works great. I figured out you can open multiple browser tabs of the artboard with other workflows and cut and splice them together; I had a face restorer / Roop-like workflow I was able to add to the end. @@NerdyRodent
do you have this workflow available to download somewhere please
Based on this setup, I'm getting a depth image in the preview window when using the canny ControlNet model. Not sure why.
The preview shows the output from the pre-processor
Thanks @@NerdyRodent for responding. I guess I was expecting the pre-processor output to be more like an outline when using canny. Is there a way to tell ComfyUI to use a different pre-processor? I don't think I'm supposed to be seeing a depth map preview image when using a canny model.
use the canny preprocessor with the canny model for best results
What graphics card do you have?? That was super fast generation 😂😂. Great video!
Just an old 3090 😉
Can you skip the preprocessor and feed the text image directly to the depth model?
Yup, there's no need to bother with preprocessors if you already have processed images.
May the AI overlords bless you. thanks for all the stuff you do!
For later viewers of this, simply use ComfyUI Manager to add custom nodes for the ControlNet Preprocessors. No Git cloning required.
Thankyou
Can you please do a quick walkthrough vid on integrating the new ControlNet with the SeargeSDXL comfyui workflow? Pretty pleeaasseee😘
Thank you for your explanation, but I'm facing a problem when I run it; it shows me an error:
"Error occurred when executing CheckpointLoaderSimple:
Error while deserializing header: MetadataIncompleteBuffer"
What is the problem? Is there a solution?
Maybe an old version of ComfyUI?
@@NerdyRodent I'm using the latest version, and when I did a restart I got another problem:
Error occurred when executing KSamplerAdvanced:
mat1 and mat2 shapes cannot be multiplied (1x2560 and 2816x1280)
@@anaskhalid4948 that sounds like you may be trying to put a control net through the refiner
Did you change anything in the settings, like the file you are uploading?
@@NerdyRodent Thank you for your interest. It's working now.
Sadly, all I get is out-of-memory errors. At 1024x1024 it reports:
Currently allocated: 7.28 GiB
Requested: 16.00 MiB
Device limit: 8.00 GiB
I guess it can't be done at home with smaller cards?
Find the .bat you use to open your generator of choice and add the command-line argument --medvram. I run Comfy with XL models just fine on my 3070 laptop with only 8 GB of VRAM.
@@blacksage81 That isn't a valid argument for ComfyUI (it's the default, apparently). I'm trying --lowvram right now to see. If it's working for you, though, that makes me think something else is wrong.
The depth control node you're using isn't available in the ControlNet preprocessors. Are there any other ones we can use?
You can use any of the depth preprocessors from any of the control net extensions
You need a workaround to get it to work: change the YAML file.
I'm not yet comfy with Comfy, so a basic question: HOW DO I CHANGE THE BADGER? (He's really cute, btw.)
I've tried using Load, but the badger is always the base image, and I don't see any node loading it!
Is the preprocessor for the control net SD 1.5 different from the new one?
It’s just an image processor so nothing needs to change
@@NerdyRodent Happy to hear that. The thing is, the preprocessor seemed to have both negative and positive outputs, but the previous ones had only the positive outs. Maybe mine is different.
@@musicandhappinessbyjo795 image in, image out on mine…
Can you do multi controlnet, both canny and depth?
Yup, just feed one into the other. It's a bit hit and miss with SDXL at the moment, though, and it seems to prefer Euler.
How to distinguish controlnet preprocessor and models between SD 1.5 and SDXL inside ComfyUI? :O
You could just name your SD 1.5 models with “1.5” in the name? I haven't really used 1.5 in Comfy ☹️
I wish ComfyUI made it easy to batch process folders of images like A1111
The batch nodes are fairly easy… once you’ve added them to the workflow.
So awesome!
Thank you! Cheers!
We need you to try the new Img2vid form modelscope
Sure, I’ll maybe take a look!
“Badgers?! Badgers! We don’t need no stinking badgers.”
We need rodents.
;)
I could also use both control net models (canny and depth)at once right? If I put one Apply Controlnet into the next. Or am I missing something here?
Yup. There are also multi-controlnet loaders too for a more compact look
Does it not work on Google Colab? 😭
Should work, but may not be allowed
so this is basically txt2img with controlnet. that's why it has no denoise slider like img2img would have? right?
Yup!
@@NerdyRodent Your workflow is quite neat, so if you find time to share an img2img one, I'd be a happy man xD
When I'm in img2img with denoise strength set to 100 and some text prompt, is that the same as txt2img? @@NerdyRodent
Thank you!! 🎉🎉
Is the ControlNet in SDXL better than the other versions of SD?
I like SDXL for sure!
Is this the official ControlNet release by Stability AI, or a custom user model?
Can you add a workflow to use along with a LoRA?
Sure, you can add the lora in there too!
@@NerdyRodent I'm not getting good results. That's why I'm asking for a workflow 😁🙏
@@AgustinCaniglia1992 set the Lora strength somewhere between 0.2 and 0.5 for the best results, keyword “contrasts”
I am a self-proclaimed geek. Most importantly, your content is ALWAYS "the latest thing", "something I'm interested in", "something I can actually do, which I couldn't do before your guidance".
You have a loyal follower. (Hitting Like.) ~ Don't change a thing, keep up the good work. ~ I LIKE your "we are all equal" and, at the same time, "most of you are stupid, and I'm here to help" helpful/condescending attitude. (I think it's more "a bit" than "the actual you".) I mean, name another YouTuber who will give us this?
Feed your inner nerd! 😃
I'm getting the preview image but no saved image. I'm using your flow (the badger PNG file). Any idea what's missing?
Check the terminal output for any issues and the outputs directory for the files
@@NerdyRodent Yes, I think it's a folder issue. My version of ComfyUI uses "ComfyUI_windows_portable", and there seems to be a choice of which folder to place the pre-processors in. I'm still not sure where they should go. I updated the Manager and downloaded the missing nodes and models but still don't get the image. I'll try more tomorrow, but I've had enough for now…
Sir why don't you use SDXL1.0base_0.9vae?
You can indeed use any base model you fancy. Try Civitai for a whole load of other options!
great stuff
Hey there’s actually openpose as well! 😄
Sweet!
ComfyUI users are literally chillin' while Auto1111 still hasn't fixed its RAM and VRAM issues with SDXL, lule :>
Ikr! 😉
Thank you for your extremely helpful tutorials! In this lesson almost everything worked out, but at the final stage this error appears:
*Error occurred when executing ControlNetLoader:*
Error while deserializing header: HeaderTooLarge
File "/content/ComfyUI/ComfyUI/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/ComfyUI/ComfyUI/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/ComfyUI/ComfyUI/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/ComfyUI/ComfyUI/ComfyUI/nodes.py", line 572, in load_controlnet
controlnet = comfy.sd.load_controlnet(controlnet_path)
File "/content/ComfyUI/ComfyUI/ComfyUI/comfy/sd.py", line 986, in load_controlnet
controlnet_data = utils.load_torch_file(ckpt_path, safe_load=True)
File "/content/ComfyUI/ComfyUI/ComfyUI/comfy/utils.py", line 11, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
Can you please tell me how to fix this error 🙏🏻? The workflow loaded successfully from your sample .png.
The error relates only to the ControlNet block. When I break the connection to this group, everything generates normally, just without Canny or Depth 🙈
Sounds like you could be using a very old version of ComfyUI?
how to install this to google colab
Just run the notebook - same as for any other Colab!
Thanks a lot for your videos! I spent the entire day in ComfyUI brushing up on all the new niceties, like the nested stuff and the Manager. But whew… I feel like the SDXL + refiner workflow is so mind-numbingly boring. There are lots of nice checkpoints and LoRAs on Civitai that are fun to set up… maybe there's a slight feeling of the generations looking like everyone else's, but damn, it looks so good.
If you want to use depth for text, just make a pic with white text on a black background, then don't use a preprocessor.
That's right - if you make your own depthmap then you don't need a preprocessor!
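As a sketch of the DIY depth-map idea above, using Pillow (the filename and text are just placeholders): white text on a black background reads as "near letters, far background" when fed straight into the depth model with a plain Load Image node.

```python
from PIL import Image, ImageDraw

# DIY "depth map" for text: white (255) = near, black (0) = far,
# so the letters stand out from the background.
W, H = 1024, 1024
img = Image.new("L", (W, H), 0)                      # single-channel, all black
draw = ImageDraw.Draw(img)
draw.text((W // 3, H // 2), "ICE CREAM", fill=255)   # default bitmap font
img.save("text_depthmap.png")  # load this in ComfyUI and skip the preprocessor
```

The default PIL font is tiny; in practice you'd load a big TTF with ImageFont.truetype, but the default keeps this sketch dependency-free.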
I have text which is pretty clear in the ControlNet canny image, but in the final picture I still sometimes miss a letter or so.
But anyway, thanks for the great video and workflow! Liked, commented and subscribed.
If anyone has tips for clearer text-based creations, or other ideas, I'd gladly hear them :)
You can try more control nets 😉 Make a DIY depth map for the text and add that in, for example!
@@NerdyRodent Thanks. Great ideas. ControlNet weekend project imminent! Maybe some masking, inpainting etc. too.
Nicee!
Sweet
evolution from rodent to a badger
😮
thank uuuu
where is the workflow ?
Links are in the video description!
Wake me up when ControlNet is added to A1111 and Fooocus, not some atrocious node interface.
wa
The complexity of "Comfy" UI is ridiculous. It distracts from a smooth artistic flow and is too technical, as if A1111 isn't technical enough…