📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course
Sorry to hear about your health problems, wishing you good luck and a speedy recovery! Can't wait for the course; I think this is exactly what I've been looking for for over a year now: everything in one place rather than a mishmash of different YouTube tutorials. These ControlNets look great! Only problem is I was prepping images over the weekend to get ready to make my first LoRA; now I think I'll have to restart from scratch to see if these new models and your excellent guidance can improve the results. Thanks so much 😊
12:00 how did changing the seed value by 1 tell it to generate 4 images instead of 1? 🤔
I'm very sorry to hear about your health situation. I hope you will get better soon. Great video by the way, the presentation was perfect, and the music was so soothing.
aaaaaaaand NOW there's the Xinsir UNION model that does everything XD!
This was relevant for 5 days! Great video btw.
Why do I get this error even though my connections are the same as yours?
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
Thanks for making this video! It's really cool to see how fast this technology is growing. I'll be sure to try out these new ControlNets in my future workflows! And I hope you get better soon! You are the only YouTuber I found that gives in-depth explanations for workflows and provides the workflows to play around with, so thank you for that!
Thank you! It's great to know I could help.
My Depth Anything doesn't work. Can you create a video on how to install it and where to place the required files?
Search Hugging Face for depth_anything_vitl14.pth.
Download it and place it in: ComfyUI\models\annotator\LiheYoung\Depth-Anything\checkpoints
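If you'd rather script that download, here's a minimal sketch using huggingface_hub; the repo_id and in-repo filename are assumptions based on the comment above, so verify them on the model page first.

```python
# Minimal sketch: fetch the Depth Anything checkpoint and copy it where the
# ComfyUI annotator expects it. Repo id and filename are assumptions; check
# the Hugging Face model page before running.
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download

cached_path = hf_hub_download(
    repo_id="LiheYoung/Depth-Anything",                # assumed repo id
    filename="checkpoints/depth_anything_vitl14.pth",  # assumed in-repo path
)

target_dir = Path(r"ComfyUI\models\annotator\LiheYoung\Depth-Anything\checkpoints")
target_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(cached_path, target_dir / "depth_anything_vitl14.pth")
```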
Excellent video my friend, I had NO problems by downloading and running the Depth Anything node as well as the Canny Edge node, both run great.
BUT when I tried to run the DWPose Estimator, an error pops up:
Error occurred when executing DWPreprocessor:
'NoneType' object has no attribute 'get_providers'
And actually I got the same error when trying to run the DWPreprocessor.
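That 'get_providers' error usually means onnxruntime failed to import or initialize, so the preprocessor's inference session ends up as None; that's an assumption on my part, but it's easy to check from the Python environment ComfyUI runs in:

```python
# Quick sanity check (assumption: the DWPose/DWPreprocessor error comes from a
# missing or broken onnxruntime install, leaving the inference session as None).
try:
    import onnxruntime as ort
    # Lists execution providers such as CPUExecutionProvider / CUDAExecutionProvider.
    print("onnxruntime OK, providers:", ort.get_available_providers())
except ImportError:
    print("onnxruntime is not installed here; install onnxruntime "
          "(or onnxruntime-gpu) into the environment ComfyUI uses.")
```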
Hey man, awesome stuff! Would love a workflow, even a paid one, for ComfyUI, especially for products: using your own product shots and composing them into different environments.
What have I been using then? I swear I've been using ControlNet poses with SDXL for the past year already.
Can we use it on A1111?
thank you
I'm going to watch a good video today and learn from it.
Thank you always~
Thanks a lot for those amazing videos, wishing you good luck and a speedy recovery!
Thank you very much!
My healing prayers to you! Thanks for making and sharing this. I wonder how these compare with MistoLine's SDXL-ControlNet.
Great work as always; even with your health problems you are delivering ❤
How do you apply the depth model to an existing image?
OP shows this in the video at timestamp 4:15.
@@ceegeevibes1335 I meant applying depth from one image to another image, i.e. a different person. I know you can swap faces using the ReActor node, but that doesn't yield the best results for me.
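One way to do what that comment describes (not necessarily how the video does it) is to estimate a depth map from a source photo and condition an SDXL depth ControlNet on it while prompting for a different person. A rough diffusers sketch; the model ids are assumptions:

```python
# Rough sketch: reuse the depth from one photo to generate a different person
# in the same pose/composition. Model ids are assumptions; swap in your own.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
from transformers import pipeline

# 1) Estimate depth from the source image (default depth-estimation model).
depth_estimator = pipeline("depth-estimation")
source = load_image("source_person.jpg")  # hypothetical input file
depth_map = depth_estimator(source)["depth"].convert("RGB")

# 2) Generate a new image conditioned on that depth map.
controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16  # assumed repo id
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="photo of a different person in the same pose",
    image=depth_map,
    controlnet_conditioning_scale=0.5,
).images[0]
result.save("depth_transfer.png")
```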
Do you know if it is possible to use the models in Forge?
Is it possible to change a person's pose in an image while keeping their original body structure intact?
Can these models be used in A1111?
Yes!
Amazing video as usual. Thanks a lot!
My pleasure!
Love your vids, thanks for the helpful insights.
Hey, thanks for the update! Got a question though: there are 2 Canny models available, the normal one and the V2 one. Do you know the difference? They both have the same file size...
I haven't tested the V2 model yet, but I don't think there will be a big difference between the two.
@@Aiconomist v2 looks much better
great video, and hope you get better soon!
Thank you!
I get the best results by loading these models as diffusion models (Diff ControlNet Loader). This loader has to be connected to a Differential Diffusion node, which is connected to your checkpoint's model output. (I figure this is how they're able to work TOGETHER with the checkpoint properly.) I'm not 100% sure technically, but I'm getting extremely good results.
Even if I'm wrong, I'd encourage y'all to test this variation of the workflow and judge for yourselves. Have a great Comfy DAY!
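For anyone trying to reproduce that wiring, here's a sketch of the fragment as a Python dict in the shape of ComfyUI's API-format workflow. DiffControlNetLoader and DifferentialDiffusion are ComfyUI node class names, but treat the node ids, input names, and file names as assumptions and compare against your own "Save (API Format)" export:

```python
# Sketch of the node wiring described above, in the shape of ComfyUI's
# API-format workflow JSON. Node ids, input names, and file names are
# assumptions; confirm them against your own API-format export.
workflow_fragment = {
    "1": {  # load the checkpoint
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},  # hypothetical file
    },
    "2": {  # patch the model for differential diffusion
        "class_type": "DifferentialDiffusion",
        "inputs": {"model": ["1", 0]},  # MODEL output of node 1
    },
    "3": {  # "diff" ControlNet loader, which also takes the model as input
        "class_type": "DiffControlNetLoader",
        "inputs": {
            "model": ["2", 0],  # patched model from node 2
            "control_net_name": "xinsir_openpose_sdxl.safetensors",  # hypothetical
        },
    },
}
```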
Hi, do you have a workflow so I can see that? I'm very new to this :'c
nice tutorial as always thanks 🙏🙏
What's the plugin you used for the "Generate on Cloud GPU"?
It's Comfy Cloud by nathannlu
You can also stack multiple ControlNets to get even better results.
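In diffusers terms, stacking means passing a list of ControlNets plus one conditioning image and one strength per model. A minimal sketch, with the repo ids as assumptions:

```python
# Minimal sketch of stacking two ControlNets in diffusers (multi-ControlNet).
# Repo ids are assumptions; substitute the models you actually use.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained(
        "xinsir/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16  # assumed
    ),
    ControlNetModel.from_pretrained(
        "xinsir/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16  # assumed
    ),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# One conditioning image and one strength per ControlNet, in the same order.
image = pipe(
    prompt="portrait photo, studio lighting",
    image=[load_image("pose.png"), load_image("canny.png")],  # hypothetical files
    controlnet_conditioning_scale=[0.8, 0.5],
).images[0]
image.save("stacked_controlnets.png")
```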
Sending prayers and love. 👩🏾⚕️🤒🩹 😊
I want to dive deeper into ComfyUI. Is a 6GB VRAM RTX 3050 Laptop GPU enough to run ComfyUI? I see there is a "Generate on Cloud" option. Is it worth it? Will it be charged every time we generate an image? Or pay by the hour? Thanks, hope you get well soon.
God fucking damnit, I was waiting for this for SO LONG, and today, my ComfyUI install broke during a system crash... Now I have to re-install EVERYTHING just to get ControlNet... ARghhhh.....
Yes! 😮
What about a Lineart model?
I think Xinsir will train this model; we should just give it some time.
Please, SD 1.5
I personally think SD 1.5 already has well-trained ControlNet models.
Hey brother, I sent you an email. Great videos :)
These aren't new.
The HF page says this is the SOTA version of open-source OpenPose models... so while this isn't totally new like the video makes it sound... it's just some better-trained models or something... clickbait strikes again.
My YouTube channel thanks you.