That's the first animation I've seen with SD that looks professional. Very cool!
Just simply mind-blowing. You are awesome, Olivio. Will try this tonight.
I've tried other workflows, this is the best, and it's free.
After multiple tests... I'm getting better results with a depth map ControlNet, but overall it's a great workflow for slower types of movement.
This channel clearly became the no. 1 source for ComfyUI tutorials. Awesome 👍
Exactly the reason why I barely watch anymore. 👎
@@joeterzio7175 I wasn't a fan of node-based editors either, but I gave Comfy a try because I was sick of A1111's memory issues, slow speed, and incompatibilities. I do not regret it - performance is AWESOME! The easiest way to use Comfy is to save different workspaces (as provided here!), enter your own prompts, and that's all. No work, no hassle.
Yeah, I hope he continues with this... other YouTubers seem to be going back to A1111 because more people use that one instead :/
Please follow Matteo on YouTube: www.youtube.com/@latentvision
SweetyHigh Video: youtube.com/shorts/-_YZ1kSoInQ
#### Links from my Video ####
Workflow Download: openart.ai/workflows/matt3o/template-for-prompt-travel-openpose-controlnet/kYKv5sJWchSsujm0zOV0
huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt
huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt
huggingface.co/guoyww/animatediff/tree/main
huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/blob/main/control_v11p_sd15_openpose_fp16.safetensors
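For everyone below asking where these files go: a quick hedged sketch in Python that checks the usual spots. Paths assume the default portable layout and Olivio's reply further down that v3_sd15_mm.ckpt lives in the AnimateDiff-Evolved models folder; the install root here is only an example.

```python
# Hedged helper: check whether the models this workflow needs are where
# ComfyUI usually looks. COMFY is an example path - point it at your install.
from pathlib import Path

COMFY = Path(r"C:\ComfyUI_windows_portable\ComfyUI")
expected = [
    COMFY / "models/controlnet/control_v11p_sd15_openpose_fp16.safetensors",
    COMFY / "custom_nodes/ComfyUI-AnimateDiff-Evolved/models/v3_sd15_mm.ckpt",
]
for f in expected:
    print("OK" if f.exists() else "MISSING", f)
```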
we need help finding one of the models
@OlivioSarikas you mentioned DW Pose in the title of the video, but you linked the OpenPose model!
Wow this one really is a gem! Thanks man, keep up the amazing videos!
What is the download link for anidiff_controlnet_checkpoint?
looking for the same thing
Amazing tutorial once again! Thanks Olivio 🐔😘
This is magnificent, off to try it! Many thanks to you!
FOR ANYONE GETTING *ERROR* AT 2:45: *MOVE YOUR ComfyUI_windows_portable folder to your "C" drive.* The file path must be "C:\ComfyUI_windows_portable".
In my case I have a storage drive labeled "F", so my file path is "F:\ComfyUI_windows_portable".
Windows does not like long file paths, so you need to place the ComfyUI folder at the root of your drive.
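If you'd rather script the move than drag folders around, here's a minimal sketch; the source path is hypothetical, so point it at wherever you actually extracted the archive:

```python
# Move the portable ComfyUI folder to the drive root to stay under
# Windows' 260-character MAX_PATH limit. The source path is an example only.
import shutil

src = r"C:\Users\me\Downloads\ComfyUI_windows_portable"
dst = r"C:\ComfyUI_windows_portable"
shutil.move(src, dst)
```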
Matteo's workflow from openart shows the uppermost advanced controlnet model as "ad/motion.ckpt" (the one you changed to "anidiff_controlnet_checkpoint.ckpt").
Unfortunately, I can't find it anywhere.
I can't find the ad/motion.ckpt either, so it's not working for me, sadly.
On OpenArt, comments say it's one of these from Hugging Face: "crishhh/animatediff_controlnet"
Found it in the OpenArt workflow description. It does point to the Hugging Face repo that @elowine mentioned.
@@olao6737 @elowine I did download this file, but where should I put it? Into webui\models\ControlNet?
Very promising! Thanks for the video
Can't find the anidiff checkpoint on the link you provided.
Hi, why is the video just 2 seconds? How do I get the real duration from the source? For example, the source is 15 seconds, but the output is just 2 seconds.
YouTubers are so greedy, they sell workflows on Patreon... but you give yours away for free... you deserve more followers.
good job and thanks for the update
Hi Olivio, thanks for the video.
If I want to change the frame count from 32 to, let's say, 48 or 64, should I change the "context overlap" to 3 or 4, etc.?
Hello, I must commend the remarkable stability and effectiveness of your workflow. Yet figuring out how to set the length of the resulting videos escapes me. Could you guide me through that?
Thank you always!!
How do I fix this issue?
Error occurred when executing VAEDecode:
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead
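Not a definitive diagnosis, but that error is just a convolution receiving latents with the wrong channel count - one common cause is a VAE that doesn't match the checkpoint family (the decoder expects 16-channel latents, while SD1.5 produces 4-channel ones). A minimal PyTorch reproduction, purely to show where the message comes from:

```python
import torch

# A conv whose weights are [512, 16, 3, 3], i.e. it expects 16 input channels.
conv = torch.nn.Conv2d(in_channels=16, out_channels=512, kernel_size=3)

latent = torch.randn(3, 4, 64, 64)  # SD1.5-style latents have 4 channels
conv(latent)  # RuntimeError: ... expected input[3, 4, 64, 64] to have 16 channels
```

If you hit this, double-check that the checkpoint and the VAE you load both belong to the SD1.5 family.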
Could you please tell me which adjustment can make the video longer?
Can you please make a video on how to set up all the nodes and such from scratch?
I've been wanting to start my own anime; it's finally time, thanks to you and this video ❤
Also, what if you didn't want to use a prompt to make a character?
I have my own designs already made. Is there a way to connect the dance video with a premade template of the anime character you want to use?
If you could help figure out how to do this, a lot of people will thank you, including myself ❤❤❤
Hi, amazing tutorial, but a question: I can't find the node you use for batch and image size, named "EmptyLatentImage". Is there perhaps a substitute? I tried to search, but the EmptyLatentImage node doesn't have a batch input now.
Any idea? Thanks in advance.
What about Turbo + LCM models? Do they help with frame render speed, or are they unusable here?
Those fingers look insane... 😅😮
As I was scrolling down this video, I realized that everyone is waiting for you to provide the ad/motion.ckpt thingy. If you don't comply we will start to burn cars in the streets... lol
It's just OpenPose renamed.
Really??? @@aivideos322
@@aivideos322 yep, that is the conclusion I came to as well. And it's all working. Likewise, I can't actually find said ckpt, but I used the safetensors that's also used near the bottom of the workflow.
@@aivideos322 so he used it twice? That's it? I dunno, man...
It's not OpenPose.
Great video as always. I was keen to try this, but I can't find ad/motion.ckpt, and I'm having issues downloading the YouTube short for some reason. I can try another video if I can find the motion.ckpt. Cheers.
Seemingly you have left quite a lot of people frustrated, as it's not obvious where to get the checkpoint from.
I've used the safetensors that's also used near the bottom of the workflow.
How is it looping smoothly? Is it just the video that does it, or what's going on? Mine all animate great, but jump at the end.
Two things. The first one others are asking about as well: where do we get your renamed anidiff_controlnet_checkpoint.ckpt from? There doesn't appear to be a file link, even with your 'anidiff' part removed.
The second thing: I have the v3_sd15_mm.ckpt used in the AnimateDiff Loader node... but where do I store it so it detects it?
Got the second part sorted, just the first one now.
@@Elwaves2925 I’m not saying it’s 100% the correct answer, but I’ve used the FP16 safetensors like what’s used at the bottom of the workflow!
If I find the actual file that’s been renamed I’ll update. But try it with that. It worked for me.
@@PeteJohnson1471 Cheers, I'll give that a go. I tried with the control open pose file that's used elsewhere in the workflow (it loaded it, not me) but it messed up the second animation.
Next time I go on I'm going to try some other motion models as well.
@@Elwaves2925 It did for me too, so I reduced the denoise strength in the 2nd KSampler to about 0.35, and things don't go too far out of whack ;-)
Where do I place the ckpt file? Can you help?
I tried the first animation method and followed all the steps, but I'm not sure why it doesn't really follow the ControlNet input I gave it.
Hi, one question: I'm unable to stack multiple ControlNets using these fp16 models. Any reason why?
Prompt outputs failed validation
VHS_LoadVideoPath:
- Custom validation failed for node: video - Invalid file path: C:\Users\MonWeb\Downloads\videoplayback.webm
??????????????????????
this keeps happening to me, too
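Before blaming the node, it's worth confirming Windows can actually see the file at that exact path - a typo, a doubled extension like .webm.webm, or the long-path problem from the top of the thread all produce this. A trivial hedged check (path copied from the error above):

```python
from pathlib import Path

p = Path(r"C:\Users\MonWeb\Downloads\videoplayback.webm")
print(p.exists())                              # False -> the node got a bad path
print(list(p.parent.glob("videoplayback*")))   # see what's actually there
```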
Great tutorial, thanks for this! Question though: is there a way to feed it an image to be animated, like the sourced video? Say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing, and have Comfy replace that person with the character? Or does AnimateDiff work through prompts only at the moment?
could you do a tutorial on lip sync?
Hmm, I'm trying to understand. I have a lora that I'd like to add to this process, but it doesn't seem to get picked up properly. Where is the best place to add it?
It's sort of a person lora, making the same person every time.
Hey Olivio, can you let us know where to place those files from the links you shared? Thank you so much!
If I want to use a LoRA I trained on a specific person, is it possible to use it here? If so, where do I put the LoRA loader (i.e., which nodes do I connect it to)?
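Not an official answer, but the usual wiring is: a LoraLoader goes right after the checkpoint loader, takes its MODEL and CLIP outputs, and everything downstream (the prompt encoders and the KSamplers) connects to the LoraLoader instead. A sketch in ComfyUI's API-prompt format - the node ids and the LoRA filename are made up for illustration:

```python
# Hedged sketch of LoraLoader placement in a ComfyUI API-format prompt dict.
prompt = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},
    "10": {"class_type": "LoraLoader",
           "inputs": {"model": ["4", 0],   # MODEL from the checkpoint loader
                      "clip": ["4", 1],    # CLIP from the checkpoint loader
                      "lora_name": "my_person.safetensors",  # hypothetical file
                      "strength_model": 1.0,
                      "strength_clip": 1.0}},
    # CLIPTextEncode / KSampler nodes then reference ["10", 0] and ["10", 1]
    # instead of wiring straight to node "4".
}
```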
I apologize for the basic question, but I've only recently started using Comfy. If I haven't downloaded the checkpoints and LoRa required for this workflow, why am I still able to use them? I mean, I see them within the nodes, but I've never downloaded them.
I seem to be having a problem when it reaches the first ksampler ('NoneType' object has no attribute 'shape') and haven't found a way to fix it, any ideas would be appreciated, thanks.
I have a quick question: can we export the pose data? I'm currently working on an idea to animate 3D models...
Where can I download the Anidiff controlnet checkpoint?
Looks like it's in the video description.
Yeah, after 4 hours I found almost all the files, because Olivio changed the names... so could you post your own links and list the names of the destination folders? Please :)
Seems it's only the renamed anidiff_controlnet_checkpoint.ckpt that people can't find. Myself as well.
Amazing! But how do you control the video length?
I have a hard time replicating people's workflows on YouTube; it feels like they never go through the package installation or model checkpoint placement.
Upgrading Python from 3.9 to 3.10.11 fixed most things. Packages were still messed up, but the "Try fix" button in Manager sorted them out.
Great content, man! What exactly is the ControlNet model at 8:10? The fourth Hugging Face link points to an OpenPose ControlNet. Is that it?
Yeah, wasn't able to find it on any of his links. Great work on the video as usual, Olivio!!
Was wondering as well:)
stuck on this one as well
It seems many are having the same issue. Myself included.
I believe it is the "v3_sd15_mm.ckpt" file again; it needs to go into the models/controlnet folder.
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead (please help, anyone?)
Has anyone had any luck with this for realistic outputs, rather than anime?
Looks really great. What is the max available length of a generated animation?
Guys, why do I always get the same face? I change the prompt (hair, eyes, age, etc.) and I always get the same face, even though other details like the hair change... I tried DreamShaper XL, DreamShaper 8, RealisticVision, and Juggernaut XL.
Stability in the video? Sure, if we're not looking at the hands...
Is the other ControlNet model depth?
Niiice Olivio !
Cool. I don't think this will replace anyone; you can just work much faster and do more work. I see how this will be used in VFX with a layer mask, on which you can generate, for example, a simple render of a lava flow and add the details with SD. Most people who make AI "art" videos are stuck with the lack of ability to control the narrative. But if you know 3D, compositing, and animating, you can make low-poly basic animations and sims and later refine them with SD. I just wish ComfyUI had some sort of if/else statement, or case 1, 2, 3..., which could be triggered through a control panel with buttons; that way you don't have to modify everything all the time and can just create one big workflow with multiple setups.
Would 16 GB of VRAM be capable of doing this? I've got a 4080.
yes
@@agamenonmacondo Tight, guess I got some learning to do.
I have a 4060 Ti 16 GB and used Matteo's previous ballerina template plus an added face detailer, and a 1.5-second video takes about 7 minutes to create... so it is possible...
The Timestep Keyframe node isn't loading (Advanced-ControlNet is broken on my system). Any other node I can replace it with?
Did you click on "Update All" in ComfyUI Manager?
Hallo, anidiff_controlnet? @@OlivioSarikas
Is there a site with motion models to use instead of trying to grab videos from dancers on youtube? I can't really download from youtube (I still haven't figured that out). A library of animations (like pose libraries on CivitAI) would be a plus.
Download 4K Video Downloader+; it's free for 30 videos a day.
yt-dlp or youtube-dl (both open source and free)
Come to think of it, yt-dlp is available in Visions of Chaos, which is awesome.
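Seconding yt-dlp - for anyone who'd rather stay in Python, a minimal hedged sketch (the URL is a placeholder; install the package with pip first):

```python
# pip install yt-dlp
from yt_dlp import YoutubeDL

opts = {
    "format": "mp4",               # an mp4 that the Load Video (Path) node can read
    "outtmpl": "dance_input.mp4",  # short, fixed output filename at a short path
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/shorts/PLACEHOLDER"])
```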
Not compatible with SDXL ?
I've been searching for the DWPose Estimator node and haven't found it yet; where can I get that one?
Funny, I found it by right-clicking and Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. But I can't pull it up in a search of my nodes.
YAS!!!!!
Holy Moly
In theory, wouldn't you be able to do this with Canny? If I can figure it out, I might finally get past the roadblock I've been at with AnimateDiff.
Meanwhile Lumiere just dropped... this tech is moving so fast.
Why not use the Load Video (Upload) node instead of the Load Video (Path) one?
He didn't do too much testing with it, as he wanted to put the video out ASAP. Though this does work as well, and you can convert the frame limit to an input to keep the Number of Frames node in place.
@@JoshTheFlyGuy👍
Can you do a video on RAVE?
He used another method of doing this but with the ballerina. Is this the new way to go about it?
Great tutorial as always.
Just one question: where do you put the 'v3_sd15_mm' checkpoint? I can't seem to find the right folder for it, and can't select it in the AnimateDiff Loader (only "undefined").
That goes into custom_nodes\ComfyUI-AnimateDiff-Evolved\models - sorry, I should have pointed that out in the video
@@OlivioSarikas thanks, man. You rock!
The hands though...
Nothing stops you from passing each frame through meshgraphormer.
Anyone run into the same error? How can I fix it?
Error occurred when executing DWPreprocessor:
[WinError 3] The system cannot find the path specified: 'C:\\Users\\admin\\Downloads\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/DWPose-TorchScript-BatchSize5\\cache\\models--hr16--DWPose-TorchScript-BatchSize5\\snapshots\\359d662a9b33b73f6d0f21732baf8845f17bb4be'
Apparently it's all about how ComfyUI was assembled. I had the same problem, and updating ComfyUI via Manager did not help. Then I just reinstalled ComfyUI using the Pinokio automatic installation service, and everything worked :)
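The very long path in that traceback looks like the same Windows long-path limit flagged at the top of the thread. A hedged way to check whether long paths are enabled system-wide (Windows only; QueryValueEx raises if the value is absent, which also means they're off):

```python
# Reads HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled.
# 1 means long paths are enabled; 0 means the 260-character limit applies.
import winreg

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\FileSystem")
value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
print(value)
```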
Can I please ask you for a similar tutorial for Automatic1111?
Not powerful enough to do stuff like this
First mistake: it's not Automatic1111. Second mistake: it's not SDXL.
YOU scuf
Love the results, but the complicated process is a big turnoff.
This is a simple, efficient workflow. I'm actually surprised by how simple and powerful it is.
You have to build it once, then you just edit prompts and click start.
@@purelife_aippl look for a one-button fix and zero learning curve... Only then can they say "look how skilled I am" 😂
Alternatively you could learn animation =)
You're so out of touch with how much work and knowledge usually goes into art lol.
Can you PLEASE stop giving this ridiculous SpaghettiUI so much attention. It's a complete waste of time, and it offers NOTHING useful over A1111 or any other variant, for that matter.
I guess you don't see the potential of ComfyUI... and all the things you cannot do in Automatic1111... you can put LoRAs before or after the prompt, or the image... you can start with one model and finish with another... you can apply LoRAs or ControlNets to specific parts, etc. I was "against" ComfyUI at the start, as I was used to Automatic1111... but I have to admit there are things you can do better in ComfyUI... I still prefer to inpaint in Automatic1111, though...
For animation ComfyUI is a must... especially if you want to do things like Steerable Motion, SVD, or lengthy videos. Sometimes you want to do the video rendering step by step to save on VRAM. For not-so-complex images and inpainting, A1111 is great though.
@@cedtala Literally nothing you just mentioned is useful or necessary, just like ComfyUI.
Somebody had to say it. I've been working with ComfyUI for about a year now (off and on). It is literally the most headache-inducing user interface I have ever had the displeasure to experience.
AttributeError: module 'comfy.ops' has no attribute 'Linear'
Getting the same error with AnimateDiff. Any idea what it's all about?