DWPose for AnimateDiff - Tutorial - FREE Workflow Download

  • Published: 11 Nov 2024

Comments • 158

  • @aem5smruasn 9 months ago +13

    that's the first animation I've seen with SD that looks professional - very cool!

  • @Vestu 9 months ago +3

    Just simply mind-blowing. You are awesome Olivio. Will try this tonight.

  • @StringedYak 5 months ago

    I've tried other workflows, this is the best, and it's free.

  • @purelife_ai 9 months ago +1

    After multiple tests... I'm getting better results with a depth map controlnet, but overall a great workflow for slower types of movement.

  • @MikevomMars 9 months ago +8

    This channel clearly became the no. 1 source for ComfyUI tutorials. Awesome 👍

    • @joeterzio7175 9 months ago +2

      Exactly the reason why I barely watch anymore. 👎

    • @MikevomMars 9 months ago

      @@joeterzio7175 I wasn't a fan of node-based editors either, but I gave Comfy a try because I was sick of A1111's memory issues, slow speed and incompatibilities. I do not regret it - performance is AWESOME! The easiest way to use Comfy is to save different workspaces (as provided here!), enter your own prompts, and that's all. No work, no hassle.

    • @hleet 9 months ago +1

      yeah, I hope he will continue with this... other YouTubers seem to be going back to A1111 because more people use that one instead :/

  • @OlivioSarikas 9 months ago +5

    Please follow Matteo on YouTube: www.youtube.com/@latentvision
    SweetyHigh Video: ruclips.net/user/shorts-_YZ1kSoInQ
    #### Links from my Video ####
    Workflow Download: openart.ai/workflows/matt3o/template-for-prompt-travel-openpose-controlnet/kYKv5sJWchSsujm0zOV0
    huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt
    huggingface.co/guoyww/animatediff/blob/main/v3_sd15_adapter.ckpt
    huggingface.co/guoyww/animatediff/tree/main
    huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/blob/main/control_v11p_sd15_openpose_fp16.safetensors

    • @joshmcinnesart 9 months ago

      we need help finding one of the models

    • @fadysaber-b8p 9 months ago

      @OlivioSarikas you mentioned DWPose in the title of the video, but you linked the OpenPose model!

  • @stonythewoke9921 9 months ago

    Wow this one really is a gem! Thanks man, keep up the amazing videos!

  • @designapp5308 9 months ago +9

    What is the download link for anidiff_controlnet_checkpoint?

    • @XadoXxx 9 months ago

      looking for the same thing

  • @NuKeBoX-NKBX 9 months ago

    Amazing tutorial once again! Thanks Olivio 🐔😘

  • @Antoh11 9 months ago

    This is brilliant, off to try it! Many thanks to you!

  • @kreempa.i.8349 2 months ago

    FOR ANYONE GETTING AN *ERROR* AT 2:45: *move your ComfyUI_windows_portable folder to the root of your "C" drive.* The file path must be " C:\ComfyUI_windows_portable "
    In my case I have a storage drive labeled "F", so my file path is " F:\ComfyUI_windows_portable "
    Windows does not like long file paths, so you need to place the ComfyUI folder at the root of your drive
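As a rough illustration of the advice above (a sketch, not from the video; the folder names below are examples), you can measure how much of Windows' classic 260-character path budget an install location leaves for ComfyUI's deeply nested model caches:

```python
# Sketch: classic Win32 APIs reject paths longer than 260 characters unless
# long-path support is enabled, and ComfyUI's model caches nest very deeply.
MAX_PATH = 260

def path_headroom(install_root: str, deepest_relative: str) -> int:
    """Characters left before the full path exceeds the classic MAX_PATH."""
    full = install_root.rstrip("\\") + "\\" + deepest_relative
    return MAX_PATH - len(full)

# Cache layout mirrored from the DWPose error elsewhere in this thread;
# the final file name is a placeholder.
deep = (r"ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\hr16"
        r"\DWPose-TorchScript-BatchSize5\cache"
        r"\models--hr16--DWPose-TorchScript-BatchSize5\snapshots"
        r"\359d662a9b33b73f6d0f21732baf8845f17bb4be\model.pt")

print(path_headroom(r"C:\ComfyUI_windows_portable", deep))  # positive: fits
print(path_headroom(  # negative: over the limit
    r"C:\Users\admin\Downloads\some_very_long_extracted_folder_name\ComfyUI_windows_portable",
    deep))
```

Moving the install to the drive root maximizes the headroom; enabling Windows long-path support is the other way out.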

  • @velvetjones8634 9 months ago +13

    Matteo's workflow from openart shows the uppermost advanced controlnet model as "ad/motion.ckpt" (the one you changed to "anidiff_controlnet_checkpoint.ckpt").
    Unfortunately, I can't find it anywhere.

    • @drewbuckleymusic 9 months ago +1

      I can't find the ad/motion.ckpt either so not working for me sadly

    • @elowine 9 months ago +5

      In the OpenArt comments they say it's this one from Hugging Face: "crishhh/animatediff_controlnet"

    • @olao6737 9 months ago +2

      Found it in the openart workflow description. It does point to the hugging face that @elowine mentioned

    • @hongtian 4 months ago

      @@olao6737 @elowine I did download this file, but where should I put it - in webui\models\ControlNet?

  • @ist1jano 9 months ago

    very promising! thanks for the video

  • @digitalblizz9102 9 months ago +2

    Can't find the anidiff checkpoint on the link you provided.

  • @Ton_DayTrader a month ago

    Hi, why is the video just 2 sec? How do I get the real duration from the source? For example, the source is 15 sec, so why is the output just 2 sec?

  • @ApexArtistX 5 months ago

    YouTubers are so greedy they sell workflows on Patreon... but you give yours away for free... you deserve more followers

  • @amortalbeing 9 months ago

    good job and thanks for the update

  •  4 months ago

    Hi Olivio, thanks for the video.
    If I want to change the frame count from 32 to, let's say, 48 or 64, should I change the "context overlap" to 3 or 4, etc.?

  • @typho0n5 6 months ago

    Hello, I must commend the remarkable steadiness and effectiveness of your process flow. Yet, figuring out how to set the length of the resulting videos escapes me. Would you be able to guide me through that?

  • @moviecartoonworld4459 9 months ago

    Thank you always!!

  • @hpsbath 3 months ago

    How to fix this issue????
    Error occurred when executing VAEDecode:
    RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead

  • @孟可-g8b 9 months ago

    Could you please tell me which adjustment can make the video longer?

  • @AsherAndres 2 months ago

    Can you please make a video on how to set up all the nodes and such from scratch?
    Been wanting to start my own anime; it's finally time, thanks to you and this video ❤

    • @AsherAndres 2 months ago

      Also, what if you didn't want to use a prompt to make whatever character?
      I have my own designs already made - is there a way to connect the dance video with a pre-made template of the anime character you want to use?
      If you could help figure out how to do this, a lot of people will thank you, including myself ❤❤❤

  • @Xavi-Tenis 6 months ago

    hi, amazing tutorial, but a question: the node you use for the batch and image size, "EmptyLatentImage", I can't find it - is there perhaps a substitute? I tried to search, but EmptyLatentImage now doesn't have a batch input.
    any idea? thanks in advance

  • @NotThatOlivia 9 months ago +2

    what about Turbo+LCM models? do they help with frame render speed, or are they unusable here?

  • @nexusyang4832 9 months ago

    Those fingers look insane... 😅😮

  • @kleber1983 9 months ago +6

    as I was scrolling down this video I realized that everyone is waiting for you to provide the ad/motion.ckpt thingy. If you don't comply we will start to burn cars on the streets... lol

    • @aivideos322 9 months ago +2

      it's just openpose renamed.

    • @cedtala 9 months ago

      really??? @@aivideos322

    • @PeteJohnson1471 9 months ago

      @@aivideos322 yep, that is the conclusion I came to as well. And it's all working. Likewise, I can't actually find said ckpt, but I used the safetensors that's also used near the bottom of the workflow.

    • @kleber1983 9 months ago

      @@aivideos322 so he used it twice? That's it? I dunno man...

    • @velvetjones8634 9 months ago +1

      It’s not openpose.

  • @drewbuckleymusic 9 months ago +2

    Great video as always. I was keen to try this, but I can't find ad/motion.ckpt, and I had issues downloading the YouTube short for some reason. I can try another video if I can find the motion.ckpt. Cheers.

  • @PeteJohnson1471 9 months ago +5

    Seemingly you have left quite a lot of people frustrated, as it’s not obvious where to get the checkpoint from.
    I’ve used the safetensors that’s also used near the bottom of the workflow

  • @prestonrussell1452 5 months ago

    How is it looping smoothly? Is it just the video that does it, or what's going on? Mine all animate great, but jump at the end.

  • @Elwaves2925 9 months ago +6

    Two things. The first one others are asking for as well - where do we get your renamed anidiff_controlnet_checkpoint.ckpt from? There doesn't appear to be a file link, even with your 'anidiff' part removed.
    The second thing is I have the v3_sd15_mm.ckpt used in the AnimateDiff Loader node... but where do I store it so it detects it?

    • @Elwaves2925 9 months ago

      Got the second part sorted, just the first one now.

    • @PeteJohnson1471 9 months ago +2

      @@Elwaves2925 I’m not saying it’s 100% the correct answer, but I’ve used the FP16 safetensors like what’s used at the bottom of the workflow!
      If I find the actual file that’s been renamed I’ll update. But try it with that. It worked for me.

    • @Elwaves2925 9 months ago +1

      @@PeteJohnson1471 Cheers, I'll give that a go. I tried with the control open pose file that's used elsewhere in the workflow (it loaded it, not me) but it messed up the second animation.
      Next time I go on I'm going to try some other motion models as well.

    • @PeteJohnson1471 9 months ago +1

      @@Elwaves2925 did for me too, so I reduced the denoise strength in the 2nd ksampler to about 35. and things don't go too far out of whack ;-)

    • @Prabhakaranraj 9 months ago

      where to place the ckpt file ?? can u help

  • @kenrock2 8 months ago

    I tried the first animation method and followed all the steps, but I'm not sure why it doesn't really follow the controlnet I have given it.

  • @nitinburli7814 8 months ago

    Hi, so one question. I'm unable to stack multiple controlnets using these fp16 models, any reason why?

  • @aserrzasf 8 months ago +1

    Prompt outputs failed validation
    VHS_LoadVideoPath:
    - Custom validation failed for node: video - Invalid file path: C:\Users\MonWeb\Downloads\videoplayback.webm
    ??????????????????????

    • @shshsh-zy5qq 7 months ago

      this keeps happening to me, too

  • @estebanmoraga3126 4 months ago

    Great tutorial, thanks for this! Question tho: Is there a way to feed it an image to be animated like the sourced video? Like, say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing and have Comfy replace that person with the character? Or does AnimateDiff work through prompts only at the moment?

  • @sansdomicileconnu 9 months ago +2

    could you do a tutorial on lip sync?

  • @xikura 9 months ago

    Hmm, I'm trying to understand. I have a lora that I'd like to add to this process, but it doesn't seem to get picked up properly. Where is the best place to add it?
    It's sort of a person lora, making the same person every time.

  • @shshsh-zy5qq 7 months ago

    hey Olivio can you let us know where to place those files from the link you shared? thank you so much!

  • @534A53 7 months ago

    If I want to use a LoRA I trained on a specific person, is it possible to use it here? If so, where do I put the LoRA loader (i.e. which nodes do I connect it to)?

  • @Wurmhouse 7 months ago

    I apologize for the basic question, but I've only recently started using Comfy. If I haven't downloaded the checkpoints and LoRa required for this workflow, why am I still able to use them? I mean, I see them within the nodes, but I've never downloaded them.

  • @amkire65 9 months ago

    I seem to be having a problem when it reaches the first ksampler ('NoneType' object has no attribute 'shape') and haven't found a way to fix it, any ideas would be appreciated, thanks.

  • @raku_kun7 9 months ago

    i have a quick question can we export the pose data? i am currently working on an idea to animate 3d models..

  • @gjohgj 9 months ago +10

    Where can I download the Anidiff controlnet checkpoint?

    • @FrostyPixelsOG 9 months ago

      Looks like in the video description

  • @KeredaSmile 9 months ago +2

    Yeah, after 4 hours I had found almost all the files, because Olivio changed the names... so... could you put your own links and describe the names of the destination folders? pls :)

    • @drewbuckleymusic 9 months ago

      seems it's only the renamed anidiff_controlnet_checkpoint.ckpt people can't find. myself as well.

  • @ingabuga 9 months ago

    Amazing! But how do I control the video length?

  • @sadshed4585 9 months ago

    I have a hard time replicating people's workflows on YouTube; it feels like they never go through the package installation or model checkpoint placement

    • @sadshed4585 9 months ago

      upgrading from Python 3.9 -> 3.10.11 fixed most things; well, packages were still messed up, but the "try fix" button in Manager fixed it.

  • @victorvaltchev42 9 months ago +11

    Great content, man! What exactly is the controlnet model at 8:10? The fourth huggingface link points to an openpose controlnet. Is that it?

    • @juredujmovic 9 months ago +4

      Yeah, wasn't able to find it on any of his links. Great work on the video as usual, Olivio!!

    • @gjohgj 9 months ago +2

      Was wondering as well:)

    • @bomar920 9 months ago +3

      stuck on this one as well

    • @Elwaves2925 9 months ago +2

      It seems many are having the same issue. Myself included.

    • @netandif 9 months ago

      I believe it is the file "v3_sd15_mm.ckpt" again, it needs to go into models/controlnet folder.

  • @hpsbath 3 months ago

    RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[3, 4, 64, 64] to have 16 channels, but got 4 channels instead (please help, anyone????)

  • @timmyG 6 months ago

    Has anyone had any luck with this for realistic outputs, rather than anime?

  • @Otchengazoom 9 months ago

    Looks really great. What is the max available length of generated animation?

  • @goliat2606 9 months ago

    Guys, why do I always get the same face? I change the prompt (hair, eyes, age etc.) and I always get the same face, though other details like hair do change... I tried DreamShaper XL, DreamShaper 8, RealisticVision and Juggernaut XL.

  • @WackyConundrum 9 months ago +1

    Stability in the video? Sure, if we're not looking at the hands...

  • @kkryptokayden4653 9 months ago

    Is the other controlnet model depth?

  • @PeterLunk 9 months ago

    Niiice Olivio !

  • @collectiveunconscious3d 9 months ago +2

    Cool. I don't think this will replace anyone; you can just work much faster and do more work. I see how this will be used in VFX with a layer mask on which you can generate, for example, a simple render of lava flow and add the details with SD. Most people who make AI "art" videos are stuck with the lack of possibility to control the narrative. But if you know 3D, compositing and animating, you can make low-poly basic animations and sims and later on refine them with SD. I just wish ComfyUI had some sort of if/else statement or case 1, 2, 3... which could be triggered through a control panel with buttons; that way you don't have to modify everything the whole time and can just create one big workflow with multiple setups.

  • @earthpond8043 9 months ago +1

    Would 16gb vram be capable of doing this? I’ve got a 4080

    • @agamenonmacondo 9 months ago

      yes

    • @earthpond8043 9 months ago

      @@agamenonmacondo tight guess I got some learning to do

    • @yngeneer 9 months ago

      i have a 4060 Ti 16GB and used Matteo's previous ballerina template + added a face detailer, and a 1.5 sec video takes about 7 min to create... so it is possible...

  • @Kvision25th 9 months ago

    the timestep keyframe node is not loading (advanced-controlnet is broken on my system) - any other node I can replace it with??

    • @OlivioSarikas 9 months ago

      did you click on "update all" in comfyui manager?

    • @jasonstetsonofficial 9 months ago

      halllo anidiff_controlnet@@OlivioSarikas

  • @DronesClubMember13 9 months ago

    Is there a site with motion models to use instead of trying to grab videos from dancers on youtube? I can't really download from youtube (I still haven't figured that out). A library of animations (like pose libraries on CivitAI) would be a plus.

    • @PeteJohnson1471 9 months ago

      download 4k video downloader+, it's free for 30 videos a day.

    • @58gpr 9 months ago +2

      yt-dlp or youtube-dl (both open source and free)
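For anyone new to the tools named above, a minimal sketch of driving yt-dlp from Python (assumptions: yt-dlp is installed and on PATH, and the URL and output name are placeholders you replace; check the site's terms before downloading):

```python
# Sketch: fetch a clip as an MP4 that ComfyUI's video-load nodes can read.
# Assumes the yt-dlp CLI is installed; the format/output flags are documented
# yt-dlp options, but tune them to your needs.
import subprocess

def download_clip(url: str, out_name: str = "dance_input") -> None:
    """Download `url` as MP4, saving it as e.g. dance_input.mp4."""
    subprocess.run(
        ["yt-dlp",
         "-f", "bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4",  # prefer MP4
         "-o", f"{out_name}.%(ext)s",                        # output template
         url],
        check=True,  # raise if the download fails
    )

# download_clip("<VIDEO_URL>")  # placeholder URL, left commented out
```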

    • @PeteJohnson1471 9 months ago

      Come to think of it, yt-dlp is available in Visions of Chaos. Which is awesome.

  • @TentationAI 9 months ago

    Not compatible with SDXL ?

  • @Skettalee 9 months ago

    I've been searching for the DWPose Estimator node and haven't found it yet - where can I get that one?

    • @Skettalee 9 months ago

      funny, i found it by right-clicking and Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. But I can't pull it up in a search of my nodes.

  • @ProfessorLightWAV 9 months ago

    YAS!!!!!

  • @you-share 9 months ago

    Holly Molly

  • @hcfgaming401 9 months ago

    In theory, wouldn't you be able to do this with Canny? If I can figure it out, I might finally get over the roadblock I've been at with AnimateDiff.

  • @michelchaman6495 9 months ago

    meanwhile Lumiere just dropped - this tech is moving so fast

  • @purelife_ai 9 months ago

    Why not use the load video upload node instead of the load video path?

    • @JoshTheFlyGuy 9 months ago +1

      He didn't do too much testing with it, as he wanted to put the video out asap. Though this does work as well, and you can convert the frame limit to an input to keep the Number of Frames node in place

    • @purelife_ai 9 months ago

      ​@@JoshTheFlyGuy👍

  • @RamonGuthrie 9 months ago

    Can you do a video on RAVE?

  • @DealingWithAB 9 months ago

    He used another method of doing this but with the ballerina. Is this the new way to go about it?

  • @NilsDum 9 months ago

    Great tutorial as always.
    Just one question: Where do you put the 'v3_sd15_mm' checkpoint? Can't seem to find the right folder for it. Can't select it in AnimateDiff Loader (only undefined).

    • @OlivioSarikas 9 months ago +3

      That goes into custom_nodes\ComfyUI-AnimateDiff-Evolved\models - sorry, i should have pointed that out in the video
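For readers collecting the files from the download links at the top of the thread, a small sketch of the destination folders relative to the portable install root. Only the motion-module location is confirmed in this thread; the adapter and controlnet folders are the usual ComfyUI defaults, so treat those two as assumptions:

```python
# Destination map (relative to ComfyUI_windows_portable). The v3_sd15_mm.ckpt
# path is confirmed by Olivio above; the other two are standard ComfyUI
# defaults (assumption), so verify them against your own install.
DESTINATIONS = {
    "v3_sd15_mm.ckpt":
        r"ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\models",
    "v3_sd15_adapter.ckpt":
        r"ComfyUI\models\loras",
    "control_v11p_sd15_openpose_fp16.safetensors":
        r"ComfyUI\models\controlnet",
}

for name, folder in DESTINATIONS.items():
    print(f"{name} -> {folder}")
```

After moving the files, restart ComfyUI (or refresh the node lists) so the loaders can see them.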

    • @NilsDum 9 months ago

      @@OlivioSarikas thanks, man. You rock!

  • @user-ty5fd9hr2q 9 months ago

    The hands though...

    • @stephantual 9 months ago +1

      Nothing stops you from passing each frame through meshgraphormer.

  • @bytesundbuechse 9 months ago

    Anyone run into the same error? How Can I fix it?
    Error occurred when executing DWPreprocessor:
    [WinError 3] The system cannot find the path specified: 'C:\\Users\\admin\\Downloads\\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16/DWPose-TorchScript-BatchSize5\\cache\\models--hr16--DWPose-TorchScript-BatchSize5\\snapshots\\359d662a9b33b73f6d0f21732baf8845f17bb4be'

    • @irinabondareva1 9 months ago

      Apparently, it's all about the particular ComfyUI build. I had the same problem, and updating ComfyUI via Manager did not help. Then I just re-installed ComfyUI using the Pinokio automatic installation tool, and everything worked :)

  • @carmelodistasio1454 9 months ago +4

    Can I please ask you for a similar tutorial for Automatic 1111 ?

    • @MementoVoxDei 9 months ago

      Not powerful enough to do stuff like this

  • @MewTube-o4l 9 months ago

    first mistake: it's not automatic1111; second mistake: not SDXL

  • @moshiry_1338 9 months ago

    YOU scuf

  • @frankiesomeone 9 months ago +6

    love the results but the complicated process is a big turnoff

    • @purelife_ai 9 months ago +4

      This is a simple, efficient workflow - I'm actually surprised by how simple and powerful it is

    • @sznikers 9 months ago

      You have to build it once, then you just edit prompts and click start.

    • @lurker668 6 months ago

      @@purelife_ai ppl look for a one-button fix and zero learning curve... only then can they say "look how skilled I am" 😂

    • @supermodal 6 months ago

      Alternatively you could learn animation =)

    • @poppi3362 a month ago

      You're so out of touch with how much work and knowledge usually goes into art lol.

  • @rod-me8ey 9 months ago +2

    Can you PLEASE stop giving this ridiculous SpaghettiUI so much attention. It's a complete waste of time and it offers NOTHING useful over A1111 or any other variant, for that matter.

    • @cedtala 9 months ago +2

      i guess you don't see the potential of ComfyUI... and all the things you cannot do in automatic1111... you can put LoRAs before or after the prompt, or the image... you can start with one model and finish with another... you can apply LoRAs or controlnets to specific parts, etc. I was "against" ComfyUI at the start, as I was used to automatic1111... but I have to admit there are things you can do better in ComfyUI... I still prefer to inpaint in automatic1111 though...

    • @purelife_ai 9 months ago +1

      For animation ComfyUI is a must... especially if you want to do things like steerable motion, SVD or lengthy videos. Sometimes you want to do the video rendering step by step to save on VRAM. For not-so-complex images and inpainting, A1111 is great though.

    • @rod-me8ey 9 months ago +1

      @@cedtala Literally nothing you just mentioned is useful or necessary, just like ComfyUI.

    • @josiahbirthright24 9 months ago +1

      Somebody had to say it. Been working with ComfyUI for about a year now (off and on). It is literally the most headache-inducing user interface I have ever had the displeasure to experience.

  • @victorgarciasilva8183 9 months ago

    AttributeError: module 'comfy.ops' has no attribute 'Linear'

    • @yamb0x 9 months ago

      Getting the same error with AnimateDiff - any idea what it's all about?