How To Use AnimateDiff for Video To Video in ComfyUI

  • Published: 13 May 2024
  • Want to use AnimateDiff for changing a video? Video Restyler is a ComfyUI workflow for applying a new style to videos - or for just making them out of this world! Simply select an input video, pick a style or face image, and generate :) AnimateDiff Vid to Vid fun.
    Grab your AnimateDiff Video to Video workflow for FREE now!
    Workflows - github.com/nerdyrodent/AVeryC...
    Beginner? Start here! - • How to Install ComfyUI...
    ComfyUI Zero to Hero - • ComfyUI Tutorials and ...
    == More Stable Diffusion Stuff! ==
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * How do I create an animated SD avatar? - • Create your own animat...

Comments • 76

  • @NerdyRodent
    @NerdyRodent  6 months ago +9

    How much fun is styling videos? 🎉😊

    • @tartwinkler1711
      @tartwinkler1711 6 months ago +5

      Styling videos is more fun than walking naked in a strange place, but not much.

    • @LouisGedo
      @LouisGedo 6 months ago +1

      👋

  • @kacperskyy5652
    @kacperskyy5652 6 months ago +17

    I just wanted to say that You are an absolute genius with these workflows, that AND the fact that You're sharing them for free is just amazing. YOU ARE A LEGEND!!!

  • @Andro-Meta
    @Andro-Meta 6 months ago +4

    I've gone from barely understanding how to run ComfyUI to modifying and creating my own workflows, and even creating my own custom nodes. I am so grateful that you're so thorough with your guides and offer such great workflows! Thank you so much!

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Great to hear! It's fun once you get used to it :)

  • @deastman2
    @deastman2 6 months ago +1

    This is just the help I needed to get started processing my video. Thanks!

  • @autonomousreviews2521
    @autonomousreviews2521 6 months ago +1

    You get smoother and smoother - Great share :)

  • @banzai316
    @banzai316 6 months ago +1

    Thanks for the workflow! 👏

  • @ChameleonAI
    @ChameleonAI 6 months ago +1

    Wow, I'm impressed with the temporal consistency displayed here. Thanks and well done.

  • @aa-xn5hc
    @aa-xn5hc 6 months ago +1

    Thank you, brilliant!!

  • @ShoreAllan
    @ShoreAllan 6 months ago +1

    Hello Nerdy,
    many greetings from Berlin, Germany. Thank you very much for your great work, which has helped me a lot with the realisation of my ideas. Do you see a possibility to create two characters, for example in the "Reposer"? You would then have one pose, but with two people who are then replaced?

  • @michalgonda7301
    @michalgonda7301 6 months ago

    Thank you for what you are doing ;) ... it's great, keep it up :) ... I wonder though, what is the name of the workflow that removes the background? I would love to try that but can't find it in the workflows :/

  • @ImAlecPonce
    @ImAlecPonce 6 months ago +1

    I love ReActor… but it just doesn't work on my new 4060 machine… works great on the old 2060 though.
    Love your vids

  • @stan-zm3ep
    @stan-zm3ep 6 months ago

    Dear Nerdy Rodent, are there any free tools similar to DeepMotion? Besides the faceswap, it would be good to swap the entire 3D character... please advise if there are any.

  • @Hooooodad
    @Hooooodad 6 months ago +1

    Amazing

  • @T3MEDIACOM
    @T3MEDIACOM 6 months ago

    How can I just do images with this? I would like the faceswap only for SDXL... just curious.

  • @tron77777
    @tron77777 6 months ago

    Is there a reason you're using random seeds and not fixed ones? In other AnimateDiff projects I see fixed seeds.

  • @TBjunk25
    @TBjunk25 6 months ago

    Could you speed up videos to make rendering faster?

  • @VeraSilaLofi
    @VeraSilaLofi 6 months ago

    Here once again for the cutting edge.

  • @tripitakai
    @tripitakai 1 month ago

    Hi, I'm getting an error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). Could you tell me how to fix it please?
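
    (Editor's note: that error usually means the file being loaded is not valid JSON - for example, an HTML error page was saved instead of the raw workflow file. A quick way to check, sketched in Python; `check_workflow` and the file path are illustrative helpers, not part of any ComfyUI API:)

    ```python
    import json

    def check_workflow(path: str) -> str:
        """Return 'ok' if the file parses as JSON, otherwise the parse error.

        A corrupted download (e.g. an HTML page saved as .json) produces
        exactly this kind of load failure when the workflow is imported."""
        try:
            with open(path, "r", encoding="utf-8") as f:
                json.load(f)
            return "ok"
        except json.JSONDecodeError as e:
            return f"invalid JSON: {e}"
    ```

    If the check reports invalid JSON, re-download the workflow as a raw JSON file rather than saving the rendered page.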

  • @JustGimmesomeShelter
    @JustGimmesomeShelter 3 months ago

    Hey, great tutorial, one question: I'm missing the Load IPAdapter node, and it's not in the missing nodes list. I have IPAdapter Plus installed. Thanks.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      You can drop me a dm on patreon for support!

  • @aivideos322
    @aivideos322 6 months ago +5

    Upscaling with AnimateDiff uses much too much memory IMO. It's great for making an initial video, but upscaling with it... ya, good luck. If you use tile/temporaldiff/lineart control models you can separate the frames and upscale each one individually with almost no change in consistency, and it allows unlimited upscale size and full 1.0 denoise, and it renders 3x faster because you are not doing the frames all together. I use the Impact Pack for the "Batch to List" node, which lets you separate batches for individual processing.
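
    (Editor's note: the per-frame idea above can be sketched as follows - a minimal sketch assuming NumPy arrays of frames, with a nearest-neighbour `upscale_frame` standing in for the real diffusion/control-model upscale pass; none of these function names are actual ComfyUI nodes:)

    ```python
    import numpy as np

    def upscale_frame(frame: np.ndarray, factor: int = 2) -> np.ndarray:
        """Nearest-neighbour upscale of a single H x W x C frame.
        Stand-in for the per-frame diffusion upscale pass."""
        return frame.repeat(factor, axis=0).repeat(factor, axis=1)

    def batch_to_list(batch: np.ndarray) -> list:
        """Split an N x H x W x C batch into N individual frames,
        mirroring the Impact Pack "Batch to List" idea."""
        return [batch[i] for i in range(batch.shape[0])]

    def upscale_per_frame(batch: np.ndarray, factor: int = 2) -> np.ndarray:
        # Peak memory now scales with one frame, not the whole clip.
        frames = [upscale_frame(f, factor) for f in batch_to_list(batch)]
        return np.stack(frames)

    clip = np.zeros((16, 64, 64, 3), dtype=np.uint8)  # 16 frames, 64x64 RGB
    out = upscale_per_frame(clip)
    print(out.shape)  # (16, 128, 128, 3)
    ```

    Whether consistency survives in practice depends on the control models pinning each frame, as the comment notes; the sketch only shows the memory/shape mechanics of processing frames one at a time.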

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      I’ve not tried upscaling via AnimateDiff as yet, but just using a plain upscaling model would probably be fine on the base output too

  • @user-ph1ir8mb7w
    @user-ph1ir8mb7w 5 months ago

    Hey Nerdy, nice work... just a question: is it all limited to just 3 seconds?

    • @NerdyRodent
      @NerdyRodent  4 months ago

      Nope, you can do much longer videos!

  • @throttlekitty1
    @throttlekitty1 5 months ago

    What are you using to show the labels for the custom node origins?

    • @throttlekitty1
      @throttlekitty1 5 months ago

      Turns out it's a feature in the Manager, but I had to do a hard git reset despite having pulled the latest commit.

  • @Smashachu
    @Smashachu 6 months ago +1

    Any idea when/if there's going to be a TensorRT for XL? I'm enjoying the doubled generation speed, but I feel like it would be most useful on longer-to-generate images, idk, XL 1024x1024 images that just pound my poor 3080 into a puddle of its own excrement and tears. The tears are mine.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Maybe a few months? *rubs crystal ball*

    • @Smashachu
      @Smashachu 6 months ago

      @@NerdyRodent We can't rub our balls in public like that. I learned that the hard way.

  • @galaxyvulture6649
    @galaxyvulture6649 6 months ago

    Is there a way to use stable diffusion without using my gpu? It just takes too long to generate, but I like the workspaces.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Yup, a Huggingface Space won't use your GPU :)

  • @spenzakwsx4430
    @spenzakwsx4430 6 months ago

    Great video, but where can I find the "Video Restyler" workflow? I have checked on your website, but nothing.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Currently the next-to-last one in the list, as I added the SDXL Reposer after this

  • @IntiArtDesigns
    @IntiArtDesigns 6 months ago +1

    I've installed IP Adapter and run 'Install Missing Custom Nodes', but I still seem to be missing some requirements for your workflow:
    PrepImageForClipVision
    IPAdapterModelLoader
    IPAdapterApply
    Where can I get these and how do I install them? Thanks.

    • @vtchiew5937
      @vtchiew5937 6 months ago +2

      I had the same problem; using ComfyUI Manager and performing an "update all" did the trick for me

    • @IntiArtDesigns
      @IntiArtDesigns 6 months ago

      Well, that added the ones that were missing, but now new ones are missing that weren't before. wtf?
      CheckpointLoaderSimpleWithNoiseSelect
      ADE_AnimateDiffUniformContextOptions
      ADE_AnimateDiffLoRALoader
      ADE_AnimateDiffLoaderWithContext
      I don't understand. @@vtchiew5937

    • @RonnieMirands
      @RonnieMirands 6 months ago

      I am missing some nodes and can't find a solution

  • @MexicanWawix
    @MexicanWawix 6 months ago

    So just to be clear, is 12GB of VRAM enough to run this workflow, or is 18GB needed?

    • @user-dq8le8um5b
      @user-dq8le8um5b 5 months ago

      It's good to have more than 12GB of VRAM.

  • @bwheldale
    @bwheldale 6 months ago

    Although I have 'ReActor Node 0.1.0 for ComfyUI' installed, I'm still getting a 'ReActorFaceSwap node' missing error! It works without ReActor, but how do I fix this error? I NEED to try all those nodes!

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Did you restart after the node install?

    • @bwheldale
      @bwheldale 6 months ago

      @@NerdyRodent Yes, I did, but it was the prebuilt Insightface package that was missing; installing it solved the problem. I'm not sure why having Visual Studio 2022 didn't suffice in my case. PS: My previous reply was deleted; I guess the link to the 'troubleshooting' section for the 'comfy-reactor-node' is the reason. PPS: I love the workflow content. I've been fiddling with it all for the last few days, and what I'm learning now is how to fix faces afterwards so it can see them.

  • @ehsankholghi
    @ehsankholghi 3 months ago

    Your GPU?

  • @ooiirraa
    @ooiirraa 5 months ago

    I have tried a lot of workflows, but the video always changes drastically every 2 seconds (every 16 frames). Why might that be?

    • @NerdyRodent
      @NerdyRodent  5 months ago +2

      This one doesn’t do that, have you tried it? 😀

  • @pragmaticcrystal
    @pragmaticcrystal 6 months ago +1

    💛

  • @ltcshow6175
    @ltcshow6175 4 months ago

    Thanks - it took me a while to figure out where to get your workflow on your git, lol, but once I did... well, it is almost 2am and the wife went to bed hours ago (I usually join her, so ya). I have an issue though, and it is driving me crazy, because:
    A) If it can work the way that I think it can, then damn, I've got the best process to make an animated video.
    B) Same as A), it must be possible, because I've had 16 frames of pure awesome. Then I ran the whole video, and wow, still awesome, but those first 16 frames were completely different. I figured it was a different seed, so I went back and re-ran with the seed I thought it was: nope. I did this twice on different seeds, so I know I was using the right seed on one of them. Then I changed the frame cap back to 16 and bam, the same 16 frames of pure awesome. But if I change the frame cap, I get a different generation.
    C) Is there a solution to this, and if so, how can I implement it in the workflow? If you don't know but think you have enough knowledge for a workaround, that would be amazing, because I feel like I'm on the edge of making something kick-ass. Could also do a screen-sharing session just to show you what I'm getting.

  • @d1agram4
    @d1agram4 6 months ago

    I wish ComfyUI had a way to swap the spline in/outputs for straight lines/angles, so I could see where stuff is plugging in more easily.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      You can… just change your settings. However, during an in-depth and incredibly scientific study I did, 75% of people considered Spline to be superior to the other 3 options…

  • @twilightfilms9436
    @twilightfilms9436 6 months ago

    Is it possible to do the same with A1111?

    • @NerdyRodent
      @NerdyRodent  6 months ago

      More than likely! Just do each step manually along the way

  • @luclaura1308
    @luclaura1308 6 months ago

    Can we add LORAs to this workflow?

  • @pablocastillopalomino3536
    @pablocastillopalomino3536 6 months ago

    I don't understand why it says this is made in Stable Diffusion when the software I see is different. Can someone explain?

    • @ltcshow6175
      @ltcshow6175 4 months ago

      You can use Stable Diffusion in any number of software front-ends, and SD = Stable Diffusion.

  • @NorsemanAIArt
    @NorsemanAIArt 6 months ago +1

    I wish I could get over the ComfyUI barrier......I am stuck in a1111 :/// LOVE your videos though 😍😍

    • @NerdyRodent
      @NerdyRodent  6 months ago +5

      I thought the same, now I’m addicted and A1111 feels clunky 😆

    • @velvetjones8634
      @velvetjones8634 6 months ago

      I was loving Comfy until I bashed my head against a wall every day for a week trying to get Reactor to work.
      I’ve since gone back to A1111.

  • @Democratese
    @Democratese 6 months ago

    Has anyone tested this workflow in Google colab?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      I haven’t, but don’t see why it wouldn’t work 😀

    • @Democratese
      @Democratese 6 months ago

      @@NerdyRodent I've had some trouble with dependencies in colab. Will give it a try though.

  • @studioGZ
    @studioGZ 6 months ago +1

    😂❤🎉 WOW;

  • @dkontey6421
    @dkontey6421 5 months ago

    This is not working: the Load IPAdapter and CLIP Vision nodes are erroring, so the Video Combine is not working!

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      You can work through these steps to fix your ComfyUI setup - github.com/nerdyrodent/AVeryComfyNerd#troubleshooting

  • @keisaboru1155
    @keisaboru1155 6 months ago

    I did a video like this recently on A1111, and it was fine, without awful flicker and stuff. But idk, it seems like no one wants a simple solution.

  • @Iancreed8592
    @Iancreed8592 6 months ago +3

    We are just a skip and a hop away from Hollywood becoming irrelevant. Finally we'll get decent shows and movies without political bs.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Home videos are making a comeback 😉

  • @WanderlustWithT
    @WanderlustWithT 6 months ago

    Still looks god-awful, but let's allow this technology to improve; it's going to be amazing someday.

  • @reggaemarley4617
    @reggaemarley4617 4 months ago

    Would 8 gigs of VRAM be okay? 🥹