How To Use AnimateDiff for Video To Video in ComfyUI
- Published: 13 May 2024
- Want to use AnimateDiff for changing a video? Video Restyler is a ComfyUI workflow for applying a new style to videos - or to just make them out of this world! Simply select an input video, pick a style of face image and generate :) AnimateDiff Vid to Vid fun.
Grab your AnimateDiff Video to Video workflow for FREE now!
Workflows - github.com/nerdyrodent/AVeryC...
Beginner? Start here! - • How to Install ComfyUI...
ComfyUI Zero to Hero - • ComfyUI Tutorials and ...
== More Stable Diffusion Stuff! ==
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
* How do I create an animated SD avatar? - • Create your own animat...
How much fun is styling videos? 🎉😊
Styling videos is more fun than walking naked in a strange place, but not much.
👋
I just wanted to say that You are an absolute genius with these workflows, that AND the fact that You're sharing them for free is just amazing. YOU ARE A LEGEND!!!
Glad you like them!
I've gone from barely understanding how to run ComfyUI to modifying, creating my own work flows, and creating my own custom nodes and I am so grateful that you're so thorough with your guides and offer such great workflows! Thank you so much!
Great to hear! It's fun once you get used to it :)
This is just the help I needed to get started processing my video. Thanks!
You get smoother and smoother - Great share :)
Thanks for the workflow! 👏
My pleasure!
Wow, I'm impressed with the temporal consistency displayed here. Thanks and well done.
Thank you, brilliant!!
Hello Nerdy,
many greetings from Berlin, Germany. Thank you very much for your great work, which helped me a lot with the realisation of my ideas. Do you see a possibility to create two characters, for example in the "Reposer"? You would then have one pose, but with two people who are then replaced?
Thank you for what you are doing ;) ... it's great, keep it up :) ... I wonder though, what is the name of the workflow where the background is removed? I would love to try that but can't find it in the workflows :/
I love ReActor… but it just doesn't work on my new 4060 computer… works great on the old 2060 though.
Love your vids
dear Nerdy Rodent, are there any free tools similar to DeepMotion? Besides the faceswap, it would be good to swap the entire 3D character... please advise if there are any
Amazing
How can I just do images with this? I would like the faceswap only for SDXL... just curious.
Is there a reason you're using random seeds and not fixed ones? In other AnimateDiff projects I see fixed seeds.
Could you speed up videos to make rendering faster?
Here once again for the cutting edge.
Hi, I'm getting an error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). Could you tell me how to fix it, please?
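For anyone hitting the same error: that message usually means the file being loaded isn't pure JSON – there's a stray character, an HTML error page, or extra text after the JSON value, so the workflow file is likely corrupted or was saved/copied incompletely. The exact wording comes from the browser's JavaScript parser, but Python's json module fails the same way on the same input; a minimal illustration:

```python
import json

# A valid workflow file is a single JSON value and loads cleanly
good = '{"nodes": []}'
print(json.loads(good))

# Any non-whitespace content after the JSON value breaks the parse,
# which is what "Unexpected non-whitespace character after JSON" reports
bad = '{"a": 1} extra'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print("parse failed:", e.msg)
```

If your workflow .json triggers this, re-downloading the file (rather than copy-pasting it from a web page) usually fixes it.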
hey, great tutorial, one question: I'm missing the Load IPAdapter node, and it's not in the missing nodes list. I have IPAdapter Plus installed. thanks
You can drop me a dm on patreon for support!
Upscaling with AnimateDiff uses much too much memory IMO. It's great for making an initial video, but upscaling with it... yeah, good luck. If you use tile/TemporalDiff/lineart control models, you can separate the frames and upscale each one individually with almost no change in consistency, and it allows unlimited upscale size and full 1.0 denoise, and it renders 3x faster because you are not processing all the frames together. I use the Impact Pack for the "Batch to List" node, which lets you separate batches for individual processing.
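The batch-to-list idea above can be sketched in plain Python. Here `upscale` is a stand-in for whatever upscaler you actually use (it just does a nearest-neighbour blow-up of a frame stored as a 2D list); the point is that frames are processed one at a time rather than as one batch held in memory:

```python
def upscale(frame, factor=2):
    """Stand-in upscaler: nearest-neighbour on a frame stored as a 2D list."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in range(factor)]  # repeat each pixel
        out.extend([wide] * factor)                     # repeat each row
    return out

def upscale_batch_as_list(frames, factor=2):
    """Batch -> list -> per-frame processing, in the spirit of
    Impact Pack's "Batch to List" node (illustrative, not the real node)."""
    return [upscale(f, factor) for f in frames]  # one frame at a time

frames = [[[1, 2], [3, 4]]] * 16              # sixteen tiny 2x2 "frames"
big = upscale_batch_as_list(frames)
print(len(big), len(big[0]), len(big[0][0]))  # 16 frames, each now 4x4
```

Because consistency was already baked in by the control models, each frame can be upscaled independently without the motion module seeing the whole batch.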
I’ve not tried upscaling via AnimateDiff as yet, but just using a plain upscaling model would probably be fine on the base output too
hey nerdy, nice work... just a question: is it limited to just 3 seconds?
Nope, you can do much longer videos!
What are you using to show the labels for the custom node origins?
Turns out it's a feature in the Manager, but I had to do a git reset --hard despite having pulled the latest commit.
Any idea when/if there's going to be TensorRT for XL? I'm enjoying the doubled generation speed, but I feel like it would be most useful on longer generations, like XL 1024x1024 images that just pound my poor 3080 into a puddle of its own excrement and tears. The tears are mine.
Maybe a few months? *rubs crystal ball*
@NerdyRodent We can't rub our balls in public like that. I learned that the hard way.
Is there a way to use stable diffusion without using my gpu? It just takes too long to generate, but I like the workspaces.
Yup, a Huggingface Space won't use your GPU :)
great video. but where can i find the "Video Restyler" workflow. i have checked on your website, but nothing
Currently it's the next-to-last one in the list, as I added the SDXL Reposer after this
I've installed IP adapter and run the 'install missing custom nodes', but i still seem to be missing some requirements for your workflow.
PrepImageForClipVision
IPAdapterModelLoader
IPAdapterApply
Where can i get these and how do i install them? Thanks.
I got the same problem, using ComfyUI manager and performing an "update all" does the trick for me
Well, that added the ones that were missing, but now new ones are missing that weren't before. wtf?
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
ADE_AnimateDiffLoRALoader
ADE_AnimateDiffLoaderWithContext
@vtchiew5937 I don't understand.
I am missing some nodes and cant find a solution
So just to be clear, is 12GB of VRAM enough to run this workflow, or is 18GB needed?
It’s good to have more than 12gb vram.
Although I have 'ReActor Node 0.1.0 for ComfyUI' installed, I'm still getting a 'ReActorFaceSwap node' missing error! It's working without ReActor, but how do I fix this error? I NEED to try all those nodes!
Did you restart after the node install?
@NerdyRodent Yes, I did, but it was the prebuilt Insightface package that was missing; installing it solved it. I'm not sure why having Visual Studio 2022 didn't suffice in my case. PS: My previous reply was deleted; I guess the link to the 'troubleshooting' section for the 'comfy-reactor-node' is the reason. PPS: I love the workflow content. I've been fiddling with it all for the last few days. What I'm working on learning now is how to fix faces afterwards so it can see them.
ur gpu?
I have tried a lot of workflows, but the video always changes drastically every 2 seconds (every 16 frames). Why might that be?
This one doesn’t do that, have you tried it? 😀
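For anyone wondering about the every-16-frames jump: AnimateDiff's motion module works on fixed-length context windows (16 frames by default), so with no overlap each window is generated almost independently and you get a visible cut at each boundary. Sliding context options such as the ADE_AnimateDiffUniformContextOptions node used here run overlapping windows instead, so adjacent chunks share frames and get blended. A rough sketch of how overlapping windows might cover a frame range (the window and overlap numbers are just illustrative defaults, not the node's actual internals):

```python
def context_windows(num_frames, context_length=16, overlap=4):
    """Yield overlapping frame-index windows, similar in spirit to
    AnimateDiff's sliding-context scheduling (illustrative only)."""
    stride = context_length - overlap
    windows = []
    start = 0
    while start < num_frames:
        end = min(start + context_length, num_frames)
        windows.append(list(range(start, end)))
        if end == num_frames:
            break
        start += stride
    return windows

wins = context_windows(40)
# Adjacent windows share 4 frames, so their results can be blended
# instead of cutting hard at each 16-frame boundary:
print([(w[0], w[-1]) for w in wins])  # [(0, 15), (12, 27), (24, 39)]
```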
💛
💙
Thanks! It took me a while to figure out where to get your workflow on your git, lol, but once I did... well, it's almost 2am, the wife went to bed hours ago, and I usually join her, so yeah. I have an issue though, and it is driving me crazy:
A) If this can work the way I think it can, then damn, I've got the best process to make an animated video.
B) Same as A), it must be possible, because I've had 16 frames of pure awesome. Then I ran the whole video and wow, still awesome, but those first 16 frames were completely different. I figured it was a different seed, so I went back and redid it with the seed I thought it was, and nope. I did this twice on different seeds, so I know I'm using the right seed on one of them. Then I changed the frame cap back to 16 and bam, the same 16 frames of pure awesome. But if I change the frame cap, I get a different generation.
C) Is there a solution to this, and if so, how can I implement it in the workflow? If you don't know but think you have enough knowledge for a workaround, that would be amazing, because I feel like I'm on the edge of making something kick-ass. Could also do a screen-sharing session just to show you what I'm getting.
I wish comfyui had a way to swap the spline in/outputs with straight/angles so I can see where stuff is plugging in easier.
You can… just change your settings. However, during an in-depth and incredibly scientific study I did, 75% of people considered Spline to be superior to the other 3 options…
Is it possible to do the same with A1111?
More than likely! Just do each step manually along the way
Can we add LORAs to this workflow?
Of course :)
I DON'T UNDERSTAND WHY IT SAYS THIS IS MADE WITH STABLE DIFFUSION WHEN THE SOFTWARE I SEE IS DIFFERENT. CAN SOMEONE EXPLAIN?
You can use Stable Diffusion through any software, which is why it says that; SD = Stable Diffusion.
I wish I could get over the ComfyUI barrier......I am stuck in a1111 :/// LOVE your videos though 😍😍
I thought the same, now I’m addicted and A1111 feels clunky 😆
I was loving Comfy until I bashed my head against a wall every day for a week trying to get Reactor to work.
I’ve since gone back to A1111.
Has anyone tested this workflow in Google colab?
I haven’t, but don’t see why it wouldn’t work 😀
@@NerdyRodent I've had some trouble with dependencies in colab. Will give it a try though.
😂❤🎉 WOW;
this is not working. LoadIPAdapter and CLIPVision are throwing errors, so Video Combine is not working!
You can work through these steps to fix your ComfyUI setup - github.com/nerdyrodent/AVeryComfyNerd#troubleshooting
I did a video like this recently on A1111 and it was fine, without awful flicker and stuff. But, I don't know, it seems like no one wants a simple solution.
We are just a skip and a hop away from Hollywood becoming irrelevant. Finally we'll get decent shows and movies without political bs.
Home videos are making a comeback 😉
Still looks god awful but let's allow this technology to improve, it's going to be amazing someday.
Would 8 gigs of VRAM be okay? 🥹