AnimateDiff Lightning - Local Install Guide - Stable Video
- Published: 7 Jun 2024
- AnimateDiff Lightning is super fast. You can create a video in seconds. The model works with ControlNet, video-to-video rendering, Motion LoRAs and more. Here is my local install guide for AnimateDiff Lightning in Automatic 1111 (A1111) and ComfyUI
#### Links from my Video ####
my Workflow on Patreon: / 100890477
Model Download: huggingface.co/ByteDance/Anim...
Test Free online: huggingface.co/spaces/ByteDan...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas
Thank you for always providing us with the latest techniques in an easy-to-understand manner. Have a happy day!
What is the minimum vram required?
8GB minimum
looks great I find Lightning models have a great upscale potential with denoise on a second lightning phase
A1111 - download, just pick the model and you're done,
Comfy - noodles upon noodles of noodles, confusion, horror, pain
I'm not a ComfyUI fan either, but you can actually do things quicker with ComfyUI by downloading other people's workflows. It's pretty much like working with extensions. Building your own workflow is what makes it hard, but that's not necessary to use it.
Seriously. Even with a workflow it's always a crapshoot
yes, but you can do a lot more things with ComfyUI. It's like comparing a train ride to driving your own car. Yes, a train ride will get you from A to B, and driving a car is something you need to learn first. But you can do a lot more with your own car than a train ever can do for you
ComfyUI is so superior it's not even funny
Thank you
Thank you, love your video. Do you have also have this workflow but based on a starting image?
Which video talks about the GMFSS Fortuna installation & its models?
all i get is "TypeError: 'NoneType' object is not iterable" ..and then normal images no longer work :(
For a slight (with a capital S) quality improvement on the upscale, I found the NNLatentUpscale node works better than the normal LatentUpscale. On a 3090 the time difference is not noticeable
YES!!!
Yeah 👏👏
thanks for all your amazing videos, can we use it with img2img to use it to upscale videos? and use it with tile controlnet?
you can use it for video to video. so that should work :)
will it be useful to preserve original video fidelity and increase coherence or not?@@OlivioSarikas
@@JoKeR-hl1np oh, i misunderstood your question. I don't think this is a good way to upscale actual video footage.
what is the difference between AnimateDiff Lightning and AnimateLCM ? .... 🤔
and between AnimateDiff first version (is it called AnimatedLCM?)
I'm getting this error after installing and trying to run the new lightning models. Trying to google it isn't turning up much that I could find. Do you have any idea how to fix this? I was able to use AnimateDiff before.
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
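Following the error message's own advice, one low-effort debugging step is to relaunch with synchronous kernel launches so the stack trace lands on the call that actually failed. A sketch, assuming the stock A1111 `launch.py` entry point:

```shell
# Make CUDA kernels launch synchronously so the Python stack trace
# points at the call that triggered the device-side assert.
export CUDA_LAUNCH_BLOCKING=1
# python launch.py   # relaunch A1111 with the variable set
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

This only changes how the error is reported, not the underlying failure, but it usually makes the offending operation visible in the trace.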
Hello friend, let me make you a question. Does the length of the video depend on the vram of the gpu?
No, the length of the generated video has no relation to VRAM. Generation time does depend on VRAM: if you don't have enough, it will offload into RAM or disk, which is horribly slow to generate from
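To make the scaling concrete, a back-of-the-envelope sketch: for an SD1.5-style 8x-downsampled latent, each extra frame adds a fixed chunk of memory, so frame count grows the footprint linearly. The helper below is illustrative only; actual UNet activation memory during sampling is far larger than the latent tensor itself:

```python
# Illustrative latent-size estimate for an AnimateDiff-style frame batch.
# Assumes an SD1.5 4-channel latent at 1/8 resolution, stored in fp16.

def latent_bytes(frames, height=512, width=512, channels=4, dtype_bytes=2):
    """Size in bytes of the latent tensor for a batch of video frames."""
    return frames * channels * (height // 8) * (width // 8) * dtype_bytes

print(latent_bytes(16))  # 16-frame batch -> 524288 bytes
print(latent_bytes(32))  # doubling the frames doubles the latent footprint
```

The point of the sketch is only that frames scale memory linearly; once the working set exceeds VRAM, the runtime falls back to system RAM or disk and generation slows dramatically.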
i am getting this error: EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16
I did everything exactly the same as you, even selected the same checkpoint model. I am on an RTX 4090. Also, I don't have these 2 files in the same folder, mm_sd_v1.4.ckpt and mm_sd_v1.5.ckpt, like you do. Is it because of these files that I am getting this error? If yes, where can I download them?
Thanks.
Me too getting the same error
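The pattern in that error is informative: rearranging "(b f) d c -> (b d) f c" with f=16 requires the tensor's first axis to be a multiple of 16, and the reported length of 2 (the cond/uncond pair) suggests the frames were never grouped into the batch, which can happen when the motion module isn't applied. A minimal sketch of the divisibility requirement in plain Python (the function name is mine, not from einops):

```python
# The einops pattern "(b f) d c -> (b d) f c" decomposes the first axis
# into batch * frames, so that axis must divide evenly by f.

def can_rearrange(first_axis_len, frames):
    """True if the leading axis can be split into chunks of `frames`."""
    return first_axis_len % frames == 0

print(can_rearrange(2, 16))   # False -> the reported shape mismatch
print(can_rearrange(32, 16))  # True  -> e.g. cond/uncond pair x 16 frames
```

So the error is a symptom of the batch not containing the expected 16 frames per prompt, rather than a problem with the tensor contents themselves.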
At 2:15, you show a number of files in your model folder, but you never explain the two checkpoint files that are in there.
Are they needed? If so, why aren't they mentioned? Are they mentioned in the comfy_ui portion of the video (which I likely won't watch until I start using comfy_ui)?
This isn't a criticism by any means, as I appreciate the time you spend making your videos, as well as the content itself. You're hands down one of the best AI creators.
No, you need the models from the list I show before that; those are specifically for AnimateDiff Lightning
I followed along but for some reason it is just creating an image and not a video.
I followed each step and even have animatediff enabled. There is no visible reason why this would be happening. Any ideas?
Looks like the console is showing AttributeError: 'NoneType' Object has no attribute 'save_infotext_txt'
Once the size and time are exceeded, the animation will deform and become uncontrollable.
sadly yes. even with the 4 frame overlap
how long does it take to produce the animation? in a1111
I'm running a 3070 with 8GB
❤
animate diff in automatic 1111 has interpolate, its really new
interpolation has been in there for a long time. Are you talking about something new?
@@AndyHTu its called FreeInit Params
2:27 Why did you not enable frame interpolation in A1111 on the bottom left, but you enabled frame interpolation in ComfyUI? You need to redo the entire video, because it seems like you are discriminating against A1111 and saying ComfyUI is better in that regard. Either you did this intentionally to nudge people towards ComfyUI, or you clearly missed that the A1111 plugin DOES offer frame interpolation, as seen in your video in the bottom left at 2:27
ComfyUI pushes Patreon memberships; A1111 does not, as it's straightforward. I've noticed the slow shift in most youtubers, and it then became clear why many of them are pushing Comfy.
And that's why i unsubscribed this channel.
👋
No patron = no comfyui process
Wow, how far we are!😁
tl;dr Comfy version:
Instead of paying for free stuff: in your existing AnimateDiff workflow, replace your previous AD model with the newly downloaded one and adjust the sampler steps to the model. You're done.
This is basically Will Smith eating spaghetti.
First
Woof
the inconsistency is still too bad to be useable
I need to see some great stuffs created in seconds to be convinced