AnimateDiff Lightning - Local Install Guide - Stable Video

  • Published: 7 Jun 2024
  • AnimateDiff Lightning is super fast. You can create a video in seconds. The model works with ControlNet, video-to-video rendering, Motion LoRAs and more. Here is my local install guide for AnimateDiff Lightning in Automatic 1111 (A1111) and ComfyUI. (For a script-only route, see the Diffusers sketch after the links below.)
    #### Links from my Video ####
    my Workflow on Patreon: / 100890477
    Model Download: huggingface.co/ByteDance/Anim...
    Test Free online: huggingface.co/spaces/ByteDan...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorials.podia.com/new...
    Support me on Patreon: / sarikas
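    For anyone running this from a script instead of A1111 or ComfyUI, below is a minimal Diffusers sketch along the lines of the ByteDance/AnimateDiff-Lightning model card. The checkpoint filename pattern and the epiCRealism base model are assumptions taken from that card; swap in whatever you actually downloaded.

    ```python
    # Minimal Diffusers sketch (not the A1111/ComfyUI route shown in the video).
    # Filenames and the base model follow the AnimateDiff-Lightning model card and
    # are assumptions - use your own SD 1.5 checkpoint if you prefer.
    import torch
    from diffusers import AnimateDiffPipeline, MotionAdapter, EulerDiscreteScheduler
    from diffusers.utils import export_to_gif
    from huggingface_hub import hf_hub_download
    from safetensors.torch import load_file

    device, dtype = "cuda", torch.float16
    steps = 4  # Lightning checkpoints are distilled for 1, 2, 4 or 8 steps
    repo = "ByteDance/AnimateDiff-Lightning"
    ckpt = f"animatediff_lightning_{steps}step_diffusers.safetensors"
    base = "emilianJR/epiCRealism"  # example SD 1.5 base model

    # Load the distilled motion module into a MotionAdapter
    adapter = MotionAdapter().to(device, dtype)
    adapter.load_state_dict(load_file(hf_hub_download(repo, ckpt), device=device))

    pipe = AnimateDiffPipeline.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)
    # Lightning expects trailing timestep spacing and a linear beta schedule
    pipe.scheduler = EulerDiscreteScheduler.from_config(
        pipe.scheduler.config, timestep_spacing="trailing", beta_schedule="linear"
    )

    # Distilled models run at CFG ~1.0 and the step count they were trained for
    output = pipe("a girl smiling", guidance_scale=1.0, num_inference_steps=steps)
    export_to_gif(output.frames[0], "animation.gif")
    ```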

Comments • 70

  • @moviecartoonworld4459
    @moviecartoonworld4459 2 months ago

    Thank you for always providing us with the latest techniques in an easy-to-understand manner. Have a happy day!

  • @MrSaifuddin93
    @MrSaifuddin93 2 months ago +23

    What is the minimum vram required?

  • @rizzsea
    @rizzsea 2 months ago +1

    Looks great. I find Lightning models have great upscale potential with denoise on a second Lightning phase.

  • @wykydytron
    @wykydytron 2 months ago +48

    A1111 - download, just pick the model and you're done,
    Comfy - noodles upon noodles of noodles, confusion, horror, pain

    • @ClownCar666
      @ClownCar666 2 months ago +1

    • @AndyHTu
      @AndyHTu 2 months ago

      I'm not a ComfyUI fan either, but you can actually do things quicker with ComfyUI by downloading people's workflows. It's pretty much like working with extensions. Building your own workflow is what makes it hard, but that's not necessary.

    • @deviltiger78
      @deviltiger78 2 months ago +3

      Seriously. Even with a workflow it's always a crapshoot

    • @OlivioSarikas
      @OlivioSarikas 2 months ago +30

      Yes, but you can do a lot more things with ComfyUI. It's like comparing a train ride to driving your own car. Yes, a train ride will get you from A to B, and driving a car is something you need to learn first. But you can do a lot more with your own car than a train can ever do for you.

    • @amorgan5844
      @amorgan5844 2 months ago +8

      ComfyUI is so superior it's not even funny.

  • @yaahooali
    @yaahooali 2 months ago

    Thank you

  • @EmmaFitzgerald-dp4re
    @EmmaFitzgerald-dp4re 1 month ago

    Thank you, love your video. Do you also have this workflow but based on a starting image?

  • @royjones5790
    @royjones5790 2 months ago

    Which video talks about the GMFSS Fortuna installation & its models?

  • @dosenfleisch1310
    @dosenfleisch1310 2 months ago +3

    All I get is "TypeError: 'NoneType' object is not iterable" ... and then normal images no longer work :(

  • @procomp113
    @procomp113 2 months ago

    For a capital-S Slight quality improvement on the upscale, I found the NNLatentUpscale node works better than the normal LatentUpscale. On a 3090 the time difference is not noticeable.

  • @electronicmusicartcollective
    @electronicmusicartcollective 2 months ago

    YES!!!

  • @cabb_
    @cabb_ 2 months ago

    Yeah 👏👏

  • @JoKeR-hl1np
    @JoKeR-hl1np 2 months ago +2

    Thanks for all your amazing videos. Can we use it with img2img to upscale videos, and with tile ControlNet?

    • @OlivioSarikas
      @OlivioSarikas 2 months ago +1

      You can use it for video to video, so that should work :)

    • @JoKeR-hl1np
      @JoKeR-hl1np 2 months ago

      Will it be useful for preserving the original video's fidelity and increasing coherence, or not? @OlivioSarikas

    • @OlivioSarikas
      @OlivioSarikas 2 months ago +1

      @JoKeR-hl1np Oh, I misunderstood your question. I don't think this is a good way to upscale actual video footage.

  • @user-kk2ve1un4u
    @user-kk2ve1un4u 2 months ago +9

    What is the difference between AnimateDiff Lightning and AnimateLCM? 🤔

    • @johnriperti3127
      @johnriperti3127 2 months ago

      And between it and the first version of AnimateDiff (is that the one called AnimateLCM?)

  • @tylermucha9281
    @tylermucha9281 2 months ago

    I'm getting this error after installing and trying to run the new lightning models. Trying to google it isn't turning up much that I could find. Do you have any idea how to fix this? I was able to use AnimateDiff before.
    RuntimeError: CUDA error: device-side assert triggered
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
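    As the traceback hints, the reported line is often not the real failure point because CUDA kernels launch asynchronously. A quick, hypothetical way to get an accurate stack trace is to force synchronous launches before anything touches the GPU (for A1111 you would export the variable in the launch script instead):

    ```python
    # Hypothetical debugging sketch: force synchronous CUDA launches so the stack
    # trace points at the op that actually fired the device-side assert.
    import os
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized

    import torch

    # A classic trigger for "device-side assert triggered" is an out-of-range index,
    # e.g. token ids outside the embedding table or a class id >= num_classes.
    x = torch.arange(4, device="cuda")
    print(x[torch.tensor([10], device="cuda")])  # raises with a usable traceback now
    ```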

  • @CesarSanchezQuiros
    @CesarSanchezQuiros 2 months ago +1

    Hello friend, let me ask you a question. Does the length of the video depend on the VRAM of the GPU?

    • @l4l01234
      @l4l01234 2 months ago +1

      No, the length of the generated video has no relation to VRAM. Generation time does depend on VRAM: specifically, if you don't have enough, it will offload into RAM or disk, which is horribly slow.
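      If VRAM is tight, Diffusers lets you opt into that offloading explicitly; a minimal sketch, assuming `pipe` is the AnimateDiffPipeline from the snippet near the top of the page, built without the final .to("cuda"):

      ```python
      # Standard Diffusers memory helpers - they trade speed for lower VRAM use.
      pipe.enable_model_cpu_offload()  # keep submodules in system RAM, move each to the GPU only while it runs
      pipe.enable_vae_slicing()        # decode the generated frames in slices rather than all at once

      # Still finishes on ~8 GB cards, just slower because weights keep crossing the PCIe bus.
      frames = pipe("a girl smiling", guidance_scale=1.0, num_inference_steps=4).frames[0]
      ```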

  • @zaselimgamingvideos6881
    @zaselimgamingvideos6881 2 months ago +4

    I am getting this error: EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16.
    I did everything exactly the same as you, even selected the same checkpoint model. I am on an RTX 4090. Also, I don't have these two files in the same folder, mm_sd_v1.4.ckpt and mm_sd_v1.5.ckpt, like you do. Is it because of these files that I am getting this error? If so, where can I download them from?
    Thanks.

    • @nirsarkar
      @nirsarkar 2 months ago +1

      Me too, I'm getting the same error.
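    For context, that EinopsError comes from the motion module: the pattern expects the leading axis to be batch × frames (f=16 here), but it received a plain 2-image latent batch, which usually means AnimateDiff never actually got a 16-frame video batch for that generation (a wrong or missing motion model is one common cause). A minimal repro with hypothetical tensors:

    ```python
    # Minimal repro of the shape mismatch, with hypothetical tensors.
    import torch
    from einops import rearrange

    f = 16  # frames per video expected by the motion module
    ok = torch.randn(1 * f, 4096, 320)   # one 16-frame video: leading axis divisible by f
    print(rearrange(ok, "(b f) d c -> (b d) f c", f=f).shape)  # torch.Size([4096, 16, 320])

    bad = torch.randn(2, 4096, 320)      # plain 2-image latent batch, no frame axis
    rearrange(bad, "(b f) d c -> (b d) f c", f=f)
    # einops.EinopsError: ... can't divide axis of length 2 in chunks of 16
    ```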

  • @mcdazz2011
    @mcdazz2011 1 month ago

    At 2:15, you show a number of files in your model folder, but you never explain the two checkpoint files that are in there.
    Are they needed? If so, why aren't they mentioned? Are they mentioned in the ComfyUI portion of the video (which I likely won't watch until I start using ComfyUI)?
    This isn't a criticism by any means, as I appreciate the time you spend making your videos, as well as the content itself. You're hands down one of the best AI creators.

    • @OlivioSarikas
      @OlivioSarikas 1 month ago

      No, you don't need those. You only need the models from the list I show before, which are specifically for AnimateDiff Lightning.

  • @Neuromindart
    @Neuromindart 2 months ago +1

    I followed along but for some reason it is just creating an image and not a video.
    I followed each step and even have animatediff enabled. There is no visible reason why this would be happening. Any ideas?

    • @Neuromindart
      @Neuromindart 2 months ago

      Looks like the console is showing AttributeError: 'NoneType' object has no attribute 'save_infotext_txt'

  • @ScottMia
    @ScottMia 2 months ago +5

    Once the size and time are exceeded, the animation will deform and become uncontrollable.

    • @OlivioSarikas
      @OlivioSarikas 2 months ago +2

      Sadly yes, even with the 4-frame overlap.

  • @zimizi
    @zimizi 2 months ago

    How long does it take to produce the animation in A1111?

    • @zimizi
      @zimizi 2 months ago

      I'm running a 3070 with 8 GB.

  • @tanveerahmad2865
    @tanveerahmad2865 2 months ago

  • @o1ecypher
    @o1ecypher 2 months ago +3

    AnimateDiff in Automatic 1111 has interpolation now; it's really new.

    • @AndyHTu
      @AndyHTu 2 months ago +5

      Interpolation has been in there for a long time. Are you talking about something new?

    • @o1ecypher
      @o1ecypher 2 months ago

      @AndyHTu It's called FreeInit Params.

  • @DivinityIsPurity
    @DivinityIsPurity 2 months ago +5

    2:27 Why did you not enable frame interpolation in A1111 at the bottom left, but you enabled frame interpolation in ComfyUI? You need to redo the entire video, because it seems like you are discriminating against A1111 and saying ComfyUI is better in that regard. Either you did this intentionally to nudge people towards ComfyUI, or you clearly missed that the A1111 plugin DOES offer frame interpolation, as seen in your video at the bottom left at 2:27.

    • @BabylonBaller
      @BabylonBaller 2 months ago +5

      ComfyUI pushes Patreon memberships; A1111 does not, as it's straightforward. I've noticed the slow shift in most YouTubers, and then it became clear why many of them are pushing Comfy.

    • @jevinlownardo8784
      @jevinlownardo8784 2 months ago +1

      And that's why I unsubscribed from this channel.

  • @LouisGedo
    @LouisGedo 2 months ago

    👋

  • @aa-xn5hc
    @aa-xn5hc 2 months ago +1

    No Patreon = no ComfyUI process

  • @DJVARAO
    @DJVARAO 2 months ago +3

    Wow, how far we've come! 😁

  • @Orangeduck
    @Orangeduck 2 months ago

  • @Deadgray
    @Deadgray 2 months ago

    tl;dr Comfy version:
    Instead of paying for free stuff: in your existing AD workflow, replace your previous AD motion model with the one you just downloaded and adjust the sampler steps to match the model. You're done.

  • @pedro3000
    @pedro3000 2 months ago

    This is basically Will Smith eating spaghetti.

  • @abaal2109
    @abaal2109 2 months ago

    First

  • @roberta.s.946
    @roberta.s.946 2 months ago

    Woof

  • @chucknorris8704
    @chucknorris8704 2 months ago

    The inconsistency is still too bad for it to be usable.

  • @m14wdotcom
    @m14wdotcom 2 months ago

    I need to see some great stuff created in seconds to be convinced.