LTX Video 0.9.1 With Flow Edit Video2Video - A Game Changer For Local AI!

  • Published: 6 Feb 2025
  • LTX Video 0.9.1 With Flow Edit Video2Video - A Game Changer For Local AI
    Discover the new LTX Video Model 0.9.1, a game-changer in AI video generation! This updated version introduces enhanced motion consistency, improved text-prompt following, and reduced morphing for smoother, more detailed outputs. In this video, we explore the latest features of the LTX Video Model, including its integration with ComfyUI and the powerful STG guidance method, which takes AI video production to the next level.
    ComfyUI Kokoro TextToSpeech With LatentSync For LipSync (Run On Cloud)
    home.mimicpc.c...
    Follow along as we demonstrate workflows for text-to-video, image-to-video, and video-to-video editing using the latest LTX Tricks custom nodes. Watch as we transform basic inputs into stunning visuals with seamless frame transitions, enriched detail, and consistent styling. We also highlight the benefits of the updated architecture, which optimizes VRAM usage and supports a wide range of creative applications, from cinematic scenes to stylized animations.
    Whether you're a professional creator or an AI enthusiast, this tutorial will empower you to create stunning AI-generated videos on your local machine. Don’t forget to like, share, and subscribe for more insights into cutting-edge AI video tools and workflows!
    Two attached workflows, LTX with STG and LTX Flow Edit V2V (freebies):
    www.patreon.co...
    If you like tutorials like this, you can support our work on Patreon:
    / aifuturetech
    Discord : / discord
  • Science

Comments • 51

  • @TheFutureThinker
    @TheFutureThinker  A month ago +6

    ComfyUI-LTXTricks
    github.com/logtd/ComfyUI-LTXTricks
    Two attached workflows, LTX with STG and LTX Flow Edit V2V (freebies):
    www.patreon.com/posts/ltx-video-0-9-1-118605761?Link&

  • @trung_ai
    @trung_ai A month ago +10

    Thank you for a great video. I would like to add some of my findings with I2V after exploring LTX (0.9.0):
    - With MP4 compression, for realistic video the optimal CRF value is ~30; CRF 50-60 will degrade quality a lot and cause erratic, extreme motion. For art and anime there is usually no need for detailed texture, so if you want more motion, CRF 50 will yield more diverse animation.
    - For prompting, keywords like "slowly" and "gradually" make for smoother and more consistent animation, but the output will be quite slow-motion, so interpolating to increase FPS is a good choice. Adding "very slowly" makes it even slower.
    - LTX is very good at landscapes / video without people, so I2V or T2V with detailed prompts will yield great results most of the time.
    - Don't include multiple or sudden camera-movement keywords (e.g. "shift", "suddenly"), as they create morphing animations (i.e. frames 0 to 20 are consistent, then the video suddenly shifts to an entirely different scene).
    - For dancing / more rapid character movement, I get good results with this kind of prompt: "[Subject] begins to dance with graceful precision, his movements fluid and intentional".
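The CRF findings above can be applied when preparing input clips with ffmpeg before feeding them into the workflow. A minimal sketch, assuming ffmpeg with libx264 is installed; the file names and helper name are placeholders:

```python
import subprocess

def reencode_with_crf(src: str, dst: str, crf: int = 30) -> list:
    """Build an ffmpeg command that re-encodes a clip at the given CRF.

    Per the findings above: CRF ~30 preserves texture for realistic
    footage, while CRF ~50 trades detail for more diverse motion
    (often fine for art/anime styles).
    """
    return [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-crf", str(crf),
        "-an",  # drop audio; V2V workflows only need the frames
        dst,
    ]

# To actually run it (requires ffmpeg on PATH and a real input file):
# subprocess.run(reencode_with_crf("input.mp4", "input_crf30.mp4"), check=True)
```

This only constructs the command; run it with `subprocess.run(..., check=True)` once the paths point at real files.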

  • @QuanticalCapybara
    @QuanticalCapybara A month ago

    Very cool tutorial! This year is going to be amazing for AI videos.

  • @insurancecasino5790
    @insurancecasino5790 A month ago

    Bro, your channel is going to give me a heart attack. LOL! You're a machine with these video updates. Awesome work.

    • @TheFutureThinker
      @TheFutureThinker  A month ago +3

      hehe.. thanks. Have fun on holiday!
      I will continue with the web app development for Hunyuan Video Web UI. ;)

  • @ratside9485
    @ratside9485 A month ago +6

    LoRAs are also possible; the model really has potential. I believe they have now also announced official LoRA support.

    • @AIBusinessIdeasBenji
      @AIBusinessIdeasBenji A month ago +3

      I know a guy who even fine-tuned this base model. And it's awesome! Talk about this in the next video.

    • @TerragonAI
      @TerragonAI A month ago +1

      @@AIBusinessIdeasBenji that sounds great! 🙂

    • @bause6182
      @bause6182 A month ago

      I am waiting for ControlNet support too 😊

  • @dxtrytooon
    @dxtrytooon A month ago

    Thank you for your work. May I ask what model you would recommend for image-to-video, for the highest-quality results?

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      Open source?

    • @dxtrytooon
      @dxtrytooon A month ago

      @@TheFutureThinker If you have recommendations for both, I'll take them, open and not open 👌

  • @mhnoni
    @mhnoni A month ago +1

    Thanks for the video. What is the minimum VRAM for this?

    • @TheFutureThinker
      @TheFutureThinker  A month ago +2

      It needs about 12 GB.

    • @mhnoni
      @mhnoni A month ago

      @@TheFutureThinker oh nice, thanks.

  • @kalakala4803
    @kalakala4803 A month ago

    Great vid! Now we have V2V that runs PC-friendly, like AnimateDiff before!

  • @trung_ai
    @trung_ai A month ago

    I have tried LTX quite a few times already, and overall I've had a great experience with it. Though one thing I struggle with is camera movement: e.g. zoom out / pan right / pan left doesn't seem to work. Have you found a consistent method for camera movement?

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      It needs a very detailed prompt for camera panning, and version 0.9.1 does improve on prompt following.
      Good luck ;)

  • @ckhmod
    @ckhmod A month ago

    Great video! One question. Where does Florence-2-Flux-Large install to?

    • @TheFutureThinker
      @TheFutureThinker  A month ago +1

      models\llm\Florence-2-Flux-Large
      It can be downloaded manually here: huggingface.co/gokaygokay/Florence-2-Flux-Large/tree/main
      Or use the Download Florence 2 Loader node, which downloads it automatically.
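For scripted setups, the manual download above can be automated with the `huggingface_hub` library. A sketch, assuming `pip install huggingface_hub`, network access, and that `comfy_root` points at your ComfyUI folder; the helper names are made up for illustration:

```python
from pathlib import Path

def florence2_dir(comfy_root: str) -> Path:
    """Target folder the Florence-2-Flux-Large files should end up in,
    matching the path given above (models/llm/Florence-2-Flux-Large)."""
    return Path(comfy_root) / "models" / "llm" / "Florence-2-Flux-Large"

def download_florence2(comfy_root: str) -> Path:
    # Deferred import: requires `pip install huggingface_hub` and network.
    from huggingface_hub import snapshot_download
    target = florence2_dir(comfy_root)
    snapshot_download(
        repo_id="gokaygokay/Florence-2-Flux-Large",
        local_dir=str(target),
    )
    return target

# Example (adjust the root to your install):
# download_florence2("/opt/ComfyUI")
```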

    • @ckhmod
      @ckhmod A month ago

      @@TheFutureThinker Thank you so much! Just going through this now. Cheers.

  • @Gardener7
    @Gardener7 A month ago

    Which is better, this one or the Nvidia one?

  • @thedevo01
    @thedevo01 A month ago

    The guy in the apocalyptic city wouldn't be able to walk with a knee like that 😆
    Jokes aside though, I'm curious why it ignored the boy turning 180° while walking away from the camera.

  • @dsphotos
    @dsphotos A month ago

    Great channel! Thanks a lot, but I have one error at the end of the flow:
    ImageConcanate: Sizes of tensors must match except in dimension 2. Expected size 111 but got size 105 for tensor number 1 in the list.

    • @dsphotos
      @dsphotos A month ago

      Just deleted the compare nodes since I don't need them; now the error is gone. I also noticed that if you tell it in your prompt to move slowly and use 30 fps + 40-50+ steps in the scheduler, you get a much better result. It still renders quite fast, which is amazing on a 10-year-old PC + an RTX 3060 with 12 GB VRAM :-)

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      Hi, and yes, the compare node is for demo purposes. It needs both the reference and output videos to be the same resolution.
      You usually don't need it when generating video.
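For anyone hitting the same size-mismatch error: the concatenate step requires both image batches to share the same height and width along the joined axis (dimension 2 in the error message). A simplified NumPy stand-in for the ComfyUI tensors, with center-cropping to the common size; the function name is illustrative:

```python
import numpy as np

def concat_side_by_side(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Center-crop two image batches (frames, height, width, channels)
    to a common height/width, then concatenate along the width axis.
    Mismatched widths are exactly what raises the
    'Sizes of tensors must match except in dimension 2' error."""
    h = min(a.shape[1], b.shape[1])
    w = min(a.shape[2], b.shape[2])

    def crop(x: np.ndarray) -> np.ndarray:
        top = (x.shape[1] - h) // 2
        left = (x.shape[2] - w) // 2
        return x[:, top:top + h, left:left + w, :]

    return np.concatenate([crop(a), crop(b)], axis=2)
```

The cleaner fix, as noted above, is to make the reference and output videos the same resolution upstream (or simply remove the compare node).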

  • @amanportfolio
    @amanportfolio A month ago

    Florence2ModelLoader: Missing Node Types
    When loading the graph, the above node type was not found. It is still missing after installing missing nodes; what should I do?

  • @seifergunblade9857
    @seifergunblade9857 A month ago

    What is the minimum VRAM?

  • @TheKr4tosD
    @TheKr4tosD A month ago

    Hello, can anyone help me? I'm getting this error: Missing Node Types
    When loading the graph, the following node types were not found:
    Florence2ModelLoader

  • @HugoCostaBR
    @HugoCostaBR A month ago

    The LTX-Video node is still broken in the ComfyUI Manager installer. The only way is to install it manually via Git URL.

    • @sorijin
      @sorijin A month ago +1

      Nope. While it doesn't show up in Install Missing Custom Nodes, it is searchable in the Custom Nodes Manager and works 100% when installed from there.

  • @FusionDeveloper
    @FusionDeveloper A month ago +1

    In my experience so far, LTX 0.9.1 has a higher VRAM requirement than LTX 0.9, even though 0.9.1 is half the size.

  • @wonder111
    @wonder111 A month ago

    Anyone else getting cross-hatching in moving highlight areas, like a flickering flame? I'm looking for solutions in DaVinci Resolve; Topaz upscaling seems to ignore the cross-hatching. I was wondering if I should optimize the footage in Resolve before upscaling in Topaz? Anyone else working this way?

  • @petergreene8760
    @petergreene8760 A month ago

    I can't seem to get the 0.9.1 model to run on my 2080 Ti without getting an OOM error.

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      2080 Ti? Have you emailed LTX to ask why that happened?

    • @petergreene8760
      @petergreene8760 A month ago

      @TheFutureThinker no, I didn't think to do that

  • @MaxCui-jt8ob
    @MaxCui-jt8ob A month ago +1

    CR Text Replace
    'list' object has no attribute 'replace'

  • @dropLove_
    @dropLove_ A month ago

    Benji stays on it.

  • @ssserega2976
    @ssserega2976 10 hours ago

    I tried this workflow, and it's just terrible 👎👎👎 There are other LTX workflows that work much better!

  • @phudo_ai
    @phudo_ai A month ago +2

    Very hard to control the output result. This is currently not the game changer you said it is.

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      Depends how you use it. If by control you mean control like a video editor? It won't be. It's AI.

    • @darmok072
      @darmok072 A month ago

      Have you used something that gives greater control? Personally, I find everything pretty much the same at the moment: just different ways of getting to the same results.

  • @luisellagirasole7909
    @luisellagirasole7909 A month ago +1

    Hi, I always get this error:
    "Error(s) in loading state_dict for VideoVAE:
    size mismatch for decoder.conv_in.conv.weight: copying a param with shape torch.Size([1024, 128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3, 3]).
    size mismatch for decoder.conv_in.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for decoder.up_blocks.0.res_blocks.0.conv1.conv.weight: copying a param with shape torch.Size([1024, 1024, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3, 3]).
    size mismatch for decoder.up_blocks.0.res_blocks.0.conv1.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for decoder.up_blocks.0.res_blocks.0.conv2.conv.weight: copying a param with shape torch.Size([1024, 1024, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3, 3]). etc"

    • @TheFutureThinker
      @TheFutureThinker  A month ago +1

      Check out 02:20. The VideoVAE error is because you need the new VAE loader for the version 0.9.1 model's VAE architecture.

    • @luisellagirasole7909
      @luisellagirasole7909 A month ago

      @@TheFutureThinker Yes, I have updated ComfyUI from the Manager, but I still get the error. Maybe I'll update another way; I'll try, thanks.

    • @TheFutureThinker
      @TheFutureThinker  A month ago

      If that is the case, try not to use ComfyUI Manager to update. Do a git pull instead.
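The git-pull update can be scripted as well. A minimal sketch using only the standard library; the repo path is a placeholder and git is assumed to be installed:

```python
import subprocess

def git_pull_cmd(repo_dir: str) -> list:
    """Command to update one repo in place (ComfyUI itself or a
    custom-node folder under custom_nodes/), bypassing the Manager.
    --ff-only refuses to merge if local edits diverged upstream."""
    return ["git", "-C", repo_dir, "pull", "--ff-only"]

def update_repo(repo_dir: str) -> None:
    subprocess.run(git_pull_cmd(repo_dir), check=True)

# Example (path is hypothetical; run from your ComfyUI folder):
# update_repo("custom_nodes/ComfyUI-LTXTricks")
```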