ComfyUI Tutorial Series Ep 25: LTX Video - Fast AI Video Generator Model

  • Published: 30 Jan 2025

Comments • 113

  • @pixaroma
    @pixaroma  1 month ago

    Free workflows are available on the Pixaroma Discord server in the pixaroma-workflows channel discord.gg/gggpkVgBf3
    You can now support the channel and unlock exclusive perks by becoming a member:
    pixaroma ruclips.net/channel/UCmMbwA-s3GZDKVzGZ-kPwaQjoin
    Check my other channels:
    www.youtube.com/@altflux
    www.youtube.com/@AI2Play

  • @g4p5l6
    @g4p5l6 2 days ago

    Great tutorial, works exactly as described. As you mentioned the first few clips are a bit off and improve with further iterations. Looking forward to seeing how the technology improves! Thanks for posting.

  • @ivo_tm
    @ivo_tm 1 month ago +5

    The quality of the generated videos is very decent. Thank you very much!

    • @pixaroma
      @pixaroma  1 month ago +1

      Yeah, I can work with some of those: image to video, some good prompts, and a lot of variations 😁

  • @gabdofuturo
    @gabdofuturo 19 days ago +1

    Wow bro! It generated on my RTX 3060 12 GB in less than 2 minutes, less time than on the paid platform I used. Now I just have to find out how to use a 4K upscale node and I'm done. Bro, you just made me save lots of money. Really good for a poor Brazilian guy.

    • @cristianbonessoni6543
      @cristianbonessoni6543 15 days ago

      Did you figure out how to upscale to 4K? I thought the video was fantastic. I have a really old GTX 1080, and I managed to create it in 10 minutes!

  • @Uday_अK
    @Uday_अK 1 month ago +7

    This is impressive! 🎥✨ Local video generation models like LTX could truly shape the future of open-source AI, making creativity more accessible to everyone.

    • @pixaroma
      @pixaroma  1 month ago +1

      thanks Uday :)

  • @camilaot4862
    @camilaot4862 1 day ago

    Thanks for this, pixaroma! Liked and subscribed! And I'll watch the whole playlist if I can!

  • @FusionDeveloper
    @FusionDeveloper 1 month ago +2

    I love LTX video, especially with the enhanced output nodes.

  • @JuniorReveron
    @JuniorReveron 1 month ago +2

    I love your videos. When you started with Episode 01 I put Comfy on a 125 GB SSD; now I have had to move everything to an 8 TB SSD because I was running out of space.

    • @pixaroma
      @pixaroma  1 month ago +1

      There are so many cool things to try; the more fun, the more space it needs 😂 I am almost out of space too, I have to see what I can delete

  • @95georgiev
    @95georgiev 1 month ago

    Can't wait to watch yet another AI video for AI videos

  • @GKChandlerBooks
    @GKChandlerBooks 27 days ago

    Thanks for sharing. Looking forward to implementing it.

  • @jonrich9675
    @jonrich9675 1 month ago +2

    always love your videos. Keep them coming :)

  • @Northern362
    @Northern362 1 month ago

    Great video and series, thanks for all your hard work. I started playing with the samplers, getting much better results with ddim.

    • @pixaroma
      @pixaroma  1 month ago +1

      Great to hear 🙂

  • @ohheyvoid
    @ohheyvoid 9 days ago

    Awesome! Super helpful. Thank you! Subbed

  • @alexandrapadureanu4192
    @alexandrapadureanu4192 1 month ago +2

    Interesting stuff!

  • @devnull_
    @devnull_ 1 month ago +3

    Thanks!

  • @NotThatOlivia
    @NotThatOlivia 1 month ago

    nicely done overview of the model!

  • @rank8467
    @rank8467 1 month ago

    Great video, great model, great WF. Now we have to find a way to continue the video, because 10 seconds max is short.😉

    • @pixaroma
      @pixaroma  1 month ago +1

      You could take the last frame of the video and use it to make another video
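
The last-frame trick described above can be sketched outside ComfyUI too. This is a hypothetical helper (the ffmpeg flags are standard, but the file names are made up) that builds the command to grab the final frame of a clip so it can seed the next image-to-video run:

```python
# Sketch of the "continue from the last frame" idea: grab the final frame
# of a generated clip with ffmpeg, then feed it to the img2vid workflow.
# File names here are illustrative, not from the tutorial.
def last_frame_cmd(video_path: str, frame_path: str) -> list:
    # -sseof -0.1 seeks to ~0.1 s before the end of the input;
    # -update 1 keeps overwriting one output image, so the last frame wins
    return [
        "ffmpeg", "-sseof", "-0.1", "-i", video_path,
        "-update", "1", "-q:v", "2", frame_path,
    ]

print(" ".join(last_frame_cmd("clip_001.mp4", "last_frame.png")))
```

Running the printed command produces a PNG you can drop into the Load Image node of the img2vid workflow.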

    • @rank8467
      @rank8467 1 month ago

      @@pixaroma That's what I do, but sometimes it goes in the wrong direction.

    • @pixaroma
      @pixaroma  1 month ago

      Unless someone makes a custom node or a better model comes out, I don't know of a fix

  • @JackGamerEuphoriaDev
    @JackGamerEuphoriaDev 1 month ago +2

    Awesome! I was on your Discord server when I saw video generation. Finally! I was waiting for this. Thanks a lot 🎉❤

  • @QuickQuizQQ
    @QuickQuizQQ 1 month ago

    Another great tutorial, thank you, keep going :)

  • @JakubSK
    @JakubSK 18 days ago +1

    It ran on my M3 MacBook in less than 2 minutes 😮

    • @pixaroma
      @pixaroma  18 days ago +1

      I assume that's good

    • @JakubSK
      @JakubSK 18 days ago

      @ you’re awesome!

  • @ita3pizza
    @ita3pizza 1 month ago

    Thanks for sharing!

  • @59Marcel
    @59Marcel 1 month ago

    Really interesting tutorial. Thank you.

    • @pixaroma
      @pixaroma  1 month ago

      Thanks Marcel ☺️

  • @Ono_Rourke
    @Ono_Rourke 1 month ago

    thank you, great tutorial

  • @UnrealCpp
    @UnrealCpp 24 days ago

    I wonder if it is possible to use a single image and a text description to generate an AI video, and whether every frame can be used in a Unity 2D animation frame sequence

    • @pixaroma
      @pixaroma  24 days ago

      Maybe you can get the frames from the video, but I'm not sure it is advanced enough to do animation like that. I mean, I got advanced animation with Kling AI, but LTX wasn't quite as advanced

    • @UnrealCpp
      @UnrealCpp 24 days ago

      @@pixaroma I will try your advice; if I can't make it work, I hope it will become your EP26

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 1 month ago

    Great..thanks for sharing!!

  • @Fayrus_Fuma
    @Fayrus_Fuma 1 month ago

    Good. Let's wait and see if the video can be looped properly. So far this is the first step, but the way it always cuts off abruptly, a lot of things are not available yet.

  • @WildCrashBNG
    @WildCrashBNG 1 month ago

    Sir, which python version is required for ComfyUI? Also, when I install LTX video from the manager, it says (IMPORT FAILED). What could be the reason for this?

    • @pixaroma
      @pixaroma  1 month ago

      I installed the portable version, so it installs the versions it needs in its own environment. I have also updated to this, maybe it helps (first I uninstalled, then installed). I got:
      Name: torch
      Version: 2.5.1+cu124
      Name: torchvision
      Version: 0.20.1+cu124
      Name: xformers
      Version: 0.0.28.post3
      Name: torchaudio
      Version: 2.5.1+cu124

  • @dreadfulpirate
    @dreadfulpirate 3 days ago

    Hello and thank you. One question: on the ComfyUI_examples page, it says to download t5xxl_fp16.safetensors to the /models/text_encoders folder. However, you say to put that file in the /models/clip folder. Can you assist?

    • @pixaroma
      @pixaroma  3 days ago +1

      I think it works in both folders; try one, and if not, copy it to the other. Normally it is a text encoder, so it should go in text_encoders. I think I just got used to putting it in clip because that is where I put the one for Flux
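
A quick sketch of the "try both folders" advice, assuming a standard ComfyUI install layout (the root path is yours to supply); it reports which of the two candidate locations actually holds the encoder file:

```python
from pathlib import Path

# Hedged helper: check which of the two candidate folders contains the
# T5 text encoder. Both models/clip and models/text_encoders are
# commonly used; the ComfyUI root path is an assumption you adjust.
def find_t5(comfy_root: str, fname: str = "t5xxl_fp16.safetensors") -> list:
    candidates = ["models/clip", "models/text_encoders"]
    return [c for c in candidates
            if (Path(comfy_root) / c / fname).is_file()]
```

If the returned list is empty, the file is missing from both locations and the workflow will fail to load the text encoder.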

  • @Billabongesta
    @Billabongesta 1 month ago

    Thank you!

  • @thedevo01
    @thedevo01 1 month ago

    Thank you for the tutorial! It's one of the best ones when it comes to local video diffusion.
    Is your audio AI generated as well? Some words sound suspicious. :))

    • @pixaroma
      @pixaroma  1 month ago

      Yeah, all the audio is generated from the text I give it, so it is not always perfect, but it does the job

  • @dinnerchief
    @dinnerchief 1 month ago

    What are the UI improvements you use, like RAM, VRAM, GPU usage, and the progress bar?

    • @pixaroma
      @pixaroma  1 month ago +1

      I just installed the Crystools node from Manager (Custom Nodes Manager), and those appear where your bar is; if it is on top, they appear there

    • @dinnerchief
      @dinnerchief 1 month ago

      @pixaroma thank you

  • @Korodarn
    @Korodarn 1 month ago

    I did Update All and installed Video Helper, but it still doesn't see these LTXV nodes. I do have the SamplerCustom node and Video Combine nodes. I also tried installing LTXVideo and LTXTricks, but this still didn't give me those nodes.
    LTXVConditioning
    LTXVScheduler
    LTXVImgToVideo

    • @Korodarn
      @Korodarn 1 month ago

      Apparently Update All didn't work completely; I had to do a git pull on the folder.
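
The manual git-pull fallback can be sketched like this; the directory path assumes the default ComfyUI layout and is yours to adjust:

```python
import subprocess
from pathlib import Path

# Manual fallback for when Manager's "Update All" silently fails:
# run `git pull` in every custom-node repo it finds. The path assumes
# the default ComfyUI layout (ComfyUI/custom_nodes); adjust as needed.
def update_custom_nodes(custom_nodes_dir: str) -> None:
    for repo in sorted(Path(custom_nodes_dir).iterdir()):
        if (repo / ".git").is_dir():
            print(f"updating {repo.name}")
            subprocess.run(["git", "-C", str(repo), "pull"], check=False)
```

`check=False` lets the loop continue past a single repo that fails to pull (for example, one with local changes).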

    • @pixaroma
      @pixaroma  1 month ago

      Sometimes updating from manager fails

    • @9290943013
      @9290943013 23 days ago

      I got the same issue, can you help me out please?

  • @michaelpyro
    @michaelpyro 1 month ago

    For image to video I get this error: - Required input is missing: noise_scale
    Do I need a "get image size" node in between load image and LTX config nodes? I can't seem to find a get image size node in my node manager.

    • @pixaroma
      @pixaroma  1 month ago

      Try one of the workflows that I tested and that worked, from the pixaroma-workflows channel on my Discord; they are free, so you can check whether it works and then modify it

  • @codingfun63
    @codingfun63 4 days ago

    VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
    no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded. ?

    • @pixaroma
      @pixaroma  3 days ago

      Do you have enough VRAM? That might be the cause. If you have the model and the clip, it should work on a powerful NVIDIA card with these 2 models:
      Download ltx-video-2b-v0.9.safetensors into the models/checkpoints folder
      huggingface.co/Lightricks/LTX-Video/tree/main
      Make sure t5xxl_fp16 is in your models/clip folder
      huggingface.co/Comfy-Org/stable-diffusion-3.5-fp8/tree/main/text_encoders
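
A minimal pre-flight check for the two files named above, following that folder layout (the ComfyUI root path is an assumption you supply):

```python
from pathlib import Path

# Minimal pre-flight check for the two files this workflow needs,
# using the folder layout from the reply above. Root path is yours.
REQUIRED = {
    "models/checkpoints/ltx-video-2b-v0.9.safetensors": "LTX checkpoint",
    "models/clip/t5xxl_fp16.safetensors": "T5 text encoder",
}

def missing_files(comfy_root: str) -> list:
    root = Path(comfy_root)
    return [label for rel, label in REQUIRED.items()
            if not (root / rel).is_file()]
```

An empty result means both files are in place; otherwise it names what to download before queueing the workflow.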

  • @ss0ulzz
    @ss0ulzz 11 hours ago

    So I followed all your steps to generate a generic video of a woman walking (in the prompt I basically wrote: a woman walking), as simple as that; the negatives were: blurry, noise, deformed. I went with very quick prompts just to test it out. The video came out horribly terrible, pretty much as if 20 buckets of paint were tossed in the air and splashed on the ground. That's the video I got. I tinkered a little with the CFG etc. (you know, self-exploration), but it didn't get better; the video was still the paint example, with extremely minor improvement, nowhere near a clear / detailed / high-res video of a woman walking. Any tips / ideas / suggestions? Thank you

    • @pixaroma
      @pixaroma  10 hours ago

      Try to use long, detailed prompts generated with an LLM; it expects long prompts with details, and simple prompts will fail more than longer, detailed ones. I use ChatGPT, for example, to get detailed long prompts
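
The "make the prompt longer and more detailed" advice can be sketched as a tiny template; the field names and the boilerplate wording are made up for illustration, not taken from the tutorial:

```python
# Illustrative only: expand a bare subject into the kind of long,
# detailed prompt that short-prompt-sensitive video models handle
# better. All wording here is an assumption, not the tutorial's.
def expand_prompt(subject: str, setting: str, camera: str, motion: str) -> str:
    return (
        f"{subject} in {setting}. {camera}. {motion}. "
        "Soft natural lighting, realistic textures, smooth slow motion, "
        "high level of detail, cinematic color grading."
    )

print(expand_prompt(
    "A woman walking", "a sunlit city street at golden hour",
    "Steady tracking shot at eye level", "She moves at a relaxed pace",
))
```

In practice an LLM fills the same role: give it the one-line idea and ask for a paragraph covering subject, setting, camera, and motion.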

  • @WW2_in_Minutes
    @WW2_in_Minutes 1 month ago +2

    great stuff! What are you using for your AI voice? It sounds great. In the future can you make a tutorial on how to create an AI voice like that?

    • @pixaroma
      @pixaroma  1 month ago +5

      I use voiceair; they have the voices from ElevenLabs. I can do a video for the AI2Play channel maybe in the future, I just need to find some time, I have so many projects scheduled :)

    • @WW2_in_Minutes
      @WW2_in_Minutes 1 month ago +2

      @@pixaroma awesome thanks!

    • @sybexstudio
      @sybexstudio 11 days ago

      @@pixaroma Does it just translate what you say into another voice? Or are you writing it down?

    • @pixaroma
      @pixaroma  11 days ago

      I write it in English and give it that text to get the audio

  • @knives463
    @knives463 1 month ago

    Great job, thanks! How about a tutorial on Hunyuan video?

    • @pixaroma
      @pixaroma  1 month ago

      The license doesn't allow using it in Europe, so I cannot do a video about it

    • @knives463
      @knives463 1 month ago

      @@pixaroma A shame :( Thanks for your answer ;)

  • @newwen2102
    @newwen2102 1 month ago

    Thanks a lot!!!!!!!!!!!!!!

  • @TurboCoder-m4m
    @TurboCoder-m4m 1 month ago

    Thank you very much for this amazing tutorial and the shared workflows. This was my first attempt at generating video, and everything worked flawlessly (and even faster than I expected!!!)
    I have a couple of questions, in case you're up for answering them:
    (Silly question) Img2Vid - Can the workflow be set up to generate only the MP4 video without saving the PNG reference image to the output folder?
    (Complex question) Img2Vid - Do you know of any workflow that allows specifying both the start and end frames of the video? That would be super useful for me...
    Thanks so much for your hard work!

    • @pixaroma
      @pixaroma  1 month ago +1

      I don't know of any such workflow yet; yeah, that end-frame feature would be useful. As for saving, I don't know why it saves the PNG as well. I tried disabling the output in Video Combine, but then it saves neither the PNG nor the video; you can only right-click and save it from there, like a preview

  • @ВладиславАндреев-о7о

    Sorry, I am missing the LTXV node... which one should I install?

    • @ВладиславАндреев-о7о
      @ВладиславАндреев-о7о 1 month ago

      I have updated ComfyUI from Manager

    • @pixaroma
      @pixaroma  1 month ago

      I have included all the details in the video description, with links

    • @ВладиславАндреев-о7о
      @ВладиславАндреев-о7о 1 month ago

      I have some issues with the Video node, it is not installed

    • @pixaroma
      @pixaroma  1 month ago

      @@ВладиславАндреев-о7о You can try to install this ComfyUI version in another folder; it automatically installs all the nodes I have used so far: github.com/Tavris1/ComfyUI-Easy-Install, made by a member of our Discord community

  • @sybexstudio
    @sybexstudio 11 days ago

    Can you add styles to the videos and other methods, like img2img?

    • @pixaroma
      @pixaroma  11 days ago +1

      I haven't played too much with video; once I learn more things I can do more videos

  • @finbenton
    @finbenton 1 month ago

    Is there a limit on how long a video you can generate?

    • @pixaroma
      @pixaroma  1 month ago

      I only did 5 seconds; I saw online that some people somehow did 10 seconds with some extra nodes, but all AI video seems to have a limit around 5-10 seconds
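
Back-of-envelope duration math for these clips, assuming the default 24 fps and the convention of counting intervals between frames (so 121 frames span exactly 5 seconds):

```python
# Rough duration math for generated clips: duration = (length - 1) / fps.
# The 24 fps default and the interval-counting convention are assumptions.
def clip_seconds(length: int, fps: int = 24) -> float:
    return (length - 1) / fps

print(clip_seconds(121))  # 121 frames at 24 fps -> 5.0 seconds
```

By this arithmetic, a 10-second clip at 24 fps would need 241 frames, which is where memory and coherence limits start to bite.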

  • @SumoBundle
    @SumoBundle 1 month ago

    Very nice. Too bad you need a lot of resources. Do you know if we can build computers with more graphics cards?

    • @pixaroma
      @pixaroma  1 month ago +1

      I don't know, but this model worked on a 12 GB video card

  • @bestof467
    @bestof467 1 month ago

    How do I add a node to save the video as MP4 instead of WebP? Also, how do I make the video longer? Images distort with a large prompt.

    • @pixaroma
      @pixaroma  1 month ago

      I showed it in the video; I used that node from Video Helper, check the entire video

    • @bestof467
      @bestof467 1 month ago

      @@pixaroma Yes, I saw the new node. But the video result morphs and distorts most of the time. Any fix for this?

    • @pixaroma
      @pixaroma  1 month ago

      @@bestof467 Long prompts and slow motion improve it; all models have glitches, including Sora

  • @zorankp
    @zorankp 9 days ago

    Cannot find the workflow?

    • @pixaroma
      @pixaroma  9 days ago

      It is on Discord, in the pixaroma-workflows channel

  • @gjohgj
    @gjohgj 1 month ago

    Aren't there better video upscalers, with better models than Topaz? For instance, ones that let you control the creativity or add a prompt

    • @pixaroma
      @pixaroma  1 month ago

      There might be, but I don't know any; I used that one since I have had it for a year or so

    • @sybexstudio
      @sybexstudio 11 days ago

      Can't you use the foolhardy 4K upscaler?

  • @AIFuzz59
    @AIFuzz59 1 month ago

    Use the video combine node to save it as mp4. No one wants to save as webp format

    • @pixaroma
      @pixaroma  1 month ago +1

      I covered MP4 saving at minute 5:35 🙂

  • @MikevomMars
    @MikevomMars 11 days ago

    Missing Node Types:
    LTXVScheduler
    LTXVConditioning
    EmptyLTXVLatentVideo
    - ComfyUI updated and VideoHelper installed, but there is no way to install these missing nodes in your workflow. They do not show up in the Manager.

    • @pixaroma
      @pixaroma  11 days ago

      If you click on Manager and Install Missing Nodes, do they still not appear? Maybe update ComfyUI from the update folder, there is a .bat file there. Not sure why this happens for you

    • @MikevomMars
      @MikevomMars 11 days ago

      @@pixaroma Nope, no way to install them. Lots of users are having the same issue, according to GitHub. This project seems to be broken 😕

  • @LouisGedo
    @LouisGedo 1 month ago

    👋 hi

  • @DeathMasterofhell15
    @DeathMasterofhell15 1 month ago

    My 4070 Ti takes about 5 minutes, why?

    • @pixaroma
      @pixaroma  1 month ago

      It should not take that long, not sure why. Try a shorter length like 97 and 768x512 px, and see if it is that speed every time or only the first time. I asked someone who said it takes under 2 minutes on 12 GB of VRAM. It also depends on the text clip encoder: fp16 worked better for some, fp8 was faster for others, etc., so maybe try other clips
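
The lengths mentioned in these threads (97, 121) follow an 8·k + 1 pattern; treating that pattern as an assumption, a small helper can snap a target duration to a nearby valid frame count:

```python
# Assumption: valid LTX latent lengths are of the form 8*k + 1
# (97, 121, ...), as seen in the default workflow values.
def nearest_ltx_length(seconds: float, fps: int = 24) -> int:
    frames = round(seconds * fps)
    # snap to a nearby 8*k + 1 value, never going below the minimum
    return max(9, (frames // 8) * 8 + 1)

print(nearest_ltx_length(4))  # 96 frames -> 97
```

Dropping the length from, say, 121 down to 97 is one of the cheapest ways to bring generation time and VRAM use down while testing.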

    • @pixaroma
      @pixaroma  1 month ago

      You can also try replacing VAE Decode with VAE Decode (Tiled); it might help with memory

  • @TheRitualChannel
    @TheRitualChannel 1 month ago

    @7:01 "RTX four thousand ninety" huh? are you a robot? cuz the rest of us say forty-ninety.

    • @pixaroma
      @pixaroma  1 month ago

      😂 I am using AI to convert my text to audio, so it sometimes reads things differently, depending on the generation. I think it didn't read it right and I gave the numbers as words; maybe I didn't format the text right, since English is not my native language

  • @procrastonationforever5521
    @procrastonationforever5521 1 month ago

    I am a fast typist! I can type 5000 chars a minute. The text is complete gibberish, random garbage though... The same goes for AI video nowadays...

    • @pixaroma
      @pixaroma  1 month ago

      I am sure it will get better in the future, like with AI images; we can already make cool images, and that started out really bad, with images of just a few pixels.

  • @carverthunderlord
    @carverthunderlord 1 month ago

    Why is this shit so goddamn archaic? Every single video I watch there are features in the video that I don't have. At 1:16 you're in a manager window downloading a video helper and I don't have those screens. smdh. I'm about to give up.

    • @pixaroma
      @pixaroma  1 month ago

      Can you post a screenshot in the comfyui channel on my pixaroma Discord server? You should have all the things I have. I will try to help, or someone from Discord will help; I'm just trying to understand what works and what doesn't, and what you get. Did you install Manager?