LTX Video 0.9.1 With Flow Edit Video2Video - A Game Changer For Local AI!
- Published: 6 Feb 2025
Discover the new LTX Video Model 0.9.1, a game-changer in AI video generation! This updated version introduces enhanced motion consistency, improved text-prompt following, and reduced morphing for smoother, more detailed outputs. In this video, we explore the latest features of the LTX Video Model, including its integration with ComfyUI and the powerful STG guidance method, which takes AI video production to the next level.
ComfyUI Kokoro TextToSpeech With LatentSync For LipSync (Run On Cloud)
home.mimicpc.c...
Follow along as we demonstrate workflows for text-to-video, image-to-video, and video-to-video editing using the latest LTX Tricks custom nodes. Watch as we transform basic inputs into stunning visuals with seamless frame transitions, enriched detail, and consistent styling. We also highlight the benefits of the updated architecture, which optimizes VRAM usage and supports a wide range of creative applications, from cinematic scenes to stylized animations.
Whether you're a professional creator or an AI enthusiast, this tutorial will empower you to create stunning AI-generated videos on your local machine. Don’t forget to like, share, and subscribe for more insights into cutting-edge AI video tools and workflows!
If you like tutorials like this, you can support our work on Patreon:
/ aifuturetech
Discord : / discord
ComfyUI-LTXTricks
github.com/logtd/ComfyUI-LTXTricks
Two attached workflows, LTX with STG and LTX Flow Edit V2V (freebies):
www.patreon.com/posts/ltx-video-0-9-1-118605761?Link&
Thank you for a great video. I would like to add some of my findings with I2V after exploring LTX (0.9.0):
- With mp4 compression, for realistic video the optimal CRF value is ~30; CRF 50-60 will degrade quality a lot and cause diverse, extreme motion. For art and anime there is usually no need for detailed texture, so if you want more motion, CRF 50 will yield more diverse animation.
- For prompting, keywords like "slowly" and "gradually" make smoother, more consistent animation, but the output will be quite slow-motion, so interpolating to increase the FPS is a good choice. Adding "very slowly" makes it even slower.
- LTX is very good at landscapes / video without people, so I2V or T2V with detailed prompts will yield great results most of the time.
- Don't include multiple or sudden camera-movement keywords (e.g. "shift", "suddenly"), as they create morphing animations (i.e. frames 0 to 20 are consistent, then the video suddenly shifts to an entirely different scene).
- For dancing/ more rapid character movement, I have good results with this kind of prompt "[Subject] begins to dance with graceful precision, his movements fluid and intentional".
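The CRF advice in the first point above can be sketched as a small pre-processing step. This is a hypothetical helper (the function name and flow are mine, not from the video): it only builds an ffmpeg argument list re-encoding the conditioning clip at a CRF suited to the content type, assuming ffmpeg is installed on your PATH.

```python
def build_crf_command(src: str, dst: str, realistic: bool = True) -> list:
    """Build an ffmpeg argument list that re-encodes src at a CRF
    suited to the content type, per the findings above."""
    # CRF ~30 keeps texture for realistic footage; CRF 50 drops fine
    # detail and tends to produce more diverse motion for art/anime.
    crf = 30 if realistic else 50
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-crf", str(crf), dst]

cmd = build_crf_command("input.mp4", "conditioned.mp4", realistic=True)
# run with subprocess.run(cmd, check=True) once ffmpeg is available
```

You would then feed the re-encoded clip into the I2V/V2V workflow instead of the raw source.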
Very cool tutorial! This year is going to be amazing for AI vids.
Yes, it's going wild.
Bro, your channel is going to give me a heart attack. LOL! You're a machine with these video updates. Awesome work.
hehe.. thanks. Have fun on holiday!
I will continue with the web app development for Hunyuan Video Web UI. ;)
Loras are also possible, the model really has potential. I believe they have now also announced official Lora support.
I know a guy who even fine-tuned this base model. And it's awesome! The next video will talk about this.
@@AIBusinessIdeasBenji that sounds great! 🙂
I am waiting for ControlNet support too 😊
Thank you for your work, may I ask what is the best model you would recommend to use for picture to video? For the highest quality results
Open source?
@@TheFutureThinker if you have both recommendations I’ll take it open and not open 👌
Thanks for the video, What is the min vram for this?
It needs about 12 GB.
@@TheFutureThinker oh nice, thanks.
Great vid! We finally have V2V that runs PC-friendly, like AnimateDiff before!
Yes, this is it. The next one.
I have tried LTX quite a few times already; overall I've had a great experience with it. Though one thing I struggle with is camera movement, e.g. zoom out / pan right / pan left etc. doesn't seem to work. Have you found a consistent method for camera movement?
It needs a very detailed prompt for camera panning, and version 0.9.1 does improve prompt following.
Good luck ;)
Great video! One question. Where does Florence-2-Flux-Large install to?
models\llm\Florence-2-Flux-Large
It can be downloaded manually here: huggingface.co/gokaygokay/Florence-2-Flux-Large/tree/main
Or use the Download Florence 2 Loader node, which auto-downloads it.
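The manual download above can also be scripted. This is a sketch under stated assumptions: `florence_target` is a hypothetical helper name of mine, the path mirrors the `models\llm\Florence-2-Flux-Large` folder mentioned above, and the actual download uses the real `huggingface_hub.snapshot_download` API (requires `pip install huggingface_hub`).

```python
from pathlib import Path

def florence_target(comfy_root: str = ".") -> Path:
    """Resolve the folder the Florence2 loader reads from:
    <ComfyUI root>/models/llm/Florence-2-Flux-Large."""
    return Path(comfy_root) / "models" / "llm" / "Florence-2-Flux-Large"

if __name__ == "__main__":
    # Network download; needs: pip install huggingface_hub
    from huggingface_hub import snapshot_download
    snapshot_download(repo_id="gokaygokay/Florence-2-Flux-Large",
                      local_dir=str(florence_target()))
```

Run it from your ComfyUI root, then restart ComfyUI so the loader picks up the files.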
@@TheFutureThinker Thank you so much! Just going through this now. Cheers.
Which is better, this one or the Nvidia one?
The guy in the apocalyptic city wouldn't be able to walk with a knee like that 😆
Jokes aside though, I'm curious why it ignored the boy turning 180° while walking away from the camera.
great channel! thx a lot but i have 1 error at the end of the flow:
ImageConcanate: Sizes of tensors must match except in dimension 2. Expected size 111 but got size 105 for tensor number 1 in the list.
I just deleted the compare nodes since I don't need them, and now the error is gone. Also noticed that if you tell it in your prompt to move slowly and use 30 fps plus 40-50+ steps in the scheduler, you get a much better result. It still renders quite fast, which is amazing on a 10-year-old PC with an RTX 3060 with 12 GB VRAM :-)
Hi, and yes, the compare node is for demo purposes. It needs both the reference and output videos to be the same resolution.
You usually don't need it when generating video.
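The tensor-size error above (expected 111, got 105 in dimension 2) means the reference and output clips differ in width. A minimal sketch of picking a shared size before the compare/concatenate step; `common_size` is a hypothetical helper of mine, and the actual crop or resize would happen in your frame pipeline:

```python
def common_size(size_a: tuple, size_b: tuple) -> tuple:
    """Return the largest (width, height) both clips can be
    center-cropped to so concatenation sees matching tensors."""
    return (min(size_a[0], size_b[0]), min(size_a[1], size_b[1]))

# e.g. a 888x504 reference next to an 840x504 output:
target = common_size((888, 504), (840, 504))  # both crop to 840x504
```

Alternatively, as the reply says, just remove the compare node when you don't need the side-by-side preview.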
Florence2ModelLoader: Missing Node Types.
When loading the graph, the following node types were not found. It is still missing after installing missing nodes; what should I do?
what is the minimum vram
Hello, can anyone help me? I'm getting this error: Missing Node Types
When loading the graph, the following node types were not found
Florence2ModelLoader
The LTX-Video node is still broken in the ComfyUI Manager installer. The only way is to install manually via Git URL.
Nope. While it doesn't show up in the Install Missing Custom Nodes, it is searchable to install from Custom Nodes Manager and works 100% when installed from there.
in my experience so far, LTX 0.9.1 has higher VRAM requirement than LTX 0.9, even though 0.9.1 is half the size.
Yes, more stuff to extract in the execution
Anyone else getting cross-hatching in moving highlight areas like a flickering flame? Looking for solutions in DaVinci Resolve. Topaz upscale seems to ignore cross-hatching. I was wondering if I should optimize footage in Resolve before upscaling in Topaz? Anyone else working in this way?
I can't seem to get the 0.9.1 model to run on my 2080 Ti without getting an OOM error.
2080 Ti? Have you emailed LTX to ask why it happened?
@TheFutureThinker no, I didn't think to do that
CR Text Replace
"list" object has no attribute "replace"
Benji stays on it.
Always on it.
I tried this scheme, it's just terrible 👎👎👎 There are other LTX workflows that work much better!
Very hard to control the output result. This is currently not the game changer you said it was.
Depends how you use it. If you mean control like a video editor? It won't be. It's AI.
Have you used something that gives greater control? Personally I find everything pretty much the same at the moment: just with different ways of getting to the same results.
Hi, I always get this error:
"Error(s) in loading state_dict for VideoVAE:
size mismatch for decoder.conv_in.conv.weight: copying a param with shape torch.Size([1024, 128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3, 3]).
size mismatch for decoder.conv_in.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoder.up_blocks.0.res_blocks.0.conv1.conv.weight: copying a param with shape torch.Size([1024, 1024, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3, 3]).
size mismatch for decoder.up_blocks.0.res_blocks.0.conv1.conv.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for decoder.up_blocks.0.res_blocks.0.conv2.conv.weight: copying a param with shape torch.Size([1024, 1024, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3, 3]). etc"
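For debugging errors like the paste above, it can help to list exactly which parameters disagree. This is a sketch of mine, not part of the workflow: it compares two {name: shape} mappings, e.g. built from the checkpoint file and from the current model's `state_dict()` in PyTorch.

```python
def shape_mismatches(ckpt_shapes: dict, model_shapes: dict) -> dict:
    """Return {param_name: (checkpoint_shape, model_shape)} for every
    shared parameter whose shapes disagree, like the conv_in weights
    in the VideoVAE error above."""
    return {name: (tuple(ckpt_shapes[name]), tuple(model_shapes[name]))
            for name in ckpt_shapes.keys() & model_shapes.keys()
            if tuple(ckpt_shapes[name]) != tuple(model_shapes[name])}
```

In this specific case, though, the mismatch isn't a corrupt file: the 0.9.1 model simply uses a new VAE architecture and needs the new VAE loader.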
Check out 02:20. The VideoVAE error is because you need the new VAE loader for the version 0.9.1 model's VAE architecture.
@@TheFutureThinker Yes, I have updated ComfyUI from the Manager but still get the error. Maybe I'll update another way. I'll try, thanks.
Try not to use ComfyUI Manager to update in that case. Do a git pull instead.
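The manual install and update mentioned in this thread look roughly like this. A setup sketch, assuming the default ComfyUI folder layout (`ComfyUI/custom_nodes`) and the ComfyUI-LTXTricks repo linked above:

```shell
# First-time manual install: clone the node pack into ComfyUI's
# custom_nodes folder, then restart ComfyUI.
cd ComfyUI/custom_nodes
git clone https://github.com/logtd/ComfyUI-LTXTricks

# Later updates (instead of the Manager's update button):
cd ComfyUI-LTXTricks
git pull
```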