Thanks for the video and for the links to resources.
Thank you!
Thanks for sharing this!
Thanks for watching!
Thanks, tried it and it was okay, but it does not follow the prompt completely. I think the 5-second limitation impacts how correctly the prompt is followed.
Thanks for your videos. There is a video-to-video workflow in the Hunyuan custom nodes' example folder.
Yes, it requires the ComfyUI-HunyuanVideoWrapper by Kijai.
There is a video-to-video workflow in the non-official Comfy version that was released a while back.
Yes, this is the repo: ComfyUI-HunyuanVideoWrapper by Kijai
I am trying it on my 4090 card with 24GB VRAM for a 5-sec video; it's taking 30-40 minutes. But is it safe for my card, since the requirement says 40GB or 80GB VRAM? Also, sometimes I'm getting an out-of-memory error on VAE Decode (Tiled).
Hello, this does not feel right. On 12 GB VRAM, it takes me 14 minutes to generate. Maybe you have the resolution too high or you are missing some dependencies. For the VAE Decode one, I have mine set up with tile size = 128 and overlap = 32.
@@CodeCraftersCorner I was using VAE settings of 160/64 with 154 frames to be generated, but now, after changing the VAE to 128/32, it's still taking the same 30-40 minutes on my 4090. I'm using weight_dtype fp8_e4m3fn_fast; all other settings are the same as in the provided workflow.
No, in fact it took 1 hr 20 min. I actually locked my PC and it was working in the background, which may be the reason, but anyway it was nowhere near your time. Now I'm trying the same prompt with LTXV v0.9.1 to check the generation time.
Using LTXV v0.9.1, it took just 24 secs on my 4090 to generate 153 frames of video. Amazingly fast!
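For context on the tile size / overlap numbers discussed above: VAE Decode (Tiled) splits the latent into overlapping tiles and decodes them one at a time, so the tile size bounds peak VRAM during decode. A minimal sketch of the tiling arithmetic (`tile_starts` is a hypothetical helper for illustration, not ComfyUI's actual code):

```python
def tile_starts(length, tile, overlap):
    """Start offsets of overlapping tiles covering a dimension of `length` px."""
    stride = tile - overlap          # each new tile advances by tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] + tile < length:   # ensure the final tile reaches the edge
        starts.append(length - tile)
    return starts

# For a 480 px wide frame, 160/64 and 128/32 both need 5 tiles per row,
# but each 128 px tile is decoded with less memory than a 160 px one.
print(tile_starts(480, 160, 64))   # -> [0, 96, 192, 288, 320]
print(tile_starts(480, 128, 32))   # -> [0, 96, 192, 288, 352]
```

Smaller tiles lower the peak VRAM per decode pass, but the overlapping regions get decoded twice, so decode time grows as tiles shrink, which may explain why changing 160/64 to 128/32 did not speed things up.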
Is there also the possibility of img2vid, and can you please show a workflow for it? 🙂
Not with this model! If you can run the HunyuanVideo Wrapper, then there is a workflow for it.
@@CodeCraftersCorner That method just uses LLaVA to create a description of an image. It's not really I2V.
The Hunyuan Latent Video node is missing. I updated Comfy, but it's not available. Did you also install the Hunyuan video wrapper?
Hello, no need for custom nodes for this one. They are all native (built-in) nodes. Are you sure your ComfyUI updated correctly? Try to manually update if you used the Manager.
Had the same problem. Manually updating via the Comfy update folder helped.
@@johnedwards7655 Glad the manual method worked.
There is an fp8 version of the Hunyuan video model. Can I use it with this workflow?
There is a GGUF of the model, along with the Llama model used with it.
@@CGFUN829 Can you give me a link for that, please?
Hello, this is the native ComfyUI implementation. You can get the GGUF version from the GitHub page.
Updated Comfy, but no Latent Video node?
Okay, try this. In the ComfyUI folder, open a CMD / Terminal. Type git log and check whether you have commit 52c1d93. It was pushed yesterday (December 20th). It's possible your ComfyUI is not updating correctly.
With low VRAM, why haven't you made a video using the GGUF models?
I was testing out to see if it can run on my system and I shared my results.
@@CodeCraftersCorner Next one GGUF... :) Thanks
@@giuseppedaizzole7025 Okay, I will check if I can run it and check the quality. If it's good, I will make a video.
@@CodeCraftersCorner Great, really appreciate that you answer, thanks.
I tried LTX 0.9.1 here and it gives me an error because of the VAE. I tested a new VAE and got the same issue.
Update ComfyUI.
Hello, please update your ComfyUI to the latest version. You can also download the latest copy from their GitHub if you do not want to update your current version.
Unfortunately, this does not use all of the GPU in the PC.
With 24GB, it allows rendering at 480x272 px (for a 4x upscale to FHD) and 241 frames (10 sec).
Thanks for sharing!
This model does NOT do img2vid.
Yes, not for now. It's in their plans to have an image-to-video model.
Certainly not with less than 24GB VRAM. The VAE Decode will fail if your card can't handle the sampled video, but only after wasting a great deal of time on all the previous nodes, making it doubly wasteful. Don't even bother with less than 24 or 40GB VRAM.
There is one nice solution to all these complicated VRAM problems:
Let's all buy an Nvidia H100! (...once our yearly incomes reach a million dollars T_T)
I'm afraid that's not a realistic solution for most of us!