Hi man!
Thanks for the videos. I have been going through them for the past month and have learned quite a lot,
but there is still something I'm facing difficulty with: consistent realistic human models (a single model) in specific different poses. I was able to get the poses correct using ControlNet, but the human model is extremely inconsistent when it comes to its face and body shape.
any suggestions?
Thanks!
Hi, can you do a video on the Triton installation on Windows? From everything I've seen online, it looks complicated. Thanks.
Well brother, I've been trying this since yesterday and I've got it down to 42 s / 20 steps :) no quality loss, even LoRA is working. I'm going to share my workflow; try it and test it.
and my vram is 6gb rtx 4050
Can you do a video-to-video workflow for Hunyuan?
Should we be using the --lowvram ComfyUI launch parameter?
It works for me, thank you! Could you also share the Flux workflow?
Sure, I will. But you can integrate this into any Flux workflow. Just add the 'Apply First Block Cache' node in this way:
Model -> Apply First Block Cache -> (whatever the model was connected to)
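For anyone curious what that node actually does under the hood: the idea is to run only the first transformer block each step, compare its output to the previous step's, and if the change is tiny, reuse the cached output of all the remaining blocks instead of recomputing them. Here's a minimal sketch of that caching logic in Python; all names (`FirstBlockCache`, `threshold`, etc.) are illustrative, not the real node's API, and real implementations compare tensors, not scalars.

```python
# Illustrative sketch of the first-block-cache idea (NOT the WaveSpeed code):
# if the first block's output barely changed since the last denoising step,
# skip the remaining (expensive) blocks and return the cached result.

class FirstBlockCache:
    def __init__(self, threshold=0.1):
        self.threshold = threshold   # relative-change cutoff for a cache hit
        self.prev_first = None       # first-block output from the last step
        self.cached_rest = None      # cached output of the remaining blocks

    def step(self, x, first_block, rest_blocks):
        h = first_block(x)
        if self.prev_first is not None and self.cached_rest is not None:
            # relative change between this step's and last step's first block
            diff = abs(h - self.prev_first) / (abs(self.prev_first) + 1e-8)
            if diff < self.threshold:
                self.prev_first = h
                return self.cached_rest  # cache hit: skip the heavy blocks
        out = rest_blocks(h)             # cache miss: full forward pass
        self.prev_first = h
        self.cached_rest = out
        return out
```

A lower threshold means fewer skipped steps (closer to baseline quality); a higher one means more speedup but more drift, which is the trade-off the node's setting exposes.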
Can you do a video on how to install it locally, please?
Please integrate it with image to video
Not possible on Hunyuan.
Can someone confirm whether an RTX 30-series card with 6 GB VRAM and 16 GB system RAM would work?
If you download the lowest quantized version (Q3_K_S), then it'll probably work, but it will be slow.
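Rough arithmetic behind that suggestion: model size scales with bits per weight. The bits-per-weight figures below are approximate (based on common llama.cpp quant descriptions), and the ~13B parameter count for Hunyuan Video's transformer is my assumption, so treat the numbers as ballpark only.

```python
# Ballpark model-file size for different quantization levels.
# Assumes ~13B parameters (approximate for Hunyuan Video's transformer).

def model_size_gb(params_billion, bits_per_weight):
    """Approximate weight size in GB: params * bits / 8, in base-10 GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, bpw in [("fp16", 16), ("fp8", 8), ("Q8_0", 8.5), ("Q3_K_S", 3.5)]:
    print(f"{name}: ~{model_size_gb(13, bpw):.1f} GB")
```

By this estimate, a Q3_K_S file lands a bit under 6 GB, which is why it's about the only option on a 6 GB card, while fp8 wants roughly 13 GB and fits comfortably only on 16 GB+ cards (with offloading making 12 GB workable).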
first to view
I still have the issue with GGUF, but you don't really need it. I got Hunyuan working now with the "Load Diffusion Model" node + hunyuan video t2v 720 fp8, weight type fp8_e4m3fn_fast, and a DualCLIPLoader with clip_l + llava llama fp8_scaled, type hunyuan_video. -> WaveSpeed still speeds up the process, even on my very old PC with only an RTX 3060 with 12 GB VRAM.
Yes, fp8 will give better results than most GGUF quants (except Q8) 💯🔥
How long does a video take you? I have an RTX 3060 12 GB and an 11th-gen Intel i7, and it takes 50 minutes to make a video with a LoRA. I use Hunyuan fp8 and llava fp8 scaled, all in fast mode.
That's far too long. Do you have WaveSpeed running? I have a workflow without upscaling where I can do one in 152 s with 20 steps.
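To put those two timings side by side (assuming both runs use 20 steps, which the 50-minute run may not):

```python
# Per-step time for the two reported runs, assuming 20 steps each.
def sec_per_step(total_seconds, steps):
    return total_seconds / steps

print(sec_per_step(50 * 60, 20))  # 50-minute run
print(sec_per_step(152, 20))      # WaveSpeed run
```

That's roughly 150 s/step versus 7.6 s/step on the same GPU class, which is why checking that WaveSpeed (and the fp8_e4m3fn_fast weight type) is actually active is the first thing to verify.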