A super good tutorial, and above all, thank you for the free and beautiful workflow!
Glad you liked it, will be giving you more videos like this
With this workflow, I'm not sure what I'm missing, but the video (before upscaling, etc.) generates as if it wasn't completed... only some weird artifacts are rendered, no matter which model version I switch to in the loader section. My simple Hunyuan workflow works, so I'm not sure what I've misconfigured on this one. Any tips?
I'm experiencing an issue when generating videos in ComfyUI. I'm using the latest version of ComfyUI and have tried the models shown in the tutorial video. While I can generate videos in my other workflows, this specific one produces blocky and pixelated results (it looks corrupt). To troubleshoot, I've tried disabling the Power Lora node and a few others, but the issue persists. Does anyone know how to resolve this?
Thank you so much for clarifying that Hunyuan text-to-video needs a simple, shorter prompt. I was trying approaches that were more complex and verbose (e.g., "ChatBot, write me a prompt for...").
The workflow looks great. Thank you. Keep on!
I hadn't heard of the video styler node before, so thanks for that. My previous workflow fed both positive and negative prompts into the same input, which seems odd and suggests it takes no notice of negative prompts; they certainly don't always seem to have an effect.
I'm stuck at the KSampler: it shows 0/8 and then crashes. I'm using the Image Motion Guider and get this error: shape '[1, 10, 17, 30, 16, 1, 2, 2]' is invalid for input of size 337280. I'm new to this and need help, thank you.
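That error is a tensor reshape mismatch: the product of the target shape's dimensions has to equal the number of elements in the incoming tensor. A minimal sketch of what's happening under the hood (plain PyTorch, not the actual node code; the shape and size are taken from your error message):

    import math
    import torch

    # The node tries to view a flat buffer as this 8-D shape:
    shape = (1, 10, 17, 30, 16, 1, 2, 2)
    print(math.prod(shape))  # 326400 elements required

    # ...but the incoming tensor holds 337280 elements, so the view fails:
    x = torch.zeros(337280)
    x.view(shape)  # RuntimeError: shape '[1, 10, 17, 30, 16, 1, 2, 2]' is invalid for input of size 337280

In practice this usually means the resolution or frame count fed into the Image Motion Guider doesn't match what the node expects, so try the exact width, height, and length used in the tutorial before changing anything else.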
My Show Prompt box isn't displaying any text. Any idea why?
One question. Apologies if I missed the explanation in the video, but why must the prompts be so short to get effective results?
Great video! Do you have any suggestions for preventing Hunyuan from creating multiple shots in one video? Sometimes it'll make 2 or 3 very short (and not very cohesive) shots in one generation.
I've noticed the same thing. Try simplifying the prompt; it gets confused easily. For example, if you prompt "a man running down the street, the man has a blue helmet on," Hunyuan sometimes thinks you're asking for two shots: the first being the man running down the street, the second being another man with a helmet. So it's best to mention each subject just once. The simplified prompt would be "a man with a blue helmet running down the street." The subject is clear and the action is clear. Hope this helps.
I got this error with every mp4 clip: the HunyuanVid mp4 could not be loaded with cv.
Just bypass the Video-Comparison node, and it works
excellent content. thank you, sir
Glad you liked it!
Thank you very much!
You're welcome, thank you for watching
So is this creating a prompt from the video or image rather than using the actual input video as a blueprint for the generated one? Is there any way to do image-to-video or video-to-video?
The native implementation does not support vid2vid; only kijai's wrapper does. And neither supports image-to-video. The devs promised an image-to-video version before the end of this month.
Can this work on a Mac at all? And can you do image-to-video instead of text-to-video?
Hunyuan doesn't support image to video yet.
I'm trying to run it right now, but I'm having issues getting all the right files into the right folders so it makes anything other than white noise. I wish we could adopt a naming convention for files so it was blatantly obvious which folders they go into... maybe I haven't been at this long enough... but I find that's the biggest learning curve with this stuff. Oh, at the end he explains which files go in which folders... I was so eager to get it running that I didn't watch to the end, since I don't need to know about upscaling UNTIL I can generate an image.
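For anyone else stuck at the same point, this is roughly where the files land in a default ComfyUI install; the example file names are just the common Hunyuan ones, so substitute whatever variants you actually downloaded:

    ComfyUI/models/diffusion_models/  <- the main Hunyuan video model (e.g. hunyuan_video_t2v_720p_bf16.safetensors)
    ComfyUI/models/vae/               <- the Hunyuan VAE (e.g. hunyuan_video_vae_bf16.safetensors)
    ComfyUI/models/clip/              <- the text encoders (e.g. clip_l.safetensors, llava_llama3_fp8_scaled.safetensors)
    ComfyUI/models/loras/             <- any LoRA files the Power Lora node loads
    ComfyUI/models/upscale_models/    <- the upscaler used after generation

If a loader dropdown still doesn't show a file after you move it, refresh the browser tab or restart ComfyUI so the lists repopulate.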
8s at 24p is all I can do on a 4090.
6th like 1st comment
Thanks for your kindness
Thanx for the like
Can Hunyuan Video create long videos, like 30 seconds or 1 minute?
The longest I did is 5 seconds.
It probably would if your GPU had a TB of RAM. With a 4090 at 720p I can only get 125 frames, or about 5 seconds.
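For context, that 125 isn't arbitrary: as I understand it, Hunyuan's VAE compresses time by 4x, so valid frame counts follow 4k + 1, and the clip length is just frames divided by the frame rate. A quick back-of-the-envelope check:

    fps = 24
    frames = 4 * 31 + 1   # 125, the largest 4k+1 count that fits in 24 GB at 720p for me
    print(frames / fps)   # ~5.2 seconds

    # A 30-second clip would need ~720 frames (721 to satisfy 4k+1),
    # which is far beyond what current consumer VRAM can hold:
    print(30 * fps)       # 720

So longer videos are less a model limitation than a memory one.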
Sub'd
Thanx