**EDIT: Now available to the public!**
It's a great start to this feature, but I'd love to see more: context reference for consistent environments and clothing, the ability to use multiple characters, and even some sort of lip-sync feature like Runway or Kling. That being said, Hailuo's model continues to impress me, mainly since it's fairly easy to prompt for. What have your experiences been with Hailuo AI?
So grateful I found your channel! Love how you creatively explore various workflows and workarounds to get improved results, such as the DaVinci idea and ComfyUI!!
Glad you're enjoying the content! Keep an eye out for more creative workarounds. 🙌
Interesting! Would love to see more of it
For sure! I was working on more content for Hailuo, but it takes me a bit longer to put together since generating each clip takes some time. Soon my friend!
VERY NICE!! =P
Indeed! 😁 So fun!
I'm looking forward to trying this once it leaves beta and it will only improve. It should be a big help in creating consistent scenes to tell a story.
Definitely! As you saw in the intro, you can get away with simple clothing styles to some extent, but with attire that has certain details, you won't be able to keep it consistent across other generations. Still, it's a good start. Appreciate the support bud!
@@MonzonMedia You're welcome.
Very good 👍 ❤❤
Appreciate it, it's so fun!
Still waiting for the End-Frame feature.
Indeed, it would be a nice addition. I hear they're working on some way to extend clips. That being said, you can easily do it in most video editors, or you can use Final Frame Extractor. finalframe.ai/frameextract/
Hi, thanks for this update. How do you manage to make longer videos with Hailuo? Do you grab the last frame of the video and make a new video from there, or something else? I'd appreciate any suggestions.
Yes, right now that's the only way to extend generations. I hear they're working on "extending" clips, but unfortunately I don't know when that will happen.
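For anyone wanting to try the last-frame trick locally instead of a web tool, here's a minimal sketch using ffmpeg. The filename `clip.mp4` is just a placeholder for whatever clip you downloaded from Hailuo; the extracted image can then be uploaded as the start frame for your next image-to-video generation.

```shell
# Extract the final frame of a downloaded clip (clip.mp4 is a placeholder name).
# -sseof -1   : seek to 1 second before the end of the file
# -update 1   : keep overwriting the same output image, so only the last
#               decoded frame survives
# -frames:v 1 is not needed with -update, but -q:v 2 keeps JPEG quality high
ffmpeg -sseof -1 -i clip.mp4 -update 1 -q:v 2 -y last_frame.jpg
```

Then feed `last_frame.jpg` back in as the image prompt for the next generation. The seams are usually visible on motion-heavy clips, but for slower shots it stitches together reasonably well in an editor.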
I have installed InstantID but it fails to import in my ComfyUI. I don't know what's happening.
Not sure my friend, since I don't know your setup or any errors. That being said, InstantID is so outdated these days; I recall seeing a post on Reddit about having to update pip or something related to the Python version. Have you used FaceFusion? It's a standalone face swapper and works very well. I use Pinokio to install it easily. Or you can just install ReActor, which I prefer over InstantID.
As well as FaceFusion, there's also PuLID, which works well in Comfy. Personally, I prefer PuLID as it takes more into account than just the face (like hair), but it depends on exactly what you want.
@@Elwaves2925 Oh yes of course, great suggestion!
I wish these were able to run on your own GPU. It takes so much trial and error to get what you want looking half decent, and by the time you do, you've burned through any free credits they give you.
Yeah, I hear ya. That has a lot to do with the models needing to mature and improve; video only really started gaining ground last year, but considering the progress, it's quite impressive. That being said, it's still very random, much like the text-to-image progression: SD1.5 > SDXL > Flux.