Your videos are the best, how about a complete guide in text like you did on midjourney for kling there? I loved Tao goblin 🤣🤣
I do plan on making more guides for Kling, maybe one for camera motions & probably a beginner overview guide as well.
For text-to-video I'd recommend creating images in their image generator first and then animating them into videos
Thank you Tao!! I was wondering how to use that new feature...and thank you for using your credits to show us.
It works pretty well, although it does cost a lot of credits 🥲
great video bro. i would like to see a video editing tutorial including audio edits
Cool stuff! Thanks for the video.
I was super excited to see this update!
@@taopromptsDo you find that the quality of these videos is better than importing upscaled images in the other workflow you shared in your other video?
@@ShawnSmithTO The quality is really good, but for more control over the content in the images, I would use image-to-video from the other video
Superb update.❤
I've been expecting this update for a while!
Super duper cool!!
Nice info 👍
Fantastic update from Kling!! Thanks, Bro. Once Kling gets as good as MiniMax with animations and prompt adherence and rendering motion in natural speed, then I will likely switch to Kling, hehe (if they add an unlimited generations plan).
@@FilmSpook 😂😂😂
Maybe in version 2.0! They are adding a lot of new updates all the time
@@taoprompts True! I also like Runway's updates. I plan on switching back to Runway for a month, at least, to make a few short films, hehe. Cheers
good stuff
I'm curious if you could make a few videos with a 2D animated character. 10 videos of a close-up of the head with mouth movements. 10 videos of a medium shot with arm and hand gestures. 10 videos of full body shots with a walking cycle, a turn around, and sitting down. Would this train the AI to animate a 2D character in the same way? Would it maintain the 2D animation style for the scene? With chromakey this could help my workflow.
That's certainly worth trying. My hunch is that AI still struggles with 2D animation; the base model hasn't been trained with enough 2D data, so even if you fine-tune it with your own 2D animation it will have a hard time maintaining consistency. But I'd have to test it to know for sure.
3:41 Forgive me bro, but it's really funny! lol🤣
😅 That Tao-Goblin looks dope! I'd love to see that character in a short film, for real, hehe.
Goblin got a hair transplant
This is awesome. do you think it's possible to do this with an animated cartoon character?
I think this is only for realistic characters right now
@@taoprompts That's what I thought, still an amazing feature!! we are getting really close to generating whole scenes with consistent characters. Very exciting!!
Great video - can I ask why you think it only works for text prompts and not image-to-video (do you think this is coming)? Great channel 👍
Probably because you already have the character in the video
If you use image-to-video, the character you want should already be in the image. So in that case there's no need to train another model to get that character
That’s great - but in order to keep a consistent environment as well - just wondered if the option would be available
So in your opinion is Kling the best AI video generator right now?
Ya, he does think it's the best. He's mentioned that in a previous video.
@melee75 It depends on one's personal use. Kling is overall great, but does have its weak areas, as they all do. MiniMax's animations are much better than Kling, Runway, and Luma, and MiniMax overall follows prompts better, but it doesn't yet have 10-second animations and lip-sync.
Overall yes. They are also adding new features the fastest
Can we train videos made from images using turnaround sheets in video format?
Sure, although each video used in training should have one face in it
Good video, Can I train the model with my cat?
I think this only works on humans right now
But what if I don't want to use my own face as a model??? I want my beautiful AI character, what should I do
create images of a consistent character -> animate them into videos -> use those videos to train this model.
Or just use the animated videos in step 2
This should be FREE, or at least a lot cheaper, since after that you'll only be able to generate the videos in Kling. It would be a way to bring people to the service.
What if u were AI from start but just rephrase everything? Ala hahaa awesome
You can definitely train this model with AI video samples of a character 👍
slow motion 😴
Exactly. This is why MiniMax is (currently) the king of producing real-time motion animation by default. Slo-mo is overused in AI filmmaking because most generators produce slo-mo video by default (Runway, Luma, Kling, etc.), and even their "fast motion" doesn't look natural, except for MiniMax, which renders natural-speed animations most of the time.