Features I would like to see in Minimax: 1) clips twice as long, 2) clip extending, 3) character reference/location reference, 4) an Act-One-style feature, 5) camera control, 6) and yeah, overall improvement of the model. Go Minimax!
Lipsync
Good list!
Minimax gives you everything from slow motion to epic action. The only vid generator to do that right now
Still can't understand Minimax prompt structure 😢
Minimax is pretty incredible!
@ZorenStudio55 For image-to-video, you could try a MiniMax GPT in ChatGPT: you give it the image and it returns a prompt in English and Chinese. Paste that text into MiniMax along with the image (a rough sketch of that step is below).
Also, I've noticed that in Minimax, if you start from something like a photo of a lake (I posted a mock Loch Ness monster sighting on my X profile, as well as a Jaeger mech rising out of the lake, made from a couple of photos I took) and add to your prompt that the water should "interact with the body" as it rises out of the lake, it responds amazingly well. Also, if you take photos during the golden hour (shortly after sunrise or just before sunset), you can get some really nice cinematic-looking shots, and your subject will tend to be lit correctly to match the lighting of the original image.
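A minimal sketch of that image-to-prompt step, assuming the official OpenAI Python client and a vision-capable model are used instead of the custom GPT in the ChatGPT UI; the model name, instruction wording, and file name here are assumptions. The returned text is what you would paste into MiniMax alongside the image.

    import base64
    from openai import OpenAI  # assumes the official "openai" Python package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_minimax_prompt(image_path: str) -> str:
        # Ask a vision-capable model to describe the still image as a short
        # image-to-video prompt, in English and then in Chinese.
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any vision-capable model should work
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this image as a short image-to-video prompt, "
                             "first in English, then in Chinese."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # Paste the printed text into MiniMax together with the same image.
    print(draft_minimax_prompt("lake_photo.jpg"))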
👍🏾👍🏾
Good to know!
Fully agree. I used Minimax and some of Runway's Act-One for the short animated children's music 🎶 video I made and posted yesterday. Normally I would use image-to-video, but I just used text-to-video for the animation. It was still pretty consistent, as the prompts helped facilitate that.
Definitely good to kitbash!
@ 🥰😃
Yeah, MiniMax has been the best for me - just praying they introduce advanced features soon.
Same here, and I did see recently that they are soon to release 10-second-long generations, which would greatly help filmmaking and storytelling with MiniMax. Their "6 second" videos are not even a full six seconds; they are slightly under, but rounded up to 6 seconds.
Agreed!
I'm hoping they will adopt the character reference feature built into Vidu. Vidu's reference-to-video feature is awesome for creating character or object consistency. However, the quality isn't quite up to snuff and doesn't keep up with the MiniMaxes and Klings of the world. I did speak with the devs there, and there is some exciting news about their platform that I can't discuss; let's just say they're going to raise the bar for features and functionality with a higher-quality model. I want to see who will be the first to allow multi-character reference, or a multi-character trained model, so we can prompt more than one consistent character. That would speed up the process and give us more control over scenes without having to stitch things together. I'm super excited to see what's coming around the corner.
Agreed. I used Vidu to help with some shots in a couple of projects, and the character reference is excellent, but yes, their videos are too choppy, even when using extra credits to upscale them. I don't know what they were thinking at Vidu, because they should have fixed their choppy-video issue months ago. 😅 Almost no one talks about Vidu, so I appreciate that you brought them up and that they are soon releasing some great updates. Cheers
@FilmSpook Thanks for the reply; it's been awesome watching these tools grow. I actually did my best to make this video look as good as I could without having Topaz. I made the most of the clips I had, but I think it came out pretty decent considering: ruclips.net/video/cdCLKhZvCVM/видео.htmlsi=IZJUzYXlOzKqkGDn
Oh! I can't wait to see what Minimax releases then!! Exciting.
We're excited to see what the updates are!
Refuge, got a question - why did you upscale the videos separately? Wouldn't it be better to first edit the full video, then toss the final cut into Topaz and upscale one video instead of the source materials?
Why would I keep many heavy files, most of which I might not use (or only use tiny portions of), and waste space and time upscaling them all, instead of finishing the project and then upscaling the final thing?
Great point. I find that MiniMax generally doesn't need upscaling; 720p is more than enough resolution unless you are doing it professionally, of course, but I use CapCut's free online upscaler sometimes. However, upscaling can sometimes remove some of the realism by adding too much sharpness, if the original A.I. video was done in a photorealistic style.
For a couple of reasons. First, not every scene in a project needs to be upscaled. Also, upscaling is often hard on a computer, so it's safer to do it one clip at a time to mitigate computer issues.
It's worth adding the phrase "not talking" to Minimax prompts.
True!
Hold on. Based on all the stats, it used to be: Runway, Kling 1.5, Minimax, Luma. How come it jumped to spot 1?
Runway is currently FAR behind MiniMax when it comes to two or three key things for A.I. filmmaking. It generally doesn't follow prompts as well; MiniMax has very advanced prompt accuracy, and your prompts can be up to 2,000 characters, which is FAR more than Runway allows. Runway doesn't animate human actions as well or as accurately as MiniMax. Runway also blocks and censors too strongly, and it has much more morphing than MiniMax. And MiniMax renders most animations at real-time speed rather than in slo-mo like Runway usually does. Runway has MANY cool additional features that MiniMax doesn't have, but when it comes to animation capabilities, MiniMax is overall much more advanced, and that's the reality (for now, at least).
@FilmSpook What about Kling 1.5?
Keep in mind - Runway comes with an entire suite of tools.
Minimax produces really realistic results. The others often only make slow motion. Thank you so much for making the comparison video :)
Thanks for watching!
That's a good one. Congrats refuge
Appreciate ya watchin'!
Also, I noticed Kaiber has a Minimax API. Is it using the same video render model as Hailuo?
Yes, it does, from what I understand, as licensed to them.
Thanks for this! I have a bunch of unused credits on Kaiber, so this is excellent news. I see that they also have Kling.
Yes!
For the image-to-video animation I've been doing (which is not realistic imagery), I've found Luma gives me the best results. The camera control options are good, and strangely, it sometimes seems to do better when you don't prompt it at all. It seems to understand the details of the image and interpret them properly, whereas it is often confused by the prompt. Comparing the same image animation in Luma vs. Minimax, the Luma outputs are much better. Luma's extend feature is super useful and creative, too.
We appreciate the tips! We'll give Luma more tries
But Minimax doesn't have lipsync yet? How do you make characters talk?
Haha! How do you get them to stop talking? Jokes aside, I love Minimax and use it every day, but I do struggle to generate videos of characters not talking, despite using keywords such as "silent," "quiet," "the character remains quiet," and "not talking." Someone even suggested prompting "with their lips glued together" and "with their lips stuck together," but my characters just want to talk, talk, talk!
@EmilyNilsen Is that so? I haven't seen that problem myself. But I also haven't used Minimax much, due to the lack of lip sync, so I have mostly been using Kling & Runway for now.
Lots of other tools like LivePortrait or even Runway
Yes!!! I started using it last week
Hope you like it!
Very good explanation. Thank you!
Glad it was helpful!
I love AI and Curious Refuge. I'm crazy about taking one of your courses; I hope I can save a little cash and be able to do it in the future, because you are fucking incredible. Congratulations!
Nice!
Thanks!
Like your content so much!😊
Thank you so much!
Love using Hailuo AI
What's the avg time for generation?
@aashay Luckily I am an affiliate and use the unlimited plan. I would say the average time to generate is 60 seconds.
@The-AI-Experiment Oh nice. So if I subscribe to the unlimited plan, I'd probably get a similar generation time.
@The-AI-Experiment How do you become an affiliate?
@aashay Possibly
This is simply the best neural network of all the ones I've tried. I've already made around 400 clips :)
Appreciate you watching!
👍🏾👍🏾YES, thanks!!! MiniMax is my go-to for A.I. filmmaking at this time. I recently left Runway, as its animation capabilities are far behind MiniMax, if we're being honest. Runway's recent updates are cool, but they still are not focusing on what matters most for A.I. filmmaking, which is providing excellent capabilities for animating humans at real-time speed (not that slo-mo BS that Runway's A.I. often forces by default). 😅
Minimax is amazing; just keep in mind that Runway has some excellent other AI tools in its suite besides generators!
Tx for sharing. Good quality.
Glad you enjoyed it
Needed this video yesterday 😭😭😭
Sorry we got to you late!
Amazing
Thank you! Cheers!
It would be extremely helpful if you could share the prompts used for each of the images that you created using Minimax. Thanks.
Will keep that in mind for next time!
Thank you! 👍 I have been trying to get my characters to simply be quiet and not talk/move their mouths. No success yet. Does anyone have any prompting tips?
Jump in our discord and we'll try and help!
✌️🔥
Thanks for watching!
❤️❤️❤️
Thanks for watching!
10:18 How would a common user know which models are the best (like Theia)?
I see this a lot with AI tools where there are a dozen models to choose from. Where does one find out the difference between all these models?
Certainly difficult but we just subscribe to a lot of sources :)
I'm sure you said it, but now I can't find where. Where were the original images generated? I have a huge hit-and-miss rate when insisting the images be photorealistic, especially when doing more fantasy-related images.
Typically we generate in MJ
Can't slo-mo shots be sped up fairly easily in Resolve/Premiere?
Yes, although sometimes it still doesn't look quite right.
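If you'd rather batch this outside Resolve/Premiere, here is a minimal sketch that retimes a slow-motion clip with ffmpeg via Python; it assumes ffmpeg is installed and on the PATH, that the clip has no audio track (typical for these generations), and the file names are placeholders.

    import subprocess

    def speed_up(src: str, dst: str, factor: float = 2.0) -> None:
        # Play the clip `factor` times faster by compressing its video timestamps.
        # Audio is dropped (-an); use an atempo filter instead if the clip has sound.
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-filter:v", f"setpts=PTS/{factor}",
             "-an", dst],
            check=True,
        )

    # Example: bring a slow-motion render back to real-time speed.
    speed_up("minimax_clip.mp4", "minimax_clip_2x.mp4", factor=2.0)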
Finally y'all say MINIMAX is number 1 and better than runway because it is true
Yes, from a filmmaker's or pro's perspective, MiniMax is currently the king when it comes to animation using A.I. But it ultimately depends on what you're using it for. For product videos, Runway is better than MiniMax and can do 10 seconds or more.
Better because they've scraped copyrighted films, games, and content; they're Chinese-based and don't care about American copyright laws.
Minimax is definitely a pretty strong contender these days!
🎉🎉🎉🎉🎉🎉❤
Thanks for watching!
Why's everything gotta be "cinematic"? Jeez! Meanwhile, you can't do a "cinematic video" with 5-second clips and no lip-syncing. You can do short clips for social media. If you want to do something longer with a story, dialogue, or narration, you'll need at least Kling or Runway. Minimax is too limited at this point. "Cinematic".
I am doing stories and marketing using Runway and Midjourney, and it's a lot of fun.
I was wondering whether to switch to Kling or Minimax.
Why don't you recommend Minimax? Why is it worse than Runway?
Also, what is your opinion on Kling?
@adarwinterdror7245 Minimax is better when you want more physical control, Runway is better for camera movement, and Kling is like a mix of both.
@adarwinterdror7245 Minimax lacks other features. The only amazing thing about Minimax is the actual quality of the videos, but that's about it. There's no Extend Video, End Frame, Character Reference, Lip Sync, or Video to Video. It ONLY has Img2Vid and Txt2Vid.
Because we focus on ai filmmaking :)
I tried to make a text to video with the prompt "baby and a puppy" and I even tried adding specific details. Both times it said it goes against their guidelines or something. I'm like what???
You're using the Standard feature of Kling, which doesn't utilize their best model. You can literally direct the scene using Motion Brush. No one on YouTube knows how to take full advantage of the tools given to us.
We appreciate the feedback! Sometimes we record the videos before these features come out.
I do appreciate the effort you put into your research and your ability to coalesce it all into digestible and useful segments/ posts. As someone who has been in the business for close to as long as you have been alive, I just wanted to share this unsolicited constructive criticism. There’s no need to have your face in the entire video. I do like knowing who’s talking though so it’s good to cut back and forth between the reference footage and your talking head. I get that you’re good looking, and well coiffed with a great fashion sense…and I give it up to you for all of those blessings, truly. In fact, that may just be my issue. You’re so good looking that it can be a bit distracting from the mission of the post. You’re welcome to delete this comment after reading. Thanks and keep up the good work.
I love the non-serious self-awareness in this comment.
🤔🤡
We appreciate the feedback!
Recently I watched a video of a user who paid for the unlimited version, and he had the same wait times as the free version, obviously without the credit limits. Anyone who has used Hailuo knows generations sometimes take an absolute age. Anyone else have a similar experience with the paid version?
Not me.
But in Runway, especially with the Turbo model, things happen within 1-5 minutes max, even in 'Relaxed' mode.
I'm using the unlimited plan, so no need to worry about credits at all. Infinite creation. I can experiment all I want.
@SupaFreq, the average for me is around 2 minutes a video, I'd say, but faster during the hours when fewer people are using it, sometimes even just 60-90 seconds. I'm on the Unlimited Plan, and it is definitely faster overall than the free version.
The wait times relative to the cost have certainly been a common criticism lately :(
How do you make mouths not move in Minimax? Adding "not talking, mouth still, the man is not talking" hasn't worked so far.
Hmmm we'll see what we can do and get back to you!
The lip flap spoils all those minimax clips
I wish they would add lip-syncing.
Soon!
Comments section feels botty
Beep boop beep! You caught our secret! beep booop beep :)
JK
@@curiousrefuge lol