Minimax gives you everything from slow motion to epic action. The only vid generator to do that right now
Still can't understand Minimax prompt structure 😢
Minimax is pretty incredible!
@@ZorenStudio55 For img to vid, you could try a MiniMax GPT in ChatGPT: you give it the image and it gives you a prompt in English and Chinese. Paste that text into Minimax along with the image.
Also, I've noticed that in Minimax, if you start with something like a photo of a lake (I posted a mock Loch Ness monster sighting on my X profile, as well as a Jaeger mech rising out of the lake, both from a couple of photos I took) and add to your prompt that the water should "interact with the body" as it rises out of the lake, it's amazing how well it responds. Also, if you take photos during the golden hour (just after sunrise or just before sunset) you can get some really nice cinematic-looking shots, and your subject will tend to be lit correctly to match the lighting of the original image.
👍🏾👍🏾
Good to know!
Hi friend
Features I would like to see in Minimax: 1) clips twice as long, 2) clip extending, 3) character reference/location reference, 4) an Act-One equivalent, 5) camera control, 6) and yeah, overall improvement of the model. Go Minimax!
Lipsync
Good list!
I wish they would add a button to extend the video from the last frame; that would settle your points 1 & 2. For point 3, they could mimic Leonardo AI for color/theme style and let you adjust between low and high reference-image strength.
Where is that blue cat?
You forgot the main one. API access.
This was absolutely phenomenal... great job!
Thank you so much!
great job? there is no job.
Fully agree. I used Minimax and some of Runway's Act-One for the short animated children's music 🎶 video I made and posted yesterday. Normally I would use image-to-video, but I just used text-to-video for the animation. It was still pretty consistent, as the prompts helped facilitate that.
Definitely good to kitbash!
@ 🥰😃
It's worth adding the phrase "not talking" to Minimax prompts.
True!
I'm hoping they will adopt the character reference feature built into Vidu. Vidu's reference-to-video feature is awesome for creating character or object consistency. However, the quality isn't quite up to snuff and doesn't keep up with the Minimaxes and Klings of the world. I did speak with the devs there, and there is some exciting news about their platform that I can't discuss; let's just say they're going to raise the bar for features and functionality with a higher-quality model. I want to see who will be the first to allow multi-character reference or a multi-character trained model, so we can prompt more than one consistent character, which would speed up the process and give us more control of scenes without having to stitch things together. I'm super excited to see what's coming around the corner.
Agreed. I used Vidu to help with some shots in a couple projects, and the character reference is excellent, but yes their videos are too choppy, even when using extra credits to upscale them. I don't know what they were thinking at Vidu, because they should have fixed their choppy video issue months ago. 😅 Almost no one talks about Vidu, but I appreciate that you brought them up and that they are soon releasing some great updates. Cheers
@@FilmSpook Thanks for the reply, it's been awesome watching these tools grow. I actually did my best to make this video look as good as I could without having Topaz. I made the most of the clips I had, but I think it came out pretty decent considering: ruclips.net/video/cdCLKhZvCVM/видео.htmlsi=IZJUzYXlOzKqkGDn
Oh! I can't wait to see what Minimax releases then!! Exciting.
We're excited to see what the updates are!
For the image-to-video animation I've been doing (which is not realistic imagery), I found Luma has been giving me the best results. The camera control options are good, and strangely it sometimes seems to do better when you don't prompt it at all. It seems to understand the details of the image and interpret them properly; often it is confused by the prompt. Comparing the same image animation in Luma vs. Minimax, the Luma outputs are much better. Luma's extend feature is super useful and creative as well.
We appreciate the tips! We'll give Luma more tries
Yeah, MiniMax has been the best for me - just praying they introduce advanced features soon.
Same here, and I did see recently that they are soon to release 10-second-long generations, which would greatly help in filmmaking and storytelling with MiniMax. Their "6 seconds" videos are not even a full six seconds; they are slightly under, but rounded off as 6 seconds.
Agreed!
Pretty incredible tools!
They are!
Hold on. Based on all the stats, it used to be: Runway, Kling 1.5, Minimax, Luma. How come it jumped to spot 1?
Runway is currently FAR behind MiniMax when it comes to a few key things for A.I. filmmaking: It generally doesn't follow prompts as well; MiniMax has very advanced prompt accuracy and your prompts can be up to 2000 characters, which is FAR more than Runway allows. Runway doesn't animate human actions as well or as accurately as MiniMax. Runway also blocks and censors too aggressively, and it has much more morphing than MiniMax. And MiniMax renders most animations at real-time speed rather than in slo-mo like Runway usually does. Runway has MANY cool additional features that MiniMax doesn't have, but when it comes to animation capabilities, MiniMax is overall much more advanced, and that's the reality (for now, at least).
@@FilmSpook what about Kling 1.5 ?
Keep in mind - Runway comes with an entire suite of tools.
I found that Runway does follow the prompts, but you have to be very specific and descriptive. Also, I've found that if you want less slo-mo in Runway, you can switch to 5s clips instead of 10s and then extend from there.
Also, I noticed Kaiber has a Minimax API. Is it using the same video render model as Hailuo?
Yes, it does, from what I understand, as licensed to them.
Thanks for this! I have a bunch of unused credits on Kaiber, so this is excellent news. I see that they also have Kling.
Yes!
Minimax gives really realistic results; the others often just produce slow motion. Thank you so much for making this comparison video :)
Thanks for watching!
very good explaining. thank you!
Glad it was helpful!
Love using Hailuo AI
What's the avg time for generation?
@@aashay Luckily I am an affiliate and use the unlimited plan. I would say the average time to generate is 60 seconds.
@@The-AI-Experiment Oh nice. So if I subscribe to the unlimited plan, I'd probably get a similar generation time.
@@The-AI-Experiment How does one become an affiliate?
@@aashay Possibly
Refuge, got a question - why did you upscale the videos separately? Wouldn't it be better to first edit the full video and then just toss the final video into Topaz, upscaling one video instead of the source materials?
Why would I keep a lot of heavy files, many of which I might not use at all or only use tiny portions of, and waste space and time upscaling them all, instead of finishing the project and then upscaling the final thing?
Great point. I find that MiniMax generally doesn't need upscaling; 720p is more than enough resolution unless you are doing it professionally, of course, but I use CapCut's free online upscaler sometimes. However, upscaling can sometimes remove some of the realism by adding too much sharpness, if the original A.I. video was done in a photorealistic style.
For a couple of reasons. First, not every scene in a project needs to be upscaled. Also, upscaling is often hard on a computer, so it's safer to do it one by one to mitigate computer issues.
@@curiousrefuge Well, my videos go to marketing, so they are less than 60 seconds long and are built from 10-30 shot videos, but only 2-3 seconds from each. So for me it is more economical to render the 60-second final video than 30 videos of 5-10 seconds each.
But it depends on the project and the work pipeline.
Thanks, Mr.... Refuge :)
That's a good one. Congrats refuge
Appreciate ya watchin'!
Like your content so much!😊
Thank you so much!
Same to you 😊 my dear friend
I love AI and Curious Refuge. I'm crazy about doing one of your courses; I hope I can save a little cash and do it in the future, because you are fucking incredible. Congratulations!
AWESOME! THANK YOU VERY MUCH!
Our pleasure!
@curiousrefuge I AM GREATLY INSPIRED TO MAKE MY OWN FILMS WITH MINIMAX A.I. NOW!😁
But Minimax doesn't have lipsync yet? How do you make characters talk?
Haha! How do you get them to stop talking? Jokes aside, I love Minimax and use it every day, but I do struggle to generate videos of characters not talking, despite using keywords such as "silent," "quiet," "the character remains quiet," and "not talking." Someone even suggested prompting "with their lips glued together" and "with their lips stuck together," but my characters just want to talk, talk, talk!
@@EmilyNilsen Is that so? I haven't seen that problem myself. But I also haven't used Minimax much due to its lack of lip sync, so I have mostly been using Kling & Runway for now.
Lots of other tools like LivePortrait or even Runway
I only ever use Minimax. Despite its limitations, I've found it's the best for achieving the vision I'm aiming for. And the initiative it takes, though sometimes annoying, often proves fantastic. I much prefer it to Luma and Kling, as I really don't want everything rendered in slow-mo. I signed up for unlimited, so I don't get the delays you found, and it's so much better, maybe 2-3 minutes per render.
As soon as it introduces end frames, or even camera movement, it'll be the clear leader of the pack.
Minimax is definitely a leading contender right now!
Does the Mini Max program make reels for Instagram?
You can certainly use it but it's not a direct connection
Needed this video yesterday 😭😭😭
Sorry we got to you late!
Nice!
Thanks!
In Runway I’ve found that if I generate 5s clips instead of 10s, there is less slow mo. I then use the extend video feature from there.
Good tip!
Can you export 4K footage from Minimax? Where can I find out about export settings?
We'd suggest using Topaz
Why is it that when I click on the text area, it gives me a warning that the site is a phishing site and will steal your personal data?
Unsure?
You went to the wrong site; you need to open the link in the description.
Amazing
Thank you! Cheers!
Tx for sharing. Good quality.
Glad you enjoyed it
Hi Curious Refuge, a question: I want to join your January advertising animation course. I have joined the waiting list; what do I need to do next, please?
You'll get an email - enrollment starts 1/3
This is simply the best neural network of all the ones I've tried. I've already made about 400 clips :)
Appreciate you watching!
Thank you very much for the video Really thank you
You are very welcome
Are there any workarounds to get control over the seed?
Jump in our Discord and we'll try and troubleshoot with you!
What's your recommendation for AI filmmaking, Runway or Minimax?
It depends on your project needs!
Glad to know you have finally recognized Minimax's superior quality compared to Runway. Last time you kept repeating that Runway was the better option.
We consider MANY things. Remember, Runway includes an entire suite of tools.
How long are the finished video clips generated from images with this Hailuo MiniMax? Does it generate only a few seconds, or a few minutes?
All of these platforms provide just a handful of seconds currently.
@curiousrefuge Thanks
Some of my best clips have come from Minimax, but Kling is still king! 1. Multiple takes of the same scene processed simultaneously, and no waiting queue for your next batch of processes. 2. Less overall morphing of characters and the original image. 3. 10-second clips are working well in 1.5. 4. Built-in image generator. Minimax is still the best for concise text-to-video.
Lots of people love Kling! We love Kling too :)
It would be extremely helpful if you could share the "prompts" used for each of the image that you created using Minimax, thanks.
Will keep that in mind for next time!
👍🏾👍🏾 YES, thanks!!! MiniMax is my go-to for A.I. filmmaking at this time. I recently left Runway, as its animation capabilities are far behind MiniMax, if we're being honest. Runway's recent updates are cool, but they still are not focusing on what matters most for A.I. filmmaking, which is providing excellent capabilities for animating humans at real-time speed (not that slo-mo BS that Runway's A.I. often forces by default). 😅
Minimax is amazing; just keep in mind that Runway has some other excellent AI tools in its suite beyond the generators!
10:18 How would a common user know which models are the best (like Theia)?
I see this a lot with AI tools where there are a dozen models to choose from. Where does one find out the difference between all these models?
Certainly difficult but we just subscribe to a lot of sources :)
Great work as always, Curious Refuge! I have a question: I need to merge video generations from Runway and KLING 1.5 into a single video, but the aspect ratio from Runway is different, more square compared to the videos generated by KLING. Can Topaz solve this issue? Thank you so much! I’ll keep following you, and I can’t wait to show you my video!!!
Thanks so much! Topaz can certainly help to expand and keep high quality.
Thx Bro, and ONE BIG LIKE ;)
No problem
Can't slo-mo shots be sped up fairly easily in Resolve/Premiere?
Yes, although sometimes it still doesn't look quite right.
I do appreciate the effort you put into your research and your ability to coalesce it all into digestible and useful segments/ posts. As someone who has been in the business for close to as long as you have been alive, I just wanted to share this unsolicited constructive criticism. There’s no need to have your face in the entire video. I do like knowing who’s talking though so it’s good to cut back and forth between the reference footage and your talking head. I get that you’re good looking, and well coiffed with a great fashion sense…and I give it up to you for all of those blessings, truly. In fact, that may just be my issue. You’re so good looking that it can be a bit distracting from the mission of the post. You’re welcome to delete this comment after reading. Thanks and keep up the good work.
I love the non-serious self-awareness in this comment.
🤔🤡
We appreciate the feedback!
Can I use it on iPhone?
It depends on which app. For Minimax, we strongly urge using a normal computer/laptop.
The quality is already great; movement and physics aren't quite there yet. So learn how to do it now, and in one or two years the technology will be ready!
It will be absolutely crazy in one more year!
How do you stop the characters from talking? Their mouths are always moving and I'm not doing anything that needs them to talk.
Negative prompts typically help! We would suggest saying something like closed mouth in the prompt itself.
@ I have done that to no avail. Thank you for responding.
When I download the Minimax videos, they are very low quality. Can you guide us on how to increase the quality of the video?
Which prompts did you use?
Didn’t the stone in the guy’s hand change shape?
Looks like it!
Gen-3 vs Minimax. What's your recommendation?
Gen-3 is much faster, but you have to be more specific and descriptive with your prompts. Also, it will most likely take more generations to get the result you want in Gen-3. Gen-3 also has a video extend feature, lip sync, and Act-One. I would recommend an unlimited sub for Gen-3 or Minimax; otherwise your credits will disappear fast.
Both good for their own things! Gen3 is also a total AI suite of tools.
❤️❤️❤️
Thanks for watching!
Thank you! 👍 I have been trying to get my characters to simply be quiet and not talk/move their mouths. No success yet. Does anyone have any prompting tips?
Jump in our discord and we'll try and help!
Couldn't you just speed up any slow-motion outputs? And then maybe generate extended clips to regain duration?
You could! But there are issues with that too in terms of timing.
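For anyone wondering what the "just speed it up" route looks like in practice, here is a minimal sketch (mine, not from the video) that re-times a clip with ffmpeg's setpts filter via Python. It assumes ffmpeg is installed and on your PATH; the file names and the 2x factor are hypothetical examples.

```python
# Minimal sketch: speed a slow-motion AI clip back up with ffmpeg's setpts filter.
# Assumes ffmpeg is installed and on PATH; file names and factor are examples.
import subprocess

def speed_up(src: str, dst: str, factor: float = 2.0) -> None:
    """Re-time the video stream so it plays `factor` times faster (audio dropped)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-filter:v", f"setpts=PTS/{factor}",  # shrink presentation timestamps
         "-an",                                # most AI clips have no audio worth keeping
         dst],
        check=True,
    )

speed_up("slowmo_clip.mp4", "realtime_clip.mp4", 2.0)
```

Note that speeding up also shortens the clip and can make motion look steppy, which is the timing issue mentioned above; frame interpolation in a tool like Topaz or Resolve can help smooth that out.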
This high-contrast "cinematic" look is what gives away that it's AI-generated. It's better to take the AI output and color it in DaVinci.
Thanks for watching!
I'm sure you said it but now I can't find where. Where were the original images generated? I have a huge hit and miss rate when insisting the images are photorealistic, especially when doing more fantasy related images.
Typically we generate in MJ
Recently watched a video of a user who paid for the unlimited version, and he had the same wait times as the free version, obviously just without the credit limits. Anyone who has used Hailuo knows generations sometimes take an absolute age. Anyone else have a similar experience with the paid version?
Not me.
But in Runway, especially with the Turbo model, things happen within 1-5 minutes max, even in 'Relaxed' mode.
I'm using the unlimited plan, so no need to worry about credits at all. Infinite creation. I can experiment all I want.
@SupaFreq, the average for me is around 2 minutes a video, I'd say, but faster during the hours when less people are using it, sometimes even just 60-90 seconds. I'm on the Unlimited Plan, and it is definitely faster overall than the free version.
The wait times relative to the cost have certainly been a common criticism lately :(
They just need to bump their videos up to 1080p quality and then they are the winners. I actually prefer Kling just because I want the most satisfying-looking animations. These are clearly the standard-mode Kling outputs; you get much better results in professional-mode image-to-video. I personally don't use text-to-video.
Makes total sense!
Minimax AI is for Beginners ❤
What do you mean?
Is it for free?
Nope!
CR is so AI he actually looks like a Midjourney consistent character.
You found our secret!
What was this video about? It started with Minimax and then ended up with different scenes from different tools and enhancing the videos with yet another tool! Like, what was going on here? Marketing for all the tools??
We aren't sponsored by tools, so this is a demo of each!
🎉🎉🎉🎉🎉🎉❤
Thanks for watching!
✌️🔥
Thanks for watching!
Is Minimax the best, and is it paid?
It is best quality but lacks some features that others have.
“Bad guys” head turned into a dry ballsack 😂
LOL
I tried to make a text to video (without uploading an image) and apparently the prompt "baby and a puppy" goes against their guidelines or something. I'm like what???
Yeah, can't explain that one!
You're using the Standard mode of Kling, which doesn't utilize their best model. You can literally direct the scene using Motion Brush. No one on RUclips knows how to take full advantage of the tools given to us.
We appreciate the feedback! Sometimes we record the videos before these features come out.
that is good
Thanks for watching!
Minimax sounds like another Apple variant of the iPhone.
Haha that's true...
Why are all the comments so recent? They have nothing to do with the film... unless this was made with Minimax?
What do you mean, the comments are so recent?
@curiousrefuge the video was made 13 years ago, but the comments are all from less than 2 weeks ago
How do you make mouths not move in Minimax? Adding "not talking, mouth still, the man is not talking" hasn't worked so far.
Hmmm we'll see what we can do and get back to you!
@@curiousrefuge Hi. Did you manage to solve it? I do a lot of closeups on faces, and I can't use Minimax at all right now.
I quite enjoyed this video, but I'm trying to figure out if you are AI or not. lol
Not AI :)
Is it free
nope
Why's everything gotta be "cinematic"? Jeez! Meanwhile, you can't do a "cinematic video" with 5-second clips and no lip-syncing. You can do short clips for social media. If you want to do something longer with a story or dialogue or narration, you'll need at least Kling or Runway. Minimax is too limited at this point. "Cinematic."
I am doing stories and marketing using runway and midjourney and it's a lot of fun.
Was wondering whether to switch to Kling or Minimax.
Why don't you recommend minimax? Why is it worse than runway?
Also, what is your opinion on kling?
@@adarwinterdror7245 Minimax is better when you want more physical control, Runway is better for camera movement, and Kling is like a mix of both.
@@adarwinterdror7245 Minimax lacks other features. The only amazing thing about Minimax is the actual quality of the videos, but that's about it. There's no Extend Video, End Frame, Character Reference, Lip Sync, or Video to Video. It ONLY has Img2Vid and Txt2Vid.
Because we focus on ai filmmaking :)
@@adarwinterdror7245 Runway is great for some things. If your content is G-rated, you can pretty much get everything you need with one Runway unlimited plan. I love their video-to-video feature. But if you have any edge or grit, you need Kling, which has the best lip-syncing and the most natural and convincing animations. Minimax only does 6 seconds at a time, you can't extend, and there's no lip-syncing. I use all three + Vidu.
Imagine all this in 1 year from now! I remember a few months ago the artifacts from image to AI video was terrible!
So true!
Finally y'all say MINIMAX is number 1 and better than runway because it is true
Yes, from a filmmaker's or pro's perspective MiniMax is currently the King when it comes to animation using A.I. But it ultimately depends on what you're using it for. For product videos, Runway is better than MiniMax and can do 10 seconds or more.
Better because they've scraped from copyrighted films, games, and content, because they're Chinese-based and don't care about American copyright laws.
Minimax is definitely a pretty strong contender these days!
The lip flap spoils all those minimax clips
The...what?
@@curiousrefuge "Lip flap" is the mouth and lips moving; it would make using these shots impossible.
Both Hailuo and Kling are Chinese companies.
correct!
I wish they would add lip-syncing.
Soon!
All of it.
thanks for watching!
This AI tool is very slow. It takes 540 minutes of waiting for your text-to-video prompt because there are thousands of people also generating videos ahead of you 😒😒
Yes that's quite a bit of time!
Comments section feels botty
Beep boop beep! You caught our secret! beep booop beep :)
JK
@@curiousrefuge lol
Great tutorial! 👏 If you're into AI videos, I just dropped my own AI-generated animation cover of APT that blends stunning visuals with music in a way you won't believe! 🎶✨ Check it out and let me know what you think; it's a perfect example of what you can create with AI! 😎👇
thanks!
"cinematic" must be the most overused word in recent times. It's sad.
We appreciate you watching...
Dude, when you critique one image or one series of images and then jump back and forth to critiquing all the images, it's not just super confusing, it's idiotic. Apologies... Try taking one image at a time and dissecting that fully, instead of jumping back and forth between all the images and spending 20 minutes describing each one separately. Because, again, it's frustrating to follow and it's idiotic. Sorry.
We appreciate the feedback!
AI upscaling makes it sharper but not realistic like what you get out of a "cinema camera". Everything is still plastic and unnatural. Bottom line: real humans are still needed to go out and shoot real footage! YAY
Topaz can help with the realism too