Ok that graphic opening scene example was amazing! Seriously looked ready for prime time! 👌
Ha! Thanks! I was getting a bit picky about it-- but again: like, 15 minutes of work went into that! I think if I spent a full hour on it (and maybe used Ideogram for text?) we'd have something that really cooks!
Really was quite impressive!
What was the prompt for those images? Really stunning style
Maybe we need a new video on the graphic trailer? Ideation, prompting in MJ, then video animation, and finally adding in the titles in Ideogram? Seriously, if I saw that trailer with titles on Netflix, I'd assume it was done by a pro team.🎥 Can you make a title graphic intro for your detective story, or something in a similar style, to show us?
Of all the things Tim showcased in this video, that blew me away! 💯
Cool… I love Hailuo and Kling. Chinese AI is kicking ass!
Still on the Veo waitlist to see if it beats the Chinese AI.
Veo-2 is really good-- it's just a bit hamstrung by the Image to Video being tied to Imagen3. While I think Imagen3 is good...I mean, it's nowhere near what we can do with our other tools!
Hey, I love your videos. But this is what I can't figure out: what are the other tools that are definitively better and have a decent price? If you answer, thank you so much! 😅@@TheoreticallyMedia
@@EthanLedley KLing is King
The title sequence was actually bang on. I loved that effect. Welcome back and Happy New Year
The opening scene was _ridiculously_ good!💯More so with the music. Happy New Year Tim. Good to see you back. 👍
Welcome back, Tim.
It's all coming together. Very impressive!
Great presentation. Welcome back!
Oh, it's so great to be back! It was a good break, but I'll say that I was getting really itchy last week!!
@TheoreticallyMedia What you bring to this fast-developing ai space is practical with quality, which is genuinely appreciated by the rest of us who are also trying to stay at the cutting-edge of possibilities.
2025 should be an amazing year!
Yeah, I LOVED that opening scene example. Extremely cool. Thx for all you do man. Even with AI, I know it's still crazy time-consuming. Very grateful 🙏🏻
Hey Tim!!! I suppose you already know this... but whenever I look up info about Udio on Perplexity... your deep dive video comes up as the feature... so nice one man!!! Happy New Year and great to have you back.
Cool stuff Tim. Very helpful info as always!
Appreciate it!!!
Glad you're back.. That yellow "Apple Show Open" turned out really cool.. I liked it.. I think the middle frame feature is a good one.. I can think of a bunch of shots where that could be very useful...
Wow! Your video generation prompts are really detailed! I'm over here doing a single sentence (maybe 2, on a good day.) Thank you for sharing the GPT. Very helpful!!
Yeah, sometimes it works, sometimes not. I have gotten some really good prompts when going basic as well. One funny thing about the GPT is that I think it's OVERdoing it now. I forgot to mention it in the video, but turning OFF prompt enhance seems to work better for it.
I gotta go back and tweak it a little as well-- I think it's been wanting to do multi-action stuff in the prompt, which Minimax's new model doesn't seem to like quite as much.
@TheoreticallyMedia if nothing else, your GPT helps non-filmmakers (like me) get a grip on the terminology and a good starting point. Generative AI is a little like a slot machine still... But the duds can be quite amusing (or incredibly frustrating.) Fun times! 😄
Great one again! 😁 And oh, fun to see ya back again as serious blue suit guy 😁
Haha, he’s so angry!! Clearly headed to a job he hates!
Happy new year! That shot at 8:50 is really interesting.
Ha-- I know, it was such a dumb idea, but I gave it a run just to see if it would work, and yeah-- it was one of those "AI, you really make me laugh sometimes" moments.
With some further tests, I think it could really shine. The downside is that 6-second limitation in Minimax, so I might give it a run in Gen-3. Just since you need a moment or two to get your actor "in position"--
honestly... that opening scene with the 3 keyframes... pretty fking good :) Loved the tune there as well... will try it out ;)
That was Udio! I think the prompt was as simple as “theme to a modern crime show” and “instrumental”
I felt the same! Like, this song is awesome!
omg characters on minimax! I'm in heaven!
We missed you too, TIM! You're my most trusted source for all things Generative AI! Happy 2025!
Thank you Sir!! Your video is really helpful as always.
‘I could’ve been a veterinarian’ made me lol
I think there's a great AI Short Film here! "Paws and Claws?"
Loved that title intro looked so good 👌
wow, good results on the first try character reference for minimax. Very impressed, already making the whole "we need 40 videos of your character from 40 different angles" nonsense a thing of the past.
Yeah, that was a BIT too much. I went through that process twice and it was just a nightmare. It's no wonder AI Me looked so cranky!
The first video of 2025 where I have actually laughed out loud while watching on YouTube 😂 God bless you, Tim! Keep up that vibe!
I figure if the AI Overlords are training on me, I might as well teach them how to have a sense of humor! Ha! Happy New Year!
10:52 looks really, really good!
Tim, you look way happier in 2025! That's for sure! LOL. Happy New Year, my friend!
Haha, both AI me AND real me!! Apparently I needed a little holiday rest time! Although, to be totally honest, as of last week I was REALLY itching to get back to you all!!
Happy New Year!!
Oh okay, now that middle frame thing is a neat addition.
So, thought process here. Runway might be heading towards opening up something close to "stop motion," but where we can use images with slightly more dramatic shifts in posture, position, and action. I'm not sure if keyframing is the proper word or a new term, but I like the sound of it. It's not fully automating the process, and we're still looking at an expensive run for an animated project to use this method. But you theoretically just need one artist and one writer, and together they could put something together with this.
I am intrigued. Very Intrigued. Unfortunately I do not have such a companion to try this out without it costing me. But I would love to give this a shot.
There are a few other tools in the AI list that would probably still be necessary.
The lip sync one comes to mind for dialogue.
Something to adjust lighting.
You could do a lot with three keyframes. I am more than willing to bet we could.
That's part of why I'm so curious about their upcoming Frames Image Generator. The Runway gang has been high on World Models for a bit now, so it leads me to think there might be some robust editing tools in the image gen. Like, being able to change the angle after the shot has been composed. Or-- fingers crossed, being able to change poses/expressions as part of a post process.
If that happens: Yeah, middle frame is going to be unreal.
This is perfect, I literally just bought both unlimited versions last week! AI is getting exciting
That’s awesome! Yeah, it’s going to be a wild year!
I used the character sheet idea in Vidu and that worked perfectly, so I can see it working with most character reference options
😡 4x ads while watching your video, but to support your great work, I've watched them all and even clicked on 1 of them for the algorithm ❤🎉
Great video, keep it up!
8:08 what did you mean by "comping the character into a MidJourney generated image"? I didn't quite understand what the before and after procedures were. Welcome back, btw!
Oh, “compositing” is basically just copy/pasting a character in on an image layer. And doing a very bad job of it, mind you!
@TheoreticallyMedia so the first video was just a text to video in Minimax and the second one, you took the little boy from that and pasted him on a Midjourney background and then did an image to video in Minimax?
I'm yet to get the update, is it public yet? But this is the kind of thing I've been itching for Minimax to do for months! Always favoured the platform over the many alternatives out there, but it's just a couple of cool features away from being the elite market leader. This is certainly a step in that direction! I'd be interested to see how it works, even IF it works, for multiple characters? Or just the one?
Beta preview right now, but it’s Minimax, so I’d say in about a week it’ll be public. They don’t wait around like some of the big guys (Adobe, et al)
Just one character for right now
2025 should have fantastic possibilities - 15 minutes and your opening scene looks darn good. Just run that through After Effects for Titles... Thanks Tim & Cheers from Seattle! 🍻
I don't know if you have tried this already, but if you use a frame of a studio green screen shot and prompt for it, Minimax removes the green screen in the first frames and replaces the green with whatever you prompt, like a Victorian fishermen's village. I have a few successful examples.
oh, I have NOT tried that-- that's wild! Headed over now to give it a shot!
I used the three frame feature in Runway a few times in my upcoming episodes of my short film series called The Station. One was a close-up of a character's eyes with a reflection of what she was looking at, zoomed out to her profile and then back to her eye with the reflection. It worked pretty well.
Happy New Year!
I've been using the RunwayML 3 frames a lot. It also works really well with one frame in the middle and leaving it to interpret the prompt to produce what happens before and after.
I've got to spend some more time with it. They kind of snuck this one out right before the holidays, so I think a lot of people missed it-- or, haven't really gotten the chance to circle back and really put it through its paces. Great to know you've been getting some good results out of it! It looks promising for sure!
Did you hear about the O1 model becoming self-aware the other week? It’s scary and interesting at the same time.
Love the bad comps feature lol, I literally spend hours in Photoshop getting my start frames perfect, so this will definitely save some time.. when it works anyway lol
Oh, I’ve spent hours in the past playing with feathering and brush opacities to get something just right! I wouldn’t say I was ever great at it, but man- the hours.
It’s nice to know we can just kind of give the model an overall idea of what we’re looking for, and it’ll handle the load from there!
I think a lot of people who complain that AI isn’t “getting it right” haven’t logged those hundreds of hours tooling around with RGB curves! Haha
Thank you for the video and tips. I like the posterised worn-out style of the credit sequence. Can I ask what you used in the prompt to get that effect, please?
8:49 that was awesome
11:12 SpyCuk looks interesting
SpyCuk is HILARIOUS. I didn't catch that until after upload and was laughing my head off about it!
And yeah-- that character cutout-- That's WILD right?
Waterworld is a highly underrated film
Y'know, I know that movie got a lot of flack when it was released, but I caught a bit of it a few weeks ago, and it was much better than I remembered. Probably time for a rewatch.
That said, I don't think that's going to convince me to watch Yellowstone.
The 3 frame tool, you can use it to generate a 180 degree turntable video (right side view, front view, left side view). Then upload that to Polycam and turn it into a Gaussian splat. Meaning you get a 3D representation of your subject!
Welcome back, Tim!
Ahhh, it's great to be back! Not gonna lie, I was getting pretty itchy last week! Too much free time! But-- this week has already proven that we're off to a STRONG start! 2025 is going to be so much fun!
I am definitely waiting for Frames in Runway and Consistent characters in Hailuo to drop. 🎉🎉
What's your prompt for the images you used at 10:40? They're great
leaning heavy on a Midjourney Sref for that one-- The prompt is stupid simple: title sequence card --ar 16:9 --style raw --sref 619993087 --stylize 150 --v 6.1
@ thanks and thanks for all the work you do
This is awesome! Great job!
A question: I am really interested in how you used the Portal Transition in Runway. It has a few options to fill in for the prompt but there is no explanation of what the possibilities are. What prompt did you use for the video you showed?
And does it also work with two images?
Thanks a lot and keep up the great work (just subscribed to your channel).
To be honest, I think the keyword is more in the “hyper zoom” that is in the prompt template. I think I ran it a few times with those fill ins empty and I didn’t see much difference.
But, as always: experiment away!
@@TheoreticallyMedia thanks!
The Mollusk, nice!
I don’t say it lightly, but that is one of the best albums of all time!
hey, please tell me how you made that interviewer talk so expressively along with the hand movements. Please please
So that was done w/ the new "Act-1" feature in Runway. I had a whole thing planned for it, but that was the day that Sora dropped, so I had to cover that. Basically: generate an image in Midjourney, animate the character in Minimax (prompt: "Talking + Emotion"), and it'll give you some good stuff. From there, take it over to Act-1, where you can input video of yourself saying the lines and it'll give you good lipsync on those characters.
Oh, I did swap out the voices in Elevenlabs as well!
Hmmm, maybe I should dust off that tutorial!
@@TheoreticallyMedia Got it. Thank you so much for the response
@TheoreticallyMedia Hey Tim, I understand the whole workflow, but how did you manage to generate a good body acting performance that matches the line you wanted the character to say? That has been the issue stopping me from creating some ideas. Do you just iterate until you get something good enough for your lines? Thank you!
@@ArisGomez That just randomly happens. You can try to prompt for the movement you want, but it's mostly luck whether the result moves the way you intend. Run it multiple times and eventually you'll get a take that moves how you want it to.
Ideas for best use case for Runway middle frame could be:
1. Transform subject such as Bruce Banner to Hulk or Transformers vehicle to bot
2. Wide angle to close-up tracking such as bowler throwing ball down lane to ball point-of-view hitting strike
3. Outside to inside perspective change such as bullet being fired at skull from gun muzzle and then bullet seen impacting brain
Oh man, now I gotta try out a Hulk transition!
Runway Gen3’s Middle Frame is great for control. Generate a frame in the middle for the tricky transition moment, then fill in the blanks. Huge for film. I want Hunyuan or LTX Video to do it too; it would be a massive upgrade over ToonCrafter. Rather than generating whole videos and trying to fix them (a few minutes per generation), images are faster and give more control over composition. The interpolation is close to perfect on the first go. A way faster workflow for control.
Great video, thanks!!
I can't seem to access the Character Reference tab. Do you know when/if it will be out for free users?
It's still technically in beta I guess- but, knowing Minimax, I'm guessing that it'll be sometime within a week. That seems to be the trend when I get early access to something.
@@TheoreticallyMedia Great! Thank you very much!! 😁
Where can I watch your short film, "The Interview" ? I'd like to watch it.
Should be linked in the Description. I posted it over on X and on Reddit. I was SUPPOSED to do a big tutorial on it (showcasing Act-1), but that was the day Sora dropped, so it got shelved on YouTube.
Maybe I’ll dust it off next week?
Great video! Thanks for this! The only thing Runway ML needs now is a feature like Pika Lab’s Ingredients, where videos are generated using multiple assets. Any idea which site offers that besides Pika?
Happy 2025 Tim. I'm here for the crazy A.I. video ride and fastening my seatbelts. Let's go!
Given how hard Nvidia came out at CES, I think this is going to be a supersonic year!
hey, Tim! Happy New Yada Yada! Could I get a recommendation from you? Best *Sketch to Image* (for stills)? NOT architectural, but more people-oriented. Examples I have seen tend to range from a 2-yr-old's left foot to Frazetta. Something in between, please! ;) Thanks!
That middle frame feature could really come in handy if a few frames of a film/video were glitched out and had to be replaced to avoid a jump.
Consistency has existed since the first Minimax model. Choose a famous person as your character and add their name to each prompt.
oh for sure-- and even though I did a few here, to be honest: I'm less interested in doing famous faces or doing fan-fiction/meme stuff. To me, the really interesting part is going to be generating up your own roster of "stars" to use in your own projects.
Basically, everyone gets to have their own agency of AI Actors.
@TheoreticallyMedia I understand you. But I've noticed that literally nobody is talking about this hack: Minimax doesn't censor famous people in prompts. It gives you a method for some kind of consistency in storytelling. You can even choose a celebrity with facial features more or less similar to your own, and face swap.
Sorry for my English. Hope it's understandable 😁
Thx Tim, very helpful!
The process involves moving the character model around to achieve desired shots, although it may require multiple attempts to get the perfect result. Thank you
That ReRoll button is the name of the game right now...
I really want to use subject-to-video for Minimax AI video. Will it handle more than one face?
Is subject reference only available on a certain plan? I'm using the free tier and don't see it.
How would you get these characters to talk? With the lip sync and consistent voice etc?
I think the best sync around right now is actually Runway's Act-1 (which you have to provide driving face video to) and then Elevenlabs Voice to Voice.
It's how I did that hitman interview segment in the first half of the video.
Tim, 3 frames is extremely useful for me and prompts me to switch from Kling to Runway.
I mostly only use 3 frames to video. I actually have an 8-minute film made with that on this channel. I would love it if you took a minute to check it out.
I'll swing over and check it out! And for sure, for your workflow, I'm sure 3-frames is a dream come true!
@@TheoreticallyMedia Please do, Tim. Your opinion would be so interesting.
So, I'm guessing Hailuo's character reference is not open to the general public yet, as I'm not seeing it in mine.
So, you're just going to ignore Pika 2.0...
It can do sequences of actions, up to six character/environment references. And, it's extraordinarily good.
Yeah, I gotta get in there! They were a little back burnered when I tried to play with it over the holidays and they were running their free for all-- and basically, I hit it at the wrong time while everything was slagged. I'll pop in soon! (Before 3.0 for sure!)
@TheoreticallyMedia I just had to remind you, because I'm very impressed. I understand, their launch was... a bit awkward.
@9:20 use recraft instead of Photoshop. It does the same thing but as an image generator
I do love Recraft! I did a video on them just recently if you didn't catch it: ruclips.net/video/-yhUORe7Zjs/видео.htmlsi=hpa7zSLDOxdfJB2L
Now it just needs to be able to adjust the lighting and reflections of the character to the environment better to get rid of the green screen composition look.
I've got this trick I like to do where I'll run outputs through another V2V (like Gen-3 or Domo) to kind of "bake out" that composited look. It works REALLY well-- but, often at the price of consistency.
Sora (as much as it's derided) also has a super great function for this w/ the Blend modes. But-- Sora is still a bit TOO unpredictable. We're getting there though!
3:36
😆 🤣 😂 WTF? An assassin moonlights as a veterinarian on his days off? 😆 🤣 😂
I think that's going to be a stellar AI Short Film!! haha-- Paws of Death!!
@TheoreticallyMedia
😆 🤣 😂 I love that!!
@TheoreticallyMedia
You could even do a homophone play on "Paws" and "Pause" where our assassin has the quirky but conscientious habit of taking a pause before every paid hit to reflect on the paws of the sweet critters he gets paid to save on his days off from hitman work.
😉
Slowly getting there. Once you can have consistent environments and characters, it will be over for the big studios. Ideally, I'd love to see the use of "proxies," where you can have a super simple low poly room or whatever, and place your characters in it (like placeholders - they can be cubes or whatever, just to show the AI the overall positions), and also move the virtual camera around (again, in a super simple environment just to show the AI the camera angle). Otherwise, if you are trying to make a TV show, it will be super hard to keep the locations of the characters consistent. Right now this is still "far off," but I predict it will indeed be almost a blend between traditional 3D modelling and what we currently have, because currently there is just too little control.
With some 3d skills, you can pull something close to that off via Video to Video-- but yeah, still requires a lot of prep work. I think that Proxies idea is really solid. I've seen some stuff in development along those lines, but it trends too close to 3d workflows-- to which, the 3d people think it's too simple, and the non-3d folks think it's too complex.
Someone will crack it soon though...overall, totally agree: I think 2025 is going to be an interesting bridge year for AI and 3d.
"it will be over for the big studios" - This is silly. You have "consistent environments" (real places) and consistent characters (actors) now and can shoot a movie on your iPhone in 4k quality. Guess what? Big studios still dominate. And, no, it's not because they're more creative -- it's because they control the distribution. They always will.
Remember when YouTube was going to change that? Didn't happen. I don't see what AI changes about distribution that hasn't already happened, and hasn't already failed to change the landscape of big media.
The master 🙇
Btw late happy new year to u!
11:13 I can't wait to watch "SpyCuk"!
Haha, I noticed that after the upload and was like...oh, well that's a choice for a title! haha
⚽⚽⚽ Figo The Assassin 😹😹😹
He’s gonna get caught since after every kill he yells “Goaaaaaaaaal!”
3:29 Your assassin looks just like George Lazenby (the one-film James Bond).
Haha, it’s like the son of George and a professional footballer! Or, George has been hitting the gym!
@@TheoreticallyMedia Good point, Lazenby was a bit scrawny.
Runway's midframe is good for changing scenes too
So, "crushed it" means it's a bit rubbish? xD
Haha, it's a little wonky, but I think with some work, you can get it there. Actually, what I was thinking is if you have a lot of different expressions of the same face, and used those as references-- you could get a fairly consistent AI Actor across a bunch of generations.
As is, I do think Image to Video is still the way to go-- but, you can get some cool results w/ T2V. So, for a longer project (or short film) utilizing a mixture of the two could be compelling.
You can already get consistent character in any AI video (in terms of face at least), by replacing the face with FaceFusion. As long as they have something resembling a face, and it doesn't turn away from camera. These Minimax creations seem to never blink... like the T-1000
Haha, which works well for Wednesday Addams! But, you're right-- The really great method is to use a combo of FaceFusion and Runway's Act-1. Thus far, that is the best "acting" results I've seen for AI Video thus far.
Once you see Veo2 you can never go back...
Tested it last month! ruclips.net/video/v8lA8hJR1jo/видео.html
It's really (really) good-- but, the biggest downside is that it is tied to Imagen3 as its only source for Image to Video. And, while I think I3 is a big step up from the last version-- we can do SO much more with our external workflows.
I think that if Veo-2 (eventually) allows for i2V input, it'll be king of the hill in no time.
@@TheoreticallyMedia Google and OpenAI are battling for the dumbest companies who can't read the room. 🤣
Do some Arcane LoRAs, your channel's
Gonna blow up even more! 😊
Hah, gotta get ahead of the curve and do Squid Game as Arcane Outputs!
Folks have been suggesting that I tackle some animation stuff though, so I'll put it on the "to do" list!
(I did really like Arcane-- although, that second season was a bit sloppy...still one of the coolest animated shows I've seen in quite some time!)
So close. If this improves and the generated length improves too, that's basically Hollywood for everyone
At the current rate, I think we'll all have a desktop Hollywood studio by the end of this year. We'll all still need to do some work to get everything humming-- but yeah-- everything you need will be there by this time next year.
Crazy.
For me, the best video AI is Kling's; its physics are impressive. Sora was a disappointment, and I haven't had a chance to use Google's video AI yet
I personally think Kling and Minimax are neck and neck. Veo2 is SUPER impressive, but somewhat limited by the fact that Image To Video requires you to use Imagen3 (which is good, and a massive step up from Imagen2)-- but, still not as powerful as using all the external image tools we have access to.
Sora....sigh. I think currently it's kind of amazing as a Video to Video tool-- but, yeah-- really lacking otherwise.
Hailuo Minimax reminds me of my teacher's opinion about me: so much potential, it's just a shame it's not realized. I have yet to actually USE a Hailuo render; it's always wonky, morphy and wild. Nothing that can end up in a convincing video output. If only Hailuo and Runway had a baby.... Hailuo's motion with Runway's consistency.
I think the closest I've seen to that yet is Google's Veo-2-- which, makes sense considering it was likely trained on a mountain of RUclips data. The downside w/ Veo-2 is that it's tied to Imagen-3. Which, while I think is pretty good-- is still nowhere near what we can do with our external image generation tools.
@ I absolutely agree. Do you have access to Veo2? Is there a Veo2 review on the way?
@11:13 spy cuk lol
Man, AI cracks me up at least once a day. I didn't notice Spy Cuk until after I uploaded-- hilarious.
Hopefully whatever algo Apple TV has to scan for show ideas sees this and makes it a reality!
Once again, at the time of this post, NOT YET RELEASED TO THE PUBLIC on either Hailuo or Runway, though knowing Minimax it probably won't be long. By the time Google's Veo 2 is released, the Chinese will probably have some new sites out. And still no word on Runway's FRAMES either. Anyway, not the most exciting updates. Minimax is great, I love them, but let's be honest here... they are due for a BIG update to bring them up past Kling standards. They need more improved detail, longer generation capabilities, vid-to-vid capabilities, more photorealism and less "cartooniness." Midjourney is due for an update too, but my feeling is they are holding back or curbing updates because of lawsuits and legal issues.
Hit like, fools!
Runway truly sucks. Too much censorship, face deformations etc. Don't waste your money on it.
cool. 2025 is going to be great.
We've only just started and I'm super excited already!
All the characters are lifeless; they don't blink or have any changes in facial expression.
Yeah, that’s kind of what I like about Runway’s Act-1, which I used for that Hitman Interview. I think there are still some “blank stare” looks, but the characters “act” a bit more at least, mostly because there is actual human acting driving it.
(Since it was me, I do use the term “acting” lightly, ha!)
Hailuo free is super duper slow... can't test. All AI video generators should give some testing space for people to decide if they want to buy credits or not.
I’ll say it’s a LOT faster on the paid tier. Eh, I get it- it’s the priority queue for the paid users. If free tier bogs it down, paid users start complaining and leaving.
I’d say a solution might be to allow for a one-day free trial, but you know everyone would quickly sign up with 300 different email addresses. Sigh…
I like the WAY YOU ALWAYS F'ING IGNORE me. And this model isn't available. Also, you won't say about Veo 2 that Google says it's not available till September.
Haha, I'm not ignoring you-- promise. More like, I get a TON of comments and don't get to get to all of them. I'm usually only around in the comments section for a few hours here and there, mostly b/c I have to start working on the NEXT video. Honestly, best time to catch me is in the first few hours of an upload.
Anyhow-- I mention in this video that the feature is still in beta. And the same went for the Veo2 video. That said, the Minimax feature will probably drop pretty soon, so just consider this a preview of what is upcoming.
the word is WATER not 'wodder'
DUDE! Stop making videos and start watching Yellowstone and when you're done with that, watch Landman. We will wait.
Haha. I watched Lioness! Does that count? I have heard a lot of good things about Landman. And I do love Billy Bob, so I’ll check that one out for sure!
Grandpa from spy kids lmaoooo
Khan? I'm....ok with that.