Thank you for the video, Tim. The problem is that you run some tests in Runway and burn all your credits in a couple of hours. And here they are again, launching a new feature for the "wow" effect to make some fast cash while they prepare the next unfinished tool. But filmmakers can't wait forever. I am using Domo AI and it's converting my 3D animated material into comics. I have 100% consistency between shots. And I'm just paying $20/month for unlimited videos.
Oh, don't get me wrong, I LOVE Domo as well! In fact, that whole example video I showcased here (the animated noir thing) was a big Domo kitbash! Should be linked down below: if you're a Domo fan, check it out. Domo plus Skyglass is STELLAR.
@@TheoreticallyMedia Thank you, I'm a fan of your videos, they are so informative! And I also believe that AI conversion is the future of filmmaking. I'll check it out.
In the midst of our era's generally strange atmosphere, which I admit keeps me in a state of overall reservation and perhaps I can say that I'm not particularly optimistic about the future, the developments in this field of technology (AI) spark my interest, I enjoy every new 'step', and I'm thrilled by the fact that this technology will undoubtedly transform (almost) everything in the future (even though there are opposing views on this). In short, for me, all this is a drop of light, and I'm happy to have the opportunity to experience this evolution as it's being born. And of course, all of us who 'test', 'play' with this technology are PART of its evolution, given that we are the testers, the trainers, and not just simple users or observers. This alone gives great value to every new development.
Whoa slow down Runway lol. Give me a chance to catch up. I am having so much fun making videos with Video to Video, now I have to wrap my head around this. Let's go!
Runway is still far behind MiniMax with image to video and text to video animations and with accurately following prompts, but this new feature is awesome indeed. I recently made an awesome black-and-white A.I. film with MiniMax that you cannot make with Runway's capabilities (as of now), for Runway cannot animate human movements very well in real-time speed, and this is a *major issue* they need to improve. Runway also overall doesn't follow prompts very well, unlike Kling and MiniMax.
Video to video is cool, but until you can make consistent extended videos with the exact same features, it is mostly a gimmick and not ready for most professional work. Runway needs to match MiniMax's level of animation in order to TRULY be a top competitor.
@@FilmSpook I actually have some MiniMax on my channel: little shorts, mini 1-minute documentaries. The crazy part is how realistic the actors look. Some I haven't even uploaded here that blow my mind. Like, how is this even real? What a time to be alive.
Don't you think that is the same technology that is found e.g. in HeyGen? All characters were shot from the shoulders up. Or it could be just a nice LivePortrait wrapper embedded into RW.
Haha, I do feel that's AI in general. One of the things I do like to focus on w/ the channel is to show at least one or two "bad" generations. Which, I'm sure to do once I get access here!
@@TheoreticallyMedia That's definitely what I love about your channel. You're much more balanced than everyone else, enthusiastic about the technology but not unrealistic. It's noticed and appreciated.
@@iDannyism Yes. It takes hundreds of generations to get something halfway decent. I feel in a year or so, we will look back and say wow, that was crappy.
Unsure right now. It seems to be Liveportrait on steroids, but I’m also seeing a lot of character movement in the “action” so I think this might be a bit of V2V in there? Waiting to get access to see!
I'm still using Runway just for the unlimited option btw :D But it's so close to perfect, ngl. Midjourney annoys me a lot while I'm trying to achieve what I want, but Runway always makes whatever I want much easier and with fewer generations. It's kinda interesting, cuz I think video generation must be way more difficult for the machine. Btw, Runway's lip sync works very well if you can manage to generate a realistic human face, with blinks and some mimics.
Still a deep chasm of uncanny valley. I think in low resolution Act One is getting more proficient, but it's not yet ready for long-format video. At best, it looks like rubber masks.
The advantage of having AI technology at our fingertips is that, with Hollywood often recycling ideas (remaking movies and TV series, churning out sequels and prequels), it feels like everything has already been done. For an individual, producing a full feature film seemed unimaginable due to the enormous budget required. However, with this technology and a bit of creativity, the opportunities for amateurs to create incredible content are now limitless.
We are still a long way from really good short films, where good storytelling also counts. At the moment, we are all still alpha testers. But I have lost some of my desire to experiment because such basic things are not changing at all at the moment, such as the fact that a face remains a face.
Yeah, I really think it's best to play around with for now and save any script you plan on trying to achieve with AI for at least a year down the road.
People have always said "we live in fast-moving times," but in the case of AI, for the first time it's actually true: it's a matter of days. You're on vacation for a week and everything has already changed.
Dude...MARCH OF 2023! I don't know if I hit that hard enough in the video. But MARCH of last year?! That was what we were looking at? I mean...insane.
Unsure yet, but I think so. That said, my head is already coming up with comping ideas to bring full body etc. together. There are a lot of doors opening up here!
I hope people still take the time to write scripts rather than be seduced by Ai's ability to construct them. Visually and technically AI is bringing into reality the chance for the person-on-the-street to potentially make a fully-realised movie. That's awesome. Yet there ought to be some hard graft - nothing worthwhile comes so easily. My humble opinion is that AI will only be worth using as a tool to tell stories if those stories are written by us humans.
I think we'll see a wave of low-bar stuff, but that'll get evened out pretty quickly. At the end of the day, there's only so far that eye candy will take you. The good stuff always rises to the top. Plenty of forgotten CGI cartoons and shows from when that technology became cheap and affordable.
Hey Tim! Really exciting news yet again... I don't know if it's just me, but your links in the description appear as simple text. Like this ---- Related Links: 🔗 My video on creating a micro-short film with AI: [Link] 🔗 Example from Nicholas Neubert with music integration: [Link] 🔗 Follow John Finger’s fun experiments with Gen-3: [Link]
I know for sure Hollywood has been looking at it, with the thought being: the actor can look “perfect” in every shot with no need for makeup. Which, I still think is kind of backhanded, considering no one seems to be worried about all the makeup artists that are employed by the industry. I don’t know, I guess the old BTS part of me takes issue with that. We talk a lot about the jobs above the line, or in post. But, if fully adopted, the “little guys” will be the first to really go as a line-item cut. Craft services will be fine though. That’s a business to get into! I’m half kidding of course. To be honest, I think Hollywood might adopt a few aspects of this tech, but for the most part, movies will still be movies. I always maintain we’re looking at a new medium here. One that is rooted in filmmaking, but will eventually become something else.
@@TheoreticallyMedia Thnxx for the reply. As a 19-year-old I am also confused, cuz technology is evolving too fast; we can't even adapt to it in that short a time. I also want to be a filmmaker, so I'm a little bit afraid about traditional filmmaking 🤔. But anyway, we have to adapt it and regulate it before it turns into Terminator 😂🥰
The only thing bothering me with Runway is its censorship. Considering I'm a horror filmmaker, it's almost impossible to generate certain things. It's really annoying.
The biggest issue is that all this is useless until character consistency is achieved. That's when real application can get serious.
That looked like pretty consistent characters to me. Where is the article that dropped the news about this release? I saw nothing on the runway site under news?
@@jessewallace6147 Consistent means taking one of those characters and putting them in a different location, wearing the same clothes down to the exact same details. Did I miss where they did that?
@@haljordan1575 You can do that with decent LoRAs. It basically trains the model on a specific character / clothing style.
@@larion2336 The barrier to entry is too high.
You are thinking of cinematic productions; people could use these as filters on social media in the same way that they use less refined technologies today. When they can get this in the wild, things will go crazy.
These titles lose their luster when they say the same thing every time.
I’m so glad he put “seriously” in the title, so I know he’s serious this time. And this time it’s not clickbait /s
pretty much clickbait
Serious clickbait material
ironically still clickbait
I create better animations on my channel than AI generated JUNK! 💪😎
Always appreciate your very timely videos! I finally made it into a scene with the side by side talking fox 😂
That was you?! That’s awesome man!! Great acting on your part! Haha you were ALMOST the thumbnail!
(And please tell that dude who was in the middle screen his expressions really made my day!)
That was great acting bro thank you for the new Runway feature 😄
Truly excited about this too! Thanks for dropping this.
Thanks Tim!! You keep me in the loop! I’m waiting to do my sci fi series as animation. So I’m looking for the best AI to do that with style with. We are so close!
One of the best updates to have come
Runway tends to manipulate people
True
WOW, WOW, WOW, this looks amazing and I'm definitely going to invest in a Runway subscription when it's released. Looking back to about 18 months ago and to where we are now is just mind blowing - imagine another 18 months further down the line. As a very small AI/3D creator, I'm excited - very excited :D
I don't know what subscription length you're planning to get, but don't get anything long term, as this will be bettered before you know it.
@@encartauk I’ve actually held off for a moment, competition is fierce in AI world now and sure tools like Viggle, Hedra and more will have updates soon.
If it had body movement as well, or at least upper body, then I definitely would sub.
❤I’m so excited! This is a game changer for sure!
Great video! We are actually using Gen-3 Alpha a lot, still trying to find the use for video-to-video.
That was amazing, Runway just keeps plugging away and adding very nice tools.
Do you know if the video image quality makes a difference when uploading? Meaning, if I shoot with a Blackmagic camera in 4K compressed, is the quality better in the cartoon (image on Act One)?
Thanks
Interesting to see things evolving. Has me anticipating what it might be like in 10 years and if it will be easily accessible and affordable then.
Thanks Tim! And I just left Runway's Unlimited plan for MiniMax, too, hehe. But I still have another day or two (I think) on Runway's Unlimited Plan. I'll definitely re-subscribe to Runway in the near future, though, especially when they match the level of MiniMax with accurately following prompts and smooth animations in real-time speed.
Sitting enjoying vid at work cheers Theo!
Holy f*#&! My brain actually started telling me I was watching a TV show at about 3:53. Given this is at about 2 out of 10 evolutions of the tech to come, how is this not the beginning of the end of at least "simple" drama productions?
I don't understand how Hollywood can survive when people just make their own TV series anytime, anywhere.
@@Walexo45 Just because we are not quite there yet doesn't change the overwhelming sense of the direction we are now heading in. So I don't know either. Small groups will be able to make entire series. If their writing is decent (i.e. better than a lot of professional shows) and they can release on platforms like YouTube, people will watch for sure! Would hate to be a singer now, and hate to be an actor a few short years from now.
Definitely a Breaking Bad nod
@@emotionalsuccess Yes, that is what I am looking forward to. I love to make movies but it was hard to find producers & actors. I am waiting for this technology to roll out its 7th or 8th evolution.. & oh boy, it will destroy the careers of actors, but it will help all-rounders like me who act, write, direct, edit. Basically it's the talented all-rounder introverts who are gonna benefit from AI.
The glasses guy looks dubbed, but the older guy is ace.
🔥LETS GO!!! Can't wait to use it!
Same!
Tim, your videos are just so entertaining!! 😆😆 "This guy was not supposed to have a hole in his chest until the end of the movie" 😅 - and yeah, that diner talking scene is definitely next level! Seems like Act-One will be able to have images as a source only for now (no AI videos, as far as I could tell from the examples)
Wow! That's so amazing! Can't wait to see you try to make a Pixar Noir detective story with this, as a test. Everything's going so fast now, 2025 will be totally crazy.
FYI - No links are provided in the description.
Maybe you can help. I have repeatedly tried to get KLING image-to-video to NOT ZOOM, NOT PAN, and most of all to follow the prompt "It is a tripod shot. It is a locked camera shot. The man is looking into the camera the entire time. He is listening. He is always looking directly into the camera." No matter what, it will not follow the prompt. Relevance - tried at all settings. Negative prompts - tried all possibilities. Spending a lot of time and money and getting bad results. Any ideas? Thanks.
I saw this today and was totally blown away. All those face capture tools will go obsolete in a matter of few months. Runway is killing it!
My grandfather told me stories of him using Gen 1 in March 2023. "You kids have everything easy now. In my times in 2023......"
you had to walk uphill, and carry your prompt-- and you wouldn't even get your video until you walked up another hill! AND IT WAS SNOWING!!
Truly mindblowing, and I can't wait to learn the workflow and generate some shorts :D
This looks amazing. I always struggle with music videos, so this could be gold for me personally
This is going to be GREAT for Music Videos!
In what way do you see this helping specifically for MVs?
Video to video IS the future of entertainment. I have been waiting for years for this. Since Ebsynth was working on it. I have a ton of scripts that I'm waiting to produce with it.
More great info Tim. Much appreciated ✌️
I will be very curious how small heads would work, or multiple characters in the same scene. Would you be able to "choose" somehow? Or should we get around these limitations by cutting up the picture and stitching together the resulting videos? 🤔
No links working in description 😢
Street Fighter naming convention??? LOL. Can't wait for the "Super" version.
I think you need to update the [link] tags in the description, thank you for the review.
Amazing!!!
It's 8 AM in France and I'm waking up with this amazing news thanks to you💫
I've been waiting for this and I'm eager to play with it🎉
Headed to the Reframe conference tomorrow - Cristobal Valenzuela (Runway CEO) is one of the main speakers. I'll report back!
Please tell “Tim Loves You!” as loudly as you can at him! Haha
@@TheoreticallyMedia Haha, done! I'm sure he won't at all think I'm nuts
@@TheoreticallyMedia tell him we all love him.
Ya, please do!!!
@@TheoreticallyMedia Tim, I transcribed a few quotes from Cris Valenzuela from yesterday. Do you want me to paste here?
Can it change your voice for each character even though you are filming and recording yourself?
It can. Although not in real time.
I am still struggling with video-to-video; I'm probably gonna have to watch a ton of tutorials to get it down to a science. I haven't made one decent video yet. But I've had tons of success with Gen-3 text-to-video and image-to-video.
Great review as always 😊 thanks
Hot take. Several years from now there will be fewer actors and scripted movies, but many more training data models. Basically stock footage being made specifically for video-to-video data.
I think we're about to move from scripted movies to interactive Holodeck experiences once this stuff and VR sync up as far as performance. There will always be a demand for high quality crafted movies, but how many of those do you think hit the theatres each year? So much of it is junk. I'm definitely up for a more specialized form of entertainment than whatever garbage some ped, I mean industry exec in pedowood decides is a good idea. Given most actors are pretty terrible people in RL, I'm ok with them being diminished in influence in the world. Meritocracy ftw.
@@rhadiem "pedowood" 😆 totally agree btw
Hey, I want to know if I can make a scene in Blender and use a greenscreen to put myself inside a CGI environment, and THEN use this tool to better composite myself and make the CGI look photorealistic. As in, my CGI may not be the best, but then I use some AI filter so that what's already there looks better. Can this work?
As a VFX artist, i can confidently say we're 100% DOOMED! 😭🙏
I don't get that so much. Isn't it just that you have a much further advanced tool at your hands?
@@n.iremerdem for now, yes. In 5 years time, it could well be game over.
No, suppose I am an end customer. Do you think I will learn from scratch all these technologies just to do it myself for one single project? No, I will contact you and I say that I need an ad for my new shoes (I am the business owner or the marketing agency), I give you videos, pictures, we align on the requirements and you do it.
@@MartinZanichelli but what about when anyone can do it?
Time. I value my time so I hire you to do it so I can focus on the business.
It looks cool; the only thing I am wondering about is whether it can only do videos where you stand or sit still, and whether all the videos are from the chest up, with no use of hands and so on.
Same! I'll test it once we get access. But even if that is the case, I think you can do some tricky stuff w/ comping and some smart framing.
It might take a little work at first, but I think there's a TON of stuff you can do "outside the box" here.
Interesting video, thanks a lot. What do u think is the best AI image-to-image tool that gives accurate results from a reference image when we take a character/person? (: Thx
How do you get consistency between takes? Using the same original footage, the same prompt, and the same seed, Runway changes the character and the background for me each time to some significant degree.
I have the same issue, so I put all the takes by one character in one video, then use the same seed and same prompt for a 2nd video with all the takes of the 2nd character.
@@CarlosRodela I think I'm doing the same thing. Even if I rerun the same clip with the same seed there's variation. I'll just live with it for now.
@@artvsmachine Yeah, I'll upload a short I just did on TikTok to show you how close I got - but it's all temporary. We just have to wait for all these pieces to come together, ha.
Oh, I deleted it - well, I'll upload one soon on my channel, check for it tmrw. And yeah, a lot of trial and error with that slider (1-10) and the seed.
I think the main reason people are so excited for this is that they are fed up with Hollywood..
That's part of it, yes, but *the real Main Reason is the greater creative freedom and control we get.* The truth is, very few people are actually willing to pay for a Premium or Unlimited Plan, which means most of what people create using Runway (MiniMax, etc) won't be great content. Most people who have interests in A.I. filmmaking are not serious about it, which means less competition.
Not really, though. (Maybe MAGA-woke people who can't handle freedom of expression and cry about everything under the sun that isn't Conservatively Correct enough for them.) The real problem is that _good_ shows get cancelled because of capitalism/money/greed. Creativity can now be decoupled from money/greed, which is amazing from a creative standpoint. Sure, we'd have to start avoiding more Nazi Simps, but it shouldn't be too hard for intelligent people to figure out how to curate this stuff.
@@FilmSpook Thankfully there's a huge open-source community for AI tools right now, and there are some open-source video projects out there. If these continue, which I imagine they will, I am hoping people willing to put down money for local or hosted AI compute power will be able to do their own without a cloud subscription. Also... F Pedowood.
@@mrxw-m8b with all the union rules I doubt Hollywood will ever be able to use much ai.
@@mrxw-m8b Because they agreed to it in contracts. They can always make movies in other countries to avoid the union rules, but then they will likely face issues releasing their films in American cinemas, because those are all bound by union rules. It's very hard to get a non-union film into cinemas.
Truly amazing - this will let the reclusive creative build an empire of their own works, which they would otherwise have to outsource or pay for help to present. Now you can present your screenplays, animation, etc. by your damn self!! Thanks Tim.
I'm getting into Blender and Unreal to block, stage and film my basic movie, then use whatever video to video AI will be around in 2 years when done, to up it to near photoreal. :)
I really like video-to-video because it provides the best control of the shot 🌝
The interesting question for all systems like Act One is that genAI, as a general rule, is basically useless for any multi-scene video content, or even consistent characters in stills, with all manner of hacks frustratingly trying (and failing) to achieve this - which makes genAI solutions like Act One little more than demo candy. The open question for AI researchers is whether diffusion techniques can ever be 'controlled' in a way that produces consistent characters, outfits, color palettes, etc. - or whether what artists mean by 'consistent XYZ' is a level of abstraction that diffusion techniques cannot be made to understand or follow. I don't think Runway is on the cutting edge of research - I haven't seen any papers by anyone who works there - so they're dependent on other teams to show them the way, if it's even possible. It feels to me like major breakthroughs will be needed, innovation as rare and groundbreaking as the invention of diffusion models in the first place, so I wouldn't hold my breath or quit my day job if I were a VFX artist. Just like with SD, every studio or production house will try all the genAI tools and give up after the first frustrating project.
Haiper 2.0 also just came out. Basically better photo-to-video, but a HUGE, I mean HUGE, plummet in text-to-video (from realistic-ish to cartoonish). Went way backwards there.
Excellent video.
Thank you for the video, Tim. The problem is that you run some tests in Runway and burn all your credits in a couple of hours. And here they are, launching yet another new feature for the "wow effect" to make some fast cash while they prepare the next unfinished tool. But filmmakers can't wait forever.
I am using Domo AI and it's converting my 3D animated material into comics. I have 100% consistency between shots. And I'm just paying $20/month for unlimited videos.
Oh, don't get me wrong, I LOVE Domo as well! In fact, that whole example video I showcased here (the animated noir thing) was a big Domo kitbash!
Should be linked down below: if you're a Domo fan, check it out. Domo plus Skyglass is STELLAR.
@@TheoreticallyMedia Thank you, I'm a fan of your videos, they are so informative! And I also believe that AI conversion is the future of filmmaking. I'll check it out.
Tim's personal assistant must spend 90% of their time brewing coffee😂
That job was taken by automation.
Very nicely covered. Thank you.
In the midst of our era's generally strange atmosphere - which, I admit, keeps me in a state of overall reservation, and I can't say I'm particularly optimistic about the future - the developments in this field of technology (AI) spark my interest. I enjoy every new 'step', and I'm thrilled that this technology will undoubtedly transform (almost) everything in the future (even though there are opposing views on this). In short, for me, all this is a drop of light, and I'm happy to have the opportunity to experience this evolution as it's being born. And of course, all of us who 'test' and 'play' with this technology are PART of its evolution, given that we are the testers and the trainers, not just simple users or observers. That alone gives great value to every new development.
Excellent!
It was a hustle, but I got the video up!
This looks very good. Better than Live Portrait. I wonder if it’s doing full body mocap?
Looks like LivePortrait on steroids!! Hope you get the early access and show some sweet Oscar-worthy acting performances 😄
That’s EXACTLY what I was thinking! A lot of questions still, but if this works like it seems, this is massive!
@@TheoreticallyMedia Though I wonder what the licensing will look like.
I create better animations on my channel than AI generated JUNK! 💪😎
Slow and steady wins the race - little by little, we are getting there with being able to actually generate consistent, quality AI video.
Whoa slow down Runway lol. Give me a chance to catch up. I am having so much fun making videos with Video to Video, now I have to wrap my head around this. Let's go!
Runway is still far behind MiniMax with image to video and text to video animations and with accurately following prompts, but this new feature is awesome indeed. I recently made an awesome black-and-white A.I. film with MiniMax that you cannot make with Runway's capabilities (as of now), for Runway cannot animate human movements very well in real-time speed, and this is a *major issue* they need to improve. Runway also overall doesn't follow prompts very well, unlike Kling and MiniMax.
Video to video is cool, but until you can make consistent extended videos with the exact same features then it is mostly a gimmick and not ready for most professional work. Runway needs to match MiniMax's level of animations in order to TRULY be a top competitor.
@@FilmSpook I actually have some Minimax on my channel. Little shorts mini 1 minute documentaries. The crazy part is how realistic the actors look. Some I haven't even uploaded here that blow my mind. Like how is this even real? What a time to be alive.
@@TheLegionofAwesome 👍🏾👍🏾 Awesome, and I agree. I'll check out your work! Thanks
If you test it, can you check Runway's ability to stay consistent with the side or back of a character through multiple shots?
Will do!
Don't you think that is the same technology that is found e.g. in HeyGen? All characters were shot from the shoulders up. Or it could be just a nice LivePortrait wrapper embedded into RW.
If there's one thing I know about Runway, it's that if they show you can do something, what you'll Actually be able to do is, like, half of that.
Haha, I do feel that's AI in general. One of the things I do like to focus on w/ the channel is to show at least one or two "bad" generations. Which, I'm sure to do once I get access here!
@@TheoreticallyMedia That's definitely what I love about your channel. You're much more balanced then everyone else, enthusiastic about the technology but not unrealistic. It's noticed and appreciated.
@@iDannyism Yes. It takes hundreds of generations to get something halfway decent. I feel in a year or so, we will look back and say wow, that was crappy.
this is the future of movie making I think
Is this Runway’s take on live portrait for facial performance only or is actually an upgrade for full on video to video?
Unsure right now. It seems to be Liveportrait on steroids, but I’m also seeing a lot of character movement in the “action” so I think this might be a bit of V2V in there?
Waiting to get access to see!
"I can't stop looking at him" 😂😂
He's seriously the best. I want to turn him into a screensaver!
I'm still using Runway just for the unlimited option, btw :D But it's so close to being perfect, ngl. Midjourney annoys me a lot when I'm trying to achieve what I want, but Runway always makes whatever I want much easier and with fewer generations. It's kinda interesting, cuz I think video generation must be way more difficult for the machine. Btw, Runway's lip sync works very well if you can manage to generate a realistic human face, like with blinks and some facial expressions.
Still a deep chasm of uncanny valley. I think at low resolution, Act One is getting more proficient, but it's not yet ready for long-format video. At best, it looks like rubber masks.
Yep, exactly. Not to mention the crappy voiceovers that don't seem natural at all.
Links in the description are missing.
It is seriously impressive what it can do!
But the end product still looks like nutcracker dolls talking 😅😂
I wasn’t impressed by AI video last year. It’s gotten a lot better this year, but I still feel it needs another year or two of development.
Animaze is software that has been doing this for a long time, with mouth tracking and eye tracking - motion tracking in general.
Elephant in the room - no lip sync to video.
none of the links work?
A.I. Will make my dreams come true. "Wesley Crusher the musical". And Wesley is the villain.
Thanks Tim
The advantage of having AI technology at our fingertips is that, with Hollywood often recycling ideas (remaking movies and TV series, churning out sequels and prequels), it feels like everything has already been done. For an individual, producing a full feature film seemed unimaginable due to the enormous budget required. However, with this technology and a bit of creativity, the opportunities for amateurs to create incredible content are now limitless.
It could become interesting for a screenwriter's portfolio to show their writing and story pitches.
We are still a long way from really good short films, where good storytelling also counts. At the moment, we are all still alpha testers. But I have lost some of my desire to experiment, because such basic things are not changing at all right now - such as making sure a face remains the same face.
Yeah, I really think it's best to play around with for now and save any script you plan on trying to achieve with AI for at least a year down the road.
I couldn’t get the links to work.
Here I was thinking Claude had changed Everything
Today was a wild day. This was the day that changed the day!
Finally. Keep it up.
Prepare for more "Harry Pumper" and "Lord of the Lyft: the Two Dumbells" videos like never before.
Oh, the meme game just stepped up, for sure.
I’m not even a video person and I am excited!
People have always said "we live in fast-moving times," but in the case of AI, for the first time it's actually true - it's a matter of days. You're on vacation for a week and everything has already changed.
Early days!!!!
Dude...MARCH OF 2023! I don't know if I hit that hard enough in the video. But MARCH of last year?! That was what we were looking at? I mean...insane.
It's getting better, but in a year, this will look archaic.
Live Portrait can do this and much more, as you can add faces to videos, but all of them still have limits.
It’s pretty cool looking, but closeups only? No full body shots? No body movements?
Unsure yet, but I think so. That said, my head is already coming up with comping ideas to bring full body, etc. together.
There are a lot of doors opening up here!
I hope people still take the time to write scripts rather than be seduced by Ai's ability to construct them. Visually and technically AI is bringing into reality the chance for the person-on-the-street to potentially make a fully-realised movie. That's awesome. Yet there ought to be some hard graft - nothing worthwhile comes so easily. My humble opinion is that AI will only be worth using as a tool to tell stories if those stories are written by us humans.
I think we'll see a wave of low-bar stuff, but that'll get evened out pretty quickly. At the end of the day, there's only so far that eye candy will take you. The good stuff always rises to the top.
Plenty of forgotten CGI Cartoons and shows from when that technology became cheap and affordable.
"No Priors" has an interesting interview with founder Cristóbal Valenzuela.
Cool, now I just need to re-mortgage the house so I can afford to do 3 video generations on their platform.
Hey Tim!
Really exciting news yet again...
I don't know if it's just me, but your links in the description appear as simple text.
Like this ----
Related Links:
🔗 My video on creating a micro-short film with AI: [Link]
🔗 Example from Nicholas Neubert with music integration: [Link]
🔗 Follow John Finger’s fun experiments with Gen-3: [Link]
Ah-- thank you! Fixing that now!!
I think that guy in the middle needs some Pepto! 😂
The party just never ends.
Today was relentless. So much dropped! I’ll need to do a roundup toward the end of the week- and man, I don’t even know what tomorrow has in store!
Lets Go!
The stage has one-man plays; movies will soon have one-man films.
Runway Gen-3 IV: Second Strike lol
Haha Alpha Turbo XL EXTREME!! Thank you for getting that joke!
Not gonna be excited until I can get my hands on it.
Should be rolling out now! I just checked: I still don't have it, but I expect by today or tomorrow-- they're pretty good about that.
@@TheoreticallyMedia Thanks. Btw, how do you use a voice other than your own? Other services, or Runway's built-in features?
And what if Hollywood adopted it - experienced filmmakers, writers, artists... just think about the output 💀
I know for sure Hollywood has been looking at it, with the thought being: the actor can look “perfect” in every shot with no need for makeup. Which, I still think is kind of backhanded, considering no one seems to be worried about all the makeup artists that are employed by the industry.
I don’t know, I guess the old BTS part of me takes issue with that. We talk a lot about above-the-line jobs, or post. But, if this is fully adopted, the “little guys” will be the first to really go as a line-item cut.
Craft services will be fine though. That’s a business to get into!
I’m half kidding of course. To be honest, I think Hollywood might adopt a few aspects of this tech, but for the most part, movies will still be movies. I always maintain we’re looking at a new medium here. One that is rooted in filmmaking, but will eventually become something else.
@@TheoreticallyMedia Thanks for the reply. As a 19-year-old, I'm also confused, cuz technology is evolving too fast - we can't even adapt to it in that sort of time. I also want to be a filmmaker, so I'm a little bit afraid about traditional filmmaking 🤔.
But anyway, we have to adapt it and regulate it before it turns into Terminator 😂🥰
The only thing bothering me with Runway is its censorship; considering I'm a horror filmmaker, it's almost impossible to generate certain things. It's really annoying.
Only one 2D animation sample, 2 seconds in length. Hedra is probably still winning with 2D animation.
Hedra is killing it. I need to feature them on the channel soon! (I was supposed to do the launch, but timing on my end got screwed up! Sigh!)