Thanks for the content!! All we need is the right advice on how to invest in crypto and we're set for life; I made over a million dollars trading in the crypto market this year, regardless of market conditions 😊.
"The future of healthcare is here! With AI-powered avatars, we'll have access to expert-level knowledge in real-time. No more waiting for doctors or searching for answers online. These avatars will be trained on vast amounts of medical data, enabling them to reason and respond like a PhD-level expert. By mid-2025, avatars will transform healthcare delivery, providing personalized medical information, emotional support, and even helping with diagnosis and treatment plans. Get ready for faster patient understanding, increased accessibility, and reduced healthcare costs! What do you think about this revolution in healthcare? Share your thoughts!"
The (very, *_very_* young) woman in the coffee shop does not actually seem to be speaking real words; it looks more like she's clacking her teeth together at times. There are some really amazing horror-oriented AI "reels" on Facebook that might give H.P. Lovecraft chills. One especially has multiple scenes that weave together a sci-fi mini-story to great effect. I'm looking for an image-to-3D-model conversion that can work from clean drawings. BTW: I think your "walking bear" animation failed because of the raised arm position of the base model. Finally, although having nothing to do with film directly, a retopology tool for organic and hard-surface 3D models would seem like a highly useful (and non-controversial) use of AI; no sane person enjoys *_that_* tedious process.
aitutorialmaker AI fixes this (AI driven Tutorials). Adobe's New AI Video Generator!
Here's what I want: 1) Stable model. 2) Stable environment. 3) Creative camera work.
If I can simply create characters and insert them into an environment, without either of them morphing into an acid trip, I’ll pay.
As of now, getting usable clips is not only time-consuming, with too much trial-and-error prompting; it also gets expensive.
Whoever can accomplish this first is going to do very well. I hope it happens soon.
Exactly, not even the Super Nintendo had such crappy assets for motion capture LOL hahha
it’s cool to have the runway gen3 extensions… the problem is they’re using GEN 2 to do the extensions, not GEN 3, so that’s why the extensions look so weird and low quality… they really need to use GEN 3 to do the extensions.
Didn't know that. Does that hold true for all subscription levels?
Interesting point!
knowing Adobe they’ll charge like $20 for 5 clips
And you'll have to redo each idea a few times.
Right 😔
It's ok, that keeps people who won't or can't pay out of the door. Beta drops the 2nd week of October, can't wait.
The Adobe CEOs are extremely greedy, unlike anything seen at any other company. Additionally, it's worth noting that ON1 is set to launch their new ON1 RAW photo software, which will exclusively feature LOCALLY generated images ... NOT ONLINE.
We hope not!
As soon as Adobe releases a 4th version of the Firefly model, we'll have a robust image-to-video pipeline without subscribing to many different services.
Only heavily censored
@@AINIMANIA-3D Yes, but that's the case with every plug-and-play solution.
If you want privacy and uncensored generations, you've gotta go with Flux and ComfyUI
@@AINIMANIA-3D That's such a boomer mindset. Everyone will have some kind of text-to-video model, so censorship will no longer be an issue
@@KevinSanMateo-p1l Do you have a list of AI video generators that are uncensored? Ideogram is the only one I know of. Minimax appears to do Will Smith, Darth Vader and Mario, but I don't know if it would do Trump shooting a gun, for example (like in the Dor Brothers' videos).
@@KevinSanMateo-p1l That's such a "I don't know how to f-ing read" comment, so, f-ing READ. Commenter said when Adobe releases Firefly 4, they will be able to use ONE subscription service (Creative Cloud) rather than NEEDING to use multiple. Read, for God's sake.
After watching the freaky extend-video feature, it made me wonder if this is the real Skynet. Instead of an apocalypse and nukes, which we've seen coming, Skynet is going to create seriously disturbing videos that drive us into insanity.
You know what? You make a great point, because what could end up happening is people making AI videos that look like a real terrorist threat to try to start a war, and now this is going to make it harder for governments to verify videos. Oh jeez, this is going to cause a hot mess of new fraud.
It's time to retire the Sora comparisons. We've got AI video options to create with TODAY.
Is anyone still even waiting for Sora?
@@dasberlinlex My grandfather. Sora 1.0 was announced in 1984 :)
@@MartinZanichelli I love it. Great joke. You have a nice sense of humor.
Sora did its job. It got the ball rolling on a mass scale to give us all these options. Sora was never about just Sora.
@@TPCDAZ Yeah, I'd say Sora just never was. Not even looking forward to it. 😎
You just can't bring coffee girl down; that cup is at least half full! :)
lol
LOL
MiniMax is the most impressive model out. It does great with expressing prompted emotions but I have to say that Kling’s pro version has been capable of that too. Great video, as always :)
100%, my Friend, I've been using MiniMax every day for over a week, and it by far is the best at animating humans as well as other things like birds flying and animals running.
@@FilmSpook just needs that image to video and it's top dog, for now.
Minimax is so impressive with text2video!
@@curiousrefuge I tried Minimax after I watched your excellent video, and Minimax is great, but without the ability to use image-to-video it's useless for me, because you basically can't produce something with one consistent (human) character. Which is the best option for image-to-video? I've personally never seen videos as realistic as Minimax's, but if you have to produce many videos with one character doing different things, which AI tool do you prefer? Thank you once again for your YouTube channel :)
@@georgikozhuharov2293 Can't try Minimax because the page won't even open. Looks like it's too popular and overloaded right now...
17:43 It would be cool if there was an option for the AI to automatically rig the character.
Wouldn't be surprised if that's going to happen very soon!
As someone working on an XR concept in California I've been following the legislation you mentioned. The lead on the language in that bill is the "Center for AI Safety" which is basically a non-profit consultancy. Not all that enthusiastic that they are leading the charge here in CA.
Great point...we'll see!
Dude that's Me!
r/Optopode here, and thank you so much for the reference 🪶
That video was hilarious!!! I loved it! -Mitzy
Amazing work!
I bet Adobe will make a separate subscription model for Firefly, just like they did for their 3D service, Substance.
We'll see!
Dude if you dragged the ankle point to the bear's toes I can imagine how precise you were with the rest of them. No wonder the bear animation looks wonky
I think a better comparison would be to have them all start with the same picture. Even just taking the first frame from Firefly would have been a good starting point to compare
True! That's a more accurate test!
I'm not sure you can fairly compare the Firefly marketing videos to something you chucked together in a couple of minutes.
@@terryd8692 I mean, there will always be a bias, as Adobe will simply choose their best examples, but to give it a bit of a fight, at least start from the same premise.
Feels like a free ad for Adobe. If you're going to run comparisons with Runway, at least show us the prompt so we can make our own assessments. The "trust me bro" approach makes people wonder what you're hiding and who is paying you to hide it.
Nice. Some of these look handy in one way or another.
How tf do they get their movies to look so high resolution in those films shown at the end? I know there are ways to "cheat" by adding fine grain and filters, but the resolution overall looks much better than what Runway puts out, even with good prompting and high-resolution input images. "Seeing Is Believing" especially looks amazing; the shot with the Asian woman is great!
Thank you so much! Technically, Runway's resolution is slightly higher (1280x768) than Minimax's (1280x720), but I agree, the pixel density in Minimax feels smoother. Especially with cinematic outputs, Minimax has great consistency, and though not technically "sharp" or "high-res," it feels more balanced, kind of like a Blu-ray downsized to DVD that still retains its perceived sharpness. For "Seeing Is Believing," I didn't use Topaz or any other AI video upscaler. Instead, I just put all the 720p clips into a 5K Final Cut Pro project, which just "zooms" them out without additional upscaling or pixel interpolation. Then, as you mentioned, color grading and adding fine grain help give the shots that "hi-res" look, even though they technically aren't. :) You can watch the final 4K version of "Seeing Is Believing" here: ruclips.net/video/ghnk0rf5qPU/видео.html
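For anyone curious, the numbers in that reply work out like this. A quick sketch; the only figures taken from the comment are the 1280x768 and 1280x720 output sizes, and the UHD timeline size is my own example, not something the commenter stated:

```python
# Sketch of the resolution math from the comment above. The 1280x768
# (Runway) and 1280x720 (Minimax) figures come from the comment; the
# 3840x2160 UHD timeline is an assumed example.

def pixels(width, height):
    """Total pixels per frame."""
    return width * height

runway = (1280, 768)
minimax = (1280, 720)

print(pixels(*runway))   # 983040 pixels per frame
print(pixels(*minimax))  # 921600 pixels per frame

# Dropping a 720p clip onto a UHD (3840x2160) timeline stretches each
# source pixel over a 3x3 block; the NLE scales the frame on playback,
# but no new detail is created -- grain and grading only mask that.
uhd = (3840, 2160)
scale = (uhd[0] / minimax[0], uhd[1] / minimax[1])
print(scale)  # (3.0, 3.0)
```

So the "hi-res look" described above is perceptual: the pixel budget never grows, which is exactly why grain and grading matter so much.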
@@particlepanic Thank you for taking the time to give such a detailed answer, this is great input. I appreciate it! At first I didn't realize it was the creator who answered, haha. I'm looking forward to your future projects, keep it up :)))
Glad you enjoyed these!
Where is the link to the Chinese generator? There are tons and I can't find it, thanks
It will be everywhere soon. Blackwell chips at work
the next 6 months will be crazy!
11:47 Why not do an end frame when testing the camera movement though? I would want as much control as possible, so I would definitely do an end frame. I’m curious to see what the results look like when you have both a start frame and an end frame, and you change the camera movement at the same time.
Good point, we'd need to test that next time.
After LITERALLY stealing thousands and thousands of photos, images, and video clips from their clients in their cloud service, of course they can generate great AI videos.
We appreciate you watching!
Yeah, how people are still happy to give them money is beyond me
Adobe can't even do humans in Firefly yet, so I won't hold my breath on how good Luma is.
We'll see!
Adobe's video extension, to extend a rush... does it need to be connected online?
Content writers become heroes.
The bear animation… you put the first point in the wrong spot! 😮
Did you see that the dot says groin when you click on it?
In the Runway vs. Adobe comparisons, Adobe's actually seem just as janky tbh, and I wouldn't use either in real-world applications.
1. Look at the reindeer's back leg as it turns to face the camera
2. Drone flying through lava... cause sure, that's totally a thing drones can do
3. The puppets, sure, whatever, both are cursed
4. Look at the ripples on the sand change over time
Adobe was impressive but Runway is still the champ for me. I guess I'll have to test it out myself.
Definitely worth testing it all!
Where is the link to Minimax?
You've obviously never lived in Canada - snow does blow up sometimes lol.
Haha...we have so much to learn!
Wait, which program generated the montage that's playing while you're talking about legislation (19:48, 20:26, 20:37, 20:45, etc.)? Those are some of the best I've ever seen.
It's a handful of different tools!
Oh no, not me participating in Gen:48! I'll have to try Adobe next.
Can't wait to see what you create!
Off-topic question- but I really like your glasses. What is the model and brand?
...and yesterday the announcement that Runway Gen-3 can now do video-to-video... things move faster than the news.. btw, thanks for the Meshy reminder... have to check it out directly. 🙃
Have to keep checking :)
A werewolf you say? 🐺
::howls at the moon!::
Thanks for the fresh information! The AI generator race continues)
Glad you enjoyed it!
There was a guy who flew his drone through a volcano... This looks EXACTLY LIKE THAT..... It's copying his work.
There are several videos on RUclips of people flying drones through lava. It's not copying anyone's work. It's using it as a reference, just like every other generation. CTFD.
It's certainly possible it was trained on that one video, but a generation draws on far more data than a single vid.
Will Adobe Firefly only interface with other Adobe software?
Very likely!
You don't need to extend that way; just reuse the same photo and prompt something else, the results will be better.
I want still camera shots; the camera seems to always be moving around no matter what I input
Locked shot, stationary shot, tripod-mounted camera, etc. None of these work for you?
That's a good tip. I'd say even with those prompts you'll still get movement in 50% of your shots unfortunately.
@@High-Tech-Geek I'll try those terms in my prompts, thanks 🙏
Bonkers. Thanks for the excellent overview.
Our pleasure!
Can we select a lower frame rate to get more than six seconds of video, and use the output in DaVinci Resolve Studio to fill in the missing frames?
You can certainly use other AI tools, and try DaVinci to smooth out the missing frames.
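A back-of-the-envelope sketch of the idea in this exchange. All numbers here are hypothetical; no generator in the video publishes a fixed frame budget, this just illustrates the trade-off being asked about:

```python
# Hypothetical: assume a generator has a fixed budget of 144 frames
# (6 s at 24 fps). The budget itself is an assumption for illustration.

FRAME_BUDGET = 144

def clip_seconds(fps):
    """Clip duration if the same frame budget is spent at a given fps."""
    return FRAME_BUDGET / fps

print(clip_seconds(24))  # 6.0  -> the usual ~6 s clip
print(clip_seconds(12))  # 12.0 -> half the rate, double the duration

# Restoring 24 fps from the 12 fps clip means a retime tool (e.g. an
# optical-flow interpolator in an NLE) must synthesize one in-between
# frame per pair of real frames: 143 new frames for this clip.
print(FRAME_BUDGET - 1)  # 143
```

Whether the interpolated result looks acceptable depends entirely on the footage; fast motion and occlusions are where optical-flow retimes tend to fall apart.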
Thank you very much! I have been researching this space, looking at smaller vendors. I would have ignored Adobe assuming a heavy handed "solution", but this actually looks worth paying for. (Adobe stock at the next dip?)
We'll see!
I'd love to see Adobe release this stuff, but my fear is that they begin charging extra for generations. Wouldn't surprise me either as they are pretty greedy with their stock footage after you're paying big money for the suite.
We would probably bet that there will be some kind of charge.
What I don't like right now is the pricing relative to the very small amount of output... I know it's early days, but the pricing is crazy.
True, it's quite pricey!
All this talk of film and footage, but you never showed a single clip of either!?
It's all digital video, ain't no film or feet involved! Digital "video" not film, is measured in "time", not feet. It's my pet peeve and everybody gets it dead wrong it seems, drives me half nuts! I grew up in the age of film and I made the switch to digital and got it right. So how is it that you kids that never touched a piece of film in your entire life, how is it that you all keep talking about "film" and "filming" like you even have a clue what the stuff is, let alone where to get it?
Cheers 🍻
We appreciate your feedback!
Isn't that Sora? They're just not letting on that's what it is.
bingo
which part?
@@curiousrefuge Adobe's video gen upgrade.
I am not convinced by a comparison of a few random generations from models that have been trained on different data sets and for different ranges of topics. It's a bit like taking a Formula One car and a golf cart and comparing their off-road capability.
True! It's difficult to test, but we try our best :)
The AI video sector is getting hot AF. I'm using like 10 different video generator websites in my workflow to make videos. It's honestly getting out of hand. I also find the California pushback on AI is due to the fact that Hollywood is there, and they don't like the idea of the common man competing with their market share.
Do you know which video generators are uncensored (violence, guns, gore, horror, blood, celebrity and politician likenesses, etc.)?
If one uploads an image of Trump, for example, to Gen3 Runway will it animate it?
@@High-Tech-Geek I've made some Trump image-to-video with Kling. I think if you just upload the picture and call it "Fat orange idiot…" instead of "Donald Trump", then it won't ID it.
Definitely makes finding a workflow difficult. However, we wouldn't be surprised if in one year from now most things are consolidated.
I wonder if the AI model will be a plugin in their desktop video software.
Perhaps one day!
That girl with the coffee has to be from a horror movie. Going on a date with a pretty girl moments before being abducted and eaten by an alien.
That's not surprising; they waited, and have something better. Thanks!
Thanks for watching!
"You need to convert 25 to 24p for maximum quality".... lol ???
Meaning, the most generally accepted 'cinematic' look
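For what it's worth, the 25-to-24p conversion being joked about is just a small retime; here is a sketch of the arithmetic (the 10-second clip is an invented example, not from the video):

```python
# Converting 25 fps footage to 24p by conforming (playing the same
# frames back at 24 fps) slows the clip slightly; the alternative is
# dropping or blending roughly one frame per second.

source_fps, target_fps = 25, 24

slowdown = source_fps / target_fps                 # runtime multiplier
frames_dropped_per_sec = source_fps - target_fps   # if dropping instead

# An example 10-second source clip (assumed, for illustration):
clip_frames = 10 * source_fps                      # 250 frames
conformed_runtime = clip_frames / target_fps       # seconds at 24p

print(round(slowdown, 4))           # 1.0417 -> ~4% slower
print(frames_dropped_per_sec)       # 1 frame per second
print(round(conformed_runtime, 3))  # 10.417 s
```

The ~4% slowdown also shifts audio pitch, which is why conforms usually pair the retime with a pitch correction.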
Thanks, I just uploaded 2 videos that look about 90% realistic, done with Minimax... have to try Adobe too
You can do it!
An advertisement can beat a real product.
Nice! But how can I make a longer clip, like a short film, using the same characters in Minimax?
You must generate many clips and splice them together to make your story come to life!
@@curiousrefuge Yeah, but can I keep the subject consistent?
... Strange to compare an Adobe AI video, which is their own demo example, without trying it yourself, against Runway, which you can try on your own. We know demos are cherry-picked and not representative of what you get when you try an AI video tool yourself. (Sorry for my English, it's not my first language.)
Thanks!! Great video
thanks for watching!
I think in all of those, Runway did the more realistic and fun version... Adobe looks like a painting...
Thank you so much for highlighting "Seeing Is Believing" as one of your AI Films of the Week! For everyone who wants to see the full "Cinematic Turing Test" demo in 4K: ruclips.net/video/ghnk0rf5qPU/видео.html
Our pleasure!
Next time, please put the camera closer. You can't see anything that's being drawn :(
Is that Snoop Dogg eating a burger...?
Hahah could be?
F*ck Adobe! I keep trying to get away from them, but they keep bringing me back in...
They are certainly stepping up their game!
Brilliant video. Thanks
Glad you enjoyed it
That lava shot is actually an FPV drone pilot's footage; I remember it from his YouTube vlog where he flew his FPV drone into a volcano, and the lava destroyed his propellers! I wonder if he submitted his clip for AI training, or does Adobe just snatch up content creators' clips the same way Udio does with their audio generations?
It's absolutely not the drone pilot's footage.
Perhaps there was *some* training but not one single generation is a result from a single video.
This looks amazing
Is the robot the hamster's extension or is the hamster the robot's pet?
The world may never know!
@@curiousrefuge It's a lot like the question "Is AI the human's extension or are the humans AI's pets?"
I don't know why everyone goes crazy about it already; it's still in its baby stages. Videos last like 5 seconds at best and don't even have audio, so you can't possibly make a movie or TV show. You're better off imagining something; at least then you can think of the outcomes
It's more about adding it as one thing to a toolset, rather than making a movie with it entirely.
Will you pls tell Runway to do negative prompts on Gen-3? They are too dumb to get this
We hope they add that soon!
@@curiousrefuge You're big in AI; I think they are too dumb to do it. You're in their spheres, tell them we need it pls
Just like they still haven't done 2K video with Gen-3. Also tell them to get on with it, or they will lose out big, as they are starting to with Kling, who are taking a lot of their business because they have both
With Kling's update, Kling is now better than Runway in every way. Gen-3 is losing on every level, too stupid to do negative prompts and 2K; it's become a joke
Wait until it comes out first. This is advertising... it doesn't always do what it claims. I've still never had any decent results from Firefly with images.
very interesting episode
Nothing short of awesomely amazing...Thanks for bringing the value-content. Greatly appreciated.
Our pleasure!
It pains me to say this but these new Premiere features might pull me back from Resolve. Their AI is not coming together very well at all.
I have to say premiere has gotten way better lately.
We love Davinci - we hope they will come out with some new tricks over the next couple years!
I'm really excited for Premiere to start introducing some cool things!
We are too!
Thanks, bro
Is the video generator free for a few uses?
Minimax currently is, but better get started now before it's too late :)
Amazing Stuff!
Glad you think so!
We've already got AI models with large followings online, so insisting on caring whether a persona's popularity was fabricated or augmented by Hollywood with real people, versus AI doing it all, is already starting to wane. Thank goodness that bill means the top actors can still rake in their millions. Just ponder how much money is spent on getting a brand-name actor, and how that money could have been used to make the film better in other ways.
I mean really, take a look at your latest Marvel blockbuster: would using real actors make any real difference, other than some expectation thing? It's only an expectation because marketing has made sure it is. On the plus side, those with true acting talent who don't have the 'look' aesthetic of the moment will have more opportunity to get work, coupled with some pretty visual avatar representation to lust over so yay?
The same could be said for the glut of YT (TikTok, whatever) influencers: the talking-head model is going to be the first to go AI, so hoping to make money on the platform is not a good future job prospect. heh
This technology needs mandatory watermarking, or a ban on photorealism. It is going to make video evidence inadmissible in court, even when authentic, because there will be no way of knowing for sure whether it is real.
oh, its not out yet.
Not yet!
When people misunderstand everything about film making and art itself
This is neat wow cool. Also, it's evolving faster than humanity can handle. Buckle up.
Glad you liked it!
I'm learning a lot.
thanks for watching!
I love runway because its so weird and sh!t
Is anyone still waiting for Sora?
We're enjoying all the other tools currently!
They just want control.
But how do they do these Tesla clips? When I try to do this, it either flags a copyright issue or it just doesn't look like a Tesla.
Very often people will also use After Effects (AE) to composite those assets into the images.
The Firefly video model looks crappy compared to Kling. Kling is king, ngl.
Hey, do you have a Discord channel?
We do! Check out our website to get in!
With California's new law, who's going to pay Will Smith for all the spaghetti eating videos??
Jada
Thanks for the content!! All we need is the right advice on how to invest in crypto and we're set for life. I made over a million dollars trading the crypto market this year, regardless of market conditions 😊.
A Malayali from Kerala ❤️
Kling still King
Kling is cool!
"The future of healthcare is here! With AI-powered avatars, we'll have access to expert-level knowledge in real-time. No more waiting for doctors or searching for answers online. These avatars will be trained on vast amounts of medical data, enabling them to reason and respond like a PhD-level expert. By mid-2025, avatars will transform healthcare delivery, providing personalized medical information, emotional support, and even helping with diagnosis and treatment plans. Get ready for faster patient understanding, increased accessibility, and reduced healthcare costs! What do you think about this revolution in healthcare? Share your thoughts!"
so that's how AI will kill us.
At least for quick visits we can see the utility!
The (very, *_very_* young) woman in the coffee shop does not actually seem to be speaking real words; it looks more like she's clacking her teeth together at times. There are some really amazing horror-oriented AI "reels" on Facebook that might give H.P. Lovecraft chills. One especially has multiple scenes that weave together a sci-fi mini-story to great effect. I'm looking for an image-to-3D-model conversion that can work from clean drawings. BTW: I think your "walking bear" animation failed because of the raised arm position of the base model. Finally, although having nothing to do with film directly, a retopology tool for organic and hard-surface 3D models would seem like a highly useful (and non-controversial) use of AI; no sane person enjoys *_that_* tedious process.
It's true - it's not perfect. But imagine where we will be in another 6 months.
Can't wait to not be able to generate anything cool because it violates the ToS.
Lots of things to figure out!
So I busted my ass to learn 3D modeling, and now we can generate 3D models from just a script. Great :))))))
Those with 3d modeling experience, even with generative tools, will have a head start!
Yeah. Won't be using any Adobe products, AI or not.
Understood!