Update: Gen-3 is now publicly available. I said it might be a couple of weeks... it was only a day! :)
Runway released a prompting guide to help you get better outputs here: help.runwayml.com/hc/en-us/articles/30586818553107-Gen-3-Alpha-Prompting-Guide
YOU ARE THE BEST YOUTUBER THAT TALKS ABOUT AI NEWS
Any word on when image to video is coming for it?? That's what I MOST wanted!
Is it "no" publicly available? Or "now"? Or "not"? Lol
It's available only if you're a paid member, but the price is heavy: 100 credits per 10-second video. You get 625 credits on the $15/month subscription, so you can make at best about 1 minute of video. 😮
@@mreflow now*
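The back-of-the-envelope math in the comment above checks out; here's a quick sketch, using the credit figures as quoted in that comment (assumptions, not verified against Runway's official pricing):

```python
# Credit figures as quoted in the comment above -- assumptions,
# not verified against Runway's official pricing page.
credits_per_month = 625   # $15/month subscription
credits_per_clip = 100    # one 10-second Gen-3 generation
seconds_per_clip = 10

whole_clips = credits_per_month // credits_per_clip  # partial clips don't count
total_seconds = whole_clips * seconds_per_clip

print(f"{whole_clips} clips, {total_seconds}s of video per month")
# → 6 clips, 60s of video per month
```

So "at best 1 minute" is right: the leftover 25 credits aren't enough for another clip.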
By the time Sora finally becomes available we’ll probably already have an open source version or free one that’s just as good
That would imply that OpenAI is putting no effort into improving the model since they showcased it in February, which I doubt...
@@CosmicCells I’m not saying it’s not going to be good I’m just saying they’re taking so long to release it that competitors have time to catch up
@@fast4549 Catch up to the February snapshot we saw? I know what you're saying, but I think that Sora, like every other video model, is most likely being improved as we speak, so whatever version we get when it comes out will not be the Sora we saw back in February... more like a Sora 1.5 or even 2.0.
@@CosmicCells We know ClosedAI has been putting a lot of effort into peddling Sora to Hollywood and companies with deep pockets. Into improving it or making it available to the public? Not so much, it seems.
@@CosmicCells Nah, they're putting all their energy and effort into making it totally useless for the exact things that we all want it for.
8:21 He was reloading the baguette with cheese.
While it's really impressive how far we've come with AI video, I'd really like to see how specific we can get with it. Your prompts were kinda generic. What happens if I want a character to wear very specific clothing? Colours, material, fit? Can it do a coat with mother-of-pearl buttons, or do we get any old buttons?
Image gen looks impressive as long as you do generic portraits of people, but it breaks if you do anything complicated and out of the ordinary. Try prompting for a handstand, or someone dangling head-down from a tree branch.
To be useful for serious production work, and not just a toy, you need a granular level of control. I know I'm asking for a lot, since we're still at the beginning stages, but it'd be nice if you really stress-tested it, so we know where we stand at the moment. Still, great work, Matt! 🙂
We need the ability to put an image in, and animate from there
Don't plan on making a movie on this trash
I think it will be better when the image to video option becomes available.
Is it not available yet to creators?
@@lukewilliams7020 Apparently not. Several others answered my question to that effect. 🥲
I find in general with AI video generators that Image to video has better results than prompt to video.
@@lukewilliams7020 seems that way. img2vid is likely on the map, though.
It's available, just go to text to video and upload an image instead.
Low-quality clip-art video for people who don't really need anything particularly good. It'll do, because it doesn't really matter; no one pays much attention to it anyway. This will be great for those adverts at the side of blog posts that everyone's trying hard not to accidentally click on while they focus on the thing they googled in the first place. Marketers will love it. As long as it moves, it'll do.
Bro, hit me up. I stayed up 3 days, and by day .7 I was getting studio-quality shit. I also came up with several different AI bass-fishing ideas on my phone that the companies steal.
The band playing music beneath the ocean was pretty cool! Even the floating microphone looked kinda real lol!
The movement of the singer's hair underwater impressed me the most.
Quite realistic.
Comparing this to generations from a year ago, it's a big improvement. Next year's video generations should be even better.
That’s cool, but I wish we had more control over the videos created. The videos are starting to look better, but the control is still lacking, which makes these videos not that useful for filmmaking projects. Also, we need *‘Image to Video’* if we want this tool to work with consistent characters, and to create scenes for films. I want to be the director of the film, rather than letting the AI have all of the control over how a scene looks.
How will you control it? By entering millions of prompts through trial and error?
Next time you see a demo, assume they generated hundreds of videos to make that one demo!
These funky artefacts heavily remind me of image gen around DALL-E 2, Midjourney 4 and SD 1.5.
As we've seen those ironed out and improved in later versions, now we're seeing better hand generation and legible text in the newer versions of those image gens - can we expect similar, along with improvements in temporal consistency, in these video gen models? Pretty excited for the next year, if so.
That baguette video blew me away a little bit.
17:58 the girl on the water is practically perfect, very difficult to see any problems. Impressive!
I would like a video on why the AI videos do what they do and what the blockers are in making them more realistic - I mean can the output be put back through an AI that corrects all the issues?
Runway costs $35 a month for less than 8 minutes of video, and that's for Gen-2.
The next plan is $95 per month for unlimited generations. This is a professional platform, not for end users.
More like an app with pro costs but n00b outcomes.
11:02 Imagine if the man & woman hugging here were ‘consistent characters’. I hope it will become easier to create scenes with *‘multiple consistent characters’.*
The color palette it picks is pretty good. Also I like the contrast. I'd say in about another year we'll be looking at some pretty fantastic looking AI videos from all companies. Thank you Matt for putting in the time to show us all!
Just tried this and it's amazing for music videos. When making a real film you're also creating a lot of takes to cherry pick the best one, so I'm okay rerolling until it looks ok
Gen 3 said ‘not available in your country’. LTX was available today and it was like an interface for the usual crappy vid generators. A waste of time and early signup attention. Great vid as always 👌
I just started using Runway Gen-3 a week ago and so far I've made a zombie video and now a dancing cats video. I think both turned out great. Yes image to video will make Gen-3 easier to use but if you know enough about cinematography and camera shots, and you're not too picky about the details, you can do pretty well with just prompts alone in Gen-3.
Huh, another AI company showing off their product and preventing the general public from trying it out.
I know, it’s annoying. Esp when the companies don’t even give a release date to look forward to
AI is the new tech bubble… the bust this time will be bad.
Huh, another greedy-ass consumer expecting something for nothing.
The current state of the market, and what consumers have recreational access to, is incredible.
@@devonwilliams2423 Huh, another idiot who fanboys AI to the point of drool. What about my comment offended you, sweetheart?
Who said I wasn’t happy to pay?
I pay for GPT and Midjourney right now - this had nothing to do with free access, it was about access and hype.
@devonwilliams2423 We don't necessarily expect something for nothing. Many would be happy to pay, including me. We'd just like a release date, and not get told it releases in "a few days" only to find out that's not true.
So many of these videos look like they’re beyond CGI, they’re really captivating!
I feel like a lot of the clunk could be resolved in the future with a critique-AI pass of sorts that checks object permanence and physical interactions.
Matt, have them export a depth-map video along with the video (as an option, for more points), which could be used to composite in additional elements, like animated 3D objects that you add in.
Though the generations are hit or miss, this seems really excellent for visually storyboarding your actual production.
Thanks for your video on Udio; I'm addicted to it now. The "copyrighted lyrics detected" block is pretty annoying. Considering we can easily get cover-song licensing, and both the AI prompter and the rights holder could be making money, it's kind of dumb to restrict usage of other people's music.
I animate and do lip sync from MidJourney images, so I wish I knew how this was for that. I don't need any 1997 screen savers LOL. So far it doesn't look as nice as Luma but I'm only a third of the way into your video and I will finish watching now
bruh when kling finally becomes available to everyone, it'll wipe the floor with every other video gen out there
Tried Sora/Keling/Gen3/Pika; have to say these things are not very differentiated, at least from a user's perspective.
Text-to-video still has a relatively high barrier for most mobile users. (And that's why we were founded: to level the playing field with a video-to-video approach.)
The quarterback at 12:30 was sacked so hard he's looking out the earhole of his helmet as he fumbles the ball.
As a working video editor / multimedia designer, I've been using many different video AI tools over the last year. This looks a little better than the current crop, but honestly, video AI is not there yet... not for professional use, anyway. And I'm yet to be convinced that Sora's sample videos weren't heavily manipulated in post to please investors. I hope I'm wrong.
9:06 He just turned clipping off on his hand for a second 😆
Wow, this sure shows how much further they need to go to get a full 10 seconds that's usable. But it does look like there's a few seconds in most clips that are pretty good.
I really hope someone makes one for those of us who VJ concerts/festivals, so we can have generative content on the fly.
This would be incredible for live VJ sets
@@RaxLakhani Yup. I run some stuff I made myself on my LED walls when we do fests or concerts, but it certainly could be better.
Simply imagine the fun video editors are gonna have once a solid AI video generator arrives. Work will be done in a matter of secs. 🤩
And THIS is just an alpha release. ALPHA!! Look how well it's starting off at that level. We'll see improvements in the days and weeks to come. And remember, this is with all those creator-media 'restrictions' imposed on AI scraping as well.
Imagine how amazing the next iteration of Walking with Dinosaurs will look with this tech.
This helped me a lot! I've been waiting for a minimum quality level and I think we're just there by our fingernails. I was going to wait a bit because I couldn't decide on the tools and procedures. I think the best control and quality for me will be midjourney/lumalabs/elevenlabs/synclabs. If anybody has alternate suggestions I'd love to hear them. I'd wait for runway to have image to video but there's always a reason to wait and no guarantee of how it will compare to Luma. Time to make some cool stuff!
As of this date, should one invest time in Luma or Runway? Which one gives the best end result for the money/monthly credits?
Thanks, Matt. I'm glad I subscribed to 'Stripcue To Coeplee Chickk Thenn'.
I assume OpenAI is allowing others to release theirs before they release Sora so they don't catch flak for being the first. Then once the public is accustomed to decent video generation, they release Sora which will be probably better since they have more compute, and receive little to no backlash. At least that's my guess...
They're busy nerfing it so it's useless.
Actually, it's better to be first. That way any issues, glitches or problems can be excused with "Well, we were first; the others have had time to improve on ours." If Sora comes out last, after all these, and has issues the others don't, it's going to be killed instantly. Also, Sora won't be able to demand high pricing if the others do just as well for cheaper.
@@bigglyguy8429 That. Being 'PC', and it'll only be after the chosen one from the country maximus championship.
Very good video, and I had a similar experience in my first tests: you definitely need the $95 unlimited plan to use it, because >90% of results are garbage. Also, no image input. I wouldn't recommend it at the moment for normal usage, but if you want to experiment a lot and learn which things work, it can generate some interesting and sometimes just amazing output.
Now, what is the compute power used for these Gen-3 videos vs. the compute power used for the Sora demo videos?
Sora isn't available so no one knows.
In other words, there is a long marathon left before this is really good... 😮💨
Runway has always sucked at this; not putting my $ on this app.
All I want is to be able to tell Netflix “Generate season 9 of game of thrones” or “generate a thriller about…”
What makes someone qualify as a creator to get early access? I thought if you had a standard subscription you got early access? I have Standard but I don't see the Gen-3 option. I'm not on desktop right now, though, only my phone.
Does anyone know how to generate a video (with this or another generator) with no camera movement? I've set the camera movement to the lowest setting and prompted "tripod shot", "no camera movement", etc., but nothing works, and I need still-camera generations so I can edit in pre-existing characters filmed on a tripod.
Is it possible to choose the aspect ratio? Like 9:16 instead of 16:9
Maybe they won't release it until the music industry's lawsuit against the AI music makers is over? These companies have gotta be scared that they'll really get sued over their "fair use" training on copyrighted material.
Needs image-to-video (I find that gives better results in most AI video generators). Results look OK and have come a long way in a short space of time, but there's still a long way to go IMO. Any idea on the cost of this?
Do you know if it has live audio-reactive generative visuals? If not, do you know any AI software that does?
Could you please share just a few of your videos, so it would be possible to download them and analyse how they handle the image-to-image transitions (25 or 30 frames/second), just to see?
On that text test, I think you need to give it a more simple prompt that's a single word or two, and keep the background simple.
How long of a clip can you create with the Pro subscription?
It's an alpha model; they can tune it a bit, like they showed in the preview, but the industry isn't ready for it.
🎶 Runway train never coming back. Wrong way on a one way track 🎶
However, for some use cases it's definitely usable! 🎉🎉
Call of Bread Restaurant Zone - Loaf and Load
Just got an email giving access, so I'll give it a test drive (crash?) and post on X, because that's what people do.
Waiting for Pika to make a move as well?
Exciting times.
Is there an AI that can help me generate some basic clip-art-style graphics for a billboard advertisement?
tbf you did request the rapper to be "signing into his microphone" lol
The bowling video is so uncanny that it made my legs go cold and filled me with a deep sense of dread 😨 Soon we'll have the name of a new phobia for these kinds of things 😂
I would suggest genophobia, but that one's very much taken... 💀
2:24 What is the name of the song, and where can I find it?
Hi there, how many credits does it cost to run one of these generations on Gen-3?
Thanks
Nice to see someone make the greatest AI videos, since I already make the best AI songs ever made ^^
14:42 he’s an AI millionaire 🪙
I'm convinced they are all Comfy UI workflows lol
Matt - what is that matrix code "lamp" in your bg? I'd love to get one 🥺
It's unclear how many Gen-3 videos we can generate under the various paid plans.
3 mins in total on the $35/month Pro plan, while the $12 plan gets 60 seconds. So sadly I don't think there will be many usable seconds.
@@bloxyman22 Thanks! So that's 18 Gen-3 videos per month on the $35 plan. Almost $2 per 10 sec video. Not too bad if the videos are good but otherwise it's quite expensive. I see they also have a $95 unlimited plan (relaxed speed).
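For what it's worth, the per-clip math for both plans works out like this (plan figures as quoted in the replies above; treat them as assumptions, not official Runway pricing):

```python
# Plan figures as quoted in this thread -- not official Runway pricing.
plans = {
    "$12 Standard": (12.0, 60),   # (USD per month, Gen-3 seconds per month)
    "$35 Pro": (35.0, 180),
}

for name, (usd, seconds) in plans.items():
    clips = seconds // 10                      # 10-second clips
    print(f"{name}: {clips} clips/month, ${usd / clips:.2f} per clip")
```

That reproduces the "almost $2 per 10-second video" figure ($35 / 18 ≈ $1.94), and the $12 plan is actually slightly worse per clip at $2.00.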
Can Runway Gen 3 take in starter images like Luma?
And continuation of prior clips
It looks promising. For me, the most useful workflow is generating images and then animating them; I hope they add this feature soon.
Have to say, Gen-3 has been relatively disappointing considering the price point. I found it hard to get good 10-second results when generating fantasy art and animation. It's fair, but not as great as I was hoping for based on the demos.
I want to see you do an AI video for typography, if that exists.
When do we get to see you play the banjo, Matt?
AI will play the banjo
Marvel is watching Runway. The lawyers are probably getting their paperwork ready.
I created 3 beautiful music videos for Kitaro without even using Sora or Gen-3; check them out. 😍😍😍
Thanks for this. I've been using Pika but this looks like it will be useful as well!
14:02 Guys, don't forget to _Stipcile to Cplehlee Chickk Then_
Seems like video game footage was used to train this model; look how well it did with all the video game prompts.
The real-life results were mostly pretty bad... I estimate that only next year will we get text-to-video models that create results good enough for, e.g., high-quality short films.
Can it do a higher resolution than 720p?
More interested in the image to video.
Hi Matt, would you mind saying what kind of computer system you are using to generate your videos. Just wondering how powerful a system you need to keep the generation time to sane levels. Thanks.
Runway Gen3 runs in the cloud.
Sure text to video has come a long way but what's the use case? Entertainment?
@@gamooor1386 B-roll mostly. Establishing shots and such.
Not a game changer. It's progress and makes some really cool stuff. But the Game Changer is still ~12 months away I think.
Very interesting so far. AI video and games really tickle my fancy.
I have a feeling Amazon and Meta are gonna eat up all these AI companies.
what about image to video? did you try that?
The coder might have hardcoded 'Runway' into the text generation, and every other text is just random like before. Just satisfy business partners with a demo; you'll get more work after they actually use the final product 😂
Don't forget to Chickle Then! 😂
Eh, don't think it's usable yet. Maybe for some specific shots.
Crazy fire 🔥
18:17 omg that is Twiggy.
looks like he was loading the baguette
Matt! I’m so disappointed. My primary use case is image to video, and you didn’t even mention it. 😢 Please at least tell me if it’s an option in Gen 3.
Looks like not a feature yet 😢
It's not available yet. The camera controls and motion brushes are also missing.
@@maddocmiller6475 Damn, that’s disappointing. Thanks for letting me know.
@@JetSurfingNation I appreciate your letting me know. Guess I’ll be waiting a little longer. Luma Labs for now!
Mesmerizing? Is it too late to cease AI? Remember Hansel and Gretel and that scenario? Luring us in with sweets? Just to be… laid off by AI, then… suffer human extinction? Or worse, imprisoned by an… AI new world order? Enforced by swell robotics popping up everywhere… replacing you with AI job loss?
Your timer during the generation makes it look like it took an hour and 47 minutes
Only if you don't understand numbers
It's concerning to me that people agree with this.
Exactly; as a 1-year-old who has never seen a number, I wholeheartedly agree.
mreflow still freaks me out every time lol
that baguette ops looking good
I Love the Taylor Swift Gen-3 Song! 😁😂
You forgot to mention you have to upgrade your account to try it. I wish it gave a few free credits a month to make a certain number of videos.