Sora Will Change an Industry Forever.
- Published: Feb 15, 2024
- openai.com/sora
Get early access to videos and support me on Patreon / sebastiankamph
Chat with me in our community discord: / discord
Stable Diffusion for Beginners Playlist • Stable Diffusion Begin...
My Weekly AI Art Challenges • Let's AI Paint - Weekl...
My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
ControlNet tutorial and install guide • NEW ControlNet for Sta...
Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
Did you spot the dad joke?
You said helicopter footage instead of drone footage.
That's a dad thing to say.
Like someone asking u to fax them something. 😂😂😂.
I guess that drone joke went right over everyone's head [cough]
A Maize zing? :D
Just sooo corny
@ShocktorGaming Yep, that's the one I picked up on.
Mind blown here too
The leg switch at 0:06 is my favorite moment of that clip
Sneaky sneaky XD
Absolutely phenomenal. So glad I jumped into Stable Diffusion last year to truly appreciate these insane advancements.
Thanks for the news brother! Your dry humor fills me with life, don't ever change lol.
Haha, glad you like it
That's exactly what I thought, mind blowing. You could clearly make full scenes with this.
Oh, for sure!
Would they be worth watching is the question.
probably takes two days to render a 10 second clip
Speechless and I am totally convinced! Can't wait to enter Sora world!
100%! Let's hope the open source tools have something similar in the works.
On the one hand I'm stoked, the consistency is amazing. It opens a lot of possibilities.
On the other hand I'm waiting to release the inner child within until we know what the limitations are. 😊
On some of these - the detail on the distant faces in the 'Chinese Lunar New Year' video is far beyond what's normal even for SDXL or DALL-E 3. This T2V model is in many ways clearly outperforming the dedicated T2I models. The hands aren't perfect (look at the grandma birthday party), but they're better than any T2I model I've ever used.
Stable Diffusion doesn't even come close. These guys have accomplished another feat.
holy, this is craaaazy!!!
Right!?
I'm thinking of starting an AI video and AI image service. Do you think I should start it? I also love doing it 😅😊
There's a big demand for high quality stuff currently.
It will replace the need to use stock videos, but text-to-video still has a long way to go. I assume it's like generating images: if you start explaining too precisely, the quality of the image and the combination of its parts breaks down. The solution is to develop 3D visualization and use it to produce videos. For now, though, AI-generated video seems to be making rapid progress; time will tell if it is expensive to use.
You're right! But give it a few months, or years. Development is insanely fast now 😅
Yeah we see why ”Open” AI has so much drama behind the scenes. I hope someone leaks the model.
Do we already have alternatives to Sora for making photorealistic videos?
This looks too good to be true. I mean, even for a simple image those are insanely good. Unless they render these forever on a supercomputer, I cannot imagine reaching this quality.
I'm proud to say I finally upgraded my GPU (thanks to my Stable Diffusion obsession). It's still not an RTX but a lot better than what I had. Now to figure out the best "arguments" for my bat file lol
whoooa this is the next level
I 100% agree. I got a little excited again!
I am impressed by the advancement in temporal continuity, but I have three issues with this first iteration:
Text: Text remains a problem. I don’t think it’s a coincidence that the first scene is set in Tokyo and not New York, where the signs would be in English.
Human Realism: While the facial features are great, there is no facial movement. Additionally, when there is movement (such as the cat in bed), it becomes obvious that it’s AI-generated. Any hand movement also exhibits a familiar AI glitch.
Media Transition: There is no image-to-video or video-to-video transition to see how it combines different modes. Currently, it relies solely on text prompts.
I think for me what I find exciting is the length, 60 seconds, you can make a short movie if you wanted
I hope they put no limits on length, just time-consuming with your RTX at 100% and lots of gigs free, but hey, everyone playing with Stable Diffusion already has that.....
Imagine what this means for the commercial industry, you can do commercials just with a prompt where in the past you had to invest thousands of $$$. If it really works like shown, I am blown away and can’t wait to get my hands on it.
AI is growing exponentially.... I installed SD A1111 just last month and it already seems outdated. Just installed Forge (thanks for that video!) and Juggernaut and other checkpoints with the VAE baked in render in like 15 seconds now instead of 5 minutes. Now I'm playing around with SVD in Forge, but once I master it, in a week or so, I'm afraid something else will have come out by then. Thanks for all the content Kamph!
Happy to hear that! It's a wild ride this.
It's insane how fast the area is developing.. I can't imagine where we will be at the end of 2024!
100% agree :)
I want to play it!
Soon:tm:
wow, window reflection in dog scene
Insane! I bet you need a NASA computer to ever process this locally though lol!
Let‘s see how Sora turns out when available.
Cherrypicked clips fuel the fire🤤
But if it will be THAT good, it's interesting to see how Runway, Pika, and Midjourney (text-to-video was announced) will react…
Sleazy salesman: YOU TOO CANNOT HAVE THIS PRODUCT!
Sure looks amazing to me: decent clip lengths, none of this 2-second clip stuff. Amazing detail and consistency. Pretty exciting stuff
Yeah, I'm surprised how long these videos are!
Now, let's place our bets on the pricing model shall we?
Oh, probably much more expensive than chatgpt. What do you think?
Not too sure, but every sub plan should top out around $30. Anything over that scares off the hobbyists and the curious, which could be a big chunk of income.
These videos must be hugely expensive to compute. I expect that pricing will be quite high.
Unless Microsoft goes and integrates it into Bing for free (again). That'd be funny.
This came even faster than I thought. I thought we would see these kinds of images by the end of 2024. Well... nope... they are here now!!!
The wool helmets are lols but otherwise really impressive. Shame "Open"AI ain't actually open.
Props to that one red panda who absorbed its offspring with its butt
Honestly there's not much difference between this and real footage: almost one hundred percent consistency, no flickering at all. If you look closely at some details you can spot incoherencies (like the woman's legs in the first clip), but tbh it's almost unnoticeable to an amateur's eye. Can't wait for the public release!
100% agree. Huge leap
I really hope this is genuinely this good, even allowing for them cherrypicking the results. It's definitely impressive to a whole new level.
With the amount of videos available, it really does seem like it is that good.
@sebastiankamph I agree, it's just that these days I've got to the stage where I'll trust it when I see it in trusted hands (or my own).
I can see a future where movies made with 100% practical effects and models make a come back, purely for the novelty and the same reason why people care that actors do their own stunts etc.
'Give this a couple years' 😂. I think it's more like a couple months
To run it locally? No, we lack the compute, by a long shot. It has implications when we compare it with other revolutionary developments that are within our grasp (ones we could run locally).
Interesting how, so far, I have seen zero examples from the few who may play with it since they are part of the dev team. None seem to have escaped the lab? lol
People are gonna make their own anime and their own movies now, OMFG. This is gonna kill everything
I'll wait for it to go public before giving my opinion, to avoid the likes of the Gemini promotion, creating hype over fake demos
Can't wait to try it, hope they make it public soon too.
A-maize-ing video once again
You found it, very nice! :D
The cool: anybody can do anything soon.
The not so cool: fake news gets new toys...
still exciting
How the hell do we get access to it , and where is my dad joke?
Oh, the dad joke is in there. Did you miss it?
@sebastiankamph It was bait, I was hoping for an answer on the other point 😉 By the way, I gotta ask: where are you based? You sound Scandinavian?
I got it fast... I was just seeing the last video's comments about Sora... just a sec ago 😅
I got to serve the community and what you guys want to see, right? 😁
It's also the best anyone has ever seen. Well, amongst the plebeians anyway.
Wow! I think these are all made with a quantum computer 🤣
Yes. It's cool. But it's going to be censored so I am not really interested beyond initial curiosity.
Still, considering the "Challenge Accepted" mentality of the open source community,
Stable Video Diffusion and its "offspring" will likely be close to, or better than, Sora by the end of the year.
With the tools to make it even more controllable.
Censorship is a nightmare
Especially if you wanna create crime or horror. 18 certificate, etc.
I think they should have partnerships where solid reputable creators can have access to more uncensored stuff.
@armondtanz Or! Just wait a year.
There is a plateau where the AI results are all but perfect.
The commercial AIs will likely get there first, but open source will catch up shortly after.
And AI Hardware for the masses is an expanding market as we speak.
I would say we'll get to that point before the end of the decade.
@vi6ddarkking 6 FRIKKEN YEARS...???? Oh c'mon man. Don't say that.
@armondtanz That's before we achieve near perfection, not before Sora is passed by open source, or an open source model surpasses GPT-4.
That'll likely happen this year.
It's a race and we all win by the end.
Yes, amazing, but it worries me that closed-source is getting ahead.
Sometimes open source just needs a kick in the behind like this.
@sebastiankamph But my arse is already sore. You know when Emad said we will work ourselves to death this year? No joke.
Give it a couple years?? As fast as AI is moving, by the end of the year is more like it. Same jump the images had.
Massive closing down sale of drones with 4k cameras... RIP
You're probably quite right
The Matrix is coming to us.
I love it, but the actual creative people will have a massive upgrade to their abilities. The mediocre folks will still produce mediocre stuff with AI just as they have now. The winners will be the consumers.
It won't be free and it's gonna take a lot of computing power. Usual home PCs won't handle it. I think.
I think you're absolutely right
So they finally figured out how to make each frame reference the frames before it when generating new frames... sheesh, took them long enough. But I gotta say, the consistency is superb XD
So disappointed. Sam why..... 😭🤧
This is a huge "rip off" of all the videographers. A few days ago, in a conversation with a friend from an agency, he, younger than me, partially broke my enthusiasm for generative AI, because it doesn't actually save anything or fill any need, it just removes it.
Yes, it will remove the craft in the same way that we're now using machines to forge steel or shape wood. We'll just be controlling the machines (and maybe not even that).
🫴🏿 Anyone could make money selling AI stock footage, make creative music videos, cut costs for studios because the art is more important than whose pockets need to be filled, and it's huge for the untalented with artistic passion, and for indie artists who are often bottlenecked by budgets never allowing them to get their true story across.
You call yourself an artist and can't see the benefits and needs it fills 🫴🏿
@sebastiankamph I wanted to write "rip off". I love the possibilities that are created with these tools, but what I see in agencies is very bad. Teams of 5 being reduced to 2 people. Keep up!
Excited by the technical achievement, not excited about the implications for art and society. Tech like this enriches billionaires and titillates anime-loving fanboys, but it is not a net positive for artists. But don't let me spoil the party lol. I guess I'm just a Luddite.
auto 👎for clickbait title
lol this guy's clearly not impressed
Sora makes the whole industry completely useless. I hope the movie tool industry dearly asks OpenAI to shelve Sora....
Bro, you are late. This has been all over the internet for the past few days, all over and everywhere. Were you sleeping?