_Sora_ is being surpassed by everyone.
I hope _Kling_ releases their _2.0_ version soon.
Kling 1.6 is coming this week 👍🏼 Stay tuned for a full review (2.0 needs more time, I guess)
@@cyberjungle A "1.6" version?
Now, that is very interesting.
These AI models usually jump from 1.5 to 2.0.
I wonder what the 1.6 version will have...
Waiting on Kling
They're taking too much time
Well, Sora isn't usable yet, so I don't count it the same as this Google one.
That is the best looking video model that I still can't use...
I know the feeling. Google is teasing hard
That octopus vs knight was an epic example!
This is a way better thumbnail best one so far
Very impressive, thank you for updating us
💚
This looks great. Any insight into whether it can maintain character consistency?
Not enough information yet. I’m also curious to be honest
2:24 Guy in the street walking through invisible water.😂
That low angle POV shot of the beast in the water was amazing.
Veo 2 looks impressive, between Kling and Minimax which one do you think is the best now? Congratulations on your work
Kling is much better overall. But if your work requires a lot of action and motion shots, then I would recommend Minimax, because it excels on that front.
@@cyberjungle tks
@@cyberjungle Minimax, however, is extremely slow at generating videos and has a low output resolution.
Thank you 💖 Can I animate images using this tool, or does it only accept text prompts?
It accepts both images and text prompts 👍🏻
@cyberjungle Wooow, it is amazing. Could you please test it with anime-style images and animate them? I tried to use it, but it says it is not accessible in my country. It seems it is not accessible for artists for now.
Excellent work.😍👏👏👍
This is a game changer, can't wait for its release. Thank you for sharing this piece of info!
💚
The image generator is so censored, I couldn't do the first two prompts, and one was just an action scene????? lol, not a great start
Thank you for the overview.
Question: do you know if these fantastic videos are generated by “only” the one “prompt” shown in the caption? Or are there other prompts earlier to prepare the scenes?
My understanding is that you can choose either path: 1) Text to Image to Video (a two-step approach: first generate the scene, then set up the motion). 2) Text to Video (a direct approach where you generate the whole video with a single prompt).
I agree with you that this looks promising in a different way to the over-hyped Sora promo clips. What interests me is things like someone getting in and out of a car, people interacting in a realistic way, and real physics, as you point out. Cats dressed as clowns or dragons riding bikes is getting really tired as a way to demo AI video. I can't wait for this to be released.
👍🏼
Oh my, this is insane. This is what Sora was supposed to be.
Maybe it's because Google has access to the YouTube library?
Simply the best t2v right now. Spatial consistency is superior to all the other models, but not surprising given the amount of data and years they've spent on mapping the world.
0:29
Love your content. The correct word is "surpassed" not "supressed". 👍
Let's hope it stays as good as we hope it is, and that it's not pure HYPE like it was with Sora. I trust Google DeepMind a lot more than OpenAI anyway.
"If you could use it, it would be really good". This is what we call vaporware
Sora is so DOA
These videos are amazing. It would be even more amazing if we could actually use any of them 😢
💯
Sora competing with Pika
"Google Just Changed AI Video Forever"... but you cant use it -lol-
:)
Google is cooking the biggest and finest curry.
Amazing.
I always join Google's waitlists, but they never give me access.
Same here :(
forever isn't as long as it used to be.
Must I remind everyone that Sora had the BEST teaser trailers out there, trailers that shocked everyone? After release we all know what happened, so don't go by these curated teasers; they mean nothing.
You’re right about critical thinking when it comes to these models. We will see if Google can match the massive expectations they created
Yeah ok very good
Lol. Translation: "one of the best I've seen".
Heavily cherry-picked, the best of the best videos generated.
Apparently not so. Some testers have said that it really is this good, even on the first attempt with a prompt.
@cbnewham5633 the big one is image to vid. All the leading AI movie makers use that format.
Using words is just too random.
@@armondtanz True. I never use text to video for my things - I always generate an image first so it cuts down, as you say, on randomness. But the comments on this new offering from Google say that image to video is doing exceptionally well. We will have to wait and see, but third-party testers giving this kind of feedback is very encouraging. I believe the testers are also required to give regular feedback to Google on the quality of the results they are getting. I get the feeling Google is going all out on this to make sure it is miles ahead of any other generator - probably because they've been so far behind until now.
Don't bet against Google 😂
The tennis motion is pretty awful. Running is decent.
pretty awful in comparison to real life? :)
@@cyberjungle Yeah. What they were doing bears no resemblance to how even a below-average casual player swings at the ball.
Everything looks great before release; just look at Sora, a trashy and overhyped model.
First
Once Google Veo 2, Adobe, Sora, and Nvidia have launched, it's goodbye to China's AI. China just shows off but doesn't care about quality.