Thanks to our sponsor Abacus AI. Try their new ChatLLM platform here: chatllm.abacus.ai/?token=aisearch
What I love is that you keep testing the same prompts as in previous videos, so we can have a clearer idea of what each generator can do. Thanks, man! I personally use Kling for very satisfying results, but sometimes I'll give Minimax and Genmo a try.
thanks for sharing!
wow, I'm so hyped for this stage of humanity. Interdimensional cable, HERE WE COME
YEAAAAHHHH
8:38 Will Smith must be a professional magician, making that fork disappear so smoothly 🔥
As smooth as Jada makes his dignity disappear.
The panda falling over is actually hilarious 😂
damn, that shipwreck by Genmo was so awesome
I love your teaching. Watching from Nigeria.
Thanks
As usual, very good video. Thank you bro
Thanks for watching!
@@theAIsearch Thank you sir.
Man, once they can do 10 seconds to 1 minute and character references for consistency, everyone's going to be making a film
What's holding everyone back from making films isn't the clip length (since you can link 5-10 second clips by using the last frame to reprompt the next 5-10 seconds), and it isn't character reference either, since you can easily create a LoRA + ControlNets, which will get you very consistent characters in different poses. The big issue everyone has is creating MULTIPLE consistent characters in the same shot. That's almost impossible without external editing/face swapping.
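(A minimal sketch of that "last frame" trick in Python with OpenCV, assuming made-up file names; the image-to-video step itself happens in whatever generator you're using:)

```python
import cv2

def last_frame(video_path: str, out_path: str) -> None:
    # Grab the final frame of a generated clip so it can be used as the
    # start image when prompting the next 5-10 second segment.
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(total - 1, 0))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not read last frame of {video_path}")
    cv2.imwrite(out_path, frame)

# hypothetical file names, purely for illustration
last_frame("shot_01.mp4", "shot_02_start.png")
```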
@gnoel5722 Well, consistent characters don't really exist; I've seen the ControlNet scam for over two years now. The biggest problem with AI artwork is that it hasn't been trained on the rules of art, anatomy and shape theory, just the final illustration.
Also, we need a Mixture of Experts for art generation: background, characters, environments, motion, color. And we need far more than 8B-12B models.
For films there are tricks to generate one character per scene at a time and then merge them, but 5 seconds isn't enough.
I'm guessing we'll be able to make movies by around 2033, but maybe (and hopefully) I'm wrong. I will definitely be using it.
I can't wait for video generation that can render text as well as image generation can
yeah, it still doesn't do that very well. I usually put the text on the starting image, then just animate the image
It's interesting to see open-source video generation tools!
Thank you for the video, I really enjoy your content & your funny comments :)
Regarding the boy at 25:26, I think he's either shocked or emotionless, not scared
Well, I think Mochi needs more improvement in quality and facial expressions.
thanks for watching!
I can't wait till we get image to video for this
6:45 the left puppy's reaction is realistic 😮
I can’t use anything that doesn’t have ‘Image to Video’ though. I don’t find ‘Text to Video’ all that useful. So, I’m hoping these tools will offer ‘Image to Video’ soon.
I'm sure this will be added soon
CogVideo has image to video.
It will be something when it can be run on my own PC. Excellent!
Genmo's giant sea creature attacking the ship was very cool, BUT there were no splashes when the sea creature came out of the water, unlike the huge splash that happens randomly at the beginning.
what kind of NASA, I mean Elon Musk, supercomputer do I need at home?
It runs on the Casio scientific calculator
@@WongEthan-ge6pq😂😂😂
If this is what I think it is, it takes 4 H100s
Unfortunately, it says it requires 4 Nvidia H100 GPUs to run it. That's some pretty immense hardware requirements, especially for just 480p videos.
Man, I wonder how long it'll be before either consumer hardware or video generators become a thing that normal people can feasibly run.
@@CaidicusProductions Someone made an fp8 quant, I believe, and it can run within 20GB of VRAM
29:15 Genmo and Minimax are just Po from Kung Fu Panda. Especially the belt in the Minimax video. Copyright discussion incoming 😅
lol
You just changed my entire month... subscribed! I wish i had this last month before I released my video. Thank you!
Thanks for the sub!
the amount of VRAM needed to run this thing locally must be insane
apparently it requires at least 4 H100 GPUs
Yeah, the minimum is 64GB.
And that's just the minimum.
Nope, I made a tree within an hour with AI. It learned and made it perfect very fast, and I had an Nvidia 1650. I always keep up with the latest in their technology, but the best AI is the one you save in your file explorer ;).
for Mochi, some people have made it work with only 12GB
A programmer would be able to argue that no puppies is still a group of zero items.
If you're checking the length, it's still 1-based; only the index is 0-based
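(In Python terms, with a made-up puppies list:)

```python
puppies = []                       # "no puppies" is still a valid group: zero items
print(len(puppies))                # 0 -- length counts items, so the empty list has length 0

puppies = ["rex", "bella"]         # hypothetical names, purely for illustration
print(len(puppies))                # 2 -- length effectively counts from 1
print(puppies[0])                  # 'rex' -- but indexing starts at 0
print(puppies[len(puppies) - 1])   # 'bella' -- so the last item sits at index length - 1
```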
Genmo gives you 2 generations a day.
Yeah, I just tried to use it. My first two generations failed, then it said I'd run out of credits for the day. 😂
13:38 a princess moonwalking in front of a disastrous monster is its own kind of art
Excellent analysis. I couldn’t find the site test link in the description.
Are you looking for this? www.genmo.ai/play
Thanks for sharing!
27:50 - Wow. The bottom left is a movie without any work. Game-changer
7:50 That dog kneading the dough made me laugh 😂
can't wait for AI to learn to code perfectly
Every day, the case for building a 4x3090 desktop gets more appealing.
I gotta say, I would love it if these projects provided easy-to-install options. I get that it's not their focus, since these are often research projects made public, but still.
this is why big tech proprietary slop keeps winning, way easier integration
Another great one! Thank you!!!
you're welcome!
When Black Forest Labs releases their video generator, it will kill all the other services.
What is the reason?
The boy looks like he started the fire
what a time to be alive
Dumb question: when you say it's open source, does it mean it will be available for us to use locally, for example, in the future?
If you have ComfyUI, you can run Mochi now.
the lawsuits are going to be interesting! Actors and movie studios are not going to take this lying down 😅
Thanks for the video! I'd like to ask which generator you recommend for doing a full sequence of shots with a consistent character, without using image2video. Is there any model that can take a photo reference of my character's likeness (face and wardrobe) and keep it consistent throughout several shots?
Minimax is still the best
I hope that they'll release video2video in the near future.
Open source?
It seems to be the best one in most cases, except it didn’t do so well at generating anime characters.
@@chariots8x230 I won't say that; you can do many things with an open source model, like using it with Flux, and many other things
@@Sujal-ow7cj no
Waiting for a 4-bit quant, or at least 6-bit, or I might go ahead and do the quantization myself
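(For anyone curious, here's a minimal sketch of the core idea behind weight-only quantization; it's a generic PyTorch illustration using 8-bit for simplicity, not the actual Mochi code or any real quantization library. Real 4-bit or 6-bit quants also pack multiple values per byte and usually quantize per block rather than per tensor.)

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Per-tensor symmetric quantization: store int8 weights plus one float scale.
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    # Recover an approximation of the original weights at inference time.
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)        # stand-in for one layer's weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print((w - w_hat).abs().max())     # small error, at roughly 1/4 of the fp32 memory
```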
I love you because you give us the free ones. ^_^
I look forward to the moment when I can just enter a synopsis as a prompt and get a full-length feature film in return.
True, Genmo videos are too damn good, but it requires so much compute power that a normal PC can't handle it
moshi and mochi? next it's momoshi and mochi...lol
Would be good if you put labels next to the generations to show which output comes from which model.
20:57 Why does the generation by Minimax look like Marie Schrader from Breaking Bad?
"Zombies in station" = Cash Jordan thumbnail !?!!
😅😂
Thanks, can it be used with consumer-grade GPUs?
Kling is blowing me away!
9:26 Kling just straight up made Will Smith Chinese. 😂
Thank you, great job. I like the first AI video generator. I think they will get better as they learn. Are there any AI video generators with sound?
Finally Abacus AI 😂🙌
The swimming girl's legs looked broken in the intro
Ummm.. when I checked my download of Mochi 1 (Genmo) from their website, it was 960p, 30fps, and 4,000kbps. So maybe it has been updated on their website but not in the open source model. They could have just adjusted the size, but it looks about 960p to me.
interesting. thanks for sharing
Interesting that the princess running away from the dragon doesn't work. There should be so many 'person running away from something' reference videos to learn from
Would anyone know what specs one would want in a PC or laptop when starting to learn about this type of AI software/websites? Thanks
17:47 this looks like Itadori and Fushiguro walking on the left side???
I was looking for this comment and yes I thought the same thing
Bro, I'm not sure if you can see this. Wanted to say thanks for the all-in-one AI information. After watching your video, I'll be starting to use AI in my videos ❤. Thanks again
amazing bear Kling
lol what a joke! 3 times in a row "video failed"! No credits left...
The only issue with these new open source models is that as they get better, they also demand a lot more power to run locally. When you typically need many generations to get a decent shot, waiting an hour for each take is not an option unless you have a lot of time on your hands and are very patient. Affordable consumer AI hardware is still lagging behind at this stage.
Genmo isn't really free; they tell me I can only generate two videos a day, unfortunately
We not escaping the false allegations with this one boys 🔥
Will either of these be optimized for Apple's new M4? What are the minimum specs to run them?
Minimax is by far the best one.
How does abacus compare to their competitor ninja chat?
How do I use Minimax? What's their website link?
I get it.. for free stuff, it's amazing. (Remember, Midjourney was once free.) Yet I'd say it's a little bit bold to claim ANY of these beats Runway Gen-3, given that the videos being showcased are always the best picks. Right from the beginning.. the monk walking like the floor is electrocuting him LOL, or that swimmer girl's "broken leg".. the liquid being poured while the thing on the table keeps depleting, and so many other things. Most probably not up to Runway Gen-3's level..
The only thing one can say is that Runway's customer service is a CRIME.. and besides, no image-to-video or video-to-video here.. naaah..
And speaking of that, Midjourney is working on their video model now, one that can generate artistic stuff. THAT would be a game changer for artists
What would it take to fine-tune this model? Could you create a tutorial about fine-tuning video models?
25:20, the boy doesn't look scared. The boy looks like he just burned down his house
lol
Okay, so I'm getting confused. It's like every day there's a new AI update. Firstly, I'm struggling with the clickbait right now and I'm not up to date: so, which is currently the leading LLM overall, in problem solving, and in picture creation? And which is the leading free AI video generator?
These are free but with limited generations:
LLM: GPT-4o and Claude 3.5 Sonnet (new)
Video: Minimax and Kling
AI image creation: Ideogram 2.0 and Flux 1.1
I like many of these, but excluding the amazing Kling, they seem very good for cutesy animals. Not so good for multiple people in a near shot that isn't just a person's head, or for distant views. I guess it's down to training.
Purz tested the smaller Mochi 1.
I think we need an AI that will shrink these videos to 5-10 minutes. Apparently, the author thinks that length is important for the YouTube algorithm.
With all these AI companies, what good is a 3 to 5-second video?
Good comparison, but I still see Minimax as the best AI generator on the platform
Are there video generators that are good for *adding* new things to existing video?
Making a video of a princess running, and then uploading that to an editor *adding* a dragon following her in another prompt, might be an easier way to incrementally create the desired result.
13:42 hailuo ai minimax✨✨🔥👍
what is the discord to use those models?
Genmo Mochi 1 requires 4 H100 cards to run. They are £30,000 each. So if you have £120k sitting in your bank, go for it.
Not true, it will run on a 24GB Nvidia card using ComfyUI. Set tiling to 8; it takes about 20 minutes per run.
@@geoffphillips5293 This is taken directly from the Genmo Hugging Face weights page:
"Hardware Requirements
The model requires at least 4 H100 GPUs to run. We welcome contributions from the community to reduce this requirement."
So the next time you want to claim something isn't true, try researching before calling people liars.
@@TPCDAZ Why so combative? Watch the last section of the video you're on.
@@ten_cents With respect, I quoted the actual company that made the model. Some random came on to call me a liar, yet I'm the combative one? If you don't like the fact that it takes 4 H100 cards at £30,000 each, then take it up with Genmo, not me.
@@TPCDAZ He literally says it in the video: you can run it on a ComfyUI node with a ~20GB VRAM card
NO image to video is insane
Ah, Mochi requires 20GB VRAM... I guess I'll have to wait until I upgrade my computer then, but it is at least achievable.
Does the local installation allow NSFW generation? 😁
Why u gotta know 🤨
@IPutFishInAWashingMachine why not
I think you can, since we can train it
I would imagine it can; however, you basically need a supercomputer to run Mochi 1: 4 H100 GPUs. Good luck buying that.
@@IPutFishInAWashingMachine Kinda need that NSFW. Just read the forum; you can make NSFW with it using a plugin
The woman in Kling and Minimax is visibly sadder and more distressed. I would be too if I had to pay that much.
for those who came for Will Smith 8:26
Man, when Sora? 😭
Sora? What's that?
Is the source code open source, or just the API?
Bro, Rhyme is a Minimax model lol, and Hotshot is still the best video generator, but you never put it in your list because you have a partnership with Kling and now Minimax, which is now another crap one. Runway Gen-3 is OK, but Hotshot is still the best; its image quality is also higher than every other video generator. Like full HD, clean, no need to upscale compared to all the other video generators. Your website generates like 420p video, bro; it's not the best, it's the weakest 😅
Can Genmo do image-to-video? Please answer
Tip on how to make your AI video generator rank up: just make the model able to generate Will Smith eating spaghetti 😆😆
true
Could you show us how to download and run it locally, for all us noobs eager to learn?
You didn't mention Pyramid Flow, also recently released
"at least 4 H100" 💀
tutorial on how to install please!!!
All this time and there is still no open source text-to-speech that beats ElevenLabs with its 28 supported languages
Bro, the video is good, but the thumbnail isn't shareable in some places. Please change it so we can share it
Mochi 1 has a huge amount of bad generations though; Kling is still a lot more consistent
what happened to sora?
Sora? What's that?
@@theAIsearch He means Sora from OpenAI
18:23 Walmart anime be like😂
lol it wouldn't do any videos, just failed to generate 3 times, and now I'm out of tries lol
Nothing is free. What's the catch?