Thanks to our sponsor Abacus AI. Try their new ChatLLM platform here: chatllm.abacus.ai/?token=aisearch
Still prefer Runway Gen 3 Video to Video.
Yay. Local video. Awesome!
What I love is that you keep testing the same prompts from previous videos, so we can get a clearer idea of what each generator can do. Thanks, man! I personally use Kling for very satisfying results, but sometimes I'll give Minimax and Genmo a try.
thanks for sharing!
I've tested Minimax; out of 100 videos generated, I only got 10 correct results. The subscription fee is not worth it: around 10 correct results in one month (130 gens = 10 usable per month in my case). I can't afford the unlimited plan; it's too expensive for a hobby.
It's true that I also find it easier to understand with repeated prompts. Thank you
I was ahead for once, it seems! Great to see people discovering Mochi 1; we have had a blast with it on the research front!
Don't forget Mochi-Edit, which is basically "Runway's Act-One at home".
Great Video !
8:38 Will Smith must be a professional magician making that fork disappear this smooth🔥
As smooth as Jada makes his dignity disappear.
The panda falling over is actually hilarious 😂
I can’t use anything that doesn’t have ‘Image to Video’ though. I don’t find ‘Text to Video’ all that useful. So, I’m hoping these tools will offer ‘Image to Video’ soon.
I'm sure this will be added soon.
CogVideo has image to video.
Genmo's giant sea creature attacking the ship was very cool, BUT there were no splashes when the creature came out of the water, unlike the huge splash that happens randomly at the beginning.
You just changed my entire month... subscribed! I wish I had this last month before I released my video. Thank you!
Thanks for the sub!
Damn, that shipwreck by Genmo was so awesome.
I love your teaching from Nigeria
Thanks
I can't wait till we get image to video for this.
You're a legend. You have no idea how much footwork and research you've saved a lot of people, including myself.
Thanks!
he just searches google lol
As usual, very good video. Thank you bro
Thanks for watching!
@@theAIsearch Thank you sir.
I gotta say, I would love for these projects to provide easy-to-install options for these things. I get that it's not their focus, since it's often research projects made public, but still.
this is why big tech proprietary slop keeps winning, way easier integration
Man, once they can do 10 seconds to 1 minute and character references for consistency, everyone's making a film.
What is holding everyone back from making films is not the length (since you can link 5-10 second clips by using the last frame to reprompt the next 5-10 seconds) or the character reference, since you can easily create a LoRA + ControlNets and get very consistent characters in different poses. The big issue everyone has is creating MULTIPLE consistent characters in the same shot. Almost impossible to do without external editing/face swapping.
@gnoel5722 Well, consistent characters don't exist; I have seen the ControlNet scam for over two years. The biggest problem with AI artwork is that it hasn't been trained on the rules of art (anatomy and shape theory), just the final illustration.
Also, we need a mixture of experts for art generation: background, characters, environments, motion, color. And far more than 8B-12B models.
For films, there are tricks to generate one character per scene at a time and then merge them, but 5 seconds isn't enough.
I'm guessing we can make movies by around 2033, but hopefully I'm wrong. I will definitely be using it.
There are no 10-second to 1-minute clips in films. It's usually half a second to 2 seconds. The longest are around four seconds, extremely rarely longer than that.
@@raspas99 Birdman enters the chat.
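The last-frame chaining trick described in this thread can be sketched in a few lines. This is a minimal illustration that assumes nothing about any particular tool's API: `generate` and `init_frame` are hypothetical stand-ins for an image-to-video call, and a clip is modeled as a plain list of frames.

```python
def chain_clips(generate, first_prompt, follow_prompts):
    """Chain short generations: seed each clip with the last
    frame of the previous one to keep visual continuity."""
    clips = [generate(first_prompt, init_frame=None)]
    for prompt in follow_prompts:
        last_frame = clips[-1][-1]  # final frame of the previous clip
        clips.append(generate(prompt, init_frame=last_frame))
    return clips

# Toy stand-in: a "clip" is five numbered frames continuing from its seed.
def toy_generate(prompt, init_frame=None):
    start = init_frame if init_frame is not None else 0
    return [start + i for i in range(1, 6)]

clips = chain_clips(toy_generate, "princess runs", ["dragon follows", "ship sinks"])
```

With a real image-to-video model, `generate` would be the model call and `init_frame` its conditioning image; the chaining logic stays the same.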
It will be something when it can be run on my own PC. Excellent!
A programmer would be able to argue that no puppies is still a group of zero items.
If you're checking length, it's still 1-based; only the index is 0-based.
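The joke is easy to demonstrate; a quick Python sketch of the length-versus-index distinction:

```python
puppies = []                    # "a group of zero items"
print(len(puppies))             # length counts items: 0

pets = ["rex", "fido", "spot"]
print(len(pets))                # 3 items, so len() reports 3
print(pets[0])                  # but indexing starts at 0: "rex"
print(pets[len(pets) - 1])      # the last item lives at index len - 1: "spot"
```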
Excellent analysis. I couldn’t find the site test link in the description.
Are you looking for this? www.genmo.ai/play
20:57 Why does the generation by Minimax look like Marie Schrader from Breaking Bad?
It would be good to put labels next to the generations to show which output comes from which model.
29:15 Genmo and Minimax are just Po from Kung Fu Panda, especially the belt in the Minimax video. Copyright discussion incoming 😅
lol
Thanks for the video! I'd like to ask: what generator do you recommend for a full sequence of shots with a consistent character, without using image2video? Is there any model that can take a photo reference of the likeness (face and wardrobe) of my character and keep it consistent throughout several shots?
It's interesting to see open-source video generation tools!
Thank you for the video, I really enjoy your content & your funny comments :)
Regarding the boy at 25:26, I think he's either shocked or emotionless, not scared.
Anyway, I think Mochi needs more improvement in quality and facial expressions.
thanks for watching!
6:45 left puppy's reaction is realistic 😮
Check out Kaye AI's cat music video; hers is far more realistic-looking than these.
the amount of VRAM needed to run this thing locally must be insane
apparently it requires at least 4 H100 GPUs
Yeah the minimum is 64gb and
That’s just the minimum
Nope, I made a tree within an hour with AI. It learned and made it perfect very fast. I had an Nvidia 1650. Latest news: I always keep up with their technology, but the best AI is the one you save in File Explorer ;).
For Mochi, some people have made it work with only 12GB.
Would anyone know what type of specs one would want in a PC or laptop if starting to learn about this type of ai software/websites? Thanks
Dumb question: when you say it is open source, does it mean it will be available for us to use locally, for example, in the future?
If you have ComfyUI, you can run Mochi now.
wow im so hyped for this stage of humanity, Interdimensional cable HERE WE COME
YEAAAAHHHH
I want to know more about the plumbus
@@BearFulmer I need to know more about the poop eating society
@drcluck9573 what yal ain't about that snake jazz
17:47 This looks like Itadori and Fushiguro walking on the left side???
I was looking for this comment and yes I thought the same thing
The lawsuits are going to be interesting! Actors and movie studios are not going to take this lying down 😅
Waiting for a 4-bit quant, or at least 6-bit, or I might go ahead and do the quantization myself.
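For anyone curious what "doing the quantization yourself" involves at its simplest, here is a minimal sketch of symmetric integer quantization in plain Python. This is a toy, not how real 4-bit/6-bit quants of these models are produced (those use per-block scales and smarter rounding), but the core idea is the same: map floats to small integers with a shared scale.

```python
def quantize_symmetric(weights, bits=4):
    """Map floats to signed integers in [-qmax, qmax] using one shared scale."""
    qmax = 2 ** (bits - 1) - 1                   # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax  # one scale for the whole list
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats from the integers."""
    return [v * scale for v in q]

weights = [0.52, -1.0, 0.25, 0.08]
q, scale = quantize_symmetric(weights, bits=4)
approx = dequantize(q, scale)  # lossy: each value is off by at most ~scale/2
```

Fewer bits means a coarser grid, which is why a 6-bit quant usually loses less quality than a 4-bit one.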
Are there video generators that are good for *adding* new things to existing video?
Making a video of a princess running, and then uploading that to an editor *adding* a dragon following her in another prompt, might be an easier way to incrementally create the desired result.
7:50 That dog kneading the dough made me laugh 😂
13:38 A princess moonwalking in front of a disastrous monster is its own kind of art.
What's the best image-to-video AI? I've tested so many, and Runway works really well to make an image a little more real.
Thank you, great job. I like the first AI video generator; I think they will get better as they learn. Are there any AI video generators with sound?
Ummm.. when I checked my download for Mochi 1 (Genmo) from their website, it did 960p, 30fps, and 4,000kbps. So maybe it has been updated on their website, but not in the open-source model. They could have just adjusted the size, but it looks about 960p to me.
interesting. thanks for sharing
Yes and no. Runway AI Gen 3's video-to-video is really amazing; none of the other video AI companies have made that yet.
what a time to be alive
AI SEARCH = on the edge !
👍😀
thanks, can it be used with consumer grade gpus?
27:50 - Wow. The bottom left is a movie without any work. Game-changer.
I love you because you give to us the free ones. ^_^
Every day, the case for building a 4x3090 desktop gets more appealing.
I'm confused; how is an open-source product charging?
Genmo is 2 generations a day.
Yea, I just tried to use it. My first two generations failed, then it says you have run out of credits for the day. 😂
What would it take to fine-tune this model? Could you create a tutorial about fine-tuning video models?
Another great one! Thank you!!!
you're welcome!
Can't log in; it says to try later.
I really want to see someone make a 1-hour-30-minute movie.
Finally, Abacus AI 😂🙌
Can't wait for AI to learn to code perfectly.
Thanks for sharing!
The first Will Smith didn't throw the fork down. A second fork was always present in the spaghetti, and the fork he is actually holding just disappears.
You didn't mention Pyramid Flow, which was also recently released.
I look forward to the moment I will just have to enter a synopsis as a prompt and get a full length feature film in return.
How do I use Minimax? What is their website link?
Will either of these be optimized for Apple's new M4? What are the minimum specs to run them?
I'm really looking forward to AI VR video generation.
The only issue with these new open-source models is that as they get better, they also demand a lot more power to run locally. When you typically need many generations to get a decent shot, waiting an hour for each take is not an option unless you have a lot of time on your hands and are very patient. Affordable consumer AI hardware at this stage is still lagging behind.
Thanks for the information
Bro, I am not sure if you can see this. I wanted to say thanks for the all-in-one AI information. After watching your video, I will start using AI in my videos ❤. Thanks again.
I miss the good old days when DALLE was all the rage.
6:05 "and Kling.. everything is more fluid" Oh yeah, and casually not mentioning that in the Kling version it's not even a unicorn, which takes away a third of the prompt's essence.
Is there any quantized model available?
Thoughts on banadoco? thank you
The princess was moonwalking, so that must be a space dragon.
What is the Discord to use those models?
13:42 hailuo ai minimax✨✨🔥👍
MiniMax looks the best so far.
How does abacus compare to their competitor ninja chat?
amazing bear Kling
The boy looks like he started the fire
Interesting that the princess running away from the dragon doesn't work. There should be so many 'person running away from something' reference videos to learn from.
Dude, at this point I have almost zero doubts we live in a simulation. Which is a good thing. I think. Maybe.
What kind of NASA, I mean Elon Musk, supercomputer do I need at home?
It runs on the Casio scientific calculator
@@WongEthan-ge6pq😂😂😂
If this is what I'm thinking it is, I think 4 H100s.
Unfortunately, it says it requires 4 Nvidia H100 GPUs to run it. That's some pretty immense hardware requirements, especially for just 480p videos.
Man, I wonder how long it'll be before either consumer hardware or video generators become a thing that normal people can feasibly run.
@@CaidicusProductions Someone made an fp8 quant, I believe, and it can run within 20GB of VRAM.
Do you know what the minimum requirements are to use Allegro? To use Mochi 1 locally, it's completely crazy; the minimum is 80GB of GPU memory.
"Zombies in station" = Cash Jordan thumbnail !?!!
😅😂
Kling is blowing me away!
The swimming girl's legs looked broken in the intro.
Purz tested the smaller Mochi 1.
Can Genmo do image to video? Please answer.
Allegro on that nightmare fuel with the Will Smith spaghetti.
Could you show us how to download and run it locally, for all us noobs eager to learn?
I like many of these, but excluding the amazing Kling, they seem very good for cutesy animals. Not so good for multiple people in a near shot (one that's not just a person's head) or a distant view. I guess it's down to training.
With all these AI companies, what good is a 3 to 5-second video?
Okay, so I am getting confused. Every day there is a new AI update. Firstly, I am struggling with the clickbait right now and I am not up to date. So, who is currently the leading LLM overall, for problem solving and picture creation? And what is the leading free AI video generator?
these are free but with limited generations
LLM- GPT-4o and Claude Sonnet 3.5 (new)
Video- Minimax and Kling
AI image creation- Ideogram 2.0 and Flux 1.1
9:26 Kling just straight up made Will Smith Chinese. 😂
I get it.. for free stuff, it's amazing. (Remember, Midjourney was once free.) Yet I'd say it's a little bit bold to say ANY of these beats Runway's Gen-3, going by the fact that the videos showcased are always the best. Right from the beginning: the monk walking like the floor is electrocuting him, LOL, or that swimmer girl's "broken leg", the pouring of liquid while the thing on the table is depleting, and so many other things. Most are probably not up to date with Runway Gen-3.
The only thing one can say is that Runway's customer service is a CRIME.. besides, no image to video or video to video.. naaah..
And speaking of that, Midjourney is working on their video model now that can generate artistic stuff. THAT would be a game changer for artists.
me *clones the mochi1*
my pc: I'm tired boss
True, Genmo videos are too damn good, but they require so much compute power that a normal PC can't handle it.
Unfortunately not available in Pinokio.
What PC specs do I need?
chatgpt it
What's the VRAM requirement for Genmo?
When Black Forest Labs releases their video generator, it will kill all the other services.
What is the reason?
What is good? Allegro! Fuck yeaa
Wait, I meant Mochi, yeaaa!
Genmo is not free. They tell me I can only generate two videos a day, unfortunately.
Genmo Mochi 1 requires 4 H100 cards to run. They are £30,000 each, so if you have £120k sitting in your bank, go for it.
Not true; it will run on a 24GB Nvidia card using ComfyUI. Set tiling to 8; it takes about 20 minutes per run.
@@geoffphillips5293 This is taken directly from the Genmo Huggingface weights page
"Hardware Requirements
The model requires at least 4 H100 GPUs to run. We welcome contributions from the community to reduce this requirement."
So the next time you want to say something isn't true, try researching before calling people liars.
@@TPCDAZ Why so combative? Watch the last section of the video you're on.
@@ten_cents With respect, I quoted the actual company who made the model. Some random came on to call me a liar, yet I'm the combative one? If you don't like the fact it takes 4 H100 £30,000 cards, then take it up with Genmo, not me.
@@TPCDAZ He literally says it in the video: you can run it on a ComfyUI node with a ~20GB VRAM card.
Good comparison, but I see Minimax as still the best AI generator on the platform.
Is Mochi img2vid?
13:17 She's not walking; look closely, she's moonwalking backwards!