HUGE thanks to HubSpot for sponsoring today's video! Check out Breeze AI Copilot here: clickhubspot.com/duab
Huzzah
👋 hi
good job matt
Hi, for the last 3 years my biggest test has been trying to get ChatGPT to generate a Python script that builds a 3D model in Blender... it keeps failing... otherwise it's fine at programming...
Do you think it could make your sims look real
Imagine taking a 3-year break from the world, living in a cabin, and then seeing how far AI has come. In just a single week so much happens; now imagine 3-10 years....... like what even
This is actually really cool. Can't believe how far AI has come, especially in the last few years. I remember when DALL-E 2 was the coolest thing ever and now it's just unimpressive.
It was cool, but it didn't do much.
It's crazy how fast things feel normal or outdated
Most of the corporate released A.I. stuff has generally been limited by censorship. They want everyone to play nicely, only allowing you to imagine sunshine and lollipops.
Would it be cool if it showed the entire process...as it made the image?
Odd thought
Or a robot making art.
Seems yucky.
Lol
DALL-E 3 is unimpressive now
Calling Black Myth Wukong a "weird furry man" is insane LOL
I think he never plays video games haha
@@abadidibadou5476 He does, listen at 25:26
Omg, the 24th actually lines up with Altman asking for a couple more weeks of patience. Could this finally be it, please lord
It's already here. Go see for yourself.
@@Saif-j5o It isn't, I just checked; it's the regular voice mode and I'm a Plus subscriber. But it's also 1am, so I'm going to sleep
I read it "Antman" 😂
Matt, It's incredible how quickly AI videos have advanced. In such a short time, we've gone from being amazed by the technology to now critiquing details like skin tone and other fine points. The progress is truly remarkable.
Am I the only one who read the title, assumed it was OpenAI's voice mode that got a "release date", and thought that meant a date when it's guaranteed for all Plus users?
Yes, at the beginning of the video it shows 9/24 as the next date…
Yes.
So do any of you have any voice capabilities with chatGPT?
I tried ChatGPT's o1-preview model with university-level math, and it is just better than me. It makes proper substitutions to solve things I didn't think of. It is insanely powerful. Truly astonishing. AGI in 3 years? Much sooner imho.
That's crazy, but I still think within the next 5 years is more likely; for something to be declared AGI it has to do a lot more than math
@@JAK85. Right now chemistry and coding are the most difficult for AI.
@@JAK85.
5 years sounds like you're thinking linearly. We humans have great difficulty thinking exponentially and that is what is required here. If you've been following the development of AI models for a few years, you should be aware of the acceleration. Not just the acceleration but the accelerating acceleration. - That's what exponential is.
I know we're not close to true AGI yet, that's why I think it'll take 6 months to 2 years.... thinking exponentially.
Cough cough. It can't even do basic geometry! See the video on my channel where I test this. I would not trust o1 for anything mathematically based.
@@JAK85. And it can't even do that effectively. AGI possibly by 2028/9, but it will require a breakthrough beyond large language models.
The hope is that videos like these put pressure on OpenAI to actually deliver instead of just advertising everything for years and never releasing.
Me too, I don’t think it does though.
NotebookLM really surprised me. It can generate podcast conversations from any set of links, PDFs or other text content, and it's super natural.
I've been using it for research, but you can practically use this in any way. It's mostly just limited by your creativity. It could use some additional features to control the output of the generation though.
@@14supersonic I think they kept it very narrow because it's both a prototype and a proof of concept. They don't want everyone using it yet, and probably don't know if it has a viable business model either. So keeping it English-only and casual in style gives them a good idea of how people use it. Do people load entire books? Dozens of small txt files? Websites with images that don't work? Etc.
Sorry to be that guy, but it’s V-S Code (not versus) and it’s short for Visual Studio Code
I was looking for this comment
Just a friendly PSA to stop and smell the roses because this stuff is moving so fast it's amazing!
AI, and tech in general, grows exponentially, not linearly. Prepare for everything to shift rapidly. If you thought the jump from Y2K to 2020 was crazy, you'll be stunned by where we'll be by 2040.
some examples please to support your statement
@@Icemanr85 have you not lived? Are you 10?
@@Tone_Of_Dials is your mom a man
I'll probably be dead. 😢
@@Icemanr85 What statement?
“Weird furry guy” 😅
"This weird furry guy" lol that's Wukong Matt!
Good vid though lol!
who?
@@HUEHUEUHEPony Sun Wukong
@@Ricolaaaaaaaaaaaaaaaaa who?
@@apache937 look it up
Surely advanced voice mode should be part of GPT-4.5 at this point? Would be amazing if we get 4.5 in 4 days' time.
Why?
@@DJ-dh3oe Why put all the effort into adding Advanced Voice mode into GPT-4o when GPT-4.5 is ready to ship? I think the only thing truly slowing them down at the moment is politics.
@@BrianMosleyUK Advanced voice mode is natively a feature of 4o, they've just been holding back the release of it to fix some problems (scaling, weird behaviour etc.)
Advanced voice probably won't be possible in 4.5 anyway, since it will be larger and slower, so not capable of real-time speech.
@@DJ-dh3oe We got something much better, the o1 model
You are now a coder, Matt! If you ask ChatGPT to explain the code, you could learn from a PhD-level teacher.
I want to start using more AI to help my channel, and some of it requires coding, so all the examples you shared are very inspiring! Thank you
Why did the Moshi demo appear to be one-sided? I heard the green audio but not the purple audio.
I think it's so people don't confuse the AI voice with the real voice.
cut
Like how the internet was one of the defining inventions of the last century, AI is a defining invention of the 21st century
And to think the rate at which all these AIs are improving is only going to accelerate exponentially!
The question I always ask myself is: Why are Kling's videos always in slow motion?
Have you heard of a multi-prompt 3d environment generator called WonderWorld? I'd like to see you cover it.
The paper is called "WonderWorld: Interactive 3D Scene Generation from a Single Image"
Sun Wukong really settled down when he met Iron Man.
One thing to note on the AIME graphs: the X axis (train-time compute on one graph, test-time compute on the other) is log scale, not linear. So there are actually diminishing returns that aren't being represented.
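The log-scale point can be sketched with hypothetical numbers (the slope and intercept below are made up, purely to show the shape of the curve):

```python
import math

def accuracy(compute, slope=20.0, intercept=10.0):
    """Hypothetical accuracy that plots as a straight line on a log-x axis."""
    return intercept + slope * math.log10(compute)

for compute in [1, 10, 100, 1000]:
    print(compute, accuracy(compute))

# Each +20 accuracy points costs 10x the compute: a "straight line" on a
# log axis is diminishing returns once you replot compute linearly.
```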
“Versus” Code Studio cracked me up 🤭
Incredible stuff btw, I can't believe we're getting used to AI building software or a physics simulator in one go.
Also the video-to-video is groundbreaking; we can expect an explosion of alternative/remastered video game footage on YouTube.
Everything else is crazy stuff too. AI is at full throttle
Insane progress is happening!
Thank you for starting off with the news we are all most interested in!
Almost interested in.
Who else thought he was about to say "Im a massive fan of SHENMUE" lol
Me
I’m just waiting for the day that an AI researcher is using an AI and the AI suggests something and the researcher thinks “huh, I never thought about that” and then implements it to improve the AI.
It’s all downhill/uphill from there.
Moshi is interesting for being open source. Nvidia Mistral NeMo r2.1.0, even on my single RTX 4090, is incredibly capable at all things voice and even NLP. It does better than anything I've experienced online, closed or open source. It's closed source too, so potentially you match these up (on a rig far more capable than my own), train the models together, and you have a much higher quality product.
Oh yeah, NeMo is very scalable and fantastic at training.
So when can we see the AI eat and transform every ebook and audiobook and movie in any format in one fell swoop?
in the coming weeks, just buy nvidia stock, and pay subscriptions
That script was sooooo GPT!
When you say “buckle up” you know it was AI-generated
Maybe 😄 The big giveaway for OpenAI AIs is the use of the word "intriguing". For some reason that word is used a lot, often combined with "blend": "intriguing blend of...".
I've been saying it for over a year now - OpenAI is WAAAAAY further ahead than everyone realises. They just like to wait for everyone else to almost catch up before releasing the next step change.
Nonsense. If they are that far ahead they'd not be waiting for competitors to catch up. That is not how companies work.
Wow! I learned a lot from this video. It covered products I haven't seen in other videos. Good job!
Great roundup, Matty. Perhaps it's time to use some AI to generate better options for decorating your background. I mean, there's nothing wrong with the living-at-home-with-mom vibe. But it seems like a good AI-related project. Whether it's an AI-generated background that evolves with AI improvements, or one designed by AI and physically built, it seems like it would be a good project
That’s honestly a really good idea. Appreciate the comment!
One LLM reaching AGI and ASI would be nice, but simply allowing AI to be modular, such as using a calculator, makes more sense.
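A minimal sketch of that modular idea: route arithmetic to a deterministic calculator tool instead of trusting the model's internal math (the routing convention here is invented for illustration):

```python
import ast
import operator

# Safe evaluator for basic arithmetic, standing in for a "calculator tool"
# an LLM could call instead of doing the math itself.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

# A model that emits something like "CALC: 37 * 491" gets the exact
# answer routed back instead of a hallucinated one.
print(calculator("37 * 491"))  # 18167
```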
I can't wait for the YouTube game movies / TV series being run through this, like Mass Effect or Warhammer 40k etc.
Of course the baby looks a little crispy. Where do you think the saying "bun in the oven" came from?
I was baffled months ago when I heard some people saying that AI would be slowing down and other things.
The first AGI just has to be called "Strawberry" at this point. I know that's o1's codename, but I mean officially
In the EU, or at least here in Germany, we still don't have the improved voice mode due to some stupid regulations. :(
Wow, this is insane. It will soon become too hard to tell what is real online, or over the phone, etc.
9:37 one thing I've noticed with Kling and other generators is that they tend to flex objects that are in motion that are supposed to be rigid. You can see it here - the gun moves up and down from the running and there is a slight bending of the weapon. It's subtle, but once seen you can't unsee it 😄
As usual top content! You're awesome, amazing workflow, thx!
ElevenLabs has amazing TTS AI voices.
I just want better NPCs in video games. Then a game where you have to play as a cyberpunk noir detective 😂
+💯 on inference scaling
Meanwhile, Pi AI has THE best human voice out of all AI. By far.
And it has been that way for over a year.
It's just a shame that they are no longer updating it and are soon going to charge for using it. Pi is the only AI I've used where I've had conversations and forgotten I was talking to a machine.
i really like the voice from what i heard but idk lazy to even google the site name
Can you all imagine where AI will be ten years from now? Kling looks awesome and I loved that Gen-3 video recreation. Will we be making independent animated movies in our bedrooms? Speaking of which, I actually made a movie in my bedroom in 2013 and it just got put on Tubi! (which I am pretty excited about). By the way, it is also about artificial intelligence 😀 If you like animation and a good scifi story, it would be great if you had a little time to watch it. It's called The Mind Machine. If you do decide to watch it, let me know!
Hey man! It's awesome when you cover anything open source! Any way you can cover Bittensor or anything from its subnets? Unless it's your thing, ignore the digital currency tied to it for now. Bittensor is decentralized, open-source AI on the blockchain. All subnets talk to each other, so when there's a breakthrough in one, there's a breakthrough in all. It's what humanity needs to keep up with centralized, closed-source AI like OpenAI, Microsoft, etc.
So, most people don't have any voice capabilities with chatGPT?
What if you change the date on your PC to the 25th heh
Will advanced voice mode be available for free users!!?
Genuine question for you Matt: why do you think Kling or MiniMax is better than Sora? From the demo videos we have seen, Sora really is the superior model; the downsides are not its performance but how censored and inaccessible it is
Bro just called the monkey king "this weird furry man"... 😮😂
Why do I feel like the intro was an AI-generated script?
2:45 one of the audio tracks wasn't playing for us. Was that expected?
The V2V is impressive.
If you need to say it’s not slowing down… it’s slowing down
Don’t need to, just really want to
Love this! You have a great delivery!!!
Matt, are you reading off an AI-generated script? Your dialogue and delivery don't sound how they normally do when you speak in your videos (which is far more natural and human-like).
If I'm right, please get the model to give you far more human-sounding scripts. Your genuineness makes your personality shine through; I've read, created, curated and corrected enough AI-generated material that things don't sound right...
"so buckle up folks, because we're about to take a peek past the curtain into the future, how about we start this off right and talk about the big guy in the room OpenAI" just screams AI generated script.
I love your material, you are one of the people I actively keep up with for AI news / content, keep it genuine to your exceptional personality mate.
I can send you unedited video files if you’d like. I never use scripts (except for sponsored segments) I’ve been doing a lot of recording and scrapping more videos lately. It’s all off the dome!
What a time to be ALIVE 🔥🤟🔥
Hold on to your papers fellow scholars.
@@cbnewham5633 LOL 😆😆😆
Great summary, gonna check out the paid Kling option, though maybe Runway3 is better?
My general use of these generators for image to video (I tend not to use text to video) is that Kling is the best and Runway a close second. I recently tested real physics for a handful of generators and Kling was the best for real physics - although there is still a long long way to go. The first physics video is on my channel. I'll be posting up some others soon once I get some spare time.
Nice weekly recap
"Versus" code studio 😂
Imaggen is what I've been imagining in my dreams. It's finally on the way!
20 SEPT 2024
Thanks for your updates
the chess game was insane. Soon our only limitations will be our own imaginations.........
You know it’s crazy. They already found the AI can be more creative than humans so you can already scratch that out too.
bro called it versus studio but correctly set up a venv. RIP our jobs
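For anyone curious, the venv setup being praised is just the standard Python tooling (macOS/Linux commands; on Windows the activate script lives at .venv\Scripts\activate):

```shell
# Create an isolated environment so the demo's dependencies
# don't touch the system Python.
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; print(sys.prefix)'  # now points inside .venv
```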
Omnigen wow ❤❤❤❤❤❤❤ all of this is epic ❤
VS doesn't stand for Versus, but Visual Studio ;)
Great video. Inference scaling baby! 🥂
Great video. Thanks for your hard work and keep it up.
Has anyone tried out the Cerebras voice by Cerebras Inference?
Every AI video model I've used hasn't been very good.
GuardRails AI should be used for Fact Checking Politicians.
Wow. Amazing. Astounding. But too little, too late. I'm now using Claude and I don't think the release of AV is going to make me jump back to OpenAI. I tend to dislike companies that make announcements and then don't deliver. It's not far off lying, and that may be OK if your users aren't paying you money, but if they are, then that's a red line for me. I'm not paying you to lie to me and tease me with vapourware; I'm paying to use your product and be given realistic release dates for new features. Who knows, OpenAI may even release the mythical Sora, but given how good other AI video generators are now, it would have to be an order of magnitude better than the demo reels we have seen.
I'm not saying I would never go back to subscribing to OpenAI, but every time they pull a stunt and don't deliver, my view of them becomes dimmer and dimmer.
Does anyone remember that startup that aims to make ASIC chips just for inference? I remember they claimed something like a 1000x speedup, like ASIC miners did for Bitcoin mining. That would mean anyone could run a 10T-parameter or bigger model on a local machine with minimal energy cost, especially since chain-of-thought reasoning is so inference-heavy and those chips are tailored exactly for that task. Damn, what company was that?
VS Code "versus Code" LOL!
I have an idea for developers: create a model that uses an LLM to interact with a video model and an image model in a specific way based on your prompt, with the ability to post a whole book, have it create a film script itself, and then use that script in parts to prompt the video and image models for the correct visuals based on your description. Maybe possible with o1?
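That pipeline can be sketched in a few lines; `call_llm` and `call_video_model` are hypothetical stand-ins for whatever LLM and video-model APIs you would actually wire in:

```python
def call_llm(prompt: str) -> str:
    # Stand-in: a real version would call an LLM API to write the script.
    return "\n".join(f"Scene {i}: ..." for i in range(1, 4))

def call_video_model(scene_prompt: str) -> str:
    # Stand-in: a real version would call a video-generation API.
    return f"clip_for({scene_prompt!r})"

def book_to_clips(book_text: str) -> list:
    # 1) The LLM turns the whole book into a scene-by-scene script.
    script = call_llm(f"Turn this book into a scene-by-scene film script:\n{book_text}")
    # 2) Each scene line becomes its own prompt for the video model.
    scenes = [line for line in script.splitlines() if line.strip()]
    return [call_video_model(scene) for scene in scenes]

clips = book_to_clips("Once upon a time...")
print(len(clips))  # one clip per scene
```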
You can do it now; try testing with o1, you can test it for free. I don't pay, I get everything free. Anyway, thanks for that.
@@contentfreeGPT5-py6uv Oh wow, I had no idea GPT can create images for free now. Since when? haha thanks
oh okay !
23:40 I want there to be a colorful accuracy bar on all videos, so as people say things that are true or false, it changes color or moves up and down a slider, bringing up links to credible sources. Would be amazing.
Why do we only hear one side of the conversation? WTH? 😅😅😅
22:30
Lol, calling Sun Wukong "a weird furry guy". 😄
It's like calling Merlin "a weird Gandalf guy".
Hey, at this point video diffusion models are more coherent than my own inner visualizations. Just as weird though. 😉
Actually, I was recently watching my closed-eye visuals and noticed it's literally exactly the same as a diffusion model. Strange hands, warping features, odd physics... literally the same. Only continuous, and mostly unprompted.
That is pretty cool 😎 voice model
I put in a video request to Kling; it took so long I forgot about it until this video (2 days later), and the video is still not ready. WTF.
How many AI YouTubers named Matthew are there? Because that sounds like some kind of conspiracy, maaaaan!
Can you stop giving Twitter links? A lot of us don't use Twitter.
*X
versus code studio?
Matt is really a unique individual
@@missoats8731 he is based, visual studio? nah, fuck that, its versus
@@HUEHUEUHEPony this guy gets it
I mean, it's a decent nickname at least 😅🤣🤣
Hey, you look like a grown-up in the thumbnail 😛 Thanks for the updates
Wow! Good stuff thanks!🤩
Thought voice was going to be out ages ago
If it really gets rolled out on the 24th I'll be content, but I don't think it will
@@esimpson2751 Even if it does, I don't get the excitement. It's not that amazing, is it? (Or I should say the novelty's worn off now that the announcement was seen ages ago. When it gets released I won't feel the same buzz as when it was announced)
@@SamWilkinsonn heroine drip
@@tuckerbugeater wut
So did we all 🙄
Lol...Nice Aunties is like saying Nice Karens
"versus code" lmao
The cat jump isn't realistic at all 😅
This strange furry man is Sun Wukong, the Monkey King
Kling is incredibly slow right now; I have been waiting days for it to complete :(