I re-rendered my AI-generated CG short film in Stable Diffusion
- Published: 2 Jul 2024
- I am using a plugin for Blender to re-render my AI-generated short movie in Stable Diffusion. Is this the future of 3D rendering?
Tools used in this video:
Blender: www.blender.org/
Dream Studio: beta.dreamstudio.ai/dream
AI Render: www.blendermarket.com/product...
EbSynth: ebsynth.com/
Cited sources:
nvlabs.github.io/GANcraft/
isl-org.github.io/Photorealis...
Buy me a coffee:
ko-fi.com/mickmumpitz
INSTAGRAM: mickmumpitz?hl=de
TIK TOK: tiktok.com/@mickmumpitz
Twitter: / mickmumpitz
The flickering effect really reminds me of old 1920's cartoons when people were just learning how to animate stuff. History will look back at today's AI tech with the same fascination.
This is going to keep evolving as AI progresses, until it can produce a full three-hour-length film.
You can use it now to make your own film. Take 2 days off to learn blender and get to work!
It is highly likely that this technology will evolve into a fully immersive VR experience in the future, which can be manipulated by your commands, gestures, or even thoughts. This will enable you to delve into a lucid waking dream that offers freedom to explore and guide according to your preferences.
Great video and timing, just stumbled upon your first videos and now I can't wait for the sequel!
I love how good you’re videos are. They are info packed and don’t really hide anything like a lot of others do. The editing is great. Keep doing what you’re doing ❤ much love ❤️
Thank you so much! ❤
*your
You're so underrated, your videos are incredible, educational, and fun to watch. Keep it up, loved this video and the last one, keep going!
The cooking scene displayed at least 30 different hats. The phone call scene has a new face each frame, the sequence is nightmarish but stopping the movie at any time is comedy gold. Very interesting project tho!
Thank you so much for your time! Great video! Informative, helpful, creative, and above all, coherent! Please keep up the great work
So jealous that such a perfectly precise phrase like "temporal cohesion" was able to just slide out of your mind so easily. Great video! Thanks!
Stable diffusion is gonna be real interesting when it can recognize frames for 2D animation
All these Image generators will be amazing when we can incorporate the prompting into our Mixed Reality Glasses! Let's all get to work !
A.I. Looney Tunes...!
good news, it now does!
I'd recommend looking up how to train a face in Stable Diffusion. It usually involves uploading about 20 different photos of that face or person in different situations, angles, lighting, etc. for best effect. The trick is looking through the Stable Diffusion output, finding the faces it got right, and then training on those generated faces as a set; it helps keep the faces consistent.
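As a rough illustration of that idea (not the workflow from the video): once a face concept has been trained, for example with the diffusers textual-inversion example script, the learned embedding can be reused so the face stays consistent across prompts. The file name, placeholder token, and prompt below are hypothetical.

```python
# Minimal sketch, assuming an embedding was already trained elsewhere.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the embedding produced by a textual-inversion run (hypothetical file/token).
pipe.load_textual_inversion("my_face.safetensors", token="<my-face>")

# The placeholder token now refers to the trained face in any prompt.
image = pipe("a portrait of <my-face> cooking in a rustic kitchen").images[0]
image.save("consistent_face.png")
```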
The advancement of AI has always amazed me, from those silly chatbot AIs of years ago to something as advanced as this. I'm very interested to see what you come up with next!
Keep going, mate! That's awesome!
Cool and thanks for sharing.
You should try taking the frames you like best and running them through EBSynth with the original video. I think that could potentially smooth out the jittering between frames you mentioned. :)
1000th like! Very cool series. Love this tech! It will only get better!
Looking forward to next year's video once you have even more tools!
I enjoyed this as well. It's only gonna get better with time. Keep going with this series.
Awesome! Would love to see a more in depth tutorial. Subscribed!
I loved it!
Keep posting on this update.
Best of luck and great success on YouTube 👍
Weirdly amazing!!
Subscribed, thanks, you make great videos!
Just when CG got beyond the creepy disturbing uncanny valley, AI brings it right back.
Thanks for sharing 😃🙏🥰
loved it!
Really nice videos 🔥 Weekly uploads would just be sooo nice 👍
I'm discovering so much stuff! Thanks! Subscribed
The way these flicker, it looks like different characters from the multiverse.
So cool !
That is awesome! Of course, now I want to know how to do it myself )
Excellent!
@Corridor found a method to make such things so much better. But I don't know if it only works with real people.
I just watched the videos and they actually use the same process. First the real video is transformed into the other style in Stable Diffusion, and then the flickering is removed with EbSynth. The main difference is that they put in another whole week of work to clean it up by hand and add additional effects.
My goal was to render directly out of blender with Stable Diffusion without any post-processing and since I like the aesthetics and find it interesting when you pause the video to look at all the different frames, I decided against the manual cleanup!
Yes, I enjoyed it. Once we get that temporal cohesion and consistency, we'll be making movies from a simple base and rendering them in any style.
We enjoy this video❤😁
You should try running a shot from the Stable Diffusion output through EbSynth to really clean it up
Haha kept watching and that is exactly what you did lol
Wow it's the first time I see something like this!
Würde definitiv mehr Videos schauen!
It’s like a trippy fever dream.
Try the standalone version of SD and use negative prompts to get stunning results.
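A rough sketch of that suggestion, assuming the standalone diffusers pipeline rather than the Blender add-on; the prompt text is illustrative only, not from the video.

```python
# Negative prompts steer generation away from unwanted artifacts.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="cinematic still of an old woman on the phone, film grain, 35mm",
    negative_prompt="blurry, deformed face, extra fingers, low quality",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("negative_prompt_test.png")
```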
Awesome video, man!! I think you can dive deeper into rendering now with the new ControlNet models, that's some game-changing stuff!!!
The future is bright for those who see its potential.
Might want to look into ControlNet for Stable Diffusion. It makes a lot of new things possible.
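A minimal ControlNet sketch, assuming the diffusers implementation and a Canny edge map extracted from a rendered frame; the input file name and prompt are made up. Conditioning on edges from the original render helps lock the composition of each generated frame to the Blender output.

```python
# Illustrative only: edge-conditioned generation from a rendered frame.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

frame = cv2.imread("blender_frame_0001.png")          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "stop-motion style, old woman in a rustic kitchen",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet_frame_0001.png")
```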
I watched the previous version before this and I'm very impressed
I like that "photorealism" today is based on cheap digital cameras with poor white balance. In photorealism we don't try to make anything truly 'realistic'; rather, we reproduce how we see the world through a camera, with lens flares and the other artifacts that come from looking through several glass lenses. The "realism" that games often go for is frequently more realistic than "photorealism". That's why I think AI can help with creativity but won't replace designers and good artists.
Great video btw, very creative, interesting and informative.
You haven't been playing with it enough, then... Stable Diffusion 2 is a limited model set. What this tech can do with video is mind-blowing; it's just a case of computing power now. Someone with 4 or 5 state-of-the-art gaming PCs can make the same movie Hollywood could just 16 months ago. The world has changed. I would recommend you move fast into VR, AR, and mixed reality!
@@MikeClarkeARVR ... ?
@ Not sure what the thread was! lol .... wired on 4 coffees!
I love the video. It wasn’t perfect but you can see the progress. Sometimes the progress is more important to show how AI has moved forward. Hope you come up with more awesome vids in creating animation with AI and your skills too. 🎉
What about taking selected Stable Diffusion images and then putting them into the "convert it into a 3D model" pipeline again? ...and then just rigging it again. I know this is a lot of work, but assuming we could actually automate the whole process, this should work out. I'd be interested in how much the 3D models of the female ogre and the old woman would improve. Maybe just a demo of this would be enough... What do you think?
Amazing...
The look of the woman reminds me of the scramble suit in "A Scanner Darkly" :)
And wow, I just stumbled onto your channel! Great work, my friend :)
I think this was a lot more comedic
The constantly shifting visuals should work well with a more psychedelic style. If you've ever eaten certain mushrooms it should look pretty familiar.
I kind of like the style of this short too. It's more like a hand-drawn style.
This feels like a ketamine trip
What other single-image-to-3D program do you use?
Bro, I didn't know you didn't have many subs. That's so sad, you deserve more. Don't worry, at this rate of amazing videos you'll get more subs 🏆🏆
Either way, I crack up as soon as she says, "Cambria".
Try this again with the new corridor method!
Having the AI generation constrained to a single character, like a monster, could be pretty striking.
Damn, this animation already feels nostalgic, lol.
If you squint your eyes, it looks as good as the original.
That technique would be great for some form of trippy "Fear and Loathing in Las Vegas" type animation or straight up nightmare fuel lol Reminds me of claymation.
One word: Controlnet
Lmao, this is almost like watching a movie through some sort of multiverse glasses. Kind of feels like an especially trippy scene from 'Everything Everywhere All at Once'.
What if you just put the Stable Diffusion rendered version on top of the original one and lower the opacity or change the blend mode?
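A quick sketch of that blend idea with Pillow, using hypothetical frame file names; a compositor's or video editor's opacity and blend-mode controls would do the same thing. Overlaying the Stable Diffusion render on the original frame at reduced opacity keeps the original's temporal stability while borrowing some of the new look.

```python
# Illustrative only: simple alpha blend of the original and stylized frames.
from PIL import Image

original = Image.open("blender_frame_0001.png").convert("RGB")
stylized = Image.open("sd_frame_0001.png").convert("RGB").resize(original.size)

# alpha = 0.0 keeps only the original, 1.0 keeps only the SD render.
blended = Image.blend(original, stylized, alpha=0.4)
blended.save("blended_frame_0001.png")
```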
Absolutely love what you do, but I did get a little motion sick while watching this. I know that in Deforum you can get coherent results for a specified number of frames. Also, you could try lowering the fps.
Ebsynth will solve your problem!
I wonder why you didn't do a pass where you rendered only the environment and then another where you only rendered the characters. Would that not have helped with cohesion?
Why not use Ebsynth for a more stable picture?
the next golden globes award
Hey, that's pretty cool. How did you manage to change the format size of that Dream add-on in Blender? Normally you're limited to those 512x512 pixels; that's also my problem right now :D
Technology is allowing everybody to have a go at creating their own stuff, rather than just an elite few and "the industry". Why anybody would have a problem with that is beyond me.
Wonderful video. Have you heard about machinima? It was a huge thing back in 2000-2010: using video game engines to generate animation. The approach is mostly obsolete now, but these AI tools remind me of that era. How long until we can produce a complete feature film on our own computers?
Why don't you use style transfer? Try EbSynth and merge your Stable Diffusion reworks onto your video input.
I find the AI render distracting, as it changes almost every frame. But I'm sure it will make significant progress soon! Thank you for sharing your videos! 🙏
Another option is 2D animation with these techniques; I don't know if Blender can do 2D rigging like DragonBones or Spine.
You can check out Gen-1, new research from Runway ML, same guys that collaborated on Stable Diffusion.
It's a new AI that can stylize videos given an input video and a style image. Something similar to EbSynth, but more automated and with more versatility. Combine it with the newer Stable Diffusion models, DreamTextures for Blender and we really are cooking.
Looks like a movie made in 4D hehe
I felt like I was having a fever dream 🤣
For now, you need to know what you are doing to use AI effectively.
But I wonder if we will have jobs for much longer as animators and 3D artists...
I hope I'm wrong, but who knows...
It looked much better before the re-render.
5:09 Oh, it's "breathtaking" alright, but not for a good reason. With the way those images flickered and changed, I couldn't decide which hurt most: my eyes or my brain. It was an interesting experiment, though. I expect the AI programmers will get things sorted out eventually.
When I saw that AI film at the end I wanted to die.
It won't be long until games that never got a movie adaptation get a movie that's directed by not 1 director, but the entire community.
There are guaranteed to be silly or complete flops, as well as some potentially legendary ones.
Did I just watch a movie entirely written, modeled, animated, composed, and rendered by AI?
Yeah, temporal stability is the issue right now.
who needs acid, when you got ai ??
This video needs to be shown to the people who don't want this technology. I use SD myself and it's just a new technology; it may only barely count as "AI", but it's a huge step forward in rendering.
Fuck, that's cool
Have the AI generate the music as well!
that stable diffusion was not stable for my eyes
no chill
Why does bro look and sound like a sober Jonas Tyroller
It's almost like the viewer has become unstuck from reality and is flickering rapidly between different quantum versions of the world. 🙂
Whoever can solve the temporal cohesion problem will make a lot of money! 💰💰💰
This is the most horrific-looking thing I've ever seen. This short film needs real artists BAD. Just because you can make something that looks like something, doesn't mean it's not disturbing af.
It has something. Unfortunately, it can't be controlled.
i do not see this future xd
Was the story also AI-generated?
AI is the future, whether you like it or not
It is not the future. It is TODAY. Start studying MLops!
Thanks for the video. I'm trying to update my knowledge of the AI scene to start creating, and your video really inspired me. I'd love to see more ideas. I'm a creative director; I used to write scripts and once had a team for indie VR games, then I moved to CGI shorts for commercials at a company that was kind of awful. So I quit the job and started a new filmmaking career, and now I'm looking at AI with real interest for this. Would love to see more and/or talk with you. This would have been amazing in Unity or Unreal Engine, or animated with motion capture 💥💣
Get into new media fast, especially VR and AR. Now, with these image-generating platforms, people will start to want to watch their OWN prompted stories, not stuff written by other directors. Why watch a Hollywood film of someone else's drama when I can ask the AI to generate a story of my ancestors' life in 1765? (for example!)
The other looked better, purely because of how much this one jumped around. It actually made me feel a bit nauseous.
I would say the og was better