I re-rendered my AI-generated CG short film in Stable Diffusion

  • Published: 2 Jul 2024
  • I am using a plugin for Blender to re-render my AI-generated short movie in Stable Diffusion. Is this the future of 3D rendering?
    Tools used in this video:
    Blender: www.blender.org/
    Dream Studio: beta.dreamstudio.ai/dream
    AI Render: www.blendermarket.com/product...
    EbSynth: ebsynth.com/
    Cited sources:
    nvlabs.github.io/GANcraft/
    isl-org.github.io/Photorealis...
    Buy me a coffee:
    ko-fi.com/mickmumpitz
    INSTAGRAM: mickmumpitz?hl=de
    TIK TOK: tiktok.com/@mickmumpitz
    Twitter: / mickmumpitz

Comments • 133

  • @kinjogoldbar
    @kinjogoldbar A year ago +40

    The flickering effect really reminds me of old 1920s cartoons, when people were just learning how to animate stuff. History will look back at today's AI tech with the same fascination.

  • @batrex_
    @batrex_ A year ago +101

    This is going to slowly evolve as AI progresses until it becomes a full 3-hour film

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago +6

      You can use it now to make your own film. Take 2 days off to learn Blender and get to work!

    • @Nightspyz1
      @Nightspyz1 A year ago

      It is highly likely that this technology will evolve into a fully immersive VR experience in the future, which can be manipulated by your commands, gestures, or even thoughts. This will enable you to delve into a lucid waking dream that offers freedom to explore and guide according to your preferences.

  • @HowNiftyYT
    @HowNiftyYT A year ago +3

    Great video and timing, just stumbled upon your first videos and now I can't wait for the sequel!

  • @neon_light5608
    @neon_light5608 A year ago +23

    I love how good your videos are. They are info-packed and don't really hide anything like a lot of others do. The editing is great. Keep doing what you're doing ❤ much love ❤️

  • @cyberpxl
    @cyberpxl A year ago +3

    You're so underrated; your videos are incredible, educational, and fun to watch. Keep it up, loved this video and the last one, keep going!

  • @phileengs
    @phileengs A year ago +11

    The cooking scene displayed at least 30 different hats. The phone call scene has a new face each frame; the sequence is nightmarish, but pausing the movie at any point is comedy gold. Very interesting project though!

  • @MikeClarkeARVR
    @MikeClarkeARVR A year ago +1

    Thank you so much for your time! Great video! Informative, helpful, creative, and above all, coherent! Please keep up the great work

  • @ChristopherCopeland
    @ChristopherCopeland A year ago

    So jealous that such a perfectly precise phrase like "temporal cohesion" was able to just slide out of your mind so easily. Great video! Thanks!

  • @avatar2xl
    @avatar2xl A year ago +72

    Stable Diffusion is gonna be real interesting when it can recognize frames for 2D animation

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago +4

      All these image generators will be amazing when we can incorporate the prompting into our mixed reality glasses! Let's all get to work!

    • @carlosalbertolealrodriguez5529
      @carlosalbertolealrodriguez5529 A year ago

      A.I. Looney Tunes...!

    • @kokamoo
      @kokamoo A year ago +2

      Good news, it now does!

  • @oldsoul3539
    @oldsoul3539 A year ago +5

    I'd recommend looking up how to train a face in Stable Diffusion. It usually involves uploading about 20 different photos of that face or person in different situations, angles, lighting etc. for best effect. The trick is looking through the Stable Diffusion output and finding the faces it got right, then training those generated faces as a set; it helps keep the faces consistent.
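
    [Editor's note] The "find the faces it got right, retrain on those" loop described above can be sketched in a few lines of Python. This is only an illustrative sketch, not an actual training pipeline: `frame_distance` and `pick_consistent_frames` are hypothetical helpers, and mean absolute pixel difference stands in for a real face-embedding distance.

    ```python
    from PIL import Image, ImageChops, ImageStat

    def frame_distance(a: Image.Image, b: Image.Image) -> float:
        # Mean absolute pixel difference in grayscale; a crude stand-in
        # for a proper face-embedding distance.
        diff = ImageChops.difference(a.convert("L"), b.convert("L").resize(a.size))
        return ImageStat.Stat(diff).mean[0]

    def pick_consistent_frames(reference: Image.Image, candidates, keep=20):
        # Keep the ~20 generations closest to the reference face, to be
        # used as the training set for the next fine-tuning round.
        ranked = sorted(candidates, key=lambda img: frame_distance(reference, img))
        return ranked[:keep]
    ```

    The kept frames would then be fed back as a training set, so the model converges on one consistent face.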

  • @That_Nightmare
    @That_Nightmare A year ago +8

    The advancement of AI has always amazed me, from the silly chatbot AIs of years ago to today's far more advanced systems. I'm very interested to see what you come up with next!

  • @stebunov
    @stebunov A year ago

    Keep going, mate! That's awesome!

  • @saemranian
    @saemranian A year ago

    Cool, and thanks for sharing.

  • @codymitchell4114
    @codymitchell4114 A year ago +7

    You should try taking the frames you like best and running them through EbSynth with the original video. I think that could potentially smooth out the jittering between frames you mentioned. :)

  • @TracksWithDax
    @TracksWithDax A year ago

    1000th like! Very cool series. Love this tech! It will only get better!

  • @nathanbanks2354
    @nathanbanks2354 A year ago +3

    Looking forward to next year's video once you have even more tools!

  • @USBEN.
    @USBEN. A year ago

    I enjoyed this as well. It's only gonna get better with time. Keep going with this series.

  • @phreakii
    @phreakii A year ago

    Awesome! Would love to see a more in-depth tutorial. Subscribed!

  • @foxy2348
    @foxy2348 A year ago

    I loved it!

  • @jayantpaliwal
    @jayantpaliwal A year ago

    Keep posting updates on this.
    Best of luck for great success on YouTube 👍

  • @antonjosefsson2728
    @antonjosefsson2728 A year ago +1

    Weirdly amazing!!

  • @himars_equalizer5125
    @himars_equalizer5125 A year ago

    Subscribed, thanks, you make great videos!

  • @st0n3p0ny
    @st0n3p0ny A year ago +2

    Just when CG got beyond the creepy, disturbing uncanny valley, AI brings it right back.

  • @JuanGabrielOyolaCardona
    @JuanGabrielOyolaCardona A year ago

    Thanks for sharing 😃🙏🥰

  • @wo1fs_FX
    @wo1fs_FX A year ago

    loved it!

  • @footslife
    @footslife A year ago

    Really nice videos 🔥 weekly uploads would just be sooo nice 👍

  • @SaturnineFR
    @SaturnineFR A year ago

    I'm discovering so much stuff! Thanks! Subscribed

  • @Lets-Go-Family
    @Lets-Go-Family A year ago

    The way these flicker, it looks like different characters from the multiverse.

  • @Fnalex
    @Fnalex A year ago

    So cool!

  • @ArtDir
    @ArtDir A year ago

    That is awesome! Of course, now I want to know how to do it myself :)

  • @stefanguiton
    @stefanguiton A year ago

    Excellent!

  • @GrorfnOnTheLoose550
    @GrorfnOnTheLoose550 A year ago +8

    @Corridor found a method to make such things so much better. But I don't know if it only works with real people.

    • @mickmumpitz
      @mickmumpitz  A year ago +12

      I just watched the videos, and they actually use the same process: first the real video is transformed into the other style in Stable Diffusion, and then the flickering is removed with EbSynth. The main difference is that they put in another whole week of work to clean it up by hand and add additional effects.
      My goal was to render directly out of Blender with Stable Diffusion without any post-processing, and since I like the aesthetics and find it interesting to pause the video and look at all the different frames, I decided against the manual cleanup!

  • @themightyflog
    @themightyflog A year ago +1

    Yes, I enjoyed it. Once we get that temporal cohesion and consistency, we'll be making movies from a simple base and doing it in any style

  • @Prabh_cc
    @Prabh_cc A year ago

    We enjoyed this video❤😁

  • @DommoDommo
    @DommoDommo A year ago +1

    You should try to use a shot from the Stable Diffusion output in EbSynth to really clean it up

    • @DommoDommo
      @DommoDommo A year ago +1

      Haha, kept watching and that is exactly what you did lol

  • @gaia35
    @gaia35 A year ago

    Wow, it's the first time I've seen something like this!

  • @djb842
    @djb842 A year ago

    Would definitely watch more videos!

  • @grey7603
    @grey7603 A year ago

    It's like a trippy fever dream.

  • @mosamaster
    @mosamaster A year ago

    Try the standalone version of SD and use negative prompts to make stunning results.

  • @edmartincombemorel
    @edmartincombemorel A year ago +1

    Awesome video, man!! I think you can dive deeper into rendering now with the new ControlNet models; those are some game-changing stuff!!!
    The future is bright for whoever sees its potential.

  • @RealShinpin
    @RealShinpin A year ago

    Might want to look into ControlNet for Stable Diffusion. Makes a lot of new things possible.

  • @OreApampa
    @OreApampa A year ago

    I watched the previous version before this and I'm very impressed

  •  A year ago +3

    I like that "photorealism" today is based on cheap digital cameras with poor white balance. In photorealism we don't try to make anything 'realistic', but rather to reproduce how we see the world through a camera, with lens flares and the artifacts that come from looking through several glass lenses. The "realism" that games often aim for is frequently more realistic than "photorealism". That's why I think AI can help with creativity but not replace designers and good artists.
    Great video btw, very creative, interesting and informative.

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago

      You haven't been playing with it enough then... Stable Diffusion 2 is a limited model set... What this tech can do with video is mind-blowing... it's just a case of computing power now... Someone with 4 or 5 state-of-the-art gaming PCs can make the same movie Hollywood could just 16 months ago... The world has changed... I would recommend you move fast into VR, AR and mixed reality!

    •  A year ago +1

      @@MikeClarkeARVR ... ?

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago +1

      @ Not sure what the thread was! lol... wired on 4 coffees!

  • @real23lions
    @real23lions A year ago +1

    I love the video. It wasn't perfect, but you can see the progress. Sometimes showing the progress matters more, to show how AI has moved forward. Hope you come up with more awesome vids on creating animation with AI and your skills too. 🎉

  • @sdwarfs
    @sdwarfs A year ago +1

    What about taking selected Stable Diffusion images and putting them into the "convert it into a 3D model" pipeline again? ...and then just rigging it again. I know this is a lot of work, but assuming we could actually automate the whole process, this should work out. I'd be interested in how much the 3D models of the female ogre and the old woman would improve. Maybe just a demo of this would be enough... What do you think?

  • @dergutehund
    @dergutehund A year ago

    Amazing...

  • @andyony2
    @andyony2 A year ago

    The look of the woman reminds me of the scramble suit in "A Scanner Darkly" :)
    And wow, I just stumbled onto your channel! Great work, my friend :)

  • @smartduck904
    @smartduck904 A year ago +1

    I think this was a lot more comedic

  • @Sheevlord
    @Sheevlord A year ago

    The constantly shifting visuals should work well with a more psychedelic style. If you've ever eaten certain mushrooms, it should look pretty familiar.

  • @phreakii
    @phreakii A year ago

    I kinda like the style of this short too. It's more like a hand-drawn style.

  • @mikeg4691
    @mikeg4691 A year ago

    This feels like a ketamine trip

  • @darkpsychologybytes
    @darkpsychologybytes A year ago

    What other single-image-to-3D program do you use?

  • @Agent-Spear
    @Agent-Spear A year ago

    Bro, I didn't know you didn't have many subs. That's so sad, you deserve more. Don't worry, at this rate of amazing videos you'll get more subs 🏆🏆

  • @whyisblue923taken
    @whyisblue923taken A year ago

    Either way, I crack up as soon as she says, "Cambria".

  • @funnyberries4017
    @funnyberries4017 A year ago

    Try this again with the new Corridor method!

  • @culpritdesign
    @culpritdesign A year ago +1

    Having the AI generation constrained to a single character, like a monster, could be pretty striking.

  • @Z0RLIG
    @Z0RLIG A year ago

    Damn, this animation already feels nostalgic, lol.

  • @blakekendall6156
    @blakekendall6156 7 months ago

    If you squint your eyes, it looks as good as the original.

  • @dirkmandark
    @dirkmandark A year ago +1

    That technique would be great for some kind of trippy "Fear and Loathing in Las Vegas" style animation, or straight-up nightmare fuel lol. Reminds me of claymation.

  • @PriestessOfDada
    @PriestessOfDada A year ago +1

    One word: ControlNet

  • @JamieDunbar
    @JamieDunbar A year ago

    Lmao, this is almost like watching a movie through some sort of multiverse glasses. Kind of feels like an especially trippy scene from 'Everything Everywhere All at Once'.

  • @framcescomariacarreri5349
    @framcescomariacarreri5349 A year ago

    What if you just put the Stable Diffusion rendered version on top of the original one and lower the opacity or change the blend mode?
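
    [Editor's note] The overlay this comment suggests is a one-liner with Pillow. A minimal sketch, with `blend_frames` a hypothetical helper and the 0.5 opacity purely illustrative:

    ```python
    from PIL import Image

    def blend_frames(original: Image.Image, stylized: Image.Image,
                     alpha: float = 0.5) -> Image.Image:
        # Per pixel: result = (1 - alpha) * original + alpha * stylized.
        # A lower alpha keeps more of the stable original render and
        # damps the frame-to-frame flicker of the stylized pass.
        stylized = stylized.convert("RGB").resize(original.size)
        return Image.blend(original.convert("RGB"), stylized, alpha)
    ```

    Run over every frame pair, this trades stylization strength for temporal stability.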

  • @sijimo
    @sijimo A year ago

    Absolutely love what you do, but I do get a bit of motion sickness watching this. I know that in Deforum you can get coherent results by generating only every specified number of frames. Also, you could try lowering the fps.

  • @LonelionZK
    @LonelionZK A year ago +1

    EbSynth will solve your problem!

  • @EricLefebvrePhotography
    @EricLefebvrePhotography 11 months ago

    I wonder why you didn't do a pass where you rendered only the environment and then another where you only rendered the characters. Would that not have helped with cohesion?
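
    [Editor's note] Recombining such separate passes is standard "over" compositing. A sketch assuming the character pass is exported with an alpha channel (`composite_passes` is a hypothetical helper, not part of the video's workflow):

    ```python
    from PIL import Image

    def composite_passes(environment: Image.Image,
                         characters: Image.Image) -> Image.Image:
        # Wherever the character pass is transparent, the environment
        # pass shows through, so each layer can be stylized (or left
        # untouched) independently before recombining.
        env = environment.convert("RGBA")
        chars = characters.convert("RGBA").resize(env.size)
        return Image.alpha_composite(env, chars)
    ```

    Stylizing only one layer per frame would halve the sources of flicker in the final composite.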

  • @user-jt9yu1eu1g
    @user-jt9yu1eu1g A year ago

    Why not use EbSynth for a more stable picture?

  • @DSJOfficial94
    @DSJOfficial94 A year ago

    the next Golden Globes award

  • @user-hz7bl7xj9y
    @user-hz7bl7xj9y A year ago

    Hey, that's pretty cool. How did you manage to change the format size of that Dream add-on in Blender? Normally you're limited to those 512x512 pixels; that's also my problem right now :D

  • @NelsonStJames
    @NelsonStJames A year ago

    Technology is allowing everybody to have a go at creating their own stuff, rather than just an elite few and "the industry". Why anybody would have a problem with that is beyond me.

  • @carlosalbertolealrodriguez5529

    Wonderful video. Have you heard about machinima? It was huge back in 2000-2010: using the engines of video games to generate animation. The tool is obsolete, but these AI tools remind me of that era. How long until we can produce a complete feature film on our own computers?

  • @Neteroh
    @Neteroh A year ago

    Why don't you use style transfer? Try EbSynth and merge your Stable Diffusion reworks onto your video input.

  • @Sneewitchen1
    @Sneewitchen1 A year ago

    I find the AI render distracting, as it changes almost every frame. But I'm sure it will make significant progress soon! Thank you for sharing your videos! 🙏

  • @myartyfartyaistuff3132
    @myartyfartyaistuff3132 A year ago

    Another option is 2D animation with these techniques; I don't know if Blender can do 2D rigging like DragonBones or Spine

  • @nocturne6320
    @nocturne6320 A year ago +1

    You can check out Gen-1, new research from Runway ML, the same guys that collaborated on Stable Diffusion.
    It's a new AI that can stylize videos given an input video and a style image. Something similar to EbSynth, but more automated and with more versatility. Combine it with the newer Stable Diffusion models and DreamTextures for Blender, and we really are cooking.

  • @supersajanin4710
    @supersajanin4710 A year ago

    Looks like a movie made in 4D hehe

  • @timetamerz
    @timetamerz A year ago

    I felt like I was having a fever dream 🤣

  • @hocestbellumchannel
    @hocestbellumchannel A year ago +1

    For now, you still need to know what you are doing to use AI effectively.
    But I wonder if we will have jobs as animators and 3D artists for much longer...
    I hope I'm wrong, but who knows...

  • @JiJi-kj
    @JiJi-kj A year ago +2

    It was much better before the re-render.

  • @whatwilliwatch3405
    @whatwilliwatch3405 A year ago

    5:09 Oh, it's "breathtaking" alright, but not for a good reason. With the way those images flickered and changed, I couldn't decide which hurt most: my eyes or my brain. It was an interesting experiment, though. I expect the AI programmers will get things sorted out eventually.

  • @veruzzzz
    @veruzzzz A year ago

    When I saw that AI film at the end I wanted to die.

  • @Knightimex
    @Knightimex A year ago

    It won't be long until games that never got a movie adaptation get a movie directed not by one director, but by the entire community.
    There are guaranteed to be silly ones and complete flops, as well as the occasional legendary one.

  • @Blackskies-b1z
    @Blackskies-b1z A year ago

    Did I just watch a movie entirely written, modeled, animated, composed, and rendered by AI?

  • @user-ik8vy1rg8f
    @user-ik8vy1rg8f A year ago

    Yeah, temporal stability is the issue right now.

  • @AllanM444
    @AllanM444 A year ago

    Who needs acid when you've got AI??

  • @synthoelectro
    @synthoelectro A year ago

    This video needs to be shown to the people who don't want this technology. I use SD myself; it's just a new technology, and it may only be slightly AI, but it's a huge step forward in rendering.

  • @lookingforfreewifi
    @lookingforfreewifi A year ago

    fuck, this is cool

  • @egemenegemenegemen
    @egemenegemenegemen A year ago

    Have AI generate the music as well!

  • @captainsqwarters5973
    @captainsqwarters5973 A year ago +4

    That Stable Diffusion was not stable for my eyes

  • @pxrposewithnopurpose5801
    @pxrposewithnopurpose5801 A year ago

    no chill

  • @2-_
    @2-_ A year ago

    why does bro look and sound like a sober Jonas Tyroller

  • @DeclanMBrennan
    @DeclanMBrennan A year ago

    It's almost like the viewer has become unstuck from reality and is flickering rapidly between different quantum versions of the world. 🙂

  • @towardsthesingularity
    @towardsthesingularity A year ago

    Whoever can solve the temporal cohesion problem will make a lot of money! 💰💰💰

  • @somethingtojenga
    @somethingtojenga A year ago +1

    This is the most horrific-looking thing I've ever seen. This short film needs real artists BAD. Just because you can make something that looks like something doesn't mean it's not disturbing af.

  • @peffken8834
    @peffken8834 A year ago

    It has something. Unfortunately, there's no controlling it.

  • @abitw210
    @abitw210 A year ago +1

    I do not see this future xd

  • @Kyntteri
    @Kyntteri A year ago

    Was the story also AI-generated?

  • @pxrposewithnopurpose5801
    @pxrposewithnopurpose5801 A year ago +1

    AI is the future, whether you like it or not

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago +1

      It is not the future. It is TODAY. Start studying MLOps!

  • @A77ila80
    @A77ila80 A year ago +1

    Thanks for the video. I'm trying to update my knowledge of the AI scene to start creating, and your video really inspired me. Would love to see more ideas. I'm a creative director; I used to write scripts and once had a team for indie VR games, then I moved to CGI shorts for commercials at a company that was kind of awful. So I quit the job and started a new filmmaking career, and I'm now looking at AI with real interest for this. Would love to see more and/or talk with you. This would have been amazing in Unity or Unreal Engine, or animated with motion capture 💥💣

    • @MikeClarkeARVR
      @MikeClarkeARVR A year ago

      Get into new media fast, especially VR and AR... now, with these image-generating platforms, people will start to want to watch their OWN prompted stories! Not stuff written by other directors. Why watch a Hollywood film of someone else's drama when I can have AI generate a story of my ancestor's life in 1765? (for example!)

  • @Cezkarma
    @Cezkarma A year ago

    The other one looked better, purely because of how much this one jumped around. It actually made me feel a bit nauseous.

  • @rainbowspongebob
    @rainbowspongebob A year ago

    I would say the og was better