2D Character Image To Full 3D Animation with AI

  • Published: 20 Jun 2024
  • Find out more about AI Animation, and register as an AI creative for free at aianimation.com
    This video is an in-depth tutorial where I take you through all the steps to turn a 2D AI character image created in Midjourney into a fully animated 3D character model using AI.
    It combines AI with traditional compositing and motion graphics techniques to set up the scene and then add more and more to it.
    ------------------- -------------------- ---------------------
    Times
    0:00 - Intro
    0:56 - Create 2D Image Midjourney
    2:38 - 2D to 3D with CSM
    3:44 - AI Animation.com
    4:24 - CSM continued
    5:03 - Blender File Conversion
    7:06 - Rigging Mixamo
    8:15 - Deepmotion
    10:47 - Blender Setup
    13:56 - After Effects
    23:05 - Runway ML
    23:04 - Topaz AI
    25:03 - Depth Pass Runway ML
    25:31 - After Effects
    32:30 - Final Result
    ------------------- -------------------- ---------------------
    Discord:
    / discord
    Tools Used in this Tutorial:
    - Blender
    www.blender.org/
    - Midjourney (You could also use Leonardo.ai or Stable Diffusion)
    www.midjourney.com/
    - Runway ML
    runwayml.com/
    - Adobe Creative Suite
    - Topaz Labs Video AI: (affiliate link)
    topazlabs.com/ref/2271/
    ------------------- -------------------- ---------------------
    After Effects Plugins:
    Element 3D:
    www.videocopilot.net/products...
    Optical Flares:
    www.videocopilot.net/products...
    ------------------- -------------------- ---------------------
    Blender To After Effects Addon Link:
    github.com/sobotka/blender-ad...

Comments • 350

  • @2ndEarth
    @2ndEarth 8 месяцев назад +33

    If you want to avoid flickering and increase quality, I suppose a more limited but effective method to produce the 3D background would be to create a depth map with Depth Scanner (totally worth the $100, as it is much better than any other depth-map creator), give it a slight composite blur, extract some of the simpler foreground objects like big leaves and place them separately in 3D space, and then add movement that matches the robot's path using a displacement map in either After Effects or Blender. GREAT VIDEO!!!

    • @batmanonholiday4477
      @batmanonholiday4477 7 месяцев назад +1

      So, what would such an 8-second clip cost altogether, considering all of the subscriptions and paid plugins -- 500 bucks? That's mad, man. Unless you're doing hundreds of those, it's totally not worth it.
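A rough Blender Python sketch (not from the video) of the depth-map-plus-displacement idea @2ndEarth describes above: a densely subdivided plane is pushed out along its normal by a depth image, giving a 2.5D background you can move a camera past. The file path and all numeric values are placeholder assumptions.

```python
# Run in Blender's Scripting workspace (Blender 3.x/4.x API assumed).
# "/path/to/depth.png" and the numeric values are placeholders to tune per scene.
import bpy

# 1. A densely subdivided plane to act as the 2.5D background card.
bpy.ops.mesh.primitive_plane_add(size=10)
card = bpy.context.active_object

subdiv = card.modifiers.new(name="Subdivision", type='SUBSURF')
subdiv.subdivision_type = 'SIMPLE'   # even grid, no smoothing
subdiv.levels = 6
subdiv.render_levels = 6

# 2. Load the depth map (e.g. exported from a depth-map tool) as a texture.
depth_img = bpy.data.images.load("/path/to/depth.png")
depth_tex = bpy.data.textures.new("DepthMap", type='IMAGE')
depth_tex.image = depth_img

# 3. Displace the card along its normal using the depth values.
disp = card.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = depth_tex
disp.texture_coords = 'UV'
disp.mid_level = 0.0    # black = no push
disp.strength = 1.5     # tune until the parallax matches the robot's path
```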

  • @jonhanson8925
    @jonhanson8925 10 месяцев назад +15

    Wow! CSM looks amazing! I mean, obviously it's still rough, but it seems way ahead of most 3d model generating AI programs I've seen

    • @AIAnimationStudio
      @AIAnimationStudio  10 месяцев назад +2

      Yep, really impressive and will hopefully get better over time. You could also try breaking your image up into components to get a higher res mesh generation and then spend time joining bits together in Blender... but that was a bit too involved for the tutorial.
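For the splitting-into-components idea in the reply above, here is a minimal sketch of re-assembling separately generated pieces in Blender. It assumes the parts were exported as glTF/GLB files; the file names are hypothetical.

```python
# Rough sketch only: import several per-part exports and join them into one
# mesh object ready for rigging. File names below are hypothetical.
import bpy

part_files = ["/path/head.glb", "/path/torso.glb", "/path/legs.glb"]

imported = []
for path in part_files:
    before = set(bpy.data.objects)
    bpy.ops.import_scene.gltf(filepath=path)   # glTF importer ships with Blender
    imported += [o for o in set(bpy.data.objects) - before if o.type == 'MESH']

# Select all imported meshes and join them into a single object.
bpy.ops.object.select_all(action='DESELECT')
for obj in imported:
    obj.select_set(True)
bpy.context.view_layer.objects.active = imported[0]
bpy.ops.object.join()
bpy.context.active_object.name = "robot_combined"
```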

  • @BoldBrainWay
    @BoldBrainWay 6 месяцев назад +3

    Thank you for yet another great tutorial! Much appreciated.

  • @AAvfx
    @AAvfx 5 месяцев назад +1

    Thank you for your effort, this is great. One of the best I've seen.

  • @tryt0readd
    @tryt0readd 7 месяцев назад

    This is simply unbelievably cool. You have opened up a new world for me. Please continue; I watch your lessons in one sitting ❤❤❤

  • @Prime_2024
    @Prime_2024 10 месяцев назад +9

    Thank you. This content is gold ✨️ 💛

  • @Dosujin
    @Dosujin 8 месяцев назад +1

    Holy shit, this is amazing. Definitely going to use this kind of workflow in the future, so THANK YOU VERY MUCH :D

  • @MultiFlashone
    @MultiFlashone 10 месяцев назад +2

    Fabulous tut. Look forward to a version of RunwayML that will run on Android!!! Thanks again for this.

  • @j_shelby_damnwird
    @j_shelby_damnwird 7 месяцев назад

    Stellar tut, my head's brimming with ideas on how to apply this knowledge, thank you.

  • @anhnhvn
    @anhnhvn 6 месяцев назад +8

    Pretty cool tutorial. By the way, CSM can now export in OBJ file format, which can be uploaded directly into Mixamo, so you can skip the file conversion step with Blender.

    • @AIAnimationStudio
      @AIAnimationStudio  6 месяцев назад +1

      Nice... good to know. I definitely want to revisit CSM in the new year. 👍
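For anyone still on the Blender file-conversion step shown in the video (e.g. when an export isn't in a format Mixamo accepts), the conversion can also be scripted headlessly. This is only a sketch with hypothetical paths; run it with: blender --background --python convert.py

```python
# Minimal, hypothetical conversion sketch: GLB in, FBX out for Mixamo.
import bpy

bpy.ops.wm.read_factory_settings(use_empty=True)       # start from an empty scene
bpy.ops.import_scene.gltf(filepath="/path/robot.glb")  # placeholder path
bpy.ops.export_scene.fbx(
    filepath="/path/robot.fbx",                        # placeholder path
    use_selection=False,
    path_mode='COPY',        # copy textures alongside the FBX
    embed_textures=True,     # and embed them so Mixamo sees the material
)
```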

  • @sabofx
    @sabofx 7 месяцев назад

    really impressive, great tutorial! thanks! 🙂

  • @OlliHuttunen78
    @OlliHuttunen78 4 месяца назад

    Quite a complex workpath but very interesting. Thanks for the tips!

  • @gilldanier4129
    @gilldanier4129 8 месяцев назад +4

    Just can't believe it: every time I look at a new tutorial I find yet another AI doing a different job, this time turning 2D into 3D. Where will it end? Thank you for this tut. I have to say I am very impressed with your knowledge of all these programmes, and your ability to share them; so much to take in.

    • @AIAnimationStudio
      @AIAnimationStudio  8 месяцев назад

      Thanks very much. It is indeed a bit bonkers, the number of tools popping up that tackle some other creative process. More on the horizon too.

    • @alexandre.m
      @alexandre.m 8 месяцев назад

      2D to 3D has been a thing for many years, but no one will develop these tools further than their current state because it's simply impractical and impossible to get decent quality

    • @mmorenopampin
      @mmorenopampin 8 месяцев назад

      @alexandre.m I'm curious why an AI auto-retopology tool isn't a thing in these image-to-3D AI models yet. Not enough access to high-quality 3D model data? Not enough good topology data to train the AI? ZBrush, 3D Coat and Blender all have an auto-retopo feature that, although not perfect, gives better mesh flow. I figure it's a data-quality problem: if trained with the proper 3D data, AI could do better retopo (and UVs) than the current algorithms I mentioned. That's my theory anyway... but then again, 3D data tends to be blocked behind paywalls, and is bigger, heavier, and less abundant for data crawlers to grab compared to how they crawled all the image data on the internet for text-to-image.
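On the retopology point above, one accessible (if blunt) cleanup pass is Blender's built-in voxel remesh. It only gives a more uniform mesh, not animation-ready edge flow, so treat this sketch as a starting point; the values are guesses.

```python
# Illustrative only: voxel-remesh the active AI-generated mesh for a more
# uniform base before manual retopo or sculpting.
import bpy

obj = bpy.context.active_object        # the imported generated mesh

remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'VOXEL'
remesh.voxel_size = 0.02               # smaller = denser (and heavier) mesh
bpy.ops.object.modifier_apply(modifier=remesh.name)

bpy.ops.object.shade_smooth()          # smooth shading so the new surface reads cleanly
```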

  • @memomind7415
    @memomind7415 10 месяцев назад +1

    Amazing tutorial. Thank you 💓

    • @AIAnimationStudio
      @AIAnimationStudio  10 месяцев назад +1

      Glad you liked it. A bit longer than I was aiming for but wanted to cover quite a lot in a single tutorial.

    • @memomind7415
      @memomind7415 10 месяцев назад

      @@AIAnimationStudio anything you present is great 👍

  • @jonrowe7694
    @jonrowe7694 4 месяца назад +2

    I don't think I've learnt so much in one tutorial before. Fantastic job. I'll be referencing this for years to come. Thank you.

  • @_TetKaneda
    @_TetKaneda 9 месяцев назад +1

    Great contribution. Great tutorial. You have my like. Thank you so much. Greetings

    • @AIAnimationStudio
      @AIAnimationStudio  9 месяцев назад

      Cheers @3DW3D-TetKaneda. Glad you liked it and greetings too.

  • @Xenocide78
    @Xenocide78 8 месяцев назад

    Amazing. Super cool video.

  • @dsfilmsmetzingen
    @dsfilmsmetzingen 10 месяцев назад

    Thank you for this great Tutorial

  • @mintsunvivid
    @mintsunvivid Месяц назад

    Amazing. Best YouTube channel. Thank you.

  • @FritzGnad
    @FritzGnad 7 месяцев назад

    thanks for sharing your workflow!

  • @viralbee6412
    @viralbee6412 6 месяцев назад +4

    Awesome video. Amazing how much time you spent creating it as well. I really like the output, but I'd love to see an update with the robot's shadow falling in front of it, aligning with the other shadows in the jungle scene. I think it would make it feel even more realistic.

  • @-_-DatDude
    @-_-DatDude 2 месяца назад

    Thanks very much for your tutorial. It is very much appreciated. 👌👌👌

  • @FoxGhost7
    @FoxGhost7 7 месяцев назад +4

    The 2D-to-3D tool will become a lot better if it allows for multiple reference images, especially front, side and back.

  • @coldstarart
    @coldstarart 7 месяцев назад

    🔥Very cool tutorial! as always 🤩thank you for sharing this 🙏 Like👍

  • @borob.5168
    @borob.5168 9 месяцев назад +1

    very informative! thank you, Scott Adkins

  • @DailyComedi
    @DailyComedi 10 месяцев назад

    amazing work

  • @Ballzy247
    @Ballzy247 10 месяцев назад +2

    My brain now hurts from this information download. Thanks!

  • @Co-Op_Mode
    @Co-Op_Mode 8 месяцев назад

    This is insane !

  • @JasonSmith-jv7wl
    @JasonSmith-jv7wl 8 месяцев назад +30

    Honestly, the 2D-to-3D tool would be great for crafting a base, and you can polish from there pretty well. If you made a better UV unwrap and baked the texture onto that better UV, it could also serve as a good base. Then, with polish, the Mixamo (or AccuRig) rig would be more accurate and the animation would work a lot better too. Honestly, I can really get a lot out of this workflow, as it could save a ton of time.

    • @TriZon-do7ex
      @TriZon-do7ex 8 месяцев назад

      Right.

    • @theheadmessage2934
      @theheadmessage2934 8 месяцев назад +6

      I seriously wish UVing and retopo could be easier

    • @KokahZ777
      @KokahZ777 7 месяцев назад +2

      that's the thing with AI, you'd spend so much time just cleaning up what it gave you to just have a healthy base to work on
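A hedged sketch of the "better UV unwrap and re-bake the texture" idea from @JasonSmith-jv7wl's comment above, using a Cycles selected-to-active bake onto a duplicate with fresh UVs. Object names, resolution and output path are placeholders; run it with the generated mesh active in Object Mode.

```python
import bpy

src = bpy.context.active_object            # generated mesh with its original texture

# Duplicate it; the copy gets clean UVs and becomes the bake target.
bpy.ops.object.duplicate()
dst = bpy.context.active_object
dst.name = src.name + "_clean"

# Fresh UV layout on the copy.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')

# New blank image + material on the copy to receive the bake.
bake_img = bpy.data.images.new("clean_diffuse", 2048, 2048)
mat = bpy.data.materials.new("clean_baked")
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex_node.image = bake_img
mat.node_tree.nodes.active = tex_node      # bake writes to the active image node
dst.data.materials.clear()
dst.data.materials.append(mat)

# Bake the original surface colour across to the new UV layout.
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.select_all(action='DESELECT')
src.select_set(True)
dst.select_set(True)
bpy.context.view_layer.objects.active = dst
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                    use_selected_to_active=True, cage_extrusion=0.01, margin=8)

bake_img.filepath_raw = "/path/clean_diffuse.png"   # placeholder path
bake_img.file_format = 'PNG'
bake_img.save()
```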

  • @axs203
    @axs203 8 месяцев назад

    So incredible!

  • @pocongVsMe
    @pocongVsMe 5 месяцев назад

    awesome tutorial

  • @mylesdb
    @mylesdb 8 месяцев назад +1

    Awesome! When the software giants bring this all together into one single pipeline instead of jumping between different software apps, I'll be jumping in and making my feature film and immersive game world!

    • @waldau8986
      @waldau8986 8 месяцев назад +9

      Yeah, like every untalented person in this world with a PC is going to. Like trying to sell books on amazon. Or doing Indie games since everything has become so easy for unskilled normies to do. And then no one is interested in entertainment anymore.
      Because every average joe is putting his boring crap online and there is so much crap on the internet that you barely find the good stuff :D

    • @eliescobis9922
      @eliescobis9922 7 месяцев назад

      The same happened to RPG Maker, and now any game made in it is hated because of simple joes that mass-produced "games"

    • @gibbsduhem1066
      @gibbsduhem1066 7 месяцев назад

      @waldau8986 then it all ends up getting demonetized LMAO

    • @LostBots
      @LostBots 3 месяца назад

      If you're not motivated enough to learn the skills to do it now, you're definitely never doing it, even when it's almost done for you.

  • @Escelce
    @Escelce 9 месяцев назад

    Great video thank you

  • @igor_cojocaru
    @igor_cojocaru 8 месяцев назад

    Incredible

  • @MyDigitalHub
    @MyDigitalHub 8 месяцев назад +3

    As an AI myself, I am impressed how you play around my AI friends 🎉

  • @Mrzero_clks
    @Mrzero_clks 8 месяцев назад

    You blew my mind!

  • @user-sy4sq1ck3e
    @user-sy4sq1ck3e 5 месяцев назад

    Cool!

  • @glineb
    @glineb 10 месяцев назад +15

    Awesome lessons, bro! I am very glad that there are enthusiasts like you. Thanks to your creativity, we get unique results! 🤖

  • @stephenmackenzie9016
    @stephenmackenzie9016 9 месяцев назад

    Nice topology

  • @LAMikeZ
    @LAMikeZ 6 дней назад

    Thank you.

  • @GlobeTrekker21
    @GlobeTrekker21 7 месяцев назад

    I subscribed, thanks you sharing!

  • @cosmochatterbot
    @cosmochatterbot 4 месяца назад

    Absolutely *mind-blowing!* This tutorial is a game-changer for anyone in the realm of digital art and animation. Watching a 2D image evolve into a full 3D animation through the seamless integration of AI and traditional techniques is nothing short of magical. The blend of Midjourney's creative image generation, Common Sense Machines' innovative 3D modeling, and the simplicity of rigging and animating with Mixamo and DeepMotion showcases an exciting frontier for creators. The final touch with Runway ML and Adobe After Effects bringing everything together in a vivid, animated scene is the cherry on top! It's thrilling to see how accessible and powerful these tools have become, opening up endless possibilities for storytelling and visual creativity. Kudos for such an enlightening and inspiring tutorial! Can't wait to dive into these tools and bring my own characters to life. #AnimationRevolution

  • @Neosin1
    @Neosin1 6 месяцев назад

    Mind = Blown!
    Only thing lacking atm is the textures on the character, which look crap, but I'm guessing AI will improve that soon 😮

  • @user-ws6lj2rq3e
    @user-ws6lj2rq3e 9 месяцев назад

    youre an angel

  • @animaaioficial
    @animaaioficial 10 месяцев назад +1

    brilliant content

  • @thebox385
    @thebox385 10 месяцев назад +7

    This is awesome!
    Hoping in the future any of these would consider cloth and hair physics as well. I'd presume some people would want long-haired (or long-clothed) characters to be processed this way and it would look weird if the hair and clothes are too stiff while the entire body is animating naturally.

    • @AIAnimationStudio
      @AIAnimationStudio  10 месяцев назад +2

      Thanks. Yeah, absolutely. Even things like glasses on a character would need to be separately created and added to a model at the moment to give that separation from the face... but it's useful for lots of other cases.
      The Character Creator and iClone from Reallusion can do some crazily good stuff that would cover a lot of this right now... but the software cost and learning curve are considerably higher.

    • @viktortoleski
      @viktortoleski 8 месяцев назад +2

      Yes, this is awesome, but it is more scary than awesome. For this video you could have employed 30 people, each specialized in a separate task. Now you're going to need only one.

    • @sun_beams
      @sun_beams 8 месяцев назад

      @viktortoleski and it looks like one person cobbled it together. While it's neat, it is still SO limited and not worth replacing anyone. That base mesh is garbage, the lights are baked into the textures, and the tracking is bad because it isn't a real camera, so the points aren't actually accurate. It's very cool that someone can make their own 3D shorts with this, but it's still way far off from replacing most of the artists.

    • @Luizfernando-dm2rf
      @Luizfernando-dm2rf 6 месяцев назад

      @viktortoleski It's never as simple as "new tech -> fewer jobs -> more poverty"; that's just plain naive.
      I mean, the less work is needed to produce something, the more you can make in the same time window, and you can sell it for cheaper too! That is awesome if you ask me :D.

  • @KostasMachete
    @KostasMachete 9 месяцев назад +1

    Question: can I use AIAnimation to actually animate a MetaHuman and import these animations into Unreal 5, or is it only for AI-generated pics? Thanks, and great work with this.

  • @TheKylebear
    @TheKylebear 4 месяца назад

    This would be useful for artists who provide their own images rather than taking them from online.

  • @jatinderarora2261
    @jatinderarora2261 10 месяцев назад +1

    Amazingggggggggggggggggggggggg.

  • @Yipper64
    @Yipper64 8 месяцев назад +2

    3:12 If I'm not mistaken, this isn't actually AI. They say it's AI but really it's just people quickly approximating a model that looks somewhat similar to the image.
    At the very least, there are websites out there that advertise themselves as AI when it's not really AI at all.
    The biggest tell is usually if it takes hours to generate; if it's AI it shouldn't take very long at all. Also, if the topology is nearly perfect (not rough looking) it can often mean it was just generic assets meshed together.

  • @LookAroundBLOG
    @LookAroundBLOG 6 месяцев назад

    Thank you for such an informative tutorial. I would like to clarify: is it possible to fit 3D objects into a 360 video?
    I make 360 video footage (I have some on my channel) and would like to make an unusual video by compositing into it what I just saw in your lesson, namely animated 3D objects.
    Thank you again for such an informative tutorial.

  • @Sketching4Sanity
    @Sketching4Sanity 7 месяцев назад

    LOVE ✊🏿

  • @razorshark2146
    @razorshark2146 7 месяцев назад +3

    Great video, I think AI has its place perhaps as a tool for creating animated storyboards, to get the overall idea/feel of the project early on. And after that, replace those generic looking generated assets with some proper designs as you polish things in a later phase in the art pipeline.

    • @AIAnimationStudio
      @AIAnimationStudio  7 месяцев назад +1

      Thanks... and yep, 100% agree. All the various AI tools are intriguing and useful in certain instances, as part of a much broader pipeline involving an array of talented hands-on work.

  • @csabamolnar4416
    @csabamolnar4416 10 месяцев назад +1

    I would change the lens flare layer to Screen mode :)

  • @mukbanmulba1263
    @mukbanmulba1263 7 месяцев назад

    i like your method the best

  • @AngriestAmerican
    @AngriestAmerican 9 месяцев назад +1

    holy F*ck!! A model from an image? Impressive

  • @creatorsmafia
    @creatorsmafia 10 месяцев назад +12

    It is amazing to witness how AI can transform a 2D image into a fully rigged and animated 3D character.

  • @rogerb2280
    @rogerb2280 5 месяцев назад

    Great

  • @zergidrom4572
    @zergidrom4572 8 месяцев назад

    10:37 Some solid site... the feet are almost not sliding in space, unlike in many other motion capture projects :)

  • @reflectionsAND
    @reflectionsAND Месяц назад +1

    I saw this tutorial 5 months ago. I now know how to use Blender pretty well, and use so many other AI apps. My question is: after all you did, can you download it as an MP4 to upload to Facebook or YouTube? Thanks, I look forward to doing this too!

    • @AIAnimationStudio
      @AIAnimationStudio  Месяц назад +1

      Hey, yeah, via Adobe After Effects you can add the composition to the render queue and save it out to an MP4 (or other video formats).

  • @inkinno
    @inkinno 3 месяца назад

    Good... very good.

  • @bomjariki
    @bomjariki 4 месяца назад

    Nice job! How did you get such an accurate model out of CSM?

  • @MrNovaboi90
    @MrNovaboi90 8 месяцев назад +2

    Wow, I just learned how to create and rig a character recently (which takes forever), and just the fact that I can spin up and rig a character with little effort is dope. Definitely can speed up my Blender workflow. I probably won't do the After Effects portion onwards, but dope nonetheless.

    • @AIAnimationStudio
      @AIAnimationStudio  8 месяцев назад

      Yeah just using Mixamo to get a basic rig for a character is bloomin handy.

  • @wernerhiemer406
    @wernerhiemer406 9 месяцев назад +1

    So how much time did this "creation" consume in real time versus making it by "hand", if typing, pushing a mouse and clicking, and holding a graphics tablet, while already using a computer and software, can be considered hand work or crafting?

    • @AIAnimationStudio
      @AIAnimationStudio  9 месяцев назад +3

      Not actually sure, as I was testing/learning and then making the tutorial soon after.
      There are still lots of arguments for modelling the character manually, depending on what you want to depict in the scene, but there are definitely use cases for these tools.
      You could model and texture a similar robot using basic modelling techniques comfortably in a day and have a better quality model and rig.
      But for speed and a simple 1 button press and walking away the CSM version is already impressive.
      The background scene in runway ML would take ages to create unless you paid for stock footage or flew to a rain forest.
      Once setup you could expand on the small clip I created using more motion capture, traditional 3d rig animation and various camera cuts.

  • @nelyrions1838
    @nelyrions1838 6 месяцев назад +1

    Looking at the mesh I had internal screaming. I thought this would be of greater use, but I guess when you want cheap, low-quality characters and assets it could still be useful.
    And dude... just use Blender for the animation work rather than After Effects. Set the ground as a shadow catcher and use an HDRI with similar colours and shapes and you're gold. You could even tweak the animation as you go.
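A minimal Cycles sketch of the shadow-catcher-plus-HDRI setup suggested in the comment above (Blender 3.x assumed; the HDRI path is a placeholder).

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.film_transparent = True        # keep the background transparent

# Ground plane that only receives the character's shadow.
bpy.ops.mesh.primitive_plane_add(size=20)
ground = bpy.context.active_object
ground.is_shadow_catcher = True             # Blender 3.x+; older versions used obj.cycles.is_shadow_catcher

# HDRI world so the light and shadow direction match the background plate.
world = bpy.data.worlds.new("JungleHDRI")
world.use_nodes = True
env = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/jungle.hdr")   # placeholder HDRI
background = world.node_tree.nodes["Background"]
world.node_tree.links.new(env.outputs["Color"], background.inputs["Color"])
scene.world = world
```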

  • @richeskokoti1235
    @richeskokoti1235 9 месяцев назад +6

    Alright so this is the plan
    Step 1: Learn the basics of 3d.
    Step 2: Learn how to use AI tools.
    Step 3: Merge the two .

    • @AIAnimationStudio
      @AIAnimationStudio  9 месяцев назад +2

      Excellent... good plan. Plus, there are things coming out which will make it even more possible to make unique content, where some basic 3D skills will be really good to have in your skill set.

  • @PaoloH111
    @PaoloH111 6 месяцев назад

    It's a great video, but does that mean 3D animation will end, or will demand decrease in the future?
    Because that makes me afraid for this profession in the future.

  • @qeveritas9319
    @qeveritas9319 4 месяца назад

    The key question is: can I freely use assets created in CSM in commercial projects, e.g. my own game made in Unreal or Unity? I assume that after some tuning in, e.g., Blender I can call them my own assets, with full commercial rights?

  • @davy955
    @davy955 5 месяцев назад

    Is there a video on rigged animation for Stable Diffusion?

  • @giovannimontagnana6262
    @giovannimontagnana6262 9 месяцев назад +1

    lol I remember I saw your work on Discord channel first

  • @KJ-rc3io
    @KJ-rc3io 8 месяцев назад +1

    What are the terms and conditions for using the 3d-model in a commercial product, a game for example?

    • @joshuaborner
      @joshuaborner 8 месяцев назад

      AI generated art is not copyrightable

  • @beansandwiched
    @beansandwiched 9 месяцев назад

    My image doesn't convert. It just shows "training preview" and nothing else. Do I have to get the paid version for the 3D model?

  • @Zharkan16
    @Zharkan16 8 месяцев назад +1

    Very nice! But the shadow is going the wrong way at the end :)

    • @AIAnimationStudio
      @AIAnimationStudio  8 месяцев назад

      Yep.👍.. could and should have spent more time refining the composite.. but it was already a bloomin long video ... next time..👍

  • @Ana-ez7cj
    @Ana-ez7cj 4 месяца назад

    Could you achieve the same with the Adobe suite for 3D?

  • @MrRomanrin
    @MrRomanrin 6 месяцев назад +1

    HI!
    Can I use a character IMAGE to create a T-POSED MODEL of it?

  • @alekseyzhuravlov
    @alekseyzhuravlov 8 месяцев назад +1

    This is amazing! But the complexity & amount of work is high for such a small scene... Haven't used Blender before, but can't wait to start learning.
    Thank you for this tutorial!

    • @user-nz8cl4pr5l
      @user-nz8cl4pr5l 8 месяцев назад +1

      Bruh

    • @daniel4647
      @daniel4647 8 месяцев назад +2

      This is no work at all, some of us know how to do all of this manually, modeling, unwrapping, texture painting, rigging, and animating. The dude basically took a month of work and reduced it to an hour of work, and you still think it's too much? Granted, the final result kind of looks like crap, but it's not the worst, and considering that with this speed he could make a full length movie in less than 6 months by himself it's pretty insane.

    • @sun_beams
      @sun_beams 8 месяцев назад +1

      @daniel4647 Mindsets like the one the guy you're replying to has are why I'm not worried about being replaced by consumer AI users hahaha

    • @Marvin-vs3tu
      @Marvin-vs3tu 2 месяца назад

      @daniel4647 lmao, the work is good for a few hours of work, even for people who have little knowledge of CG

  • @williamminnaar6311
    @williamminnaar6311 4 месяца назад

    Good day, I've tried to use Midjourney - how do you install it and make it work? Do you maybe have a tutorial, please? Thanks!

    • @AIAnimationStudio
      @AIAnimationStudio  4 месяца назад

      You can use it via Discord. I've covered it in a few videos. I think one of my older ones still holds true... ruclips.net/video/t5Vq4ahmn74/видео.htmlsi=zILv30Fn3mqJduln&t=117

  • @pranjal9830
    @pranjal9830 4 месяца назад

    Are there tutorials for doing this same type of AI animation on mobile?

  • @plattcharlesjr
    @plattcharlesjr 8 месяцев назад

    How did you manage to get an image of the front and back of your character in one generation?

    • @AIAnimationStudio
      @AIAnimationStudio  8 месяцев назад +1

      I think in this instance it was pure luck. I didn't actually use the rear view for the CSM process. But if you want it, you can try including "character sheet" in your Midjourney prompt to encourage it to produce multiple views of the same character in its generations.

  • @TecnologySudamerican
    @TecnologySudamerican 8 месяцев назад

    WOW, this is going to take jobs away from a lot of us Blender users :v

  • @WelshDog
    @WelshDog 10 месяцев назад +1

    👏👏❤️‍🔥👌👊

  • @derethreborn910
    @derethreborn910 9 месяцев назад +1

    Would we do better with the image-to-3D mesh if we were using Nvidia NeRF directly?

    • @travissmith5994
      @travissmith5994 8 месяцев назад

      NeRF requires several consistent images from multiple directions - something you can't currently get with AI-generated images.

    • @derethreborn910
      @derethreborn910 8 месяцев назад

      @@travissmith5994 Thanks. It would be interesting to see if we could get the same images but in rotation by using image weights in order to create a panorama. Might be worth a try.

  • @MyDigitalHub
    @MyDigitalHub 8 месяцев назад

    👏👏👏👏

  • @JFKTLA
    @JFKTLA 7 месяцев назад

    Wait, so Midjourney can use your picture as a reference?

  • @cynthiacasey6631
    @cynthiacasey6631 5 месяцев назад

    I have a laptop, but I have a question: will I need to get a better computer, or should I get flash drives or anything, since downloading so many things could overload my computer and make it too full? Also, how much does it cost for all these subscriptions that you're mentioning? And if I cannot follow or see the pixels, because they are kind of small even with my computer on full blast, can I get a tutor and have him or her paid for by a Pell Grant? I'm just asking so that I know how to proceed. I'm more of a hands-on person: I hear what you're saying, but hearing it and doing it are two different things, and I feel that I might need that extra help. So what would you recommend for a live tutor?

  • @puggyk4220
    @puggyk4220 6 месяцев назад

    Can I do this using only Python and open-source tools?

  • @jjs9447
    @jjs9447 6 часов назад

    When using Midjourney, I noticed that you didn't use your picture as a reference; you didn't send the link in the chat.

  • @Chinshellbuford
    @Chinshellbuford 7 месяцев назад

    Imagine the new fnaf fan games with this

  • @cynthiacasey6631
    @cynthiacasey6631 5 месяцев назад

    Question: what if you want to give your character a voice, or a specific voice? How do I add a voice to my animation?

    • @AIAnimationStudio
      @AIAnimationStudio  5 месяцев назад

      You can generate some great AI voices using something like ElevenLabs. It's really very good, and you can optionally record your own voice for improved control over emphasis, then swap in one of the AI voices. As for animating the lip sync, there are a few options (I need to explore these more soon). Things like D-ID and HeyGen are one approach. Lalamu Studio has a free lip-sync demo, which is low res but can work well. Plus there's Sync Labs' new platform.

  • @mancsy3262
    @mancsy3262 3 часа назад

    Can a character pick up a prop, say a rifle, and do the animation of shooting?

  • @robertceron9056
    @robertceron9056 10 месяцев назад

    Just read (not sure if true) that CSM grants royalty-free rights to everything generated. Any word on this?

    • @robertceron9056
      @robertceron9056 10 месяцев назад +1

      Terms: Limited License Grant to CSM. By submitting any Capture to or via the Service, you grant CSM a worldwide, non-exclusive, irrevocable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to: (a) host, store, transfer, reproduce, modify for the purpose of formatting for display, create derivative works of such Captures as authorized in these Terms in order to provide the Service and Models to you, and (b) use your Models and any data we generate from the use of your Captures in order to improve and enhance the Service.

  • @rekallconsulting
    @rekallconsulting 9 месяцев назад

    I wonder how AI is impacting the pricing of creating AI generated videos and film?

  • @julx97
    @julx97 9 месяцев назад +1

    Why did you actually use -iw when you didn’t put the image link in there?

    • @AIAnimationStudio
      @AIAnimationStudio  9 месяцев назад +1

      🤦‍♂... ahhhh!... good spot. Just human error when making the video. Should have dragged the uploaded image down to the prompt (after typing /imagine) and then written out the text prompt. Which I'm sure I did actually do for the generated image I used, but obviously not when recording that part of the tutorial.

    • @julx97
      @julx97 9 месяцев назад

      @@AIAnimationStudio haha, okay 😁

  • @batmanonholiday4477
    @batmanonholiday4477 7 месяцев назад +1

    So how much did this all cost in the end? Seems like an expensive set of toys where you need to pay for every step. Is getting a 3D model with a rig free, at least?

    • @DimensionDoorTeam
      @DimensionDoorTeam 7 месяцев назад

      To be honest, there is so much jumping between different software, and, as you mentioned, it can be quite expensive. It is simply much faster to do it all by hand. Additionally, I'd love to know what will happen if the person who made the request asks for changes :)

  • @valtzar3200
    @valtzar3200 4 месяца назад

    Far from a professional result, but I bet it will do a big favor for a lot of indie devs.

  • @igorthelight
    @igorthelight 8 месяцев назад

    5:03 - you mean, 3D modelling, sculpting, animation, drawing, 2D animation, physics simulation, video editing software? ;-)

  • @scrutch666
    @scrutch666 5 месяцев назад

    A good artist can do that quality in 5min

  • @00Mass00
    @00Mass00 7 месяцев назад

    In the end result the robot only takes two steps, whereas your movement in Deepmotion was quite a bit longer. Why is that?

    • @AIAnimationStudio
      @AIAnimationStudio  7 месяцев назад

      Simply because my Deepmotion output was a bit poor, largely due to my poor input video, messy scene and poor lighting, which didn't give Deepmotion the best input to work with. So I simply cherry-picked that part of the motion for the purposes of the tutorial.
      In hindsight, had I known the video was going to get more than a few thousand views, I would have spent a bit more time filming 😆, plus more time polishing the finished composite, shadow direction, etc.... and maybe done a few different shots to tell a short narrative story. (Which was the goal at the time, before I got distracted by the next shiny AI process.)

  • @RonColeArt
    @RonColeArt 3 месяца назад

    You are no longer needed. -The Machines

  • @12Prophet
    @12Prophet 8 месяцев назад +1

    Ah beautiful. This is gonna put film studios out of business and I'm here for it.

    • @Thefan
      @Thefan 8 месяцев назад +2

      Unlikely, it'll be another tool that VFX artists in film studios use. It still needs human input.

    • @fernando3061
      @fernando3061 7 месяцев назад

      @Thefan Yes, VFX artists: far fewer of them, with far less training. What a horrible argument. My lord, I'm so tired of morons using this "it's just another tool in the tool belt" argument.