2D Character Image To Full 3D Animation with AI
- Published: 20 Jun 2024
- Find out more about AI Animation, and register as an AI creative for free at aianimation.com
This video is an in-depth tutorial, where I take you through all the steps to turn a 2D AI character image created in Midjourney into a fully animated 3D character model using AI.
With a mix of AI and traditional composition & motion graphic techniques to set up and add more and more to the scene.
------------------- -------------------- ---------------------
Times
0:00 - Intro
0:56 - Create 2D Image Midjourney
2:38 - 2D to 3D with CSM
3:44 - AI Animation.com
4:24 - CSM continued
5:03 - Blender File Conversion
7:06 - Rigging Mixamo
8:15 - Deepmotion
10:47 - Blender Setup
13:56 - After Effects
23:05 - Runway ML
23:04 - Topaz AI
25:03 - Depth Pass Runway ML
25:31 - After Effects
32:30 - Final Result
------------------- -------------------- ---------------------
Discord:
/ discord
Tools Used in this Tutorial:
- Blender
www.blender.org/
- MidJourney (You could also use Leonardo.ai or Stable Diffusion)
www.midjourney.com/
- Runway ML
runwayml.com/
- Adobe Creative Suite
- Topaz Labs Video AI: (affiliate link)
topazlabs.com/ref/2271/
------------------- -------------------- ---------------------
After Effects Plugins:
Element 3D:
www.videocopilot.net/products...
Optical Flares:
www.videocopilot.net/products...
------------------- -------------------- ---------------------
Blender To After Effects Addon Link:
github.com/sobotka/blender-ad...
If you want to avoid flickering and increase quality, I suppose a more limited but effective method to produce the 3D background would be to create a depth map with Depth Scanner (totally worth the $100, as it is much better than any other depth-map creator), give it a slight composite blur, extract some of the simpler foreground objects like big leaves and place them separately in 3D space, and then add a movement that matches the robot's path with a displacement map in either After Effects or Blender. GREAT VIDEO!!!
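The depth-driven displacement idea in the comment above can be sketched in a few lines of NumPy. This is a toy illustration only, not the commenter's actual workflow (they suggest Depth Scanner plus After Effects or Blender); the function name and the uniform test depth are invented for the example, and a real scene would use a per-pixel depth map:

```python
import numpy as np

def parallax_shift(image: np.ndarray, depth: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """Shift each pixel horizontally in proportion to its depth to fake parallax.
    image: (H, W) grayscale; depth: (H, W) in [0, 1], where 1 = nearest."""
    h, w = image.shape
    out = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # nearer pixels move farther, giving the 2.5D parallax illusion
        shift = (depth[y] * max_shift).astype(int)
        src = np.clip(cols - shift, 0, w - 1)
        out[y] = image[y, src]
    return out

img = np.tile(np.arange(16, dtype=float), (4, 1))  # simple gradient test image
dep = np.full((4, 16), 0.5)                        # uniform mid-range depth
shifted = parallax_shift(img, dep, max_shift=4)    # every pixel shifts by 2
```

In practice you would animate `max_shift` (or the camera offset) over time to match the robot's path, which is what the displacement-map effect in After Effects or Blender's displace modifier does with far better filtering.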
So, what would such an 8-second clip cost altogether, considering all of the subscriptions and paid plugins -- 500 bucks? That's mad, man. Unless you're doing hundreds of those, it's totally not worth it.
Wow! CSM looks amazing! I mean, obviously it's still rough, but it seems way ahead of most 3D model-generating AI programs I've seen.
Yep, really impressive and will hopefully get better over time. You could also try breaking your image up into components to get a higher res mesh generation and then spend time joining bits together in Blender... but that was a bit too involved for the tutorial.
Thank you for yet another great tutorial! Much appreciated.
Thank you for your effort, this is great. One of the best I've seen.
this is simply unrealistically cool, you have opened a new world for me, please continue, I watch your lessons in one breath ❤❤❤
Thank you. This content is gold ✨️ 💛
Cheers
Holy shit, this is amazing. Definitely going to use this kind of workflow in the future, so THANK YOU VERY MUCH :D
Fabulous tut. Look forward to a version of RunwayML that will run on Android!!! Thanks again for this.
Stellar tut, my head's brimming with ideas on how to apply this knowledge, thank you.
Pretty cool tutorial. By the way, CSM can now export in OBJ file format, which can be uploaded directly into Mixamo, so you can skip the file conversion step with Blender.
Nice... good to know. I definitely want to revisit CSM in the new year. 👍
really impressive, great tutorial! thanks! 🙂
Quite a complex workpath but very interesting. Thanks for the tips!
Just can't believe it; every time I look at a new tutorial I find yet another AI doing a different job, this time turning 2D into 3D. Where will it end? Thank you for this tut. I have to say I am very impressed with your knowledge of all these programs, and your ability to share them; so much to take in.
Thanks very much. It is indeed a bit bonkers, the number of tools popping up that tackle some other creative process. More on the horizon too.
2D to 3D has been a thing for many years, but no one will develop these tools further than their current state because it's simply impractical, and impossible to get decent quality.
@alexandre.m I'm curious: why is an AI auto-retopology tool not a thing on these image-to-3D AI models yet? Not enough access to high-quality 3D model data? Not enough good topology data to train the AI? ZBrush, 3D-Coat and Blender all have an auto-retopo feature that, although not perfect, gives better mesh flow. I figure it's a data-quality problem... if trained with the proper 3D data, AI could do better retopo (and UVs) than the current algorithms I mentioned. That's my theory anyway... but then again, 3D data tends to be locked behind paywalls, and is bigger, heavier, and less abundant for data crawlers to grab compared to all the image data they crawled for text-to-image.
Amazing tutorial. Thank you 💓
Glad you liked it. A bit longer than I was aiming for but wanted to cover quite a lot in a single tutorial.
@@AIAnimationStudio anything you present is great 👍
I don't think I've learnt so much in one tutorial before. Fantastic job. I'll be referencing this for years to come. Thank you.
Great contribution. Great tutorial. You have my like. Thank you so much. Greetings
Cheers @3DW3D-TetKaneda. Glad you liked it and greetings too.
Amazing. Super cool video.
Thank you for this great Tutorial
Amazing. Best YouTube channel. Thank you.
thanks for sharing your workflow!
Awesome video. Amazing how much time you spent creating it as well. I really like the output, but I'd love to see an update with the robot's shadow falling in front of it, aligning with the other shadows of the jungle scene. I think it would increase the realistic feel even more.
Thanks very much for your tutorial. It is very much appreciated. 👌👌👌
The 2D-to-3D tool will become a lot better if it allows for multiple reference images, especially front, side and back.
🔥Very cool tutorial! as always 🤩thank you for sharing this 🙏 Like👍
very informative! thank you, Scott Adkins
Thanks Scott.
amazing work
My brain now hurts from this information download. Thanks!
😁.. you're welcome
This is insane !
Honestly, the 2D-to-3D tool would be great for crafting a base, and you can polish from there pretty well. If you made a better UV unwrap and baked the texture to that better UV, it could also serve as a good base. Then, with polish, the Mixamo (or AccuRig) rig would be more accurate and the animation would work a lot better too. Honestly, I can really get a lot out of this workflow, as it could save a ton of time.
Right.
I seriously wish uving and retopo could be easier
that's the thing with AI, you'd spend so much time cleaning up what it gave you just to have a healthy base to work on
So incredible!
awesome tutorial
Awesome! When the software giants bring this all together into one single pipeline instead of jumping between different software apps, I'll be jumping in and making my feature film and immersive game world!
Yeah, like every untalented person in this world with a PC is going to. Like trying to sell books on amazon. Or doing Indie games since everything has become so easy for unskilled normies to do. And then no one is interested in entertainment anymore.
Because every average joe is putting his boring crap online and there is so much crap on the internet that you barely find the good stuff :D
the same happen to RPG maker and now any game that is made in hated because of simple joes that mass produced "games"
@waldau8986 then it all ends up getting demonetized LMAO
If you're not motivated enough to learn the skills to do it now, you're definitely never doing it, even when it's almost done for you.
Great video thank you
Incredible
As an AI myself, I am impressed how you play around my AI friends 🎉
you blew my mind!
Cool!
Awesome lessons, bro! I am very glad that there are enthusiasts like you. Thanks to your creativity, we get unique results! 🤖
Cool,, thanks mate 👍
Nice topology
Thank you.
I subscribed, thanks you sharing!
Absolutely *mind-blowing!* This tutorial is a game-changer for anyone in the realm of digital art and animation. Watching a 2D image evolve into a full 3D animation through the seamless integration of AI and traditional techniques is nothing short of magical. The blend of Midjourney's creative image generation, Common Sense Machines' innovative 3D modeling, and the simplicity of rigging and animating with Mixamo and DeepMotion showcases an exciting frontier for creators. The final touch with Runway ML and Adobe After Effects bringing everything together in a vivid, animated scene is the cherry on top! It's thrilling to see how accessible and powerful these tools have become, opening up endless possibilities for storytelling and visual creativity. Kudos for such an enlightening and inspiring tutorial! Can't wait to dive into these tools and bring my own characters to life. #AnimationRevolution
Mind = Blown!
Only thing lacking atm is the textures on the character, which look crap, but I'm guessing AI will improve them soon 😮
youre an angel
brilliant content
This is awesome!
Hoping in the future any of these would consider cloth and hair physics as well. I'd presume some people would want long-haired (or long-clothed) characters to be processed this way and it would look weird if the hair and clothes are too stiff while the entire body is animating naturally.
Thanks. Yeah, absolutely. Even things like glasses on a character would need to be separately created and added to a model at the moment to give that separation from the face... but it's useful for lots of other cases.
Character Creator and iClone from Reallusion can do some crazily good stuff that would cover a lot of this right now, though... but the software cost and learning curve are considerably higher.
Yes, this is awesome, but it is more scary than awesome. In this video you could have employed 30 people specialized in separate tasks. Now you're going to need only one.
@viktortoleski and it looks like one person cobbled it together. While it's neat, it is still SO limited and not worth replacing anyone. That base mesh is garbage, the lights are baked into the textures, and the tracking is bad because it isn't a real camera, so the points aren't actually accurate. It's very cool that someone can make their own 3D shorts with this, but it's still way far off from replacing most artists.
@@viktortoleski It's never as simple as "new tech->less jobs->more poverty", that's just plain naive.
I mean, the less work is needed to produce something, the more you can make in the same time window and you can sell it for cheaper too! That is awesome if you ask me :D.
Question: Can I use AIAnimation to actually animate a MetaHuman and import these animations into Unreal 5, or is it only for AI-generated pics? Thanks, and great work with this.
This would be useful for artists who provide their own images, rather than taking from others online.
Amazingggggggggggggggggggggggg.
Thankyooooooooou.
3:12 If I'm not mistaken, this isn't actually AI. They say it's AI, but really it's just people quickly approximating a model that looks somewhat similar to the image.
At the very least, there are websites out there that advertise themselves as AI when it's not really AI at all.
The biggest tell is usually if it takes hours to generate; if it's AI, it shouldn't take very long at all. Also, if the topology is nearly perfect (not rough looking), it can often mean it was just generic assets meshed together.
Thank you for such an informative tutorial. I would like to clarify: is it possible to fit 3D objects into a 360° video?
I am making 360° video footage (I have some on my channel) and would like to make an unusual video by compositing into it what I just saw in your lesson, namely animated 3D objects.
Thank you again for such an informative tutorial.
LOVE ✊🏿
Great video, I think AI has its place perhaps as a tool for creating animated storyboards, to get the overall idea/feel of the project early on. And after that, replace those generic looking generated assets with some proper designs as you polish things in a later phase in the art pipeline.
thanks. .. and yep 100% agree. All the various AI tools are intriguing and useful in certain instances and as part of a much broader pipeline involving an array of talented hands-on work.
I would change the lens flare layer to Screen mode :)
i like your method the best
holy F*ck!! A model from an image? Impressive
It is amazing to witness how AI can transform a 2D image into a fully rigged and animated 3D character.
Great
10:37 Some solid stuff... the feet are almost not sliding in space, unlike on many other motion-capture projects :)
I saw this tutorial 5 months ago. I now know how to use Blender pretty well, and use many other AI apps. My question is: after all you did, can you download it as an MP4 to upload to Facebook or YouTube? Thanks, as I look forward to doing this too!
Hey, yeah via Adobe after effects you can add the composition to the ‘render queue’ and save out to an mp4. (Or other video formats)
Good... very good.
Nice job! How did you get such an accurate model out of CSM?
Wow, I just learned how to create and rig a character recently (which takes forever), and just the fact that I can spin up and rig a character with little effort is dope. It can definitely speed up my Blender workflow. I probably won't do the parts from the After Effects portion onward, but dope nonetheless.
Yeah just using Mixamo to get a basic rig for a character is bloomin handy.
So how much time did this "creation" consume in real time versus making it by "hand" -- if typing, pushing a mouse and clicking, and holding a graphics tablet, while already using a computer and software, can be considered hand work or crafting?
Not actually sure, as was testing/learning and then making the tutorial soon after.
Still lots of arguments for modelling the character manually, depending on what you want to depict in the scene, but there are definitely use cases for these tools.
You could model and texture a similar robot using basic modelling techniques comfortably in a day and have a better quality model and rig.
But for speed, and a simple one-button press and walking away, the CSM version is already impressive.
The background scene in Runway ML would take ages to create unless you paid for stock footage or flew to a rain forest.
Once set up, you could expand on the small clip I created using more motion capture, traditional 3D rig animation and various camera cuts.
Looking at the mesh, I had internal screaming. I thought this would be of greater use, but I guess when you want cheap, low-quality characters and assets it could still be useful.
And dude... just use Blender for the animation work rather than After Effects. Set the ground as a shadow catcher and use an HDRI of similar colours and shapes and you're gold. You could even tweak the animation as you go.
Alright so this is the plan
Step 1: Learn the basics of 3d.
Step 2: Learn how to use AI tools.
Step 3: Merge the two .
Excellent... good plan. Plus, there are things coming out which will make it even more possible to make unique content, where some basic 3D skills will be really good to have in your skill set.
It's a great video, but does that mean 3D animation will end, or will demand decrease in the future?
Because that makes me afraid for this career in the future.
The key question is: can I freely use assets created in CSM in commercial projects, e.g. my own game made in Unreal or Unity? I assume that after some tuning in, e.g., Blender I can call them my own assets, with full commercial rights?
Is there a video on rigged animation for Stable Diffusion?
lol I remember I saw your work on Discord channel first
What are the terms and conditions for using the 3d-model in a commercial product, a game for example?
AI generated art is not copyrightable
My image doesn't convert. It just shows "training preview" and nothing else. Do I have to get the paid version for the 3D model?
Very nice! but the shadow is going the wrong way in the end :)
Yep.👍.. could and should have spent more time refining the composite.. but it was already a bloomin long video ... next time..👍
Could you achieve the same with the Adobe suite's 3D tools?
Hi!
Can I use a character IMAGE to create a T-POSED MODEL of it?
This is amazing! But the complexity & amount of work is high for such a small scene... Haven't used Blender before, but can't wait to start learning.
Thank you for this tutorial!
Bruh
This is no work at all; some of us know how to do all of this manually: modeling, unwrapping, texture painting, rigging, and animating. The dude basically took a month of work and reduced it to an hour, and you still think it's too much? Granted, the final result kind of looks like crap, but it's not the worst, and considering that at this speed he could make a full-length movie in less than 6 months by himself, it's pretty insane.
@daniel4647 mindsets like the guy you're replying to are why I'm not worried about being replaced by consumer AI users hahaha
@daniel4647 lmao the result is good for a few hours of work, even for people who have little knowledge of CG.
Good day, I've tried to use Midjourney. How do you install it and make it work? Do you maybe have a tutorial, please? Thanks!
You can use it via Discord. I've covered it in a few videos. I think one of my older ones still holds true... ruclips.net/video/t5Vq4ahmn74/видео.htmlsi=zILv30Fn3mqJduln&t=117
Are there tutorials for doing this same type of AI animation on mobile?
How did you manage to get an image of the front and back of your character in one generation?
I think in this instance it was pure luck. I didn't actually use the rear view for the CSM process. But if you want it, you can try including 'character sheet' in your Midjourney prompt to encourage it to produce multiple views of the same character in its generations.
WOW, this is going to take jobs away from a lot of us in Blender :v
👏👏❤️🔥👌👊
Would we do better with the image-to-3D mesh if we used Nvidia's NeRF directly?
NeRF requires several consistent images from multiple directions, something you can't currently get with AI-generated images.
@@travissmith5994 Thanks. It would be interesting to see if we could get the same images but in rotation by using image weights in order to create a panorama. Might be worth a try.
👏👏👏👏
Wait, so Midjourney can use your picture as a reference?
I have a laptop, but I have a question to ask. Will I need to get a better computer, or do I need to get flash drives or anything? With downloading so many things, could it overload my computer and fill it up, or do I need a better computer? Also, how much does it cost for all these subscriptions you're mentioning? And if I cannot follow or see the pixels, because they are kind of small even with my screen zoomed all the way in, can I get a tutor and have him or her paid for by a Pell Grant? I'm just asking so that I can know how to do this. I'm more of a hands-on person; I hear what you're saying, but hearing it and doing it are two different things, and I feel that I might need that extra help. So what would you recommend for a live tutor?
Can I do this with only open-source Python tools?
When using Midjourney, I noticed that you didn't use your picture as a reference; you didn't send the link in the chat.
Imagine the new fnaf fan games with this
Question: what if you want to give your character a voice, a specific voice? How do I add a voice to my animation?
You can generate some great AI voices using something like ElevenLabs. It's really very good, and you can optionally record your own voice for improved control over emphasis, then swap in one of the AI voices. As for animating lip syncing, there are a few options (I need to explore these more soon). Things like D-ID and HeyGen are one approach. Lalamu Studio has a free lip-sync demo, which is low-res but can work well. Plus there's Sync Labs' new platform.
Can a character pick up a prop, say a rifle, and do the animation of shooting?
Just read, not sure if true, that CSM grants royalty-free rights to everything generated. Any word on this?
Terms: Limited License Grant to CSM. By submitting any Capture to or via the Service, you grant CSM a worldwide, non-exclusive, irrevocable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to: (a) host, store, transfer, reproduce, modify for the purpose of formatting for display, create derivative works of such Captures as authorized in these Terms in order to provide the Service and Models to you, and (b) use your Models and any data we generate from the use of your Captures in order to improve and enhance the Service.
I wonder how AI is impacting the pricing of creating AI generated videos and film?
Why did you actually use --iw when you didn't put the image link in there?
🤦♂... ahhhh!... good spot. Just human error when making the video. Should have dragged the uploaded image down to the prompt (after typing /imagine) and then written out the text prompt. Which I'm sure I did actually do for the generated image I used, but obviously not when recording that part of the tutorial.
@@AIAnimationStudio haha, okay 😁
So how much did all this cost in the end? Seems like an expensive set of toys where you need to pay for every step. Is getting a 3D model with a rig free, at least?
To be honest, there is so much jumping between different software packages, and, as you mentioned, they can be quite expensive. It is simply much faster to do it all by hand. Additionally, I'd love to know what happens when the person who commissioned the work asks for changes :)
Far from a professional result, but I bet it will do a big favor for a lot of indie devs.
5:03 - you mean, 3D modelling, sculpting, animation, drawing, 2D animation, physics simulation, video editing software? ;-)
ha ha... yeah all of that too.
A good artist could do that quality in 5 minutes.
In the end result the robot only moved two steps, whereas your movement in Deepmotion was quite a bit longer. Why is that?
Simply because my Deepmotion output was a bit poor, largely due to my poor input video, messy scene and poor lighting, which didn't give Deepmotion the best input to work with. So I'd simply cherry-picked that part of the motion for the purposes of the tutorial.
In hindsight, had I known the video was going to get more than a few thousand views I would have spent a bit more time filming. 😆. Plus more time polishing the finished composition/shadow direction. etc etc... and maybe done a few different shots to tell a short narrative story. (Which was the goal at the time, before I get distracted by the next shiny AI process.)
You are no longer needed. -The Machines
Ah beautiful. This is gonna put film studios out of business and I'm here for it.
Unlikely, it'll be another tool that VFX artists in film studios use. It still needs human input.
@@Thefan Yes, VFX artists, far fewer of them with far less training. What a horrible argument. My Lord, I'm so tired of morons using this "it's just another tool in the tool belt" argument.