I've been using AE every day for over a decade and this is the first time I've seen someone connect the displacement map H and V values to the camera X and Y. Wow. Great technique!
Cool, I've had a few people say that's one of their big takeaways from the video. I was admittedly pretty pleased when I first tried it and it worked.
In this other video, I use the free Displacer Pro plugin for After Effects, which works a little differently, so you can link it up to a camera created using the Camera Tracker in After Effects (which uses orientation). Whilst it might not work in every instance, it's another handy technique.
ruclips.net/video/c3L7O-0gRiQ/видео.html
The simple but handy 'layer stalker' effect used in that video also opens up lots of ideas I'd not considered before in After Effects.
ruclips.net/user/shortsP6EzZmdq_ac?feature=share
Thanks for all the positive comments and to everyone who has already subscribed. Honestly, I'm a little bit shocked: I checked last night and the video had 150 views after 3 days... then this morning it was at 3,000. I'm working on a 2nd video, a deeper dive into some more AI tools (with a good dose of traditional digital animation techniques). I've also added some high-res character/scene images you can download for free to try out the process in this video yourself. aianimation.com/ai-animation-tutorials Cheers!!
Edit: Sheesh, we've crossed the 100k views mark in under 2 weeks. 🤷♀
Another thing: INO it's better to remove the character and add the green screen before you submit to D-ID; it's cheaper and faster, and you get better results.
@@AIBizarroTheater I've not come across INO and Google isn't helping. Care to share a link?
@@AIAnimationStudio Sorry, I meant 'in my opinion'. What I do is: create the image in Midjourney, separate the character from the background in Photoshop and add a green background, import the image to D-ID, then export. Yours looks good too, but this is way faster and doesn't require Runway. Loved the tutorial :)
@@AIBizarroTheater 😅... oh.. my bad... 🤦♂. Cool, and yup 100% skip the RunwayML step and separate the character in Photoshop or similar. Think I got carried away trying to use more AI tools. 👍
@@AIAnimationStudio I did it like that too until I discovered D-ID works with a green background as well. Anyway, come have a look at my work if you have time. I'm new to After Effects but have been learning quickly. Would love to hear your thoughts :)
From a guy who has no idea how to use Adobe Premiere, this process looks extremely hard and time-consuming, especially given the tech limitations. But thank you for sharing this detailed workflow, much appreciated.
The amount of money and time you have to spend just to create this short, few-second video is crazy.
Thanks... yeah, it does take a good bit of effort to create a more involved scene. For a simpler version, you could just use a wide image of the scene with your character, send that to D-ID and use that output directly, but you'd miss out on the 3D depth effect. Then have a look at something like LeiaPix to add some extra motion relatively easily (*I've not tried it out myself).
Crazy how in 23 minutes you could inspire so many possibilities for how I work, potentially forever. Thank you, sir.
Arrr... cool. Thanks very much.
As good as the AI stuff in this is, the most surprising takeaway for me is using a layer as a source for Optical Flares. I had no idea you could do that - so thank you!
Love it because you even explained things in depth, e.g. --ar 16:9 for a landscape image rather than square. The way you speak is so soothing and easy to listen to. Subscribed as soon as I saw the thumbnail.
With so much AI hype, you stand out in the crowd. Keep it coming, brother!
I think this is the first video on RUclips that combines a variety of AI tools with traditional design tools to create an amazing result! Keep up the good work; looking forward to seeing how we could all create longer and wider scenes without limits in the future.
Thanks.
It's interesting to see how we can all utilise the different AI tools to create something that isn't completely uncontrollable, still has a crisp finish, and is reasonably efficient to make. Lots to learn, and different workflows to try out.
The one I'm working on now should be another interesting mix of AI and digital animation techniques, though I might try and make more of a story with it first.
@@AIAnimationStudio looking forward to it mate
I've been animating AI images for about 40 hours total. It's so fun and satisfying to see them come to life. It's true that the AI images typically need a few touch-ups and a color grade, but the end results are beautiful. I made a cyborg enchanted SS Blue animated wallpaper that looks insanely cool.
I'm def going to try some of the tricks in this vid! Thank you :)
Great to hear. The various AI tools are opening up a lot of creative options, and when combined with some more traditional animation/media skills the outputs can be great! Good luck.
Stunning. Sending this to someone who does this sort of thing so he can see how to add the effects and animated background
Great, glad you liked it. Good luck with your project.
I'm actually just starting a project of mine and you have no idea how much this helps. Thank you sir :)
The displacement map trick was *explodes* mind-blowing.
The AI kid looks like you as a kid. Also, I'm glad I can rewatch this tutorial; it has so much good information.
🤣.... errrrm... thanks 🤷♂
That was incredible. The depth pass and displacement map technique absolutely blew me away!
Thanks. Loving the content on your channel too. 👍
I was looking for this everywhere but couldn't find it until I found this video. Thx so much 🙂 this is incredible 💯 best I have seen so far
Wow, thank you. Quite a lot of commands going on there for a rusty old retired person, but it's an extremely cool video and very inspiring; things sure have changed. Subbed.
Thanks @gilldanier4129. Feel like a rusty old person here too. Hopefully, the 'version 2' video I'm working on for this will be a 'little' bit more straightforward. But yep, it's all changing quite rapidly at the moment. For example, DALL-E 3 for image generation went live this week (currently available via Bing Image Creator), which is providing some good results and is a free alternative to Midjourney. (*Not yet as good or as controllable, but it is another option).
@@AIAnimationStudio Thank you, I will check out DALL-E 3.
I'm very new to all of this. This tutorial was absolutely jammed with info. Great work!
13:02 Linking the displacement map to a camera and separating two compositions into different windows were eye-openers for me.
At the right time, we will have a conversation, AI Animation. I promise you will not be disappointed 😊
Wow, blew me away - subscribed for sure. A lot more techie than my knowledge, but I am willing to learn. Great video.
Nice work. One suggestion is to isolate the character and put them on a green BG before D-ID. That way you can easily key it out without Runway artefacts.
Good tip.
I definitely recommend that approach to save the unnecessary Runway step. 👍
And also upscale 2x before you use D-ID. You receive an mp4 :) it will be easier to key with more data :)
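If you're doing that 2x upscale by hand, here's a quick sketch with Pillow (a dedicated AI upscaler like Topaz will preserve more detail, but this shows the idea - the filenames are just placeholders):

```python
# Quick 2x upscale of the green-screened character before sending it to D-ID.
# Assumes `pip install pillow`; the filenames are placeholders.
from PIL import Image

img = Image.open("character_greenscreen.png")
upscaled = img.resize((img.width * 2, img.height * 2), Image.Resampling.LANCZOS)
upscaled.save("character_greenscreen_2x.png")
```

LANCZOS is just a decent general-purpose resampling filter; D-ID will still return an mp4, but you'll have more pixels to pull a cleaner key from.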
This video deserves more than a like or subscribe. Thank you!
Encore... thank God I found this channel... I want to explore and learn more... keep it up, sir.
Thanks very much. A new 1-hour video using Wonder Studio / Blender / Midjourney / Runway ML / Topaz AI / ChatGPT... and a lot of After Effects should be live later this evening.
Great work, high quality indeed, we need more tutorials from you. Thank you.
4:08 You can leave it blank and press Generate; it will fill in the background.
That's amazing. Thank you for sharing. A little intimidating, but I'm convinced I could learn how to do this... with time and patience... lots of it...
This skill kicks ass, bravo! There will always be a place for the human mind in the creative process, even when AI learns how to reduce this rather complicated workflow to a single prompt. I still hope there will )))
Love this video!!! Amazing work. I can't wait for the next one.
Thanks so much. A few more tutorials are now live.
That's a huge talent. My easy hack would be a very detailed prompt in Midjourney (depth of field and all), then DALL-E, and of course ElevenLabs, and I'd sort the camera movement with FCPX and/or CapCut for the flare and lighting. Thanks for the video, it's really nice.
Great tutorial, workflow and final result... Thank you!
Thank you for your video, it's exactly the kind of animation I would like to make. Perfect for a beginner like me who is looking for answers; can't wait to try your method.
Looks incredible, going to play with this for my next AI powered mockumentary
The little hi-tech hacker intro is so cute! 😊
Cheers!! I need to do some more with that guy.
Amazing! Thanks for taking the time to create and share this great tutorial with us.
LeiaPix creates depth maps from any image, with editing capabilities for adding or reducing layers of depth in detail, such as adjustments to depth levels, hardness, or opacity. The stabilized coherence in Kaiber and Decoherence are also noteworthy alternatives.
Great... I need to check out LeiaPix a lot more than I have. I've also been looking at some higher-quality depth-pass creation techniques using a Google Colab approach. 👍
top shelf resources and quick composite.
Great breakdown, thanks for sharing your steps. I think the Runway step is probably unnecessary; I green-screen the character in Photoshop and load it into D-ID. Alternatively, we can use AE's Roto Brush.
Thanks very much. Yeah in hindsight, I should have just separated him in Photoshop in this run-through. Check out the V2 video for this and a much easier way for the background too. ruclips.net/video/7u0FYVPQ5rc/видео.html
Great tutorial Jon!
Congratulations on your creation.
This is wonderful! Absolutely wonderful! I've been working very hard to learn different techniques and expand my animating skills. This is exactly what I was looking for. It's very easy to understand and follow and you don't zoom through it and skip over steps. Thank you so much!
Thanks so much. Likewise, I've been jumping through a wide range of new AI tools to find the ones shown in the video (and others) that I like and that can be combined with traditional techniques to create something a bit more controllable.
What sort of AI tools do you use, sir? Are you working alone, or do you have a team with similar interests and goals?
4:17 I simply type 'delete' or 'remove' in the prompt, which does the same thing.
Thank you for sharing .. so important - Love it! 🙂
Bravo! I am amazed that this can be done with such ease!
😅 yeah, that was super easy... for you maybe. For a person like me who has never had anything to do with animation, it's magic. So many steps and different websites (subscription plans) just for a 5-second animated picture 🙈 At least I now know how hard it is to make a professional animated movie.
Mate! Whoah! Much respect for sharing your workflow. I have long been using AE etc. and was wondering for a while how best to get into AI. You have kicked open the door to a whole new world. Cheers!
Thanks so much. Glad you liked the video. Coming from a world of hand-drawn and digital animation for the past 20 years... I was also (after the initial panic) keen to find ways to integrate some of the new AI tools into my day-to-day work. Still can't beat the control of a well-crafted AE project for exact client work, but this AI stuff does open up a lot of new ideas and possibilities.
This is amazing!! Keep creating! 💖🫶🏻
14:21 I understand you just moved the layer. Thank you
Wow this is really cool! Happy to have found this video!
At one point everything sounded so foreign to me. I'm sure after I watch it a couple more times I'll get it. Thank you for this!
Thanks for commenting. Hopefully, you'll get there in the end. The After Effects bit can deffo seem a bit overwhelming if you're not familiar with the software. Good luck.
Glad this popped up... you got mad skills.
😀 Thanks... appreciate the 'mad skills' comment.
Thank you! Very well explained AI course. I do have a question: do you know any other site that can do the 3D animation model? For some reason MiDaS is not working for me.
Thanks. You can use ZoeDepth to achieve the same thing as MiDaS. See here:
huggingface.co/spaces/shariqfarooq/ZoeDepth
It also has the option to create a GLB 3D file, which is supported in recent updates to After Effects.
I did an update to this video here, where I showed an approach using that. ruclips.net/video/7u0FYVPQ5rc/видео.html
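If anyone would rather generate the depth pass locally instead of through the Hugging Face space, here's a minimal Python sketch of the same idea using the Hugging Face transformers depth-estimation pipeline (a MiDaS-style DPT model - the model id and filenames below are just example choices, not the exact setup from the video):

```python
# Minimal sketch: generate a greyscale depth pass from a still image.
# Assumes `pip install transformers torch pillow`; the model id is an example.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("character_scene.png")      # your Midjourney still
result = depth_estimator(image)

# The pipeline returns a PIL image of the predicted depth, which you can use
# as the depth/displacement map layer in After Effects.
result["depth"].save("character_scene_depth.png")
```

Same principle as the MiDaS/ZoeDepth spaces - the output is just a greyscale image where brighter generally means nearer.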
@@AIAnimationStudio Thank you so much for replying and for the suggestions. I will check them out soon; keep up the great work with your page!!
Is it Jon or John? Well, I’m glad I found you Johnny!
😂😂 Jon ... Like Don Draper from Mad Men but with a J
I'd love to see some tutorials that include NVIDIA's Audio2Face, which does real-time audio lip sync from a WAV file.
That was amazing. Thank you so much.
I think I will use this once you don't have to keep jumping to different sites... when it's all in one program it will be impressive and usable.
You can make very good depth maps with Stable Diffusion and ControlNet as well.
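For anyone who wants to try that route, here's a rough sketch using the controlnet_aux preprocessor package (its MiDaS annotator is what ControlNet's depth conditioning typically relies on - the package usage and the "lllyasviel/Annotators" repo id are assumptions, so check against the current docs):

```python
# Rough sketch: create a ControlNet-style depth map with the MiDaS annotator.
# Assumes `pip install controlnet_aux pillow`; the weights repo id is an assumption.
from controlnet_aux import MidasDetector
from PIL import Image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

image = Image.open("character_scene.png")
depth_map = midas(image)     # returns a PIL image of the estimated depth
depth_map.save("character_scene_controlnet_depth.png")
```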
Great tut, clear and to the point… many thanks!
Thanks for commenting. Good motivation to make more. 👍
Cool stuff! Just wondering why you didn't mask out the character as an image before applying the animation? It would save you the trouble of messing around with Keylight. Or am I missing something here?
Yep.. not sure why I didn't either... I think I was too keen to show an extra AI tool with the background removal from RunwayML. But yep.. 100% just mask out the character before sending to D-ID. 👍
I don't know editing, so I got lost in some parts... but wow, it's amazing... respect to people studying editing/multimedia and all that stuff... It's so cool 🤙🏾
Pure simple beauty... 🔥
That was great bro looking forward to learn more from you new sub 🎉
Awesome tutorial video; I am gonna fail to get this done on my PC.
The world will remember that I subscribed to your channel when you had 45 subscribers - I am your 45th subscriber 😁😁
Why am I saying that? Because I'm only watching the third video on your channel and I loved it. More power to you.
Keep making videos like this, because I am planning to make an AI content channel.
@Enjoyablee ... So glad you liked the video and thanks for being the 45th subscriber... 12 hours later, now up to 336. Which is a bit bonkers.
Great video! Concise and to the point. Keep it up
Cheers! I'll definitely try and get into the habit of making more over the coming weeks and months.
Keep up with the great content. 🤘This was awesome!
Really really awesome video.
Yes, this is good. I have scoured RUclips and the web to find something like this, and today this popped up. Naturally, I subscribed immediately. A few weeks and you should have hordes of followers. A quick question, if you don't mind: what GPU do you run Firefly on?
I'm just running an M1 Mac for this tutorial. I keep looking at the dusty PC with an old Nvidia 2070 in it, and wondering about trying out a local Stable Diffusion setup... but I've not fired it up yet.
Really good walkthrough, and I was happy to see a toolchain that I already have subscriptions for!!
Are you making use of Gen-2 image-to-video on Runway ML?
Thanks, glad you enjoyed the walkthrough. It's also good to hear you're already using a similar stack of tools.
Yeah, I had a 'play' with the new image-only prompt on Gen-2 last night. It's great to be able to bring an image to life and have it match your original input, though it's definitely lacking in control. But it's certainly one I'm going to explore more this week, as there's obvious potential for telling stories quickly, even if it's a bit hit and miss at times. I think it'll be even more useful when you can use the image prompt (with a text prompt to guide what happens) whilst still keeping the visuals in line with the image prompt for characters and scene composition - a bit more like the control you have with Gen-1 video-to-video.
At the moment, a text prompt (with an image reference) in Gen-2 creates interesting, at times great, but mostly uncontrollable outcomes, though the preview frame is helpful. I'll probably do the next video on RunwayML soon.
This is super helpful! I've been looking for a tutorial just like this! Thank you!! And your website is awesome! I will sign up after I create my portfolio 😃
Great Tutorial! Love your flow and style. Thank you
omg, this is so beautiful 🙂
Fantastic work mate. Well done bouncing back from the system crash too! Always the way isn’t it! 😂
This video walks through a complete AI animation method for animating images made with MidJourney. It's an excellent tool for learning how to add life to your static photographs.
D-ID is a paid tool, I guess? You have made absolutely fire content! )
Thanks. Yeah, unfortunately it's a paid bit of software; there's a free trial, then it starts from $5 a month, depending on the amount of content you want to create. I did look at some other similar options and the prices are all pretty similar.
@@AIAnimationStudio Thank you for the answer + your analysis of those similar products) Thank you)
Great tutorial! I was able to follow along and create my own character. Just curious why you didn't use Neural Filters in PS to create the depth map? Is Hugging Face better?
Thanks very much. Ah... I didn't use PS Neural Filters quite honestly because I didn't know they were a thing; must have missed the Adobe memo on that one. Just had a quick look and they look great. The Hugging Face option was quick and free at least, but I'll deffo try out the PS approach to see if it gives higher-quality detail. Thanks for the tip.
That's incredible. Can you do a video about bringing AI images to life that aren't specific to the text, but are more like looping background videos where lots of little movements happen? Like those lo-fi videos on RUclips.
Good idea... I'll put it on the list.👍
Awesome stuff! Love the result here! By the way, doesn't AE have some sort of "puppet warp" built in? I imagine this could be utilized on the boy when he says the word "beach" by doing a subtle little head tilt.
Yeah absolutely. 👍.
I mention it briefly at 21:55 in the video. I actually used the puppet tool, on the version at the beginning of the video, to add a little extra head tilt and 'very' subtle shoulder movement.
Oh indeed, look at that! Thanks for putting up with me and showing me the way in the feedback :)
Thanks for sparking my imagination!
Glad to hear it helped with the spark. Good luck with your project, whatever you create.
I got it, my own epic trailer of Stronghold and Castle Town, lovely ❤
I LOVE IT, a really inspiring way to start animating with AI ♥ Thanks!
Amazing work!
Thank you!!! This was exactly what I was looking for! Any tips on how I can achieve this in DaVinci Resolve?
Hey, thanks for the comment. I'm afraid I'm not too up to speed with Resolve. I imagine there's a similar effect for working with a displacement map, but where and how I'm not sure. You could also use Blender and create a similar fake 3D depth with a displacement map there too.
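If you do go the Blender route, here's a minimal sketch of the same fake-3D idea (run from Blender's scripting tab; the depth map path is a placeholder, and you'd still need to texture or project the original still onto the plane):

```python
import bpy

# Minimal sketch of the fake-3D depth trick in Blender: a subdivided plane
# displaced by the greyscale depth map. The file path is a placeholder.
depth_path = "/path/to/character_scene_depth.png"

# Add a plane and give it enough geometry for the displacement to work with.
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object
subdiv = plane.modifiers.new(name="Subdiv", type='SUBSURF')
subdiv.subdivision_type = 'SIMPLE'
subdiv.levels = 6
subdiv.render_levels = 6

# Load the depth map as an image texture.
depth_tex = bpy.data.textures.new("DepthTex", type='IMAGE')
depth_tex.image = bpy.data.images.load(depth_path)

# Displace the plane along its normals using the depth map.
displace = plane.modifiers.new(name="Displace", type='DISPLACE')
displace.texture = depth_tex
displace.texture_coords = 'UV'
displace.strength = 0.3   # how much depth exaggeration you want
```

A gentle camera move over the displaced plane then gives you much the same parallax as the After Effects displacement setup.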
I've subscribed. I got lost after the Runway part :)
Thanks for subscribing... good feedback... sorry if the After Effects bit was a bit too quick or too techy.
This is brilliant. Thank you so much!
Actually you can make the depth map right in Photoshop. I think it's somewhere in the new AI effects panel.
Yeah absolutely, didn't realise when making this video. But it's going in the next video. Using the Neural Filters in Photoshop. 👍
Great! Thank you very much.. I just learned cool new stuff.. 😎✌🏻
Thank you! This really helped me out.
Cool, great to hear.
Great tutorial! Keep it coming. Ty much.
incredible.. thanks, learned a lot of things along the way😮
Impressive AI animation workflow! Given your expertise in AI, ZeroBot AI, the internet's first verbal chatbot, might catch your interest. It would be great to see your perspective on it in an upcoming video. Keep the innovative content coming!
not enough videos on youtube about zerobot yet!
It's really amazing, superb!
Absolutely fantastic
Great tutorial, bro, thanks - very good work.
This is wonderful! Absolutely magnifique :D
Great Thanks for the video. Subscribed!!!!
Incredible video, appreciate you sharing 🙇♂️
Great work. thanks for sharing!
Completely mindblowing, thank you!
A very interesting video. My After Effects skills are not up to the tutorial, but if I get time, I'll try and learn.
Wow... thank you for your video! Learned something new! Subscribed ❤
This is very impressive. Thanks for the share