I've been using traditional compositing and rotoscoping for close to 20 years, and this new workflow just blew my mind. Thank you, brother, and bless you
Same, I feel stunned 😂
Your tutorials have moved all the way to the top of my tier list. Good job, bro.
'blew my mind' after 20 years of traditional compositing. It's not about replacing Hollywood's high-end VFX, but about democratizing access to quality visuals for indie creators and RUclips producers like myself
Yes, these workflows are amazing. I've tried running them with ComfyUI on MimicPC and the results didn't disappoint!
That's just plain false lol. We already use them as part of bigger, more complex comps. The main difference is that we train these models on custom datasets, sometimes even per show...
Absolutely.
As for replacing Hollywood's high-end VFX, I believe it's a matter of time, a short time.
@@Daniel_Bettega_52 I don't think so. Someone should use the new AI tools, and they will do it better than people who are not familiar with compositing.
This is extremely impressive. You have found a method to put a subject into any scene and make them look native. The applications are much bigger for video than for photos, since changing the lighting of a photo can be sidestepped by just generating a new AI photo or using generative fill, etc. For video, though, this is a game changer.
You are a true innovator. I bet your failed workflows have more genius in them than most people's working ones!🤣
A lot of crystallized thought and effort in your videos. You are really making use of AI. The AI community surely values your work, Mick.
This is a bit beyond what I'm comfortable attempting, but it's refreshing to see a young tutorial creator on RUclips who a) really knows his shit and b) is innovating and experimenting, not regurgitating the same basic info.
I usually just watch your videos as eye-openers and I rarely comment, but this time I really want to try this out. Thanks so much for this video.
Bro we're getting there!
Making history! Great workflow, brother
This is awesome my man, you’re a killer in this space!!!!
Absolutely amazing. I've really loved your videos on my AI + filmmaking journey, thanks!
My jaw hits the floor
Thank you for this knowledge
This is a brilliant workflow
Your videos are so exciting and very easy to follow! If you are a VFX artist or supervisor with a team that is evolving frequently, THESE are the solutions for lower budgets and shorter turnarounds... Thanks for all of your hard work, Micky! You are trusted and valued! 🫡
Thank you so much for these videos and sharing the workflows. You are Epic
I had to add: my favourite channel for Comfy/AI animation
This is just insane brother what????
Well done
Amazing work! Thank you for the free workflow!
Thanks!
This was the densest video I've ever watched. Thanks bro, great job.
Man, I love you! Your workflows are the best 🤩
I appreciate you for co-creating a magnificent future! 🙏
Regenerating the edges with generative AI is a stroke of genius.
There's a SAM2 model which natively takes video input. It can make temporally stable masks with much better control (it can take positive and negative points as well as prompts as input), and it's much faster too; I'd recommend you check it out!
What’s it called?
Use Track-Anything if you have an Nvidia GPU with CUDA cores.
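For anyone who wants to wire this up themselves, here's a rough sketch of driving the SAM2 video predictor with point clicks, based on the facebookresearch/sam2 repo. The config/checkpoint filenames and the frames folder are placeholders; adjust them to whatever you actually downloaded:
```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Placeholder paths -- point these at the config/checkpoint you installed
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml", "checkpoints/sam2.1_hiera_large.pt"
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="frames/")  # folder of JPEG frames

    # One positive click (label 1) on the subject in frame 0; add label-0
    # points to push the mask away from background regions you want excluded
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[480, 270]], dtype=np.float32),  # (x, y) in pixels
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the prompt through the whole clip for a temporally stable matte
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        mattes = (mask_logits > 0.0).cpu().numpy()  # one boolean matte per object
        # write mattes out here as your roto masks
```
Note that vanilla SAM2 takes points and boxes, not text; for text prompts you'd need a grounding wrapper on top.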
Crazy, thank you so much! You are right on time with this video 💯
Bravo! It's like if Corridor Crew was one person.
You're so good, bro! 🙏 I wish I could do this!
This is actually super amazing 😮
Like, this would take so many hours if not days to do. Holy cow.
Pitz's workflows and workarounds are slept on.
Really impressive workflow, I'll definitely try it! Thanks for sharing.
The power of Hollywood has come to the bedroom hobbyist!
*the greed of capitalism has killed Art & Intention
I haven't even gotten into the video yet and I've peed myself in excitement from the intro!! KUDOS! 😍
Simply brilliant, thanks so much!
Thanks a lot for sharing. Really precious content showing how to use advanced AI features to really make a difference.
You are making the best tutorials!
Thank you so much!
I would love to see an image version of this. A lot of the current image compositing workflows lack things like edge fixing and keeping the person in the image looking like themselves.
crikey, your content keeps blowing me away
Bro, I'm working on a horror series rn. Was going to try and create my own workflow like this after I was done.
You saved me so much time. Thank you, G!
Let's see it please
@@ernesto.iglesias It's called under the black rainbow. Episode 9 just dropped yesterday.
no cap
I actually think the whole process would be smoother on MimicPC, using ComfyUI to go through it.
Great work, hopefully IC Light comes out with an SDXL version.
Wow, you are really talented. Thank you for the workflow :)
God... this guy is on another level... 100
Great video, man. Love from India 🇮🇳
Your work is amazing
It's a great tutorial, thanks for sharing; it has improved our knowledge. Please keep it up.
Amazing work Bro!
Dude!! you are awesome! liked and subscribed!!
I like every video you've made; hoping the best for you!
Maybe you saved my life. I will test this workflow
Sweeet! Nice flow man 🎉
I think we can also use Boris FX or other software for better rotoscoping. Am I wrong?
@@eccentricballad9039 Boris is better for manual rotoscoping, but it's still not very good at automatically rotoing out subjects perfectly. MatteAssistML is still quite jittery.
This is utterly insane!!
What are the specs needed to run this thing locally? Will 12 GB of VRAM be enough?
Modern tech is like magic, and in the case of the black box called AI, we can't even explain it 100%.
ooh damn this looks amazing
Man, this ComfyUI thing looks wild! It reminds me a lot of BMD's Fusion, but like from another planet. The node-based UI feels familiar, but all its various functions and associated technical jargon are completely incomprehensible to me.
Might be fun to learn, but I already spend way too much time plugging things into other things "to see what happens" in other programs xD
Interesting approach! Great video :)
you are a beast, thank you for all this knowledge
Relighting is an absolute industry game changer.
Thank You, you are a Genius!! Brilliant Work 👍🏾👍🏾
Consider adding a way to match black/white values of the plate and foreground elements prior to relighting.
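Good call. For anyone curious what that looks like in practice, here's a minimal numpy sketch of the idea — the percentile choices and the function name are my own illustration, not anything from the video:
```python
import numpy as np

def match_levels(fg, plate, lo_pct=1.0, hi_pct=99.0):
    """Linearly remap fg (float RGB in [0,1]) so its black/white
    points land on the plate's. Percentiles are a rough stand-in
    for hand-sampled black/white patches."""
    def luma(img):
        # Rec. 709 luminance weights
        return img @ np.array([0.2126, 0.7152, 0.0722], dtype=img.dtype)

    fg_lo, fg_hi = np.percentile(luma(fg), [lo_pct, hi_pct])
    pl_lo, pl_hi = np.percentile(luma(plate), [lo_pct, hi_pct])

    # Map fg_lo -> pl_lo and fg_hi -> pl_hi on every channel
    scale = (pl_hi - pl_lo) / max(fg_hi - fg_lo, 1e-6)
    return np.clip((fg - fg_lo) * scale + pl_lo, 0.0, 1.0)
```
Run it on the foreground before the relight pass; for stubborn shots, sampling a known black patch in each image beats percentiles.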
18:52 How do I "use" your workflow? Where do I get it from? I followed every prior step already.
The next few years are gonna be crazy.
Love it, will try this😊
this video is gold af
New subscriber, so amazing
Fantastic Exploration ~~~~!!!!
You are a big time-saver
Node-based interfaces always look intimidating until you actually just look at each node individually (and learn what it's doing) and take a step back to see the logic of the system.
Wow… that’s fantastic!
Such a wonderful video! 👏
Will you please consider updating the simple workflow to take a subject image (PNG/JPG) and a background (which you already show working) instead of only video?
I've tried it myself via "Load Image" and bypassing some video-related nodes, but something always breaks along the way. (It works fine with video.)
It would be great to have it as a workflow, if you're willing to share of course. Thanks ahead 🙏
Is there a way to do all this work but keep the background out of the render? I mean using this as a rotoscoping tool.
Holy shit man, this is very impressive work...
18:52 I'm confused, where and what is that file? "COMP_SMPL v10", where do I find that? Where did it come from? Help pls.
Your videos are amazing. I would like to ask you a question: is it possible to generate the model sheet of the character without using the prompt? For example, if I draw a character myself in front view, can it create the rest of the model?
amazing workflow
This makes me wonder if I can save a lot of money on my next shoot by renting a big empty room instead of a green screen studio.
Bro, you are the craziest, dude
Thank you very much. Honestly, I didn't even expect to be able to get this setup up and running so quickly. The only problem is the blurred face. Could you please tell me in which node I can fix this?
mind blown!
Just seeing this video and I feel like trying it out...
I am entirely new to AI... Where do I start creating video content with AI? What is the very basic workflow to get to image generation and consistent character creation for video content?
I'm new to ComfyUI. Can you make a video explaining Comfy and what system configuration is needed to run ComfyUI smoothly?
Great job!
We need you to teach us how to use this in Google Colab or something like it.
Great. 💪💯💪 Now I just have to learn how to use Comfy 😂😂.
Thank you for sharing.
Unfortunately the build is not working for me.
Error in KSampler; I have not been able to solve it yet.
No solution found on the forums.
Nice work!!! Thanks!
I assume this is all being done locally? What is the maximum number of frames that can be rendered, and will you get consistent results if you have to do this in batches?
That's so cool! Would be awesome in music videos!
Question: if you also shoot in the direction away from the actor, wouldn't that be the plate you need for relighting, correct?
I was first introduced to AI through Stable Diffusion and was against learning another AI program like ComfyUI, but after seeing this, I think you have made the alpha of what I need for my RUclips animation show. Thanks. Take my sub, sir.
Finding you was worth a 2nd night of not sleeping well; now I'll go get comfortable with some Comfy ;-)
Cool as usual! Now what about replacing SD with flux.1?
Where do we download the custom models from then?
So far one of the best tutorials on this... I need to try it in the cloud now... BUT I have a question: what if I wanted to animate the entire video into a cartoon but keep one object (like a table) realistic? How do I do this? Can it be done with this workflow?
Superb video.
great video, thanks man
Do you have any knowledge of continuity management plugins?
I work mostly on long-form animation. Tools for large batch automation would be highly interesting to explore.
Setting up key shots, then using them alongside the animatic to do the framing and FX setup for all shots in that sequence (or to provide exportable settings for individual shots if the scene gets too heavy).
nice. thank you
Hello and thank you very much for this tutorial. I have just installed everything and set up my files, but the background is not rendering in the output. Is there a limit to the image size? I am using a still image in the background and it keeps showing an error on the image input of the RepeatImageBatch node.
mind blown. Is this compatible with Mac?
Thanks. Can this workflow also be done in Flux?