After close to 20 years of traditional compositing and rotoscoping, this new workflow just blew my mind. Thank you brother, and bless you
Same, I feel stunned 😂
Your tutorials have moved all the way to the top of my tutorial tiers. Good job bro.
This is extremely impressive. You have found a method to put a subject into any scene and make them look native. The applications are much bigger for video than for photos, since relighting a photo can be circumvented by just generating a new AI photo or using generative fill etc., yet for video this is a game changer
You are a true innovator. I bet your failed workflows have more genius in them than most people's working ones!🤣
I usually just watch your videos for the eye-openers and rarely comment, but this time I really want to try this out. Thanks so much for this video.
'blew my mind' after 20 years of traditional compositing. It's not about replacing Hollywood's high-end VFX, but about democratizing access to quality visuals for indie creators and RUclips producers like myself
Yes, these amazing workflows are really great. I've tried running it with ComfyUI on MimicPC and the results didn't disappoint!
That's just plain false lol. We already use them as part of bigger, more complex comps. The main difference is that we train these models on custom datasets, sometimes even per show...
Absolutely.
As for replacing Hollywood's high-end VFX, I believe it's a matter of time, a short time.
@@Daniel_Bettega_52 I don't think so. Someone who uses the new AI tools will do it better than people who are not familiar with compositing
This is a bit beyond what I'm comfortable attempting, but it's refreshing to see a young tutorial creator on RUclips who a) really knows his shit and b) is innovating and experimenting, not regurgitating the same basic info.
My jaw hits the floor
Thank you for this knowledge
This is brilliant workflow
Bro we're getting there!
absolutely amazing, really have loved your videos on my ai+filmmaking journey thanks!
Creating history! Great workflow brother
This is awesome my man, you’re a killer in this space!!!!
Thank you so much for these videos and sharing the workflows. You are Epic
This is just insane brother what????
Well done
A lot of crystallized thought and effort in your videos. You are really making use of AI. The AI community surely values your work, Mick
this was the densest video I ever watched. Thanks bro, great job
Crazy, thank you so much! You are just on time with this video 💯
Man I love you ! Your workflows are the best 🤩
I appreciate you for co-creating a magnificent future! 🙏
Thanks!
Thanks a lot for sharing. Really precious content showing how to use advanced AI features to really make a difference.
Amazing work! Thank you for the free workflow!
I would love to see an image version of this. A lot of the current image compositing workflows lack things like edge fixing and keeping the person in the image looking like themselves.
Really impressive workflow, I'll definitely try it! Thanks for sharing.
You're so good bro! 🙏 I wish I could do this!
It's a great tutorial, thanks for sharing; it has really improved our knowledge. Please keep it up
Regenerating the edges with generative AI is a stroke of genius.
Your videos are so exciting and very easy to follow! If you are a VFX artist or supervisor with a team that is evolving frequently, THESE are the solutions for lower budgets and shorter turnarounds... Thanks for all of your hard work Micky! You are trusted and valued! 🫡
This is complete Mumpitz ("nonsense") ... I'll give it a try. Ty
wow you are really talented and thank you for the workflow :)
I like every video you've made. Hoping the best for you
Bravo! It's like if Corridor Crew was one person
I haven't even gotten into the video yet and have peed myself in excitement from the intro!! KUDOS! 😍
Bro, I'm working on a horror series rn. Was going to try and create my own workflow like this after I was done.
You saved me so much time. Thank you, G!
Let's see it please
@@ernesto.iglesias It's called under the black rainbow. Episode 9 just dropped yesterday.
no cap
I actually think the whole process would be smoother on MimicPC, using ComfyUI to go through it
I had to add: my favourite channel for Comfy/AI animation
Simply brilliant, thanks so much!
Man, this ComfyUI thing looks wild! It reminds me a lot of BMD's Fusion, but like from another planet. The node-based UI feels familiar, but all its various functions and associated technical jargon are completely incomprehensible to me.
Might be fun to learn, but I already spend way too much time plugging things into other things "to see what happens" in other programs xD
Dude!! you are awesome! liked and subscribed!!
there's a SAM2 model which natively takes video input. It can make temporally stable masks with much better control (can take positive and negative points as well as prompts as input), and it's much faster too, I'd recommend you check it out!
What’s it called?
use track-anything if you have nvidia with cuda core
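For anyone wondering what the SAM2 suggestion above looks like in practice: here is a minimal sketch of driving the SAM2 video predictor from Python, assuming the facebookresearch/sam2 package and a downloaded checkpoint; the config/checkpoint paths and the click coordinates are placeholders to adapt.

```python
# Minimal sketch of SAM2 video segmentation, assuming the
# facebookresearch/sam2 package. Paths and coordinates are placeholders.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",  # model config (placeholder)
    "checkpoints/sam2.1_hiera_large.pt",   # checkpoint (placeholder)
)

with torch.inference_mode():
    # init_state pre-loads the clip (a directory of JPEG frames)
    state = predictor.init_state(video_path="frames/")

    # One positive click (label 1) on the subject in frame 0;
    # negative points use label 0, so you can carve out false positives.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[480, 270]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate through the whole clip -> temporally stable masks
    masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()
```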
This is actually super amazing 😮
Like, this would take so many hours to do, if not days. Holy cow
Great work, hopefully IC Light comes out with a SDXL version.
Pitz's workflows and workarounds are slept on
Such a wonderful video! 👏
Will you please consider updating the simple workflow to take a subject image (PNG / JPG) and a background (which you already show how it works) instead of only video?
I've tried it myself via "Load Image" and bypassing some video-related nodes, but something always breaks along the way (it works fine with video).
It would be great to have it as a workflow, if you're willing to share of course. Thanks ahead 🙏
Amazing work Bro!
Great tutorial! Can this be followed for macOS as well?
The power of Hollywood has come to the bedroom hobbyist!
*the greed of capitalism has killed Art & Intention
Maybe you saved my life. I will test this workflow
you are a beast, thank you for all this knowledge
New subscriber, so amazing
crikey, your content keeps blowing me away
Thank You, you are a Genius!! Brilliant Work 👍🏾👍🏾
Great video man . Love from india🇮🇳
Your work is amazing
Interesting approach! Great video :)
God ... this guy is on another level... 100
This is utterly insane!!
What are the specs needed to run this thing locally? Will 12 GB of VRAM be enough?
Love it, will try this😊
Consider adding a way to match black/white values of the plate and foreground elements prior to relighting.
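That's doable with a simple linear levels match before the relight pass. A hedged sketch, not part of the shared workflow: remap the foreground's black/white points onto the plate's, using percentiles so stray pixels don't skew the result.

```python
# Sketch: match the foreground's black/white points to the background
# plate before relighting. Inputs are float32 RGB arrays in [0, 1];
# the percentile choices are assumptions to tune per shot.
import numpy as np

def match_levels(fg: np.ndarray, plate: np.ndarray,
                 lo_pct: float = 0.5, hi_pct: float = 99.5) -> np.ndarray:
    """Linearly remap fg so its shadows/highlights land on the plate's."""
    fg_lo, fg_hi = np.percentile(fg, [lo_pct, hi_pct])
    bg_lo, bg_hi = np.percentile(plate, [lo_pct, hi_pct])
    scale = (bg_hi - bg_lo) / max(fg_hi - fg_lo, 1e-6)
    return np.clip((fg - fg_lo) * scale + bg_lo, 0.0, 1.0)
```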
Thank you very much. Honestly, I didn't even expect to be able to get this setup up and running so quickly. The only problem is the blurred face. Could you please tell me in which node to fix this?
Sweeet! Nice flow man 🎉
I think we can also use Boris FX or other software for better rotoscoping. Am I wrong?
@@eccentricballad9039 Boris is better for manual rotoscoping, but still not very good at automatically rotoing out subjects perfectly. MatteAssistML is still quite jittery
ooh damn this looks amazing
Relighting is an absolute industry game changer.
Cool as usual! Now what about replacing SD with flux.1?
Fantastic Exploration ~~~~!!!!
Question: if you also shoot in the direction facing away from the actor, wouldn't that be the plate you need for the relight, correct?
I'm new to ComfyUI. Can you make a video explaining Comfy and what system configuration is needed to run ComfyUI smoothly?
this video is gold af
So far one of the best tutorials on this.. I need to try it in the cloud now... BUT.. I have a question: what if I wanted to animate the entire video to cartoon but keep one object (like a table) realistic? How do I do this? Can it be done with this workflow?
Holy shit man , this is very impressive work...
mind blown. Is this compatible with Mac?
Assuming this is all done locally, what is the maximum number of frames that can be rendered, and will you get consistent results if you have to do this in batches?
18:52 How do I "use" your workflow? Where do I get it from? I followed every prior step already
Hi, do you have the answer please?
Great. 💪💯💪 Now I just have to learn how to use ComfyUI 😂😂.
Thanks. Can this workflow also be done in Flux?
Your videos are amazing. I would like to ask you a question: is it possible to generate the model sheet of a character without using the prompt? For example, if I draw a character myself in front view, can it create the rest of the model?
Thank you for sharing.
Unfortunately the build is not working for me.
Error in KSampler; I have not been able to solve it yet.
No solution found on the forums.
You are making the best tutorials!
Thank you so much!
Just seeing this video and I feel like trying it out...
That's so cool! Would be awesome in music videos!
amazing workflow
Wow… that’s fantastic!
What do I do to see in RunComfy which node is currently being processed?
Modern tech is like magic and we can't even explain it 100% in the case of the black box called AI.
Nice work!!! Thanks!
You are a big time saver
Do you have any knowledge of continuity management plugins?
I work mostly on long-form animation. Tools for large-batch automation will be highly interesting to explore.
Setting up key shots, then using them alongside the animatic to handle framing and FX setup for all shots in that sequence (or providing exportable settings for individual shots if the scene gets too heavy).
great video, thanks man
Great job!
The next few years are gonna be crazy
I am entirely new to AI... Where do I start with creating video content through AI? What is the very basic workflow to get to image generation and consistent character creation for video content?
Great work man. Love your content. I'm having a bit of trouble running this workflow. Each time I try, an error seems to occur in the IC-Light conditioning node. This is the message I get: Sizes of tensors must match except in dimension 1. Expected size 121 but got size 119 for tensor number 1 in the list.
Has this happened to anyone? Is there any way to solve it?
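A hedged guess at the cause: 121 vs 119 looks like two clips with different frame counts (e.g. the subject video and the background video) reaching the same conditioning node. Capping both video loaders to the same frame count usually avoids it; in code terms, the fix is just trimming both batches to the shorter length.

```python
# Sketch of the likely fix for the 121-vs-119 mismatch: trim two
# ComfyUI-style image batches (shape [frames, H, W, C]) to the same
# frame count before they are combined.
import torch

def trim_to_match(a: torch.Tensor, b: torch.Tensor):
    n = min(a.shape[0], b.shape[0])
    return a[:n], b[:n]
```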
Dude you're awesome lol, yes it all seems a bit overwhelming.
This makes me wonder if I can save a lot of money on my next shoot by renting a big empty room instead of a green screen studio.
Superb video.
Fantastic work! It's very close to something that could be usable for professional work. The only problem I have with it is that the result looks too much like an AI-generated human. It has that typical wax-skin effect. Do you think it would be possible to eliminate it somehow and make the human look realistic?
Is there a way to relight images easily without using Stable Diffusion?
Can we use AI to matchmove the camera movement so it really matches the BG plate?
We need you to teach us how to use this in Google Colab or something like that
Finding you was worth it after a 2nd night of not sleeping well; now I'll go get some comfort with Comfy ;-)
this is awesome