Nice, so many workflows, so little time. You're killing it on Thursdays! Much thanks.
Aye!! Thanks for the great work!
Super good match, Civitai and you. I actually like Civitai a lot more now. I'll have to check out a Twitch stream one day.
Wow, what a compliment! Thanks so much! Come join the Twitch streams, we always have lots of fun on there and the chat is super friendly and knowledgeable!
I'm just starting out in this process. Is there somewhere I can download a folder with all the modules, LoRAs and ControlNets already pre-installed, like a portable version? I tried for 5 hours to get the installations to work and nothing worked. Thanks.
Hello. Is it strictly necessary to put a mask in this workflow?
Hi, I'm facing a strange error: the image input on the VideoCombine node isn't accepting connections from any image node. What could be the issue? Great workflow btw.
I'm facing the same issue after updating ComfyUI
@@Zennu9 I don't have a permanent fix, but cloning the VideoCombine node worked for me. I just had to reconnect the image and filename_prefix inputs.
@@AshChoudhary how did you fix the filename prefix issue?
Failed to validate prompt for output 252:
* (prompt):
- Required input is missing: filename_prefix
The only input options I see are: images, audio, meta_batch, and vae.
nvm, fixed it
Thx for your great work! I'm having an issue: the background doesn't seem to take much inspiration from the "IPAdapter (Background)" section and instead mostly copies the original video despite the SAM mask. What am I doing wrong? thx
You're going to want to mess around with the ControlNet combos. The most creative combo is depth + openpose + controlgif; I'd keep controlgif at 0.4 and not go above that, though. Depending on your original input video, you'll also want to find IP images that share at least some of the same context as the background. At the end of the day, it's going to trace what is already in the source video.
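A minimal sketch of the combo suggested above, just as data, with the controlgif weight capped at 0.4 as recommended. The structure and the depth/openpose strengths are placeholders for illustration, not values taken from the actual workflow.

```python
# Hypothetical sketch: the suggested ControlNet combo as a simple list of settings.
# Only the controlgif cap (0.4) comes from the comment above; the other strengths
# are placeholders you would tune per clip.

CONTROLGIF_MAX_STRENGTH = 0.4

controlnet_stack = [
    {"name": "depth",      "strength": 0.8},   # placeholder strength
    {"name": "openpose",   "strength": 0.8},   # placeholder strength
    {"name": "controlgif", "strength": 0.4},
]

def clamp_controlgif(stack):
    """Keep controlgif at or below the recommended 0.4 weight."""
    for cn in stack:
        if cn["name"] == "controlgif":
            cn["strength"] = min(cn["strength"], CONTROLGIF_MAX_STRENGTH)
    return stack

print(clamp_controlgif(controlnet_stack))
```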
This workflow is sick. Does anybody have a suggestion on how to keep the background unchanged, but with some options for mask blurriness so the mask can blend the Stable Diffusion / AnimateDiff output into the live action? I think that's what QR Code Monster is for, but I'd appreciate any suggestions!
@Civitai, would this work for Pony-based models? (I tried and suffered a bit.)
I had some funky issues trying to generate just one image/frame, and it wasn't producing the images I was expecting from the image I provided.
Without going into too many workflow shenanigans... what are the current model limitations for doing this kind of thing?
i.e. is it only SD1.5, etc.? Without blowing up my brain too xD
This is only compatible with 1.5 and 1.5 LCM models. I'd recommend using the LCM 1.5 LoRA at a strength of 1.0 for non-LCM models and a strength of 0.18 for LCM models 🙏🏽🫶🏽
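A tiny sketch of that strength recommendation, assuming you are switching between a plain SD1.5 checkpoint and an LCM-distilled one. The helper name is hypothetical; only the 1.0 / 0.18 values come from the comment above.

```python
# Hypothetical helper: pick the LCM 1.5 LoRA strength based on the checkpoint type,
# per the recommendation above (1.0 for non-LCM checkpoints, 0.18 for LCM ones).

def lcm_lora_strength(checkpoint_is_lcm: bool) -> float:
    return 0.18 if checkpoint_is_lcm else 1.0

assert lcm_lora_strength(False) == 1.0   # regular SD1.5 checkpoint
assert lcm_lora_strength(True) == 0.18   # LCM-distilled checkpoint
```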
I can't load your workflow inside Comfy. It gives me an error message about ReActorFaceSwap. I've tried to run a fix but I'm getting this message: "ReActor Node for ComfyUI fix failed: This action is not allowed with this security level configuration".
I have the same issue bro, I don't know how to fix it either.
I can get 4-5 sec outputs without any issues on a 4090 and 30 GB of RAM on RunPod. However, when I tried to do a 15 sec video, it straight up kills the process right after the ControlNet processing. Any tips to get it to work on slightly longer videos? I mean, isn't that enough computing power to generate 10-15 second animations? Thanks for the updated workflow btw. You are doing a great service by putting these things together for people who aren't able to sit and figure these out on their own.
I am able to do up to 1000 frames at a time with my 4090, but I'm doing it locally. I'm not entirely sure, but that sounds like it could be on RunPod's side. Sounds like the key difference is local vs. cloud.
I always get an error with the ReActor node. It basically says "Error loading reactor node" regardless of installing it or even running the fix.
In that case, I'd just delete it. I can't remember the last time I used it; I just have it there as a "just in case". But tbh, it's probably not worth the wrestling.
@@civitai So this whole workflow would still work even without the ReActor node, right?
Great job as always bro. Quick question: is there a reason why the output video is always 1 second shorter than the input video? I didn't skip any frames or put a frame cap on it.
Hmmm, I never have that problem so I'm not entirely sure. I'm sorry :/
@@civitai Yea, I googled and couldn't find anything about that. It's just weird and still does that for some reason 🤔 it's always 1 second shorter.
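A back-of-the-envelope check for the "1 second shorter" symptom. The cause isn't confirmed in the thread; the assumption here is that either the frame_rate on the Video Combine node differs from the source fps, or the loader drops a handful of frames, and the arithmetic below shows how either one shaves time off the output.

```python
# Hypothetical numbers: a 10-second source clip at 24 fps.
def duration_seconds(frame_count: int, fps: float) -> float:
    return frame_count / fps

src_frames, src_fps = 240, 24
out_fps = 25  # e.g. Video Combine frame_rate set slightly higher than the source

print(duration_seconds(src_frames, src_fps))   # 10.0 s input
print(duration_seconds(src_frames, out_fps))   # 9.6 s output: same frames, faster playback

# Dropping one second's worth of frames in the loader has the same visible effect:
print(duration_seconds(src_frames - src_fps, src_fps))  # 9.0 s, exactly 1 s shorter
```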
Apply ControlNet Stack
'NoneType' object has no attribute 'copy'
I got this error message, any clue sir?
ControlNets in the proper folders? Hard to tell without seeing what you got going on
The Bypass button is not there. I even updated ComfyUI. Is there anything I'm missing?
There is a little button in the top right corner of each group. Just click it :)
Right Click anywhere on screen to bring up menu prompt. Scroll to RGThree Comfy. Click on Settings.
Scroll down to (Groups) Show Fast Toggles in Group Headers, Select Toggle: Bypass and Show: Always.
Once you do that, then the ByPass Button will appear in top right corner of each group.
Is there a way to only diffuse certain parts of the mask? i.e. only generate on the white and leave the background black?
After you cut out your character, try using a solid black frame in your background ipadapter and prompting for a black background :)
@@civitai Perfect, thanks!
Has anybody gotten "Sampler (Efficient): 'NoneType' object has no attribute 'shape'"? I downloaded the ControlNets, checkpoints and LoRAs like in the video and I get this error. Help.
Ok, I figured out the problem is with the linear controlnet.
made it work, but it doesn't seem to be taking the background I chose
Yuh! (first too! on my bday as well! holla)
Hello, for a few days now I have been claiming the daily Buzz at the reset, but when I claim it, the Buzz never gets added. Before this everything was perfect, but now it isn't anymore. I have seen that the same thing happens to other people. Can you fix it?
Feel free to reach out to us via our support email or in discord 🙏🏽
Please review your workflow; it seems to be broken, the node links are broken.
Great workflow. However, it looks like without a 4080 or 4090 it will take forever just to get a 5sec video output.
Unfortunately it is not low-VRAM friendly. This workflow will take at least 12-15 GB of VRAM to run because of the mask and the IPAdapters.
@@civitai Thank you for reminding me. Is there another version for 8 GB?
Comfy is bad as always. Videos don't show the selected parts, frames, framerates...
Nodes are not properly connected. FaceSwap, as always, is broken. Videos can't be created with the included node. Videos can't be played in any format other than webm.
How long have I been trying to use this dumpster fire of a UI now? 2 years? 2 years and still very similar problems to when I first had the displeasure of trying it.
And btw, I switched PCs in the meantime, so that is not the problem!
You tried and I thank you for that :)
One time it even worked very well until the "updates" arrived ^^ And it seemed to work today too,
but only after lots of disconnecting and swapping to normal nodes, and without the video settings working.
One day they will release Sora and maybe at that point there will be a good UI ^^ Maybe... but likely not xD
Can you send your fixed workflow please? I am having a headache.
@@nirdeshshrestha9056 Have you received the link? I don't know how YT handles putting links in the comments.
@@nirdeshshrestha9056 But don't expect too much :)
I had to do a quick fix on the workflow, and about half an hour was spent on getting the link to you somehow ^^
If you have questions, ask away... and don't forget that you need all the ControlNet models (btw: play around with them; try switching one to OpenPose, that's usually better for me) and the IPAdapter + PyTorch model.
jboogx has a setup guide somewhere, but I don't know where.
@@nirdeshshrestha9056 Ok... I tried linking it here but NO CHANCE! Not even giving you my mail. AND THIS MESSAGE WAS DELETED TOO!!!!
So you have to go to the link in the video description and look there for my comment under his workflow.
My name there is FrozenGT.
I hope that worked now xD... I have spent over an hour on this now D:
" most important part of what we do....CREATING!" 🥹🥹
Still 100% lost. I use Flux and this 100% doesn't help me out at all.
I just started ComfyUI like 10 days ago and already know that you MUST use only certain models with certain IPAdapters with certain UNets, etc.
I just need someone to show me ALL FLUX. This is SD1.5, but nobody in the world has done this with just Flux yet. Kinda annoying that I now have to swap everything to lame SD models, ugh.
Flux does not have a working motion model yet, so there is no way to do clean vid2vid style transfers with it just yet. We are sure there will be one, but it has not been released yet. This workflow is only for SD1.5. We also have a tutorial from a few weeks back with Inner Reflections showing how to use his SDXL workflow.
@@civitai can..can i just give u buzz so you can train one?
@@jonrich9675 Unfortunately, it does not work that way. The issue is that Flux is too new (a few months old) while SD has been established for several years now, and thus has far more tools and knowledge built up among the community, researchers, and businesses. Civitai does not make any of this tech; they merely act as a host for models. Models like Flux cost hundreds of millions to produce. As for merging/training checkpoints, as seen with SD, no one has figured out how to do this with Flux yet (currently, checkpoint merges with Flux are something else entirely: they're highly prone to degraded visuals, poor prompt adherence, major issues with LoRAs, and frequent crashes for most users... thus no major checkpoint has gained popularity over the base model yet). Rather, the focus is on LoRAs with Flux, at the moment at least. It isn't even known, and is heavily doubted, that we'll ever see proper checkpoint releases beyond base Flux, considering how it works, unlike SD.
Now, the ones who do make these models are individual companies spending hundreds of millions to develop them, while tools like ControlNet, etc. are developed by researchers and sometimes very capable members of the community (usually extending researchers' shared open-source results to then implement them in ComfyUI, etc.). In short, it will take more time for Flux support to grow.
However, you can still generate images in Flux and then transfer them to SD for certain processes, to refine or generate additional content off of them, just like you can do with real-world photos via img2img, etc.
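A rough sketch of that Flux-then-SD hand-off, written with the Hugging Face diffusers library purely for brevity (an assumption; the workflow discussed here is ComfyUI-based). The model IDs, step counts, and strength values are illustrative, not recommendations from the thread.

```python
# Hypothetical sketch: generate a base frame with Flux, then restyle/refine it
# with an SD1.5 img2img pass, where the ControlNet/IPAdapter/AnimateDiff ecosystem lives.
import torch
from diffusers import FluxPipeline, StableDiffusionImg2ImgPipeline

prompt = "a neon-lit dancer, cinematic lighting"

# 1) Base image from Flux (schnell variant: few steps, no CFG).
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
base = flux(prompt, num_inference_steps=4, guidance_scale=0.0,
            height=768, width=768).images[0]

# 2) Hand the result to SD1.5 img2img; strength controls how much SD may repaint.
sd = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
refined = sd(prompt=prompt, image=base, strength=0.45,
             guidance_scale=7.0).images[0]
refined.save("flux_to_sd15.png")
```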
far too much yapping, but good information nonetheless.
Oh my god, how did you know I like Waifu's?? That's crazy, you're so right, though. 😇
Lucky guess :P
Yes, we like wifus 🤗😍🤘
This we do, my friend. This we do. Go make a cool one and share it!