I never comment on guide videos because they explain everything that we don't really need, but you, sir, went STRAIGHT TO THE POINT.
Is it possible to chain these so you'd have the face detailer followed by the hand & person versions as well in one workflow? Thanks for the video - quick information instead of loads of confusing stuff!
Thanks a lot, man. I searched a lot to find this specific piece of information. Many thanks!
I use the same prompt words and the face_yolov8n model to fix the face, but why are the results in ComfyUI not as good as in Stable Diffusion?
Thank you for your workflow and instructional videos. They are great.
Thanks a lot, that fixed mine!
Great video! You should cover "handDetailer" and "mesh graphormer hand refiner" together 🙏
Clear, quick & simple! Nice work 🙂
Thanks! 👍
Thanks bro!
Love your tutorials so far! So clear in communication and to the point! Love it! :D
Happy to hear that this sort of format works out - let me know if there are ever any topics you'd like for me to cover. All the best!
thanks very much
Interesting video. I'm an Automatic1111 user and I would like to fix some images of video game characters using LoRAs. My question is: will FaceDetailer respect the character I made with the LoRAs, or will it give me a totally different face? That's my doubt, and it's why I haven't let go of Automatic1111; ADetailer helps me a lot with that. But I would like to do the same in ComfyUI and make more complex images here.
TY for the quick and straight tutorial.
I just want to ask: the node says Face Detailer, but will it still work if I change the Ultralytics model to hand, person, etc., or anything other than face?
TIA
Thx for the vid - very helpful. I have both ComfyUI running as you described above and ADetailer running in SD. I can't even get close to the trained-resemblance quality of ADetailer when using FaceDetailer. I have added separate prompts for FaceDetailer to closely match the ADetailer settings, but still no success in matching ADetailer's quality. I use trained checkpoints for the comparison, and while FaceDetailer does improve overall face quality, it destroys much of the trained model's details. Using FaceDetailer feels very similar to using an upscaler, which removes much of the trained likeness. I stopped using upscalers for this reason, since they overpower any trained resemblance characteristics and make older trained subjects look younger.
Do you have any recommendations on what to change in FaceDetailer to get performance similar to ADetailer?
Fast and so efficient. Thanks a lot.
Thanks for the video! Could you also show how to correct fingers?
Sure, in the UltralyticsDetectorProvider node there should be an option for bbox/hand_yolov8s.pt that you can use - simply set that and it should autodetect and fix bad hands.
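For anyone curious what that model is actually doing under the hood, here's a minimal Python sketch of just the detection step, assuming you have the ultralytics package installed and the model file downloaded locally (the folder path and image name below are placeholders, not anything from the video):
```python
# Minimal sketch of the detection step only - FaceDetailer then crops,
# re-samples, and pastes each detected region back automatically.
# Assumes: pip install ultralytics, and hand_yolov8s.pt downloaded locally
# (the Impact Pack typically looks under ComfyUI/models/ultralytics/bbox/).
from ultralytics import YOLO

model = YOLO("ComfyUI/models/ultralytics/bbox/hand_yolov8s.pt")

results = model("my_render.png")  # placeholder input image

# Print a bounding box and confidence score for each detected hand.
for box in results[0].boxes:
    print(box.xyxy.tolist(), float(box.conf))
```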
Thanks for the vid!
Hey, I've noticed you lean towards 1.5 often - is there a reason for that? Not that I don't like it or anything, it's actually super great! Just curious, that's all.
Also, I've noticed I have a tough time getting this to work on a multitude of faces in one image. It runs fine, but the output usually looks worse than the input (definitely could be user error) - for example, some faces would be good, but some would be looking to the left when they should be looking to the right or forward, and so on.
Quite honestly, just the render times. When filming a video, taking a 30-second break in the middle breaks my train of thought when using SDXL - so the faster render times of SD 1.5 models help keep things moving along. Although for thumbnails I usually use SDXL models.
Hmm, that's an interesting scenario with group pictures. I wonder if you could mask or inpaint the image and have it fixed through FaceDetailer. This is just a guess - I would have to play with this more and see how to approach it. Tricky scenario nevertheless.
@PromptingPixels Ahhh, that actually makes a lot of sense. Yeah, the time it would take to generate on my system if I were making a video with SDXL would be... yeah, undoable, haha.
I've played around with it a bit since and actually got it working more or less, but I haven't tried it with 20+ faces and whatnot.
What folder do I put the models in for the Ultralytics detectors?
Thanks
Thank you very much!!! Great pitch - no one is creating content for gamedev (how to draw sprites in SD, e.g., trees for a 2D game). Maybe I gave you an idea...
Thanks so much for the idea!! I definitely want to focus more on practical use cases with diffusion models. As for game dev, what type of tools are sprites typically made in? I have no gamedev experience but am super curious. Is it just Photoshop or similar image-editing tools?
Wow thanks, this was super helpful
Glad to hear!
amazing video
Thx, mate.
Hi, I've noticed that I don't have an UltralyticsDetectorProvider node after installing the Impact Pack.
TY
Thanks for this very clear guide! I'm using a LoRA to style the face after a specific person, and the ADetailer makes it better but kind of destroys the similarity and facial traits. Any way to preserve the "LoRA effect"?
Ah, tricky use case - but I just found this in the Impact Pack repo:
ImpactWildcardEncode - Similar to ImpactWildcardProcessor, this provides the loading functionality of LoRAs (e.g. an inline <lora:name:weight> tag). Populated prompts are encoded using the clip after all the lora loading is done.
If the Inspire Pack is installed, you can also use Lora Block Weight in the form of an LBW=lbw spec inside the tag.
I _think_ that might do the trick. Repo link: github.com/ltdrdata/ComfyUI-Impact-Pack
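If it helps, here's a rough sketch of what the wildcard text for that node might look like - the LoRA name and the 0.8 weight are placeholders, not anything specific from the video:
```python
# Hypothetical wildcard/prompt text for ImpactWildcardEncode.
# The <lora:NAME:STRENGTH> tag loads the LoRA before the prompt is
# encoded, so the detailing pass should keep the LoRA's facial traits.
wildcard_text = "photo of a person, detailed face, <lora:my_person_lora:0.8>"
print(wildcard_text)
```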
Is this just for photos, or can we use it for videos and batch photos?
That was cool!
I have a question: with AnimateDiff, how can I use ADetailer? Like, fix the generated image sequence before generating the video - or does it fix videos?
Yeah, shouldn't be a problem at all - after the images pass AnimateDiff and the KSampler, and immediately after VAE Decode (before VHS Combine), you can place the FaceDetailer node to clean things up at that stage in the process.
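To make the wiring concrete, here's a rough sketch of that ordering as a fragment of ComfyUI's API-format workflow JSON, expressed in Python. The node IDs are arbitrary, and FaceDetailer's many other required inputs (model, clip, vae, detector, prompts) are omitted for brevity, so this fragment illustrates the routing only and won't load as-is:
```python
import json

# ["<node_id>", <output_index>] is how API-format workflows reference
# another node's output.
fragment = {
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["4", 2]}},
    "9": {"class_type": "FaceDetailer",        # Impact Pack node
          "inputs": {"image": ["8", 0]}},      # takes the decoded frames
    "10": {"class_type": "VHS_VideoCombine",   # Video Helper Suite node
           "inputs": {"images": ["9", 0]}},    # combines the fixed frames
}
print(json.dumps(fragment, indent=2))
```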
I use this one in my workflow, but the issue is that I run a faceswap before this detailer - the detailer completely destroys the faceswap results and renders a new face. Any fix for this? 😊
It's normal - faceswap uses the open-source inswapper model, which works at 128 pixels, so details can only be added before the faceswap.
On my computer it would take more than 6 hours with ADetailer. Using ReActor, it takes less than 1 minute and the face is enhanced.
True, but the downside with ReActor (or any roop/inswapper solution) is that it generally kills the expressions, replacing them with generic ones (i.e., a smile, etc.).
@noonesbiznass5389 That's correct (and very annoying), but in my case, with my cheap computer, I had to do that. I make some rough corrections with GIMP, then I choose some random face and swap the face.
@wasfiakab Yeah, maybe one of these days someone will improve upon Roop and Inswapper... as awesome as they are... it looks like no dev work is being done due to all the people abusing the tech.
@noonesbiznass5389 That's exactly what I was thinking. Thank you for putting it in words.
From Vietnam, thanks so much for the tutorials, bro.
Of course, happy to hear you like them!