How to FACE-SWAP with Stable Diffusion and ControlNet. Simple and flexible.
- Published: 20 Dec 2023
- We use Stable Diffusion Automatic1111 to swap some very different faces. Learn about the ControlNet IP-Adapter Plus-Face and its use cases. This solution requires only two additional ControlNet models, and no large installation packages like other face-swapping tools.
ControlNet:
github.com/Mikubill/sd-webui-...
IP-Adapter models including plus-face:
huggingface.co/h94/IP-Adapter...
Open-pose-models for ControlNet:
huggingface.co/lllyasviel/Con...
--
My Video about Upscaling and ControlNet:
• How to UPSCALE with St...
You the man!... concise... informative... nonsense-free lesson... pleasantly mixed audio and a relaxing... enjoyable presentation... Bravo!... and thank you!
Wow - thank you very much for this motivating and inspiring feedback!
@@NextTechandAI It was well earned, my friend. Thanks again for the great tutorial!
I love your videos, thank you for your help so far. You have helped me install and learn to use SDXL, upscalers and ControlNet!
Your feedback is the best motivation for me to make more videos. Thanks for that!
Excellent! Really great tutorial! Thank you very much! Subscribed and looking forward to learning from your other videos! 🙂
Thanks a lot for your feedback and the sub. I'm definitely very motivated for my next video :)
Very helpful. Liked and Subscribed. Thanks!
Thanks a lot for the like and the sub. I'm happy that my vid was helpful.
I tried your instructions and copied the settings, but I can't get it right. When I set denoising strength to 1, the face is replaced by a randomly generated image and not the ControlNet image. I think I'm missing a setting that I may not see in your video?
Everything should be in the video. Have you enabled the ControlNet unit? I assume you have selected the img2img tab, right?
Me too, not the same face at all. Did you solve it?
Same for me
When I select IP-Adapter, ip-adapter_clip_sd15 disappears from Preprocessor, and only xl is available. What am I doing wrong?
I found what the problem was. For some reason, ip-adapter_clip_sd15 doesn't appear with my model, but it does with the one you used. You should've mentioned that.
Because it is designed for SD 1.5 models, and you probably used it with an SDXL checkpoint, I think. @@KotleKettle
I tried every step but it never worked...
Thanks.
Do you know how I can use this method in Forge UI? In Forge the preprocessor is `InsightFace+CLIP-H` and the model is `ip-adapter-plus-face_sd15`. I don't know how to use `ip-adapter-clip_sd15` as the preprocessor!
Yeah my faces end up looking VERY abstract, like completely wrong
Awesome, thanks.
Thanks a lot!
Thanks, this was useful :)
Happy to read this, thanks for your feedback :)
@@NextTechandAI Thanks you were quite civil about my critique on the other video. I valued that, thanks as well.
@fpvx3922 Constructive criticism like yours is valuable. Even if I don't always agree, it's an opportunity to grow. Thank you also for this feedback!
Do you have a tutorial about training LoRA models on Stable Diffusion with DreamBooth? I tried it with other tutorials, but the interface they had was different from mine and it is hard to follow their steps.
@marcusmeins1839 Currently not, but I'll put it on my list for video ideas. You're using Automatic1111, I guess?
@@NextTechandAI yes, locally
Why didn't we download the IP-Adapter from lllyasviel? And do you know the difference between ip-adapter_sd15 and the plus one?
I wanted to show you the main GitHub page for the IP-Adapter from h94. There, in the model card, you can find among other things the descriptions of all the ip-adapter files, like sd15, plus-face sd15, etc.
Great video, thank you for sharing! Is this possible or even advised on Mac OS?
Thanks a lot for your feedback! As far as I know, it's possible to install the Automatic1111 WebUI on macOS, only GPU support might be a bit tricky. The ControlNet extension seems to work, too. So, yes :)
Unfortunately, I ran into a similar issue as others described in the comments. My preprocessor list will not update to show the same ones available to you, so I'm not sure which preprocessor I should use, and every time I run this it looks horrible. I have tried updating ControlNet and using different models, including the base 1.5 model. I'm not sure what else I can try. I'm sure I've followed the video's instructions to the letter.
Any ideas? My model shows up correctly, but not the preprocessor.
That's strange. I followed my own instructions with my relatively new Automatic1111 Zluda installation (see related short and vids) and the result was exactly like in the video. I noticed only one difference: the preprocessor offered was called ip-adapter-auto. On the first run, the WebUI automatically downloaded clip_vision\clip_h. Setting the preprocessor to ip-adapter_clip_h for the next runs gave the same (good) results.
Maybe the preprocessor's name comes with the WebUI and you have to update the WebUI. But if your generation works as expected, I would choose ip-adapter-auto or ip-adapter-clip_h without touching the existing installation.
@@NextTechandAI hi thanks for the reply, yes I have auto, and h, but I had issues with the quality of the result, I will retry again tonight and let you know how it goes.
@@NextTechandAI So I tried again with the clip_h preprocessor, and experimented quite a bit with different settings to see if I could get anything usable. I tried altering the source image, control weight, starting step, and ending step (which was basically useless; anything less than 1.0 gave me results that looked nothing like the control image). I also altered the sampling steps and the method.
I really tried but couldn't get anything usable.
Using a 7800 XT, SD 1.5 installed as per your March 2024 video.
Mine doesn't have "DPM++ 2M SDE Karras" for the sampling method. Mine has "DPM++ 2M SDE" and "DPM++ 2M SDE Heun". Which one of these should I use?
It depends on your version of Automatic1111. You could try to update if this doesn't hurt your installation. In the latest versions you have to select e.g. Karras in a list to the right of the sampling method, or leave it on "automatic". If this is not an option for you, start with DPM++ 2M SDE.
RuntimeError: 'addmm_impl_cpu_' not implemented for 'Half'. Still trying to work this one out... It seems to work fine until the final output, then the error. I wonder if I've got something in the wrong folder. That happens often.
Half/float16 is for the GPU, and I guess your Stable Diffusion is running on the CPU. You have to use the GPU or switch to "Full" (float32) instead of Half.
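The reply above boils down to a simple device-to-precision rule. As a minimal sketch (the helper name `pick_dtype` is purely illustrative, not part of any library): half-precision (float16) kernels such as `addmm` are generally only implemented for GPUs, so CPU inference needs "Full" precision (float32).

```python
# Illustrative helper: choose a precision based on the compute device.
# float16 ("Half") kernels like addmm are typically GPU-only, so on CPU
# we fall back to float32 ("Full") to avoid the RuntimeError above.
def pick_dtype(device: str) -> str:
    # Use float16 only on CUDA devices; float32 everywhere else.
    return "float16" if device.startswith("cuda") else "float32"

print(pick_dtype("cpu"))     # float32 -> avoids 'addmm_impl_cpu_' for 'Half'
print(pick_dtype("cuda:0"))  # float16 is fine on the GPU
```

In Automatic1111 the same idea is usually applied via launcher flags that force full precision (e.g. `--no-half`) when running on CPU.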
Very well explained, every step. Subbed. Can you make some ControlNet content for SDXL? Of course inside Automatic1111.
Thank you very much. I'll add it to my list. For sure you will need lots of VRAM :)
I followed every step but I get just the original image with no face, as if I'd deleted it with Paint where I drew the mask, plus a pose image.
Have you checked whether the ControlNet component is active? Are there any error messages in your Automatic1111 server window?
When I do this process, I get a blended face. I'll have the expression of the original with the colors and shape of the new. It looks bad. I'd like to have a link to the images you use, so I can truly replicate your process.
Another problem with combining images like this is that the skin tones don't match.
I also have issues. Tried some other faces, tried playing with settings, no luck. Often I get totally deformed faces or just a hairy mess.
Updated xformers and PyTorch and found ReActor. I now use a mixture of ControlNet with IP-Adapter, OpenPose and ReActor; works fine.
I don't know, for me nothing beats ReActor; the IP-Adapter falls short even with the plus v2 version. The best way I get the desired result is by using ReActor (txt2img) with any model and its specific cfg/steps etc., and then img2img with my trained model and its cfg/steps/method settings, without ReActor, with 0.17 denoising strength at 2x size.
After img2img the face gets the same style as the rest of the picture without changing its appearance.
But with the IP-Adapter, the face doesn't even come close to 50%. And I tried with anywhere from a single picture to 20 pictures for the face so the IP-Adapter could get it right, but it still fails. I am waiting for InstantID; it only works with SDXL at the moment and I use SD 1.5.
Can this be done with SDXL?
You need the checkpoints and ControlNet files for SDXL, and depending on their size, lots of VRAM.
Very well done! The accuracy and the explanations / "whys" are highly valuable. Maybe the German way!
I'm definitely subscribing to your channel and activating notifications ;) 👏 👏 👏
Looking forward to more!
Thank you very much for the inspiring feedback and the sub! I'm very happy that the video and my 'German way' are helpful :)
I subscribed and commented, but my results aren't what I expected. Can you help me please?
Give more details. What exactly have you done and what was the result?
Finally, after 4 days, I found this video and it finally works. But now I've got a new issue: the eyes are Asian now. Both models have almost exactly the same eyes, so I'm not sure why the result gets Asian eyes. Help :3
Thanks for your feedback. I've never heard of such a strange issue. I guess you are working on the img2img tab, have tried different values for control weight, set denoising strength to 1, and left the prompt empty?
It sounds crazy, but maybe the default face of your checkpoint is Asian and somehow it is mixed with your models. So enter a prompt describing your target model and please report back :)
Idk, I was just following every step you did, including the prompt etc. @@NextTechandAI
So entering a prompt describing your target model or adding 'asian' to the negative prompt does not help? @necrolydevlogs3932
Correct. Negative prompt: "Asian", then tried "no Asians", then "Asian eyes", etc. @@NextTechandAI
@necrolydevlogs3932 Very strange. I'm running out of ideas, you could try other checkpoints, although that connection is pretty far-fetched.
But why? We already have tools like Roop or ReActor that are way easier to use and give amazing results. Why would you use so many steps to achieve something that is not even better?
Thanks for asking. Both Roop and ReActor are based on models that do not allow commercial use. If this is no limitation for you, feel free. Additionally, ReActor, a sort of "successor" to Roop (which is no longer maintained), requires Microsoft Visual Studio to be installed, which seems like a bit of overkill.
Against this background, the IP-Adapter plus-face delivers decent results.
I don't think Roop or ReActor produce a more accurate face swap than this method. I tried the other two and my results were far from exact face swaps. Plus this method does not require prompts to get such good results. Just my opinion...
I see it the same way. Thank you for sharing your opinion.
@@NextTechandAI thanks interesting 👍
@@kdzvocalcovers3516 will give it a try then ty
how to fix this error? "AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'"
Please add some details. What exactly did you do to get this error, what's your hardware, and especially, which torch version are you using?
Did you fix it? I have the same error. Please help. I have 6 GB VRAM, images 1000x1000.
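As far as I know, `torch.nn.functional.scaled_dot_product_attention` was added in PyTorch 2.0, so this AttributeError usually means an older torch is installed. A minimal sketch of that version gate (the helper name `supports_sdpa` is illustrative, not part of any library):

```python
# scaled_dot_product_attention exists only in PyTorch >= 2.0; older builds
# raise the AttributeError quoted above. This helper just reads the major
# version out of a torch version string.
def supports_sdpa(torch_version: str) -> bool:
    major = int(torch_version.split(".")[0])
    return major >= 2

print(supports_sdpa("1.13.1"))  # False -> upgrading torch should fix the error
print(supports_sdpa("2.1.2"))   # True
```

So the practical fix is typically to upgrade torch (and a matching torchvision) inside the WebUI's Python environment.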
Good content, really loved it. Do you have any pages on social media?
Thanks a lot for this motivating feedback. I'm still working on my online presence. As soon as my social media pages are available, I will create a post in the Community tab.
@@NextTechandAI We people are trying to interact with you so you can create more effective content. It's helpful to us.
ip-adapter_clip_sd15 does not appear at 3:15 and I don't know why.
I guess to the right there is no ip-adapter-plus-face, either? Have you copied the ControlNet file into the correct directory? Are you mixing SDXL with SD15?
@@NextTechandAI Well, I'm very new to Stable Diffusion, so I don't know if I'm mixing SDXL with SD 1.5. On the right, there is ip-adapter-plus-face as an option. I'm confident I put everything in the right directory. I decided to set ip-adapter-auto on the left, with ip-adapter-plus-face on the right, and saw in the command prompt that, somehow, the preprocessor is ip-adapter-plus-face_sd15.
It's probably due to my lack of knowledge of SDXL and SD 1.5, which I'm probably mixing, but I don't know what these settings are nor how to change them / avoid mixing them.
In any case, after watching a whole bunch of face swap videos, yours came out on top! After slightly tweaking a few settings from your recommendations, plus using multiple input images instead of a single one in ControlNet, having more than a single angle of my virtual character really helped me out. Thanks a lot!
Well, if you could select ip-adapter-plus-face_sd15, then your selection is correct. Regarding SDXL and SD 1.5: some people seem to have trouble because they selected the ControlNet files for Stable Diffusion 1.5, as in my video, and combined them with checkpoints for Stable Diffusion XL. When you download checkpoints from HuggingFace or Civitai, you'll see whether they're for 1.5 or XL.
Nevertheless, I'm glad that you managed to modify your virtual character - thanks for your feedback!