How to FACE-SWAP with Stable Diffusion and ControlNet. Simple and flexible.

  • Published: 20 Dec 2023
  • We use Stable Diffusion Automatic1111 to swap some very different faces. Learn about the ControlNet IP-Adapter Plus-Face model and its use cases. This solution requires only two additional ControlNet models, and no large installation packages like other face-swapping tools.
    ControlNet:
    github.com/Mikubill/sd-webui-...
    IP-Adapter models including plus-face:
    huggingface.co/h94/IP-Adapter...
    Open-pose-models for ControlNet:
    huggingface.co/lllyasviel/Con...
    --
    My Video about Upscaling and ControlNet:
    • How to UPSCALE with St...
  • Science
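
The workflow shown in the video (img2img with the face masked, denoising strength 1, an empty prompt, and one ControlNet unit carrying the reference face through IP-Adapter Plus-Face) can also be driven through Automatic1111's `--api` mode. Below is a minimal sketch of the request payload, assuming the sd-webui-controlnet extension is installed; the field names follow its API, and the module/model strings are the ones used in the video — adjust them to whatever files you actually downloaded.

```python
def build_faceswap_payload(target_b64: str, face_b64: str, mask_b64: str) -> dict:
    """Assemble an img2img face-swap request for Automatic1111's --api mode.

    target_b64: base64 image whose face gets replaced
    face_b64:   base64 reference face fed to the IP-Adapter unit
    mask_b64:   base64 mask, white over the face region to repaint
    """
    return {
        "init_images": [target_b64],
        "mask": mask_b64,
        "denoising_strength": 1.0,   # fully repaint the masked area
        "prompt": "",                # the IP-Adapter supplies the identity
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "image": face_b64,
                    "module": "ip-adapter_clip_sd15",      # preprocessor
                    "model": "ip-adapter-plus-face_sd15",  # ControlNet model
                    "weight": 1.0,
                }]
            }
        },
    }
```

A POST of this dict to `/sdapi/v1/img2img` on a WebUI started with `--api` should mirror the settings from the video; this is a sketch of the request shape, not a drop-in client.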

Comments • 81

  • @kdzvocalcovers3516 · 5 months ago +5

    You the man! Concise, informative, nonsense-free lesson, pleasantly mixed audio and a relaxing, enjoyable presentation. Bravo, and thank you!

    • @NextTechandAI · 5 months ago +1

      Wow - thank you very much for this motivating and inspiring feedback!

    • @kdzvocalcovers3516 · 5 months ago

      @NextTechandAI It was well earned, my friend. Thanks again for the great tutorial!

  • @Chonky_Nerd · 5 months ago +2

    I love your videos, thank you for your help so far. You have helped me install and learn to use SDXL, upscalers, and ControlNet!

    • @NextTechandAI · 5 months ago

      Your feedback is the best motivation for me to make more videos. Thanks for that!

  • @RobertWildling · 4 months ago +1

    Excellent! Really great tutorial! Thank you very much! Subscribed and looking forward to learning from your other videos! 🙂

    • @NextTechandAI · 4 months ago

      Thanks a lot for your feedback and the sub. I'm definitely very motivated for my next video :)

  • @pilotdawn1661 · 19 days ago

    Very helpful. Liked and Subscribed. Thanks!

    • @NextTechandAI · 18 days ago

      Thanks a lot for the like and the sub. I'm happy that my vid was helpful.

  • @siewertw · 4 months ago +4

    I tried your instructions and copied the settings, but I can't get it right. When I set denoising strength to 1, the face is replaced by a randomly generated image and not the ControlNet image. I think I'm missing a setting that I may not see in your video?

    • @NextTechandAI · 4 months ago

      Everything should be in the video. Have you enabled the ControlNet unit? I assume you have selected the img2img tab, right?

    • @mariorancheroni9427 · 4 months ago +3

      Me too, not the same face at all. Did you solve it?

    • @MONGIE30 · 4 months ago +1

      Same for me

  • @KotleKettle · 4 months ago +3

    When I select IP-Adapter, ip-adapter_clip_sd15 disappears from Preprocessor, and only the XL one is available. What am I doing wrong?

    • @KotleKettle · 4 months ago +1

      I found what the problem was. For some reason, ip-adapter_clip_sd15 doesn't appear with my model, but it does with the one you used. You should've mentioned that.

    • @captainblackbeard9104 · 2 months ago

      @KotleKettle Because it is designed for SD 1.5 models, and you probably used it with SDXL, I think.

  • @mhacksunknown2229 · 3 months ago +3

    I tried every step but it never worked.

  • @faredit-cq2xl · 26 days ago

    Thanks.
    Do you know how I can use this method in Forge UI? In Forge the preprocessor is `InsightFace+CLIP-H` and the model is `ip-adapter-plus-face_sd15`; I don't know how to use `ip-adapter_clip_sd15` as the preprocessor!

  • @dustinpoissant · 4 months ago +4

    Yeah, my faces end up looking VERY abstract, like completely wrong.

  • @calurodriguezrome888 · 3 months ago

    Awesome, thanks.

  • @fpvx3922 · 2 months ago

    Thanks, this was useful :)

    • @NextTechandAI · 2 months ago

      Happy to read this, thanks for your feedback :)

    • @fpvx3922 · 2 months ago

      @NextTechandAI Thanks, you were quite civil about my critique on the other video. I valued that, thanks as well.

    • @NextTechandAI · 2 months ago

      @fpvx3922 Constructive criticism like yours is valuable. Even if I don't always agree, it's an opportunity to grow. Thank you also for this feedback!

  • @marcusmeins1839 · 4 months ago

    Do you have a tutorial about training LoRA models for Stable Diffusion with DreamBooth? I tried other tutorials, but the interface they had was different from mine, and it is hard to follow their steps.

    • @NextTechandAI · 4 months ago +1

      @marcusmeins1839 Currently not, but I'll put it on my list for video ideas. You're using Automatic1111, I guess?

    • @marcusmeins1839 · 4 months ago

      @NextTechandAI Yes, locally.

  • @sessizinsan1111 · 4 months ago

    Why didn't we download the IP-Adapter from lllyasviel? And do you know the difference between ip-adapter_sd15 and the plus one?

    • @NextTechandAI · 4 months ago

      I wanted to show you the main h94/IP-Adapter page. There, in the model card, you can find descriptions of all the IP-Adapter files, like sd15, plus-face sd15, etc.

  • @trashmike7642 · 4 months ago

    Great video, thank you for sharing! Is this possible or even advised on Mac OS?

    • @NextTechandAI · 4 months ago

      Thanks a lot for your feedback! As far as I know it's possible to install the Automatic1111 WebUI on macOS; only GPU support might be a bit tricky. The ControlNet extension seems to work, too. So, yes :)

  • @bigdeutsch5588 · 1 month ago

    Unfortunately I ran into a similar issue as others described in the comments. My preprocessor will not update to show the same ones available for you, I'm not sure which preprocessor I should use so every time I run this it looks horrible. I have tried updating control net, using different models, including the base 1.5 model. I'm not sure what else I can try. I'm sure I've followed the video's instructions to the letter.
    Any ideas? My model shows up correctly, but not the preprocessor.

    • @NextTechandAI · 1 month ago

      That's strange. I followed my own instructions with my relatively new Automatic1111 Zluda installation (see related short and videos) and the result was exactly like in the video. I noticed only one difference: the preprocessor being offered was called ip-adapter-auto. On the first run, the WebUI automatically downloaded clip_vision\clip_h. Setting the preprocessor to ip-adapter_clip_h for subsequent runs produced the same (good) results.
      Maybe the preprocessor's name comes with the WebUI and you have to update the WebUI. BUT if your generation works as expected, I would choose ip-adapter-auto or ip-adapter_clip_h without touching the existing installation.

    • @bigdeutsch5588 · 1 month ago +1

      @NextTechandAI Hi, thanks for the reply. Yes, I have auto and h, but I had issues with the quality of the result. I will retry tonight and let you know how it goes.

    • @bigdeutsch5588 · 1 month ago

      @NextTechandAI So I tried again with the clip_h preprocessor and experimented quite a bit with different settings to see if I could attain anything usable. I tried altering the source image, control weight, starting step, and ending step (which was basically useless? Anything less than 1.0 gave me results that looked nothing like the control image). I also altered the sampling steps and the method.
      I really tried but couldn't get anything usable.
      Using a 7800 XT, SD 1.5 installed as per your March 2024 video.

  • @QuackCow144 · 25 days ago

    Mine doesn't have "DPM++ 2M SDE Karras" for the sampling method. Mine has "DPM++ 2M SDE" and "DPM++ 2M SDE Heun". Which one of these should I use?

    • @NextTechandAI · 24 days ago

      It depends on your version of Automatic1111. You could try to update if this doesn't hurt your installation. In the latest versions you have to select e.g. Karras in a list to the right of the sampling method, or leave it on "Automatic". If this is not an option for you, start with DPM++ 2M SDE.
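
The split described in this reply can be sketched as a small, hypothetical helper: newer A1111 builds list only the base sampler names and move "Karras" into a separate "Schedule type" dropdown, so the old combined names map onto a (sampler, schedule) pair. The exact UI labels depend on your WebUI version.

```python
def resolve_sampler(legacy_name: str) -> tuple[str, str]:
    # Older A1111 versions offered combined names like "DPM++ 2M SDE Karras";
    # newer ones split this into a sampler ("DPM++ 2M SDE") plus a separate
    # "Schedule type" dropdown ("Karras", "Exponential", ... or "Automatic").
    if legacy_name.endswith(" Karras"):
        return legacy_name[: -len(" Karras")], "Karras"
    return legacy_name, "Automatic"
```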

  • @amarissimus29 · 2 months ago

    RuntimeError: 'addmm_impl_cpu_' not implemented for 'Half'. Still trying to work this one out... Seems to be working fine until the final output, then the error. I wonder if I've got something in the wrong folder. That happens often.

    • @NextTechandAI · 2 months ago

      Half/float16 is for the GPU, and I guess your Stable Diffusion is using the CPU. You have to use the GPU or switch to "Full" (float32) instead of "Half".
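
For background on this error: PyTorch's CPU backend has no float16 ("Half") matrix-multiply kernels, which is exactly what the `addmm_impl_cpu_` message means, so a CPU-bound install must run in float32. A sketch of the precision choice (the function name is illustrative, not an A1111 settings key):

```python
def pick_precision(device: str) -> str:
    # float16 ("Half") matmul kernels exist only on GPU backends in PyTorch;
    # running them on CPU raises "addmm_impl_cpu_ not implemented for 'Half'".
    # On CPU, Stable Diffusion must fall back to float32 ("Full").
    return "float16" if device.startswith("cuda") else "float32"
```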

  • @Oxes · 3 months ago

    Very, very well explained, every step. Subbed. Can you make some ControlNet content for SDXL? Inside Automatic1111, of course.

    • @NextTechandAI · 3 months ago +1

      Thank you very much. I'll add it to my list. For sure you will need lots of VRAM :)

  • @Azmortsu · 2 months ago

    I followed every step but I just get the original image with no face, as if I'd deleted it with paint where I drew the mask, plus a pose image.

    • @NextTechandAI · 2 months ago

      Have you checked whether the ControlNet component is active? Are there any error messages in the window of your automatic1111 server?

  • @ligmuhnugs · 4 months ago +1

    When I do this process, I get a blended face. I'll have the expression of the original with the colors and shape of the new. It looks bad. I'd like to have a link to the images you use, so I can truly replicate your process.
    Also a problem with combining images like this is skin tones don't match.

    • @ichhassdievoll · 4 months ago

      I also have issues. Tried some other faces, tried playing with settings, no luck. Often I get totally deformed faces or just a hairy mess.

    • @ichhassdievoll · 4 months ago

      Updated xformers and PyTorch and found ReActor. I now use a mixture of ControlNet with IP-Adapter, OpenPose and ReActor. Works fine.

  • @zaselimgamingvideos6881 · 3 months ago

    I don't know, for me nothing beats ReActor; the IP-Adapter sucks even with the plus v2 version. The way I get the desired result is by using ReActor (txt2img) with any model with its specific cfg/steps etc., and then img2img with my trained model with its cfg/steps/method settings, without ReActor, with 0.17 denoising at 2x size.
    After img2img the face gets the same style as the rest of the picture without changing its appearance.
    But with the IP-Adapter, the face doesn't even come 50% close. And I tried with anywhere from a single picture to 20 pictures for the face so that the IP-Adapter could get it right, but it still fails. I am waiting for InstantID; it only works with SDXL at the moment and I use SD 1.5.

  • @___x__x_r___xa__x_____f______ · 4 months ago

    Can this be done with SDXL?

    • @NextTechandAI · 4 months ago

      You need the models and ControlNet files for SDXL - and depending on the size lots of VRAM.

  • @arnaudcaplier7909 · 4 months ago

    Very well done! The accuracy and the explanations / "whys" are highly valuable. Maybe the German way!
    I definitely subscribed to your channel and activated notifications ;) 👏 👏 👏
    Looking forward to more.

    • @NextTechandAI · 4 months ago

      Thank you very much for the inspiring feedback and the sub! I'm very happy that the video and my 'German way' are helpful :)

  • @mhacksunknown2229 · 3 months ago

    I subscribed and commented but my results aren't what I expected. Can you help me please?

    • @NextTechandAI · 3 months ago

      Give more details. What exactly have you done and what was the result?

  • @necrolydevlogs3932 · 4 months ago

    Finally, after 4 days, found this video and it finally works. But now I've got a new issue: the eyes are Asian now. Both models have almost exactly the same eyes, so not sure why it produces Asian eyes. Help :3

    • @NextTechandAI · 4 months ago

      Thanks for your feedback. I've never heard of such a strange issue. I guess you are working on the img2img tab, have tried different values for control weight, set denoising strength to 1 and left the prompt empty?
      It sounds crazy, but maybe the default face of your checkpoint is Asian and somehow it is mixed with your models. So enter a prompt describing your target model and please report back :)

    • @necrolydevlogs3932 · 4 months ago

      @NextTechandAI Idk, I was just following every step you did, including the prompt etc.

    • @NextTechandAI · 4 months ago

      @necrolydevlogs3932 So entering a prompt describing your target model or adding 'asian' to the negative prompt does not help?

    • @necrolydevlogs3932 · 4 months ago

      @NextTechandAI Correct. Neg prompt: Asian, then tried No asians, then Asian eyes, etc.

    • @NextTechandAI · 4 months ago

      @necrolydevlogs3932 Very strange. I'm running out of ideas; you could try other checkpoints, although that connection is pretty far-fetched.

  • @aresic34 · 5 months ago +1

    But why? We already have tools like Roop or ReActor that are way easier to use and give amazing results. Why would you use so many steps to achieve something that is not even better?

    • @NextTechandAI · 5 months ago +3

      Thanks for asking. Both Roop and ReActor are based on models that do not allow commercial use. If this is no limitation for you, feel free. Additionally, ReActor, a sort of "successor" to Roop (which is no longer maintained), requires Microsoft Visual Studio to be installed, which seems like a bit of overkill.
      Against this background, the IP-Adapter plus-face delivers decent results.

    • @kdzvocalcovers3516 · 5 months ago +3

      I don't think Roop or ReActor produce a more accurate face swap than this method. I tried the other two and my results were far from exact face swaps. Plus, this method does not require prompts to achieve such good results. Just my opinion.

    • @NextTechandAI · 5 months ago +2

      I see it the same way. Thank you for sharing your opinion.

    • @aresic34 · 5 months ago +1

      @NextTechandAI Thanks, interesting 👍

    • @aresic34 · 5 months ago

      @kdzvocalcovers3516 Will give it a try then, ty.

  • @alinasama721 · 3 months ago

    how to fix this error? "AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'"

    • @NextTechandAI · 3 months ago

      Please add some details. What exactly have you done in order to get this error, what's your hardware and especially, which torch-version are you using?
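
For context on this particular AttributeError: `torch.nn.functional.scaled_dot_product_attention` was only added in PyTorch 2.0, so older installs raise exactly this error when an extension assumes it exists. A quick version check (pure string parsing, so it works without importing torch):

```python
def supports_sdpa(torch_version: str) -> bool:
    # F.scaled_dot_product_attention shipped with PyTorch 2.0; builds like
    # "1.13.1+cu117" predate it and raise AttributeError when it is called.
    major = int(torch_version.split("+")[0].split(".")[0])
    return major >= 2
```

If this returns False for your `torch.__version__`, upgrading PyTorch (matching your CUDA/ROCm build) is the usual fix.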

    • @rubencastro3854 · 2 months ago

      Did you fix it? I have the same error. I have 6 GB VRAM, images 1000x1000.

  • @wittwickey · 4 months ago

    Good content, really loved it. Do you have any page on social media?

    • @NextTechandAI · 4 months ago +1

      Thanks a lot for this motivating feedback. I'm still working on my online presence. As soon as my social media pages are available, I will create a post in the Community tab.

    • @wittwickey · 4 months ago +1

      @NextTechandAI We people are trying to interact with you so you can create more effective content. It's helpful to us.

  • @Wakssbm · 1 month ago +1

    ip-adapter_clip_sd15 does not appear at 3:15 and I don't know why?

    • @NextTechandAI · 1 month ago

      I guess to the right there is no ip-adapter-plus-face, either? Have you copied the ControlNet file into the correct directory? Are you mixing SDXL with SD15?

    • @Wakssbm · 1 month ago

      @NextTechandAI Well, I'm very new to Stable Diffusion, so I don't know if I'm mixing SDXL with SD 1.5. On the right there is ip-adapter-plus-face as an option. I'm confident I put everything in the right directory. I decided to set it to ip-adapter-auto on the left, with ip-adapter-plus-face on the right, and saw in the command prompt that, somehow, the preprocessor is ip-adapter-plus-face_sd15.
      It's probably due to my lack of knowledge of SDXL and SD 1.5, which I'm probably mixing, but I don't know what these settings are nor how to change them / avoid mixing them.
      In any case, after watching a whole bunch of face-swap videos, yours came out on top! After slightly tweaking a few settings over your recommendations, plus using multiple inputs instead of a single image in ControlNet, it really helped me to have more than a single angle for my virtual character. Thanks a lot!

    • @NextTechandAI · 1 month ago

      Well, if you could select ip-adapter-plus-face-sd15, then your selection is correct. Regarding SDXL and SD15, some people seem to have trouble as they have selected the ControlNet files for Stable Diffusion 1.5 like in my video and combined it with some checkpoints for Stable Diffusion XL. When you download checkpoints from HuggingFace or Civitai, you'll see whether it's for 1.5 or XL.
      Nevertheless, I'm glad that you managed to modify your virtual character - thanks for your feedback!