Quickly fix bad faces using inpaint

  • Published: 17 Feb 2023
  • This is a short tutorial on how to fix bad faces in stable diffusion using the inpaint feature.
  • Science

Comments • 57

  • @thenewfoundation9822 · 1 year ago +1

    Finally someone who gave a clear and working instruction on how to improve faces in SD. Thank you so much for this, it's really appreciated.

  • @MyAI6659 · 1 year ago

    You are one of the few people on youtube who actually know what he's talking about and how SD works. Much love Bernard.

  • @tristanwheeler2300 · 4 months ago +1

    oldie but goldie

  • @exile_national · 1 year ago

512x512 and "original" instead of "fill" for masked content saved my day, thank you Sir Bernard! You are indeed the VERY BEST!

  • @AscendantStoic · 1 year ago +1

    Learning this trick is quite a game changer.

  • @MonkeyDIvan · 9 months ago +1

    Amazing stuff! Thank you!

  • @anuragkerketta6708 · 1 year ago +1

    Your tutorial did the job for me, thanks a lot, and subscribed.
    Post regularly and I'm sure you'll get a lot of followers quickly.

  • @metasamsara · 11 months ago +2

    thank you, really clear and concise tutorial

  • @hatuey6326 · 1 year ago +2

    Magnificent, saved to my favorites! This is going to improve my workflow so much!!! Thank you!!!!!

  • @Sandel99456 · 1 year ago +1

    The best informative tutorial 👌

  • @mufeedco · 1 year ago

    Thank you, great explanation.

  • @Ilovebrushingmyhorse · 1 year ago

    The 512x512 inpaint method helped dramatically, but I think the denoising shouldn't be so high if you want to stay close to the original image: the less you want to change, the lower you should set it. I set mine all the way down to 0.01-0.05 just to add a little detail sometimes. Also, as far as I know, keeping the same prompt isn't always necessary.

  • @toptalkstamil5435 · 1 year ago

    Installed, everything works, thanks!

  • @whyjordie · 9 months ago +1

    this works!! thank you! i was struggling so hard lol

  • @339Memes · 1 year ago +2

    Wow, I didn't know you could change the width to do inpainting, thanks

    • @metasamsara · 11 months ago

      Yes, it's not obvious, especially since on mine, for some reason, the dimensions section is called the "resize" feature; it isn't obvious that you can pick a custom dimension for when you render the masked area only.

  • @syno3608 · 1 year ago

    Thank you so much.

  • @yogxoth1959 · 1 year ago +1

    Thanks a lot!

  • @Sandel99456 · 1 year ago

    Is there kohya documentation for the settings and what they do?

  • @RikkTheGaijin · 1 year ago

    thank you!

  • @MAASTEER007 · 1 month ago

    Thanks a lot

  • @kritikusnezopont8652 · 1 year ago +2

    Amazing tutorial, thanks! Also, while watching it, I noticed the VAE and hypernetwork selection options on the top. I'm just wondering if that is an extension or something, because I don't have those options on my Automatic1111. Where can we find those please? Thanks!

    • @mrzackcole · 1 year ago

      I also noticed that. Can't find it in extensions. Would love to know if anyone figures out where we can download it

  • @arunuday8814 · 1 year ago

    Hi, I couldn't understand how you linked the inpainting to a specific custom model. Can you please explain? Thanks a ton!

  • @quantumevolution4502 · 1 year ago

    Thank you

  • @Doop3r · 1 year ago +4

    I'm running into an odd issue. I'm following every single step to the T, and sometimes it works just as shown here... other times, instead of just the face in the masked area, I get a scrunched-up version of the entire photo there.

    • @sestep09 · 1 year ago +1

      Lowering the denoising strength worked for me when I had this happen.

  • @sarpsomer · 1 year ago +1

    Another great step-by-step tutorial from you. Can someone explain what the "Only masked padding, pixels = 32" value is for?

    • @stonebronson5 · 1 year ago +3

      As I understand it, it is the area around the masked region that is looked at when it generates the new image. If you set it higher it will try to blend in better; if you set it lower, it will make more drastic changes. The padding value only works when you set "Inpaint area" to "Only masked", since in "Whole picture" mode the padding effectively expands to the whole canvas.

    • @sarpsomer · 1 year ago +1

      @@stonebronson5 This is so helpful. It's similar to padding in design terminology, e.g. CSS padding. Never thought about that.

    • @kneecaps2000 · 1 year ago +1

      Yeah, it's also called "feathering": just a gradient on the edge to avoid it looking cut-and-pasted.
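As an aside for readers puzzling over these two settings: in "Only masked" mode the padding expands the crop box around the mask before re-rendering, and the mask blur feathers the edge. A minimal sketch with Pillow; the 512x512 canvas and the square "face" region are made up for illustration:

```python
from PIL import Image, ImageFilter

def masked_crop_box(mask: Image.Image, padding: int) -> tuple:
    """Bounding box of the white mask region, expanded by `padding` px on
    each side and clamped to the canvas (what "Only masked padding, pixels"
    controls when "Inpaint area" is "Only masked")."""
    left, top, right, bottom = mask.getbbox()
    return (max(left - padding, 0),
            max(top - padding, 0),
            min(right + padding, mask.width),
            min(bottom + padding, mask.height))

def feather(mask: Image.Image, blur: int = 4) -> Image.Image:
    """Soften the mask edge (the "Mask blur" setting) so the inpainted
    patch fades into its surroundings instead of having a hard seam."""
    return mask.filter(ImageFilter.GaussianBlur(blur))

# A 512x512 mask with a white square "face" region in the middle.
mask = Image.new("L", (512, 512), 0)
mask.paste(255, (200, 150, 300, 250))

print(masked_crop_box(mask, 32))   # with padding 32: (168, 118, 332, 282)
soft = feather(mask)               # same size, blurred edges
```

A bigger padding means the crop the model actually sees includes more of the surrounding skin and background, which is why raising it helps the new face blend in.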

  • @testales · 1 year ago +5

    Ok, now we only need a way to do this with hands in let's say under 100 attempts. ;-)

    • @subashchandra9557 · 1 year ago

      You can use ControlNet for that

    • @testales · 1 year ago

      @@subashchandra9557 Two weeks ago, when I wrote the comment, this wasn't common knowledge yet. ;-) Also, having to create a fitting depth map can still be somewhat labor-intensive.

  • @syno3608 · 1 year ago

    Can we replace the face with a face from another LoRA?

  • @baobabkoodaa · 1 year ago +2

    I'm unable to reproduce similar quality results. Can you share more details on what you did to achieve this level of quality? Are you running in half precision or full precision mode? Did you toggle on the "color corrections after inpainting" option in settings? Where did you get the Lora model for this? I tried all the Ana De Armas Loras in Civitai, but it looks like the one you used in this video was not on Civitai. I suspect that your Lora model is the "magic" here that allows good inpainting results, possibly in conjunction with some settings you have toggled on.

    • @mkaleborn · 1 year ago +6

      Not sure I can help on the Lora side. But with my vanilla Automatic1111 and custom Checkpoint Merges, I had good results with this workflow:
      1. Generate a txt2img image of a lady standing in a wooded/natural setting: medium distance, with a face that was decidedly 'sub-optimal' (I purposely did not do hi-res upscaling).
      2. I upscaled that original 512x768 image in the Extras tab: 2.5x, ESRGAN_4x (I've switched to this from SwinIR_4x), no other upscale settings changed (all default)
      3. I copied my entire positive and negative prompt from the txt2img tab over to Inpaint, then copied my newly upscaled image to Inpaint. Same as he did in his video.
      4. I Masked out the model's entire face and a little bit of her hair (but not all of it)
      5. Sorry for the ugly formatting, but here are my Inpainting settings:
      Resize mode: Just resize; Mask blur: 4; Mask mode: Inpaint masked (all defaults)
      Masked content: **Original** - I'm pretty sure *this* is the critical setting that needs to be selected for this to work. It keeps the original 'bad' face as a reference for general 'composition' when drawing the new face. Otherwise it will try to render the *entire* prompt, body and all, or just doesn't work properly.
      Inpaint Area: Only Masked (for the reasons he stated in the video, you only want it to focus on rendering your Masked area at the resolution you select below)
      Only masked padding, pixels: 64 - After some tests, I doubled the 'padding' value from 32 to 64. I found this helped the AI to 'see' the surrounding colour palette better, allowing the new face to 'blend in' better with her neck, shoulders, and overall skin tone
      Sampling method: Euler a (same sampler as my txt2img render); Sampling steps: 60 (same as txt2img)
      Width: 512, Height: 512 - for the exact reason he gave in the video
      CFG scale: 7 (same as txt2img). I didn't play with this setting, but I think it's fine left at the same level as your original render
      Denoising strength: 0.3. My first attempt was at 0.7 and it was 'ok' or roughly acceptable; when I lowered it to 0.3 and tried again, I had much better results - a more natural fit for her neck and head position. Basically it used the original 'ugly face' as a closer reference point, but was able to render the whole face at 512x512 resolution
      Seed: -1; Restore faces: checked (I did not try it unchecked)
      And that was it. I think the flexibility probably comes with Denoising and CFG in how the image will look, and what variety you get with multiple renders. But a lower Denoising with a suitable "Only Masked Padding" set high enough to 'see' the surrounding area seemed to really help me get a face that blended in nicely with her body and the overall colour palette.
      Anyway, that's just my very brief and quick experience trying to fix some images that had 'broken' faces at medium / far model distances. Hope it helps!

    • @markdavidalcampado7784 · 1 year ago

      @@mkaleborn I'm gonna try this now. It looks promising! My greatest problem with inpaint is that blurry artifacts are too easy to see when the image is upscaled. Any fix for that? Sorry for my English; I wrote this over almost 15 mins.

    • @kneecaps2000 · 1 year ago

      You must set the inpaint area to "Only masked" and also set the resolution to 512x512.
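For anyone scripting this: the settings in the walkthrough above have direct counterparts in the Automatic1111 web API's `/sdapi/v1/img2img` endpoint. This sketch only builds the request payload; the image and mask arguments stand in for base64-encoded PNGs, and you would still need a running server to POST it to:

```python
def inpaint_payload(image_b64: str, mask_b64: str, prompt: str) -> dict:
    """Build an /sdapi/v1/img2img request mirroring the inpaint settings
    described above (values are one reasonable starting point, not gospel)."""
    return {
        "init_images": [image_b64],       # base64 PNG of the full picture
        "mask": mask_b64,                 # base64 PNG, white = area to redo
        "prompt": prompt,                 # reuse the original prompt
        "inpainting_fill": 1,             # 1 = "original" masked content
        "inpaint_full_res": True,         # "Only masked" inpaint area
        "inpaint_full_res_padding": 64,   # "Only masked padding, pixels"
        "mask_blur": 4,
        "width": 512,                     # render the face region at
        "height": 512,                    #   full model resolution
        "sampler_name": "Euler a",
        "steps": 60,
        "cfg_scale": 7,
        "denoising_strength": 0.3,        # lower = closer to the old face
        "restore_faces": True,
        "seed": -1,
    }

payload = inpaint_payload("<image-b64>", "<mask-b64>", "portrait of a woman")
# then e.g.: requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```

The field names come from the Automatic1111 API schema; check your own install's `/docs` page, since the schema has shifted between versions.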

  • @androidgamerxc · 1 year ago

    How do you have the SD VAE and hypernetwork selectors?

  • @roseydeep4896 · 1 year ago +1

    Is there an extension that could do this automatically right after generating an image??? (I want to use this for videos; I need the frames to come out good right away)

    • @JJ-vp3bd · 4 months ago

      did you find this?

  • @s3bl31 · 9 months ago +2

    Doesn't work for me; I don't know what the problem is. In the preview I see a good face, but in the last step it turns back into the bad face, and the output is just an even worse oversharpened face.

    • @marksanders3662 · 4 months ago

      I have the same problem. Have you solved it?

    • @s3bl31 · 3 months ago

      @@marksanders3662 Are you using an AMD card? If so, I think I fixed it with --no-half on the command line. But I don't know for sure, since it was that long ago and I've switched to Nvidia.

  • @progeman · 1 year ago

    When I try this, exactly as you showed, it tries to paint the whole prompt into that small area of the face. It doesn't work for me; could it be the model I use?

    • @progeman · 1 year ago

      Correction: I needed to mask a little bit more of the face, then it worked

    • @TutorialesGeekReal · 1 year ago

      How did you fix this? Every time I've tried, it always draws the whole prompt in that small area.

    • @progeman · 1 year ago +1

      @@TutorialesGeekReal Try lowering the CFG scale; I put it at something like 0.4

  • @MarcioSilva-vf5wk · 1 year ago

    The Detection Detailer extension does this automatically

    • @BakerChann · 1 year ago

      How does it work? I found it to download, but I'm unsure where to put it or how to activate it.

  • @p_p · 1 year ago

    How did you paste the prompt like that at 0:35?

    • @BernardMaltais · 1 year ago +1

      I just dragged in a previously generated image. The prompt and config info are stored as metadata in each image you create... so you can just drag an image onto the prompt field and load it all back into the interface that way.

    • @p_p · 1 year ago

      @@BernardMaltais wait... whaaat?? I've been dragging into the PNG Info tab all this time for nothing lmao. Thank you!
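For the curious: Automatic1111 stores the generation parameters in a PNG text chunk named "parameters", which is what both the drag-and-drop and the PNG Info tab read back. A small round-trip sketch with Pillow; the prompt string and the tiny in-memory image are made up for illustration:

```python
import io

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a "parameters" text chunk the way Automatic1111 does when saving.
meta = PngInfo()
meta.add_text("parameters", "portrait of a woman\nSteps: 60, Sampler: Euler a")

buf = io.BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="PNG", pnginfo=meta)

# Re-open the PNG and read the prompt back out of its metadata.
buf.seek(0)
print(Image.open(buf).info["parameters"])
```

The same trick works on files from disk: `Image.open("00001.png").info["parameters"]` recovers the prompt, as long as nothing (e.g. a re-save through an editor) has stripped the text chunk.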

  • @goldenboy3627 · 1 year ago

    can this be used to fix hands?