How to change ANYTHING you want in an image with INPAINT ANYTHING+ControlNet A1111 [Tutorial Part2]

  • Published: 14 May 2024
  • #aiart, #stablediffusiontutorial, #automatic1111
    This is Part 2 of the Inpaint Anything tutorial. Previously, we went through how to change anything you want in an image with the powerful Inpaint Anything extension. In this tutorial, we will take a look at how to use the ControlNet inpaint model, and the ControlNet and Cleaner features within the Inpaint Anything extension! Installation instructions for ControlNet v1.1 are included. Let's get started!
    If you haven't seen Part 1 of this Inpaint Anything tutorial, go ahead and check it out here:
    • How to change ANYTHING...
    Chapters:
    00:00 Intro
    00:29 Outline of Topics
    01:10 Prompts and Negative prompts
    02:44 How to install ControlNet
    03:38 Where to download the ControlNet inpaint model
    04:40 How to use ControlNet 'inpaint_global_harmonious'
    06:20 How to use Inpaint Anything - ControlNet Inpaint Tab
    10:14 How to use Inpaint Anything - Cleaner Tab
    11:48 Try to fix hands with Inpaint Anything - ControlNet Inpaint Tab
    14:04 Final image
    14:50 How to use Inpaint Anything - Reference-Only Control Inpaint method
    Useful links
    ControlNet GitHub page:
    github.com/Mikubill/sd-webui-...
    Hugging Face ControlNet models for v1.1:
    huggingface.co/lllyasviel/Con...
    ControlNet inpainting:
    github.com/lllyasviel/Control...
    ControlNet Reference-Only:
    github.com/Mikubill/sd-webui-...
    Inpaint Anything GitHub:
    github.com/Uminosachi/sd-webu...
    Segment Anything GitHub:
    github.com/facebookresearch/s...
    Comparison of the different Segment Anything Models (SAMs):
    docs.google.com/spreadsheets/...
    **If you enjoy my videos, consider supporting me on Ko-fi**
    ko-fi.com/keyboardalchemist

Comments • 73

  • @DigitalGhost269 · 8 months ago · +13

    I appreciate your videos so much! Over my year-long obsession with Stable Diffusion I've watched every tutorial creator I can find on YouTube, and your stuff hits me right in my brainbox.
    You do a couple of things others don't that really deepen my ability to learn Stable Diffusion:
    - you describe not just the action but _why_ you did it, all the way down to 'I ticked this box because...'
    - you _actually zoom in on what you're doing_, and that's absolutely essential for this kind of tutorial content
    - you have a fun, friendly, 'low stakes' vibe as you narrate
    - you explain extensions and content in far greater depth than anyone else, imo; perfect for the experience tier I'm at
    Over these last two videos you've taught me:
    - to use 'Inpaint Anything', an extension I initially wrote off as 'meh, not that much more useful than inpainting' and removed
    - incidental things along the way, like cleaning up the artifacts in this video
    - _what the actual fuck ControlNet reference is used for_, because nobody seemed able to explain it adequately on YouTube, Reddit, or that horribly designed Stable Diffusion Tutorials site
    Thank you so much for your time, labor, and knowledge! It's made an appreciable difference in my understanding, workflow, final results, and enjoyment of Stable Diffusion.
    If you're looking for ideas to go deeper into, I'd love to learn more about:
    - Roop: the difference between, and effect of, 'use generated face' and 'use face restore'
    - whether one can use tools like Roop to create amalgamations of nonexistent people ('nobodies') and then train them into a LoRA for consistent characters
    - how to actually train _concepts_ into a LoRA; I got people and even my dog working, but it seems to be an entirely different process to get stuff like recurring magical/sci-fi effects, holograms, _a goddamn cigarette_ in someone's mouth/hands or - my holy grail - translucent streaming ethereal ribbons of azure barcode
    - ways to fix, or make less frustrating, the recurring memory leaks automatic1111 has
    - ways you're slowing down your automatic1111 without realizing it. Once I figured out (pretty sure you said it, tbh) that installed extensions slow down loading, I removed stuff like the image viewer and things sped up. It makes me wonder: does having an absurd number of checkpoints or LoRAs in your source directory slow it down? Something I'm beginning to suspect is true.
    But you do you; I'll enjoy your wisdom along the way no matter what you pick. I'm not your dad.
    Once again: thank you. You're an asset I truly appreciate 💪

    • @KeyboardAlchemist · 8 months ago · +3

      Hello! First of all, thank you very much for taking the time to write such a detailed comment. My initial drive to create SD tutorials came from wanting to share more in-depth tutorials with the community, so that newcomers and intermediate users would not struggle to find information the way I did when I first started out. It feels good to know that my videos made an impact for viewers like you. Plus, genuine and kind feedback like yours gives me extra motivation to keep making videos and improving. So from the bottom of my heart, thank you!
      Also, thank you for sharing ideas for future videos. Some of your ideas match videos that I'm working on, so you will definitely see a few being explained in future tutorials.
      Again, thank you for taking the time to share your thoughts and feedback. I really appreciate it! Cheers!

    • @TheDocPixel · 8 months ago · +2

      I agree 100% with everything you wrote. The other SD channels go from bad to terrible extremely fast, especially since Aitrepreneur decided to stop doing SD tutorials.

  • @Tigermania · 22 days ago · +1

    I've been using SD1.5 for a year but still found some really useful techniques in this video. 👍

  • @rexs2185 · 8 months ago · +1

    Excellent content as always. Thank you for the consistent and informative tutorials!

    • @KeyboardAlchemist · 8 months ago

      I'm glad you liked the video! Thank you very much for your support!

  • @nenickvu8807 · 8 months ago · +1

    Thanks for pointing it out. Didn't even see it there.

  • @ChrisadaSookdhis · 8 months ago · +3

    Great tutorial!

  • @ViratxDoodle · 7 months ago · +2

    Hats off to your editing. You're doing much more for the SD community than the flood of walkthrough-type videos floating around on YouTube.

    • @KeyboardAlchemist · 7 months ago

      Thank you for your kind words, I really appreciate it!

  • @TheDocPixel · 8 months ago · +2

    Absolutely the best channel for intermediate to advanced SDers! Keep up the great content; I truly enjoy your no-frills yet professionally edited tutorials. There are a lot of time-wasters and narcissists in the SD community who just like to see themselves online(!). PLEASE don't become one just for views and the algorithm.

  • @EddieLF · 8 months ago · +1

    Really great video!!

  • @MonotonousLifeEnjoyer · 8 months ago · +1

    Bro doing god's work 🙏🏻🔥Keep posting bro!! Your videos are literally so insightful bro!!!

    • @KeyboardAlchemist · 8 months ago

      I'm glad you liked the videos! Thank you for your support!

  • @user-or4ks4bs5p · 3 months ago · +1

    The settings are amazingly well explained :)

  • @DrOrion · 8 months ago · +1

    Yes, please do hand correction. Thanks!

  • @cyberprompt · 7 months ago · +1

    I actually DID like & subscribe during your breakmercial. This is a nice tutorial; I've been meaning to use ControlNet more.

    • @KeyboardAlchemist · 7 months ago

      Thank you for tuning in! I'm glad you liked the video!

  • @omniscientvillage · 6 months ago · +1

    This is huge, man. Thanks for sharing. I've been inpainting the old-fashioned way, and I now have some big-scale images that take forever using that method. I like that you can basically "export" the mask and use the "inpaint upload" section; handy for using masks from Segment Anything. And the Cleaner function is huge for me. I'm hoping these new tools can speed up the process for my channel!

    • @KeyboardAlchemist · 6 months ago

      I'm glad you found the video helpful to your workflow! This extension has been invaluable for me in making better images. I hope it will do the same for you and your channel. Cheers!

  • @magazynnn · 8 months ago

    One more question: do you have any idea how to set up Stable Diffusion on Google Colab and save work to Google Drive with preset settings?

  • @AIPixelFusion · 5 months ago · +1

    Top notch content

  • @user-ru4mm2yf4h · 8 months ago · +1

    How do you change to different hairstyles?

  • @TheBlackOperations · 6 months ago · +1

    This is WILD!!

  • @melissie7396 · 8 months ago · +1

    Amazing video! I have not found a YT tutorial for intermediate users that is this detailed.
    Quick question: based on Parts 1 and 2 of this video, isn't the ControlNet Inpaint tab in Inpaint Anything the superior method? Why bother with the regular Inpainting tab in Inpaint Anything or the ControlNet inpaint in Img2Img?
    Looking forward to your future tutorials, especially how to fix broken hands!

    • @KeyboardAlchemist · 8 months ago · +1

      Thank you for your kind feedback, I appreciate it! Regarding your question: yes, I agree that the ControlNet Inpaint tab is better than the regular Inpainting tab in the Inpaint Anything extension. However, Img2Img inpainting combined with ControlNet's 'inpaint_global_harmonious' preprocessor offers some flexibility if you need to use other ControlNet units (i.e., more than one ControlNet unit working together), which is why I wanted to show that method in the video as well.
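      To illustrate that last point, here is a minimal sketch (my own, not from the video) of what stacking more than one ControlNet unit looks like when sent through the A1111 web API; the field names follow the sd-webui-controlnet extension's API and may differ across versions, and the model names are examples:

          # Two ControlNet units in one img2img inpainting request:
          # unit 0 runs the inpaint model with 'inpaint_global_harmonious',
          # unit 1 holds the subject's pose with OpenPose while inpainting.
          controlnet_units = [
              {
                  "enabled": True,
                  "module": "inpaint_global_harmonious",
                  "model": "control_v11p_sd15_inpaint",
                  "weight": 1.0,
              },
              {
                  "enabled": True,
                  "module": "openpose",
                  "model": "control_v11p_sd15_openpose",
                  "weight": 0.8,
              },
          ]
          payload = {"alwayson_scripts": {"controlnet": {"args": controlnet_units}}}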

  • @yiluwididreaming6732 · 7 months ago · +1

    Appreciate the tutorial, thank you. It seems you can do a lot of similar image fixes using PS or Krita, and probably just as fast, if not faster... This might have its best application for finer details: hair, eyes, fingers maybe...

    • @KeyboardAlchemist · 6 months ago

      I'm glad you liked the video! Thank you for watching and for the comment!

  • @coco71920 · 8 months ago · +1

    Hey, I have a problem: when I click on "Create mask", the entire image is masked, not just the part I wanted. I already tried reinstalling the extension, but it still doesn't work.
    Anyway, nice video :)

  • @sossepanter · 7 months ago · +1

    Hi, thanks for the tutorial! I have a few problems you could maybe help me with. First, I can only run the segmentation on my CPU. Second, a lot of the Segment Anything model IDs fail with this error: (Inpaint Anything - ERROR - Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU). Do you have any idea why this could be? I have a 7900 XTX, in case that helps in any way.

    • @KeyboardAlchemist · 7 months ago

      For the first question: there is a checkbox in the Inpaint Anything settings that says "Run Segment Anything on CPU"; make sure this box is not checked. If that is not the source of the problem, you might want to check whether your computer is in fact using your graphics card when generating images.
      I'm not sure why the second issue happens; maybe it has something to do with it running on your CPU instead of your GPU. You may have to uninstall and reinstall the extension to see if that fixes it.
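      As a side note, the error message itself suggests a workaround: load the checkpoint with map_location. A minimal sketch (the checkpoint filename is just an example of a SAM .pth file):

          import torch

          # SAM checkpoints are saved from a CUDA device; on a machine where
          # torch.cuda.is_available() is False, remap the stored tensors to CPU.
          state = torch.load("sam_vit_h_4b8939.pth",
                             map_location=torch.device("cpu"))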

  • @Rasukix · 8 months ago · +1

    Amazing tutorial, but a quick question: why not use the Part 1 method and then use the reference model for just the kimono afterward? Surely ControlNet only has an impact if it has input data to use (e.g., depth, openpose, canny).

    • @KeyboardAlchemist · 7 months ago · +1

      I'm not sure if I understand the question fully, but I'll take a shot at it. In Part 2, I mainly wanted to use the example to illustrate how to use the different features of the extension, so perhaps some methods are a bit more convoluted than they need to be. I would say if you have a workflow that works well, then definitely go with it. Cheers!

  • @magazynnn · 8 months ago · +2

    Hi, great tutorial. But do you have an idea how to use inpainting to switch clothes to the clothes from another image? Asking because using prompts you'll never get an image of a man or woman in the exact clothing.

    • @magazynnn · 8 months ago · +1

      I mean, is it possible to give Stable Diffusion a flat image of a shirt or other clothing and then try to put it on the person?

    • @KeyboardAlchemist · 8 months ago

      Hi, thank you for watching! I answered a similar question yesterday; the short answer is, you can do it, but it's not very easy. Here is the long answer:
      I have not seen a perfect workflow that will essentially copy a piece of clothing from a reference image onto an input image, but the workflow I showed in this video, inpainting plus the ControlNet Reference preprocessor, will get you close (you can do this in Img2Img too). To increase your chances of success:
      (1) make sure your reference image and input image are the same size; you will have a much easier time with it,
      (2) don't put in any positive prompts while inpainting; you never know which keyword will mess with the reference clothing's style (you can always add keywords back later),
      (3) set your inpaint denoising strength very high (0.9 - 1.0),
      (4) set your Control Weight very high (greater than 1.5),
      (5) set Control Mode = 'ControlNet is more important', and
      (6) try a few different models/checkpoints, because the model's impact on this process is very high.
      Finally, you will probably need to generate a bunch of images with a random seed, and hopefully one comes out the way you like.
      I hope this helps you. Cheers!
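      If you'd rather script this than click through the UI, here is a minimal sketch (an assumption on my part, not something shown in the video) of those same settings sent to A1111's img2img API; field names follow the sd-webui-controlnet extension's API and can vary by version, and all file paths are placeholders:

          import base64
          import requests

          def b64(path):
              # A1111's API expects images as base64-encoded strings.
              with open(path, "rb") as f:
                  return base64.b64encode(f.read()).decode()

          payload = {
              "init_images": [b64("input.png")],    # image whose clothing to replace
              "mask": b64("clothing_mask.png"),     # mask exported from Inpaint Anything
              "prompt": "",                         # (2) no positive prompt
              "denoising_strength": 0.95,           # (3) very high denoising strength
              "alwayson_scripts": {
                  "controlnet": {
                      "args": [{
                          "enabled": True,
                          "module": "reference_only",     # reference preprocessor; no model needed
                          "weight": 1.8,                  # (4) Control Weight > 1.5
                          "control_mode": 2,              # (5) 'ControlNet is more important'
                          "image": b64("reference.png"),  # same size as input.png, per (1)
                      }]
                  }
              },
          }
          r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
                            json=payload, timeout=600)
          r.raise_for_status()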

    • @magazynnn · 8 months ago · +1

      @KeyboardAlchemist Thank you a lot, I'll try it, and if it works I'll share a link :)

  • @masroorbabar3381 · 8 months ago · +1

    subscribed

  • @DoozyyTV · 8 months ago

    Is this available for ComfyUI?

  • @Aaisn · 8 months ago · +1

    What is the difference between the Inpainting menu and the ControlNet Inpainting menu?

    • @KeyboardAlchemist · 7 months ago · +1

      Good question! Within the Inpaint Anything extension, the Inpainting menu is like a simplified version of the normal Img2Img Inpaint interface. The ControlNet Inpainting menu is like using the Img2Img Inpaint interface plus enabling a ControlNet unit with the inpaint model selected. Hope this helps!

  • @user-wd2nc4mp9m · 8 months ago · +1

    Is there any way to keep the exact same clothes from the reference image? Or to change the girl in the input image but keep her clothes?

    • @KeyboardAlchemist · 8 months ago · +2

      Okay, your second question is easier: just use the inpainting-with-ControlNet method I showed in this video to change the girl's face. If you need it to be a specific face, then you will probably need to use Roop.
      Your first question is a bit more involved; I gave @magazynnn a long answer above, and the same checklist applies: same-size reference and input images, no positive prompts while inpainting, denoising strength of 0.9 - 1.0, Control Weight above 1.5, Control Mode = 'ControlNet is more important', and trying a few different checkpoints. Then generate a bunch of images with a random seed and pick the one you like.
      Best of luck!

  • @vincentmilane · 5 months ago · +1

    Hello,
    Thank you very much for your content.
    I tried to reproduce the part with reference. There is one problem: I created the mask, but when I run the process it also changes the rest of the picture, not just the masked area as it should.
    Do you have any idea where that comes from?
    Best regards

    • @KeyboardAlchemist · 5 months ago

      You're welcome! Regarding your problem: I found that sometimes the program remembers your previous mask (this is a bug). It doesn't show it in the mask window, but it combines your previous mask with the current one, and that might be why things outside your current mask are changing. The fix is to clear everything and re-create the mask. If that doesn't work, reload the web UI. I hope this helps.

  • @sessizinsan1111 · 4 months ago

    When I try to inpaint, the section at 10:28 shows up black (with "start drawing" written inside). What should I do?

  • @TheFoxstory · 8 months ago · +1

    My sd-webui-controlnet/models folder is empty except for the one model that I put in. How so?

    • @KeyboardAlchemist · 8 months ago

      I just checked my folder and all the .yaml files are gone too. I think it has something to do with the latest v1.1.4 update. If you put the model files in there and everything works, then don't worry about the .yaml files. If you need to download the .yaml files, they are here: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
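      If you'd rather script the download, here is a minimal sketch using huggingface_hub (the destination folder is an assumption; point local_dir at your own ControlNet models directory):

          from huggingface_hub import hf_hub_download

          # Fetch the inpaint model and its matching .yaml from the
          # lllyasviel/ControlNet-v1-1 repo into the extension's models folder.
          for name in ("control_v11p_sd15_inpaint.pth",
                       "control_v11p_sd15_inpaint.yaml"):
              hf_hub_download(repo_id="lllyasviel/ControlNet-v1-1",
                              filename=name,
                              local_dir="extensions/sd-webui-controlnet/models")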

  • @LouisGedo · 8 months ago · +1

    👋

  • @knoqx79 · 6 months ago · +2

    4:15 I don't have any of those models already in my folder; is that normal? Also, when I try to download the .yaml it appears as .txt... What is a .yaml? ^^'

    • @KeyboardAlchemist · 6 months ago

      Actually, no need to worry if you don't already have the .yaml files in that folder. After I made this video, a particular ControlNet update got rid of all my existing .yaml files, so I believe they are no longer needed for the models to work. You just need to download the .pth files from Hugging Face. There's a link in the video description if you need to find the model files.

  • @fortoday04 · 3 days ago

    How are you doing the AI voice?

  • @novysingh713 · 3 months ago

    Why does only Inpaint Anything use all of my GPU when I upload any image, and then give an "out of CUDA memory" error?

  • @knoqx79 · 6 months ago · +1

    5:43 I get this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320).
    The image I'm trying to inpaint is 840h x 512w.

    • @knoqx79 · 6 months ago · +1

      Or this one, when I use low VRAM: AttributeError: 'ControlNet' object has no attribute 'label_emb'

    • @KeyboardAlchemist · 6 months ago

      @knoqx79 Hi, thanks for watching the video! Unfortunately, I have never gotten those errors when inpainting with ControlNet, so I won't be much help. You might want to update your ControlNet extension, in case it isn't already. I hope you figure them out.

  • @artofgarduno · 6 months ago

    What's the difference between inpainting in ControlNet vs. inpainting in img2img?

    • @KeyboardAlchemist · 6 months ago

      To clarify: you do inpainting in img2img, but ControlNet has an inpainting model that supports the process and helps make the inpainting result better. Hope this helps. Thanks for watching!

  • @philipp1960 · 8 months ago · +1

    RNGsus - I fell off my chair mate!

  • @corza5647 · 6 months ago · +1

    My face skin tones don't match; it looks like a bad Photoshop face replacement for some reason. How do you get it to not suck when it isn't working?

    • @KeyboardAlchemist · 6 months ago

      After inpainting, I would do latent upscaling in img2img to get rid of artifacts like the skin-tone mismatch. Take a look at my other inpainting video where I explain how to do latent upscaling; it starts at 16:35. I hope this helps you. Cheers!

  • @futurefun3274 · 2 months ago

    You talk too fast, like you're in a rush to finish this lesson 😂 I didn't understand much except the installation. 😂 yeahhh