How to change ANYTHING you want in an image with INPAINT ANYTHING A1111 Extension [Tutorial Part1]

  • Published: 10 Jun 2024
  • #aiart, #stablediffusiontutorial, #automatic1111
    This tutorial walks you through how to change anything you want in an image with the powerful Inpaint Anything extension. We will install the extension, then show you a few methods to inpaint and change anything in your image. The results are AMAZING!
    Chapters:
    00:00 Intro
    01:12 Overview of Inpaint Anything Extension
    01:43 Install Inpaint Anything
    02:37 How to use Inpaint Anything
    03:18 Comparing different SAMs
    04:43 Changing the cloth
    10:05 Changing the background - method 1
    11:55 Changing the background - method 2
    13:27 Continue to change the image
    15:30 Changing hair color
    16:35 Bonus: Latent Upscaling to fix minor issues
    18:49 Final result
    Useful links
    Inpaint Anything github:
    github.com/Uminosachi/sd-webu...
    Segment Anything github:
    github.com/facebookresearch/s...
    Comparison of the different Segment Anything Models (SAMs):
    docs.google.com/spreadsheets/...
    **If you enjoy my videos, consider supporting me on Ko-fi**
    ko-fi.com/keyboardalchemist

Comments • 182

  • @whalhard
    @whalhard 9 months ago +16

    This is one of the clearest videos on a Stable Diffusion subject that I have seen, without feeling rushed. Well done.
    Keep making them and I will keep watching them. 👍

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +2

      Thank you very much! I'll keep them coming! =)

    • @sairampv1
      @sairampv1 9 months ago

      @@KeyboardAlchemist Can you change the pose of generated images, as in give them actions? For example, take a portrait picture and make the subject run, fight, climb, etc.?

  • @rexs2185
    @rexs2185 9 months ago +1

    Once again, KA brings the great tutorials! Thank you for the detailed explanation!

  • @undoriel
    @undoriel 7 months ago +1

    Your tutorials on SD are the easiest to follow and very informative. Please keep them coming! You've got yourself a subscriber :)

    • @KeyboardAlchemist
      @KeyboardAlchemist  7 months ago

      I'm glad you liked my videos! Thank you for supporting my channel!

  • @FullStackFalcon
    @FullStackFalcon 8 months ago +2

    Amazing tutorials, your content got me hooked on SD and AI editing. You are a great teacher, liked and subbed 🚀

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago

      I'm glad you liked the tutorial! And thanks for your like and sub!

  • @brynbulloch
    @brynbulloch 9 months ago +12

    You are a REALLY great teacher! Everyone has their own learning style and since getting started in AI art, I have watched countless different channels hoping to find someone whose pace felt natural to me. I finally found you!!! I gave up on SD and A1111 several months ago out of frustration but I have missed control over the details that MJ lacks. Watching your tutorial made me eager to give it another go. Can’t wait to watch the rest of your videos. I have LIKED and SUBSCRIBED and I will definitely SHARE your content. Thank you for the time and attention to detail in this video. I especially appreciated that you put important details in text in sync with where you were speaking about them. Very helpful to hear and read the important points at the same time. I wish you the BEST of luck with your channel and can’t wait to watch your subscriber numbers SOAR soon! Sorry this was so long. But gotta go now and watch some more of your vids!

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      Thank you very much for your kind feedback and your support! I hope you enjoy my other videos and all future videos as well. Cheers!

  • @allenraysales
    @allenraysales 9 months ago +1

    Thank you, just what I needed! Keep up the great tutorials! Time saver!

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      Thank you very much! I'm glad this was helpful for you. Stay tuned for Part 2 of this Inpaint Anything video.

  • @winonaiverdoberman2496
    @winonaiverdoberman2496 9 months ago +1

    OMG!! This is a SUPER helpful and detailed tutorial!! I have been dying to learn how to do these things. Finally a dream come true!!! Thank you sooo much!! Defo SUBSCRIBED and LIKED IT!!!!

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      I'm glad this tutorial was helpful for you! Thank you very much for the support! I have part 2 of this inpainting tutorial coming soon. Stay tuned.

  • @cbccbd
    @cbccbd 8 months ago +1

    Great video. You deserve A LOT more views!

  • @sebastianmueller1740
    @sebastianmueller1740 9 months ago +1

    Great and detailed tutorial, thank you!

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      You're welcome! I'm glad you enjoyed the video!

  • @76abbath
    @76abbath 9 months ago +1

    I didn't know about this extension, thanks a lot for this video!!! ❤

  • @Bj0rn666
    @Bj0rn666 2 months ago +1

    This video just earned you a new follower. I'm using SD Forge, but this is still good information. Thanks!

  • @TapticDigital
    @TapticDigital 3 months ago +3

    You can indeed zoom in on inpaint with A1111, just hover over the (i) button for a list of controls. Alt+wheel to zoom, ctrl+wheel to adjust brush, etc.

    • @HorseyWorsey
      @HorseyWorsey 3 months ago

      Based, but what (i) button? I don't see it in the Inpaint section or anywhere, really.

  • @chapicer
    @chapicer 6 months ago +1

    Your channel is so great, please continue making videos!!!

  • @jettro8523
    @jettro8523 9 months ago +1

    Great video, covered many questions I had!

  • @SteveWarner
    @SteveWarner 9 months ago +50

    Really great tutorial. Just a heads up. There's a much faster and easier way to do this that uses less resources. It's the Photopea extension. It will add Photopea, which is akin to an online version of Photoshop, into your A1111 install. You send the T2I image to Photopea, then use the standard masking features that you would in a program like Photoshop to mask out the area you want. The full range of tools is there and you can make extremely complex masks in seconds. When done, use the Send to Inpaint button in Photopea to send the image and mask back to the I2I section of A1111. Now you don't have to download unnecessary Segmentation models that eat up your hard drive space. This works like a charm and makes all inpainting tasks so much easier.

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +5

      Thank you for the tip! I've heard good things about Photopea. Will definitely give it a try!

    • @deama15
      @deama15 9 months ago +3

      @@KeyboardAlchemist Another video with Photopea?

    • @DannySi
      @DannySi 9 months ago

      Not sure if it's because I'm using an A1111 fork called SDNext, but the Photopea extension doesn't seem to work properly. It doesn't allow me to send anything to Photopea or back from Photopea, idk.

    • @j_shelby_damnwird
      @j_shelby_damnwird 8 months ago +2

      Great suggestion, man, thank you very much!

    • @Elfyja
      @Elfyja 8 months ago

      This made me giggle. It's understandable, but I'm imagining a person who only knows how to open their email and surf the internet finding this comment and being like ????

  • @barcob5558
    @barcob5558 9 months ago +1

    Excellent! Thanks for sharing.

  • @daishum000
    @daishum000 5 months ago

    It's so helpful and detailed! Thanks!

  • @alec-gy8ey
    @alec-gy8ey 8 months ago +5

    You can zoom in by holding Alt + mouse scroll

  • @yeezythabest
    @yeezythabest 9 months ago +1

    Subscribed and activated the bell! Great video.

  • @Vadim666I
    @Vadim666I 9 months ago +1

    Great tutorial. I'll try this tomorrow :)

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      Thank you and have fun! This is a great extension.

  • @Rasukix
    @Rasukix 8 months ago +1

    Incredible tutorial!

  • @ignat3802
    @ignat3802 2 months ago +1

    Thanks my guy. Great guide, even my stunted brain could understand!

  • @lenny_Videos
    @lenny_Videos 7 months ago +1

    Thanks for the great tutorial 🙂

  • @CEAG23
    @CEAG23 9 days ago +1

    Thanks!!!!!

  • @just_logi
    @just_logi 8 months ago +1

    Really good, thank you.

  • @sb6934
    @sb6934 8 months ago +1

    Thanks!

  • @blender_wiki
    @blender_wiki 8 months ago +1

    It is so refreshing to hear a real person talking with a real voice, instead of those cartoonish YouTubers who are really hard to follow in a tutorial with their funky voices.

  • @DrivenTrigger
    @DrivenTrigger 5 months ago

    What GPU are you using, out of curiosity? For Inpaint Anything I only get about 1.5 it/s on a 3080.
    Great tutorial also, liked and subscribed 👍

  • @japaoyagami3273
    @japaoyagami3273 5 months ago

    Thank you very much for teaching.

  • @anup-kaushal
    @anup-kaushal 6 months ago +1

    Really detailed and to the point, loved it

    • @KeyboardAlchemist
      @KeyboardAlchemist  6 months ago

      Thank you! I'm glad you liked the video.

    • @anup-kaushal
      @anup-kaushal 6 months ago

      You're welcome @@KeyboardAlchemist

  • @_inspirasiislam
    @_inspirasiislam 9 months ago +1

    Thanks

  • @tomarco7998
    @tomarco7998 9 months ago +2

    Is it also possible to inpaint image2image here? Got a photo of a shirt that I want to replace on a model generated with Midjourney.

  • @waterwater5931
    @waterwater5931 7 months ago

    Thank you for this impressive video! I would like to know: is it possible to put a target piece of clothing from another image into the masked area, instead of using a prompt to generate random clothing?

    • @KeyboardAlchemist
      @KeyboardAlchemist  7 months ago

      Yes, you can. Watch Part 2 of my Inpaint Anything video (ruclips.net/video/k8FfCicu5G8/видео.html) where I provide some suggestions using the ControlNet Reference Only preprocessor.

  • @wakeup2.369
    @wakeup2.369 8 months ago +1

    You can enlarge the image by pressing the Alt key and using the mouse wheel!

  • @celiocarvalho64
    @celiocarvalho64 9 months ago +2

    3:17 shows the message "Segment Anything failed".
    How do I solve it?

  • @gothix114
    @gothix114 7 months ago

    Just out of curiosity, do you usually have 2 people talking/alternating in your videos?

  • @vanarunedottir
    @vanarunedottir 8 months ago

    Does this work with all versions of SD? In particular, does it work with the latest SDXL, or only 1.5?

  • @joeskis
    @joeskis 3 months ago

    Do you know what to do if we're getting an error during Run Segment Anything: "cannot set version_counter for inference tensor"?

  • @angloland4539
    @angloland4539 9 months ago +1

  • @DrAmro
    @DrAmro 9 months ago +3

    Hey Alchemist, I'll nominate you "man of the year" for the Nobel prize, you're a living guide, bro....
    By the way, can you make a guide about the secret capitalized keywords like BREAK, AND, and the others we don't know anything about, plus advanced extensions and their detailed uses?
    I think it'll be a magical series.❤👍

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +3

      Thank you very much for your suggestions! It's funny that you mention keywords like BREAK, AND, etc. I'm working on a tutorial about prompting basics, which will include these keywords, and I can explore their uses and effects a bit more in that video. And of course, I will have more tutorials about the detailed usage of A1111 extensions. Stay tuned for more! Cheers!

  • @xunbaoxinwen
    @xunbaoxinwen 8 months ago

    I tried to install the "Inpaint Anything" extension on Colab, but it doesn't show up on the main page. Can anyone help?

  • @TransCanadaPhil
    @TransCanadaPhil 5 months ago

    All I get is a black background, not sure what I'm doing wrong. I'm putting in a prompt, but whatever I mask out and generate always comes back as just a black background.

  • @diablokatakuri
    @diablokatakuri 6 months ago +1

    Hey @KeyboardAlchemist, great tutorial. I just want to ask: how do you get the custom inpainting model at 7:28, and how did you install it? Can you put it in safetensors/pickletensor format? Uminosachi said the models are in diffusers format in this folder: C:\Users\username\.cache\huggingface\hub

    • @KeyboardAlchemist
      @KeyboardAlchemist  6 months ago

      Hi, I'm glad you liked the video. This is correct, the models are located in this directory ('C:\Users\username\.cache\huggingface\hub'). I did not have to manually put the models in here though. After I installed the Inpaint Anything extension, these were auto-populated. You can try adding a subfolder in the 'hub' folder with this name: 'models--Uminosachi--realisticVisionV51_v51VAE-inpainting' to see if it will pull the files for you. If it does successfully pull the files from huggingface, you should get a 'snapshots' subfolder with the model in there. If this doesn't work, then you might have to reinstall the Inpaint Anything extension. I hope this helps you.
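
For anyone who prefers to pre-fetch the model from Python instead of waiting for the extension's first run, here is a minimal sketch using the huggingface_hub library. The repo id is inferred from the 'models--Uminosachi--realisticVisionV51_v51VAE-inpainting' folder name mentioned above, so treat it as an assumption rather than something confirmed in the video.

# Hedged sketch: pre-download the diffusers inpainting model into the same
# ~/.cache/huggingface/hub folder that the Inpaint Anything extension reads from.
# The repo id below is inferred from the cache folder name discussed above.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Uminosachi/realisticVisionV51_v51VAE-inpainting",
)
# Prints .../hub/models--Uminosachi--realisticVisionV51_v51VAE-inpainting/snapshots/<revision>
print(local_path)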

  • @FullStackFalcon
    @FullStackFalcon 8 months ago +1

    Please do a video on prompts.

  • @lugotorix6911
    @lugotorix6911 8 months ago +1

    Thanks for the tutorial. Is there a way to change just the pose and keep the face and clothes the same? I'm looking forward to that tutorial if it can be done somehow.

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago +1

      Thanks for watching! Yes, there are a number of ways that you can go about it. Most of it will involve using ControlNet models. I may make a video about it down the line, but it might be a while since I have quite a few videos in the queue. Stay tuned.

  • @MobileJeremie
    @MobileJeremie 4 months ago

    Great video! I loaded it, created a mask, and selected the models/sampler, but I am getting an error when I run inpainting... non-programmer here, any idea why?

  • @WaseemOnlines
    @WaseemOnlines 3 months ago

    I get a black image when I press Run Segment Anything, any idea why?

  • @proyectorealidad9904
    @proyectorealidad9904 9 months ago +3

    You can zoom with mouse wheel + Alt.

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      TIL, thank you for this tip! I never knew about this keyboard shortcut.

  • @rezahasny9036
    @rezahasny9036 5 months ago

    Dude, can you make a tutorial on generating with Inpaint Anything without changing the pose, like when we use the OpenPose ControlNet?

  • @user-gq2bq3zf1f
    @user-gq2bq3zf1f 7 months ago

    When I run Inpaint Anything in the Stable Diffusion UI, specifically when I run inpainting, I keep getting the error "Unexpected end of JSON input". I ran it through Google Labs, what should I do?

  • @chrisrosch4731
    @chrisrosch4731 7 months ago +2

    Really enjoyed this tutorial. Do you think there is a way to add specific items to an image using inpaint? Let's say I want to add specific lamps, plants, or paintings to an image, how would I go about this? Does it make sense to train my own LoRA for each item and then just use the LoRA for the mask to add the specified object in the image? Can't quite wrap my head around how that could be achieved. Liked and subscribed! :)

    • @KeyboardAlchemist
      @KeyboardAlchemist  7 months ago

      Hello, thanks for the sub! Yes, a technique that you can use is, put your image into photoshop or gimp, then overlay or draw the object you want onto the image, then bring that edited image into a1111 and do inpaint on that object. Or another way is you can use Inpaint Sketch to draw the object you want directly on to the image within A1111. You can use different colors to give some context to the AI. Both of these methods will give you more consistent results. Hope this helps!

    • @chrisrosch4731
      @chrisrosch4731 7 months ago +1

      Is there a way to use the second option and get consistent results, i.e. the same model of lamp placed in the room for different rooms? The problem when I just use Inpaint Sketch is that it will just place any generic lamp, no? Does it make sense to train my own LoRA (or maybe use DreamBooth?) to get more consistent results without using Photoshop? Thanks for your help! @@KeyboardAlchemist

    • @KeyboardAlchemist
      @KeyboardAlchemist  7 months ago

      @@chrisrosch4731 Yes, training your own LORA of the object and then inpaint using that LORA will definitely do the trick. But if you have limitations regarding training your own LORAs, then you can also try a different method (not involving a LORA), which will work but may involve a bit of trial and error. If you have watched part2 of my inpainting video (link here for reference: ruclips.net/video/k8FfCicu5G8/видео.html), I described a method to use Inpainting + Control Net Reference preprocessor, which will probably get you close to what you want to do (you can do the same method in Img2Img as well; not just within Inpaint Anything extension). Be sure to do the following things to increase your chances of success: (1) make sure your reference image and input image are the same size; you will have a much easier time with it, (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference image's style (you can always add keywords back later), (3) make sure your inpaint denoising strength is very high (0.9 - 1.0), (4) make sure your Control Weight is very high (greater than 1.5), (5) Control Mode = 'ControlNet is more important', and (6) you may need to try a few different models/checkpoints because the impact of the model on this process is very high. Finally, you will probably need to generate several images with random seeds and hopefully get the one that you like. Hope this helps!
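
For readers who drive A1111 through its HTTP API (started with --api) rather than the UI, the settings listed above translate roughly into an img2img payload like the sketch below. The ControlNet keys follow the sd-webui-controlnet extension's alwayson_scripts format, which varies between versions, so treat the exact field names and file names as assumptions.

# Hedged sketch: inpaint with a reference image via the A1111 HTTP API.
# File names are placeholders; ControlNet field names may vary by extension version.
import base64
import requests

def to_b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [to_b64("input.png")],
    "mask": to_b64("mask.png"),            # white = region to repaint
    "prompt": "",                          # (2) start with no positive prompt
    "denoising_strength": 0.95,            # (3) very high, 0.9 - 1.0
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "image": to_b64("reference.png"),   # (1) same size as the input image
                "module": "reference_only",
                "weight": 1.6,                      # (4) control weight above 1.5
                "control_mode": "ControlNet is more important",  # (5)
            }]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
resp.raise_for_status()
print(len(resp.json().get("images", [])), "image(s) returned (base64-encoded)")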

    • @chrisrosch4731
      @chrisrosch4731 7 months ago

      Hey Keyboard Alchemist. First off, thank you so much for your detailed answer. Honestly it took me quite a while to reply to you because this is new to me and I first had to dig a little deeper to understand your reply.
      Now, if I understood you correctly training my own LoRas with specific models of furniture or species of plants should yield the most consistently good results. I would love to go for the option that produces the best results without having to do a lot of manual refining. My goal is to have my clients use this so they can upload images of their own apartments and have good results where the furniture looks realistic, both in terms of the actual model of furniture (e.g. specific Ikea lamp) and also in terms of the furniture looking realistic in the image.
      I watched your part 2 inpaint anything video twice but was not able to get the inpaint anything tab shown. Did they change the appearance? Is it now integrated into the below tab where you have to click the checkbox to enable Unit 0, unit 1, etc? Maybe I have to uninstall everything to have it show again? Maybe that is not needed anymore and the below tab of Control net yields similar results?
      Really grateful for the information you provide and if there is anything that comes to your mind that could work best for my experiment please let me know. I do not know if that is an option to you but if we could hop on a quick 5 minute Discord call and talk about possibilities I would be so happy. Also willing to pay you for your time of course (also beforehand if you wish).
      Cheers,
      Chris
      @@KeyboardAlchemist

  • @MrFreeagent505
    @MrFreeagent505 8 months ago

    Hi, I haven't been able to run Inpaint Anything. I get "ImportError: cannot import name 'YOLO' from 'ultralytics' (unknown location)". I've spent a good bit of time looking but can't find a solution to what I'm doing wrong. Thank you if anyone can help.

  • @lilillllii246
    @lilillllii246 7 months ago

    I use Stable Diffusion locally, and when I press Run Segment Anything in Inpaint Anything, it doesn't generate a mask image. What should I do?

  • @rudeoff
    @rudeoff 6 months ago

    Did you change the nationality of your AI voiceover halfway through this video?

  • @novysingh713
    @novysingh713 3 months ago

    Why does only Inpaint Anything use all of my GPU when I upload any image, and then give an "out of CUDA memory" error?

  • @philliphartman2381
    @philliphartman2381 8 months ago

    Why does processing take so much longer with this app? Isn't there a way to control resolution?

  • @Gh0sty.14
    @Gh0sty.14 9 months ago

    For some reason it's not adding any of the inpainting models I already have.

  • @the17bman
    @the17bman 5 months ago

    Need help... I downloaded the models but when I hit the "Run Segment Anything" button, it just fails almost instantly. Saying something about tensor sizes not matching. How am I supposed to fix that?

  • @chea9986
    @chea9986 6 months ago

    I followed all the steps, but the Run Inpainting step gives an "error" message. How do I fix it?

  • @relaxation_ambience
    @relaxation_ambience 9 months ago +1

    Hi, from your examples I see that you inpaint things that already exist. But, for example, if I want a parrot on her shoulder, will it inpaint that?

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +1

      The short answer is yes, you can inpaint an area in the image and prompt for something that doesn't already exist in the image, but the result you get will be inconsistent. You might get lucky and get the result that you want, or you might re-roll a bunch of times and still do not get the result that you want. A technique that you can use is, put your image into photoshop or gimp, then overlay or draw a parrot on her shoulder, then bring that edited image into a1111 and do inpaint on that parrot portion of the image. You will have a much easier time. Hope this helps.
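
If you would rather script the "overlay first, then inpaint" step than open Photoshop or GIMP, here is a rough Pillow sketch; the file names and paste coordinates are placeholders.

# Hedged sketch: paste a rough cut-out onto the photo, then inpaint that region
# in A1111 so the AI blends and refines it. File names/coordinates are placeholders.
from PIL import Image

base = Image.open("portrait.png").convert("RGBA")
parrot = Image.open("parrot_cutout.png").convert("RGBA")  # cut-out with a transparent background

# The cut-out's alpha channel acts as the paste mask.
base.alpha_composite(parrot, dest=(420, 310))
base.convert("RGB").save("portrait_with_parrot.png")

# Next: load portrait_with_parrot.png in img2img, mask only the pasted parrot,
# and inpaint at a moderate denoising strength.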

  • @jonorgames6596
    @jonorgames6596 8 months ago

    I'm on an AMD GPU. It gives me errors: ... Cannot set version_counter for inference tensor...

  • @u.google
    @u.google 9 months ago +1

    How do you move the masked photo from the Inpaint Anything tab to img2img? There's no "Only masked padding, pixels" setting in it, so I want to move it to the img2img inpaint. Is there a way to do that? Please help.

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago +1

      Yes, there is a way to do this, if I'm understanding your question correctly. After you create your mask, on the left-hand side there is a 'Mask Only' tab. In that tab, you can click the 'Get Mask' button, then click the 'Send to Img2Img Inpaint', which will bring the mask to the 'Inpaint Upload' tab within Img2Img. I hope this helps you. Cheers!

  • @JeanDeLaCroix_
    @JeanDeLaCroix_ 8 months ago +1

    When I use the standard version of inpaint in Img2img, I get results that are heavily influenced by the masked area. For example, if I want to change the clothes and the character is wearing white, it's hard for me to replace it with red without going through Photoshop. Does this method help to ignore a bit more what's happening under the mask?

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago

      I'm just guessing here, but it sounds like you might be using the Masked Content = 'original' setting. If you want to change something in the regular inpaint interface, you should be using Masked Content = 'fill'. If you use Inpaint Anything, there is no selection for which Masked Content setting, so in a sense this extension makes it a bit easier for you by taking away some of the options that could cause you problems. I hope this helps you. Cheers!
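
For reference, the Masked Content dropdown discussed above is exposed as the inpainting_fill field in A1111's img2img API. The numeric mapping below reflects how the WebUI has historically ordered the options and may change between versions, so treat it as an assumption.

# Hedged sketch: img2img API fields that correspond to the regular inpaint UI options.
# The mapping is assumed from the historical order of the Masked Content dropdown.
MASKED_CONTENT = {"fill": 0, "original": 1, "latent noise": 2, "latent nothing": 3}

payload_fragment = {
    "inpainting_fill": MASKED_CONTENT["fill"],  # repaint from scratch, ignore what is under the mask
    "inpaint_full_res": True,                   # the 'Only masked' inpaint-area option
    "mask_blur": 4,
}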

    • @JeanDeLaCroix_
      @JeanDeLaCroix_ 8 months ago +1

      @@KeyboardAlchemist Thanks! I'll test that :)

  • @bingbang9643
    @bingbang9643 9 months ago +1

    I've been using Midjourney, but the wide range of options in Stable Diffusion makes me feel I'm missing out. Can you guys comment on all the reasons why Stable Diffusion is better? Thanks... I have an RTX 3060 and a 1060 Ti, but I've heard those GPUs are not good enough, so I didn't even bother trying to install Stable Diffusion.

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +1

      An RTX 3060 (assuming an 8GB card) is more than enough to do Stable Diffusion with. There are some applications where more VRAM is better, but overall with 8GB you can do a lot with Stable Diffusion.
      Personally, I just don't want to pay for Midjourney and I don't feel like doing my image generation online. So I run Stable Diffusion locally and free on my PC. But there are many different reasons why someone might want to use Stable Diffusion over Midjourney or vice versa, and you can find plenty of those opinions in YouTube videos.

    • @PawFromTheBroons
      @PawFromTheBroons 6 months ago

      I do everything I want, with very advanced usage, sporting a 2060.
      So you should be fine...

  • @OptimusGPrime
    @OptimusGPrime 9 months ago +2

    So I got this to change the colour of my image's hair, but it keeps changing the hairstyle. How do you get it to keep the hairstyle but only change the colour?

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      With this type of simple inpainting, unfortunately, we are at the mercy of RNG for the most part. You can try a couple of things to increase your odds a little: (1) specify the hair style you want in the positive prompt (i.e., instead of just saying "pink hair", you can say "long pink hair with broad curls"), (2) similarly if the model is constantly giving you hairstyle that you don't want, you can specify those styles in your negative prompt, (3) make sure that you don't expand the mask area too much, I would say 2 to 3 clicks of the 'expand mask region' should be enough. I hope this helps as a quick fix.
      In future videos, I'll introduce ways to use ControlNet to keep your composition exactly the same as the reference image. So stay tuned for more content later on! Cheers!

  • @datngo27
    @datngo27 7 months ago

    I got the error "Segment Anything failed". Anyone know how to fix it? Many thanks.

  • @i01binary
    @i01binary 9 months ago

    Try pressing S to get a full-screen canvas.

  • @GES1985
    @GES1985 13 days ago +1

    Is there a way to take an item or jewelry from one picture and put it into another? Or is that just something to do in Photoshop?

    • @KeyboardAlchemist
      @KeyboardAlchemist  10 days ago

      I made a video previously about this, check it out here: ruclips.net/video/akzu3R7lDZ4/видео.html. I hope this helps.

  • @duskairable
    @duskairable 8 months ago

    At 17:35, I'm curious why you needed to upscale the image to 720x1080 in order to change/fix/add detail. Is that even necessary?
    Why not just keep the same image resolution, change the denoising strength, and optionally upscale the image later?
    In my experiments, changing only the denoising strength is enough to change/fix/add to the image (no need to upscale).
    I've tried it this way and there is no difference in detail between the upscaled image and the non-upscaled image with the same denoising strength;
    the only difference is the image resolution, of course, but the detail on the subject is the same.

  • @arifkuyucu
    @arifkuyucu 3 months ago

    I get the error "Segment Anything failed". return torch.empty_strided(
    TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

  • @unoreverseyourmom6119
    @unoreverseyourmom6119 9 months ago

    Great tutorial. Any tips on how to generate naked full body portraits of myself in different poses? I need really cool pics for my tinder.

  • @xyzxyz324
    @xyzxyz324 7 months ago +1

    Why do the models have "inpaint" in their names? Are there different versions of the models to use for inpainting, i.e. realisticvision vs. realisticvision-inpainting?

    • @KeyboardAlchemist
      @KeyboardAlchemist  7 months ago

      Yes, some models have an inpaint version; not all models do, though.

  • @guillermosepulvedaf
    @guillermosepulvedaf 9 months ago +1

    Hello, I'm trying to find "realisticVisionV30_v30VAE-inpainting", but on Civitai the file is "realisticVisionV51_v30VAE-inpainting.safetensors".. is it the same version??

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +1

      Yes, that's perfectly fine. It's just the latest version of the realisticVision model. I have this version too.

    • @guillermosepulvedaf
      @guillermosepulvedaf 9 months ago

      @@KeyboardAlchemist Thanks!!

  • @RSV9
    @RSV9 9 months ago +3

    It is a good tool for complex masks and the result is very good, but on my computer it is extremely slow. With A1111's normal inpainting it's much faster and also gives good results, so I don't know why this extension is so slow. I only have an NVIDIA GeForce RTX 3050 Ti 4GB, maybe Google Colab could be faster.
    Good job, thanks

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      Thank you!

    • @kartikashri
      @kartikashri 8 months ago

      You can reduce the sampling steps to 30 or 20 to generate faster, but note it might reduce quality.

  • @bazadam6635
    @bazadam6635 9 months ago +1

    I see that it is downloading something when I am running the inpainting, and it is taking forever to show the results. Downloading something like PyTorch... any help?

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      I'm assuming this happened after you clicked on 'Run Inpainting'? The first time you ever run this extension, it will be downloading some things in the background, which includes the inpainting model that you have selected (those are files around 2GB or more), so it will take a few minutes. But after download is complete, you should be able to see results. I hope it worked for you.

    • @bazadam6635
      @bazadam6635 9 months ago

      @@KeyboardAlchemist Figured it out, it was downloading the inpainting model.

  • @Yoshenesis
    @Yoshenesis 9 months ago

    Hello, I have original clothing designs. I usually do deformations in Photoshop to adjust them to a model, but it's a lot of work. I see that you can change clothes and even people's faces, but I don't know if I can use my own clothes without having to train a model. Is there a method to change clothes from one image to another? Greetings

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago +1

      Hello, thanks for watching! The short answer is, you can do it, but it will take you some trial-and-error and time. Here is the long answer:
      I have not seen a perfect workflow that will essentially copy a piece of clothing from a reference image to an input image, but the workflow that I showed in this video (ruclips.net/video/k8FfCicu5G8/видео.html) with Inpainting + Control Net Reference preprocessor will get you close (you can do this in Img2Img too). Be sure to do the following things to increase your chances of success: (1) make sure your reference image and input image are the same size; you will have a much easier time with it, (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference clothing's style (you can always add keywords back later), (3) make sure your inpaint denoising strength is very high (0.9 - 1.0), (4) make sure your Control Weight is very high (greater than 1.5), (5) Control Mode = 'ControlNet is more important', and (6) you may need to try a few different models/checkpoints because the impact of the model on this process is very high. Finally, you will probably need to generate a bunch of images with the random seed and hopefully get the one that you like.
      I hope this helps you. Cheers!

    • @Yoshenesis
      @Yoshenesis 9 months ago +1

      @@KeyboardAlchemist Thanks for such a complete answer, I really appreciate it, I'll take your advice, it's really helpful

  • @michaelbuzbee5123
    @michaelbuzbee5123 9 months ago

    This is a very good tutorial, but I have come across the "not enough GPU memory" error any time I try to use it. Even if it is something I generated at 512x512. Anyone know of a workaround for this, or do I just have to wait?

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      How many GBs of VRAM are you working with? If it's 4GBs or lower, you might want to try putting '--lowvram' in your command line arguments. This will enable low VRAM usage, but it will make your generation slower. Also, if you are not using '--xformers' I would highly recommend using this in your command line arguments (it makes image generation faster).

    • @michaelbuzbee5123
      @michaelbuzbee5123 9 months ago

      @@KeyboardAlchemist I have 8 on my 5700. I have figured out the workarounds for everything else, so I have included --medvram already. I also hit it when trying to use the SDXL checkpoint.

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 months ago

      @@michaelbuzbee5123 Oh you have a Radeon card. Unfortunately, I won't be much help with using Radeon cards with stable diffusion. I found this reddit post of someone saying they have success with NMKDs ONNX implementation, which I know nothing about, but the link is here if you want to check it out: www.reddit.com/r/StableDiffusion/comments/106i83w/onnx_only_512x512px_on_amd_card_more_than_that/. I hope you can figure out some work around.

  • @AntonioDal.
    @AntonioDal. 3 months ago

    Where do you get the original positive and negative prompts? 17:15

    • @KeyboardAlchemist
      @KeyboardAlchemist  2 months ago +1

      Got it from the CivitAI model download page for majicMix v5.

  • @felixmontanez4090
    @felixmontanez4090 17 days ago +1

    What model did you use to make the base image?

    • @KeyboardAlchemist
      @KeyboardAlchemist  10 days ago

      The model is called 'majicMIX realistic', you can find it on CivitAI.

    • @felixmontanez4090
      @felixmontanez4090 10 days ago

      @@KeyboardAlchemist What prompt did you use?

    • @KeyboardAlchemist
      @KeyboardAlchemist  9 days ago

      @@felixmontanez4090 17:10 of the video has all the prompt info that you will need. Cheers!

  • @magnos_decimus
    @magnos_decimus 6 months ago

    The inpaint anything tool didn't work for me. All I get is a black screen.

  • @vpst00
    @vpst00 9 months ago +1

    Can I use a MacBook Pro with these tools?

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago

      As long as you can successfully install and run Automatic1111 on your Mac, then installing these extensions would be possible too. Best of luck!

  • @Esendor
    @Esendor 2 months ago +1

    16:10 How are your generations so fast? When I start "Run Inpainting" it goes for 10 minutes! Impossible to use.

    • @KeyboardAlchemist
      @KeyboardAlchemist  2 months ago

      When generating your image, you should take a look at the Performance Tab in Task Manager and see whether all of your dedicated GPU memory is maxed out. If it is maxed out and spilling into Shared Memory, that's when the image gen gets very slow. Not sure if this is the case, but worth looking into.
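
A small Python sketch for checking the same thing from the PyTorch side (roughly what the Task Manager Performance tab shows); it assumes a CUDA build of PyTorch.

# Hedged sketch: report dedicated VRAM usage. If reserved memory is close to the
# card's total, generations start spilling into shared system memory and slow down.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    gib = 1024 ** 3
    print(f"{props.name}: "
          f"{torch.cuda.memory_allocated(0) / gib:.1f} GiB allocated, "
          f"{torch.cuda.memory_reserved(0) / gib:.1f} GiB reserved, "
          f"{props.total_memory / gib:.1f} GiB total")
else:
    print("No CUDA device visible to PyTorch.")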

    • @Esendor
      @Esendor 2 months ago

      @@KeyboardAlchemist I changed the CUDA settings and disabled shared memory for python.exe in the NVIDIA panel. SD then stopped generating with hires fix (not enough GPU memory), which was working fine before. As a result I went back to the old settings. I don't understand these things at all. RTX 3060 12 GB

  • @panzerkampfwagen1944
    @panzerkampfwagen1944 6 months ago +1

    alt + mousewheel = zoom

  • @nihilitys
    @nihilitys 8 months ago

    Inpaint Anything doesn't work on AMD GPUs :(

  • @jetson35
    @jetson35 9 months ago +1

    :O

  • @thekotfather
    @thekotfather 7 months ago +1

    canvas-zoom, man

  • @awais6044
    @awais6044 6 months ago

    Make a video where the user uploads their own image and changes clothes, hair, and fashion items using a prompt.

  • @zerokelvinmedia9955
    @zerokelvinmedia9955 9 months ago +1

    So.. basically... it's Photoshopping 😊 without Photoshop....

  • @uzairansari9222
    @uzairansari9222 9 months ago

    Tried this. The inpainting procedure has been going on for 20 minutes now. I don't think it's supposed to take this long.

  • @stormmage
    @stormmage 9 months ago

    9:40 I would disagree that the Stable Diffusion 2 models are lower quality than the Realistic Vision V3 models. They all look equally bad. The Realistic Vision V3 models both have the problem that the head is too big for the body, with shoulders and arms that are too small. This distorts the neck, making it look thick and giraffe-like. Without inpainting, the Realistic Vision V3 models would look better, because they have more details on the skin. The SD models look like they're using a soft mesh instead of skin, and there are errors on the clothes (both are missing necessary support seams / lines). Used as an inpainting model, the RVv3 model did not work. All four images did a terrible job of matching skin tone at the inpainting line, and you can see where her neck is a warmer color than her upper chest.

  • @eminence_
    @eminence_ 8 months ago

    You should add a note that this does not work on AMD GPUs.

  • @dailyrum2203
    @dailyrum2203 5 months ago

    Your voice changed partway through.

  • @12Jerbs
    @12Jerbs 9 months ago

    Not sure if anything has changed, but Inpaint Anything is pretty useless for me. I can make a mask and set a prompt, using realVision inpaint, and nothing really changes. I tried to change a white top to black = it turns grey; white top to red = pink; etc. My experience is nowhere near what you are showing in the video.

  • @kallamamran
    @kallamamran 9 months ago +3

    OMG, the piano overlay did NOTHING for this video... Great video otherwise ;)

  • @dennisaubry9384
    @dennisaubry9384 8 months ago

    Don't say it's great without testing it, just because the video seems cool....

  • @-flanders-8975
    @-flanders-8975 7 months ago

    Ooof, no more background music please.

  • @choppergirl
    @choppergirl 8 months ago

    So um... how do you install Stable Diffusion, or is it a web app?
    I love how the driving instructor immediately starts talking about how to drift a car and shift gears, and the student is standing outside the locked car wondering... how do I get in? Did you forget to give me the keys? Wait no, that was a plugin plugging a YouTube channel... not a driving instructor.

    • @KeyboardAlchemist
      @KeyboardAlchemist  8 months ago +1

      Thanks for watching, I have other videos on the channel, for example this one that goes through in detailed steps how to install Stable Diffusion Automatic 1111: ruclips.net/video/AcxSjFUt_aE/видео.html, feel free to look around.

  • @heckensteiner4713
    @heckensteiner4713 9 months ago +1

    Try hands next time!

  • @Silverstreamable
    @Silverstreamable 9 months ago +1

    Who's the girl? I want to date her.
    When are we getting Turing-test-passing robotics x AI?

  • @placidfalcon7715
    @placidfalcon7715 8 months ago

    Did you suddenly turn Asian in the middle of this video?