How To Inpaint Anything in Stable Diffusion WebUI - Just 3 clicks!
- Published: 15 May 2024
- Inpainting can be fun, but making the masks... not so much. Segment Anything to the rescue! Pick anything you like from the segmentation map, and have the mask created for you, ready for ControlNet Inpainting. Easily change clothes, faces, dogs, cats - segment anything! No need to try and draw masks by hand, which is often tedious and inaccurate.
Just give whatever prompt you like as normal, then as if by magic, the thing you wanted to change is transformed before your very eyes!
No complex or tedious ControlNet setup is required, as a ready-to-go section is provided in the Inpaint Anything tab. Unless you like complex setups, because you can do that too thanks to the mask tab :)
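The workflow described above (pick a region from the segmentation map and get a ready-made mask) boils down to selecting one label from a per-pixel label map. Here is a minimal, library-free sketch of that idea, using toy data rather than the extension's actual code:

```python
# Hypothetical illustration: a segmentation map assigns an integer label to
# every pixel. "Clicking" a region just selects one label, and the binary
# mask is every pixel carrying that label.

def mask_from_segment(seg_map, label):
    """Return a binary mask (255 = inpaint here) for one segment label."""
    return [[255 if px == label else 0 for px in row] for row in seg_map]

# Toy 4x4 segmentation map: label 1 = background, label 2 = "t-shirt".
seg = [
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
]
mask = mask_from_segment(seg, 2)
# The mask covers only the clicked segment, ready to hand to an inpainting model.
```

This is why no hand-drawing is needed: the segmentation model has already decided where the boundaries are.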
== Links! ==
* Inpaint Anything - github.com/Uminosachi/sd-webu...
* Automatic1111 Web UI - github.com/AUTOMATIC1111/stab...
* ControlNet Extension - github.com/Mikubill/sd-webui-...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
== Stable Diffusion Playlists ==
* The MASSIVE Stable Diffusion Playlist! - ruclips.net/p/PLj...
* Interested in ADDING NEW THINGS to your AI Art? Try these!
** Dreambooth Playlist - • Stable Diffusion Dream...
** Textual Inversion Playlist - • Stable Diffusion Textu...
I’m glad you are still covering Stable Diffusion. Very useful!
If it's AI, I cover it ;)
I love how many generally useful tools A1111 has, even for non-SD tasks. Like I could install this separately, but here it's just an extension.
You have been the most consistent channel of ai related tutorials and stuff! Appreciate everything!
like I needed another reason to be more obsessed with auto111. So goood!
Update: this crashed the WebUI, and I had to spend some time on a workaround to get it started again. What worked: reinstalling Stable Diffusion from scratch in a separate folder, then copying the 'venv' folder over the old one. It had to redo some installing, but now it seems to work fine.
Been looking for an hour on a video that gives simple instructions and goes through the basics. Thank you!
Glad it helped!
Another great addon and another great Nerdy Rodent explaining!! 🎉❤🎉
Dude! So good to see you not only still going, but nailing it! I don't spend any time on RUclips these days, but I happened to see this, and it made me very happy to see you doing such great stuff. Big love, Nerdy!
Hello again! Good to see you still around as well 😉 Big love back at ya!
This is a really helpful tool. I hope they make a Swap Anything next. Like masking two images and choosing a mask to swap: the face mask, clothes mask, or even the background mask. That would be awesome.
Totally badass! Thanks for the heads up!
The power of the sun
In the palm of my hands
This is a most powerful tool. Would be nice to have segmentation before model/LoRA training.
Fascinating stuff!
Ikr! 😉
Omg, you packed so much into this video. The workflow should be more harmonious (or a Harmonica Workflow)! Ha, totally funny! I laughed so hard at the end!
Brilliant thank you!
Robert is your Father's Brother! LOL love it!
Great tutorial, thanks. I use Stable Diffusion with an AMD GPU and have an issue with Inpaint Anything: when running "Run Segment Anything", it searches and then produces a black image on the right. Any idea? Thanks
I don’t use AMD GPUs unfortunately
Are you able to create a segmentation from anything and then, using that segmentation, sketch your own image from scratch?
Sure, go for it!
love it!!!!
Thank you 😌
Can you run this in batch mode to correct a specific feature in a series of images?
Do you know of any local AI voice enhancing options out there? Removing noise, reverb without denaturing the original voice quality is kind of hard.
Just a tip: I think you have to quit and restart the A1111 WebUI after installing Inpaint Anything, as I got some weirdness going on with my ControlNet inpaint. It seemed to work afterwards.
Please, I need a tutorial on how to put the same exact face on another "body" or ControlNet pose, if it's possible with Stable Diffusion. With ControlNet, I see the "reference" mode makes something similar possible, but it doesn't let you choose the exact body (or ControlNet) pose.
bravo!
I like to see how creative my computer is.. I will invest in it and teach it to get more out of itself...
However, if I want to fill an empty room with interior furnishings, what methodology could I use?
Photobashing would be one way
@@NerdyRodent the 'problem' is that I would like it to furnish the room but randomly, so that it would give me some ideas on the layout through many images in batches.
I have a 10GB 3080, and after creating the mask I can't inpaint anything due to "out of CUDA memory". Is there a fix?
Try going for a much lower resolution image.
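For reference, the usual way to apply this advice is to cap the longer side of the image before segmenting or inpainting. A small helper sketch (the 768-pixel cap is just an illustrative default, not a value from the video):

```python
def fit_within(width, height, max_side=768):
    """Scale (width, height) down so the longer side is at most max_side,
    keeping the aspect ratio. Returns the original size if already small."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

print(fit_within(3840, 2160))  # a 4K frame shrinks to (768, 432)
print(fit_within(512, 512))    # already small enough: unchanged
```

VRAM use grows roughly with pixel count, so halving each side cuts memory pressure to about a quarter.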
Thank youuuuuu~
Was wondering, what are the RAM requirements for this?
Thank you very much. Such a helpful video. Where can I find that document, please?
The extension can be found via the automatic1111 extensions tab
No idea how you got the ControlNet Inpaint to work properly. I selected a white t-shirt and set the prompt to "red t-shirt". It gave me a white t-shirt with red trim and another time a pink t-shirt. I switched to the default Inpainting tab and only got a red t-shirt after turning the Guidance Scale to max. Seeing the video, I had high hopes for this, but it seems rather "meh" on my end.
Always worked great for me! Perhaps play with denoise rather than the classifier free guidance?
Did everything, but when I run Segment Anything, it gives an error under the Download Model tab and in the left output window. Also, I do not have any YAML file in the folder. How do I fix this, pray tell?
Just one thing that wasn't covered, and it's pretty important: you can segment at a lower resolution, then afterwards add an upscaled image, then create the mask and run ControlNet inpainting.
I suggest doing that instead of segmenting an already-upscaled image, which takes too much time.
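This low-res tip works because a binary mask survives nearest-neighbour upscaling with no quality loss, so you can segment small and scale the mask up to match the upscaled image. A toy illustration in pure Python (a real workflow would use an image library for this):

```python
def upscale_mask(mask, factor):
    """Nearest-neighbour upscale of a binary mask by an integer factor.
    Each pixel becomes a factor x factor block, so the edges stay hard."""
    out = []
    for row in mask:
        # Widen the row, then repeat it vertically.
        wide = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

small = [[0, 255],
         [255, 0]]
big = upscale_mask(small, 2)  # the 2x2 mask becomes 4x4
```

Since a mask only has two values, nearest-neighbour is lossless here, unlike upscaling the photo itself.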
Can you run it with batch sequences?
Very cool! But I guess a direct Automatic1111 plugin in Photoshop is still better? (more tools)
Better = free and open source 😉
Thank you! @@NerdyRodent
Thanks!
No problem!
How can you make a head bigger and put some text on the wall, or anywhere?
I can't get the segmentation to work. I always get a single-colour image. What am I doing wrong?
Can I use Segment Anything on videos somehow? It would be useful to blur faces and license plates in my dashcam videos instead of doing it by hand for hours 😂
Hi! Nice video, but can anyone help me with this error: "ControlNet inpaint model is not available. Requires the ControlNet-v1-1"?
I've already installed ControlNet, but I still get this error...
Is it possible to do virtual try-on with this method? I am thinking of getting the mask first and making the target clothing fit the mask.
Sounds like you may like this: Stable Diffusion Face + Pose + Outfit Swap - NO training required!
ruclips.net/video/ZcCfwTkYSz8/видео.html
Having so much trouble. I have everything installed. Running a GTX 1060 16GB card, 16GB RAM, and an SSD, but for some reason mine takes almost an hour to finish, and I can't figure out why. I didn't change any settings beyond what you showed.
Takes a few seconds for me. Check your system’s resource usage
@@NerdyRodent thanks, I'll try. I should mention there's almost nothing on the computer, since it's a pretty new Windows install. The only other thing I could figure is that it's trying to use the onboard Intel HD 3000 graphics.
When I run Inpaint Anything in the Stable Diffusion UI, especially when I run inpainting, I keep getting the error "Unexpected end of JSON input". I ran it through Google Labs; what should I do?
I always get an error when clicking Segment Anything. But I think the reason for that is that I use an AMD GPU with lshqqytiger's DirectML fork. The error has something to do with CUDA.
Try making the resolution of your initial image lower
I use Stable Diffusion locally, and when I press "Run Segment Anything" in Inpaint Anything, it doesn't generate a masking image. What should I do?
"Just like Robert is your father's brother..." I'm not British, but I got it 😂
Hi Nerdy Rodent, thanks so much for making this video. I especially appreciated the step-by-step instructions on how to install -- I needed that. However (there's always a however), I'm pretty sure I got everything installed correctly, but ControlNet Inpaint doesn't colour inside the lines! It changes everything in the image, not just the masked area, even with the ControlNet preprocessor set to inpaint_only. By "installing everything correctly," I mean I'm running the latest version of Automatic1111 on a Windows 10 machine with plenty of processing power. I'm using anything-v4.5.safetensor [1d1e459f9f] for the checkpoint, sam_hq_vit_h.pth as the Model ID, and reference_adain+attn as the Reference Type. Both extensions, sd-webui-inpaint-anything.git and sd-webui-segment-anything.git, seem to be installed correctly. Segment Anything runs beautifully and gives a great mask. It's just that ControlNet Inpaint won't stay in the mask. Any suggestions will be greatly appreciated. Thanks, Zaffer
Does your controlnet inpaint work in img2img?
@@NerdyRodent yes
I've got a workaround: I make the mask separately and save it, then upload it using the img2img inpaint upload. That works pretty well, but it's extra steps. Thanks for your help.
Hi Nerdy, I got everything working! :-) I did a fresh install of Automatic1111, ControlNet, and Inpaint Anything, and now the inpainting stays inside the mask. I love ControlNet. It seems to have so much more "imagination" than plain inpainting.
@@zafferflower 😀
Hi. I get a "ControlNet inpaint model is not available. Requires the ControlNet-v1-1 inpaint model in the extensions\sd-webui-controlnet\models directory." error. I have the inpaint model in the folder, but I noticed that you downloaded a 723MB safetensors model, whereas if I follow the link in the description, the inpaint file is a 1.4GB .pth file.
Either size model is fine, I just like to use the smaller files to save disk space :)
@@NerdyRodent thanks, sir :-)
Please tell me, in what location do I install sam_hq_vit_h/l/b?
Would it be possible to colour b&w pictures? And how would I do it?
While you can easily make full colour images from b&w sources, they tend to change substantially from the source image. For the most part, things like deoldify etc. are better at adding colour but retaining the original image composition.
cooool
Hello... does anyone have a problem with creating the mask? Mine creates a mask for a second, then it disappears.
happens the same to me 🥲
unable to find the fp16 controlnet inpaint model
As mentioned, any ControlNet inpainting model will do, so feel free to use the 1.4GB file instead of the half-sized one!
Hi Nerdy. When I create a mask it automatically disappears. Any ideas?
Not had that happen as yet!
I downloaded the inpainting extension through A1111 but I don't see the tab.
Make sure you’re running the latest version of a1111 and restart?
After pressing "Run Segment Anything", the segmented image flashes for a moment and disappears :(
Could be a browser thing, maybe?
6:50 Pretty sure that's not how you use the outpainting feature. For one thing, if you're using a preprocessor, you need to have something in the ControlNet canvas.
And yet there it is working really well 😉
Now the remove anything stuff has to work.
I'm selecting "Inpaint only" and it's still changing the entire image.
same
Seems to not work on AMD GPUs.
Linux + Nvidia is your best bet for compatibility & performance
Why do you sound like Matt from @DIYPerks?
Anyone know where the inpaint models downloaded by this extension are located?
It seems the inpaint models downloaded by this extension are not located within the stable diffusion folder.
it should be in models/ControlNet
Check out my control net video for more info!
@@NerdyRodent I meant the inpaint models automatically downloaded by Inpaint Anything.
They are not located in any folder inside the stable diffusion folder.
The extension has downloaded over 10GB of files, but the total stable diffusion folder size hasn't changed at all.
@@alonsogarrote8898 Thank you for your reply, but it is not. The models/ControlNet folder is empty.
The extensions\sd-webui-inpaint-anything\models folder only has the SAM files; it doesn't include the inpaint model files.
@@Cutieplus The diffusers models go into the default location (your home directory cache)
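That default location can be checked from Python. Hugging Face libraries (diffusers included) cache downloads under ~/.cache/huggingface unless the HF_HOME environment variable points elsewhere; this is a simplified sketch, as the library also honours other override variables:

```python
import os

# Diffusers downloads go through the Hugging Face Hub cache, which sits
# outside the stable-diffusion folder; that's why the WebUI folder size
# doesn't grow when Inpaint Anything fetches its models.
cache_dir = os.environ.get(
    "HF_HOME",
    os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
)
print(cache_dir)
```

On Windows the same default resolves under %USERPROFILE%\.cache\huggingface.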
You sound like Techmoan
"..feels like traditional art"
this guy... lol
😉
My issue with A1111 is that so much functionality is being created, updated, and changed that it is becoming way too hard to keep up. I see tons of these kinds of videos, but we really need a written resource. Videos are not a good format for this kind of fast-changing tech.
Some people really don’t do well with the written format and much prefer videos - and they tend to use youtube 😉
To be honest, it's much more difficult than Photoshop, but more advanced, I agree. And it's not just 3 clicks.
CAUTION ☢ This extension can ruin your current SD install. Many people can't even run SD after installing this addon.
I deleted the venv folder from my Stable Diffusion install and ran SD again (to redownload and fix the problem). I think this kind of problem can happen when you installed SD a long time ago. Sometimes it's better to do a clean install of SD.
Yup, it's best to make sure you've got a working SD install first ;)
Yeah, just delete the venv folder and run SD again to fix it, apparently. I'm using vlad's version of A1111 and it seems to work fine; it just requires a few restarts and the tab appears! ;)
@@2PeteShakur nice. I just use anaconda myself - less hassle! Let the segmentation commence 😀
@@2PeteShakur I tried many times to install this extension, but the tab never appeared. I will try restarting multiple times.
Your youtube notification just got me killed in fortnite. downvote.
git gud ;)
Subscribed
Welcome! 😉
I found that using inpaint_only+lama for outpainting is insane.
Yeah, it’s a cool model and set of preprocesses!