Automatic1111 Stable Diffusion 2.0 Install (easy as)
- Published: 27 Nov 2022
- A quick (and unusually high energy) walkthrough tutorial for installing and using stable diffusion 2.0 with the Automatic1111 webui.
Automatic1111 Install Guide: github.com/AUTOMATIC1111/stab...
Automatic1111 repo: github.com/AUTOMATIC1111/stab...
Xformers Install Tutorial: • Install XFormers in on...
Discord: / discord
------- Music -------
Music from freetousemusic.com
‘Travel’ by ‘LuKremBo’: • lukrembo - travel (roy...
‘Butter’ by LuKremBo: • lukrembo - butter (roy...
‘Daily’ by ‘LuKremBo’: • (no copyright music) c...
‘Late Morning’ by ‘LuKremBo’: • (no copyright music) c...
‘Rose’ by ‘LuKremBo’: • lukrembo - rose (royal...
2.0 requires more thought in the prompt creation, and really relies heavily on negative prompts to get the best image quality. No negative prompts - expect crap. Good negative prompts, and you can get details that surpass what 1.5 could do. However, losing the artists and NSFW content has made it a lot harder to get some of the really high-quality output. Hopefully some custom models that restore some of these features aren't too far away.
things are going to get messy now that there's no single "best" stable diffusion model. Negotiating your way around different setups makes everything much slower. This is a good sign for Midjourney, who can keep improving their already state-of-the-art single model.
@@lewingtonn Actually, I don't think it will be bad. StabilityAI's CEO mentioned that new tools for fine-tuning and training are on the way. If the community can also train and improve models, maybe we can get better models. Competition is never a bad thing.
Thank you very much! This helps a lot!
Something that is really important when using SD 2.0 is to play with negative prompts. You get much better results now if you have good negative prompts.
I copied some negative prompts that people have been using, they are as follows:
Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, ((((mutated hands and fingers)))), (((out of frame)))
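A side note on the quadruple parentheses in that list: in the A1111 webui, wrapping part of a prompt in parentheses increases its attention weight, roughly 1.1x per layer of brackets. A tiny Python sketch of that convention (`paren_weight` is just an illustrative helper, not a webui function):

```python
# Sketch of A1111-style parenthesis emphasis: each layer of ()
# multiplies a token's attention weight by ~1.1 (the webui default),
# so ((((mutated hands and fingers)))) weighs about 1.1**4 ≈ 1.46.
def paren_weight(token: str, base: float = 1.1) -> float:
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1
    return base ** depth

print(paren_weight("(((out of frame)))"))  # three layers -> 1.1**3
print(paren_weight("blurry"))              # no parens -> 1.0
```

The webui also supports explicit weights like `(word:1.3)`, which is easier to read than stacking brackets.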
Thanks for the help, it means a lot!
like and subscribe for more unwarranted abuse
Hey can i ask ?
I'm currently still on Windows 10 (laptop), but with Python 3.10.6 it breaks when I open webui.bat; it says it can't install torch, followed by a string of numbers. How can I fix this?
Do you know what should be put as the Initialization Text when creating an embedding? New to SD and saw another video that said put the same thing as the Name, but when I did that txt2img with a trained model of me, regardless of any other text, only spit out something basically identical to the pictures I trained it on.
it depends what initialization text you chose, it's something you specify during training
Awesome! Thank you for sharing!
haha you were the one who got me onto this lol
better have enjoyed it!!!
@@lewingtonn LOL, I needed the help! Your guides are the best.
hey ho, first things first: thanks for the content! I was already able to make a lot of videos with 1.4 and 1.5. I've got a question about the version 2.0 models. Do you know how to use the new SD 2.0 upscale model with the Automatic1111 webui? Can I just select the upscale model and then use SD upscale in the "post"-scripts section of batch img2img?
go to this link github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features (the one in the description) and ctrl+f "upscale" there's a section on it
I have AMD 7900 XT 20GB GPU & 7900x CPU. My question is can I install any of these one click stable diffusion installs or do I need to install a special AMD version of stable diffusion? Thanks in advance
If I have added the git pull command into webui.bat, does that mean it does the same thing you described with GitHub Desktop automatically?
it does!
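For reference, a minimal sketch of that auto-update trick (assuming the stock webui-user.bat layout; adjust to your install):

```bat
@echo off
rem webui-user.bat -- pull the latest webui code before every launch,
rem the same thing GitHub Desktop's "Pull origin" button does
git pull

set PYTHON=
set COMMANDLINE_ARGS=
call webui.bat
```

The trade-off is that an upstream change can occasionally break your install right before you wanted to use it, so some people prefer pulling manually.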
So I have an AMD Radeon and unfortunately I can't install it. Can I use it in Colab somehow, then?
Goodjob man
Why do I get "LatentDiffusion: Running eps-prediction mode" instead of "v-prediction mode" as shown in the video. Is there any difference?
is it still working despite that message? My guess is that there has been an update to the repo in the last 4 hours and that yeah, it's no big deal
@@lewingtonn I think because you were using SD v2.0 so when you launched the webui, that prediction mode was set for it. If you switch back to the 1.5 version, the prediction mode changes to EPS-prediction mode.
@@lewingtonn Also, could you please update the Krita-extension video? That would be very helpful, thank you.
Is there any possibility to use SD 2.0 in GitHub Codespaces? Because I'm on a Mac, so I want to run it in the cloud with an Nvidia GPU.
HA! I have no idea, I would assume not since the hardware required is kind of expensive, but you CAN use google colab! search SD 2.0 google colab in google search and you will find something for sure.
Seems like there is no news about new upscaler and depth map models. I wonder how long it would take to implement them in automatic
the step up probably isn't that huge for depth mapping I imagine compared to 1.5 and it's probably hell to implement
Thanks
Not sure what's wrong but I keep getting the following error during the load weights part in cmd.
"size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 5, 3, 3])"
lovely being your 100th like here
Hi How do I get the Aesthetic Embedings to work with SD 2.0 ?
aesthetic embeddings are very mid, and they won't work for 2.0 yet. You can't use the old ones because we're using a new text encoder.
Can you do a video about how upgrade the ShivamShrirao Drembooth Repo to train with 2.0, using WSL2 + Ubuntu?
I only have windows PCs at the moment sadly
@@lewingtonn Me too, that's the point of WSL2: it lets you run an Ubuntu command line on your Windows PC.
Search for this: "Train on Your Own face - Dreambooth, 10GB VRAM, 50% Faster, for FREE!" you will see what i'm talking about.
Where can I get the old version of Dreambooth? The new ShivamShrirao one with its many, many tabs sucks :(
Could you perhaps do a tutorial on getting Dreambooth set up locally with SD's webui? I really like the way you explain things, but I keep running into issues when trying to get it to work, thanks :)
1. Do you have 12GB VRAM or more?
1a. If no, you're out of luck (10 GB might work if you're lucky, can run without GUI and if you're a haxxor 😉)
1b. If yes, lucky you
2. Install Dreambooth extension
3. Restart SD
4. Follow random A1111 Dreambooth tutorial
5. Done 😉
@@kallamamran yeah, unfortunately I have an RTX 2080 with 8GB, but even so, when I start the training it just stops automatically after a few seconds and doesn't even state whether it's a memory issue, so that gave me some hope lol. Thanks for the help tho! :)
If you read the cmd window it will tell you more about the issue, but yes, 12 GB of VRAM is required. I'm in the same boat as you, dude; it's just not gonna happen till someone makes an improvement.
@0:43, you can do this with Mac...?
no sir
Sorry, I must be very dumb with the prompts, but I see that with 2.0, instead of getting paintings that resemble those of Rembrandt, Velázquez, or Bouguereau like with 1.4 or 1.5, I get colored drawings by 4th-grade primary school kids. I don't see the point of making the prompts harder.
Oh! After searching, I see this is a copyright problem: many artists and styles have been wiped out. But the creators said that in the near future we'll be able to train the models ourselves more easily and keep the copyright questions on our side. I hope so. I think copying an artist's style is not copying but inspiration; the best artists have done this for millennia.
@@heiferTV yeah, there are definitely some disadvantages to 2.0, but some things are better too
Nice! Please make a tuto how to install Depth2img too. Thanks!
hmmmmmmmmmm
I followed this guide carefully, but get a "size mismatch" error from Torch every time I try to switch to the new model.
Is there anyone else had similar problem and can point me at things to look at? I'm a software dev by trade, so don't mind if it gets technical, no need to talk through concepts.
Aha! Got it. The YAML file was hiding that it had an extra `.txt` extension - thanks Windows!
To fix it, I needed to change the view in Windows Explorer to show the extension properly. Then I could rename the file to correctly match the downloaded model checkpoint.
Probably going into console and renaming the file there would have worked too.
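To illustrate that console fix (file names here are examples; on Windows cmd you'd use `ren` instead of `mv`, but the idea is identical):

```shell
# Windows Explorer hides known extensions by default, so a YAML config
# saved from the browser can silently become "768-v-ema.yaml.txt".
# Reproduce the problem in a scratch directory, then fix it by renaming
# so the .yaml matches the downloaded checkpoint's name.
mkdir -p demo_models
touch demo_models/768-v-ema.yaml.txt
mv demo_models/768-v-ema.yaml.txt demo_models/768-v-ema.yaml
ls demo_models
```

A console `ls`/`dir` always shows the full file name, which is why the extra `.txt` is visible there but not in Explorer.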
@@neilslater8223 yeah, windows is kind of terrible, well done!
Imagine someone training this, already trained on CLIP, but adding the whole LAION set (sadly no one has the money for that).
5:02 you did not have to expose me 😢
this channel is about 2 things: technology and wanton abuse
Not one word about Python and version?! I've learned that it's pretty important to use Python 3.10.x, NOT 3.11.x. Am I missing something here?!
I actually had no idea Python 3.11 was out! I'll be sure to mention it in the future, but in general my advice is: avoid new language updates for the first 3-6 months cuz the ecosystem probably hasn't caught up yet
IS it worth it tho?
in the long run yeah... right now... nah
No chance of it working on 4gb vram?
it always takes more for me.. to be honest I don't even think 2.0 is that much better
@@lewingtonn lol at least i can hear you now without any volume issues, well done! ;)
Is 2.0 inpainting working with automatic1111 yet? Anybody know?
according to the repo it is, check out the guide linked above and ctrl+f inpainting
@@lewingtonn Thanks. I already figured it out.
@@mikealbert728 How did you make it work? I don't see any instruction of installing the 2.0 inpainting model in the guide linked above, only instruction for using the old one. I found instructions elsewhere and put the yaml file together with the inpainting model file in the "models" directory (with the yaml file suitably renamed from "v2-inpainting-inference.yaml" to "512-inpainting-ema.yaml"). But then the a1111 web-ui returns a "size mismatch" error when I try to load the model and I must restart it to be able to load a different model.
@@pokerandphilosophy8328 I've tried to reply about 5 times but YouTube keeps deleting the comment
@@pokerandphilosophy8328 open the YAML in a text editor and delete the line about fine-tuning null then save. Make sure you have an internet connection.
I noticed a lot of youtubers just say it's easy and works great, but every time something new comes out they complain about how terrible and hard to install the previous version was.
haha that's how you get those juicy juicy eyeballs. I don't think I said anything misleading here though...
@lewingtonn No, you didn't. I think going over common install errors and problems would help. I get weird errors every time I try to install it where it tries to use the CPU instead of the GPU. I have a 3080 TI.
When I try to install other things such as training locally I get Xformer issues.
When I try to do the gradients I get Pytorch issues.
I have the latest drivers and whatnot.
Anyways, thank you
How about AMD GPU?
sadly no, amd doesn't work yet
thx I broke my SD :(
Man, please, Make an easy way for AMD user to have stable diffusion, please please please.
I recommend always using xformers; my 3090 is 30-40% faster with it.
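For anyone wondering where that goes: in the A1111 webui, xformers is enabled with a launch flag, typically set in webui-user.bat (sketch; assumes xformers is already installed for your torch/CUDA version, as covered in the linked tutorial):

```bat
@echo off
rem webui-user.bat -- enable memory-efficient attention via xformers
set PYTHON=
set COMMANDLINE_ARGS=--xformers
call webui.bat
```

It cuts VRAM use as well as speeding up generation, which is why it's popular even on big cards.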
Xformers is LIIIIIIIT
@@lewingtonn n LIIIIIIIT is LIIIIIIIT! ;)
Finally i did it! it was the best tutorial you're awesome
Hour and a half downloading?? Lol
I'm in Australia our internet is carried by kangaroos with pouches full of USB drives
@@lewingtonn ahhahhhah
@@lewingtonn - Thanks to Malcolm Turnbull for destroying 'fibre to the home' - thus protecting Murdoch's cable business. Never forgive him for that.