Automatic1111 Stable Diffusion 2.0 Install (easy as)

  • Published: 27 Nov 2022
  • A quick (and unusually high energy) walkthrough tutorial for installing and using stable diffusion 2.0 with the Automatic1111 webui.
    Automatic1111 Install Guide: github.com/AUTOMATIC1111/stab...
    Automatic1111 repo: github.com/AUTOMATIC1111/stab...
    Xformers Install Tutorial: • Install XFormers in on...
    Discord: / discord
    ------- Music -------
    Music from freetousemusic.com
    ‘Travel’ by ‘LuKremBo’: • lukrembo - travel (roy...
    ‘Butter’ by LuKremBo: • lukrembo - butter (roy...
    ‘Daily’ by ‘LuKremBo’: • (no copyright music) c...
    ‘Late Morning’ by ‘LuKremBo’: • (no copyright music) c...
    ‘Rose’ by ‘LuKremBo’: • lukrembo - rose (royal...
  • Science

Comments • 85

  • @guns1inger
    @guns1inger 1 year ago +7

    2.0 requires more thought in the prompt creation, and really relies heavily on negative prompts to get the best image quality. No negative prompts - expect crap. Good negative prompts, and you can get details that surpass what 1.5 could do. However, losing the artists and NSFW content has made it a lot harder to get some of the really high-quality output. Hopefully some custom models that restore some of these features aren't too far away.

    • @lewingtonn
      @lewingtonn  1 year ago +2

      things are going to get messy now that there's no single "best" stable diffusion model. Negotiating your way around different setups makes everything much slower. This is a good sign for Midjourney, who can keep improving their already state-of-the-art single model.

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago +2

      @@lewingtonn Actually, I don't think it will be bad. StabilityAI's CEO mentioned that new tools for fine-tuning and training are on the way. If the community can also train and improve models, maybe we can get better models. Competition is never a bad thing.

  • @lynnqi6451
    @lynnqi6451 1 year ago +1

    Thank you very much! This helps a lot!

  • @ZeroIQ2
    @ZeroIQ2 1 year ago +5

    Something that is really important when using SD 2.0 is to play with negative prompts. You get much better results now if you have good negative prompts.
    I copied some negative prompts that people have been using, they are as follows:
    Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, ((((mutated hands and fingers)))), (((out of frame)))
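For anyone scripting this instead of using the UI: the webui can be launched with the `--api` flag, which exposes a local `/sdapi/v1/txt2img` endpoint that accepts a `negative_prompt` field. A minimal Python sketch (the local URL and the sampling settings are assumptions about your setup, not requirements):

```python
import json
import urllib.request

# The community negative prompt quoted above, with repeats trimmed.
NEGATIVE_PROMPT = (
    "deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, "
    "mutated, extra limb, ugly, poorly drawn hands, missing limb, "
    "floating limbs, disconnected limbs, malformed hands, out of focus, "
    "long neck, long body, ((((mutated hands and fingers)))), (((out of frame)))"
)

def build_payload(prompt: str, negative: str = NEGATIVE_PROMPT) -> dict:
    """Assemble the JSON body the txt2img endpoint expects."""
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "steps": 20,
        "width": 768,   # the SD 2.0 768-v checkpoint is trained at 768x768
        "height": 768,
    }

def txt2img(prompt: str, base_url: str = "http://127.0.0.1:7860") -> dict:
    """POST to a locally running webui that was started with --api."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```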

  • @Hangul_Is_Forbidden_In_Handles

    Thanks for the help, it means a lot!

  • @lewingtonn
    @lewingtonn  1 year ago +4

    like and subscribe for more unwarranted abuse

  • @the_proffesional1713
    @the_proffesional1713 1 year ago

    Hey, can I ask?
    I'm currently still on Windows 10 (laptop), but with Python 3.10.6 it breaks when opening webui.bat: it says it can't install torch, along with error numbers like +11211. How can I fix this?

  • @augustdawn8348
    @augustdawn8348 1 year ago +1

    Do you know what should be put as the Initialization Text when creating an embedding? I'm new to SD, and another video said to put the same thing as the Name, but when I did that, txt2img with a model trained on me, regardless of any other text, only spat out something basically identical to the pictures I trained it on.

    • @lewingtonn
      @lewingtonn  1 year ago

      it depends what initialization text you chose, it's something you specify during training

  • @KainSpero
    @KainSpero 1 year ago +1

    Awesome! Thank you for sharing!

    • @lewingtonn
      @lewingtonn  1 year ago +1

      haha you were the one who got me onto this lol

    • @lewingtonn
      @lewingtonn  1 year ago +1

      better have enjoyed it!!!

    • @KainSpero
      @KainSpero 1 year ago

      @@lewingtonn LOL, I needed the help! Your guides are the best.

  • @bodobo
    @bodobo 1 year ago +1

    hey ho, first things first: thanks for the content! I was already able to make a lot of videos with 1.4 and 1.5. I've got a question about the version 2.0 models. Do you know how to use the new SD 2.0 upscale model with the Automatic1111 WebUI? Can I just select the upscale model and then use SD upscale in the "post"-scripts section of batch img2img?

    • @lewingtonn
      @lewingtonn  1 year ago +1

      go to this link github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features (the one in the description) and ctrl+f "upscale" there's a section on it

  • @satoshichaser6537
    @satoshichaser6537 1 year ago

    I have AMD 7900 XT 20GB GPU & 7900x CPU. My question is can I install any of these one click stable diffusion installs or do I need to install a special AMD version of stable diffusion? Thanks in advance

  • @Scatty666
    @Scatty666 1 year ago +1

    If I have added the git pull command into webui.bat, does that mean it does the same thing you described with GitHub Desktop automatically?

  • @Peppermint_juice
    @Peppermint_juice 1 year ago

    So I have an AMD Radeon and unfortunately can't install it. Can I use it in Colab somehow?

  • @cryptidsNstuff
    @cryptidsNstuff 1 year ago +1

    Goodjob man

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 1 year ago +1

    Why do I get "LatentDiffusion: Running eps-prediction mode" instead of "v-prediction mode" as shown in the video? Is there any difference?

    • @lewingtonn
      @lewingtonn  1 year ago

      is it still working despite that message? My guess is that there has been an update to the repo in the last 4 hours and that yeah, it's no big deal

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago

      @@lewingtonn I think because you were using SD v2.0 so when you launched the webui, that prediction mode was set for it. If you switch back to the 1.5 version, the prediction mode changes to EPS-prediction mode.

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago

      @@lewingtonn Also, could you please update the Krita-extension video? That would be very helpful, thank you.

  • @gpt5
    @gpt5 1 year ago +1

    Is there any possibility of using SD 2.0 in GitHub Codespaces? Because I'm on a Mac, so I want to run it in the cloud with an Nvidia GPU.

    • @lewingtonn
      @lewingtonn  1 year ago

      HA! I have no idea, I would assume not since the hardware required is kind of expensive, but you CAN use Google Colab! Search "SD 2.0 google colab" and you will find something for sure.

  • @CMak3r
    @CMak3r 1 year ago +1

    Seems like there is no news about new upscaler and depth map models. I wonder how long it would take to implement them in automatic

    • @lewingtonn
      @lewingtonn  1 year ago

      I imagine the step up for depth mapping isn't that huge compared to 1.5, and it's probably hell to implement

  • @hammagamma3646
    @hammagamma3646 1 year ago +1

    Thanks

  • @cosmiccoincidence8627
    @cosmiccoincidence8627 1 year ago +1

    Not sure what's wrong but I keep getting the following error during the load weights part in cmd.
    "size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 5, 3, 3])"

    • @Said-lt2oz
      @Said-lt2oz 1 year ago

      Quoting Neil Slater (3 weeks ago, edited):
      Aha! Got it. The YAML file was hiding that it had an extra `.txt` extension - thanks Windows!
      To fix it, I needed to change the view in Windows Explorer to show the extension properly. Then I could rename the file to correctly match the downloaded model checkpoint.
      Probably going into console and renaming the file there would have worked too.

  • @g.kirilov1352
    @g.kirilov1352 1 year ago +1

    lovely being your 100th like here

  • @zhanezar
    @zhanezar 1 year ago +1

    Hi, how do I get the Aesthetic Embeddings to work with SD 2.0?

    • @lewingtonn
      @lewingtonn  1 year ago

      aesthetic embeddings are very mid, and they won't work for 2.0 yet. You can't use the old ones because we're using a new text encoder.

  • @ZeroCool22
    @ZeroCool22 1 year ago +1

    Can you do a video about how to upgrade the ShivamShrirao Dreambooth repo to train with 2.0, using WSL2 + Ubuntu?

    • @lewingtonn
      @lewingtonn  1 year ago

      I only have windows PCs at the moment sadly

    • @ZeroCool22
      @ZeroCool22 1 year ago

      @@lewingtonn Me too; that's the point of WSL2, it lets you run an Ubuntu command line on your Windows PC.
      Search for this: "Train on Your Own face - Dreambooth, 10GB VRAM, 50% Faster, for FREE!" and you will see what I'm talking about.

  • @MitrichDX
    @MitrichDX 1 year ago

    Where can I get the old version of Dreambooth? The new ShivamShrirao one with its many, many tabs sucks :(

  • @backster4744
    @backster4744 1 year ago +1

    Could you perhaps do a tutorial on getting Dreambooth set up locally with SD's webui? I really like the way you explain things, but I keep running into issues when trying to get it to work, thanks :)

    • @kallamamran
      @kallamamran 1 year ago +2

      1. Do you have 12GB VRAM or more?
      1a. If no, you're out of luck (10 GB might work if you're lucky, can run without GUI and if you're a haxxor 😉)
      1b. If yes, lucky you
      2. Install Dreambooth extension
      3. Restart SD
      4. Follow random A1111 Dreambooth tutorial
      5. Done 😉

    • @backster4744
      @backster4744 1 year ago +1

      @@kallamamran yeah, unfortunately I have an RTX 2080 with 8GB, but even so, when I start the training it just stops automatically after a few seconds and doesn't even state whether it's a memory issue, so that gave me some hope lol. Thanks for the help tho! :)

    • @lewingtonn
      @lewingtonn  1 year ago

      If you read the cmd window it will tell you more about the issue, but yes, 12 GB of VRAM IS required. I'm in the same boat as you dude, it's just not gonna happen till someone makes an improvement

  • @jj4l
    @jj4l 1 year ago

    @0:43, you can do this with Mac...?

  • @heiferTV
    @heiferTV 1 year ago +2

    Sorry, I must be very dumb with the prompts, but with 2.0, instead of getting paintings that resemble those of Rembrandt, Velázquez, or Bouguereau, as I did with 1.4 or 1.5, I get colored drawings like those of 4th-grade primary kids. I don't see the point of making the prompts harder.

    • @heiferTV
      @heiferTV 1 year ago +1

      Oh! Searching, I see that this is a copyright problem, and many artists and styles have been wiped out. But the creators said that in the near future we'll be able to train the models ourselves in an easier way, and keep the copyrights on our side. I hope so. I think that copying the style of an artist is not copying but inspiration; this has been done by the best artists for millennia.

    • @lewingtonn
      @lewingtonn  1 year ago +1

      @@heiferTV yeah, there are definitely some disadvantages to 2.0, but some things are better too

  • @henriquegonfer7750
    @henriquegonfer7750 1 year ago +1

    Nice! Please make a tutorial on how to install Depth2img too. Thanks!

  • @neilslater8223
    @neilslater8223 1 year ago +1

    I followed this guide carefully, but get a "size mismatch" error from Torch every time I try to switch to the new model.
    Has anyone else had a similar problem who can point me at things to look at? I'm a software dev by trade, so I don't mind if it gets technical; no need to talk through concepts.

    • @neilslater8223
      @neilslater8223 1 year ago +4

      Aha! Got it. The YAML file was hiding that it had an extra `.txt` extension - thanks Windows!
      To fix it, I needed to change the view in Windows Explorer to show the extension properly. Then I could rename the file to correctly match the downloaded model checkpoint.
      Probably going into console and renaming the file there would have worked too.
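Neil's console-rename idea can be sketched in a few lines of Python. The directory and the `768-v-ema` filename below are illustrative; the only assumption taken from his comment is the hidden-extension pattern `*.yaml.txt`:

```python
from pathlib import Path

def fix_hidden_txt(models_dir: str) -> list[str]:
    """Rename any '<name>.yaml.txt' (a .txt that Windows was hiding)
    back to '<name>.yaml' so it matches the model checkpoint."""
    renamed = []
    for p in sorted(Path(models_dir).glob("*.yaml.txt")):
        target = p.with_suffix("")  # strips only the trailing .txt
        p.rename(target)
        renamed.append(target.name)
    return renamed
```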

    • @lewingtonn
      @lewingtonn  1 year ago

      @@neilslater8223 yeah, windows is kind of terrible, well done!

  • @p_p
    @p_p 1 year ago +1

    Imagine someone training this, already trained on CLIP, but adding the whole LAION set (no one has the money for that, sadly)

  • @pipinstallyp
    @pipinstallyp 1 year ago +2

    5:02 you did not have to expose me 😢

    • @lewingtonn
      @lewingtonn  1 year ago

      this channel is about 2 things: technology and wanton abuse

  • @kallamamran
    @kallamamran 1 year ago +1

    Not one word about Python and version?! I've learned that it's pretty important to use Python 3.10.x, NOT 3.11.x. Am I missing something here?!

    • @lewingtonn
      @lewingtonn  1 year ago

      I actually had no idea Python 3.11 was out! I'll be sure to mention it in the future, but in general my advice is: avoid new language updates for the first 3-6 months cuz the ecosystem probably hasn't caught up yet
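A quick way to confirm you're on a compatible interpreter before running webui.bat. Treating 3.10.x as the target follows the comment above; the exact cutoff is an assumption, so check the repo's requirements if in doubt:

```python
import sys

def python_ok(version: tuple = tuple(sys.version_info[:2])) -> bool:
    """True if the interpreter is a 3.10.x release, the series the
    webui targeted at the time (3.11 lacked torch wheels back then)."""
    return tuple(version[:2]) == (3, 10)

if __name__ == "__main__":
    print("Python", sys.version.split()[0],
          "ok" if python_ok() else "- expect torch install failures")
```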

  • @the_devil_1230
    @the_devil_1230 1 year ago +1

    IS it worth it tho?

    • @lewingtonn
      @lewingtonn  1 year ago +1

      in the long run yeah... right now... nah

  • @GyroO7
    @GyroO7 1 year ago +2

    No chance of it working on 4gb vram?

    • @lewingtonn
      @lewingtonn  1 year ago

      it always takes more for me.. to be honest I don't even think 2.0 is that much better

    • @2PeteShakur
      @2PeteShakur 1 year ago

      @@lewingtonn lol at least i can hear you now without any volume issues, well done! ;)

  • @mikealbert728
    @mikealbert728 1 year ago +1

    Is 2.0 inpainting working with automatic1111 yet? Anybody know?

    • @lewingtonn
      @lewingtonn  1 year ago

      according to the repo it is, check out the guide linked above and ctrl+f inpainting

    • @mikealbert728
      @mikealbert728 1 year ago +1

      @@lewingtonn Thanks. I already figured it out.

    • @pokerandphilosophy8328
      @pokerandphilosophy8328 1 year ago

      @@mikealbert728 How did you make it work? I don't see any instructions for installing the 2.0 inpainting model in the guide linked above, only instructions for using the old one. I found instructions elsewhere and put the yaml file together with the inpainting model file in the "models" directory (with the yaml file suitably renamed from "v2-inpainting-inference.yaml" to "512-inpainting-ema.yaml"). But then the a1111 web-ui returns a "size mismatch" error when I try to load the model, and I must restart it to be able to load a different model.

    • @mikealbert728
      @mikealbert728 1 year ago

      @@pokerandphilosophy8328 I've tried to reply about 5 times but YouTube keeps deleting the comment

    • @mikealbert728
      @mikealbert728 1 year ago

      @@pokerandphilosophy8328 open the YAML in a text editor, delete the line about fine-tuning null, then save. Make sure you have an internet connection.
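Mike's edit can also be done programmatically. A sketch, assuming the offending line contains a key with "finetune" in its name as his comment suggests; inspect your own YAML first, since the exact key name here is illustrative:

```python
def strip_line(yaml_text: str, needle: str = "finetune") -> str:
    """Return the YAML text with any line mentioning `needle` removed,
    leaving every other line untouched."""
    kept = [ln for ln in yaml_text.splitlines() if needle not in ln.lower()]
    return "\n".join(kept) + "\n"
```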

  • @CultofThings
    @CultofThings 1 year ago +1

    I noticed a lot of youtubers just say it's easy and works great, but every time something new comes out they complain about how terrible and hard to install the previous version was.

    • @lewingtonn
      @lewingtonn  1 year ago +1

      haha that's how you get those juicy juicy eyeballs. I don't think I said anything misleading here though...

    • @CultofThings
      @CultofThings 1 year ago

      @lewingtonn No, you didn't. I think going over common install errors and problems would help. I get weird errors every time I try to install it where it tries to use the CPU instead of the GPU. I have a 3080 TI.
      When I try to install other things such as training locally I get Xformer issues.
      When I try to do the gradients I get Pytorch issues.
      I have the latest drivers and whatnot.
      Anyways, thank you

  • @madlazytrader
    @madlazytrader 1 year ago +1

    How about AMD GPU?

    • @lewingtonn
      @lewingtonn  1 year ago

      sadly no, amd doesn't work yet

  • @lagun4716
    @lagun4716 1 year ago

    thx I broke my SD :(

  • @mumomuma6917
    @mumomuma6917 1 year ago

    Man, please, make an easy way for AMD users to have stable diffusion, please please please.

  • @Zohrdan
    @Zohrdan 1 year ago +1

    I recommend always using xformers; my 3090 is 30-40% faster with xformers

    • @lewingtonn
      @lewingtonn  1 year ago +1

      Xformers is LIIIIIIIT

    • @2PeteShakur
      @2PeteShakur 1 year ago

      @@lewingtonn n LIIIIIIIT is LIIIIIIIT! ;)

  • @Akatsuki287
    @Akatsuki287 1 year ago

    Finally I did it! It was the best tutorial, you're awesome

  • @Pauluz_The_Web_Gnome
    @Pauluz_The_Web_Gnome 1 year ago +2

    An hour and a half downloading?? Lol

    • @lewingtonn
      @lewingtonn  1 year ago +4

      I'm in Australia our internet is carried by kangaroos with pouches full of USB drives

    • @p_p
      @p_p 1 year ago

      @@lewingtonn ahhahhhah

    • @MichaelFlynn0
      @MichaelFlynn0 1 year ago

      @@lewingtonn - Thanks to Malcolm Turnbull for destroying 'fibre to the home' - thus protecting Murdoch's cable business. Never forgive him for that.