Generate up to 60% faster than base SDXL with less compute power!

  • Published: 6 Nov 2023
  • SSD-1B is like SDXL - only it's up to 60% faster AND uses less VRAM! At a mere 4.6GB download and with comparable image generation quality, SSD-1B brings the power of SDXL to even the potato computer :)
    Even with a decent PC, with SSD-1B you can now enjoy both faster training and generation times, so what's not to like?
    == Links ==
    Model card: huggingface.co/segmind/SSD-1B
    Files: huggingface.co/segmind/SSD-1B...
    Workflow: github.com/nerdyrodent/AVeryC...
    == More Stable Diffusion Stuff! ==
    * Learn about ComfyUI! • ComfyUI Tutorials and ...
    * ControlNet Extension - github.com/Mikubill/sd-webui-...
    * How do I create an animated SD avatar? - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * Dreambooth Playlist - • Stable Diffusion Dream...
    * Textual Inversion Playlist - • Stable Diffusion Textu...
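
For anyone who wants to try the model outside a UI, the model card linked above shows SSD-1B loading through Hugging Face's diffusers library like any other SDXL checkpoint. A minimal sketch (assumes a CUDA GPU, the diffusers/torch/transformers packages, and the ~4.6 GB one-off download; the prompt and settings here are just illustrative):

```python
# Sketch: loading SSD-1B with Hugging Face diffusers, per the model card above.
# Assumes a CUDA GPU, `pip install diffusers transformers accelerate torch`,
# and a one-off ~4.6 GB model download on first run.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B",
    torch_dtype=torch.float16,  # half precision keeps VRAM use down
    use_safetensors=True,
)
pipe.to("cuda")

# Example prompt and step count, not taken from the video.
image = pipe(
    "a nerdy rodent using a computer, detailed, soft lighting",
    negative_prompt="blurry, low quality",
    num_inference_steps=25,
).images[0]
image.save("ssd-1b.png")
```

On cards without enough VRAM, `pipe.enable_model_cpu_offload()` in place of `pipe.to("cuda")` is the usual diffusers fallback.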
  • Science

Comments • 91

  • @vi6ddarkking
    @vi6ddarkking 6 months ago +32

    I love how the open source projects are prioritizing efficiency over raw power.
    Since the community takes care of the tools, it leaves them free to optimize the AIs as much as possible before advancing to the next step in power.
    Still can't wait for the 2048 x 2048 images to become the standard. The jump from 512 to 1024 made ControlNet so much better due to all the new pixels it had to work with.
    The next jump will be marvelous.

    • @NerdyRodent
      @NerdyRodent  6 months ago +4

      Woo! Go open source!

    • @testales
      @testales 6 months ago

      It's not exactly easy or fast, but it's already possible; I'm doing 1792x2304 quite often. It works by "upscaling", but if you do it right it's actually a very guided resampling of an input image. That means the image more or less is indeed created from scratch at this resolution.

    • @Phobos11
      @Phobos11 5 months ago

      Open source progresses based on users' needs. Corporations progress based on control, adding limitations, blocking competition and getting money. They forgot about the users.

  • @JustMaier
    @JustMaier 6 months ago +16

    We added SSD-1B as a base model to Civitai just last week because we’re so excited about more people being able to run SDXL locally.

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Nice! :)

    • @liquidmind
      @liquidmind 6 months ago

      Are there any more models like this one that are low-GPU friendly? I manage to generate one photo on a 6GB VRAM GPU... 1024x1024 takes about 3 minutes per photo, but that's still faster than SDXL, which can take 10 minutes or even an hour to generate a photo on a 6GB VRAM card, LOL

    • @benmaynard3059
      @benmaynard3059 6 months ago +1

      Why? @@liquidmind I have 6GB and it takes 1 minute to generate an XL image?

    • @benmaynard3059
      @benmaynard3059 6 months ago +1

      @@liquidmind I also have 24GB of regular RAM, maybe that's the difference?

    • @liquidmind
      @liquidmind 6 months ago

      What model do you use? Maybe you have special hidden powers? The Stability AI team even have a chart mentioning how SLOW it is to generate a 1024x1024 image on a 6GB VRAM GPU, that it can take HOURS, and that's a direct quote... let me see if I can find it :D What's your magic? What resolution do you use? @@benmaynard3059

  • @Mr.Sinister_666
    @Mr.Sinister_666 6 months ago +7

    Man you are just out here putting out consistently S Tier videos! I honestly am thrilled anytime I see a new video of yours pop up and often find myself checking the channel just to make sure I didn't miss anything. Straight to the point but fun and informative. People are sleeping on you man for sure! Thank you for your work, it is massively appreciated 👊

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Thanks! Glad you like the things ;)

  • @MrSongib
    @MrSongib 6 months ago +1

    Actually an improvement.

  • @FlamespeedyAMV
    @FlamespeedyAMV 6 months ago +3

    Open source is so damn good, screw the greedy corporations

  • @TomerGa
    @TomerGa 6 months ago

    Hey Nerdy, I love your videos! Is there any way to combine this model with the LCM LoRA to be used on a 6GB VRAM card?

  • @IlRincreTeam
    @IlRincreTeam 6 months ago +5

    It's also worth mentioning that the 4.6 GB file is in FP32, which should mean that the FP16 model is about the same size as an SD 2.1 checkpoint!

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Awesome!

    • @sherpya
      @sherpya 6 months ago

      But unfortunately it doesn't work for me. I have a 1650, and since it doesn't support FP16 it always runs at FP32; I only get black images.
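
As a rough sanity check on the size claim in this thread (a back-of-envelope sketch, not a measurement): FP32 stores 4 bytes per parameter and FP16 stores 2, so converting a checkpoint to FP16 should roughly halve the file:

```python
# Back-of-envelope arithmetic for the FP32 -> FP16 size claim above.
FP32_BYTES_PER_PARAM = 4
FP16_BYTES_PER_PARAM = 2

fp32_size_gb = 4.6                              # reported download size
params_b = fp32_size_gb / FP32_BYTES_PER_PARAM  # implied parameter count (billions)
fp16_size_gb = params_b * FP16_BYTES_PER_PARAM  # expected half-precision size

print(f"implied parameters: ~{params_b:.2f}B")        # ~1.15B
print(f"expected FP16 size: ~{fp16_size_gb:.1f} GB")  # ~2.3 GB
```

Whether ~2.3 GB matches SD 2.1 depends on which SD 2.1 file you compare against, so treat the comparison in the comment as approximate.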

  • @animatedjess
    @animatedjess 3 months ago

    Thanks for the tutorial! Do you know how to train a LoRA on this model?

  • @puoiripetere
    @puoiripetere 6 months ago

    Beautiful video and for my "potato" computer it is a godsend :) Question: is the VAE included in the SSD-1B SPEC model or do you recommend using sdxl-vae-fp16-fix? Thanks for the great work.

    • @CoconutPete
      @CoconutPete 2 months ago

      Wondering what a VAE is

  • @artist.zahmed
    @artist.zahmed 6 months ago +1

    Can you do an SDXL model training tutorial please? I have a 4090 video card and I really wanna make my own model, please 😢❤❤

  • @twilightfilms9436
    @twilightfilms9436 6 months ago

    Can you do a tutorial on XL ControlNets for A1111? Thanks in advance…

  • @Elwaves2925
    @Elwaves2925 6 months ago

    Nice video, not something I need but many others will.
    So, if you can't use existing loras with SSD-1B, can you use loras trained on this model with other SDXL checkpoints (like RealVis and JuggernautXL)?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      SSD-1B loras will train super fast :)

  • @MrLerola
    @MrLerola 6 months ago +1

    I run into OoM frequently with my 8 GB 3070, so super excited to try this! Do we know what got 'slimmed down'?

    • @NerdyRodent
      @NerdyRodent  6 months ago

      They did nerdy stuff 😆 The model card has a bit more info…

  • @leafdriving
    @leafdriving 6 months ago +4

    ComfyUI actually runs full SDXL on 6GB (I have a GTX 1060) ~ super slow ~ SSD-1B is faster (what I use to set up) ~ ComfyUI auto-selects "low VRAM load" ~ slow ~ but stack the queue and come back later, and it gets done.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Nice! Cool to hear it runs on 6GB too 😀 Awesome that this model should be faster on whatever hardware!

    • @synthoelectro
      @synthoelectro 6 months ago

      A GTX 1650 works too, 4GB VRAM, but you have to work with it like virtual swap.

    • @liquidmind
      @liquidmind 6 months ago

      A tensor with all NaNs was produced in VAE.
      Web UI will now convert VAE into 32-bit float and retry.
      To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
      To always start with 32-bit VAE, use --no-half-vae commandline flag. @@synthoelectro

    • @Adante.
      @Adante. 6 months ago

      How slow?

  • @MegaGasek
    @MegaGasek 6 months ago +4

    Thanks for bringing such great content. The spaghetti UI is not at all comfy, looks very intimidating, and this is coming from a guy who used to work with Maya a lot (Ah, the Maya multilister... such a ''joy'' to use!)... Anyway, I've got a 2080 with 8GB of VRAM, so from what you mentioned it is faster in ComfyUI. Will try ComfyUI for the first time. However, I have to say this: with all the Civitai LoRAs and community based tools I don't feel like SDXL is really necessary. I'm still using 1.5 in all my projects.

    • @neocaron87
      @neocaron87 6 months ago +1

      DaVinci Resolve user here, love the nodes there, hate them in Comfy even though I really want to like it XD

    • @MegaGasek
      @MegaGasek 6 months ago

      @@neocaron87 DaVinci Resolve is just a marvel of the modern world. An unbelievably great piece of software, for free, that is as capable as Premiere or any other video editor. I use Premiere myself just because it integrates well with Photoshop, Illustrator, AE and Audition, but I can see myself using it in the future.

  • @beecee793
    @beecee793 6 months ago +3

    Well, what's not to like is that we lose all the custom fine-tunes/LoRAs etc., which are a huge part of what makes the SD ecosystem so useful, right? Would you need to train new LoRAs and whatnot using SSD-1B as the base model?

    • @NineSeptims
      @NineSeptims 6 months ago

      60% is worth it as sad as it is.

    • @mattmarket5642
      @mattmarket5642 6 months ago +1

      True. What would make it *really* useful would be if a genius figured out how to convert models/LoRAs between the two. I guess it's impossible, but that would be brilliant. The community being split between making things for 1.5 and SDXL is already putting a damper on things a bit.

    • @Phobos11
      @Phobos11 5 months ago

      @@mattmarket5642 there's not really a "split" based on the models, but based on the hardware limitations. Most people don't have machines to run SDXL with, and SD1.5 is good for almost everyone

  • @midgard9552
    @midgard9552 5 months ago

    No idea why, but in Auto1111 I always get NaN errors with this model only

  • @elmyohipohia936
    @elmyohipohia936 6 months ago +1

    I don't know how to train LoRAs since SDXL (I have an 8GB VRAM GPU), do you have a tutorial or something? Before this I used to use Astria, Colab, and 1111 a bit...

    • @_arkel7374
      @_arkel7374 6 months ago

      Same here. I can train LoRAs on 1.5, but not SDXL due to VRAM limitations. Do we know whether it's possible to train with SSD-1B? If so, a how-to video would be MUCH appreciated.

  • @KonImperator
    @KonImperator 6 months ago +2

    My guy talking about 8 gigs VRAM like it's the standard for every low end pc out there 🤣

  • @banzai316
    @banzai316 6 months ago

    Any improvement in the language model, prompt understanding?
    Looks good 👍

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      Seems pretty much the same so far… or at least my tiny, rodent brain hasn’t found a noticeable difference as yet

    • @puoiripetere
      @puoiripetere 6 months ago +1

      With the latest Nvidia 546 drivers, VRAM is no longer a "problem", since they can fall back to normal RAM

    • @banzai316
      @banzai316 6 months ago

      @@puoiripetere, good to know. Probably somewhat better too if you use the Studio driver vs. Game Ready

  • @cyril1111
    @cyril1111 6 months ago

    Yes! But what do you think of the quality difference between this and normal SDXL?

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      49% of the time I prefer the other model

    • @MrAwesomeTheAwesome
      @MrAwesomeTheAwesome 6 months ago

      @@NerdyRodent Does that mean that 51% of the time you prefer this new model? Or are we also accounting for some ties?

  • @moo6080
    @moo6080 6 months ago

    Wow, thank you for keeping up with this news and sharing it with us. I can run SDXL on ComfyUI, but not on my GPU, so I'm going to give this a try!
    EDIT: Unfortunately, it looks like it still gives an out of memory error on my 6GB VRAM card

    • @NerdyRodent
      @NerdyRodent  6 months ago

      Plenty of other commenters say it works on their 6GB cards… I think someone said even a 980 Ti was ok!

    • @moo6080
      @moo6080 6 months ago

      Yeah, I was expecting it to as well, considering the model is only 4.2GB. I tried with --lowvram on ComfyUI and I still got the OOM error @@NerdyRodent

    • @liquidmind
      @liquidmind 6 months ago +1

      It does work, change OPTIMIZATION TO AUTOMATIC, and try LOWER resolutions like 1024x768 first!!! Then go higher!!

    • @moo6080
      @moo6080 6 months ago +1

      @@liquidmind I don't see that option on ComfyUI, what are you using as your interface?

    • @liquidmind
      @liquidmind 6 months ago

      Automatic1111: go to the xformers and SDP section and choose xformers or automatic optimization... can you see xformers? @@moo6080

  • @eukaryote-prime
    @eukaryote-prime 6 months ago

    I've been using Fooocus for SDXL and it takes 7 minutes per image on my 6GB 980 Ti 😅😅😅

  • @Kelticfury
    @Kelticfury 6 months ago +2

    Have you checked out SwarmUI yet?

    • @NerdyRodent
      @NerdyRodent  6 months ago +2

      Not yet, no. Any good?

    • @Kelticfury
      @Kelticfury 6 months ago +1

      You might like it. It is fast and runs on a ComfyUI backend. Still in beta, I think? So not perfect, but definitely an intriguing start. One thing to keep in mind is that it works with Python 3.11 (took me a bit to figure out what my problem was)

  • @FusionDraw9527
    @FusionDraw9527 6 months ago

    Thanks for sharing! AI really is progressing fast. I haven't even gotten used to SDXL and there's already a new distilled SDXL. The progress is amazingly quick.

  • @Gh0sty.14
    @Gh0sty.14 6 months ago +2

    For some reason I'm running out of memory using this model but not while using regular SDXL.

    • @moo6080
      @moo6080 6 months ago

      same

    • @Gh0sty.14
      @Gh0sty.14 6 months ago

      @@moo6080 I saw someone on Reddit say it only works with the dev branch of A1111, so maybe that's the issue.

    • @moo6080
      @moo6080 6 months ago

      @@Gh0sty.14 I'm using ComfyUI; it says on their Hugging Face page it should be compatible

  • @CoconutPete
    @CoconutPete 2 months ago

    I tried the same prompt with A1111 and SSD-1B and my image looks like a cheap cartoon lol

  • @vintagegenious
    @vintagegenious 6 months ago

    Obviously also supported inside SDNext

  • @liquidmind
    @liquidmind 6 months ago

    Error on a 2060 with 6GB VRAM:
    A tensor with all NaNs was produced in VAE.
    Web UI will now convert VAE into 32-bit float and retry.
    To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
    To always start with 32-bit VAE, use --no-half-vae commandline flag.

    • @liquidmind
      @liquidmind 6 months ago

      OK, I managed to get it to work... CAN'T use SDP on a 6GB VRAM GPU. I chose automatic optimization and --xformers and it works well. SLOW AF, but great

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Awesome to hear!

    • @liquidmind
      @liquidmind 6 months ago

      Thanks for all your tutorials. @@NerdyRodent
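
For anyone hitting the same NaN-in-VAE error, the log quoted above and the fix described in this thread boil down to launch flags. A hedged example of how they might be combined in A1111's webui-user.sh (flag names come from the error text and these comments; check your own install's --help before relying on them):

```shell
# Sketch of a low-VRAM webui-user.sh fragment based on this thread:
#   --no-half-vae  keep the VAE in 32-bit floats (avoids NaN/black images)
#   --xformers     memory-efficient attention instead of SDP
#   --medvram      trade speed for lower VRAM use (--lowvram is more aggressive)
export COMMANDLINE_ARGS="--no-half-vae --xformers --medvram"
```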

  • @NineSeptims
    @NineSeptims 6 months ago

    At the rate this tech is moving, I might soon be able to press generate and get 100 1024x1024 images instantly. 😳

  • @CasanovaSan
    @CasanovaSan 6 months ago +1

    Does it need a refiner?

    • @NerdyRodent
      @NerdyRodent  6 months ago +1

      Not needed, but you can if you like!

  • @puoiripetere
    @puoiripetere 6 months ago

    I tested the model and noticed that it is very sensitive to individual variations of the prompt. To be more precise, the prompt must be written very precisely to obtain good results. With the standard model a very generic prompt can give nicer results. You will have to spend more time writing the prompt. Once you understand how to interface with this model, the results are exceptional. The learning curve is steeper. I recommend starting from an extremely generic prompt and then working on adding details. For example, when generating a person you will have to be very specific in the construction of each part of the body: the face, eye alignment, skin texture and so on. Let's say this model is like a car with a manual gearbox.

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 6 months ago +1

    First!

  • @greendsnow
    @greendsnow 6 months ago +2

    why are you talking this way? :D

    • @MegaGasek
      @MegaGasek 6 months ago +6

      What do you mean? He has a great voice and explains things really clearly. If it is a joke I didn't get it.

    • @Elwaves2925
      @Elwaves2925 6 months ago +1

      Why are you typing that way? 😉

    • @MegaGasek
      @MegaGasek 6 months ago +1

      @@Elwaves2925 Don't blame me, it was my cat.