Civitai with Stable Diffusion Automatic 1111 (Checkpoint, LoRa Tutorial)

  • Published: 15 Nov 2024

Comments • 56

  • @binee77
    @binee77 1 year ago +1

    Starting from stable-diffusion-webui version 1.5.0, the a1111-sd-webui-lycoris extension is no longer needed. All its features have been integrated into the native LoRA extension. LyCORIS models can now be used as if they were regular LoRA models.

    • @controlaltai
      @controlaltai  1 year ago +2

      Thank you. I had mentioned this in the latest video for version 1.6; I forgot about the extension change here. Comment pinned.

    • @KLoLPlayer
      @KLoLPlayer 8 months ago

      @@controlaltai Is there an updated tutorial? I did exactly what you do, and I even used your tutorial to install Stable Diffusion 2.1 in Automatic 1111 as well, and I cannot get xformers for the life of me. I have redone everything from scratch multiple times; I feel like I am speedrunning the first installation process.
      My issue is that it keeps saying "No module xformers". Maybe it's because I did something wrong in the past? How do I properly clear all cache and data of my Stable Diffusion? I only ever delete the folder.

    • @controlaltai
      @controlaltai  8 months ago

      The xformers and PyTorch versions keep changing month to month. If you can tell me your torch version and CUDA version within the A1111 environment, I can give you the pip install command for xformers. xformers always has issues because it's never compatible with the latest version of torch+CUDA, so the solution is to downgrade to the torch version compatible with the latest xformers release.
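      To find those versions, one quick check from inside the A1111 virtual environment is a minimal Python sketch like this (it only assumes torch is installed in that venv):
      import torch
      # Print the torch build and the CUDA toolkit version it was compiled against.
      print("torch:", torch.__version__)
      print("cuda:", torch.version.cuda)
      # Optional: confirm torch can actually see the GPU.
      print("cuda available:", torch.cuda.is_available())
      if torch.cuda.is_available():
          print("device:", torch.cuda.get_device_name(0))
      Run it with the python.exe inside the webui's venv folder (usually venv\Scripts\python.exe) so it reports the environment A1111 actually uses.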

    • @KLoLPlayer
      @KLoLPlayer 8 months ago

      Where do I check that? I apologize, I am a complete noob at this @@controlaltai. Do you have a Discord server?

    • @KLoLPlayer
      @KLoLPlayer 8 months ago

      @@controlaltai My torch version, or at least from the install log: "Collecting torch==2.1.2" and "Collecting torchvision==0.16.2". I do not have CUDA.

  • @WayneHendersonsr
    @WayneHendersonsr 7 months ago

    I am new to generative AI, and a lot of my questions were answered in this video. Very informative, particularly the parts about saving images for CivitAI and fixing errors. I'm a new subscriber.

  • @iljabuinitski9745
    @iljabuinitski9745 1 year ago +2

    Wow, great descriptions! And I finally got the answers on how to use them.

  • @DrDaab
    @DrDaab 1 year ago +2

    Holy cow, your tutorial is INCREDIBLY GREAT! Saving the PNG to load all of the parameters saves tons of time, plus lets you see if anything is missing. Wow! Thanks!!!

  • @felixeduardomirandateran824
    @felixeduardomirandateran824 11 months ago +1

    This is the video I was looking for. Can I know what the components of your PC are, or what the minimum requirements are to get that generation speed?

  • @sangol9236
    @sangol9236 1 year ago +1

    Thank you so much for the video! One question: when I tried to generate an image on my Asus laptop with a 4050m (6 GB VRAM), I got an error: "OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 6.00 GiB total capacity; 3.95 GiB already allocated; 0 bytes free; 5.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF". Is it possible to decrease the amount of memory allocated by PyTorch? Or any other tricks?

    • @controlaltai
      @controlaltai  1 year ago

      Hi, welcome, I am glad you found the video useful. For your issue, try this: right-click the webui-user.bat file and open it in Notepad (just click Edit).
      Above this line: set COMMANDLINE_ARGS=
      Add this:
      set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:512
      Then change set COMMANDLINE_ARGS= to the following:
      set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --xformers
      This happens when the resolution is too high for the card to handle. If you are using SDXL at 1024 and trying to upscale, do it at less than 2x. Better yet, if you are not fixing a face, high-res fix is not needed, so just generate the image at 1024 and use the Ultimate SD Upscale image-to-image method to upscale it by 2x at a time.
      The second reason is generating in batches; reduce the batch size.
      Lastly, if it still gives an error, try this setting instead:
      set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:256
      Let me know if that works or not.
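      If you want to see how much VRAM is actually free or reserved before generating, a minimal Python sketch using torch's built-in memory queries (run inside the A1111 venv, and it assumes a CUDA GPU is present) would be:
      import torch
      # Free and total VRAM as reported by the CUDA driver, in GiB.
      free, total = torch.cuda.mem_get_info()
      print(f"free: {free / 1024**3:.2f} GiB / total: {total / 1024**3:.2f} GiB")
      # Memory currently allocated and reserved (cached) by PyTorch itself.
      print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
      print(f"reserved: {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")
      A large gap between reserved and allocated memory is the fragmentation the error message mentions, which is what the max_split_size_mb setting above is meant to limit.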

  • @dagclips
    @dagclips 1 year ago +1

    Thanks for the nice tutorial! Wanted to ask: what specs do you have? You generate images instantly.

    • @controlaltai
      @controlaltai  1 year ago

      Welcome and thank you. I am running this on an AMD 7950X3D with an NVIDIA 4090 GPU and 64 GB of RAM (6000 MHz).

    • @dagclips
      @dagclips 1 year ago

      Do you know by any chance why I always get an error about VRAM not being enough, even though I have 8 GB of VRAM on an RX 7600S? I can't upscale at all; without upscaling it's OK, but with upscaling on I get the error @@controlaltai

    • @controlaltai
      @controlaltai  1 year ago

      @@dagclips Firstly, use xformers and medium or low VRAM mode in A1111. Second, upscale 2x. If it still gives an error, try 1.5x. After that, use a tool called Ultimate SD Upscale, which upscales 2x easily. I have made a tutorial for that as well.
      ruclips.net/video/tQEcG2Rry7Q/видео.html
      You will get no errors when using the above method. Just make sure when using Ultimate SD Upscale to do it 2x each time.
      Even if you get an error at 1.5x, use an upscaler at 1x for the high-res fix, then use Ultimate SD Upscale at 2x. This is an optimal workaround.

    • @dagclips
      @dagclips 1 year ago +1

      xformers doesn't work on AMD if I understood correctly, and I can't get PyTorch 2.1.0 with CUDA on AMD, so yeah.. @@controlaltai

    • @controlaltai
      @controlaltai  1 year ago

      Ohh yes, right, you have AMD. What I suggest is to upscale using high-res fix at a value of 1, where the output stays 512x512, meaning it will only use high-res fix to correct doubled faces or deformities. Then use the SD upscale method via the extension, as per the tutorial link in the previous comment, at 2x at a time. This should work for you.

  • @habeebrahman9009
    @habeebrahman9009 11 months ago +2

    wow u are just 🔥🔥

    • @controlaltai
      @controlaltai  11 months ago

      Thank you Sir, much appreciated!!

  • @GamerGee
    @GamerGee 10 months ago

    What backend interface does the website use? I've tried taking web-based prompts from the site, but when I bring them into Automatic1111 they don't match, especially the SDXL versions. Any help?

    • @controlaltai
      @controlaltai  10 months ago

      Have you tried saving the PNG directly and loading it in A1111 via the PNG Info tab? If the file has metadata, A1111 will show it.
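      If you want to check outside A1111 whether a downloaded PNG carries any generation parameters at all, a minimal Pillow sketch would be the following (it assumes the A1111-style "parameters" text chunk and uses a hypothetical file name):
      from PIL import Image
      # Point this at the PNG you downloaded from Civitai (hypothetical name).
      img = Image.open("downloaded_image.png")
      # A1111 writes its generation settings into the PNG text chunks.
      params = img.info.get("parameters")
      if params:
          print(params)  # prompt, negative prompt, sampler, seed, etc.
      else:
          print("No embedded generation metadata found in this file.")
      If this prints nothing, the PNG Info tab will come up empty too, which usually means the metadata was stripped or the image came from a different workflow.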

    • @GamerGee
      @GamerGee 10 months ago

      @@controlaltai Yes, I've done PNG Info and copied it to txt2img. It works well with externally generated images, but when taking an image from the Civitai web generator, especially with SDXL prompts, I have had the hardest time getting anything remotely close.

    • @controlaltai
      @controlaltai  10 months ago

      Recently those either don't have any metadata or are ComfyUI workflows. Can you tell me the checkpoint name? I can have a look and confirm from my side if that's the case.

    • @GamerGee
      @GamerGee 10 months ago

      @@controlaltai sure let me find one that I tried to reproduce

    • @GamerGee
      @GamerGee 10 months ago

      @@controlaltai What's your email, if that's okay with you? I'll link some.

  • @clintonthorncraft2164
    @clintonthorncraft2164 1 year ago

    Fantastic video! Thanks for the clear and concise advice.
    Quick off-topic question, but what AI voice generation do you use?

    • @controlaltai
      @controlaltai  1 year ago

      Thank you. I appreciate your comment and I am glad you found the video useful.
      About the voice: we are a team of only two people. Not every faceless video is AI (tbh); it's just perceived that way. If the video is useful and it helps people, what does it matter? The technical ones are mostly done by me; others, like BlueWillow, Midjourney, etc., are done by the other person. But the research work for each video is done by both of us.

    • @clintonthorncraft2164
      @clintonthorncraft2164 1 year ago +1

      @@controlaltai I could have sworn it was using text to speech because of some of the inflection, but regardless...
      Amazing video controlaltai. Very happy to have found your channel and subscribed :)

  • @4kAustralia
    @4kAustralia 10 months ago

    I've watched many of your videos, but this one catches me out. I change the files and get xformers, but when I change back it gives an error that xformers only runs on NVIDIA cards, so it gets disabled, as I have an AMD system with a Radeon RX 7900, which is probably as good if not better than many NVIDIA cards, but it will not use it.

    • @controlaltai
      @controlaltai  10 months ago

      Most of the things in these local AI interfaces are NVIDIA-only. If you are using A1111, check the GitHub repository for any instructions for AMD.

  • @peeks7165
    @peeks7165 7 months ago

    How are your generations so fast? I have a 3080 and it still usually takes over half an hour.

    • @controlaltai
      @controlaltai  7 months ago

      AMD 7950X3D, 64 GB of 6000 MHz DDR5 RAM, RTX 4090.

    • @peeks7165
      @peeks7165 7 months ago

      @@controlaltai Is it normal for generations to take 45 minutes with a 3080 (12 GB VRAM) and a 7900X? Am I doing something wrong? Oftentimes it won't even show a "percent completed"; it will just show the "Interrupt" and "Skip" buttons. I don't know if it's doing anything sometimes.

    • @controlaltai
      @controlaltai  7 months ago

      Your CPU is very good. If you are on an NVMe drive, then all is good. If doing SDXL, then yeah, it's normal; it's the 3080 with 12 GB VRAM. You can use some optimizations, but that won't bring it down to 2-3 minutes. Basically it's the hardware. Also, I don't recommend buying a 4090 now, completely wrong timing. Personally I prefer to buy GPUs within 3 months of release to get full value, unless a company is paying for the hardware and you need it for work. My advice is to wait for the 5090, then either go in for a 4090 or a 5090. The 5090 is slated for an end-of-year or Jan 2025 announcement.
      Also try to make the switch to ComfyUI. It's more optimized and would be slightly faster.

  • @kaidokun200
    @kaidokun200 10 months ago

    After changing the resolution to 1920x1080, the image generates 2 people or objects. How can I fix it?

    • @controlaltai
      @controlaltai  10 months ago +1

      That resolution may be too wide for the checkpoint used. Generate images at 1024x1024 to avoid doubling of characters.

  • @judge_li9947
    @judge_li9947 1 year ago

    Great videos! I have trained my own model; sadly, now with the installations I always get 4 pictures merged together. Example: I want a woman on the beach, and it generates 3-4 women merged into each other. Before the installs I did not have that problem. The quality of the pictures is a lot better, though. Do you happen to know what might cause that?
    Also, explicit prompting/negative prompting does not help.
    Thanks for the help!!

    • @judge_li9947
      @judge_li9947 1 year ago

      OK, so the problem occurs when the generation resolution differs from what you trained your own model on. The easiest fix is to merge your model with DreamShaper... guess I figured it out myself ;) Hope this helps anyone who runs into the same problem.

    • @controlaltai
      @controlaltai  1 year ago +1

      Yeah, with Stable Diffusion there is the 2-3 heads, legs, etc. problem. One solution is to change the resolution, upscale, and then save and crop. The other solution is to go to Civitai, check some models which are similar to yours, and look at all their negative prompts. Some of them are very weird, but they work. I hope this helps. And thank you for liking the video.

  • @taron_tv
    @taron_tv 9 months ago

    super!

  • @vivienneomen
    @vivienneomen 10 months ago

    The xformers thing caused some CUDA issue? Now my LoRAs don't show up and nothing works right. I'm sure someone more competent would know what to do. I'll just reinstall Automatic 1111 and look for another video.

    • @controlaltai
      @controlaltai  10 months ago

      The xformers version mentioned here has obviously been superseded by now. The xformers version you need depends on both your PyTorch and CUDA versions; all three have to be compatible. Also, xformers has nothing to do with LoRAs not showing up. If your LoRAs are not showing up, there is some other issue.
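      A quick way to confirm all three line up is a minimal sketch run inside the A1111 venv that imports xformers and exercises its memory-efficient attention op (this assumes an NVIDIA GPU and that xformers is installed at all):
      import torch
      import xformers
      import xformers.ops as xops
      print("torch:", torch.__version__, "| cuda:", torch.version.cuda)
      print("xformers:", xformers.__version__)
      # Tiny dummy tensors (batch, seq_len, heads, head_dim) just to see the kernel run.
      q = k = v = torch.randn(1, 16, 8, 40, device="cuda", dtype=torch.float16)
      out = xops.memory_efficient_attention(q, k, v)
      print("memory-efficient attention OK:", tuple(out.shape))
      If the import fails or the op errors out, the installed xformers wheel was built against a different torch/CUDA pair and needs to be reinstalled to match.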

    • @KLoLPlayer
      @KLoLPlayer 8 months ago

      @@controlaltai Could you do an updated tutorial?

  • @U_n_d_e_r_s_c_o_r_e_n
    @U_n_d_e_r_s_c_o_r_e_n 4 months ago +2

    Dogshit video, doesn't explain anything.