Install Stable Diffusion for AMD GPUs on Windows | ComfyUI and webUI on AMD.

  • Published: Nov 16, 2024

Comments • 204

  • @Luinux-Tech
    @Luinux-Tech  1 month ago +2

    Please, if you are asking for help, explain your error in detail, state your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.

  • @ryzenrich
    @ryzenrich 1 month ago +3

    A webUI 1024x1024 image takes 1 min 50 sec on an RX 6600. Pretty good. Btw, I got an error by just using --use-directml. Instead, I used all the commands in your video but removed --lowram. I also used a different model, not sure if that affects the processing time. Thank you for the clear instructions.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      I'm glad I could help, and you're right. Some GPUs only need "--use-directml" to work, others need all or some of the other commands. I'll clarify that in the description.

  • @SaintsSwords
    @SaintsSwords 1 month ago +2

    Thanks a lot, I was looking for a way to install it with my AMD GPU. I have a 7900 XT and your tutorial is very clear. I was on Fooocus before and it took like 4 times longer to generate lmao, thanks a lot

  • @uxot
    @uxot 2 months ago +7

    You should really have pasted all the commands you used in the description.

  • @ItsKagiVids
    @ItsKagiVids 29 days ago +1

    Thank you so much for the tutorial. I've tried other vids, but this is the only one I was able to find that actually works with AMD.

  • @JewelryHustlersCorner
    @JewelryHustlersCorner 19 days ago +1

    Finally a no-BS tutorial! Thanks

  • @ginisksam
    @ginisksam 2 months ago +1

    Thanks for the guide - works with my old RX 6700 XT. Now looking for other good models to try.

  • @greenesyt563
    @greenesyt563 1 month ago +2

    The first command worked, then at 3:40 I typed the command correctly and it says "'Set-ExecutionPolicy' is not recognized as an internal or external command, operable program or batch file.". I didn't follow the tutorial from the start because I already have Python 3.10.6 and Git installed, since I used to use A1111 and am now moving to Comfy. Could this be because I am on a different drive than where my Python is installed? Do I have to do everything on the C drive?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago +1

      Are you running the command in PowerShell? This command is not for Windows CMD.

    • @greenesyt563
      @greenesyt563 1 month ago

      @@Luinux-Tech TYSM it worked! And one more question: can you tell me if I can use ZLUDA with it, because I only got an RX 580 8GB? And if I can't, can I run Flux with DirectML? Thanks again😁

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Yes, ZLUDA works, but from my experience, at least on Polaris GPUs, the performance difference is not very significant. I haven't tested Flux yet, but if you have ComfyUI running, just download the model and a workflow and test it.

  • @wurfelgott1520
    @wurfelgott1520 8 days ago +1

    Very nice, it worked, thank you so much!

  • @andysitus5485
    @andysitus5485 6 days ago

    I did everything according to the guide, but only the CPU works; the RX 6600 XT does not 😭😭😭

  • @Tigermania
    @Tigermania 3 months ago +2

    What AMD GPU were you using for this? Comfy looks several times faster than A1111.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +5

      The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI and 2 minutes and 3 seconds for an image also with 20 steps in webUI. My GPU is an RX 550 4GB.

    • @CapaUno1322
      @CapaUno1322 2 months ago +3

      @@Luinux-Tech Wow, it's pretty cool that you can use just 4GB of VRAM, that's impressive....

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 1 month ago +1

    Followed each step. But: module not found error: No module named 'torch_directml' ☹ 07:30

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Try running the command "pip install torch-directml torchaudio" again and see if there are any errors in the terminal.

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 1 month ago

      @@Luinux-Tech Hmm, now it worked, but the problem is still the same: "Could not allocate tensor with 268435456 bytes. There is not enough GPU video memory available!" AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4GB), Dell Inspiron 15 3567, i7 7500 CPU @ 2.7 GHz, 8GB RAM

  • @KayuroV
    @KayuroV 2 months ago +2

    If I have 16GB of RAM, should I put normalvram or something else?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
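
The rule of thumb above can be sketched as a tiny helper (`vram_flag` is a hypothetical name; the flags themselves are the real ComfyUI options mentioned in the thread):

```python
def vram_flag(vram_gb: int) -> str:
    """Pick a ComfyUI VRAM flag following the advice above (hypothetical helper).

    Below 6GB use --lowvram, 6-12GB use --normalvram, and above 12GB use
    --highvram so models stay loaded in GPU memory.
    """
    if vram_gb < 6:
        return "--lowvram"
    if vram_gb <= 12:
        return "--normalvram"
    return "--highvram"

print(vram_flag(16))  # a 16GB card gets --highvram
```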

  • @arteon2017
    @arteon2017 2 месяца назад +1

    i get error of "there is no enough gpu video memory available" but it doesnt even use my gpu (im not using a laptop)
    gpu : rx 6600

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      ComfyUI or WebUI? What resolution did you use? Does the terminal say you are using "CPU only" mode?

    • @arteon2017
      @arteon2017 2 months ago

      @@Luinux-Tech webui 512x512 idk

  • @normon6314
    @normon6314 1 month ago +1

    Hello. (i3 9100 and RX 580 8GB) I'm using WebUI. Image generation works fine, but when I checked my VRAM usage after generating, it stays at max usage for some reason. Is this normal? Due to this, I cannot upscale the image; it says that I don't have enough VRAM.
    RuntimeError: Could not allocate tensor with 1073741824 bytes. There is not enough GPU video memory available!
    Could you help please?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago +2

      Hi, sorry I didn't answer earlier. Normally, the VRAM ends up being full because the model is sent to VRAM to speed up the generation process. However, since your GPU has 8GB, this should only happen if you are using SDXL (models trained for 1024x1024). I suggest trying the Tiled Diffusion & VAE extension and the "--lowvram" parameter. This should be sufficient to eliminate errors caused by insufficient memory. Another option would be to generate the images first and then upscale them. It is also worth saying that DirectML (the API that allows you to use AMD cards for Machine Learning) has some memory problems and may be contributing to this abnormal use of VRAM.
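
As a side note on reading these errors: the byte counts in "Could not allocate tensor" messages are exact powers of two, so converting them shows how large the single failed allocation was (a quick sketch; `tensor_mb` is an illustrative helper, not part of any tool here):

```python
def tensor_mb(nbytes: int) -> float:
    # Convert the byte count from a DirectML allocation error to mebibytes.
    return nbytes / 2**20

# The 1073741824-byte failure above is a single 1 GiB allocation.
print(tensor_mb(1073741824))  # 1024.0
print(tensor_mb(268435456))   # 256.0
```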

  • @srivarshan780
    @srivarshan780 2 months ago +2

    After requirements.txt, mine gets stuck at gradio in the last paragraph.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Some people have reported this bug. Try pressing enter in the terminal, and it should continue.

  • @leozinhojunior2879
    @leozinhojunior2879 2 months ago +1

    Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!

  • @TT-go3sl
    @TT-go3sl 2 months ago +1

    Could you further explain the "Unblock-File -Path." Tried researching but could not find anything.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

    • @TT-go3sl
      @TT-go3sl 2 months ago

      @@Luinux-Tech ohhh ok thanks!!

    • @rubensoliveira9681
      @rubensoliveira9681 2 months ago

      @@Luinux-Tech and what about comfyui itself? what do I put in the place of "name_of_script_to_unblock"?

    • @ilxosarui197
      @ilxosarui197 2 months ago

      @@rubensoliveira9681 Did you find out? I'm stuck there too

  • @gianlucalorusso8130
    @gianlucalorusso8130 2 months ago +1

    First of all, your guide is very clean and easy to follow. I followed all your steps and it all worked; then Stable Diffusion opens up in the webpage, just like in the video. I tested it with a random prompt, but it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid"
    I tried to download 2 different models but the error is the same.
    Any idea? Because I checked online but couldn't find anything.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Thanks. What is your CPU and GPU? Are you using ComfyUI or WebUI?

    • @gianlucalorusso8130
      @gianlucalorusso8130 2 months ago

      @@Luinux-Tech AMD Ryzen 9 3900X 12 CPU
      NVIDIA GeForce RTX 3080
      and i am using WebUI

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.

    • @gianlucalorusso8130
      @gianlucalorusso8130 2 months ago +1

      @@Luinux-Tech Oh ok, well, thanks for the answer and the support

  • @Minecrafter-cv6rb
    @Minecrafter-cv6rb 1 month ago

    Can I just download ComfyUI manually from GitHub? For some reason the git clone command keeps failing.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Yes, I believe it will work normally.

  • @ardysalinggih
    @ardysalinggih 2 months ago +2

    Thanks for the tutorial, it is very helpful.
    Can I ask?
    Can ComfyUI run using ZLUDA on Windows?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +2

      Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.

    • @luxiland6117
      @luxiland6117 2 months ago +1

      ​@@Luinux-Tech It's working with ZLUDA, I have a 6700 XT, but it's a messy install and I can't use the sampling method, cudnn error. Only LCM... F

    • @richardtorres5105
      @richardtorres5105 2 months ago

      @@luxiland6117 Can you show me the method you used to make it work? I have a RX 6750 XT and I can't get it to work with Zluda.

  • @memoryhole7229
    @memoryhole7229 3 months ago +2

    Ubuntu vs Windows: Which was faster?

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +2

      Windows DirectML does not manage memory very well at the moment. In Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters (seed, lora, model...) in Linux took about 145 seconds, while in Windows it took 201 seconds.

  • @vekkaro
    @vekkaro 1 month ago

    First try, works awesome using the GPU. Second try, without even closing the terminal, I get this: py:688: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. So now ComfyUI is using my CPU 😮‍💨

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      In the browser, press the "load default" option and try again.

  • @louisbeauger
    @louisbeauger 16 days ago

    Does Comfyui work well with a 7900 xtx?

    • @Luinux-Tech
      @Luinux-Tech  16 days ago

      Yes, and with this GPU, you can easily use SDXL (models with higher resolution).

  • @nachoferreyra8677
    @nachoferreyra8677 1 month ago

    I get stuck at pip install -r .\requirements.txt ... I put the correct directory... help

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Be more specific. What happens? What does the error say?

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 1 month ago

    Following on WebUI, this is the error I got: after generation, the image disappeared and I got this message in the terminal:
    RuntimeError: Could not allocate tensor with 134217728 bytes. There is not enough GPU video memory available!

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      What is your GPU, and what image size are you trying to generate?

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 1 month ago

      @@Luinux-Tech AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4GB)
      .............. Dell Inspiron i7 7500 CPU @2.7 Ghz 29012 Cores...... 8GB RAM...... Mhz--- Model- 15 3567

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 1 month ago

      @@Luinux-Tech I generated simple image of "Cat" to test it.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      This method will only use your AMD GPU, and since you only have 2GB of VRAM, it will be very difficult to generate an image. You can try to reduce the resolution of the image you want to generate or use webUI with the "Tiled Diffusion & VAE" extension and the "--lowvram" parameter.

  • @AtajoSeries
    @AtajoSeries 1 month ago

    I have Win10 Enterprise LTSC; I think torch doesn't work on LTSC,
    because I did everything as you did. I got a Ryzen 5500 and an RX 6700 XT.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Can you describe the error? What does it say?

    • @eshistorai
      @eshistorai 1 month ago

      @@Luinux-Tech
      Traceback (most recent call last):
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\main.py", line 90, in
      import execution
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\execution.py", line 13, in
      import nodes
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\nodes.py", line 21, in
      import comfy.diffusers_load
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\diffusers_load.py", line 3, in
      import comfy.sd
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\sd.py", line 5, in
      from comfy import model_management
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\model_management.py", line 62, in
      import torch_directml
      File "C:\Users\Ooze\Documents\stable-diff\venv\Lib\site-packages\torch_directml\__init__.py", line 21, in
      import torch_directml_native
      ImportError: DLL load failed while importing torch_directml_native: The specified module could not be found.
      This is the error PowerShell shows.
      I installed every step without any problem, but when I try to run python main.py with the whole command, it gives that error.

  • @JoseEmanuelRojasRivas1
    @JoseEmanuelRojasRivas1 2 months ago +1

    Hello friend, thanks for the video, well explained.
    How can I add models?
    I mean other models, like the civitai ones.
    Thanks

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      If you are using comfyUI, download the model and place it in the ComfyUI > models > checkpoints folder, or if you are using webUI, download the model and place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder, after that just reload the page in the browser and select the new model.

  • @Driftmonkey
    @Driftmonkey 1 month ago

    How do I get the ability to open up Python from the folder I'm currently in? Is it clicking that box that says "Add Python 3.10 to PATH"? Because it's still not there for me. Maybe it's my Windows appearance settings making Windows 10 look like Win 7?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Yes, it should work by checking the "Add to Path" box. If not, open the environment variables menu, double click on "Path," and add the path of your Python installation (usually: C:\Users\[user]\AppData\Local\Programs\Python\Python[version]).

  • @Yamaguchi-Kawaki
    @Yamaguchi-Kawaki 1 month ago

    What's the difference between "pip install torch-directml" and "pip install torch-directml torchaudio"?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      I specify "torchaudio" to prevent version mismatches.

    • @Yamaguchi-Kawaki
      @Yamaguchi-Kawaki 1 month ago +1

      @@Luinux-Tech Nah, it's just that on GitHub it's without "torchaudio", so I was a little confused. But hey, I tried both as an experiment, with and without torchaudio, and both ways seem to work.

  • @Matheus-mr4tl
    @Matheus-mr4tl 1 month ago

    In ComfyUI I get the error "there is not enough GPU video memory available" even though I followed every step for low VRAM. My GPU is 4GB.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Are you using SDXL or the normal models? What size image are you trying to generate (512x512, 768x768...)?

    • @Matheus-mr4tl
      @Matheus-mr4tl 1 month ago

      @@Luinux-Tech good question... what should I use?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Stable diffusion 1.5 models and images with 512x512 pixels. If you want larger images, just upscale them later.

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 1 month ago

    I am using Windows 10 and we don't have the "Open in terminal" option. 03:01

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Shift + Right Click --> open PowerShell here.

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 1 month ago +1

      @@Luinux-Tech Yes, just tried and got the option. Thank you. I am going to subscribe to you right now for your help. 😊😊

  • @fotomez7345
    @fotomez7345 2 months ago

    Hi, I have a problem: when I run "py -m venv venv" it says Error: Command '['C:\\AI\\venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. So I'm blocked at the starting point. Could you help me? Thank you so much

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      First, run the command: "py -m pip install --upgrade pip" and then "py -m pip install virtualenv". Then, try again to create the venv. If it doesn't work, I recommend reinstalling Python.

  • @xverny0
    @xverny0 1 month ago

    Hi, if I want to get an image it always gives me the problem that says “Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!” What should be the solution to the problem?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      What is your GPU? This is a lack of VRAM problem, try using the "--lowvram" parameter or decreasing the image resolution.

    • @xverny0
      @xverny0 1 month ago

      ​@@Luinux-Tech My GPU is a 6700 XT.
      When I use --lowvram, --normalvram or --highvram I still get the same error

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      @@xverny0 What is the size of the image you are trying to generate?

    • @xverny0
      @xverny0 1 month ago

      @@Luinux-Tech size of image is 512x512

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      @@xverny0 Your GPU should be able to generate images much larger than 512x512. Are you using webUI or ComfyUI? Please provide the full command you are using to start it. Also, try generating an image and check in the task manager if the VRAM is actually full.

  • @alexiskonto1166
    @alexiskonto1166 1 month ago

    Thank you for the video. How can I dedicate more VRAM to the "server"? I have an RX 6750 12 GB but it only reserves 1GB ("--reserve-vram 4096" or "--reserve-vram 2.0" or other numbers is not working)

    • @alexiskonto1166
      @alexiskonto1166 1 month ago

      ComfyUI

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.

  • @emauelmoschen9835
    @emauelmoschen9835 2 months ago +1

    Does it work with an 8GB asrock rx 570?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, in the video I used an RX 550 4GB. On yours, it will work even better because of the 8GB of VRAM.

  • @Galova
    @Galova 1 month ago

    Oh, how do I enable Olive support as well? I've read an article on the AMD blog that it helps optimize an AI model to run faster, sometimes a lot faster, particularly on AMD GPUs. I've seen there is a branch of Stable Diffusion on GitHub with Olive support, but I failed to make it work because of errors during installation. I've managed to install ComfyUI, but webUI returns an OSError.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Try deleting "C:\users\\.cache\huggingface" to fix OSError.

    • @Galova
      @Galova 1 month ago

      @@Luinux-Tech I've managed to make ComfyUI run using your method and links. I tried to install webUI, and it returns an error that it can't find the repository on Hugging Face. It is returned by a Python script run from the bat file... I tried to reinstall with some other tutorials, where I get a 'no cuda drivers detected' error even though I used the -directml command line etc. What can it be?
      Since ComfyUI seems to work fine, I tried the Photoshop plugin for ComfyUI. I installed everything following the instructions, but it doesn't work, reporting an error that the VAE was not found, while it works fine in the browser. So sad that NOTHING works out of the box..... goddamn quest.

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      About webUI, I also noticed these errors coming from Hugging Face. Apparently, some files are no longer available on Hugging Face, and the webUI guys still haven't fixed the broken links. As for ComfyUI and Photoshop, unfortunately, I haven't tested them yet, so I can't help at the moment.

  • @pepedontlie
    @pepedontlie 2 months ago

    Can I create a Full HD quality image with an RX 6600 8GB?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      These models are trained to generate images in specific resolutions, such as 512x512, 1024x1024... To get larger images, you first need to generate the image in a size supported by the model you are using, then upscale it to the resolution you want. That said, yes, your GPU is capable of producing images in FullHD using upscaling.
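
The generate-then-upscale arithmetic is simple; for instance, reaching Full HD from a 512x512 base needs a 3.75x upscale (a sketch with illustrative numbers; `upscale_factor` is a hypothetical helper):

```python
def upscale_factor(base: int, target_w: int, target_h: int) -> float:
    # Smallest uniform scale so a base x base render covers the target size.
    return max(target_w, target_h) / base

# A 512x512 SD 1.5 render needs a 3.75x upscale to cover Full HD (1920x1080).
print(upscale_factor(512, 1920, 1080))  # 3.75
```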

  • @Paddi_o
    @Paddi_o 2 months ago

    Hi there, I installed as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM gets to about 20GB usage, but it still says fallback to CPU. I get about 3-4 it/s, and after finishing the process my VRAM is still full at 20GB. What's happening here? It still says fallback to CPU while executing.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Were you using webui or comfyui? Does the task manager show high GPU usage? What parameters did you use to start?

    • @Paddi_o
      @Paddi_o 2 months ago

      I used ComfyUI. While executing, the graphics card goes up to like 80% GPU usage. The VRAM stays at 20GB at all times after the first image; after the second it goes even higher. I still only get like 3-4 it/s. I used the standard parameters from the basic layout with my own prompt. If I queue up like 3 batches with 40 steps it will even drop to 1.5-2.6 it/s.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Sorry. You switched the units and confused me. If you are getting 3-4 iterations per second, it seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards. If I am not mistaken, a high-end AMD card will have a third of the performance of a high-end Nvidia 4000 Series card in machine learning.

    • @Paddi_o
      @Paddi_o 2 months ago

      @@Luinux-Tech I remember the same performance on other people's 6900 XT. Is there really not that big of a difference between the 6xxx and 7xxx cards?
      Would it be a big difference switching to Linux?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.

  • @seanmorgan4119
    @seanmorgan4119 2 months ago

    Please help:
    Traceback (most recent call last):
    File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in
    import comfy.utils
    File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in
    import torch
    File "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in
    raise err
    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Try installing the latest VC_redist.x64, download link: learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170

    • @seanmorgan4119
      @seanmorgan4119 2 months ago

      @@Luinux-Tech Downloaded it, but it still doesn't work; I'm getting the same error message.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      What is your GPU?

    • @seanmorgan4119
      @seanmorgan4119 2 months ago

      @@Luinux-Tech I just have an integrated AMD GPU. Would that be the reason?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      First, install the "App Installer" from the Microsoft Store. Then, restart your computer and run the following command without the quotes: "winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64""

  • @ConanRider
    @ConanRider 1 month ago

    I keep getting "VAE object has no attribute vae_dtype"

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      More information, please: Comfy or WebUI? When does the error occur? What is your hardware?

  • @DenFed-v3m
    @DenFed-v3m 2 months ago

    "RuntimeError: Couldn't clone Stable Diffusion.
    Error code: 128"
    What is it?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.

    • @DenFed-v3m
      @DenFed-v3m 2 months ago

      @@Luinux-Tech The problem is that the git clone download is interrupted. I can't download at all; it's constantly interrupted. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem: I can't run webui-user.bat because the user name on the machine from which I copied the git clone is different, and the path to Python is different too. Do you know by any chance in which of the repository's files I can correct the path to Python? This is getting silly.

    • @DenFed-v3m
      @DenFed-v3m 2 months ago +1

      Nevermind, I have copied the venv folder from the second machine, too. Copying only git clone files made it work.

  • @ledroy69
    @ledroy69 2 months ago

    Do you have to always do that step at 12:56 when launching it?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.

    • @burmy1774
      @burmy1774 9 days ago

      @@Luinux-Tech Is there a way to create a .bat file that does all those steps to launch it?

    • @Luinux-Tech
      @Luinux-Tech  8 days ago

      @@burmy1774 Yes, and it's very simple. Create the script on your desktop and use the "call" command to activate the venv. Then, use the "call" command to run the "webui-user.bat" script. Something like that.

  • @JoeM771
    @JoeM771 2 months ago

    Thanks for the great video. When I try to generate an image, it is not using the GPU at all, just the CPU. I have a 6650 XT. When running Comfy, I get this at the start:
    Using directml with device:
    Total VRAM 1024 MB, total RAM 16333 MB
    pytorch version: 2.3.1+cpu
    Set vram state to: LOW_VRAM
    Device: privateuseone
    How do I get it to use the GPU? Instead it has the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      When you installed torch-directml, did you notice any errors? Try running the command "pip install torch-directml" and see if there are any errors.

    • @JoeM771
      @JoeM771 2 months ago

      @@Luinux-Tech Thanks for the reply. I did the directml install command and there were no errors. Then I noticed, as I was watching your video, that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping. It took about 193 seconds to make an image. One thing is I have 8 gigs of VRAM but it only shows 1 gig (just like in your video). It errors out unless I use the lowvram parameter.

    • @JoeM771
      @JoeM771 2 months ago

      Oh, there was a question in there: can I get it to use the 8 gigs of VRAM? Do you think it might be using it even if it only shows 1 gig? Thanks!

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.

  • @NoemieValois-u4z
    @NoemieValois-u4z 1 month ago

    Are Stable Diffusion and Fooocus the same thing?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Fooocus is more automated, requiring less user input.

  • @luxiland6117
    @luxiland6117 2 months ago

    It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results. When I build a 512x512 image and queue the prompt, the GPU isn't working, only 100% of RAM; no GPU use, CPU at 5%. It took five minutes to generate the first time, six minutes the second, again six, and so on. Putting normalvram or lowvram gives the same result; ComfyUI doesn't touch my GPU. T_T

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch sees your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
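
The author's check can be wrapped into a small script (a sketch only; `check_torch_gpu` is a hypothetical helper, and note that on DirectML installs the device is exposed via torch_directml rather than torch.cuda, so a CPU-only message here does not always mean the GPU is unusable):

```python
def check_torch_gpu() -> str:
    """Report what GPU PyTorch can see, without crashing on CPU-only builds."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this venv"
    if torch.cuda.is_available():
        return torch.cuda.get_device_name(0)  # CUDA/ROCm build: real GPU name
    return "Torch not compiled with CUDA enabled (CPU-only or DirectML build)"

print(check_torch_gpu())
```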

    • @luxiland6117
      @luxiland6117 2 months ago

      @@Luinux-Tech
      venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
      return get_device_properties(device).name
      raise AssertionError("Torch not compiled with CUDA enabled")
      AssertionError: Torch not compiled with CUDA enabled

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.

    • @luxiland6117
      @luxiland6117 2 months ago

      @@Luinux-Tech Not working, and I made a clean install... Something doesn't work or gets bypassed in the install.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      When installing torch-directml, do you notice any errors regarding mismatches in the versions of the torch packages?

  • @xProto_Gaming
    @xProto_Gaming 2 months ago

    So if I have more than 8GB VRAM, I type highvram?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, but remember that DirectML's VRAM management is very limited, and you may end up with errors due to a lack of VRAM.

  • @oma3467
    @oma3467 3 days ago

    When I create a picture my computer restarts. Does anyone know why?

    • @Luinux-Tech
      @Luinux-Tech  3 days ago

      This usually happens when the VRAM is completely full and causes the system itself to crash, what is your GPU? What is the resolution of the image you are trying to generate?

    • @oma3467
      @oma3467 3 days ago

      @@Luinux-Tech I tried to make the vanilla bottle picture. I have an AMD 7950 XTX.
      I updated to the newest driver version and the temp is below 66 °C.

  • @MerhabaBenMert
    @MerhabaBenMert 2 months ago

    no zluda?

  • @758185luan
    @758185luan 2 months ago

    I have a problem: "Numpy is not available". Please help me.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      If you activate the venv and try to install with the command "pip install numpy," what happens?

    • @TanquetaOwO
      @TanquetaOwO 2 months ago

      @@Luinux-Tech It happens to me too, this is what i get from the terminal:
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
      Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
      [notice] A new release of pip available: 22.2.1 -> 24.2
      [notice] To update, run: python.exe -m pip install --upgrade pip
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
      A module that was compiled using NumPy 1.x cannot be run in
      NumPy 2.1.1 as it may crash. To support both 1.x and 2.x
      versions of NumPy, modules must be compiled with NumPy 2.0.
      Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
      If you are a user of the module, the easiest solution will be to
      downgrade to 'numpy

    • @TanquetaOwO
      @TanquetaOwO 2 months ago +2

      I just found the solution, you have to run
      pip uninstall numpy
      And then
      pip install numpy
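      After the reinstall, a quick way to confirm which NumPy the venv actually resolves (a diagnostic sketch; `numpy_status` is a hypothetical helper name, and the version threshold reflects the warning above, where modules compiled against NumPy 1.x refuse to run under 2.x):

```python
def numpy_status() -> str:
    """Report the installed NumPy version and whether modules compiled
    against NumPy 1.x (as in the warning above) can load against it."""
    try:
        import numpy
    except ImportError:
        return "numpy is not installed"
    major = int(numpy.__version__.split(".")[0])
    if major >= 2:
        return f"numpy {numpy.__version__}: modules compiled for 1.x may fail"
    return f"numpy {numpy.__version__}: compatible with 1.x-compiled modules"

print(numpy_status())
```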

    • @758185luan
      @758185luan 2 months ago +1

      Thanks all you guys, it works :DDDD

    • @TanquetaOwO
      @TanquetaOwO 2 months ago

      @@758185luan yeah, it also works for me but I really don't like the time it needs to generate, I'm probably moving to linux

  • @manhchuuc4336
    @manhchuuc4336 2 months ago

    Can you tell me why my SD doesn't run on my GPU?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      I need information: what is your GPU? Is it giving an error? What does the error say?

  • @zerpoll2k
    @zerpoll2k 27 days ago +1

    thanks idol

  • @Gainax507
    @Gainax507 2 months ago

    404 error on the Stable Diffusion model link

  • @gabrielpires3365
    @gabrielpires3365 2 months ago

    Is there any way to use LoRA with this?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, it works normally. Just put the LoRA files in the correct folder (ComfyUI > models > loras) and use them.

  • @Thealle09
    @Thealle09 2 months ago

    What does the "allow scripts" part do?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      It allows your user account to run the ComfyUI/WebUI startup scripts.

    • @kevinmiole
      @kevinmiole 2 months ago +2

      @@Luinux-Tech And this is very dangerous, because you're opening everything up to malware. Is there another way, without doing this?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.

    • @kevinmiole
      @kevinmiole 2 months ago

      @@Luinux-Tech Thank you. I'm very bad at scripting, but I know a little about security. How do you use the command you suggested to allow only the scripts in this tutorial?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

  • @Pro-arm
    @Pro-arm 1 month ago

    Nice, very good, it works!

  • @dudububu-ll8zq
    @dudububu-ll8zq 1 month ago

    Hey man, can we get a Flux guide too?

  • @okachpmeow
    @okachpmeow 2 months ago +1

    tks u

  • @MauroSgamer
    @MauroSgamer 2 months ago

    Hi, what do you type after py? 3:08

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      py -m venv venv
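      The same thing can be done from Python itself via the stdlib `venv` module (a sketch; the folder name `venv` matches the one used in the video):

```python
import os
import venv

# Equivalent of "py -m venv venv": create an isolated environment
# in a folder named "venv" under the current working directory.
venv.create("venv", with_pip=False)  # with_pip=True also installs pip, like the CLI does

# Every venv contains a pyvenv.cfg file describing the base interpreter.
print(os.path.exists(os.path.join("venv", "pyvenv.cfg")))
```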

    • @MauroSgamer
      @MauroSgamer 2 months ago +1

      @@Luinux-Tech Thanks! I'll resume the installation later. If you can, make an updated version with ZLUDA; that would be great!

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.

  • @SwilightTparkle
    @SwilightTparkle 1 month ago

    ComfyUI
    Total VRAM 1024 MB
    rx 580 8 GB

    • @Luinux-Tech
      @Luinux-Tech  1 month ago +1

      This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.

  • @RefRed_King
    @RefRed_King 1 month ago

    Thx bruh, I'm subscribed

  • @Ромакотор-ю8в
    @Ромакотор-ю8в 2 months ago

    The Stable Diffusion model site leads to a 404 error.

    • @Ромакотор-ю8в
      @Ромакотор-ю8в 2 months ago

      Also, almost all upscalers lead to the problem "Cannot set version_counter for inference tensor". Can anyone tell me how to fix this?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, they took down the link. I've already updated it with new links. Are you using ComfyUI?

    • @Ромакотор-ю8в
      @Ромакотор-ю8в 2 months ago

      @@Luinux-Tech Nope, the stable-diffusion-webui-amdgpu version.

  • @ImanWahriz
    @ImanWahriz 2 months ago

    Error, not usable

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Try to describe the error better, indicating at which step the error occurs and whether you used WebUI or ComfyUI. Any additional information that could be helpful would be appreciated.

  • @RefRed_King
    @RefRed_King 1 month ago

    OMG PLS HELP ME LINUX MADE EZ I ACCIDENTLY REMOVED MY IMAGE OUTPUT WHAT DO I DO 😭😭😭😭

  • @RefRed_King
    @RefRed_King 1 month ago +1

    OK NEVERMIND I CLICKED LOAD DEFAULT THANK

  • @NeverForever40
    @NeverForever40 1 month ago

    Hello, Linux Made EZ! I ran into this problem when trying to generate images in WebUI and I don't know how to solve it:
    NotImplementedError:
    Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
    Do you know anything about this?
    amd rx 5700, intel core i7-3770k ivy bridge, windows 11 23h2
    cmd log:
    PS C:\Users\King\Documents\Stable-diff> .\venv\Scripts\activate
    (venv) PS C:\Users\King\Documents\Stable-diff> cd .\stable-diffusion-webui-directml\
    (venv) PS C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml> .\webui.bat
    venv "C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
    ROCm Toolkit 6.1 was found.
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    Version: v1.10.1-amd-9-g46397d07
    Commit hash: 46397d078cff4547eb4bd87adc5c56283e2a8d20
    Using ZLUDA in C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml\.zluda
    Failed to load ZLUDA: list index out of range
    Using CPU-only torch
    ...
    Failed to create model quickly; will retry using slow method.
    Applying attention optimization: InvokeAI... done.
    Model loaded in 67.0s (load weights from disk: 1.1s, create model: 2.1s, apply weights to model: 21.0s, apply half(): 2.6s, calculate empty prompt: 40.1s).
    after I pressed GENERATE
    cmd log:
    RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
    Time taken: 1 min. 16.6 sec.
    or, for someone who wants to help, my discord is iwish6768, discord id is 292757299309838337
    when you add or write to me, please indicate the reason: I want to help with SD for AMD

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      Are you trying to use ZLUDA?
      What is happening is that ZLUDA is not working, and the webUI is trying to use CPU fallback. However, your CPU is very old and does not work with PyTorch, so it gives an error.

    • @NeverForever40
      @NeverForever40 1 month ago

      @@Luinux-Tech No, I'm not trying to use ZLUDA. I have it on my PC, I use it for Blender, and I understand roughly what it is, but I repeated the installation exactly according to your video tutorial, and what happens is that no images are generated with any model, with the error described above.
      By the way, ComfyUI works fine for me, but stable-diffusion-webui-directml does not. I'll try to install SD in another folder from scratch, and if this error appears again, I'll edit this message and add to it.

    • @NeverForever40
      @NeverForever40 1 month ago +1

      ... I don't know what exactly did it, deleting all versions of Visual Studio or installing/updating the AMD HIP SDK with the beta driver from its settings, but now, at half past six in the morning, I was finally able to get a working SD on the 20th try using your video. So thank you very much for the video tutorial, and remember that AMD is a cheap option only for games.

  • @andrezozo666
    @andrezozo666 10 days ago

    Can anyone help me? I have this error running .\webui-user.bat:
    stderr: error: subprocess-exited-with-error
    Preparing metadata (pyproject.toml) did not run successfully.
    exit code: 1
    [21 lines of output]
    + meson setup C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302 C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-python-native-file.ini
    The Meson build system
    Version: 1.6.0
    Source dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302
    Build dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk
    Build type: native build
    Project name: scikit-image
    Project version: 0.21.0
    WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
    ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
    The following exception(s) were encountered:
    Running `icl ""` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `cc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `gcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `clang --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `clang-cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `pgcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    A full log can be found at C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-logs\meson-log.txt
    [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    Encountered error while generating package metadata.
    See above for output.
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.

    • @Luinux-Tech
      @Luinux-Tech  10 days ago

      Apparently, Pip is trying to compile a package because it couldn't install the binary. Are you using Python 3.10? Activate the virtual environment and try running the command "pip install scikit-image" to see what happens.
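      The build failure above is the typical symptom of pip finding no prebuilt wheel for the interpreter and falling back to a source build. A sketch of the version check involved (`wheel_likely_available` is a hypothetical helper name, and the supported range is an assumption based on the Windows wheels published for scikit-image 0.21.0):

```python
import sys

def wheel_likely_available() -> str:
    """Guess whether pip can use a binary wheel for scikit-image 0.21.0,
    which (assumption) ships Windows wheels for CPython 3.8-3.11 only."""
    major, minor = sys.version_info[:2]
    if (3, 8) <= (major, minor) <= (3, 11):
        return f"Python {major}.{minor}: a prebuilt wheel should be available"
    return (f"Python {major}.{minor}: pip will try to build from source, "
            "which needs a C compiler (hence the Meson error above)")

print(wheel_likely_available())
```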

  • @Gainax507
    @Gainax507 2 months ago

    Requested to load AutoencoderKL
    Loading 1 new model
    loaded partially 64.0 63.99990463256836 0
    !!! Exception during processing !!! Numpy is not available
    Traceback (most recent call last):
    File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "F:\stable diffusion\comfyUI\nodes.py", line 1497, in save_images
    i = 255. * image.cpu().numpy()
    RuntimeError: Numpy is not available
    Prompt executed in 124.74 seconds
    (venv) PS F:\stable diffusion\comfyUI>

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Something has been updated and is causing this error. Please look at the comment below that discusses numpy.