Install Stable Diffusion for AMD GPUs on Windows | ComfyUI and webUI on AMD.

  • Published: 11 Sep 2024

Comments • 89

  • @leozinhojunior2879
    @leozinhojunior2879 3 days ago +1

    Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!

  • @xProto_Gaming
    @xProto_Gaming 5 hours ago

    so if i have more than 8gb vram, i type highvram?

  • @Tigermania
    @Tigermania 25 days ago +2

    What AMD GPU were you using for this? Comfy looks several times faster than A1111.

    • @LinuxMadeEZ
      @LinuxMadeEZ  25 days ago +3

      The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI and 2 minutes and 3 seconds for an image also with 20 steps in webUI. My GPU is an RX 550 4GB.

    • @CapaUno1322
      @CapaUno1322 22 days ago +1

      @@LinuxMadeEZ Wow, it's pretty cool that you can use just 4gb of vram, that's impressive....

  • @𡿺
    @𡿺 10 days ago +1

    🙂There was a lot of trial and error when I tried to install ComfyUI without any guidance. I downloaded everything and installed it through the terminal. And somehow it worked, even though it takes up a lot of storage

  • @gianlucalorusso8130
    @gianlucalorusso8130 2 days ago

    first of all, your guide is very clean and easy to follow. I followed all your steps and it all worked; Stable Diffusion opens up in the webpage, just like in the video. I tested it with a random prompt but it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid"
    I tried to download 2 different models but the error is the same.
    Any idea? Cuz I checked online but couldn't find anything.

    • @LinuxMadeEZ
      @LinuxMadeEZ  2 days ago

      Thanks. What is your CPU and GPU? Are you using ComfyUI or WebUI?

    • @gianlucalorusso8130
      @gianlucalorusso8130 2 days ago

      @@LinuxMadeEZ AMD Ryzen 9 3900X 12-core CPU
      NVIDIA GeForce RTX 3080
      and I am using WebUI

    • @LinuxMadeEZ
      @LinuxMadeEZ  2 days ago

      Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.

    • @gianlucalorusso8130
      @gianlucalorusso8130 2 days ago +1

      @@LinuxMadeEZ oh ok, well thx for the answer and the support

  • @TT-go3sl
    @TT-go3sl 8 days ago +1

    Could you further explain the "Unblock-File -Path" command? I tried researching but could not find anything.

    • @LinuxMadeEZ
      @LinuxMadeEZ  8 days ago

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

    • @TT-go3sl
      @TT-go3sl 8 days ago

      @@LinuxMadeEZ ohhh ok thanks!!

    • @rubensoliveira9681
      @rubensoliveira9681 3 days ago

      @@LinuxMadeEZ and what about comfyui itself? what do I put in the place of "name_of_script_to_unblock"?

  • @srivarshan780
    @srivarshan780 18 days ago +2

    after requirements.txt mine got stuck at gradio in the last paragraph thing

    • @LinuxMadeEZ
      @LinuxMadeEZ  18 days ago +1

      Some people have reported this bug. Try pressing enter in the terminal, and it should continue.

  • @ledroy69
    @ledroy69 2 days ago

    Do you have to always do that step at 12:56 when launching it?

    • @LinuxMadeEZ
      @LinuxMadeEZ  2 days ago

      Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.

  • @okachpmeow
    @okachpmeow 23 days ago +1

    tks u

  • @JoseEmanuelRojasRivas1
    @JoseEmanuelRojasRivas1 21 days ago +1

    Hello friend, thanks for the video, well explained.
    How can I add models?
    I mean other models, like the Civitai ones.
    thanks

    • @LinuxMadeEZ
      @LinuxMadeEZ  21 days ago

      If you are using ComfyUI, download the model and place it in the ComfyUI > models > checkpoints folder; if you are using webUI, download the model and place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder. After that, just reload the page in the browser and select the new model.
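The folder layout from this reply can be sketched as shell commands (the name "model.safetensors" is a hypothetical placeholder for whatever checkpoint you downloaded; on Windows the same moves can be done in Explorer):

```shell
# Hypothetical placeholder standing in for a checkpoint downloaded from Civitai
touch model.safetensors

# ComfyUI reads checkpoints from here
mkdir -p ComfyUI/models/checkpoints
cp model.safetensors ComfyUI/models/checkpoints/

# webUI (stable-diffusion-webui-directml) reads them from here instead
mkdir -p stable-diffusion-webui-directml/models/Stable-diffusion
mv model.safetensors stable-diffusion-webui-directml/models/Stable-diffusion/
```

After the file is in place, reloading the page makes the model appear in the checkpoint dropdown.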

  • @memoryhole7229
    @memoryhole7229 25 days ago +2

    Ubuntu vs Windows: Which was faster?

    • @LinuxMadeEZ
      @LinuxMadeEZ  25 days ago +2

      Windows DirectML does not manage memory very well at the moment. In Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters (seed, LoRA, model...) in Linux took about 145 seconds, while in Windows it took 201 seconds.

  • @ardysalinggih
    @ardysalinggih 15 days ago +2

    thx for the tutorial, it is very helpful
    can I ask?
    can ComfyUI run using ZLUDA in Windows?

    • @LinuxMadeEZ
      @LinuxMadeEZ  15 days ago +1

      Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.

    • @luxiland6117
      @luxiland6117 12 days ago

      ​@@LinuxMadeEZ It's working with ZLUDA. I have a 6700 XT, but it's a mess to install, and I can't use the sampling methods - cuDNN error. Only LCM... F

    • @richardtorres5105
      @richardtorres5105 1 day ago

      @@luxiland6117 Can you show me the method you used to make it work? I have a RX 6750 XT and I can't get it to work with Zluda.

  • @KayuroV
    @KayuroV 15 days ago +1

    If I have 16 ram should I put normalvram or something else?

    • @LinuxMadeEZ
      @LinuxMadeEZ  15 days ago

      I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
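The rule of thumb in this reply can be sketched as a small launcher snippet (the VRAM_GB value is an example; the --lowvram/--normalvram/--highvram flags are the ComfyUI arguments used throughout the thread):

```shell
# Pick a ComfyUI VRAM flag from the card's memory, per the reply above:
# below 6GB -> --lowvram, 6-12GB -> --normalvram, 16GB and up -> --highvram
VRAM_GB=16   # example value; set this to your card's VRAM

if [ "$VRAM_GB" -ge 16 ]; then
  FLAG=--highvram
elif [ "$VRAM_GB" -ge 6 ]; then
  FLAG=--normalvram
else
  FLAG=--lowvram
fi

# prints: python main.py --directml --highvram
echo "python main.py --directml $FLAG"
```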

  • @Paddi_o
    @Paddi_o 8 days ago

    Hi there, I installed as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM gets to about 20GB usage, but it still says fallback to CPU. I get about 3-4 it/s, and after finishing the process my VRAM is still full at 20GB. What's happening here? It still says fallback to CPU while executing.

    • @LinuxMadeEZ
      @LinuxMadeEZ  7 days ago

      Were you using webui or comfyui? Does the task manager show high GPU usage? What parameters did you use to start?

    • @Paddi_o
      @Paddi_o 7 days ago

      I used ComfyUI. While executing, the graphics card goes up in GPU usage, like to 80%. The VRAM stays at 20GB at all times after the first image; after the second it goes even higher. I still only have like 3-4 it/s. I used the standard parameters from the basic layout, with my own prompt. If I put up like 3 batches with 40 steps, it will even drop to 1.5-2.6 it/s.

    • @LinuxMadeEZ
      @LinuxMadeEZ  7 days ago +1

      Sorry. You switched the units and confused me. If you are getting 3-4 iterations per second, it seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards. If I am not mistaken, a high-end AMD card will have a third of the performance of a high-end Nvidia 4000 Series card in machine learning.

    • @Paddi_o
      @Paddi_o 7 days ago

      @@LinuxMadeEZ I remember the same performance on other people's 6900 XT. Is there really not that big of a difference at all between the 6xxx and 7xxx cards?
      Would it be a big difference switching to Linux?

    • @LinuxMadeEZ
      @LinuxMadeEZ  7 days ago +1

      About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.

  • @emauelmoschen9835
    @emauelmoschen9835 13 days ago +1

    Does it work with an 8GB asrock rx 570?

    • @LinuxMadeEZ
      @LinuxMadeEZ  13 days ago

      Yes, in the video I used an RX 550 4GB. On yours, it will work even better because of the 8GB of VRAM.

  • @Gainax507
    @Gainax507 6 days ago

    404 error on the Stable Diffusion model site

  • @MerhabaBenMert
    @MerhabaBenMert 24 days ago

    no zluda?

  • @JoeM771
    @JoeM771 3 days ago

    Thanks for the great video. When I try to generate a model, it is not using the GPU at all, just the CPU. I have a 6650XT. When running Comfy, I get this at the start:
    Using directml with device:
    Total VRAM 1024 MB, total RAM 16333 MB
    pytorch version: 2.3.1+cpu
    Set vram state to: LOW_VRAM
    Device: privateuseone
    How do I get it to use the GPU? Instead it has the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!

    • @LinuxMadeEZ
      @LinuxMadeEZ  3 days ago

      When you installed torch-directml, did you notice any errors? Try running the command "pip install torch-directml" and see if there are any errors.

    • @JoeM771
      @JoeM771 2 days ago

      @@LinuxMadeEZ Thanks for the reply. I did the directml install command and there were no errors. Then I noticed as I was watching your video that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping . Took about 193 seconds to make an image. One thing is I have 8gigs of VRAM but it only shows 1 gig (just like in your video). It errors out unless I use the lowvram parameter.

    • @JoeM771
      @JoeM771 2 days ago

      Oh, there was a question in there- can I get it to use the 8 gigs of VRAM? Do you think it might be using it even if it only shows 1 gig? Thanks!

    • @LinuxMadeEZ
      @LinuxMadeEZ  2 days ago

      Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.

  • @uxot
    @uxot 12 days ago

    should have really pasted all the cmds you used in the description...

  • @758185luan
    @758185luan 7 days ago

    I have a problem: Numpy is not available. Please help me

    • @LinuxMadeEZ
      @LinuxMadeEZ  6 days ago

      If you activate the venv and try to install with the command "pip install numpy," what happens?

    • @TanquetaOwO
      @TanquetaOwO 6 days ago

      @@LinuxMadeEZ It happens to me too, this is what i get from the terminal:
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
      Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
      [notice] A new release of pip available: 22.2.1 -> 24.2
      [notice] To update, run: python.exe -m pip install --upgrade pip
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
      A module that was compiled using NumPy 1.x cannot be run in
      NumPy 2.1.1 as it may crash. To support both 1.x and 2.x
      versions of NumPy, modules must be compiled with NumPy 2.0.
      Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
      If you are a user of the module, the easiest solution will be to
      downgrade to 'numpy

    • @TanquetaOwO
      @TanquetaOwO 6 days ago +2

      I just found the solution, you have to run
      pip uninstall numpy
      And then
      pip install numpy
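      For context, the truncated error above is PyTorch complaining that it was compiled against NumPy 1.x but is running under NumPy 2.x; the uninstall/reinstall clears the mismatch. A quick sketch to check which NumPy the venv actually has (run with the venv activated; assumes a working python3 on the PATH):

```shell
# Print the NumPy version installed in the (activated) venv. Torch wheels
# built against NumPy 1.x can crash under NumPy 2.x, which is why an
# uninstall/reinstall (or pinning an older NumPy) resolves the error.
NUMPY_VER=$(python3 -c "import numpy; print(numpy.__version__)" 2>/dev/null || echo "not installed")
echo "NumPy: $NUMPY_VER"
```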

    • @758185luan
      @758185luan 5 days ago +1

      thanks all you guys, it works :DDDD

    • @TanquetaOwO
      @TanquetaOwO 5 days ago

      @@758185luan yeah, it also works for me but I really don't like the time it needs to generate, I'm probably moving to linux

  • @DenFed-v3m
    @DenFed-v3m 14 days ago

    "RuntimeError: Couldn't clone Stable Diffusion.
    Error code: 128"
    What is it?

    • @LinuxMadeEZ
      @LinuxMadeEZ  14 days ago

      It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.

    • @DenFed-v3m
      @DenFed-v3m 13 days ago

      @@LinuxMadeEZ the problem is that git clone download is interrupted. And I can't download at all, it's constantly interrupted. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem. I can't run webui-user.bat because the user name on the machine from where I have copied git clone is different, and the path to python is different, too. Do you know by any chance in which repository's files I can correct the path to python? That's getting silly.

    • @DenFed-v3m
      @DenFed-v3m 13 days ago +1

      Nevermind, I have copied the venv folder from the second machine, too. Copying only git clone files made it work.

  • @luxiland6117
    @luxiland6117 10 days ago

    It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results. When I built a 512x512 image and hit Queue Prompt, no GPU working, only 100% of RAM, GPU no use, CPU at 5%. It took five minutes to generate the first time, six minutes the second, six again, and so on. Whether I put normalvram or lowvram, same result: ComfyUI doesn't touch my GPU. T_T

    • @LinuxMadeEZ
      @LinuxMadeEZ  10 days ago +1

      Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch is seeing your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?

    • @luxiland6117
      @luxiland6117 10 days ago

      @@LinuxMadeEZ
      venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
      return get_device_properties(device).name
      raise AssertionError("Torch not compiled with CUDA enabled")
      AssertionError: Torch not compiled with CUDA enabled

    • @LinuxMadeEZ
      @LinuxMadeEZ  10 days ago

      Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.

    • @luxiland6117
      @luxiland6117 10 days ago

      @@LinuxMadeEZ not working, and I made a clean install... Something doesn't work or gets bypassed in the install

    • @LinuxMadeEZ
      @LinuxMadeEZ  10 days ago

      When installing torch-directml, do you notice any errors regarding mismatches in the versions of the torch packages?

  • @manhchuuc4336
    @manhchuuc4336 12 days ago

    can you tell me why my SD doesn't run on my GPU?

    • @LinuxMadeEZ
      @LinuxMadeEZ  12 days ago

      I need information: what is your GPU? Is it giving an error? What does the error say?

  • @MauroSgamer
    @MauroSgamer 19 days ago

    hi, what do you type after py? 3:08

    • @LinuxMadeEZ
      @LinuxMadeEZ  19 days ago

      py -m venv venv
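      For context, the surrounding steps can be sketched like this (POSIX commands shown for illustration; the Windows equivalents from the video are in the comments):

```shell
# Create an isolated Python environment named "venv" in the project folder
python3 -m venv venv          # Windows (as in the video): py -m venv venv

# Activate it so python/pip resolve inside the venv
. venv/bin/activate           # Windows: .\venv\Scripts\activate

# The interpreter prefix should now point inside ./venv
python -c "import sys; print(sys.prefix)"
```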

    • @MauroSgamer
      @MauroSgamer 19 days ago +1

      @@LinuxMadeEZ Thanks! I'll continue with the installation later. If you can, make an updated version with Zluda, that would be great!

    • @LinuxMadeEZ
      @LinuxMadeEZ  18 days ago +1

      Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.

  • @Thealle09
    @Thealle09 19 days ago

    What does the "allow scripts" part do?

    • @LinuxMadeEZ
      @LinuxMadeEZ  19 days ago

      Allows you to run the ComfyUI/WebUI startup script under your user account.

    • @kevinmiole
      @kevinmiole 19 days ago +2

      @@LinuxMadeEZ and this is very dangerous because you're opening everything up to malware. Is there another way that avoids this?

    • @LinuxMadeEZ
      @LinuxMadeEZ  18 days ago +1

      You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.

    • @kevinmiole
      @kevinmiole 18 days ago

      @@LinuxMadeEZ thank you, I'm very bad at scripting but I know a little about security. But how do you use the code you suggested to allow only the scripts in this tutorial?

    • @LinuxMadeEZ
      @LinuxMadeEZ  18 days ago

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

  • @seanmorgan4119
    @seanmorgan4119 12 days ago

    Please help:
    Traceback (most recent call last):
    File "C:\Users\*redatcted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in
    import comfy.utils
    File "C:\Users\*redatcted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in
    import torch
    File "C:\Users\*redatcted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in
    raise err
    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redatcted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.

    • @LinuxMadeEZ
      @LinuxMadeEZ  12 days ago

      Try installing the latest VC_redist.x64, download link: learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170

    • @seanmorgan4119
      @seanmorgan4119 11 days ago

      @@LinuxMadeEZ downloaded it but it still doesn't work, getting the same error message

    • @LinuxMadeEZ
      @LinuxMadeEZ  11 days ago

      What is your GPU?

    • @seanmorgan4119
      @seanmorgan4119 10 days ago

      @@LinuxMadeEZ I just have an integrated AMD GPU. Would that be the reason?

    • @LinuxMadeEZ
      @LinuxMadeEZ  10 days ago

      First, install the "App Installer" from the Microsoft Store. Then, restart your computer and run the following command without the quotes: "winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64""

  • @user-ls1fu9lg8m
    @user-ls1fu9lg8m 3 days ago

    Stable Diffusion model site leads to error 404

    • @user-ls1fu9lg8m
      @user-ls1fu9lg8m 3 days ago

      Also, almost all upscalers lead to a problem - "Cannot set version_counter for inference tensor". Can anyone tell me how to fix this?

    • @LinuxMadeEZ
      @LinuxMadeEZ  3 days ago

      Yes, they took down the link. I've already updated it with new links. Are you using ComfyUI?

  • @Gainax507
    @Gainax507 6 days ago

    Requested to load AutoencoderKL
    Loading 1 new model
    loaded partially 64.0 63.99990463256836 0
    !!! Exception during processing !!! Numpy is not available
    Traceback (most recent call last):
    File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "F:\stable diffusion\comfyUI
    odes.py", line 1497, in save_images
    i = 255. * image.cpu().numpy()
    RuntimeError: Numpy is not available
    Prompt executed in 124.74 seconds
    (venv) PS F:\stable diffusion\comfyUI>

    • @LinuxMadeEZ
      @LinuxMadeEZ  6 days ago +1

      Something has been updated and is causing this error. Please look at the comment below that discusses numpy.