Install Stable Diffusion for AMD GPUs on Windows | ComfyUI and webUI on AMD.

  • Published: 17 Dec 2024

Comments • 261

  • @Luinux-Tech
    @Luinux-Tech  2 months ago +5

    Please, if you are asking for help, explain your error in detail, include your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.

  • @sir_uurcan
    @sir_uurcan 6 days ago

    I have a problem at 4:30: I typed the .\venv\Scrips\activate command, but it says "the specified module .venv was not loaded because no valid module file was found in any module directory"

    • @Luinux-Tech
      @Luinux-Tech  5 days ago

      You are using the wrong path, check if you are in the correct folder and if there are no misspelled words, for example in your comment it says "\Scrips" instead of "\Scripts".

  • @uxot
    @uxot 3 months ago +9

    You really should have pasted all the cmds you used in the description...

  • @greenesyt563
    @greenesyt563 2 months ago +2

    The first command worked; then at 3:40 I typed the command correctly and it says "'Set-ExecutionPolicy' is not recognized as an internal or external command, operable program or batch file.". I didn't follow the tutorial from the start because I already have Python 3.10.6 and Git installed, since I used to use A1111 and am now moving to Comfy. Could this be because I am on a different drive than where my Python is installed? Do I have to do everything on the C drive?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +1

      Are you running the command in PowerShell? This command is not for Windows CMD.

    • @greenesyt563
      @greenesyt563 2 months ago

      @@Luinux-Tech TYSM, it worked! One more question: can you tell me if I can use ZLUDA with it? I only have an RX 580 8GB. And if I can't, can I run Flux with DirectML? Thanks again😁

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, ZLUDA works, but from my experience, at least on Polaris GPUs, the performance difference is not very significant. I haven't tested Flux yet, but if you have ComfyUI running, just download the model and a workflow and test it.

  • @SaintsSwords
    @SaintsSwords 2 months ago +2

    Thanks a lot, I was looking for a way to install it with my AMD GPU. I have a 7900 XT and your tutorial is very detailed. I was on Fooocus before and it took about 4 times longer to generate lmao, thanks a lot.

  • @ItsKagiVids
    @ItsKagiVids 1 month ago +2

    Thank you so much for the tutorial. I've tried other vids, but this is the only one I was able to find that actually works with AMD.

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 2 months ago +1

    Followed each step. But: module not found error: No module named 'torch_directml' ☹ 07:30

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Try running the command "pip install torch-directml torchaudio" again and see if there are any errors in the terminal.

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 2 months ago

      @@Luinux-Tech Hmm, now it worked, but the problem is still the same: "Could not allocate tensor with 268435456 bytes. There is not enough GPU video memory available!" AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4GB), Dell Inspiron 15 3567, i7-7500 CPU @ 2.7 GHz, 8GB RAM

  • @CatalinMinulescu
    @CatalinMinulescu 18 days ago +1

    After knocking my head against a wall for 4 hours, I finally found your video!
    Thanks, you're a genius !
    Subscribed

  • @TryrantAnonymous
    @TryrantAnonymous 3 days ago +1

    Bro, at this point please marry me. I tried for days; those BS tuts never worked for me. This one is a lifesaver, THANK YOU!!!

  • @birdup8884
    @birdup8884 28 days ago +1

    Man, I wasted my whole day trying to get AI working on my computer. I got text-to-text and such working, like Ollama 3.2 visual, but damn, images were impossible. So much useless, old, and wrong information. Thank you, amazing; this really should have more views. Subscribed.

  • @ryzenrich
    @ryzenrich 2 months ago +3

    A webUI 1024x1024 image takes 1 min 50 sec on an RX 6600. Pretty good. BTW, I got an error by just using --use-directml. Instead, I used all the commands in your video but removed the --lowram. I also used a different model; not sure if that affects the processing time. Thank you for the clear instructions.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      I'm glad I could help, and you're right. Some GPUs only need "--use-directml" to work, others need all or some of the other commands. I'll clarify that in the description.

  • @fighteraircraft4576
    @fighteraircraft4576 19 days ago +2

    Hi, I have an i5-11400F and an RX 6600. I followed all your steps for ComfyUI (local, not webUI), and everything works well except that it's using my CPU. Task Manager shows CPU usage but no GPU usage. It's also taking very long to generate the image, for this reason I assume? How can I fix this? Please help me.

    • @Luinux-Tech
      @Luinux-Tech  19 days ago

      Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch has access to your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
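
The interactive check above can also be run as a small script. A minimal sketch, assuming torch (and, per this tutorial, torch-directml) is installed in the activated venv; the function name is just for illustration:

```python
def gpu_visible_name() -> str:
    """Return the GPU name torch reports, or an explanatory message.

    Mirrors the interactive check: if torch can see the GPU,
    torch.cuda.get_device_name(0) returns its name; otherwise the
    exception explains why (torch missing, CPU-only build, etc.).
    """
    try:
        import torch
        return torch.cuda.get_device_name(0)
    except Exception as exc:
        return f"GPU check failed: {exc}"

print(gpu_visible_name())
```

If this prints a failure message instead of your GPU's name, torch was installed without GPU support in that venv.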

  • @srivarshan780
    @srivarshan780 5 days ago +1

    How do I install Forge UI on an AMD GPU? There is no video on YT; please upload one soon.

  • @sorryyourenotawinner2506
    @sorryyourenotawinner2506 3 days ago

    Can't use PowerShell: if I run it as admin the path changes, and if I use it in the same folder, an error just happens...

    • @Luinux-Tech
      @Luinux-Tech  2 days ago

      Explain your error in detail, include your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.

  • @syedhamza7207
    @syedhamza7207 23 days ago +1

    Super dooper amazing tutorial worked like a charm 😊😊

  • @JewelryHustlersCorner
    @JewelryHustlersCorner 1 month ago +1

    Finally, a no-BS tutorial! Thanks

  • @TT-go3sl
    @TT-go3sl 3 months ago +1

    Could you further explain "Unblock-File -Path"? I tried researching but could not find anything.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

    • @TT-go3sl
      @TT-go3sl 3 months ago

      @@Luinux-Tech ohhh ok thanks!!

    • @rubensoliveira9681
      @rubensoliveira9681 3 months ago

      @@Luinux-Tech and what about comfyui itself? what do I put in the place of "name_of_script_to_unblock"?

    • @ilxosarui197
      @ilxosarui197 3 months ago

      @@rubensoliveira9681 Did you find out? I'm stuck there too.

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 2 months ago

    I am using Windows 10 and we don't have the "Open in terminal" option. 03:01

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Shift + Right Click --> open PowerShell here.

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 2 months ago +1

      @@Luinux-Tech Yes, just tried and got the option.. Thank u.. I am gonna subscribe u right now for ur help. 😊😊

  • @ledroy69
    @ledroy69 3 months ago

    Do you have to always do that step at 12:56 when launching it?

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.

    • @burmy1774
      @burmy1774 1 month ago

      @@Luinux-Tech Is there a way to create a .bat file that does all those steps to launch it?

    • @Luinux-Tech
      @Luinux-Tech  1 month ago

      @@burmy1774 Yes, and it's very simple. Create the script on your desktop and use the "call" command to activate the venv. Then, use the "call" command to run the "webui-user.bat" script. Something like that.
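
A minimal sketch of that launcher, written out by a few lines of Python (the C:\AI paths are assumptions; point them at your own venv and webUI folders):

```python
from pathlib import Path

# Contents of a minimal Windows launcher, following the "call" approach
# described above. The C:\AI install paths are placeholders.
LAUNCHER = r"""@echo off
call C:\AI\venv\Scripts\activate.bat
call C:\AI\stable-diffusion-webui-directml\webui-user.bat
"""

Path("launch-webui.bat").write_text(LAUNCHER)
print("wrote launch-webui.bat")
```

Double-clicking the resulting launch-webui.bat then activates the venv and starts webUI in one step.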

  • @deboa5376
    @deboa5376 20 days ago

    I did everything right, but when I click on Queue the PC freezes and "X Reconnecting" appears. I have more VRAM than was used in the video, and the image does not generate. I've tried everything. How do I solve this?

    • @Luinux-Tech
      @Luinux-Tech  19 days ago

      Please, when asking for help, inform about your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI, and what the dimensions of the image you tried to generate are.

  • @SteveJ.Johnson88
    @SteveJ.Johnson88 22 days ago

    Well, this is what I am getting: "The GPU will not respond to more commands, most likely because some other application submitted invalid commands.
    The calling application should re-create the device and continue."
    It happens when it reaches 30%, and it is not fast either! My PC RAM is 32GB; my GPU according to Task Manager is 4GB and shared GPU memory is 14GB, so why isn't it working? I don't get it!

    • @Luinux-Tech
      @Luinux-Tech  22 days ago

      What width and height are you using? What is the exact model of your GPU?

  • @ruumasa
    @ruumasa 3 days ago

    (venv) PS D:\as\Ai\ComfyUI> pip install -r requirements.txt
    ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
    ERROR: No matching distribution found for torch

    • @ruumasa
      @ruumasa 3 days ago

      I'm using CPU R5 5600 + GPU RX 6600

    • @Luinux-Tech
      @Luinux-Tech  2 days ago

      Are you using Python 3.10.6?

    • @ruumasa
      @ruumasa 1 day ago

      @@Luinux-Tech It's working now. I downgraded Python.

  • @normon6314
    @normon6314 2 months ago +1

    Hello. (i3 9100 and RX 580 8GB) I'm using WebUI. Image generation works fine, but when I checked my VRAM usage after generating, it stays at max usage for some reason. Is this normal? Because of this, I cannot upscale the image; it says that I don't have enough VRAM.
    RuntimeError: Could not allocate tensor with 1073741824 bytes. There is not enough GPU video memory available!
    Could you help please?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago +2

      Hi, sorry I didn't answer earlier. Normally, the VRAM ends up being full because the model is sent to VRAM to speed up the generation process. However, since your GPU has 8GB, this should only happen if you are using SDXL (models trained for 1024x1024). I suggest trying the Tiled Diffusion & VAE extension and the "--lowvram" parameter. This should be sufficient to eliminate errors caused by insufficient memory. Another option would be to generate the images first and then upscale them. It is also worth saying that DirectML (the API that allows you to use AMD cards for Machine Learning) has some memory problems and may be contributing to this abnormal use of VRAM.
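
As a side note, the byte counts in these allocation errors are easier to reason about after converting to MiB; a quick stdlib-only conversion:

```python
def to_mib(nbytes: int) -> float:
    """Convert a byte count, as printed in the allocation errors, to MiB."""
    return nbytes / (1024 ** 2)

# Allocation sizes quoted in errors elsewhere in this thread:
print(to_mib(1073741824))  # 1024.0 -> the failed 1 GiB upscale allocation above
print(to_mib(268435456))   # 256.0
print(to_mib(134217728))   # 128.0
```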

  • @Goddsindra
    @Goddsindra 3 days ago

    Hello, I have an issue here: it's using my CPU, not my GPU. Any fix for this? I'm using a 6650 XT and a 5 5600, and using webUI.

    • @Luinux-Tech
      @Luinux-Tech  2 days ago

      Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch is seeing your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?

    • @Goddsindra
      @Goddsindra 2 days ago

      @Luinux-Tech Yes, I've already fixed that, but there is another error: could not allocate tensor. The first one was around 400K bytes, and after I used --lowvram it changed to 100K bytes. Any fix for this?

  • @srivarshan780
    @srivarshan780 3 months ago +2

    After requirements.txt, mine got stuck at gradio in the last paragraph.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +1

      Some people have reported this bug. Try pressing enter in the terminal, and it should continue.

    • @srivarshan780
      @srivarshan780 5 days ago

      @@Luinux-Tech thanks

  • @KayuroV
    @KayuroV 3 months ago +2

    If I have 16GB of RAM, should I put normalvram or something else?

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +1

      I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
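
The rule of thumb above can be summed up in a tiny helper; vram_flag is hypothetical (not part of ComfyUI), purely to illustrate the thresholds:

```python
def vram_flag(vram_gb: float) -> str:
    """Pick a ComfyUI VRAM flag from the card's VRAM, per the guidance above."""
    if vram_gb < 6:
        return "--lowvram"
    if vram_gb <= 12:
        return "--normalvram"
    return "--highvram"

print(vram_flag(8))   # --normalvram
print(vram_flag(16))  # --highvram
```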

  • @AtajoSeries
    @AtajoSeries 2 months ago

    I have Win10 Enterprise LTSC; I think torch doesn't work on LTSC,
    because I did everything as you did. I have a Ryzen 5500 and an RX 6700 XT.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Can you describe the error? What does it say?

    • @eshistorai
      @eshistorai 2 months ago

      @@Luinux-Tech
      Traceback (most recent call last):
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\main.py", line 90, in
      import execution
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\execution.py", line 13, in
      import nodes
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\nodes.py", line 21, in
      import comfy.diffusers_load
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\diffusers_load.py", line 3, in
      import comfy.sd
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\sd.py", line 5, in
      from comfy import model_management
      File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\model_management.py", line 62, in
      import torch_directml
      File "C:\Users\Ooze\Documents\stable-diff\venv\Lib\site-packages\torch_directml\__init__.py", line 21, in
      import torch_directml_native
      ImportError: DLL load failed while importing torch_directml_native: The specified module could not be found.
      this is the error PowerShell shows.
      I installed every step without any problem, but when I try to run python main.py and the whole command, it gives that error.

  • @andysitus5485
    @andysitus5485 1 month ago

    I did everything according to the guide, but only the CPU works; the RX 6600 XT does not work 😭😭😭

  • @Tigermania
    @Tigermania 4 months ago +2

    What AMD GPU were you using for this? Comfy looks several times faster than A1111.

    • @Luinux-Tech
      @Luinux-Tech  4 months ago +5

      The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI and 2 minutes and 3 seconds for an image also with 20 steps in webUI. My GPU is an RX 550 4GB.

    • @CapaUno1322
      @CapaUno1322 3 months ago +3

      @@Luinux-Tech Wow, it's pretty cool that you can use just 4gb of vram, that's impressive....

  • @Bigg_Sipp
    @Bigg_Sipp 7 days ago

    3 things.
    1. Thank you for this tutorial. It worked the 1st time and does what I need it to do without fighting me.
    2. Is there a way to create a launcher of sorts? I don't know ANYTHING about Python or Git or coding. I know you could call it, but I've not found a tutorial to help.
    3. I've got a 7900 XTX and I'm still getting 1-2 it/s. When I had Automatic1111 I was cranking out 11-15 it/s, but I switched to Comfy after hearing it was superior. A 1024x1024 image generates in 24 secs, so I'm not complaining, but asking: is there a way to improve it/s?

    • @Luinux-Tech
      @Luinux-Tech  7 days ago

      Thank you. To create a launcher on Windows, try following the pinned comment in this video: ruclips.net/video/b9pqNQBSlpw/видео.html. Regarding improving performance, you can try using ComfyUI with the "efficiency-nodes-comfyui" extension. Another way to improve performance would be using ROCm directly on Linux.

  • @gianlucalorusso8130
    @gianlucalorusso8130 3 months ago +1

    First of all, your guide is very clean and easy to follow. I followed all your steps and it all worked; then Stable Diffusion opens up in the webpage, just like in the video. I tested it with a random prompt, but it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid"
    I tried to download 2 different models, but the error is the same.
    Any idea? Because I checked online but couldn't find anything.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      Thanks. What is your CPU and GPU? Are you using ComfyUI or WebUI?

    • @gianlucalorusso8130
      @gianlucalorusso8130 3 months ago

      @@Luinux-Tech AMD Ryzen 9 3900X 12-core CPU
      NVIDIA GeForce RTX 3080
      and I am using WebUI

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.

    • @gianlucalorusso8130
      @gianlucalorusso8130 3 months ago +1

      @@Luinux-Tech oh ok, well thx for the answer and the support

  • @Driftmonkey
    @Driftmonkey 2 months ago

    How do I get the ability to open Python from the folder I'm currently in? Is it by clicking that box that says add Python 3.10 to PATH? Because it's still not there for me. Maybe it's my Windows appearance settings that make Windows 10 look like Win 7?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, it should work by checking the "Add to Path" box. If not, open the environment variables menu, double click on "Path," and add the path of your Python installation (usually: C:\Users\[user]\AppData\Local\Programs\Python\Python[version]).
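
After editing Path, a quick way to confirm the change took effect (open a new terminal first, since PATH edits don't apply to already-open ones) is a small stdlib check:

```python
import shutil

def python_on_path() -> bool:
    """True if any common Python launcher is reachable via PATH."""
    return any(shutil.which(name) for name in ("python", "python3", "py"))

print(python_on_path())
```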

  • @Galova
    @Galova 2 months ago

    Oh, how do I enable Olive support as well? I've read an article on the AMD blog saying it helps optimize an AI model to run faster, sometimes a lot faster, particularly on AMD GPUs. I've seen there is a branch of Stable Diffusion on GitHub with Olive support, but I failed to make it work because of errors during installation. I've managed to install ComfyUI, but webUI returns an OSError.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Try deleting "C:\users\\.cache\huggingface" to fix OSError.

    • @Galova
      @Galova 2 months ago

      @@Luinux-Tech I've managed to get ComfyUI to run using your method and links. I tried to install webUI, and it returns an error that it can't find the repository on Hugging Face. It is returned by a Python script run from the bat file... I tried to reinstall, plus some other tutorials, where I get a 'no CUDA drivers detected' error even though I used the -directml command line etc. What can it be?
      Since ComfyUI seems to work fine, I tried the Photoshop plugin for ComfyUI. I installed everything following the instructions, but it doesn't work, reporting an error that the VAE was not found, while it works fine in the browser. So sad that NOTHING works out of the box... goddamn quest.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      About webUI, I also noticed these errors coming from Hugging Face. Apparently, some files are no longer available on Hugging Face, and the webUI guys still haven't fixed the broken links. As for ComfyUI and Photoshop, unfortunately, I haven't tested them yet, so I can't help at the moment.

  • @Xhinism
    @Xhinism 20 days ago +1

    Where is the download link for the Stable Diffusion checkpoint? There are so many links in the description and I'm confused 🥲

    • @Luinux-Tech
      @Luinux-Tech  20 days ago

      "Stable Diffusion model" or "Stable Diffusion alternative model"

  • @Minecrafter-cv6rb
    @Minecrafter-cv6rb 2 months ago

    Can I manually download Comfy directly from GitHub? For some reason the git clone command keeps failing.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Yes, I believe it will work normally.

  • @Matheus-mr4tl
    @Matheus-mr4tl 2 months ago

    In ComfyUI I get the error "there's not enough GPU video memory available" even though I followed every step for low VRAM. My GPU is 4GB.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Are you using SDXL or the normal models? What size image are you trying to generate (512x512, 768x768...)?

    • @Matheus-mr4tl
      @Matheus-mr4tl 2 months ago

      @@Luinux-Tech good question... what should I use?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Stable diffusion 1.5 models and images with 512x512 pixels. If you want larger images, just upscale them later.

  • @ChinmayWaingankar-q3t
    @ChinmayWaingankar-q3t 3 days ago

    I have a Lenovo IdeaPad Gaming 3 laptop with an AMD Ryzen 4600H. Will ComfyUI work on my system?

    • @Luinux-Tech
      @Luinux-Tech  3 days ago

      It should work normally. Unfortunately, I can't guarantee it because I haven't tested it on APUs.

  • @ardysalinggih
    @ardysalinggih 3 months ago +2

    Thx for the tutorial, it is very helpful.
    Can I ask?
    Can ComfyUI run using ZLUDA on Windows?

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +2

      Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.

    • @luxiland6117
      @luxiland6117 3 months ago +1

      @@Luinux-Tech It's working with ZLUDA. I have a 6700 XT, but it's a mess to install, and I can't use the sampling method (cuDNN error). Only LCM... F

    • @richardtorres5105
      @richardtorres5105 3 months ago

      @@luxiland6117 Can you show me the method you used to make it work? I have a RX 6750 XT and I can't get it to work with Zluda.

  • @yabeginilah6946
    @yabeginilah6946 22 days ago

    Can you help me bro? When I tried to load the model/checkpoint, it got an error.

    • @Luinux-Tech
      @Luinux-Tech  22 days ago

      Please, if you are asking for help, explain your error in detail, include your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.

    • @yabeginilah6946
      @yabeginilah6946 21 days ago

      @@Luinux-Tech OK bro, I have a new problem: when I type .\venv\Scripts\activate in Windows PowerShell, it gets an error.
      I tried your tutorial again, because I reinstalled my laptop yesterday, and that's what happened.
      Sorry, my English is so bad; I hope you understand what I mean.

    • @Luinux-Tech
      @Luinux-Tech  21 days ago

      @@yabeginilah6946 I can understand what you write, but I need you to specify what the error says. You can copy the error lines and paste them here.

    • @yabeginilah6946
      @yabeginilah6946 21 days ago +1

      @@Luinux-Tech I got it done bro, sorry, my mistake lol.

  • @alexiskonto1166
    @alexiskonto1166 2 months ago

    Thank you for the video. How can I dedicate more VRAM to the "server"? I have an RTX 6750 12 GB, but it only reserves 1GB ("--reserve-vram 4096" or "--reserve-vram 2.0" or other numbers are not working).

    • @alexiskonto1166
      @alexiskonto1166 2 months ago

      ComfyUI

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.

  • @abhisheksinghnepal910
    @abhisheksinghnepal910 2 months ago

    Following the WebUI steps, here's the error I got: after image generation, the image disappeared and I got this msg in the terminal:
    RuntimeError: Could not allocate tensor with 134217728 bytes. There is not enough GPU video memory available!

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      What is your GPU? And what image size are you trying to generate?

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 2 months ago

      @@Luinux-Tech AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4GB), Dell Inspiron 15 3567, i7-7500 CPU @ 2.7 GHz, 8GB RAM

    • @abhisheksinghnepal910
      @abhisheksinghnepal910 2 months ago

      @@Luinux-Tech I generated a simple image of a "cat" to test it.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      This method will only use your AMD GPU, and since you only have 2GB of VRAM, it will be very difficult to generate an image. You can try to reduce the resolution of the image you want to generate or use webUI with the "Tiled Diffusion & VAE" extension and the "--lowvram" parameter.

  • @xverny0
    @xverny0 2 months ago

    Hi, when I try to generate an image it always gives me the problem that says "Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!" What would be the solution to this problem?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      What is your GPU? This is a lack of VRAM problem, try using the "--lowvram" parameter or decreasing the image resolution.

    • @xverny0
      @xverny0 2 months ago

      @@Luinux-Tech My GPU is a 6700 XT.
      When I use --lowvram, --normalvram, or --highvram, I still get the same error.

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      @@xverny0 What is the size of the image you are trying to generate?

    • @xverny0
      @xverny0 2 months ago

      @@Luinux-Tech size of image is 512x512

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      @@xverny0 Your GPU should be able to generate images much larger than 512x512. Are you using webUI or ComfyUI? Please provide the full command you are using to start it. Also, try generating an image and check in the task manager if the VRAM is actually full.

  • @fotomez7345
    @fotomez7345 3 months ago

    Hi, I have a problem: when I run "py -m venv venv" it says Error: Command '['C:\\AI\\venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. So I'm blocked at the starting point. Could you help me? Thank you so much.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      First, run the command: "py -m pip install --upgrade pip" and then "py -m pip install virtualenv". Then, try again to create the venv. If it doesn't work, I recommend reinstalling Python.
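
For reference, "py -m venv venv" drives the standard-library venv module, so venv creation can also be smoke-tested programmatically; a sketch (with_pip=False keeps it fast, whereas the failing step above was ensurepip):

```python
import sys
import venv
from pathlib import Path

def make_venv(target: str) -> Path:
    """Create a virtual environment and return the folder holding its
    activate script (Scripts on Windows, bin elsewhere)."""
    venv.create(target, with_pip=False)
    return Path(target) / ("Scripts" if sys.platform == "win32" else "bin")
```

If even this fails, the Python installation itself is broken, which matches the advice to reinstall Python.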

  • @nachoferreyra8677
    @nachoferreyra8677 2 months ago

    I get stuck on pip install -r .\requirements.txt ... I put in the correct directory... help

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      Be more specific. What happens? What does the error say?

  • @rubenhuisman4915
    @rubenhuisman4915 17 days ago

    AMD Radeon RX 7600 XT, CPU AMD Ryzen 7 7800X3D 8-Core. I ran your entire tutorial, and it was very clear, but at the end I had the following error: "AttributeError: module 'torch' has no attribute 'Tensor'". I hope you can help me fix it.

    • @Luinux-Tech
      @Luinux-Tech  16 days ago +1

      Are you using Python 3.10.6? There seems to be an error with your installation. I recommend deleting your VENV, reinstalling Python, and trying again.

    • @rubenhuisman4915
      @rubenhuisman4915 16 days ago

      @ Thank you for your reaction, I will definitely try that.

    • @rubenhuisman4915
      @rubenhuisman4915 16 days ago +1

      @@Luinux-Tech thank you so much, it works now, really appreciate your help!

  • @Paddi_o
    @Paddi_o 3 months ago

    Hi there, I installed it as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM goes up to about 20GB of usage, but it still says fallback to CPU. I get about 3-4 it/s, and after finishing the process my VRAM is still full at 20GB. What's happening here? It still says fallback to CPU while executing.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      Were you using webui or comfyui? Does the task manager show high GPU usage? What parameters did you use to start?

    • @Paddi_o
      @Paddi_o 3 months ago

      I used ComfyUI. While executing, the graphics card goes up to about 80% GPU usage. The VRAM stays at 20GB at all times after the first image; after the second it goes even higher. I still only get about 3-4 it/s. I used the standard parameters from the basic layout, with my own prompt. If I queue up 3 batches with 40 steps, it will even drop to 1.5-2.6 it/s.

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +2

      Sorry. You switched the units and confused me. If you are getting 3-4 iterations per second, it seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards. If I am not mistaken, a high-end AMD card will have a third of the performance of a high-end Nvidia 4000 Series card in machine learning.

    • @Paddi_o
      @Paddi_o 3 months ago

      @@Luinux-Tech I remember the same performance on other people's 6900 XT. Is there really not that big of a difference between the 6xxx and 7xxx cards?
      Would it make a big difference switching to Linux?

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +1

      About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.

  • @nazarmorhun814
    @nazarmorhun814 17 days ago

    Hey, thank you for the tutorial. My PC crashes when it comes to generating image (Radeon 6800xt, Ryzen 7950x). torch.cuda.get_device_name(0) returns 'Torch not compiled with CUDA enabled', and 'pip install torch-directml torchaudio' returns Requirement already satisfied. Any idea what might be the problem?

    • @Luinux-Tech
      @Luinux-Tech  16 days ago

      Have you tried launching using the parameter: "--skip-torch-cuda-test"?

  • @Yamaguchi-Kawaki
    @Yamaguchi-Kawaki 2 months ago

    What's the difference between "pip install torch-directml" and "pip install torch-directml torchaudio"?

    • @Luinux-Tech
      @Luinux-Tech  2 months ago

      I specify "torchaudio" to prevent version mismatches.

    • @Yamaguchi-Kawaki
      @Yamaguchi-Kawaki 2 months ago +1

      @@Luinux-Tech Nah, it's just that on GitHub it's without "torchaudio", so I was a little confused. But hey, I experimented with both, with and without torchaudio, and both ways seem to work.

  • @MauroSgamer
    @MauroSgamer 3 months ago

    Hi, what do you type after py? 3:08

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      py -m venv venv

    • @MauroSgamer
      @MauroSgamer 3 months ago +1

      @@Luinux-Tech Thanks! I'll resume the installation later. If you can, make an updated version with ZLUDA, that would be the best!

    • @Luinux-Tech
      @Luinux-Tech  3 months ago +1

      Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.

  • @JoseEmanuelRojasRivas1
    @JoseEmanuelRojasRivas1 3 months ago +1

    Hello friend, thanks for the video, well explained.
    How can I add models?
    I mean other models, like the civitai ones.
    Thanks

    • @Luinux-Tech
      @Luinux-Tech  3 months ago

      If you are using ComfyUI, download the model and place it in the ComfyUI > models > checkpoints folder; if you are using webUI, download the model and place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder. After that, just reload the page in the browser and select the new model.
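
The move itself is just a file copy into the right folder; a sketch with a hypothetical helper (install_checkpoint is not part of either UI, and the default folder names follow the ComfyUI layout described above):

```python
import shutil
from pathlib import Path

def install_checkpoint(model_path: str, comfy_root: str = "ComfyUI") -> Path:
    """Move a downloaded model into ComfyUI's checkpoints folder and return
    the final path. For webUI, the destination would instead be
    stable-diffusion-webui-directml/models/Stable-diffusion."""
    dest = Path(comfy_root) / "models" / "checkpoints"
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / Path(model_path).name
    shutil.move(model_path, str(target))
    return target
```

After moving the file, reload the UI page and the new checkpoint shows up in the model selector.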

  • @NeverForever40
    @NeverForever40 2 months ago

    Hello, Linux Made EZ! I ran into this problem when trying to generate images in WebUI and I don't know how to solve it:
    NotImplementedError:
    Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
    Do you know anything about this?
    amd rx 5700, intel core i7-3770k ivy bridge, windows 11 23h2
    cmd log:
    PS C:\Users\King\Documents\Stable-diff> .\venv\Scripts\activate
    (venv) PS C:\Users\King\Documents\Stable-diff> cd .\stable-diffusion-webui-directml\
    (venv) PS C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml> .\webui.bat
    venv "C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
    ROCm Toolkit 6.1 was found.
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    Version: v1.10.1-amd-9-g46397d07
    Commit hash: 46397d078cff4547eb4bd87adc5c56283e2a8d20
    Using ZLUDA in C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml.zluda
    Failed to load ZLUDA: list index out of range
    Using CPU-only torch
    ...
    Failed to create model quickly; will retry using slow method.
    Applying attention optimization: InvokeAI... done.
    Model loaded in 67.0s (load weights from disk: 1.1s, create model: 2.1s, apply weights to model: 21.0s, apply half(): 2.6s, calculate empty prompt: 40.1s).
    after I pressed GENERATE
    cmd log:
    RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
    Time taken: 1 min. 16.6 sec.
    For anyone who wants to help, my discord is iwish6768, discord id is 292757299309838337
    when you add or write to me, please indicate the reason: I want to help with SD for AMD

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад

      Are you trying to use ZLUDA?
      What is happening is that ZLUDA is not working, and the webUI is trying to use CPU fallback. However, your CPU is very old and does not work with PyTorch, so it gives an error.

    • @NeverForever40
      @NeverForever40 2 месяца назад

      @@Luinux-Tech no, I'm not trying to use ZLUDA, I have it on my PC, I use it for Blender and I understand roughly what it is, but I repeated the installation exactly according to your video tutorial and what happens is that no images are generated with any model with the following error described above.
      by the way, ComfyUI works fine for me, but stable-diffusion-webui-directml does not, I'll try to install SD in another folder from scratch and if this error appears again, I'll edit this message and add to it.

    • @NeverForever40
      @NeverForever40 2 месяца назад +1

      ... I don't know what exactly fixed it (deleting all versions of Visual Studio, or installing/updating the AMD HIP SDK and installing the beta driver in its settings), but at half past six in the morning, on the 20th try, I finally got a working SD using your video. So thank you very much for the tutorial, and remember that AMD is a cheap option only for games

  • @DenFed-v3m
    @DenFed-v3m 3 месяца назад

    "RuntimeError: Couldn't clone Stable Diffusion.
    Error code: 128"
    What is it?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.

    • @DenFed-v3m
      @DenFed-v3m 3 месяца назад

      @@Luinux-Tech the problem is that git clone download is interrupted. And I can't download at all, it's constantly interrupted. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem. I can't run webui-user.bat because the user name on the machine from where I have copied git clone is different, and the path to python is different, too. Do you know by any chance in which repository's files I can correct the path to python? That's getting silly.

    • @DenFed-v3m
      @DenFed-v3m 3 месяца назад +1

      Nevermind, I have copied the venv folder from the second machine, too. Copying only git clone files made it work.

  • @rexfullbuster8325
    @rexfullbuster8325 21 день назад

    I can't do it on W10, PowerShell doesn't work like it does on W11 :(

    • @Luinux-Tech
      @Luinux-Tech  20 дней назад +1

      What do you mean? The Windows 11 terminal is just a "skin" for PowerShell and Windows CMD; all commands work normally. To open PowerShell in Windows 10, open File Explorer and press Shift + Right Click --> Open PowerShell window here.

    • @rexfullbuster8325
      @rexfullbuster8325 20 дней назад

      @@Luinux-Tech Tsm i'll try again later, I didn't know you could open the terminal like that

    • @rexfullbuster8325
      @rexfullbuster8325 20 дней назад +1

      @@Luinux-Tech it works!!! TSM, but I don't see the panel "queue prompt"

    • @rexfullbuster8325
      @rexfullbuster8325 20 дней назад

      @@Luinux-Tech i'll try with the webui

    • @Luinux-Tech
      @Luinux-Tech  20 дней назад +2

      @@rexfullbuster8325 ComfyUI recently updated the interface, and they decided to remove the generate button. To generate an image, you need to press "CTRL + Enter".

  • @seanmorgan4119
    @seanmorgan4119 3 месяца назад

    Please help:
    Traceback (most recent call last):
    File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in
    import comfy.utils
    File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in
    import torch
    File "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in
    raise err
    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Try installing the latest VC_redist.x64, download link: learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170

    • @seanmorgan4119
      @seanmorgan4119 3 месяца назад

      @@Luinux-Tech downloaded it, but I'm still getting the same error message

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      What is your GPU?

    • @seanmorgan4119
      @seanmorgan4119 3 месяца назад

      @@Luinux-Tech I just have an integrated AMD GPU. Would that be the reason?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      First, install the "App Installer" from the Microsoft Store. Then, restart your computer and run the following command without the quotes: "winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64""

  • @wurfelgott1520
    @wurfelgott1520 Месяц назад +1

    Very nice it worked thank you so much!

  • @ConanRider
    @ConanRider 2 месяца назад

    I keep getting "VAE object has no attribute vae_dtype"

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад

      More information, Comfy or WebUI? When does the error occur? What is your hardware?

  • @JoeM771
    @JoeM771 3 месяца назад

    Thanks for the great video. When I try to generate an image, it is not using the GPU at all, just the CPU. I have a 6650 XT. When running Comfy, I get this at the start:
    Using directml with device:
    Total VRAM 1024 MB, total RAM 16333 MB
    pytorch version: 2.3.1+cpu
    Set vram state to: LOW_VRAM
    Device: privateuseone
    How do I get it to use the GPU? Instead, it shows the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      When you installed torch-directml, did you notice any errors? Try running the command "pip install torch-directml" and see if there are any errors.

    • @JoeM771
      @JoeM771 3 месяца назад

      @@Luinux-Tech Thanks for the reply. I did the directml install command and there were no errors. Then I noticed as I was watching your video that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping . Took about 193 seconds to make an image. One thing is I have 8gigs of VRAM but it only shows 1 gig (just like in your video). It errors out unless I use the lowvram parameter.

    • @JoeM771
      @JoeM771 3 месяца назад

      Oh, there was a question in there- can I get it to use the 8 gigs of VRAM? Do you think it might be using it even if it only shows 1 gig? Thanks!

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.

  • @xProto_Gaming
    @xProto_Gaming 3 месяца назад

    So if I have more than 8 GB of VRAM, do I use "--highvram"?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Yes, but remember that DirectML's VRAM management is very limited, and you may end up with errors due to a lack of VRAM.

  • @emauelmoschen9835
    @emauelmoschen9835 3 месяца назад +1

    Does it work with an 8GB asrock rx 570?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Yes, in the video I used an RX 550 4GB. On yours, it will work even better because of the 8GB of VRAM.

  • @oma3467
    @oma3467 Месяц назад

    When I create a picture my computer restarts, does anyone know why?

    • @Luinux-Tech
      @Luinux-Tech  Месяц назад

      This usually happens when the VRAM is completely full and causes the system itself to crash, what is your GPU? What is the resolution of the image you are trying to generate?

    • @oma3467
      @oma3467 Месяц назад

      @@Luinux-Tech I was trying to make the vanilla bottle picture. I have an AMD 7950 XTX.
      I updated to the newest driver version and the temp is below 66 °C

  • @Shegosbathwater
    @Shegosbathwater 29 дней назад

    I keep getting this error during image generation: 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications.

  • @758185luan
    @758185luan 3 месяца назад

    I have a problem: "Numpy is not available". Please help me

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      If you activate the venv and try to install with the command "pip install numpy," what happens?

    • @TanquetaOwO
      @TanquetaOwO 3 месяца назад

      @@Luinux-Tech It happens to me too, this is what i get from the terminal:
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
      Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
      [notice] A new release of pip available: 22.2.1 -> 24.2
      [notice] To update, run: python.exe -m pip install --upgrade pip
      (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
      A module that was compiled using NumPy 1.x cannot be run in
      NumPy 2.1.1 as it may crash. To support both 1.x and 2.x
      versions of NumPy, modules must be compiled with NumPy 2.0.
      Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
      If you are a user of the module, the easiest solution will be to
      downgrade to 'numpy

    • @TanquetaOwO
      @TanquetaOwO 3 месяца назад +2

      I just found the solution, you have to run
      pip uninstall numpy
      And then
      pip install numpy
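      For context, the error above comes from a module compiled against NumPy 1.x running under a NumPy 2.x install. A minimal sketch of that version check (the function and its name are illustrative, not actual ComfyUI code):

```python
def numpy_2_mismatch(installed_version: str, compiled_major: int = 1) -> bool:
    """True when a module built for NumPy 1.x meets a NumPy 2.x install (illustrative only)."""
    installed_major = int(installed_version.split(".")[0])
    return installed_major >= 2 and compiled_major < 2
```

      Reinstalling NumPy (or pinning "numpy<2") brings the installed major version back in line with what the compiled modules expect.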

    • @758185luan
      @758185luan 3 месяца назад +1

      thank all you guys, It work :DDDD

    • @TanquetaOwO
      @TanquetaOwO 3 месяца назад

      @@758185luan yeah, it also works for me but I really don't like the time it needs to generate, I'm probably moving to linux

  • @pepedontlie
    @pepedontlie 3 месяца назад

    Can I create a Full HD quality image with an RX 6600 8GB?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      These models are trained to generate images in specific resolutions, such as 512x512, 1024x1024... To get larger images, you first need to generate the image in a size supported by the model you are using, then upscale it to the resolution you want. That said, yes, your GPU is capable of producing images in FullHD using upscaling.

  • @arteon2017
    @arteon2017 3 месяца назад +1

    I get the error "there is not enough GPU video memory available", but it doesn't even use my GPU (I'm not using a laptop)
    gpu : rx 6600

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      ComfyUI or WebUI? What resolution did you use? Does the terminal say you are using "CPU only" mode?

    • @arteon2017
      @arteon2017 3 месяца назад

      @@Luinux-Tech webui 512x512 idk

  • @memoryhole7229
    @memoryhole7229 4 месяца назад +2

    Ubuntu vs Windows: Which was faster?

    • @Luinux-Tech
      @Luinux-Tech  4 месяца назад +2

      Windows DirectML does not manage memory very well at the moment. In Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters(seed, lora, model...) in Linux took about 145 seconds, while in Windows it took 201 seconds.

  • @sorryyourenotawinner2506
    @sorryyourenotawinner2506 3 дня назад

    Can't install ComfyUI, it's impossible, it can't find the path... it's a PAIN in the ass..

  • @luxiland6117
    @luxiland6117 3 месяца назад

    It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results. When I generate a 512x512 image with Queue Prompt, RAM sits at 100% but the GPU is not used and the CPU is at 5%; it took five minutes to generate the first time, six minutes the second, six again, and so on. With normalvram or lowvram it's the same result: ComfyUI doesn't touch my GPU. T_T

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад +1

      Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check whether torch sees your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
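      The same check can be wrapped in a small function so it degrades gracefully when torch is missing or was installed CPU-only (a sketch; the helper name and the fallback strings are my own):

```python
def describe_torch_device() -> str:
    """Report what PyTorch can see, without crashing the console."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this venv"
    try:
        # A CPU-only build raises AssertionError:
        # "Torch not compiled with CUDA enabled"
        return torch.cuda.get_device_name(0)
    except (AssertionError, RuntimeError) as exc:
        return f"no GPU visible to torch: {exc}"
```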

    • @luxiland6117
      @luxiland6117 3 месяца назад

      @@Luinux-Tech
      venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
      return get_device_properties(device).name
      raise AssertionError("Torch not compiled with CUDA enabled")
      AssertionError: Torch not compiled with CUDA enabled

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.

    • @luxiland6117
      @luxiland6117 3 месяца назад

      @@Luinux-Tech Not working, and I made a clean install... something doesn't work or gets bypassed during the install

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      When installing torch-directml, do you notice any errors regarding mismatches in the versions of the torch packages?

  • @louisbeauger
    @louisbeauger Месяц назад

    Does Comfyui work well with a 7900 xtx?

    • @Luinux-Tech
      @Luinux-Tech  Месяц назад

      Yes, and with this GPU, you can easily use SDXL (models with higher resolution).

  • @leozinhojunior2879
    @leozinhojunior2879 3 месяца назад +1

    Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!

  • @NoemieValois-u4z
    @NoemieValois-u4z 2 месяца назад

    Are Stable Diffusion and Fooocus the same thing?

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад

      Fooocus is more automated, requiring less user input.

  • @Thealle09
    @Thealle09 3 месяца назад

    What does the "allow scripts" part do?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      It allows you to run the ComfyUI/WebUI startup script as your user.

    • @kevinmiole
      @kevinmiole 3 месяца назад +2

      @@Luinux-Tech and this is very dangerous because you're opening everything up to malware. Is there another way to not do this?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад +1

      You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.

    • @kevinmiole
      @kevinmiole 3 месяца назад

      @@Luinux-Tech Thank you. I'm very bad at scripting, but I know a little about security. How do you use the command you suggested to allow only the scripts in this tutorial?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.

  • @manhchuuc4336
    @manhchuuc4336 3 месяца назад

    Can you help me figure out why my SD doesn't run on my GPU?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      I need information: what is your GPU? Is it giving an error? What does the error say?

  • @gabrielpires3365
    @gabrielpires3365 3 месяца назад

    Is there any way to use LoRA with this?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Yes, it works normally. Just put the LoRA files in the correct folder (ComfyUI > models > loras) and use.

  • @vekkaro
    @vekkaro 2 месяца назад

    First try, it works awesome using the GPU; second try, without even closing the terminal, I get this: py:688: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. So now ComfyUI is using my CPU 😮‍💨

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад

      In the browser, press the "load default" option and try again.

  • @andre.laguerre
    @andre.laguerre День назад

    Doesn't work for me

  • @dudububu-ll8zq
    @dudububu-ll8zq 2 месяца назад

    Hey man can we get flux guide too?

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад

      Sure, give me a few days.

  • @Gainax507
    @Gainax507 3 месяца назад

    404 error on the Stable Diffusion model page

  • @MerhabaBenMert
    @MerhabaBenMert 4 месяца назад

    no zluda?

  • @Ромакотор-ю8в
    @Ромакотор-ю8в 3 месяца назад

    Stable Diffusion model site leads to error 404

    • @Ромакотор-ю8в
      @Ромакотор-ю8в 3 месяца назад

      Also, almost all upscalers lead to the problem "Cannot set version_counter for inference tensor". Can anyone tell me how to fix this?

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Yes, they took down the link. I've already updated it with new links. Are you using ComfyUI?

    • @Ромакотор-ю8в
      @Ромакотор-ю8в 3 месяца назад

      @@Luinux-Tech Nope, the stable-diffusion-webui-amdgpu version.

  • @ginisksam
    @ginisksam 3 месяца назад +1

    Thanks for the serenading guide - works with my old RX 6700 XT. Now looking for other good models to try.

  • @Pro-arm
    @Pro-arm 3 месяца назад

    Nice, very good, it works!

  • @SwilightTparkle
    @SwilightTparkle 2 месяца назад

    ComfyUI
    Total VRAM 1024 MB
    rx 580 8 GB

    • @Luinux-Tech
      @Luinux-Tech  2 месяца назад +1

      This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.

  • @ImanWahriz
    @ImanWahriz 3 месяца назад

    Error, not usable

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад

      Try to describe the error better, indicating at which step the error occurs and whether you used WebUI or ComfyUI. Any additional information that could be helpful would be appreciated.

  • @zerpoll2k
    @zerpoll2k Месяц назад +1

    thanks idol

  • @okachpmeow
    @okachpmeow 3 месяца назад +1

    tks u

  • @RefRed_King
    @RefRed_King 2 месяца назад

    thx bruh im subscribe

  • @RefRed_King
    @RefRed_King 2 месяца назад

    OMG PLS HELP ME LINUX MADE EZ I ACCIDENTLY REMOVED MY IMAGE OUTPUT WHAT DO I DO 😭😭😭😭

  • @FunnyFilmShorts
    @FunnyFilmShorts 3 дня назад

    uh, so what did I get out of this? Absolutely nothing

  • @RefRed_King
    @RefRed_King 2 месяца назад +1

    OK NEVERMIND I CLICKED LOAD DEFAULT THANK

  • @andrezozo666
    @andrezozo666 Месяц назад

    Can anyone help me I have this error running .\webui-user.bat
    stderr: error: subprocess-exited-with-error
    Preparing metadata (pyproject.toml) did not run successfully.
    exit code: 1
    [21 lines of output]
    + meson setup C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302 C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-python-native-file.ini
    The Meson build system
    Version: 1.6.0
    Source dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302
    Build dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk
    Build type: native build
    Project name: scikit-image
    Project version: 0.21.0
    WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
    ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
    The following exception(s) were encountered:
    Running `icl ""` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `cc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `gcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `clang --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `clang-cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    Running `pgcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
    A full log can be found at C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-logs\meson-log.txt
    [end of output]
    note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    Encountered error while generating package metadata.
    See above for output.
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.

    • @Luinux-Tech
      @Luinux-Tech  Месяц назад

      Apparently, Pip is trying to compile a package because it couldn't install the binary. Are you using Python 3.10? Activate the virtual environment and try running the command "pip install scikit-image" to see what happens.
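      A minimal sketch of that interpreter check (the helper is my own, not part of the webUI; the point is that these forks pin packages with prebuilt wheels for Python 3.10, so other versions can force pip to compile from source):

```python
import sys

def is_supported_python(major: int, minor: int) -> bool:
    """webUI forks expect Python 3.10; other versions often lack prebuilt wheels."""
    return (major, minor) == (3, 10)

# Example: check the interpreter currently running this script
current_ok = is_supported_python(sys.version_info.major, sys.version_info.minor)
```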

  • @Gainax507
    @Gainax507 3 месяца назад

    Requested to load AutoencoderKL
    Loading 1 new model
    loaded partially 64.0 63.99990463256836 0
    !!! Exception during processing !!! Numpy is not available
    Traceback (most recent call last):
    File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
    File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "F:\stable diffusion\comfyUI\nodes.py", line 1497, in save_images
    i = 255. * image.cpu().numpy()
    RuntimeError: Numpy is not available
    Prompt executed in 124.74 seconds
    (venv) PS F:\stable diffusion\comfyUI>

    • @Luinux-Tech
      @Luinux-Tech  3 месяца назад +1

      Something has been updated and is causing this error. Please look at the comment below that discusses numpy.