A webui 1024x1024 image takes 1 min 50 sec on an RX 6600. Pretty good. Btw, I got an error by just using --use-directml; instead, I used all the commands in your video but removed the --lowram. I also used a different model, not sure if that affects the processing time. Thank you for the clear instructions.
I'm glad I could help, and you're right. Some GPUs only need "--use-directml" to work, others need all or some of the other commands. I'll clarify that in the description.
Hi, I have an i5-11400F and an RX 6600. I followed all your steps for ComfyUI (local, not web UI) and everything works, except that it's using my CPU. Task Manager shows CPU usage but no GPU usage. It's also taking very long to generate the image, I assume for this reason? How can I fix this? Please help me.
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch has access to your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
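For anyone who prefers to run this check as a script instead of typing it interactively, here is a minimal sketch. The function name and fallback messages are my own; note also that on torch-directml builds the card may be reported through `torch_directml.device_name(0)` rather than `torch.cuda`:

```python
def describe_torch_device() -> str:
    """Report the GPU torch can see, or a plain message instead of a crash."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this venv"
    try:
        # CPU-only builds raise AssertionError("Torch not compiled with CUDA enabled")
        return torch.cuda.get_device_name(0)
    except (AssertionError, RuntimeError) as exc:
        return f"no GPU visible to torch: {exc}"

print(describe_torch_device())
```

If this prints a fallback message instead of your card's name, torch ended up as a CPU-only build and reinstalling torch-directml inside the venv is the next step to try.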
Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.
Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.
@@burmy1774 Yes, and it's very simple. Create the script on your desktop and use the "call" command to activate the venv. Then, use the "call" command to run the "webui-user.bat" script. Something like that.
I did everything right, but when I click on Queue, the PC freezes and "Reconnecting" appears. I have more VRAM than used in the video and the image does not generate. I've done everything. How do I solve this?
Please, when asking for help, inform about your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI, and what the dimensions of the image you tried to generate are.
Well, this is what I am getting: "The GPU will not respond to more commands, most likely because some other application submitted invalid commands. The calling application should re-create the device and continue." It happens when it reaches 30%, and it is not fast either! My PC RAM is 32 GB; my GPU according to Task Manager is 4 GB and shared GPU memory is 14 GB, so why isn't it working? I don't get it!
(venv) PS D:\as\Ai\ComfyUI> pip install -r requirements.txt
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
Hello. (i3 9100 and RX 580 8GB) I'm using WebUI. Image generation works fine, but when I checked my VRAM usage after generating, it stays at max usage for some reason. Is this normal? Because of this, I cannot upscale the image; it says that I don't have enough VRAM: "RuntimeError: Could not allocate tensor with 1073741824 bytes. There is not enough GPU video memory available!" Could you help please?
Hi, sorry I didn't answer earlier. Normally, the VRAM ends up being full because the model is sent to VRAM to speed up the generation process. However, since your GPU has 8GB, this should only happen if you are using SDXL (models trained for 1024x1024). I suggest trying the Tiled Diffusion & VAE extension and the "--lowvram" parameter. This should be sufficient to eliminate errors caused by insufficient memory. Another option would be to generate the images first and then upscale them. It is also worth saying that DirectML (the API that allows you to use AMD cards for Machine Learning) has some memory problems and may be contributing to this abnormal use of VRAM.
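The byte counts in these errors are exact tensor sizes, which makes them easy to decode. A small sketch of the arithmetic; the example shapes are illustrative assumptions, not what webUI actually tried to allocate:

```python
def tensor_bytes(shape, bytes_per_element=4):
    """Size in bytes of a dense tensor (float32 = 4 bytes per element)."""
    total = 1
    for dim in shape:
        total *= dim
    return total * bytes_per_element

# The failed request from the error above is exactly 1 GiB:
assert 1073741824 == 1024 ** 3

# One illustrative shape with that exact cost: a square float32 matrix
# over the 128 * 128 = 16384 latent positions of a 1024x1024 image
# (SD latents are 1/8 of the pixel resolution).
assert tensor_bytes((16384, 16384)) == 1073741824

# The 256 MiB failure quoted elsewhere in this thread:
assert tensor_bytes((8192, 8192)) == 268435456
```

This is why upscaling in particular trips the error: intermediate tensors grow with the square of the working resolution, so a step that fits at 512x512 can demand a single gigabyte-sized block at 1024x1024.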
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch sees your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
@Luinux-Tech Yes, I've already fixed that, but there is another error: "could not allocate tensor". The first one is around 400k bytes, and after I use --lowvram it changes to 100k bytes. Any fix for this?
I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
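That rule of thumb can be written down as a tiny helper. The thresholds are the ones from the reply above; the function name is mine:

```python
def vram_flag(vram_gb: float) -> str:
    """Pick a ComfyUI VRAM flag from card capacity, per the rule above."""
    if vram_gb < 6:
        return "--lowvram"
    if vram_gb <= 12:
        return "--normalvram"
    return "--highvram"  # 16 GB / 24 GB cards: keep models resident in VRAM
```

For example, `vram_flag(8)` returns "--normalvram".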
@@Luinux-Tech Traceback (most recent call last):
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\main.py", line 90, in <module> import execution
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\execution.py", line 13, in <module> import nodes
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\nodes.py", line 21, in <module> import comfy.diffusers_load
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\diffusers_load.py", line 3, in <module> import comfy.sd
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\sd.py", line 5, in <module> from comfy import model_management
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\model_management.py", line 62, in <module> import torch_directml
File "C:\Users\Ooze\Documents\stable-diff\venv\Lib\site-packages\torch_directml\__init__.py", line 21, in <module> import torch_directml_native
ImportError: DLL load failed while importing torch_directml_native: The specified module could not be found.
This is the error PowerShell shows. I installed every step without any problem, but when I try to run python main.py and the whole command, it gives that error.
The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI and 2 minutes and 3 seconds for an image also with 20 steps in webUI. My GPU is an RX 550 4GB.
3 things. 1. Thank you for this tutorial. It worked the 1st time and does what I need it to do without fighting with me. 2. Is there a way to create a launcher of sorts? I don't know ANYTHING about Python or Git or coding. I know you could call on it, but I've not found a tutorial to help. 3. I've got a 7900XTX and I'm still getting 1-2 IT/s. When I had Automatic1111 I was cranking out 11-15 IT/s, but I switched to Comfy after hearing it was superior. A 1024x1024 image generates in 24 secs, so I'm not complaining, but asking: is there a way to improve IT/s?
Thank you. To create a launcher on Windows, try following the pinned comment in this video: ruclips.net/video/b9pqNQBSlpw/видео.html. Regarding improving performance, you can try using ComfyUI with the "efficiency-nodes-comfyui" extension. Another way to improve performance would be using ROCm directly on Linux.
First of all, your guide is very clean and easy to follow. I followed all your steps and it all worked; stable diffusion opens up in the webpage just like in the video. But when I tested it with a random prompt, it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid". I tried to download 2 different models, but the error is the same. Any idea? Cuz I checked online but couldn't find anything.
Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.
How do I get the ability to open up Python from the folder I'm currently in? Is it clicking that box that says "Add Python 3.10 to PATH"? Because it's still not there for me. Maybe it's my Windows appearance settings that make Windows 10 appear like Win 7?
Yes, it should work by checking the "Add to Path" box. If not, open the environment variables menu, double click on "Path," and add the path of your Python installation (usually: C:\Users\[user]\AppData\Local\Programs\Python\Python[version]).
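A quick way to verify the result without reopening the environment-variables dialog is to inspect the PATH string directly. A sketch, with the helper name my own; run it in a fresh terminal so it sees the updated value:

```python
import os

def dir_on_path(directory: str, path_value=None) -> bool:
    """True if `directory` appears as an entry in a PATH-style string."""
    if path_value is None:
        path_value = os.environ.get("PATH", "")
    wanted = os.path.normcase(os.path.normpath(directory))
    entries = (p for p in path_value.split(os.pathsep) if p)
    return any(os.path.normcase(os.path.normpath(p)) == wanted for p in entries)
```

If `dir_on_path(r"C:\Users\you\AppData\Local\Programs\Python\Python310")` returns False in a new terminal, the installer's checkbox didn't take effect and the entry has to be added by hand.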
Oh, and how do I enable Olive support as well? I've read an article on the AMD blog saying it helps optimize AI models to run faster, sometimes a lot faster, particularly on AMD GPUs. I've seen there is a branch of Stable Diffusion on GitHub with Olive support, but I failed to make it work because of errors during installation. I've managed to install ComfyUI, but webUI returns an OSError.
@@Luinux-Tech I've managed to make ComfyUI run using your method and links. I've tried to install webUI and it returns an error that it can't find a repository on Hugging Face. It is returned by a Python script run from the bat file... I tried to reinstall, and with some other tutorials I get a "no CUDA drivers detected" error even though I used the -directml command line, etc. What can it be? Since ComfyUI seems to work fine, I tried the Photoshop plugin for ComfyUI. I installed everything following the instructions, but it doesn't work, reporting an error that the VAE was not found, while it works fine in the browser. So sad that NOTHING works out of the box..... goddamn quest.
About webUI, I also noticed these errors coming from Hugging Face. Apparently, some files are no longer available on Hugging Face, and the webUI guys still haven't fixed the broken links. As for ComfyUI and Photoshop, unfortunately, I haven't tested them yet, so I can't help at the moment.
Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.
Please, if you are asking for help, explain your error in detail, inform about your hardware(CPU + GPU), and specify whether you used WebUI or ComfyUI.
@@Luinux-Tech Ok bro, I have a new problem: when I type .\venv\Scripts\activate in Windows PowerShell, it gives an error. I tried your tutorial again because I reinstalled my laptop yesterday, and that's what happened. Sorry, my English is bad; I hope you understand what I mean.
Thank you for the video. How can I dedicate more VRAM to the "server"? I have an RTX 6750 12 GB, but it only reserves 1 GB ("--reserve-vram 4096" or "--reserve-vram 2.0" or other numbers is not working).
Following the WebUI steps, here is the error I got: after image generation, the image disappeared and I got this message in the terminal: "RuntimeError: Could not allocate tensor with 134217728 bytes. There is not enough GPU video memory available!"
This method will only use your AMD GPU, and since you only have 2GB of VRAM, it will be very difficult to generate an image. You can try to reduce the resolution of the image you want to generate or use webUI with the "Tiled Diffusion & VAE" extension and the "--lowvram" parameter.
Hi, whenever I try to generate an image, it always gives me the error "Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!" What could be the solution to this problem?
@@xverny0 Your GPU should be able to generate images much larger than 512x512. Are you using webUI or ComfyUI? Please provide the full command you are using to start it. Also, try generating an image and check in the task manager if the VRAM is actually full.
Hi, I have a problem. When I run "py -m venv venv" it says: Error: Command '['C:\\AI\\venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. So I'm blocked at the starting point. Could you help me? Thank you so much.
First, run the command: "py -m pip install --upgrade pip" and then "py -m pip install virtualenv". Then, try again to create the venv. If it doesn't work, I recommend reinstalling Python.
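What `py -m venv venv` does can also be driven from the stdlib `venv` module, which makes it easy to isolate the failing step: ensurepip (the `with_pip` part) is exactly where the error above came from. A sketch, with the function name my own:

```python
import os
import venv
from pathlib import Path

def create_venv(target: str, with_pip: bool = True) -> Path:
    """Create a virtual environment, roughly `py -m venv <target>`.
    Disabling with_pip skips the ensurepip step that failed above,
    which helps tell a broken Python install apart from a broken pip."""
    venv.create(target, with_pip=with_pip)
    # Windows puts launcher scripts in Scripts\, POSIX in bin/
    return Path(target) / ("Scripts" if os.name == "nt" else "bin")
```

If `create_venv("venv", with_pip=False)` succeeds but the default call fails, the Python install itself is fine and only the bundled pip bootstrap is broken, which the two pip commands above usually repair.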
AMD Radeon RX 7600 XT, CPU AMD Ryzen 7 7800X3D 8-Core. I ran through your entire tutorial, and it was very clear, but at the end I got the following error: "AttributeError: module 'torch' has no attribute 'Tensor'". I hope you can help me fix it.
Hi there, I installed as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM goes up to about 20 GB usage, but it still says "fallback to CPU". I get about 3-4 it/s, and after the process finishes my VRAM is still full at 20 GB. What's happening here? It still says "fallback to CPU" while executing.
I used ComfyUI. While executing, the graphics card goes up to about 80% GPU usage. The VRAM stays at 20 GB at all times after the first image; after the second it goes even higher. I still only get like 3-4 it/s. I used the standard parameters from the basic layout with my own prompt. If I queue up like 3 batches with 40 steps, it will even drop to 1.5-2.6 it/s.
Sorry. You switched the units and confused me. If you are getting 3-4 iterations per second, it seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards. If I am not mistaken, a high-end AMD card will have a third of the performance of a high-end Nvidia 4000 Series card in machine learning.
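The mix-up is easy to make: tqdm-style progress bars switch units automatically, printing it/s above one step per second and s/it below it. A sketch of the arithmetic (helper name is mine):

```python
def seconds_per_image(rate: float, steps: int, unit: str = "it/s") -> float:
    """Wall time for one image at a given sampling speed.
    unit is "it/s" (iterations per second) or "s/it" (seconds per iteration)."""
    if unit == "it/s":
        return steps / rate
    if unit == "s/it":
        return steps * rate
    raise ValueError("unit must be 'it/s' or 's/it'")

# 20 steps at a true 3 it/s finishes in under 7 seconds;
# 20 steps at 3 s/it takes a full minute.
```

So "3-4 it/s" with images taking tens of seconds almost certainly means the bar was actually showing s/it.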
@@Luinux-Tech I remember the same performance on other people's 6900 XT. Is there really not that big of a difference at all between the 6xxx and 7xxx cards? Would it be a big difference switching to Linux?
About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.
Hey, thank you for the tutorial. My PC crashes when it comes to generating image (Radeon 6800xt, Ryzen 7950x). torch.cuda.get_device_name(0) returns 'Torch not compiled with CUDA enabled', and 'pip install torch-directml torchaudio' returns Requirement already satisfied. Any idea what might be the problem?
@@Luinux-Tech Nah, it's just that on GitHub it's listed without "torchaudio", so I was a little confused. But hey, I experimented with and without torchaudio, and both ways seem to be working.
Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.
If you are using ComfyUI, download the model and place it in the ComfyUI > models > checkpoints folder. If you are using webUI, place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder. After that, just reload the page in the browser and select the new model.
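The two destinations can be captured in a small helper so a downloaded file always lands where the UI scans for it. The folder names come from the comment above; the helper itself is hypothetical:

```python
import shutil
from pathlib import Path

# Checkpoint folders scanned by each UI, relative to the install root.
CHECKPOINT_DIRS = {
    "comfyui": Path("ComfyUI") / "models" / "checkpoints",
    "webui": Path("stable-diffusion-webui-directml") / "models" / "Stable-diffusion",
}

def install_checkpoint(model_file: str, ui: str, root: str = ".") -> Path:
    """Move a downloaded .safetensors/.ckpt into the folder `ui` scans."""
    dest_dir = Path(root) / CHECKPOINT_DIRS[ui]
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(model_file).name
    shutil.move(model_file, str(dest))
    return dest
```

Example: `install_checkpoint("Downloads/model.safetensors", "comfyui")` moves the file into ComfyUI/models/checkpoints; a browser reload then makes it selectable.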
Hello, Linux Made EZ! I ran into this problem when trying to generate images in WebUI and I don't know how to solve it: "NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device." Do you know anything about this?
Hardware/OS: AMD RX 5700, Intel Core i7-3770K Ivy Bridge, Windows 11 23H2.
cmd log:
PS C:\Users\King\Documents\Stable-diff> .\venv\Scripts\activate
(venv) PS C:\Users\King\Documents\Stable-diff> cd .\stable-diffusion-webui-directml\
(venv) PS C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml> .\webui.bat
venv "C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
ROCm Toolkit 6.1 was found.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-9-g46397d07
Commit hash: 46397d078cff4547eb4bd87adc5c56283e2a8d20
Using ZLUDA in C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml.zluda
Failed to load ZLUDA: list index out of range
Using CPU-only torch
...
Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 67.0s (load weights from disk: 1.1s, create model: 2.1s, apply weights to model: 21.0s, apply half(): 2.6s, calculate empty prompt: 40.1s).
After I pressed GENERATE, the cmd log shows:
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Time taken: 1 min. 16.6 sec.
For anyone who wants to help, my Discord is iwish6768, Discord ID 292757299309838337. When you add or write to me, please indicate the reason: "I want to help with SD for AMD".
Are you trying to use ZLUDA? What is happening is that ZLUDA is not working, and the webUI is trying to use CPU fallback. However, your CPU is very old and does not work with PyTorch, so it gives an error.
@@Luinux-Tech No, I'm not trying to use ZLUDA. I have it on my PC, I use it for Blender, and I understand roughly what it is, but I repeated the installation exactly according to your video tutorial, and what happens is that no images are generated with any model, with the error described above. By the way, ComfyUI works fine for me, but stable-diffusion-webui-directml does not. I'll try to install SD in another folder from scratch, and if this error appears again, I'll edit this message and add to it.
... I don't know what exactly did it, deleting all versions of Visual Studio, or installing/updating the AMD HIP SDK with the beta version of the driver in its settings, but now, at half past six in the morning, I finally got a working SD on the 20th try using your video. So thank you very much for the video tutorial, and remember that AMD is the cheap option only for games.
It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.
@@Luinux-Tech The problem is that the git clone download is interrupted. I can't download at all; it's constantly interrupted. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem: I can't run webui-user.bat because the user name on the machine I copied the clone from is different, and the path to Python is different too. Do you know by any chance in which of the repository's files I can correct the path to Python? This is getting silly.
What do you mean? Windows 11 terminal is just a "skin" for PowerShell and Windows CMD, all commands work normally. To open PowerShell in Windows 10, go to the File Explorer and press Shift + Right Click --> Open PowerShell here.
@@rexfullbuster8325 ComfyUI recently updated the interface, and they decided to remove the generate button. To generate an image, you need to press "CTRL + Enter".
Please help:
Traceback (most recent call last):
File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in <module> import comfy.utils
File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in <module> import torch
File "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in <module> raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
First, install the "App Installer" from the Microsoft Store. Then, restart your computer and run the following command without the quotes: "winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64""
Thanks for the great video. When I try to generate, it is not using the GPU at all, just the CPU. I have a 6650 XT. When running Comfy, I get this at the start:
Using directml with device:
Total VRAM 1024 MB, total RAM 16333 MB
pytorch version: 2.3.1+cpu
Set vram state to: LOW_VRAM
Device: privateuseone
How do I get it to use the GPU? Instead it has the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!
@@Luinux-Tech Thanks for the reply. I ran the directml install command and there were no errors. Then I noticed, as I was watching your video, that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping. It took about 193 seconds to make an image. One thing: I have 8 gigs of VRAM, but it only shows 1 gig (just like in your video). It errors out unless I use the lowvram parameter.
Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.
This usually happens when the VRAM is completely full and causes the system itself to crash, what is your GPU? What is the resolution of the image you are trying to generate?
I keep getting this error during image generation: 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications.
@@Luinux-Tech It happens to me too. This is what I get from the terminal:
(venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
[notice] A new release of pip available: 22.2.1 -> 24.2
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.1.1 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy
These models are trained to generate images in specific resolutions, such as 512x512, 1024x1024... To get larger images, you first need to generate the image in a size supported by the model you are using, then upscale it to the resolution you want. That said, yes, your GPU is capable of producing images in FullHD using upscaling.
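That two-step plan (generate near the model's native size, then upscale) can be sketched as follows. The helper and its rounding rule are my own assumptions; dimensions are rounded to multiples of 64, which SD-era models generally expect:

```python
def plan_upscale(target_w: int, target_h: int, base: int = 512):
    """Return (gen_w, gen_h, scale): a near-native generation resolution
    matching the target's aspect ratio, plus the upscale factor needed."""
    aspect = target_w / target_h
    if aspect >= 1:
        gen_w = round(base * aspect / 64) * 64
        gen_h = base
    else:
        gen_w = base
        gen_h = round(base / aspect / 64) * 64
    scale = max(target_w / gen_w, target_h / gen_h)
    return gen_w, gen_h, scale
```

For a FullHD (1920x1080) target with a 512-native model, this suggests generating at 896x512 and upscaling by roughly 2.14x.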
Windows DirectML does not manage memory very well at the moment. In Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters(seed, lora, model...) in Linux took about 145 seconds, while in Windows it took 201 seconds.
It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results. When I built a 512x512 image with queue prompt, the GPU isn't working at all: 100% of RAM, no GPU use, CPU at 5%. It took five minutes to generate the first time, six minutes the second, six again, and so on. With normalvram or lowvram, same result; ComfyUI doesn't touch my GPU. T_T
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check if torch sees your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
@@Luinux-Tech venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
return get_device_properties(device).name
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.
Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!
You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.
@@Luinux-Tech Thank you. I'm very bad at scripting, but I know a little about security. How do you use the code you suggested to allow only the scripts in this tutorial?
Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command: "Unblock-File -Path .\webui-user.bat". Without the quotes.
First try, it works awesome using the GPU. Second try, without even closing the terminal, I get this: py:688: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. So now ComfyUI is using my CPU 😮💨
Try to describe the error better, indicating at which step the error occurs and whether you used WebUI or ComfyUI. Any additional information that could be helpful would be appreciated.
Can anyone help me? I have this error running .\webui-user.bat:
stderr: error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
[21 lines of output]
+ meson setup C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302 C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-python-native-file.ini
The Meson build system
Version: 1.6.0
Source dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302
Build dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk
Build type: native build
Project name: scikit-image
Project version: 0.21.0
WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `cc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `gcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `clang --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `clang-cl /?` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
Running `pgcc --version` gave "[WinError 2] El sistema no puede encontrar el archivo especificado"
A full log can be found at C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata. See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Apparently, Pip is trying to compile a package because it couldn't install the binary. Are you using Python 3.10? Activate the virtual environment and try running the command "pip install scikit-image" to see what happens.
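Some context on why the Python version matters here: when pip finds no prebuilt wheel for the running interpreter, it falls back to compiling from source, which on Windows needs a C compiler; that is the Meson "Unknown compiler(s)" error above. A hedged sketch of the check (the 3.10 pin comes from the tutorial; other versions may also have wheels):

```python
import sys

def matches_tutorial_python(version=None) -> bool:
    """True when the interpreter is a 3.10.x build, the version the
    tutorial pins and for which this stack ships prebuilt Windows wheels."""
    if version is None:
        version = sys.version_info
    return (version[0], version[1]) == (3, 10)
```

Running this inside the activated venv is a quick way to confirm the venv was created from the right interpreter before blaming the package.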
Requested to load AutoencoderKL
Loading 1 new model
loaded partially 64.0 63.99990463256836 0
!!! Exception during processing !!! Numpy is not available
Traceback (most recent call last):
File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i)
File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs))
File "F:\stable diffusion\comfyUI\nodes.py", line 1497, in save_images i = 255. * image.cpu().numpy()
RuntimeError: Numpy is not available
Prompt executed in 124.74 seconds
(venv) PS F:\stable diffusion\comfyUI>
Please, if you are asking for help, explain your error in detail, inform about your hardware(CPU + GPU), and specify whether you used WebUI or ComfyUI.
I have a problem at 4:30, I did write the .\venv\Scrips\activate code but it says "the specified module .venv was not loaded because no valid module file was found in any module directory"
You are using the wrong path, check if you are in the correct folder and if there are no misspelled words, for example in your comment it says "\Scrips" instead of "\Scripts".
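A typo like this can be caught before running the activation command by checking that the script exists at the expected location. A minimal sketch; the `venv` folder name comes from the video, and the helper is hypothetical:

```python
from pathlib import Path

def find_activate_script(venv_dir: str):
    """Return the venv activation script path, or None if it is missing."""
    for candidate in (
        Path(venv_dir) / "Scripts" / "activate",  # Windows layout
        Path(venv_dir) / "bin" / "activate",      # Linux/macOS layout
    ):
        if candidate.exists():
            return candidate
    return None
```

If this returns None, either the folder name is misspelled ("Scrips" vs "Scripts") or the venv was created somewhere else.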
You should have really pasted all the cmds you used in the description.........
The first command worked; then at 3:40 I typed the command correctly and it says "'Set-ExecutionPolicy' is not recognized as an internal or external command, operable program or batch file.". I didn't follow the tutorial from the start because I already have Python 3.10.6 and Git installed, since I used to use A1111 and am now moving to Comfy. Could this be because I am on a different drive than where my Python is installed? Do I have to do everything on the C drive?
Are you running the command in PowerShell? This command is not for Windows CMD.
@@Luinux-Tech TYSM, it worked! And one more question: can you tell me if I can use ZLUDA with it, because I only got an RX 580 8GB? And if I can't, can I run Flux with DirectML? Thanks again 😁
Yes, ZLUDA works, but from my experience, at least on Polaris GPUs, the performance difference is not very significant. I haven't tested Flux yet, but if you have ComfyUI running, just download the model and a workflow and test it.
Thanks a lot, I was looking for a way to install it with my AMD GPU. I have a 7900 XT and your tutorial is very clear. I was on Fooocus before and it took about 4 times longer to generate lmao, thanks a lot.
Glad it helped
Thank you so much for the tutorial. I've tried other videos, but this is the only one I could find that actually works with AMD.
Glad I could help
Followed each step, but I get "ModuleNotFoundError: No module named 'torch_directml'" ☹ 07:30
Try running the command "pip install torch-directml torchaudio" again and see if there are any errors in the terminal.
@@Luinux-Tech Hmm, now it worked, but the problem is still the same: "Could not allocate tensor with 268435456 bytes. There is not enough GPU video memory available!" Hardware: AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4 GB), Dell Inspiron 15 3567, i7-7500 CPU @ 2.7 GHz, 8 GB RAM.
After knocking my head against a wall for 4 hours, I finally found your video!
Thanks, you're a genius!
Subscribed
Glad it helped!
Bro, at this point please marry me. I tried for days and those BS tuts never worked for me. This one is a lifesaver, THANK YOU!!!!!!!!!!!!!
Thanks, glad to help
Man, I wasted my whole day trying to get AI working on my computer. I got text-to-text and such working, like Llama 3.2 Vision through Ollama, but damn, image generation was impossible. So much useless, old, and wrong information out there. Thank you, amazing; this really should have more views. Subscribed.
A WebUI 1024x1024 image takes 1 min 50 sec on an RX 6600. Pretty good. By the way, I got an error using just --use-directml; instead, I used all the commands in your video but removed --lowram. I also used a different model; not sure if that affects the processing time. Thank you for the clear instructions.
I'm glad I could help, and you're right. Some GPUs only need "--use-directml" to work, others need all or some of the other commands. I'll clarify that in the description.
Hi, I have an i5-11400F and an RX 6600. I followed all your steps for ComfyUI (local, not WebUI), and everything works well except that it's using my CPU. Task Manager shows CPU usage but no GPU usage. It's also taking very long to generate the image, for this reason I assume? How can I fix this? Please help me.
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check whether torch has access to your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
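The check above can be sketched as a small script run inside the activated venv. One caveat worth hedging: on the DirectML install from this video, "torch.cuda.get_device_name(0)" can raise "Torch not compiled with CUDA enabled" even when the install is fine, because the GPU is exposed through the torch_directml package instead of torch.cuda. The sketch below (assuming torch_directml's device_name helper is available) checks both paths:

```python
# Diagnostic sketch: report which GPU (if any) PyTorch can see in this venv.
def describe_device() -> str:
    try:
        import torch_directml  # AMD/DirectML path from this tutorial
        return torch_directml.device_name(0)
    except Exception:
        pass  # not a DirectML install, or torch_directml missing
    try:
        import torch
        if torch.cuda.is_available():
            return torch.cuda.get_device_name(0)  # NVIDIA / ROCm builds
        return "torch installed, but no GPU visible (CPU-only build?)"
    except ImportError:
        return "torch is not installed in this environment"

print(describe_device())
```

If this prints your GPU's name, the backend is fine and the problem lies elsewhere (launch flags, wrong venv, etc.).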
How do I install Forge UI on an AMD GPU? There's no video on YT, please upload one soon.
I can't use PowerShell: if I run it as admin the path changes, and if I use it in the same folder an error just happens...
Explain your error in detail, tell us about your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.
Super dooper amazing tutorial worked like a charm 😊😊
finally a no BS tutorial! Thanks
Could you further explain "Unblock-File -Path"? I tried researching but could not find anything.
Open the terminal in the folder with the file you want to enable for execution. For example, for webUI, you must be in the folder that contains the file "webui-user.bat". Then, right-click and open the terminal. Next, run the command "Unblock-File -Path .\webui-user.bat" (without the quotes). This removes the "downloaded from the internet" mark that Windows puts on the file, which is what was blocking the script from running.
@@Luinux-Tech ohhh ok thanks!!
@@Luinux-Tech and what about comfyui itself? what do I put in the place of "name_of_script_to_unblock"?
@@rubensoliveira9681 did you find out? im stuck there too
I am using Windows 10 and we don't have the "Open in terminal" option. 03:01
Shift + Right Click --> open PowerShell here.
@@Luinux-Tech Yes, just tried it and got the option. Thank you! I'm going to subscribe right now for your help. 😊😊
Do you have to always do that step at 12:56 when launching it?
Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.
@@Luinux-Tech Is there a way to create a .bat file that does all those steps to launch it?
@@burmy1774 Yes, and it's very simple. Create the script on your desktop and use the "call" command to activate the venv. Then, use the "call" command to run the "webui-user.bat" script. Something like that.
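A minimal sketch of such a launcher, following the reply above. Every path here is hypothetical and must be adjusted to wherever you created the venv and cloned the webUI:

```bat
@echo off
REM Hypothetical paths - adjust to your own install locations.
call C:\AI\venv\Scripts\activate.bat
cd /d C:\AI\stable-diffusion-webui-directml
call webui-user.bat
pause
```

Save it as something like launch-webui.bat and double-click it; "pause" keeps the window open so you can read any error.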
I did everything right, but when I click Queue the PC freezes and "Reconnecting" appears. I have more VRAM than was used in the video, and the image does not generate. I've tried everything. How do I solve this?
Please, when asking for help, inform about your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI, and what the dimensions of the image you tried to generate are.
well this is what i am getting : The GPU will not respond to more commands, most likely because some other application submitted invalid commands.
The calling application should re-create the device and continue.
When it reaches 30%, that's what I get, and it's not fast either! My PC has 32 GB of RAM; according to Task Manager my GPU has 4 GB plus 14 GB shared, so why isn't it working? I don't get it!
What width and height are you using? What is the exact model of your GPU?
(venv) PS D:\as\Ai\ComfyUI> pip install -r requirements.txt
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
I'm using a Ryzen 5 5600 CPU + RX 6600 GPU.
Are you using Python 3.10.6?
@@Luinux-Tech It's working now. I downgraded Python.
Hello. (i3-9100 and RX 580 8GB.) I'm using WebUI. Image generation works fine, but when I checked my VRAM usage after generating, it stays at max usage for some reason. Is this normal? Because of this, I cannot upscale the image; it will say that I don't have enough VRAM.
RuntimeError: Could not allocate tensor with 1073741824 bytes. There is not enough GPU video memory available!
Could you help please?
Hi, sorry I didn't answer earlier. Normally, the VRAM ends up being full because the model is sent to VRAM to speed up the generation process. However, since your GPU has 8GB, this should only happen if you are using SDXL (models trained for 1024x1024). I suggest trying the Tiled Diffusion & VAE extension and the "--lowvram" parameter. This should be sufficient to eliminate errors caused by insufficient memory. Another option would be to generate the images first and then upscale them. It is also worth saying that DirectML (the API that allows you to use AMD cards for Machine Learning) has some memory problems and may be contributing to this abnormal use of VRAM.
Hello, I have an issue here: it's using my CPU, not my GPU. Any fix for this? I'm using a 6650 XT and a Ryzen 5 5600, with WebUI.
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check whether torch is seeing your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
@Luinux-Tech Yes, I've already fixed that, but there is another error: "could not allocate tensor". The first one was around 400k bytes, and after I used --lowvram it changed to 100k bytes. Any fix for this?
After the requirements.txt step, mine got stuck at gradio in the last part of the output.
Some people have reported this bug. Try pressing enter in the terminal, and it should continue.
@@Luinux-Tech thanks
If I have 16 GB, should I put normalvram or something else?
I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
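As a quick reference, the rule of thumb from this reply can be written out as a tiny helper; the thresholds are just the ones stated here, not an official ComfyUI policy:

```python
# Map VRAM size to the launch flag suggested in the reply above.
def vram_flag(vram_gb: float) -> str:
    if vram_gb < 6:
        return "--lowvram"
    if vram_gb <= 12:
        return "--normalvram"
    return "--highvram"

print(vram_flag(4))   # --lowvram
print(vram_flag(8))   # --normalvram
print(vram_flag(16))  # --highvram
```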
I have Win10 Enterprise LTSC; I think torch doesn't work on LTSC,
because I did everything as you did. I have a Ryzen 5 5500 and an RX 6700 XT.
Can you describe the error? What does it say?
@@Luinux-Tech
Traceback (most recent call last):
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\main.py", line 90, in
import execution
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\execution.py", line 13, in
import nodes
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\nodes.py", line 21, in
import comfy.diffusers_load
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\diffusers_load.py", line 3, in
import comfy.sd
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\sd.py", line 5, in
from comfy import model_management
File "C:\Users\Ooze\Documents\stable-diff\ComfyUI\comfy\model_management.py", line 62, in
import torch_directml
File "C:\Users\Ooze\Documents\stable-diff\venv\Lib\site-packages\torch_directml\__init__.py", line 21, in
import torch_directml_native
ImportError: DLL load failed while importing torch_directml_native: The specified module could not be found.
This is the error PowerShell shows.
I installed every step without any problem, but when I try to run "python main.py" and the whole command, it gives that error.
I did everything according to the guide, but only the CPU works; the RX 6600 XT does not work 😭😭😭
What AMD GPU were you using for this? Comfy looks several times faster than A1111.
The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI and 2 minutes and 3 seconds for an image also with 20 steps in webUI. My GPU is an RX 550 4GB.
@@Luinux-Tech Wow, it's pretty cool that you can use just 4gb of vram, that's impressive....
3 Things.
1. Thank you for this tutorial. It worked the first time and does what I need it to do without fighting me.
2. Is there a way to create a launcher of sorts? I don't know ANYTHING about Python, Git, or coding. I know you could call it, but I've not found a tutorial to help.
3. I've got a 7900 XTX and I'm still getting 1-2 it/s. When I had Automatic1111 I was cranking out 11-15 it/s, but I switched to Comfy after hearing it was superior. A 1024x1024 image generates in 24 secs, so I'm not complaining, but rather asking: is there a way to improve it/s?
Thank you. To create a launcher on Windows, try following the pinned comment in this video: ruclips.net/video/b9pqNQBSlpw/видео.html. Regarding improving performance, you can try using ComfyUI with the "efficiency-nodes-comfyui" extension. Another way to improve performance would be using ROCm directly on Linux.
First of all, your guide is very clean and easy to follow. I followed all your steps and it all worked; Stable Diffusion opens in the webpage just like in the video, but when I tested it with a random prompt it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid".
I tried downloading 2 different models, but the error is the same.
Any idea? I checked online but couldn't find anything.
Thanks. What is your CPU and GPU? Are you using ComfyUI or WebUI?
@@Luinux-Tech AMD Ryzen 9 3900X 12-core CPU
NVIDIA GeForce RTX 3080
and i am using WebUI
Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.
@@Luinux-Tech oh ok, well thx for the answer and the support
How do I get the ability to open Python from the folder I'm currently in? Is it by checking the box that says "Add Python 3.10 to PATH"? Because it's still not there for me. Maybe it's my Windows appearance settings making Windows 10 look like Win 7?
Yes, it should work by checking the "Add to Path" box. If not, open the environment variables menu, double click on "Path," and add the path of your Python installation (usually: C:\Users\[user]\AppData\Local\Programs\Python\Python[version]).
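If you want to verify the result without reopening the installer, a small stdlib-only sketch can check whether the folder containing your Python executable is actually on PATH:

```python
import os
import sys

# Directory that holds the Python executable currently running this script.
py_dir = os.path.normcase(os.path.dirname(sys.executable))

# Compare against every entry in PATH (trailing slashes and case ignored).
entries = [
    os.path.normcase(p.rstrip("\\/"))
    for p in os.environ.get("PATH", "").split(os.pathsep)
]
print(py_dir in entries)  # False means you still need to add it manually
```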
Oh, and how do I enable Olive support as well? I've read an article on the AMD blog saying it helps optimize AI models to run faster, sometimes a lot faster, particularly on AMD GPUs. I've seen there is a branch of Stable Diffusion on GitHub with Olive support, but I failed to make it work because of errors during installation. I've managed to install ComfyUI, but WebUI returns an OSError.
Try deleting "C:\users\\.cache\huggingface" to fix OSError.
@@Luinux-Tech I've managed to get ComfyUI running using your method and links. I tried to install WebUI, and it returns an error that it can't find the repository on Hugging Face. It is returned by the Python script run from the bat file... I tried to reinstall, and with some other tutorials I get a "no CUDA drivers detected" error even though I used the -directml command line, etc. What can it be?
Since ComfyUI seems to work fine, I tried the Photoshop plugin for ComfyUI. I installed everything following the instructions, but it doesn't work, reporting an error that the VAE was not found, while it works fine in the browser. So sad that NOTHING works out of the box... goddamn quest.
About webUI, I also noticed these errors coming from Hugging Face. Apparently, some files are no longer available on Hugging Face, and the webUI guys still haven't fixed the broken links. As for ComfyUI and Photoshop, unfortunately, I haven't tested them yet, so I can't help at the moment.
Where is the download link for the Stable Diffusion checkpoint? There are so many links in the description and I'm confused 🥲
"Stable Diffusion model" or "Stable Diffusion alternative model"
Can I just download ComfyUI manually from GitHub? For some reason the git clone command keeps failing.
Yes, I believe it will work normally.
In ComfyUI I get the error "there's not enough GPU video memory available" even though I followed every step for low VRAM. My GPU has 4 GB.
Are you using SDXL or the normal models? What size image are you trying to generate (512x512, 768x768...)?
@@Luinux-Tech good question... what should I use?
Stable diffusion 1.5 models and images with 512x512 pixels. If you want larger images, just upscale them later.
I have a Lenovo IdeaPad Gaming 3 laptop with an AMD Ryzen 4600H. Will ComfyUI work on my system?
it should work normally. Unfortunately, I can't guarantee it because I haven't tested it on APUs.
Thanks for the tutorial, it's very helpful.
Can I ask something?
Can ComfyUI run using ZLUDA on Windows?
Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.
@@Luinux-Tech It's working with ZLUDA. I have a 6700 XT, but it's a mess to install, and I can't use most sampling methods (cuDNN error), only LCM... F
@@luxiland6117 Can you show me the method you used to make it work? I have an RX 6750 XT and I can't get it to work with ZLUDA.
Can you help me, bro? When I tried to load the model/checkpoint, I got an error.
Please, if you are asking for help, explain your error in detail, tell us about your hardware (CPU + GPU), and specify whether you used WebUI or ComfyUI.
@@Luinux-Tech OK bro, I have a new problem: when I type .\venv\Scripts\activate in Windows PowerShell, I get an error.
I tried your tutorial again because I reinstalled my laptop yesterday, and that's what happened.
Sorry, my English is so bad; I hope you understand what I mean.
@@yabeginilah6946 I can understand what you write, but I need you to specify what the error says. You can copy the error lines and paste them here.
@@Luinux-Tech I got it done, bro. Sorry, my mistake lol.
Thank you for the video. How can I dedicate more VRAM to the "server"? I have an RX 6750 12 GB, but it only reserves 1 GB ("--reserve-vram 4096", "--reserve-vram 2.0", and other numbers are not working).
ComfyUI
This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.
Following the WebUI steps, I got an error: after image generation, the image disappeared and I got this message in the terminal:
RuntimeError: Could not allocate tensor with 134217728 bytes. There is not enough GPU video memory available!
What is your GPU? and what image size are you trying to generate?
@@Luinux-Tech AMD Radeon (TM) R5 M330 (2 GB) and Intel(R) HD Graphics 620 (4 GB),
Dell Inspiron 15 3567, i7-7500 CPU @ 2.7 GHz, 8 GB RAM.
@@Luinux-Tech I generated a simple image of a "cat" to test it.
This method will only use your AMD GPU, and since you only have 2GB of VRAM, it will be very difficult to generate an image. You can try to reduce the resolution of the image you want to generate or use webUI with the "Tiled Diffusion & VAE" extension and the "--lowvram" parameter.
Hi, whenever I try to generate an image, it always gives me the error "Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!" What would be the solution to the problem?
What is your GPU? This is a lack of VRAM problem, try using the "--lowvram" parameter or decreasing the image resolution.
@@Luinux-Tech My GPU is a 6700 XT.
When I use --lowvram, --normalvram or --highvram, I still get the same error.
@@xverny0 What is the size of the image you are trying to generate?
@@Luinux-Tech size of image is 512x512
@@xverny0 Your GPU should be able to generate images much larger than 512x512. Are you using webUI or ComfyUI? Please provide the full command you are using to start it. Also, try generating an image and check in the task manager if the VRAM is actually full.
Hi, I have a problem. When I run "py -m venv venv" it says: Error: Command '['C:\\AI\\venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. So I'm blocked at the starting point. Could you help me? Thank you so much.
First, run the command: "py -m pip install --upgrade pip" and then "py -m pip install virtualenv". Then, try again to create the venv. If it doesn't work, I recommend reinstalling Python.
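To narrow down where the failure is, venv creation can also be tested in isolation. The sketch below skips the ensurepip/pip bootstrap (the part that failed in the error above) with --without-pip, so if this succeeds while plain "py -m venv venv" fails, the problem is specifically in ensurepip, which a Python reinstall usually fixes:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway venv without the pip bootstrap and check its layout.
with tempfile.TemporaryDirectory() as tmp:
    venv_dir = os.path.join(tmp, "venv")
    subprocess.run(
        [sys.executable, "-m", "venv", "--without-pip", venv_dir],
        check=True,
    )
    scripts = "Scripts" if os.name == "nt" else "bin"
    # True means venv creation itself works on this Python install.
    print(os.path.isdir(os.path.join(venv_dir, scripts)))
```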
I get stuck on "pip install -r .\requirements.txt"... I put in the correct directory... help
Be more specific. What happens? What does the error say?
AMD Radeon RX 7600 XT, AMD Ryzen 7 7800X3D 8-core CPU. I went through your entire tutorial and it was very clear, but at the end I got the following error: "AttributeError: module 'torch' has no attribute 'Tensor'". I hope you can help me fix it.
Are you using Python 3.10.6? There seems to be an error with your installation. I recommend deleting your VENV, reinstalling Python, and trying again.
@ thank you for your reaction, i will definitely try that
@@Luinux-Tech thank you so much, it works now, really appreciate your help!
Hi there, I installed it as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM goes to about 20 GB usage, but it still says "fallback to CPU". I get about 3-4 it/s, and after finishing the process my VRAM is still full at 20 GB. What's happening here? It still says fallback to CPU while executing.
Were you using webui or comfyui? Does the task manager show high GPU usage? What parameters did you use to start?
I used ComfyUI. While executing, the graphics card goes up to about 80% GPU usage. The VRAM stays at 20 GB at all times after the first image; after the second it goes even higher. I still only get about 3-4 it/s. I used the standard parameters from the basic layout with my own prompt. If I queue up 3 batches with 40 steps, it will even drop to 1.5-2.6 it/s.
Sorry, you switched the units and confused me. If you are getting 3-4 iterations per second, that seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards; if I am not mistaken, a high-end AMD card will have about a third of the performance of a high-end NVIDIA 4000-series card in machine learning.
@@Luinux-Tech I remember the same performance on other people's 6900 XTs. Is there really not that big of a difference between the 6xxx and 7xxx cards?
Would it be a big difference switching to Linux?
About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.
Hey, thank you for the tutorial. My PC crashes when it comes to generating image (Radeon 6800xt, Ryzen 7950x). torch.cuda.get_device_name(0) returns 'Torch not compiled with CUDA enabled', and 'pip install torch-directml torchaudio' returns Requirement already satisfied. Any idea what might be the problem?
Have you tried launching using the parameter: "--skip-torch-cuda-test"?
what the difference between "pip install torch-directml" and "pip install torch-directml torchaudio" ?
I specify "torchaudio" to prevent version mismatches.
@@Luinux-Tech Nah, it's just that on GitHub it's written without "torchaudio", so I was a little confused. But hey, I experimented with and without torchaudio, and both ways seem to work.
Hi, what do you type after "py"? 3:08
py -m venv venv
@@Luinux-Tech Thanks! I'll continue with the installation later. If you can, make an updated version with ZLUDA; that would be the best!
Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.
Hello friend, thanks for the video, well explained.
How can I add models?
I mean other models, like the ones from Civitai.
thanks
If you are using ComfyUI, download the model and place it in the ComfyUI > models > checkpoints folder. If you are using webUI, place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder. After that, just reload the page in the browser and select the new model.
Hello, Linux Made EZ! I ran into this problem when trying to generate images in WebUI and I don't know how to solve it:
NotImplementedError:
Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Do you know anything about this?
amd rx 5700, intel core i7-3770k ivy bridge, windows 11 23h2
cmd log:
PS C:\Users\King\Documents\Stable-diff> .\venv\Scripts\activate
(venv) PS C:\Users\King\Documents\Stable-diff> cd .\stable-diffusion-webui-directml\
(venv) PS C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml> .\webui.bat
venv "C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
ROCm Toolkit 6.1 was found.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-9-g46397d07
Commit hash: 46397d078cff4547eb4bd87adc5c56283e2a8d20
Using ZLUDA in C:\Users\King\Documents\Stable-diff\stable-diffusion-webui-directml.zluda
Failed to load ZLUDA: list index out of range
Using CPU-only torch
...
Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 67.0s (load weights from disk: 1.1s, create model: 2.1s, apply weights to model: 21.0s, apply half(): 2.6s, calculate empty prompt: 40.1s).
after I pressed GENERATE
cmd log:
RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same
Time taken: 1 min. 16.6 sec.
Or anyone who wants to help: my Discord is iwish6768, Discord ID 292757299309838337.
When you add or write to me, please indicate the reason: "I want to help with SD for AMD".
Are you trying to use ZLUDA?
What is happening is that ZLUDA is not working, and the webUI is trying to use CPU fallback. However, your CPU is very old and does not work with PyTorch, so it gives an error.
@@Luinux-Tech No, I'm not trying to use ZLUDA. I have it on my PC and use it for Blender, and I understand roughly what it is, but I repeated the installation exactly according to your video tutorial, and what happens is that no images are generated with any model, with the error described above.
By the way, ComfyUI works fine for me, but stable-diffusion-webui-directml does not. I'll try to install SD in another folder from scratch, and if this error appears again, I'll edit this message and add to it.
... I don't know what exactly did it (deleting all versions of Visual Studio, or installing/updating the AMD HIP SDK with the beta driver from its settings), but now, at half past six in the morning, I was finally able to get a working SD on the 20th try using your video. So thank you very much for the video tutorial, and remember that AMD is a cheap option only for games.
"RuntimeError: Couldn't clone Stable Diffusion.
Error code: 128"
What is it?
It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.
@@Luinux-Tech The problem is that the git clone download keeps getting interrupted; I can't download it at all. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem: I can't run webui-user.bat, because the username on the machine I copied the clone from is different, and the path to Python is different too. Do you know by any chance in which of the repository's files I can correct the path to Python? This is getting silly.
Nevermind, I have copied the venv folder from the second machine, too. Copying only git clone files made it work.
I can't do it with W10; PowerShell doesn't work like in W11 :(
What do you mean? Windows 11 terminal is just a "skin" for PowerShell and Windows CMD, all commands work normally. To open PowerShell in Windows 10, go to the File Explorer and press Shift + Right Click --> Open PowerShell here.
@@Luinux-Tech Tsm i'll try again later, I didn't know you could open the terminal like that
@@Luinux-Tech It works!!! TSM, but I don't see the "Queue Prompt" panel.
@@Luinux-Tech i'll try with the webui
@@rexfullbuster8325 ComfyUI recently updated the interface, and they decided to remove the generate button. To generate an image, you need to press "CTRL + Enter".
Please help:
Traceback (most recent call last):
File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in
import comfy.utils
File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in
import torch
File "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
Try installing the latest VC_redist.x64, download link: learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170
@@Luinux-Tech I downloaded it, but I'm still getting the same error message.
What is your GPU?
@@Luinux-Tech I just have an integrated AMD GPU. Would that be the reason?
First, install the "App Installer" from the Microsoft Store. Then, restart your computer and run the following command without the quotes: "winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64""
Very nice it worked thank you so much!
You're welcome!
I keep getting "VAE object has no attribute vae_dtype".
More information, please: Comfy or WebUI? When does the error occur? What is your hardware?
Thanks for the great video. When I try to generate an image, it is not using the GPU at all, just the CPU. I have a 6650 XT. When running Comfy, I get this at the start:
Using directml with device:
Total VRAM 1024 MB, total RAM 16333 MB
pytorch version: 2.3.1+cpu
Set vram state to: LOW_VRAM
Device: privateuseone
How do I get it to use the GPU? Instead it has the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!
When you installed torch-directml, did you notice any errors? Try running the command "pip install torch-directml" and see if there are any errors.
@@Luinux-Tech Thanks for the reply. I ran the directml install command and there were no errors. Then I noticed, as I was watching your video, that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping. It took about 193 seconds to make an image. One thing: I have 8 GB of VRAM, but it only shows 1 GB (just like in your video). It errors out unless I use the lowvram parameter.
Oh, there was a question in there: can I get it to use the 8 GB of VRAM? Do you think it might be using it even if it only shows 1 GB? Thanks!
Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.
so if i have more than 8gb vram, i type highvram?
Yes, but remember that DirectML's VRAM management is very limited, and you may end up with errors due to a lack of VRAM.
Does it work with an 8GB asrock rx 570?
Yes, in the video I used an RX 550 4GB. On yours, it will work even better because of the 8GB of VRAM.
When I create a picture my computer restarts. Does anyone know why?
This usually happens when the VRAM is completely full, which causes the system itself to crash. What is your GPU? What is the resolution of the image you are trying to generate?
@@Luinux-Tech I tried to make the vanilla bottle picture; I have an AMD 7950 XTX.
I updated to the newest driver version and the temp is below 66 °C.
I keep getting this error during image generation: 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications.
I have a problem: "Numpy is not available". Please help me.
If you activate the venv and try to install with the command "pip install numpy," what happens?
@@Luinux-Tech It happens to me too, this is what i get from the terminal:
(venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
[notice] A new release of pip available: 22.2.1 -> 24.2
[notice] To update, run: python.exe -m pip install --upgrade pip
(venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.1.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy
I just found the solution, you have to run
pip uninstall numpy
And then
pip install numpy
Thanks, all you guys, it works :DDDD
@@758185luan Yeah, it also works for me, but I really don't like the time it needs to generate; I'm probably moving to Linux.
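For context on why the reinstall helps: the warning above means binary modules built against NumPy 1.x were imported under NumPy 2.x. Here is a stdlib-only sketch to confirm which NumPy a venv actually ended up with; if the reinstall still lands on 2.x, pinning with pip install "numpy<2" is the explicit version of the same fix:

```python
from importlib import metadata


def numpy_status() -> str:
    """Report the installed NumPy version and whether it predates 2.0."""
    try:
        version = metadata.version("numpy")
    except metadata.PackageNotFoundError:
        return "numpy is not installed in this environment"
    major = int(version.split(".")[0])
    if major < 2:
        return f"{version} (1.x: fine for older torch builds)"
    return f"{version} (2.x: may break modules compiled against 1.x)"


print(numpy_status())
```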
Can I create a Full HD quality image with an RX 6600 8GB card?
These models are trained to generate images in specific resolutions, such as 512x512, 1024x1024... To get larger images, you first need to generate the image in a size supported by the model you are using, then upscale it to the resolution you want. That said, yes, your GPU is capable of producing images in FullHD using upscaling.
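The generate-then-upscale arithmetic from this reply, as a tiny helper: pick a model-native base size, then compute the factor an upscaler needs to reach the target resolution.

```python
# Scale factor from a model-native square base size to a target resolution.
def upscale_factor(base: int, target_w: int, target_h: int) -> float:
    return max(target_w, target_h) / base

print(upscale_factor(512, 1920, 1080))  # 3.75 -> a 4x upscaler covers Full HD
```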
I get the error "there is not enough GPU video memory available", but it doesn't even use my GPU (I'm not using a laptop).
GPU: RX 6600
ComfyUI or WebUI? What resolution did you use? Does the terminal say you are using "CPU only" mode?
@@Luinux-Tech WebUI, 512x512, idk
Ubuntu vs Windows: Which was faster?
Windows DirectML does not manage memory very well at the moment. On Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters (seed, LoRA, model...) on Linux took about 145 seconds, while on Windows it took 201 seconds.
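A quick back-of-the-envelope comparison using the two timings quoted above:

```python
# Timings quoted above: same seed, LoRA, and model on both systems.
linux_s, windows_s = 145, 201

slowdown = windows_s / linux_s
print(f"Windows took {slowdown:.2f}x as long ({(slowdown - 1) * 100:.0f}% slower)")
# → Windows took 1.39x as long (39% slower)
```

So for this one workload, Linux was roughly 40% faster per image.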
Can't install ComfyUI, it's impossible, it can't find the path... it's a PAIN in the ass...
It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results. When I build a 512x512 image and queue the prompt, no GPU is working, only 100% of RAM; the GPU is unused and the CPU is at 5%. It took five minutes to generate the first time, six minutes the second time, six again, and so on. With "--normalvram" or "--lowvram", same result: ComfyUI doesn't touch my GPU. T_T
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and check whether torch is seeing your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
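The same check can be run as a small script (a sketch of my own; it degrades gracefully if torch isn't installed instead of raising):

```python
import importlib.util

def gpu_report():
    """Return the GPU name torch sees, or a diagnostic string (helper name is mine)."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this venv"
    import torch
    if torch.cuda.is_available():
        return torch.cuda.get_device_name(0)
    # Note: with torch-directml the device is exposed via the torch_directml
    # module rather than torch.cuda, so a negative report here is not conclusive.
    return "torch is installed but torch.cuda sees no GPU"

print(gpu_report())
```

Run it inside the activated venv so it inspects the same interpreter ComfyUI uses.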
@@Luinux-Tech
venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
return get_device_properties(device).name
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.
@@Luinux-Tech Not working, and I made a clean install... Something doesn't work or gets bypassed during the install.
When installing torch-directml, do you notice any errors regarding mismatches in the versions of the torch packages?
Does ComfyUI work well with a 7900 XTX?
Yes, and with this GPU you can easily use SDXL (higher-resolution models).
Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!
Will do soon
Are Stable Diffusion and Fooocus the same thing?
Fooocus is more automated, requiring less user input.
What does the "allow scripts" part do?
It allows you to run the ComfyUI/WebUI startup script under your user account.
@@Luinux-Tech And this is very dangerous, because you're opening everything up to malware. Is there another way not to do this?
You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.
@@Luinux-Tech Thank you. I'm very bad at scripting, but I know a little about security. How do you use the code you suggested to allow only the scripts in this tutorial?
Open the terminal in the folder containing the file you want to enable for execution. For example, for WebUI, you must be in the folder that contains the file "webui-user.bat". Right-click there, open the terminal, and run the command "Unblock-File -Path .\webui-user.bat" (without the quotes).
Can you tell me why my SD doesn't run on my GPU?
I need information: what is your GPU? Is it giving an error? What does the error say?
Is there any way to use LoRA with this?
Yes, it works normally. Just put the LoRA files in the correct folder (ComfyUI > models > loras) and use them.
First try works awesome using the GPU. Second try, without even closing the terminal, I get this: "py:688: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU." So now ComfyUI is using my CPU 😮‍💨
In the browser, press the "load default" option and try again.
Doesn't work for me.
Hey man can we get flux guide too?
Sure, give me a few days.
I get a 404 error window on the Stable Diffusion model page.
No ZLUDA?
The Stable Diffusion model site leads to a 404 error.
Also, almost all upscalers lead to the problem "Cannot set version_counter for inference tensor". Can anyone tell me how to fix this?
Yes, they took down the link. I've already updated it with new links. Are you using ComfyUI?
@@Luinux-Tech Nope, the stable-diffusion-webui-amdgpu version.
Thanks for the guide - it works with my old RX 6700 XT. Now looking for other good models to try.
Glad to help
old?? bruh
Nice, very good, it works!
Glad it helped
ComfyUI
Total VRAM 1024 MB
RX 580 8GB
This is just a bug in DirectML. Don't worry, it will use all the VRAM it needs. You can check in the Task Manager.
Error, not usable.
Try to describe the error better, indicating at which step the error occurs and whether you used WebUI or ComfyUI. Any additional information that could be helpful would be appreciated.
thanks idol
You're welcome👍
tks u
thx bruh, I'm subscribed
OMG PLS HELP ME, LINUX MADE EZ, I ACCIDENTALLY REMOVED MY IMAGE OUTPUT, WHAT DO I DO 😭😭😭😭
Uh, what did I even get out of this? Absolutely nothing.
OK NEVERMIND, I CLICKED LOAD DEFAULT, THANKS
Can anyone help me? I have this error running .\webui-user.bat:
stderr: error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
[21 lines of output]
+ meson setup C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302 C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-python-native-file.ini
The Meson build system
Version: 1.6.0
Source dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302
Build dir: C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk
Build type: native build
Project name: scikit-image
Project version: 0.21.0
WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] The system cannot find the file specified"
Running `cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `cc --version` gave "[WinError 2] The system cannot find the file specified"
Running `gcc --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified"
A full log can be found at C:\Users\andre\AppData\Local\Temp\pip-install-tqlyoivq\scikit-image_75eb9669dc0a4203a78c580c08d85302\.mesonpy-rsocldgk\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Apparently, Pip is trying to compile a package because it couldn't install the binary. Are you using Python 3.10? Activate the virtual environment and try running the command "pip install scikit-image" to see what happens.
Requested to load AutoencoderKL
Loading 1 new model
loaded partially 64.0 63.99990463256836 0
!!! Exception during processing !!! Numpy is not available
Traceback (most recent call last):
File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\stable diffusion\comfyUI\nodes.py", line 1497, in save_images
i = 255. * image.cpu().numpy()
RuntimeError: Numpy is not available
Prompt executed in 124.74 seconds
(venv) PS F:\stable diffusion\comfyUI>
Something has been updated and is causing this error. Please look at the other comment in this thread that discusses the numpy fix.