FE-Engineer
  • Videos: 37
  • Views: 309,077
March 2024 - Stable Diffusion with AMD on Windows -- use ZLUDA ;)
SD is so much better now using ZLUDA!
Here is how to run Automatic1111 with ZLUDA on Windows, and get all the features you were missing before!
** Only GPUs that are fully or partially supported by ROCm can run this -- check whether yours is fully or partially supported before starting! **
Check if your GPU is fully supported on Windows here:
rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/system-requirements.html
Links to files and things:
Git for Windows: gitforwindows.org/
Python: www.python.org/downloads/
Zluda: github.com/lshqqytiger/ZLUDA/releases/
AMD HIP SDK: rocm.docs.amd.com/projects/install-on-windows/en/develop/
Add PATH entries for the HIP SDK and wherever you copied ZLUDA...
Views: 58,233
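The PATH step above might look like the following. Both folder locations are hypothetical examples (the description leaves them to you), so substitute your actual HIP SDK and ZLUDA locations:

```shell
# PowerShell sketch -- both paths below are hypothetical examples, not from the video.
# Appends the HIP SDK bin folder and the ZLUDA folder to the *user* PATH.
# Open a new terminal afterwards so the change takes effect.
# (setx truncates values over 1024 chars; for long PATHs use the
#  Environment Variables GUI instead.)
setx PATH "$env:PATH;C:\Program Files\AMD\ROCm\5.7\bin;C:\ZLUDA"
```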

Videos

Absolute basics of setting up a website using Remix, React, and Mantine.
673 views · 10 months ago
The video literally no one asked for. :-p *I understand this is not my channel's usual content* I am spinning up a website that will complement the YouTube channel. I am not 100% sure what all will be on it quite yet. But you will be able to access my YouTube videos and probably even more content on the site. This video shows how I go about setting up a new website using React, Remix (react framewor...
How to fix Automatic1111 DirectML on AMD 12/2023! Fix broken Stable Diffusion setup for ONNX/Olive
35K views · 10 months ago
*Update March 2024: better way to do this* ruclips.net/video/n8RhNoAenvM/видео.html Currently if you try to install Automatic1111 and are using the DirectML fork for AMD GPUs, you will get several errors. This shows how to get around the broken pieces and be able to use Automatic1111 again. Install Git for Windows: gitforwindows.org/ Install Python 3.10.6 for Windows: www.python.org/downloads/re...
AMD GPU + Windows + ComfyUI! How to get ComfyUI running on Windows with an AMD GPU!
33K views · 10 months ago
Happy Holidays! ComfyUI in Windows and running on an AMD GPU! Install Git gitforwindows.org/ Install Miniconda for Windows (remember to add to PATH!) docs.conda.io/projects/miniconda/en/latest/ Complete steps coming after the holidays calm down a bit; for now you will have to actually watch the whole 6 minutes of video! ;-p
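The steps mentioned above might be sketched as follows. The ComfyUI repo URL, the torch-directml package, the `--directml` flag, and the Python version are assumptions based on common ComfyUI-on-AMD setups, not a transcript of the video:

```shell
# Hedged sketch of a ComfyUI + AMD (DirectML) setup on Windows.
# Assumes Git and Miniconda are installed and on PATH, per the links above.
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
conda create -n comfyui python=3.10 -y   # dedicated environment
conda activate comfyui
pip install torch-directml               # DirectML backend for AMD GPUs
pip install -r requirements.txt
python main.py --directml                # launch the UI using DirectML
```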
Homelab software ideas to get you started! How to get started with your homelab software.
8K views · 11 months ago
Recommendations for getting started with the right software on a homelab. Different ideas for different projects. If I don't already have a video for one of these projects, or a different project that you would like to see, leave a comment with the software you would like to see! Video about ideas for hardware and cloud to get started: ruclips.net/video/c4gENZYcKWc/видео.html Awesome Self-hoste...
Monitor EVERYTHING! Simple homelab monitoring for servers, websites, and more!
3K views · 11 months ago
Setting up monitoring on your homelab is crucial. Get alerted when anything goes down, configure settings, and be confident your homelab services are all working appropriately. Uptime-Kuma GitHub: github.com/louislam/uptime-kuma Uptime-Kuma installation instructions: github.com/louislam/uptime-kuma/wiki/🔧-How-to-Install Install NVM on Ubuntu: sudo apt install curl git curl raw.githubuserconten...
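A minimal manual install following the Uptime-Kuma wiki linked above, assuming Node.js is already available (e.g. via the NVM step the description starts to show); check the wiki for currently supported Node versions:

```shell
# Clone and run Uptime-Kuma directly (a sketch, not the only supported method).
git clone https://github.com/louislam/uptime-kuma.git
cd uptime-kuma
npm run setup              # installs dependencies and builds the frontend
node server/server.js      # dashboard listens on port 3001 by default
```

For an always-on homelab you would typically wrap the last command in a process manager such as pm2 or a systemd unit.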
Homelab hardware and free cloud ideas to get you started! Choosing starting server hardware.
612 views · 11 months ago
Getting started with a homelab is tough, equipment is expensive, there are tons of options, ultimately what you want to do now, and want to do in the future is going to dictate the most reasonable hardware to get. Since very few of us want to go out and spend $20,000 on a top of the line server, this guide should help get you started including 100% free forever options. Once you are reasonably ...
Run the newest LLMs locally! No GPU needed, no configuration, fast and stable LLMs!
7K views · 11 months ago
This is crazy: it can run LLMs without needing a GPU at all, and it runs fast enough to be usable! Set up your own AI chatbots, AI coder, AI medical bot, AI creative writer, and more! Install on Linux or Windows Subsystem for Linux 2: curl ollama.ai/install.sh | sh Install on Mac: ollama.ai/download/mac Pull and run a model: ollama run [modelname] Pull and run a 13b model: ollama run [mode...
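The commands above, laid out one per line. The model name llama2 is a placeholder example, and since the description's last command is truncated, the `:13b` tag syntax is an assumption from Ollama's usual model-tag convention:

```shell
# Linux / WSL2 one-line install (from the description).
curl https://ollama.ai/install.sh | sh
# Pull and run a model -- llama2 is an example name; see Ollama's library page.
ollama run llama2
# Pull and run a 13b variant -- assumed tag syntax, verify on the model's page.
ollama run llama2:13b
```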
Install Stable Diffusion on Windows in one click! AMD GPUs fully supported!
19K views · 11 months ago
Want to run Stable Diffusion on Windows with an AMD GPU? Install and run SHARK from Nod.ai in one click. Simplest way to get Stable Diffusion up and running, even on AMD. Page to download installer: github.com/nod-ai/shark/releases/tag/20231009.984 Direct link to installer: github.com/nod-ai/SHARK/releases/download/20231009.984/nodai_shark_studio_20231009_984.exe
Training SD models with AMD GPUs in DreamBooth! Surprisingly fast results!
3.8K views · 11 months ago
Training your own custom Stable Diffusion models in DreamBooth with AMD GPUs is awesome! Add pictures of people you know and train the AI to put them into pictures. Bring patterns and textures into a model. Train SD to draw a character more reliably by using real photos and telling SD who that character is to produce reliable results! Endless possibilities! Install Dreambooth extension in Auto...
How to run your own VOIP server. Open source voice server for friends, gaming, and more!
2.4K views · 11 months ago
In this video we will go over step-by-step instructions for installing and running your own VOIP server. An incredibly stable voice server that can be used for chatting, gaming, friends, family, and more. Ultra-low latency for clear communication, highly customizable, and able to support hundreds of concurrent users. Client software is available on Windows/Mac/Linux/iOS/Android. Users should be able t...
The BEST SDXL model for realism right now! Just updated 25 November!
1.3K views · 11 months ago
This model is insane! Images in the video were not cherry-picked; I would change the style from the Stylez tab, generate one image, then change Stylez again. I ran through roughly 60 styles, one image per style, and these were the results! Gorgeous results from dead-simple prompts. Better prompts will likely yield better results, but even with simple prompts it creates far better results than any other mode...
How to convert Civitai models to ONNX! AMD GPUs on Windows can use tons of SD models!
7K views · 11 months ago
I had numerous folks in the comments asking how to convert models from Civitai. I went and looked at several different ways of doing this, and spent days fighting through broken programs, bad code, and worse documentation, only to find out the Olive tab already worked just fine. For anyone who has no idea what is going on, this is using the Automatic1111 fork for DirectML on Windows 11. See how to insta...
Install and run LLMs locally with text-generation-webui on AMD GPUs!
10K views · 11 months ago
Let's set up and run large language models similar to ChatGPT locally on our AMD GPUs! Installing ROCm: sudo apt update sudo apt install git python3-pip python3-venv python3-dev libstdc++-12-dev sudo apt update wget repo.radeon.com/amdgpu-install/5.7.1/ubuntu/jammy/amdgpu-install_5.7.50701-1_all.deb sudo apt install ./amdgpu-install_5.7.50701-1_all.deb sudo amdgpu-install --usecase=graphics,rocm su...
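The ROCm install commands from the description, laid out one per line. The libstdc++-12-dev package name and the two-dash `--usecase` flag spelling are normalized, and https:// is added to the wget URL; the description is truncated, so any later steps are missing here:

```shell
# ROCm 5.7.1 install on Ubuntu 22.04 (jammy), as described in the video.
sudo apt update
sudo apt install git python3-pip python3-venv python3-dev libstdc++-12-dev
wget https://repo.radeon.com/amdgpu-install/5.7.1/ubuntu/jammy/amdgpu-install_5.7.50701-1_all.deb
sudo apt install ./amdgpu-install_5.7.50701-1_all.deb
sudo amdgpu-install --usecase=graphics,rocm
```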
ControlNet is amazing! Install it on Automatic1111 now!
1.2K views · 1 year ago
ControlNet is absolutely incredible. Easily one of the best tools to use with Stable Diffusion! ControlNet install from Git: github.com/Mikubill/sd-webui-controlnet.git Page to download ControlNet models: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main If you have installed ControlNet and do not see it running, try stopping the server entirely and restarting it.
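One way to perform the install-from-Git step mentioned above, assuming a standard Automatic1111 folder layout (the webui's Extensions → Install from URL tab achieves the same result):

```shell
# Clone the ControlNet extension into Automatic1111's extensions folder.
cd stable-diffusion-webui/extensions           # assumed install location
git clone https://github.com/Mikubill/sd-webui-controlnet.git
# Download the model files from the huggingface.co/lllyasviel/ControlNet-v1-1
# page into: stable-diffusion-webui/extensions/sd-webui-controlnet/models
# Then fully stop and restart the server, as the description notes.
```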
Create 500+ step images, hacking the Automatic1111 UI by changing min/max settings!
481 views · 1 year ago
Create 500+ step images, hacking the Automatic1111 UI by changing min/max settings!
1 SECOND Stable Diffusion images! SD + LoRAs are FAST!
1.4K views · 1 year ago
1 SECOND Stable Diffusion images! SD + LoRAs are FAST!
Installing hundreds of preset styles in Automatic1111! Creating your own styles!
893 views · 1 year ago
Installing hundreds of preset styles in Automatic1111! Creating your own styles!
AMD ROCm in Linux with Automatic1111 running Stable Diffusion! Simple guide to getting started!
22K views · 1 year ago
AMD ROCm in Linux with Automatic1111 running Stable Diffusion! Simple guide to getting started!
Let's make a dual-boot PC! Windows + Ubuntu 22.04 Desktop!
1.4K views · 1 year ago
Let's make a dual-boot PC! Windows + Ubuntu 22.04 Desktop!
Thank you for 100 subs!
74 views · 1 year ago
Thank you for 100 subs!
Automatically activate conda and run your SD from one .bat file! Super easy!
1.5K views · 1 year ago
Automatically activate conda and run your SD from one .bat file! Super easy!
Absolute beginner's guide to the Linux command line
433 views · 1 year ago
Absolute beginner's guide to the Linux command line
Beginner's guide to using Automatic1111 and Stable Diffusion with AMD GPUs! Tips, tricks, and gotchas!
3K views · 1 year ago
Beginner's guide to using Automatic1111 and Stable Diffusion with AMD GPUs! Tips, tricks, and gotchas!
AMD GPUs are screaming fast at Stable Diffusion! How to install Automatic1111 on Windows with AMD
44K views · 1 year ago
AMD GPUs are screaming fast at Stable Diffusion! How to install Automatic1111 on Windows with AMD
Make Nextcloud fast! Full tutorial and server setup!
21K views · 1 year ago
Make Nextcloud fast! Full tutorial and server setup!
Set up a DNS server with automatic ad blocking!
5K views · 1 year ago
Set up a DNS server with automatic ad blocking!
UFW firewalls are EASY! Set up and manage your UFW in Ubuntu 22.04
446 views · 1 year ago
UFW firewalls are EASY! Set up and manage your UFW in Ubuntu 22.04
Build your own Minecraft server - add texture packs - let your friends play!
272 views · 1 year ago
Build your own Minecraft server - add texture packs - let your friends play!
Set up a NAS on your home server! How to create a Samba server in Ubuntu 22.04
5K views · 1 year ago
Set up a NAS on your home server! How to create a Samba server in Ubuntu 22.04

Comments

  • @tsvetelinkrumov1875
    @tsvetelinkrumov1875 18 hours ago

    Everything works great and it is simpler than other configs on the net, where shells do not work or the main shell does not work. Here everything is just fine. Thank you very much for this video.

  • @BikeTravelerX
    @BikeTravelerX 2 days ago

    Does not work in my case. I was able to install everything according to your very clear and nice instructions, no problems. I can even start ComfyUI. I did, however, copy some of the checkpoints I already had from my CPU version. I can select the checkpoint, but as soon as I hit the generate button I get a bluescreen when it walks to the last node... I have no idea what I am doing wrong.

  • @Yarozeze
    @Yarozeze 4 days ago

    It's sad; every time I try to generate something I receive the message "Could not allocate tensor with 1221853184 bytes. There is not enough GPU video memory available!" AMD CPU + AMD Radeon 6550 XT.

  • @pokeysplace
    @pokeysplace 4 days ago

    Just what I was looking for to add a few headers. Seems to work fine on version 8.2

  • @sundowner6191
    @sundowner6191 5 days ago

    "RuntimeError: Failed to load shared library '/home/me/gpt/text-generation-webui/venv/lib/python3.10/site-packages/llama_cpp_cuda/lib/libllama.so': libomp.so: cannot open shared object file: No such file or directory"

  • @mustafadagtekin1757
    @mustafadagtekin1757 6 days ago

    I am using StabilityMatrix to run ComfyUI, and it only detects 1GB of VRAM while I have 12GB. How can I solve it? GPU: RX 6700 XT XFX

  • @ParkNathan
    @ParkNathan 6 days ago

    Legend status, thank you. Linux was like pulling teeth and didn't want to work properly. Thank you kindly sir

  • @karvakorvatjupulit5716
    @karvakorvatjupulit5716 7 days ago

    I followed your tutorial and there were no errors in the install, but when I "Queue prompt" I get: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. I have an AMD 7900 XTX.

  • @F1r4icks
    @F1r4icks 7 days ago

    Hey, thanks! You're the only one who helped, you're the best. Do you know any way to run oobabooga on an AMD GPU with Windows, maybe via ZLUDA or some other way?

    • @FE-Engineer
      @FE-Engineer 1 hour ago

      I think I have a way to make it work. Been testing something different

  • @alphaomega5017
    @alphaomega5017 8 days ago

    I am seeing the Ubuntu screen flickering on Proxmox 8.0 version

  • @5onor306
    @5onor306 9 days ago

    Hello mate. All the steps went well, but it's not working. When I press queue prompt, it says reconnecting and then gives me an unknown error. Also, the archive I downloaded gives me errors and it's not extracting. Any advice?

  • @Mup182
    @Mup182 10 days ago

    Like several others, I am having an issue where ComfyUI is falling back to using my CPU instead of my GPU. The error I get is "The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU." Any chance you know how to fix this?

    • @FE-Engineer
      @FE-Engineer 10 days ago

      Sounds like it's using DirectML. I'm trying to see if I can get it to use ZLUDA instead.

  • @theblackcrowttv
    @theblackcrowttv 10 days ago

    And how do I update ComfyUI this way? Thanks!

  • @pcproz3215
    @pcproz3215 10 days ago

    Great job with this video! no music, no endless nonsense talk, straight to the point! The only way to make it any better would be for you to stop by and do it for me. 😄👍

  • @calebholst5776
    @calebholst5776 11 days ago

    So, I've gotten through part of this. When I get to the command prompt section and enter the GitHub link, it cloned, but when I typed in webui.bat, it was not recognized as an external command. Any suggestions?

    • @FE-Engineer
      @FE-Engineer 6 days ago

      You have to change directory into the new directory it created.

  • @ИльяАникин
    @ИльяАникин 11 days ago

    you are the best, man. still works.

  • @ИльяАникин
    @ИльяАникин 11 days ago

    Why does it constantly print "compiling in progress, please wait"?

  • @ИльяАникин
    @ИльяАникин 11 days ago

    Could someone please explain to me how, in that AMD documentation, almost all 6xxx or 7xxx AMD GPUs support ROCm, including the 7600, but not specifically the 7600 XT that I own?

  • @egribikayt5518
    @egribikayt5518 14 days ago

    Damn, so there is no way to use Automatic1111 with an RX 570... RIP

  • @KK47..
    @KK47.. 14 days ago

    How do we add the manager?

  • @Bart-n9p
    @Bart-n9p 16 days ago

    Now when I run it like this, checkpoints are undefined. Does anyone else have this problem? Because running on CPU works well.

  • @Greyhoundsniper
    @Greyhoundsniper 18 days ago

    Getting Exception Code: 0xC0000005 with a 6700 XT on ROCm 6.1; any tips on what the issue is? I used the Python version you said to use and also tried 3.10.11, still no change.

  • @emreakbas8262
    @emreakbas8262 19 days ago

    Doesn't work. It uses normal RAM instead of the GPU. Can you help?

    • @FE-Engineer
      @FE-Engineer 16 days ago

      They must have changed something in the code. I’ll take a look.

  • @Salman8506
    @Salman8506 19 days ago

    I had Nextcloud on a Docker-based instance with enough RAM, CPU cores, and NVMe storage, and still struggled with slowness; I will give this a try. Only one question: how are Nextcloud updates handled? Do we have to download and copy the Nextcloud files into the www folder every time there is an update, or does Nextcloud manage that automatically?

    • @FE-Engineer
      @FE-Engineer 18 days ago

      Nextcloud has an updater that will do it for you. It’s not too bad.

    • @Salman8506
      @Salman8506 18 days ago

      @@FE-Engineer Installed it, and it's day and night vs the Docker-based install. On NVMe ZFS with ample RAM... it's just so much faster now. Thanks for the video.

  • @Briannoger-j1w
    @Briannoger-j1w 21 days ago

    Thank you so much for this tutorial! I haven't even finished the entire video yet but have already started generating, even without replacing the files (which I did anyway; it didn't seem to affect speed). Getting around 20-25 it/s, which seems great! The 7900 XTX sure is a beast of a card!

    • @FE-Engineer
      @FE-Engineer 16 days ago

      Yea they changed some things to make it a lot easier.

  • @ArindamSaha-c8q
    @ArindamSaha-c8q 21 days ago

    Hey, we wanted to set up the same thing; can we connect?

  • @BanditEssex
    @BanditEssex 22 days ago

    Hi, great guide. When I run the webui --use-zluda at the very last step, I get "return torch._C._cuda_memoryStats(device)" - "RuntimeError: invalid argument to memory_allocated". Any idea? It loads the UI, but of course any attempt to run anything fails. I'm on a 7900 XTX.

  • @harborroleplay2099
    @harborroleplay2099 22 days ago

    Still working, just avoid adding --onnx

  • @karlrimes2425
    @karlrimes2425 23 days ago

    Helped a lot

  • @MuttleyVonErich00
    @MuttleyVonErich00 25 days ago

    I've managed to get it up and running, but every time I try to generate an image, my PC crashes and reboots. What could be causing this?

  • @MuttleyVonErich00
    @MuttleyVonErich00 25 days ago

    Ok guys, be honest now. How many of you followed the instructions to the 'letter' and mistyped requirements.txr?

  • @ВасилийМудакович
    @ВасилийМудакович 26 days ago

    That's all great, but what about Arch Linux though?

  • @SoulCrySoul
    @SoulCrySoul 28 days ago

    How can we make it work on an Intel Iris Plus GPU? Most or all of these Intel GPUs have shared RAM, which means it would be amazing if ZLUDA could support them. Please let me know.

  • @VamosViverFora
    @VamosViverFora 29 days ago

    Great video. Which AMD GPU models are working with Ollama? Thanks

    • @FE-Engineer
      @FE-Engineer 29 days ago

      Ollama only supports some models. Most of the popular ones. They have a list on their site showing which ones you can pull. I have not had any issues with the ones on their site.

  • @VamosViverFora
    @VamosViverFora 29 days ago

    Fantastic! I'm still considering whether I should buy NVIDIA or AMD, and I was afraid of having problems with NVIDIA on Linux. It seems AMD is closer to Linux. Great video. I have a question: did you install Llama or another text LLM on your machine running Linux with AMD? Many thanks

    • @FE-Engineer
      @FE-Engineer 29 days ago

      Yes. I have run ollama and a few other ones. On Linux especially they work great!

  • @figure17tsubasayhikaru43
    @figure17tsubasayhikaru43 1 month ago

    Hi, your video was really helpful some months ago, but it seems an update changed something and now there are some errors. Do you know what causes "OSError: none is not a local folder and is not a valid model listed on 'huggingface models'; if this is a private repository make sure to pass a token having permission to this repo either by logging in or by passing 'token=<your_token>'" and "Failed to create a model quickly; will retry using slow method."? Those are the errors I'm getting; I hope you know how I can solve them 🙏.

  • @raystyles9326
    @raystyles9326 1 month ago

    Thanks a lot, I got it to work without downloading ONNX... ONNX was giving problems.

    • @raystyles9326
      @raystyles9326 1 month ago

      have a good day really appreciate it

    • @FE-Engineer
      @FE-Engineer 1 month ago

      You are welcome. There have been a ton of updates and code changes. Most folks use ZLUDA or ROCm for running AMD cards for Stable Diffusion, so ONNX is no longer as necessary as it was before.

  • @tvanime6747
    @tvanime6747 1 month ago

    Bro, can you do the Applio RVC tutorial? It's already out for ZLUDA, but I don't really understand the GitHub tutorial. Regards. I managed to do the method, but it doesn't detect my GPU. 😔😔

  • @DiamondGeezer_27
    @DiamondGeezer_27 1 month ago

    Every time you say “effectively” I’m taking a whiskey shot.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      Do I say it a lot? It's really weird hearing myself in videos; things I say too often are really weird for me to hear. Heh, I'll have to be careful and watch how many times I say "effectively" in the future. 😂 Thanks for letting me know!

    • @FE-Engineer
      @FE-Engineer 1 month ago

      The real question is what whiskey are you drinking? 👀

    • @DiamondGeezer_27
      @DiamondGeezer_27 1 month ago

      @@FE-Engineer We all do it bud. Only difference is you’re publishing it! Besides, I’m mostly relieved to consume content that was written by a human. 🤙🏻

  • @ViralWatchMedia
    @ViralWatchMedia 1 month ago

    No, sorry. AMD GPUs are a joke for Stable Diffusion unless you are willing to install Linux and use DirectML. Also, you have to convert each model to ONNX, which sometimes doesn't even work. I could not get Automatic1111 to detect the folder after conversion, so I gave up and just got an NVIDIA card, and now I can generate each image in 1-3 seconds. AMD is great for gaming but terrible for any AI stuff.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      This is not accurate. ZLUDA and ROCm both enable AMD cards to not use ONNX models at all, and work exactly like NVIDIA, minus transformers, as transformers is NVIDIA-specific last I knew.

    • @OcihEvE
      @OcihEvE 2 days ago

      @@FE-Engineer ROCm changed the game. My 1440x1440 renders went from 900 seconds to 140 seconds. Yes, I need to dual-boot into Debian, and getting it working wasn't a one-click install, but that's the trade-off right now for half the money on the card. If I could go back in time 30 years and learn Python I'd have an easy-button install for AMD people, but coulda, woulda, shoulda. I can't and I don't.

  • @jcdenton7914
    @jcdenton7914 1 month ago

    I never got into SD or Flux, so I'm not going to keep up with what Automatic1111 is or what is needed if I want to make images, upscale the resolution, do SD video, and basically everything.

  • @Limmo1337
    @Limmo1337 1 month ago

    Does not work for me... I get RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check. I have a 7900 XTX. I added the flag to webui-user.bat and still get the same error.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      This is very old code. Check the video description for links to updated code

    • @Limmo1337
      @Limmo1337 1 month ago

      @@FE-Engineer I did the updated version and it works now, but it won't go into quick mode. It just goes into slow mode and takes forever.

  • @Blue_Razor_
    @Blue_Razor_ 1 month ago

    It runs very well at 512x512, but the VRAM usage spikes past 768x768, and it maxes out at 1024x1024 with 16GB of VRAM? Is that normal for ZLUDA? It's also downloading something every time I try to select a larger model (larger in file size): "Creating model from config: E:\ai again un\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml" Should I let it download the file? Did it break? Am I being a goober? Who knows.

  • @sandwichninja
    @sandwichninja 1 month ago

    rocBLAS error: Cannot read C:\Program Files\AMD\ROCm\6.1\bin\/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032

    • @FE-Engineer
      @FE-Engineer 1 month ago

      Slashes are the wrong direction…

    • @sandwichninja
      @sandwichninja 1 month ago

      @@FE-Engineer I copied and pasted that error message directly from the command prompt. I don't know why it uses backslashes.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      My guess is that when you added the location to path you used the wrong slashes

    • @sandwichninja
      @sandwichninja 1 month ago

      @@FE-Engineer The two entries I made are as follows: C:\SD\ZLUDA\zluda %HIP_PATH%bin The location and slashes are correct because I didn't do it from memory. I copied and pasted it.

    • @yozari4
      @yozari4 26 days ago

      @@sandwichninja Solution: reinstall following the new instructions, because the instructions have changed. Check lshqqytiger's repository for the new ones.

  • @bjarne431
    @bjarne431 1 month ago

    I am a simple, old-school man; I just run Ubuntu with a handful of applications. I think people tend to over-complicate things nowadays with unnecessary virtualization. I only use Docker for developing locally on my Mac.

  • @Marcelo1406pipo
    @Marcelo1406pipo 1 month ago

    Hey, a future video about an optimal installation of ONLYOFFICE Document Server for use with Nextcloud would be great.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      Never heard of it. But I’ll look into it.

  • @raystyles9326
    @raystyles9326 1 month ago

    How do you install the error updates in cmd? I'm so new to this.

    • @FE-Engineer
      @FE-Engineer 1 month ago

      "Error updates" is not a package or thing to be installed. When a program hits an error, it usually prints an error message to help users understand what went wrong.

    • @raystyles9326
      @raystyles9326 1 month ago

      @@FE-Engineer This is where I got to, and the error I got... not sure if my GPU can run it. If you have a fix just let me know:
      (sd_olive) C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml>webui.bat --onnx --backend directml
      venv "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
      Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 8 2023, 10:42:25) [MSC v.1916 64 bit (AMD64)]
      Version: v1.10.1-amd-11-gefddd05e
      Commit hash: efddd05e11d9cc5339a41192457e6ff8ad06ae00
      Traceback (most recent call last):
        File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\launch.py", line 48, in <module>
          main()
        File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\launch.py", line 39, in main
          prepare_environment()
        File "C:\Users\DJ Dubbii\sd-test\stable-diffusion-webui-directml\modules\launch_utils.py", line 592, in prepare_environment
          raise RuntimeError(
      RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

  • @steala
    @steala 1 month ago

    Yep - one of the best videos out there for installing Nextcloud with great advice on tuning. Fantastic and thank you

    • @FE-Engineer
      @FE-Engineer 1 month ago

      Thank you so much. Glad you found it useful!

  • @arnab_san
    @arnab_san 1 month ago

    How do I fix this? ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute '_ORTDiffusionModelPart'

    • @FE-Engineer
      @FE-Engineer 1 month ago

      Code has changed a good bit. Might be worth looking at my newer videos

  • @igortsvetkov9427
    @igortsvetkov9427 1 month ago

    Hey! Please help, I have this error (after python main.py --directml): ImportError: DLL load failed while importing torch_directml_native: