Stable Diffusion WebUI with an AMD GPU? [2024]

  • Published: 11 Sep 2024

Comments • 273

  • @justthinkaboutit7983
    @justthinkaboutit7983 6 месяцев назад +45

    I even got this working on an AMD mini PC with shared VRAM! Works brilliantly. On the CPU it took 10+ minutes to produce a picture; on the AMD GPU I got that down to 1 minute!!! So impressed.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад +1

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад +1

      Do you have Discord??? Please help me.

    • @starmanindisguise5844
      @starmanindisguise5844 4 месяца назад +1

      @@IshanJaiswal26 The RX 6600 won't work. You need an RX 6700 or better to run AI on AMD.

    • @arghyaprotimhalder5592
      @arghyaprotimhalder5592 Месяц назад

      @@starmanindisguise5844 Technically you can work around it or force it to run, but should you? Nope. Running it and getting good performance are two different things.
      Why push the GPU that hard and risk frying it?

    • @kevinmiole
      @kevinmiole 11 дней назад

      @@starmanindisguise5844 No, it works fine. My only wish is that it could run SDXL models.

  • @Kybalion3.6.9
    @Kybalion3.6.9 6 месяцев назад +7

    You're the best!! This is the only video that helped me. I'm new to this, and you explained everything very well. I hope you continue uploading videos about stable diffusion for AMD graphics. Thank you so much

    • @Kybalion3.6.9
      @Kybalion3.6.9 6 месяцев назад +1

      By the way, maybe you know a solution for this: I'm trying to load juggernautXL_v9Rdphoto2Lightning , but it gives me the following error "RuntimeError: Could not allocate tensor with 52428800 bytes. There is not enough GPU video memory available!" My graphics card is a Vega 56 with 8GB

    • @online_degenerate
      @online_degenerate 6 месяцев назад +1

      @@Kybalion3.6.9 You have far less VRAM than it requires. Just make the image resolution smaller and it will work.

    • @northbound6937
      @northbound6937  5 месяцев назад +2

      Adding the '--lowvram' argument to webui-user.bat (on the COMMANDLINE_ARGS line) might help.
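
      For reference, here is a minimal webui-user.bat sketch with that flag added; the other flags are the ones suggested elsewhere in this thread, so treat it as a starting point rather than a definitive set:

      @echo off
      set PYTHON=
      set GIT=
      set VENV_DIR=
      rem --lowvram trades speed for a smaller VRAM footprint; try --medvram first on cards with 8 GB or more
      set COMMANDLINE_ARGS=--use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch --lowvram
      call webui.bat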

    • @Kybalion3.6.9
      @Kybalion3.6.9 5 месяцев назад

      Thanks to both 🙌@@northbound6937 @Official_Memelus

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      @@northbound6937 Do you have Discord? Help me, I can't do it.

  • @rwarren58
    @rwarren58 5 месяцев назад +3

    Just subscribed. I had to do a reinstall because I was adding scripts via a CSV file and suddenly got the dreaded out-of-memory error; then it wouldn't load my GPU. Thanks for being clear and having all the resources right here. You made it easy.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600. Do you have Discord?

  • @Vilmar_Florenco
    @Vilmar_Florenco 2 месяца назад +3

    I'm going to test it on my machine, which has an AMD GPU. I had an Nvidia GPU, and all of this only came up after I switched GPUs. My whole life I've always used Nvidia GPUs; this is the first time I've used an AMD GPU and I regret it so much I can't get over it. Thank you for making such rich teaching available. THANK YOU!

    • @Solizeus
      @Solizeus 2 месяца назад +2

      If you're using Windows with a card below the RX 6800, you'll need to add these arguments: "--use-directml --opt-sub-quad-attention --precision full --no-half-vae --no-half --opt-split-attention --medvram --disable-nan-check", otherwise inpainting doesn't work. I suggest getting ControlNet too. If your card is above the RX 6800, you can install the HIP SDK to use ZLUDA; just put --use-zluda instead of the DirectML flag.
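
      A rough sketch of the ZLUDA variant mentioned above, assuming the HIP SDK is installed and the fork accepts --use-zluda as the commenter says (a starting point, not a verified recipe):

      @echo off
      rem ZLUDA path for RX 6800 and newer, per the comment above; the DirectML flags are not needed here
      set COMMANDLINE_ARGS=--use-zluda --autolaunch
      call webui.bat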

    • @23Puck666
      @23Puck666 7 дней назад

      Sigh* Same, amigo. I saved 300 reais and it wasn't worth the headache; I'm never pulling that stunt again.

  • @derekj54
    @derekj54 3 месяца назад +1

    This worked. I loved your tutorial; you didn't just show how, you explained why. I love that: I learned something rather than just barely getting something working.

  • @KaosWulf
    @KaosWulf 5 месяцев назад +4

    Thanks for the tutorial! My Stable Diffusion recently shat itself, and following this finally got it to work again.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600.

    • @core36
      @core36 2 месяца назад

      @@IshanJaiswal26 you have more problems than just a rx6600

    • @IshanJaiswal26
      @IshanJaiswal26 2 месяца назад

      @@core36 nvm bro, i bought a new nvidia graphics card, anyways thanks

  • @rew3991
    @rew3991 4 месяца назад +2

    This video is amazing and great. Thanks so much! It was so easy to follow all the instructions and things JUST WORKED! May all your days be wonderful.

  • @MarcoEscobarg
    @MarcoEscobarg 6 месяцев назад +3

    This is the only one that managed to work; not even the official guide was doing its job!! Thank you very much!!

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600. Do you have Discord?

    • @MarcoEscobarg
      @MarcoEscobarg 4 месяца назад

      @@IshanJaiswal26 Mmm, hi. I just followed the instructions blindly. I had not installed Python and Git before this.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      @@MarcoEscobarg What GPU do you have?

    • @DiSHTiX
      @DiSHTiX 3 месяца назад

      @@IshanJaiswal26 You used the wrong Git repo; you need to use the DirectML AMD repo, otherwise it downloads the files for Nvidia. The link is in the description.

  • @krajaka
    @krajaka 3 месяца назад +5

    Great tutorial, thank you. It works smoothly on my RX 580 with these ARGS: --autolaunch --theme dark --skip-version-check --use-directml --upcast-sampling --precision full --lowvram --opt-sub-quad-attention

    • @matiasaliste5883
      @matiasaliste5883 3 месяца назад +1

      I have the same card but got this error message (module 'torch' has no attribute 'dml'). How did you fix that?
      Btw, how does it work? The card, I mean: is it using VRAM? Does it take too long to make an image?
      Is it fluid? You are the first person I've found who has the same card as me.

    • @krajaka
      @krajaka 3 месяца назад

      @@matiasaliste5883 Delete the venv folder and run it with --use-directml in the ARGS, or just copy my ARGS line into yours. It should look like this in your webui-user.bat:
      set COMMANDLINE_ARGS= --autolaunch --theme dark --skip-version-check --use-directml --upcast-sampling --precision full --lowvram --opt-sub-quad-attention

    • @Solizeus
      @Solizeus 2 месяца назад

      --precision full is a must-have with AMD; you might as well add --no-half and --no-half-vae too, as well as --disable-nan-check so tiles work when upscaling.

    • @Antonin1738
      @Antonin1738 Месяц назад

      Thanks. I have an RX 580 as well, but it has 8 GB VRAM; do I still use lowvram mode?

    • @krajaka
      @krajaka Месяц назад +1

      @@Antonin1738 Yes, use lowvram with the RX 580. I also have an RX 580 with 8 GB. Have fun creating.

  • @pantuflas4561
    @pantuflas4561 13 дней назад

    Man, I searched for like a month for how to do this, and finally I found the perfect tutorial. You have one more sub!

  • @kurrotulaini
    @kurrotulaini 5 месяцев назад +3

    I got this problem: "DLL load failed while importing onnxruntime_pybind11_state: The specified module could not be found". How do I fix it?

  • @trinhmanhviet807
    @trinhmanhviet807 2 месяца назад +1

    When I use your args, I see "No module named 'torch_directml'". What do I have to do now? Please show me the way, o wise Northbound.

  • @ralphbeez1411
    @ralphbeez1411 2 месяца назад +1

    I kept running into "no module named pip", so I learned that Stable Diffusion ships with its own copy of Python (specifically 3.10.6) but doesn't have pip installed. So when you run webui-user.bat it uses the local Python in this folder:
    \venv\Scripts
    Just cd into that folder in cmd and run:
    python -m ensurepip
    That fixed my problem, in case anyone else encounters this issue!
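
    Spelled out as commands, assuming the default folder layout (adjust the path to wherever you cloned the repo):

    cd \path\to\stable-diffusion-webui-directml\venv\Scripts
    rem bootstrap pip into the bundled interpreter, then re-run webui-user.bat from the repo root
    python -m ensurepip
    python -m pip install --upgrade pip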

  • @min1gun97
    @min1gun97 5 месяцев назад +2

    I was trying to make this work and gave up three times, and on this fourth attempt you saved me! Thanks!!!

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600. Do you have Discord?

  • @olivierlecornu2924
    @olivierlecornu2924 6 месяцев назад +3

    Thank you for this; at least I can run Stable Diffusion with my RX 6800. Clear explanations. Merci beaucoup.

  • @ironmenvagas5917
    @ironmenvagas5917 4 месяца назад +2

    How can I thank you? It really worked after two days of attempts, all down to this line: --use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch. My GPU is an RX 580 with 8 GB VRAM and it takes 5 seconds to generate a picture 😍😍😍😍😍

    • @icefrog101
      @icefrog101 4 месяца назад

      Have you tested SDXL models?

  • @robertgoldbornatyout
    @robertgoldbornatyout 5 месяцев назад +6

    Excellent, very helpful; working for me with my AMD RX 580. New subscriber, thanks.

    • @rwarren58
      @rwarren58 5 месяцев назад +1

      You got this to work with an RX580? That is my gpu too. How is it?

    • @robertgoldbornatyout
      @robertgoldbornatyout 5 месяцев назад +3

      @@rwarren58 It's OK; it takes me about 30 seconds to do a 512 x 512 image.

    • @rwarren58
      @rwarren58 5 месяцев назад +4

      @@robertgoldbornatyout Ohmigod, you answered! It never showed in my feed. Thank you! And 30 seconds is a small price to pay for independence, about the same speed as an online generator. Thanks again. Edit, two hours later: it works. With an RX 580 GPU and a Ryzen 3600 CPU, it works.

    • @chealay5744
      @chealay5744 3 месяца назад +1

      I use Fooocus and it took 10 minutes for an image on an RX 580 too; I will try this one.

  • @agg3l370
    @agg3l370 2 месяца назад

    Bro, legit, there is no way... I was searching for the past 3 days, 10 hours per day, for a fix and was about to give up, but then I stumbled on your video. MAY THE GODS BLESS YOU AND YOUR FAMILY BROTHER, T H A N K Y O U

  • @fredyfranco4893
    @fredyfranco4893 5 месяцев назад +1

    Bro, you don't even know how much I love you. I've been having problems and trying without a rest for 5 hours; you are my fucking hero.
    Edit: It was 8 hours actually, I just checked. Your video was so relaxing and perfectly explained, and I don't even speak English.

    • @rwarren58
      @rwarren58 5 месяцев назад

      For a non english speaker, you are clearly understood. Much Love from the USA. Bro has a good channel.

  • @jesuscampos6980
    @jesuscampos6980 4 месяца назад

    My man, thank you so much; I've been trying to get this working for a while. Thanks for holding my hand through this.

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад +1

      Hold my hand too. Hey, can you tell me how you did it? I'm having a problem with my RX 6600.

    • @jesuscampos6980
      @jesuscampos6980 4 месяца назад

      @@IshanJaiswal26 What seems to be the issue?

  • @gamingwithmagus
    @gamingwithmagus 8 дней назад

    Your method worked first time! This really cut the render time from 3 minutes on 5950x down to 3 seconds on 7900xtx!

  • @ThePeacebob
    @ThePeacebob 6 месяцев назад +2

    Finally something that worked! BIG ty man!

  • @Moustafa.Morgan
    @Moustafa.Morgan 2 месяца назад

    Thanks a lot! It really worked for me, after adding some other files, but in the end it worked! Thanks a lot for your guide!

  • @davidr.x.7661
    @davidr.x.7661 4 месяца назад +1

    Is this still viable? On my computer, when I run the .bat it just opens and closes. It doesn't even try to download what it's supposed to download after the webui-user.bat edit.

  • @JoyridePinks
    @JoyridePinks 4 месяца назад

    You may feel a bit weary from having to answer all the questions, but know that I read the answers you gave to other users to fix the issue I had: I was missing a "-" in "--use-directml", which made it download a broken torch, so I had to delete the venv folder and fix the command args.
    Your effort to help others fixes more than you may notice :)
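
    For anyone hitting the same thing, the key detail is the double dash (a minimal sketch, not the full recommended argument set):

    rem per the comment above, a mistyped flag such as "-use-directml" led to the wrong torch build being installed; the correct spelling is --use-directml
    set COMMANDLINE_ARGS=--use-directml --autolaunch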

  • @ahmadxgame8885
    @ahmadxgame8885 3 месяца назад

    IT WORKED YAAAA ALLAAAH , I WASTED 2 DAYS FOR THIS, THANK YOU VERY MUCH

  • @Ronteah
    @Ronteah 6 месяцев назад +1

    Thank you very much sir!
    I was stuck with the "Torch is not able to use GPU" error until I watched your video. Works fine with the RX 6700XT.

    • @herrcrazy4242
      @herrcrazy4242 6 месяцев назад

      windows or linux ?

    • @Ronteah
      @Ronteah 6 месяцев назад

      @@herrcrazy4242 I'm on Windows 10, no WSL or anything, just Windows CMD

    • @MrStorm-op1ej
      @MrStorm-op1ej 6 месяцев назад

      I have the same problem.

  • @Dragowolf_dev
    @Dragowolf_dev 4 месяца назад

    Hey there, I'm experiencing an error when trying to generate images. I'm getting the AttributeError: 'NoneType' object has no attribute 'lowvram' error. If you know how to fix it, please help me out

  • @IshanJaiswal26
    @IshanJaiswal26 4 месяца назад +1

    It is not using my GPU; it is using my RAM to generate the image. Please help.

  • @dustinstephan7437
    @dustinstephan7437 5 месяцев назад

    Epic, the command-line args are what fixed it for me, thanks!

  • @binskmst8247
    @binskmst8247 5 месяцев назад

    Finally a really good tutorial. Thank you so much bro, it worked on my RX 6600

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      Hey, can you tell me how you did it? I'm having a problem with my RX 6600. Do you have Discord?

    • @architector.design
      @architector.design 2 месяца назад +1

      @@IshanJaiswal26 Did you fix this issue? I have an RX 6600 XT, and when I generate I get the error "RuntimeError: Could not allocate tensor with 471859200 bytes. There is not enough GPU video memory available!"

  • @BANDAGOR
    @BANDAGOR 6 месяцев назад

    Thank you very much for your tutorial, it was very helpful. Greeting from Chile.

  • @dustboii
    @dustboii 3 месяца назад +1

    I'm actually still getting the error that it can't use the GPU. I double-checked that I used the right version of Python and the fork you provided. Deleted the folder and retried. It's still giving me this error: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check"

    • @northbound6937
      @northbound6937  3 месяца назад +1

      Can you paste your COMMANDLINE_ARGS line from the webui bat file?

    • @dustboii
      @dustboii 3 месяца назад

      @@northbound6937 Hey I got it working! Not sure what it was but I just kept going back and triple checking stuff. I also mixed your guide with ruclips.net/video/n8RhNoAenvM/видео.html to use Zluda. Not sure if that helped. But cheers!

  • @louisturiot3573
    @louisturiot3573 3 месяца назад +1

    Thank you very much, just thank you ^^

  • @jeremyvolland8508
    @jeremyvolland8508 3 месяца назад +1

    For anyone interested in an alternative, I got Stable Diffusion up and running on an AMD Radeon RX 6700 XT in Windows using ZLUDA (as I understand it, ZLUDA provides CUDA functionality on AMD). There are several good tutorials for this out there.

    • @killy8784
      @killy8784 3 месяца назад

      Yes, please, I need your help.

  • @yyy0yyy0
    @yyy0yyy0 23 дня назад

    Bro u r insane, Thanks a lot ❤

  • @JahonCross
    @JahonCross 2 месяца назад

    Everything worked great but would you happen to have another guide on fully using Webui and its features?

  • @jbnrusnya_should_be_punished
    @jbnrusnya_should_be_punished 3 месяца назад

    Another guide that doesn't work (for me, in the "as is" way). I followed the guide as usual and had almost deleted all the files after the errors, but my curiosity led me to the GitHub page of the author of this SD fork for AMD. The discussion on his page, where errors like mine were reported, really helped. He says that after installing all of this, you need to DELETE the 'venv' folder and re-launch webui-user.bat with the "--use-directml" arg. Looks like a joke, but it works!
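
    The fix described above, written out as Windows commands (assuming the default clone folder; the venv folder is rebuilt on the next launch):

    cd \path\to\stable-diffusion-webui-directml
    rem remove the broken virtual environment so the launcher reinstalls the right torch build
    rmdir /s /q venv
    rem make sure COMMANDLINE_ARGS in webui-user.bat contains --use-directml, then relaunch
    webui-user.bat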

  • @CRAKCOIN
    @CRAKCOIN 5 месяцев назад

    Best video I've found so far thanks bro

  • @IshanJaiswal26
    @IshanJaiswal26 4 месяца назад +1

    Hey, can you tell me how you did it? I'm having a problem with my RX 6600. Do you have Discord?

  • @glettestv3230
    @glettestv3230 6 дней назад

    I appreciate your video! Short, informative, and friendly to non-tech users. Big thanks for your work ❤ Keep it up!
    I don't know if you can help me with this, but if I generate a person and want to make different pictures with the same person, is there a way to do that?

  • @omorfaruk055
    @omorfaruk055 Месяц назад

    I have an error: {RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same}. Please help.

  • @suzy6690
    @suzy6690 3 месяца назад

    Thank you for the video! It helped me make it work! :)

  • @Torva01
    @Torva01 6 месяцев назад

    You saved me, thank you! I was using Shark, this is an upgrade

  • @caiuscezar
    @caiuscezar 3 месяца назад

    I finally made it work, thanks to you sir!

  • @beStoic1
    @beStoic1 3 месяца назад

    worked for me, thank you so much!

  • @SalvadorAmari
    @SalvadorAmari 2 месяца назад

    OMG, it finally works. Ty my man, you saved me.

  • @souravroy8834
    @souravroy8834 6 месяцев назад +3

    Brother, can you show us the installation of the WebUI Forge version that was recently released? I tried to install it but it shows an error. I heard it's much faster than the normal WebUI (A1111).

    • @geraltofvengerberg3049
      @geraltofvengerberg3049 6 месяцев назад +1

      Yeah, I'm getting a GPU error; then it says "nvidia driver cannot found".

    • @souravroy8834
      @souravroy8834 6 месяцев назад +1

      Same problem @@geraltofvengerberg3049

    • @northbound6937
      @northbound6937  6 месяцев назад +1

      Couldn't make it work either. Keep an eye on the issues section of the GitHub repo under the AMD tag; at some point someone will post a solution (currently the only "solution" I've seen offered is to use the --skip-torch-cuda-test flag to run on the CPU instead of the GPU, which defeats the whole point of using a faster solution). github.com/lllyasviel/stable-diffusion-webui-forge/labels/AMD @geraltofvengerberg3049 @souravroy8834

  • @IDIlII
    @IDIlII 4 месяца назад

    Finally got it to work on an RX 6600, but I'm getting GPU memory errors after a couple of generations. Not sure what I should be putting into my command-line args other than what you have in the description?

  • @user-is4dj4uk8i
    @user-is4dj4uk8i 6 месяцев назад +1

    Hi!
    Does DirectML work with the RX 6700 XT?

  • @axandraalex5869
    @axandraalex5869 6 месяцев назад

    Just a suggestion: you should probably show Task Manager side by side. That way it will show what percentage of the GPU is being utilized when rendering.

  • @WiseManTimeLord
    @WiseManTimeLord 2 месяца назад

    I think its named "stable-diffusion-webui-amdgpu" now, but thanks for the 2024 update.

  • @zcraugar
    @zcraugar 6 месяцев назад

    Working on a 6700 XT. Oh my god, thanks for your tutorial. Each picture takes an estimated 20 seconds to generate.

  • @MrOlek700
    @MrOlek700 4 месяца назад

    Thank you, it's finally working!

  • @ammards7730
    @ammards7730 6 месяцев назад +1

    thank you so much that was really helpful it worked

    • @northbound6937
      @northbound6937  6 месяцев назад +1

      You're welcome!

    • @IshanJaiswal26
      @IshanJaiswal26 4 месяца назад

      @@northbound6937 Hey, can you tell me how you did it? I'm having a problem with my RX 6600.

  • @romanamir3631
    @romanamir3631 5 месяцев назад

    I got this: The specified module could not be found. Error loading "C:\Users\roman\code\stable-diffusion-webui-directml\venv\lib\site-packages\torch\lib\c10.dll" or one of its dependencies.
    Press any key to continue . . .

  • @Matheus-fb1hy
    @Matheus-fb1hy 4 месяца назад +1

    Man, my computer crashes every time I try to generate an image. I did everything just like you, and Stable Diffusion opens, but every time I try to generate an image it just crashes my PC and resets. Do you know how to solve it? Thanks

    • @northbound6937
      @northbound6937  4 месяца назад

      @Matheus-fb1hy
      What CPU/GPU do you have? If you try it with a HWinfo64 sensor window open while you generate with SD, what are the temperatures for the GPU?
      If nothing else, try the approach from the new ZLUDA video; it uses less video memory and might be worth a try.

  • @cezarybaryka9744
    @cezarybaryka9744 Месяц назад +1

    It shows me this error:
    from .onnxruntime_pybind11_state import * # noqa
    ImportError: DLL load failed while importing onnxruntime_pybind11_state

  • @Ghadrat
    @Ghadrat 3 месяца назад

    Finally a tutorial for AMD that worked for me. I only have one problem: sometimes I get "RuntimeError: Could not allocate tensor with 335544320 bytes. There is not enough GPU video memory available!", and the faces generally look super bad. They improve as I increase the steps, but that makes generation much slower. I have an RX 6750 XT, 16 GB of RAM and a Ryzen 5 5600.

    • @northbound6937
      @northbound6937  3 месяца назад

      Are you using the --medvram argument? Try that if you haven't (or --lowvram). Alternatively, try the ZLUDA AI video; that uses way less VRAM.

    • @Ghadrat
      @Ghadrat 3 месяца назад +1

      @@northbound6937 I changed from --medvram to --lowvram, but it only made generation slower. After generating an image the program is using all the VRAM (12 GB of my RX 6750 XT) and all my RAM (16 GB), and it doesn't go down unless I restart the .bat. I'll try ZLUDA.

  • @diy6310
    @diy6310 Месяц назад

    Finally it worked on my RX 580. Thanks.

  • @techh9171
    @techh9171 6 месяцев назад

    Hey mate, I have some images I want to train my model on. Do I need a model that exactly matches those images, or would any model work?

    • @northbound6937
      @northbound6937  6 месяцев назад

      What exactly are you trying to do? If it's creating a new model/checkpoint from scratch, I have no idea how to do that. If it's changing some images that you have (say, if in the original you have a suit & jacket on and want to change that to tshirt and some shorts) you can try different models from civitai, especially those that were trained with real people and not anime/drawn art.

  • @VKJinja
    @VKJinja 6 месяцев назад

    Finally, after months, I was able to get SD working on my RX 6600. Thank you so much.
    I have a question: is it expected for the rendering to crash when the resolution is higher than 512x512, or when rendering a batch larger than a single render?

    • @northbound6937
      @northbound6937  6 месяцев назад +1

      You're welcome! It depends; what error do you get in the console window? If it's 'memory exceeded', then yes, it's expected behavior; apparently webui is not very good at memory management. What you can do is add this argument in the webui-user.bat file: --medvram (and if that doesn't help, change it to --lowvram). It'll be slower, but should help prevent it from crashing. But again, best practice is to generate at 512x512 and then upscale later.

  • @hellokunji9871
    @hellokunji9871 8 дней назад

    I'm getting this error:
    DLL load failed while importing onnxruntime_pybind11_state: The specified module could not be found.
    Press any key to continue . . .

    • @hellokunji9871
      @hellokunji9871 8 дней назад

      Tried this by opening cmd in the Scripts folder, and it works, yay:
      pip uninstall onnxruntime
      pip install onnxruntime==1.14.0
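
      In full, that fix looks something like this (run from the venv's Scripts folder so the packages land in the webui's own environment; the 1.14.0 pin is simply what worked for this commenter):

      cd \path\to\stable-diffusion-webui-directml\venv\Scripts
      rem reinstall onnxruntime inside the venv's Python rather than the system one
      python -m pip uninstall -y onnxruntime
      python -m pip install onnxruntime==1.14.0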

  • @TPkarov
    @TPkarov 4 месяца назад

    DirectML initialization failed: No module named 'torch_directml', on an RX 5500 XT

  • @icantthinkofagoodname8002
    @icantthinkofagoodname8002 29 дней назад

    Mine broke randomly and I can't get it to work anymore :c

  • @erniecutler1143
    @erniecutler1143 5 месяцев назад +1

    Didn't work for me on a 7900 XT; I got this error:
    Launching Web UI with arguments: --use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch
    DirectML initialization failed: No module named 'torch_directml'
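
    One way to check whether the DirectML backend actually made it into the venv (a hedged sketch assuming the default folder layout; torch-directml is the package the --use-directml path relies on, and deleting the venv folder and relaunching is the usual fix if it is missing):

    \path\to\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip show torch-directml
    \path\to\stable-diffusion-webui-directml\venv\Scripts\python.exe -c "import torch_directml; print(torch_directml.device_name(0))"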

  • @ayylmao394
    @ayylmao394 5 месяцев назад +1

    I have a 6800 XT, and I get a "cannot allocate enough memory" error when using the hires fix. Any help?

    • @architector.design
      @architector.design 2 месяца назад

      Did you fix this issue? I have an RX 6600 XT, and when I generate I get the error "RuntimeError: Could not allocate tensor with 471859200 bytes. There is not enough GPU video memory available!"

  • @wissen5701
    @wissen5701 6 месяцев назад

    Thanks, bro. You made my day

  • @Sable501
    @Sable501 4 месяца назад

    Whenever I run stable diffusion, it gets part of the way through creating the image, then crashes and reboots. I'm using all the normal command line arguments, and my PC is quite beefy, but I consistently have this issue.

    • @northbound6937
      @northbound6937  4 месяца назад

      @Sable501 Do you have the medvram or lowvram command argument enabled? What CPU/GPU do you have? If you try it with a HWinfo64 sensor window open while you generate with SD, what are the temperatures for the GPU? If nothing else, try the approach from the new ZLUDA video; it uses less video memory and might be worth a try.

  • @alan_hyral5490
    @alan_hyral5490 5 месяцев назад

    So that means SD can run as a CPU process?? I have an RX 580 and SD can't run on the GPU.

  • @juanvaregasresurrection5470
    @juanvaregasresurrection5470 2 месяца назад

    Please, good man, I have been trying to install it for almost a month and I always get this error:
    from .onnxruntime_pybind11_state import * # noqa
    ImportError: DLL load failed while importing onnxruntime_pybind11_state: A dynamic link library (DLL) initialization routine failed.
    Press any key to continue . . .

  • @7sonderling
    @7sonderling 6 месяцев назад

    yup... this worked for me... thx a lot.

  • @gigaval97
    @gigaval97 3 месяца назад

    I tried and followed the procedure, but when launching webui-user.bat it shows "RuntimeError: Couldn't install torch" with "No module named pip
    Traceback (most recent call last)". Any idea how to resolve this problem?

    • @northbound6937
      @northbound6937  3 месяца назад

      I would delete the \venv\ folder and try again. Other than that, google the error and try the solutions people suggest.

  • @duytnhcm
    @duytnhcm 3 месяца назад

    Tried it and it worked. Thank you very much.

  • @DiSHTiX
    @DiSHTiX 3 месяца назад +3

    RX 6900 XT: I use --autolaunch --theme dark --skip-version-check --use-directml --upcast-sampling --precision full --medvram
    RX 6600: probably --autolaunch --theme dark --skip-version-check --use-directml --upcast-sampling --precision full --lowvram --opt-sub-quad-attention
    --upcast-sampling is preferable to --no-half; you don't use both.

  • @zentruth
    @zentruth 6 месяцев назад

    which torch version does it run on?

  • @napalmstrike2007
    @napalmstrike2007 6 месяцев назад

    FINALLY, THIS ONE WORKS!

  • @luseho
    @luseho 2 месяца назад

    Doesn't work on Linux with an RX 7800 XT. I got this error after running "./webui.sh" with all the necessary arguments:
    /usr/bin/env: 'bash\r': No such file or directory
    /usr/bin/env: use -[v]S to pass options in shebang lines

    • @northbound6937
      @northbound6937  2 месяца назад

      This video guide is for Windows, maybe try with these steps: github.com/AUTOMATIC1111/stable-diffusion-webui?tab=readme-ov-file#automatic-installation-on-linux

  • @chrisgust3797
    @chrisgust3797 Месяц назад

    Got it to work after 10 different tutorials. BUT when using DreamShaper it's really slow; yesterday I went from a GTX 1070 to an RX 7600 XT 16GB and the Nvidia was faster. Any tips?

    • @northbound6937
      @northbound6937  Месяц назад

      Nvidia is generally faster than AMD (even old GPUs). Try ZLUDA; I have a tutorial here, it's faster for AMD: ruclips.net/video/gsrhKosljgI/видео.html

  • @Hungor97
    @Hungor97 5 месяцев назад

    Can I do this with a 5600 XT?

  • @Wackey17
    @Wackey17 5 месяцев назад

    I've got it working on a 7900 GRE but can't go as big as I could with the RTX 3060. It runs out of VRAM now.

  • @MugiwaraRuffy
    @MugiwaraRuffy 5 месяцев назад

    Ah, that's it, I remember now. When I first started SD on Windows, I used that DirectML fork, before I switched to Linux because ROCm support works there.
    When I recently wanted to toy with it again back on Windows to see if anything had improved performance-wise, I got exactly that error on installation.

  • @cazadoresdecryptos
    @cazadoresdecryptos 6 месяцев назад

    Hey bro, thank you so much for the video. You really explain everything in great detail and very well. I've seen many RUclips tutorials and honestly, I think you're the best at it. I was just reading comments and, like many others, I would like to install Forge. But I saw that you mentioned that Forge cannot yet be used with AMD if you want to leverage the GPU. So, I was wondering if you'll upload a tutorial on how to run Forge as soon as there's any news about it. Thank you very much.

    • @northbound6937
      @northbound6937  5 месяцев назад +2

      No prob, glad it helped. No promises on the forge tutorial. I assume what you're after with forge is improved performance? If I manage to make zluda work and get a substantial improvement in performance, then I'll probably update a tutorial about that. But not a lot of movement on forge atm with the AMD implementation

    • @cazadoresdecryptos
      @cazadoresdecryptos 5 месяцев назад +1

      @@northbound6937 Thanks for the reply. Yeah, that's the goal. It seems that Forge is quite a bit better in performance than Automatic1111, and I've been checking out a lot of tutorials. I've also been trying it out for myself, but until now it has been impossible to install. Anyway, I will keep an eye on your channel in case you find a solution. Hehe, thanks again!

    • @davidr.x.7661
      @davidr.x.7661 4 месяца назад

      @@northbound6937 Please do this tutorial if you figure it out; I'm also trying to use Forge with AMD :)

  • @brandonavelino
    @brandonavelino 6 месяцев назад

    What do you think about buying a new Ryzen 5 8600G without a GPU; can Stable Diffusion work on that? I would like to wait for the RTX 5xxx series, or what do you think about an AMD Radeon XFX RX 7600 XT Speedster SWFT 210 / 16GB for this CPU? Thanks

    • @northbound6937
      @northbound6937  5 месяцев назад

      I think 8600g iGPU will probably run this, but extremely slow (TBH don't know what will be slower, running it on CPU only or iGPU, maybe the former). If you're interested in mainly working with SD, I would go for nvidia, either RTX40 or RTX50 later this year (although first only RTX5090 will be released, which will cost an arm and a leg). From my limited experience, a 3060 12GB had better performance in SD than a 7800XT 16GB

    • @rekii848
      @rekii848 5 месяцев назад +1

      I use this on an 8600G with DirectML and get around 2 s/it and around 1 minute for a full picture with 28 sampling steps. TBH, for me this is already amazing given it's an iGPU, compared to not using DirectML, which takes around 10 minutes for an image 💀

  • @abatzharmagambetov-ty4qn
    @abatzharmagambetov-ty4qn 5 месяцев назад

    after editing "webui-user.bat", I get an error like this "AttributeError: module 'torch' has no attribute 'dml' ", can u help me to fix it?

    • @northbound6937
      @northbound6937  5 месяцев назад +1

      stackoverflow.com/questions/77337158/module-torch-has-no-attribute-dml

  • @ml-qq5ek
    @ml-qq5ek 6 месяцев назад

    Do you know how to set up and use Olive to optimize an SD model? Can't find any up-to-date guide.

    • @northbound6937
      @northbound6937  6 месяцев назад

      I have tried to no avail. You can try these steps and see if they work for you: github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/149#discussioncomment-8392257

  • @brandonavelino
    @brandonavelino 6 месяцев назад

    i7 6700, 24 GB DDR4 RAM, RX 5600 XT. WebUI runs, but when I try to generate I get an error: "RuntimeError: Could not allocate tensor with 268435456 bytes. There is not enough GPU video memory available!"

    • @online_degenerate
      @online_degenerate 6 месяцев назад +1

      You have far less VRAM than it requires. Just make the image resolution smaller and it will work.

    • @northbound6937
      @northbound6937  5 месяцев назад

      You can also add a '--lowvram' argument to webui-user.bat (on the COMMANDLINE_ARGS line).

  • @mister-ace
    @mister-ace 3 месяца назад

    Will it work with an iGPU (Vega 7)?

  • @Darkdayfate
    @Darkdayfate 5 месяцев назад

    thank soo much my friend

  • @Tigermania
    @Tigermania 6 месяцев назад

    Interesting to see your AMD 7800 running at 4 it/s; I get 1 it/s on my 580 (8 GB) using the "lshqqytiger" fork :). Have you tried the "LCM-LoRA Weights - Stable Diffusion Acceleration Module" on Civitai? Load the LoRA into your prompt, set your CFG to 1-2 and your steps to 6-8. You get a good speed increase because the steps are so low. You can add more steps, but it starts to burn/artifact the image with lines unless you turn the LoRA weight down from :1 to :0.5.

    • @northbound6937
      @northbound6937  6 месяцев назад

      Thanks for the suggestion! I tried it and it runs faster, but I got underwhelming results, here's my output and the settings I used imgur.com/a/oMhvSlt Which checkpoint/model did you use?

  • @ProWideoqu
    @ProWideoqu 6 месяцев назад

    It worked for me!

  • @DouglasRivitti
    @DouglasRivitti 3 месяца назад

    I got the error: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check"
    Even after excluded the env folder and running with "--use-directml"
    Help please

    • @northbound6937
      @northbound6937  3 месяца назад +1

      Can you paste the webui-user.bat file content here? (or a link to pastebin) Trying to see what arguments you are using

    • @DouglasRivitti
      @DouglasRivitti 3 месяца назад

      @@northbound6937 I'm using only one:
      "--skip-torch-cuda-test"
      It was the last one I tried.

    • @DouglasRivitti
      @DouglasRivitti 3 месяца назад

      @@northbound6937
      @echo off
      set PYTHON=
      set GIT=
      set VENV_DIR=
      set COMMANDLINE_ARGS=--skip-torch-cuda-test
      call webui.bat

    • @northbound6937
      @northbound6937  3 месяца назад

      Replace the set COMMANDLINE_ARGS line with this:
      set COMMANDLINE_ARGS=--use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch --medvram
      Save the file and then double-click webui-user.bat or run it through the CLI.
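
      Putting the two together, the whole webui-user.bat would then read (if a CUDA torch was already installed, delete the venv folder first so the launcher pulls the DirectML build):

      @echo off
      set PYTHON=
      set GIT=
      set VENV_DIR=
      set COMMANDLINE_ARGS=--use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch --medvram
      call webui.bat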

  • @souloftarasoutofcontext
    @souloftarasoutofcontext 5 месяцев назад

    I did everything step by step with the help of a friend of mine who has been using the program for a long time, but in the end it still gave the Nvidia error. Could you tell me why?

    • @northbound6937
      @northbound6937  5 месяцев назад +2

      Which error, can you paste the error message?

  • @anhminhcu2596
    @anhminhcu2596 5 месяцев назад

    Thank you so much.

  • @guiplays8062
    @guiplays8062 6 месяцев назад

    thanks it worked well

  • @geraltofvengerberg3049
    @geraltofvengerberg3049 6 месяцев назад +1

    ok now how can we use ForgeUI via AMD ?

    • @northbound6937
      @northbound6937  6 месяцев назад

      AMD is not supported yet by ForgeUI. You can hack files to make it work, but the end result is slower than with webui (in my case, from 4 it/s to 2.6 it/s). The files to change are listed here: github.com/lllyasviel/stable-diffusion-webui-forge/issues/58#issuecomment-1948689419

  • @Zeinrasyid
    @Zeinrasyid 4 месяца назад

    It works, but there's a problem for me. I'm using an RX 6600 8GB, and every time I generate an image the dedicated GPU usage is very high. I'm using --lowvram, but the main problem is that after generating an image the high dedicated GPU usage doesn't clear; it's stuck at high usage, and if I generate again it will shut down my PC.
    If I want to clear the dedicated GPU usage back to normal, I have to close the CMD/webui.
    So every time I generate an image, I have to close/restart the SD webui to use it again.
    In Task Manager, the high GPU usage is stuck on Python.
    Any solution, friends? (Windows 11)
    (Last year I installed SD and everything was fine; now, so much trouble.)

    • @northbound6937
      @northbound6937  4 месяца назад

      Directml does hog a ton of VRAM. Check out my ZLUDA video, using that uses 1/3 of the VRAM (in my case, might also work for you)

  • @zach.spencer
    @zach.spencer 5 месяцев назад

    It didn’t work for me. I have Intel Core i5 11600KF CPU and an AMD Radeon RX 7800 XT 16GB VRAM GPU. Followed all the steps exactly. I had to add the skip cuda command arg to get it running. 😢

    • @northbound6937
      @northbound6937  5 месяцев назад

      Are you using all the arguments in the description? (under 'arguments for webui-user.bat file:')

  • @hashtag_
    @hashtag_ 6 месяцев назад

    Just dropping this comment cause I managed to get 3it/s on my 6650 on SD.next with zluda. Might be a better option?

    • @northbound6937
      @northbound6937  6 месяцев назад

      How do you use SD.next with zluda? Are there arguments for that or how?

  • @Knockout1811
    @Knockout1811 5 месяцев назад

    Works like a charm, thank you! Having some problems with XL models... VRAM not big enough (6900 XT, 16 GB).

    • @northbound6937
      @northbound6937  5 месяцев назад

      Yeah, XL models are tough. Add the argument --medvram and try again (if that still doesn't help, maybe as a last Hail Mary attempt replace it with --lowvram).

    • @Knockout1811
      @Knockout1811 5 месяцев назад

      What exactly is the issue here? I guess it's the software, right? There are people running this on 8 GB Nvidia cards... I also read that Automatic1111 tanks performance/VRAM.
      Hope that at some point it will be less of a struggle!
      I'll give your advice a shot! Thanks

    • @northbound6937
      @northbound6937  5 месяцев назад

      As far as I understand, A1111 is bad at memory management and will hog all the memory it can. The arguments are a workaround to delimit the amount of memory that can be used (as a result, it might be slightly slower to generate, but it's less likely to crash)

    • @Knockout1811
      @Knockout1811 5 месяцев назад

      @@northbound6937 Thanks for the explanation! --medvram worked, by the way. I can now use 1.5 or XL models up to 1366x768.