Faster Stable Diffusion with ZLUDA (AMD GPU)

  • Published: 27 Oct 2024

Comments • 56

  • @shadyb834 3 months ago +9

    This actually doubled my generation speed. Absolutely lovely.

  • @simrace70 4 months ago +1

    Finally, this is a manual that gets SD running on my 7900 XT. It seems as if it builds parts of the software during the installation process. Great, so it is compatible with "my" hardware. Also, the error messages are understandable. The speed on a 512x512 pic (Stable Diffusion 1.5 model) on the 7900 XT is between 13 and 15 it/s. Thank you very much. I had an enjoyable afternoon yesterday, and it will not be the last one.

  • @israelcano 3 months ago +1

    This tutorial does not work anymore. The installation guide has changed and I just get errors when installing.

  • @As1ra96 2 months ago +2

    Hi. Does it make sense to get an RX 7800 XT today, or is it better to get an RTX 3080 Ti? For AI training and image generation. Thanks in advance.

    • @northbound6937 2 months ago +1

      RTX 3080 ti is much faster in terms of it/s (even if it has 12GB vs 16GB of the 7800 XT) and more compatible than any AMD card with AI tools due to CUDA

    • @As1ra96 2 months ago

      @@northbound6937 thank you

  • @cmdr_stretchedguy 3 months ago

    I'll have to give this a try this evening with my RX7600 (although a different process since I use Fooocus). I am tired of the 5-7 s/it compared to my other PC with RTX3050 that gets around 1 s/it. Luckily I have a 12GB RTX2060 on the way that should speed it up into the it/s range.

  • @tallast5043 2 months ago +1

    All faces are messed up, or it just keeps creating black screens.

  • @lastnamefirstname2278 5 months ago +2

    You think it's possible to use DeepSpeed with ZLUDA, with things such as RVC?

  • @SimilakChild 2 months ago +10

    AMD made ZLUDA closed source. It's no longer gonna be a thing, but I think the original developer is redeveloping it from scratch.

    • @tisen1241 2 months ago +1

      fking amd man

    • @Aremisalive 1 month ago +1

      The version that was released when AMD stopped developing it has been forked so that others can continue without starting from scratch. That's how automatic1111 works with zluda now.

  • @Gaming_Legend2 2 months ago

    I have done every step for my card, an RX 6600, yet I still can't get it to work, sadly.
    "OSError: Building PyTorch extensions using ROCm and Windows is not supported" is the error I get from it; idk why the fix doesn't seem to work.

  • @Nightowl_IT 3 months ago

    If it keeps using the wrong Python version even though you edited the paths, try deleting the venv and __pycache__ folders.
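
    The tip above boils down to clearing the cached environment so the webui rebuilds it with the Python version now on PATH. A minimal sketch, assuming a default stable-diffusion-webui checkout path (an assumption; point it at your own install):

```python
# Hedged sketch: remove the stale venv and every __pycache__ folder so the
# webui recreates them with the Python version currently on PATH.
# "stable-diffusion-webui" below is an assumed default checkout path.
import pathlib
import shutil

def clear_stale_env(webui_dir: str) -> list[str]:
    """Delete venv/ and all __pycache__/ dirs under webui_dir; return what was removed."""
    root = pathlib.Path(webui_dir)
    removed = []
    venv = root / "venv"
    if venv.is_dir():
        shutil.rmtree(venv)
        removed.append(str(venv))
    for cache in root.rglob("__pycache__"):
        if cache.is_dir():
            shutil.rmtree(cache, ignore_errors=True)
            removed.append(str(cache))
    return removed

if __name__ == "__main__":
    for path in clear_stale_env("stable-diffusion-webui"):
        print("removed", path)
```

    The next launch of webui.bat then recreates the venv from scratch with the active Python.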

  • @arteon2017 5 months ago +1

    How can I install the HIP SDK to another folder? I don't have enough space on my disk.

    • @northbound6937 5 months ago

      There's an install location option according to the guide rocm.docs.amd.com/projects/install-on-windows/en/docs-6.0.2/index.html

  • @manaphylv100 4 months ago

    How many it/s can you get with the RX 6800 non-XT (at default 512 x 512)?
    I could get the RTX 2080 Ti or 3070 for the same price where I live, but I'd lose a lot of gaming performance and longevity due to the much lower VRAM.

    • @northbound6937 4 months ago +1

      You just gotta make a judgement call. If AI is your main priority, I would go for nvidia (TBH I regret getting an AMD card now), it/s is gonna be significantly faster than a 6800 and less compatibility issues (unless you don't mind using linux with AMD cards and fiddling with errors until you get it to work). If your main priority is gaming with a bit of AI on the side, AMD is fine for that.

  • @Drake-bq1sk 3 months ago +1

    Finally got it working but now I can't use safetensors? (Edit) got it all working

  • @MrSongib 3 months ago

    Sadly it can't run on my 5700 series.
    I've tried the tutorial on the vlad GitHub; it said it was running on ZLUDA, but the render time is still the same as with DirectML.
    Maybe it's actually running, maybe it isn't, or maybe the model is the problem; I still don't know. XD
    ATM using ROCm 5.7 with ROCmLibs.
    In the process of this tinkering, I found that each WebUI has its own render style: even though they use the same architecture, whether DirectML or ZLUDA, the images are slightly different from each WebUI.
    I mainly use "stable-diffusion-webui-amdgpu", and trying to use ZLUDA there still gives some errors during installation; that's why I tried SD.Next. It seems to work, but the render time is still the same, which confuses me.
    Maybe I'll try running it with various different models next time.

  • @ErikaPwnz 3 months ago

    Where can I upscale generated images? I can't find this.

    • @northbound6937 3 months ago

      In the extras tab if I recall correctly

  • @sunny99179 4 months ago

    Do you know if it runs well on an RX 6650 XT?

  • @bitkarek 5 months ago

    Hmm, I got to a state where, after an hour, it started to make a picture, but it stopped at around 5%. And it uses ZLUDA on my CPU, not the GPU :D nice.

  • @ProcXelA 4 months ago +2

    Doesn't work on RX 6600 😪

    • @luisgustavodl 3 months ago

      It does; you have to do an extra step: install the HIP SDK and replace the library files.

    • @ProcXelA 3 months ago

      @@luisgustavodl Yeah, thanks. It works.

  • @bakkerfrans 5 months ago

    I tried it, but when I want to make something it says "model not loaded"?

    • @northbound6937 5 months ago

      Have you downloaded an SD model (from, for example, Civitai) and put it in the SD model folder? The dropdown in the top left should list all the models that are loaded. If it's empty, you either haven't downloaded an SD model or it's in the wrong folder (see my previous video on SD; I mention the location for models there).
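
      As a quick sanity check for the "model not loaded" case, this sketch lists what the dropdown should be able to see. The "models/Stable-diffusion" subfolder matches the default automatic1111-style layout and is an assumption here; adjust it to your install:

```python
# Hedged sketch: list checkpoint files in the folder the webui scans.
# "models/Stable-diffusion" is the assumed default layout of
# automatic1111-style webuis; adjust webui_dir to your own install.
import pathlib

def list_checkpoints(webui_dir: str) -> list[str]:
    """Return model files the top-left dropdown should be able to show."""
    model_dir = pathlib.Path(webui_dir) / "models" / "Stable-diffusion"
    if not model_dir.is_dir():
        return []  # wrong folder -- the dropdown will be empty
    return sorted(p.name for p in model_dir.iterdir()
                  if p.suffix in {".safetensors", ".ckpt"})
```

      If this returns an empty list, the checkpoint is either missing or sitting in the wrong folder.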

    • @bakkerfrans 5 months ago

      @@northbound6937 OSError: Building PyTorch extensions using ROCm and Windows is not supported.
      17:50:16-045137 DEBUG Script callback init time: image_browser.py:ui_tabs=0.46 system-info.py:app_started=0.25
      task_scheduler.py:app_started=0.12
      17:50:16-046137 INFO Startup time: 9.98 torch=3.48 gradio=0.67 libraries=1.94 extensions=0.64 face-restore=0.05
      ui-en=0.13 ui-img2img=0.05 ui-control=0.08 ui-settings=0.17 ui-extensions=0.87 ui-defaults=0.06
      launch=0.12 api=0.07 app-started=0.37 checkpoint=1.06
      17:50:16-048139 DEBUG Save: file="config.json" json=29 bytes=1184 time=0.003
      17:50:16-050141 INFO Launching browser
      17:50:19-143698 INFO MOTD: N/A
      17:50:23-884855 DEBUG Themes: builtin=12 gradio=5 huggingface=0
      17:50:26-962581 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64;
      rv:126.0) Gecko/20100101 Firefox/126.0
      17:52:00-177439 DEBUG Server: alive=True jobs=1 requests=37 uptime=109 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      17:54:00-281476 DEBUG Server: alive=True jobs=1 requests=38 uptime=229 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      17:56:00-376863 DEBUG Server: alive=True jobs=1 requests=38 uptime=350 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      17:58:00-481532 DEBUG Server: alive=True jobs=1 requests=38 uptime=470 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      17:59:59-582811 DEBUG Server: alive=True jobs=1 requests=38 uptime=589 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      18:01:59-688014 DEBUG Server: alive=True jobs=1 requests=38 uptime=709 memory=0.98/31.92 backend=Backend.DIFFUSERS
      state=idle
      I also use "stable-diffusion-webui-directml" and this one works fine.

    • @northbound6937 5 months ago

      Don't know how to fix that. Maybe ask in the ZLUDA thread in the help channel on discord (link in github.com/vladmandic/automatic)

  • @gigend 5 months ago

    Does it support the RX 570 with 4GB VRAM, or not?

    • @bitkarek 5 months ago

      Not long ago I tried DirectML with an RX 580 and got about 4 s per iteration. It was slow, but it worked. Don't know if ZLUDA was supported, but if you are serious about generative AI, upgrade your GPU; it's worth it. The newer cards are way more optimized and way more powerful. With a 7800 XT I now get about 4 iterations per second, so it's waaaay faster.

    • @DouglasRivitti 4 months ago

      @@bitkarek Man, how do you fix that "RuntimeError: Torch is not able to use GPU"? I've got an RX 580 GPU and always have problems using SD.

    • @bitkarek 4 months ago +1

      @@DouglasRivitti I would like to help you, but everybody has different errors. Radeon software up to date, Windows up to date, install all requirements from the txt file... it "should" run. There was a workaround for older cards for ROCm! Be sure to have it; not sure you can run an RX 580 with ZLUDA and ROCm... Try the DirectML version. It's slower, but it worked for me.

  • @HanafiYusufSamAfghani 2 months ago

    With ComfyUI my RX 6600 can generate 8.5 it/s at FHD : 4

  • @BadutBego 5 months ago

    Compared to non-ZLUDA, how much faster is it?

    • @northbound6937 5 months ago

      8x

    • @BadutBego 5 months ago

      @@northbound6937 Whoa, so fast!! I'll try it, thanks

  • @icefrog101 5 months ago

    Do SDXL models work on this? Have you tried it?

    • @northbound6937 5 months ago

      Which model are you interested in? I can try it

    • @rwarren58 5 months ago

      What's your gpu? VRAM?

  • @JanDahl 5 months ago

    "Sheckel time" - careful with that kind of talk, buddy 😁

  • @mengli1706 5 months ago

    Can the AMD Radeon RX580 2048SP graphics card run the program?

    • @northbound6937 5 months ago

      I'm not sure; it might. For that, follow the steps in the section 'Replace HIP SDK library files for unsupported GPU architectures' of the guide linked in the description (specifically the last bullet point of step 3 of that section, the one that links to 'ROCmLibs.7z'). Good luck!
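
      The library-replacement step described above can be sketched as a small script. Two assumptions here: the HIP SDK keeps its rocBLAS kernels under bin\rocblas\library (check the guide for your SDK version), and rocmlibs_dir points at the extracted ROCmLibs.7z folder:

```python
# Hedged sketch of the "replace HIP SDK library files" step for officially
# unsupported GPU architectures: copy the rocBLAS kernel files from an
# extracted ROCmLibs archive over the ones the HIP SDK installed.
# The bin/rocblas/library layout is an assumption; verify it for your SDK.
import pathlib
import shutil

def replace_rocblas_libs(rocmlibs_dir: str, hip_sdk_dir: str) -> int:
    """Back up the SDK's rocblas library folder, then copy the ROCmLibs files in."""
    src = pathlib.Path(rocmlibs_dir)
    dst = pathlib.Path(hip_sdk_dir) / "bin" / "rocblas" / "library"
    backup = dst.with_name("library.bak")
    if dst.is_dir() and not backup.exists():
        shutil.copytree(dst, backup)  # keep a restore point
    dst.mkdir(parents=True, exist_ok=True)
    copied = 0
    for f in src.rglob("*"):
        if f.is_file():
            shutil.copy2(f, dst / f.name)
            copied += 1
    return copied
```

      The backup folder makes the swap reversible if the card turns out not to work with the replacement kernels.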

    • @mareck6946 5 months ago

      DirectML works. ZLUDA in general also (Blender etc.) on that card.

  • @zeror6590 3 months ago

    do pony models work?

  • @uxot 1 month ago

    It seems to me a lot of models aren't working with this ZLUDA crap... not worth it if that's the case.

  • @GHOSTTIEF 4 months ago

    My anime hentai model is not compatible smh

  • @belcrosssama2 27 days ago

    GPU RX 7800 XT... everything worked fine the first time, but the second time it no longer opens with webui.bat.

    • @toledo_s3480 21 days ago

      What do you mean it worked for you? I thought ZLUDA had already died.

  • @POKLONITES_KLOYnY 5 months ago

    7900gre support?

    • @northbound6937 5 months ago

      Should be supported, for all intents and purposes the 7900GRE has the same chipset as 7900XT

    • @POKLONITES_KLOYnY 5 months ago

      @@northbound6937 thanks
