Force Ollama to Use Your AMD GPU (even if it's not officially supported)

  • Published: Jan 17, 2025

Comments • 41

  • @shiv4667
    @shiv4667 1 day ago

    Thanks brother, I also have a 6750 XT. I installed Llama 3.2 11B today and it was very slow; I realized it was using 100% CPU. After this fix it is around 50% for both CPU and GPU. Thanks a lot.

  • @Karmezz
    @Karmezz 1 month ago

    This video was easy to understand and professional. Worked on my RX 6600 with no problems. Keep it up!

  • @Senorkawai
    @Senorkawai 1 month ago +2

    Thank you, I followed the steps in the video and finally Ollama ran on my RX 6600 GPU. I just had to restart for it to work (before the restart, it was using the CPU). Now I will check if I can use Docker and WebUI for a better interface. Thanks again!

    • @TigerTriangleTech
      @TigerTriangleTech  1 month ago +1

      Great! I'm glad it worked for you. I also have videos on Docker and Web UI if you need help with that. Thanks for watching!

  • @nome_de_usuario
    @nome_de_usuario 8 days ago

    You are awesome, you saved my life, my graduation project, and a lot of money LOL

  • @MichelBertrand
    @MichelBertrand 2 months ago +1

    I have been wanting to run Ollama on my 6750 XT ever since AMD GPU support was announced.
    THANK YOU!
    Just installed it and it seems to work perfectly. So much faster than running on my 13700K!

  • @abdullahal-jauni3013
    @abdullahal-jauni3013 1 month ago

    Perfect explanation video. It worked on my AMD 5700 XT, thank you!

  • @anjinho_ruindade_pura
    @anjinho_ruindade_pura 1 month ago

    Great video! It worked perfectly

  • @DiegoThoms
    @DiegoThoms 11 days ago

    Thanks! I made it work with my 6600!

  • @Byrex_Lorence
    @Byrex_Lorence 2 months ago

    It works! Thank you so much, you made a very good video.

    • @TigerTriangleTech
      @TigerTriangleTech  2 months ago

      Great! Thank you for watching and for your feedback!

  • @vikoscharger2427
    @vikoscharger2427 15 days ago

    Thank you! It works with my 5700 XT!
    Night and day instead of running on my CPU!

  • @ThermoIdiot
    @ThermoIdiot 25 days ago

    Can it run on an RX 6500M laptop GPU?

    • @TigerTriangleTech
      @TigerTriangleTech  25 days ago

      Hi, you will need to determine the LLVM target of your GPU and then use the related ROCBLAS packages. I would recommend looking at your log file to find that target, which should start with gfx. In your case it might be gfx1033, but I'm not 100% sure. Maybe someone else can chime in here.
      Here is the place in the video that will show you in the Ollama log file:
      ruclips.net/video/G-kpvlvKM1g/видео.html
      Here is the latest release (currently) of the ROCBLAS packages:
      github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.1.2
      Here is a video I made to show how to look at the log file:
      ruclips.net/video/on3rtyPWSgA/видео.html
      Hope that helps!
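The log check described in the reply above can be sketched as follows. The sample log line is an illustrative assumption, not a real Ollama log entry, and the real log on Windows typically lives under %LOCALAPPDATA%\Ollama\server.log:

```shell
# Simulate one line of an Ollama server.log that reports the GPU's LLVM target
# (the format here is a stand-in; check your actual log for a "gfx..." string)
echo 'msg="amdgpu is supported" gpu=0 gpu_type=gfx1032' > /tmp/server.log

# Pull out the gfx target; point this at your real log file instead of /tmp
grep -o 'gfx[0-9a-z]*' /tmp/server.log | sort -u
```

The target it prints is the string you match against the ROCBLAS package releases linked above.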

  • @iljaruppel3047
    @iljaruppel3047 1 month ago

    This is the first video that is really simple, well explained, and also works. Thank you👌 But I have the problem that Mistral is much slower with the GPU than with the CPU. I use an RX 6600 XT. I don't know what the problem is.

    • @TigerTriangleTech
      @TigerTriangleTech  1 month ago

      Thanks, I'm glad it was helpful. The problem you are having usually points to a resource problem. Switching to smaller models like Llama 3.2 3B (or models that are around that size) might work better. Just keep in mind it's a trade off because you do lose some quality. If you haven't yet watched my video on performance, you might want to check it out as it demonstrates a similar problem I had when using Llama 3 on one of my systems. ruclips.net/video/l0tc2TSxkO8/видео.html

  • @coolcha
    @coolcha 9 days ago

    Can this work with integrated graphics?

    • @TigerTriangleTech
      @TigerTriangleTech  7 days ago

      Like with the AMD APUs? I'm not sure. Even if it were supported, I doubt it could perform very well. Too many limitations.

  • @joshuaosei5628
    @joshuaosei5628 21 days ago

    This worked on RX 5500M! Thanks

  • @louisgamercool2324
    @louisgamercool2324 1 month ago

    Hi, I have checked your other video on whether or not it is using the GPU, and Ollama says that the GPU is being used. Moreover, Adrenalin shows that my 5700 XT is at 99% usage. However, I am getting slower output compared to the CPU. Is there a fix for this? Thanks

    • @louisgamercool2324
      @louisgamercool2324 1 month ago

      Otherwise Great video, thanks a lot.

    • @TigerTriangleTech
      @TigerTriangleTech  1 month ago +1

      I have run into this as well. You might try a smaller model (3B parameters, like Llama 3.2 or maybe Phi). I'm wondering if it's a bottleneck in transferring data between the CPU and GPU, and it could be a matter of needing more RAM. I address this in my video on performance: ruclips.net/video/l0tc2TSxkO8/видео.html

    • @TigerTriangleTech
      @TigerTriangleTech  1 month ago +1

      Thanks for your feedback!

  • @iamnobody-001
    @iamnobody-001 2 months ago

    Hi, can you make a similar video but for a discrete GPU? I have an old GTX 1050 and Ollama is using the CPU only. I want to know how to activate the GPU, and also how to disable it to see the difference. Thanks

    • @TigerTriangleTech
      @TigerTriangleTech  2 months ago

      Hi there! Actually, the video I made was for a discrete GPU, but it was AMD rather than NVIDIA. You shouldn't have to install any workaround: Ollama supports NVIDIA GPUs with compute capability 5.0+, and the GTX 1050 has a compute capability of 6.1, so you should be in good shape. If this is not working correctly, you might want to update your drivers. To force CPU usage, set an environment variable CUDA_VISIBLE_DEVICES to -1. To enable GPU usage, just remove that variable and it should utilize your GPU again. For more info view this page: github.com/ollama/ollama/blob/main/docs/gpu.md
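The CUDA_VISIBLE_DEVICES toggle described above, as a minimal sketch (the `ollama serve` lines are commented out so the snippet stands on its own):

```shell
# CPU-only run: hide every CUDA device from Ollama before starting the server
export CUDA_VISIBLE_DEVICES=-1
# ollama serve    # start Ollama with the variable in effect

# To switch back to the GPU, remove the variable and restart Ollama
unset CUDA_VISIBLE_DEVICES
# ollama serve
```

Setting the variable only affects processes started after it is set, which is why Ollama needs a restart either way.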

  • @RomanMondragon-lh5fh
    @RomanMondragon-lh5fh 16 days ago

    Worked on the RX 6700 XT, thank you so much

  • @kevinmiole
    @kevinmiole 1 month ago

    But I have an RX 6600, can I use Ollama?

    • @TigerTriangleTech
      @TigerTriangleTech  1 month ago +1

      It should work. That is an LLVM target of gfx1032.
      rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html#supported-gpus-win
      github.com/likelovewant/ollama-for-amd#windows
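For reference, a commonly used Linux-side workaround for gfx1032 cards (separate from the Windows ROCBLAS-swap method the video shows) is to override the gfx version ROCm sees, so the RX 6600 is treated like the officially supported gfx1030. This is a sketch of that approach, not a guaranteed fix:

```shell
# Make ROCm treat the GPU as gfx1030 (RX 6800/6900 class); gfx1032 cards
# generally run those kernels, which is why this override is widely used
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# ollama serve    # restart Ollama so it picks up the variable
```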

    • @varunaeeriyaulla
      @varunaeeriyaulla 1 month ago +1

      Yes, you can. Follow the steps.

  • @KhongGuan-e8h
    @KhongGuan-e8h 1 month ago

    It works on my trash GPU (AMD RX 580)