Thanks brother, I also have a 6750 XT. I installed Llama 3.2 11B today and it was very slow, and I realized it was using 100% CPU. After this fix it's around 50% for both CPU and GPU. Thanks a lot.
This video was easy to understand and professional. Worked on my RX 6600 with no problems. Keep it up!
Good deal! Thanks for your feedback!
Thank you, I followed the steps in the video and finally got Ollama running on my RX 6600 GPU. I just had to restart for it to work (before the restart it was using the CPU). Now I will check whether I can use Docker and WebUI for a better interface. Thanks again!
Great! I'm glad it worked for you. I also have videos on Docker and Web UI if you need help with that. Thanks for watching!
You are awesome, you saved my life and my graduation project
and a lot of money LOL
Glad to hear it! Best of luck with your project!
I have been wanting to run Ollama on my 6750 XT ever since AMD GPU support was announced.
THANK YOU!
Just installed it and it seems to work perfectly. So much faster than running on my 13700K!
I'm glad to hear it. Thanks for watching!
Perfect explanation video, it worked on my AMD 5700 XT. Thank you!
You're welcome! Thanks for the feedback!
Great video! It worked perfectly.
Glad it helped! Thanks for the feedback!
Thanks! I made it work with my 6600!
Very cool! Thanks for the feedback.
It works! Thank you so much, you made a very good video.
Great! Thank you for watching and for your feedback!
Thank you! It works with my 5700 XT!
Night and day compared to running on my CPU!
Great! Thanks for the feedback!
Can it run on an RX 6500M laptop GPU?
Hi, you will need to determine the LLVM target of your GPU and then use the matching rocBLAS package. I would recommend looking at your log file to find that target, which should start with gfx. In your case it might be gfx1033, but I'm not 100% sure. Maybe someone else can chime in here.
Here is the place in the video that shows where to find it in the Ollama log file:
ruclips.net/video/G-kpvlvKM1g/видео.html
Here is the latest (as of this writing) release of the rocBLAS packages:
github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.1.2
Here is a video I made to show how to look at the log file:
ruclips.net/video/on3rtyPWSgA/видео.html
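If you prefer to search from a terminal, here is a minimal PowerShell sketch (it assumes the default Ollama log location on Windows; adjust the path if your install differs):

  # Search the Ollama server logs for the detected gfx target.
  Select-String -Path "$env:LOCALAPPDATA\Ollama\server*.log" -Pattern "gfx\d+"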
Hope that helps!
This is the first video that is really simple, well explained, and also works. Thank you 👌 But I have the problem that Mistral is much slower on the GPU than on the CPU. I use an RX 6600 XT. I don't know what the problem is.
Thanks, I'm glad it was helpful. The problem you're having usually points to a resource limitation. Switching to a smaller model like Llama 3.2 3B (or models around that size) might work better. Just keep in mind it's a trade-off, because you do lose some quality. If you haven't yet watched my video on performance, you might want to check it out, as it demonstrates a similar problem I had when using Llama 3 on one of my systems. ruclips.net/video/l0tc2TSxkO8/видео.html
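For example, something like this should pull and test a smaller model (the tag is my assumption based on the current Ollama library; check ollama.com/library if it has changed):

  # Pull and try the 3B model; quality drops a bit, but it may fit your GPU better.
  ollama pull llama3.2:3b
  ollama run llama3.2:3b "Why can smaller models run faster on limited VRAM?"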
Can this work with integrated graphics?
Like the AMD APUs? I'm not sure. Even if it were supported, I doubt it could perform very well. Too many limitations.
This worked on RX 5500M! Thanks
Good deal. You're welcome!
Hi, I have checked your other video regarding whether or not it is using the GPU, and Ollama says the GPU is being used. Moreover, Adrenalin shows that my 5700 XT is at 99% usage. However, I am getting slower output than on the CPU. Is there a fix for this? Thanks.
Otherwise, great video, thanks a lot.
I have run into this as well. You might try a smaller model (3B parameters, like Llama 3.2 or maybe Phi). I'm wondering if it's a bottleneck transferring data between the CPU and GPU, and it could be a matter of needing more RAM. I address this in my video on performance: ruclips.net/video/l0tc2TSxkO8/видео.html
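One quick check while the model is loaded (recent Ollama builds support this, if I recall correctly):

  # Shows loaded models and how they are split between CPU and GPU.
  # A split like "48%/52% CPU/GPU" instead of "100% GPU" suggests the model
  # doesn't fully fit in VRAM, which can make GPU runs slower than CPU-only.
  ollama ps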
Thanks for your feedback!
Hi, can you make a similar video, but for a discrete GPU? I have an old GTX 1050 and Ollama is using the CPU only. I want to know how to activate the GPU, and also how to disable it to see the difference. Thanks.
Hi there! Actually, the video I made was for a discrete GPU, but it was AMD rather than NVIDIA. You shouldn't have to install any workaround: Ollama supports NVIDIA GPUs with compute capability 5.0+, and the GTX 1050 has a compute capability of 6.1, so you should be in good shape. If it's not working correctly, you might try updating your drivers, though I'm not sure. To force CPU usage, set the environment variable CUDA_VISIBLE_DEVICES to -1. To enable GPU usage again, just remove that variable and it should use your GPU. For more info, see this page: github.com/ollama/ollama/blob/main/docs/gpu.md
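If you want to test both ways from a terminal, here is a rough PowerShell sketch (it assumes you start the server yourself from that shell; if Ollama is already running in the system tray, quit it first):

  # Force CPU-only for this session:
  $env:CUDA_VISIBLE_DEVICES = "-1"
  ollama serve

  # Later, re-enable the GPU by clearing the variable and restarting:
  Remove-Item Env:CUDA_VISIBLE_DEVICES
  ollama serve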
Worked on the RX 6700 XT, thank you so much!
You're welcome! Glad it worked!
But I have an RX 6600, can I use Ollama?
It should work. The RX 6600 has an LLVM target of gfx1032.
rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html#supported-gpus-win
github.com/likelovewant/ollama-for-amd#windows
Yes, you can. Follow the steps.
It works on my trash GPU (AMD RX 580)!
Thanks for watching! And for your feedback!