This is the best video tutorial I have found. The fact that you provide information if something doesn't work is awesome.
Thank you 🙂
Perfect, not a single word is left out, thank you.
Excellent! Thanks!
I'd been trying to fix the "models not found" issue for like two days, and this worked. Thank you, man!
Great! Good timing :)
This video was so informative, straightforward, and thorough! Working perfectly on Zorin 17. You definitely got a sub from me.
Great! I appreciate you taking the time to make this comment.
Brief and clear, excellent tutorial. Can we have a manual installation video?
What exactly do you mean by "manual installation"?
simple and very helpful tutorial 👍👍👍
Thanks! 👍🙂
Great video, it resolved in a few minutes a problem I had been facing for days. Thank you.
Yeah, it took me a while to figure it out too. I'm glad it helped.
Thanks for the tutorial, I followed everything and it works perfectly. I had bad performance issues on Windows; on Linux it is blazingly fast.
Well done my friend thank you so much!!!
Thank you so much! It works perfectly on Pop!_OS.
I can run deepseek r1 locally on my PC, that doesn't even have a GPU. That's just amazing.
😁
You get straight to the point. Thanks!
Awesome quick guide, thanks a lot!!
Thank you, just completed it
Great work, finally found working tutorial! :D
Thanks for the feedback! Glad the tutorial helped. All my tutorials are working tutorials :)
Thanks, I tried it and it works on Debian 12.8.
That is good to know. Thanks!
For some reason I still can't see the model in the web UI, even after following your instructions.
Thanks! This video helps a lot
Spot On, ... Great Job
Wow! Thank you so much for this clear and easy-to-follow tutorial. Everything worked exactly as you described, and I also appreciate you including the extra commands in the description. This is my first time messing with local LLMs, and I'm really excited (although I might melt this old laptop).
You mention configuring Ollama to listen more broadly. Would that enable me to access the machine running Ollama/Docker via the web UI from a different computer on my network? Ideally, I'd like to be able to share the local LLM with others on my home network, setting up a profile for them. I hope that makes sense.
Thank you again!
Thank you for the comment, I really appreciate it :)
If you set the OLLAMA_HOST environment variable to 0.0.0.0, that should open it to all traffic, but there might be some security concerns if you do this.
You might find this Reddit post interesting
www.reddit.com/r/ollama/comments/1guwg0w/your_ollama_servers_are_so_open_even_my_grandma/
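For anyone following along, here is roughly what that change looks like as a systemd drop-in override. This is only a sketch; it achieves the same result as adding the Environment line to the Ollama service file directly, and 11434 is Ollama's default port, so verify the details on your own system.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
# Ollama should now listen on all interfaces on port 11434 instead of only 127.0.0.1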
@BlueSpork Thank you for the reply. I was just coming back to update this, as I've figured some of these things out. I set a static local IP on this old laptop via my router. After that, I could access the web UI at ip:3000 from any browser on any local computer. I was also able to add users via the admin panel settings in Open WebUI. The one trick there was that the models were set to private by default, so I had to make them public. Adding models can also be done through the web UI.
The AI models are not fast, and the ~3B versions this old laptop can handle are pretty crappy, but this is so cool and I'm really excited to explore and tinker. Thank you again!!
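For anyone else trying the same thing, the ip:3000 part comes from the port mapping used when the Open WebUI container is started. A sketch of the commonly documented docker run command, assuming Ollama runs directly on the host; your exact flags may differ from the video:
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
# -p 3000:8080 publishes the UI on host port 3000, so other machines on the
# LAN can reach it at http://<laptop-ip>:3000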
Thank you brother !!
Thanks. What are the reasons for running in a docker container? Why not just run in a terminal?
Hello and thank you. After changing the Environment line and running sudo systemctl daemon-reload and sudo systemctl restart ollama, and then doing an ollama list, I get "Error: could not connect to ollama app, is it running?" Then when I run ollama serve I get a really long list of errors in debug mode.
Is this OK? I do see my model in Open WebUI so it does appear to be working.
Thanks!!
Jason Mehlhoff
If Open WebUI is working and showing your model, then Ollama is likely running correctly. The error when running ollama list might be due to permission issues or an environment configuration problem.
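A couple of quick checks that can narrow it down (a sketch, assuming the systemd install and Ollama's default port 11434):
systemctl status ollama                  # is the service itself running?
curl http://localhost:11434/api/tags     # if this returns JSON with your models, the server is fine
# Note: running "ollama serve" manually while the service is already active
# typically fails because port 11434 is already in use, which can look like
# a wall of errors in debug mode.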
@BlueSpork cool! Thanks for getting back so quickly. Yes, otherwise seems to be working very nicely. Have a good evening.
Jason
No problem. You too
@BlueSpork Unfortunately, I also get the error mentioned above... and I don't know how to solve this problem when I try to add a new model... do you have any idea?
I think I solved the problem when I changed the host to 0.0.0.0, reloaded the systemd configuration, and restarted Ollama.
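A quick way to confirm that change took effect (a sketch; 11434 is Ollama's default port):
sudo ss -tlnp | grep 11434                 # should show 0.0.0.0:11434, not 127.0.0.1:11434
systemctl show ollama | grep OLLAMA_HOST   # the override should appear in the unit's environment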
At 2:41", regarding the "permission denied" issue, if it continue to occur even after you LogOut - LogIn, try to reboot the entire OS. That is how I solve this issue on my computer.
Thanks for the info
Hello, am I the only one who gets a black screen after logging in to Open WebUI, with nothing happening? I have tried reinstalling everything but nothing changed. It seems that Ollama can't communicate with Open WebUI. Not sure how to fix it. Any idea?
Hi, one issue I'm still having is that Ollama won't use my GPU. I'm on 24.04 and I'm using the Llama 3.2 8B model.
What is your GPU?
I have an RTX 4070 TI super.
How do you know Ollama is not using your GPU?
I’m using NVTOP and all my commands are going through the CPU
There could be many different reasons for this issue. Troubleshooting through YouTube comments can be challenging and time-consuming. I recommend researching your specific problem (with your configuration) online to find more targeted solutions.
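That said, a few generic checks often help with GPU questions like this (a sketch, assuming an NVIDIA card with the proprietary driver installed):
nvidia-smi        # confirms the driver and CUDA runtime are visible at all
ollama ps         # shows whether a loaded model is running on "100% GPU" or "100% CPU"
journalctl -u ollama -e | grep -i -e gpu -e cuda
# the server log usually says why the GPU was skipped (missing libraries,
# not enough VRAM, unsupported compute capability, etc.)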
Nice video. Would it be possible for you to produce one for lobe-chat as well?
Thank you. I'm curious why you're interested in lobe-chat.
@BlueSpork Because it's a bullet too 😍
Awesome tutorial, very informative. Thanks a lot!!
How do I run it on Intel(R) Iris(R) Xe Graphics?
Intel Iris Xe Graphics can't handle large language models well. If a model does run, it'll be really slow. A dedicated GPU would do a much better job.
@BlueSpork OK, thank you, sir.
Please create a tutorial for Ollama + Open WebUI, but on WSL.
Good idea! I’ll look into it after finishing the video I’m currently working on.
I just did mine in WSL and the only issue I have is the model not showing in Open WebUI.
Thanks for the tutorial, and merry Christmas!
For me, not sure why, but I've been installing open-webui:main for the last week and it keeps getting flagged as unhealthy :D, while open-webui:v0.3.35 works.
You are welcome. I’m not sure why a different WebUI version works. I haven’t seen that problem before.
@BlueSpork No worries. Funny thing is that the first time I installed it, it worked straight away... I did a fresh Pop!_OS install and since then it has failed :D... probably some stuff that I need to tweak... will let you know when I figure this out.
Hmm… yeah, let me know. I’m curious
It's not finding any models for me.
Did you create a new "OLLAMA_HOST" Environment variable in the Ollama service file?
@BlueSpork I fixed it, thank you!