My favorite prompt is "no yapping." You were on point. Thanks!
This is the style of tech content I need in my life. Subbed, liked, and commented on.
Thank you, I appreciate it very much.
Seriously, I can't thank you enough for being so direct. All that yapping about unrelated stuff just confuses me.
Thank you. I’m really glad you like it, and I appreciate you taking the time to make this comment.
Finally, someone who gets to the point. THANK YOU. SUBSCRIBED!
😁 🙏
This was a really great guide. No BS, quick and easy. I managed to FINALLY make it work. Thanks! 👍
Great! I'm really happy to hear it helped you make it work :)
I will hold a talk on the possibilities of local AI in the medical field and use your video as a quick demonstration how easy it is to set up OPENWEB UI. THANK YOU!
Awesome! Not only easy to set up, but also open source 🙂 I’m making another video for a Linux setup and then one for a macOS setup.
Hello, thanks for the tutorial.
Why, when I remove the ollama/ollama image from Docker Desktop, can I no longer access the Llama 3.2 Vision model, even though it is still stored on my machine? How does the ollama/ollama image bridge the connection between the locally installed model and Open WebUI?
You’re welcome. You can install and run Ollama natively on your system or inside a Docker container. My tutorial demonstrates the setup where Ollama is installed on Windows and doesn’t run in a Docker container. From your description, it seems you’re running Ollama in a Docker container. Is that correct?
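For context: in my setup, only Open WebUI runs in Docker, and it talks to the native Ollama server over its API on port 11434. A rough sketch of the run command, based on Open WebUI's documented quick start (your host port and volume name may differ):

# maps host port 3000 to the container's 8080, and lets the container
# reach the host's native Ollama at http://host.docker.internal:11434
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

In this setup the model files live in the native Ollama's local store, not inside any Docker image, so deleting the ollama/ollama image would not affect them.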
@BlueSpork No. I followed the tutorial exactly like you. The only container running on Docker Desktop is Open WebUI.
But I have another problem: the model takes a long time to answer me (I used llama3.2-vision). For example, I just asked for a one-question quiz about AWS services. I noticed that the CPU usage of the Open WebUI container is very low (it goes up to 85% when I start it, then 0.3% while it's running). Do you think that's normal?
How much GPU VRAM do you have, and how much RAM do you have on your computer?
@BlueSpork RAM: 16 GB
GPU VRAM: 7.9 GB
What GPU do you have?
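While you check, one thing worth looking at: llama3.2-vision (11B) needs roughly 8 GB of VRAM, so with 7.9 GB it may be partially offloading to the CPU, which would explain the slow answers. It would also match the Open WebUI container showing almost no CPU use, since the inference happens in Ollama, not in the container. Assuming a recent Ollama version, you can see the split while the model is loaded:

ollama ps
# the PROCESSOR column shows the split, e.g. "100% GPU" or "25%/75% CPU/GPU";
# any CPU share means the model did not fully fit in VRAM and will run slower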
Many thanks for this concise video.
Glad it was helpful :)
You are a great help, dude. Many thanks for the video.
You are welcome, my friend.
Thank you very much for getting straight to the steps.
You are welcome
Thank you for this video. Is it possible to fine-tune Ollama on Windows? Could you please make a video about that? And why do developers not recommend using Ollama on Windows? Thank you again.
You’re welcome! Are you asking about fine-tuning Ollama or models?
@ I am asking about fine-tuning models. I am using Ollama with Open WebUI.
I am considering fine-tuning LLMs, but it is time-consuming and currently not necessary, as the LLMs I use provide everything I need.
Hi, when I open localhost, it says the page isn’t working. Any ideas what the problem is?
Is your Docker container running, and what is your full localhost address?
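If you used the usual command, the full address should be http://localhost:3000, where 3000 is whatever host port you mapped with -p (adjust if yours differs). Assuming you named the container open-webui, you can confirm it is up and see the port mapping with:

docker ps --filter "name=open-webui"
# STATUS should read "Up ...", and PORTS something like 0.0.0.0:3000->8080/tcp;
# if nothing shows here, run "docker ps -a" to check for an exited container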
PERFECT! Subbed, and a thumbs-up!
Thanks for the sub and the thumbs-up!
From Russia, with thanks.