Ollama 3.1 & Open-WebUI with Docker For Multiple Models Locally

  • Published: 15 Jan 2025

Comments • 15

  • @Smoshfaaaaaaaaaaan
    @Smoshfaaaaaaaaaaan 3 months ago +1

    New video! I love how concise your videos always are man.
    Was just listening to your ML build videos.
    I'm an ML student at the University of Tuebingen and was thinking of building a rig with one or two RTX 3090s. They're around 650-700€ on eBay rn, which seems like a good price for the performance.
    You mentioned you are a PhD student, what do you study?

    • @TheDataDaddi
      @TheDataDaddi  2 months ago

      Hey there! Thanks so much for the comment and the kind words!
      Love it. Yeah, the RTX 3090s are some of my favorite GPUs in general from a cost-to-performance perspective. If you have any questions on the build, I would be happy to help; let me know how it turns out!
      Yep. I am currently getting my PhD in computer science at the University of Georgia. My research focus at the moment is AI and web browser security (in particular, computer vision and LLM/VLM models). I am about to transition to a new project, though; it looks like that will focus more on tabular anomaly detection.

  • @TheGaussFan
    @TheGaussFan 2 months ago +4

    I've been having Ubuntu system issues. Since you have a known working combination, can you tell us your current Linux version, Linux kernel, NVIDIA driver, CUDA, container toolkit, and Docker versions, and any other system data you think is relevant to a successful install? Were there any magic configuration steps not shown in the install web pages? Claude AI and I have been running circles trying to get Docker to see the 3090. No success so far.

    • @TheDataDaddi
      @TheDataDaddi  2 months ago

      So sorry to hear you are having issues. The first thing that comes to mind is that you may not have installed the nvidia-container-toolkit. This must be installed for your Docker containers to see the GPUs. The commands below should fix this issue (if you have not already installed it).
      distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
      curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
      curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
      sed 's#deb #deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] #g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-docker.list
      sudo apt update
      sudo apt install -y nvidia-container-toolkit
      sudo systemctl restart docker
      Just in case that does not work here is my info:
      Ubuntu Version: Ubuntu 22.04
      Kernel Version: 5.15.0-124-generic
      Driver Version: 535.216.01
      CUDA Version: 12.2
      NVIDIA Container Runtime version: 1.14.6
      I did not do anything special; I just pulled down the premade container, if I remember correctly. Hope this helps! Also, if you want to provide me specific errors, I would be happy to help you troubleshoot further! Sorry again for the issues!
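
      As a quick sanity check (not mentioned in the thread, but a standard test), you can confirm that Docker can reach the GPU through the toolkit by running an official CUDA image. The image tag below is an assumption; pick one that matches your installed CUDA version:

      ```shell
      # If the toolkit is set up correctly, this prints the same GPU table
      # as running nvidia-smi directly on the host.
      docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
      ```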

    • @TheGaussFan
      @TheGaussFan 2 months ago

      @TheDataDaddi Thanks, I have been working with Ubuntu 22.04.4 with a different kernel, driver, and CUDA version. I'll try again starting with 22.04, match everything else up to your install, and see if that works. Thanks for your info.

  • @JohnsonNong
    @JohnsonNong 1 month ago +1

    daddy i really love you

  • @tsizzle
    @tsizzle 2 months ago +1

    did you show whether open webui can do side by side chat display of 2 different models?

    • @TheDataDaddi
      @TheDataDaddi  2 months ago

      I did not! I was not aware that this was possible. Thanks for letting me know. I will play around with it and try to figure that out. Thanks!

  • @EngineeredByEvan
    @EngineeredByEvan 2 months ago +2

    Either I'm just having trouble following, or these are mostly the steps for Linux specifically? Most of the steps here are giving me issues.

    • @TheDataDaddi
      @TheDataDaddi  2 months ago

      Hi there. Thanks for the comment! Sorry to hear you are having issues.
      This video shows how to use Docker for both the Ollama side and the Open-WebUI side, so it should theoretically work on any OS that supports Docker, as long as you have Docker installed correctly. The OS inside both of the Docker containers is Linux-based, I believe; however, that should not really matter too much in this case. If you can send me the error messages you are getting, or let me know where you are getting stuck, I would be happy to help!
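
      For reference, the two-container setup described here boils down to the commonly documented run commands below. The port mappings and volume names are the upstream defaults; adjust them to taste:

      ```shell
      # Start the Ollama server with GPU access
      # (--gpus=all requires the NVIDIA Container Toolkit on Linux).
      docker run -d --gpus=all -v ollama:/root/.ollama \
        -p 11434:11434 --name ollama ollama/ollama

      # Start Open WebUI and let it reach the host's Ollama instance.
      docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data --name open-webui \
        --restart always ghcr.io/open-webui/open-webui:main

      # Pull a model inside the Ollama container, then browse to http://localhost:3000
      docker exec -it ollama ollama pull llama3.1
      ```

      On macOS or Windows, the same commands work minus `--gpus=all` (GPU passthrough support differs per platform).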

    • @EngineeredByEvan
      @EngineeredByEvan 2 months ago +1

      @TheDataDaddi Hey! Thanks for the reply! I ended up getting it up and running, using two Llama models from Ollama. However, I was interested in using other models that I got access to on Hugging Face (Llama 3.2 70B, for example), but no matter how much I tinkered with the setup, the UI never seems to point to the models I had locally. I was curious if downloading other local models was incompatible with this setup, and you can only use Ollama's list of models? Thank you again!

    • @TheDataDaddi
      @TheDataDaddi  2 months ago

      @EngineeredByEvan So this is something I have not experimented with yet. I know that it is possible to run other models with the underlying llama.cpp engine (I have actually tested this), but I have not tried it yet with Ollama. I will tinker with this when I get a chance and make a follow-up video showing how to use other models besides just what is in the Ollama library. Stay tuned!
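
      For what it's worth, Ollama does document a path for models outside its library: write a Modelfile pointing at a local GGUF file and register it with `ollama create`. A minimal sketch (the file name here is hypothetical; a Hugging Face model would first need to be in GGUF format):

      ```shell
      # Modelfile just needs a FROM line pointing at the local GGUF weights.
      echo 'FROM ./llama-3.2-70b.Q4_K_M.gguf' > Modelfile

      # Register the model; it should then appear in Open WebUI's model list.
      ollama create my-local-llama -f Modelfile
      ollama run my-local-llama
      ```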

  • @hand-eye4517
    @hand-eye4517 2 months ago +1

    Docker Desktop is NOT open source .... Not the same as using regular docker

    • @TheDataDaddi
      @TheDataDaddi  1 month ago

      Yep, you're absolutely right, thanks for pointing that out! Docker Desktop indeed isn't open source and has a different licensing model compared to Docker Engine, which is open source. I appreciate the clarification!