Ollama on Kubernetes: ChatGPT for free!

  • Published: 11 Nov 2024
  • Science

Comments • 9

  • @oscarandresdiazmorales7180 2 days ago

    Excellent video! I loved how you explained the process of deploying Ollama on Kubernetes. Thanks for sharing your knowledge!

  • @HosseinOjvar 1 day ago

    Helpful tutorial, thank you!

  • @Techonsapevole 3 days ago +2

    I use Docker Compose, but I was curious about k8s.

  • @samson-olusegun 3 days ago +1

    Would using a k8s job to make the pull API call suffice?

    • @mathisve 3 days ago +1

      Yes and no! On paper, if you only had one pod this could work. But the API call needs to be made every time a new Ollama pod is scheduled (unless you're using a PVC mounted to the pod to store the model). As far as I'm aware it's not possible to start a Kubernetes job at the creation of a new pod without using an operator.
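
      To make the trade-off concrete, here is a minimal sketch of what the pull-on-startup alternative could look like: instead of a separate Job, a `postStart` lifecycle hook runs `ollama pull` inside every newly scheduled pod. This is an illustration, not the video's exact manifest, and the model name `llama3` is an assumption.

      ```yaml
      # Sketch: each new Ollama pod pulls its own model via a postStart
      # hook, so no external Job needs to fire on pod creation.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ollama
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: ollama
        template:
          metadata:
            labels:
              app: ollama
          spec:
            containers:
              - name: ollama
                image: ollama/ollama:latest
                ports:
                  - containerPort: 11434
                lifecycle:
                  postStart:
                    exec:
                      # Runs inside the container right after it starts,
                      # so the model is pulled on every scheduled pod.
                      # (Hypothetical model name for illustration.)
                      command: ["/bin/sh", "-c", "ollama pull llama3"]
      ```

      Note the startup cost: without a PVC, every pod re-downloads the model on first boot.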

  • @Sentientforce 3 days ago

    Can you please advise how to run Ollama in a k3d cluster on WSL2 (Windows 11) with Docker Desktop? The issue I'm not able to solve is making the GPU visible in a node.

  • @MuhammadRehanAbbasi-j5w 12 hours ago

    Would really like a video on how to add a GPU to this, both locally and in the cloud.

    • @mathisve 5 hours ago

      Stay tuned for that video! I'm working on it as we speak, should be out later this week!