Accessing Llama2 LLM On Docker Using Ollama | Running Ollama Docker Container | How To Run Ollama

  • Published: 13 Jan 2025

Comments • 8

  • @m.aakaashkumar6125 7 months ago +1

    I have Ollama on my Windows PC. My GPU is old, with NVIDIA CUDA 10.0 and compute capability 3.0. It is not supported by the Windows version of Ollama, which requires at least CUDA 11.3 and compute capability 3.5. Is there any solution to make it work on my GPU? I'm worried the CPU will get damaged from running Ollama on it, and I want to use my GPU instead.

    • @SAIKUMARREDDYN 7 months ago

      Try changing the preferred processor in the NVIDIA Control Panel from CPU to GPU, so that everything runs on the GPU; then it should be supported. Otherwise you need to use the Ollama Docker container. Check out my video on connecting to the Docker container.
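
      For reference, the GPU-enabled invocation documented on the Ollama Docker Hub page looks roughly like this (a sketch; it assumes the NVIDIA Container Toolkit is installed on the host so Docker can pass the GPU through):

        # Start the Ollama server container with GPU access.
        docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

        # Pull and chat with a model inside the running container.
        docker exec -it ollama ollama run llama2

      Note that a container does not change the GPU hardware itself, so the same compute-capability minimum still applies inside Docker.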

    • @m.aakaashkumar6125 7 months ago

      @SAIKUMARREDDYN I had already made that setting, yet on ollama serve it still states that the GPU is too old and is ignored. Will running in a Docker container make use of my GPU? And is this the video you are referring to, or another one?

  • @SushantKulkarni-je5cd 8 months ago +1

    Brother, I am working on a FastAPI project, so I want to download Ollama and Llama 3 directly in Docker after building the image. Do you know how to do that?
    Please help me.

    • @SAIKUMARREDDYN 8 months ago

      There is a Docker image of Ollama already available; pull it and start working with it. Here is the link - ruclips.net/video/pateZb8ZVmM/видео.htmlsi=0f7hC7H7ax0ZSrvl
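
      For the FastAPI use case above, one commonly shared workaround (a sketch, not an official recipe) is to build a custom image on top of ollama/ollama with the model baked in, so the container starts with Llama 3 already downloaded. The build step starts the server in the background because ollama pull needs a running server:

        # Dockerfile - custom Ollama image with llama3 pre-pulled (illustrative)
        FROM ollama/ollama
        # Run the server just long enough to download the model into
        # /root/.ollama, baking the weights into this image layer.
        RUN ollama serve & sleep 5 && ollama pull llama3

      Build and run it, then point the FastAPI app at the Ollama HTTP API on port 11434:

        docker build -t ollama-llama3 .
        docker run -d -p 11434:11434 --name ollama ollama-llama3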

    • @Ashif13143 2 months ago

      Thanks, buddy. I have a program that uses Llama 3 from my machine. I want to deploy the code to Azure, but on Azure I cannot use Llama 3. Help?