Ollama on Linux: Easily Install Any LLM on Your Server

  • Published: 28 Aug 2024

Comments • 49

  • @crazytom
    @crazytom 1 month ago +4

    Thanks for leaving all the errors in and correcting them. Excellent.

  • @sto3359
    @sto3359 11 months ago +5

    This is amazing news! I'm limited to 16GB RAM on my Macs, but not so on my Linux machines!

  • @datpspguy
    @datpspguy 7 months ago +2

    I was using Ubuntu Desktop running Mixtral on Ollama so I could make API calls from my FastAPI app in VS Code, but realized I should separate them out and go headless for Ollama. I didn't realize that CORS was preventing outside calls from my dev machine, and this video helped once I found the GitHub page as well. Thanks for sharing.

    • @IanWootten
      @IanWootten  7 months ago

      Glad to hear you sorted it!

    • @datpspguy
      @datpspguy 7 months ago

      Thank you. I ended up storing the environment variable in the .conf file to bind the IP address, so it handles this automatically. @@IanWootten
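
    For anyone wanting to replicate that, the usual route on the Linux install is a systemd drop-in override for the ollama service. A minimal sketch, assuming the stock ollama.service the install script creates and that you want the API reachable from other machines (0.0.0.0 and the wildcard origin are deliberately permissive examples):

      # Open (or create) a drop-in override for the ollama unit
      sudo systemctl edit ollama.service

      # In the editor, add the following (systemd saves it under
      # /etc/systemd/system/ollama.service.d/):
      #   [Service]
      #   Environment="OLLAMA_HOST=0.0.0.0:11434"
      #   Environment="OLLAMA_ORIGINS=*"

      # Pick up the new environment
      sudo systemctl daemon-reload
      sudo systemctl restart ollama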

  • @DataDrivenDailies
    @DataDrivenDailies 9 months ago +4

    Just what I was looking for, thanks Ian!

  • @timjx3675
    @timjx3675 10 months ago +4

    Mistral 7B running really sweet on my old Asus (16GB RAM) laptop

    • @IanWootten
      @IanWootten  10 months ago +2

      Runs really fast on my MBP too, just started playing with it yesterday.

    • @timjx3675
      @timjx3675 10 months ago

      @@IanWootten sweet

  • @trapez_yt
    @trapez_yt 2 months ago +1

    I can't run it with "service ollama start"; it says the following:
    $ sudo service ollama start
    ollama: unrecognized service
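
    On recent distros the Ollama install script registers a systemd unit rather than a SysV init script, so the service command may simply not know about it. A hedged sketch of the systemctl equivalents, assuming the standard Linux install:

      # Check whether the unit exists and what state it is in
      systemctl status ollama

      # Start it now and enable it at boot
      sudo systemctl start ollama
      sudo systemctl enable ollama

      # If there is no unit at all, the server can still be run by hand
      ollama serve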

  • @perschinski
    @perschinski 2 months ago

    Great stuff, thanks a lot!

  • @PengfeiXue
    @PengfeiXue 4 months ago +1

    Can we use Ollama to serve in production? If not, what is your suggestion?

  • @atrocitus777
    @atrocitus777 6 months ago +2

    How does this scale for multiple users sending multiple requests at a time? Do you need to use a load balancer / reverse proxy? I don't think Ollama supports batch inference still.

    • @jakestevens3694
      @jakestevens3694 6 months ago

      You would have to launch and run the application multiple times; the best way is to just use something like Docker. Otherwise, I believe there's the "screen" command. If I remember correctly, on Linux this lets you run applications in the CLI in multiple virtual "screens", or rather sessions; you would then want to make sure whatever port each one uses is different from the others. Also take note that the RAM each instance uses is dedicated to it, while CPU can be shared. Sharing RAM might be possible (with some tricks), however it's unlikely.

    • @atrocitus777
      @atrocitus777 6 months ago

      What about pulling from a custom endpoint where I have my own hosted models? I want to run this on an air-gapped network that doesn't have any access to the internet, so if I could point it to an on-prem server I have, that would be awesome. @@jakestevens3694
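
    On the scaling question, one rough pattern is to run several independent Ollama instances on different ports and put any reverse proxy or load balancer in front of them. A minimal sketch using the official ollama/ollama Docker image (CPU-only shown; the container names, volume names and the second port are arbitrary):

      # Two independent instances, each with its own model cache volume
      docker run -d --name ollama-1 -v ollama1:/root/.ollama -p 11434:11434 ollama/ollama
      docker run -d --name ollama-2 -v ollama2:/root/.ollama -p 11435:11434 ollama/ollama

      # Pull a model into each instance
      docker exec ollama-1 ollama pull llama2
      docker exec ollama-2 ollama pull llama2

      # nginx, HAProxy, etc. can then round-robin between
      # localhost:11434 and localhost:11435

    For the air-gapped case, the image itself can be moved across with docker save / docker load; the model weights would still need to be copied offline into each instance's .ollama volume.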

  • @74Gee
    @74Gee 3 months ago

    RunPod is very affordable too, from 17c per hour for an Nvidia 3080.

    • @IanWootten
      @IanWootten  3 months ago +1

      Yeah, I wanted to do a comparison of all the new services appearing.

  • @BileGamer2002
    @BileGamer2002 6 months ago

    Hello. I'm developing an on-premises application that consumes Ollama via its API. However, after a few minutes, the Ollama server stops automatically. I would like to know if there is any way to keep it running until I stop it.
    Thank you very much.
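
    Two different things can look like "the server stops": a manually started ollama serve dying with its terminal session, or the model being unloaded from memory after the default idle timeout (around five minutes), which makes the next request slow. A rough sketch of the usual fixes, assuming a systemd-based install and a recent Ollama version (llama2 and the 24h value are placeholders):

      # Keep the server itself running across logouts and reboots
      sudo systemctl enable --now ollama

      # Keep a model resident longer than the default, per request...
      curl http://localhost:11434/api/generate -d '{"model": "llama2", "keep_alive": "24h"}'

      # ...or globally, by setting OLLAMA_KEEP_ALIVE=24h as an
      # Environment= line in a systemd override for the service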

  • @VulcanOnWheels
    @VulcanOnWheels 5 months ago

    0:08 How did you get to your pronunciation of Linux?
    10:53 How could one correct the error occurring here?

  • @JordanCassady
    @JordanCassady 4 months ago

    Which version of Ubuntu did you choose? It seems to be missing from the video.

  • @ITworld-gw9iy
    @ITworld-gw9iy 4 months ago

    For a 70B model, what server would I need to rent? The docs say at least 64GB of RAM... but there are no minimum specs for the Nvidia card in the docs. Who has experience with this?

  • @rishavbharti5225
    @rishavbharti5225 6 months ago

    This was a really helpful video, Ian!
    But I am facing one issue: after running ollama serve, the server shuts down when I close the terminal. Please tell me if there is a way to prevent this.
    Thanks!
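
    A foreground ollama serve belongs to the terminal's session, so it goes down when that session ends. The systemd route shown above (sudo systemctl enable --now ollama) is the cleanest fix; if you would rather keep launching it by hand, a hedged alternative is to detach it from the terminal (the log path here is just an example):

      # Survives closing the terminal; output goes to a log file
      nohup ollama serve > ~/ollama.log 2>&1 &

      # Or run it inside a persistent session with tmux or screen
      tmux new -s ollama 'ollama serve'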

  • @AdarshSingh-rm6er
    @AdarshSingh-rm6er 2 months ago

    Hello Ian, it's a great video. I have a query and would be very thankful if you could help me; I have been stuck for 3 days. I am trying to host Ollama on my server. I am very new to Linux and don't understand what I am doing wrong. I am using nginx to proxy Ollama and have configured the nginx file, yet I am getting an access denied error. I can show you the config if you want. Please respond.
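
    Hard to diagnose without seeing the config, but a hedged checklist for an nginx-in-front-of-Ollama setup is below. The Host workaround is a commonly reported fix rather than a verified one: people frequently see 403 "access denied" responses when Ollama does not recognise the forwarded Host/Origin, and forcing the Host header (or setting OLLAMA_ORIGINS on the service) is the usual suggestion:

      # After editing the nginx site config, sanity-check and reload it
      sudo nginx -t && sudo systemctl reload nginx

      # Inside the location block proxying to Ollama, the commonly
      # reported fix is:
      #     proxy_pass http://127.0.0.1:11434;
      #     proxy_set_header Host 127.0.0.1:11434;
      # (or set OLLAMA_ORIGINS on the ollama service instead)

      # Confirm Ollama itself answers locally, bypassing nginx
      curl http://127.0.0.1:11434/api/tags

      # On SELinux-enabled distros, allow nginx to proxy to local ports
      sudo setsebool -P httpd_can_network_connect 1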

  • @jamiecropley
    @jamiecropley 8 months ago +1

    Anyone got this running on anything lower than 8GB of RAM on DigitalOcean? I tried locally on my own computer with a huge prompt on a 3B model, and it only used around 1GB of RAM maximum.

    • @IanWootten
      @IanWootten  7 months ago

      Yeah, depends on the model itself. Ollama often lists the memory requirements on the model page, e.g. ollama.ai/library/llama2

  • @peteprive1361
    @peteprive1361 11 months ago +1

    I got an error while executing the curl command: "Failure writing output to destination"

    • @IanWootten
      @IanWootten  11 months ago +2

      Weird. Perhaps try running it from a directory you are certain you have write access to.

  • @SuperRia33
    @SuperRia33 1 month ago

    How do you connect to the server via a Python client or FastAPI for integration with projects/notebooks?

    • @IanWootten
      @IanWootten  1 month ago

      If you simply want to make a request to an API from Python, there are plenty of options. You can use a package from Python itself like urllib, or a popular library like requests.
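
      Whatever Python client you use ends up hitting the same HTTP endpoints, so it can help to see the raw calls first. A minimal sketch against a default local install with llama2 already pulled (swap in your own model name); from Python, a requests.post to the same URL with the same JSON body does the equivalent:

        # One-shot, non-streaming completion
        curl http://localhost:11434/api/generate -d '{
          "model": "llama2",
          "prompt": "Why is the sky blue?",
          "stream": false
        }'

        # Chat-style endpoint
        curl http://localhost:11434/api/chat -d '{
          "model": "llama2",
          "messages": [{"role": "user", "content": "Hello"}],
          "stream": false
        }'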

  • @ankitvaghasiya3789
    @ankitvaghasiya3789 3 months ago

    Thank you 🦙

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 9 months ago

    Do you think it is safe to install on your own laptop instead of a cloud server?

    • @IanWootten
      @IanWootten  9 months ago

      Yes. Ollama has desktop versions too, and it doesn't send anything externally when you query it if you go that route. I have another video where I do this on my Mac.

  • @sugihwarascom
    @sugihwarascom 9 months ago

    How come the model runs in 8GB of RAM? The docs themselves say it needs at least 16GB for llama2.

    • @IanWootten
      @IanWootten  9 months ago +2

      No idea - I was going on experience using Ollama rather than the model itself.

  • @blasandresayalagarcia3472
    @blasandresayalagarcia3472 7 months ago

    What is the cost of web hosting Ollama or these types of LLM models?

    • @IanWootten
      @IanWootten  7 months ago

      In this case, it'll be the price of the virtual machine you choose to install it on, so it depends on the provider.

  • @GenerativeAI-Guru
    @GenerativeAI-Guru 9 months ago

    How do I change the IP and port for Ollama?

    • @IanWootten
      @IanWootten  9 months ago

      Use the env var OLLAMA_HOST. e.g. OLLAMA_HOST=127.0.0.1:8001 ollama serve

    • @GenerativeAI-Guru
      @GenerativeAI-Guru 9 months ago

      Thanks

  • @petermarin
    @petermarin 10 months ago

    Benefits of running it like this vs Docker?

    • @IanWootten
      @IanWootten  10 months ago

      Running anything within a container will always mean the app runs slower.

  • @nickholden585
    @nickholden585 9 months ago

    Right now there is an issue with Ollama where, if you create a model, it spams you with "do not have permission to open Modelfile".
    It's super odd, because even if you give full read and execute rights to every user, or run the command with sudo, it still fails.
    The only viable workaround is to run it in /tmp.

    • @IanWootten
      @IanWootten  9 months ago

      This is an issue with the current user not having access to the ollama group. There's a recommended solution posted here (though sounds like it might not be completely resolved): github.com/jmorganca/ollama/issues/613#issuecomment-1756293841

    • @nickholden585
      @nickholden585 9 months ago

      @@IanWootten Saw that.
      Even after running sudo usermod -a -G ollama $(whoami),
      it still won't work.
      The idea to run it in /tmp came from that thread, haha.
      Outside of this issue, the rest of the project is pretty cool IMO.
      Local LLMs with reinforcement learning, WiFi and direct brain integration will be the future.
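
      If anyone else hits this, one detail that often trips people up is that the usermod fix only applies to new login sessions, so it can look like it "didn't work" in the current shell. That may not be the whole story here (the linked issue suggests it wasn't fully resolved), but a hedged sketch of the full sequence (mymodel is a placeholder name):

        # Add your user to the ollama group
        sudo usermod -a -G ollama $(whoami)

        # Group changes only take effect in a new session:
        # log out and back in, or start a subshell with the new group
        newgrp ollama

        # Confirm the membership, then retry
        id
        ollama create mymodel -f ./Modelfile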

  • @davidbl1981
    @davidbl1981 7 months ago

    Even if the killer is dead on the floor the killer is still there and would still be a killer 😅 so the correct answer would be 3.