Ollama adds OpenAI API support

  • Published: 21 Dec 2024

Comments • 14

  • @dank_shiv • 3 months ago

    Can you tell me how to use Ollama in a pre-made app that uses the OpenAI API? It showed an error for both Ollama with a locally installed llama3 and the Groq API, which is sad. I don't want to pay for an LLM when I have a good graphics card, but I would prefer Groq. How can I set the Groq API in place of the OpenAI API, and what changes do I need to make?
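
    Both Ollama and Groq expose OpenAI-compatible endpoints, so usually the only change needed is the client's base_url (plus a real key for Groq). A minimal sketch, assuming llama3 has been pulled locally and that a GROQ_API_KEY environment variable holds a Groq key:

    import os
    from openai import OpenAI

    # Ollama's OpenAI-compatible endpoint; the client requires an api_key,
    # but Ollama ignores its value.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    # For Groq, swap in its OpenAI-compatible endpoint and a real key, e.g.:
    # client = OpenAI(base_url="https://api.groq.com/openai/v1",
    #                 api_key=os.environ["GROQ_API_KEY"])

    completion = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": "Say hi!"}],
    )
    print(completion.choices[0].message.content)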

  • @perspectivex • 8 months ago +1

    This was good, very concise. Unfortunately, when I try using the OpenAI API in Python to access an Ollama server running on a _different_ machine (not just 'localhost'), I get "Connection refused" despite two hours of trying to vary every possible factor I could think of. Really frustrating, but I'll keep hammering away at it tomorrow.

    • @learndatawithmark • 8 months ago

      Did you try setting the OLLAMA_HOST environment variable to "0.0.0.0"?

    • @perspectivex • 7 months ago

      @@learndatawithmark thanks for the reply. No, I saw that in the FAQ, I think, but glossed over it. I've now tried your suggestion, unfortunately with no luck. I killed the running Ollama using the toolbar 'quit' (this is on an M1 MacBook Air), did the launchctl setenv for OLLAMA_HOST to 0.0.0.0, confirmed it was set with launchctl getenv, then ran "ollama serve" and Ollama started up (spewed a bunch of log output, then was apparently waiting). I went to the remote computer (an older iMac i7), ran the code, and still got "Connection refused". Pinging it works, so the IP is fine. I guess I can paste the whole program here since it's just a few lines. I also tried code variations using http or https, with and without v1, and with and without a trailing "/". The code:
      from openai import OpenAI

      # NOTE: the URL needs an explicit scheme, and the client requires an
      # api_key argument (Ollama ignores its value).
      client = OpenAI(base_url="http://10.8.6.11:11434/v1", api_key="ollama")

      completion = client.chat.completions.create(
          model="mistral",
          messages=[{"role": "user", "content": "Say hi!"}]
      )

      print(completion.choices[0].message)
      Anyway, whether or not you look any deeper into this problem, thanks for the 0.0.0.0 suggestion. I'll keep trying other things.

    • @perspectivex • 7 months ago

      @@learndatawithmark hi, thanks for the reply. I tried replying five times before this, but apparently the YouTube comment bouncer took issue with my reply (I had some code snippets?).

    • @madhudson1 • 7 months ago

      @@perspectivex have you successfully connected to the remote Ollama API with any other tooling, or a simple HTTP request?
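
      A quick way to check that from Python, sketched with the requests package (assumed installed) and the server address from the earlier comment; /api/tags is Ollama's model-listing endpoint, so any HTTP response at all proves the port is reachable:

      import requests

      # Probe the remote Ollama server directly, bypassing the openai client;
      # /api/tags just lists the models installed on that server.
      resp = requests.get("http://10.8.6.11:11434/api/tags", timeout=5)
      print(resp.status_code, resp.json())  # expect 200 and a JSON model list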

    • @perspectivex • 7 months ago

      @@madhudson1 It will be some days before I can look at this problem again. But last I checked, even a simple HTTP (or HTTPS) request to the Ollama server didn't work either. I feel like maybe it's some security rule on macOS Sonoma that isn't anywhere obvious in Settings (Privacy & Security, Firewall...). My router shouldn't be blocking anything, as they're both on the same local Wi-Fi. I just wiped and reinstalled the whole system and will reinstall Ollama and try it again, maybe on the weekend.

  • @shobhitagnihotri416 • 10 months ago

    Sir, how do we work with this in VS Code? Please reply; it's required for my internship.

    • @tacorevenge87 • 10 months ago

      Self-entitled much? It's your job, not his.

    • @shobhitagnihotri416 • 10 months ago

      I am asking him, not you, @@tacorevenge87

    • @learndatawithmark • 10 months ago +1

      I'm not entirely sure what you're asking - whether you can use this in a Python programme written in VS Code? If so, then yeah, you can use it the same way as other Python libraries.

  • @bithigh8301 • 5 months ago

    Very nice channel, your videos and tutorials are really great! Thanks for sharing.
    Let's comment and subscribe, Mark deserves to have 100M subscribers!