Ollama Web UI Tutorial - An Alternative To ChatGPT With Open Source Models

  • Published: Nov 26, 2024

Comments • 33

  • @shivampradhan3645
    @shivampradhan3645 6 months ago +7

    There is no one who captures the changing GenAI domain like you, sir.

  • @gsignup0987
    @gsignup0987 6 months ago +1

    One of the best channels on RUclips.
    I learn a lot from you, sir.

  • @rohulamin496
    @rohulamin496 3 months ago

    You are such a great teacher. Please keep uploading videos.

  • @mspicela
    @mspicela 6 months ago

    Great video, I had not heard of it yet. Got it working. I had been looking for something just like this.

  • @FranciscoMonteiro25
    @FranciscoMonteiro25 5 months ago +1

    Open WebUI has moved on a lot since this video, with functionality that can boost productivity much more. It's time you did a new video on the new functionality, like Functions, prompts, RAG, etc.

  • @jawwadhussain8457
    @jawwadhussain8457 3 months ago +1

    It's good, but as someone new to this, we are missing how to use this chatbot on a website. A live demonstration of linking this chatbot with a website (WordPress, etc.) would be helpful.

  • @rahulmanocha4533
    @rahulmanocha4533 6 months ago +1

    Hello sir, how does your membership work? Is there any group or something? As per the YT membership tab, the content is 3 months old. Please let me know how I can access your latest live-streaming projects.

  • @Cine95
    @Cine95 6 months ago +1

    Thanks, brother, but I would love it if you could cover models that allow API access, because not everyone here is running $4,000 worth of equipment.

  • @techiemedic
    @techiemedic 6 months ago +1

    Hi Krish, I have a laptop with an Nvidia RTX 3050 GPU, but Docker always gives me an error saying the GPU was not found. Can you show us how you've set up the Docker environment to get the GPU/CUDA option?
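
    For the "GPU not found" question above, the usual fix on Linux is to install the NVIDIA Container Toolkit and pass `--gpus all` to Docker. A rough sketch, assuming Ubuntu with current NVIDIA drivers; the Open WebUI CUDA image tag is taken from the project's README and may change, so verify it there:

```shell
# Install the NVIDIA Container Toolkit so Docker containers can see the GPU
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify that a container can actually reach the GPU
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Run Open WebUI with GPU support enabled
docker run -d --gpus all -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:cuda
```

    On Windows, the same `--gpus all` flag works only when Docker Desktop uses the WSL 2 backend with GPU support enabled.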

  • @TheScriptx
    @TheScriptx 1 month ago

    How can I integrate this with other applications so that it can expose an API?
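
    On the integration question above: Ollama itself already exposes a REST API on port 11434, so other applications can call it directly. A minimal sketch in Python using only the standard library; the model name `llama2` is just an example of something you may have pulled locally:

```python
import json
import urllib.request

# Ollama's default local REST endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama REST API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("llama2", "Why is the sky blue?")
# With an Ollama server running locally, uncomment to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

    Any language that can POST JSON can use the same endpoint, which is how tools like Open WebUI talk to Ollama under the hood.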

  • @SubinKrishnaKT
    @SubinKrishnaKT 4 months ago

    When I open the web UI along with my local LLM Llama 2, I can see GPT-4 and GPT-4o in the choose-LLM dropdown. Do you know what is causing this?

  • @Maisonier
    @Maisonier 5 months ago

    Is there any way to use FAISS with Pipelines in Open WebUI? Liked and subscribed.

  • @joekustek2623
    @joekustek2623 6 months ago

    Existing models do not pull in like yours. What did you do so that your existing models are shown in the Web UI?

  • @muhammedyaseenkm9292
    @muhammedyaseenkm9292 6 months ago

    Krish, can you give a complete tutorial on distributed computing?

  • @SRK-ub6fe
    @SRK-ub6fe 6 months ago

    I did all the steps, and localhost is opening. When I try to find the models in the list, it shows "no models found". How do I fix it?

    • @beyondbelief8458
      @beyondbelief8458 6 months ago

      I think you need to download the models using the command prompt. Correct me if I am wrong.
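
    The reply above is right: Open WebUI only lists models that the local Ollama server already holds. A minimal sketch of pulling one from the terminal; `llama2` is just an example model name, see the Ollama model library for what's available:

```shell
# Download a model into the local Ollama store
ollama pull llama2

# Confirm it is available; Open WebUI reads this same list
ollama list
```

    After the pull completes, refreshing the Open WebUI page should make the model appear in the dropdown.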

  • @inesoueslati6037
    @inesoueslati6037 2 months ago

    Do you have any idea how to use function calling with Open WebUI, please?

  • @mahanteshpattakal4445
    @mahanteshpattakal4445 6 months ago

    @Krish, I intend to use Ollama with VS Code so that it can help me code better. How can I achieve this using the Ollama Code Llama model and VS Code?

    • @shubhammaurya3671
      @shubhammaurya3671 6 months ago

      I was looking for the same kind of solution. Can we collaborate on this to make some kind of npm library or extension for VS Code?
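
    For the VS Code question above, one common route (rather than writing a new extension) is the Continue extension, which can talk to a local Ollama server. A hedged sketch of its `config.json`; the field names follow Continue's documented Ollama provider, but verify them against the current Continue docs, as the config format has changed over time:

```json
{
  "models": [
    {
      "title": "Code Llama (local)",
      "provider": "ollama",
      "model": "codellama"
    }
  ]
}
```

    With `ollama pull codellama` done and this config in place, Continue's chat and autocomplete run entirely against the local model.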

  • @rsraman123
    @rsraman123 6 months ago

    This is nice. How can I pass through a Colab GPU, like I do for Ollama on the command line, to use the Colab GPU for the Open WebUI app as well?

  • @samridhhfincoach4243
    @samridhhfincoach4243 4 months ago

    Though the language models are open source and free, I think we need to pay for a Docker Desktop subscription. Can you please confirm? Please also make a video on the minimum configuration required for running these models effectively locally.

  • @RishiRajxtrim
    @RishiRajxtrim 6 months ago +1

    🙏👍

  • @AasherKamal
    @AasherKamal 6 months ago +1

    What configurations are required to run this?

    • @shivampradhan3645
      @shivampradhan3645 6 months ago

      I ran it on a MacBook M1 with 128 GB of storage and 8 GB of RAM, and it is working.

  • @hassanahmad1483
    @hassanahmad1483 6 months ago

    Is there any way to deploy these custom-trained models?

  • @lakshmikanth1988
    @lakshmikanth1988 5 months ago

    Hi, how do I download models on localhost, like CodeGuru, Code Llama, etc.?

  • @TGajanan
    @TGajanan 6 months ago

    What is the accuracy of this model?

  • @chillakalyan
    @chillakalyan 6 months ago +1

    😢 This shows the impact on ChatGPT.

  • @A_k_h_i_l_007
    @A_k_h_i_l_007 6 months ago

    Is it possible on Linux?
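
    Yes, and Linux is arguably the easiest platform: Ollama ships an official install script, and Open WebUI runs the same way via Docker. A sketch, assuming Docker is already installed; the install URL and Docker flags below are the ones documented by the Ollama and Open WebUI projects, so double-check them there before running:

```shell
# Install Ollama on Linux via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and chat from the terminal to confirm it works
ollama run llama2

# Run Open WebUI in Docker, pointing it at the host's Ollama server
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```

    The UI is then reachable at http://localhost:3000, just as shown in the video.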

  • @mohsinkhan-bw3cd
    @mohsinkhan-bw3cd 6 months ago

    Why do you keep shaving from now on?

    • @krishnaik06
      @krishnaik06  6 months ago +6

      Bhai, there are many other things to discuss... focus on your learning.

    • @mohsinkhan-bw3cd
      @mohsinkhan-bw3cd 6 months ago

      Sorry, sir, if I hurt you.

  • @AzarMohammad-gx6mz
    @AzarMohammad-gx6mz 6 months ago +2

    Hello sir.