Ollama Web UI (ChatGPT-ish) - Local AI FTW!!!

  • Published: Sep 10, 2024
  • Ollama Web UI: A User-Friendly Web Interface for Chat Interactions. ChatGPT-Style Web Interface for Ollama 🦙
    My Ollama Tutorial - • Ollama on CPU and Priv...
    Features ⭐
    🖥️ Intuitive Interface: Our chat interface takes inspiration from ChatGPT, ensuring a user-friendly experience.
    📱 Responsive Design: Enjoy a seamless experience on both desktop and mobile devices.
    ⚡ Swift Responsiveness: Enjoy fast and responsive performance.
    🚀 Effortless Setup: Install seamlessly using Docker for a hassle-free experience.
    💻 Code Syntax Highlighting: Enjoy enhanced code readability with our syntax highlighting feature.
    ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
    📥🗑️ Download/Delete Models: Easily download or remove models directly from the web UI.
    🤖 Multiple Model Support: Seamlessly switch between different chat models for diverse interactions.
    ⚙️ Multi-Model Conversations: Effortlessly engage with several models simultaneously, harnessing their unique strengths for optimal responses.
    🤝 OpenAI Model Integration: Seamlessly utilize OpenAI models alongside Ollama models for a versatile conversational experience.
    🔄 Regeneration History Access: Easily revisit and explore your entire regeneration history.
    📜 Chat History: Effortlessly access and manage your conversation history.
    📤📥 Import/Export Chat History: Seamlessly move your chat data in and out of the platform.
    🗣️ Voice Input Support: Engage with your model through voice interactions; enjoy the convenience of talking to your model directly. Additionally, explore the option for sending voice input automatically after 3 seconds of silence for a streamlined experience.
    ⚙️ Fine-Tuned Control with Advanced Parameters: Gain a deeper level of control by adjusting parameters such as temperature and defining your system prompts to tailor the conversation to your specific preferences and needs.
    🔐 Auth Header Support: Effortlessly enhance security by adding Authorization headers to Ollama requests directly from the web UI settings, ensuring access to secured Ollama servers.
    🔗 External Ollama Server Connection: Seamlessly link to an external Ollama server hosted at a different address by configuring the environment variable during the Docker build phase. You can also set the external server connection URL from the web UI after the build.
    🔒 Backend Reverse Proxy Support: Strengthen security by enabling direct communication between Ollama Web UI backend and Ollama, eliminating the need to expose Ollama over LAN.
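The Docker setup and external-server features above can be sketched roughly as follows. The image name, port mapping, and the OLLAMA_API_BASE_URL variable follow the ollama-webui README of the time and may have changed since (the project has been renamed Open WebUI), so treat this as a sketch rather than the canonical command:

```shell
# Run the web UI, letting the container reach an Ollama server on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui --restart always \
  ghcr.io/ollama-webui/ollama-webui:main

# Or point the UI at an Ollama server on a different address
# (hypothetical host; Ollama's default API port is 11434)
docker run -d -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://example.com:11434/api \
  --name ollama-webui --restart always \
  ghcr.io/ollama-webui/ollama-webui:main
```

The UI is then reachable at http://localhost:3000.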
    🔗 Links 🔗
    Ollama Web UI - github.com/oll...
    Download Ollama here - ollama.ai/
    Ollama Model library - ollama.ai/libr...
    ❤️ If you want to support the channel ❤️
    Support here:
    Patreon - / 1littlecoder
    Ko-Fi - ko-fi.com/1lit...
    🧭 Follow me on 🧭
    Twitter - / 1littlecoder
    Linkedin - / amrrs
  • Science

Comments • 42

  • @Arrowtake
    @Arrowtake 9 months ago +6

    This is amazing! Thanks! Not only did I learn about Ollama but also about this very neat web UI. Runs extremely well on my MBPro M2 with 32GB of memory (Mistral)

    • @1littlecoder
      @1littlecoder  9 months ago +1

      Glad it helped! Really happy 😊

    • @programingcuriosity5075
      @programingcuriosity5075 5 months ago

      No s*** it runs well on a MacBook M2 Pro with 32GB of RAM hahaha :) Sorry, had to say this. Congrats

  • @vrynstudios
    @vrynstudios 9 months ago +6

    Any difference between LM Studio and this?

  • @KevinKreger
    @KevinKreger 9 months ago +1

    bye bye ChatGPT! Great stuff. 🙂

  • @Suchtzocker
    @Suchtzocker 8 months ago +1

    How can I speed up Ollama's general response generation? I'm using a model which takes ages to generate a response

  • @SubinKrishnaKT
    @SubinKrishnaKT 2 months ago

    When I open the web UI along with my local LLM Llama 2, I can see GPT-4 and GPT-4o in the choose-LLM dropdown. Do you know what is causing this?

  • @rabeemohammed5351
    @rabeemohammed5351 9 months ago +3

    Can I run it with my PDF and offline?
    Or do you have a video on running it offline with a PDF?

  • @luigitech3169
    @luigitech3169 9 months ago +5

    ollama-webui works really well. Do you know how to integrate RAG function calling with it?

    • @MAzurekkk1
      @MAzurekkk1 6 months ago

      Create a master AI that delegates tasks to smaller AIs, such as text-to-speech, speech-to-text, text-to-text, and translator AIs. In very simple terms, you set up several independent AIs and then add another AI so they can communicate with each other.

  • @virajkambleofficial17
    @virajkambleofficial17 5 months ago

    Can I modify the logos and the background in the Docker image itself? If yes, please tell me how; if not, how could I build the frontend? Please guide me.

  • @aiamfree
    @aiamfree 7 months ago

    A little late to this one, mainly because I'd given up on these local web UI options... but this is incredibly well done!

    • @1littlecoder
      @1littlecoder  7 months ago

      Try this once as well - ruclips.net/video/hX8pt_drIck/видео.html

  • @bashamsk1288
    @bashamsk1288 9 months ago +1

    Does it do anything better? Like, does it do better quantization for a speedup? What's the inference time? Better?

  • @tikendraw
    @tikendraw 9 months ago +1

    Hey, answer this: what type of model is generally better for RAG / agent systems? Base model, chat model, or instruct model? Maybe they are the same; I'm asking because I don't know.
    I did not watch the video; will do while eating.

    • @1littlecoder
      @1littlecoder  9 months ago +3

      This video goes through some of the items - ruclips.net/video/Ce0OKpMhvXw/видео.html
      Generally, if I were to do RAG or agents today, I'd start with Mistral!

  • @gold-junge91
    @gold-junge91 6 months ago

    I get an error on my root server that there is no Ollama installed, but if I try ollama run openhermes in the terminal, it works

  • @attilavass6935
    @attilavass6935 9 months ago

    I think a mixture of OpenAI Chat and Playground features would also be great in this web UI, so we'd be able to use system prompt presets and settings like temperature at the thread level, which I think is a must.

  • @ihaveacutenose
    @ihaveacutenose 6 months ago

    Is there an interface like OpenAI's Playground?

  • @tigeroats913
    @tigeroats913 7 months ago

    The Docker command does not work; it tells me it has been deprecated

  • @GrecoFPV
    @GrecoFPV 5 months ago

    Can I connect this with n8n and an API?

  • @pleabargain
    @pleabargain 9 months ago

    Thanks for posting!

  • @aleprex77
    @aleprex77 8 months ago

    Thanks for your video! I always get this error: Connection Issue or Update Needed
    Oops! It seems like your Ollama needs a little attention.
    We've detected either a connection hiccup or observed that you're using an older version. Ensure you're on the latest Ollama version
    (version 0.1.16 or higher) or check your connection.

  • @143comedyscenes
    @143comedyscenes 6 months ago

    Can I build a chatbot using Ollama?

  • @trapez_yt
    @trapez_yt 27 days ago

    Could you make a video on where the login page is in Ollama Web UI? I want to customize it to my liking

  • @gaganpreetsingh-6453
    @gaganpreetsingh-6453 7 months ago

    Is it possible to use our own fine-tuned models with Ollama and its web UI?

    • @1littlecoder
      @1littlecoder  7 months ago

      Yes, you can. It can be defined in the model card: ruclips.net/video/C0GmAmyhVxM/видео.htmlsi=TMGdUYElCoApN4yi

  • @ds920
    @ds920 9 months ago +2

    Dear 1littlecoder ❤ would you please be so kind as to cover how to plug my own GGUF files into Ollama?

    • @1littlecoder
      @1littlecoder  9 months ago +2

      This should have it covered: ruclips.net/video/C0GmAmyhVxM/видео.html
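As a quick sketch of the usual Modelfile route for custom GGUF files (the FROM directive and `ollama create` workflow are from Ollama's docs; the file and model names here are hypothetical examples):

```shell
# Write a minimal Modelfile pointing at a local GGUF file (hypothetical name)
echo 'FROM ./my-finetune.Q4_K_M.gguf' > Modelfile

ollama create my-finetune -f Modelfile   # register the model with Ollama
ollama run my-finetune                   # chat with it; it then also shows up in the web UI
```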

  • @3pm479
    @3pm479 9 months ago

    Thanks bro

  • @hustle8656
    @hustle8656 9 months ago

    this is so cool

  • @jawadmansoor6064
    @jawadmansoor6064 9 months ago

    Can Ollama be used as an API? I.e., I want to use the responses with Python in agents.

    • @1littlecoder
      @1littlecoder  9 months ago

      Yes

    • @1littlecoder
      @1littlecoder  9 months ago

      ruclips.net/video/C0GmAmyhVxM/видео.htmlsi=7nlvt7P67GLDPg5c
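To expand on the "Yes" above: Ollama exposes a local REST API, by default on port 11434. A minimal Python sketch, assuming a locally running Ollama server and using the /api/generate endpoint; the model name is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local API address


def build_generate_request(model, prompt, temperature=0.8, system=None):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON reply instead of a token stream
        "options": {"temperature": temperature},
    }
    if system is not None:
        body["system"] = system  # optional system prompt
    return body


def generate(model, prompt, **kwargs):
    """POST a prompt to a locally running Ollama server and return its reply text."""
    data = json.dumps(build_generate_request(model, prompt, **kwargs)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires Ollama running): generate("mistral", "Why is the sky blue?")
```

From an agent framework, you would call generate() wherever an LLM completion is needed.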

  • @isaquito9764
    @isaquito9764 9 months ago

    Thank you bro, can we integrate it in Kali Linux using ShellGPT?

  • @CognitiveComputations
    @CognitiveComputations 9 months ago

    haha you beat me to it

  • @user-ce7vu3ct3y
    @user-ce7vu3ct3y 9 months ago +1

    Lmao, I just gave a thanks, and YouTube removed my comment.

    • @user-ce7vu3ct3y
      @user-ce7vu3ct3y 9 months ago +1

      @1littlecoder I hope the amount is not removed too :)

    • @1littlecoder
      @1littlecoder  9 months ago +1

      What's strange is that I had already replied to it, and yet the comment was gone 😭

  • @mcw0530
    @mcw0530 8 months ago

    F docker. Let's install things like real Men on bare metal.