Local LLM with Ollama, LLAMA3 and LM Studio // Private AI Server

  • Published: 28 Jun 2024
  • Forget using public generative AI services like ChatGPT. Use local large language models (LLMs) instead, which lets you create a private AI server. In this video, we step through setting up a private AI server in Windows Subsystem for Linux (WSL), plus a few hacks needed so that the API running inside WSL is reachable by the open-source frontend Open WebUI. Example commands for the key steps are sketched below the chapter list.
    Written post on the steps: www.virtualizationhowto.com/2...
    Ollama download: ollama.com/download
    LM Studio: lmstudio.ai/
    ★ Subscribe to the channel: / @virtualizationhowto
    ★ My blog: www.virtualizationhowto.com
    ★ Twitter: / vspinmaster
    ★ LinkedIn: / brandon-lee-vht
    ★ Github: github.com/brandonleegit
    ★ Facebook: / 100092747277326
    ★ Discord: / discord
    ★ Pinterest: / brandonleevht
    Introduction - 0:00
    What are Large Language Models (LLMs) - 0:45
    Advantages of hosting LLMs locally - 1:30
    Hardware needed to run LLMs locally - 2:30
    Setting up Ollama - 3:18
    Looking at the Linux script for Ollama - 3:40
    Downloading and running popular LLM models - 4:18
    Command to download LLAMA3 language model - 4:31
    Initiating a chat session from the WSL terminal - 5:16
    Looking at Hugging Face open source models - 5:46
    Open WebUI web frontend for private AI servers - 6:20
    Looking at the Docker run command for Open WebUI - 6:50
    Accessing, signing up, and tweaking settings in Open WebUI - 7:22
    Reviewing the architecture of the private AI solution - 7:38
    Talking about a hack for WSL to allow traffic from outside WSL to connect - 7:54
    Looking at the netsh command for the port proxy - 8:24
    Chatting with the LLM using Open WebUI - 9:00
    Writing an Ansible Playbook - 9:20
    PowerCLI scripts - 9:29
    Overview of LM Studio - 9:42
    Business use cases for local LLMs - 10:16
    Wrapping up and final thoughts - 10:51
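
    For the Ollama setup step, installation is a single script from ollama.com, run inside the WSL Ubuntu terminal. A minimal sketch (the script URL is the official one from the download page):
      # Download and run the official Ollama install script
      curl -fsSL https://ollama.com/install.sh | sh
      # Verify the install; the Ollama API listens on port 11434 by default
      ollama --version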
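
    For the model download and chat steps, pulling LLAMA3 and starting a chat are single Ollama commands. A sketch, assuming the default llama3 tag:
      # Download the LLAMA3 model weights
      ollama pull llama3
      # Start an interactive chat session in the WSL terminal
      ollama run llama3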
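
    For the Open WebUI step, the frontend runs as a Docker container that talks to the Ollama API. A sketch based on the project's documented docker run command (the 3000:8080 port mapping and volume name are common defaults, not specific to this video):
      docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui --restart always \
        ghcr.io/open-webui/open-webui:main
      # The web frontend is then reachable at http://localhost:3000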
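
    For the WSL networking hack, a netsh port proxy on the Windows host forwards outside traffic to the service running inside WSL. A sketch, assuming Ollama's default port 11434 and a placeholder WSL IP (find the real one with: wsl hostname -I):
      :: Run from an elevated Windows command prompt; replace 172.x.x.x with your WSL IP
      netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=11434 connectaddress=172.x.x.x connectport=11434
      :: List the active port proxy rules to verify
      netsh interface portproxy show v4tov4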

Comments • 9

  • @kenmurphy4259 • 9 days ago

    Thanks Brandon, nice review of what’s out there for local LLMs

  • @steheverod • 7 days ago

    It's a great idea, thanks Brandon. I will test it in my homelab.

  • @fermatdad • 4 days ago

    Thank you for the helpful tutorial.

  • @romayojr • 8 days ago +1

    This is awesome and I can't wait to try it. Is there a mobile app for Open WebUI?

  • @EhtizanVideoEditor007 • 4 days ago

    Hey VirtualizationHowto, I just watched your video and I must say that it was really informative and well-made.
    I was wondering if I could help you edit your videos and repurpose your long videos into highly engaging shorts? I can also make high-CTR thumbnails for your channel.

  • @nobody-P • 9 days ago

    😮I'm gonna try this now

  • @trucpham9772 • 8 days ago

    How do I run llama3 with Ollama on macOS? I want to expose localhost so I can use nextchatgpt with it. Can you share the commands for this setup?

  • @user-nh4kg2zj2q • 7 days ago

    Nobody has explained how to install Ollama and run it the proper way; it should be step by step. Is Docker required before installing Ollama? I tried to install Ollama on its own and it didn't install completely!! I don't know why.

    • @kironlau • 6 days ago +1

      1. You should mention what your OS is.
      2. Read the official documentation.
      3. If you're on Windows, just download the exe/msi file and install it with one click (and click Yes...).