HuggingChat Installation with aitom8 Plugin | Chat-UI + Inference Server + LLM

  • Published: 7 Aug 2024
  • In this video you learn how to install HuggingChat with just one command. All you need is aitom8 and the HuggingChat aitom8 Plugin.
    We cover everything: the Chat-UI, the Text Generation Inference Server, and the Large Language Models (LLMs), as well as prerequisites such as MongoDB (sketches of the commands involved follow the chapter list below).
    Chapters in this video:
    0:00 - Intro and Explanation
    01:05 - Overview HuggingChat Installation
    01:51 - MongoDB Installation
    02:24 - HuggingChat aitom8 Plugin
    03:22 - Install HuggingChat with one command
    05:46 - Run the Chat-UI
    06:23 - Install the Text Generation Inference Server
    07:11 - Run the Text Generation Inference Server
    08:28 - Outro
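    As a reference for the MongoDB chapter: the chat-ui project's documented way to satisfy this prerequisite is a local Docker container. A minimal sketch of that approach; the container name is arbitrary, and the aitom8 plugin may provision MongoDB differently:

      # Start a local MongoDB instance; the Chat-UI stores conversation
      # history in MongoDB, reachable on its default port 27017.
      docker run -d --name mongo-chatui -p 27017:27017 mongo:latest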
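    Likewise, for the inference-server chapters: the Text Generation Inference server is commonly launched through Hugging Face's official Docker image. A sketch under assumptions, not necessarily the exact command from the video; it presumes a CUDA-capable GPU, and the model ID is an illustrative placeholder:

      # Launch text-generation-inference on all available GPUs.
      # --shm-size 1g provides the shared memory NCCL needs for sharding;
      # the mounted volume caches downloaded model weights.
      model=OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5  # placeholder model ID
      docker run --gpus all --shm-size 1g -p 8080:80 \
        -v $PWD/data:/data \
        ghcr.io/huggingface/text-generation-inference:latest \
        --model-id $model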
    Video related links:
    - ChatGPT - but Open Sourced | Running HuggingChat locally (VM): • ChatGPT - but Open Sou...
    - Running HuggingChat locally (VM): www.blueantoinette.com/2023/0...
    - AI automation with aitom8: • AI automation with ait...
    - SSH into Remote VM with VS Code: • SSH into Remote VM wit...
    - NVIDIA CUDA Installation on Debian 11: • NVIDIA CUDA Installati...
    - aitom8 - AI Automation: www.blueantoinette.com/produc...
    - HuggingChat aitom8 Plugin: www.blueantoinette.com/produc...
    About us:
    - Homepage: www.blueantoinette.com/
    - Contact us: www.blueantoinette.com/contac...
    - Twitter: @blueantoinette_
    - Consulting Hour: www.blueantoinette.com/produc...
    Hashtags:
    #huggingchat #aitom8 #aiautomation
  • Science

Comments • 2

  • @SMCGPRA (a year ago)

    Maybe for developing-country viewers, please let us know the system configuration needed for these installations.

    • @BlueAntoinette (a year ago) +1

      The system requirements depend on the variant you choose and the model you want to interact with. If you just want to run the Chat-UI against a remote inference endpoint, there are no special requirements (see the configuration sketch below). However, if you also want to run the text-generation-inference server with the Open Assistant model (as in the video) on your own (virtual) infrastructure, then you need powerful GPU(s). I show the infrastructure that worked for me in this video (variant 2): ruclips.net/video/EchfCSv1iNM/видео.html
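      For illustration, running only the Chat-UI against a remote endpoint mostly comes down to its .env.local file. A minimal sketch, assuming a recent chat-ui; the model name, endpoint URL, and token are placeholders (older releases used HF_ACCESS_TOKEN instead of HF_TOKEN):

        # .env.local for chat-ui: the UI runs locally, inference runs remotely.
        MONGODB_URL=mongodb://localhost:27017  # the MongoDB prerequisite
        HF_TOKEN=hf_xxx                        # placeholder access token

        # Point the model entry at a remote text-generation-inference
        # server instead of local GPUs (name and URL are placeholders):
        MODELS=`[
          {
            "name": "remote-model",
            "endpoints": [{ "type": "tgi", "url": "https://your-endpoint.example.com" }]
          }
        ]`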