Build your own Local Mixture of Agents using Llama Index Pack!!!

  • Published: 25 Jul 2024
  • I built my own Mixture of Agents (MoA), an innovative model-grouping technique for combining multiple LLMs (by Wang et al.), using Ollama and the LlamaIndex pack.
    This tutorial deals specifically with local models: we can leverage Llama 3, Mistral, or any other open model to build this MoA system.
    🔗 Links 🔗
    Local MoA - Notebook used in the code - github.com/amrrs/local_moa_ol...
    MoA pack from Llama index - llamahub.ai/l/llama-packs/lla...
    Ollama.ai - to download Ollama
    Ollama Installation tutorial • Ollama on CPU and Priv...
    ❤️ If you want to support the channel ❤️
    Support here:
    Patreon - / 1littlecoder
    Ko-Fi - ko-fi.com/1littlecoder
    🧭 Follow me on 🧭
    Twitter - / 1littlecoder
    Linkedin - / amrrs
  • Science
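The MoA flow described above (several "proposer" models answer a prompt over multiple rounds, then an "aggregator" model synthesizes the final response) can be sketched in plain Python. This is a minimal illustration of the pattern from Wang et al., not the LlamaIndex pack's actual API: the `mixture_of_agents` function and the stubbed LLM callables are hypothetical stand-ins, and in the video these roles are played by local Ollama models such as Llama 3 and Mistral.

```python
# Minimal sketch of the Mixture-of-Agents (MoA) pattern:
# proposers answer in rounds, each later round seeing the previous
# round's answers as reference context; an aggregator then combines
# the final round's answers into one response.

def mixture_of_agents(prompt, proposers, aggregator, num_rounds=3):
    """Run `num_rounds` of proposing, feeding each round's answers
    back to the proposers as references, then aggregate."""
    references = []
    for _ in range(num_rounds):
        augmented = prompt
        if references:
            refs = "\n".join(f"- {r}" for r in references)
            augmented = f"{prompt}\n\nReference responses:\n{refs}"
        # each proposer is a callable: prompt -> answer
        references = [propose(augmented) for propose in proposers]
    return aggregator(prompt, references)

# Stubbed usage; real proposers/aggregator would call local LLMs.
proposers = [lambda p: "Answer A", lambda p: "Answer B"]
aggregator = lambda prompt, refs: " / ".join(refs)
print(mixture_of_agents("What is MoA?", proposers, aggregator))
# → Answer A / Answer B
```

With real models, each callable would wrap an Ollama chat call, which is what the LlamaIndex MoA pack orchestrates for you.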

Comments • 8

  • @lrkx_, 19 days ago

    Nice tutorial, thank you.

  • @Cingku, 19 days ago

    Can we customize the system prompts for both the aggregator and the proposers?

  • @user-dl6ds6ij8x, 2 days ago

    Fantastic, thanks so much! I have an issue: for some reason I'm getting an error at Round 3/3 while collecting reference responses:
    An error occurred:

  • @asrjy, 16 days ago

    Great video as always. I have a question.
    I'm trying to build a RAG system that answers questions about a dataset I have, using create_pandas_dataframe_agent. I also have a long list of sample questions and answers that I want the RAG to imitate, but not copy exactly. These questions contain some domain knowledge, and I've also added some information about the columns at the end of these sample questions and answers.
    I'm currently passing this as a prefix parameter, but I'm not sure that's the best way to do it.
    The idea is for this pandas agent to also be able to answer questions that don't require pandas. What's the best way to build this? Thanks in advance!

  • @staticalmo, 19 days ago +1

    On which GPU did the models run? Integrated GPU?

    • @1littlecoder, 19 days ago

      Yep, integrated, but mostly on CPU.

  • @SonGoku-pc7jl, 17 days ago

    Thanks! What is the title of the previous video? I'm hoping the next video is on mixture of experts! :)

    • @1littlecoder, 17 days ago

      Could you elaborate? You mean something like mixture of experts with LlamaIndex?