Local RAG agent with LLaMA3 and Langchain

  • Published: 11 Sep 2024
  • We will take a look at how to do RAG with Llama 3
    github.com/lan...
    #python #pythonprogramming #llm #ml #ai #artificialintelligence #largelanguagemodels #tutorial #deeplearning

Comments • 5

  • @proterotype, 4 months ago

    Do you like using ChatOllama(model=local_model) better than just Ollama(model=local_model)?
    Also, have you seen a difference between using llama3:instruct as opposed to just llama3:70b?
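The practical difference between the two classes is the interface, not the model: `Ollama` is a plain text-completion LLM (string in, string out), while `ChatOllama` is a chat model that works with message objects, which is what chat prompt templates and agent tooling expect. A minimal side-by-side sketch, assuming a local Ollama server is running with the `llama3` model pulled and the `langchain-community` package installed (the prompt text is just an example):

```python
# Assumes: `ollama serve` is running locally and `ollama pull llama3` was done,
# and langchain-community is installed. Not the video's exact code.
from langchain_community.chat_models import ChatOllama
from langchain_community.llms import Ollama

local_model = "llama3"

# Plain completion interface: invoke() takes a string, returns a string.
llm = Ollama(model=local_model)

# Chat interface: invoke() returns an AIMessage; read the text via .content.
chat = ChatOllama(model=local_model)

if __name__ == "__main__":
    print(llm.invoke("Why is the sky blue?"))
    print(chat.invoke("Why is the sky blue?").content)
```

On the model tags: `llama3:instruct` is the instruction-tuned 8B build (the same weights the bare `llama3` tag resolves to), while `llama3:70b` is the much larger variant, so any quality difference comes from model size rather than the LangChain wrapper.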

  • @tintintintin576, 4 months ago

    Very nice!

  • @MosheRecanati, 4 months ago

    Any option to use local embeddings rather than GPT?

    • @DLExplorers-lg7dt, 4 months ago, +1

      GPT4AllEmbeddings is a local embedding model; you can use sentence transformers too:
      python.langchain.com/docs/integrations/platforms/huggingface/