How to Build an LLM Query Engine in 10 Minutes | LlamaIndex x Ray Crossover

  • Published: 22 Aug 2024

Comments • 8

  • @Andromeda26_ • 1 year ago

    Thank you, Jerry and Amog! Nice demo! Keep up the good work!

  • @fabsync • 7 months ago

    Always great videos! I love this topic, but without OpenAI... I would love to see something built using only open-source technology.

  • @davidwynter6856 • 1 year ago +1

    What advantage does this have over taking all your data sources, creating embeddings for everything, and storing them in a single vector store? Then one query would return the top_k responses, and a summarization of those would give an answer. I can see that the question explicitly defines the data sources, but I struggle to see the utility in this.
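
For context, here is a minimal sketch of the single-vector-store baseline this comment describes, assuming LlamaIndex's `llama_index.core` API with its default OpenAI embedding and LLM backends (an `OPENAI_API_KEY` must be set); the `data` folder name, the query string, and `top_k=5` are illustrative choices, not taken from the video.

```python
# A minimal sketch (not from the video) of the single-vector-store baseline:
# embed all documents into one index, retrieve the top_k most similar chunks
# for a query, and have the LLM synthesize them into one answer.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every document from all data sources placed in one folder.
documents = SimpleDirectoryReader("data").load_data()

# Embed the documents and keep the vectors in an in-memory vector store.
index = VectorStoreIndex.from_documents(documents)

# A single query retrieves the top_k most similar chunks; the LLM then
# summarizes them into a single answer.
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("How do I deploy my application?")
print(response)
```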

  • @zahabkhan6832 • 10 months ago +1

    Is there a place where a beginner can learn about Ray from scratch?

  • @amparoconsuelo9451 • 11 months ago

    Can subsequent SFT and RLHF with different, additional, or reduced content change the character of, improve, or degrade a GPT model? Can you modify a GPT model? How?

  • @rayhanpatel8271 • 9 months ago

    What is the difference between Ray and Embedchain?

  • @catalystzerova • 1 year ago

    Light IDE theme 😩 fun video though

  • @arvindshelke8889 • 1 year ago +1

    Bro has light theme