Memory in LLM Applications

  • Published: 30 Nov 2024

Comments • 13

  • @jaceyang3375
    @jaceyang3375 4 months ago

    Very nice introduction and good articulation. Thanks for the upload ❤

  • @alesanchezr_
    @alesanchezr_ 1 year ago

    Such an important part of LLM applications.

  • @anselm94
    @anselm94 1 year ago

    Thank you Harrison!

  • @Adrian_Galilea
    @Adrian_Galilea 1 year ago

    Very good talk.

  • @fgfanta
    @fgfanta 1 year ago +7

    Perhaps you could post a link to the mentioned paper?

  • @andresfelipehiguera785
    @andresfelipehiguera785 5 months ago

    While LangChain's memory types seem primarily geared towards constructing optimal prompts, i.e. retrieving the most relevant information for the next step, I believe there's another avenue worth investigating: modifying the model's internal world representation, potentially by adjusting its weights or even its overall size.
    This approach could offer a way to constrain the large language model (LLM) and potentially make the simulations it generates more believable. Do you have any references I could explore that delve into this concept further? (The prompt-construction style of memory is sketched in code after the comments.)

  • @ekkamailax
    @ekkamailax 11 months ago

    Would it be possible to save the entire conversation history as a text file and use that text file to fine-tune? (See the export sketch after the comments.)

    • @rewindcat7927
      @rewindcat7927 11 months ago

      From what I understand, yes, it's possible, but at this point (Dec 2023) it's extremely slow and expensive. Have a look at the recent Fireship video about the Dolphin LLM.

  • @deeplearningpartnership
    @deeplearningpartnership 1 year ago

    Cool.

  • @samarammar1593
    @samarammar1593 9 months ago

    Does this work with local LLMs?
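
As noted in @andresfelipehiguera785's comment above, LangChain's built-in memory types work by reading prior turns back into the next prompt rather than by modifying model weights. Below is a minimal sketch of that pattern using the classic `ConversationBufferMemory` class from `langchain.memory`; import paths have moved between LangChain versions, so treat it as illustrative rather than canonical.

```python
# Minimal sketch: LangChain-style conversation memory that rebuilds the prompt
# from stored turns. Uses the classic langchain.memory API; exact import paths
# vary across LangChain versions.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Record one user/assistant exchange.
memory.save_context(
    {"input": "My name is Ada."},
    {"output": "Nice to meet you, Ada!"},
)

# Before the next model call, the stored history is read back and pasted into
# the prompt -- this is the "constructing optimal prompts" style of memory.
history = memory.load_memory_variables({})["history"]
next_prompt = f"{history}\nHuman: What is my name?\nAI:"
print(next_prompt)
```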
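
On @ekkamailax's question about fine-tuning on a saved conversation: as @rewindcat7927 says, it is possible but slow and costly. A rough sketch of the first step, dumping the history to a file in the JSONL chat format most fine-tuning pipelines expect, might look like the following; the `conversation` list is a hypothetical stand-in for your own transcript.

```python
# Rough sketch: write a conversation transcript to a JSONL file for fine-tuning.
# The record format ({"messages": [...]}) follows common chat fine-tuning
# conventions (e.g. OpenAI's); check your provider's docs for the exact schema.
import json

conversation = [
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    # One training example per line; here the whole conversation is one example.
    f.write(json.dumps({"messages": conversation}) + "\n")
```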