Memory in LLM Applications

  • Published: 29 Sep 2024
  • This talk by Harrison Chase of LangChain focuses on how "memory" works in the context of LLM applications. Memory is most often discussed in the context of chatbots, so the talk starts with an overview of conversational memory. This can range from simple (a buffer of recent messages) to complex (extracting and dynamically generating a knowledge graph). It then covers recent advances in memory, including the "Generative Agents" paper (see link below), and finishes with an overview of where the space is now and where it could go in the future (it is still early days!).
    Generative Agents Paper (8:19) arxiv.org/pdf/...
    Subscribe and turn on notifications for upcoming Fully Connected content!
    Watch full playlist here: • Fully Connected San Fr...
    #LLMs #DeepLearning #AI #Modeling #ml #langchain
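
The simplest memory type mentioned in the description, a buffer of recent messages, can be sketched in a few lines. This is a hypothetical minimal illustration, not LangChain's actual API: `BufferMemory`, `add`, and `as_prompt_context` are names invented here for the sketch.

```python
from collections import deque

class BufferMemory:
    """Minimal sketch of conversational buffer memory: keep the most
    recent messages and render them as context for the next prompt."""

    def __init__(self, max_messages=10):
        # deque with maxlen silently drops the oldest turns once full
        self.messages = deque(maxlen=max_messages)

    def add(self, role, content):
        self.messages.append((role, content))

    def as_prompt_context(self):
        # Render buffered turns as plain text to prepend to the next prompt.
        return "\n".join(f"{role}: {content}" for role, content in self.messages)

memory = BufferMemory(max_messages=4)
memory.add("user", "Hi, I'm Ada.")
memory.add("assistant", "Hello Ada!")
memory.add("user", "What's my name?")
print(memory.as_prompt_context())
```

The complex end of the spectrum the talk mentions (a dynamically built knowledge graph) would replace the raw buffer with extracted entities and relations, but the prompt-assembly step is the same idea.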

Comments • 13

  • @Adrian_Galilea · 11 months ago

    Very good talk.

  • @jaceyang3375 · 2 months ago

    Very nice introduction, good articulation. Thanks for the upload ❤

  • @fgfanta · 1 year ago · +7

    Perhaps you could post a link to the mentioned paper?

  • @andresfelipehiguera785 · 3 months ago

    While LangChain's memory types seem primarily geared towards constructing optimal prompts, focusing on retrieving relevant information for the next step, I believe there's another avenue worth investigating. This involves modifying the model's internal world representation, potentially by adjusting weights or even its overall size.
    This approach could offer a means to constrain the large language model (LLM), potentially enhancing the believability of the simulation it generates. Do you have any references I could explore that delve into this concept further?

  • @ekkamailax · 9 months ago

    Would it be possible to save the entire conversation history as a text file and use that text file to fine-tune?

    • @rewindcat7927 · 9 months ago

      From what I understand, yes, it's possible, but at this point (Dec 2023) extremely slow and expensive. Have a look at the recent Fireship video about the Dolphin LLM.

  • @alesanchezr_ · 11 months ago

    Such an important part of LLMs.

  • @anselm94 · 1 year ago

    Thank you Harrison!

  • @deeplearningpartnership · 1 year ago

    Cool.

  • @samarammar1593 · 6 months ago

    Does this work with local LLMs?