Memory in LLM Applications
- Published: Sep 29, 2024
- This talk by Harrison Chase of LangChain focuses on what "memory" means in the context of LLM applications. Memory is most often discussed in the context of chatbots, so the talk starts with an overview of conversational memory. This can range from simple (a buffer of recent messages) to complex (extracting and dynamically generating a knowledge graph). It then covers recent advances in memory, including the "Generative Agents" paper (see link below), and finishes with an overview of where the space is now and where it could go in the future (it is still early days!)
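The simplest memory type mentioned above, a rolling buffer of recent messages, can be sketched in a few lines of plain Python. This is an illustrative toy only; the class and method names here are invented for this sketch and do not reflect LangChain's actual API.

```python
from collections import deque


class ConversationBuffer:
    """Toy conversational memory: keep only the N most recent messages."""

    def __init__(self, max_messages=6):
        # deque with maxlen silently drops the oldest message when full.
        self.messages = deque(maxlen=max_messages)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def to_prompt(self):
        # Flatten the buffer into text to prepend to the next model call.
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)


memory = ConversationBuffer(max_messages=4)
memory.add("user", "Hi, I'm Ada.")
memory.add("assistant", "Hello Ada!")
memory.add("user", "What's my name?")
print(memory.to_prompt())
```

The knowledge-graph end of the spectrum replaces this flat buffer with extracted entities and relations, at the cost of an extra extraction step per turn.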
Generative Agents Paper (8:19) arxiv.org/pdf/...
Subscribe and turn on notifications for upcoming Fully Connected content!
Watch full playlist here: • Fully Connected San Fr...
#LLMs #DeepLearning #AI #Modeling #ml #langchain
Very good talk.
very nice introduction. good articulation. thanks for upload❤
Perhaps you could post a link to the mentioned paper?
Here you go: arxiv.org/pdf/2304.03442.pdf
@WeightsBiases Thank you!
While LangChain's memory types seem primarily geared towards constructing optimal prompts, focusing on retrieving relevant information for the next step, I believe there's another avenue worth investigating. This involves modifying the model's internal world representation, potentially by adjusting weights or even its overall size.
This approach could offer a means to constrain the large language model (LLM), potentially enhancing the believability of the simulation it generates. Do you have any references I could explore that delve into this concept further?
Would it be possible to save the entire conversation history as a text file and use that text file to fine-tune?
From what I understand, yes, it's possible, but at this point (Dec 2023) it's extremely slow and expensive. Have a look at the recent Fireship video about the Dolphin LLM.
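Mechanically, the first step of the approach asked about, dumping a conversation into a file a fine-tuning pipeline can consume, might look like the sketch below. The chat-style JSONL layout is a common convention for fine-tuning datasets, not a requirement of any particular provider, and the conversation content here is invented for illustration.

```python
import json

# Hypothetical conversation log to be exported for fine-tuning.
conversation = [
    {"role": "user", "content": "How do I reverse a list in Python?"},
    {"role": "assistant", "content": "Use my_list[::-1] or my_list.reverse()."},
]

# Write one JSON record per line (JSONL), one training example per record.
with open("conversation_history.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"messages": conversation}) + "\n")

# Read it back to confirm the record round-trips cleanly.
with open("conversation_history.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())
print(record["messages"][0]["content"])
```

The expensive part is not this export but the fine-tuning run itself, which is why retrieval-based memory is usually preferred for keeping conversations in context.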
Such an important part of LLM
Thank you Harrison!
Cool.
Does this work with local LLMs?
yes