Very nice introduction, good articulation. Thanks for the upload ❤
Such an important part of LLMs.
Thank you Harrison!
Very good talk.
Perhaps you could post a link to the mentioned paper?
Here you go: arxiv.org/pdf/2304.03442.pdf
@WeightsBiases Thank you!
While LangChain's memory types seem primarily geared towards constructing optimal prompts, focusing on retrieving relevant information for the next step, I believe there's another avenue worth investigating. This involves modifying the model's internal world representation, potentially by adjusting weights or even its overall size.
This approach could offer a means to constrain the large language model (LLM), potentially enhancing the believability of the simulation it generates. Do you have any references I could explore that delve into this concept further?
Would it be possible to save the entire conversation history as a text file and use that file to fine-tune?
From what I understand, yes, it's possible, but at this point (Dec 2023) it's extremely slow and expensive. Have a look at the recent Fireship video about the Dolphin LLM.
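For anyone wondering what the first step could look like in practice, here is a minimal sketch. It assumes a hypothetical log format where each line starts with "User:" or "Assistant:", and it writes the JSONL chat format that OpenAI's fine-tuning endpoint accepts; adapt the parsing to however your history is actually saved.

```python
# Minimal sketch: convert a saved conversation log into fine-tuning data.
# Assumes a hypothetical log where each line is prefixed "User:" or
# "Assistant:". Output is one JSONL record per user/assistant pair, in
# the chat format used for OpenAI fine-tuning.
import json

def log_to_jsonl(log_path: str, out_path: str) -> None:
    messages = []
    with open(log_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("User:"):
                messages.append({"role": "user", "content": line[len("User:"):].strip()})
            elif line.startswith("Assistant:"):
                messages.append({"role": "assistant", "content": line[len("Assistant:"):].strip()})
    with open(out_path, "w") as out:
        # Pair each user turn with the assistant turn that follows it.
        for i in range(0, len(messages) - 1, 2):
            pair = messages[i:i + 2]
            if pair[0]["role"] == "user" and pair[1]["role"] == "assistant":
                out.write(json.dumps({"messages": pair}) + "\n")

log_to_jsonl("conversation.txt", "finetune.jsonl")
```

Each user/assistant pair becomes one training example; whether fine-tuning on raw history actually improves the model's behavior is a separate question.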
Cool.
Does this work with local LLMs?
yes