LangChain - Conversations with Memory (explanation & code walkthrough)

  • Published: 22 Oct 2024

Comments • 128

  • @aanchalagarwal6886
    @aanchalagarwal6886 1 year ago +18

    A video on custom memory will be helpful. Thanks for the series.

    • @vq8gef32
      @vq8gef32 8 months ago

      Yes, please build a custom memory! Thank you.

  • @atylerblack164
    @atylerblack164 1 year ago +2

    Thank you so much! I had an app built without any conversation memory, just using chains, and was struggling to convert it to use memory.
    You made this very easy to follow and understand.

    • @SaifBagmaru
      @SaifBagmaru 1 year ago +1

      Hey, how did you do that? I am trying to implement the same; do you have any repo?

  • @Jasonknash101
    @Jasonknash101 1 year ago +5

    Another great video. I want to create my own agent with a memory, and I'm thinking a vector database is the best way of doing it. It would be great if you could do a similar video outlining some of the different vector database options, with the pros and cons of the different ones.

  • @resistance_tn
    @resistance_tn 1 year ago +15

    Great explanation! Would love to see the custom/combinations one :)

    • @hiranga
      @hiranga 1 year ago

      Yeah - would love to see a custom memory tute!

    • @RandyHawkinsMD
      @RandyHawkinsMD 1 year ago

      Custom memory seems intuitively useful for allowing human experts' input to shape the knowledge graph that might be created to represent the state of users' concerns, based on the experts' knowledge. I'd be very interested in a video on this subject. :)

  • @aiamfree
    @aiamfree 1 year ago

    I've been experimenting with entity memory in my own ways and it's pretty wild, and probably the most useful for general use. I imagine word-for-word memory would really only matter in something like a story generator or whatnot.

  • @noone-jq1xw
    @noone-jq1xw 1 year ago +1

    Great video! I'm such a big fan of your work now! I'm sure this channel is going places once LLMs become a bit more mainstream in the programming stack. Please keep up the awesome work!
    I have a question with regard to the knowledge graph memory section. The sample code given shows that the relevant section never gets populated. Furthermore, the prompt structure has two inputs, {history} and {input}, but we only pass on the {input} part, which might explain why the relevant information is empty. In this case, do you know if there is any use for the relevant information section?
    A second query is in regard to the knowledge graph. Since the prompt seems to be contextually aware, even though the buffer doesn't show the chat history, is it safe to say that in addition to the chat log shown (as part of verbose), it also sends the knowledge graph triplets created to the llm to process the response?

  • @m_ke
    @m_ke 1 year ago +4

    Oh how much I missed that voice. Keep the videos coming and maybe get some sunglasses and a webcam.

    • @samwitteveenai
      @samwitteveenai  1 year ago +2

      Long time no see. :D Working on getting a cam setup, but traveling a fair bit till April. Will DM you later.

    • @blackpixels9841
      @blackpixels9841 1 year ago

      This was the voice that got me started on my Deep Learning journey! Let us know if you're ever in Singapore again some day

  • @owszystkim5415
    @owszystkim5415 3 months ago +1

    Is it cost-effective to use ConversationSummaryMemory? From my understanding it needs to summarize our conversation every time.

  • @jintao824
    @jintao824 1 year ago +1

    Great content Sam! Subbed. Just wanted to ask - are there technical limitations to why these LLMs have limited context windows? Any pointers to papers will be very helpful should they exist!

    • @samwitteveenai
      @samwitteveenai  1 year ago +5

      Mostly this is about the attention layers: the wider the span gets, the more you run into compounding computation. Take a look at this: stackoverflow.com/questions/65703260/computational-complexity-of-self-attention-in-the-transformer-model
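The compounding computation mentioned here can be sketched in plain Python: self-attention scores every token against every other token, so the work grows with the square of the span length (the toy count below stands in for the real matrix multiply):

```python
# Toy illustration of why attention cost grows quadratically with span length:
# each token attends to every token (including itself), so a span of n tokens
# requires n * n pairwise score computations.

def attention_score_count(span_length: int) -> int:
    """Number of pairwise attention scores for a span of tokens."""
    return sum(1 for _ in range(span_length) for _ in range(span_length))

for n in (256, 512, 1024):
    print(n, attention_score_count(n))
# Doubling the span quadruples the number of scores.
```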

    • @jintao824
      @jintao824 1 year ago

      @@samwitteveenai Thanks Sam, I will check this out!

  • @kenchang3456
    @kenchang3456 1 year ago

    Indeed, this was helpful. Thank you for this video series. The more I work through them, the more my questions are answered :-)

  • @elyakimlev
    @elyakimlev 1 year ago

    Thanks for this great tutorial series.
    Question: how do you set the k value for the ConversationSummaryBufferMemory option? I didn't see where you set it in your code. Is it always 2?
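For what it's worth: in the classic LangChain API the k window belongs to ConversationBufferWindowMemory(k=2), while ConversationSummaryBufferMemory is bounded by max_token_limit rather than k. The windowing mechanism itself is simple enough to sketch in plain Python (class and method names here are illustrative, not LangChain's):

```python
# Minimal sketch of a k-turn window memory: only the last k exchanges are
# kept and injected into the prompt (illustrative, not LangChain's classes).

class WindowMemory:
    def __init__(self, k: int):
        self.k = k
        self.exchanges = []  # list of (human, ai) pairs

    def save(self, human: str, ai: str) -> None:
        self.exchanges.append((human, ai))

    def history(self) -> str:
        recent = self.exchanges[-self.k:]  # drop everything older than k turns
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in recent)

mem = WindowMemory(k=2)
for i in range(4):
    mem.save(f"question {i}", f"answer {i}")
print(mem.history())  # only exchanges 2 and 3 remain
```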

  • @abdoualgerian5396
    @abdoualgerian5396 1 year ago

    I think the best approach is to create a small AI handler that manages all of the memory on your device and then sends a very brief summary to the LLM with the necessary info about what the user means. That way we avoid sending too much data, with much more effective prompts than all of those mentioned above.

  • @z-ro
    @z-ro 1 year ago +1

    Amazing explanation! I'm currently trying to use LangChain's JavaScript library to "persist" memory across multiple "sessions" or reloads. Do you have a video on the types of memory that can do that?

    • @untypicalien
      @untypicalien 1 year ago

      Hey there, I'd love to know if after a month you found any useful resources or documentation about this. I'm trying to reach this as well. 😄

  • @ketolm
    @ketolm 3 months ago

    Love the videos! Thank you for making them. Dying at the B-roll footage

    • @samwitteveenai
      @samwitteveenai  3 months ago

      Thanks. Feedback is really appreciated. We have tried to reduce the stock video a lot on the newer vids.

  • @sysadmin9396
    @sysadmin9396 7 months ago +1

    Hi Sam, how do we keep the conversation context of multiple users separate?

    • @hussienhassin7334
      @hussienhassin7334 5 months ago

      Have you resolved it? I am still struggling too

  • @viktor4207
    @viktor4207 10 months ago

    Can you use both? So you can start working on a user profile by creating a knowledge graph associated with a user and storing it but then pass information to the bot in a summarized way?

  • @musabalsaifi8993
    @musabalsaifi8993 10 days ago

    Great work, Thanks a lot

  • @hikariayana
    @hikariayana 11 months ago

    This is exactly what I needed, thanks so much!

  • @aibasics7206
    @aibasics7206 1 year ago

    Hi Sam, nice video! Can you please clarify whether we can fine-tune and use the memory here? For fine-tuning with our own data we are using GPT Index, and for the LLM predictor we are using LangChain. Can you tell me a way to use LangChain's memory with GPT Index integration while loading our own custom chat data?

  • @JimCh-g6w
    @JimCh-g6w 1 year ago +2

    I've built this with a Streamlit UI as a front-end and deployed it as a Cloud Run service. Now, if multiple users are trying to chat with the bot, the entire chat_history, combining all users' conversations, is being referenced. If I want to have a user_id/session_id-specific chat_history, how can I do it? Could you please help me?

    • @sysadmin9396
      @sysadmin9396 7 months ago

      I have this same exact issue. Did you ever figure it out??

    • @naveennirban
      @naveennirban 2 months ago

      Hey guys, I am working on it too. I am trying to create multiple vector DBs with a runtime knowledge feed for each specific user.
      The name of each vector DB could be a unique ID attached to your user model.

  • @joer3650
    @joer3650 1 year ago

    Best explanation I've found, thanks.

  • @MannyBernabe
    @MannyBernabe 1 year ago

    Super helpful overview. Thank you.

  • @wukao1985
    @wukao1985 1 year ago

    Thanks Sam for this great video. I found it really hard to understand how to make these memory functions work with the ChatOpenAI model; can you help create a video on that? This video was all using davinci models.

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Yes, good point, these were made before that API existed. I might make some updated versions.

  • @hussamsayeed3012
    @hussamsayeed3012 1 year ago +1

    How do we add a custom prompt with some variable data while using memory in ConversationChain?
    I'm trying this but getting a validation error:

    PROMPT = PromptTemplate(
        input_variables=["chat_history_lines", "input", "tenant_prompt", "context"],
        template=_DEFAULT_TEMPLATE,
    )
    llm = OpenAI(temperature=0)
    conversation = ConversationChain(
        llm=llm,
        verbose=True,
        memory=memory,
        prompt=PROMPT,
    )

    Error: 1 validation error for ConversationChain
    __root__
    Got unexpected prompt input variables. The prompt expects ['chat_history_lines', 'input', 'tenant_prompt', 'context'], but got ['chat_history_lines', 'history'] as inputs from memory, and input as the normal input key. (type=value_error)

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      You overwrite the 'prompt.template' and make sure it takes the same inputs as the previous one, etc. Take a look at one of the early vids about LangChain Prompts and Chains.
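The validation error in the parent comment boils down to a set comparison: the prompt's input_variables must equal the memory's variables plus the chain's input key. A plain-Python sketch of that rule (illustrative, not LangChain's actual internals):

```python
# Sketch of the check ConversationChain performs: the prompt must expect
# exactly the variables the memory supplies plus the normal input key.

def validate_prompt(prompt_vars, memory_vars, input_key="input"):
    expected = set(memory_vars) | {input_key}
    if set(prompt_vars) != expected:
        raise ValueError(
            f"Got unexpected prompt input variables. The prompt expects "
            f"{sorted(prompt_vars)}, but got {sorted(memory_vars)} from memory "
            f"and '{input_key}' as the normal input key."
        )

# Fails: the prompt wants tenant_prompt/context, which memory never provides.
try:
    validate_prompt(
        ["chat_history_lines", "input", "tenant_prompt", "context"],
        ["chat_history_lines", "history"],
    )
except ValueError as e:
    print(e)

# Passes: prompt variables match memory variables plus the input key.
validate_prompt(["chat_history_lines", "history", "input"],
                ["chat_history_lines", "history"])
```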

  • @Aidev7876
    @Aidev7876 9 months ago

    I'm using an SQL chain. I'd like to add memory to that. Do we have some ideas on that? Thanks

  • @kenchang3456
    @kenchang3456 1 year ago

    I just enjoy learning from your videos, thank you very much. Do you have any videos, suggestions, or advice on how to control a conversation when it goes off on a tangent, and bring it back to the purpose of the conversation? E.g. a chatbot for laptop troubleshooting - System: Hi, how can I help you? User: My laptop is broken. System: Can you describe the problem in more detail? User: What's the weather like in Hawaii? System: The weather is pleasant in Hawaii. Can you describe the problem with your laptop in more detail?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      With big models this is dealt with by good prompts that make it clear what the model can and can't talk about, and then discontinuing the conversation if people go too far off the main topics.
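One possible shape for such a guardrail prompt (the wording below is an assumption, not taken from the video):

```python
# Illustrative system prompt that keeps a support bot on-topic and tells it
# how to handle tangents; the exact wording is invented for this sketch.

GUARDRAIL_PROMPT = """You are a laptop troubleshooting assistant.
Only discuss laptop hardware and software problems.
If the user asks about anything else, answer briefly if harmless,
then steer the conversation back to their laptop issue.
If the user stays off-topic for several turns, politely end the chat.

Current conversation:
{history}
Human: {input}
AI:"""

# The template keeps the same {history}/{input} slots the ConversationChain
# prompt already uses, so it can be swapped in for the default template.
assert "{history}" in GUARDRAIL_PROMPT and "{input}" in GUARDRAIL_PROMPT
```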

    • @kenchang3456
      @kenchang3456 1 year ago

      @@samwitteveenai Ah so it's in the prompts...interesting, thanks!

  • @carlosquiala8698
    @carlosquiala8698 5 months ago

    Can I mix 2 types of memories? For example entity and graph?

  • @pec8377
    @pec8377 5 months ago

    How do you use the different conversation memories with LCEL?

  • @CookFu
    @CookFu 4 months ago

    Can a retrieval chain work with the memory functions? I have been trying that for a couple of days, but it doesn't work.

  • @ghinwamoujaes9059
    @ghinwamoujaes9059 4 months ago

    Very helpful - Thank you!

  • @prayagbrahmbhatt6375
    @prayagbrahmbhatt6375 1 year ago

    Great stuff! Thanks for the tutorial! I do have a question regarding open-source models. How can I use an alternative to the OpenAI model, like Vicuna or Llama? What if we don't have an OpenAI API key?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I have some vids using open source LLMs for this kind of thing

  • @krisszostak4849
    @krisszostak4849 1 year ago +2

    This is awesome! I love the way you explain things, Sam! If you ever create an in-depth video course about using LangChain and LLMs, especially regarding extracting particular knowledge from a personal or business knowledge base - let me know pls, I'll be the first one to buy it 😍

  • @srishtinagu1857
    @srishtinagu1857 7 months ago

    Hi Sam, awesome video. I am trying to add conversation memory to my RAG application, but it is not giving correct responses. Can you make a video or share some references for that? It would be really helpful. Thanks!

    • @samwitteveenai
      @samwitteveenai  7 months ago

      I need to make a full LangChain update; this vid is a year old now. I am working on it, so hopefully soon.

    • @srishtinagu1857
      @srishtinagu1857 7 months ago

      @@samwitteveenai ok thanks! Waiting for it.

  • @dogtens1060
    @dogtens1060 1 year ago

    nice overview, thanks!

  • @binstitus3909
    @binstitus3909 9 months ago +1

    How can I keep the conversation context of multiple users separately?

    • @sysadmin9396
      @sysadmin9396 7 months ago

      I’m looking for this answer as well. Did you ever figure it out?

  • @starmorph
    @starmorph 1 year ago

    I like the iced out parrot thumbnails 😎

  • @lordsairolaas6959
    @lordsairolaas6959 9 months ago

    Hello! I'm making a chatbot using conversation with a KG, but it keeps popping this error; it has for the past few days. Could you help?
    Got unexpected prompt input variables. The prompt expects [], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)

  • @mautkajuari
    @mautkajuari 1 year ago

    beautifully explained

  • @RedCloudServices
    @RedCloudServices 1 year ago

    Sam, can you help clarify? Do we still need to fine-tune a custom LLM on our own corpus if we can use LangChain methods (i.e. webhooks, the Python REPL, PDF loaders, etc.)? Or are both still necessary for all custom use cases?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      LLMs that you fine-tune for your purpose should always have an advantage in regard to unique data, etc. If you can get away with LangChain and an API though, and you don't mind the costs, then that will be easier.

  • @abhirj87
    @abhirj87 1 year ago

    wow!!! super helpful and thanks a ton for making this tutorial!!

  • @ghghgjkjhggugugbb
    @ghghgjkjhggugugbb 1 year ago

    revolutionary video..

  • @sanakmukherjee3929
    @sanakmukherjee3929 1 year ago

    Nice explanation. Can you help me apply this to a custom CSV dataset?

  • @adumont
    @adumont 1 year ago

    Really interesting. The last one, about graphs and entities, could have a lot of potential. I wonder how one could use retrieval over a knowledge database, for example, to also enrich the context/prompt with information from it. For example, suppose the AI had access to the warranty database and could check the warranty status for the TV serial number. It could ask the user for the serial number, automatically check the warranty for that serial number, and answer "your TV is under warranty number xxx". Are there examples of how to do that?

    • @Jasonknash101
      @Jasonknash101 1 year ago

      Totally agree, that would be great to show. How would you integrate this with something like Node.js?

  • @sahil5124
    @sahil5124 7 months ago

    it's really helpful, thanks man

  • @harinisri2962
    @harinisri2962 1 year ago

    Hi, I have a doubt. I am implementing ConversationBufferWindowMemory for a document question-answering chatbot:

    from langchain import memory
    conversation = ConversationChain(llm=llm, verbose=True, memory=ConversationBufferWindowMemory(k=2))

    Is it possible to return the source documents of an answer using any parameters?

  • @vq8gef32
    @vq8gef32 8 months ago

    Amazing, appreciate it! But I can't run some of the code! :( Is there any updated version?

    • @samwitteveenai
      @samwitteveenai  8 months ago

      Sorry, I am working on an updated LangChain vid in which I will update the code. Some of these vids are a year old now.

    • @vq8gef32
      @vq8gef32 8 months ago

      Thank you @@samwitteveenai amazing work, I am still watching your channel. Thank you heaps.

  • @insight-guy
    @insight-guy 1 year ago

    Thank you, Sam.

  • @stonaraptor8196
    @stonaraptor8196 1 year ago

    There has to be a simpler way to get a personalized AI stored locally on my PC that has long-term memory and can keep up long conversations. Maybe I am very naive, but as a non-programmer my main interest in AI is more philosophical in nature, I guess. Where/how would I start, or even get an offline version? Reading the OpenAI site is, let's say, slightly challenging...

  • @embeddedelligence-926
    @embeddedelligence-926 1 year ago

    So how do you make a conversational memory and use it with the CSV agent?

  • @tubingphd
    @tubingphd 1 year ago

    Thank you Sam

  • @WissemBellara
    @WissemBellara 5 months ago

    Nice Video, very well made

  • @caiyu538
    @caiyu538 1 year ago

    great tutorial

  • @svenandreas5947
    @svenandreas5947 1 year ago

    I'm wondering: this works as long as the human gives the expected information. Is there any way to ask for information (like a warranty number)?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Yes, you can do this with context and retrieval, e.g. adding a search for data and passing the results into the context of the prompt.
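That retrieve-then-stuff-the-context pattern can be sketched without any framework; the warranty "database" below is invented purely for illustration:

```python
# Sketch of retrieval-augmented prompting: look structured data up first,
# then inject it into the prompt context. The warranty "database" is a
# made-up example, not part of LangChain.

WARRANTY_DB = {"SN-123": {"status": "active", "expires": "2025-01-01"}}

def build_prompt(serial: str, question: str) -> str:
    record = WARRANTY_DB.get(serial)
    if record:
        context = (f"Warranty for {serial}: {record['status']}, "
                   f"expires {record['expires']}")
    else:
        context = "No warranty found."
    # The retrieved facts become part of the prompt sent to the LLM.
    return f"Context: {context}\nHuman: {question}\nAI:"

print(build_prompt("SN-123", "Is my TV still under warranty?"))
```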

    • @svenandreas5947
      @svenandreas5947 1 year ago

      @@samwitteveenai Will google and search for this :-) Thanks for the hint. I just figured out the way via prompt engineering, but this wasn't exactly what I was looking for. Thanks again

    • @samwitteveenai
      @samwitteveenai  1 year ago

      What exactly do you want to do?

    • @memesofproduction27
      @memesofproduction27 1 year ago

      LangChain's self-ask-with-search sounds relevant.

  • @stanTrX
    @stanTrX 25 days ago

    Thanks. How about agents with memory?

  • @Pure_Science_and_Technology
    @Pure_Science_and_Technology 1 year ago

    Not sure of the difference, but I use print(conversation.memory.entity_store); per print(dir(conversation.memory)), I don't have an attribute 'store'.

  • @Fluffynix
    @Fluffynix 1 year ago

    How does this compare to Haystack which has been around for years?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      It's quite different from Haystack. This is all about prompts and generative LLM manipulation rather than search. LangChain can do search with vector stores. You could probably use Haystack as a tool with LangChain, which could be cool for certain use cases.

  • @gmdl007
    @gmdl007 1 year ago

    Hi Sam, is there a way to combine this with QA over your own PDF files?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      yes I have a few videos about that if you look for PDF etc.

    • @gmdl007
      @gmdl007 1 year ago

      @@samwitteveenai fantastic, can you share?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      @@gmdl007 There are a number; take a look in this playlist: ruclips.net/video/J_0qvRt4LNk/видео.html

  • @foysalmamun5106
    @foysalmamun5106 1 year ago

    Thank you a lot

  • @lorenzoleongutierrez7927
    @lorenzoleongutierrez7927 1 year ago

    Thanks for sharing!

  • @428manish
    @428manish 1 year ago

    It works fine with GPT-3.5 Turbo. How do I make it work with a FAISS DB using local data (PDFs)?

  • @souvickdas5564
    @souvickdas5564 1 year ago

    How do we create a conversational bot for non-English languages and languages that are not supported by the OpenAI embeddings? For example, if I want to build a conversational agent for articles written in Indian languages (Bengali or Bangla), how can we do it?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You would use a multilingual embedding model, which you can find on HuggingFace. Check out huggingface.co/sentence-transformers/stsb-xlm-r-multilingual; there are others as well. There are also a number of multilingual LLMs, including mT5, which supports Bengali. You would get the best results by fine-tuning some of these models.

    • @souvickdas5564
      @souvickdas5564 1 year ago

      @@samwitteveenai thanks a lot.

  • @souvickdas5564
    @souvickdas5564 1 year ago

    How do I use memory with ChatVectorDBChain, where we can specify vector stores? Could you please give a code snippet for this? Thanks

    • @samwitteveenai
      @samwitteveenai  1 year ago

      I will make a video about vector stores at some point.

  • @sooryaprabhu14122
    @sooryaprabhu14122 9 months ago

    bro please include the deployment also

  • @ranjithkumarkalal1810
    @ranjithkumarkalal1810 11 months ago

    Great videos

  • @emmanuelkolawole6720
    @emmanuelkolawole6720 1 year ago

    Are you saying that Alpaca can only take in 2,000 tokens? If that is true, how can we increase it?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      increasing it requires some substantial retraining.

  • @foysalmamun5106
    @foysalmamun5106 1 year ago

    waiting for video on custom memory 🙂

  • @ambrosionguema9200
    @ambrosionguema9200 1 year ago

    Hi Sam, how do I upload my own data file in this code? Please help me.

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      I have a video coming out this weekend on using your own data for CSV and Excel files. I will make one for larger datasets.

  • @pengchengwu447
    @pengchengwu447 1 year ago

    I wonder if it's possible to specify *predefined* entities?

  • @RahulD600
    @RahulD600 2 months ago +1

    but still, this is not unlimited memory, right?

  • @WissemBellara
    @WissemBellara 5 months ago

    Is it possible to add chapters with timestamps, please? It would make it easier.

  • @mandarbagul3008
    @mandarbagul3008 1 year ago

    Hello sir, greetings. What is a span? (3:42)

    • @samwitteveenai
      @samwitteveenai  1 year ago

      The span (context size) refers to the number of tokens (subwords) that you can pass into a model in a single shot.
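A rough way to see that limit in action, using naive whitespace splitting in place of a real subword tokenizer (so counts are only approximate):

```python
# Naive illustration of a context span: if the prompt exceeds the model's
# token budget, the oldest history must be summarized or dropped.
# Whitespace splitting stands in for a real subword tokenizer here.

def fits_in_span(prompt: str, span: int = 2048) -> bool:
    """True if the (approximate) token count fits within the span."""
    return len(prompt.split()) <= span

history = " ".join(["word"] * 3000)
print(fits_in_span("short prompt"))  # a short prompt fits easily
print(fits_in_span(history))         # 3000 words blow a 2048-token budget
```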

    • @mandarbagul3008
      @mandarbagul3008 1 year ago

      @@samwitteveenai Got it, thank you very much sir :)

  • @nilendughosal6084
    @nilendughosal6084 1 year ago

    How to handle memory for multiple users?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You serialize this out and load the memory back in based on who is calling the model, etc.
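That serialize-and-reload pattern can be sketched with plain JSON files, one per session id (the file layout and function names are illustrative, not a LangChain API):

```python
# Sketch of per-user conversation memory: each session id maps to its own
# history file, serialized between requests (illustrative layout).
import json
import os
import tempfile

def save_history(store_dir: str, session_id: str, history: list) -> None:
    """Persist one user's conversation history as JSON."""
    with open(os.path.join(store_dir, f"{session_id}.json"), "w") as f:
        json.dump(history, f)

def load_history(store_dir: str, session_id: str) -> list:
    """Load a user's history, or start fresh for an unknown session."""
    path = os.path.join(store_dir, f"{session_id}.json")
    if not os.path.exists(path):
        return []  # new user: empty memory
    with open(path) as f:
        return json.load(f)

store = tempfile.mkdtemp()
save_history(store, "alice", [("hi", "hello!")])
print(load_history(store, "alice"))  # alice's turns only (tuples round-trip as lists)
print(load_history(store, "bob"))    # bob gets a clean slate
```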

  • @DavidTowers-f1y
    @DavidTowers-f1y 1 year ago

    I love these tutorials. Learning so much. Thanks.

  • @creativeuser9086
    @creativeuser9086 1 year ago

    Btw, it would be nice if you show yourself on cam when you’re not coding. The clips are weirdly distracting 😅

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Lol, yeah, I plan to get a camera at some point. I cut back on the B-roll stuff after the early videos, if that helps.

  • @neerajmahapatra5239
    @neerajmahapatra5239 1 year ago

    How can we add a prompt with these memory chains?

  • @alizhadigerov9599
    @alizhadigerov9599 1 year ago

    Can we use gpt-3.5-turbo instead of davinci-003 here?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You can, but the code has to be changed to use the new chat options.

    • @aaroldaaroldson708
      @aaroldaaroldson708 1 year ago

      @@samwitteveenai Thanks. Are you planning to record a video on that? Would be very helpful!

  • @LearningWorldChatGPT
    @LearningWorldChatGPT 1 year ago

    Great class!
    Thank you very much for sharing your knowledge
    Gained a follower !

  •  1 year ago

    select * from stock_videos where label like '%typing%' :D