90% (or more) of tech tutorials start with code without providing a conceptual overview the way you have done here. This video is phenomenal...
Appreciate it! 🙏 Thanks for watching
Totally agree with this. I love the way this guy teaches the concepts.
I disagree. I almost never find good code examples, only concepts for dummies.
I've noticed a significant lack of comprehensive resources that cover LangChain thoroughly. Your work on the subject is highly valued. Thank you
Yes, there aren't enough books on it. The documentation is sparse.
Agreed. This was the perfect introduction to LangChain for me at this time.
This is the best 101 video I found on the subject. Most of the other videos assume you're already somewhat familiar with the tools or aren't that beginner friendly.
I never comment on any video but your flawless explanation made me, Thank you for such a masterpiece.
Appreciate the kind words! 🙏 Thanks for watching
Thank you. I have watched a lot of videos that attempt to explain LLM's and LangChain as successfully as you have here but fail to do it as succinctly as you have. I was looking for a video that I can share with my clients that explains what LLM's and LangChain are without being too dumbed down or being too 'over their heads' and this video is perfect for that! So, again - thank you.
Glad it was helpful! I really appreciate the comment, thank you very much 🙏
Excellent intro, especially for an experienced programmer to start using after a single watch. Learned a lot in a short time with it. Thanks for making.
You're welcome! Thanks for watching
One of the best quickstart videos I've seen. A clear explanation combined with images. Many thanks.
Thank you! 🙏
The coolest thing about enhancing LLMs like this is that locally-runnable models will be very interesting (no huge API call costs) and smarter than by default.
I would love local LLMs! Though I doubt that one as advanced as GPT-3.5/4 will be able to run locally for a few years because of the required computational power. I still look forward to the day that it becomes a thing though!
The costs are not the advantage. Hosting things on your own hardware is usually more expensive, especially if you need multiple models (embedding model, LLM, maybe text-to-speech). The advantage I see is that you could use custom models trained on your data.
Enter neuromorphics: ruclips.net/video/EXaMQejsMZ8/видео.html
Excellent overview. Please note that as of a week ago, Pinecone is NOT allowing new, free accounts to do any operations! Please consider doing a similar video that is FOSS end to end; there is a lot of interest. Thank you.
Having read through LangChain's conceptual documentation, I must say this video is a great accompaniment. Very clear and well presented, and for a non-coder like myself, easy to understand. (I'd pay for a LangChain manual for five-year-olds!) Subscribed.
Thank you! 🙏 Glad it was helpful
Companion*
We need more videos like this, comprehensive for the general public and for newbies like me. Thank you!
I've been watching a lot of AI videos, and this is definitely one of the best - well-organized and very clear.
Best video I have seen on explaining LangChain so far 💯
This was an awesome and very straightforward video. I believe it's the most useful video about LangChain I've seen so far. Even people who don't know much about programming can follow. Thanks so much!
I have subscribed to your awesome channel with immediate effect.
The explanation of LangChain was clear and concise. I really learnt a lot in just 12 minutes.
Thank you for the video. I think it gives a really good introduction to the topic without much distraction. Absolutely pleasant to follow even for a non-native speaker.
One of the best 101 videos on LangChain out there, kudos to you!
Your video really helps with understanding the basics of LangChain and provides good context as well. I'm looking forward to more such videos!
Thank you very much, Rabbitmetrics! This tutorial is absolutely a gem for someone looking for a clear and concise overview of the main concepts!
Thank you! I'm glad it was helpful
"Great video! This explanation of LangChain's core concepts is super helpful for beginners looking to build LLM applications. Thanks for sharing the code link as well-makes it easy to follow along and experiment!"
Excellent video for beginners who want to get started with LangChain. Well explained.
Thanks! Glad it was useful
I have been searching and searching for an explanation of how to do this exact thing!! Yasssssss thank yooouuu! ❤
Wow, this video on LangChain has all the pieces I have been searching for.
Thank you so much for taking the time to make this awesome video.
Solid instructor. Good intro to LangChain at the right level of depth. For as quickly as he rips through a huge amount of information, he is still pretty easy to follow.
Thank you so much for covering all the components in just 13 mins. Though it took me an hour to learn and absorb everything :D
I inspected LangChain's code as soon as it was released, ran some tests, and never used it since. I'm surprised so many consider its limitations acceptable. Using embedding similarity as a query filter is like trying to answer a prompt by comparing every chunk of text to your prompt. It makes absolutely no sense, because oftentimes an answer looks nothing like a question, and/or the data needed to answer a question looks nothing like the question.
The purpose of the embedding layer in a transformer neural network is to prepare the prompt tensor for further processing through the remaining model layers. It's like bringing your prompt to the starting line of a long process to be answered, but instead of bringing just the prompt to the starting line, LangChain brings the entire text you're asking the question of to the starting line with your question and asks them to look at each other and be like "hey, whoever looks like me, stand over here with me. OK, now the rest of you go away, and I'm going to ask ChatGPT to see which of you remaining can help answer me."
This is a sleight-of-hand trick, trying to replace everything that happens after the starting line with ChatGPT, but it doesn't really work for 2 big reasons: (1) ChatGPT's context is not large enough to transform both the entire text you're asking a question of and your prompt, and the same limitation applies to batching; (2) your embeddings are incomplete because they were not created by the network, but by hacking the first layer, in a sense.
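For anyone weighing this critique: what retrieval-augmented setups actually do is embed the question with a separate embedding model (not the LLM's first layer) and rank stored chunk vectors by similarity. A minimal sketch of that ranking step in plain numpy, with made-up 3-dimensional vectors standing in for real embeddings (real models produce hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = vectors point the same way, near 0.0 = unrelated directions
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings of three document chunks (toy 3-d vectors).
chunks = {
    "autoencoders compress data into a latent space": np.array([0.9, 0.1, 0.0]),
    "the weather was nice today":                     np.array([0.1, 0.2, 0.9]),
    "the bottleneck layer forces a compact encoding": np.array([0.8, 0.3, 0.1]),
}

query_vec = np.array([0.85, 0.2, 0.05])  # pretend embedding of the question

# Rank chunks by similarity to the question and keep the top 2.
top = sorted(chunks, key=lambda c: cosine_similarity(chunks[c], query_vec), reverse=True)[:2]
print(top)
```

Whether the top-ranked chunks actually contain the answer depends on how well the embedding model maps questions and answer-bearing text near each other, which is exactly the point of contention in this thread.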
Interesting take. I suspect most people don't understand the technology enough to see how it works. Would be helpful if you could make a video explanation
The biggest limitation right now, which we can't get around, is ChatGPT's context length. There is no way around that unless the context is greatly increased by OpenAI themselves, or we could train our GPT-4 model on large texts.
@@albertocambronero1326 I agree. It would be cool if there was a sort of "short-term memory model" that could hold personal data. I don't see expanding context length as a parsimonious solution. Model queries produce the best results when they are short and pointed. Any time you need to bring a ton of context to the prompt, it reduces the relative weight of the primary question. Imagine a patient friend who accepts questions with an unrestricted context length. They have never read The Great Gatsby (i.e. this would be like your personal data), so to ask them a question about Jay Gatsby, the question must begin by reading them the entire Great Gatsby novel, followed by "the end... Where did Jay Gatsby go to college?" Then asking them another Gatsby question requires reading them the novel again, and again. It would be awesome if there was a way to side-load a small personalized model that can plug into an LLM for extended capabilities.
@@langmod Amazing response. I did not know what was going on behind the scenes with the context, and did not know model queries produce the best results when they are short and pointed.
I believe that if you send the novel, it would be stored in the context of the model and then you would be able to ask multiple questions(?), or would the novel lose importance (weight) as more and more context is added?
Referring to the comment that started this thread, the complicated bit is training the model on a certain topic. Let's say we train the existing GPT-4 model on the book The Great Gatsby: it would probably know how to answer questions about the book, but it could not analyze the whole book to find linguistic trends in it (like what is the most talked-about topic in the book) unless you ALSO feed the model an article about "the most talked-about topic in the book".
I mean, I want my GPT-4 model to read the book and analyze the whole picture of what the book is about without needing extra articles about the book.
(My use case is to make GPT-4 analyze thousands of reviews and answer questions about them, but right now using NLP techniques sounds like a more doable option, at least until we have a way to extend GPT-4's knowledge.)
You can't simply say "it doesn't really work". It really depends on the use case. There are true limitations, and some creativity might be required to leverage it. The context size might be sufficient for smaller use cases, or it might be sufficient to break down bigger questions into smaller questions with their own contexts and then summarize, etc.
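A rough sketch of that break-it-down-and-summarize idea. Here `ask_llm` is a hypothetical stand-in for whatever chat-completion call you use, and the character-based splitter is deliberately naive:

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

def split_into_chunks(text: str, chunk_size: int = 3000) -> list[str]:
    # Naive character splitting; real code would respect sentence/token boundaries.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def map_reduce_answer(document: str, question: str) -> str:
    # Map: put the question to each chunk separately, so every call fits the context window.
    partials = [
        ask_llm(f"Using only this excerpt:\n{chunk}\n\nAnswer: {question}")
        for chunk in split_into_chunks(document)
    ]
    # Reduce: merge the per-chunk answers into a single final answer.
    joined = "\n".join(partials)
    return ask_llm(f"Combine these partial answers into one answer to '{question}':\n{joined}")
```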
This video really explains LangChain from A to Z. This is damn good, man.
Appreciate the comment! Thanks for watching
I found this to be very comprehensive and indeed useful.
Your approach in this LangChain vid garnered you a subscriber! Thanks!
Appreciate the support! Thanks for watching
This is an absolutely wonderful video on LangChain, and it's clear and concise. Could you do a tutorial for beginners??? 🙏🏼
Really fantastic, crisp explanation of LLMs, nothing more, nothing less.
Thank you!
Thank you very much for the video, a very well-structured clarification. 👍
Much appreciated! Thanks for watching
Great content! Just what someone who just jumped into Gen AI would need to solve diverse use cases. Subscribed!
Appreciate it! Thanks for watching
Excellent! I've spent hours looking for this 13-minute tutorial. You da man! Thanks! 💪😁🌴🤙
Glad you found it! 😊 Thanks for watching
Very good explanation with a simple example to understand how it works! Thanks for this content
You're welcome! Thanks for watching
I think you have to create the index in Pinecone explicitly. I did this with the following command 'pinecone.create_index(index_name, dimension=1024, metric="euclidean")' just before calling the search. I wonder if anyone else noticed this...
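For context, here is a sketch of that index creation step as the Pinecone client worked around the time of the video (the client's API has since changed). The index name is hypothetical, and the dimension must match your embedding model; OpenAI's text-embedding-ada-002, for example, produces 1536-dimensional vectors:

```python
import pinecone

pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_PINECONE_ENV")

index_name = "langchain-demo"  # hypothetical name
if index_name not in pinecone.list_indexes():
    # dimension=1536 matches text-embedding-ada-002; cosine suits normalized embeddings
    pinecone.create_index(index_name, dimension=1536, metric="cosine")
```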
ty sir
Excellent coding examples. Please do more of these.
Please do a tutorial on how to summarise comments received on a RUclips video.
👍 Your explanation is so structured and clear. I can understand how LangChain works now, even though I don't know Python code at all.
Thanks! 🙏 Glad it was helpful
Thanks! This is the best high-level LangChain video I have watched. I'm not a programmer, but this overview is invaluable... it's clearly explained and demystified the dark arts of LangChain 😂😂... Question: what's the most straightforward way of converting website data into vectors? Is there some way to scrape URLs? Looking to create simple Q&A agents for small websites... thanks
I’m glad it was helpful, I appreciate the comment! Regarding scraping urls, take a look at the latest video I’ve uploaded ruclips.net/video/I-beHln9Gus/видео.html In that video I’m using LangChain’s integration with Apify to extract content from my own webpage
@@rabbitmetrics thanks. Yes, took a look. Will see what I can do. Came across Apify in my research yesterday! Will try to run this with LlamaIndex... I'm teaching myself! There aren't many Apify videos around, so thanks
Zero clutter. A Guru (remover of darkness) is one who can create chunks of knowledge in a sequence that is easier for the Shishya (student) to learn with ease and get it to their neocortex without having to decode the vectors, that allows for carrying it to their multiple incarnations. Thank you Guru-ji.
I appreciate the comment - thanks for watching!
Super helpful. I think a LangChain engineer could hold significant value in the current job market.
I agree!
Excellent intro. Harrison would approve!
Thank you!
This is a cool explanation of how langchain works.
Thank you for your contribution through the RUclips space
Appreciate it! Thanks for watching
Thank you for explaining all the components. Highly appreciate it.
You're welcome! Thanks for watching
This is amazing stuff. Would love to see a deeper dive into it.
Thanks for watching! I'm already working on some deep dive videos
Your explanation is super clear to understand for me as a beginner. I want to know the brief steps of the code flow as titles, like:
1. Creating the environment to get keys, 2. etc. Can anyone answer this?
This is very insightful and straight to the point.
Thank you!
Amazing short video packed with knowledge. Just smashed that subscribe button!
Appreciate the support, thanks for watching!
How are the relevant info (as a vector representation) and the question (as a vector representation) combined into a prompt to query the LLM? The example you show is a standard ChatGPT textual prompting scenario. The LLM will spit out what it knows and not what it does not know, so what application will this info be useful for? Also, is there any associated paper or benchmark that investigates the performance of extracting "relevant information" using this chunking method, or is it implementing some DL-based Q/A paper?
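On the first question: the vectors are only used to rank chunks. It is the plain text of the matched chunks, not their vector form, that gets pasted into the prompt next to the question. A minimal sketch of that "stuff" pattern, where `retrieve_top_chunks` is a hypothetical similarity search over your vector store:

```python
def retrieve_top_chunks(question: str, k: int = 3) -> list[str]:
    raise NotImplementedError("similarity search against your vector store goes here")

def build_prompt(question: str) -> str:
    # The retrieved chunks are injected as ordinary text, then sent to the LLM.
    context = "\n\n".join(retrieve_top_chunks(question))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Telling the model to answer only from the supplied context is what distinguishes this from standard prompting: it steers the model toward your data rather than whatever it memorized during training.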
What a beautiful video. You Sir are a great teacher ! Thank You !
Thank you!
Excellent video. THank you for sharing. Would love to see a video on Langchain Agents. Thank you
You're welcome! Thanks for watching
OpenAI API key usage is not free. I had to add a payment method before the keys started working. Without a valid payment method, the keys don't return any results.
It was free at that time.
Early API users got $18 in credit, and one of my friends got $5 in credit about a month ago.
But now it's not free.
Yes. I tried yesterday and then realised how quickly charges can add up if you don't control your usage.
You don't need to pay; you need to earn. Check Bittensor and work there.
Elon 's working on that 😂
Use local llms.
Thank you for this video. Now I can start working with LangChain. Have subscribed!
You're welcome! Thanks for watching
This video explains it better than some Udemy courses.
Thank you very much for the video! Really helpful for kickstarting with LangChain.
Glad it was helpful!
This is gold! Thank you!❤
Thanks for the clarity , all the best
Absolutely love the way you explained it.
Thank you!
Brilliant. Structured and clear.
Thank you!
Simply fantastic. Thank you very much for explaining it so well.
Appreciate the comment! 🙏 Thanks for watching
Excellent unpack! Can you please provide a link to this notebook?
Thank you, this is the info I was looking for.
Excellent overview - Thanks!
You're welcome, thanks for watching!
How do you store an API key in the .env? I created the .env file in the root, and I get error 500 when trying to open the .env; even ChatGPT doesn't know why.
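For reference: .env is just a plain text file next to your script or notebook, so an error 500 suggests it was being fetched through a web server rather than read from disk. A minimal sketch using python-dotenv (the variable name is the convention OpenAI's Python client looks for):

```python
# .env file contents (one KEY=value pair per line, no quotes needed):
#   OPENAI_API_KEY=sk-...
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.getenv("OPENAI_API_KEY")
print("key loaded:", api_key is not None)
```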
Hi, this video is one of the best, but LangChain has since changed its modules and classes. Please update us with a new video; for example, SimpleSequentialChain is not supported now!!
Thanks, friend. You answered a lot of questions here, and the repo helped me understand your presentation much better. Please share more. Have a great day.
You're welcome! Thanks for watching
Great explanation! I learned a ton with your video
just found your channel. Excellent Content - another sub for you sir!
Thank you I appreciate the support!
Amazing tutorial and explanation, thank you!
This is so interesting. We (a German insurance company) want to develop our own copilot for employees. But we can't use the GPT-4 API given that our company's data is sensitive and we don't want it to end up public at OpenAI. Do you have a tip for this issue?
Yes, you would use a local (possibly finetuned) language model instead of GPT4 - planning a video on this
@@rabbitmetrics would be more than happy about a video concerning this topic. Maybe using GPT4ALL
If you look at OpenAI's privacy policy, you'll find that they explicitly state that data provided through the API is not recycled into the training data for OpenAI's systems unless you explicitly enable it; it's off by default. So yes, you can use OpenAI's systems through the API with proprietary information and it won't end up in the training data. A quick search will let you find their official announcements about this.
@@thebluriam you believe them ??? :D :D: D :D :D
@@markschrammel9513 Yes, they would be in breach of their own terms of service and legally liable. Also, the API has many fewer restrictions and controls vs ChatGPT; it's a totally different animal.
Great!!! Fantastic! Awesome! Thank you for sharing!
Thanks for watching!
this video was nice and gives a good intro to the topic
Wonder how useful this might be to use with repos? Imagine you could chat with GPT and it knows your entire codebase and could use specific examples in your conversations. Of course there are some security concerns, but the trade-off might be worth it.
I want to explore doing exactly this, but with a private LLM instance rather than shipping data to GPT or elsewhere. I've been using gpt-engineer, which is super fun. When it can create a codebase and then iterate on it, it'll be even more fun.
Awesome work thanks a lot!
Hi, I am new to Python. How do I get to the screen at 5:00 to edit the environment file? I installed all the components and then got stuck. Thank you!
Excellent introduction! Thanks a lot :-)
Fascinating. Thank you for this.
Great explanation, thanks!
Excellent work!
Thanks for sharing the knowledge 👍
Fantastic overview of Langchain! Thank you @Rabbitmetrics
Highly appreciated video
Hi there, is there a way to combine steps 4 and 5? I assumed you would be using the Agent to answer questions on the autoencoder that we had focused on for the whole video, but then we just used it to do some maths. I think it would be useful if it could answer questions based on the embeddings we have in our index?
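One way to combine them, sketched with the LangChain API roughly as it existed when the video was made (the module layout has since been reorganized): wrap a retrieval QA chain over the Pinecone index as a tool, so the agent can answer questions from the index alongside doing math. The index name here is hypothetical:

```python
import pinecone
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_PINECONE_ENV")

llm = ChatOpenAI(temperature=0)
embeddings = OpenAIEmbeddings()

# Reconnect to the index built earlier; "langchain-demo" is a hypothetical name.
docsearch = Pinecone.from_existing_index("langchain-demo", embeddings)
qa = RetrievalQA.from_chain_type(llm=llm, retriever=docsearch.as_retriever())

tools = [
    Tool(
        name="autoencoder-docs",
        func=qa.run,
        description="Answers questions about the autoencoder documents in the index.",
    ),
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What does the indexed text say about the bottleneck layer?")
```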
the hack: we reduce what we have to feed to the LLM by filtering down our data using similarity search on-demand [with embeddings]
Awesome Explanation
great! I can use this video to teach my friend
Can you do a video on Autogen and LangChain? Maybe throw in SuperAgent as well.
Will be likely covering this in upcoming videos
Subscribed. Others have clamored for the notebook. I do as well. Thank you.
How safe is proprietary data when using this? Is the data saved by OpenAI?
Thanks a lot. Very good explanation.
Thanks!
Great explanation!
This is really great video!
Great. Would love to have access to the code as well. Thanks!
Can someone explain to me how the question and the relevant (personal) data are combined when prompting the model? Also, if I understand this correctly, using LangChain would after all enlarge the prompt and hence the number of tokens needed / cost? Thanks in advance!
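On both points: yes. The retrieved chunks are concatenated with the question as plain text, so the prompt (and the per-call token cost) grows with every chunk included. A quick way to see the difference with tiktoken; the chunk strings here are placeholders:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

question = "What is an autoencoder?"
chunks = ["<retrieved chunk one>", "<retrieved chunk two>"]  # placeholder text

prompt = "Context:\n" + "\n\n".join(chunks) + "\n\nQuestion: " + question
print(len(enc.encode(question)), "tokens for the bare question")
print(len(enc.encode(prompt)), "tokens for the stuffed prompt")
```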
Great video! Thank you.
Your summary of LangChain is very accurate. Do you have a PPT to share?
Thanks! Unfortunately, I don’t have a PPT. The video is made with FCPX
great overview and slides
Wonderful video. Thanks.
🎉🎉🎉 Great overview of LangChain! Can you do a similar video on using LangChain with Open Assistant and the Weaviate vector database?
Thanks! That’s a good idea for a video