So even Ryan Gosling's getting into this now.
It's a fun topic!
@@DataIndependent he was referring to the fact you look like Ryan Gosling.
@@blockanese3225 I think he understands that.
@@Author_SoftwareDesigner lol I couldn’t tell if he understood that when he said it’s a fun topic.
yesss
OMG, this is exactly the functionality I need as a long-form fiction writer, not just to be able to look up continuity stuff in previous works in a series so that I don't contradict myself or reinvent wheels ^^ -- but then to also do productive brainstorming/editing/feedback with the chatbot. I need to figure out how to make exactly this happen! Thank you for the video!
Nice! Glad it was helpful
Agreed. Do you have any simplified tutorials, like one explaining LangChain? I fed my novel into ChatGPT page by page and it worked... OK, but I kept running into roadblocks. Memory cache limits and more.
@@areacode3816 Maybe your Pinecone index is reaching its limit? Or the 4,000-token GPT-3 limit? I would check these first. If it's Pinecone, the fix is easy: just buy more space. If it's GPT, try GPT-4, which doubles the limit to 8K tokens. If that doesn't work, I would introduce an intermediate summarization step before passing the text to GPT-3.
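A minimal sketch of that intermediate summarization step, assuming the 0.0.x-era LangChain API (load_summarize_chain); the variable novel_text and the chunk size are illustrative placeholders:

```python
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)

# Split the manuscript into chunks small enough for the model's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
docs = splitter.create_documents([novel_text])  # novel_text: your full manuscript (placeholder)

# map_reduce summarizes each chunk, then summarizes the summaries,
# so no single call exceeds the token limit.
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)
```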
How would I use this to make a smart chatbot for our company's chat support, specific to our company's items?
@@gjsxnobody7534 I have the same query!
you know it's something big when The GRAY MAN himself is teaching you AI!!
Your series is just so so good. What a passionate, talented teacher you are!
Nice! Thank you!
This is the best video i've watched explaining the use of pinecone.
Nice!!
No idea how long i've been searching the web for this exact tutorial. Thank you.
Wonderful - glad it worked out.
@@DataIndependent Do you offer consulting? I'd like to do something like this for my learners / learning business. 🙂
@@koraegis Happy to chat! Can you send me an email at contact@dataindependent.com with more details?
@@DataIndependent Thanks! Will do it now :D
Great job on the video. I understood a lot more in 12 mins than from a day of reading documentation. Would be extremely helpful if you can bookend this video with 1. dependencies and set up and 2. turning this into a web app. If you can make this into a playlist of 3 videos, even better.
This is absolutely brilliant! I love the way you explain everything and just give away all the notes in such a detailed and easy-to-follow way! 🤩
This is exactly what I was looking to do, but I couldn't sort it out. This video is legit the best resource on this subject matter. You're a gentleman and a scholar. I tip my hat to you, good sir.
Great tutorial, bro. You're really doing good out here for us, the ignorant. It took me a while to figure out that I needed to run pip install pinecone-client to install Pinecone, so this is for anyone else who is stuck there.
Glad it worked out
Bro, thank you so much. Honestly, this video means so much to me; I really appreciate it. All the best in all your future endeavors.
Love it - what was your use case?
Can you do a more in-depth Pinecone video? It seems like an interesting concept alongside embeddings, and I think it'll help stitch together the understanding of embeddings for more 'web devs' like me. I like how you used relatable terms while introducing it in this video, and I think it deserves its own space. Please consider an Embeddings + Pinecone fundamentals video. Thank you.
Nice! Thank you. What's the question you have about the process?
@@DataIndependent I think that a general Pinecone video would be great, and connecting it with LangChain and building apps similar to this one would be awesome
Weaviate is even better
Fantastic video thanks. I obtained excellent results (accuracy) following your guide compared to other tutorials I tried previously.
Ah that's great - thanks for the comment
Was the starter tier of pinecone enough for you?
It's one project only on the starter tier, but that one project can contain multiple documents under one vector db. For me it was certainly enough to get an understanding of the potential.
From my limited experience, to create multiple vector dbs for different project types you will need to go premium/paid, and the cost is quite high.
There may be other competitors offering cheaper entry levels if you wish to develop apps, but for a hobbyist/learner the starter tier on Pinecone is fine IMO.
thanks for making these videos! I've been going through the playlist and learning a lot. One thing I wanted to mention that I find really helpful in addition to the concepts explained is the background music! Would love to get that playlist :)
Thank you! A lot of people gave constructive feedback that they didn't like it, especially when they sped up the track and listened to it at 1.2x or 1.5x.
Here is where I got the music!
lofigenerator.com/
This helped me a lot. Thanks for the updated code in the description as well!
Thank you soooo much I am using this knowledge soo much for my school projects.
I like the video because it was to the point and the presentation with the initial overview diagram is great.
Great video man. Loved it. I had been looking for this solution for some time. Keep up the good work.
Duudee!!! This video is exactly what I was looking for! Still a complete noob at all this LLM integration stuff and so visual tutorials are so incredibly helpful!
Thank you for putting this together 🙌🏿🎉🙌🏿
Great to hear! Checkout the video on the '7 core concepts' which may help round out the learnings
This is super awesome!!! And so easily explained! You made my year. Please keep up the great work!
I actually scanned the whole Mars trilogy to have something substantial, and it works fine. The queries generally return decent answers, although some of them are way off.
Thanks for your excellent work!
Nice! Glad to hear it. How many pages/words is the mars trilogy?
@@DataIndependent About 1500 pages in total.
Did you look at the results returned from Pinecone so you could determine if the answers that were off were due to Pinecone not providing the right context or OpenAi not interpreting the data correctly?
@@keithprice3369 No, I haven't. Good idea to do this. I now have GPT-4 access so I can use much larger prompts.
@@bartvandeenen I've been watching a few videos about LangChain and they did bring up that the chunk size (and overlap) can have a huge impact on the quality of the results. They not only said there hasn't been much research on an ideal size but they said it should likely vary depending on the structure of the document. One presenter suggested 3 sentences with overlap might be a good starting point. But I don't know enough about LangChain, yet, to know how you specify a split on the number of sentences vs just a chunk size.
This is gold! Please do another one with data in Excel or a Google Sheet :)
Got to say, you are awesome! Keep up the good work, you got a subscriber here!
Nice! Thank you. I just ordered upgrades for my recording set up so quality will increase soon.
This is such a game changer. Can’t wait to hook all of this up to GPT-4 as well as countless other things
Nice! What other ideas do you think it should be hooked up to?
Thumbs up and subscribed.
Hi! Awesome tutorial. This is exactly what I was looking for. I really love this series you've started and hope you'll keep it up. I also wanted to ask:
1. What's the difference between using Pinecone and another vector store like Chroma, FAISS, Weaviate, etc.? And what made you choose Pinecone for this particular tutorial?
2. What was the cost for creating embeddings for this book? (time & money)
3. Is there a way to estimate the cost of embeddings with LangChain beforehand?
Thank you very much and looking forward to more vids like this! 🤟
For your questions
1. The difference between Pinecone, Chroma, etc.? Not much. They store your embeddings and they run a similarity calc for you. However, the space is super new; as things progress, one may be a no-brainer over another. Ex: You could also do this in GCP, but you'd have to deal with their overhead as well.
2. Hm, unsure about the book, but here is the pricing for Ada embeddings: $0.0004 / 1K tokens. So a 120K-word book, which is ~147K tokens, would cost about $0.06. Not very steep...
3. Yes, you can calc the number of tokens you're going to use and the task, then look up their pricing table and see how much it'll be.
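For question 3, a rough sketch of that calculation with OpenAI's tiktoken tokenizer; the price constant is the Ada rate quoted above, so double-check it against the current pricing table:

```python
import tiktoken

ADA_PRICE_PER_1K_TOKENS = 0.0004  # text-embedding-ada-002 rate quoted above

def estimate_embedding_cost(text: str) -> float:
    # cl100k_base is the encoding used by text-embedding-ada-002
    enc = tiktoken.get_encoding("cl100k_base")
    n_tokens = len(enc.encode(text))
    return n_tokens / 1000 * ADA_PRICE_PER_1K_TOKENS

# e.g. a ~147K-token book comes out to roughly $0.06
```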
@@myplaylista1594 This one should help out
help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
@@DataIndependent It can't be so expensive. text-embedding-ada-002 is about ~3,000 pages per US dollar (assuming ~800 tokens per page).
@@klaudioz_ Ya, you're right, my mistake. I didn't divide by the extra thousand in the previous calc. Fixing it now.
@@DataIndependent No problem. Thanks for your great videos !!
Awesome example, thanks for putting this together!
Nice! Glad it worked out. Let me know if you have any questions
Really clear, useful demo - thanks for sharing
Nice video. I tweaked the code and split the index part and the query part so that I can index once and keep querying, like how we would do in the real world. Nicely put together!!
Hello, do you have an example of how you did that? This is the part that confuses me: how to reuse the same indexes. Thanks.
Can you please provide an example?
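A minimal sketch of that split, assuming the Pinecone.from_existing_index helper mentioned further down this thread (0.0.x-era LangChain); the API key, environment, and index name are placeholders:

```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")  # placeholders
embeddings = OpenAIEmbeddings()

# --- Indexing script: run once (texts comes from your splitter, as in the video) ---
Pinecone.from_texts([t.page_content for t in texts], embeddings,
                    index_name="langchain1")

# --- Query script: run as often as you like, no re-embedding of the corpus ---
docsearch = Pinecone.from_existing_index("langchain1", embeddings)
docs = docsearch.similarity_search("your question here")
```

Note that each query still embeds the query string itself, but that is one short API call, not the whole book.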
This is definitely cool, thank you. There seem to be several dependencies left out. It would be great if all dependencies were shown or listed...
ok, thank you and will do. Are you having a hard time installing them all?
@@DataIndependent hey I'm stuck on the dependency part as well
This is a great video and Greg is awesome. Let's hope he puts together a course!
Thank you very much for doing this. It's absolutely awesome!!! Also can you do a video on how to improve the quality of answers?
Thanks for this very helpful practical tutorial!
Love this brother!
This is really cool, but I haven't yet seen a query against a specific information store (in your case, a book) that ChatGPT can't natively answer. For example, I gave ChatGPT the questions you asked and got detailed answers that echoed the answers you received, and then some.
Your videos are amazing. Keep it up and thanks!
Thanks Philip. Anything else you want to see?
@@DataIndependent I'm curious what's a better option for this use case and would love to hear your thoughts. Why LangChain over Haystack? I want to pass thousands of text documents into a question-answering system and am still learning the best way to structure it. Also, an integration into something like Paperless would be cool!
I'm a total noob so excuse my ignorance. Thanks!
@@philipsnowden I haven't used Haystack yet so I can't comment on it.
If you have 1K text documents you'll definitely want to get embeddings and store them, retrieve them, then pass them into your prompt for the answer.
Haven't used paperless yet either :)
@@DataIndependent Good info, thank you.
@@DataIndependent Could you do a more in-depth explainer on this? I'm struggling to take a directory of text files and get it going. I've been reading and trying the docs for LangChain but am having a hard time. And can you use the new Turbo 3.5 model to answer the questions? Thanks for your time; do you have a tip jar?
thank you Greg! very helpful tutorial!!
Thanks Guiliana!
Thanks as always Greg!
Awesome thank you
Awesome tutorial, brief and easy to understand. Do you think this could be an approach to semantic search on clients' private data? My concern is data privacy. I guess that by using Pinecone and OpenAI, OpenAI only processes what we send (to respond in natural language) but doesn't store any of our documents?
This is awesome! My question is: what happens when the model is asked a question outside of the knowledge base that was just uploaded? For example, what would happen if you asked who the best soccer player is?
This is a great video - succinct and easy to follow.
Two questions:
1) How easy is it to add more than one document to the same vector db
2) Is it possible to append an additional ... field(?) to that database table, so that the provenance of the reference can be reported back with the synthesised result?
1) Super easy. Just upload another
2) Yep you can; it's the metadata field, and you can add a whole bunch. People will often do this for document IDs.
@@DataIndependent Amazing (and thanks for the reply). One final follow-up then: is it easy / possible to delete vectors from the db too? (I assume yes, but wanted to ask.) I assume this is done using a query, e.g. if metadata contains "Document ID X" then delete?
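A hedged sketch of both deletion styles with the v2-era pinecone-client; delete-by-metadata-filter may not be available on every tier, and the ids and filter key below are hypothetical:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")  # placeholders
index = pinecone.Index("langchain1")

index.delete(ids=["vec-id-1", "vec-id-2"])           # delete by vector id
index.delete(filter={"document_id": "Document X"})   # delete by metadata filter
```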
Very helpful Video, Thank you!
I am getting "Index 'None' not found in your Pinecone project. Did you mean one of the following indexes: langchain1" for the line below:
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name)
Any idea what the issue could be? I checked that the index_name variable is set correctly as langchain1.
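One likely cause, assuming the from_texts signature of that LangChain version: passed positionally, the third argument binds to the metadatas parameter rather than index_name, which then defaults to None and produces exactly this error. Passing it as a keyword may fix it:

```python
# index_name passed positionally can bind to metadatas, leaving index_name=None,
# hence "Index 'None' not found"
docsearch = Pinecone.from_texts(
    [t.page_content for t in texts],
    embeddings,
    index_name="langchain1",  # pass it as a keyword argument
)
```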
Succinct and easy to follow. Very cool.
Excellent video!
Nice!
I was working with Pinecone / GPT code recently that gave your chat history basically infinite memory of past chats by storing them in Pinecone, which was pretty sweet: you can use it to give your chatbot more context for the conversation, as it then remembers everything you ever talked about.
I will be combining this with custom-dataset Pinecone storage this week (like a book) to create a super-powered custom GPT with infinite recall of past convos.
I would be curious on your take, particularly how to keep the book data universally available to all users while at the same time keeping the past chat data of a particular user totally private, but still being able to store both types of data on the free-tier Pinecone, which I can see you are using (and I will be using too).
Nice! That's great. Soon if you have too much information (like in the book example above), you'll need to get good at picking which pieces of previous history you want to parse out. I imagine that won't be too hard in the beginning but it will later on.
@@DataIndependent Doesn't the k variable take care of this? It only returns the top k results, in order of relevance, that you end up querying.
Or are you talking about the chat history and not the corpus?
I see no reason why you would not just specify a k variable of 5 or 10 for the chat history too. For example, if a user was seeking relationship advice and the system knew their entire relationship history, and the user said something like "this reminds me of the first relationship that I told you about", it would be easy for the system to do an exact recall of the relationship and the name of the partner, and from there recall everything very quickly using the k variable on the chat history.
I use relationships as an example because I just trained my system on a book that I wrote called Sex 3.0 (something that GPT knows nothing about), and I am going to be giving it infinite memory and recall this week.
@@PizzaLord Yes, the K variable will help w/ this. My comment was around the chance for more noise to get introduced the more data you have. Ex: More documents creep in that share a close semantic meaning, but aren't actually what you're looking for. For small projects this shouldn't be an issue.
Nice! That's cool about the project. Let me know how it goes.
The langchain discord #tools would love to see it too
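For reference, setting k on the retrieval call is a one-liner (a sketch, assuming LangChain's similarity_search signature; 5 and 10 are just the values discussed above):

```python
# Pull more (or fewer) chunks per query by tuning k
docs = docsearch.similarity_search(query, k=10)
```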
@@DataIndependent Another thing I will look at, and I think it would be cool if you looked at it too, is certain chat questions triggering an event like a graphic or a video link being shown, whereby the video can be played without leaving the chat. This can be done either by embedding the video in the chat response area or by having a separate area of the same HTML page, a multimedia area or pane, that gets updated.
After all, the whole point of LangChain is to be able to chain things together, no? Once you chain things together you can get workflow.
This gets around one of ChatGPT's main limitations right now, which is that it's text-only in terms of what you can teach it, and the internet loves its visuals and videos.
Once this event flow stuff is in place you can easily use it to flow through all kinds of workflows with GPT at the centre, like collecting data in forms or doing quick surveys, so you can store users' preferences and opinions about what they might want to get out of an online course that you are teaching, and then store that in a vector DB. It can become its own platform at that point.
@@PizzaLord You could likely do that by defining a custom tool, grabbing an image based off a URL (or generating one) and then displaying in your chat box. Doing custom tools is interesting and I'm going to look into a video for that.
I would love to see a video on the limitations of RAG. For instance say you have a document containing a summary of each country in Europe. Naturally one of the facts listed for each country would be the year they joined the EU. Unless explicitly stated, RAG wouldn't be able to tell you how many countries there are in the EU. I would love to see a tutorial on working around that limitation.
Nice! That's fun, thanks for the input on that.
You're right, that isn't a standard question and you'll need a different type of system set up for that
awesome video, very helpful! thank you
Love it thank you
Great tutorial, thanks so much!
Awesome thanks Walter
Amazing stuff with these videos
Glad you like them!
heeeey! Loving this! Greg, I'm running an e-commerce site. We've got a metric shit-ton of products and endless amounts of purchase data. It would be extremely interesting to see how we could work with this to get all our product data loaded into Pinecone and then be able to query it in some meaningful sense. I guess a lot of the comments are in a similar vein. Would be super cool to get a video on that. I could supply some product data from our shop if need be.
Nice! What would be the business use case or problem you'd be trying to solve?
@@DataIndependent So I'm running a shop for car parts and equipment for cars. I think, from a consumer point of view, it would be amazing if we could solve two major issues. 1. When you're browsing for something to solve your problem rather than for an actual product. Say that you have some stains on your car. It would be amazing if you could just ask the friendly chat support how to deal with the issue, and the support AI would have all the information about all our products and all the content that we've written at hand, and could go "Yeah, so you would use this product and go about it in so-and-so manner". 2. It would be super cool if it also had access to user data, past purchases, etc., and could go "Hey, last time you bought this and this. How did that work out for you? From 1 to 10, how much did you love it?" etc. -- It feels like this scenario is predicated on the idea that the AI has very specific knowledge.
Awesome tutorial, brief and easy to understand. My concern is data privacy: what happens with the data we turn into embeddings by using OpenAI? Is that data used by them? Do they train their models further with that data? Can someone please answer if you have info on this privacy topic?
Great tutorial. I wonder how to generate questions based on the content of the book? I would probably have to pass the entire content of the book to the GPT model.
If I already have some embedding vectors stored in Pinecone, I don't need to embed again. How can I modify the following code ''docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)'' and use docsearch.similarity_search() in the next step?
Well, this indeed is the unanswered question. Unfortunately that is the problem with Jupyter Notebook cells.
Amazing work! Thank you so much!!
Very impressive. Great job.
Great series.
I was looking at creating an API to store the embeddings in Pinecone; that part was fairly simple. But I did not understand how to pass in a query (plain text) and get the response back from the embeddings stored in the Pinecone db. I see that's what's happening in the doc search and the chain lines, but how do I do it separately?
Sorry I don't fully understand your question - could you rephrase it?
Thanks for sharing. Could you elaborate on why you didn’t use overlap?
This video is very good!
Ok, so maybe I misunderstand this one. I used the full text of War and Peace, just to test. My query was "How many times does the word 'fire' appear in War and Peace?" and when it finishes running there is no output... is this not the right setup for that kind of question?
Then I set the query to "What are the main philosophical ideas in War and Peace?" and it also returned nothing. It didn't error out. I double-checked and all my code is good.
Ah yes, this is a fun question.
LLMs won't be good at counting words like you're describing. That's a task they aren't well suited for yet. I would use a regular regex or a .find() for that.
The 2nd question is also hard: you need to review multiple pieces of text in the book to form a good opinion of the philosophical ideas.
Just doing a similar-embedding approach won't get you there.
If you wanted to answer the philosophical question I would do a map reduce or refine with a large context window. However, War and Peace is huge, so that would cost a lot.
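The word-count part really is a few lines of plain Python; the file path is a placeholder:

```python
import re

with open("war_and_peace.txt", encoding="utf-8") as f:  # placeholder path
    text = f.read()

# \b word boundaries so "fire" doesn't also match "fireplace"
count = len(re.findall(r"\bfire\b", text, flags=re.IGNORECASE))
print(count)
```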
Greg, you are INCREDIBLE! Your channel and GitHub are a goldmine. Thank you 🙏. At 9:09, what install on Mac is necessary to assess methods like that?
Also, I’ve been trying to make some type of “theorems, definitions, and corollaries” assistant which extracts from my textbook all the math theorems, definitions, and corollaries. The goal there was to create textbook summaries to reference when I work through tough problems which require me to flip back and forth through my book all day long.
But more interesting: I am struggling to create a "math_proofs" assistant. Your approach in this video is awesome, but I can't find any of your resources in which you use Markdown, or LaTeX, or any mathematical textbook to be queried. I use MathPix to convert my textbooks to LaTeX, Word doc, or Markdown. But when I use my newly converted Markdown text, despite working hand-in-hand with the LangChain documentation, I still fail to get a working agent that proves statements.
I feed the model:
“Prove the zero vector is unique” and it replies nonsense, even though this proof is explicitly written in the text. It is not even something it had to “think” to produce (simple for the sake of example, these proofs are matrix theory so they get crazy). Could you please chime in?
Pulling all of that information out could be tough. I have a video on the playlist around "topic modeling", which is really just pulling structured information out of a piece of text. That one may be what you're looking for.
Really awesome video!
Nice!! Thank you - what else do you want to see?
This is great, thanks! Have you thought about how to extend it to be able to CHAT about the book (as opposed to one question at a time)? I am running into problems figuring out when to keep a chain of chat and when to realize it's a new or related question that needs a fresh pull of similar docs.
What about a video on hosting this on AWS and adding a Front end to make it accessible to clients?
I have a video about building a simple web app in 23 minutes using Streamlit which may help! If not, then Vercel seems like another good option. Soon Pynecone will be one too, once they add hosting.
Great! What are the limits? How many pages can it handle, and what are the costs?
However many pages you want; it's just storage space. Check out Pinecone's pricing for more.
Great video!! Loved your explanation. Could you create another video on how to estimate the costs? Is the process of turning the documents into embeddings using OpenAI run every time you ask a new question, or just the first time? Thanks!
Pinecone is basically a search engine for AI. It doesn't need the entire book, just segments of it. This saves a lot of tokens because only segments of information end up in the prompt.
It's like adding some information into GPT's short-term memory.
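That flow in a minimal sketch, following the pattern from the video (names assume the 0.0.x-era LangChain API, with docsearch and query as in the notebook):

```python
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")

docs = docsearch.similarity_search(query, k=5)  # only the top-k relevant segments
answer = chain.run(input_documents=docs, question=query)  # segments go into the prompt
```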
Every time I run the cell with the embedding class, do I get a charge from OpenAI?
What option can I use to load the embeddings only once (for example, to make queries available through a web application)?
In 1994 Richard E. Osgood created a conversational reading system called "ASK Michael" for Michael Porter's book "The Competitive Advantage of Nations". Please let me know when you can automate the conceptual indexing and question-based indexing of a book including the creation and categorization of relevant questions that a novice that doesn't know any keywords or relevant vocabulary can ask.
This is gold! Thank you so much!
Thank you!
Hey, Greg! I'm trying to connect the dots on GPT + LangChain and your videos have been excellent sources! To give it a try, I'm planning to build some kind of personal assistant for a specific industry (i.e. law, healthcare), and down the road the vector database will become pretty big. Any guidelines on how to sort the best results and also how to show the source of where the information was pulled from?
Nice! Check out the langchain documentation for "q&a with sources" you're able to get them back pretty easily.
Great video, thanks so much.
How do you query the index without creating the embeddings every time? Is that possible?
thanks
Hi, I found this: docsearch = Pinecone.from_existing_index(index_name, embeddings)
Great explanation. Thank you.
Thank you! That's great
Great video. How do I call the embeddings from Pinecone the next time I run the application (instead of having to generate them again via OpenAI at a cost)?
Great Question. Did you ever get a response? I am looking for the same thing
I have a doubt; please help me with this.
I am trying to create a chatbot to which I provide company information, and it will refer to that information to provide answers.
I was trying to achieve this by fine-tuning the OpenAI GPT model, but I'm not getting the desired results.
From what I have understood, this technique will work for the above use case.
Am I right?
Yes, it would help with that. You just need to pass your company's documents into the loader
@@DataIndependentThank you for the reply!
How do I retrieve the data from the existing index instead of recreating it over and over again? I find the upgraded LangChain Pinecone version has dependency issues. Suppose I have some 10 docs, where I want to store each doc separately with its id, metadata, and embeddings initially. Then I just need to retrieve its index and query. How should I do this?
Thank you for this series. I'm confused about one thing: when querying the db, you passed the text, not its embedding. How does Pinecone know how to embed the text?
Great lectures; I learned how to use the LangChain API. It looks like how to fine-tune with LangChain has not been uploaded yet.
Is it a fine-tuned model? Because if not, we will be charged a lot for using the OpenAI API.
Please make a video on a fine-tuned LangChain OpenAI model like text-ada-001.
Great tut, thank you. Any advice on vectorizing a ton of widely varied documents? How many QA chatbots? One per index?
Hm, how many chatbots you need will depend on your product use case.
I would put them in the same index, but make sure your metadata is explicit so you can easily filter with them
@@DataIndependent Thank you
Which version of Python are you using? I could not reproduce this, since the unstructured PDF loader library requires numpy 1.21.2, which for some reason is not listed among the versions supported by Python 3.10 (required >=3.7,
Apologies, I don't have the version on there. I'll do that in my videos going forward.
Is there a recommended list of dependencies needed to run the notebook?
I am having problems at the 3rd line: data = loader.load()
I have tried both on an iMac and on Colab, but both have issues: crashed kernels on the iMac and missing dependencies in Colab.
I am fairly new to Python, so I am probably missing something obvious.
I had a hard time getting it set up as well. What's the error say down at the bottom?
@@DataIndependent It would be great if you could help with a demo on how to set this up, please. I am also facing issues with the Unstructured dependencies and get an error on data = loader.load().
I tried a lot to find a way around it but no luck, sadly. Any advice would be really helpful, please.
I would really like to replicate this amazing work. Thank you for what you do!
@@panlaz1424 Awesome thank you!
I'm unfortunately unaware of how to install Unstructured on your instance.
The langchain documentation on the topic is here: langchain.readthedocs.io/en/latest/modules/document_loaders.html
@@DataIndependent I'm also having this issue on Mac, and it seems so are a lot of others... would it be possible to create a video where you do the install from scratch? Plenty of good karma, I'm sure :)
This is great! thanks, do you have a video that shows how to connect what you did to a chatbot interface?
Not currently but this is on the horizon - I'll make a post on this channel in a few weeks
Is there any limit on the number/size of the documents that can be uploaded so that the model still performs efficiently? I am guessing that with larger sizes, cosine similarity search might take more computational time.
Ya, it likely would take longer. I haven't seen a limit yet. At that point it's an engineering problem rather than an LLM/LangChain situation.
Great video. I am wondering, is there a way to use PDFs made from photocopies of documents (where you need to convert images to text)?
I'm finding that, breaking things down into chunks following your method and code above, it's not picking the right documents, or not cross-referencing as accurately, for questions where the answer can vary. My "book" is actually a complex research paper. Would you have any suggestions on what to play around with in Pinecone in order to get a more accurate answer?
Hey Greg, great video!
Do you know if it's possible to automatically create a pinecone db index from code?
So that you don't have to create them manually
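It is possible with the v2-era pinecone-client; a hedged sketch, where the index name is a placeholder and 1536 is the dimension of text-embedding-ada-002 vectors:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")  # placeholders

if "langchain1" not in pinecone.list_indexes():
    pinecone.create_index("langchain1", dimension=1536, metric="cosine")
```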
Would love to see an example of adding another book after you've done this one. What would be some of the considerations and fine-tuning you'd make as a result of the second upload
You could add more documents to your existing index and it shouldn't be a problem.
However once you start to add a bunch of information, pre-filtering your vectors will become more important.
Ex: If you know the answer comes from 1 of your 3 books then you can tell Pinecone to only return docs from that 1 book
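A sketch of that pre-filtering, assuming LangChain's similarity_search passes a metadata filter through to Pinecone; the "book" key is hypothetical, so use whatever metadata you attached at upload time:

```python
# Restrict retrieval to one book via a metadata filter
docs = docsearch.similarity_search(
    query,
    k=5,
    filter={"book": "book-1"},  # hypothetical metadata key/value
)
```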
Hi, thanks for sharing. If we want to deploy this code on AWS as a web app, what changes should we make?
Thanks Ryan!
I had a really hard time with 'UnstructuredPDFLoader', as the line "data = loader.load()" gives me the error "Unable to get page count. Is poppler installed and in PATH?". Installing poppler just doesn't work (tried cloning, etc.). The only way to do it is to create a new environment and reinstall everything (incl. pip install torch, poppler-utils, tesseract, opencv-python, detectron2, python-poppler). Got it working. GL!
Amazing content, man. Love the diagrams and how you deliver; absolutely professional.
Quick question: is the text returned by the chain exactly the same as in the book, or does the OpenAI engine touch it up and make it better?
I have something that I want to ask about. LangChain identified 5 docs of 1000 characters each, which is about 3500 tokens, and added them to the original prompt;
then our response would be limited to a small number of tokens (since the limit is 4000).
My questions are:
1 - Did I get this part right?
2 - What would happen if LangChain identified, let's say, 7 docs, which is more than 4000 tokens?
3 - Is there any workaround for the 4000-token limit that is beyond vector databases?
1) Yep you're right
2) LangChain remains agnostic and tries to send the command to OpenAI which will return an error
3) Here is a video I just did on how to work around the 4K limit ruclips.net/video/f9_BWhCI4Zo/видео.html
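One common workaround, the map_reduce chain mentioned elsewhere in this thread; a sketch assuming the 0.0.x-era LangChain API: each retrieved doc is queried separately and the answers are then combined, so no single call has to fit everything in the 4K window:

```python
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
answer = chain.run(input_documents=docs, question=query)  # docs from similarity_search
```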
@@DataIndependent Thank you so much, you are a legend. I really admire your videos: very nice structure, minimalistic, and right to the point.
@@mikemansour1166 nice! Glad to hear it
It's really a great video to get started with LangChain. I have a small confusion here: what if I want to send all the similar docs to the LLM model, not just k=5? Is there a way to deal with that?
Thank you - super helpful for understanding how to use external data sources with OpenAI. What are some of the limitations of this approach, i.e. size of content being indexed in Pinecone, any limits on correlating and summarizing data across multiple documents/sources, and can I combine multiple types of sources of information about a certain topic (documents, databases, blogs, cases, etc.) into a single large vector store?
I have a question. In my case I have various books, but when I query one I want information about that one and that one only, not another book that may be similar to it. How should I go about this? Should I have an index in Pinecone for each, or is there a better way to accomplish this?
That was fabulous thank you
Nice! Glad to hear it
Using your guide and trying to load Streamlit for the front end.
When I try to switch out the query variable to a Streamlit text area (below):
query = st.text_area('Input')
I get this error: Failed to connect; did you specify the correct index name?
Any idea on how to fix it? When I switch the text_area back to a plain variable, it works.
In LangChain is "similarity search" used as a synonym for "semantic search", or they are referring to different types of search?
To my knowledge similarity search focuses on finding items that are similar based on their features or characteristics, while semantic search aims to understand the meaning and intent behind the query to provide contextually relevant results
I tried building this, but idk, I'm having a bunch of dependency problems. I even tried downloading the repo and still have a bunch of dependency problems. Is there something I'm missing?
Just out of curiosity, how much does something like this cost in openAI credits?