Get Better Response With Same Token Size - Langchain Parent Document Retriever
- Published: 22 Oct 2023
- Are you using OpenAI and not getting good responses because of limited context length and token size? Watch this video to understand how you can resolve this issue using LangChain's Parent Document Retriever feature.
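The core idea can be sketched in plain Python: index small "child" chunks so that matching against a query is precise, but return the larger "parent" chunk each child came from, so the model receives full context without embedding huge chunks. This is only an illustrative sketch, not LangChain's implementation; the function names are made up here, and simple word-overlap scoring stands in for the embedding similarity search LangChain would actually perform.

```python
def split(text, size):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(documents, parent_size=40, child_size=10):
    """Build (child_chunk, parent_chunk) pairs: children are indexed,
    parents are what we return to the LLM."""
    index = []
    for doc in documents:
        for parent in split(doc, parent_size):
            for child in split(parent, child_size):
                index.append((child, parent))
    return index

def retrieve(index, query, k=1):
    """Score children against the query (word overlap as a stand-in
    for embedding similarity), then return their parents, deduplicated."""
    q = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda pair: len(q & set(pair[0].lower().split())),
        reverse=True,
    )
    parents, seen = [], set()
    for child, parent in scored[:k]:
        if parent not in seen:
            seen.add(parent)
            parents.append(parent)
    return parents
```

In LangChain itself this pattern is provided by `ParentDocumentRetriever`, which pairs a vector store (holding the small child chunks) with a document store (holding the full parents), so you get the precision of small-chunk search and the context of large chunks at the same time.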
Blog: www.shwetalodha.in/
Medium: / shweta-lodha
* REFERRAL LINK ************
Medium referral link: / membership
* REFERRAL LINK ************
###### MORE PLAYLISTS ######
⭐Python for beginners: • #1 Python for Beginner...
⭐Python Pandas: • #1 Python Pandas: Intr...
⭐Python tips and tricks: • Python Tip: Take Multi...
⭐Jupyter tips & tricks: • Jupyter Tip: Run Termi...
⭐Microsoft Azure: • Know Response Time Of ...
⭐Azure ML and AI: • Getting Started with I...
⭐Visual Studio Code a.k.a. VS Code: • How to get started wit...
#openai #chatbot #chatgpt
How do you save/load the db (vectorstore) when using the Parent Document Retriever?
Thank you Shweta for such wonderful content.
Really appreciable.
This will help a lot of people.
One suggestion, if possible: please share the code file in the description so that people can access it and try it out by making their own changes to it.
You are doing a good job, keep it up 👍👍👍🙏
Hey, thanks for watching. You can find most of the code in my Medium link.
I have a question: what is the need for parent chunks here? Why not just split the documents directly into smaller chunks?
What if you set k=2 and your answer is in the 4th chunk?
Can you include the script on GitHub?
Please check my Medium link, provided in the About section of my channel.