Nice video.
Thank you for the quick tutorial, just wondering how this could be deployed on the web.
Hi, the video shows how to run it as a FastAPI app, which can be deployed. If you want to know how to deploy FastAPI on a cloud like AWS, you can watch ruclips.net/video/7FVPn25mmEQ/видео.htmlsi=FAtDYHUduXugcN34
@@FutureSmartAI Thank you.
Very good video. Thanks a lot for making it.
Glad you liked it!
Hi Pradip, as usual, amazing content you put out there!
I created a RAG app that reads each line from a txt file in the same folder and passes it through an API. The returned data is chunked and embedded, then passed to the retrieval chain. How best do you think I can do this as a large-scale process, i.e. reading the original txt files one after the other, passing them to the LLM, and then appending the results into a final file? I would appreciate some insight 🙏🏾
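A minimal sketch of that batch flow, assuming one plain-text file per document (the `call_llm` function is a placeholder for your real LLM / retrieval-chain call, and the file layout is illustrative):

```python
import os


def call_llm(text: str) -> str:
    # Placeholder for the real LLM / retrieval-chain invocation.
    return text.upper()


def process_folder(input_dir: str, output_path: str) -> None:
    """Read each .txt file in input_dir in sorted order, pass its
    contents through the LLM, and append every result to one final file."""
    with open(output_path, "a", encoding="utf-8") as out:
        for name in sorted(os.listdir(input_dir)):
            if not name.endswith(".txt"):
                continue
            with open(os.path.join(input_dir, name), encoding="utf-8") as f:
                out.write(call_llm(f.read()) + "\n")
```

Opening the output in append mode means you can re-run the job incrementally; for very large batches you would likely also add error handling and rate limiting around the LLM call.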
Hi Pradip, how do we make the input document dynamic? Meaning, if it's deployed as a web app, how can someone input their own documents so the web app answers based on those new documents instead of something pre-loaded? Do we require another API/cloud storage, etc.?
We can store all uploaded docs in a folder and load them from there. If each user only wants to ask questions about their own files, you need to create a separate index for each user, or better, when inserting a doc into the vector database, add the user ID to its metadata, so that when that user asks a question you only fetch docs whose metadata contains that user ID.
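That per-user filtering idea can be sketched with a toy in-memory store (the class and method names here are illustrative, not a real vector-DB client; real stores such as Chroma or Pinecone accept a metadata dict at insert time and a metadata filter at query time):

```python
class SharedIndex:
    """Toy stand-in for one shared vector index with per-user
    metadata filtering (illustrative only, not a real client)."""

    def __init__(self):
        self._docs = []  # list of (text, metadata) pairs

    def add(self, text: str, user_id: str) -> None:
        # Tag every inserted document with its owner's user ID.
        self._docs.append((text, {"user_id": user_id}))

    def search(self, query: str, user_id: str) -> list:
        # A real store ranks by embedding similarity; a substring match
        # stands in here. The metadata filter is the important part:
        # only documents tagged with this user_id are ever returned.
        return [text for text, meta in self._docs
                if meta["user_id"] == user_id
                and query.lower() in text.lower()]
```

With this shape, one index serves all users, and isolation is enforced purely by the `user_id` filter applied at query time.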
Nice tutorial.
Hello Pradip, What is the best way to get in touch with you?
You can message me on LinkedIn.
great tutorial
How to add memory in a LangChain server?
In this video I have shown how to add memory to a chain: ruclips.net/video/fss6CrmQU2Y/видео.htmlsi=2QWgHBkJ7eutw-vm
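The core idea behind chain memory (keep per-session history and replay it before each new question) can be sketched like this; the class and method names are made up for illustration, and `llm` is any callable:

```python
class ChatMemory:
    """Minimal sketch of per-session conversational memory: store the
    turns of each session and prepend them to every new prompt."""

    def __init__(self):
        self._sessions = {}  # session_id -> list of (role, text) turns

    def ask(self, session_id: str, question: str, llm) -> str:
        history = self._sessions.setdefault(session_id, [])
        # Build the prompt from all earlier turns plus the new question.
        lines = [f"{role}: {text}" for role, text in history]
        lines.append(f"user: {question}")
        answer = llm("\n".join(lines))
        # Record both sides of the exchange for the next turn.
        history.append(("user", question))
        history.append(("assistant", answer))
        return answer
```

Keying the history on a session ID is what lets one deployed server hold separate conversations for separate users.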
👍