One of the best, most to-the-point pieces of content. Keep it up, sir; it really is very helpful.
Very helpful - thank you
Hey Pradip, thank you so much for the wonderful tutorial. As a total beginner, it was really helpful.
Simple to understand👍
Hi Pradip, thanks a lot! This is super useful stuff; thanks for taking the time to create it. By the way, not sure if you noticed, but you forgot to show how to transfer files using bash (though I think you shared the command). Aside from that, how do we terminate the EC2/FastAPI process now that we've configured it to keep running even after we close the tab? (Do we just stop the instance itself, or is there a recommended way?) Thanks in advance!
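For anyone else with the same question: if the app was kept alive with something like nohup (as this kind of setup usually is), you can stop just the FastAPI process without stopping the whole instance. A minimal sketch, assuming the server is uvicorn and using placeholder names for the PID, key file, paths, host, and instance ID:

```bash
# Stop only the FastAPI server (leaves the EC2 instance running).
# Find the uvicorn process ID, then kill it:
ps aux | grep uvicorn
kill <PID>            # <PID> is the number from the ps output

# Or match the process by name in one step:
pkill -f uvicorn

# The scp command for transferring files (placeholder key/paths/host):
scp -i my-key.pem main.py ubuntu@<ec2-public-ip>:~/app/

# Stopping the instance itself also works, but it takes the whole machine down:
aws ec2 stop-instances --instance-ids <instance-id>
```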
Cool, thanks
Thanks a lot
Thanks for the video. I deployed my LLM on a Runpod server and now have an endpoint where I pass a question and it sends the response back. But I want to use this endpoint in LangChain. How can I do that?
Basically, I want to do document question answering using my own endpoint that I deployed. Can you tell me how I can use it in LangChain? I already know the rest, like how to use LangChain, document QA, etc., but I specifically want to use my own endpoint/API in LangChain. Please guide me on this if you've ever done it.
You can do that; you can use any Python library.
@FutureSmartAI you mean the "requests" library?
That would work, but I want to use the endpoint in LangChain itself. Just like we import any Hugging Face model using the Hugging Face API and then pass that LLM variable to LangChain, where we split the user's document into chunks and pass them to the chain for prediction. But I don't know how to pass my own endpoint to the chain.
How many times did you say "you know" 😆😆
😂