Great tutorial that focuses on practical deployment. I've come across multiple tutorials that involve Google Colab, which are great for testing things out, but API access to the LLM is what we need for building practical applications.
I have a few questions:
1. What's the estimated cost per day, assuming no changes are made to the underlying infrastructure?
2. Do the steps remain the same for deploying other LLMs?
3. How can we maintain context (from previous responses) in a conversation?
4. How can we provide custom information through RAG?
Thanks again for making this tutorial.
Great questions. Many of these are highly use-case dependent.
I cover many of these topics in my blog: skanda-vivek.medium.com/
Thanks for the feedback!
@@scienceineverydaylife3596 But all the blog posts are members-only - could we get access some day?
Thank you. Better than the Amazon videos prepared by 100 people. The only question is how cheap SageMaker is 😅
Great video!
I noticed that you’ve set the Lambda function timeout to three minutes. However, it’s triggered by the AWS API Gateway, which has a maximum timeout of 30 seconds. Therefore, if the Lambda function execution exceeds 30 seconds, its response will never be sent unless you’ve configured the response to be asynchronous. Just an observation I thought I’d share.
great observation!!
Good point!
Can we make an endpoint with an AWS Lambda function instead of API Gateway?
@@abdulwaqar844 yes indeed. It’s called a Lambda ‘function url’. It just takes a few clicks to set up the function url endpoint.
@@justcreate1387 Yes. That way we can avoid API Gateway's execution-time limit, which is very low when working with ML models.
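To sketch the asynchronous route mentioned above: with `InvocationType='Event'`, the Lambda invoke call returns immediately instead of blocking, which sidesteps API Gateway's ~30-second limit. This is only a sketch - the function name and payload are placeholders, not anything from the video:

```python
import json

def build_async_invocation(function_name, payload):
    """Kwargs for a fire-and-forget Lambda invoke: InvocationType='Event'
    makes AWS queue the event and return HTTP 202 immediately, so the
    caller never waits out a long model inference."""
    return {
        "FunctionName": function_name,
        "InvocationType": "Event",  # "RequestResponse" would block instead
        "Payload": json.dumps(payload),
    }

# Usage (needs AWS credentials; boto3 ships in the Lambda runtime):
# import boto3
# boto3.client("lambda").invoke(**build_async_invocation("my-llm-fn", {"prompt": "Hi"}))
```

The trade-off is that an async caller gets no response body back, so the result has to land somewhere (S3, a database, a callback) for the user to fetch later.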
Why does everyone skip the most important part of using AWS for automation, which is how to create the Lambda code? Is there a resource on how to write the Lambda handler code?
informative video!
I have requested the ml.m5.2xlarge instance - I'll be granted permission after 2 working days!!
Great 👍
is there an easier way to do this with Ollama?
I'm deploying a LLaVA model (image + text) - how can I invoke that?
Can you make a video tutorial using Next.js with it?
I received an error running the code above: UnexpectedStatusException - Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check.
Try increasing the timeout (max of 3600s)
Great video. I'm pretty unfamiliar with the cloud - I just want to make sure I can get an LLM to serve multiple endpoints for multiple users. If so, how do I find out how many users can be served?
Would really help if you can provide an approximate cost of trying out this tutorial on AWS. Is there any info someone can share?
+1
Thanks for the video. Can we run a 33B model with instance_type="ml.g4dn.2xlarge", which you also used in the video? If not, which instance type should I use?
Hi - unfortunately that is too small, as it is a 32 GB instance. I would suggest an instance with at least 33*2 = 66 GB - so a 70 GB or larger instance like ml.p3.16xlarge or ml.g4dn.12xlarge.
@@scienceineverydaylife3596 Thanks for the quick reply. I'll definitely use them then ...
Yes, what about costs? Any easier platforms you have tried?
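The sizing rule of thumb from the reply above can be written out. The factor of 2 assumes fp16/bf16 weights (2 bytes per parameter); real deployments need extra headroom on top for activations and the KV cache:

```python
def min_weight_memory_gb(params_billion, bytes_per_param=2):
    """Rough floor on the memory needed just to hold the weights:
    parameters * bytes per parameter (2 for fp16/bf16 inference).
    This ignores activation and KV-cache overhead, so treat it as
    a lower bound, not a recommendation."""
    return params_billion * bytes_per_param

# A 33B model in fp16 needs at least 33 * 2 = 66 GB of accelerator
# memory, hence the suggestion of a 70 GB or larger instance.
```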
@buksa7257
I'm having a bit of trouble understanding the following: I believe you're saying the Lambda function calls the SageMaker endpoint (where we host the LLM). But then who calls the Lambda function? When is that function triggered? Does it need another endpoint?
The Lambda function is called by the API Gateway (whenever a user invokes the API).
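A minimal sketch of such a Lambda handler, assuming a TGI-style Hugging Face endpoint - the endpoint name, event shape, and payload format are assumptions, so substitute whatever your deployed endpoint actually expects:

```python
import json

ENDPOINT_NAME = "my-llm-endpoint"  # placeholder: the name .deploy() printed

def build_request(prompt, max_new_tokens=256):
    """Serialize the prompt into the HuggingFace text-generation
    JSON format the inference container expects."""
    return json.dumps({"inputs": prompt,
                       "parameters": {"max_new_tokens": max_new_tokens}})

def lambda_handler(event, context):
    # boto3 is preinstalled in the AWS Lambda Python runtime
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=build_request(event.get("prompt", "")),
    )
    return {"statusCode": 200,
            "body": response["Body"].read().decode("utf-8")}
```

So the chain is: user → API Gateway → Lambda → SageMaker endpoint, and only the API Gateway URL is exposed publicly.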
Amazon SageMaker is pretty complex and the UI is horrible - any other ways to deploy? It tried to charge me about 1000 dollars on the free tier because it spins up about 5 instances - instances that don't show up in the console directly; you have to open the separate SageMaker instance viewer.
Yes - you can deploy quantized models locally using desktop apps (ruclips.net/video/BPomfQYi9js/видео.html&ab_channel=DataScienceInEverydayLife) or look at other third-party solutions like Lambda Labs.
I am setting this up for the first time - can you share the role config?
The response generated by the model is very short - how can I adjust the model to generate more output? Can anyone please help?
Increase the number of tokens generated - though it will obviously be more compute-heavy.
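As a sketch: assuming the endpoint speaks the HuggingFace text-generation format, output length is capped by `max_new_tokens` in the request parameters (the parameter names follow that convention and may differ for other containers):

```python
def generation_payload(prompt, max_new_tokens=512, temperature=0.7):
    """Request body controlling output length: raising max_new_tokens
    lets the model keep generating for longer, at proportionally
    higher latency and compute cost per request."""
    return {"inputs": prompt,
            "parameters": {"max_new_tokens": max_new_tokens,
                           "temperature": temperature}}

# generation_payload("Summarize ...", max_new_tokens=1024) allows roughly
# double the output length of the 512-token default.
```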
Great video, thank you.
I cannot find the source code (deploying.ipynb) in the repository. I would appreciate a link to it here and in the description.
Here is the blog link: skanda-vivek.medium.com/deploying-open-source-llms-as-apis-ec026e2187bc
Many thanks for the great video! Just wondering if you share your code / notebooks anywhere e.g. github etc
You can find blog posts here: skanda-vivek.medium.com/
And github code here: github.com/skandavivek/
@@scienceineverydaylife3596 Which repo? I'm looking for the Lambda logic.
Bro, I have a question - kindly reply please. Could you suggest a channel that covers NLP in detail from scratch, i.e., totally beginner-friendly?
Try a Hugging Face crash course (though that is a bit more advanced - it covers topics like transformers).
Otherwise I'd suggest making yourself comfortable with the fundamentals of ML, deep learning, etc.
Did you upgrade boto3 in the notebook to get this going? Any other packages upgraded?
Here's the blog link; I don't think I needed any other upgrades/downgrades:
skanda-vivek.medium.com/deploying-open-source-llms-as-apis-ec026e2187bc
@@scienceineverydaylife3596 Thank you, I got this working, although the response time is slow, as you mentioned.
@scienceineverydaylife3596 have you tried sending embeddings to the model as context?
Hello, can the text generated by Hugging Face (or another AI that converts images to text) go directly into a Google Sheets cell, the way I can with copy and paste from the ChatGPT API?
Anyway, thank you very much for your videos.
Not sure - you might look up whether there is a way to integrate google sheets with API calls
@scienceineverydaylife3596 Sorry, I'm not sure I understand you - I speak poor English. Are you asking if it is possible to connect ChatGPT to Google Sheets through an API? Yes, that's possible; there are several tutorials. If you want, I'll show you one - it's very simple, even for a beginner like me.
Can I send you an email?
Very informative video. I am a student and want to deploy my LLM as a personal portfolio project. Is it possible to do it for free on AWS, and what are the limitations?
I believe there are some free endpoints for 2 months or so - but not the GPU or accelerated endpoints needed for LLMs (good enough for BERT-like models if you want to get started):
aws.amazon.com/pm/sagemaker/?trk=b6c2fafb-22b1-4a97-a2f7-7e4ab2c7aa28&sc_channel=ps&ef_id=Cj0KCQjw5f2lBhCkARIsAHeTvlghqZsHJxS3V-li795UoUywEr9p7P6bKxbQx4XPL3vV2En4QFaHdtsaAnqTEALw_wcB:G:s&s_kwcid=AL!4422!3!651751060713!p!!g!!aws%20sagemaker%20pricing!19852662230!145019226617
thanks
Only helpful if you already know what you are doing.
haha i am the 1000th subscriber :)
Yay, thank you! Here's to the next 1000🥂 :)
If you could show how to deploy a fine-tuned HF model and monetize it, you'll be rich.
This is pretty expensive to host and run, just fyi
Do you have the pricing for each service?
Here is the pricing for various AWS Sagemaker endpoints - what you choose depends on your model compute needs: aws.amazon.com/sagemaker/pricing/
@@scienceineverydaylife3596 Thank you so much for your quick answer!
I would really like running costs as close to $0 as possible, even if it needs a lot of initial investment. Do you think running the API from our own computer/server would be realistic for production? What would be the requirements?
Thank you for your time
Hey, I want to contact you - how can I?
www.linkedin.com/in/skanda-vivek-01619311b/