This man is awesome. Is it possible to enable autoscaling from the mlflow command when creating the endpoint ?
Lol. According to mlflow.org/docs/latest/python_api/mlflow.sagemaker.html, no autoscaling, but you could enable it later using the SageMaker SDK by updating the endpoint configuration. Check the SageMaker docs for details.
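For anyone wanting to try this, here is a rough sketch of attaching autoscaling to an existing SageMaker endpoint afterwards, using the Application Auto Scaling API (not MLflow). The `register_scalable_target` and `put_scaling_policy` calls are the real boto3 API; the function name, defaults, and the `client` parameter (so it can be exercised without AWS credentials) are mine.

```python
def enable_autoscaling(endpoint_name, variant_name="variant-1",
                       min_capacity=1, max_capacity=4,
                       invocations_per_instance=100.0, client=None):
    if client is None:
        import boto3  # deferred: only needed for real AWS calls
        client = boto3.client("application-autoscaling")
    # SageMaker autoscaling targets a production variant, not the endpoint itself.
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=min_capacity,
        MaxCapacity=max_capacity,
    )
    # Track invocations per instance: scale out when traffic exceeds the target.
    client.put_scaling_policy(
        PolicyName=f"{endpoint_name}-invocations",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    )
    return resource_id
```

The variant name must match what the deployment created; check the endpoint config in the SageMaker console if unsure.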
Hi, thanks for this great video. I successfully ran the model locally, but when trying to deploy it to SageMaker I get this error: "botocore.exceptions.NoCredentialsError: Unable to locate credentials". Can you please help me?
Never mind, I was able to resolve the issue. If you are a beginner like me and ran into a similar problem, here is how I solved it:
1. Installing the AWS CLI on my PC
2. Running aws configure (see docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html for details)
3. Authenticating Docker with ECR (see docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html)
4. Deploying the model as described in the video above
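A quick way to confirm step 2 worked before deploying: ask STS who you are. If this raises `NoCredentialsError`, `aws configure` (or your environment variables) still needs fixing. `get_caller_identity` is the real STS API; the helper name and the injectable `client` parameter are mine.

```python
def caller_identity(client=None):
    if client is None:
        import boto3  # will surface NoCredentialsError if credentials are missing
        client = boto3.client("sts")
    resp = client.get_caller_identity()
    # Account and Arn are enough to confirm which identity boto3 picked up.
    return resp["Account"], resp["Arn"]
```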
I am getting the following error when building and pushing the image:
File "/Users/niels/Desktop/Projects/MLflow_optuna/venv/lib/python3.9/site-packages/botocore/utils.py", line 1058, in validate_region_name
raise InvalidRegionError(region_name=region_name)
botocore.exceptions.InvalidRegionError: Provided region_name '\' doesn't match a supported format.
What did I do wrong?
This shows really well how to bring your own model to SageMaker.
On the other hand, I'm trying to figure out the best way to get a pre-baked model out of SageMaker (e.g. Obj2Vec or BlazingText) and log it in MLflow, or deploy it locally with MLflow (that could be helpful for deploying in a Kubernetes cluster, for example).
I'm a bit confused between:
option 1 = using the Sagemaker python SDK in local mode OR
option 2 = taking the model artifacts from S3 somehow and packaging the model myself OR
option 3 = setting up a Sagemaker endpoint to evaluate the model and then tear it down.
I currently use option 3, but I have the feeling it's not the best way to do this. Do you see what I mean, and how one could get a similar local experience for, say, an Obj2Vec model?
Hi Louis, models trained with built-in algos are MXNet models (except XGBoost, of course). If you want to use MLflow, you would have to import MXNet in the script, load the pretrained model, and save it again (just as if you'd trained it yourself). MLflow should then be able to deploy it. Give it a try and let me know :)
Thanks for the video!
Great video Julien
Thanks!
Enjoy!
Hey! Is there any way to modify the response from the /invocations endpoint? I'd like the model to return not only the predicted label but also the prediction probability, like sklearn's predict_proba.
Hi, can you please tell me whether you found a solution for this?
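One approach (a sketch, not from the video): MLflow lets you wrap a model in a custom pyfunc whose `predict()` returns whatever structure you want, so the /invocations response can carry both the label and the probabilities. `StubClassifier` below stands in for a fitted sklearn estimator; in real use `ProbaWrapper` would subclass `mlflow.pyfunc.PythonModel` and be saved with `mlflow.pyfunc.save_model`. Both class names are mine.

```python
class StubClassifier:
    """Stands in for a fitted sklearn classifier with predict/predict_proba."""
    def predict(self, rows):
        return [0 if r[0] < 0.5 else 1 for r in rows]
    def predict_proba(self, rows):
        return [[1 - r[0], r[0]] for r in rows]

class ProbaWrapper:
    """In real use: class ProbaWrapper(mlflow.pyfunc.PythonModel)."""
    def __init__(self, model):
        self.model = model
    def predict(self, context, model_input):
        # Returning a dict makes both fields show up in the response body.
        return {
            "label": self.model.predict(model_input),
            "proba": self.model.predict_proba(model_input),
        }

out = ProbaWrapper(StubClassifier()).predict(None, [[0.25], [0.75]])
```

After wrapping and re-saving the model, redeploying should make /invocations return both fields.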
1) We don't have to write a Dockerfile for it, right? "mlflow sagemaker build-and-push-container" should build the Docker image and push it to the associated AWS ECR registry, right?
2) I am working on an Amazon WorkSpaces instance with Amazon Linux 2 installed. When I run the "mlflow sagemaker build-and-push-container" command, it tries to use Ubuntu repositories to install the required dependencies. As my OS is Amazon Linux (a CentOS-like distribution) and uses yum, it fails. Can you please suggest how to fix this?
1) Correct, no need to write it.
2) Not sure, did you check the mlflow doc?
@@juliensimonfr thank you for the response.
I got it to work with the commands below, in this order:
1. mlflow sagemaker build-docker
2. mlflow sagemaker build-and-push-container
3. mlflow sagemaker deploy
@@chandrakumar6575 Great!
This is a really good video, but I am hungry for more :)
Julien, could you make an example with MLflow + Hyperopt + logging metrics across multiple epochs?
Or if there is one already, could you point me to it?
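No such example in this thread, but the core pattern for logging a metric across epochs is `mlflow.log_metric` with the `step` argument, which is the real MLflow API. A minimal sketch: the training loop and loss are stand-ins, and the injectable `log_metric` parameter is mine so the loop can be shown without a live tracking server.

```python
def train_and_log(epochs, log_metric=None):
    if log_metric is None:
        import mlflow  # real API: mlflow.log_metric(key, value, step=...)
        log_metric = mlflow.log_metric
    for epoch in range(epochs):
        loss = 1.0 / (epoch + 1)  # stand-in for a real per-epoch training loss
        # step=epoch lets the MLflow UI plot the metric over epochs.
        log_metric("loss", loss, step=epoch)
```

Combined with Hyperopt, you would call this inside the objective function, typically under a nested `mlflow.start_run` per trial.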
Great video! Is there a way to train a model as a Sagemaker Training Job using mlflow?
Hi Jon, not that I know. mlflow.sagemaker seems to be for deployment only. Take a look at github.com/Kenza-AI/sagify, it could do the trick :)
Hello, I have a question about Docker images. When I make changes to an image and build it under a new name, why aren't the changes picked up? For example, I want to change the version of one of the packages. I use: docker build . -t new-image. But I see that the new version of the package is not installed.
You mean locally or on AWS?
@@juliensimonfr I was able to solve it: I just removed one line and then I could install the packages. Thanks!
Great video!!!
I am actually new to AWS, trying to deploy a locally trained image classification model to SageMaker. Can you point me to any article or video?
plenty of info here: docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html
Is there any list of endpoints like /invocations? Where can I find them?
You can see your endpoints in the SageMaker console, and list them with the list-endpoints SageMaker API.
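The list-endpoints call mentioned above can be sketched with boto3. `list_endpoints` and its `NextToken` pagination are the real SageMaker API; the helper name and the injectable `client` parameter are mine.

```python
def list_endpoint_names(client=None):
    if client is None:
        import boto3  # deferred so the helper can be exercised with a stub
        client = boto3.client("sagemaker")
    names, token = [], None
    while True:
        # Pass NextToken only on follow-up pages, as the API expects.
        resp = client.list_endpoints(**({"NextToken": token} if token else {}))
        names.extend(e["EndpointName"] for e in resp["Endpoints"])
        token = resp.get("NextToken")
        if not token:
            return names
```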
@@juliensimonfr Okay, thanks :) How did you create the /invocations endpoint? How can I modify its functionality?
Do we need to stop the model we just deployed in SageMaker, since they may charge us even when it's not being used?
Sure, always delete your endpoints when you're done.
@@juliensimonfr So if I want to keep using it, I have to keep paying for it.
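Cleaning up usually means more than the endpoint itself: the endpoint config (and the model) keep existing otherwise. A sketch using the real boto3 SageMaker calls (`describe_endpoint`, `delete_endpoint`, `delete_endpoint_config`); the function name and injectable `client` parameter are mine.

```python
def delete_endpoint_resources(endpoint_name, client=None):
    if client is None:
        import boto3
        client = boto3.client("sagemaker")
    # Look up the endpoint's config before deleting, so we can remove it too.
    desc = client.describe_endpoint(EndpointName=endpoint_name)
    config_name = desc["EndpointConfigName"]
    client.delete_endpoint(EndpointName=endpoint_name)
    client.delete_endpoint_config(EndpointConfigName=config_name)
    return config_name
```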
When I try to deploy the model locally using SageMaker (just like it was shown in the video), I get the following error. I checked Docker Hub and there is no image called "mlflow-pyfunc". Did you build that image before running the code, @Julien Simon?
Unable to find image 'mlflow-pyfunc:latest' locally
docker: Error response from daemon: pull access denied for mlflow-pyfunc, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Thank you
Did you build the sagemaker container? mlflow sagemaker build-and-push-container
@@juliensimonfr Ooh, I didn't do that as I wanted to deploy locally first, but I guess I should, because that builds the image which in turn is used locally.
Thank you 🙂
great help
cool, thanks for watching.
👏👏
Thanks!
I have been hitting this error while running the "mlflow sagemaker run-local -m $MODEL_PATH -p $LOCAL_PORT" command.
Unable to find image 'mlflow-pyfunc:latest' locally
docker: Error response from daemon: pull access denied for mlflow-pyfunc, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Did you build the container?
@@juliensimonfr Same error here, actually. Which image should I build? `mlflow sagemaker run-local -m $MODEL_PATH -p $LOCAL_PORT` throws the error Anil mentioned.
@@wardsworld 'mlflow sagemaker build-and-push-container'. See www.mlflow.org/docs/latest/python_api/mlflow.sagemaker.html
@@juliensimonfr
1) We don't have to write a Dockerfile for it, right? "mlflow sagemaker build-and-push-container" should build the Docker image and push it to the associated AWS ECR registry, right?
2) I am working on an Amazon WorkSpaces instance with Amazon Linux 2 installed. When I run the "mlflow sagemaker build-and-push-container" command, it tries to use Ubuntu repositories to install the required dependencies. As my OS is Amazon Linux (a CentOS-like distribution) and uses yum, it fails. Can you please suggest how to fix this?
@@wardsworld Were you able to fix it? If yes, please let me know what steps you followed.