Thank you, thank you, thank you very much, dear Ashutosh. I have literally wasted a year just trying to find a good online course to learn MLOps. I saw your playlist as a suggestion many times over the past months but ignored it because of the low view count. In the end, when I was frustrated, I don't know how, I opened your playlist, and from that day I'm glad!!!! Thank you, brother.
Glad to hear that you liked the content.
Very, very, very helpful content; it gives a bigger picture of how your models are utilized in real-world scenarios! Keep up the good work, and thank you!
Excellent explanation!... Hats off to your dedication.
Thank you buddy
Awesome Tutorial...
Thank you Krish
This was a fantastic tutorial! Thanks for taking the time to make such high-quality content.
Thank you.
Excellent explanation, bro. I got some good knowledge after watching your videos. Thanks a lot for your efforts.
Thank you, friend 🙏
Excellent, Ashutosh... the very detailed presentation makes it easy to understand MLflow... thanks for the detailed explanation, Ashutosh... 🙌🙌🙌🙌🙌🙏🙏🙏🙏
Glad to hear 🙏
Excellent material for beginners.
Thanks
Very well explained.. please upload more to the series.. :)
Very well planned content, Ashutosh; really appreciate your efforts 👍
Great tutorial. Thank you for sharing.
You are welcome
Great video and playlist. Please keep uploading!
Thanks.
Ashutosh, you created well-organized content, really helpful for a complete understanding of MLflow. Ultimate explanation. Thanks :). Keep it up.
Thank you @chaloghumne8632
This is such an amazing channel ❣️🙌🙌
Thanks Man.
Super tutorials on MLflow... keep it up, buddy, on these topics.
Thank you Sandeep.
After going through so many tutorials, I found yours to be clear and on point. Amazing tutorial. Can you make a video on Docker and MLflow?
Actually, Docker and MLflow serve two different purposes. Could you please let me know what exactly you would be interested in seeing in the video?
Fantastic tutorial. Just one more request: can you please make a playlist on PySpark for Data Science? Much needed, sir.
Sure.
Awesome
Very nice video, and I appreciate your effort. One thing happened to me while trying to serve an LSTM model using TensorFlow: there is always an error because of the data shape and data type.
Thanks, it's helpful. At 40:11 you started serving the model in production. But what if you want to change the version? Should you stop the serving, change anything, or just update the model version using this Python script? Thank you.
When we are serving, we serve either a particular version of the model or the model at a particular stage, i.e., Staging or Production. While serving, we define the model URI as either modelname/version or modelname/stage, so in the case of a version change you have to restart the serving. However, if you opt for CI/CD, this can be achieved automatically with little to negligible downtime. But I would suggest trying these things out by changing versions and stages and seeing whether you need to restart the serving or it works as is. Then it will be clearer. Thank you.
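For reference, a minimal sketch of a stage transition using the MLflow client (the model name and version here are hypothetical):

from mlflow.tracking import MlflowClient

client = MlflowClient()
# Promote version 2 to Production. A server that was started against
# models:/iris-classifier/Production resolved the stage to a concrete
# version at startup, so it keeps serving the old version until restarted.
client.transition_model_version_stage(
    name="iris-classifier",
    version=2,
    stage="Production",
)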
@@AshutoshTripathi_AI Thank you for explaining it so well! Wishing you the best.
Part 1: Experiment tracking using MLflow: ruclips.net/video/r0do1KVEGqM/видео.html
Good
🙏🙏
How come you are getting a string as the prediction, and the model does not throw any error for ordinal encoding?
Do you mean string classes as the model's prediction output? That is normal behavior, isn't it? Maybe I am not understanding your query; could you please explain what your doubt is?
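To illustrate, a minimal sketch (assuming a scikit-learn classifier, as in the iris demo): a model trained on string labels returns those strings from predict() without any error.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
labels = iris.target_names[iris.target]  # "setosa", "versicolor", "virginica"
clf = LogisticRegression(max_iter=1000).fit(iris.data, labels)
print(clf.predict(iris.data[:2]))  # array of strings, no error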
Could you make a video on Docker with respect to machine learning?
Sure. You will get one video on this soon. Just hit the bell 🔔 icon to get notified.
Hi Harish, this video is now uploaded. You can watch it: ML model deployment using a Docker container:
ruclips.net/video/Pn73iKmD3Cw/видео.html
At 30:12, you get the Run ID from an already existing source. I'm doing the same but getting an error:
RestException: INVALID_PARAMETER_VALUE: Invalid model version source: '67fd8db1a7be49fd9badace4b3a0a6e8\artifacts\model'. To use a local path as a model version source, the run_id request parameter has to be specified and the local path has to be contained within the artifact directory of the run specified by the run_id.
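A minimal sketch of what usually resolves this (the run ID is the one from the error; the registry name is illustrative): register the model through a runs:/ URI, which carries the run_id, instead of a raw local artifact path.

import mlflow

run_id = "67fd8db1a7be49fd9badace4b3a0a6e8"  # the run that logged the model
mlflow.register_model(
    model_uri=f"runs:/{run_id}/model",  # assumes the model was logged under artifact path "model"
    name="iris-classifier",  # hypothetical registered model name
)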
Hi Ashutosh,
As we are using SQLite as the database, can you please explain it in detail? How can we do that?
I think I have already explained this. Please let me know what your question is, specifically.
@@AshutoshTripathi_AI How do we set up SQLite for this?
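A minimal sketch of one common setup, assuming a local SQLite file as the backend store:

import mlflow

# MLflow creates mlflow.db on first use; the model registry needs a
# database-backed store like this rather than plain file storage.
mlflow.set_tracking_uri("sqlite:///mlflow.db")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)

# To browse the same store in the UI:
#   mlflow server --backend-store-uri sqlite:///mlflow.db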
Hi Ashutosh, can you please share the model serving notebook? Your videos are very helpful. Please do upload the code to GitHub and share the link soon.
Hi Steven, glad you liked it. Please find below the link to the notebook:
github.com/TripathiAshutosh/mlflow/blob/main/MLFlow%20Model%20Serving%20Live%20Demo.ipynb
Hi Ashutosh. Great videos. By the way, what's the main purpose of serving a model?
Model serving exposes the model behind an API so its prediction output can be consumed by third-party applications.
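For example, a minimal sketch of a client consuming the serving endpoint (the port and feature columns are hypothetical, and the exact JSON payload format depends on your MLflow version):

import requests

payload = {
    "dataframe_split": {  # MLflow 2.x format; older versions accept pandas-split JSON directly
        "columns": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
        "data": [[5.1, 3.5, 1.4, 0.2]],
    }
}
response = requests.post("http://localhost:1234/invocations", json=payload)
print(response.json())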
@@AshutoshTripathi_AI So can I combine MLflow with Streamlit? Because I have worked on web app development using Streamlit.
May I know how to do model serving for multiple models, please? Thanks in advance.
You can register each model individually in MLflow and then create a serving URL for each of the models. Later you can consume those URLs in any third-party application.
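For example, a minimal sketch, assuming each registered model is served by its own process on its own port (model names, ports, and features are hypothetical):

# Started separately, e.g.:
#   mlflow models serve --model-uri models:/churn-model/Production -p 1234 --no-conda
#   mlflow models serve --model-uri models:/fraud-model/Production -p 1235 --no-conda
import requests

serving_urls = {
    "churn": "http://localhost:1234/invocations",
    "fraud": "http://localhost:1235/invocations",
}
# Each consuming application calls the URL of the model it needs.
response = requests.post(serving_urls["churn"], json={"dataframe_records": [{"tenure": 12}]})
print(response.json())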
Okay thanks @AshutoshTripathi_AI
I am hosting MLflow on a VM and using set_tracking_uri to log parameters, metrics, and artifacts. My model is not supported by MLflow out of the box, so to log it I am using the pyfunc module of MLflow. I have written a custom class like this:
import mlflow

class Model_Wrapper(mlflow.pyfunc.PythonModel):
    def __init__(self):
        self.model = None

    def load_context(self, context):
        # Load the original model that was logged as an artifact
        self.model = mlflow.pyfunc.load_model(context.artifacts["Original_Model"])

    def predict(self, context, model_input):
        # SDV models generate rows via sample(); take the requested row count
        ss = self.model.sample(model_input.get("records")[0])
        return ss.to_json()
passing {"inputs":{"records":[20]}} for invocations url in postman.. but, getting error as.......
Encountered an unexpected error while evaluating the model. Verify that the serialized input Dataframe is compatible with the model for inference. I am using SDV ctgan model
can you help anything on this...
As per the error message, the input sent to the model is not compatible with the input the model expects. First, try running the model predictions in a notebook and see if that works. Also, the MLflow prediction API normally converts the JSON payload into a pandas DataFrame (or ndarray), so focus on that area and it should fix the error.
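A minimal sketch of that local check (the run ID placeholder is yours to fill in; it assumes the wrapper was logged under artifact path "model"): the /invocations endpoint deserializes the JSON body into a pandas DataFrame, so test predict() with the same shape first.

import pandas as pd
import mlflow.pyfunc

model = mlflow.pyfunc.load_model("runs:/<run_id>/model")
test_input = pd.DataFrame({"records": [20]})  # mirrors {"inputs": {"records": [20]}}
print(model.predict(test_input))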
Hello Ashutosh, thanks for this wonderful demo. I am not able to run this command: mlflow models serve --model-uri models:/iris-classifier/Production -p 1234 --no-conda
What would be the alternative to this?
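One likely cause, assuming a newer MLflow (2.x), where the --no-conda flag was removed in favor of --env-manager; the equivalent command would be:

mlflow models serve --model-uri models:/iris-classifier/Production -p 1234 --env-manager local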
Can you show the same thing, the model serving part, for a CNN image classification model?
import os

import cv2
import numpy as np
import requests

IMAGE_SIZE = 120
new_prediction, img_data = [], []
NEW_TEST_DIRECTORY = 'new_prediction_data'  # it has cat and dog images mixed, for predictions

for img in os.listdir(NEW_TEST_DIRECTORY):
    img_path = os.path.join(NEW_TEST_DIRECTORY, img)
    img_data.append(img)
    img_arr = cv2.imread(img_path)
    img_arr = cv2.resize(img_arr, (IMAGE_SIZE, IMAGE_SIZE))
    new_prediction.append(img_arr)

new_prediction = np.array(new_prediction)
new_prediction = new_prediction / 255  # same scaling used during training

# Prepare inference request; depending on the MLflow version the key may
# need to be "instances" or "inputs" instead of "data".
inference_request = {
    "data": new_prediction.tolist()
}

# Two fixes: the endpoint needs an http:// scheme, and the payload must be
# serialized as JSON (data=<dict> is form-encoded and rejected by the server).
endpoint = "http://localhost:1234/invocations"
response = requests.post(endpoint, json=inference_request)
predictions = np.argmax(np.array(response.json()), axis=1)
print(predictions)
For me it's showing an error; can you fix this? It would be a great help.