Model Serving using MLFlow Model Registry | MLFlow 2.0.1 | Live Demo | Part 2 | Ashutosh Tripathi

  • Published: 16 Nov 2024

Comments • 61

  • @sawanrawat5489
    @sawanrawat5489 9 months ago +1

    thank you, thank you, thank you very much dear Ashutosh. I have literally wasted 1 year just trying to find a good online course to learn MLOps. I saw your playlist as a suggestion many times in the past months but ignored it because of the low view count. In the end, when I was frustrated, I don't know how I opened your playlist, and from that day I'm glad!!!! thank you brother

  • @ManojCB-x4i
    @ManojCB-x4i 2 months ago +1

    Very very very helpful content, gives a bigger picture of how your models are utilized in real-world scenarios! Keep up the good work! And thank you!

  • @AdityaSingh-xc9tr
    @AdityaSingh-xc9tr 2 months ago +2

    Excellent explanation!... hats off to your dedication

  • @krishj8011
    @krishj8011 3 months ago +2

    Awesome Tutorial...

  • @hEmZoRz
    @hEmZoRz 1 year ago +4

    This was a fantastic tutorial! Thanks for taking the time to make such high-quality content.

  • @saikrishna-g2n
    @saikrishna-g2n 9 months ago +1

    Very excellent explanation, bro. I got some good knowledge after watching your videos. Thanks a lot for your efforts

  • @BIZSURESH
    @BIZSURESH 1 year ago +1

    Excellent Ashutosh... the very detailed presentation makes it easy to understand MLflow... thanks for the detailed explanation Ashutosh... 🙌🙌🙌🙌🙌🙏🙏🙏🙏

  • @kumji1922
    @kumji1922 1 year ago +1

    excellent material for beginners

  • @deepaksingh9318
    @deepaksingh9318 1 year ago

    Very well explained.. please upload more to the series.. :)

  • @ishanmodi3583
    @ishanmodi3583 1 year ago

    very well planned content ashutosh, really appreciate your efforts 👍

  • @yannawutkimnaruk7189
    @yannawutkimnaruk7189 1 year ago +1

    Great tutorial. Thank you for sharing.

  • @aditya_01
    @aditya_01 1 year ago +1

    Great video and playlist, please keep uploading

  • @AdandKidda
    @AdandKidda 1 year ago +1

    Ashutosh, you created well-organized content, really helpful for a complete understanding of MLflow. Ultimate explanation. Thanks :). Keep it up.

  • @vishalwaghmare3130
    @vishalwaghmare3130 1 year ago +1

    This is such an amazing channel ❣️🙌🙌

  • @mastersandy9
    @mastersandy9 1 year ago

    Super tutorials on MLflow... keep it up buddy on these topics

  • @chiragchauhan8429
    @chiragchauhan8429 1 year ago +1

    After going through so many tutorials, I found your tutorial to be clear and on point. Amazing tutorial. Can you make a video on Docker and MLflow?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago

      Actually, Docker and MLflow serve two different purposes. Could you please let me know what exactly you would be interested to see in the video?

  • @atulanand4824
    @atulanand4824 1 year ago +1

    Fantastic tutorial. Just one more request: can you please make a playlist on PySpark for Data Science? Much needed, sir

  • @niteshsharma4128
    @niteshsharma4128 1 year ago

    Awesome

  • @overmarc86
    @overmarc86 1 year ago

    Very nice video, and I appreciate your effort. One thing happened with me while trying to serve an LSTM model using TensorFlow: there is always an error because of the data shape and data type.

  • @basi6621
    @basi6621 1 year ago +1

    Thanks.. it's helpful.. At 40:11 you started serving the model in Production. But what if you want to change the version? Should you stop the serving, or change anything, or just update the model version using this Python script? Thank you

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago +1

      When we are serving, we are serving either a particular version of the model or the model at a particular stage, that is, Staging or Production.
      While serving, we define the model URI as either modelname/version or modelname/stage, so in the case of a version change you have to restart the serving.
      However, if you opt for CI/CD, this can be achieved automatically with little to negligible downtime.
      But I would suggest trying these things out by changing versions and stages and seeing whether you need to restart the serving or it works as it is. Then it will be clearer.
      Thank you
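
      For reference, a minimal sketch of the two forms of serving command (the model name iris-classifier is taken from the demo; the port and the --env-manager flag are assumptions based on the MLflow 2.x CLI):

      # Serve one pinned registered version; a new version requires a restart
      mlflow models serve -m "models:/iris-classifier/1" -p 1234 --env-manager local

      # Serve by stage; the version is resolved when the server starts
      mlflow models serve -m "models:/iris-classifier/Production" -p 1234 --env-manager local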

    • @basi6621
      @basi6621 1 year ago +1

      @@AshutoshTripathi_AI Thank you for explaining it well! Hoping the best for you

  • @AshutoshTripathi_AI
    @AshutoshTripathi_AI  1 year ago +4

    Part 1: experiment tracking using MLFlow: ruclips.net/video/r0do1KVEGqM/видео.html

  • @geetatripathi9335
    @geetatripathi9335 1 year ago +1

    Good

  • @mehul4mak
    @mehul4mak 1 year ago +1

    How come you are getting a string as the prediction, and the model does not throw any error for the ordinal encoding?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago

      Do you mean string classes as the model prediction output? This is normal behavior, isn't it? Maybe I am not understanding your query; could you please explain what your doubt is?

  • @harishs-dm8mm
    @harishs-dm8mm 1 year ago +1

    Could you make a video on Docker w.r.t. machine learning?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago

      Sure. You will get one video on this soon. Just hit the bell 🔔 icon to get notified.

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago

      Hi Harish, this video is uploaded. You can watch it. ML model deployment using a Docker container:
      ruclips.net/video/Pn73iKmD3Cw/видео.html

  • @sokhibtukhtaev9693
    @sokhibtukhtaev9693 7 months ago

    At 30:12, you get the Run ID from an already existing source. I'm doing the same but getting an error:
    RestException: INVALID_PARAMETER_VALUE: Invalid model version source: '67fd8db1a7be49fd9badace4b3a0a6e8\artifacts\model'. To use a local path as a model version source, the run_id request parameter has to be specified and the local path has to be contained within the artifact directory of the run specified by the run_id.
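
    One likely cause, sketched here as an assumption rather than a confirmed fix: registering the model from a bare local path instead of a runs:/ URI. Registering through the run that logged the model usually avoids this error ("model" must match the artifact_path used at logging time):

    import mlflow

    run_id = "67fd8db1a7be49fd9badace4b3a0a6e8"  # the run that logged the model
    # A runs:/ URI lets the registry resolve the source inside the run's artifact directory
    mlflow.register_model(f"runs:/{run_id}/model", "my-registered-model")  # name is hypothetical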

  • @Sandesh.Deshmukh
    @Sandesh.Deshmukh 11 months ago

    Hi Ashutosh,
    As we are using SQLite as the database, can you please explain it in detail?
    How can we do that?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  11 months ago +1

      I think I have already explained this. Please let me know what your question is specifically.

    • @Sandesh.Deshmukh
      @Sandesh.Deshmukh 11 months ago

      @@AshutoshTripathi_AI How to set up SQLite for this?
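
      For reference, a minimal sketch of pointing the MLflow tracking server at a SQLite backend store (file names and host/port are arbitrary; the flags are standard MLflow CLI options):

      # SQLite needs no separate database server; MLflow creates mlflow.db on first use
      mlflow server \
          --backend-store-uri sqlite:///mlflow.db \
          --default-artifact-root ./mlruns \
          --host 127.0.0.1 --port 5000

      A database-backed store such as SQLite is what enables the Model Registry, which is why the demo does not use plain file storage.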

  • @stevenpais5174
    @stevenpais5174 1 year ago +1

    Hi Ashutosh - Can you please share the model serving notebook? Your videos are very helpful. Please do upload the code to GitHub and share the link soon.

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago

      Hi Steven, glad you liked it. Please find below the link to the notebook:
      github.com/TripathiAshutosh/mlflow/blob/main/MLFlow%20Model%20Serving%20Live%20Demo.ipynb

  • @settaiwthsai
    @settaiwthsai 1 year ago

    Hi Ashutosh. Great videos. By the way, what's the main purpose of serving a model?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  1 year ago +1

      Model serving exposes a prediction API whose output can be consumed within third-party applications.

    • @settaiwthsai
      @settaiwthsai 1 year ago

      @@AshutoshTripathi_AI So can I combine MLflow with Streamlit? Because I have worked on web app development using Streamlit.
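
      That combination works in principle: the Streamlit app simply POSTs to the MLflow scoring endpoint. A minimal sketch, where the endpoint URL, port, payload key, and feature values are assumptions rather than details from the video:

      import requests
      import streamlit as st

      st.title("Iris classifier")
      sepal_length = st.number_input("Sepal length", value=5.1)
      # ...collect the remaining features the model expects...

      if st.button("Predict"):
          # MLflow 2.x scoring servers accept an "inputs" key with array data
          payload = {"inputs": [[sepal_length, 3.5, 1.4, 0.2]]}
          resp = requests.post("http://localhost:1234/invocations", json=payload)
          st.write(resp.json())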

  • @saikiran-mi3jc
    @saikiran-mi3jc 10 months ago

    May I know how to do model serving for multiple models, please? Thanks in advance

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  10 months ago

      You can register each model individually in MLflow and then create a serving URL for each of the models. Later you can consume each URL in any of the third-party applications.
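
      A minimal sketch of that pattern, with hypothetical model names: each registered model runs as its own serving process on its own port.

      # hypothetical model names and ports; one process per registered model
      mlflow models serve -m "models:/fraud-detector/Production" -p 1234 --env-manager local &
      mlflow models serve -m "models:/churn-predictor/Production" -p 1235 --env-manager local &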

    • @saikiran-mi3jc
      @saikiran-mi3jc 10 months ago +1

      Okay, thanks @AshutoshTripathi_AI

  • @saikiran-mi3jc
    @saikiran-mi3jc 3 months ago

    I am hosting MLflow on a VM and using set_tracking_uri to log parameters, metrics, and artifacts. My model is not supported by MLflow natively, so to log it I am using the pyfunc module of MLflow. I have written a custom class like this:
    class Model_Wrapper(mlflow.pyfunc.PythonModel):
        def __init__(self):
            self.model = None
        def load_context(self, context):
            self.model = mlflow.pyfunc.load_model(context.artifacts["Original_Model"])
        def predict(self, context, model_input):
            ss = self.model.sample(model_input.get("records")[0])
            return ss.to_json()
    I am passing {"inputs":{"records":[20]}} to the invocations URL in Postman, but getting this error:
    Encountered an unexpected error while evaluating the model. Verify that the serialized input Dataframe is compatible with the model for inference.
    I am using the SDV CTGAN model. Can you help with anything on this?

    • @AshutoshTripathi_AI
      @AshutoshTripathi_AI  3 months ago

      As per the error message, the input sent to the model is not compatible with the input the model expects. First, maybe run the model predictions in a notebook and see if they work there. Also, the MLflow prediction API normally takes an ndarray input format. So work on this area and it should fix the error.
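
      One way to narrow this down, sketched here as a debugging suggestion with a hypothetical model URI: load the logged pyfunc model locally and call predict with the same payload before involving the serving endpoint.

      import mlflow.pyfunc

      # Load the wrapped model the same way the scoring server would (URI is hypothetical)
      model = mlflow.pyfunc.load_model("models:/ctgan-model/1")

      # Reproduce the Postman request body; if this fails locally too,
      # the problem is the input format, not the serving layer
      print(model.predict({"records": [20]}))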

  • @ankitchaturvedi9453
    @ankitchaturvedi9453 2 months ago

    Hello Ashutosh, thanks for this wonderful demo. I am not able to run this code - mlflow models serve --model-uri models:/iris-classifier/Production -p 1234 --no-conda
    What will be the alternative to this?
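
    A likely fix, assuming a recent MLflow version: the --no-conda flag was removed from the MLflow 2.x CLI in favor of --env-manager, so the equivalent command would be:

    mlflow models serve -m "models:/iris-classifier/Production" -p 1234 --env-manager local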

  • @s.seducation3888
    @s.seducation3888 1 year ago

    Can you show the same thing, the model serving part, for a CNN image classification model?
    import requests
    import numpy as np
    import os, cv2

    IMAGE_SIZE = 120
    new_prediction, img_data = [], []
    NEW_TEST_DIRECTORY = 'new_prediction_data'  # it has cat and dog images mixed for predictions
    for img in os.listdir(NEW_TEST_DIRECTORY):
        img_path = os.path.join(NEW_TEST_DIRECTORY, img)
        img_data.append(img)
        img_arr = cv2.imread(img_path)
        img_arr = cv2.resize(img_arr, (IMAGE_SIZE, IMAGE_SIZE))
        new_prediction.append(img_arr)
    new_prediction = np.array(new_prediction)
    new_prediction = new_prediction / 255
    # Prepare inference request; MLflow 2.x expects an "inputs" key (the old "data" key was removed)
    inference_request = {
        "inputs": new_prediction.tolist()
    }
    endpoint = "http://localhost:1234/invocations"
    # Send the payload as JSON (json=..., not data=..., so requests serializes the dict)
    response = requests.post(endpoint, json=inference_request)
    # MLflow 2.x wraps the scores in a "predictions" field
    predictions = np.argmax(np.array(response.json()["predictions"]), axis=1)
    print(predictions)
    For me it's showing an error; if you can fix this, it would be a great help.