End-to-End: Pipeline Orchestration (KFP) - BigQuery (BQML) Model For Endpoint Update [notebook 03C]

  • Published: Nov 4, 2024

Comments • 9

  • @WilfredLoyYongKang
    @WilfredLoyYongKang 2 months ago +1

    Awesome Mike! Even after 2 years, this is one of the best! Thanks Mike!

  • @normdy
    @normdy 1 year ago +2

    I love how you step through the model deployment while in process and show the changing interfaces and artifacts being generated. Thank you!

  • @chetanmundhe7899
    @chetanmundhe7899 2 years ago +5

    The best Vertex AI series on YouTube 🔥🔥

  • @АртемШлагин
    @АртемШлагин 7 months ago

    Thank you for the series! What tweaking did you do to the model in order to make it better than the previous one -- wouldn't the automl in BQ always produce the same result given the same training data?

  • @jeffz7310
    @jeffz7310 2 years ago

    Hi Mike, I have a conceptual question: what is CI/CD in terms of ML projects?
    I know there are lots of articles about it and I have read many of them. But a key question remains: is an AUTOMATED data science workflow/pipeline (data processing => model training => model deployment) what we refer to as CI/CD for an ML project?
    My understanding of CI/CD is that we build a UI that serves an ML model (the app), and when we push that app to a cloud platform, CI/CD automates the pushing-the-app-to-the-platform part. If we don't need to create an app but just deploy the model as an endpoint, then what do we mean by CI/CD here? Is it just the automated workflow (a pipeline is a workflow)?
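
For context, here is a minimal sketch of the kind of automated workflow described in this question, written as a KFP v2 pipeline. All names, component bodies, and parameters are illustrative placeholders, not the pipeline from the notebook.

```python
# Minimal KFP v2 sketch of a data processing => training => deployment workflow.
# Component bodies and all names are placeholders for illustration only.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.10")
def prepare_data(source_table: str) -> str:
    # Placeholder: clean/prepare data and return a reference to the prepared table.
    return f"{source_table}_prepared"


@dsl.component(base_image="python:3.10")
def train_model(prepared_table: str) -> str:
    # Placeholder: train a model and return its resource name.
    return f"model_trained_on_{prepared_table}"


@dsl.component(base_image="python:3.10")
def deploy_model(model: str, endpoint_name: str):
    # Placeholder: deploy the trained model to a serving endpoint.
    print(f"deploying {model} to {endpoint_name}")


@dsl.pipeline(name="example-automated-workflow")
def workflow(source_table: str, endpoint_name: str):
    data_task = prepare_data(source_table=source_table)
    train_task = train_model(prepared_table=data_task.output)
    deploy_model(model=train_task.output, endpoint_name=endpoint_name)


if __name__ == "__main__":
    # A CI trigger could compile and submit this pipeline automatically.
    compiler.Compiler().compile(workflow, "workflow.json")
```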

    • @statmike-channel
      @statmike-channel  2 years ago

      This is a great question! I am working towards the actual CI and CD parts of the workflows. The 05 series on custom models with TensorFlow is getting close on the GitHub repo and will be featured in a new series of videos this fall. I am first setting the foundation of tracking experiments, runs, and the associated metadata, which will then enable the CI and CD parts of the flow. There is also a great series developing on this GitHub repo that can help: github.com/GoogleCloudPlatform/mlops-with-vertex-ai
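
As a rough illustration of the experiment and run tracking mentioned in this reply, the Vertex AI Python SDK can log runs along these lines; the project, region, experiment name, parameters, and metrics below are placeholders.

```python
# Sketch: tracking an experiment run with the Vertex AI SDK (google-cloud-aiplatform).
# Project, region, and all names/values are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="your-project-id",        # placeholder project
    location="us-central1",           # placeholder region
    experiment="example-experiment",  # placeholder experiment name
)

aiplatform.start_run(run="run-001")                    # start a named run
aiplatform.log_params({"model_type": "LOGISTIC_REG"})  # example training parameters
aiplatform.log_metrics({"auRoc": 0.98})                # example evaluation metrics
aiplatform.end_run()
```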

  • @reineryeager5270
    @reineryeager5270 2 years ago

    Hi! I have a scenario: what if I have .py scripts, do I have to convert them into a notebook for this to work? Would it be feasible to just import my .py scripts inside those component functions so I don't have to change anything in them? Hope it makes sense. I have always used Flask and this is my first time encountering something like this. It seems very interesting that you can recycle components.

    • @statmike-channel
      @statmike-channel  2 years ago

      Hello Reiner, thank you for the note. I have not made the videos for the following content yet, but it is available in the GitHub repository for you to use right now.
      An example of a single .py file being run as a Vertex AI Training Job: github.com/statmike/vertex-ai-mlops/blob/main/05%20-%20TensorFlow/05a%20-%20Vertex%20AI%20Custom%20Model%20-%20TensorFlow%20-%20Custom%20Job%20With%20Python%20File.ipynb
      A set of tips for different ways to get training code into Vertex AI Training Jobs: github.com/statmike/vertex-ai-mlops/blob/main/Tips/Python%20Training.ipynb
      And to expand these training jobs into KFP pipeline components, you can either use the Vertex AI Python SDK from within the component or use the pre-built components that make this even easier: cloud.google.com/vertex-ai/docs/pipelines/customjob-component
      In KFP v2 there is even a containerized Python component spec that makes this easier still, directly in the pipeline!
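
To make the "use your .py script without changes" idea concrete, here is a hedged sketch of running an existing script as a Vertex AI custom training job with the Python SDK; the project, script path, container image, requirements, and arguments are placeholders. The same job can then be launched from a pipeline component, for example via the pre-built custom job components linked above.

```python
# Sketch: run an existing, unmodified .py script as a Vertex AI custom training job.
# Project, region, script path, container image, and args are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

job = aiplatform.CustomJob.from_local_script(
    display_name="train-from-existing-script",
    script_path="trainer/train.py",  # your existing .py file, used as-is
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",  # a pre-built training image
    requirements=["pandas", "scikit-learn"],  # extra packages the script needs
    args=["--epochs", "5"],                   # passed through to the script
)
job.run()
```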