Learn How to Reliably Monitor Your Data and Model Quality in the Lakehouse

  • Published: 2 Dec 2024
  • Developing and maintaining production data engineering and machine learning pipelines is a challenging process for many data teams. Monitoring the quality of your data and models once they go into production is even harder. Building on untrustworthy data can cause many complications downstream. Without a monitoring service, it is difficult to proactively discover when your ML models degrade over time, or to identify the root causes. And without lineage tracking, debugging errors in your models and data is even more painful. Databricks Lakehouse Monitoring offers a unified service to monitor the quality of all your data and ML assets.
    In this session, you’ll learn how to:
      • Use one unified tool to monitor the quality of any data product: data or AI
      • Quickly diagnose errors in your data products with root cause analysis
      • Set up a monitor with low friction: a single button click or API call starts it and automatically generates out-of-the-box metrics
      • Enable self-serve experiences for data analysts by providing a reliability status for every data asset
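    To give a feel for the "single API call" setup described above, here is a minimal sketch in Python. The table, workspace path, and schema names are hypothetical, and the final commented-out call is an assumption about the Databricks SDK surface, which may differ by version; the sketch only assembles the request parameters.

    ```python
    # Sketch: assembling the parameters for a single create-monitor call.
    # All names below (table, assets dir, output schema) are hypothetical examples.

    def monitor_request(table_name: str, assets_dir: str, output_schema_name: str) -> dict:
        """Build the configuration for a snapshot-type table monitor."""
        return {
            "table_name": table_name,              # Unity Catalog table to monitor
            "assets_dir": assets_dir,              # where generated dashboards live
            "output_schema_name": output_schema_name,  # schema for metric tables
            "snapshot": {},                        # out-of-the-box snapshot profile
        }

    req = monitor_request("main.sales.orders", "/Workspace/monitors", "main.monitoring")
    # The actual setup would then be one SDK invocation, e.g. (assumed API shape):
    # from databricks.sdk import WorkspaceClient
    # WorkspaceClient().quality_monitors.create(**req)
    ```

    The point of the sketch is the shape of the workflow: one declarative configuration, one call, and the service generates the default metrics and dashboard for you.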
    Talk by: Kasey Uhlenhuth and Alkis Polyzotis
    Connect with us: Website: databricks.com
    Twitter: /databricks
    LinkedIn: /databricks
    Instagram: /databricksinc
    Facebook: /databricksinc
