A Technical Deep Dive into Unity Catalog's Practitioner Playbook

  • Published: 24 Jul 2023
  • Take a deep dive into Unity Catalog and explore how it can simplify data, analytics, and AI governance across multiple clouds. In this session, the Databricks team guides you through a hands-on demo showcasing the latest features and best practices for data governance, giving you a practical understanding of how Unity Catalog can streamline your analytics and AI initiatives. Whether you're migrating from Hive Metastore or just looking to expand your knowledge of Unity Catalog, this session is for you.
    Talk by: Zeashan Pappa and Ifigeneia Derekli
    Connect with us: Website: databricks.com
    Twitter: / databricks
    LinkedIn: / databricks
    Instagram: / databricksinc
    Facebook: / databricksin
  • Science

Comments • 5

  • @allthingsdata
    @allthingsdata 8 months ago

    Excellent talk; a good deep dive addressing practical issues.

  • @snehotoshbanerjee1938
    @snehotoshbanerjee1938 7 months ago

    Great content!!

  • @neelred10
    @neelred10 1 year ago +4

    Parameterizing the catalog name in Python seems straightforward. But notebooks and dashboards that use Spark SQL queries usually reference tables in schema.table format, and the same code goes into git and is deployed to dev/qa/prod environments. How can we handle this when we move to UC and have a different catalog for each environment? 59:05

    • @RyanCoffman-mb6ye
      @RyanCoffman-mb6ye 1 year ago

      A text-based widget would work for notebooks and dashboards.
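      The widget suggestion above can be sketched as follows. The `qualify` helper, the SQL template, and the catalog/table names are illustrative assumptions; only the `dbutils.widgets` calls shown in comments are actual Databricks notebook APIs.

      ```python
      # Minimal sketch: parameterize the catalog so one piece of SQL code can
      # target different Unity Catalog catalogs in dev/qa/prod. The `qualify`
      # helper is hypothetical, not part of any Databricks API.

      def qualify(template: str, catalog: str) -> str:
          """Fill a {catalog} placeholder in a SQL template so the same
          committed code runs against a per-environment catalog."""
          return template.format(catalog=catalog)

      # In a Databricks notebook, the catalog would come from a text widget:
      #   dbutils.widgets.text("catalog", "dev_catalog")
      #   catalog = dbutils.widgets.get("catalog")
      catalog = "dev_catalog"  # assumed default for this standalone sketch

      template = "SELECT * FROM {catalog}.sales.orders"
      query = qualify(template, catalog)
      # query == "SELECT * FROM dev_catalog.sales.orders"; in a notebook you
      # would then execute it with spark.sql(query).
      ```

      A dashboard or job can set the widget value per environment at deploy time, so the git-tracked source never hard-codes a catalog.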

  • @neelred10
    @neelred10 4 months ago +1

    If the Snowflake server is not publicly accessible, can it still be federated?