Advancing Spark - Managing Files with Unity Catalog Volumes

  • Published: 15 Oct 2024
  • In the time before Unity Catalog, we mounted our lakes to a workspace and had nice aliased folder paths to refer to incoming data files, sandbox data, experiments and any other type of lake file. Unity Catalog brings a huge amount of governance, security and management functionality, but we felt a huge gap when it came to accessing actual files! Unity Catalog Volumes fills this gap, providing a slick, easy way of bringing your file-based data into the catalog.
    In this video, Simon walks through setting up a Unity Catalog volume, before showing how it can then be viewed, queried and even hooked up to Autoloader for efficient ETL loading; a rough sketch of the workflow follows below the description.
    For the official volumes announcement, see www.databricks...
    If you need help rolling out Unity Catalog and revamping your lakehouse to take full advantage, get in touch with Advancing Analytics
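
    As a rough sketch of the workflow covered in the video (the catalog, schema, volume and path names below are illustrative, not taken from the video), a managed volume can be created with one SQL statement and its /Volumes path then listed, queried, or used as an Autoloader source from a notebook:

      # Minimal sketch; assumes a Databricks notebook where `spark` and `dbutils` exist.
      # All catalog/schema/volume names and paths are illustrative.

      # 1. Create a managed volume inside an existing catalog and schema
      spark.sql("CREATE VOLUME IF NOT EXISTS demo_catalog.raw.landing")

      # 2. Files in the volume are addressed via /Volumes/<catalog>/<schema>/<volume>/...
      volume_path = "/Volumes/demo_catalog/raw/landing"
      display(dbutils.fs.ls(volume_path))

      # 3. The same path can feed Autoloader (cloudFiles) for incremental ingestion
      df = (
          spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", f"{volume_path}/_schemas")
          .load(f"{volume_path}/incoming")
      )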

Comments • 11

  • @vincentdelbaen8815 • 1 year ago +1

    Thank you sir!
    I'll try it out right away and probably include it in our ways of working.
    I feel it can reduce the burden and avoid creating external locations for each data analyst's project.

  • @datawithabe • 1 year ago +1

    Great video! As always, the best place to learn new Databricks features :)

  • @coleb1567 • 1 year ago

    Great video. One unrelated question: how do you guys manage deployments with Databricks? I come from an Airflow + Jenkins background as an engineer. Would you recommend Jenkins for Databricks deployments?

    • @mc1912 • 1 year ago

      I remember Simon mentioning they use Terraform for infrastructure deployment, but maybe he can tell us more 😅

  • @AshleyBetts-h7t • 1 year ago

    Love your work Simon. Do you know if it is possible to have a credential that is not associated with the same cloud provider as the Unity Catalog instance? I have a Databricks environment deployed on Azure, but one of the ingestions is via an S3 bucket. I would love to be able to set this up as an external volume.

    • @nachetdelcopet • 4 months ago

      I think you will need to create an Access Connector in AWS, then go to your Databricks workspace and create the storage credentials using the AWS Access Connector ID. Then you can replicate everything he has explained in the video for AWS.
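
      If a storage credential covering the S3 bucket can be registered in the workspace's metastore, the rest is plain SQL; a rough sketch with illustrative names (the bucket URL, credential and volume names are assumptions, not from the video):

        # Rough sketch; assumes a Unity Catalog storage credential named
        # `my_aws_credential` already exists. All names and URLs are illustrative.
        spark.sql("""
            CREATE EXTERNAL LOCATION IF NOT EXISTS s3_landing
            URL 's3://my-ingest-bucket/landing'
            WITH (STORAGE CREDENTIAL my_aws_credential)
        """)

        spark.sql("""
            CREATE EXTERNAL VOLUME IF NOT EXISTS demo_catalog.raw.s3_landing_vol
            LOCATION 's3://my-ingest-bucket/landing/files'
        """)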

  • @atulbansal8041 • 1 year ago

    How can I get access to a Databricks environment for learning? I know there is a Community Edition available, but somehow I am not able to load my raw files into it.

  • @ErikParmann • 1 year ago

    So with mounts we can have the dev workspace mount the dev containers and the prod workspace mount the prod containers, and they both get mounted to the same path, so the notebook doesn't have to 'know' if it's running in dev or prod. How will that work in this new world? I noticed that the path contains "dev". Does each notebook have to figure out what environment it is in, and then read/write from the right paths and catalogs based on some string manipulation?

    • @neelred10 • 7 months ago

      Exactly my thought. Maybe an environment variable can store a dev/qa/prod value that is used to dynamically generate the path string.
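
      A hedged sketch of that idea (the ENV variable, catalog naming convention and volume layout are assumptions, not from the video):

        # Derive the volume path from an environment setting so the same notebook
        # runs unchanged in dev/qa/prod. Assumes a Databricks notebook (`spark`);
        # ENV could be set per workspace, e.g. via cluster environment variables.
        import os

        env = os.environ.get("ENV", "dev")            # "dev" / "qa" / "prod"
        catalog = f"{env}_lakehouse"                  # hypothetical naming convention

        landing_path = f"/Volumes/{catalog}/raw/landing"
        df = spark.read.format("json").load(landing_path)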

  • @MariusS-h2p • 7 months ago

    Does this also replace DBFS access in general?

  • @petersandovalmoreno5213 • 6 months ago

    Can we write to these volumes?