Understanding Delta Lake - The Heart of the Data Lakehouse

  • Published: 10 Feb 2025
  • Data Lakehouse is taking the world by storm as the new data warehouse platform! In this video, I demonstrate how Delta Lake provides the core functionality of the Data Lakehouse and demystify this powerful technology.
    Join my Patreon Community and Watch this Video without Ads!
    www.patreon.co...
    Databricks Notebook and Data Files at:
    github.com/bca...
    Uploading Files to Databricks Video
    • Master Databricks and ...
    See my Pre Data Lakehouse training series at:
    • Master Databricks and ...

Comments • 17

  • @mainakdey3893
    @mainakdey3893 8 months ago

    At last somebody is clearing the confusion. Good job, Bryan.

  • @stylish37
    @stylish37 1 year ago

    Top stuff Bryan! Thanks a lot for this playlist

  • @amarnadhgunakala2901
    @amarnadhgunakala2901 2 years ago +1

    Thank you Brother, this helps people.

  • @gatorpika
    @gatorpika 2 years ago

    Great explanation! Thanks!

  • @rahulberry5341
    @rahulberry5341 1 year ago

    Thanks for the nice explanation

  • @panzabamboo1901
    @panzabamboo1901 1 year ago +1

    Hi Bryan, would you be able to elaborate more on the file types? I'm currently supporting ETL jobs running on Databricks and still using trial and error to figure out the file types / how to load them.

    • @BryanCafferky
      @BryanCafferky  1 year ago

      Hi Panza, assuming you mean source file types to be read, most file types are supported via Spark, e.g. CSV, JSON, SQL databases, Parquet, Delta, and Avro. Are you looking for a specific type?
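
      A minimal PySpark sketch of the read patterns mentioned in the reply above; all paths, server names, and credentials are hypothetical placeholders, not taken from the video:

          from pyspark.sql import SparkSession

          spark = SparkSession.builder.getOrCreate()

          # Flat files and Delta are read directly from storage.
          df_csv     = spark.read.option("header", "true").csv("/mnt/raw/sales.csv")
          df_json    = spark.read.json("/mnt/raw/events.json")
          df_parquet = spark.read.parquet("/mnt/raw/sales.parquet")
          df_avro    = spark.read.format("avro").load("/mnt/raw/sales.avro")
          df_delta   = spark.read.format("delta").load("/mnt/delta/sales")

          # SQL databases are read over a JDBC connection.
          df_jdbc = (spark.read.format("jdbc")
                     .option("url", "jdbc:sqlserver://myserver:1433;databaseName=sales")
                     .option("dbtable", "dbo.Orders")
                     .option("user", "reader")
                     .option("password", "********")
                     .load())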

  • @parisaayazi8886
    @parisaayazi8886 9 months ago +1

    Thanks Bryan! I'm wondering how it's possible to create a CSV table using the CREATE TABLE command, which allows us to write SQL queries against it, but we can't use saveAsTable with format('csv') to achieve the same result

    • @BryanCafferky
      @BryanCafferky  9 months ago

      Originally, Spark could not create updatable tables. Instead, it could only create a schema for a flat file like a CSV. The schema describes the data in the file so SQL SELECT statements can be used on it. You can't update the table, though, and it is not a managed table, meaning that if you drop the table for the CSV file, the file remains. Updatable tables (supporting CRUD and ACID) were added with Delta tables.

    • @parisaayazi8886
      @parisaayazi8886 9 months ago

      @BryanCafferky thanks a lot.
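
      A minimal PySpark sketch contrasting the two cases discussed in the exchange above; table names and paths are hypothetical placeholders:

          from pyspark.sql import SparkSession

          spark = SparkSession.builder.getOrCreate()

          # CREATE TABLE ... USING CSV only registers a schema over the file.
          # SELECT works, but it is an external (unmanaged) table: UPDATE, DELETE,
          # and MERGE are not supported, and dropping the table leaves the CSV in place.
          spark.sql("""
              CREATE TABLE IF NOT EXISTS sales_csv
              USING CSV
              OPTIONS (header 'true')
              LOCATION '/mnt/raw/sales.csv'
          """)

          # Writing the data out as Delta with saveAsTable creates a managed table
          # that supports CRUD operations and ACID transactions.
          df = spark.read.option("header", "true").csv("/mnt/raw/sales.csv")
          df.write.format("delta").mode("overwrite").saveAsTable("sales_delta")

          # Updates work on the Delta table but would fail on the CSV-backed table.
          spark.sql("UPDATE sales_delta SET amount = 0 WHERE amount IS NULL")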

  • @WojciechBukowski-m5e
    @WojciechBukowski-m5e 1 year ago +1

    Thanks, this is a great video and well explained.

    • @BryanCafferky
      @BryanCafferky  1 year ago

      Thanks. In my experience, it is important to keep the original data you loaded into a DW because of 1) troubleshooting issues, 2) recovery if some part of the data fails to load - you reload from the copy, and 3) auditability - you can show what you loaded. It's especially critical if you cannot go back at a later date and retrieve that data again from the source.

  • @gautamgovinda5140
    @gautamgovinda5140 9 months ago

    Cool👍

  • @sajeershahul8361
    @sajeershahul8361 1 year ago

    How can I not subscribe 👌🏽