Creating a Logical Data Warehouse in Azure Synapse Analytics Serverless SQL Pools Technical Demo

  • Published: 24 Oct 2024

Comments • 14

  • @russellbrown6784
    @russellbrown6784 1 year ago

    Great video

  • @RonaldPostelmans
    @RonaldPostelmans 2 years ago +1

    Great video. I am wondering, since I want to use this to create views which I can load into Power BI and then set a refresh every night in Power BI. Can you tell me if this is going to work performance-wise? If I get all the views into Power BI, the data sits in the Power BI cache, so from there on the performance lies with Power BI itself.

    • @DatahaiBI
      @DatahaiBI 2 years ago +1

      Hi, great question. It's generally considered better for performance to import the data from Serverless into Power BI. Serverless usually can't process and return results in sub-second timeframes, so it's best to let Serverless do the "heavy lifting" in terms of data processing in the data lake and return an aggregated dataset to Power BI. And when I say aggregated, it could be a case of the source data lake data being in the billions of rows, and the aggregation into Power BI then reduces that to millions of rows. I hope that helps
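
      For illustration, a minimal sketch of the pattern described in that reply, assuming parquet files in the lake (the storage path and column names below are made-up examples, not from the video): a Serverless SQL Pools view pre-aggregates the raw data, and Power BI imports the view's output on its scheduled refresh.

      -- Aggregating view in Serverless SQL Pools; Power BI imports its output
      -- on the nightly refresh instead of querying the raw files directly.
      CREATE VIEW dbo.vwSalesDailySummary
      AS
      SELECT
          CAST(s.OrderDate AS date)  AS OrderDate,
          s.ProductCategory,
          SUM(s.SalesAmount)         AS TotalSalesAmount,
          COUNT_BIG(*)               AS OrderCount
      FROM OPENROWSET(
              BULK 'https://mydatalake.dfs.core.windows.net/raw/sales/*.parquet',
              FORMAT = 'PARQUET'
           ) AS s
      GROUP BY CAST(s.OrderDate AS date), s.ProductCategory;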

  • @ragharaj3367
    @ragharaj3367 2 years ago +1

    Hiya, please could you let me know how we could achieve incremental loads and SCDs in a logical DW?

    • @DatahaiBI
      @DatahaiBI 2 years ago

      Hi, you can check out my blog on a possible solution: www.serverlesssql.com/logical-data-warehouse/creating-a-logical-data-warehouse-with-synapse-serverless-sql-part-3-of-3-incremental-fact-loading-and-slowly-changing-dimensions/

    • @ragharaj3367
      @ragharaj3367 2 years ago

      @@DatahaiBI Thanks for your response. I will have a look into the blog.

  • @halidlaguide9323
    @halidlaguide9323 2 years ago

    Hello
    I hope you're doing well.
    I just saw your blog. I am interested in Logical Data Warehouse architectures on Synapse and I would like to know if it is possible to migrate an entire data warehouse architecture from SQL Server to this new architecture.
    Is it possible, for example, with this architecture to easily perform SCD Type 2?

    • @DatahaiBI
      @DatahaiBI 2 years ago

      Hi, it would be best to use Synapse Spark or Azure Databricks to do a full data warehouse/lakehouse implementation, as you have full data engineering control of how the data is transformed and landed. Serverless SQL Pools can then act as the "serving" layer, reading data from the data lake and serving it to downstream applications such as Power BI.
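
      As a rough sketch of that split (the path and object names are assumptions): Spark or Databricks writes curated Delta tables to the lake, and Serverless SQL Pools simply exposes them as views for downstream tools such as Power BI.

      -- Serving layer only: a Serverless SQL Pools view over a curated Delta
      -- table that was written by Synapse Spark / Azure Databricks.
      CREATE VIEW dbo.vwDimCustomer
      AS
      SELECT *
      FROM OPENROWSET(
              BULK 'https://mydatalake.dfs.core.windows.net/curated/dim_customer/',
              FORMAT = 'DELTA'
           ) AS c;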

  • @masoudpesaran9721
    @masoudpesaran9721 2 years ago +1

    When is part 2 coming, please?

    • @DatahaiBI
      @DatahaiBI 2 years ago

      I'll be doing a multi-part video series in the next few weeks.

  • @Rothbardo
    @Rothbardo 3 years ago +1

    Is there a way to automate the CREATE EXTERNAL TABLE AS SELECT creation of a flat file?

    • @DatahaiBI
      @DatahaiBI 3 years ago

      You could dynamically construct the query necessary to create the external table. In this GitHub repo there is a script called createviewsdynamically which could be amended to create external tables: github.com/datahai/serverlesssqlpooltools

    • @Rothbardo
      @Rothbardo 3 years ago

      @@DatahaiBI And would you have to put that in a stored proc and call it from ADF?

    • @DatahaiBI
      @DatahaiBI 3 years ago

      @@Rothbardo Yes, and you could parameterise the stored proc to accept folder locations etc. and pass in parameter values from ADF metadata
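
      Putting those replies together, a hedged sketch of such a parameterised stored procedure (the external data source, file format, and object names are assumptions and must already exist in the Serverless database; ADF would call the procedure and pass folder paths from pipeline metadata):

      -- Dynamically builds and runs a CETAS statement for a given source folder.
      -- ExternalDataSourceDataLake and SynapseParquetFormat are assumed to exist.
      CREATE PROCEDURE dbo.CreateExternalTableFromFolder
          @TableName    sysname,
          @SourceFolder varchar(500),   -- e.g. 'raw/sales/2024/'
          @TargetFolder varchar(500)    -- e.g. 'curated/sales/2024/'
      AS
      BEGIN
          DECLARE @sql nvarchar(max) =
              N'CREATE EXTERNAL TABLE dbo.' + QUOTENAME(@TableName) + N'
                WITH (
                    LOCATION = ''' + @TargetFolder + N''',
                    DATA_SOURCE = ExternalDataSourceDataLake,
                    FILE_FORMAT = SynapseParquetFormat
                )
                AS
                SELECT *
                FROM OPENROWSET(
                         BULK ''' + @SourceFolder + N''',
                         DATA_SOURCE = ''ExternalDataSourceDataLake'',
                         FORMAT = ''PARQUET''
                     ) AS src;';

          EXEC sp_executesql @sql;
      END;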