Dive into Microsoft Fabric's Power BI Direct Lake

  • Published: Jan 8, 2025

Comments • 26

  • @gvasvas
    @gvasvas 10 months ago +3

    Awesome demo! Quick and right on the spot.

  • @SonharaProject
    @SonharaProject 3 months ago

    I was having kind of a hard time understanding the reframing but you explained it really well! Thank you!

  • @christophehervouet3280
    @christophehervouet3280 9 months ago +1

    Super post Patrick, as usual

  • @toma4528
    @toma4528 10 months ago +2

    Great video, Patrick!

  • @archanasrivastava6531
    @archanasrivastava6531 8 months ago +2

    Thanks for this insightful video.
    Do you have any performance/capability matrix comparing Import, DirectQuery, and Direct Lake? If so, please share. Thanks in advance
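
    For readers who want such a matrix: one rough way to build your own is to publish the same model in Import, DirectQuery, and Direct Lake modes and time an identical DAX query against each copy from a Fabric notebook, using the semantic-link (sempy) library. A minimal sketch; the dataset names and the [Total Sales] measure are hypothetical placeholders.

    ```python
    import time
    import sempy.fabric as fabric  # Fabric's semantic-link library

    # Same query, three copies of the model in different storage modes.
    DAX = """EVALUATE SUMMARIZECOLUMNS('Date'[Year], "Total", [Total Sales])"""

    for dataset in ["Sales Import", "Sales DirectQuery", "Sales Direct Lake"]:
        start = time.time()
        fabric.evaluate_dax(dataset=dataset, dax_string=DAX)
        print(f"{dataset}: {time.time() - start:.2f}s")
    ```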

  • @Jojms123
    @Jojms123 3 months ago

    Thank you for this input! But what about the Measures "table" that we were always creating to have all measures together?
    Loved that we can query the model even without going there!!! That was awesome :D

  • @user-iv5tq4qk7m
    @user-iv5tq4qk7m 10 months ago +3

    Q: I love the ease of creating new semantic models, but I keep running into the problem that I have to give somebody access to the whole lakehouse in order to give them access to a segmented part of that data, when I only want them to see it via a semantic model. Is there any way I can create a gold lakehouse in one workspace, then create multiple semantic models in other workspaces and only give users access to those?

    • @npergand
      @npergand 10 months ago

      You don’t need to give users access to the lakehouse; that’s just the default behavior. What happens is that when you create a new semantic model, it uses a gateway connection to the lakehouse that relies on SSO. You can see this on the semantic model settings screen. You can change it by creating a new connection to the lakehouse that uses a specific credential.
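
      The same fix can be scripted: the Power BI REST API exposes a Default.BindToGateway call that rebinds a semantic model to a specific connection in place of the default SSO one. A minimal sketch, assuming the fixed-credential connection already exists; the token and all IDs are placeholders.

      ```python
      import requests

      token = "<AAD bearer token>"        # placeholder: obtain via MSAL or az cli
      dataset_id = "<semantic model id>"  # placeholder

      resp = requests.post(
          f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/Default.BindToGateway",
          headers={"Authorization": f"Bearer {token}"},
          json={
              "gatewayObjectId": "<connection id>",        # placeholder
              "datasourceObjectIds": ["<datasource id>"],  # placeholder
          },
      )
      resp.raise_for_status()  # success: the model now uses the new connection
      ```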

  • @brunomagalhaes9349
    @brunomagalhaes9349 6 months ago

    I have several semantic models that are alike. Do I need a Fabric capacity to merge them and treat the data like I do for SQL? Thanks a lot.

  • @robcarrol
    @robcarrol 6 months ago

    Great demo. I've been using Direct Lake in a current project and absolutely love it

  • @shekharkumardas
    @shekharkumardas 10 months ago +2

    How do you create a DAX calculated column in a Direct Lake dataset?
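
    Direct Lake semantic models don't support DAX calculated columns, so the usual workaround is to materialize the column in the Delta table itself and let the model pick it up on the next reframe. A minimal sketch for a Fabric notebook (where `spark` is predefined); the table and column names are hypothetical.

    ```python
    # Add the derived column upstream in the lakehouse table instead of in DAX.
    spark.sql("ALTER TABLE Sales ADD COLUMNS (Margin DOUBLE)")
    spark.sql("UPDATE Sales SET Margin = Revenue - Cost")
    ```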

  • @nishantkumar9570
    @nishantkumar9570 10 months ago +5

    How will costing work for Direct Lake mode?

    • @toulasantha
      @toulasantha 8 months ago

      Less to start with
      Will be rocketing up after that
      Just like everything else MS 😂

  • @sandeepbarge4699
    @sandeepbarge4699 2 months ago

    Can I change Import Mode to Direct Lake Mode in my Power BI report?

  • @Mike-en1rd
    @Mike-en1rd 8 months ago

    Do you know when Direct Lake will be available to use in Power BI Desktop?

  • @UnbelievableOdyssey
    @UnbelievableOdyssey 7 months ago

    If my Delta Lake is in Azure Data Lake Storage, can I still use Direct Lake?
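
    Generally yes: a OneLake shortcut can surface a Delta table that lives in ADLS Gen2 under the lakehouse's Tables folder, where Direct Lake can read it. A sketch of the Fabric REST create-shortcut call; the token, IDs, connection, and paths are all placeholders.

    ```python
    import requests

    token = "<AAD bearer token>"      # placeholder
    workspace_id = "<workspace id>"   # placeholder
    lakehouse_id = "<lakehouse id>"   # placeholder

    resp = requests.post(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
        f"/items/{lakehouse_id}/shortcuts",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "path": "Tables",
            "name": "sales",  # name the table will appear under
            "target": {
                "adlsGen2": {
                    "location": "https://<account>.dfs.core.windows.net",
                    "subpath": "/<container>/<path-to-delta-table>",
                    "connectionId": "<connection id>",
                }
            },
        },
    )
    resp.raise_for_status()
    ```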

  • @gnomesukno
    @gnomesukno 10 months ago

    Not using it currently but I can see some potential benefits to it. Will have to look into it

  • @danrolfe7862
    @danrolfe7862 10 months ago

    THIS IS BANANAS!!!!!!!! WOOOHOOOO
    Is there still a row limit? (On data that you can actually bring into Power BI)
    I seem to remember hitting an upper limit on rows using the SQL endpoint / DirectQuery... I had this MONSTER dataset of about 14M rows, and the stakeholder insisted he needed all of the data.

  • @googlogmob
    @googlogmob 10 months ago

    Patrick, thanks 👍

  • @dilipinamdarpatil6301
    @dilipinamdarpatil6301 9 months ago

    Awesome 🙏

  • @NicolasPappasA
    @NicolasPappasA 7 months ago

    Is Direct Lake using Delta Live Tables? It seems like it's the same technology.

  • @EBAN4444
    @EBAN4444 10 months ago +2

    Does this mean the massive 25 GB model I have, which holds too many years of data because the "business" needs it even though they only look at a few years, can be removed, and then only the partitions of data that are needed will be held in memory? Lowering the memory used on the capacity, and the amount of data and CPU needed to crunch all the measures?
    Can I recreate the model using Direct Lake against our ADLS Gen2 Databricks parquet files, which are already the fact tables we pull in? Do you need to set up partitions in OneLake, or does it do that for you automatically?
    This does seem to remove the query-folding performance gains, so it seems the parquet files will need to be rewritten to be better optimized and only include the data that is needed in the model.
    Also, is that Python library to refresh a dataset available outside of OneLake? I.e., I'd love an easy way to refresh a Power BI model from an Azure Databricks notebook versus an ADF XMLA call.
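
    On the last question: the refresh doesn't have to be triggered from inside Fabric. Anything that can obtain an Azure AD token can call the Power BI refresh REST endpoint, including an Azure Databricks notebook. A sketch using MSAL with a service principal; every ID is a placeholder, and the principal needs access to the workspace plus tenant settings that allow service principals to use the API.

    ```python
    import msal
    import requests

    app = msal.ConfidentialClientApplication(
        client_id="<app id>",                                    # placeholder
        client_credential="<client secret>",                     # placeholder
        authority="https://login.microsoftonline.com/<tenant>",  # placeholder
    )
    token = app.acquire_token_for_client(
        scopes=["https://analysis.windows.net/powerbi/api/.default"]
    )["access_token"]

    resp = requests.post(
        "https://api.powerbi.com/v1.0/myorg/groups/<workspace id>"
        "/datasets/<dataset id>/refreshes",
        headers={"Authorization": f"Bearer {token}"},
        json={"type": "full"},  # enhanced refresh request body
    )
    resp.raise_for_status()  # 202 Accepted: refresh queued
    ```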

  • @googlogmob
    @googlogmob 10 months ago

    Is Fabric available for developers for free?

    • @srikanthm4504
      @srikanthm4504 9 months ago

      No, your admin must enable it, and can do so for a specific workspace.

  • @NateHerring1
    @NateHerring1 10 months ago +1

    I watch Patrick

  • @Milhouse77BS
    @Milhouse77BS 10 months ago

    I’m up