Getting data into your Microsoft Fabric Lakehouse using Load to Tables

  • Published: Nov 9, 2024

Comments • 11

  • @knuckleheadmcspazatron4939 · 6 months ago

    This is really awesome! For some files this is a great method; a "use it when it works" kind of thing.

  • @lukaszk4388 · 1 year ago · +1

    Hi, thanks for the video. The use cases and possibilities were explained very thoroughly; loading from a folder especially comes in handy.
    There is one thing I don't understand: why, when the preview doesn't display correctly, should we drop the table and load it again? How can you be sure the problem won't repeat? I would rather see the reasons why the table didn't load correctly, to understand where the problem is. What do you think?

  • @billkuhn5155 · 1 year ago · +1

    Very helpful video.
    Does/will Load to Tables support incremental load from lakehouse files using merge?
    I.e., if files containing inserts, updates, and deletes are copied into the lakehouse Files area, each file needs to be merged (in chronological order) into the lakehouse table so that the correct final state is attained.
    Also, is there a way to retain history in the lakehouse table with the ability to time travel (a popular feature of other table formats like Iceberg)?
    Thanks in advance for any pointers/suggestions.

    • @GuillaumeBerthier · 1 year ago

      Agreed, I would like to see incremental load capability with upsert/delete/merge operations; if I understood correctly, it currently does append only 😮
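
      Since lakehouse tables are Delta tables under the hood, a merge-style incremental load can already be scripted in a Fabric notebook, even though Load to Tables itself only appends or overwrites. The following is a minimal PySpark sketch, not the product feature itself; the table name dim_customer, the key column customer_id, the op column, and the file path are illustrative placeholders:

        # Hedged sketch: merge one batch of change rows (inserts/updates/deletes)
        # from the lakehouse Files area into a Delta table. All names are placeholders.
        from delta.tables import DeltaTable

        changes = (spark.read.format("csv")
                   .option("header", "true")
                   .load("Files/changes/batch_001.csv"))   # process batches in chronological order

        target = DeltaTable.forName(spark, "dim_customer")

        (target.alias("t")
            .merge(changes.alias("s"), "t.customer_id = s.customer_id")
            .whenMatchedDelete(condition="s.op = 'D'")        # apply deletes
            .whenMatchedUpdateAll(condition="s.op = 'U'")     # apply updates
            .whenNotMatchedInsertAll(condition="s.op = 'I'")  # apply inserts
            .execute())

        # Delta tables keep version history, so time travel (asked about above) is available:
        spark.sql("DESCRIBE HISTORY dim_customer").show()
        spark.sql("SELECT * FROM dim_customer VERSION AS OF 3").show()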

  • @XEQUTE · 7 months ago

    That automation thing can be very handy.

  • @sanishthomas2858 · 6 months ago

    Nice. If I save the files from the source into the Lakehouse Files area as CSV and JSON, will they be saved as Delta Parquet? If not, then why do we say data is saved in OneLake as Delta Parquet?

  • @mainuser98 · 2 months ago

    If I update the source files, does it update the Delta table?

  • @TomFrost33 · 6 months ago

    There are many videos about loading data into a Lakehouse. How do we manage/edit the data once it is in there?

  • @ricardoabella867 · 1 year ago

    You import the CSV, but from a folder on the computer; where is the connection to the file that originated the CSV? I see that the CSV in Fabric is static and is not being updated.

  • @DanielWillen · 1 year ago

    We have an older AX09 database that is read-only. It has about 1,000 tables. There's absolutely no easy way to copy those tables into a Lakehouse, even with pipelines. For one, the copy tool doesn't support schemas, so dbo.inventtrans becomes dbo.dbo_inventtrans in the target. Furthermore, you basically have to export one table at a time, because when selecting multiple tables, schema mappings are not generated. Then add to that the strictly case-sensitive queries. From Azure SQL to Azure Serverless to Fabric Warehouse in just a span of four years: it's too much to ask companies that have lots of data and integrations going on to make the switch every time.

    • @MrDanìelCoelho1 · 1 year ago

      Hi Daniel! It's Daniel here! 🙂
      Your scenario may align better with SQL mounting technologies or with ingesting directly into the SQL DW layer. If the latter, your data will show up as Delta directly. In both cases, there is more tooling around it. There is a lot coming on migration, and I'm sure your scenario is covered.
      I'd also say that a data copy to the Lakehouse as Delta would be a "write once, run many" script coded in PySpark (Notebooks or SJDs) and operationalized using either Pipelines or the direct scheduling capabilities of Fabric (see the sketch below).
      Nevertheless, I'm forwarding your feedback to the Data Integration teams.
      Thanks!
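
      A minimal sketch of the "write once, run many" idea described above, assuming a JDBC-reachable source database; the connection string, credentials, and table list are placeholders, and the schema prefix simply mirrors the dbo_inventtrans naming mentioned earlier. Once written, a script like this can be rerun on a schedule from a Pipeline notebook activity or a Spark Job Definition.

        # Hedged sketch: reusable PySpark loop that copies source tables into
        # lakehouse Delta tables. URL, credentials, and table names are placeholders.
        tables = ["dbo.inventtrans", "dbo.custtable"]    # could be read from INFORMATION_SCHEMA instead

        jdbc_url = "jdbc:sqlserver://<server>;databaseName=AX09"   # placeholder

        for source_table in tables:
            schema, name = source_table.split(".")
            df = (spark.read.format("jdbc")
                  .option("url", jdbc_url)
                  .option("dbtable", source_table)
                  .option("user", "<user>")              # placeholder credentials
                  .option("password", "<password>")
                  .load())

            # Lakehouse table names are flat, so keep the source schema as a prefix.
            target_table = f"{schema}_{name}".lower()
            df.write.format("delta").mode("overwrite").saveAsTable(target_table)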