Ingesting data into Fabric Warehouse

  • Published: Sep 18, 2024

Comments • 5

  • @ItsNotAboutTheCell
    @ItsNotAboutTheCell 29 days ago +2

    Haha! Love seeing John with that massive coffee mug! He's been awesome to work and collaborate with on data ingestion projects.

  • @keen8five
    @keen8five 1 month ago +3

    I'd love to ingest data from on-prem using Notebooks (code-based)

  • @Sarathen2007
    @Sarathen2007 10 days ago

    I don’t think it is possible to create shortcuts from one Fabric warehouse to another warehouse, as mentioned in the virtualisation options here. Can you please confirm? I was only able to create shortcuts from a Fabric warehouse to a lakehouse, but I need to create them in another warehouse. Is that possible?

  • @sgfgdsfae
    @sgfgdsfae 29 days ago

    Using the Mirror feature to sync data into Delta tables is fantastic ... however, we are missing the ability to track changes in order to perform downstream incremental updates.
    The Delta tables in the mirrored database allow time travel using versions, but I can't find a way to enable Change Data Feed, which would be a game changer for incremental workflows (a sketch of the CDF read pattern follows this thread). Is this on the roadmap, or is there a different solution?

    • @MarkPryceMaher
      @MarkPryceMaher 23 days ago

      In theory, yes, it's possible to do what you want. Each change is a Delta commit, so you can work out the delta between versions using Spark SQL today (I have done it; see the version-diff sketch after this thread), but you might not get the results you expect. The issue is that this is not what mirroring was designed for; it's eventually consistent. The Delta table in OneLake should represent the table in SQL. We may move incremental changes, but if you change the majority of the table (say, updating 1 billion rows or changing the schema), we could replicate 1 billion individual changes, or we could take a new snapshot and replace the entire thing. So if you are relying on incremental updates, a single "change" could be the whole table. In the future we may be able to support this, but not right now.
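
A minimal sketch of the version-diff approach Mark describes, assuming a Fabric Spark notebook where `spark` is the active session; the table path and choice of versions are hypothetical, and, as he notes, a snapshot-style resync can make every row show up as changed:

```python
# Sketch: deriving row-level changes by diffing two versions of the Delta
# table that Mirroring lands in OneLake. Assumes a Fabric Spark notebook
# where `spark` is the active SparkSession; path and versions are hypothetical.
from delta.tables import DeltaTable

table_path = "Tables/dbo/SalesOrders"  # hypothetical OneLake table path

# Latest two commits from the table history (newest first).
history = DeltaTable.forPath(spark, table_path).history(2).collect()
latest, previous = history[0]["version"], history[1]["version"]

# Rows added or rewritten between the two versions (inserts plus
# post-update images), computed with Spark SQL time travel.
added_or_updated = spark.sql(f"""
    SELECT * FROM delta.`{table_path}` VERSION AS OF {latest}
    EXCEPT ALL
    SELECT * FROM delta.`{table_path}` VERSION AS OF {previous}
""")

# Rows removed or rewritten (deletes plus pre-update images).
removed_or_updated = spark.sql(f"""
    SELECT * FROM delta.`{table_path}` VERSION AS OF {previous}
    EXCEPT ALL
    SELECT * FROM delta.`{table_path}` VERSION AS OF {latest}
""")

added_or_updated.show()
removed_or_updated.show()
```

Because mirroring may replace the table with a fresh snapshot rather than replay individual changes, a diff that suddenly spans the whole table is expected behaviour here, not a bug.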
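
For comparison, this is roughly what the Change Data Feed pattern asked about above looks like on an ordinary Delta table; whether the property can be enabled on a read-only mirrored table is exactly the open question, and the table name and starting version are hypothetical:

```python
# Sketch: Change Data Feed on an ordinary Delta table. NOT confirmed to
# work on mirrored tables (see the reply above); the table name and
# starting version are hypothetical.

# CDF must be enabled before the changes you want to read are committed.
spark.sql("""
    ALTER TABLE sales_orders
    SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
""")

# Read row-level changes from a starting version onwards. Each row carries
# _change_type ('insert', 'delete', 'update_preimage', 'update_postimage'),
# plus _commit_version and _commit_timestamp metadata columns.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 5)  # hypothetical starting point
    .table("sales_orders")
)
changes.show()
```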