REST API Pagination in Microsoft Fabric Notebooks: Fetch & Write JSON to Lakehouse

  • Published: 27 Dec 2024

Comments • 8

  • @adilmajeed8439 · 2 months ago · +1

    Thanks for sharing your experience. Would it be possible to run the fetches in parallel for the pokemon and berry endpoints, as you mentioned in the last part of the video? Also, could you lay out a use case for using Airflow too?

    • @AleksiPartanenTech · 2 months ago · +1

      Thanks! I am planning to make a video on parallelizing those API calls. :)
      I have no experience with Airflow, so I would need to dig into it first before I can make a video on it.
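
      A minimal sketch (not code from the video) of running those two calls in
      parallel with a thread pool; the pagination fields ("next", "results")
      follow PokeAPI's public response shape:

      ```python
      import requests
      from concurrent.futures import ThreadPoolExecutor

      def fetch_all(url):
          """Follow the paginated 'next' links until the endpoint is exhausted."""
          results = []
          while url:
              payload = requests.get(url, timeout=30).json()
              results.extend(payload["results"])
              url = payload["next"]  # None on the last page, which ends the loop
          return results

      endpoints = [
          "https://pokeapi.co/api/v2/pokemon",
          "https://pokeapi.co/api/v2/berry",
      ]

      # One worker thread per endpoint; map() preserves the input order.
      with ThreadPoolExecutor(max_workers=2) as pool:
          pokemon, berries = pool.map(fetch_all, endpoints)
      ```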

  • @Stigzy90 · 2 months ago · +1

    Great video, Aleksi!
    I think this will help me get data from my financial system's API into Fabric. Do you have a video that covers using a notebook with basic access authentication when fetching data from an API?
    Have a great weekend! 😄

    • @AleksiPartanenTech · 2 months ago · +1

      Thanks!
      Creating videos on API authentication methods is a bit tricky, since I would probably need to set up my own API to demonstrate them. However, you should be able to find plenty of help for that online, and tools like Postman are very helpful when working with APIs. ChatGPT can also be very helpful for these kinds of cases (a basic-auth sketch follows below).
      Have a nice weekend as well! :)
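
      For reference, basic access authentication with the requests library is
      just an extra argument on the call; a minimal sketch, where the endpoint
      URL and credentials are placeholders rather than anything from the video:

      ```python
      import requests
      from requests.auth import HTTPBasicAuth

      # Placeholder endpoint and credentials; in practice, read secrets from
      # a secure store instead of hardcoding them in the notebook.
      response = requests.get(
          "https://api.example.com/v1/transactions",
          auth=HTTPBasicAuth("my_username", "my_password"),
          timeout=30,
      )
      response.raise_for_status()
      data = response.json()
      ```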

  • @kevinq6628 · 2 months ago · +1

    Great video! Can you make a video on how to schedule notebooks to download data from an API and save it to a table daily by appending the data?

    • @AleksiPartanenTech · 2 months ago · +2

      Great suggestion! I might cover this in a separate video in the future. For now, it can be done by combining topics from my previous videos: schedule the notebook itself, or add it to a pipeline and schedule that. When writing the data to the delta table, choose the "append" mode rather than "overwrite" (see the sketch below).
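
      A minimal sketch of that append pattern, assuming a Fabric notebook where
      `spark` is predefined, `records` holds the day's API response as a list
      of dicts, and the table name is a placeholder:

      ```python
      from pyspark.sql import functions as F

      # Stamp each batch with its load date so daily appends stay traceable.
      df = spark.createDataFrame(records).withColumn("load_date", F.current_date())

      # mode("append") adds the new rows to the existing delta table;
      # mode("overwrite") would replace its contents instead.
      df.write.format("delta").mode("append").saveAsTable("daily_api_data")
      ```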

    • @ff_bolinho · 2 months ago · +1

      @AleksiPartanenTech do you recommend landing the raw data straight as delta? Although it seems way easier with pipelines plus the append option you mentioned, I'm not sure whether it's the best option… looking forward to hearing your thoughts :)

    • @AleksiPartanenTech · 2 months ago · +2

      @ff_bolinho Usually I would recommend landing the data in as raw a format as possible, without any transformations. These raw files then act as insurance: they can be used to regenerate the following layers if something goes wrong (see the sketch below). Also, when investigating issues it is best to have those raw files available. Those are my thoughts on this. :)
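
      A minimal sketch of that raw-first landing, assuming a Fabric notebook
      with a default Lakehouse attached (mounted under /lakehouse/default); the
      folder and file naming are placeholders:

      ```python
      import os
      import requests
      from datetime import date

      # Fetch one page and keep the response exactly as it arrived.
      response = requests.get("https://pokeapi.co/api/v2/pokemon", timeout=30)
      response.raise_for_status()

      # Land it untouched in the Lakehouse Files area; later layers can be
      # regenerated from these raw files if something goes wrong.
      os.makedirs("/lakehouse/default/Files/raw", exist_ok=True)
      path = f"/lakehouse/default/Files/raw/pokemon_{date.today():%Y%m%d}.json"
      with open(path, "w") as f:
          f.write(response.text)
      ```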