DP-203: 14 - Error handling in Azure Data Factory

  • Published: 9 Nov 2024

Comments • 32

  • @vaibhavmethe1
    @vaibhavmethe1 9 months ago +2

    This series is an absolute gem. I have watched 14 videos so far and each one of them is full of depth and detail. All the complex topics are covered with so much simplicity that it makes understanding everything clear and fun. Hope you continue making videos.

  • @davidfreibrun
    @davidfreibrun 10 months ago

    I'm really enjoying listening to your presentations. Please continue putting out this great content on DP-203! :)

  • @prabhuraghupathi9131
    @prabhuraghupathi9131 7 months ago

    Great content on error handling, covering various topics and the need for them!! Thank you Tybul!!

  • @yangxu2496
    @yangxu2496 5 months ago

    Agreed, it's the best video so far in this series. I can't click the thumbs-up button enough.

  • @amazhobner
    @amazhobner 9 months ago

    Last month I tried to recreate the same scenario, but my pipeline failed and I was confused as to why it happened. Thank you for your explanation at 17:00, it makes sense now why that pipeline failed.

  • @fekasng2010
    @fekasng2010 11 months ago

    Nice one... Thank you. Always on standby

  • @soumikmishra7288
    @soumikmishra7288 4 months ago

    This was so amazing and in depth. Learnt a lot. Thank You!!

  • @Ef-sy4qp
    @Ef-sy4qp 8 months ago

    Detailed explanations with good examples. Hope you will show us a complex use case from a real project. Thanks

  • @MrPelastus
    @MrPelastus 4 months ago

    Thank you again for this video. It helped address one of my problem areas.

  • @gilne
    @gilne 1 month ago

    Great, thank you Piter

  • @JH-um7yz
    @JH-um7yz 9 months ago +1

    Amazing video. Very concise explanations. Question: I have a Logic App at the end of a pipeline that sends an email on failure of any preceding activity. How can I force a failure status for the pipeline so it shows on Monitor as failed? Since the Logic App is the leaf and is successful, the pipeline shows successful. Also, could you create a video on sending diagnostic logs to a storage account endpoint instead of Log Analytics? And how I might have a pipeline that collects the logs and writes them to an Azure SQL database. Again, thank you for taking the time to create these videos! Very helpful for those getting started with ADF.

    • @TybulOnAzure
      @TybulOnAzure  9 months ago +1

      1. You can use the Fail activity: learn.microsoft.com/en-us/azure/data-factory/control-flow-fail-activity. Just put it at the end of your error handling logic, e.g. after the Logic App that sends the emails.
      2. It's super simple - in ADF Diagnostic Settings just check the "Archive to a storage account" checkbox and indicate which Storage Account should be used.
      3. You can use a Copy activity, set your Storage Account as the source and your Azure SQL DB as the sink, and you're done.
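      For reference, a minimal sketch of what point 1 could look like in the pipeline JSON. This is just an illustration under assumptions: the activity names ("Send failure mail", "Copy data") and the error code are hypothetical placeholders, not from the video.

      ```json
      {
        "name": "Fail pipeline",
        "type": "Fail",
        "dependsOn": [
          {
            "activity": "Send failure mail",
            "dependencyConditions": [ "Succeeded" ]
          }
        ],
        "typeProperties": {
          "message": "@activity('Copy data')?.error?.message",
          "errorCode": "500"
        }
      }
      ```

      Because the Fail activity is now the leaf that runs after the notification, the pipeline run should show up as Failed in Monitor instead of Succeeded.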

    • @JH-um7yz
      @JH-um7yz 9 months ago

      @@TybulOnAzure thank you for the tips!!

  • @qaz56q
    @qaz56q 5 months ago

    Is it possible to include in the message (within the Logic App) the error(s) from all activities available in the pipeline? I understand that we can use activity('Actname')?.error?.message, but in this case we have to hardcode the name of the activity (I also wouldn't want to do a nested If Condition). Does ADF allow iterating through all activities, collecting a list of errors (activity:error), flattening the collection, and passing it to the body of a Web activity (Logic App)?
    I can imagine a situation in which we have implemented "general error handling" with several steps (the 5th type discussed in the material), where at the end I call the service.

    • @TybulOnAzure
      @TybulOnAzure  5 months ago

      I wish there was a system variable like @error_message that would automatically store all errors from the current pipeline. Unfortunately, there is no such thing.
      What you could do about it:
      1. Option A: Hardcode the names of all activities that might fail. The obvious disadvantage is that adding a new activity requires a developer to also update this error handling part.
      2. Option B: Don't implement error handling directly inside ADF pipelines, but instead use a scheduled Logic App that queries the Log Analytics Workspace for failures.
      3. Option C: Query REST API as described here (mrpaulandrew.com/2020/04/22/get-any-azure-data-factory-pipeline-activity-error-details-with-azure-functions/) or here (stackoverflow.com/questions/69562327/how-to-get-the-details-of-an-error-message-in-an-azure-data-factory-pipeline).
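      A sketch of what Option A might look like: a single Fail activity whose message expression hardcodes each activity name and takes the first non-null error via coalesce. The activity names "Copy A" and "Copy B" and the error code are placeholders for illustration only.

      ```json
      {
        "name": "Fail with first error",
        "type": "Fail",
        "typeProperties": {
          "message": "@coalesce(activity('Copy A')?.error?.message, activity('Copy B')?.error?.message, 'Unknown error')",
          "errorCode": "500"
        }
      }
      ```

      The same coalesce expression could equally feed the body of the Web activity that calls the Logic App; either way, every fallible activity still has to be listed by name, which is exactly the maintenance drawback of this option.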

    • @qaz56q
      @qaz56q 5 months ago

      @@TybulOnAzure Thanks for the answer. So I'll work with logic outside of ADF using a Logic App + Log Analytics

  • @heljava
    @heljava 8 months ago

    Great as always. Thank you very much for this amazing hands-on practice.
    One question: which of the options is more expensive in terms of resources? I know from experience that Logic Apps can be expensive at each run.
    I'm just curious to know about the others, if you could answer that.

    • @TybulOnAzure
      @TybulOnAzure  7 months ago +1

      Honestly, when implementing an error handling & notification part of my ADF solution, the cost of it wouldn't be my biggest concern as I don't expect to trigger it very often. Otherwise, it would mean that I did a really crappy job at implementing those pipelines if they fail every day.
      So basically the cost should be negligible and it wouldn't really matter what option you choose.

    • @heljava
      @heljava 7 months ago

      @@TybulOnAzure thanks for the feedback!

    • @VAIBHAVSAXENA-w3t
      @VAIBHAVSAXENA-w3t 1 month ago

      @@TybulOnAzure 😂😂

  • @LATAMDataEngineer
    @LATAMDataEngineer 6 months ago

    Wow this is top 1 video for now... Thanks 👏👏

  • @diegoalias2935
    @diegoalias2935 7 months ago

    Hello Tybul. Do you recommend having only one Logic App for both the DEV and PROD Data Factory? I have the same question about Key Vaults. Thanks a lot for sharing the videos!!

    • @TybulOnAzure
      @TybulOnAzure  7 months ago

      No, in my opinion DEV and PROD environments should be isolated so you would have separate Logic Apps and Key Vaults. There are two main reasons for this:
      1. Security - you don't want to grant DEV services access to PROD ones.
      2. It will allow you to test new features/changes on DEV without affecting PROD.

  • @zouhair8161
    @zouhair8161 10 months ago

    thanks

  • @sio80orel
    @sio80orel 5 months ago

    oh come on)) Great video!