This series is an absolute gem. I have watched 14 videos so far and each one of them is full of depth and detail. All the complex topics are covered with so much simplicity that it makes understanding everything clear and fun. I hope you keep making videos.
I'm really enjoying listening to your presentations. Please continue putting out this great content on DP-203! :)
Last month I tried to recreate the same scenario, but my pipeline failed and I was confused as to why. Thank you for your explanation at 17:00; it now makes sense why that pipeline failed.
Great content on error handling, covering the various approaches and why they're needed!! Thank you Tybul!!
Agreed, it's the best video so far in this series. I can't click the thumbs-up button enough.
Thanks a lot!
Nice one... Thank you. Always on standby
This was so amazing and in depth. Learnt a lot. Thank You!!
Glad it was helpful!
Thank you again for this video. It helped address one of my problem areas.
Glad it helped!
Detailed explanations with good examples. I hope you will show us a complex use case from a real project. Thanks
Amazing video. Very concise explanations. Question: I have a Logic App at the end of a pipeline that sends an email on failure of any preceding activity. How can I force a failure status for the pipeline so it shows up as failed in Monitor? Since the Logic App is the leaf and succeeds, the pipeline shows as successful. Also, could you create a video on sending diagnostic logs to a storage account endpoint instead of Log Analytics? And how I might have a pipeline that collects the logs and writes them to an Azure SQL database. Again, thank you for taking the time to create these videos! Very helpful for those getting started with ADF.
1. You can use the Fail activity: learn.microsoft.com/en-us/azure/data-factory/control-flow-fail-activity. Just put it at the end of your error handling logic, e.g. after the Logic App that sends emails - see the sketch after this list.
2. It's super simple - in ADF Diagnostic Settings just check the "Archive to a storage account" checkbox and indicate which Storage Account should be used.
3. You can use a Copy activity: set your Storage Account as the source, your Azure SQL DB as the sink, and you're done.
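For point 1, here's a minimal sketch of what that Fail activity could look like in the pipeline JSON. The activity name "Send failure email" is a hypothetical placeholder for your Logic App call:

```json
{
  "name": "Force pipeline failure",
  "type": "Fail",
  "description": "Runs after the (hypothetical) 'Send failure email' activity and marks the whole pipeline run as Failed in Monitor.",
  "dependsOn": [
    {
      "activity": "Send failure email",
      "dependencyConditions": [ "Succeeded" ]
    }
  ],
  "typeProperties": {
    "message": "An upstream activity failed; a notification email was sent.",
    "errorCode": "500"
  }
}
```

Both "message" and "errorCode" accept expressions, so you could also propagate the original error message instead of a static string.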
@TybulOnAzure thank you for the tips!!
Great, thank you Piotr
My pleasure
Great as always. Thank you very much for this amazing hands-on practice.
One question: which of the options is the most expensive in terms of resources? I know from experience that Logic Apps can be expensive on each run.
I'm just curious to know about the others, if you could answer that.
Honestly, when implementing the error handling & notification part of my ADF solution, its cost wouldn't be my biggest concern, as I don't expect to trigger it very often. Otherwise it would mean that I did a really crappy job implementing those pipelines if they fail every day.
So basically the cost should be negligible and it wouldn't really matter what option you choose.
@TybulOnAzure thanks for the feedback!
@TybulOnAzure 😂😂
Is it possible to include in the message (within the Logic App) the error(s) from all activities in the pipeline? I understand that we can use activity('Actname')?.error?.message, but in that case we have to hardcode the name of the activity (and I wouldn't want a nested If Condition either). Does ADF allow iterating through all activities, collecting a list of errors (activity:error), flattening the collection and passing it to the body of the Web activity (Logic App)?
I can imagine a situation in which we have implemented general error handling with several steps (the 5th type discussed in the material), where at the end I call the service.
I wish there was a system variable like @error_message that would automatically store all errors from the current pipeline. Unfortunately, there is no such thing.
What you could do about it:
1. Option A: Hardcode the names of all activities that might fail. The obvious disadvantage is that adding a new activity requires a developer to also update this error handling part. See the sketch after this list.
2. Option B: Don't implement error handling directly inside ADF pipelines; instead, use a scheduled Logic App that queries the Log Analytics workspace for failures.
3. Option C: Query the ADF REST API as described here (mrpaulandrew.com/2020/04/22/get-any-azure-data-factory-pipeline-activity-error-details-with-azure-functions/) or here (stackoverflow.com/questions/69562327/how-to-get-the-details-of-an-error-message-in-an-azure-data-factory-pipeline).
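For Option A, a minimal sketch of the Web activity that calls the Logic App, assuming hypothetical activity names 'Copy to staging' and 'Load to SQL'. The ?. operator returns null for activities that didn't fail, so coalesce() falls back to 'none':

```json
{
  "name": "Notify via Logic App",
  "type": "WebActivity",
  "description": "Collects error messages from hardcoded activity names and posts them to the Logic App. 'Copy to staging' and 'Load to SQL' are hypothetical names.",
  "typeProperties": {
    "url": "https://<your-logic-app-http-trigger-url>",
    "method": "POST",
    "body": {
      "value": "@concat('Pipeline: ', pipeline().Pipeline, ' | Copy error: ', coalesce(activity('Copy to staging')?.error?.message, 'none'), ' | Load error: ', coalesce(activity('Load to SQL')?.error?.message, 'none'))",
      "type": "Expression"
    }
  }
}
```

Note that this Web activity has to sit downstream of those activities (via 'Failed' or 'Completion' dependency paths); otherwise the activity() references aren't resolvable.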
@TybulOnAzure Thanks for the answer. So I'll implement the logic outside of ADF using a Logic App + Log Analytics.
Hello Tybul. Do you recommend having only one Logic App for both the DEV and PROD Data Factory? I have the same question about Key Vaults. Thanks a lot for sharing the videos!!
No, in my opinion DEV and PROD environments should be isolated so you would have separate Logic Apps and Key Vaults. There are two main reasons for this:
1. Security - you don't want to grant DEV services access to PROD ones.
2. It will allow you to test new features/changes on DEV without affecting PROD.
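As a concrete illustration of that isolation: a Key Vault linked service on each environment points to its own vault, and the CI/CD process just swaps the URL. A minimal sketch, assuming a hypothetical vault name:

```json
{
  "name": "ls_keyvault",
  "properties": {
    "type": "AzureKeyVault",
    "description": "On DEV this points to the DEV vault; the CI/CD pipeline overrides baseUrl with the PROD vault URL when deploying to PROD.",
    "typeProperties": {
      "baseUrl": "https://kv-myproject-dev.vault.azure.net/"
    }
  }
}
```

With ADF's standard ARM export, baseUrl is parameterized by default, so the deployment to PROD simply supplies the PROD vault URL via the ARM template parameters file and the definition itself is promoted unchanged.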
oh come on)) Great video!
Thanks! 😃
Wow, this is the #1 video so far... Thanks 👏👏
Glad you liked it!
thanks