Great work Maheer, a couple of observations:
1. A Type 2 dimension needs EffectiveStartDate / EffectiveEndDate too. If we add these columns, updating all history rows on every run will keep resetting these dates, which defeats the Type 2 idea. It is also bad for performance, since we would always be updating every history row, even if there are millions.
2. During the first execution, we should be able to verify whether the source entry for EmpId=1001 was really updated, because only then does it make sense to INSERT a new row and UPDATE the history rows; otherwise we are simply duplicating rows with no changes.
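Both observations can be sketched in a few lines of Python. This is only a toy model of the logic, not the video's data flow: the column names (EmpId, IsActive, EffectiveStartDate, EffectiveEndDate) and the in-memory list standing in for the target table are assumptions taken from this thread. The key points are that only the single current row is expired (older history rows keep their dates) and a no-change row is skipped entirely.

```python
from datetime import date

def scd2_apply(target, source_row, today):
    """Expire only the current version, and only when the attributes changed."""
    current = next((r for r in target
                    if r["EmpId"] == source_row["EmpId"] and r["IsActive"] == 1),
                   None)
    if current is not None:
        changed = any(current[c] != source_row[c]
                      for c in source_row if c != "EmpId")
        if not changed:
            return target                      # no-op: nothing really updated
        current["IsActive"] = 0                # touch ONLY the current row,
        current["EffectiveEndDate"] = today    # so older history keeps its dates
    target.append({**source_row, "IsActive": 1,
                   "EffectiveStartDate": today, "EffectiveEndDate": None})
    return target
```

Re-running it with an unchanged source row leaves the target untouched, which is exactly the behaviour several commenters below are asking for.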
Best way to implement SCD Type 2 😀👍 very well explained
Thank you 😊
great explanation... explained in very easy way to understand the concept
Thank you 🙂
Good explanation. But I guess you forgot to add a check for whether any of the columns coming from the source file has changed, because you should update the row in the target only if you find a difference between the source and the destination.
Nice and Superb Explanation. Thanks alot Maheer.
Thank you 🙂
Could you please tell me how your pipeline behaves if you do not change anything? In my case it inserts a new row with isrecent=1 and sets the previous row to isrecent=0, but since I am not changing anything, it should not be inserted again.
I have exactly the same question.
If nothing changes, it is not supposed to be added again.
How can we fix this?
Hi! were you able to fix this issue? I need help on this
Can anyone explain what he means by using Alter Row instead of Select at 21:30?
Nice technique, great job! One small nitpick ... I'd prefer it if you used true() instead of 1==1 for your Alter Row update policy :)
Yup. We are good to use the true() function as well. The idea is that the condition should always return true, so that the update policy is applied to all rows 😊
good one dude thanks for explaining .
Thank you ☺️
@@WafaStudies Great explanation. How can I redirect the history (isActive=0) to a different table if I done want to keep history on same table. Do you have a video for this or can you create one. Thank you!
Good explanation. What if I have duplicate rows in the source file? How do I filter them?
Good one Maheer. Along with this, could you cover duplicate records in the source, making some columns SCD Type 1 and others SCD Type 2 in the same table, and also incremental load, as a new session?
Hi Maheer, can we use an Inner Join instead of Lookup and Filter?
Hi Maheer sir, in the case of SCD Type 2, can we use the Inner Join transformation? Since it will only take the common rows from the CSV file and the SQL table, based on the primary key column, there would be no need to apply the Filter transformation. I mean that instead of the Lookup transformation followed by the Filter transformation, we could directly use an Inner Join transformation, to simplify.
Am I right? Can we do so?
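For finding the matched rows on a single key, the two approaches do coincide, at least in this simplified view. A toy Python sketch (assuming the key is unique in the target; real ADF Lookup and Join transformations also carry across the target's columns, which this sketch ignores):

```python
def matched_via_lookup_then_filter(source, target, key="EmpId"):
    """Lookup flags each source row with a match indicator, then a
    filter step keeps only the flagged rows."""
    target_keys = {t[key] for t in target}
    flagged = [dict(s, matched=(s[key] in target_keys)) for s in source]
    return [{k: v for k, v in s.items() if k != "matched"}
            for s in flagged if s["matched"]]

def matched_via_inner_join(source, target, key="EmpId"):
    """An inner join on the key yields the matched rows directly."""
    target_keys = {t[key] for t in target}
    return [s for s in source if s[key] in target_keys]
```

The practical difference in a data flow is what happens to the *unmatched* rows: a lookup keeps them in the stream (needed for the insert branch), while an inner join drops them, so the join-only variant works for the update branch but not as a single replacement for both branches.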
For SCD Type 1, instead of an update we just have to delete that row, and the further activities are not required in sink2, right?
Please check the link below for SCD Type 1:
ruclips.net/video/MzHWZ5_KMYo/видео.html
May I know how the surrogate key is generated in the dim table?
This is a really good video and helpful too. Just one suggestion: can you add record_create_date and record_expire_date and then upload? It would be great.
In the update branch, we could replace Lookup and Filter with an inner Join.
SCD Type 2 was explained properly, but one scenario was not covered: suppose we receive the same record from the source that is already present in the target. In that case too, this logic will create a new record and update the old record as inactive.
May I ask how to resolve this then? Thank you.
+++ With this data flow, ADF cannot recognise the old data; it literally gives isactive=1 for every row. It would be better with staging, I think.
Excellent Maheer, can you please do a video on SCD Type 3? 😊
I see the surrogate key is initially inserted for the target record, but the source record has no surrogate key. Can you explain how the surrogate key is mapped for the newly inserted records?
You just don't insert it; the sink will add surrogate keys automatically since it is an identity column. If you add the surrogate key in the mapping, it will fail.
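That reply can be illustrated with a toy sink class. This is only a sketch of how a SQL IDENTITY column behaves; the column name SurrogateKey is an assumption, and the ValueError stands in for the real database error you get when mapping an explicit value to an identity column:

```python
class DimSink:
    """Toy sink that mimics a SQL IDENTITY surrogate key: the key is
    assigned at insert time, so incoming rows must NOT map a value for it."""
    def __init__(self, start=1):
        self.rows = []
        self._next_key = start

    def insert(self, row):
        if "SurrogateKey" in row:
            # mirrors the real failure when an identity column is mapped
            raise ValueError("do not map SurrogateKey; the sink generates it")
        self.rows.append({"SurrogateKey": self._next_key, **row})
        self._next_key += 1
```

So in the data flow mapping, the surrogate key column is simply left unmapped and the database numbers the new rows itself.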
Nice explanation WafaStudios.
I have a doubt: how do we handle the rows which do not have any updates in the source? With this example, even the unaffected data will be updated in the destination unnecessarily. Looking forward to your reply, and thanks in advance.
+1, I too have the same question.
Yes, same doubt here. Could you please respond, @WafaStudies? Many people have this doubt.
+1
Nice job. Please keep them coming. How about a video on SCD Type 4 implementations?
Good explanation.
I have implemented it as per your explanation, but I am facing an issue that the key column does not exist in the sink. Here is the screenshot.
I was trying SCD Type 2 using data flows to make it dynamic, but on the first run it fails because I haven't chosen the option to inspect the schema to make it usable for any delta table. Any workaround for this? Ideally it should at least be able to read the header even though the delta table is empty, but I am getting an error on the source side when the table is empty.
Can you make a video that includes Start Date and End Date, with the dates getting updated dynamically for Type 2 SCD? I see that as a necessity, and many people face this issue.
Did you learn that? If so, where?
@WafaStudies I am facing a problem implementing SCD2 using the Exists transformation instead of the Lookup you used here, but I guess the problem is the same for both implementations. We need to make sure the update inside the table finishes first. If the new records are accidentally inserted into the table first, the lookup will fetch the newly inserted rows as matching too, and therefore all the rows get marked as inactive. But the order of execution of the parallel streams is not in our hands. How do we solve this? Any idea?
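One way to reason about this race: make the "what to expire" decision against a snapshot taken before any writes, and then force the write order (in ADF terms this might mean two chained data flows, or the custom sink ordering option on the sinks, depending on your setup). A toy Python sketch of the ordering idea, with the schema assumed from this thread:

```python
def scd2_two_phase(target, source):
    """Decide what to expire from a snapshot taken BEFORE any writes,
    then apply updates first and inserts second, in that fixed order."""
    source_keys = {s["EmpId"] for s in source}
    # phase 0: snapshot the rows to expire before anything is inserted,
    # so freshly inserted rows can never be mistaken for existing ones
    to_expire = [r for r in target
                 if r["IsActive"] == 1 and r["EmpId"] in source_keys]
    # phase 1: updates
    for r in to_expire:
        r["IsActive"] = 0
    # phase 2: inserts
    for s in source:
        target.append({**s, "IsActive": 1})
    return target
```

Because the expire set is frozen before any insert happens, the outcome no longer depends on which stream the engine schedules first.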
We did not check MD5 values for the attributes of employee IDs that are already present in both source and target…
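The MD5 idea: instead of comparing every attribute column pair-by-pair, hash the tracked attributes on both sides and compare a single value (the mapping data flow expression language has md5() and sha2() functions usable in a derived column for this). A Python sketch, where the tracked column list is an assumption for illustration:

```python
import hashlib

TRACKED = ["Name", "Dept", "Salary"]   # attribute columns to compare (assumed)

def row_md5(row):
    """MD5 over the tracked attributes; '|' separates values so that
    ('ab','c') and ('a','bc') do not produce the same payload."""
    payload = "|".join(str(row[c]) for c in TRACKED)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

def really_changed(source_row, target_row):
    return row_md5(source_row) != row_md5(target_row)
```

Only rows where really_changed() is true would go down the expire-and-insert branch; identical rows pass through untouched.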
Hi, suppose I am archiving rows in a database. During the copy from one database to another, I want to delete whatever I'm archiving from the source. Is there a place where I could write a query that does that, instead of using Alter Row etc.? The expression builder is just not what I need.
Using Azure Data Factory.
Can we use SCD2 on real-time data?
Hello. How about doing it in SQL Server and not in the query editor? Like doing the mapping in Azure Data Factory, but with the result or output visible in SQL Server. 😊
Great work Maheer!
How do I load a Parquet file from on-premises to an Azure SQL database using Azure Data Factory?
thanks Maheer
Welcome 🤗
@WafaStudies, what if we get the same column values from the source in an incremental load? In that case, the isActive 0 and isActive 1 rows will both be true duplicates.
Can we do SCD Type 2 on a Delta file using a mapping data flow?
I am getting this error: "Cannot insert explicit value for identity column in table when IDENTITY_INSERT is set to OFF." Can anyone help with this?
In SSIS this is very easy to accomplish; why is it still so cumbersome in ADF?
Nice info
Thank you 🙂
Sir, it is not working; the values still remain 1 for all rows, plus it does not recognise the old data, it literally inserts all the data.
Create a branch from the source, use Alter Row to update the records in the sink that are present in the source, and in the branch just use insert.
Hello Sir,
I want to switch my career from SQL Server developer. Should I go through the playlists below?
1. Azure Basics
2. Azure Functions
3. Azure Data Factory
Please suggest steps.
Azure Functions is not required. Azure Basics and ADF.
Thank you..🙏🙏
Can I practice ADF on a free Azure subscription?
Ya
Good video but all the noise from the kids in the background was very distracting and loud.
Thank you. Sorry for that trouble. In older videos that might have happened. I am trying to avoid any noise in recent videos 🙂