Stack Overflow, ChatGPT, and Microsoft documentation couldn't easily explain this process to me. This YouTube video is gold - thank you for the detailed and clear steps! This worked perfectly and helped me understand the nuances of the ADF pipeline.
Thank you so much for your compliment 😊 it means a lot
Thanks for a great step-by-step explanation of how to copy files by the last modified date.
It is really helpful, Annu. I was supposed to start like this, but I don't have enough time. Anyway, you've done a great job. I'm Anil, a solution architect in data engineering.
I like the way of teaching and content
Nice Explanation. Thank you
Awesome video and very helpful, thank you!
Excellent explanation, thank you!
Nice explanation Annu 👌 👍 👏
Thank you 😊
This is Golden thanks a lot you just made my day.
Thank you
Annu garu ..good explanation
Thank you 😊
@@azurecontentannu6399 I think you could make a video on Azure Data Factory activities
Very good explanation 🙂 many thanks Annu.
I am following each and every one of your videos, keep going!
Thank you so much 😇
Clean and very simple explanation. Can you please let me know how I can send the latest file as an attachment in an email, instead of copying it to another folder?
Sensational video! One question: why in ForEach1 did you have to create a dataset that points to the parameterised files, instead of using the result of Get Metadata1, which already returns a list with each of the names and dates?
@@AdenilsonFTJunior The one outside the ForEach points to the folder level, so we get the file names within the folder using child items. The one inside the ForEach is parameterized to process those files one by one through iteration and get the last modified date of each file.
@@azurecontentannu6399 thanks!
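The iteration logic described in that reply can be sketched in plain Python (the file names and timestamps below are hypothetical, and this is ordinary Python standing in for the ADF activities, not actual pipeline code):

```python
from datetime import datetime

# Hypothetical output of the folder-level Get Metadata activity (childItems),
# plus the lastModified value the per-file Get Metadata inside the ForEach
# would return for each name.
child_items = ["sales_jan.csv", "sales_feb.csv", "sales_mar.csv"]
last_modified = {
    "sales_jan.csv": datetime(2022, 9, 1, 10, 0),
    "sales_feb.csv": datetime(2022, 9, 2, 11, 30),
    "sales_mar.csv": datetime(2022, 9, 3, 12, 14),
}

# Mirrors the ForEach: keep a reference datetime and a filename variable,
# updating both whenever a file's lastModified is newer than the reference.
ref_datetime = datetime.min
latest_file = None
for name in child_items:
    if last_modified[name] > ref_datetime:   # the If Condition in the pipeline
        ref_datetime = last_modified[name]   # Set variable: reference datetime
        latest_file = name                   # Set variable: latest file name

print(latest_file)  # sales_mar.csv
```

After the loop, `latest_file` holds the name the Copy activity would consume.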
Thanks for this wonderful explanation, but I have one doubt: can we do it using the 'filter by last modified' option of the Get Metadata activity? It's just my doubt. Overall, I am following your channel and learning new things daily.. amazing ..
Hi, thanks for the detailed steps. It's very good content and it works as well. Could you please share how to load the latest file into a Snowflake table from blob storage? Please note the file is an Excel one.
Crystal clear 🔥
Hi Annu, I have two parquet files in my storage. After I run my pipeline, why am I getting two Set variable2 and two Set variable1 activities in the pipeline result? It should only show one Set variable1 and one Set variable2 for the latest modified file, right?
Great video. I have a doubt: in this video we are getting the latest file in a dataset; from there, can we use that latest file in a data flow?
Thanks a lot Annu..
Welcome 😊
Hello, your videos are awesome. In a real-time scenario, what if we move the files to an archive folder after copying, so the source folder only has the latest files? Should we still consider using this approach? If we have a large number of files, it will take a huge amount of time and performance may be impacted. Please advise.
Thank you so much. Glad to know my videos were helpful. In that case, you should watch part 14, where a SQL table is used to log all the filenames and their last modified datetimes, and you write a custom query to get the top N last modified files: ruclips.net/video/9PYZ3uEa6nk/видео.html
Good explanation
Good explanation.
Small advice to make this video more useful to viewers: write out the steps while implementing, or before starting the implementation. Then learners can easily remember the steps and the process. Don't take this as a negative comment; I'm just sharing my experience after watching your video. I have seen lots of videos on other YouTube channels, and no one explains the implementation with written steps. Beginners get confused by the steps. If possible, you could add the steps. This is just my suggestion.
Great feedback. I will surely implement it going further. Thanks a lot
@@azurecontentannu6399 Could you please share your email ID? I have some doubts.
@@tusharchirame820 annukumari.ak@outlook.com
Hi Annu, if I want to compare the current date and the lastModified date, what would the If Condition expression be? If yes then success, else failure..
Hi, kudos! This is a really informative video. However, I have a small doubt: what if all 5 filenames are the same but only the last modified date is different? Then how will it pick the latest file? Because you only set the FileName variable from the Set variable output, and in this condition all the files are the same. I hope I was able to explain my question.
Please help with a solution. Thanks
Hi.. There is no way you can store 2 files with the same name in an ADLS folder, so the question doesn't arise in the first place 😊
Thanks very much Annu, it's been a very helpful and handy video. It helped me get the latest file in blob storage. I have a question, however: if I have a mix of different file types (let's say CSV and JSON), is it possible to extract the name of only the CSV file? I created a blob > delimited CSV dataset, however it seems to select any file type in the directory. I would appreciate it if you could advise how to sort this issue out. Thanks
Hi Annu... I am looking for the same operation to be performed using the azcopy command in a shell script.
Very good explanation, thanks for that. I followed your video and tried it, but I was getting multiple Set variable activities in my second run, each one showing the file name and last modified date. I need only one output in my second run, for the latest file. Please help me.
Don't worry about getting multiple Set variable activities. Make the ForEach run sequentially, and the last Set variable will give the latest file name.
Very helpful, but please walk through it a bit more slowly.
Would the Set variable capture two files with the same last modified date and two different file names?
Is it possible to apply the same in SharePoint, to extract new/modified files?
Hello, I need your help... In my scenario, we have a CSV file in Azure Data Lake with no header, and some column data contains comma delimiters, like a,b,c. While doing the Copy data task, ADF treats this data as separate columns, but this data should be in one column... thanks
Hi Deep,
By default the column delimiter in a CSV dataset is a comma. You can change it to a symbol value that is not present in the data, e.g. $, using add dynamic content.
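A small Python sketch of why that works (the sample row below is hypothetical): with `$` as the column delimiter, a field containing commas stays in one column instead of being split.

```python
import csv
import io

# A row whose first field contains commas; '$' is assumed not to appear
# anywhere in the data, so it is safe to use as the column delimiter.
raw = "a,b,c$100$2022-09-03\n"

reader = csv.reader(io.StringIO(raw), delimiter="$")
row = next(reader)
print(row)  # ['a,b,c', '100', '2022-09-03']
```

Parsed with a comma delimiter instead, the same row would break into five columns.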
great tutorial, thanks!
Thank you for watching
Can you please confirm whether the ForEach activity runs in parallel or sequential mode? If it's parallel, the Set variable gives random results.
@@MahalingSawle-m6b it's sequential
But this only works for a single file. When I upload more than one file, e.g. 2 files, it only fetches the latest uploaded file, not both. How can I handle this?
When I re-run the same pipeline, why am I getting different folder names as the latest folder? Can you help? I am trying this logic on folders.
Try checking the 'Sequential' option box in the ForEach activity.
Hello,
I have read that we cannot use the Set variable activity inside a ForEach when the ForEach is set to parallel, but here we are using it. Can you please explain in which case this limitation will have an impact?
It's not that we can't use it, but if we use Set variable inside a ForEach and do not set it to sequential, it might mess up the expected output: the variable gets set to A in the first iteration and needs to flow into the subsequent activities that consume that value, but before that happens the variable may already be set to B.
Thank you !!!
How do I copy the contents of the last found file into a table in Snowflake?
Great
Annu, why have you assigned the last modified value to referdate inside the If Condition? It should be after the If Condition. Do you think it is the right approach?
Sorry, what do you mean by "after the If Condition"? We are comparing the lastmodified value with the refdatetime value in the If Condition.
@@azurecontentannu6399 It is not comparing but assigning the lastmodified value to referdate; the comparison is done before the If Condition, to invoke it.
Imagine the level of debugging required in case of an issue.
Can't you just put the list of files into a SQL table, return the file with the max modified date, and just copy that file?
@@sacnan yes that approach is covered in this video : ruclips.net/video/9PYZ3uEa6nk/видео.htmlsi=WgggbuoZKEZ5EK16
@@azurecontentannu6399 Thanks, ma'am
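The SQL-table alternative discussed above can be sketched with an in-memory SQLite database (the table name, columns, and file entries are all hypothetical):

```python
import sqlite3

# Log each file name with its last modified timestamp, then let the
# database return the latest one instead of iterating in the pipeline.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_log (file_name TEXT, last_modified TEXT)")
conn.executemany(
    "INSERT INTO file_log VALUES (?, ?)",
    [
        ("sales_jan.csv", "2022-09-01T10:00:00Z"),
        ("sales_feb.csv", "2022-09-02T11:30:00Z"),
        ("sales_mar.csv", "2022-09-03T12:14:01Z"),
    ],
)

# Top 1 by last modified date; raise LIMIT for the top-N variant.
row = conn.execute(
    "SELECT file_name FROM file_log ORDER BY last_modified DESC LIMIT 1"
).fetchone()
print(row[0])  # sales_mar.csv
```

ISO-8601 timestamps sort correctly as text, so a plain `ORDER BY` is enough here.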
Hi Annu, I am a Data Analyst. Can I start with this playlist?
Data analysis and data engineering go hand in hand, so it might be useful if you want to know how data is ingested and transformed before it's ready for reporting.
ruclips.net/video/sYM6kVpng28/видео.html
I did not understand. You say the output value is the same as the file's value in the storage account, but the output time is 2022-09-03T12:14:01Z, while the last modification time of the file in the container is 9/3/2022, 5:44:02 PM.
How can it be the same time?
Hey, sorry for the confusion. The pipeline output is in the UTC time zone, while the storage account shows the time in IST (India). If you add 5 hours 30 minutes to the pipeline output, it's the same as the lastModified time in the storage account.
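The conversion can be checked with Python's standard datetime module, using the UTC timestamp from the pipeline output in the video:

```python
from datetime import datetime, timedelta, timezone

# Pipeline output timestamp (UTC) from the video.
utc_time = datetime(2022, 9, 3, 12, 14, 1, tzinfo=timezone.utc)

# IST is UTC+05:30, which is what the storage account portal displays.
ist = timezone(timedelta(hours=5, minutes=30))
print(utc_time.astimezone(ist).strftime("%m/%d/%Y, %I:%M:%S %p"))
# 09/03/2022, 05:44:01 PM
```

That lands within a second of the 5:44:02 PM shown in the portal, so the two displays describe the same moment in different time zones.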
Nicely explained