- Videos: 177
- Views: 441,068
Azure Content : Annu
Joined: 22 May 2021
Playlists:
1. Azure Synapse Analytics Real Time Scenarios: In this playlist, we cover real-time scenario videos about Azure Synapse Analytics, the kind of situations Azure data engineers come across on projects.
2. Azure Data Factory Real Time Scenarios: It covers various use cases one may come across while developing ADF pipelines.
3. Mapping Data Flow Real Time Scenarios: In this playlist, I have covered a few videos on data transformation using mapping data flow.
5. Narrow VS Wide Transformation in RDD | Pyspark | Apache Spark
This video explains the different types of transformations in Apache Spark. There are two types of transformations: narrow and wide. A minimal PySpark sketch follows below.
#azure #dataenginnerjobs #dataengineers #azuredataengineering #pyspark #programming #coding #cloudcomputing #azureservices
Views: 295
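As a rough illustration of the difference, here is a minimal PySpark sketch (assuming a local Spark session; the data and names are made up): mapValues and filter are narrow transformations, while reduceByKey is wide because it shuffles records across partitions.

```python
# Minimal sketch of narrow vs wide transformations (local PySpark assumed; sample data is made up).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("narrow-vs-wide").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("c", 4)], numSlices=2)

# Narrow: each output partition depends on exactly one input partition, so no shuffle.
doubled = rdd.mapValues(lambda v: v * 2)
only_a = doubled.filter(lambda kv: kv[0] == "a")

# Wide: rows with the same key must meet in one partition, so Spark shuffles the data.
summed = doubled.reduceByKey(lambda x, y: x + y)

print(only_a.collect())  # [('a', 2), ('a', 6)]
print(summed.collect())  # e.g. [('a', 8), ('b', 4), ('c', 8)]

spark.stop()
```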
Videos
4. RDD operations | Transformations and actions | Pyspark
Views: 239 · 1 month ago
In this video, we learnt about RDD operations in PySpark: transformations and actions (a short sketch follows below). #azure #dataenginnerjobs #dataengineers #azuredataengineering #pyspark #programming #coding #cloudcomputing #azureservices
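A minimal sketch of the same idea (local PySpark assumed): transformations only build the lineage, while actions such as collect(), count(), and reduce() actually run it.

```python
# Sketch: transformations are lazy, actions trigger execution (local PySpark assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("rdd-ops").getOrCreate()
sc = spark.sparkContext

nums = sc.parallelize(range(1, 11))

# Transformations (lazy): nothing is computed yet, Spark only records the lineage.
evens = nums.filter(lambda n: n % 2 == 0)
squared = evens.map(lambda n: n * n)

# Actions: these trigger the actual computation.
print(squared.collect())                 # [4, 16, 36, 64, 100]
print(squared.count())                   # 5
print(nums.reduce(lambda a, b: a + b))   # 55

spark.stop()
```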
3. RDD partitioning | Repartition() vs Coalesce
Views: 463 · 2 months ago
In this video, we learnt the difference between repartition() and coalesce(); a small sketch follows below.
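A minimal sketch of the difference (local PySpark assumed): repartition() performs a full shuffle and can increase or decrease the partition count, while coalesce() merges existing partitions without a full shuffle and is normally used only to reduce the count.

```python
# Sketch: repartition() vs coalesce() on a small RDD (local PySpark assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("repartition-vs-coalesce").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(100), numSlices=8)
print(rdd.getNumPartitions())        # 8

more = rdd.repartition(16)           # full shuffle, can go up or down: 8 -> 16
fewer = rdd.coalesce(2)              # merges partitions, avoids a full shuffle: 8 -> 2
print(more.getNumPartitions(), fewer.getNumPartitions())

spark.stop()
```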
3. Use private endpoints to connect securely to Azure SQL Server
Views: 865 · 3 months ago
In this video, we learnt how to use private endpoints to connect securely to Azure SQL Server.
2. Connect to SQL Server through SQL Server Authentication
Views: 496 · 3 months ago
In this video, we learnt how to connect to SQL Server through SQL Server Authentication (a connection sketch follows below). #azure #azuredataengineer #azuresql #sqlserver #azureactivedirectory #azureentraid #microsoft #azuredataengineering
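For reference, a minimal connection sketch using pyodbc with SQL Server Authentication; the server, database, user name and password below are placeholders, not values from the video.

```python
# Sketch: connect to Azure SQL with SQL Server Authentication via pyodbc.
# Server, database, user and password are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net,1433;"
    "DATABASE=yourdatabase;"
    "UID=sqladminuser;"
    "PWD=YourStrongP@ssw0rd;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")
    print(cursor.fetchone()[0])
```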
1. Create Azure SQL database with Microsoft Entra ID authentication
Views: 1.5K · 3 months ago
In this video, we learnt how to create an Azure SQL database with Microsoft Entra ID authentication. #azure #azuredataengineer #azuresql #sqlserver #azureactivedirectory #azureentraid #microsoft #azuredataengineering
38. How to find the position of nth occurrence of a character in a string using mapping dataflow
Views: 211 · 3 months ago
In this video, we learnt how to find the position of the nth occurrence of a character in a string using mapping data flow. #azure #azuredataengineer #azuredataengineering #cloud
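The video solves this with mapping data flow expressions; purely as an illustration of the same logic, here is a small Python sketch that returns the 1-based position of the nth occurrence of a character.

```python
# Analogous logic in plain Python (not the mapping data flow expression used in the video).
def nth_occurrence(text: str, ch: str, n: int) -> int:
    pos = -1
    for _ in range(n):
        pos = text.find(ch, pos + 1)
        if pos == -1:
            return 0          # fewer than n occurrences found
    return pos + 1            # 1-based position, matching locate()-style functions

print(nth_occurrence("a-b-c-d", "-", 2))  # 4
```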
34. How to Format datetime to a 12-hours clock and 24-hours clock
Views: 691 · 4 months ago
In this video, we learnt how to format a datetime to a 12-hour clock and a 24-hour clock.
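The video uses ADF expressions for the formatting; here is the same idea shown with Python's strftime for comparison: %I with %p gives a 12-hour clock, %H gives a 24-hour clock.

```python
# 12-hour vs 24-hour formatting illustrated with Python's strftime
# (the video itself does this with ADF expressions).
from datetime import datetime

ts = datetime(2024, 5, 22, 15, 7, 30)
print(ts.strftime("%Y-%m-%d %I:%M:%S %p"))  # 2024-05-22 03:07:30 PM  (12-hour clock)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))     # 2024-05-22 15:07:30     (24-hour clock)
```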
8. Found more columns than expected | Common Errors in ADF #adf
Views: 614 · 4 months ago
In this video, we learnt how to mitigate the "Error found when processing 'Csv/Tsv Format Text'. Found more columns than expected" error. While using the Copy activity in Azure Data Factory to load data from CSV into a SQL table, a value that contains the dataset's delimiter is treated as if it belonged to two different columns. We need to configure the correct quote character in the dataset settings.
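The actual fix in ADF is setting the correct quote character on the DelimitedText dataset; as a plain-Python illustration of why the quote character matters, compare how the same row parses with and without quoting.

```python
# Why the quote character matters, illustrated with Python's csv module
# (in ADF the fix is the quote character setting on the DelimitedText dataset).
import csv
import io

raw = 'id,comment\n1,"value, with a comma"\n'

# With the correct quote character every row has 2 columns, as expected.
print(list(csv.reader(io.StringIO(raw), delimiter=",", quotechar='"')))

# With quoting disabled, the embedded comma produces an extra column,
# which is the "found more columns than expected" situation.
print(list(csv.reader(io.StringIO(raw), delimiter=",", quoting=csv.QUOTE_NONE)))
```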
2. Read Multiple JSONs from ADLS and load into SQL using Logic apps
Views: 347 · 4 months ago
In this video, we load an array of multiple JSONs from ADLS into multiple rows in SQL using Azure Logic Apps. #logicapps #automation #integration #azureintegration #azure #azurelogicapps #logicapp #cloud #cloudcomputing
1. Read JSON data from ADLS and insert into SQL table using Logic apps
Views: 1.1K · 4 months ago
In this video, we learnt how to read JSON data from ADLS and load it into a SQL table using Logic Apps. Parse JSON official documentation: learn.microsoft.com/en-us/azure/logic-apps/logic-apps-perform-data-operations?tabs=consumption#parse-json-action #logicapps #automation #integration #azureintegration #azure #azurelogicapps #logicapp #cloud #cloudcomputing
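The video builds this with Logic Apps actions (get blob content, Parse JSON, insert row); purely as a rough Python equivalent of the same flow, with placeholder account, container, file and table names:

```python
# Rough Python equivalent of the Logic Apps flow: read a JSON file from ADLS Gen2,
# parse it, insert a row into SQL. All names below are placeholders.
import json

import pyodbc
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()
service = DataLakeServiceClient(
    account_url="https://youraccount.dfs.core.windows.net", credential=credential
)
file_client = service.get_file_system_client("raw").get_file_client("input/customer.json")

record = json.loads(file_client.download_file().readall())

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=yourdb;"
    "UID=sqluser;PWD=YourStrongP@ssw0rd;Encrypt=yes;"
)
with pyodbc.connect(conn_str) as conn:
    conn.cursor().execute(
        "INSERT INTO dbo.Customer (Id, Name) VALUES (?, ?)",
        record["id"],
        record["name"],
    )
    conn.commit()
```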
2. Resilient Distributed Dataset (RDD) in Pyspark
Views: 924 · 5 months ago
In this video, we learnt about RDDs in PySpark. #pyspark #azuredataengineer #dataengineer #dataengineering #azuredataengineering
37. How to concatenate data of multiple rows after grouping using mapping dataflow #adf #dataflow
Views: 431 · 7 months ago
In this video, we learnt how to concatenate data from multiple rows after grouping using mapping data flow. #adf #dataflow
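The video does this with a mapping data flow Aggregate transformation; for comparison only, the analogous PySpark approach groups the rows, collects the values per group, and joins them into one string (the sample data is made up).

```python
# PySpark analogue of concatenating row values after grouping
# (the video itself uses a mapping data flow Aggregate transformation).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("concat-after-group").getOrCreate()

df = spark.createDataFrame(
    [("HR", "Asha"), ("HR", "Ravi"), ("IT", "Meena")], ["dept", "employee"]
)

result = df.groupBy("dept").agg(
    F.concat_ws(", ", F.collect_list("employee")).alias("employees")
)
result.show(truncate=False)  # e.g. HR -> "Asha, Ravi", IT -> "Meena"

spark.stop()
```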
33. How to Break Out of ForEach Loop on Activity Failure #adf
Views: 2.3K · 7 months ago
7. Self-Referencing Variable is not allowed #adf #datafactory
Views: 510 · 7 months ago
6. For Each activity do not run in parallel #adf #datafactory
Views: 1.7K · 7 months ago
How to pause all the dedicated SQL pools in subscription after a period of inactivity
Views: 636 · 7 months ago
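For reference, a minimal sketch of pausing a single dedicated SQL pool through the Azure management REST API; the video wraps logic like this in an inactivity check and loops over every pool in the subscription. Subscription, resource group, workspace and pool names are placeholders.

```python
# Sketch: pause one dedicated SQL pool via the Synapse management REST API.
# The subscription, resource group, workspace and pool names are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.Synapse"
    "/workspaces/<workspace>/sqlPools/<sql-pool>/pause?api-version=2021-06-01"
)

resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)  # 200/202 when the pause request is accepted
```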
5. Execute Pipeline activity fails to pass a parameter of array type to child pipeline
Views: 1.3K · 9 months ago
4. The specified SQL Query is not valid. The query doesn't return any data.
Views: 584 · 9 months ago
3. The expression cannot be evaluated because property doesn't exist.
Views: 1.5K · 9 months ago
2. Bad Request error in ADF | Message Null | Common errors in ADF
Views: 1.4K · 9 months ago
1. Failed to Delete Linked Service | Common Errors in ADF
Views: 1.3K · 9 months ago
26. Monitor Copy activity in ADF pipeline #adf
Views: 1.1K · 9 months ago
25. Auto Create table option in Copy activity for SQL as sink in Azure data factory
Views: 3.7K · 10 months ago
32. How to use the OAuth 2.0 token to access a secured resource using Azure Data Factory
Views: 1.6K · 1 year ago
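The video obtains and uses the token with ADF Web activities; here is the same client-credentials token request sketched in Python, with placeholder tenant ID, client ID, secret and scope.

```python
# Sketch: request an OAuth 2.0 token (client credentials flow) and call a secured resource.
# Tenant ID, client ID, client secret and the target scope are placeholders.
import requests

tenant_id = "<tenant-id>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

payload = {
    "grant_type": "client_credentials",
    "client_id": "<app-client-id>",
    "client_secret": "<app-client-secret>",
    "scope": "https://management.azure.com/.default",
}

access_token = requests.post(token_url, data=payload).json()["access_token"]

# Use the token as a bearer header when calling the secured resource.
resp = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(resp.status_code)
```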
31. How to Query pipeline runs in ADF based on input filter conditions
Views: 2.8K · 1 year ago
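A minimal sketch of the Query Pipeline Runs REST call with an input filter on the pipeline name and a time window; subscription, resource group, factory and pipeline names are placeholders.

```python
# Sketch: query ADF pipeline runs with filter conditions via the REST API.
# Subscription, resource group, factory and pipeline names are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.DataFactory"
    "/factories/<factory-name>/queryPipelineRuns?api-version=2018-06-01"
)

body = {
    "lastUpdatedAfter": "2024-05-01T00:00:00Z",
    "lastUpdatedBefore": "2024-05-02T00:00:00Z",
    "filters": [
        {"operand": "PipelineName", "operator": "Equals", "values": ["<pipeline-name>"]}
    ],
}

runs = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"}).json()
for run in runs.get("value", []):
    print(run["pipelineName"], run["status"])
```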
30. Execute ADF pipeline via rest API from other ADF
Views: 2.3K · 1 year ago
29. Execute ADF pipeline via REST API call
Views: 6K · 1 year ago
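A minimal sketch of triggering a pipeline run through the Create Run REST endpoint; subscription, resource group, factory, pipeline and the sample parameter are placeholders.

```python
# Sketch: trigger an ADF pipeline run via the "Create Run" REST API.
# Subscription, resource group, factory, pipeline and the sample parameter are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.DataFactory"
    "/factories/<factory-name>/pipelines/<pipeline-name>/createRun?api-version=2018-06-01"
)

# The request body holds the pipeline parameters (if any).
resp = requests.post(
    url,
    json={"sourceFolder": "input/2024-05-22"},
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.json())  # e.g. {"runId": "..."}
```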
Secure File Transfer Protocol | Copy data from on Prem to Azure storage account
Views: 2.1K · 1 year ago
Can we not use wildcards to handle this scenario, as you explained earlier?
Yes, we can.
Sensational video! One question: why, in ForEach1, did you have to create a dataset that points to the parameterised files instead of using the result of GetMetadata1, which already returns a list with each of the names and dates?
@AdenilsonFTJunior The dataset outside the ForEach points to the folder level, so we get the file names within the folder using child items. The dataset inside the ForEach is parameterized to process those files one by one through iteration and get the last modified date of each file.
@azurecontentannu6399 Thanks!
Thank you very much. What about the case of a folder of big data?
Great explanation! Thanks for this content 😀
Hi Annu, Could you please make a video explaining the Degree of Copy Parallelism in Copy Activity? It would be really helpful! Thanks!
@rajashekar4171 Sure
@azurecontentannu6399 Thanks
A beautiful and simple way of systematically teaching students. You are great, explaining everything clearly from your heart without holding anything back. Your teaching is greatly appreciated by Muhammad Khan from North America.
@atlanticoceanvoyagebird2630 Thank you so much for your kind words.
Wow, your content and explanations are absolutely fantastic! Really appreciate the clarity and detail you put into every video. Keep up the amazing work! 🙌 Thanks
I never understood this confusing parameterized value passing from other instructors; only your teaching methods gave clear explanations. I am learning a lot from you that I could not get from other sources. Muhammad Khan from North America.
Your English pronunciation is very clear, unlike other instructors whom only 5% of the audience can understand; with you, the audience can understand 100%. Keep up the great pronunciation.
If we want to skip copying files whose names contain strings like "fact" or "aggregated", for example data567-fact.csv and data13144-aggregated-1456.csv, how can we do this?
Great video, ma'am. I have a doubt: after using a Filter activity, why don't we use a Copy activity directly to load data into the destination? Please clear my doubt.
Good explanation, Annu. I have a small question: why don't we use a Lookup activity instead of a Filter activity to filter the last updated file?
Awesome and inspiring, I like your analytical approach and good communication skills.
Excellent analytical approach in the explanation👍
I watched a few of your videos on ADF, and honestly, they are much better than the paid ones on Udemy😀
@CodeVeda Thank you so much 😊
The filename and filepath parameters are both pointing to the file name only... I guess that is a bug in ADF. Microsoft needs to improve it.
Great explanation of the tools and features before going into the demonstration examples. A good sneak preview.
What did you do at the end? 😢
256?
4-256
Excel was also not supported as a sink. Iceberg is only supported as a sink but not as a source.
Nice, thanks for making this content.
Please improve the audio quality and avoid background noise. You have a long journey ahead here. Thanks a lot.
Sure, noted.
Thanks for this content, really helpful.
Good explanation and easy to understand. Soon I will follow your Azure content. I hope you will clear all doubts there.
Can you teach me how to use Azure Data Factory to retrieve data from the Azure Log Analytics API, using the query language used by Log Analytics (KQL)? The sink will be a SQL DB. Use conditions like error/fail.
12 hours
Hi Annu, please create an end-to-end video for Teradata migration, Netezza migration, and SAP migration. That would make your channel stand out.
@mohitupadhayay1439 Reproducing scenarios like these is difficult because the resources for these external sources are unavailable.
I want to use Active Directory managed identity as an authentication option in MS SQL Server (latest v20). How do I do that? I don't have any Azure resource, but all my users are from Azure AD.
Can we use a SQL query with the LIKE operator in a Lookup activity, e.g. to select folder names like '____-__-__'?
@natarajbeelagi569 Yes, of course. If your dataset is a SQL dataset, you can write any SQL code directly in the Lookup activity.
Thanks for the video. I have a scenario: how to copy SharePoint files to Azure Storage using ADF, parameterized country-wise.
Informative, Annu. Please try making end-to-end project videos as well. This series, together with a project video, will help a DE understand the scale and requirements of handling large-scale data.
Instead, we can also use the rank function inside a Window transformation and filter based on the provided N value.
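For readers who want to see what that suggestion looks like outside mapping data flow, here is the equivalent pattern in PySpark (sample data and column names are made up): rank within each group and keep the rows whose rank is at most N.

```python
# PySpark sketch of the commenter's suggestion: rank inside a window, then filter on N.
# (The video itself uses a mapping data flow Window transformation.)
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("window-rank").getOrCreate()

df = spark.createDataFrame(
    [("HR", "Asha", 70000), ("HR", "Ravi", 65000), ("IT", "Meena", 80000), ("IT", "John", 75000)],
    ["dept", "employee", "salary"],
)

n = 1  # keep the top-N rows per department
w = Window.partitionBy("dept").orderBy(F.col("salary").desc())
top_n = df.withColumn("rnk", F.rank().over(w)).filter(F.col("rnk") <= n)
top_n.show()

spark.stop()
```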
When there is no file at the source and the merge-files option is selected, it generates a 2 B file with no data. Any idea why this is happening?
@anandnandan When there is no file at the source, why would you do a merge?
@azurecontentannu6399 My pipeline is scheduled to run twice a day, and sometimes the source team doesn't send all the files. When this happens, the Copy activity creates an empty file at the sink for the missing source files. If I remove the merge option, the file isn't created at all. Is this a glitch or expected behavior of the Copy activity? I know I can add pre-checks, but I'm just curious to understand this behavior better.
@anandnandan I suspect it's expected behavior. I will let you know if I find any confirmation of this behavior. However, since you already thought about a pre-check, you can add a custom storage event trigger instead of a schedule trigger for your pipeline as the pre-check.
@azurecontentannu6399 How do you provide values to the parameters you have created when a storage-based trigger is configured for the pipeline?
Hi sis, what is the interview process at Microsoft, and how many rounds of interviews do they conduct in total? Please suggest some tips.
Hi, I also have a scenario where I have a few values of type decimal(38,18) at the source which I am copying into ADLS as a Parquet file. In this case the column is converted to string. While copying this data to the dedicated SQL pool, I am facing an issue saying "cannot load string to decimal(38,18)". Do you have a workaround for this?
Hi, I have a scenario where a single table has multiple unique keys or composite keys. How do we match all the keys with the target table using the key column?
@gotimukulshriya1864 Yes, store the keys in the form col1, col2, col3 in a SQL table, dynamically point to the column holding the key-column list, and use the createArray function: @createArray(keycol).
Hi Annu: Just discovered your channel and just wanted to say you're a brilliant presenter - your content is really helpful and clear! Thanks very much - keep up the excellent work.
Really helpful recordings, thanks.
Can you add ACLs to this as well?
How to add source and target counts as well?
Thank you so much, ma'am, for the video. It was very helpful. But can you please make a video or guide me on how to use a stored procedure in the pipeline to first check whether the table exists, and if it does, truncate it and then load it; or, if the table does not exist, create the table and then load it.