Before I found this playlist, I was like someone who just woke up out of a coma, with all the confusion from Microsoft..
You made everything very easy; no one on RUclips or Google, not even Microsoft, made it as easy as you. Thank you, man.
Thank you
Your playlists are a must for anyone aspiring to be an Azure Data Engineer.
Keep up the good work.
Thank you ☺️
Really great content. Congrats Wafa, keep teaching, I'll keep following ❤❤
Awesome.. Really useful.. Thank you for sharing, Rockstar.. 👍
Welcome 🤗
Thanks so much, Maheer Bhai. Your videos are really helpful and boost my confidence to clear interviews.
Thank you ☺️
Finally I am able to understand external data sources :)
☺️🤗
Great explanation of a difficult subject
Thank you 😊
Will you do a series on Spark, PySpark & Databricks?
Very nice explanation. I have one doubt: why is a master key always needed here? Is it related to the scoped credentials? Can you explain?
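For context on the question above, a minimal sketch of the dependency (names like `demoDB`, `sasCred`, and the password are placeholders): the database master key is what encrypts the secret stored in a database scoped credential, which is why it has to exist first.

```sql
-- Run in a user database (not master).
USE demoDB;

-- The master key encrypts secrets stored in this database,
-- including the SECRET of any database scoped credential.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng#Passw0rd!';

-- Without the master key above, this statement fails.
CREATE DATABASE SCOPED CREDENTIAL sasCred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=...';  -- SAS token (placeholder, kept elided)
```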
You have used the built-in SQL pool, but when I use it,
it gives me an error:
"CREATE EXTERNAL DATA SOURCE is not supported"
Great video! One question. I want to load data to Qlik Sense from Synapse. Do I need to create an external data source or do I need to create something else?
Excellent! Loved It.
Thank you ☺️
Hello Sir, you said we need external data sources to query data from ADLS, but in earlier videos you queried the data in ADLS using OPENROWSET(). If we can do that with OPENROWSET(), why create external data sources? And also, if the data is not from Azure, e.g. MySQL or S3, you said we need to use linked services, so what is this external data source for? Kindly clarify my doubt. Thanks in advance
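A hedged sketch of the two styles this question compares (the storage account, container, file path, and `MyAdlsSource` name are placeholders): OPENROWSET can take a full ad hoc URL in every query, or a short relative path plus a reusable external data source that centralizes the base URL and credential.

```sql
-- Ad hoc: the full URL (and any credential) repeats in every query.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageacct.dfs.core.windows.net/mycontainer/data/sales.csv',
    FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE
) AS rows;

-- With an external data source: only the relative path appears here;
-- the base URL and credential live once, on the data source.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'data/sales.csv',
    DATA_SOURCE = 'MyAdlsSource',
    FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE
) AS rows;
```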
Hello Maheer Bhai,
I am following your sessions thoroughly to learn ADF & ASA. I have a few questions for this session:
1. Is the credential setting optional or compulsory?
2. Do we have to provide a password to fetch records from the data source?
3. After the SAS token expires, it will not allow us to run queries against this data source, correct?
4. So we have to either extend it or create & set a new SAS token, but how do we edit the SAS token in the external data source?
Thanks,
Devang Tadvi
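On question 4, one plausible approach (assuming the credential is named `sasCred`; the name and token are placeholders): ALTER DATABASE SCOPED CREDENTIAL lets you swap in a newly generated SAS secret without recreating the external data source that references the credential.

```sql
-- Rotate the SAS token after the old one expires; external data
-- sources referencing this credential pick up the new secret.
ALTER DATABASE SCOPED CREDENTIAL sasCred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=...';  -- the newly generated SAS token (placeholder)
```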
Thank you, it really helped a lot.
Welcome 😀
What's the difference between an external data source and an external table here?
I want to connect to an MDF file which is in an ADLS file share. It gets updated with new rows daily; a user updates it every day.
We want to build some analytics and create a Power BI report on top of it.
The analytics may need new columns within existing tables, or we may have to create new tables. The requirement is under discussion.
What would be the preferred way? Should I go for serverless and store the tables in ADLS as files, or should I create a dedicated pool and store everything within that?
We also need quick results, I mean good performance. My feeling is that I should go for the dedicated one, but then I will have to create a pipeline to insert the new rows that get generated each time. Every day it creates 10 rows, so I may have to append them. It does not have a primary key or a date of creation either, and it holds lakhs of existing rows, so that would be difficult.
If I go for serverless, would it be easy to create separate tables in CSV by overriding it? And then is it easy to build new columns and tables on top of it?
The main pain is appending the data to the existing table without any identification, hence I am confused between serverless and dedicated. The performance factor also matters for showing the analytics in Power BI.
I know it's a big question, but if you can answer me briefly it would be a nice help
@wafa, if I want to get data from my AWS account, how can I do that?
Waiting for next video
Thanks Maheer
Welcome 🤗
Thank You Sir
Welcome
Please create videos on Microsoft Fabric also
Nice 👌
Thank you 🙂
Super! Thanks....
Nice
Thanks
Thank you so much 😊
Thanks bro
Welcome 😁
Getting an error that CREATE DATABASE SCOPED CREDENTIAL is not supported in the master database.
Visit this article to learn more about this error
Total execution time: 00:00:02.274
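A hedged sketch of the usual fix for the error above (`demoDB`, `sasCred`, and the password are placeholder names): database scoped credentials cannot be created in master, so create or switch to a user database before running the statement.

```sql
-- Database scoped credentials are not allowed in master;
-- create (or switch to) a user database first.
CREATE DATABASE demoDB;
GO
USE demoDB;
GO
-- Master key is required before a credential with a SECRET can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng#Passw0rd!';
CREATE DATABASE SCOPED CREDENTIAL sasCred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=...';  -- SAS token (placeholder, kept elided)
```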
Can I get your contact number? Actually, there is an error in creating the external source. Can we have a session on AnyDesk so I can share the error I am getting, which is that CREATE DATABASE SCOPED CREDENTIAL is not supported in the master database?
👍🏻👍🏻
Thank you 🙂
Honestly, you have great content, but the squiggly chalkboard thing is just too hard to follow. It would have been better if you had created diagrams and such.
When was demoDB created ?