Hey Reza, thanks so much. I have seen all your Fabric videos; they are great and very informative as always.
I just wanted to ask:
If someone is using an existing Azure SQL data warehouse, can we migrate all the tables to a Fabric warehouse by pointing to the existing Azure SQL instance that is being used?
Thanks Ranjan.
That is what I am not sure about yet. I hope there will be a migration plan, and hopefully an easy one. But I will investigate that further.
It is still unclear which of the connections shown is Direct Lake? :(
I wish I could figure out how to get data into Microsoft Fabric. It seems like it only works with Microsoft data sources.
Is this Synapse DW similar to the dedicated SQL pools with 60 nodes that MS currently offers? Some of the Fabric documentation says the Fabric Data Warehouse is actually Delta Lake-based?
Why is Copilot not included in the Microsoft Fabric trial? Do you have any idea when they will make it available to Fabric trial users?
How do I get the IP address for a data warehouse in Fabric?
You can get the server URL, not the IP, from the settings of the Warehouse.
Thank you so much. How does incremental refresh work here?
Hi Reza,
Thanks for this video, it's very interesting.
One question: do you know what tool I can use for the following?
In my job I have created a Power BI dataset with more than 10 billion rows, and I have created partitions in incremental refresh mode.
Sometimes I have to refresh old partitions through the XMLA endpoint, which works well but is a manual process (running XMLA code from SQL Server Management Studio).
My question is: does any tool exist on the market that can run Power BI XMLA endpoint code automatically, like SQL Agent did years ago?
Many thanks!
Fer
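One scripted alternative to running XMLA by hand in SSMS is the Power BI enhanced refresh REST API, which can target individual partitions and be triggered from any scheduler. Below is a minimal, hedged Python sketch (standard library only): the table name `Sales`, partition name `Sales_2019`, the placeholder IDs, and the `PBI_ACCESS_TOKEN` environment variable are all illustrative assumptions, and obtaining the Azure AD token is out of scope here.

```python
import json
import os
import urllib.request


def build_refresh_payload(partitions):
    """Build an enhanced-refresh request body targeting specific partitions.

    `partitions` is a list of (table, partition) name pairs.
    """
    return {
        "type": "full",
        "commitMode": "transactional",
        "objects": [
            {"table": table, "partition": partition}
            for table, partition in partitions
        ],
    }


def trigger_refresh(workspace_id, dataset_id, token, partitions):
    # POST the payload to the Power BI refreshes endpoint for the dataset.
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
           f"/datasets/{dataset_id}/refreshes")
    body = json.dumps(build_refresh_payload(partitions)).encode()
    req = urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Show the request body; only call the service if a real token is set.
    payload = build_refresh_payload([("Sales", "Sales_2019")])
    print(json.dumps(payload, indent=2))
    token = os.environ.get("PBI_ACCESS_TOKEN")
    if token:
        trigger_refresh("<workspace-id>", "<dataset-id>", token,
                        [("Sales", "Sales_2019")])
```

A script like this can then be scheduled from cron, Windows Task Scheduler, or even SQL Agent itself, which is essentially the automation the question asks about.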
Hi Reza,
Could you make a video or blog post discussing the difference between a data lakehouse and a data warehouse?
If the final destination of the data is Power BI and Excel reports, does it matter which one I use?
We will soon publish a video on that subject.
From a Power BI point of view, there won't be much difference between a Lakehouse and a Warehouse. However, there will be other points of difference.
Hi Reza, this video is very informative.
Just curious regarding pipelines: if I connect to a database in a lakehouse, will changes to the original database be reflected in the one in the data warehouse, or is it just a copy?
With Data Factory, you are doing ETL: extracting data from the source, transforming it, and loading it into a destination (Lakehouse or Warehouse). This means any changes in the source system will not automatically be reflected in the Warehouse or Lakehouse. However, an ETL process run on a scheduled basis will pick up the latest updates and load them there.
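To make the scheduled-pickup idea concrete, here is a minimal watermark-based incremental load sketched in Python with in-memory SQLite standing in for the source system and the warehouse. The `sales` table and its `last_modified` column are illustrative assumptions, not Fabric specifics: the point is only that each scheduled run copies just the rows changed since the previous run.

```python
import sqlite3


def incremental_load(src, dest, watermark):
    """Copy rows modified after `watermark` from source to destination.

    Returns the new watermark (the highest last_modified value seen).
    """
    rows = src.execute(
        "SELECT id, amount, last_modified FROM sales WHERE last_modified > ?",
        (watermark,),
    ).fetchall()
    dest.executemany(
        "INSERT OR REPLACE INTO sales (id, amount, last_modified) "
        "VALUES (?, ?, ?)",
        rows,
    )
    dest.commit()
    return max((r[2] for r in rows), default=watermark)


# Demo: two in-memory databases play the roles of source and warehouse.
src = sqlite3.connect(":memory:")
dest = sqlite3.connect(":memory:")
for conn in (src, dest):
    conn.execute(
        "CREATE TABLE sales "
        "(id INTEGER PRIMARY KEY, amount REAL, last_modified TEXT)"
    )

src.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, 10.0, "2023-06-01"), (2, 20.0, "2023-06-02")])
wm = incremental_load(src, dest, "")   # first run copies everything
src.execute("INSERT INTO sales VALUES (3, 30.0, '2023-06-03')")
wm = incremental_load(src, dest, wm)   # next run picks up only row 3
print(dest.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # → 3
```

In Fabric the same pattern is configured in a Data Factory pipeline on a schedule rather than hand-written, but the behavior is the same: the destination lags the source between runs and catches up on each run.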
Hi Reza, thanks a lot for your time and the video review. For now, I wonder how cost management is handled for all the objects in Fabric?
I will soon publish a licensing video :)
Fantastic video Reza! You answered many of my questions in this video! 🙂
always glad to help :)
Thanks a lot, Reza.
Hi Reza,
Thank you for sharing the video. It's really informative.
I've been playing with the Lakehouse and Data Warehouse in MS Fabric. I have a question: if I load my data into a Lakehouse and use a Pipeline to copy data into a Data Warehouse from that same Lakehouse, would that create duplicate data, or would it only connect through the metadata?
It will be duplicated data. The Lakehouse has a warehouse of its own: its SQL endpoint, which is read-only. By moving the data again to a separate Data Warehouse, you will have duplicate copies of the data. However, that gives you more SQL command power around it. If the idea is just to READ data from the Warehouse, then the SQL endpoint of the Lakehouse might work better in your scenario.
@RADACAD Thank you so much, Reza.
To avoid data duplication, I think it would be a good idea to load the data directly into the Data Warehouse and create shortcuts in the Lakehouse for those tables, so they can be reused in any other scenarios.
Sorry Reza, I accidentally hit the wrong thumb at first, I didn't intend to give this a thumbs down, it's definitely a thumbs up! I'd give it two thumbs up if I could. Great work, so informative.
You can simply hit the thumbs-up button and it will replace your thumbs-down 😊
No problem at all :) thanks for your visit anyway :)