Hi Ryan, thanks for sharing in-depth information on Azure clusters with load balancers and the client connection test. Thank you again for your great post.
We do not support zone or geo redundancy with S2D currently. I should also note that this post is pretty old and predates Azure Shared Disks, so I would consider looking at that instead of using S2D.
Question- How do you make the SQL blue question mark green like it normally shows?
Great tutorial. Unfortunately it is not working for me. I can't start MSDTC; bringing it online failed, with no additional information in the event log.
Hello Ryan, that is a great video. I just configured a multi-subnet FCI using S2D and it works fine, but I am stuck at the ILB creation step. Is it mandatory to create an ILB, or can my application connect using the cluster name like it does on-prem? If an ILB is required, please help me create one for a multi-subnet setup, since I have 2 IPs for the SQL resource. In your video, when you configured the ILB, you used only one IP because it was not multi-subnet. Thanks a lot for reading; awaiting your reply.
Great video! One question though, why not just add a clustered DTC resource to the SQL Failover cluster role? That's what I've always done for physical failover clusters.
That's a great question and I am glad you asked. Back in the Windows 2000 days it was a very common configuration to have a single MSDTC resource in its own group. We also required an MSDTC prior to SQL 2008 even if you did not need one. I created the MSDTC in its own role in this video just to make it easier to see all the configurations. As a best practice I would create an MSDTC resource in each role hosting a SQL instance to scale out the MSDTC workload. Here is an article where I discuss that and several other things around MSDTC. www.ryanjadams.com/2018/07/msdtc-configuration-sql-support/
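As a rough sketch of that best practice, a DTC resource can be added directly to the role hosting the SQL instance with PowerShell. The role, resource, network name, and disk names below are examples only; check Get-ClusterGroup and Get-ClusterResource for the real names in your cluster.

```powershell
# Add a DTC resource to the role that owns the SQL Server instance
# ("SQL Server (MSSQLSERVER)" is a placeholder role name)
Add-ClusterResource -Name "MSDTC-SQL1" `
    -ResourceType "Distributed Transaction Coordinator" `
    -Group "SQL Server (MSSQLSERVER)"

# DTC needs a network name and a disk in the same role to come online
Add-ClusterResourceDependency -Resource "MSDTC-SQL1" -Provider "SQL Network Name (SQLFCI)"
Add-ClusterResourceDependency -Resource "MSDTC-SQL1" -Provider "Cluster Disk - MSDTC"

Start-ClusterResource -Name "MSDTC-SQL1"
```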
Got it. Thanks!
Great video, but none of the links lead to the scripts you used. Since none of the links work, trying to get the scripts by pausing only helps if we can see the entire script. It would also be useful if you told people why you need 2 disks per node for the data. Would one premium disk per node be viable?
I updated the link in the description. I just changed my short codes last week so it broke the link. 2 disks is an S2D requirement and that is publicly documented. Here is the link to the blog post just in case.
www.ryanjadams.com/AzureFCI
Note that this is an old post I wrote prior to Azure Shared Disks and highly recommend that instead.
@@ryanjadams18 Thanks, I'll give it a read. It's early enough in the process that I can use one of the premium shared disks (which was the direction I was originally going to take until I started reading all of the blogs pushing S2D). My current hiccup is losing external internet access when I added a second interface on a different subnet to my nodes. There is no default gateway on them, but external requests won't route out my primary interface gateway as expected. Once I solve that problem I will look at the disks again.
Ryan, thank you for this awesome video! I'm currently trying to make our application FCI-enabled in the cloud and it's helped no end. I do have one question: what would you do for other network traffic? My application contains up to 20 Windows services, all listening on different ports, and I would like to point all traffic to the active node. Would you agree that I need another frontend IP supporting HA ports, or is there a better way? Many thanks, Brad
Assuming your app is on another VM or cloud service, it would still contact the cluster through the listener/load balancer, so you shouldn't have to do anything. This video is a bit old now, so one thing that has changed is that you can now avoid the load balancer for an FCI by using a DNN if you are on Windows Server 2019 and SQL Server 2019 CU2. AG support starts in SQL Server 2019 CU8.
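For reference, creating a DNN resource for an FCI looks roughly like this in PowerShell. The resource name, role name, and DNS name below are placeholders; the group must be the WSFC role that owns the SQL Server instance.

```powershell
# Create the DNN resource in the SQL Server role (names are examples)
Add-ClusterResource -Name "sqlfci-dnn" `
    -ResourceType "Distributed Network Name" `
    -Group "SQL Server (MSSQLSERVER)"

# The DnsName value is what clients will use in their connection strings
Get-ClusterResource -Name "sqlfci-dnn" |
    Set-ClusterParameter -Name DnsName -Value "sqlfcidnn"

Start-ClusterResource -Name "sqlfci-dnn"
```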
@@ryanjadams18 Appreciate the reply. Yes, I noticed DNN vs VNN when reading through some Microsoft documentation; I'll check it out. What would you suggest if the services were not on a different box? The app is pretty complex, so I have to keep everything on one box and fail over if necessary.
@@bradsherwin8149 If the app is on the box and it has its own IPs, then you'll need to add those to the load balancer for incoming connections to succeed.
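If it helps, a sketch of adding a frontend IP and an HA-ports rule with the Az PowerShell module is below. All names and addresses are placeholders; HA ports means Protocol All with frontend and backend ports set to 0, which requires a Standard load balancer.

```powershell
# Fetch the existing internal Standard load balancer (names are examples)
$lb = Get-AzLoadBalancer -Name "sql-ilb" -ResourceGroupName "rg-fci"

# Add a frontend IP for the application's own address
Add-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-app" `
    -PrivateIpAddress "10.0.0.20" -SubnetId $lb.FrontendIpConfigurations[0].Subnet.Id

# HA-ports rule: Protocol All with ports 0/0 forwards every port to the active node
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "rule-app-all" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[-1] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $lb.Probes[0] `
    -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP

Set-AzLoadBalancer -LoadBalancer $lb
```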
Hello Ryan, I don't understand why we need to use a Standard load balancer. What is the difference from Basic?
docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-create-failover-cluster#limitations
but still not very clear
The short answer is that the basic load balancer does not support the required ports.
Hi, thank you for the knowledge sharing. I have 2 VMs on Azure running Red Hat. How can I create a failover cluster instance in Azure?
I can't find the code (PowerShell commands) on your website. Can you help me get it?
This video has a link to my blog post on this subject here. www.ryanjadams.com/go/AzureFCI/
That post has a link to this article that has the code. docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-create-failover-cluster#step-2-configure-the-windows-server-failover-cluster-wsfc-with-s2d
Hi, are you aware whether S2D currently supports cross-zone or cross-region configurations in Azure VMs?
It does not.
Thought so
Hi Ryan, how are you? Great video!
Is it possible for you to send the query you use to test DTC in SQL?
Thanks
Yes you can use DISTRIBUTED TRANSACTION and see them come into the MSDTC using Component Services MMC Snap-in. This article has some information in the "Verify MSDTC" section.
www.ryanjadams.com/2019/10/msdtc-best-practices-with-an-availability-group/
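For example, a minimal test that forces a distributed transaction over a linked server looks like this. The linked server, database, and table names are placeholders for whatever exists in your environment.

```sql
-- XACT_ABORT is required for distributed transactions against linked servers
SET XACT_ABORT ON;

BEGIN DISTRIBUTED TRANSACTION;

    -- Any DML against a linked server will enlist MSDTC
    INSERT INTO [LinkedServerName].[TestDB].[dbo].[TestTable] (Col1)
    VALUES (1);

COMMIT TRANSACTION;
```

While it runs, the transaction should appear under Transaction List in the Component Services MMC snap-in.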
Can you please provide link to the code ?
This video is quite old now and I recommend using Azure Shared Disks over S2D. I also no longer have all the code I used in the video, but here is an example that should help.
# Install Windows Failover Clustering Feature
Import-Module FailoverClusters
$nodes = ("node1","node2")
Invoke-Command -ComputerName $nodes -ScriptBlock {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}
# Run Cluster Validation
Test-Cluster -Node ("node1","node2") -Include "Cluster Configuration","Inventory","Network","Storage","System Configuration","Storage Spaces Direct"
# Create the cluster
# New-Cluster switch: -ManagementPointNetworkType {(Singleton: Traditional Name/IP) (Distributed: Uses node IPs) (Automatic: Default option that detects on-prem or Azure)}
New-Cluster -Name Cluster2 -Node ("Node1","Node2") -NoStorage -ManagementPointNetworkType Distributed
# Enable S2D and create pool. You need at least 4 drives per server for non-cache
Enable-ClusterS2D #-cachestate disabled -PoolFriendlyName S2DTest -WhatIf
# Carve volumes out of pool
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Data -FileSystem CSVFS_REFS -Size 1024GB
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Log -FileSystem CSVFS_REFS -Size 1024GB
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MSDTC -FileSystem CSVFS_REFS -Size 1024GB
@@ryanjadams18 Which walkthrough would you skip?
Do you recommend doing the normal cluster setup and then just creating the load balancer? I had IP problems when I did it that way.
@@ricardojunior8924 I'm not sure I know what you mean by skip one. The balancer is required, and it shouldn't matter whether you do that or the cluster first (unless you're using DNNs). Generally I would do the cluster first and the balancer last so I don't forget to open the ports on the cluster while creating the balancer.
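For what it's worth, the cluster-side piece of wiring the IP resource to the balancer's health probe looks roughly like this. The network name, resource name, and IP below are placeholders; the probe port must match the one configured on the load balancer.

```powershell
# Placeholders: check Get-ClusterNetwork and Get-ClusterResource for real names
$ClusterNetworkName = "Cluster Network 1"
$IPResourceName     = "SQL IP Address 1 (sqlfci)"
$ILBIP              = "10.0.0.10"   # the load balancer frontend IP

# Point the cluster IP at the ILB address and answer the probe on port 59999
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    Address    = $ILBIP
    ProbePort  = 59999
    SubnetMask = "255.255.255.255"
    Network    = $ClusterNetworkName
    EnableDhcp = 0
}
```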
Can you please provide link to the code ?
docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure?tabs=windows2012#step-2-configure-the-windows-server-failover-cluster-wsfc-with-s2d
www.ryanjadams.com/AzureFCI