Thank you, Travis. I'm recently learning Terraform and I could never get my head around remote states until now.
thanks for sharing your knowledge Travis!
In the beginning, you said that the state file is created after terraform init, but it is actually created after terraform apply.
Great video, though! Love this playlist!
Instead of bootstrapping the storage with Azure CLI commands, is there any particular reason you don't use Terraform to create those resources?
Very nice video and example, Travis. If using Terraform Cloud, could I store the state in the cloud and the tfstate.backup in Azure Storage for better redundancy and DR scenarios?
Hello Travis, thanks for your videos and knowledge sharing. What happens if two Terraform developers working on the same code base run terraform apply/destroy concurrently, each with a remote state file in a different blob container?
Each instance of terraform would act independently and overwrite the other. That's what a central state file is intended to prevent.
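To illustrate that reply: pointing every developer at one shared backend is what prevents the split-state scenario. A minimal sketch (all names below are hypothetical, not from the video):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"              # hypothetical names
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"  # same key for everyone
  }
}
```

As long as both developers init against this same container and key, Azure Blob's lease-based locking makes the second apply wait instead of clobbering the first.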
Could you just run that initial script as a Bash script if you were using GitHub Actions for CI/CD? I'm still learning and building a pipeline to AKS for practice, so this is a little new to me.
Actually, disregard that. I see now that this is a "first run" script, just to set up the storage account.
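For anyone else landing here: yes, the one-time bootstrap can be a plain Bash script using the Azure CLI, run once before any pipeline. A sketch with placeholder names (not the ones from the video):

```shell
#!/usr/bin/env bash
# One-time setup of the Terraform state storage (hypothetical names).
RESOURCE_GROUP="rg-tfstate"
STORAGE_ACCOUNT="sttfstate$RANDOM"   # storage account names must be globally unique
CONTAINER="tfstate"

az group create --name "$RESOURCE_GROUP" --location eastus
az storage account create --name "$STORAGE_ACCOUNT" \
  --resource-group "$RESOURCE_GROUP" --sku Standard_LRS
az storage container create --name "$CONTAINER" \
  --account-name "$STORAGE_ACCOUNT"
```

After that, every pipeline run just does terraform init against the container; the script never needs to run again.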
Happy new year Travis, and thank you for your knowledge transfer! I think the state file is the "Achilles' heel" of TF, as in AWS you would need to set up DynamoDB to lock the state file in S3. In Azure Blob that happens by default, as you showed in the demo. However, the worst part of the state file is that if there were changes in the portal (not by TF), the state file is out of sync, and that causes different types of issues to get them in sync again. Whereas with ARM templates (Bicep) I could run complete mode and make sure everything matches what is in the code. What-If is similar to TF plan. The same would go for CloudFormation. Travis, what are your thoughts about it?
Happy new year to you as well! That is the big limitation, or feature, of Terraform. If all is done with TF, things go well. But problems start when changes are made outside of TF. FWIW, one of the selling points to TF is it prevents configuration drift. So, removing changes made outside of TF is what it is expected to do.
I prefer ARM templates or Bicep over Terraform. Not that there is anything wrong with TF; I am fortunate to work in an Azure-only environment, and TF seems like another layer of complexity. I would think differently in a multi-cloud environment. Customers like Terraform, and that is the motivation for the videos. Also, most of the information available is on Terraform with AWS. I thought it would help to create content focused on Azure.
Okay, I am able to do this, but I want to have a backend alongside multiple subscriptions. When I tried that, I got an error saying I have an undefined provider, despite using the correct syntax for multiple subscriptions according to the forums.
Hi Travis, I have one query: how can we reference the same state file from the storage account while deploying another resource? For example: I have created one resource group in my infra that is stored in the tfstate file. Now I want to create a public IP in that same resource group, and I don't want to write code again to deploy the resource group (we can't with the same name anyway). Is there any option to reference the same tfstate file from the storage account?
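Not Travis, but one common way to do this (a sketch, with hypothetical names) is to reference the existing resource group with a data source instead of re-declaring it; the terraform_remote_state data source is another option if the other config exposes outputs:

```hcl
# Look up the resource group that already exists (created by the other config).
data "azurerm_resource_group" "existing" {
  name = "rg-demo"   # hypothetical name of your existing resource group
}

# New resource placed into that group without re-declaring it.
resource "azurerm_public_ip" "example" {
  name                = "pip-demo"
  resource_group_name = data.azurerm_resource_group.existing.name
  location            = data.azurerm_resource_group.existing.location
  allocation_method   = "Static"
}
```

The data source only reads the existing group, so this config never tries to create or destroy it.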
Does anyone know what the environment variable name should be on Linux? It is not ARM_ACCESS_KEY.
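For what it's worth, HashiCorp's azurerm backend docs list ARM_ACCESS_KEY for the storage access key on Linux as well, so a plain export should be picked up by terraform init. The value below is a placeholder:

```shell
# Placeholder value; substitute your storage account access key.
export ARM_ACCESS_KEY="example-key-value"
# terraform init   # the azurerm backend reads ARM_ACCESS_KEY from the environment
echo "$ARM_ACCESS_KEY"
```

If it still isn't picked up, check that the variable is exported in the same shell that runs terraform (e.g. not set only in a different session or before an su/sudo switch).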
How do I run this through a release pipeline? It asks for a yes/no input, which is not possible to provide through a pipeline.
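A common pattern for pipelines (a sketch; these are real Terraform CLI flags) is to save a plan and apply it non-interactively. Applying a saved plan file, or passing -auto-approve, skips the yes/no prompt:

```shell
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan   # applying a saved plan needs no approval
# or, without a saved plan:
# terraform apply -auto-approve
```

The -input=false flag also stops Terraform from prompting for any missing variables, which would otherwise hang a pipeline.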
I need that t-shirt! Where? Thanks for the vid!
Shirts by Shane, all proceeds go to Girls Who Code. I see a couple new designs, I may need to place an order. shirtsbyshane.com/
Thanks Travis. I've tried to add backend "azurerm" under required_providers, but I'm getting "Error: Variables not allowed" for all four items: resource_group_name, storage_account_name, container_name, and key. Any idea? Appreciate it, thank you. 🙂
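Two things seem to be going on there: backend "azurerm" belongs directly inside the terraform block (not under required_providers), and backend blocks cannot reference variables at all. Either hard-code the values (hypothetical names below) or omit them and supply them at init time:

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }

  backend "azurerm" {
    # Literal values only; variables are not allowed in a backend block.
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "demo.terraform.tfstate"
  }
}
```

If you need the values to vary per environment, leave them out of the block and pass them with `terraform init -backend-config="storage_account_name=..."` (Terraform calls this partial configuration).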
What if that storage account goes down?