Are you using Terraform OSS workspaces today? Do you agree with HashiCorp's guidance?
We use an artifact and per-env tfvars. Our source code tree is identical for all environments, with a separate tfvars file carrying each environment's configuration. Our CI in dev creates a new artifact after each merge to main, and the code is automatically applied in dev (CD). Once approved for staging, the artifact is copied to the staging binary artifact repo and Terraform is applied in staging using staging.tfvars. The same pattern applies when promoting to production with prod.tfvars. We chose this method because all of our other components use the same binary artifact promotion procedure to copy artifacts from dev to stage to production. We keep separate state per env.
We also share modules between applications using the same artifact tooling for each module. Modules can be pinned per application as needed.
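For illustration, a minimal sketch of that pattern, assuming a private module registry and the usual -var-file flow (the registry address, module name, and variable are hypothetical, not from the setup described above):

```hcl
variable "env" {
  type = string # set from dev.tfvars / staging.tfvars / prod.tfvars
}

# Hypothetical module pin: the application selects a published module version
# from the internal registry; promoting the artifact never changes this code.
module "network" {
  source  = "registry.example.com/platform/network/azurerm" # hypothetical registry/module
  version = "1.4.2"                                          # pinned per application
  env     = var.env
}

# The same artifact is then applied per environment with its own variables file:
#   terraform apply -var-file=staging.tfvars
#   terraform apply -var-file=prod.tfvars
```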
Using an Azure Storage Account for a backend - I have...
1 - A different Storage Account for each ENV
2 - Use the same TF code and modules for all ENVs
3 - Use TFVARS files for each ENV
4 - Submit changes to a repo that triggers a CI/CD pipeline that does a PLAN for each ENV
5 - Have an approve/deny CI/CD pipeline stage to APPLY
No individual user can access the backend Storage Account for any ENV - only the Azure Service Connection/Principal the pipeline runs as can access the backend
The only issue I have is, as the video highlights, when your PRD ENV is configured quite differently from the others - e.g. load balancers and resources that just aren't used in the other ENVs. It's constant churn to work out how the shared code and modules should adapt to all of this
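To make the prod-only case concrete, here is a hedged sketch of one common way to handle it - gating the resource behind a per-environment flag set in each tfvars file (the flag and resource names are illustrative, not from the setup above):

```hcl
variable "create_load_balancer" {
  type    = bool
  default = false # dev/stage tfvars leave this off; prod.tfvars sets it to true
}

# The load balancer only exists in environments whose tfvars enable it.
resource "azurerm_lb" "this" {
  count               = var.create_load_balancer ? 1 : 0
  name                = "app-lb"      # illustrative
  location            = "westeurope"  # illustrative
  resource_group_name = "rg-app-prod" # illustrative
}
```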
Thanks for sharing Neil! This is exactly the type of setup I've used in the past. And yeah, Prod can be a real beast!
Thanks for the explanation. Regarding possible alternatives, you mentioned a few - I would really love to hear your view on the best one. What is the best practice for separating environments when not using workspaces?
My current preference is to use branches in Git. The other most popular way is to use a single config with different tfvars files and have the automation platform configure the backend to point at a different state instance per environment.
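For the second option, a rough sketch of what an automation-supplied backend can look like with azurerm - the file names and values below are placeholders, not a prescribed layout:

```hcl
# backend.tf - the backend type is declared, but its settings are left empty
# so the pipeline can inject them per environment at init time, e.g.:
#   terraform init -backend-config=backends/staging.hcl
terraform {
  backend "azurerm" {}
}

# backends/staging.hcl (illustrative values)
# resource_group_name  = "rg-tfstate-staging"
# storage_account_name = "sttfstatestaging"
# container_name       = "tfstate"
# key                  = "app.tfstate"
```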
@@NedintheCloud Thanks!
Hope you're going to cover those other options in a future video. This was great content.
Happy to! I think I'd like to take a look at using GitHub Actions and Environments to handle separation of duties and state.
@@NedintheCloud please do!!!
Incredible video, and I concur with this. I have watched the env0 video but I wonder about the other options.
Also, don't forget that in test you might use a shared RDS for cost optimization, while in prod you have your own RDS - try doing that with workspaces xD!
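A hedged sketch of how that usually gets expressed in a single configuration (all identifiers below are made up): test environments look up the shared instance with a data source, while prod creates its own, driven by a tfvars flag.

```hcl
variable "create_dedicated_db" {
  type    = bool
  default = false # test/dev reuse the shared instance; prod.tfvars sets this to true
}

# Prod creates its own database...
resource "aws_db_instance" "dedicated" {
  count               = var.create_dedicated_db ? 1 : 0
  identifier          = "app-prod-db" # illustrative
  engine              = "postgres"
  instance_class      = "db.t3.medium"
  allocated_storage   = 20
  username            = "app"
  password            = "change-me" # illustrative; use a secret store in practice
  skip_final_snapshot = true
}

# ...while the other environments reference the shared one.
data "aws_db_instance" "shared" {
  count                  = var.create_dedicated_db ? 0 : 1
  db_instance_identifier = "shared-test-db" # hypothetical shared instance
}

locals {
  db_address = var.create_dedicated_db ? aws_db_instance.dedicated[0].address : data.aws_db_instance.shared[0].address
}
```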
Drift and workspaces really threw me for a loop with my first Terraform file. That's a lesson you only learn once, but I don't find myself paying much attention to workspaces anymore.
On AWS with an S3 backend, each workspace gets a separate folder inside the same bucket, based on the key provided in the backend block.
But again as he mentioned, the bucket access policy concerns remain the same.
@@divyamsharma5198 But can't you just limit access to a specific resource path within the bucket?
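For reference, a minimal S3 backend block showing where those per-workspace "folders" come from (bucket name and key are illustrative): non-default workspaces are stored under workspace_key_prefix/workspace_name/key, but they all still live in the same bucket, which is why the bucket-policy concern doesn't go away.

```hcl
terraform {
  backend "s3" {
    bucket               = "my-terraform-state"        # illustrative bucket
    key                  = "network/terraform.tfstate" # path used by the default workspace
    region               = "us-east-1"
    workspace_key_prefix = "env:" # the default; workspace "staging" lands at env:/staging/network/terraform.tfstate
  }
}
```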
I simply keep my various environments in separate backend storages.
Agreed. We would like to know whether TFC workspaces can resolve the issues with Terraform OSS workspaces. And can we achieve the DRY principle using TFC?
You can absolutely use TFC workspaces to resolve many of the issues. I'll detail that process in a future video.
Can you link/reference the part where Terraform state workspaces are not ideal for production?
Also, does the same apply to Workspaces when using Terraform Cloud? I have previously used multiple branches with multiple workspaces - each branch linked to a particular workspace so all merged PRs for that branch automatically open a plan for that workspace on TF cloud.
Here's the doc that I was referencing: developer.hashicorp.com/terraform/language/state/workspaces#using-workspaces
Workspaces in Terraform Cloud are a totally different beast despite having the same name. They include access control, separate variable values, and linking to specific repo tags or branches.
Nice video, but the video and audio are out of sync, which drives me crazy.
Really well explained!
Disagree.
Have tried all ways and:
- writing the configuration in a way that can be customised between environments +
- splitting resources between workspaces with separate management
is the best implementation we have seen.
Otherwise, you're basically assuming that:
- people will perfectly maintain all the different code versions; they won't - leading to having wildly different dependencies and resources managed in each environment
- people will want or have the inclination to write rego rules for each environment separately
- there is no benefit to be gained by dev fully or mostly emulating prod for continuous integration and staging type testing purposes (or that your use case won't require it)
- running your whopper of a plan/apply (that you will likely evolve towards) won't exceed the time limit and the number of resources won't kill the underlying engine
... and you're encouraging everyone to push to the same default workspace, meaning that now all teams have to review ALL possible changes... which is a nightmare when you're just updating a small part of the system as part of a routine change and have to unpick all the previous discarded/buffered plans and failed applies or drift realignments.
From the pov of a company with literally hundreds of AWS accounts, countless repositories applying across accounts, areas and teams, it is SUPREMELY impractical. Workspaces are the way forward. Just pray your TF Cloud provider doesn't charge by them!
Thanks for the feedback! I def think that workspaces in TF Cloud or a similar implementation in other TACOS makes a lot of sense. My quibble is with workspaces in the Terraform Community edition.
@@NedintheCloud Got you :)