These are the most in-depth explanations I've found online. The differences and pros & cons of each approach are broken down really well. Thank you!
Great to hear! Thanks for the feedback!
One of the best videos I've seen on repository strategies.
I agree, nailed it
At my last place of employment we decided on using variable definition files for each env. This way we did not have to duplicate our TF files. There was some extra work involved to handle specific cases where you may have had to create multiple instances of the same resource or, in rare cases, deploy a resource in one env but not the other. But overall it made it much easier to create consistency, because we didn't have different versions of the TF files in a branch or folder.
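A minimal sketch of what that layout looks like, assuming one shared configuration and a .tfvars file per environment (all names and values here are illustrative):

    # main.tf: one shared configuration used by every environment
    variable "environment" {
      type = string
    }

    variable "instance_count" {
      type    = number
      default = 1
    }

    resource "aws_instance" "app" {
      count         = var.instance_count
      ami           = "ami-0abc1234567890def"   # illustrative AMI ID
      instance_type = "t3.micro"

      tags = {
        Name        = "app-${var.environment}-${count.index}"
        Environment = var.environment
      }
    }

    # dev.tfvars
    environment    = "dev"
    instance_count = 1

    # prod.tfvars
    environment    = "prod"
    instance_count = 3

Running terraform plan -var-file=prod.tfvars then applies the prod values against the exact same code.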
The only downside to this approach is that there's no clear view of what has been deployed to each env.
That's not the case with the folder-based strategy.
There's a hybrid option that I didn't mention, where each folder calls the root folder as a module and passes values and updates through there. But you still have the problem that updating the root folder configuration impacts all environments, so it's hard to do testing.
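A rough sketch of that hybrid layout, assuming the environment folders live under the root configuration (paths and inputs are illustrative):

    # environments/dev/main.tf: a thin wrapper that calls the root
    # folder as a module and passes environment-specific values through
    module "env" {
      source = "../.."   # the root folder holding the shared resources

      environment    = "dev"
      instance_count = 1
    }

Each environment folder keeps its own state, but because they all point at the same root module, any edit to the root eventually lands in every environment on its next apply, which is exactly why testing is hard.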
@NedintheCloud The correct module abstraction is the key with the hybrid approach. All environments should start with a module which builds the environment-specific things from scratch. The common objects shared between environments should have their own isolated repo; for example, if you are on a hub-and-spoke model, the hub will live in isolation with a separate TF repo. Then the environment modules will refer to those resources directly when deploying. Great video btw! :)
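A minimal sketch of that referencing, assuming the hub repo stores its state in S3 and exports the hub VPC ID as an output (bucket, key, and output names are illustrative):

    # spoke/main.tf: a spoke environment reading the hub's isolated state
    data "terraform_remote_state" "hub" {
      backend = "s3"
      config = {
        bucket = "example-hub-tfstate"
        key    = "hub/terraform.tfstate"
        region = "us-east-1"
      }
    }

    resource "aws_vpc" "spoke" {
      cidr_block = "10.1.0.0/16"
    }

    # peer this spoke's VPC with the VPC exported by the hub repo
    resource "aws_vpc_peering_connection" "spoke_to_hub" {
      vpc_id      = aws_vpc.spoke.id
      peer_vpc_id = data.terraform_remote_state.hub.outputs.hub_vpc_id
    }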
Thanks for this video Ned. I'm wondering if the new Terraform Stacks feature presented during HashiConf 2023 would help with working across several environments too. Hard to say, because it's in private preview I believe, but they were talking about new "deployment" blocks which could perhaps be mapped to the notion of environments. Perhaps you have already tried this new Stacks feature? In any case it would be a good subject for another video. Thanks for your work ❤
Looking to implement the folder scenario to manage environments. How can the backend be managed with one Terraform config? For example, dev, test, and prod folders each needing their own backend config pointing to blob storage or S3.
Setting up the state backend can be done in a few ways:
- Inception: create the S3 bucket with a config using the local backend, then switch the backend to S3 (see the sketch below)
- Backend generator: create the S3 bucket with a dedicated configuration that just manages S3
- CloudFormation: use CloudFormation to bootstrap the S3 bucket
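A rough sketch of the inception approach, plus the per-folder backend blocks the question asks about (bucket and key names are illustrative):

    # bootstrap/main.tf: run once with the local backend to create the
    # state bucket, then migrate with `terraform init -migrate-state`
    resource "aws_s3_bucket" "tfstate" {
      bucket = "example-tfstate"
    }

    resource "aws_s3_bucket_versioning" "tfstate" {
      bucket = aws_s3_bucket.tfstate.id
      versioning_configuration {
        status = "Enabled"
      }
    }

    # environments/dev/backend.tf: each environment folder then pins its
    # own state file; only the key changes between dev, test, and prod
    terraform {
      backend "s3" {
        bucket = "example-tfstate"
        key    = "dev/terraform.tfstate"
        region = "us-east-1"
      }
    }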
Hi Ned. Are there any upcoming updates planned for the Terraform Azure course? Thanks
Yes! In fact, I am working on the course outline in a separate tab of my browser right now.
Thank you for the update! I appreciate your efforts. Could you please provide an ETA for the course release?
I love this. Quick question: what if we need to use different credentials for UAT and Production than we use for Dev and QA?
If you're using GitHub environments, you can specify a different set of credentials for each environment in the GitHub Actions Environment Secrets.
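If you also manage the repo itself with Terraform, one way to wire that up is the GitHub provider's environment secret resources. A sketch, assuming illustrative repo, variable, and secret names:

    # illustrative: per-environment credentials via the integrations/github provider
    terraform {
      required_providers {
        github = {
          source = "integrations/github"
        }
      }
    }

    variable "prod_client_id" {
      type      = string
      sensitive = true
    }

    resource "github_repository_environment" "prod" {
      repository  = "example-repo"
      environment = "production"
    }

    resource "github_actions_environment_secret" "prod_client_id" {
      repository      = "example-repo"
      environment     = github_repository_environment.prod.environment
      secret_name     = "ARM_CLIENT_ID"
      plaintext_value = var.prod_client_id   # supplied securely, e.g. via a TF_VAR_ env var
    }

The UAT/Production pipeline jobs then target those GitHub environments and pick up their own credentials automatically.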
So.... What about workspaces?
Terraform Core workspaces? I'd avoid them. ruclips.net/video/6QgHLncP5VA/видео.html
I'm glad you didn't mention workspaces, 'cause I would report your channel for using them 🤣
I may have done a whole video about workspaces 😉 ruclips.net/video/6QgHLncP5VA/видео.html
Nice shirt
You show your face more than the contents. Is everything fine with you? Very weird.
Ned. I think an excellent video to do is revisit the GitHub Actions CI/CD workflow with Terraform and Azure AD Federated Identity (OIDC) and map it over to a 100% GitOps form using the Weaveworks FluxCD Terraform Controller (tf-controller). How does the imperative end-to-end GHA workflow change to a declarative one using a setup where a control-plane K8s cluster (even using KinD) has the tf-controller CRDs on it to watch and reconcile changes to Terraform code in Git?
Great suggestion! I took a look at the tf-controller a couple of years ago. Sounds like it might be worth checking out again.