🔴 To support my channel, I'd like to offer Mentorship/On-the-Job Support/Consulting (me@antonputra.com)
Anton, you and the Cloud Champs YouTube channels are seriously holding it down for the DevOps learning community. I'm constantly bouncing between the channels because the content is practical and great.
thanks❤️
Dude, you're putting out seriously high-quality content with these videos. Liking before watching and already saving this to my watchlist ...
THANKS
thank you! :)
Thank you so much for making such a thorough yet simplified video about this. I'm new to TF and have been experimenting with so many different ideas, and I've been consistently running into a lot of the issues you covered, so this is a great help.
Welcome! Those are not the only approaches you can take, but they are the most common.
What an awesome lesson! I was looking for this info a month ago because I changed jobs and needed to create a Terraform repo from scratch. Very useful.
thanks!
I'm new to Terraform. You just pulled me out of a design fugue state. Thank you.
my pleasure :)
Thank you for this kind of video. There are thousands of Terraform videos on YouTube, but it is hard to find one that shows how to use it professionally.
thank you!!!
Great explanation. Very good terraform video for novice and intermediate terraform engineers.
I found that the Terragrunt method you showed here is the best fit for me. I hope you cover Terragrunt in more depth. Thanks, Anton
Welcome! Yeah, the dependency approach is way better, but there is a lot going on around OpenTofu (a fork of Terraform that Terragrunt works with), so I would suggest learning about it before committing to Terragrunt.
Excellent video Anton.. thanks so much!
I was waiting for that video!
here you go :)
Amazing video, Anton! I've seen a lot of companies trying to scale with the V2 structure, and it's impossible! Another common mistake is using one repository for the entire IaC, with the same lifecycle for a VPC and a simple resource (an S3 bucket for the audit team, a DynamoDB table for a specific API...).
Thanks. I find the most difficult part is building Terraform/Ansible modules for managing distributed systems such as Kafka and Cassandra. Usually, Terraform/Packer is not enough...
Thank you so much for this incredible video. I'm going to look into Terragrunt to go deeper on the subject. Much appreciated.
thank you!
Thanks, very good explanation and quality content
thanks!
Thank you so much for this video! It's a great reference.
❤️
Thanks much really helpful, appreciated!
Thanks, awesome content.
thank you!!
amazing video, thanks
From my experience: we paid Rackspace to write us Terraform code for deployments, since we had many customers and environments. I remember the code they wrote was very heavy; terraform plan could take 20-30 minutes and terraform apply more than 40 minutes.
Well, the more stuff you have in the same state file, the longer it takes to refresh and apply.
Hi Anton,
Great video as always! I went through a similar evolutionary process to what you described. I started with a very basic setup, and as the infrastructure grew, the problems you mentioned arose. Eventually, I ended up with a structure like yours, except I set up a private Terraform registry to manage the modules more efficiently and conveniently. I use Terrareg, which I believe is the ultimate option.
I have a few questions:
1. Do you think it's correct to manage the providers as a separate module instead of creating a file for each project?
2. You mentioned that some resources can be put in the global folder. What do you think should go there?
Thanks!
Thanks!
1. Well, I like the Terragrunt approach. You define the provider once in the root, and it gets generated in every single environment (see the sketch below). I don't suggest using Terragrunt, but you should definitely take a look at the "best" practices they have implemented.
2. IAM users, global S3 buckets to share artifacts like jars, ECR repos, etc. Some, including me, like to manage their own centralized monitoring and logging in a single place. It is much cheaper than Datadog, SignalFx, etc.
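For reference, a minimal sketch of that Terragrunt pattern (the region and provider contents are placeholders, not from the video): a generate block in the root terragrunt.hcl, so every environment folder that includes the root gets the same provider file created automatically.

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terraform"
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"  # placeholder region
}
EOF
}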
Thank you for the awesome video! What do you think about terraform workspaces? Any pluses/minuses to using them instead of splitting the environments into folders?
I cover it in one of the tutorials in that playlist. The biggest issue is authorization: you won't be able to grant different users access to different environments. By using folders and saving states in different buckets, or simply using prefixes, you can limit access accordingly. I'll refresh that topic soon.
Interesting. As far as I'm aware, if you run terraform apply when using workspaces, Terraform will automatically store the state file under the "env:/$TF_WORKSPACE" prefix in that S3 bucket. So if there is a way to grant granular access to different IAM roles for each prefix, it could be done. Once again, thank you for the awesome content you push so regularly!
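That prefix can indeed be locked down per role. A rough sketch (the bucket name is a placeholder, not from the video) of an IAM policy that limits a dev CI role to the dev workspace's state keys:

resource "aws_iam_policy" "dev_state_only" {
  name = "tf-state-dev-only"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      # workspaces store state under env:/<workspace>/<key>
      Resource = "arn:aws:s3:::my-tf-state-bucket/env:/dev/*"
    }]
  })
}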
Having one root module per environment (e.g. dev, stg, prd) makes sense to me, as you definitely want those to be different terraform states and be able to run in parallel, but I simply can't wrap my head around why you'd want to have a root module for each service as well (e.g. envs/dev/subnet, envs/dev/vpc as opposed to a single envs/dev that has all services).
I know Google suggests a similar thing for their terraform guide, as it helps keep the state file and thus blast radius small and makes terraform plan run fast.
However, you no longer manage dependencies between services in terraform and have to move to bash scripts with specific order in them. As a developer you then need to be extremely mindful in which order you've applied things as it's no longer as simple as running one single `plan` or `apply`.
The number of root modules also rises significantly, and becomes a product of `envs` and `services` - which can lead to a large amount of boilerplate code one has to manage.
It seems that the reason you'd want a state file per env × service pair is due to terraform's limitations - if it could figure out which part needs to be re-planned / refreshed, it wouldn't really matter how big your state file is. Kinda how git & web state management frameworks can be pretty efficient and spot diffs accurately.
Any advice on this would be appreciated, still learning how to do this best.
Usually, you never let anyone apply locally; therefore, they don't need to worry about dependencies, as everything is done via PR and applied on some remote CI server (e.g., Jenkins, GitHub Actions). You can also use Terragrunt and explicitly define dependencies. The main reason for splitting the state is to make it more efficient and quicker to refresh when you run a plan. You can use the -target flag, but it is not convenient. Splitting the state should only be done for very large projects, e.g., when you have multiple large Kafka clusters with hundreds of nodes deployed on EC2 instances, Cassandra clusters, etc. Otherwise, every time you run a plan, Terraform would need to refresh every single EC2 instance.
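If it helps, this is roughly what an explicit Terragrunt dependency looks like (paths and output names are hypothetical): one component consumes another component's outputs, and Terragrunt applies them in the right order.

# envs/dev/eks/terragrunt.hcl
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_subnet_ids
}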
@@AntonPutra I tend to apply locally when developing infra code using infra-dev environments. It gives me a way to quickly iterate and provides a good signal of correctness - there's nothing worse than approved PRs failing at terraform apply stage or just not producing the desired effect because your code is wrong and you never tested it. I use terraform variables to specify the project/environment (dev.tfvars & cd.tfvars) and terraform config to specify backend buckets (dev.conf & cd.conf). Adds a bit of boilerplate but helps in the end.
Thanks for Terragrunt, I'll definitely look into it. Might be more than what we need, but I can see how it solves this specific problem.
terraform workspace :)
Wonderful, thanks. Thank you!
my pleasure!
Please continue this series with a real-world, production-grade Terraform CI/CD pipeline.
i may in the future
I like to use Terraform workspaces to manage the environments. I think it is more compatible with the Git flow, branch-per-environment approach.
It's convenient if you have a small team, but as it grows, you'll need to create different IAM roles with different permissions for different environments. Managing access isn't as easy as it is with buckets and S3 object keys, where you can use IAM policies to restrict access. I have some videos covering those topics just in case - th-cam.com/video/GgQE85Aq2z4/w-d-xo.html
@@AntonPutra I put the credentials in the pipeline variables, and for the state I use S3, with the environment/workspace prefix in the state key. So the developers only have permission to apply in dev, and the other environments are only updated by the pipeline.
@@rvs0910 GitOps is a goal, but not everyone can achieve it :)
Can you please do a deep, full course on ECS like you did on EKS, with Terraform and all the concepts? That would help a lot because you have such a great way of teaching! Sometimes you don't need EKS for simple projects; ECS would satisfy the need. Thanks in advance!
Thanks, I'll see what I can do. Are there a lot of people still using ECS? let's see :) - www.linkedin.com/feed/update/urn:li:activity:7217542062328422404/
Anton, I have multiple Helm charts and multiple environments. I want to manage them using Terragrunt; how can I achieve this in the best way possible? Can you share some best practices around this?
I'm wondering if Terragrunt is the best option when it comes to multiple environments. Since there are Terramate, Terragrunt, and Atom, I'm wondering what would be the best approach.
Thank you. Consider a blog with some of the slides, or a README in the same Git repo... just a thought... thx
thanks, i'll think about it
It would be very cool to learn more about terragrunt =)
yep, just be careful with licenses :) - th-cam.com/video/yduHaOj3XMg/w-d-xo.html
Great practical content with best practices... I wish you'd make Python videos too, as people expect DevOps folks to code as well...
thanks, i use python in lessons, but i'll consider it!
And the second question: providers also offer ready-made modules, like for creating a VPC, an EKS cluster, etc., so we can use them directly. Should we use those? If not, why not?
Recently I started learning GCP, so I thought I'd use Terraform with GCP to create VPC and GKE modules. Today I did that and created a custom module for the VPC. But when I looked at the GKE resources, it seemed impossible for me to create a module for GKE. There were tons of attributes.
Actually, GCP is much simpler than AWS in terms of management; it abstracts away a lot of stuff compared to AWS. I have a few videos on how to create GKE, even one with a shared VPC and all the permissions needed. For the first question: when you're learning any particular cloud, try to use bare minimal provider resources, not modules; it will be easier for you to manage your infra in the future. When you get enough experience, you can switch to building your own modules or using open-source ones off the shelf.
@@AntonPutra Thanks. I followed your video today and created GKE from scratch, but there are a few things I didn't understand, like the node pools concept. In google_container_cluster I set location = "us-central1-a" and node_locations = ["us-central1-b"], so does that mean the control plane node is created inside us-central1-a? But there are no taints applied on the control plane nodes, so do we need to add them manually? Because when I create pods/workloads, they may be scheduled on the control plane nodes.
Is it possible to configure the setup so that the control plane runs across three zones, with worker nodes created in each individual zone?
@@Krsaurav-cl5kj Sure, if you use a region for the "location" you get an HA control plane; if you use an availability zone, you get a control plane in a single zone - registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#location
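A minimal sketch of that distinction (names and zones are placeholders, not the full config from the tutorial):

resource "google_container_cluster" "primary" {
  name     = "demo"
  location = "us-central1"       # region -> HA control plane across zones
  # location = "us-central1-a"   # zone   -> control plane in a single zone

  node_locations = ["us-central1-a", "us-central1-b", "us-central1-c"]

  remove_default_node_pool = true
  initial_node_count       = 1
}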
@@Krsaurav-cl5kj The control plane is abstracted away from you; it's not possible to run any pods there, so don't worry about taints.
I am looking forward to more lessons.
👌
When you say different environments (dev, stg, prod), do you mean they are different AWS accounts? If yes, how do you account for that when doing terraform apply to target the correct AWS account ID?
Different environments can be located in different AWS accounts, but it's not necessary. It's also very rare to have different AWS accounts in a single Terraform state, but in those edge cases, you can use provider aliases. That way, you can apply changes to multiple accounts/regions at once.
developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
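A minimal sketch of the alias approach, assuming a hypothetical second account ID and role name:

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform"  # hypothetical prod account
  }
}

resource "aws_s3_bucket" "prod_artifacts" {
  provider = aws.prod
  bucket   = "example-prod-artifacts"
}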
Nice video, thanks! One question: when I store my modules in a private repo and run my Terraform deployment in a CI/CD pipeline like GitHub Actions, how would you configure the GitHub Action to be able to pull the module repo? What's the best approach here?
Well, you generally have 2 options (GitHub):
1. Create a PAT token. The maximum expiration, I think, is 1 year. Then use git clone https://[TOKEN]@github.com/[REPO-OWNER]/[REPO-NAME].
2. Create a 'deploy key' at the repository level and use it to authenticate. I prefer this option because it does not have an expiration (see the sketch below).
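With either option, the module reference itself stays the same; a sketch with a hypothetical org/repo (CI just needs the deploy key loaded into its ssh-agent, or the PAT embedded in an HTTPS URL instead):

module "vpc" {
  source = "git::ssh://git@github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"

  cidr_block = "10.0.0.0/16"  # hypothetical input
}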
@@AntonPutra thanks!
I am new to Terraform, so I have a question.....
We can create resources and organize them into modules, such as VPC and instance modules, and we can use dev.tfvars/prod.tfvars files to define variables for multiple environments. However, I noticed there wasn't any mention of .tfvars files in this discussion. Is there a specific reason for this? How is it different, and what does it affect?
You can use .tfvars; it's just a different deployment strategy, not as flexible as the others, but a valid one.
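A minimal sketch of the .tfvars strategy (the values are made up): one copy of the code, one value file per environment, selected at plan/apply time.

# variables.tf
variable "environment" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# dev.tfvars:   environment = "dev",  instance_type = "t3.small"
# prod.tfvars:  environment = "prod", instance_type = "m5.large"
# usage:        terraform apply -var-file=dev.tfvars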
Hi, thank you very much for this great effort. Does it mean the vpc-02 folder will replace /modules/vpc? Because you mentioned that it will act like versioning.
Welcome! No, it stays as long as it's still referenced in at least one environment.
Please make a whole course on Terraform.
i'll think about it :)
You mentioned the tfstate file getting corrupted or "tf plan" getting slower, taking several minutes, if all resources are created in the same repo. But even in your advanced method we don't see multiple state files getting created. So is there no solution for that problem? Do you perhaps have another video for that?
If you separate your infra into multiple folders, it will force Terraform to create separate "smaller" state files, which will load faster.
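A sketch of what that looks like with an S3 backend (the bucket name is a placeholder): each folder points at its own key, so each component gets its own smaller state file. The same idea works with any remote backend, not just local state.

# envs/dev/vpc/backend.tf
terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"
    key    = "dev/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# envs/dev/eks/backend.tf
terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"
    key    = "dev/eks/terraform.tfstate"
    region = "us-east-1"
  }
}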
@@AntonPutra I see. At my work we are extensively using the most advanced option of reusable modules and Git tags. I definitely see a huge tfstate and often notice timeouts.
I will try multiple folders to see how, without specifying the TF state name, it creates a new, smaller tfstate in my S3 backend.
Or does this approach only work for local tfstate files?
Nice video. What about using Git submodules? I think you can create a Git repo and store all the modules there, then add that repo as a Git submodule to your project. After that, you can simply reference a module in the submodule folder. You can also switch between branches in the submodule, so you can test changes or deploy different configurations.
Yes, you can, but not many people are used to working with Git submodules, lol. But it's a 100% legitimate option!
🍿 Benchmarks: th-cam.com/play/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn.html&si=p-UOaVM_6_SFx52H
Why are you creating different environment directories instead of keeping the logic in the Terraform files and letting runtime variables come from files and environment variables (secrets)? This reduces the complexity significantly, instead of having to maintain so many different directories and files.
Also, don't read outputs from the state file; use a data source that goes directly to your provider. That is what HashiCorp suggests, and for good reason: it will query your endpoint for the live resource rather than your state, which could be corrupted, or the resource could have been changed outside of Terraform.
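To illustrate the commenter's point with a rough sketch (the bucket, key, and tag names are hypothetical): instead of reading another stack's outputs from its state file, you query the live resource through a provider data source.

# reading from another stack's state file
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tf-state-bucket"
    key    = "dev/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# querying the live resource instead
data "aws_vpc" "main" {
  tags = {
    Name = "dev-vpc"
  }
}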
Thanks for the feedback. Well, it's one of the options.
I use terraform workspace to avoid a directory per env :)
@@LaVidaEnUnaGota it works in small teams
I do not get the "benefit" of multiple state files. If you can lose one state file containing the whole infrastructure, you can just as easily lose all of them. Also, a mention of using S3 as the state backend would be nice.
I've been working for the last few years in large companies with large infrastructure, and when it takes 20 minutes to refresh, it drives you crazy... splitting state into multiple files drastically reduces the time you need to wait.
yeah, that's three ways to do it but... terraform "state" is ephemeral and therefore "state" should be relative!
Can you elaborate? The state file should be set to the name of the component. For example, Terragrunt automatically generates the state path based on the folder structure.
example - github.com/antonputra/tutorials/blob/main/lessons/160/git-infrastructure-live/terragrunt.hcl#L13
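Roughly what that root terragrunt.hcl does (the bucket name here is a placeholder): the state key is derived from each component's folder path, so every folder gets its own small state file.

remote_state {
  backend = "s3"
  config = {
    bucket = "my-tf-state-bucket"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}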
@@AntonPutra - I'm not poo hoo'ing your video but years of experience in large deployments (GovCloud) has proved that trying to maintain a single source of "truth" for terraform "state" is all but impossible and that managing terraform "state" relative to the context you are working in is enough. Not to mention that in large deployments (GovCloud) that are managed hierarchically, decisions made at levels above your pay grade, such as policy changes, can and will trickle down so that the terraform "state" becomes "stale" or more appropriately becomes invalid. At this point, remediation by way of "terraform import" becomes the task of the day. Once again, not poo hoo'ing your excellent work or this video but Hashicorp's recommended process and layout has never scaled or even worked for the large deployments (GovCloud) I have been involved in.
Bro, this is too much just to create an image. Just because we can doesn't mean we should.
image?? what do you mean?
Everything before 4:35 is a bad practice and shouldn't even be mentioned as "evolution". Beginners should learn how to write reusable, composable, and fully parameterized code (with full separation of code and data) from the start. Hence, splitting your code by "environments"/regions etc. is also a bad idea.
Common fallacies, again...
The more 'reusable' you make your code, the harder it becomes to maintain large projects with it. 😊 There are always trade-offs...
For some reason, nobody wants to think about Day 2 operations and everyone only wants one-click deployments. Managers love it.
@@AntonPutra "The more 'reusable' you make your code, the harder it is to use to maintain large projects." :-[]
It's, actually, the opposite. And If it becomes hard to maintain large projects with existing code, that is, it can't scale well, it means that the initial approach/design was bad (and, generally speaking, it has nothing to do with "one-click deployments", or day 2 operations).
@@maxbashyrov5785 Would you please describe your approach or provide a link to an article? I'll make another video covering it.
Better to use terraform workspace and stop managing a directory structure :)
With workspaces, you can't limit access to specific environments as you can with S3 buckets and paths.
@@AntonPutra But I don't need to have all those directories, and every workspace has its own state file created automatically :)