This tutorial saved me lots of pain. Many thanks for sharing
This is a clear, simple and fast explanation. well done and thank you Sanjeev.
very clear, simple and I really love your accent
Nice tutorial! Thank you.
A few words about storage resources, like S3 Bucket, ECR, etc.
Terraform doesn't handle resources like these, or resources that already exist, very well, because of the idea behind it: to create and destroy resources quickly and efficiently (among other purposes, of course).
With ECR, for example, we don't always want to destroy the repository, but if you created it with Terraform, it would try to.
The best way, in my opinion, is to create those storage resources manually (sad, I know) and then attach them to Terraform.
Other ways are pretty messy and could cause a lot of errors and headaches.
Could you share some references for doing that manually?
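For anyone wondering how "attaching" manually created resources works in practice: a minimal sketch using `terraform import` (the bucket/repo names and resource addresses here are hypothetical; your config must already contain matching resource blocks before importing):

```shell
# Assumes your .tf files already define resource blocks like
#   resource "aws_s3_bucket" "state" { ... }
#   resource "aws_ecr_repository" "app" { ... }
# and that "my-state-bucket" / "my-app-repo" are the real AWS names.
terraform import aws_s3_bucket.state my-state-bucket
terraform import aws_ecr_repository.app my-app-repo
```

After importing, the existing resources are tracked in state without being recreated, and `terraform plan` will show any drift between the real resource and your config.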
Awesome! Please show this in a demo project with a real-life scenario. Thank you!
For those of you who are confused about why you get a 403 error when trying to do this (even when you've given the Terraform AWS provider credentials): it's because the AWS CLI needs to be configured too. Install the CLI, run `aws configure`, enter the same credentials (or different ones, up to you), and then try again.
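To expand on this: the provider and the S3 backend both resolve credentials through the standard AWS credential chain, so either of the options below works (a hedged sketch; the key values are placeholders, not real credentials):

```shell
# Option 1: shared config via the AWS CLI
aws configure   # writes ~/.aws/credentials, which the provider and backend both read

# Option 2: environment variables (placeholder values)
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="exampleSecretKeyValue"
export AWS_DEFAULT_REGION="us-east-1"
terraform init
```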
Thanks Sanjeev, this is a good tutorial. How do I do the same thing when deploying the same script to multiple AWS accounts? Initially the state file is stored in an S3 bucket in account A, but when I try to deploy the same script in account B, it fails to refresh the state because the user in account B doesn't have access to the bucket in account A.
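One common pattern for this (a sketch, not the video's method; the bucket name, account ID, and role name are hypothetical): keep the state bucket in account A and have runs from account B assume a cross-account role that can read/write it:

```hcl
terraform {
  backend "s3" {
    bucket = "account-a-tfstate"           # hypothetical bucket in account A
    key    = "shared/terraform.tfstate"
    region = "us-east-1"
    # Assume a role in account A that grants access to the state bucket;
    # on recent Terraform versions this is written as a nested
    # assume_role { role_arn = "..." } block instead.
    role_arn = "arn:aws:iam::111111111111:role/terraform-state-access"
  }
}
```

The role in account A needs `s3:GetObject`/`s3:PutObject` on the state key (plus DynamoDB permissions if you use state locking), and account B's principal needs permission to assume it.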
Sanjeev thanks for this video. It seems like this would only be used if you wanted to share code with a team, and not for CI/CD environments as you would have to pop in and out of local / remote backend to destroy infra.
Hi Sanjeev, great video! Not sure if I missed this but could you explain why you set the key as global/s3/terraform.tfstate ?
We're specifying which folder to store the Terraform state file in. You can use a single S3 bucket for multiple projects by giving each one a different folder.
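In other words, the key is just the object path inside the bucket, so one bucket can hold state for several projects (the bucket name below is made up):

```hcl
terraform {
  backend "s3" {
    bucket = "my-company-tfstate"            # one shared bucket (hypothetical name)
    key    = "global/s3/terraform.tfstate"   # per-project "folder" path inside it
    region = "us-east-1"
  }
}

# Another project would reuse the same bucket with its own path, e.g.
#   key = "networking/vpc/terraform.tfstate"
```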
Awesome tutorial. Works like a charm.
Super clear Sanjeev! Thanks
How do I set up an S3 backend for a multi-account setup? The state is not getting updated in the S3 bucket in the base account after creating resources in the target account. I used EC2 with assume-role auth for deployment. It initializes the backend and creates the resources, but it fails to update the state and gives an access-denied error. Please help.
Is it safe to make the bucket name public?
Thanks again... I have gone through your 2-hour Terraform class and it was really helpful, but it cuts off at the end. I wanted to upload a few files into an S3 bucket and I'm having issues with the etag setup (filemd5). Can you upload the remaining portion of the class? Also, could you create an intermediate-to-master Terraform class? Thanks.
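For the etag issue: a minimal sketch of uploading a file with `filemd5` so Terraform detects content changes (bucket and file names are placeholders; note that comparing `etag` against `filemd5` doesn't work for objects encrypted with SSE-KMS, which is a common cause of perpetual diffs):

```hcl
resource "aws_s3_object" "config" {
  bucket = "my-upload-bucket"                     # placeholder name
  key    = "configs/app.conf"
  source = "${path.module}/files/app.conf"
  # filemd5 hashes the local file, so changing its contents
  # changes the etag and forces a re-upload on the next apply
  etag   = filemd5("${path.module}/files/app.conf")
}
```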
Phenomenal
Great explanation!
Sanjeev, we need more help on Terraform modules; they're not explained clearly in other videos.
Great tutorial! However, having Terraform create the backend-related resources (S3 bucket, DynamoDB table) directly is likely to get messed up later. They should be created manually in advance and only referenced in Terraform scripts.
Hi Kim sir, what do you mean by manually?
Clean and clear
How can I get back the deleted services manually from the AWS console?
Thank you very much for the explanation. However, I have a question: what if I have a single Terraform configuration for creating an S3 bucket, and I want to reuse the code with everything staying the same except the AWS region? I created a variable for the AWS region, ran Terraform, and everything was created correctly. But when I ran it again with a different region value, I got an error that the bucket doesn't exist in the new region. I could create three copies of the same configuration, but if we have 10 developer teams and each team needs to copy configurations from other teams and run them in different regions, we'd end up with something like 100 files in our GitHub repo. Any ideas? Thanks.
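One way to reuse a single configuration across regions without copying files (a sketch; the keys and region values are illustrative): the error happens because both runs share one state file, so the second apply tries to "move" the bucket. Give each region its own state via partial backend configuration, since backend blocks can't reference variables:

```shell
# One codebase, one backend block; the state path is supplied at init time
terraform init -reconfigure -backend-config="key=eu-west-1/terraform.tfstate"
terraform apply -var="aws_region=eu-west-1"

terraform init -reconfigure -backend-config="key=us-east-1/terraform.tfstate"
terraform apply -var="aws_region=us-east-1"
```

Terraform workspaces (mentioned in another comment here) are an alternative way to get the same per-environment state separation.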
Nice lecture! Could you tell us which Terraform VS Code extension you use?
I used marketplace.visualstudio.com/items?itemName=HashiCorp.terraform
Where or how do you specify the provider credentials for this scenario?
I use the S3 backend in conjunction with Terraform workspaces and it's really amazing.
Basically, each workspace gets its own separate state file in S3, so to delete a specific workspace (environment) I just do this (`<env>` is a placeholder for the workspace name):
## to delete the environment
terraform workspace select <env>
terraform destroy
## to delete the workspace after that, so no stale tfstate file is left on S3
terraform workspace select default ## needed because you can't delete the currently selected workspace
terraform workspace delete <env> ## will delete the tfstate file on S3
very useful
Thank you! Great job!
nice demo
How are the AWS credentials being passed here?
I believe I had the AWS CLI set up. Terraform can pull credentials from your AWS CLI config.
But you can pass in credentials any way you want. You can use environment variables as well.
Where can I download the code? thanks
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found. I'm getting this, can anyone help?
+1
Please set up the AWS CLI.
What about the DynamoDB config? How is it going to be used by Terraform?
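For reference, a hedged sketch of how the DynamoDB table is wired in (the bucket and table names are hypothetical): Terraform uses the table purely for state locking, writing a lock item on every plan/apply so two runs can't modify the same state at once. The table must have a string partition key named `LockID`:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"          # hypothetical
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    # Table must exist with a hash key "LockID" of type S (string);
    # Terraform acquires/releases the lock automatically
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```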