Thank you so much for making this video. It took a process that is quite daunting to new DevOps engineers and made it very easy to follow.
Wow! I just witnessed a professional at work. Subscribed.
Thanks for the sub!
Great video, we're moving away from classic pipelines so this was very helpful.
Thanks Travis, this was a great video for getting started with pipelines using yaml rather than using the classic pipeline builder UI
Wow, it's a nice video, with a good explanation of deploying different stages using Azure Pipelines. We were facing a couple of issues with the destroy stage, and we finally fixed them based on your video. Thanks so much!
After a bunch of googling, this video was gold. Thank you!
Great teaching, straight to the point and exactly what's needed. Thanks for the content.
Wow, great explanation. This helped me clear up my concepts from the basics. Very professional! Thanks a lot!
Great video. But how does this approach accommodate different environments? Wouldn't it be better to run most of these commands in the release pipeline, so you can distinguish the environments in the release lanes?
Great video. Really did a good job of explaining yaml stages and use of terraform tasks.
6:26 where do you find the backend storage key? Is that from ADO or from the Azure portal?
Exactly what I was looking for. Tyty!
I don't know if I commented this earlier, but what a great video.
Thank you so much for making this video. Amazing
I super appreciate this video. Thank you! This was an awesome tutorial.
Thank you, everything is well explained in a simple manner.
Glad it was helpful!
Clear cut what was needed.
@Travis, if we're not providing the working directory in the pipeline steps, how does this pipeline know where the code is?
did you manage to find the answer?
This is very helpful. Thank you!
Amazing, excellent, superb explanation.
You explained each step of the YAML pipeline very well. Can you please make a video for multistage deployment as well? I appreciate your help; also, I couldn't find the YAML file you mentioned in the Git repo.
Excellent video. Question: how do we move the lock.hcl file back to our ADO repository?
great explanation- thank you!
I ran into a problem: no file was created for the tfstate in the container.
How did you get the variables you are specifying under 'variables' from line no. 11?
In the .gitignore you have .tfvars, yet the file is present in the repo, and needs to be?
How would this work for deploying to multiple subscriptions, one each for dev, stage, prod, etc.?
You rock, keep it coming
Thanks!
Thanks for your wonderful videos. just wondering if you have done any CI/CD videos?
Great explanation, though I was not able to find the pipelines in the Git repo; only the .tf files are stored there.
Great Help Travis!
Hi Travis. If I run Terraform in VS Code, everything seems to work. When I push this to DevOps and run the pipeline, it errors on init stating the tfstate resource group, storage account, and tfstate already exist. How do I run the pipeline without getting errors saying the tfstate has changed?
So because the Terraform installer is at the top of the YAML file, inside the stages you can just put validate, apply, destroy without preceding them with the 'terraform' keyword? E.g. just 'validate' instead of the 'terraform validate' command? Please correct me if I am wrong.
Hi, how did you use variables in the backend block of the Terraform configuration? Terraform doesn't allow you to do that.
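For anyone else wondering: Terraform does forbid variables inside the backend block, so the usual workaround (and presumably what the pipeline task does under the hood) is partial backend configuration: leave the block empty and pass the values at init time. A rough sketch; the variable and key names here are made-up examples:

```shell
# backend.tf contains only:  terraform { backend "azurerm" {} }
# The values come from pipeline variables at init time instead:
terraform init \
  -backend-config="resource_group_name=$TF_BACKEND_RG" \
  -backend-config="storage_account_name=$TF_BACKEND_SA" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=project1.terraform.tfstate"
```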
It's detailed. Thanks for the help. Is it possible to share this sample pipeline?
Hi Travis. What'll happen if you enable a trigger on the main branch, then make a code change and commit? Will it re-create the same resources? From what I understand it won't, because the current resources will exist in the tfstate. Trying to understand how this will work in a prod environment where code changes should deploy new infra. Thanks
this is just wow !!
Thanks!
Hi @Travis, is there a way we can avoid initializing Terraform multiple times in the pipeline?
Thanks,
Satish
Hi! I believe you can use self-hosted agents when running your pipeline and that would allow you to preserve data and pass it on from one stage to the next. Haven't really tried it myself though
@scerons That makes sense, will try it. Thank you!
Great job! Super easy to understand and follow, thank you very much.
That being said, I have a question:
How do you manage the storage for multiple deployments? Do you use the same?
Let's imagine that we have a pipeline to deploy RG1 for project1.
Another pipeline to deploy RG2 for project2.
Can, or should, all this TF state be stored in the same blob container?
How can I run a TF destroy only for RG1 and keep RG2 available?
Thanks in advance
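Not the author, but one common pattern: share the storage account and container, and give each project a distinct `key`, so each pipeline gets its own state blob and a destroy in project1's pipeline only touches project1's state. A sketch, with made-up names:

```hcl
# project1's backend: its own state blob inside the shared container
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"       # assumed shared backend RG
    storage_account_name = "sttfstateshared"  # assumed shared account
    container_name       = "tfstate"
    key                  = "project1.terraform.tfstate" # unique per project
  }
}
# project2 would use key = "project2.terraform.tfstate"
```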
@travis Is there a way to create the backend storage in the project itself, or is that just not best practice?
The backend storage has to be available for initialization; it has to be in place before the init command runs.
Have always loved your videos, my man. First time posting a question here. What is a solution in Azure or Windows to auto-deploy an Azure File Share to Windows VMs as a drive letter? I have tried using the PowerShell connect script to run on startup via GPO (for servers or users in OUs) but have been unsuccessful. Thanks!
Much appreciate your efforts. I need to understand how to deploy Azure resources like VNet and subnet, VM, SQL PaaS, and Key Vault to different subscriptions via Azure Pipelines; I also have queries on branch pipelines for the respective environments.
Great! But I have a quick question. What if I need to add an environment between the two stages, for manual approval?
Say I have created the same two stages: the first one to validate and plan, and if the plan completes successfully it moves to the second stage, the apply stage. But before the apply runs, I want to make sure that someone manually approves the pipeline to apply the changes.
There is a task called "ManualValidation". You can add it in between the stages, and the recipients can do a validation to decide whether to approve or not.
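For reference, a rough sketch of what that could look like in the YAML (the task is ManualValidation@0 and needs an agentless job; the stage name and email are made-up examples):

```yaml
- stage: approval
  dependsOn: plan
  jobs:
    - job: wait_for_approval
      pool: server            # agentless job, required for this task
      steps:
        - task: ManualValidation@0
          timeoutInMinutes: 1440
          inputs:
            notifyUsers: 'someone@example.com'
            instructions: 'Review the plan output, then approve to run apply.'
```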
Great video. I would like to know how to import existing resources in my pipelines.
Great video
This only works if you first manually create the resource group, storage account, and container in Azure. It fails otherwise!
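Right, the backend has to exist before the first init. A one-time bootstrap can be done with the Azure CLI; a sketch, with example names:

```shell
# One-time bootstrap of the Terraform backend (names are examples)
az group create --name rg-tfstate --location eastus
az storage account create --name sttfstatedemo \
  --resource-group rg-tfstate --sku Standard_LRS
az storage container create --name tfstate --account-name sttfstatedemo
```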
Anyone know a workaround for: Failed to get existing workspaces: containers.Client#ListBlobs: Failure responding to request: StatusCode=403. The App Registration has "owner" and Storage Account Blob owner. Thx?
I found it, it is due to restricting the storage account to a specific IP. It seems that for this to work, the storage account must be "accessible from all networks", even if you have "allow azure services on the trusted services list to access this storage account" selected.
Awesome! thanks
Jai ho
thankyou sir ❤
Nice!
╷
│ Error: No configuration files
│
│ Apply requires configuration to be present. Applying without a
│ configuration would mark everything for destruction, which is normally not
│ what is desired. If you would like to destroy everything, run 'terraform
│ destroy' instead.
╵
##[error]Terraform command 'apply' failed with exit code '1'.
##[error]╷
│ Error: No configuration files
│
│ Apply requires configuration to be present. Applying without a
│ configuration would mark everything for destruction, which is normally not
│ what is desired. If you would like to destroy everything, run 'terraform
│ destroy' instead.
I have the same issue, did you manage to resolve it?
The destroy pipeline is not deleting the resources 😑
Well
Thanks
The video is blurry
Thank you for your great content. I am getting an error running the Azure DevOps pipeline; it's not initializing Terraform. I'd appreciate your suggestion. /opt/hostedtoolcache/terraform/0.14.11/x64/terraform init -backend-config=storage_account_name=xxx -backend-config=container_name=xxx -backend-config=key=xxx -backend-config=resource_group_name=xxx -backend-config=arm_subscription_id=xxx -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
##[error]Error: There was an error when attempting to execute the process '/opt/hostedtoolcache/terraform/0.14.11/x64/terraform'. This may indicate the process failed to start. Error: spawn /opt/hostedtoolcache/terraform/1.0.0/x64/terraform ENOENT
Finishing: terraform init
Hi, check the working directory! I faced the same issue.
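Also worth noting from the log above: the installer cached Terraform 0.14.11 but the task tried to spawn .../1.0.0/..., so a version mismatch between the installer task and the Terraform task could be the culprit too. A rough sketch of pinning both the version and the working directory (assuming the Microsoft DevLabs Terraform extension tasks; the path and backend names are examples):

```yaml
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.0'   # must match what the later tasks expect

- task: TerraformTaskV4@4
  displayName: terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'  # example path
    backendServiceArm: 'my-service-connection'       # assumed name
    backendAzureRmResourceGroupName: 'rg-tfstate'    # assumed
    backendAzureRmStorageAccountName: 'sttfstate'    # assumed
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'terraform.tfstate'
```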