Splitting Pipelines with Azure DevOps

  • Published 27 Sep 2024

Comments • 16

  • @thesirhctube
    @thesirhctube 3 years ago +3

    I'm not sure the configuration for the pr trigger you showed is correct. In the graphic, you showed a pr trigger excluding the main branch. However, the pr trigger is intended to run on branches that are the target of a pull request. In that case, you'd want to include main, not exclude it.
    There is also this admonition in the documentation for the pr trigger: If you specify an exclude clause without an include clause for branches or paths, it is equivalent to specifying * in the include clause. This may be why you observed the behavior you did.
    Thanks for these videos. I hadn't looked at Azure Pipelines before and have learned a ton about using and configuring them from you.

    • @SheldonHull
      @SheldonHull 3 years ago +1

      FYI, the `pr` trigger in YAML doesn't work with Azure Repos, only GitHub. Really confusing, since the `trigger` keyword does work.
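
      A minimal sketch of the trigger both comments describe (branch names assumed; per the docs, YAML `pr` triggers apply to GitHub and Bitbucket Cloud repos, while Azure Repos uses branch policy build validation instead):

      ```yaml
      # Sketch: run this pipeline when a pull request TARGETS main,
      # i.e. include main rather than exclude it.
      trigger: none   # don't also run on direct pushes
      pr:
        branches:
          include:
            - main
      ```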

  • @jafarshaik5160
    @jafarshaik5160 3 years ago +1

    Sir, please continue to make videos on GCP

  • @georgibg
    @georgibg 1 year ago

    Hi Ned. Great video! I just have one question - how do you ensure that the merge pipeline picks up the correct plan file artifact? Here's an example: two engineers are working on different branches and decide to submit pull requests and merge their code into the main branch. When the apply stage runs, it will default to using the latest artifact depending on which pull request gets approved last. When the pipelines are distinct, we need some mechanism to invoke the merge pipeline from the corresponding PR pipeline run.
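
    One way to address this (the task inputs below are real, but the pipeline and variable names are assumptions): pin the merge pipeline to the exact PR run that produced the plan, instead of taking the latest artifact.

    ```yaml
    # Sketch: download the plan artifact from a specific run of the PR pipeline.
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'specific'
        project: '$(System.TeamProject)'
        definition: 'pr-pipeline'          # assumed: name/ID of the PR pipeline
        buildVersionToDownload: 'specific'
        pipelineId: '$(planRunId)'         # assumed: run ID passed when queuing this pipeline
        artifactName: 'tfplan'
    ```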

  • @dus10dnd
    @dus10dnd 2 years ago

    Ned, where does the artifact even go? The idea is that it can be reviewed, but I don't see a way to include it for review, or in the Artifacts for the project. Using it later is fine, because it automatically gets downloaded.
    I got it figured out, but it leads to the next concern: the "Approve" stage is very low value at this point. You give someone a button and a small set of instructions. It would be better to give them the context of the plan. I'm working through that, but the best thing would be a link to the plan step's log.
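
    One way to surface the plan to approvers (a sketch, with assumed file paths; `##vso[task.uploadsummary]` is a real Azure Pipelines logging command, but outside an agent it is just printed text):

    ```shell
    # Render the saved plan for the run's summary tab.
    # Falls back to a placeholder when terraform isn't available locally.
    mkdir -p out
    terraform show -no-color tfplan > out/plan-summary.md 2>/dev/null \
      || echo '# Plan summary (placeholder)' > out/plan-summary.md

    # Attach the file to the pipeline run summary so approvers see the plan context.
    echo "##vso[task.uploadsummary]$PWD/out/plan-summary.md"
    ```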

  • @sebastijanp3
    @sebastijanp3 2 years ago

    I have a dumb question: why would you do the setup (provisioning service principals, not the DevOps pipeline itself; I don't mind what's written in that pipeline) in Terraform? Why not just use the az CLI (which is much better documented and up to date, and lets you do wonders by also calling web APIs) or the NUKE build system? You're not storing the state anywhere, so I'm not sure why Terraform for it. Plus, as a developer you might get those things prepared by the ops team, because they don't want you messing around with AAD. Is it recommended somehow? I struggle to understand. Or are you showing it just to be consistent and to show that it's possible? For example, imagine you need to write a custom contributor role (one that allows writes too): how do you actually do that in Terraform without killing yourself with searching and experimenting? Just wondering; I'm trying to understand, since I will have to articulate it to teams... haha
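
    For comparison: the az CLI path is indeed shorter (`az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<subscription-id>` does the whole thing in one call), while the Terraform version is declarative and repeatable. A minimal sketch of the latter, assuming the azuread (3.x) and azurerm providers are configured and the display name is made up:

    ```hcl
    data "azurerm_subscription" "current" {}

    # App registration + service principal for the pipeline.
    resource "azuread_application" "pipeline" {
      display_name = "tf-pipeline-sp"
    }

    resource "azuread_service_principal" "pipeline" {
      client_id = azuread_application.pipeline.client_id
    }

    # Grant Contributor on the subscription.
    resource "azurerm_role_assignment" "contributor" {
      scope                = data.azurerm_subscription.current.id
      role_definition_name = "Contributor"
      principal_id         = azuread_service_principal.pipeline.object_id
    }
    ```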

  • @drdamour
    @drdamour 3 years ago

    The docs clearly say the PR trigger is only for GitHub, not Azure Repos.

  • @jenniferkoenig9814
    @jenniferkoenig9814 2 years ago

    When I push a new branch to my Git repo, BOTH the CI and the PR pipeline run. Why? Also, why does every pipeline start running twice simultaneously? When I merge a branch to main, this starts the Merge pipeline - 2 times - and one succeeds, but the later one fails because the state file has changed. Am I supposed to delete the Terraform Cloud workspace after the project has been created (is that why it's running twice)?

  • @arieheinrich3457
    @arieheinrich3457 3 years ago +1

    Please don't save the plan file!! That is not best practice: if you don't run the apply immediately, any change to the resources makes the plan file obsolete. Your best bet is to stop after the validate (or after all tests), enabling the branch policy, and save the source code as the artifact. Then, in a release pipeline with CD enabled, you do the init and apply (plan runs anyway when you run the apply).
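
    The pattern being described here could be sketched as one multi-stage YAML pipeline (stage, environment, and artifact names are assumptions): publish the validated source as the artifact, and let plan run implicitly at apply time so it reflects the current state.

    ```yaml
    stages:
      - stage: Validate
        jobs:
          - job: validate
            steps:
              - script: terraform init -backend=false && terraform validate
                displayName: 'Init (no backend) and validate'
              - publish: '$(Build.SourcesDirectory)'
                artifact: 'tf-source'
      - stage: Apply
        dependsOn: Validate
        jobs:
          - deployment: apply
            environment: 'prod'   # approvals/checks attach to the environment
            strategy:
              runOnce:
                deploy:
                  steps:
                    - download: current
                      artifact: 'tf-source'
                    - script: |
                        cd $(Pipeline.Workspace)/tf-source
                        terraform init
                        terraform apply -auto-approve
                      displayName: 'Init and apply (plan runs implicitly)'
    ```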

    • @arieheinrich3457
      @arieheinrich3457 3 years ago

      Most of the people who start with Azure DevOps pipelines and Terraform go for "let's save the plan file as an artifact", but it's the wrong practice.

    • @NedintheCloud
      @NedintheCloud 3 years ago +1

      Interesting point. Do you have an example you could point to? How would you recommend validating the plan before apply?

    • @Vaisakhreghu007
      @Vaisakhreghu007 3 years ago +1

      @@arieheinrich3457 Could you please explain why it's a wrong practice?

    • @arieheinrich3457
      @arieheinrich3457 3 years ago

      @@NedintheCloud Hi Ned, it very much depends on how you split your state files and how you work on the infra as a team, rather than as a single person.
      Think of Git, for example: there's only one source of truth, on the server, but people use branches, and when they want to merge back is where you sometimes get merge conflicts. Now think of the state file. There is only one, so there's no real concept that allows you to "version" the state file.
      Let's say I have a resource to add, so I commit my code, and it goes through a "build pipeline" that does linting and validate and maybe Checkov. You run a plan and save the plan output file as an artifact. Remember that the plan file reflects the state at THAT point in time.
      You create a deployment pipeline and trigger CD on it, so as soon as the build finishes and has new artifacts, the first stage in your pipeline executes, runs the init command and the apply command, but adds the parameter telling it to use the plan file as an input file, thus NOT running plan again.
      You can assume that between the time you created the plan file and the time the apply command was issued, so little time passed that it's highly unlikely another process accessing the state file changed anything, so your plan file is still valid.
      Now you want to add human intervention, as someone needs to view the plan to check that nothing gets destroyed, for example. If you use a build pipeline, you can potentially do the init + plan as one agent job, add an agentless job with a manual intervention step, and another agent job after that with init + apply. The only thing you need to worry about is passing the plan file from the first agent job to the second, which you do with upload/download artifact steps.
      If you use a release pipeline, you can indeed add an approval gate.
      No matter which approach you use, what if it takes that person 30 minutes to approve? How likely is it that some other process / pipeline / manual work via the portal has changed one of the existing resources the state file is tracking, making your plan file no longer relevant?
      By saving the scripts as artifacts, you don't pin yourself to a point-in-time plan; you just make sure you use the same code after it was validated. You can even run a plan without saving the plan file, adding a PowerShell script that scans the results (that's a JSON file), looking for the plan's reply, "X to create, Y to change, Z to destroy", and based on that and task conditions you can either apply directly or deny and ask for human intervention. Houssem Dellai from MS has a video on that; I just don't have a direct link.
      Overall, I try to use this method of validating the code, saving it as the artifact, and running the plan as part of the release, so the plan is AS accurate as the actual point in time when I do the apply, and I use it to break execution if the plan output calls for any resource destroy, for example.
      Having a person go over the logs of the plan introduces time during which the state file might be changed by something or someone else.
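
      The "scan the plan output" gate described above can be sketched in shell (the summary line format is Terraform's; the log file name and the destroy-only threshold are assumptions):

      ```shell
      # Stand-in for real `terraform plan -no-color | tee plan.log` output.
      printf 'Plan: 3 to add, 1 to change, 0 to destroy.\n' > plan.log

      # Pull the destroy count out of Terraform's summary line.
      destroys=$(grep -oE '[0-9]+ to destroy' plan.log | grep -oE '[0-9]+')

      if [ "${destroys:-0}" -gt 0 ]; then
        echo "Plan destroys ${destroys} resource(s); requiring manual approval."
        exit 1
      else
        echo "No destroys detected; safe to auto-apply."
      fi
      ```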

    • @NedintheCloud
      @NedintheCloud 3 years ago

      Thanks for that detailed response! I totally get where you're coming from. Here is my thought process on stashing the plan file as an artifact.
      Let's say I make a change and commit it to a branch and then create a pull request. The plan file is generated and the output can be reviewed by someone who will merge it into main. Merging into main runs the apply against the target environment using the plan file.
      If nothing has changed about the target environment, then the plan file and review are still valid, and the apply succeeds. If someone else has made a change to the target environment between the time the plan file was generated and the review was completed, then the plan file will no longer be valid and the apply will fail (Terraform won't apply a plan file if the state versions don't match up). That is actually what I want to happen. The review based on the plan is no longer accurate because the target environment has changed, and I would want the plan and review to be run again.
      There's a bunch of possible ways the build and release pipelines could be structured and I think it is going to depend a lot on your environment, your team, and the environments you are managing. I should have made it clear in the video that this is just one possible way of approaching Terraform automation, and certainly not the only viable option!
      I think this is the video you're talking about: th-cam.com/video/ukmbiTSWU_M/w-d-xo.html from Houssem.

  • @sebastijanp3
    @sebastijanp3 2 years ago

    At 31:00 you were talking about a future video covering the environments part: QA, staging, prod. Is there such a video or GitHub code? And if not, is this still applicable? If I have an additional QA environment, does it make sense to create a new azure-pipeline-pr-qa.yaml (and similar), or do you have a branch called qa (in addition to main) and then, instead of that additional file, make triggers based on branch name? I am guessing and BSing here, but is there such code somewhere?
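
    One common answer (not from the video; file and parameter names are assumptions) is a single pipeline file with a stage template per environment, rather than a separate YAML file per environment:

    ```yaml
    # azure-pipelines.yaml: one stage per environment via a shared template.
    stages:
      - template: templates/deploy.yaml
        parameters:
          environment: qa
      - template: templates/deploy.yaml
        parameters:
          environment: prod

    # templates/deploy.yaml would declare:
    #   parameters:
    #     - name: environment
    #       type: string
    #   stages:
    #     - stage: deploy_${{ parameters.environment }}
    #       ...
    ```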