Top 10 Rules For Continuous Integration

  • Published on Jun 4, 2024
  • In this episode Dave Farley introduces his "10 Rules for Continuous Integration" - rules for organising a team to practise this vitally important aspect of modern software engineering. Dave was practising a version of CI in the early 1990s, but refined his approach at ThoughtWorks in the early part of this century, and is now seen as an expert in this field.
    Continuous Integration is more than just build automation. It also demands a change in the way that software development teams organise their work and, when practised well, introduces a more disciplined approach to software development.
    This disciplined approach is even more important when CI is practised as a component of a Continuous Delivery Deployment Pipeline (CI/CD). Before Kent Beck introduced CI, projects failed simply because teams couldn't merge their work together. This is not a small change, and Continuous Integration is a cornerstone of any effective CD approach.
    ---------------------------------------------------------------------------------------
    If you want to learn Continuous Delivery and DevOps skills, check out Dave Farley's courses:
    ➡️ bit.ly/DFTraining
    📚 BOOKS:
    📖 Dave’s NEW BOOK "Modern Software Engineering" is now available on
    Amazon ➡️ amzn.to/3DwdwT3
    In this book, Dave brings together his ideas and proven techniques to describe a durable, coherent and foundational approach to effective software development, for programmers, managers and technical leads, at all levels of experience.
    📖 "Continuous Delivery Pipelines" by Dave Farley
    paperback ➡️ amzn.to/3gIULlA
    ebook version ➡️ leanpub.com/cd-pipelines
    📖 The original award-winning “Continuous Delivery" book by Dave Farley and Jez Humble
    ➡️ amzn.to/2WxRYmx
    📖 "Accelerate, The science of Lean Software and DevOps", by Nicole Fosgren, Jez Humble & Gene Kim ➡️ amzn.to/2YYf5Z8
    --------------------------------------------------------------------------------------
    You can get Dave’s FREE guide on “Continuous Integration Top Tips” when you join our CD Mail List here ➡️ www.subscribepage.com/howto-c...
    ---------------------------------------------------------------------------------------
    Dave Farley's Blog ➡️ bit.ly/DaveFWebBlog
    Dave Farley on Twitter ➡️ bit.ly/DaveFTwitter
    Dave Farley on LinkedIn ➡️ bit.ly/DaveF-LI
  • Science & Technology

Comments • 67

  • @edgeeffect
    @edgeeffect 6 months ago +2

    Years ago now, I set up a Bamboo server at our company using the meagre resources that were made available. It took about 10 times as long to compile on the server as running a clean build locally. It could have been as much as half an hour between commit and build... and we didn't even have a test suite.
    My number 0 tip would be "don't be stingy with the specification of your build server!" (several comments on here along the lines of "but our build takes over an hour to run" made me remember this).

    • @ContinuousDelivery
      @ContinuousDelivery 6 months ago +2

      Yes, the commit stage of the deployment pipeline (which implements CI) is probably the most valuable place to invest in optimising for fast feedback.

  • @Locomamonk
    @Locomamonk 3 years ago +19

    I've always heard these terms and philosophies, but you're the first one to turn them into actual concrete examples of what they mean and how to adopt them, and I understood them perfectly. Thanks for that!

  • @PatriceStoessel
    @PatriceStoessel 3 years ago +17

    0:00 introduction
    *continuous integration*
    6:28 1. Always run commit tests locally before committing
    8:01 2. Wait for the results!
    8:44 3. Fix or revert failures within 10 minutes
    9:51 4. If a teammate breaks the rules, revert their changes
    10:30 5. If someone else notices you caused a failure before you notice, it's a build sin!
    11:40 6. Once commit passes, move on to your next task
    12:09 7. If any test fails, it is the responsibility of the committer
    13:02 8. If it is unclear who committed a failure, it is the responsibility of everyone who may be responsible to agree who will fix the failure
    *continuous delivery deployment pipeline*
    14:35 9. Monitor the progress of your commit
    16:20 10. Address every pipeline failure immediately
    16:53 ending

  • @Rockem1234
    @Rockem1234 2 years ago +5

    A few questions: 1. What happens when multiple commits land at the same time? How do you revert? Verify? 2. How would a developer know when he can rely on a change? How do we know when something is completely ready? 3. How can we review each other's code remotely?

    • @edgeeffect
      @edgeeffect 6 months ago +1

      1. git allows you to make a "reverse commit" that will revert your change without interfering with anyone else's commits.
      2. because you have a good test suite and all the tests pass.
      3. Zoom?
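
      For example, a minimal sketch of that "reverse commit" (the commit hash is a placeholder for the hash of the breaking commit):

          # Create a new commit that undoes the changes introduced by the
          # breaking commit, leaving everyone else's commits untouched:
          git revert <commit-hash>

          # Or undo just the most recent commit on the current branch:
          git revert HEAD

      Because this adds a new commit rather than rewriting history, nobody else's work is disturbed.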

    • @Rockem1234
      @Rockem1234 6 months ago

      @edgeeffect
      1. Nice, I'll check it out
      2. By ready I mean ready in functional terms: when I know I can rely on this bit, versus when it's still in progress and might get deleted tomorrow. It's not always simple
      3. I meant offline review; you're not both always available

  • @tutunak
    @tutunak 2 years ago

    This video should be used when a developer asks "why do I have to test my changes locally?" Very understandable and helpful.

  • @scotthjackson5651
    @scotthjackson5651 3 years ago

    At 02:43 I'm reminded once again of Kubrick's famous cut in 2001: A Space Odyssey, the bone tool transitioning into the spaceship... here we are talking about how to develop software effectively and it is still about managing primate conflicts.

  • @isaacdunstan6000
    @isaacdunstan6000 3 years ago +2

    I've just found this YouTube channel and it's awesome. It actually presents software engineering as an engineering practice, unlike so many other channels, and I can tell that I'm going to have a huge binge-watch of all your videos. You're a great communicator and I love the work you're doing

  • @elciomello
    @elciomello 2 years ago

    Hello Dave, I love your channel and videos.
    I believe that a fast "CI pipeline" contributes to everything you said here, and I agree that the build and testing should be fast and done in 5 minutes (or else something is wrong, heh), but my doubt is that today we run many other validations in "CI", like SAST and Static Code Analysis, which push that 5-minute time up.
    In this scenario, which is not a "5-minute pipeline", what would you advise as best practice?
    E.g. should we run SAST in the CI pipeline and accept the extra time, given the value of getting this feedback sooner, or should we run it in separate pipelines after the CI pipeline and get the feedback as our deployment process progresses?
    Once again, congratulations on the channel and videos with excellent content.

  • @iluzjonista
    @iluzjonista 2 years ago +1

    My current occupation is mostly building pipelines, watching them run, grow and wither, all while killing the wait time with your vids recently. Great insights. My fellow devs also seem to understand all the CI/CD quicker with your vids, compared to me mumbling about it 24/7 at least. The lack of really low-level examples makes it so much more approachable to non-Ops folks.

    • @ContinuousDelivery
      @ContinuousDelivery 2 years ago +1

      Thanks, I am pleased that you have found them helpful.

  • @Ratstail91
    @Ratstail91 2 years ago +2

    Here's a thought for you: in video games, automated testing is almost non-existent. You can test the engine, the tools, etc., but the game itself needs manual testing. As a result, I'm actually very weak at automated testing, despite many, many years of experience.

  • @p0rq
    @p0rq 3 years ago +1

    I wish I worked somewhere that thought about code development practices at all, whether CI/CD or not. We use Jenkins, we nominally "do Agile" (we have a kanban board and have meetings that we label things like "Sprint Planning" and "Grooming" -- wow!), etc. But we really don't think about how we're coding in this kind of way.

  • @mikemegalodon2114
    @mikemegalodon2114 2 years ago

    Nice tips, thanks! Except I don't really like the "finger-pointing" moment. How about pre-merge validation that would run all the checks and make sure the main branch is always buildable?

    • @ContinuousDelivery
      @ContinuousDelivery 2 years ago +1

      That is what Continuous Integration is; it is just not necessarily automatic.

  • @Torsan1977
    @Torsan1977 4 months ago

    Frequent rebases on main should ease some of the problems, right?
    It sounds to me like you're exchanging integration hell for developer hell. If someone merged broken (not compiling or failing tests) code to main, throwing the whole team off, I'd be pretty pissed.
    If you very rarely have integration hell, is CI really needed?
    Great content and I'm learning a lot.

  • @shellwhale8994
    @shellwhale8994 1 year ago

    How does all of this scale? Does it still work if you have hundreds of developers on the same pipeline?

  • @dgmstuart
    @dgmstuart 2 years ago

    Genuine question: if you’re not able to get your test suite to run quickly enough that it’s reasonable to wait for it to pass (which in my limited experience is quite difficult for some teams to achieve), it sounds like this approach isn’t feasible?

    • @ContinuousDelivery
      @ContinuousDelivery 2 years ago +5

      Well, then I'd look to the tests, because they aren't good enough. Google have 25k developers sharing a single repo and running hundreds of millions of test cases per day, and each dev gets feedback on their changes in a few minutes. So it is possible; why are the tests you run so slow?

  • @tj71520
    @tj71520 2 years ago

    So use a build server to build and start automated tests after each commit?

  • @MrOneWorld123
    @MrOneWorld123 1 year ago

    Why would the commit build fail when my local copy builds and my local tests pass? Are my locally run tests a different set of tests?

    • @ContinuousDelivery
      @ContinuousDelivery 1 year ago +2

      The commonest cause is that you forget to commit a new file, so your test passes locally, because the file is there, and fails in CI, because it is not. This shouldn't happen very often, but I think that it is more useful to think of this the other way around.
      The definitive build of your system is post-commit: it builds the software you will release. Building and running tests locally is just you doing your work; the CI build is the finished article. The reason you run tests locally first is to reduce the chances of breaking things in CI. You don't have to, but it is good practice.
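
      A cheap local guard against that "forgotten file" failure (a sketch, not something from the video) is to check for untracked files before you commit:

          # Lines starting with "??" are untracked files that exist locally
          # but will never reach the CI build unless you "git add" them:
          git status --short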

  • @hunterwilhelm
    @hunterwilhelm 2 years ago

    For point #7, "If any test fails, it is the responsibility of the committer":
    should the committer then go and fix someone else's code that they aren't familiar with? What should they do?

    • @ContinuousDelivery
      @ContinuousDelivery 2 years ago +2

      Yes, if their commit causes a test to fail it is their problem. They may ask for help, but it is their responsibility. No other approach scales, because if you don't adopt this policy there is no cost to the committer for breaking things.

    • @hunterwilhelm
      @hunterwilhelm 2 years ago

      @ContinuousDelivery that makes sense, thanks!

  • @magdosandor8051
    @magdosandor8051 3 years ago +3

    How do I keep my tests under 5 minutes? I have only seen 20+ minute test runs in bigger projects.

    • @dmitryplatonov
      @dmitryplatonov 3 years ago

      1) Run tests in parallel. 2) Choose a subset of tests to run at the commit stage and run the rest later.
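
      A sketch of both ideas, assuming a Python project using pytest with the pytest-xdist plugin (the "commit_stage" marker name is illustrative, not a built-in):

          # Commit stage: run only the fast, marked tests, in parallel
          # across all available CPU cores (pytest-xdist's -n auto):
          pytest -n auto -m commit_stage

          # Later pipeline stage: run the remaining, slower tests:
          pytest -n auto -m "not commit_stage"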

    • @ContinuousDelivery
      @ContinuousDelivery 2 years ago +1

      Sorry to sound flippant, but to a large degree the answer is "write better tests". Certainly you can optimise things, but there is a limit. If you adopt a test-focused approach to dev it is all a lot easier.

    • @Rockem1234
      @Rockem1234 2 years ago

      Write better and fewer tests; like any skill, it takes practice to master.

  • @petermanger9047
    @petermanger9047 3 years ago +1

    I liked the video

  • @erionan
    @erionan 3 years ago

    Hi, is it too late to become an embedded systems developer at age 35?

    • @ContinuousDelivery
      @ContinuousDelivery 3 years ago +4

      No, I don't think so. I don't think that this has anything to do with age. The trick is to either find someone who will give you a chance to learn on the job, or learn in your own time and do something that shows that you can do the job. Then it is a matter of luck, to some degree, that you meet the right person at interview who will give you a chance. This is true for everyone, whatever their age.

    • @edgeeffect
      @edgeeffect 6 months ago

      The problem isn't BECOMING an xxxxxxxx developer at age y... The problem is always convincing recruiters that you're able to do xxx in a job, when recruiters have the mantra of "We use language x and frameworks y and z - so you must have 5 years' commercial experience using x, y and z in exactly the same context as we use them in".

  • @BaoNguyen-yb5qf
    @BaoNguyen-yb5qf 3 years ago +6

    As far as I understand your video, the term "Continuous Integration" has been watered down over the years to just mean "use a build server" :/, while the practices you've mentioned in this video have sort of "renamed themselves" to "trunk-based development" these days :s.

    • @ContinuousDelivery
      @ContinuousDelivery 3 years ago +3

      Yes, I think that is pretty accurate.

    • @llothar68
      @llothar68 3 years ago

      Well, I immediately know you are only doing web-based development.
      In the good old age of binary compilation it is still different.

  • @theTeslaFalcon
    @theTeslaFalcon 2 years ago

    What do you mean by "commit"?
    How do you "commit once per day"?
    A one-line change takes seconds to make.
    The excessive amount of automated tests would take hours.
    How can you "commit every 15 minutes" when it'll take more than 15 minutes to write the tests for your change?
    How do you revert someone else's change? Wouldn't security constraints prevent such cross-code bickering?

    • @bmhyakiri
      @bmhyakiri 1 year ago +1

      When he says "commit" he means pushing code to the main branch. If you commit once per day then you push your code once per day. I'm not sure how automated tests could take hours to run, no offense intended; if your tests are taking longer than 1 second each (and even that can be considered too long) then they are likely poorly written. Ideally a single test should take around 100ms to run... On large projects I have seen thousands of tests complete in a minute or two... no idea how that could ever take hours.
      With TDD your tests are already written, so if you check in a test that tests against code that doesn't exist yet, no harm done, and if the code does exist then you'd be committing because the test is passing.

    • @theTeslaFalcon
      @theTeslaFalcon 1 year ago

      @bmhyakiri
      I've never worked on a large project, or with a large team, or with the software you refer to. All of my projects were 1-man shows.
      However, the code still had to run through different departments & project management. I tested my code as I edited it, but it had to go through the testing team before it could get released.
      When they ran their "automated tests" on my code (c.1996), it took 6-12 hrs per update. I have no idea what they were doing. All I know is Pete would call me to let me know when the tests were starting (~4pm) & call again in the morning to let me know the final results after having run all night.

  • @nicksonsneidergomezpineda4905
    @nicksonsneidergomezpineda4905 1 year ago

    5:00

  • @PieterWigboldus
    @PieterWigboldus 4 months ago

    Instead of local tests, why not just small PRs, and wait for the required checks to be green?
    Local tests for all code shouldn't be required when you run them in the pipeline for the PR.
    It reduces local resource use, enforces the required tests, and is transparent for everyone.

  • @Oswee
    @Oswee 3 years ago +1

    Unfortunately I have no real teamwork experience yet, but all this sounds like it puts huge mental pressure on the developer, just because of how interconnected everything is and how little "fresh air" there is. Basically... you are in a never-ending run for personal performance metrics and in constant "fear" of breaking the build pipeline. Which leads to burnout. I could be totally wrong, but this is how I see it at this point in time.
    BTW... really great videos! :)

    • @ContinuousDelivery
      @ContinuousDelivery 3 years ago +5

      I am pleased to say that you are wrong. There is some good data on this: teams that work the way that I describe are less pressured and less stressed. Microsoft did a "before and after" analysis when the Bing dev team adopted Continuous Delivery; before CD, they reported a 38% score on "work/life balance satisfaction", after, they reported 78%!
      From personal experience, I think that the reason is that regular SW dev is pretty stressful. You aren't sure if the change you made works properly, you aren't sure if you broke something you didn't expect with your change, you aren't sure if there will be lots of bugs in production. After CD, you have a lot more confidence in your work. CD teams also tend to be a lot more collaborative, so it is a more pleasant social experience. Read the "State of DevOps" reports and the "Accelerate" book for more info.

    • @Oswee
      @Oswee 3 years ago

      @ContinuousDelivery Thank you! I see your point now. I'll definitely read the "State of DevOps" reports. I'm slowly working towards my own project setup with all these practices implemented, at a bit smaller scale, so I really enjoy these videos.

    • @Qladstone
      @Qladstone 3 years ago +2

      Knowing that your code works, and also knowing that someone else's code doesn't break your code, means a lot. So many problems can be solved if we can maintain this invariant.

    • @ContinuousDelivery
      @ContinuousDelivery 3 years ago

      @Qladstone yes, and it gives a greater sense of confidence in moving ahead with changes. I don't know of any data on this, but subjectively teams that work this way feel much freer to change their code, and so they do. That means that they tolerate messy code less, and so the code can get better over time.

  • @YvesHanoulle
    @YvesHanoulle 2 years ago

    I don't mind that a build fails; I mind that it's not quickly fixed. If it never fails, we don't have enough tests.
    Instead of blaming, I prefer to do the opposite: I call a build that was either always green or fixed fast enough a croissant build, and I will bring croissants for the full team the next day.
    (and over time these rules become harder)

  • @MohammadElmi
    @MohammadElmi 2 years ago

    I can't understand the sentence he says after "That is one of the reasons why I really dislike the term CI/CD." 0:11

    • @keistzenon9593
      @keistzenon9593 5 months ago

      "But I'm a pedant, what can you do"
      He is pedantic and wants the terms to be used precisely

  • @knuthatsgut123
    @knuthatsgut123 10 months ago +1

    This idea of punishment goes against Google's blameless post-mortem culture, and also against the way you should treat dogs - and developers deserve equal or better treatment than that. The DevOps report says that trunk-based development is better than holding back your changes, but it does not say anything about a need for punishment. I would even go so far as to say that bad developers break the build less often than good developers, since bad developers tend to avoid high-risk user stories for lack of skill. So with this performance-measurement metric you reward developers who do not take any risk. Automating the commit when it succeeds should lead to much lower rates of burnout than people wearing the hat.
    At Google, for example, the commit is reverted automatically if the pipeline fails.

    • @ContinuousDelivery
      @ContinuousDelivery 10 months ago +2

      I think that you are over-reacting to what was meant to be a joke. The "punishments" were jokes, but meant to demonstrate that this is something that we all, as a team, collectively care about.
      Consequences matter when teaching anyone, or anything, even a dog, new behaviours. We learn from making mistakes, and we need to know that they are mistakes.
      I agree that I may be loose with my language when I talk about this, but there is something deeper here that I think matters, and that is this idea of consequences.
      I completely agree about the need for "blameless post-mortems", but not all acts are blameless and if you treat bad behaviours as equivalent to good, you never improve. So we need feedback, and the team needs to collectively agree on what they think is "good" and provide feedback to everyone when "good" isn't achieved.
      The "punishments" were simply a jokey form of that feedback.
      I think that one of the characteristics of an "ideal work environment" is that everyone gets to see the consequences of their actions and choices, and has the opportunity to correct mistakes, or improve on outcomes. This is how you build a learning-focused environment. This is certainly NOT about one group of people victimising another.

    • @knuthatsgut123
      @knuthatsgut123 10 months ago

      ​@@ContinuousDelivery
      If you have dozens of solutions, building everything locally might take forever (since it cannot be parallelised as well as on the build server) and might be a waste of time for something as simple as a commit. Why not automate testing whether a commit builds, and only commit it automatically once it does? Then there do not have to be any consequences at all, and you achieve higher throughput.
      We have a team of 5 people working on the same module, and every time something does not build, people complain and finger-point. It is normal that something does not build, and it happens to everyone, but it still causes distress, and you cannot teach people not to finger-point. What would you do in that kind of scenario? Keep it as it is?

  • @coderider3022
    @coderider3022 2 years ago

    Stage 3 - the rest of the dev team stands up and shouts at you! All your credibility is gone in those 10 minutes

  • @HoboSapien619
    @HoboSapien619 2 years ago

    I feel like this is all common sense...

  • @Kabodanki
    @Kabodanki 1 year ago

    5/ I wouldn’t turn my colleagues into children

  • @john3Va
    @john3Va 1 year ago

    OK boomer ....

  • @ivanrichmond3524
    @ivanrichmond3524 1 year ago +2

    If a manager made me wear a silly hat or put a dollar in a jar for any reason, I'd sue them and the company, or at least report the manager to HR, and you should do the same. There's a lot of psychological research showing that negative reinforcement makes things worse. Humiliating or punishing employees is counter-productive, no matter how "friendly" it may seem, because many of us won't be able to concentrate on discipline while distracted by feelings of shame, which don't actually help us be more disciplined. Instead, what many of us need is (a) to know what's expected of us and (b) confidence from the manager and team that we can do that.
    My recommendation is (1) make sure new hires understand how they're expected to collaborate (maybe they're used to feature branches / Git Flow, not CI/CD), and follow this up with a Wiki page, so people can go back and review what's expected of them, (2) politely tell a person who made a mistake, privately, every single time they make it, so that they get it through their heads not to do what they did, and tell them that you have confidence in their ability to have proper self-discipline (this will encourage them toward the right behavior), (3) if it's an ongoing problem with multiple team members, call a meeting for the whole team and re-train everybody in the approach (it's easy to forget all the disciplines you're supposed to have... we're also careful about best practices, putting a lot of thought into design, reading up on new technologies we need to know, etc., so we may have just forgotten -- we can all use a refresher), (4) if it's just one person and you've brought it up with them several times, then make it clear, again privately, that this is a real problem for the team and disciplinary action will be taken if it doesn't improve (but again express your confidence in their ability to change), (5) if it happens again, and they don't have a valid excuse, take the disciplinary action you said you'd take, (6) worst case, let the offending employee go, and on paper make sure it's put down as a no-fault layoff, so they can collect unemployment, but give them the parting advice to learn CI/CD (don't ruin someone's future career just because they made a mistake... it's on them to learn from their mistake, and most of us will do just that if we get fired, which most of us won't, because we'll have the self-discipline not to).
    Otherwise, good video. Very informative. Thank you! :)

    • @ivanrichmond3524
      @ivanrichmond3524 1 year ago +2

      Also, if you feel like you "have" to put dunce caps on people or make them drop dollars in a jar, you probably don't have enough automation. Consider more git hooks, unit tests, integration tests, Jenkins scripts, or whatever's needed to evaluate things automatically. There are a lot of moving parts and we humans are imperfect by nature. Basically, stop the commit from going through in the first place, so the problem doesn't arise. That's punishment enough :) Folks will learn quickly to get all their commit ducks in a row before committing... no dunce caps needed :)
      Yes, we should do our best to be self-disciplined, but that's not always enough. Let the computers take some of the self-discipline off our shoulders by adding more automated tests to the process, and they will mitigate human imperfection.
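
      For instance, a minimal sketch of a client-side Git hook that blocks a commit when the fast tests fail ("make test-fast" is a placeholder for whatever command runs your commit-stage tests):

          #!/bin/sh
          # .git/hooks/pre-commit: Git runs this before each commit;
          # a non-zero exit status aborts the commit.
          if ! make test-fast; then
              echo "Commit blocked: commit-stage tests failed." >&2
              exit 1
          fi

      Save it as .git/hooks/pre-commit and make it executable (chmod +x .git/hooks/pre-commit).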

    • @knuthatsgut123
      @knuthatsgut123 10 months ago

      @ivanrichmond3524 true

    • @keistzenon9593
      @keistzenon9593 5 months ago

      Thanks for the writeup Ivan, nice to see the full process of how to foster personal change in colleagues.
      I think the caveat about the negative impact of playful dissing is important, as with the dunce-hat example.
      I immediately think of tighter friends/buddies where this might work - but not with new colleagues or in a cold corporate environment. Usually you can sense the politeness/coldness in the team, and such interventions (dunce hat) won't improve it.
      Anyway this is a GREAT video; I don't want to give the impression this is deep criticism.