TW presents: Trunk-based Development with Michael Lihs, Chris Ford & Kief Morris

āđāļŠāļĢāđŒ
āļāļąāļ‡
  • āđ€āļœāļĒāđāļžāļĢāđˆāđ€āļĄāļ·āđˆāļ­ 17 āļĄāļĩ.āļ„. 2021
  • In this meetup we want to talk about trunk-based development. This means that every commit to the source code repository is immediately pushed to the master branch and continuously integrated by our CI/CD pipeline.
    We want to discuss the benefits and challenges of this approach, as well as its implications for the development and collaboration workflow. We will take a closer look at pair programming and at how to support code reviews in a trunk-based workflow. (A short command-line sketch of the basic flow appears just before the comments below.)
    The meetup will have two parts: a short presentation introducing the topic, followed by an online discussion. So be prepared to get involved 😉
    About the speakers
    Michael Lihs currently works as an Infrastructure Consultant at ThoughtWorks. Coming from a larger enterprise, where security was usually left to a “team of specialists”, he quickly learned to embrace Agile Threat Modelling as a technique to shift left on security. He strongly believes that security is everyone’s responsibility and that everyone in the software development process should be involved.
    Chris Ford has been fascinated by programming (and in particular functional programming languages) since he first stumbled across Haskell during a misguided attempt to study electrical engineering. He came to his senses, and has spent the last seven years happily building systems in various countries across the world. He has worked for ThoughtWorks in the UK, India and Uganda, and is currently coding Clojure in Glasgow. Chris is troubled by the thought that humans and applications might be specialisations of a general class of information producing and consuming nodes, and what that might mean. He's not quite as odd a person as that might imply, honest.
    As TW Global Director of Cloud Engineering, Kief Morris enjoys helping organizations adopt cloud age technologies and practices. This usually involves buzzwords like cloud, digital platforms, infrastructure automation, DevOps, and Continuous Delivery. Originally from Tennessee, Kief has been building teams to deliver software as a service in London since the dotcom days. He is the author of Infrastructure as Code, published by O'Reilly.
  • Science & Technology
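
A minimal command-line sketch of the trunk-based flow described in the abstract, assuming the trunk is the master branch and a CI/CD pipeline runs on every push (the test script name is an assumption):

  git checkout master
  git pull --rebase origin master      # integrate the latest changes from the team first
  ./run-tests.sh                       # run the tests locally before sharing (hypothetical script)
  git add .
  git commit -m "Describe the small, self-contained change"
  git push origin master               # every push lands on trunk and triggers the pipeline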

Comments • 12

  • @anagheshmuruli3554
    @anagheshmuruli3554 3 years ago +2

    Coming here after a merge hell! Amazing discussion. Thank you :)

  • @isarwarfp
    @isarwarfp 2 years ago +1

    Clearly Concisely Covered TBD! Nice Job.

  • @SPeeSimon
    @SPeeSimon 1 year ago

    Reminds me of the earlier days with CVS and Subversion. :)
    Where a whole department was committing to one 'development' branch, with a CI build that auto-deployed to the development environment. Good times...
    Gained a lot of weight because of the "you break the build, you treat" rule, which everyone broke at least once. Hence the introduction of the Build QA, whose job was to tell the last committer to fix what he had just broken. Dev1 committed his work in progress because he was going on vacation... Dev2 did not (fully) test his merge and used code that had since been deleted... Dev3 did not merge correctly and undid previous work... And yes, there were "rules" to prevent this, but it still happened. I don't know how long the streak of broken builds lasted, but it was something like 2-3 weeks.
    There is a reason why Git and its easy-to-use feature branches were adopted so quickly. You commit on a branch, your CI builds and tests it, and upon completion you create a PR, get a code review, and then safely deploy a fully working, tested and reviewed feature.

  • @andrealaforgia
    @andrealaforgia 3 years ago

    Fantastic talk. Thank you!

  • @fringefringe7282
    @fringefringe7282 1 year ago +2

    What's with the kitchen noise? :)

  • @gzoechi
    @gzoechi 2 years ago +1

    Great discussion. Any suggestions for concrete tools that make it convenient to see changes that belong to a story?

  • @MegaTosss
    @MegaTosss 1 year ago +1

    The overhead of short-lived branches is issuing a command and clicking a button. I'd take that over accidentally breaking the build for everybody, causing havoc and scrambling to fix it... Let alone if there was an urgent production bug fix that needed to be deployed in the meantime 😅
    1. git checkout -b feat/my-feat
    2. Click "Create Merge Request" in your Git platform.
    End of overhead.
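    A minimal end-to-end sketch of that flow for illustration, reusing the branch name from the comment (the commit message is a placeholder and the platform is assumed to be GitLab-style):
    git checkout -b feat/my-feat             # branch off trunk
    git commit -am "Implement my feature"    # keep the branch small and short-lived
    git push -u origin feat/my-feat          # CI builds and tests the branch
    # then click "Create Merge Request" and merge once the pipeline is green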

  • @allmhuran
    @allmhuran 2 years ago +1

    Why does no article or discussion about trunk-based development ever seem to take place in a context where UAT exists? All of the workflows seem to follow a pattern like this: a developer writes some code, the code goes through automated tests (some combination of unit and integration tests), and then a build pipeline pushes the validated code to production.
    What if part of your SDLC is to have a subset of users (perhaps selected domain experts) *accept* the changes? That doesn't just mean ensuring there are no functional bugs, but also that the software does what they expect it to do, and "feels" how they expect it to feel. This step has existed in every single enterprise development environment I have ever seen, yet there seems to be zero acknowledgement of it, and no room to insert it, in any of the trunk-based development workflows I've come across.
    The statistics about short-lived branches also don't provide any persuasive argument that the correlation is actually causation. It seems vacuous to point out that highly skilled developers will take less time to write code than less skilled developers. So even if a team is using feature branches, those branches will live for a shorter duration than branches in a less skilled team. It would naturally follow that teams with short-lived branches have higher velocity than teams with long-lived branches, because those teams have more skilled developers. So it's not the case that "adopting short-lived branches increases velocity", but rather that "having highly skilled developers results in higher velocity than having less skilled developers" - which is, as I said, so obvious that it's vacuous. And if that is the case, the cited statistics have no persuasive value in choosing a branching strategy.

    • @askingalexandriaaa
      @askingalexandriaaa 2 years ago +1

      The important point is to have a single history as the source of truth. You can deploy commit/version abc123 to QA/UAT and then release the same commit to users later on. What is strongly not recommended is having a separate line of history for each "environment". The history must be one; the environments can differ. See continuous delivery vs continuous deployment.
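      A minimal sketch of promoting one commit through environments rather than keeping a branch per environment (the deploy script and environment names are assumptions; abc123 stands in for a real SHA):
      git rev-parse --short HEAD                         # e.g. abc123, taken from the single trunk history
      ./deploy.sh --env uat --revision abc123            # deploy that exact commit for acceptance testing
      ./deploy.sh --env production --revision abc123     # once accepted, release the very same commit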

    • @michaelcirikovic33
      @michaelcirikovic33 2 years ago

      Some strategies are mentioned - for example, you can use feature toggles to switch on features in production or preproduction only for the UAT testers. Or you can use your release process (e.g. canary releases, rolling out changes user group by user group or environment by environment), or combine the process with feature flags. You can even keep your old release approach with x UAT test phases per year - in that case you continuously deploy to your INT environment and promote only some releases to your UAT environment. It will make the process more complicated and slow it down, but your code base will be a straight line with less overhead (although you will then need feature flags to enable the rollout of bug fixes). And if UAT is not happy, you can always change it again. Like... agile.
      Of course the approach brings overhead in the design (e.g. feature toggles, versioning of functionality, etc.) and requires a very reliable test suite and a team with a healthy number of experienced developers. Otherwise inexperienced developers will quickly corrupt your tests and will often fail to understand the consequences of code changes. But: if people do things very often (e.g. merge to master, deploy and get feedback), they will learn faster.

    • @crushingbelial
      @crushingbelial 1 year ago

      You're conflating continuous delivery with trunk based development.

  • @coderlifer4870
    @coderlifer4870 1 year ago

    Unfortunately, this does not work for mission-critical software products where you ship software on, or embed it in, physical devices. Examples are medical devices, where people die if there is a bug in the software; components in aircraft, where the plane could crash; or components in cars, where the car could crash if the autopilot fails to disengage when it reaches 90 mph. Can your nightly automated tests validate the 90 mph scenario? Can you do continuous deployment to thousands of cars, and when you find a bug, recall all of those cars?