Quality Assurance in Agile Software • Dave Farley • GOTO 2022

  • Published Jan 4, 2025

Comments • 9

  • @nelsonfernandez9868 • 2 years ago +2

    Wow, this approach completely changes the game! 🤯 Thanks!

  • @kellyfj • 2 years ago +3

    Good talk - I think a mention of the test pyramid à la Martin Fowler would have been really helpful, as many developers have a very blinkered view of what testing is (often simple unit testing, but not integration, system, user-driven or load testing).

  • @powerswitchfailure • 2 years ago +1

    You say QA shouldn't do regression testing, but should assess non-functional aspects of the product/feature (aesthetics, usability, accessibility). This argument seems to assume that those non-functional aspects won't regress once the feature is delivered. But in my experience, such regressions are common (e.g. someone tweaks an existing feature and forgets to test it on iOS Safari, where the UI is now broken). How do you recommend dealing with that?

  • @jangohemmes352 • 2 years ago +2

    Question: I'm confused about the acceptance tests that are created for a new feature. Naturally, these are defined up front, and the feature is done when they pass. But there is some time in between. What does that do to the pipeline? Doesn't that mean a set of newly written acceptance tests will keep failing on each commit until the feature is done? What am I missing? How do you keep the lights green, i.e. how do you introduce the bigger acceptance tests?

    • @bertalankis7908 • 2 years ago +2

      My usual approach to this question is to disable the incomplete features and their acceptance tests. Any mainline changes are validated against the existing automated tests, so we can make sure that no regressions are introduced. The incomplete new feature can also be covered with unit tests (which are not disabled), so while the feature is not active we gain some confidence in the implementation. When the team is working on the new feature, they enable the incomplete acceptance tests, so they can see when those become green and releasable. This approach needs feature flags or branch by abstraction for disabling features and acceptance tests.
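
      The flag-gated approach described above can be sketched roughly as follows. This is a minimal illustration, not anything from the talk: the flag registry, feature names, and checkout functions are all hypothetical, and the acceptance test uses `unittest.skipUnless` as one idiomatic way to keep disabled tests out of the pipeline.

```python
import unittest

# Hypothetical feature-flag registry; names are illustrative, not from the talk.
FEATURE_FLAGS = {
    "new_checkout": False,  # incomplete feature: off on the mainline
}

def is_enabled(flag):
    return FEATURE_FLAGS.get(flag, False)

def legacy_checkout(cart):
    return {"total": sum(cart), "flow": "legacy"}  # existing, fully tested path

def new_checkout(cart):
    return {"total": sum(cart), "flow": "new"}     # in-progress implementation

def checkout(cart):
    # Branch on the flag so the mainline always ships the proven path.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

# Acceptance tests for the new feature are skipped while the flag is off,
# so the pipeline stays green; the feature team flips the flag to run them.
@unittest.skipUnless(is_enabled("new_checkout"), "feature not yet complete")
class NewCheckoutAcceptance(unittest.TestCase):
    def test_new_flow_is_used(self):
        self.assertEqual(checkout([10, 5])["flow"], "new")
```

      With the flag off, `checkout` takes the legacy path and the acceptance test is skipped; enabling the flag activates both the new path and its tests, which is essentially the branch-by-abstraction workflow the comment describes.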

    • @jangohemmes352 • 2 years ago

      @bertalankis7908 That does provide a clearer picture. Very helpful, thanks!

  • @nickpll • 2 years ago

    Hi Dave. Are you aware of any tools/technology that force devs/QA to first write QA tests before a particular piece of code can be merged/pushed to master? Something that goes one step further than TDD, more like QADD?

  • @Kreadus005 • 2 years ago

    What's the difference between QA and UAT under this perspective?

  • @robkom • 2 years ago +1

    Why does "manual testing" have a negative connotation here? Testing requires information gathering (exploration, experimentation) and figuring out the unwritten requirements, all within the context of the software project. And this can only be done by a human. Automated testing is merely checking the code. Real testing by QA and automated tests that are part of the code are both very important.
    I do agree that QA should be part of the dev team and work closely with them. On my current team, we have QA work with the product in feature branches that are deployed when a PR goes up. This ensures that anything that's merged into our main branch has been code reviewed and QA'ed and is ready to deploy.
    The dots analogy is OK, but a concrete example with git, GitHub PRs, code reviews, and some pseudo-features would really help drive the point home, unless the point was to keep this vague to allow teams to spin it in a direction that fits their team best.