slowly but surely I'll have Continuous Delivery one day. Thank you Dave. This is the way.
I think what a lot of people get stuck on with this concept of "going faster results in higher quality", is that there are multiple different ways to "go faster". When most people think about going faster, they are thinking of ways of going faster that do indeed result in poor quality software. They think, skip the tests! Don't waste time refactoring! Ship the prototype! These are ways to go faster. But, it's not what Dave means.
What we mean by going faster is picking the smallest bit of value we can safely deliver, building it, getting it in front of customers and end users right away, and then going on from there. We don't skip tests. We don't skip refactoring. We don't skip any good code-design practices. But we do skip anything that isn't needed for that bit of value, and we don't skip anything that is important for it. If the bit of value degrades performance, for example, then performance design was in fact needed.
But there's a big ol' BUT: picking the smallest bit of value, and delivering it right away, and recognizing what can be skipped and what cannot is HARD. In communicating about this, it doesn't help to gloss over that fact.
Thanks Dave! Something I got from this talk is that regression tests should be run in the commit phase, and are best done by lightning-fast unit tests.
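A minimal sketch of what that can look like (in C, with made-up names; just an illustration, not something from the talk): a pure function plus plain asserts runs in microseconds, so thousands of tests like this can gate every commit.

    /* Hypothetical commit-stage regression test: a pure function plus
       plain asserts, fast enough to run thousands of times per commit. */
    #include <assert.h>
    #include <stdio.h>

    /* Unit under test: no I/O, no globals, so the test is instant. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    int main(void)
    {
        /* Pin the behaviour down once; every future commit re-checks it. */
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(-1, 0, 10) == 0);
        assert(clamp(99, 0, 10) == 10);
        printf("clamp regression tests passed\n");
        return 0;
    }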
DevOps is a really important step that is too often neglected in small to medium (and even large) businesses; making small changes to software shouldn't be so overcomplicated. My manager's view of Docker was "garbage", purely because he didn't understand it. We weren't using proper tools and structure simply because the manager didn't understand them and wasn't willing to do things differently, even if it meant a significant improvement across all our work. Deployment was a nightmare, and git branches & environment configs were a total mess.
I've only been in the software / web industry for just over a year, though I've been coding for more than a decade. Regarding the actions at the start: I can say the biggest problems I've seen come down mainly to a lack of communication. So many things could be improved if people were able to communicate more, and the biggest roadblock is managers holding all the keys, so that everyone has to come to them to get on with their work. This is a recipe for disaster. Fractured hierarchies that leave junior developers seeking basic application requirements from the CEO or CTO should never happen when there is a line manager / senior developer in between. Too many meetings where both sides complain about blockers, with nobody actually trying to sort those roadblocks out, lead to lots of wasted time. The number of times I got directly opposite instructions from my team lead and from the CTO about our product during development just led to a general malaise and frustration. I see so often that small businesses grow but the managers do not delegate properly to intermediaries. The 'bottom rung', as it were, constantly needs oversight from the top rung, leading to burnt-out chiefs and deeply frustrated juniors / mid-levels.
I think it is interesting that the concepts of the Agile process have been deepened with data and an explanation of how it works. There is still a gap between good and poor programmers: even if speeding up the cycle gives poor programmers faster feedback, they will still need more of those cycles, whereas good programmers will need fewer. But this is a good conversation to have and to keep plugging along at; I think this is great content.
So now let me get to the nit-pick I have: this approach seems logical, and it is convincingly likely the way to go... but I've worked at a lot of companies and I can tell you those businesses don't embrace this. I've been around for 30+ years in this industry and the old model of coding still exists. It's really the weight of the legacy system... Too much would have to change at once, and the legacy code is nowhere near possible to rewrite to allow it to be tested: tangled-up, hard-coded, brittle code with the god include file that pulls every aspect of the project into every other part of the project, so you can't partition and test... I mean, if you asked me to deliberately make code untestable, I'd do everything I see in this code! I'm a lone wolf with no power; I point things out and it seems like I'm unhappy with my job or "rocking the boat"... when in fact I'm trying to raise awareness and help make things better.
Solution? What I think you need is to meet the development teams and their managers where they are today... then have a plan to bridge from there to where they need to be... Do you have ANY ideas on this? The massive machinery and heavy burden of the legacy code really is the impediment to crossing such a bridge to a better development process... The old process, I'm told (at several companies I've worked at), is "the only way we know how to do the work"... and any suggestion I make is met with resistance. I'm on the embedded engineering side of the industry, and I have to change their minds about something: they think the rules "don't apply to them" in this environment because it isn't Java, web code, enterprise software, or even a PC with gigabytes of memory and processor speed, etc. I know we have to be more careful about how we do the development when resources are limited, but my reply is still: "software is software, and the rules do apply!"
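For illustration only (every name here is made up, and this is a sketch rather than a prescription for any particular codebase): one seam that sometimes helps with a god include file is to hand a module the one hardware call it needs as a function pointer instead of letting it include the world. The decision logic can then be compiled and tested on a host PC, while the production build still wires in the real hardware read.

    /* Hypothetical seam: instead of #include "god_header.h" dragging in
       the whole project, the module takes its one dependency as a
       function pointer, so the decision logic can run in a host-side test. */
    #include <assert.h>
    #include <stdint.h>

    typedef uint16_t (*read_sensor_fn)(void);

    /* Logic under test: pure apart from the injected read. */
    static int overheated(read_sensor_fn read_sensor, uint16_t limit)
    {
        return read_sensor() > limit;
    }

    /* Host-side fakes standing in for the real hardware read. */
    static uint16_t fake_hot(void)  { return 900; }
    static uint16_t fake_cool(void) { return 100; }

    int main(void)
    {
        assert(overheated(fake_hot, 500) == 1);
        assert(overheated(fake_cool, 500) == 0);
        return 0;
    }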
One thing is the desired state or goal; another thing is how to get there. The most often mentioned approach (on this channel) to implementing CD in a legacy environment is to repeatedly ask the question "What blocks us from releasing more often?" and continuously remove barriers, be it a better build pipeline, automated tests, or non-outage deployment.
I think this approach can be applied wherever you are now, and it will improve your process with every step, even if you never reach anywhere near true CD.
@Unnamed man I haven't heard of that approach, at least not under that name. The company did a round of layoffs and I was shown the door after 12 years of service. I am in a much better place now... Thanks for the reply, and I will look into that method... although directly strangling that guy sounded appealing... LOL, I kid, that's not in my nature.
@Unnamed man Well, my work is in the embedded systems world and wouldn't be amenable to that strategy. But I can see how that would be a way to do it in a web-deployed environment.
Hi Dave! Great talk! Can you explain (maybe in a future video) how we can get a releasable package within 1 hour if we also have a manual test environment? In my mind, we can deploy to that environment quite fast, but somebody still needs to have the time to go through the manual tests, and there can be a lot of them. Love to hear your thoughts on that :) Cheers
The simple answer is to reduce dependence on manual testing. Manual testing has a place, but not for regression testing. It's too slow, and too variable. Manual testing is best used for exploratory testing which can happen alongside development, rather than as a gate-keeping exercise before release. I do already have a video that talks about this: th-cam.com/video/XhFVtuNDAoM/w-d-xo.html
I also have a new, rather good training course, that describes how to build the automated tests that replace the need for manual testing. courses.cd.training/courses/atdd-from-stories-to-executable-specifications
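As a rough, hypothetical illustration of the idea (the scenario and all names are invented, not taken from the course): an automated acceptance-style check drives a feature through its public interface the way a manual test script would, but it runs in milliseconds and gives the same answer every time.

    /* Hypothetical acceptance-style check: drive a feature through its
       public API the way a manual test script would, but automatically. */
    #include <assert.h>

    /* Minimal stand-in for a system under test: a login-lockout rule. */
    typedef struct { int failed_attempts; int locked; } account_t;

    static void record_failed_login(account_t *a)
    {
        if (++a->failed_attempts >= 3) a->locked = 1;
    }

    int main(void)
    {
        /* Scenario: three failed logins lock the account. */
        account_t a = {0, 0};
        record_failed_login(&a);
        record_failed_login(&a);
        assert(!a.locked);
        record_failed_login(&a);
        assert(a.locked);
        return 0;
    }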
@ContinuousDelivery Thanks a lot Dave! I really appreciate your answer and your videos. It really helps a lot! ♥
Hello Dave,
What would you recommend for projects under contract?
I've worked in the automotive industry for a Tier-1 supplier company for 9 years, and this is how it works:
During the quotation phase for a project, there is a small team of engineers (1-2 engineers from every discipline, like HW, ME, SW, MFG) who do the estimation for the whole project. They basically say "we will build this with X amount of people working for Y months", and there is usually pressure from the business to reduce the people and the time in order to sell it cheaper and win the quotation. Once the quotation is won, a team is put together to work on the project; this is a DIFFERENT group of people from the ones who did the quotation. And these teams work under a strict contract with hard-set deadlines. When, inevitably, the releases are delayed, the team is chewed up, spat out and shat on for doing poorly.
This happens every time, with every project.
Amazing speech. I understood it clearly, and I am trying to push my company towards this mentality because I've experienced that it's all true.
If you ever see this comment, I would genuinely and respectfully suggest you change the channel name to something else, because it's difficult to find your videos.
Very good talk!
Thanks.
Dave, after reading all your books I have a problem: I don't have anything left to read. Do you have any recommendations?
Well thanks! I did a video on my top 5, did you see that? th-cam.com/video/RfOYWeu5pGk/w-d-xo.html
Where can I see the Q&A portion?
Lots of stuff on testing, this specific video on the role of QA on CD teams: th-cam.com/video/XhFVtuNDAoM/w-d-xo.html
Automated Testing playlist is here: th-cam.com/play/PLwLLcwQlnXBzwEqy9R3odTJKURxfwqDXa.html
My Training course on this topic is here: courses.cd.training/courses/atdd-from-stories-to-executable-specifications
How do you do continuous delivery when your team is not continuously reviewing and approving?
There is a film on this channel on PRs. TL;DW: replace pull-request-based code review with pair programming. If code is written by 2 people, you can treat it as already reviewed. So pair programming is also a kind of code review, just more immediate.
I can't fathom how every single Goto; conference video has terrible audio. This must be intentional at this point.
It's the cheap headset mic
Typical underfunded low budget community effort
/S