I've watched a bunch of videos on this argument, but this is the first I've seen where it's all put together, and it really made me think about changing the services architecture at work.
Optionality == the 'Soft' part of Software. In short, make it as soft as possible (but no softer) if you want it to be readily changeable 😉
Disk storage is relatively free in operation, but there's a chance that you may be contractually obligated to move all your data to a new datacenter, possibly over a low-bandwidth pipe.
Also I wonder if there's some common ground between Extend Only design and Database Normalization Forms from the 80s.
Though I like this talk, I don't think I'd agree with most of what Stannard is saying here. Arguably, a ton of what technical debt includes is having to worry about past technical decisions when extending or modifying a given system. If I'm adding a new feature to one part of a system and need to worry about clients using 10 fields or options that are deprecated, that's a lot of extra work I need to do.
The real cost of append-only schemas is that your API surface keeps expanding, even for parts that are no longer used. You may end up in nearly impossible situations - e.g. API v1 was designed so that each item is its own order, but v2 allows multiple items per order, and it's impossible to serve that data in the v1 format. Figuring out what to do in these inevitable scenarios has a real cost that has to be considered.
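The v1/v2 mismatch described above can be sketched roughly like this (hypothetical field names; the point is that any adapter from v2 back to v1 has to invent data that never existed):

```python
# Hypothetical sketch (invented field names) of the v1/v2 mismatch:
# v1 assumed each item is its own order; v2 allows many items per order.

def order_v2_to_v1(order: dict) -> list[dict]:
    """Serve a v2 multi-item order through the v1 one-item-per-order shape.

    The only way to do it is to split the order into N synthetic v1
    'orders', inventing order IDs that never existed - which silently
    breaks any v1 client that aggregates by order ID.
    """
    return [
        {"order_id": f"{order['id']}-{i}",  # synthetic ID: already a lie
         "item": item["sku"],
         "total": item["price"]}
        for i, item in enumerate(order["items"])
    ]

v2_order = {"id": "A1", "items": [{"sku": "book", "price": 12.0},
                                  {"sku": "pen", "price": 2.0}]}
print(order_v2_to_v1(v2_order))
```

One v2 order comes out as two fake v1 orders, so "append-only" alone doesn't save you here; the shape of the data itself changed.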
Interesting talk, but it doesn't mention the greatest optionality tool at your disposal... Keeping the code small enough that if you need to make a change, you can.
For the startup example near the end, rather than taking time to set up event sourcing, extend-only schemas, and such, just keep your system small enough that if you need to, you can rewrite it entirely.
> Keeping the code small enough that if you need to make a change, you can.
Author of the talk here - how, exactly, would you propose doing this for any non-trivial product?
Edit: I suppose it's worth re-framing here - my advice is _how_ to think about keeping the system manageable and small over time when software inevitably grows in both complexity and size. Saying "keep it small" is obvious, but not easy in practice - hence my talk.
@@Stannardian Hey! Big respect, I certainly couldn't go on stage and talk about this stuff. I'm strictly limited to saying stuff in comment sections without backing up my claims haha.
I've worked on both large and small systems, though nowhere near as long as you have, but I've had a lot of success with aggressively deleting anything you don't need anymore.
Not building your code in a way that is designed to be extended, but building your code in a way that you could redesign to be extensible when that need arises (which may never happen). Some of my greatest struggles have been with code that was designed to extend infinitely - for example, manufacturing software that was designed to be extensible to support any material or process, and was then only used for one material with no post-processing steps for years before I inherited it.
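As an invented illustration of that kind of over-extensible design next to the straight-line code the one real use case actually needed:

```python
# Invented illustration of the over-extensible design described above:
# a registry built to handle "any material or process"...
class MaterialRegistry:
    def __init__(self) -> None:
        self._steps: dict[str, list] = {}

    def register(self, material: str, steps: list) -> None:
        self._steps[material] = steps

    def process(self, material: str, part):
        # Run each registered processing step in order.
        for step in self._steps[material]:
            part = step(part)
        return part

# ...versus the straight-line code the one real use case needed
# (one material, zero post-processing steps, for years):
def process_part(part):
    return part  # no post-processing steps exist

registry = MaterialRegistry()
registry.register("aluminium", [])  # the only entry that ever existed
print(registry.process("aluminium", "blank") == process_part("blank"))
```

Both paths produce the same result, but one carries an indirection layer you pay to read and maintain on every change.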
@@Keisuki
> Not building your code in a way that is designed to be extended, but building your code in a way that you could redesign it to be extensible when that need arises (which may never happen).
So I've gone down that road many times in startup land (i.e. lean startup / product MVPs) - that's a bet against yourself and your own success most of the time and it leads to emergency rewrites under a limited budget due to negative burn rates. I've also built new products in companies that were profitable and it's the same problem - if you're successful in actually acquiring users / customers you're going to get hit with immediate unknown unknowns that need to be addressed soon, not next quarter or the quarter after when you've had time to do a rewrite.
In a manufacturing system your domain is bounded by physics and the slow timeframes of logistics + supply - how likely is it that a manufacturing line run this way with these inputs is going to suddenly change? And if it did change, how quickly could it realistically change (i.e. literally updating the factory floor itself)? Those are going to be several orders of magnitude slower than the examples I used, all of which deal with pure software and thus can change much more quickly.
This is why I calibrated my talk with "unless you are reasonably sure that your requirements aren't going to change" - so you're in the edge case.
YAGNI applies to the code, not the architecture. Don't add features to the code that you might not need. Definitely keep your architecture flexible and open to change.
Remember too the principle of YAGNI (You Aren't Gonna Need It) in software development - it's often best not to add functionality until you're sure it's necessary
YAGNI is specifically called out as dangerous at 17:50 though, and he's making a very nuanced point here but he is correct.
Everyone knows that trying to anticipate future requirements is a fool's errand. YAGNI is a short way of saying "at best you will be wrong as often as you are right if you try to predict the future, and usually it will come out net negative".
The whole point of this talk is that while you cannot predict the future requirements, you can predict *that there will be* future requirements and that today, you don't know what those are. It's why people propose heuristics like "do the simplest thing that works".
But since a poor architecture decision can be very, very expensive to change, the talk proposes a different heuristic - "do the thing that best preserves future optionality" - and then gives some examples of what that means in practice.
So in some ways it's a repudiation, not quite of YAGNI itself, but definitely of some of the ways that people respond to YAGNI.
@@fang_xianfu To my mind, applying optionality to external software makes sense - isolate and abstract data access, messaging, events, etc., but don't try and anticipate changes to the business (domain) model.
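A minimal sketch of what "isolate and abstract data access" might look like (invented names; any real storage backend would slot in behind the same interface):

```python
# Minimal sketch (invented names) of isolating data access behind an
# interface so the storage choice stays swappable.
from typing import Protocol


class OrderStore(Protocol):
    def save(self, order_id: str, payload: dict) -> None: ...
    def load(self, order_id: str) -> dict: ...


class InMemoryOrderStore:
    """Trivial backend; a SQL- or queue-backed store would implement
    the same two methods and drop in without touching domain code."""

    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def save(self, order_id: str, payload: dict) -> None:
        self._rows[order_id] = payload

    def load(self, order_id: str) -> dict:
        return self._rows[order_id]


def place_order(store: OrderStore, order_id: str, payload: dict) -> None:
    # Domain logic only sees the abstraction, never the backend.
    store.save(order_id, payload)


store = InMemoryOrderStore()
place_order(store, "o-1", {"sku": "pen", "qty": 3})
print(store.load("o-1"))
```

The optionality lives at the boundary: the external dependency can change, while the domain model stays whatever it needs to be today.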
"If you think you gonna be successful, build it like you mean it. If you don't think you'll be successful, why are you in this business? Go do crypto or AI" 💀
Lol! That's gold, when the old client knows about new features... and so on... sorry...
Technical debt is WILLFULLY taking shortcuts with the intention of paying it back. What you’re talking about is being bad at engineering.
Not necessarily, you can make a bad choice (and incur technical debt) because the information (on the problem) you base the choice on is incomplete. Technical debt compounds when you keep on building on the bad choice after receiving new information, instead of straightening things out first. It's a slowly creeping poison that is not always to blame on willfully taken shortcuts.
The difference between software and mechanical or civil engineering is that in software a shortcut can be fixed and the cost is still relatively small.
Build a car without good brakes and it's very difficult to fix once it goes to market.
With software there are change costs, but hopefully nothing catastrophic.
@@thomas.moerman While I understand your and Aaron's point that some things requiring changes and fixes are discovered later rather than caused by intentionally taking shortcuts, that's not the traditional meaning most people associate with the term. What you're using is an extended one that I haven't actually run into before, and I've watched plenty of conference talks and read books touching on the topic. So I'm not saying your thinking is wrong, but it's not what the overwhelming majority of software professionals mean when they talk about technical debt. If this extended meaning becomes more popular, my guess is we'll end up talking about different types of technical debt, with each perhaps getting a separate term of its own in the long run. I do think you're on the right track here: it's easy to see that there's a much more nuanced way we could be talking about technical debt, because once you get down to the details there can be all kinds of variants of it.