Looking for books & other references mentioned in this video?
Check out the video description for all the links!
Want early access to videos & exclusive perks?
Join our channel membership today: th-cam.com/channels/s_tLP3AiwYKwdUHpltJPuA.htmljoin
Question for you: What’s your biggest takeaway from this video? Let us know in the comments! ⬇
This is why code really is hard to maintain: people leave and take their knowledge with them, but even your own code can become "legacy", because you improve over time. You wrote that console job that runs monthly a decade ago, and now it has to change because of government regulation. So it's not just churn that's a factor, but changes in knowledge itself. Keeping code as clean as possible at all times is therefore an actual goal - except for those who like to go fast without realizing their actions have consequences.
Awesome talk, I love this guy.
It's legacy code when it's written and even more so the day after.
Keeping code as clean as possible all the time is a neat ideal, but we can't achieve it in practice.
In his talk he mentioned 4,000 years of technical debt accumulated in 15 years.
He named the parallel work of multiple developers as a possible way for that particular scenario to occur, but I don't think it's too far-fetched that one developer, in one hour of development, can accumulate multiple hours of technical debt.
Of course it depends on what we characterize as technical debt, but what I'm getting at is that technical debt diverges: the more efficiently we develop, the more technical debt we accumulate, since the efficiency derives from the shortcuts we take, and every shortcut adds debt. That debt will likely hamstring us in the future. I believe the most productive way to develop is to take the technical shortcuts that provide the best trade-off between efficient development and debt accumulation.
So achieving zero technical debt, although an ideal, will never be feasible. What we can more realistically aim for is to set aside a portion of our development capacity to pay down just enough of the technical debt we produce to maintain a constant development speed. If we managed that, we would have achieved what I believe to be perfection. In practice, of course, we will all fall short of that goal and experience varying degrees of slowdown.
Definitely one of the best talks I've seen in the last year.
This encapsulates the programming experience so well. Software engineering is all about solving these issues. It doesn't matter much what exactly the product is - the problems are the same.
This is fantastic! What a wonderful breakdown of "what matters" in tech. Measuring, ordering and actioning such a complex topic in such a visual way can only come from a genius mind at work. Nobel prize vote from me!
Thanks!
Thank you very much, erikig, this is much appreciated! ⭐️
WOW!!! Wonderful explanation. Each and every word is so clear. Simply awesome!
Valuable lecture; it is a big challenge to write quality code.
I've been refactoring a code base all day today, making it more resilient for future development. While I was doing so, I was thinking about what could be done better to reduce refactoring in the future - writing code that lives longer.
I think that's not possible, mainly because when you write a new feature, you really don't know what it is or how to model and design it correctly - this understanding only comes after you have coded it. There's no point trying to write perfect code from the beginning; write what you can, then release, then refactor.
Offensive implementations are usually left alone because the rest of the code depends on them. I'd argue that the change frequency of offensive implementations isn't what we're after; I think the scale should be measured by uses of the offensive code - i.e., how likely it is to impact you when you're writing new code or changing existing code.
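A rough way to approximate that "uses" scale is to count textual references to a symbol across the source tree. A minimal sketch, assuming plain string matching is good enough; the symbol name and file suffix here are invented for illustration:

```python
# Rough "blast radius" proxy: count textual references to a symbol across
# the source tree. The symbol name and file suffix are invented examples.
import pathlib

def count_references(root: str, symbol: str, suffix: str = ".py") -> dict:
    hits = {}
    for path in pathlib.Path(root).rglob(f"*{suffix}"):
        try:
            n = path.read_text(errors="ignore").count(symbol)
        except OSError:
            continue
        if n:
            hits[str(path)] = n
    # Files that reference the symbol most are the ones a change will hit.
    return dict(sorted(hits.items(), key=lambda kv: -kv[1]))

print(count_references(".", "parse_legacy_invoice"))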
Agreed - obviously the metrics discussed are useful and insightful, but they could perhaps be more accurate regarding actual risk/impact. Great place to start though, and very interesting to see the examples!
Great talk, thanks
Thanks for the talk! I really like the idea of mining version control for data and combining that with other metrics to inform decisions on how to prioritize dealing with technical debt.
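For anyone who wants to try the mining half, here is a minimal sketch of a change-frequency analysis driven by plain `git log`. It assumes it runs inside a local git checkout; the twelve-month window is an arbitrary choice, not something from the talk:

```python
# Minimal hotspot sketch: rank files by change frequency mined from git.
# Assumes it runs inside a local git checkout; the time window is arbitrary.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--since=12 months ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# With an empty pretty format, each non-blank line is a file path
# touched by some commit.
changes = Counter(line for line in log.splitlines() if line.strip())

for path, count in changes.most_common(10):
    print(f"{count:4d}  {path}")
```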
What a great video, I got a lot of insights. Excellent job!!!
Amazing presentation!
What a great talk! Currently my team is planning to attack tech debt, this info is really gold!
I disagree with the metric of measuring employees by code contribution. While it may be true that losing a high contributor is bad, it's not true that losing a low contributor is acceptable - senior developers tend to review code or plan architectures, which wouldn't show up in this metric.
He also mentions the size of functions being a problem and implies that factoring is a good thing. However, this assumes the factored-out code is relevant to other parts of the codebase; when it isn't, factoring actually increases code complexity, because you've turned linear code into potentially non-linear code.
A 500-line function is a problem. Even if you refactor it into helper functions that aren't used anywhere else, it's easier to unit test the helpers and keep the main function easy to read.
@@TheBswan It might be technically possible to have some code that can't be refactored into helper functions because the code itself is doing one piece of work, but that seems pretty unlikely. And if you can refactor it into helper functions, you should - especially if those functions can be given specific names outlining exactly what they are doing and why; that way the main function basically just explains how the work gets done. I have seen this many times: people put a bunch of comments explaining each stage of a function, and the best refactor was just moving each of those sections into its own function and naming the function what the original comment said.
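A condensed sketch of that comment-to-helper refactor. This is a hypothetical order-processing example; every name and the 25% tax rate are invented:

```python
# Before: one long function with comments marking each stage (condensed).
def process_order(order):
    # validate the order
    if not order.get("items"):
        raise ValueError("empty order")
    # compute the total with tax
    total = sum(i["price"] * i["qty"] for i in order["items"]) * 1.25
    # notify the warehouse
    print(f"ship order {order['id']}, total {total:.2f}")

# After: each commented stage becomes a helper named after its comment,
# so the main function reads like the comments used to.
def validate_order(order):
    if not order.get("items"):
        raise ValueError("empty order")

def compute_total_with_tax(order, tax_rate=1.25):
    return sum(i["price"] * i["qty"] for i in order["items"]) * tax_rate

def notify_warehouse(order, total):
    print(f"ship order {order['id']}, total {total:.2f}")

def process_order_refactored(order):
    validate_order(order)
    total = compute_total_with_tax(order)
    notify_warehouse(order, total)
```

Each helper is now trivially unit-testable on its own, even if nothing else ever calls it.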
Measuring somebody purely on code contribution is a problem in general. I recall one time I spent five hours on a piece of code to remove one letter 'e'. I'm pretty proud of it, and most metrics wouldn't reflect how difficult it was to do.
@@davidolsen1222 I work with Java Spring, and it can take hours, sometimes days, to solve a problem that results in adding 2 dependencies, 3 annotations, and 1-2 config values. I purposely choose that over a broken implementation I could whip up in an hour or two, precisely because it's more maintainable.
Seeing the hotspots from the source control history is a neat idea and it would pick up some of my least favourite source files immediately. But in the interest of prioritisation of technical debt, you often want to focus on code improvements to prepare for a particular upcoming feature, which means identifying something _before_ it becomes a hotspot, so that the new feature built on top isn't slowed down by the pre-existing code problems.
At my employer we have a weekly meeting to discuss bugs and minor feature requests raised from the service desk. Inexplicably, this work is labelled "tech debt".
It could also be the reverse. A large file with frequent changes means that people managed to make those changes relatively easily, so the file might not be that bad. But when you open some rarely touched file to slightly modify it for your 2-story-point task, and after 10 minutes you start asking yourself about your life... 😄
You make a salient point. Hotspots at any point in time can be influenced by the demands of the software at the time they were measured. If those demands change, entirely new hotspots could all of a sudden light up. For example, an unexpected regulatory change could force you to refactor code that was supposed to be extremely mature. But even still, those are scenarios that you can't predict. I would say you're still better off going after the technical debt that has more probability of hurting you, and that is the stuff that is accessed most frequently.
It would make me question the file's cohesiveness though. Maybe it should be split into different files? Just something to consider, not a hard rule.
Exactly this. I don't think anybody would consider it lower-importance tech debt when, e.g., bank software is written in Fortran, COBOL, or even Perl that nobody understands, and you can't even easily find skilled (or willing) developers to work on it in case of need. That is just a time bomb ready to explode: you don't pay the cost of it every day, but all at once after 15 or 20 years. Also, the talk focuses too much on "code" and "code changes" (commits). Some tech debt is due to horrible, unscalable, entangled architectures, and there the complexity is not measurable at all via any kind of code analysis tool.
If you must change some code for your feature, that doesn't really say anything about the complexity - it's simply what you have to do to make it work. We could only reasonably say the complexity is low if we included a time metric as well, which is very difficult to do; and on top of that, every developer is different, so maybe even that would not be solid proof.
The long tail of unchanged code may harbor exactly those bad (or outdated) abstractions and assumptions that make working with the high-frequency code so painful and inefficient...
What is the name of the tool that measures tech debt for a repo in man-years? I would love to measure my own code bases.
Great talk, and I'd add one note to it. Adam mentions bugs as a consequence of tech debt. I'd take that one step further and say that regression bugs specifically are a nasty consequence, particularly when you've changed one part of the code base and a seemingly unrelated area starts acting up. Patches and hot fixes are particularly vulnerable because they may not have received wide enough testing. Good companies can keep regression rates under 10%; heavy tech debt can push that over 50%.
Sometimes we don't touch code that works because we are just too scared to break it, or we don't have enough time and resources to do so. So we don't change it, even at the cost of dropping useful features we'd like to have. This is the worst type of technical debt because it totally limits your innovation and business opportunities, and it can't be identified by observing the git history.
Suitability of software doesn't strictly decrease over time. Needs can evolve, but they don't always. I would argue that most software is built under the assumption of a growth economics model, and is required to continually adapt to meet the revenue model of the company selling the software, rather than the suitability of the software to existing customers. This can be pigeon-holed into the first "law" in this talk, but the subtleties of such distinctions are extremely important if we are interested in seeking truth (the "root cause", as described in the talk). They can strongly impact the *types* of modifications that changing requirements entail, as well as the volume of changes.
If you insist on focusing your lens to only include the code side of things, then you can't optimize the experience for existing customers. This is a reason that it's best not to bury your head as a developer.
Also, lol at assessing risk based on whether or not the developer is going to leave. People jump ship without warning, especially in environments where those developers are in the high pressure position of being strongly relied upon.
18:50 My company has one main repo, and in there we have methods with 1k LOC. Luckily most of these monsters are on the periphery, but some are right at the heart. We're slowly trying to fix this, but it is painful.
These metrics don't take into account that some files might have a low code change frequency BECAUSE they have so much technical debt. If nobody understands a specific file, nobody will dare to change it.
22:32 What would it take to turn your codebase into a legacy codebase - offboarding case study
Very enlightening
The code is usually a reflection of the company/management. I worked at a place with a 'you build it, you own it' model, but we were saddled with a 15-year-old project that literally used SQL injection as a feature. Everyone despised working on that code.
So we got the green light to write something that would scale 10x to replace it, and we wrote a reactive system. The work environment was so bad that the team of 5 devs quit within 6-8 months of each other, with no real knowledge transfer. I promise you the guys who inherited that reactive system despise it, since they won't have the benefit of a learning curve; that thing's in prod, baby!
Really thought-provoking talk! And it resonates, as a developer maintaining a legacy application.
I would suggest, as another metric, a more natural one - personnel interviews. Who is feeling the pain, and why? Who's ready to walk out and not look back? What is the reason for that and how does it tie to the code? Unfortunately, from my experience, the answer is more than just tech debt, it's architecture. And once you reach that point, it's difficult, expensive, and time-consuming to address it. But if you don't, you are circling the drain, with your most valuable assets leaving first.
This presentation is a visual representation of what most software engineers experience over the years when dealing with legacy codebases.
Should also consider where else these functions are being used, and the change frequency around those locations. If code around it changes a lot there is a possibility that there is a leaky abstraction.
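One way to approximate that "change frequency around the usage locations" idea is temporal coupling: counting pairs of files that tend to change in the same commit. A minimal sketch, assuming a local git checkout; the "--commit--" separator is just a sentinel string chosen for this example:

```python
# Sketch: temporal coupling - pairs of files that tend to change in the
# same commit. High coupling across module boundaries can hint at a leaky
# abstraction. Run inside a git checkout.
import subprocess
from collections import Counter
from itertools import combinations

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:--commit--"],
    capture_output=True, text=True, check=True,
).stdout

pairs = Counter()
files = []
for line in log.splitlines():
    if line == "--commit--":          # start of a new commit's file list
        pairs.update(combinations(sorted(set(files)), 2))
        files = []
    elif line.strip():
        files.append(line)
pairs.update(combinations(sorted(set(files)), 2))  # flush the last commit

for (a, b), n in pairs.most_common(10):
    print(f"{n:4d}  {a}  <->  {b}")
```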
Great talk, I like the methodology and signalling. The only thing I take issue with is that the example is Kubernetes, where the entire thing is complex 😂
Legacy is a gift from the past for a better future!
- No Developer Ever
Can anyone say which tools were used to create these heatmaps for git and the language-specific analysis?
The tool is called CodeScene.
Ideally, all code should be read through regularly. If you are using it, you should be continually revisiting it with review, reassessing the spec and tests, and continually improving or replacing it as the need arises. If you aren't doing this, then the old code that is the foundation of your ability to operate will be rotting.
And if you do all that, when do you have time to develop new features? Seems to me that's the thrust of this talk: since you can't refactor everything, use analysis to find the intersection of complexity and change frequency to constrain your refactoring to the highest value refactors.
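A minimal sketch of that intersection, combining change frequency mined from git with raw line count as a crude stand-in for complexity (the talk's tooling uses more refined measures, and the multiplicative score here is an arbitrary choice). It assumes a local git checkout:

```python
# Sketch: intersect change frequency (from git) with a crude complexity
# proxy (line count) to rank refactoring candidates.
import pathlib
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout
freq = Counter(line for line in log.splitlines() if line.strip())

def loc(path: str) -> int:
    p = pathlib.Path(path)  # the file may have been deleted since the commit
    return sum(1 for _ in p.open(errors="ignore")) if p.is_file() else 0

# Score: frequently changed AND large files float to the top;
# big-but-stable and small-but-busy files rank lower.
scores = {f: n * loc(f) for f, n in freq.items()}
for path, score in sorted(scores.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{score:8d}  {path}")
```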
Great talk. I hate to be that guy, but I am that guy... the B in debt is silent. It's pronounced DETT. The B was added by some pretentious Brits a couple of hundred years ago to prove they knew Latin, but it didn't change the pronunciation. Same for plumber (PLUMMER), doubt (DOUT), etc.
I just wanna say that the earliest writing systems are over 5000 years old, and that hieroglyphs are not the first known writing system. My man’s did cuneiform dirty and I won’t stand for it.
Also, “start of recorded history” and “invention of writing” are not the same thing, particularly when many of the earliest written documents are land deeds and contracts. And outside of the mundane stuff it’s largely personal inscriptions by rulers, AKA propaganda. Recorded history almost certainly begins with oral traditions, and I’d wager that writing was not used for “true” history for a long time after its invention. Not worth it if hardly anyone can read.
brb, rewriting our core app in Linear B
I don't agree with some parts here. Complex, bad parts of a codebase are not touched because no one dares to touch them, so they will not show up as active here - yet they are a problem, because lots of useless layers get written around them to avoid having to touch them. Conversely, a highly active part of a codebase means people can work with it, which means that part is actually good; I don't agree that refactoring such a part is a good idea, because it impacts a lot of people. Also, most technical debt comes from messy requirements and lack of business knowledge - you won't see that by looking at the code.
Nice
Interesting lecture with a lot of valid points, but you don't cover the most important one: you should look for symptoms to identify your technical debt. Where you find symptoms is where you most likely have to pay down debt - a part of the system that doesn't scale, is hard to maintain, or blocks you from refactoring other parts. The analysis you propose identifies parts of the code that are frequently changed, possibly by a few people, and those are the parts you suggest changing. But they aren't necessarily the parts you want to refactor. The parts you want to refactor first are the ones with a direct impact on the business. Technical debt is created during analysis and design more than during coding; you often won't even need to check the code to establish where your technical debt is, just take a look at the architecture. I'm not saying the analysis you propose is irrelevant - I'm saying it shouldn't be the main driver, and that other, more important factors should be considered first.
I need to move to Amsterdam now that I know about the GOTO conference.
Technical debt is a very powerful buzzword management system for prioritization of actionable, data driven analysis for complex, interactive systems visualization in a paradigm of cohesive conceptual directional enhancement.
Or some such shit.
\uj It's nice that he lays down some typology of code analysis, and he is asking the right questions. But yeah, the entire talk reeks of consulting-company BS. That methodology of finding hotspots seems plain silly to me; it is only adequate in a very few edge cases.
Also, cyclomatic complexity is just bullshit. Anyone can spot code that smells because of silly conditionals. Apart from that, handling a lot of cases is just handling a lot of cases, and handling fewer cases is just having fewer features. There are techniques for reducing the number of conditionals (composition, branchless programming, etc.), but I don't think that's the topic he was trying to explore.
It really seemed like a great, eye-opening lecture, but really it's just good communication skills. I can't think of anything directly useful being said there.
@@michaelmalter2236 You can always substitute the cyclomatic complexity metric for any other you deem more viable and still follow the suggested approach. His primary insight is to use code change frequency as an important parameter in deciding how to prioritize refactoring. While that is not revolutionary, it's certainly not BS.
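For instance, indentation depth is one cheap, language-agnostic substitute metric (if I recall correctly, Tornhill's own tooling uses a whitespace-based measure along these lines). A minimal sketch, assuming space-indented source:

```python
# A cheap, language-agnostic substitute for cyclomatic complexity:
# indentation-based complexity. Assumes space-indented source code.
def indentation_complexity(source: str, spaces_per_level: int = 4) -> int:
    total = 0
    for line in source.splitlines():
        if not line.strip():
            continue  # blank lines carry no logic
        indent = len(line) - len(line.lstrip(" "))
        total += indent // spaces_per_level
    return total

snippet = """\
def f(x):
    if x > 0:
        for i in range(x):
            print(i)
"""
print(indentation_complexity(snippet))  # 0 + 1 + 2 + 3 = 6
```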
A bunch of useless and terrible ideas that show we still don't understand anything about code development, even as we are being replaced by AI. Great job, humanity!
Useless. Nowadays, only one rule makes sense: Less Code, Less Tech Debt.
Rules don't change the existing tech debt. Only actions do. And what should you act on?
@@CA-oe1ok Rules = Actions. You can take action both before and after TD enters the codebase. Code review blocks 80% of future TD.
There's a real bona fide mathematical law which rarely gets the attention it deserves. It's called "the law of requisite variety", formulated by cybernetics pioneer W. Ross Ashby when computer science barely existed.
Lehman's pseudo-laws are special cases of this true mathematical law. Even using the base 2 logarithm to calculate the requisite number of bits for representing a known number of states is a special case of this law.
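To make that bit-counting special case concrete (the numbers here are just an illustration):

```latex
% Minimal number of bits b needed to distinguish S states:
\[
  b = \left\lceil \log_2 S \right\rceil,
  \qquad \text{e.g. } S = 1000 \;\Rightarrow\; b = \lceil 9.97 \rceil = 10.
\]
```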
So what is "the law of requisite variety"? It may be expressed in many ways, because it pertains to any communicative domain, but here:
A system (e.g. a piece of software, or a team) which should cope/thrive in/handle an environment (e.g. a market or a user base) must contain a model of *all* the pertinent variations in that environment, but no more.
If there is too little variety in the model, the system will fail to account for some states/conditions (-> unhappy users/customers). If there is too much variety (e.g. technical debt, or "greedy" modelling) the system becomes unwieldy and inefficient (-> unhappy users/customers).
"MVP" is agile jargon for a system whose complexity closely matches the target (sprint goal) environment - a system with "requisite variety".