"No code is perfect. The point of maintainable code is not to write code that can't have bugs. It is not possible to write code that can't have bugs. The value of maintainable code is writing code so that when the bugs happen, and they will happen, you can find them, and you can fix them, and you can write tests to make sure they do not pop up again." Somehow, was so satisfying to listen to this.
This is exactly what Erlang is famous for. Crash early, crash often. It literally has no exceptions, but you are forced to account for all cases and it's often programs run bug free from the first try. Rust achieves similar result from different angle. Rust compiler will beat the correct code out of you.
@@Tesmond256Exactly! That is how you get rehired as independent consultant after a lay off. Clean code should be marketed as a guide to job security. :)
16:44 "The value of maintainable code is writting code so that when bugs happen (and they will happen) you can find them and you can fix them (and write test so they don't happen again)" Loved this quote
how is that contradicts to the clean code? Clean code has a good rule when you need to split the long method into smaller methods, it says that all code within the method shall use the same level of abstraction and do one thing. If there is a method with much more than 4 LOC, but that is still talking on the same abstraction level and didn't loose the cohesiveness, it is still a good clean code. To validate that, we can write a unit test, if writing a unit test gets hard, the code is not clean.
Is there any book or course that goes over that in depth? I only have less than two years of experience and i find it almost impossible to achieve locality of behavior in most of my projects.
@@MykhayloS Arbitrarily saying "4 lines of code is too much" in a method is just.. well it's arbitrary. It should just be based on what the method is doing, are there any side effects, etc. some of these things you only understand with time.
@@MykhayloS The advice given in Clean Code is often self contradictory which results in Martin's various examples always breaking at least one of his principles, often the actually valuable ones, in favor of the others.
His description of finding a string somewhere in the codebase and working your way up, lines up 100% with my day to day. Polymorphism does screw it up. Happy to know I'm not the only one that starts every problem looking for some string in the codebase :D
same here brother. And if the app has localisation, I make sure to switch it to the original language (mostly English) so that the error string matches the actual string in code :D
Yeah. What really drives me crazy is when every single string in the entire app is constructed at printing time from a bunch of data tables, and `grep` never finds anything from either the logs or the user interface.
@@AdamJorgensen i18n is fine as long as the strings in the language files are whole and match what shows up in the logs or on the screen. Then `grep` or search can find the relevant line in the language file, I can map that to the key referencing that string, and look for that key in the code. What sucks is when the longest line in the language file is 3 consecutive words, but the average error message is 27 words, some of which are constructed from strings provided by the user or network.
I've rewritten several C# and C++ code in Ada and it changed the way I think and write code. It forces you to think really hard about your program's specifications. Before you can write a single line of executable code, you model your data structures in terms of types and their values. In turn, those values have constraints such as ranges, precision, number of bits, etc. Those types are grouped into "packages" and are included with your functions and procedures. When properly specified, it becomes very hard to write buggy code because the compiler checks your values at compile time and or runtime. The common fence post error and buffer overflow bugs simply don't occur when your types are properly specified. Those same constraints also serve as metadata, allowing the compiler to perform optimizations that would be impossible in other languages. The efficiency rivals C and C++ in speed and size. The resulting code is incredibly easy to understand and maintain, since it has a Pascal-like syntax and its specifications are built-in. You can revisit your code months later and immediately resume where you left off.
Haskell is also fun as it tries to make side-effects hard. Compiler is there more strict than Ada. It enforces that correct mindset that it is developer job to interpret requirements, and write them to formal language that is specification of the software.
I listened to Uncle Bob lectures - this (the book example) was not at all what it taught me. Book could be outdated or just locked to Java example. In reality clean code is a guideline and way of thinking, but not a replacement to thinking and design (like auditing your pricing calculations! Or at least logging) If you have a strategy pattern hiding somewhere, return with it a reason, or type.
Debate implies there's some merit to Clean Code. There isn't. The meritorious things in clean code get used to prop up all the bullshit in the book. The meritorious things are common sense principles that don't get presented like dogma.
Debates are about rhetorical finesse, not truth-finding. At least public ones. This romantic idea of "let the best ideas win" doesn't account in anyway for the reality that most of the audience very likely aren't domain experts and don't even remotely know the context of particular arguments. And of course, debates are infamous for all the derailing-tactics which have been developed - be it false analogies, strawmanning, gish galopping, etc. - makes them utterly useless and is the reason why we don't really do debates in academia, we write papers and long and detailed responses. THAT is the medium which allows finding of actual approximations of truth or at least finding the least wrong position.
Wait, I dont get it. In the last clean code example you were referring to, it's suggested to add the "special case" to the Expense class, instead of checking for the exceptional case outside it, why does this break the whole "don't make whack-a-moles" thing you mentioned? What's wrong with checking the private implementation of that class? If the error is in the expense amount, isn't it natural to then go check how that amount is calculated? Honestly, this entire thing sounded like someone who has a bunch of shit to vent, and just finding any excuse to vent.
Agreed, I came to the comments to try to understand why the last example was bad. To me it makes sense to encapsulate the logic in one place - sure it'll mean that changing the behaviour of one function impacts everything that calls it, but isn't that exactly what's intended (one source of truth)? Perhaps my issue with the video is that he's being very dogmatic on what I think is quite a nuanced and case by case topic. As with most things in life, I would have really appreciated a more balanced view.
Yeah problem with this explanation is no counter example is given for what the alternative he prefers looks like, I would personally do it completely differently (I suspect very different from the person talking too) but this part of the video is simply a complaint and not helpful to people who don't understand
Exactly what I thought. But still it's a bit of a headache when a factory is returning an object that implements an interface and you cannot identify at first glance wich implementation was used. I wouldn't scream that Clean Code is responsible for that... it'll happen everytime someone finds a bug in a method defined by an interface or an abstract class.
So true - supporting code in production has beaten out of me any intent of doing fancy abstractions and depth. I hope you continue to speak to this in plain principles.
Same here. If you’ve worked on large real world codebases you realize abstraction is not only mostly subjective but doesn’t actually serve any real need outside of gratifying the programmer who wrote it.
The underlying problem with all the 'advice' books is that those assume that by following certain practices a good product is going to emerge almost magically. It is as if we expected that pushing bolts and rivets in a specific way and aligning beams in another way would allow to build a bridge without ever seeing one. Call it 'clean engineering'.
Be careful, some self important people might resent the fact that you've described their identity as a craft rather than engineering. But that might generate enough traffic to get you views on your blog.
@drewsclues8625 I'd say don't bother blogging. If you want more than a handful of people to be exposed to your thoughts, start a TH-cam channel. I have multiple videos that probably each have more views than all the hits my blog has received in its lifetime.
@@asimpleguy2730 yeah it sure feels better that way, but I guess its up to the dudes cutting the paychecks to determine if they like more crafting or more engineering.
Thank you. I'm pretty sure you're the only person on the internet who has programmed. I've argued with so many people about the topic of how people get started. They always say it should take at least a month of a developer getting paid full time to sit and read through the entire codebase to understand it before they can ever make a single commit. And I'm always like what are you talking about. Someone is going to give you a very specific problem that you can usually find within an hour (yes, I'm being generous here) and in any sane environment, you've probably committed something on the first day.
I've worked over two dozen software contracts in my career. The client is almost never ready for me on the first day, and very often onboarding takes days or even weeks. Roughly 50% of the time I'm asked to read through the code and get a high-level understanding of it while I wait for someone to have a long enough break between meetings to do knowledge transfer. There is never good documentation. That said, in the rare case when the client is ready for me on day one, I get work done on day one. But I usually don't have the proper permissions or access to tools when I start. Instead I'm watching ten hours of training videos and submitting a bunch of tickets to get software installed because the client doesn't trust developers to do it themselves.
@@andywest5773 Thanks for expanding. Yeah I consider those separate issues. It's understandable if they aren't ready. The people I'm usually fighting against say that you have to understand the whole code base before even looking at an issue.
@@InfiniteQuest86 Ah, I see what you mean now. I've had similar experiences with devs who are new to the company/team and don't want to contribute because they don't feel "comfortable" or "qualified" when they start. I've never understood that.
I would say "probably committed something on the first day that your dev environment is up and running" - Some places I've been, I don't even have an email account set up for 3 days after I start.
I strongly identify with the idea that a software team should limit the set of "things" that create meaningless arguments. Clean code rules certainly fall in that bucket. And I totally agree that applying clean code on any appreciable scale on a real large codebase leads to disparate and hard to understand code. Clean code is part of a set of coding advice (possibly stemming from OOP in my opinion) which advocates for an abstraction-and-refactoring-first approach, which is doomed to fail because one does not infact understand the true nature of the problem at the outset (if indeed ever). This is a lesson I had to learn after being a clean code, abstracting zealot. HOWEVER, the opposite end of the extreme is also a problem. I've seen code written by senior devs which gives no thought to structure at all. This is code consisting of huge functions, where logic for new requirements is just tacked in the middle somewhere, without thinking if this is a well defined business rule or process or invariant which should be pulled and out and shared with an existing usecase. Then one gets bugs where business rules are not being applied consistently. This is still better than lots of little functions, because it is easier to trace backwards (or use a debugger and trace forwards) through big functions, but it is not ideal. Perhaps the ideal "book" is one which looks at the problem from both extremes, and explains why there are not hard and fast rules but rather tools in the toolbelt: sometimes you use a hammer, other times a screwdriver. But I also can't help but wonder if the only way one learns is through experience and a bit of humility.
True indeed. Separation of concern is very important and a major pain to deal with. Another common one I see is over-engineering/future proofing, large amounts of classes and concepts that are there for future ”what ifs” that never comes but now that code needs to be maintained. Its such a classic programmer thing to do, cant just solve the issue at hand but need to build something fancier that the next set of developers will have to deal with. Also having more than one way of doing something, usually a nightmare and unless it is absolutely needed to support both ways its not worth it. Another one is premature optimization, kind of falls under future proofing. But lots of devs love to do ”smart” things in the name of performance. In the end, the actual performance issue customers face are somewhere else and the cool optimisation is never useful only prone to errors and a nightmare to maintain
Depends what you classify as fancy. Some people would say having one inheritance hierarchy is "fancy" but I consider that pretty basic. It depends on the mileage you get out of it. That's where refactoring comes in. For example, is it worth investing the time to build a factory for connection strings and provide all the options for different DBs or just one DB up front in the hopes of it being easier to extend in the future? Or hack some quick and dirty thing and let it become a copy/pasted ball of mess with no real forethought?
@@jshowao Just do what is needed now, in the simplest most readable form possible, and if more db strings are needed later then reconsider the design, but always keep it as simple and readable as possible. Do not just make a factory for the sake of it.
I'm an ops guy who moonlights as an extremely junior backend "dev", and this was a refreshing take that was relatable and easy enough to understand. I mostly work in Python and write shell scripts, but the principles are the same, and there is no substitute for understanding the logic rules and control flow and edge cases that can break the whole edifice. The one good thing about Python from my perspective is that it enables or even forces you to think about the problem at a high level such that you're focusing on the principles of the solution rather than on the individual trees of the forest... but it is slow at runtime.
For me the clean code book was the first book that gave me a general guide on how to break an app into different layers and decouple them and on paper it just made sense. Ofcourse when writing production code all the things you mentioned can happen and will probably happen. So what i want ask is what is the alternative to this? Do you you just not obfuscate anything ever and you dont separate you code into layers? Is there some other guidebook that gives you a structure that you can reliably reproduce and it doesnt contain these problems?
Dijkstra's complaint was about the lack of locality; GOTO was just the tool that was most used to harm it. Tpday's code design principles are the modern day equivalent to GOTO.
Disagree, there are actually a lot of design principles that help a lot. Many of them, in my opinion, just aren't used properly. Not having any type of design principles is just a recipe for disaster and leads to spaghetti code. Working with code unarchitectured sucks.
what exactly do you mean by "you can't just simplify code" at 14:30? there is something i call "essential complexity". the raw logic, structure or math of the problem. i try to write code that does exactly this, not more. anything more just means more chances for bugs. if i need "why-information", i log it somewhere or encode it in the return type
Simplicity is a thing I value more and more when it comes to software. The architecture / the design / the code should be as simple as possible and only as complex as necessary. The goal is maintainability and I agree on your definition: Other devs should be able to understand and modify your code quickly. In my opinion that means in practice: don't over-engineer your code. Humans are only able to handle a certain level of complexity. If your software exceeds this level you're doomed. And don't hope for Copilot to fix it 😉
It's so good to hear all of the things I've been saying to basically every co-worker I've had for the better part of 2 decades coming out of someone's mouth other than mine. Thank you for this, Carl.
Your point about obfuscation is well taken. The main reason why I try to avoid object oriented languages, is because inheritance and instancing make tracking the execution or logic sequence, close to impossible. I am someone who needs to understand how things work, from the first step to the last; and I also tend to view the first steps or the lower levels, as having greater priority than higher. Most of what I know about programming, has come from FORTH. That is a language which inherently uses dictionary dispatch, or a scenario where subroutines can be mapped to numbers, and calling either one will execute it. The good news is that that is a method which I can port to almost any language I've encountered very quickly. The drawback with recursive or fractal composition, however, is that you have to be fanatically self-restrained, because if you are not, the level of complexity will become terrifying very quickly.
You're right I'm a beginner, haven't worked in many big code bases yet, and the advice "what goes wrong stays in its stack". But it still makes a lot of sense.
I thought he was going to discuss the limitations of the actual principles of clean code but all it was is a vague critique of scenarios that are more related to just poor code writing in general not an issue with something like polymorphism.
@paradoxicalcat7173 The beauty of programming is that there are many ways to go about it. Clean code never claimed to be the exact science of creating applications. It's a framework that provides architectural language that teams can use to communicate more effectively.
Three minutes into the video, and you stated one of my guidelines of coding, "Write code as if you will need to come back in six months and maintain the code." Most likely, you will not remember all of the details, and, most likely, you will be tasked to maintain the code since you are one of the last people remaining people who created or modified the code.
I think most people can agree that this is a significant consideration when writing code - problem is, it's way easier to say than to achieve. "Clean" code certainly isn't enough.
@NorthernRealmJackal The concept of an "instant expert" does not exist. The current culture ignores many years of effort and learning to achieve a successful result. Too many people would rather feel good from a smooth talker in their presence than have a difficult task accomplished the right way, especially when the initial plan needs to be modified.
whack a mole is so relatable. i once was on a project implementing a search of a product catalog and i kept going back and forth with QA about how the search would work. QA would tell me this one case wasn't working, so i would change to code to fix that case, but then another case would break. but i never had a list of all the cases required for search to work as expected. and the team had no consensus about what that list should be and what cases were more important that others. really taught why google is a billion dollar company. search is hard. assuming you can give people exactly what they ask for is an insane assumption.
@b42thomas > assuming you can give people exactly what they ask for is an insane assumption. Yep. This is the main reason I don't think AI will be "taking all the SWE jobs" any time soon. It's more about the people than the code, most of the time.
I love your take. It's been my innocent assumption from the time I was a junior to never ever follow any book from that author. In France, they treat these books as religious books, many developers and companies use these books as a reference and many jobs will have in their description: TDD, Clean Code, Hex architecture, Onion....whatever 😂 I hated that, from the beginning, I never saw the real benefit in a real-world scenario. BUT, I saw many problems it caused, creating abstractions on top of other abstractions without any human intelligence involved. They overcomplicate too much most of the time and ALL the "craftsmanships" will tell you that their code is better than yours. Actual clowns, they even tell you without any shame that software engineers at FAANG are not "real" software engineers because they don't use these books to write code. Even tho it's been admitted publicly on Reddit by FAANG directors that less than 1% of their engineers use TDD for example, and they're not even talking about the book. At least in France, they are unbelievably toxic, I don't know if it's the same everywhere. Arghhh 😡 Thanks for the video! I hope Primegeon will react to it.
I have spent about 70% of my career taking over a code base that other people wrote. Ill take a clean piece of code that has bugs over a dirty piece of code that "works" (note the air quotes around the word works!!!). I read Uncle Bob's books and took his classes and it doubled or tripled the quality of my work. And i dont do everything he says. I did it cafeteria style.
Same here, I don't understand why he trashed the book either, that seemed to stir things up. I believe the problem is there may be teams or people that are radical or blind follow recommendations like they are religious commandments. We need to apply common sense or balance according to every situation, no silver bullet.
The same thing happened to me. Having a clean, redable, undestandable code helps a lot with development, because you know where stuff are. You know how they behave because they explain themselves just by reading it, and they fit into properly designed patterns. Whenever I see defendants of "bad code", I feel like they're either too naive to understand how good code helps development or they're just doing it for the viewers. Of course, you can't follow every rule precisely, some SOLID principles even contradict themselves... but overall, it is a very desirable approach.
Yeah - I've heard about that from a couple of other people, too. I bought (paid real money) for a Final Cut Pro plug-in that said it would save me time with transitions and was well-reviewed (and was expensive). I guess that was a waste. Sorry about that. I'm still trying to find a workflow that has the right balance of production quality and production time.
Hard cuts are common, everyone is used to hard cuts absolutely everywhere and no-one bats an eye. Trying to hide cuts almost looks 'suspect' especially in interview pieces
This is a surprisingly nice argument that I agree a lot with. If only because it flips the script on people to write code that is easy to debug, not code that looks clean, which are often at odds with each other depending on an individual's idea of clean. I would only quibble with making changes to an excution path you don't have time to understand, as I believe when we're in a hurry and try to cut corners we always make mistakes and waste more time. Ideally your stack isn't that deep and miserable to traverse. I personally advocate for being methodical when programming and knowing when you need to understand the program better before making changes to it or diagnosing a problem.
Sure. The idea I was trying to get across is "you should try to design your code so that all the things relevant to what happens in that slice are contained within that slice" not "make some change somewhere in the slice of an unfamiliar codebase and check it in, and if it doesn't fix the bug or causes problems somewhere else, it's not your fault"
Excellent video! Sometimes when you're working on problems like this (daily for most of us) it can feel pretty isolated so this is excellent content to get yourself grounded again in knowing that we're not alone in the struggle. Subscribed.
It occurred to me not too long ago that when it comes to software design, and organizing code in a way that is easy to read and maintain, database designers figured it out 30 years ago. Simple entity properties go in a table named after the entity, and complex data and relationships between entities go in a table named after the entity and the property, or named after both entities. (Vehicles, Drivers, VehicleDrivers, etc.) I’ve started doing the same in my code, and it makes life so much easier. Functions dealing with Vehicles go in a module (static class) named Vehicle. Ditto for Drivers. For logic that needs to run to keep Vehicles and Drivers in sync, that code goes in a module named VehicleDrivers. It really is just that simple. Changes to one file only affect the similarly named files. Finding and fixing bugs is trivial. And the code is fast and efficient in both CPU and memory resources. I only use objects and interfaces in cases where it makes the code simpler than it would to do it with functions and structs. It turns out, you get to decide how complicated your source code will be. There are advantages to keeping it simple.
Yeah, this doesnt make a lot of sense, seems like a recipe for copying code a million times. A static class? How do you even instantiate objects or use your classes? Plus your vehicle should be composed of drivers, naming it VehicleDriver seems like a crazy naming convention. What about the vehicles tires or engine? You then create a class VehicleEngine and VehicleTires? Why not just create one Vehicle class that has an array of Tires, a Driver, and an Engine? That is 4 classes instead of 7 because the permutations would be crazy...
@@Id0nthavename I have done three names once or twice, which does start to get a bit ridiculous, I'll admit. I've never had to do four in a single class, but I can see where that would cross over into the absurd. At that point, I would probably just create a more generic static class, like VehicleEvents, and wire it up to handle all of the notifications from all of the other classes, and then call the appropriate methods. Or, just create multiple two name classes, like VehicleDriver, VehicleTire, VehicleEngine, VehicleFuel, etc. and wire them all up to handle specific events.
@@jshowao None of the code would be copied. Only code related to both Vehicles and Drivers would go in the VehicleDriver class. (Kind of like a table that joins Vehicles and Drivers wouldn't duplicate data from either table.) The reason that I would not immediately create a Vehicle class with all of the necessary information and methods is because it might be more design than is necessary. The only time I would start with a non-static class and creating objects would be if it simplified the design. For instance, if you have multiple types of vehicles. Or, for something like chess pieces, where each piece has different behavior based on its type. Trying to overload a single class with all of the logic across multiple domains tends to cause issues where you need an object to act a certain way when it is used by one component (say business logic), and another way when it is used by another (say, UI logic). I have just come to the point in my career where I prefer to use the simplest design to solve the immediate problem, and no more. That tends to make life easier when I need to maintain it after not looking at it for 6 months.
I like your subtle damning critique of OOP, which encourages the hiding of data. I think we can call it at this point. Hiding data and functionality inside of classes hasn't given us the benefits they said it would. It has caused more problems.
Digging into those books (Clean Architecture/Code) and trying to apply them, is a great exercice in my opinion. But it becomes bad, when you apply them like the holly bible. I worked on a project during months, applying dumbly clean architecture principles, and it was a total waste of time and over-engineering. BUT now, after having sorted the useful from it, I write my code in a 'light' layered architecture, and it's been great so far. Every programmer should learn from a lot of sources, but should not take any guidelines like an absolute "how to do things so it never fails". Because like you said, it will always fails at some point.
I would argue that, although you (and other thoughtful people) might have gotten something useful out of it, "Clean Code" on balance is bad for the industry and should be soundly rejected. What's happened is that many of the "verses" from "Clean Code" (and others) have become so ingrained in the industry that they show up on job descriptions and as interview questions - perpetuating the myth that they are holy scripture without fault. Most of the people who spout "That's not SOLID!" and "That needs to be broken up - it's too long or does too many things!!" haven't actually thought about how best to apply the maxims (if they've even read the book at all). They're just parroting what "Cracking the Code Interview" taught them to say, and the industry would be better off if we just turned everyone off of "Clean Code" completely.
@@InternetOfBugsMy issue is, okay, you've rejected it, now what? Your explanations weren't that elucidating either. Hopefully there will be some examples in future parts, because I didnt quite get what you meant by only code should affect this portion of the stack, but you then complained about the per diem thing being hidden away... I mean, its not really an error, its more a logic bug based on misunderstanding of requirements. Those rules are going to be buried in a class somewhere...
Great video - I think you're absolutely spot-on about what is wrong with "clean code". I don't think typography or naivety really play into the worldview of the author. I think the author believes that having a bunch of very short functions/methods makes the code easier to understand because each piece is bite-sized. This ignores the reality that getting an idea of what a whole process does ends up requiring a lot of jumping around. It ends up distributing and creating cruft around the thing that the code *actually does* in a way that prevents the maintainer from getting a foothold on what's happening. As to "hiding", I think that comes out of the belief that SRP and DRY will lead to more understandable code by dividing it up cleanly into chunks that can be understood in isolation. In reality, SRP and DRY lead to over-generalization and over-abstraction, which make a big-picture understanding of a process much, much harder. So basically, I think the author either seriously misunderstands how people's cognitive processes work when reading and maintaining code, or has a brain that works in a way fundamentally different from mine ;).
It wouldnt require jumping around if you contain it in a class and organize it in a way that makes sense so you arent having to edit 3-4 different files. Having a long method is just as bad in my opinion because you would have to jump around in that method once it reaches hundreds of lines. And you are honestly saying you'd rather look at spaghetti code instead of a bunch of abstractions? I mean, I don't think it needs to be as crazy as clean code suggests, but going the other way is an even bigger mess. I cant make sense of methods that are hundreds of lines long, I can make sense of methods that are 10's of lines long. Abstracted code that is well architected is highly flexible and testable. That is one of the major benefits to doing it. I mean, a great example is initialization code for a device. Lets say every device has a name, serial number, brand, a connect l/disconnect action, some basic setup stuff, etc. Would you rather write one parent class and reuse those properties and actions? Or would you write those properties and actions again for every device?
You don't have to do any of that jumping around if your functions and variables are named well enough that you can understand exactly what they are there for.
@@jshowao your example at the about device initialization isn't really exclusive to "clean code" that's a basic use case for, and the entire point of, OO programming. No one is trying to say don't even use classes. The issue is when you abstract to the point of things like a class full of methods like one of the book's pages in the video: SetupTeardownIncluder() includeSetupAndTeardown() includeSetupAndTeardownPages() includeTeardownPages() includeSetupPages() includeSuiteSetupPages() ..and so forth I fill with rage when I see method names this brain melting. You've abstracted to the point where the English language is longer adequate to describe the nuances between each step of what should really all be a single function. Once your method names are includeIncludingIncluderInclusionSetupPreInclusionLoaderPrechecker() you've gone too far. The above is all contained in a single class yet is nearly as maddening to unravel as having to jump around between files. So it's not about whether you have separate files or not. I honestly can't understand how that is easier or better to jump around than a page long method. At the end of the day you have to program the things, it takes that many lines to accomplish something. If a task is 50 lines, it's 50 lines whether you tuck them each neatly into their own little method beds kiss them night night, or not. Sometimes code is complicated and the best or only way to make sense of it is step debugging. Abstraction on its own doesn't help with, or shouldn't be used for the sole purpose of helping with understanding.
@@markt1964 I'd argue that if you're looking for a bug, you _do_ have to jump around, because regardless of how well a function is named, the bug could be in that code. But I'm not arguing for extremely long functions. I'm arguing that "short = good" is not a true as a general rule, because jumping around forces the maintainer to keep pushing to and popping from a mental function stack. Sometimes that's the right thing to do, and sometimes it serves no real purpose but makes the code harder to understand.
@@jshowao I'm not arguing against abstraction altogether. I'm arguing that SRF and DRY lead to _over_ abstraction. I would rarely call a single interface over-abstraction (Unless there was only one concrete class implementing it. Which I have seen. This week). But even small levels of abstraction can create difficulties. An example much like the one you cited would be a tabular reader/writer. I want to read from a bunch of things like a CSV file, a database table, an Avro file, an Excel file, etc., etc. The reading and writing functions can probably look the same, but what about the initialization? A CSV file reader needs to know the delimiter and the file location; a database table needs a connection string; an Excel file needs a location and a sheet name; etc., etc. So maybe the abstraction for reading/writing is justifiable -- but trying to do that for initialization creates a leaky abstraction. I've come to believe that abstraction is best viewed as a "necessary evil" rather than a simple virtue.
Can you please elaborate more? With maybe examples and stuff like that? I use clean code a lot, daily, but at least the way we do it seems to be really good. I hope you make more videos covering this in order for me to get better at what I do. What I'm worried about, is that I am confusing clean code with other subjects, and it got me confused.
this channel name is slick asf whenever i get eaten by bugs (mostly logical ones in algorithm design) somehow the name pops into my head , people worry about clean code , i am using polymorphism with C right now to implement a bit of OOP and i can`t even understand my own code looks worse than assembly (it appeals to my old boss) IT`s A JUNGLE OUT THERE & You are the pray !
I understand your frustration, but I think the focus of your video/rant says something about the kind of work you _usually_ do? I may be wrong, since this is the first time I'm watching one of your videos. You're talking exclusively about bug fixing, but the other side of maintainability is how easy it is to extend a system with new functionality. These two aspects, debuggability and extensibility are opposing forces in my experience, so what makes one easier usually makes the other harder. Once again, as with everything in software, it's about trade-offs. The discussion would be more nuanced if we took extensibility into account (some might even want to bring in performance as well). But... sometimes we have to let off some steam at the end of the day and from that point of view I can understand why you made this video.
My professional opinion is that extending buggy software just makes more bugs, and that extensibility should take a back seat to debuggability/quality. That's a minority opinion - most people want more features and worry about bugs later, which is one of the reasons why the slope of the CVE count curve is getting so much worse (see this video: th-cam.com/video/U-IhIqmCHlc/w-d-xo.html )
Preach. I was excited when I opened Clean Code for the first time. I was skeptical after I read a couple dozen pages. I put it down after I read Uncle Bob’s description of the most beautiful program he had ever seen, where every method had only one to three lines in its body. It was then that I knew that Uncle Bob had never written software that anyone actually had to use.
Your expense report thing reminds me of something that happened to me a few years ago. I was tasked with writing a new client for a Contract Bridge app. The server doesn't have a "legal moves" route, and when sent a move by the client, responds simply with an updated game state or a "no". The "no" either means "there was a server error, please try again" or "that's not a legal move, please send a legal move". I didn't know how to play Bridge at the time, and here I was needing to write a client that can infer legal moves from the game state, so when building out the rules system, I created an "explained move" structure, that not only encodes the move ("I'm playing the 9 of hearts") that will be sent to the server, but internally explains the legality of the move ("east is the only player to have played, so it is my turn, this hand is spades, but I don't have any left, so I can play any suit, I have the 9 of hearts, so it is a legal card to play now"). During a code review I was told that this might be a little unnecessary, as long as my code can produce legal moves, then do we really need a tagged structure that justifies itself? Guess what turned out to be really useful when debugging a weird edge case and even ended up finding a bug in the server? Players had been complaining that in some weird cases they were blocked from playing legal moves, and it was attributed to a UI or networking issue in the old client. When the bug happened with the new client, I had a log of the why the client thought the move was legal, and when checking the server, they found that there was a weird off-by-one error (IIRC) that caused the server to incorrectly refuse certain moves. When people talk about "null" being a billion-dollar mistake, they often focus on the problems caused by propagating null values around, but I feel like the main problem is that "null" doesn't carry any useful metadata about what went wrong. You call a "Thing.fromJson(filepath)" method and you get "null". Was the file path wrong? Was there an I/O problem? Is the file not text? Is the file not valid JSON? Does the data in the JSON not represent a valid "Thing"? Is a field missing? Is a field the wrong type? Is a field outside of expected values? Figure it out nerd, here's your clue: null
While I do agree with the sentiment that clean code is harmful to the industry. I also feel like the example you gave at 13:31 is not entirely clean code's fault, but actually, lack of logging. If the class is returning a PerDiem sub class, which is treated as a special case, then logging it would have been the right choice instead of straight up calling getTotal. But anyway, I do agree with you. Those 3 liners take me back to the old goto days where, when someone got too clever for his own good, you had to jump 17 times throughout the file just to understand what could be a single function call. It is what makes assembly harder to read as well since you are constantly jumping to offsets instead of using higher level constructs. Even if you give those offsets good names, like clean code dictates, after the third jump you'll have already forgotten what was the name of the first one.
loved the part about suggesting that features should be "vertical" as if in, the code of said feature be isolated from other features such that if you have to make any changes or fix any bugs you don't need to read or understand most of the codebase but just read the "vertical" part of the code related to that feature (it also leads to less unexpected changes aka change feature A, feature B randomly stops working)
Excellent. "It is not possible to write code that provably does not have bugs." Some people really need to tattoo this someplace. If you come up with a system that allows you to write arbitrary code that is provably correct, you could use that system to solve the halting problem. Which is provably unsolvable. Therefore: all code has the potential to have bugs in it, no matter what you do, what methodologies or coding practices you follow, what kind of tests you write or how you organize it etc. etc.
How i explain wacka mole is ... "seperating the puppies does not mean cut the puppies into little pieces and give me all their paws. Its the same with code" this mental image is mortifying and gets the point of what something being "whole" means.
I seriously love your content. New to programming, 9 months / one course in. The way you communicate clicks - even for me as a beginner- with me much better than most TH-camrs.
I’m currently working in a project where the project leader is very found off SOLID and Clean Code (tm) and I definitely agree with your message that it tends to make it harder understand any issues. To be fair to clean code, the project is written primarily in FORTRAN (I work at a US National Laboratory doing scientific computing) which makes things harder (no standardized error handling, tons of compiler bugs, etc.). However, just to understand one error messages I have to read 20 different files in 20 different directories. While I have developed an intuition about what files might be the problem given a stack trace it still takes a lot of time to debug it. In my experience, Clean Code (tm) and the SOLID principles break locality of features which makes the code a nightmare to debug.
OK, so you appeared in my feed first time and wholeheartedly agree with what you say here. Now I want to watch your 10 books video, but there's no link to be found... I don't really know if I'll ever get to go to your channel and loom for it
@16:40 - this - this is what I strongly believe too, after 20yrs of writing software, it's not to get out of all bugs (impossible), but to be able to tell, why the bug happens and fix it quickly
I wouldn't be too dogmatic about clean code. These books provide you with examples that might help you in one situation but not in a different situation. In the end it's an iterative process. Leave the code you are working on cleaner than you found it. Sometimes it just needs time to come up with a better solution that fits the context. Also test the error handling. Then you see if you get useful log reports or stack traces.
More videos explaining ideas and concepts on how to code like this would be amazing. Thank you man, honestly!! We don’t need more “coding BS tutorials”
I've always found that the best code from maintainability perspective is code that is descriptive of what is being done instead of code descriptive of how it's being done. I.e. code that is declarative-ish. When you know (easily) what is being done in a piece of code you will immediately know if what interests you is in there or not and where to look further. I.e. you will easily navigate the code. So this is basically my "clean code" criteria. P.S. "no code" is indeed perfect.
Great video, I really enjoy this content. I decided to learn to code back in 2018 and as a noob with no direction in the modern landscape it's quite a wild experience but 2 things I wanted to say. These rules on clean code I think just added to the clutter of learning about code/software and by that I mean it was more noise in the environment then I think was necessary. Idk if there anything you can do about that its just process I guess, but the biggest hurdle for beginners especially those working alone is making sense of and getting comfortable with this foreign environment and any clutter can be quite harmful imo. I read a handful of books about agile development, pair programming, etc and job postings say it's a requirement to understand/have exp so you think its important but later you learn it was mostly crap and anything of value was kind of common sense lol.. Which brings me to the last thing I wanted to say was this content is great for those who maybe have had less exposure to the industry to gain perspective on reality rather then some sort of mystical ideal. Even just listening to how you would walk through a code base is a nice confirmation in a way or when you watch Prime code and your like okay I got this xD lol
i'm normally skeptical about people disavowing clean code, abstractions, etc. but, i'm also not familiar with uncle bob's trademarked "clean code". i have my own definition of clean code. but based on the code examples from the book you showed, i'm also not a fan of uncle bob's clean code, described in the book. i avoid inheritance altogether, favoring composition, and seeing the sql example where each type of query has it's own subclass...that would definitely be a nightmare to maintain. i actually favor shallow abstractions/object graphs. i have the book on my shelf, never read it. but now i'm probably going to read it just to know how much it diverges from how i design/structure apps.
A maintainable piece of software is like a maintainable car. A car that lasts for decades is one you can easily maintain or fix once something goes wrong. Maintenance protocols are simple and easy to follow. Every system is easily accessible for any skilled mechanic who knows the procedure and parts are easily available (no black boxes). It's also a fault tolerant system, the car still runs if something stops working, no single points of failure.
Some very good points, definitely gonna have to check out more of your videos, I feel like it's going to help reinforce my studies in a positive way. The idea of dealing with code that has abstracted and encapsulated key return decisions like that, where you were talking about returning an Object with a total receipt amount or a PerDiem amount sounds like an easy trap to fall into especially for beginners like myself. I also guess this is why pure functions are important, so that you aren't mutating variables in a private function that might effect other portions of your code or project. That may cause a whack-a-mole style bug that you mentioned. Especially if you have created a lot of impure functions with non-local variables that are mutable within the local scope of the function. That sounds like maintainability hell.
Straight to the point. I'm currently leading a small team of a proprietary platform developers and once we defined to follow our own data oriented and procedural pattern it was like magic, less and less bugs, less dificult to find bugs when they happen, less difficult to extend features, easier to write tests and so on. "Clean coders" are like a cult and this book is like the Ten Commandments of them, lol.
Unit tests exist for something. They can be used to test particular locations of your code so that you don't have to trace everything or even running the entire program just to figure out why a particular portion is failing.
I can agree with you critic for example mentioned above. However, I think you forget mentioned what this is not fault of polymorphism. These entities are valid business objects. It is more of a problem, how this entities were instantiated. If you talking about service locator, this problem will hold. But with dependency injection or strategy pattern it is different story, because you can easily trace creation of instances of objects. Also the problem could be solved by using union types which presented in languages like Rust or F#. Because you can't get value of the type, unless you explicitly match on them.
A recent pet peeve of mine is when frameworks or things like IoC containers cause a break between the code's entry point and where it does something, so that you can't go from main and follow the code through to where it outputs something. Ideally your code forms a graph and you can traverse that graph and understand the flow of how it works using just the "Go to definition" and "Find callers" editor features. If you have to start using ctrl+f to try to figure out where something comes from or where its going it becomes very frustrating very quickly.
Yeah. And what's really, really annoying is when they throw a queue or a Command Pattern in, so you see the thing get put on the queue, and there are dozens of different places that pull things off the queue, and it takes forever to figure out which one(s) are relevant to any given input.
Hey, thanks for the awesome video! Just a quick suggestion - the transition effect between cuts can be a bit jarring at times. Maybe a straight cut would work better? No worries though, still loved the content!
Yeah - I've heard about that from a couple of other people, too. I bought (paid real money) for a Final Cut Pro plug-in that said it would save me time with transitions and was well-reviewed (and was expensive), and used it for the first time on this video. I guess that was a waste. Sorry about that. I'm still trying to find a workflow that has a good balance of production quality and production time.
I work on tech stacks 20+ years old. The biggest problem I face in modernizing and bug fixes is that there are class inheritances hierarchy’s that are so tightly coupled that half the stack uses them. Fix one, break another thing.
A lot of bugs are undoubtedly avoidable. A lot of bugs are unavoidable. That’s why there is no rule for coding, but that doesn’t mean we don’t have to follow certain principles, considering both scenarios. What do I know? What if things don’t go right?
I have been brought into a project where they say here's the code go look at it try it out figure it out from top to bottom. When that happens you know you're in for a real surprise because the code is almost always just incomprehensible. I find that it basically is a sign that they have not organized their code in a way that is bite-sized or something you can follow without understanding the whole thing which means it's an all-or-nothing spend 6 months looking through really really bad code. The last time this happened to me I quit only a few months in and went to someplace sane.
Far and away the worst codebase I've ever worked on (in terms of usability for the customer and scrutability for the developer) had adopted the clean code principles. It was insane to reason about. I lasted at that company for a month and had to leave. I'm not a job hopper but it was an exceptional circumstance. There's so much advice in software engineering that reads just like a get rich quick scheme. "You just need to apply this trick to write good software, buy my book!". While I don't (entirely) think it's intended as a scam, it really feels like the advice in these books is overly simplified because any book with legitimate advice offers too much uncertainty, which nobody likes. Sometimes hard problems are hard, sometimes there are tradeoffs, use good judgement, exercise good taste are all axioms in software that I stand by, but nobody will buy my book. Weird.
no matter what yours belief is in abstraction or code structure, dont just eat up exceptions, no magic, log them at least, document functions if closed source. i also heard somewhere that design patterns were necessary only because OOP introduced complexity when functions could be enough for modularity and locality. given the bottoms-up troubleshooting flow you mentioned, its worth the discussion.
Not necessarily OOP as a concept, but they were (consciously or not) a product of the deficiencies of the OOP languages of the time (Java and C++). See blog.plover.com/prog/design-patterns.html for one.
I think you're a little off on that example with the "per diem" stuff. At least from what I saw in the screenshot of that code, it was catching an exception and assigning "per diem" value. The problem with that is that the code uses exceptions for flow of control. Exceptions have to be used for errors/exceptional situations (it's in the name), which is something that is not really foreseen. Decision on whether to use "per diem" or the other technique is a normal business decision, so it shouldn't be made based on an exception. That's the real problem with that code. But isn't this rule on exception I just quoted part of "clean code"? (I don't know).
Well dang it. I bought some of the books you recommended last time along with clean code and clean architecture. I am confused about how to learn the big picture of major software applications and how you even learn when and how to implement these so called design patterns in a helpful way. Everything I find on the internet seems to be unhelpful in my brain as to how these giant applications work and how these design patterns allowed them to scale.
Yep. In my opinion, your best bet is to write code that makes the most sense to you. If you have to use someone else’s code, write your own code to call that code, and then call your code instead. Having a clear understanding of how your code works is essential to maintaining it.
Do your own little projects in a language you need to work in, push them until you notice issues with how you've written things and more, necessarily reach debugging, experiment with various ways you can organise code, and that is probably the best method of learning how various abstractions can help or harm you (for example by taking too much of your time) without practicing "on production". This way if you ever need you'll be able to argue why you think some way of doing things is better than other with code examples and some stories of debugging sessions. All of this really shouldn't be the end goal but this is more or less how I think you can start feeling like you are "on that level". Many of those things are very much language specific in my experience. IMO, scaling an application is a process that will be just different for each organisation and there's a lot to consider on nontechnical side that can determine the success. How big an application grows depends greatly on how much of resources can someone throw at it or certain aspects of it and that's a matter of project management, business planning, etc. You won't be able to learn much about those studying code. Code maintainability matters but not in void. It matters because it's a risk factor within a project. General advice: Don't get too hang up on learning. Gaining knowledge should be just a step in producing or achieving something valuable. Set a clearer, more precise goal and then it'll be easier to decide on the path. Simple joy of coding is a value too.
@@adhaliannaI really appreciate your advice and I will work towards that. I have a hard time staying focused on little task or breaking things down into little task for a larger project idea but I think working on that will probably go a long way to learning more. Thanks.
You're overcomplicating it. You're worrying about hypothetical "major software applications" that you haven't and won't see until you get a job working on one. The thing about them is they are all unique to the institution maintaining it. You can't read any book that will prepare you for a specific project. You have to practice programming, and look at a lot of code (like anything and everything open source you can find, for instance) until you feel comfortable enough that if and when you do jump into a large software project you can work out for yourself how it works. That's all anyone ever does starting a new job. On top of that, you're never going to be tasked with understanding the "big picture" of a major software application. Unless you're a senior dev or architect, your job is going to be "here's 5 bugs go find them" and you're going to be worrying about at most a few functions a day. You will learn the things you need to learn when the time comes, don't worry about it before then, worry about being a productive coder which means simply doing, not reading.
I wish there was an editor, that kind of has a large canvas to move and zoom around in, and displays all pages of the codebase. Then marking any function will also draw arrows to other page locations it calls, and/or an arrows it was called from. The program could order the pages as to have the fewest times arrows cross other pages (thus ordering the code into meaningful clusters). With a tool like that it would be much easier to visually jump around in a codebase and understand its structure, but also be able to see details by zooming into the code.
me after switching from a project written in C, to a project written in C++, where everyone is hyper-focused on "clean code"; trying to find which actual class object is being used because it's abstracted away behind an "interface" class: 🤢🤢
That's not great experience, but oftentimes when some tires to follow rules from clean code for the first time, they fail. It's not an easy skill. You don't hear amateur piano players say "i tried two hands play, and it sounded awful. I'm gonna stick to one hand". It's just a skill which takes time to learn to do properly.
Would you say that the stack you talking about is like Vertical Sliced Architecture (VSA) where all code of a feature a bounded as context? This is my take-away. I'm programming since 20yrs and started with Java. In the last 5 years now, I'm unlocked from this disaster of clean code, but stuck on how to organize my project structure better to avoid switching between too many files and folders in debugging or data-flow sessions. I read a lot about data-oriented programming and separating data from functions. I like this approach a lot. I read Grokking Simplicity from Eric Normand especially the part about architecture and layering code wtyt?
I'd never heard of "Vertical Sliced Architecture" before. Having looked at a couple of articles now: "Layered architectures organize the software system into layers or tiers. Each of the layers is typically one project in your solution. Some of the popular implementations are N-tier architecture or Clean architecture." (source www.milanjovanovic.tech/blog/vertical-slice-architecture ) - that seems like a horrible idea. As for "how to organize my project structure better" - I don't think there's an a priori answer to that. It's largely dependent on what tech you're using and what you're trying to accomplish.
@@InternetOfBugs Thank you for your honest answer. What you describe above is the horizontally layered architecture. (Link: 404) The VSA is analogous of what you described in this video about the concept of the bounded context when fixing bugs. That bugs should not influence other parts/modules of the code. VSA is like having feature modules instead of layered modules (UI-, App-, Core- logic should be separated in its own modules like the n-tier architecture "clean architecture). Where instead a vertically layered architecture, use cases/features are completely isolated in their respective own project modules instead. I find it often very hard with code navigation and hopping around files and folders when the architecture is horizontally layered.
@@InternetOfBugs Thank you for the hint. So, we are on the same boat? Horizontally Architecture ("Clean Architecture") is horrible and VSA is better maintainable?
@@PriNovaFX Yeah, that's not what I was getting at. I'm just talking about the slice represented by the call stack (or stack trace) in the module the bug is in. It *could* cross modules, I guess, but that seems overly complicated unless there's a good reason (like Conway's Law).
I've worked on the code you've described at big companies. Right now working on the complete opposite where everything is contained in massive nested if/else statements and I honestly don't know which kind of codebase is worse to work on.
06:00 This is one reason why I dislike OOP. While not limited to OOP, that style strongly encourages making lots of objects with generically named methods. This makes it difficult to do productive searching within a codebase because you don't have a unique identifier for a particular method. Instead you get lots of irrelevant search results from other methods which happen to share the same generic name. More generally to optimize for searching I tend to be fairly strict about how I name things within code so that each kind of thing always has the same name (perhaps with a different prefix or suffix to disambiguate when there many things of the same kind in scope at the same time) and each function name is reasonably unique. Then I can search for that thing and I will always get every occurrence of that thing (and ideally, nothing else).
The thing is. OOP works with the right IDE. With Visual Studio or Java stuff, the IDEs have proper navigation tools. C++, which is often used for more critical infra, lacks that sort of tooling altogether. CleanCode makes the code slightly easier to navigate when you’re not using the IDE but if you have the right environment, it’s trivial to navigate and localize the context. C++ is earths and bounds more reliant on CleanCode because it has much worse tooling and debugging, especially for large codebases.
MynameisBrianZX Those are kind of two different things. Whackamole is when a developer makes changes in one part of their codebase and it breaks something in a different part of that codebase. So, simple example, the username box on the login web page is not lined up with the password box, so the developer makes a change to the CSS to shift the username element twelve pixels to the left and that fixes it, but that also shifts the "Welcome, ${login_name}" header at the top of all the pages 12 pixels to the left, and so then someone has to go fix that. And then when they move "Welcome, ${login_name}" twelve pixels to the right in the header, and now every HTML-formatted email that gets sent to a customer cuts off the rightmost 12 pixels of every customer's name in the "Dear ${first_name} ${last_name}" greeting. Dependencies are different. First, dependencies aren't (or shouldn't be) routinely changed to fix application bugs. Two, the expectation for dependencies is that, when a dependency changes, all the parts of the application that use that dependency will need to be retested. So it's not really a surprise. They're kind of logically "below" the code the team is generally working on, rather than "over there." The main thing is that, with whackamole, there's no reason to believe that, in a reasonable application, a change to the login page would also make changes to all the page headers, or a page header change would break the email greetings. Those things don't seem they should be related. So it's an unpleasant surprise, and hard to plan for. Also, it tends to cause chain reactions, so it happens over and over. But when a dependency changes, there is a reason to believe that other places could break, so you'd expect to (and plan to) test them, so it's not the same kind of surprise, so it's a different category of problem. Plus, once a dependency is upgraded, and whatever problems that caused are fixed, that's generally the end of that issue. Does that help?
@@InternetOfBugsYes, that helps a lot. To sum it up my own way, whackamole is unpredictable and widespread due to reckless design, whereas distinct modules interact through known, logical, and ideally few ways.
@@MynameisBrianZXyes. But I should be clear that the way whackamole gets its name is from the seemingly never-ending nature of problem after problem reminding people of the carnival/arcade game of the same name.
In my experience wack a mole situations are often the result of some form of coupled states that don't have a linear sequence. Like when a change in a state flag forces the program to recalculate a whole set of entities, or when an hierarchy of rules conflict in non obvious ways. Or when certain aspects of the functional requirements aren't explicitly set on writing and a whole bunch of assumptions were made, and then trying to fix that usually leads to inconsistent behavior from the application. In fact I have seen a lot of smarty-pants developers, architects and consultants making a lot of specific assumptions that aren't necessarily true in terms of matching business requirements. And in the end applications are basically hammered to fit the most common use cases.
This! 10000000000% this! I absolutely love working on a codebase where if I see it misbehaving I can look at the folder structure and guess which folder the bug is in just from the names, and then when I look in the files, I find the exact line of the bug within a minute or two without even having to run anything.
Good that I listened until the end. It made me realize I should not trust the author. For The 25.13$ bug you should have logs that track decisions and help you to understand what went wrong. In my 13 years experience the less code you have the easier it to understand, debug and write. Funny thing that video author doesn't suggest how to do it otherwise exactly. His high level advise to avoid using abstraction and lose coupling between classes sounds to me like the worst advise you can take. That's what legacy code looks like. The worst thing is to make changes in such code, because to add a simple change you might spend too much time. Have high integration test or functional test coverage to minimize risk of bugs, not just unit tests
Good to see this videos that debunk the myth of clean code . I have done it in the past. I remember trying to make changes to my own code after some time it was a nightmare 😅
Regarding the accounting error issue, how could it be done otherwise? As from my POV, that particular logic is abstracted from the whole system, and it did mess up the whole system. But this logic has to live somewhere, should it live in the main/core part of the system? If many of these kinds of logic were in the main/core part of the system, wouldn’t it clutter it and make it less readable/testable? My POV of this issue is that the component is 1 either not tested properly, so corner cases are not added 2 or has a bad specification, and the test is wrong Either way I can only call it unfortunate that this defect propagates to a somewhat far place, but I would still consider this “far” place a part of the block of logic
Most Design Patterns are reactions to limitations in the particular language in question: blog.plover.com/prog/design-patterns.html So if your language is chosen for you, the Design Patterns for that language are handy solutions to common problems. The classic Design Patterns book from 1994 is all about the version of Java that existed at the time. Some of those patterns are still useful, some are now irrelevant due to improvements in Java over the last 30 years. Here's a good (relatively) recent talk that I think is a good take on the status of design patterns: www.deconstructconf.com/2017/brian-marick-patterns-failed-why-should-we-care
The great sin of OOP was a generation of "rockstar" programmers who split code across so many files and functions that it can be formally impossible to debug. You end up just "fixing" everything with garbage frontend logic. Like "if the order button is blue, then it is okay to process the credit card". Why does the button turn blue? Absolutely no one knows or understands, and it is not worth your time to discover why. So the whole frontend is duct tape and shoestrings holding together a backend full of 10 year old assumptions. Then log4j happen. You take out your gun and carefully caress it. The great sin of FANG programmers was thinking "elegant" code is always the goal. Making massive assumptions about programming languages, edge cases, and documentation in order to write "perfect" code. Except the client was wrong. Your base assumptions were wrong. And only someone with 13,000 hours of 1337code programming IN THIS PARTICULAR LANGUAGE can possibly discern what to do now. Anyone who goes too far into either camp is a disaster for an organization. Jusy write decent code. 99%+ of code in the world today does NOT need much more than the most casual optimization to be good enough.
"The internet is full of bugs, and anyone that says different is probably trying to sell you their book !" This take is the whole video in a sentence !
Thanks for the video, I never fully read that book, but had some knowledge about it, so it's very interesting for me see that those principles makes more difficult fix bugs. I have had many job interviews where I am asked if I have read and follow those principles, so I hope this change in the future
thats an entirely different animal to programming and isn't even really about it i would argue. its a philosophy of marrying humans to software and it will never happen. water and oil. why? because humans are not future proofing and neither are computers. code schisms happen. poor leadership. what a mess, RIP.
This is all good and nice, but could you provide examples of how you saw things done in a "clean" way vs how you would do it? Otherwise it is pretty hard to get why to prefer one or the other in a given situation.
It's not about an alternative, necessarily. Many aspects of "Clean" are just harmful, and you're better off just winging it. "Clean Code" just teaches beginners that they should rigidly cling to stupid mnemonics instead of understanding what problem the maxim is trying to avoid, what actually causes it, how to identify it, and what can be done about it. Far too many readers latch on to those maxims rigidly and refuse to give them up, even in the face of evidence that they're making things worse. What's worse, most people claiming they're following those principles haven't even read the book at all, much less understood it. They're just parroting what "Cracking the Code Interview" taught them to say, and refusing to admit that there might be a better way. That said, there will be more videos coming about the metrics and techniques I've found useful over the years.
The "not wack-a-mole" approach you're describing aligns with the Single Responsibility Principle. Code should have only one reason to change, as outlined in "Clean Architecture." Regarding the $25 expenses example, it serves as an exception to the idea that reading code from bottom to top is a good practice. You assumed that 13 cents were being rounded off but couldn't find it. Understanding what calculated the value would have been quicker. The "define the normal flow" example you reference from "Clean Code" aims to demonstrate an alternative to using exceptions for flow control. Taking your example in good faith, I suggest you identify the feature with the bug-public int getTotal() in this case-and trace its execution instead of making assumptions about the cause. I don't believe Robert Martin suggests that writing bug-free software is possible. Although I don't have "Clean Code" on hand to reference, in "Clean Architecture," he quotes Dijkstra: "Testing shows the presence, not the absence of bugs."
> The "not wack-a-mole" approach you're describing aligns with the Single Responsibility Principle. Code should have only one reason to change, as outlined in "Clean Architecture." No, it really doesn't. Not at all. > exception to the idea that reading code from bottom to top is a good practice Reading code from bottom of the stack to the top is the most efficient way for bugs to be found (assuming, like with most bugs, the bug was reported regarding behavior at the Presentation/UI level or via a log message). Code structures that don't facilitate that reading are inefficient at best, and obfuscating (harmful) at worst - as in this case. > I don't believe Robert Martin suggests that writing bug-free software is possible Either believes it's possible for code that is isolated and private to be bug-free, or he believes potentially buggy code should hide its bugs from the rest of the codebase. I think the former is a more charitable presumption on my part.
Can´t agree more, bought this book, read it a decade ago and forgot, what normally means it did´t made any impression.... People started do talk about CC last year and I re-read it ,just to be sure, and it is pure garbage besides de common sense tips. The code examples make you cry and are utterly irrelevant to modern architectures. Unless you´re writing a lot of java swing applications using java 1.4
It's an old book, but it has nothing to do with it being Java, it could easily be applied to C++, C# and every other language with OOP essentially. Not defending clean code because there is a lot of dumb stuff in it.
do you have any opinions on when/if you should ever stop trying to refactor an old application and start again from scratch? i know it's always an enticing meme for new developers to want to "rewrite it in rust" but at what point do we say, ok now we know the domain of the problem pretty well, let's build a new solution with all the lessons learned from this old application? not necessarily rewriting in a new language or framework but just saying, okay we know when X happens Y should do this in this manner so let's write our code this way because it's easier to fix when things go wrong?
That's worth it's own video. I'll add it to my list. Short version: It depends on how long (& if) the system you are writing has been in production and how functional it is. There's a problem called "second system effect" ( wiki.c2.com/?SecondSystemEffect which came from Fred Brooks' FANTASTIC book "The Mythical Man Month" ), where taking a system that works but is clunky and starting over tends to be worse than the thing you're replacing, if you're not really, really careful. Especially if you try to have your initial release of the rewrite have all the features of the original. If the thing you're replacing either has few users, or the users are so frustrated with it that they wouldn't mind the number of features being reduced for a while if it becomes more reliable, or if you don't care if they get upset for a while, then rewriting can make a lot of sense. The best recent example I can think of is Apple rewriting their Office Suite (Pages, Numbers, and KeyNote) from their Legacy Mac code to a shared MacOS/iOS codebase. They did the same thing by killing Final Cut Pro 7 in favor of the much less feature-rich Final Cut Pro X. There was a lot of complaining for a couple (or more) years, but by now, almost everyone seems happy that they made the switch.
There are words for what your talking about : cohesion and coupling. Coupling is when you have multiple subcomponents that affect each other. Cohesion is when coupled components are kept together. I think the ideas behind cleancode is that you can create a system with no coupling, by making these tiny independent pieces and then thinking somehow when these pieces are arranged togethor, problems cant occur from the integration of those components. But really what happens is, more coupling is created from doing things like DRY where arbitrary functions are broken down into small chunks and spread out everywhere for 'reuseability'. I think this is because creating cohesive systems requires you to actually understand the domain well enough to draw the right lines in the system to achieve locality of behaviour. But it seems alot of devs just dont care enough about the domain and just want to make pretty abstractions
Ugh. We're already overflowing with "we grabbed this existing word (or made one up) and gave it a specific meaning in the context of this specific kind of software development, and now we expect everyone to learn it and use it." It's such a waste of time and effort.
@InternetOfBugs yes, your right but the idea of 'keeping things related together' is not a new concept. Having words for things and agreeing on what they mean is exactly the opposite of wasting time, how can we possibly progress as a discipline If we can't agree of a vocabulary? Like go and find a definition of 'unit test' that programmers actually agree on... I found this video quite compelling and you definitely explain the 'vibe' of what I consider maintainable code, I just think people might find it helpful to know that these are established ideas that can be further explored
Great video! So whack a mole type issues.. Would it be possible to go into more depth with more real world examples for these and related issues? I find with systems i work on its unavoidable to have certain interdependent items and have an intuitive idea of when something might cause a whack a mole type problem later, but would be super useful to learn more about this
Sure. Here's a quick answer, though, that I wrote earlier in response to a similar question: Whack-a-mole is when a developer makes changes in one part of their codebase and it breaks something in a different part of that codebase. So, REALLY simple example: the username box on the login web page is not lined up with the password box, so the developer makes a change to the CSS to shift the username element twelve pixels to the left and that fixes it, but that also shifts the "Welcome, ${login_name}" header at the top of all the pages 12 pixels to the left, and so then someone has to go fix that. And then when they move "Welcome, ${login_name}" twelve pixels to the right in the header, and now every HTML-formatted email that gets sent to a customer cuts off the rightmost 12 pixels of every customer's name in the "Dear ${first_name} ${last_name}" greeting (because the name got shifted right 12 pixels on a fixed-width div with overflow: hidden). The main thing is that, with whack-a-mole, there's no reason to believe that, in a reasonable application, a change to the login page would also make changes to all the page headers, or a page header change would break the email greetings. Those things don't seem they should be related. So it's an unpleasant surprise, and hard to plan for. Also, it tends to cause chain reactions (each fix creates a new bug), so it happens over and over. That's the way whack-a-mole gets its name: from the seemingly never-ending nature of problem after problem reminding people of the carnival/arcade game of the same name. Does that help any?
@@InternetOfBugs ah ok I think I get your point, basically it should be obvious which components are dependencies, and aspects that cause unexpected breakages in seemingly unrelated places are wakamole
"No code is perfect. The point of maintainable code is not to write code that can't have bugs. It is not possible to write code that can't have bugs. The value of maintainable code is writing code so that when the bugs happen, and they will happen, you can find them, and you can fix them, and you can write tests to make sure they do not pop up again."
Somehow, was so satisfying to listen to this.
Tears are flowing in my eyes..
This is exactly what Erlang is famous for. Crash early, crash often. It literally has no exceptions, but you are forced to account for all cases and it's often programs run bug free from the first try. Rust achieves similar result from different angle. Rust compiler will beat the correct code out of you.
It is not possible to write code that can't have bugs.
My company has been proven this line wrong for months and maybe years now.
@@twigsagan3857r u hiring? 😅
lol no.
Its an old T-Shirt, but "Always write your code as if the person coming after you is a violent psychopath who knows where you live" comes to mind.
This! I like this! Perfect, I will repeat it to everyone
So, write it in an incredibly complex and obtuse manner so that they don’t have the time or energy to find you?
"Always write your code as if the person coming after you is a violent psychopath with dyslexia who knows where you live"
@@Tesmond256Exactly! That is how you get rehired as independent consultant after a lay off. Clean code should be marketed as a guide to job security. :)
This should be on a t-shirt 😅
16:44 "The value of maintainable code is writting code so that when bugs happen (and they will happen) you can find them and you can fix them (and write test so they don't happen again)"
Loved this quote
Locality of behavior is underrated
how is that contradicts to the clean code? Clean code has a good rule when you need to split the long method into smaller methods, it says that all code within the method shall use the same level of abstraction and do one thing. If there is a method with much more than 4 LOC, but that is still talking on the same abstraction level and didn't loose the cohesiveness, it is still a good clean code. To validate that, we can write a unit test, if writing a unit test gets hard, the code is not clean.
Is there any book or course that goes over that in depth? I only have less than two years of experience and i find it almost impossible to achieve locality of behavior in most of my projects.
@@MykhayloS Arbitrarily saying "4 lines of code is too much" in a method is just.. well it's arbitrary.
It should just be based on what the method is doing, are there any side effects, etc. some of these things you only understand with time.
highly underrated.
@@MykhayloS The advice given in Clean Code is often self contradictory which results in Martin's various examples always breaking at least one of his principles, often the actually valuable ones, in favor of the others.
His description of finding a string somewhere in the codebase and working your way up, lines up 100% with my day to day. Polymorphism does screw it up. Happy to know I'm not the only one that starts every problem looking for some string in the codebase :D
same here brother. And if the app has localisation, I make sure to switch it to the original language (mostly English) so that the error string matches the actual string in code :D
Yeah. What really drives me crazy is when every single string in the entire app is constructed at printing time from a bunch of data tables, and `grep` never finds anything from either the logs or the user interface.
@@InternetOfBugsSo....not a fan. Of i18n I take it 😂
@@AdamJorgensen i18n is fine as long as the strings in the language files are whole and match what shows up in the logs or on the screen. Then `grep` or search can find the relevant line in the language file, I can map that to the key referencing that string, and look for that key in the code.
What sucks is when the longest line in the language file is 3 consecutive words, but the average error message is 27 words, some of which are constructed from strings provided by the user or network.
How that would be different if no polymophism?
Write simple functions. Keep things together. Avoid premature abstraction. The end.
I'm abstrooooooooctinnngggg
Easier said than done
@@jshowao easier done than said
@@jshowao Avoid premature abstraction: is something that every beginner do, because do not know what is "abstraction"
Too abstract for me.
I've rewritten several C# and C++ code in Ada and it changed the way I think and write code. It forces you to think really hard about your program's specifications. Before you can write a single line of executable code, you model your data structures in terms of types and their values. In turn, those values have constraints such as ranges, precision, number of bits, etc. Those types are grouped into "packages" and are included with your functions and procedures.
When properly specified, it becomes very hard to write buggy code because the compiler checks your values at compile time and or runtime. The common fence post error and buffer overflow bugs simply don't occur when your types are properly specified. Those same constraints also serve as metadata, allowing the compiler to perform optimizations that would be impossible in other languages. The efficiency rivals C and C++ in speed and size.
The resulting code is incredibly easy to understand and maintain, since it has a Pascal-like syntax and its specifications are built-in. You can revisit your code months later and immediately resume where you left off.
Haskell is also fun as it tries to make side-effects hard. Compiler is there more strict than Ada. It enforces that correct mindset that it is developer job to interpret requirements, and write them to formal language that is specification of the software.
Who uses ADA ?
ada is used mostly in safety critical systems if i am not mistaken
@@johannsebastianbach3411
True.
I'd pay to see you debate Uncle Bob on clean code. Make it happen!
I doubt he'd have the balls of engaging in a debate instead of sitting in his safe space/echo chamber.
I listened to Uncle Bob lectures - this (the book example) was not at all what it taught me. Book could be outdated or just locked to Java example. In reality clean code is a guideline and way of thinking, but not a replacement to thinking and design (like auditing your pricing calculations! Or at least logging)
If you have a strategy pattern hiding somewhere, return with it a reason, or type.
Debate implies there's some merit to Clean Code. There isn't. The meritorious things in clean code get used to prop up all the bullshit in the book. The meritorious things are common sense principles that don't get presented like dogma.
Debates are about rhetorical finesse, not truth-finding. At least public ones. This romantic idea of "let the best ideas win" doesn't account in anyway for the reality that most of the audience very likely aren't domain experts and don't even remotely know the context of particular arguments. And of course, debates are infamous for all the derailing-tactics which have been developed - be it false analogies, strawmanning, gish galopping, etc. - makes them utterly useless and is the reason why we don't really do debates in academia, we write papers and long and detailed responses. THAT is the medium which allows finding of actual approximations of truth or at least finding the least wrong position.
Bob debated with Primeagen who was no fan so why not. Would love to see it.
Wait, I dont get it. In the last clean code example you were referring to, it's suggested to add the "special case" to the Expense class, instead of checking for the exceptional case outside it, why does this break the whole "don't make whack-a-moles" thing you mentioned? What's wrong with checking the private implementation of that class? If the error is in the expense amount, isn't it natural to then go check how that amount is calculated? Honestly, this entire thing sounded like someone who has a bunch of shit to vent, and just finding any excuse to vent.
Exactly my thoughts. Thank you for saving me the time to type it.
Agreed, I came to the comments to try to understand why the last example was bad. To me it makes sense to encapsulate the logic in one place - sure it'll mean that changing the behaviour of one function impacts everything that calls it, but isn't that exactly what's intended (one source of truth)?
Perhaps my issue with the video is that he's being very dogmatic on what I think is quite a nuanced and case by case topic. As with most things in life, I would have really appreciated a more balanced view.
Yeah problem with this explanation is no counter example is given for what the alternative he prefers looks like, I would personally do it completely differently (I suspect very different from the person talking too) but this part of the video is simply a complaint and not helpful to people who don't understand
Exactly what I thought.
But still it's a bit of a headache when a factory is returning an object that implements an interface and you cannot identify at first glance wich implementation was used.
I wouldn't scream that Clean Code is responsible for that... it'll happen everytime someone finds a bug in a method defined by an interface or an abstract class.
So true - supporting code in production has beaten out of me any intent of doing fancy abstractions and depth.
I hope you continue to speak to this in plain principles.
Same here. If you’ve worked on large real world codebases you realize abstraction is not only mostly subjective but doesn’t actually serve any real need outside of gratifying the programmer who wrote it.
@@Alex-wk1jv do you even know what abstraction is? sounds like you don't
The underlying problem with all the 'advice' books is that those assume that by following certain practices a good product is going to emerge almost magically. It is as if we expected that pushing bolts and rivets in a specific way and aligning beams in another way would allow to build a bridge without ever seeing one. Call it 'clean engineering'.
This makes me want to write a blog post on software craftsmanship
DO IT.
Be careful, some self important people might resent the fact that you've described their identity as a craft rather than engineering. But that might generate enough traffic to get you views on your blog.
@@Burrungus_G I much prefer referring to software as a craft than as engineering
@drewsclues8625 I'd say don't bother blogging. If you want more than a handful of people to be exposed to your thoughts, start a TH-cam channel. I have multiple videos that probably each have more views than all the hits my blog has received in its lifetime.
@@asimpleguy2730 yeah it sure feels better that way, but I guess its up to the dudes cutting the paychecks to determine if they like more crafting or more engineering.
Thank you. I'm pretty sure you're the only person on the internet who has programmed. I've argued with so many people about the topic of how people get started. They always say it should take at least a month of a developer getting paid full time to sit and read through the entire codebase to understand it before they can ever make a single commit. And I'm always like what are you talking about. Someone is going to give you a very specific problem that you can usually find within an hour (yes, I'm being generous here) and in any sane environment, you've probably committed something on the first day.
Yeah, people were usually quite surprised when my first PR was rolling in within the first 8 hours.
I've worked over two dozen software contracts in my career. The client is almost never ready for me on the first day, and very often onboarding takes days or even weeks. Roughly 50% of the time I'm asked to read through the code and get a high-level understanding of it while I wait for someone to have a long enough break between meetings to do knowledge transfer. There is never good documentation.
That said, in the rare case when the client is ready for me on day one, I get work done on day one. But I usually don't have the proper permissions or access to tools when I start. Instead I'm watching ten hours of training videos and submitting a bunch of tickets to get software installed because the client doesn't trust developers to do it themselves.
@@andywest5773 Thanks for expanding. Yeah I consider those separate issues. It's understandable if they aren't ready. The people I'm usually fighting against say that you have to understand the whole code base before even looking at an issue.
@@InfiniteQuest86 Ah, I see what you mean now. I've had similar experiences with devs who are new to the company/team and don't want to contribute because they don't feel "comfortable" or "qualified" when they start. I've never understood that.
I would say "probably committed something on the first day that your dev environment is up and running" - Some places I've been, I don't even have an email account set up for 3 days after I start.
I strongly identify with the idea that a software team should limit the set of "things" that create meaningless arguments. Clean code rules certainly fall in that bucket. And I totally agree that applying clean code on any appreciable scale on a real large codebase leads to disparate and hard to understand code. Clean code is part of a set of coding advice (possibly stemming from OOP in my opinion) which advocates for an abstraction-and-refactoring-first approach, which is doomed to fail because one does not infact understand the true nature of the problem at the outset (if indeed ever). This is a lesson I had to learn after being a clean code, abstracting zealot.
HOWEVER, the opposite end of the extreme is also a problem. I've seen code written by senior devs which gives no thought to structure at all. This is code consisting of huge functions, where logic for new requirements is just tacked in the middle somewhere, without thinking if this is a well defined business rule or process or invariant which should be pulled and out and shared with an existing usecase. Then one gets bugs where business rules are not being applied consistently. This is still better than lots of little functions, because it is easier to trace backwards (or use a debugger and trace forwards) through big functions, but it is not ideal.
Perhaps the ideal "book" is one which looks at the problem from both extremes, and explains why there are not hard and fast rules but rather tools in the toolbelt: sometimes you use a hammer, other times a screwdriver. But I also can't help but wonder if the only way one learns is through experience and a bit of humility.
The problem here is that you're seeing two extremes. Screen-sized functions may help.
Define really large.
Because in my experience, on a multi-million lines of code codebase, it does not do this.
True indeed. Separation of concern is very important and a major pain to deal with. Another common one I see is over-engineering/future proofing, large amounts of classes and concepts that are there for future ”what ifs” that never comes but now that code needs to be maintained. Its such a classic programmer thing to do, cant just solve the issue at hand but need to build something fancier that the next set of developers will have to deal with. Also having more than one way of doing something, usually a nightmare and unless it is absolutely needed to support both ways its not worth it. Another one is premature optimization, kind of falls under future proofing. But lots of devs love to do ”smart” things in the name of performance. In the end, the actual performance issue customers face are somewhere else and the cool optimisation is never useful only prone to errors and a nightmare to maintain
Depends what you classify as fancy. Some people would say having one inheritance hierarchy is "fancy" but I consider that pretty basic.
It depends on the mileage you get out of it. That's where refactoring comes in.
For example, is it worth investing the time to build a factory for connection strings and provide all the options for different DBs or just one DB up front in the hopes of it being easier to extend in the future? Or hack some quick and dirty thing and let it become a copy/pasted ball of mess with no real forethought?
@@jshowao Just do what is needed now, in the simplest most readable form possible, and if more db strings are needed later then reconsider the design, but always keep it as simple and readable as possible. Do not just make a factory for the sake of it.
I'm an ops guy who moonlights as an extremely junior backend "dev", and this was a refreshing take that was relatable and easy enough to understand. I mostly work in Python and write shell scripts, but the principles are the same, and there is no substitute for understanding the logic rules and control flow and edge cases that can break the whole edifice. The one good thing about Python from my perspective is that it enables or even forces you to think about the problem at a high level such that you're focusing on the principles of the solution rather than on the individual trees of the forest... but it is slow at runtime.
For me the clean code book was the first book that gave me a general guide on how to break an app into different layers and decouple them and on paper it just made sense. Ofcourse when writing production code all the things you mentioned can happen and will probably happen. So what i want ask is what is the alternative to this? Do you you just not obfuscate anything ever and you dont separate you code into layers? Is there some other guidebook that gives you a structure that you can reliably reproduce and it doesnt contain these problems?
Yes! Your description of how you work through bugs in a new codebase is exactly how i work!
wow thanks! As someone who now has to deal with a badly written legacy code this video series is gold!
Same here, the irony is we are having a "clean code" training now 😂
This guy is already gold.
Thanks for pointing out the essence of code maintainability: the locality of side-effects.
Thank you!
Dijkstra's complaint was about the lack of locality; GOTO was just the tool that was most used to harm it. Tpday's code design principles are the modern day equivalent to GOTO.
Disagree, there are actually a lot of design principles that help a lot. Many of them, in my opinion, just aren't used properly.
Not having any type of design principles is just a recipe for disaster and leads to spaghetti code.
Working with code unarchitectured sucks.
This is a good point, this is exactly what the small functions that CC advocates does.
what exactly do you mean by "you can't just simplify code" at 14:30? there is something i call "essential complexity". the raw logic, structure or math of the problem. i try to write code that does exactly this, not more. anything more just means more chances for bugs. if i need "why-information", i log it somewhere or encode it in the return type
I probably should have said "you can't arbitrarily simplify code" or "you can't simplify code without there being consequences"
Simplicity is a thing I value more and more when it comes to software. The architecture / the design / the code should be as simple as possible and only as complex as necessary.
The goal is maintainability and I agree on your definition: Other devs should be able to understand and modify your code quickly.
In my opinion that means in practice: don't over-engineer your code. Humans are only able to handle a certain level of complexity. If your software exceeds this level you're doomed. And don't hope for Copilot to fix it 😉
It's so good to hear all of the things I've been saying to basically every co-worker I've had for the better part of 2 decades coming out of someone's mouth other than mine. Thank you for this, Carl.
Your point about obfuscation is well taken. The main reason why I try to avoid object oriented languages, is because inheritance and instancing make tracking the execution or logic sequence, close to impossible. I am someone who needs to understand how things work, from the first step to the last; and I also tend to view the first steps or the lower levels, as having greater priority than higher.
Most of what I know about programming, has come from FORTH. That is a language which inherently uses dictionary dispatch, or a scenario where subroutines can be mapped to numbers, and calling either one will execute it. The good news is that that is a method which I can port to almost any language I've encountered very quickly. The drawback with recursive or fractal composition, however, is that you have to be fanatically self-restrained, because if you are not, the level of complexity will become terrifying very quickly.
You're right I'm a beginner, haven't worked in many big code bases yet, and the advice "what goes wrong stays in its stack". But it still makes a lot of sense.
I thought he was going to discuss the limitations of the actual principles of clean code but all it was is a vague critique of scenarios that are more related to just poor code writing in general not an issue with something like polymorphism.
Show me one problem solved by polymorphism that can't be solved any other way. PS: I'm going to get on with my life while you search for an example.
@paradoxicalcat7173 The beauty of programming is that there are many ways to go about it. Clean code never claimed to be the exact science of creating applications. It's a framework that provides architectural language that teams can use to communicate more effectively.
Three minutes into the video, and you stated one of my guidelines of coding,
"Write code as if you will need to come back in six months and maintain the code." Most likely, you will not remember all of the details, and, most likely, you will be tasked to maintain the code since you are one of the last people remaining people who created or modified the code.
"What kind of dimwit wrote this crap", looks at the git history, "Oh, that was me..."
I think most people can agree that this is a significant consideration when writing code - problem is, it's way easier to say than to achieve. "Clean" code certainly isn't enough.
@NorthernRealmJackal The concept of an "instant expert" does not exist. The current culture ignores many years of effort and learning to achieve a successful result. Too many people would rather feel good from a smooth talker in their presence than have a difficult task accomplished the right way, especially when the initial plan needs to be modified.
whack a mole is so relatable. i once was on a project implementing a search of a product catalog and i kept going back and forth with QA about how the search would work. QA would tell me this one case wasn't working, so i would change to code to fix that case, but then another case would break. but i never had a list of all the cases required for search to work as expected. and the team had no consensus about what that list should be and what cases were more important that others. really taught why google is a billion dollar company. search is hard. assuming you can give people exactly what they ask for is an insane assumption.
@b42thomas
> assuming you can give people exactly what they ask for is an insane assumption.
Yep. This is the main reason I don't think AI will be "taking all the SWE jobs" any time soon. It's more about the people than the code, most of the time.
I love your take. It's been my innocent assumption from the time I was a junior to never ever follow any book from that author.
In France, they treat these books as religious books, many developers and companies use these books as a reference and many jobs will have in their description: TDD, Clean Code, Hex architecture, Onion....whatever 😂
I hated that, from the beginning, I never saw the real benefit in a real-world scenario. BUT, I saw many problems it caused, creating abstractions on top of other abstractions without any human intelligence involved. They overcomplicate too much most of the time and ALL the "craftsmanships" will tell you that their code is better than yours. Actual clowns, they even tell you without any shame that software engineers at FAANG are not "real" software engineers because they don't use these books to write code. Even tho it's been admitted publicly on Reddit by FAANG directors that less than 1% of their engineers use TDD for example, and they're not even talking about the book.
At least in France, they are unbelievably toxic, I don't know if it's the same everywhere.
Arghhh 😡
Thanks for the video! I hope Primegeon will react to it.
I have spent about 70% of my career taking over a code base that other people wrote. Ill take a clean piece of code that has bugs over a dirty piece of code that "works" (note the air quotes around the word works!!!).
I read Uncle Bob's books and took his classes and it doubled or tripled the quality of my work.
And i dont do everything he says. I did it cafeteria style.
Same here, I don't understand why he trashed the book either, that seemed to stir things up. I believe the problem is there may be teams or people that are radical or blind follow recommendations like they are religious commandments. We need to apply common sense or balance according to every situation, no silver bullet.
The same thing happened to me. Having a clean, redable, undestandable code helps a lot with development, because you know where stuff are. You know how they behave because they explain themselves just by reading it, and they fit into properly designed patterns.
Whenever I see defendants of "bad code", I feel like they're either too naive to understand how good code helps development or they're just doing it for the viewers. Of course, you can't follow every rule precisely, some SOLID principles even contradict themselves... but overall, it is a very desirable approach.
Great video. A little bit of feedback on the editing: personally I found the "morph" cuts very distracting, I think simple hard cuts work better.
Yeah - I've heard about that from a couple of other people, too. I bought (paid real money) for a Final Cut Pro plug-in that said it would save me time with transitions and was well-reviewed (and was expensive).
I guess that was a waste. Sorry about that. I'm still trying to find a workflow that has the right balance of production quality and production time.
@@InternetOfBugs Maybe that originated the question "are you AI?" from the other video 😅
@@InternetOfBugs I know it can be hard but not talking with your hands in frame would help a lot with the morph cuts.
Finally, someone mentioning about this.
Hard cuts are common, everyone is used to hard cuts absolutely everywhere and no-one bats an eye. Trying to hide cuts almost looks 'suspect' especially in interview pieces
This is a surprisingly nice argument that I agree a lot with. If only because it flips the script on people to write code that is easy to debug, not code that looks clean, which are often at odds with each other depending on an individual's idea of clean.
I would only quibble with making changes to an excution path you don't have time to understand, as I believe when we're in a hurry and try to cut corners we always make mistakes and waste more time. Ideally your stack isn't that deep and miserable to traverse. I personally advocate for being methodical when programming and knowing when you need to understand the program better before making changes to it or diagnosing a problem.
Sure. The idea I was trying to get across is "you should try to design your code so that all the things relevant to what happens in that slice are contained within that slice" not "make some change somewhere in the slice of an unfamiliar codebase and check it in, and if it doesn't fix the bug or causes problems somewhere else, it's not your fault"
Excellent video! Sometimes when you're working on problems like this (daily for most of us) it can feel pretty isolated so this is excellent content to get yourself grounded again in knowing that we're not alone in the struggle. Subscribed.
It occurred to me not too long ago that when it comes to software design, and organizing code in a way that is easy to read and maintain, database designers figured it out 30 years ago.
Simple entity properties go in a table named after the entity, and complex data and relationships between entities go in a table named after the entity and the property, or named after both entities. (Vehicles, Drivers, VehicleDrivers, etc.)
I’ve started doing the same in my code, and it makes life so much easier. Functions dealing with Vehicles go in a module (static class) named Vehicle. Ditto for Drivers. For logic that needs to run to keep Vehicles and Drivers in sync, that code goes in a module named VehicleDrivers. It really is just that simple. Changes to one file only affect the similarly named files. Finding and fixing bugs is trivial. And the code is fast and efficient in both CPU and memory resources. I only use objects and interfaces in cases where it makes the code simpler than it would to do it with functions and structs.
It turns out, you get to decide how complicated your source code will be. There are advantages to keeping it simple.
So what do you do when you need to write code that changes 3 or 4 different entities?
Yeah, this doesnt make a lot of sense, seems like a recipe for copying code a million times. A static class? How do you even instantiate objects or use your classes?
Plus your vehicle should be composed of drivers, naming it VehicleDriver seems like a crazy naming convention. What about the vehicles tires or engine? You then create a class VehicleEngine and VehicleTires?
Why not just create one Vehicle class that has an array of Tires, a Driver, and an Engine?
That is 4 classes instead of 7 because the permutations would be crazy...
@@Id0nthavename I have done three names once or twice, which does start to get a bit ridiculous, I'll admit. I've never had to do four in a single class, but I can see where that would cross over into the absurd. At that point, I would probably just create a more generic static class, like VehicleEvents, and wire it up to handle all of the notifications from all of the other classes, and then call the appropriate methods. Or, just create multiple two name classes, like VehicleDriver, VehicleTire, VehicleEngine, VehicleFuel, etc. and wire them all up to handle specific events.
@@jshowao None of the code would be copied. Only code related to both Vehicles and Drivers would go in the VehicleDriver class. (Kind of like a table that joins Vehicles and Drivers wouldn't duplicate data from either table.) The reason that I would not immediately create a Vehicle class with all of the necessary information and methods is because it might be more design than is necessary. The only time I would start with a non-static class and creating objects would be if it simplified the design. For instance, if you have multiple types of vehicles. Or, for something like chess pieces, where each piece has different behavior based on its type. Trying to overload a single class with all of the logic across multiple domains tends to cause issues where you need an object to act a certain way when it is used by one component (say business logic), and another way when it is used by another (say, UI logic).
I have just come to the point in my career where I prefer to use the simplest design to solve the immediate problem, and no more. That tends to make life easier when I need to maintain it after not looking at it for 6 months.
I like your subtle damning critique of OOP, which encourages the hiding of data. I think we can call it at this point. Hiding data and functionality inside of classes hasn't given us the benefits they said it would. It has caused more problems.
Digging into those books (Clean Architecture/Code) and trying to apply them, is a great exercice in my opinion. But it becomes bad, when you apply them like the holly bible. I worked on a project during months, applying dumbly clean architecture principles, and it was a total waste of time and over-engineering. BUT now, after having sorted the useful from it, I write my code in a 'light' layered architecture, and it's been great so far. Every programmer should learn from a lot of sources, but should not take any guidelines like an absolute "how to do things so it never fails". Because like you said, it will always fails at some point.
Everything applied like a holy bible eventually backfires.
@@danielwilkowski5899 It is the same for the holy bible itself.
In a way i am glad in "found" Uncle Bob at a time where i was already both willing to accept and question advice.
I would argue that, although you (and other thoughtful people) might have gotten something useful out of it, "Clean Code" on balance is bad for the industry and should be soundly rejected.
What's happened is that many of the "verses" from "Clean Code" (and others) have become so ingrained in the industry that they show up on job descriptions and as interview questions - perpetuating the myth that they are holy scripture without fault. Most of the people who spout "That's not SOLID!" and "That needs to be broken up - it's too long or does too many things!!" haven't actually thought about how best to apply the maxims (if they've even read the book at all). They're just parroting what "Cracking the Code Interview" taught them to say, and the industry would be better off if we just turned everyone off of "Clean Code" completely.
@@InternetOfBugsMy issue is, okay, you've rejected it, now what? Your explanations weren't that elucidating either. Hopefully there will be some examples in future parts, because I didnt quite get what you meant by only code should affect this portion of the stack, but you then complained about the per diem thing being hidden away...
I mean, its not really an error, its more a logic bug based on misunderstanding of requirements. Those rules are going to be buried in a class somewhere...
Great video - I think you're absolutely spot-on about what is wrong with "clean code". I don't think typography or naivety really play into the worldview of the author. I think the author believes that having a bunch of very short functions/methods makes the code easier to understand because each piece is bite-sized. This ignores the reality that getting an idea of what a whole process does ends up requiring a lot of jumping around. It ends up distributing and creating cruft around the thing that the code *actually does* in a way that prevents the maintainer from getting a foothold on what's happening. As to "hiding", I think that comes out of the belief that SRP and DRY will lead to more understandable code by dividing it up cleanly into chunks that can be understood in isolation. In reality, SRP and DRY lead to over-generalization and over-abstraction, which make a big-picture understanding of a process much, much harder.
So basically, I think the author either seriously misunderstands how people's cognitive processes work when reading and maintaining code, or has a brain that works in a way fundamentally different from mine ;).
It wouldnt require jumping around if you contain it in a class and organize it in a way that makes sense so you arent having to edit 3-4 different files.
Having a long method is just as bad in my opinion because you would have to jump around in that method once it reaches hundreds of lines.
And you are honestly saying you'd rather look at spaghetti code instead of a bunch of abstractions? I mean, I don't think it needs to be as crazy as clean code suggests, but going the other way is an even bigger mess. I cant make sense of methods that are hundreds of lines long, I can make sense of methods that are 10's of lines long.
Abstracted code that is well architected is highly flexible and testable. That is one of the major benefits to doing it.
I mean, a great example is initialization code for a device. Lets say every device has a name, serial number, brand, a connect l/disconnect action, some basic setup stuff, etc.
Would you rather write one parent class and reuse those properties and actions? Or would you write those properties and actions again for every device?
You don't have to do any of that jumping around if your functions and variables are named well enough that you can understand exactly what they are there for.
@@jshowao your example at the about device initialization isn't really exclusive to "clean code" that's a basic use case for, and the entire point of, OO programming. No one is trying to say don't even use classes. The issue is when you abstract to the point of things like a class full of methods like one of the book's pages in the video:
SetupTeardownIncluder()
includeSetupAndTeardown()
includeSetupAndTeardownPages()
includeTeardownPages()
includeSetupPages()
includeSuiteSetupPages()
..and so forth
I fill with rage when I see method names this brain melting. You've abstracted to the point where the English language is longer adequate to describe the nuances between each step of what should really all be a single function. Once your method names are includeIncludingIncluderInclusionSetupPreInclusionLoaderPrechecker() you've gone too far. The above is all contained in a single class yet is nearly as maddening to unravel as having to jump around between files. So it's not about whether you have separate files or not. I honestly can't understand how that is easier or better to jump around than a page long method. At the end of the day you have to program the things, it takes that many lines to accomplish something. If a task is 50 lines, it's 50 lines whether you tuck them each neatly into their own little method beds kiss them night night, or not. Sometimes code is complicated and the best or only way to make sense of it is step debugging. Abstraction on its own doesn't help with, or shouldn't be used for the sole purpose of helping with understanding.
@@markt1964 I'd argue that if you're looking for a bug, you _do_ have to jump around, because regardless of how well a function is named, the bug could be in that code. But I'm not arguing for extremely long functions. I'm arguing that "short = good" is not a true as a general rule, because jumping around forces the maintainer to keep pushing to and popping from a mental function stack. Sometimes that's the right thing to do, and sometimes it serves no real purpose but makes the code harder to understand.
@@jshowao I'm not arguing against abstraction altogether. I'm arguing that SRF and DRY lead to _over_ abstraction. I would rarely call a single interface over-abstraction (Unless there was only one concrete class implementing it. Which I have seen. This week). But even small levels of abstraction can create difficulties. An example much like the one you cited would be a tabular reader/writer. I want to read from a bunch of things like a CSV file, a database table, an Avro file, an Excel file, etc., etc. The reading and writing functions can probably look the same, but what about the initialization? A CSV file reader needs to know the delimiter and the file location; a database table needs a connection string; an Excel file needs a location and a sheet name; etc., etc. So maybe the abstraction for reading/writing is justifiable -- but trying to do that for initialization creates a leaky abstraction. I've come to believe that abstraction is best viewed as a "necessary evil" rather than a simple virtue.
Can you please elaborate more? With maybe examples and stuff like that?
I use clean code a lot, daily, but at least the way we do it seems to be really good. I hope you make more videos covering this in order for me to get better at what I do.
What I'm worried about, is that I am confusing clean code with other subjects, and it got me confused.
This channel is GOLD. I always liked Software Development, but now, I'm falling in love and want to LEARN MORE & MORE!!
this channel name is slick asf whenever i get eaten by bugs (mostly logical ones in algorithm design) somehow the name pops into my head , people worry about clean code , i am using polymorphism with C right now to implement a bit of OOP and i can`t even understand my own code looks worse than assembly (it appeals to my old boss) IT`s A JUNGLE OUT THERE & You are the pray !
I understand your frustration, but I think the focus of your video/rant says something about the kind of work you _usually_ do? I may be wrong, since this is the first time I'm watching one of your videos.
You're talking exclusively about bug fixing, but the other side of maintainability is how easy it is to extend a system with new functionality. These two aspects, debuggability and extensibility are opposing forces in my experience, so what makes one easier usually makes the other harder. Once again, as with everything in software, it's about trade-offs. The discussion would be more nuanced if we took extensibility into account (some might even want to bring in performance as well). But... sometimes we have to let off some steam at the end of the day and from that point of view I can understand why you made this video.
My professional opinion is that extending buggy software just makes more bugs, and that extensibility should take a back seat to debuggability/quality. That's a minority opinion - most people want more features and worry about bugs later, which is one of the reasons why the slope of the CVE count curve is getting so much worse (see this video: th-cam.com/video/U-IhIqmCHlc/w-d-xo.html )
Preach.
I was excited when I opened Clean Code for the first time.
I was skeptical after I read a couple dozen pages.
I put it down after I read Uncle Bob’s description of the most beautiful program he had ever seen, where every method had only one to three lines in its body.
It was then that I knew that Uncle Bob had never written software that anyone actually had to use.
Your expense report thing reminds me of something that happened to me a few years ago. I was tasked with writing a new client for a Contract Bridge app. The server doesn't have a "legal moves" route, and when sent a move by the client, responds simply with an updated game state or a "no". The "no" either means "there was a server error, please try again" or "that's not a legal move, please send a legal move".
I didn't know how to play Bridge at the time, and here I was needing to write a client that can infer legal moves from the game state, so when building out the rules system, I created an "explained move" structure, that not only encodes the move ("I'm playing the 9 of hearts") that will be sent to the server, but internally explains the legality of the move ("east is the only player to have played, so it is my turn, this hand is spades, but I don't have any left, so I can play any suit, I have the 9 of hearts, so it is a legal card to play now"). During a code review I was told that this might be a little unnecessary, as long as my code can produce legal moves, then do we really need a tagged structure that justifies itself?
Guess what turned out to be really useful when debugging a weird edge case and even ended up finding a bug in the server? Players had been complaining that in some weird cases they were blocked from playing legal moves, and it was attributed to a UI or networking issue in the old client. When the bug happened with the new client, I had a log of the why the client thought the move was legal, and when checking the server, they found that there was a weird off-by-one error (IIRC) that caused the server to incorrectly refuse certain moves.
When people talk about "null" being a billion-dollar mistake, they often focus on the problems caused by propagating null values around, but I feel like the main problem is that "null" doesn't carry any useful metadata about what went wrong. You call a "Thing.fromJson(filepath)" method and you get "null". Was the file path wrong? Was there an I/O problem? Is the file not text? Is the file not valid JSON? Does the data in the JSON not represent a valid "Thing"? Is a field missing? Is a field the wrong type? Is a field outside of expected values? Figure it out nerd, here's your clue: null
While I do agree with the sentiment that clean code is harmful to the industry. I also feel like the example you gave at 13:31 is not entirely clean code's fault, but actually, lack of logging. If the class is returning a PerDiem sub class, which is treated as a special case, then logging it would have been the right choice instead of straight up calling getTotal. But anyway, I do agree with you. Those 3 liners take me back to the old goto days where, when someone got too clever for his own good, you had to jump 17 times throughout the file just to understand what could be a single function call.
It is what makes assembly harder to read as well since you are constantly jumping to offsets instead of using higher level constructs. Even if you give those offsets good names, like clean code dictates, after the third jump you'll have already forgotten what was the name of the first one.
loved the part about suggesting that features should be "vertical" as if in, the code of said feature be isolated from other features such that if you have to make any changes or fix any bugs you don't need to read or understand most of the codebase but just read the "vertical" part of the code related to that feature (it also leads to less unexpected changes aka change feature A, feature B randomly stops working)
Where can I find your writing?
Excellent. "It is not possible to write code that provably does not have bugs." Some people really need to tattoo this someplace. If you come up with a system that allows you to write arbitrary code that is provably correct, you could use that system to solve the halting problem. Which is provably unsolvable. Therefore: all code has the potential to have bugs in it, no matter what you do, what methodologies or coding practices you follow, what kind of tests you write or how you organize it etc. etc.
How i explain wacka mole is ... "seperating the puppies does not mean cut the puppies into little pieces and give me all their paws. Its the same with code" this mental image is mortifying and gets the point of what something being "whole" means.
I seriously love your content. New to programming, 9 months / one course in. The way you communicate clicks - even for me as a beginner- with me much better than most TH-camrs.
I’m currently working in a project where the project leader is very found off SOLID and Clean Code (tm) and I definitely agree with your message that it tends to make it harder understand any issues. To be fair to clean code, the project is written primarily in FORTRAN (I work at a US National Laboratory doing scientific computing) which makes things harder (no standardized error handling, tons of compiler bugs, etc.). However, just to understand one error messages I have to read 20 different files in 20 different directories. While I have developed an intuition about what files might be the problem given a stack trace it still takes a lot of time to debug it. In my experience, Clean Code (tm) and the SOLID principles break locality of features which makes the code a nightmare to debug.
OK, so you appeared in my feed first time and wholeheartedly agree with what you say here. Now I want to watch your 10 books video, but there's no link to be found... I don't really know if I'll ever get to go to your channel and loom for it
@16:40 - this - this is what I strongly believe too, after 20yrs of writing software, it's not to get out of all bugs (impossible), but to be able to tell, why the bug happens and fix it quickly
I wouldn't be too dogmatic about clean code. These books provide you with examples that might help you in one situation but not in a different situation. In the end it's an iterative process. Leave the code you are working on cleaner than you found it. Sometimes it just needs time to come up with a better solution that fits the context.
Also test the error handling. Then you see if you get useful log reports or stack traces.
More videos explaining ideas and concepts on how to code like this would be amazing. Thank you man, honestly!! We don’t need more “coding BS tutorials”
I've always found that the best code from maintainability perspective is code that is descriptive of what is being done instead of code descriptive of how it's being done. I.e. code that is declarative-ish. When you know (easily) what is being done in a piece of code you will immediately know if what interests you is in there or not and where to look further. I.e. you will easily navigate the code. So this is basically my "clean code" criteria.
P.S. "no code" is indeed perfect.
Great video, I really enjoy this content. I decided to learn to code back in 2018 and as a noob with no direction in the modern landscape it's quite a wild experience but 2 things I wanted to say. These rules on clean code I think just added to the clutter of learning about code/software and by that I mean it was more noise in the environment then I think was necessary. Idk if there anything you can do about that its just process I guess, but the biggest hurdle for beginners especially those working alone is making sense of and getting comfortable with this foreign environment and any clutter can be quite harmful imo. I read a handful of books about agile development, pair programming, etc and job postings say it's a requirement to understand/have exp so you think its important but later you learn it was mostly crap and anything of value was kind of common sense lol.. Which brings me to the last thing I wanted to say was this content is great for those who maybe have had less exposure to the industry to gain perspective on reality rather then some sort of mystical ideal. Even just listening to how you would walk through a code base is a nice confirmation in a way or when you watch Prime code and your like okay I got this xD lol
i'm normally skeptical about people disavowing clean code, abstractions, etc. but, i'm also not familiar with uncle bob's trademarked "clean code". i have my own definition of clean code. but based on the code examples from the book you showed, i'm also not a fan of uncle bob's clean code, described in the book. i avoid inheritance altogether, favoring composition, and seeing the sql example where each type of query has it's own subclass...that would definitely be a nightmare to maintain. i actually favor shallow abstractions/object graphs.
i have the book on my shelf, never read it. but now i'm probably going to read it just to know how much it diverges from how i design/structure apps.
Right on, there is a difference between the general idea of clean code (separation of concerns etc) and uncle bob's version of "clean code" (tm).
A maintainable piece of software is like a maintainable car. A car that lasts for decades is one you can easily maintain or fix once something goes wrong. Maintenance protocols are simple and easy to follow. Every system is easily accessible for any skilled mechanic who knows the procedure and parts are easily available (no black boxes). It's also a fault tolerant system, the car still runs if something stops working, no single points of failure.
Some very good points, definitely gonna have to check out more of your videos, I feel like it's going to help reinforce my studies in a positive way.
The idea of dealing with code that has abstracted and encapsulated key return decisions like that, where you were talking about returning an Object with a total receipt amount or a PerDiem amount sounds like an easy trap to fall into especially for beginners like myself.
I also guess this is why pure functions are important, so that you aren't mutating variables in a private function that might effect other portions of your code or project. That may cause a whack-a-mole style bug that you mentioned. Especially if you have created a lot of impure functions with non-local variables that are mutable within the local scope of the function. That sounds like maintainability hell.
Straight to the point. I'm currently leading a small team of a proprietary platform developers and once we defined to follow our own data oriented and procedural pattern it was like magic, less and less bugs, less dificult to find bugs when they happen, less difficult to extend features, easier to write tests and so on. "Clean coders" are like a cult and this book is like the Ten Commandments of them, lol.
i mean you will always have difficulties. that just depends on how ambitious you are.
I suddenly realised that it could be fun to hear your opinions on various "software architectures" out there.
Unit tests exist for something. They can be used to test particular locations of your code so that you don't have to trace everything or even running the entire program just to figure out why a particular portion is failing.
Much needed video, at times I thought I was stupid for finding this book's advice not helpful at all for my work
I can agree with you critic for example mentioned above. However, I think you forget mentioned what this is not fault of polymorphism. These entities are valid business objects. It is more of a problem, how this entities were instantiated. If you talking about service locator, this problem will hold. But with dependency injection or strategy pattern it is different story, because you can easily trace creation of instances of objects.
Also the problem could be solved by using union types which presented in languages like Rust or F#. Because you can't get value of the type, unless you explicitly match on them.
A recent pet peeve of mine is when frameworks or things like IoC containers cause a break between the code's entry point and where it does something, so that you can't go from main and follow the code through to where it outputs something. Ideally your code forms a graph and you can traverse that graph and understand the flow of how it works using just the "Go to definition" and "Find callers" editor features. If you have to start using ctrl+f to try to figure out where something comes from or where its going it becomes very frustrating very quickly.
Yeah. And what's really, really annoying is when they throw a queue or a Command Pattern in, so you see the thing get put on the queue, and there are dozens of different places that pull things off the queue, and it takes forever to figure out which one(s) are relevant to any given input.
Great video. Real love for coding is visible towards the end 🤣. I totally get that feeling of debugging **clean** code.
Hey, thanks for the awesome video! Just a quick suggestion - the transition effect between cuts can be a bit jarring at times. Maybe a straight cut would work better? No worries though, still loved the content!
Yeah - I've heard about that from a couple of other people, too. I bought (paid real money) for a Final Cut Pro plug-in that said it would save me time with transitions and was well-reviewed (and was expensive), and used it for the first time on this video.
I guess that was a waste. Sorry about that. I'm still trying to find a workflow that has a good balance of production quality and production time.
I work on tech stacks 20+ years old. The biggest problem I face in modernizing and bug fixes is that there are class inheritances hierarchy’s that are so tightly coupled that half the stack uses them. Fix one, break another thing.
Oh, man, I feel for you. Inheritance has caused me so many problems over the decades.
A lot of bugs are undoubtedly avoidable. A lot of bugs are unavoidable. That’s why there is no rule for coding, but that doesn’t mean we don’t have to follow certain principles, considering both scenarios. What do I know? What if things don’t go right?
I have been brought into a project where they say here's the code go look at it try it out figure it out from top to bottom. When that happens you know you're in for a real surprise because the code is almost always just incomprehensible. I find that it basically is a sign that they have not organized their code in a way that is bite-sized or something you can follow without understanding the whole thing which means it's an all-or-nothing spend 6 months looking through really really bad code. The last time this happened to me I quit only a few months in and went to someplace sane.
Far and away the worst codebase I've ever worked on (in terms of usability for the customer and scrutability for the developer) had adopted the clean code principles. It was insane to reason about. I lasted at that company for a month and had to leave. I'm not a job hopper but it was an exceptional circumstance.
There's so much advice in software engineering that reads just like a get rich quick scheme. "You just need to apply this trick to write good software, buy my book!". While I don't (entirely) think it's intended as a scam, it really feels like the advice in these books is overly simplified because any book with legitimate advice offers too much uncertainty, which nobody likes.
Sometimes hard problems are hard, sometimes there are tradeoffs, use good judgement, exercise good taste are all axioms in software that I stand by, but nobody will buy my book. Weird.
no matter what yours belief is in abstraction or code structure, dont just eat up exceptions, no magic, log them at least, document functions if closed source.
i also heard somewhere that design patterns were necessary only because OOP introduced complexity when functions could be enough for modularity and locality. given the bottoms-up troubleshooting flow you mentioned, its worth the discussion.
Not necessarily OOP as a concept, but they were (consciously or not) a product of the deficiencies of the OOP languages of the time (Java and C++). See blog.plover.com/prog/design-patterns.html for one.
I think you're a little off on that example with the "per diem" stuff. At least from what I saw in the screenshot of that code, it was catching an exception and assigning "per diem" value. The problem with that is that the code uses exceptions for flow of control. Exceptions have to be used for errors/exceptional situations (it's in the name), which is something that is not really foreseen. Decision on whether to use "per diem" or the other technique is a normal business decision, so it shouldn't be made based on an exception. That's the real problem with that code. But isn't this rule on exception I just quoted part of "clean code"? (I don't know).
Well dang it. I bought some of the books you recommended last time along with clean code and clean architecture. I am confused about how to learn the big picture of major software applications and how you even learn when and how to implement these so called design patterns in a helpful way. Everything I find on the internet seems to be unhelpful in my brain as to how these giant applications work and how these design patterns allowed them to scale.
just write more code and see what happens
Yep. In my opinion, your best bet is to write code that makes the most sense to you. If you have to use someone else’s code, write your own code to call that code, and then call your code instead. Having a clear understanding of how your code works is essential to maintaining it.
Do your own little projects in a language you need to work in, push them until you notice issues with how you've written things and more, necessarily reach debugging, experiment with various ways you can organise code, and that is probably the best method of learning how various abstractions can help or harm you (for example by taking too much of your time) without practicing "on production". This way if you ever need you'll be able to argue why you think some way of doing things is better than other with code examples and some stories of debugging sessions. All of this really shouldn't be the end goal but this is more or less how I think you can start feeling like you are "on that level". Many of those things are very much language specific in my experience.
IMO, scaling an application is a process that will be just different for each organisation and there's a lot to consider on nontechnical side that can determine the success. How big an application grows depends greatly on how much of resources can someone throw at it or certain aspects of it and that's a matter of project management, business planning, etc. You won't be able to learn much about those studying code. Code maintainability matters but not in void. It matters because it's a risk factor within a project.
General advice: Don't get too hang up on learning. Gaining knowledge should be just a step in producing or achieving something valuable. Set a clearer, more precise goal and then it'll be easier to decide on the path. Simple joy of coding is a value too.
@@adhaliannaI really appreciate your advice and I will work towards that. I have a hard time staying focused on little task or breaking things down into little task for a larger project idea but I think working on that will probably go a long way to learning more. Thanks.
You're overcomplicating it. You're worrying about hypothetical "major software applications" that you haven't and won't see until you get a job working on one. The thing about them is they are all unique to the institution maintaining it. You can't read any book that will prepare you for a specific project. You have to practice programming, and look at a lot of code (like anything and everything open source you can find, for instance) until you feel comfortable enough that if and when you do jump into a large software project you can work out for yourself how it works. That's all anyone ever does starting a new job.
On top of that, you're never going to be tasked with understanding the "big picture" of a major software application. Unless you're a senior dev or architect, your job is going to be "here's 5 bugs go find them" and you're going to be worrying about at most a few functions a day. You will learn the things you need to learn when the time comes, don't worry about it before then, worry about being a productive coder which means simply doing, not reading.
I wish there was an editor, that kind of has a large canvas to move and zoom around in, and displays all pages of the codebase. Then marking any function will also draw arrows to other page locations it calls, and/or an arrows it was called from.
The program could order the pages as to have the fewest times arrows cross other pages (thus ordering the code into meaningful clusters).
With a tool like that it would be much easier to visually jump around in a codebase and understand its structure, but also be able to see details by zooming into the code.
me after switching from a project written in C, to a project written in C++, where everyone is hyper-focused on "clean code"; trying to find which actual class object is being used because it's abstracted away behind an "interface" class: 🤢🤢
That's not great experience, but oftentimes when some tires to follow rules from clean code for the first time, they fail. It's not an easy skill. You don't hear amateur piano players say "i tried two hands play, and it sounded awful. I'm gonna stick to one hand". It's just a skill which takes time to learn to do properly.
@@danielwilkowski5899 from my point of view, its all just subjective nonsense. You can't really measure what clean code is.
Interfaces, on the whole, are a mistake. Inflexible shit. That's what you see.
@@mattymattffs I like interfaces in Java means I can treat objects that have lots of properties the same with out all the ifs.
@@AnimeGIFfy not nonsense. It forces discussion on how to write code, which is a good thing.
Would you say that the stack you talking about is like Vertical Sliced Architecture (VSA) where all code of a feature a bounded as context? This is my take-away. I'm programming since 20yrs and started with Java. In the last 5 years now, I'm unlocked from this disaster of clean code, but stuck on how to organize my project structure better to avoid switching between too many files and folders in debugging or data-flow sessions. I read a lot about data-oriented programming and separating data from functions. I like this approach a lot.
I read Grokking Simplicity from Eric Normand especially the part about architecture and layering code wtyt?
I'd never heard of "Vertical Sliced Architecture" before. Having looked at a couple of articles now:
"Layered architectures organize the software system into layers or tiers. Each of the layers is typically one project in your solution. Some of the popular implementations are N-tier architecture or Clean architecture." (source www.milanjovanovic.tech/blog/vertical-slice-architecture )
- that seems like a horrible idea.
As for "how to organize my project structure better" - I don't think there's an a priori answer to that. It's largely dependent on what tech you're using and what you're trying to accomplish.
@@InternetOfBugs Thank you for your honest answer.
What you describe above is the horizontally layered architecture. (Link: 404)
The VSA is analogous of what you described in this video about the concept of the bounded context when fixing bugs. That bugs should not influence other parts/modules of the code. VSA is like having feature modules instead of layered modules (UI-, App-, Core- logic should be separated in its own modules like the n-tier architecture "clean architecture). Where instead a vertically layered architecture, use cases/features are completely isolated in their respective own project modules instead. I find it often very hard with code navigation and hopping around files and folders when the architecture is horizontally layered.
Sorry - youtube thinks the closing paren ')' belongs on the end of the URL. Just take it off.
@@InternetOfBugs Thank you for the hint. So, we are on the same boat? Horizontally Architecture ("Clean Architecture") is horrible and VSA is better maintainable?
@@PriNovaFX Yeah, that's not what I was getting at. I'm just talking about the slice represented by the call stack (or stack trace) in the module the bug is in. It *could* cross modules, I guess, but that seems overly complicated unless there's a good reason (like Conway's Law).
I've worked on the code you've described at big companies. Right now working on the complete opposite where everything is contained in massive nested if/else statements and I honestly don't know which kind of codebase is worse to work on.
Yeah, that's a hard call. My condolences.
06:00 This is one reason why I dislike OOP. While not limited to OOP, that style strongly encourages making lots of objects with generically named methods. This makes it difficult to do productive searching within a codebase because you don't have a unique identifier for a particular method. Instead you get lots of irrelevant search results from other methods which happen to share the same generic name.
More generally to optimize for searching I tend to be fairly strict about how I name things within code so that each kind of thing always has the same name (perhaps with a different prefix or suffix to disambiguate when there many things of the same kind in scope at the same time) and each function name is reasonably unique. Then I can search for that thing and I will always get every occurrence of that thing (and ideally, nothing else).
The thing is. OOP works with the right IDE. With Visual Studio or Java stuff, the IDEs have proper navigation tools.
C++, which is often used for more critical infra, lacks that sort of tooling altogether. CleanCode makes the code slightly easier to navigate when you’re not using the IDE but if you have the right environment, it’s trivial to navigate and localize the context.
C++ is earths and bounds more reliant on CleanCode because it has much worse tooling and debugging, especially for large codebases.
11:31 Maybe I’m misinterpreting this whackamole stuff, but doesn’t “stuff here affecting stuff there” include dependencies?
MynameisBrianZX Those are kind of two different things.
Whackamole is when a developer makes changes in one part of their codebase and it breaks something in a different part of that codebase. So, simple example, the username box on the login web page is not lined up with the password box, so the developer makes a change to the CSS to shift the username element twelve pixels to the left and that fixes it, but that also shifts the "Welcome, ${login_name}" header at the top of all the pages 12 pixels to the left, and so then someone has to go fix that. And then when they move "Welcome, ${login_name}" twelve pixels to the right in the header, and now every HTML-formatted email that gets sent to a customer cuts off the rightmost 12 pixels of every customer's name in the "Dear ${first_name} ${last_name}" greeting.
Dependencies are different.
First, dependencies aren't (or shouldn't be) routinely changed to fix application bugs.
Two, the expectation for dependencies is that, when a dependency changes, all the parts of the application that use that dependency will need to be retested. So it's not really a surprise. They're kind of logically "below" the code the team is generally working on, rather than "over there."
The main thing is that, with whackamole, there's no reason to believe that, in a reasonable application, a change to the login page would also make changes to all the page headers, or a page header change would break the email greetings. Those things don't seem they should be related. So it's an unpleasant surprise, and hard to plan for. Also, it tends to cause chain reactions, so it happens over and over.
But when a dependency changes, there is a reason to believe that other places could break, so you'd expect to (and plan to) test them, so it's not the same kind of surprise, so it's a different category of problem. Plus, once a dependency is upgraded, and whatever problems that caused are fixed, that's generally the end of that issue.
Does that help?
@@InternetOfBugsYes, that helps a lot. To sum it up my own way, whackamole is unpredictable and widespread due to reckless design, whereas distinct modules interact through known, logical, and ideally few ways.
@@MynameisBrianZXyes. But I should be clear that the way whackamole gets its name is from the seemingly never-ending nature of problem after problem reminding people of the carnival/arcade game of the same name.
In my experience wack a mole situations are often the result of some form of coupled states that don't have a linear sequence. Like when a change in a state flag forces the program to recalculate a whole set of entities, or when an hierarchy of rules conflict in non obvious ways. Or when certain aspects of the functional requirements aren't explicitly set on writing and a whole bunch of assumptions were made, and then trying to fix that usually leads to inconsistent behavior from the application.
In fact I have seen a lot of smarty-pants developers, architects and consultants making a lot of specific assumptions that aren't necessarily true in terms of matching business requirements. And in the end applications are basically hammered to fit the most common use cases.
This! 10000000000% this! I absolutely love working on a codebase where if I see it misbehaving I can look at the folder structure and guess which folder the bug is in just from the names, and then when I look in the files, I find the exact line of the bug within a minute or two without even having to run anything.
Much of this has to do with code organization and namespace/packaging.
I feel like this doesn't get discussed enough.
well you hav descover reverse ingennering of bugs behabiors
Good that I listened until the end. It made me realize I should not trust the author. For The 25.13$ bug you should have logs that track decisions and help you to understand what went wrong. In my 13 years experience the less code you have the easier it to understand, debug and write. Funny thing that video author doesn't suggest how to do it otherwise exactly. His high level advise to avoid using abstraction and lose coupling between classes sounds to me like the worst advise you can take. That's what legacy code looks like. The worst thing is to make changes in such code, because to add a simple change you might spend too much time. Have high integration test or functional test coverage to minimize risk of bugs, not just unit tests
Can you share some code you've written/maintained? Just to have a look at cleaner alternatives
Good to see this videos that debunk the myth of clean code . I have done it in the past. I remember trying to make changes to my own code after some time it was a nightmare 😅
Regarding the accounting error issue, how could it be done otherwise?
As from my POV, that particular logic is abstracted from the whole system, and it did mess up the whole system.
But this logic has to live somewhere, should it live in the main/core part of the system? If many of these kinds of logic were in the main/core part of the system, wouldn’t it clutter it and make it less readable/testable?
My POV of this issue is that the component is
1 either not tested properly, so corner cases are not added
2 or has a bad specification, and the test is wrong
Either way I can only call it unfortunate that this defect propagates to a somewhat far place, but I would still consider this “far” place a part of the block of logic
"Error: Output changed to $25 because of"
would have been 100x better
What are your thoughts about the "Design Patterns: Elements of Reusable Object-Oriented Software"?
Most Design Patterns are reactions to limitations in the particular language in question:
blog.plover.com/prog/design-patterns.html
So if your language is chosen for you, the Design Patterns for that language are handy solutions to common problems. The classic Design Patterns book from 1994 is all about the version of Java that existed at the time. Some of those patterns are still useful, some are now irrelevant due to improvements in Java over the last 30 years.
Here's a good (relatively) recent talk that I think is a good take on the status of design patterns: www.deconstructconf.com/2017/brian-marick-patterns-failed-why-should-we-care
The great sin of OOP was a generation of "rockstar" programmers who split code across so many files and functions that it can be formally impossible to debug. You end up just "fixing" everything with garbage frontend logic. Like "if the order button is blue, then it is okay to process the credit card". Why does the button turn blue? Absolutely no one knows or understands, and it is not worth your time to discover why. So the whole frontend is duct tape and shoestrings holding together a backend full of 10 year old assumptions. Then log4j happen. You take out your gun and carefully caress it.
The great sin of FANG programmers was thinking "elegant" code is always the goal. Making massive assumptions about programming languages, edge cases, and documentation in order to write "perfect" code. Except the client was wrong. Your base assumptions were wrong. And only someone with 13,000 hours of 1337code programming IN THIS PARTICULAR LANGUAGE can possibly discern what to do now.
Anyone who goes too far into either camp is a disaster for an organization. Jusy write decent code. 99%+ of code in the world today does NOT need much more than the most casual optimization to be good enough.
"The internet is full of bugs, and anyone that says different is probably trying to sell you their book !" This take is the whole video in a sentence !
Thanks for the video, I never fully read that book, but had some knowledge about it, so it's very interesting for me see that those principles makes more difficult fix bugs. I have had many job interviews where I am asked if I have read and follow those principles, so I hope this change in the future
I am eager to see your opinion on Agile/Scrum.
thats an entirely different animal to programming and isn't even really about it i would argue. its a philosophy of marrying humans to software and it will never happen. water and oil. why? because humans are not future proofing and neither are computers. code schisms happen. poor leadership. what a mess, RIP.
This is all good and nice, but could you provide examples of how you saw things done in a "clean" way vs how you would do it? Otherwise it is pretty hard to get why to prefer one or the other in a given situation.
It's not about an alternative, necessarily. Many aspects of "Clean" are just harmful, and you're better off just winging it. "Clean Code" just teaches beginners that they should rigidly cling to stupid mnemonics instead of understanding what problem the maxim is trying to avoid, what actually causes it, how to identify it, and what can be done about it. Far too many readers latch on to those maxims rigidly and refuse to give them up, even in the face of evidence that they're making things worse. What's worse, most people claiming they're following those principles haven't even read the book at all, much less understood it. They're just parroting what "Cracking the Code Interview" taught them to say, and refusing to admit that there might be a better way.
That said, there will be more videos coming about the metrics and techniques I've found useful over the years.
The "not wack-a-mole" approach you're describing aligns with the Single Responsibility Principle. Code should have only one reason to change, as outlined in "Clean Architecture."
Regarding the $25 expenses example, it serves as an exception to the idea that reading code from bottom to top is a good practice. You assumed that 13 cents were being rounded off but couldn't find
it. Understanding what calculated the value would have been quicker.
The "define the normal flow" example you reference from "Clean Code" aims to demonstrate an alternative to using exceptions for flow control. Taking your example in good faith, I suggest you identify
the feature with the bug-public int getTotal() in this case-and trace its execution instead of making assumptions about the cause.
I don't believe Robert Martin suggests that writing bug-free software is possible. Although I don't have "Clean Code" on hand to reference, in "Clean Architecture," he quotes Dijkstra: "Testing shows
the presence, not the absence of bugs."
> The "not wack-a-mole" approach you're describing aligns with the Single Responsibility Principle. Code should have only one reason to change, as outlined in "Clean Architecture."
No, it really doesn't. Not at all.
> exception to the idea that reading code from bottom to top is a good practice
Reading code from bottom of the stack to the top is the most efficient way for bugs to be found (assuming, like with most bugs, the bug was reported regarding behavior at the Presentation/UI level or via a log message).
Code structures that don't facilitate that reading are inefficient at best, and obfuscating (harmful) at worst - as in this case.
> I don't believe Robert Martin suggests that writing bug-free software is possible
Either believes it's possible for code that is isolated and private to be bug-free, or he believes potentially buggy code should hide its bugs from the rest of the codebase. I think the former is a more charitable presumption on my part.
Can´t agree more, bought this book, read it a decade ago and forgot, what normally means it did´t made any impression.... People started do talk about CC last year and I re-read it ,just to be sure, and it is pure garbage besides de common sense tips. The code examples make you cry and are utterly irrelevant to modern architectures. Unless you´re writing a lot of java swing applications using java 1.4
It's an old book, but it has nothing to do with it being Java, it could easily be applied to C++, C# and every other language with OOP essentially.
Not defending clean code because there is a lot of dumb stuff in it.
If I use the ""Go to implementation(s)" feature of the IDE and it takes me to a function of an interface, I'm done man.
do you have any opinions on when/if you should ever stop trying to refactor an old application and start again from scratch? i know it's always an enticing meme for new developers to want to "rewrite it in rust" but at what point do we say, ok now we know the domain of the problem pretty well, let's build a new solution with all the lessons learned from this old application? not necessarily rewriting in a new language or framework but just saying, okay we know when X happens Y should do this in this manner so let's write our code this way because it's easier to fix when things go wrong?
That's worth it's own video. I'll add it to my list.
Short version: It depends on how long (& if) the system you are writing has been in production and how functional it is. There's a problem called "second system effect" ( wiki.c2.com/?SecondSystemEffect which came from Fred Brooks' FANTASTIC book "The Mythical Man Month" ), where taking a system that works but is clunky and starting over tends to be worse than the thing you're replacing, if you're not really, really careful. Especially if you try to have your initial release of the rewrite have all the features of the original.
If the thing you're replacing either has few users, or the users are so frustrated with it that they wouldn't mind the number of features being reduced for a while if it becomes more reliable, or if you don't care if they get upset for a while, then rewriting can make a lot of sense.
The best recent example I can think of is Apple rewriting their Office Suite (Pages, Numbers, and KeyNote) from their Legacy Mac code to a shared MacOS/iOS codebase. They did the same thing by killing Final Cut Pro 7 in favor of the much less feature-rich Final Cut Pro X. There was a lot of complaining for a couple (or more) years, but by now, almost everyone seems happy that they made the switch.
@@InternetOfBugs Thanks for the insight! i appreciate you writing all that out!
Where can I find your published technical book? I like to get a copy.
It's out of print now, except for Kindle: www.amazon.com/dp/B00LGXS9R6
@@InternetOfBugs Thank you. I bought myself a copy.
There are words for what your talking about : cohesion and coupling.
Coupling is when you have multiple subcomponents that affect each other.
Cohesion is when coupled components are kept together.
I think the ideas behind cleancode is that you can create a system with no coupling, by making these tiny independent pieces and then thinking somehow when these pieces are arranged togethor, problems cant occur from the integration of those components.
But really what happens is, more coupling is created from doing things like DRY where arbitrary functions are broken down into small chunks and spread out everywhere for 'reuseability'.
I think this is because creating cohesive systems requires you to actually understand the domain well enough to draw the right lines in the system to achieve locality of behaviour.
But it seems alot of devs just dont care enough about the domain and just want to make pretty abstractions
Ugh.
We're already overflowing with "we grabbed this existing word (or made one up) and gave it a specific meaning in the context of this specific kind of software development, and now we expect everyone to learn it and use it."
It's such a waste of time and effort.
@InternetOfBugs yes, your right but the idea of 'keeping things related together' is not a new concept. Having words for things and agreeing on what they mean is exactly the opposite of wasting time, how can we possibly progress as a discipline If we can't agree of a vocabulary?
Like go and find a definition of 'unit test' that programmers actually agree on...
I found this video quite compelling and you definitely explain the 'vibe' of what I consider maintainable code, I just think people might find it helpful to know that these are established ideas that can be further explored
Great video! So whack a mole type issues.. Would it be possible to go into more depth with more real world examples for these and related issues? I find with systems i work on its unavoidable to have certain interdependent items and have an intuitive idea of when something might cause a whack a mole type problem later, but would be super useful to learn more about this
Sure.
Here's a quick answer, though, that I wrote earlier in response to a similar question:
Whack-a-mole is when a developer makes changes in one part of their codebase and it breaks something in a different part of that codebase. So, REALLY simple example: the username box on the login web page is not lined up with the password box, so the developer makes a change to the CSS to shift the username element twelve pixels to the left and that fixes it, but that also shifts the "Welcome, ${login_name}" header at the top of all the pages 12 pixels to the left, and so then someone has to go fix that. And then when they move "Welcome, ${login_name}" twelve pixels to the right in the header, and now every HTML-formatted email that gets sent to a customer cuts off the rightmost 12 pixels of every customer's name in the "Dear ${first_name} ${last_name}" greeting (because the name got shifted right 12 pixels on a fixed-width div with overflow: hidden).
The main thing is that, with whack-a-mole, there's no reason to believe that, in a reasonable application, a change to the login page would also make changes to all the page headers, or a page header change would break the email greetings. Those things don't seem they should be related. So it's an unpleasant surprise, and hard to plan for. Also, it tends to cause chain reactions (each fix creates a new bug), so it happens over and over. That's the way whack-a-mole gets its name: from the seemingly never-ending nature of problem after problem reminding people of the carnival/arcade game of the same name.
Does that help any?
@@InternetOfBugs ah ok I think I get your point, basically it should be obvious which components are dependencies, and aspects that cause unexpected breakages in seemingly unrelated places are wakamole