"Are we opening an art studio? no" i came here regarding developing for an art studio... aww.. These are powerful tips for sure. a lot of over engineering compounds tech debt.. or gate keeps junior devs from being able to help out when suddenly their jobs become too abstract. As a dev with ADHD though I will say, having enough structure to build a factory pattern paired with a strategy pattern helps me maintain focus when i inevitably jump around. But, that usually is an organic result for the problems i face and not me actively slapping patterns on just because.
I'm glad you found the tips helpful, even if they weren’t directly aimed at art studios-your perspective adds a great layer to the discussion. You’re absolutely right that over-engineering can lead to compounded tech debt and make it harder for junior developers to contribute. Striking that balance between structure and simplicity is so important. I love your point about using patterns like Factory and Strategy to maintain focus as a dev with ADHD. Patterns can be a fantastic tool when they emerge organically from the problem at hand, rather than being forced into the design. It sounds like you’ve found a great way to align your workflow with your needs while keeping the codebase practical. I've been pondering a great way to discuss the Strategy pattern for a while, as so many smells that I'm talking about could be avoided with that pattern - it's crazy. Just requires a bit of a mindset change. Thanks for sharing your insights-it’s always great to hear how different devs approach these challenges! I was actually dismissed from art class for not being able to draw any geometric shapes without help ;-)
@@theseriouscto Sounds like you were just embracing the fun part of art. Ditch the rules and structure! haha. For me, strategy patterns tend to be a good way to set up modular pieces for SOME important steps along the way. Trying to depict a scenario from my head here, hopefully it conveys. From a 3D scene in Maya: rendering low-res, no-color 3D assets for easy previewing; rendering mid-res but well-lit videos to hand off to creatives for various purposes; rendering high-res for approvals and handoffs to editors to use as proxies; and rendering max-res for the final frame. Each can be a lego piece within a category of rendering components, each with its own set of actions, settings, and pre & post logic. But on the implementation side, it looks closer to a simple, legible todo list for a particular action. So if show1 needs renderingA but show2 needs renderingA & renderingB, it's a very minimal change for a junior dev to go in and customize the behavior. Something like that haha. Again, great video man! Thanks for your wonderful knowledge dump.
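A rough sketch of that "lego piece" idea, using the strategy pattern the commenter mentions. All names here (RenderPass, PreviewPass, LitVideoPass, the show lists) are invented for illustration, not taken from any real Maya pipeline:

```java
import java.util.List;

// Each rendering "lego piece" is one strategy behind a common interface.
interface RenderPass {
    void render(String scene); // settings and pre/post logic live per-implementation
}

class PreviewPass implements RenderPass {
    public void render(String scene) {
        System.out.println("low-res, no-color preview of " + scene);
    }
}

class LitVideoPass implements RenderPass {
    public void render(String scene) {
        System.out.println("mid-res lit video of " + scene + " for creatives");
    }
}

public class RenderDemo {
    public static void main(String[] args) {
        // show1 needs one pass, show2 needs two: customizing is a one-line list edit
        List<RenderPass> show1 = List.of(new PreviewPass());
        List<RenderPass> show2 = List.of(new PreviewPass(), new LitVideoPass());
        show2.forEach(pass -> pass.render("shot_042"));
    }
}
```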
I would say I’m probably the type of developer who tries to implement abstraction and modularity whenever possible. And I can see how this might increase complexity and waste time for both me and my teammates. However, I’ve also experienced difficulties with code that doesn’t have enough structure. I’m talking about hardcoded spaghetti with functions containing 1000+ lines. It was a nightmare to add features when there was no underlying framework whatsoever. Any advice on finding a healthy middle ground?
Great point! Striking the right balance between abstraction and simplicity is like walking a tightrope-lean too far either way, and things get messy. On one hand, too much abstraction can lead to unnecessary complexity that bogs down the whole team. On the other, too little structure turns the codebase into a plate of spaghetti no one wants to touch. My advice? Start simple. Build just enough structure to solve the problem at hand while keeping things flexible for future changes. Use refactoring as your safety net-let the code evolve as the requirements become clearer. And don’t be afraid to involve your teammates in these decisions; shared understanding is the key to maintainable code. Oh, and if a function starts approaching 1000 lines, that’s probably your code tapping you on the shoulder saying, 'Help me, I’m drowning!' 😂 Is getting a refactoring tool an option for you? I'd be curious to see if a good one spots patterns.
@theseriouscto I don’t have any experience with refactoring tools, but if you have any recommendations let me know! I finished my internship at the company I was working at, and I’m not sure if it would be an option anyways, but would be nice to know.
This is a fantastic philosophy that I'm trying to encourage in our teams. It seems many engineers write complex abstractions for no other reason than enjoying writing code. 😂
Absolutely! It’s like some engineers think they’re composing Shakespeare when they’re really just writing a grocery list. Let’s aim for clarity over complexity!
I'll be pedantic and say that you're really complaining about premature abstraction. Premature abstraction is bad; future-proofing is good. Premature abstraction is just future-proofing gone wrong.
It never happens until it happens. Before avoiding it, you should take the real project context into consideration. The problem I see with these ideas is that no one suggests a realistic alternative: yes, you can add noise and unnecessary complexity to the system, but not doing it can also add complexity and bottlenecks, and "you can refactor later" is in most cases a very big lie. Before taking any decision, you should first analyse the context.
The real context will draw a BIG LINE between the speculative and the real project context. Given that most software projects never get finished, I think it's important to always leave the fluff for later. Refactor later is not a lie, it's work. Get it working, get it working well, enhance it, repeat.
Ah yes, the classic 'I built a spaceship to deliver pizza' moment. 😂 Single Page Applications for something as simple as a homepage is like bringing a bazooka to a snowball fight-it works, but at what cost? Jokes aside, it’s a relatable trap. Sometimes, the allure of shiny tools or over-preparing for hypothetical future needs can lead to...overkill. At least you walked away with a lesson and, hey, probably a really snappy homepage! 😄
Yeah, no. I tried the no-future-proofing thing. It only works on fire-and-forget projects; I suffered greatly from that ideology and it forced me into a rewrite. Here's an engineering lesson one Boeing failed to learn: if you plan on maintaining anything in the future, future-proof it. Otherwise it will lead to a janky retrofit, aka the MCAS.
Fair point! Future-proofing does have its place, especially for projects that need long-term maintenance or are core to your operations. It’s a fine line to walk, though-overdoing it can lead to overengineering, while skipping it entirely can turn into a painful rewrite (or, as you aptly pointed out, a 'janky retrofit' à la MCAS). The trick, I think, is knowing what to future-proof and balancing it with simplicity. Focus on designing for change in the areas most likely to evolve, and leave the rest as lean as possible. Thanks for sharing the hard-earned lesson-it's always great to hear real-world examples!
How on Earth is it a bad thing to acknowledge that changes eat up the most time and are the most difficult part of the design process? How can you say that anticipating this fact, and designing so the design is flexible enough that changes won't be difficult or annoying, is wrong? The next person who comes in is going to thank their fucking lucky stars that the scaffolding they need to start making those changes is already there. How is that not a good thing? That's what future-proofing is. You're not predicting the future; you're using your experience to know with absolute certainty that people are going to want to change things, that a lot of those changes are going to be stupid, and which aspects of the design need to be most flexible. Experience gives you the ability to know what the next person working on this thing is going to be asked to mess with first, so they can come in without essentially starting from scratch, and without having to decode something that's impossible to figure out. It's already been made flexible and object-oriented enough that if they don't like something, it's easy to remove it altogether and replace it with something else, because you've made everything modular. Even if they keep nothing, they're still working within that modular framework, so the more people work on it, the less has to be figured out all over again. And by the time it makes its way back around to you, it's not really an issue; it's already following the flexibility and workflow you provided, because you knew from experience that this is how it works. Maybe design is very different from code, but I can't imagine that coding to anticipate change is a bad thing. I just can't imagine that.
Thanks for commenting, I love being challenged. Let's take an analogy: driving a car. What I'm basically saying is that future-building slows you down, and that you should avoid it when you are speculating, so you can go faster. That doesn't mean I'm saying drive at 180 km/h or mph. I'm saying that if you do it when you don't need to, it will take you longer to get to your destination (ship the product). If you know what the future holds, then of course you should plan and build for it, but only when you are absolutely sure. And even then, for me, that's a bit of a grey zone. What if the product flops? Ships and then is abandoned? What does all this future-proofing give you? The product is dead...
@@theseriouscto Haven't you ever gone back into a dead project for that one amazing thing that fits exactly the issue you are currently having? Basically, what I mean is: future-proofing is ensuring that not only yourself, but also other people you may never actually meet, will not have to solve the problem from scratch a second time.
What you’re explaining is writing clear and clean code, not implementing a whole scaffolding of abstractions or interfaces that are only used in one straight line to a single class. Different things, no?
Ok, catch me up. When you say abstractions, are we talking "cuil theory" (I swear it's a real thing), or, umm... a new standard that just rearranges how things are done to hopefully get ahead of a new trend in workflow? Or something else entirely? With "one straight line to a single class", are you basically saying that this is stupid because it DOESN'T do any of the things I have been in support of so far, because it assumes the issues of one designer (programmer) are for some reason going to be the issues of every programmer? When obviously that isn't true. This isn't making it easier to grab a hammer; it's spending time building a super-hammer-mega-right-hand-left-hammer-welder-all-at-the-same-timer... um, I don't know what they call those things for programmers. They are just called the freedom to solve a problem in the least inefficient way, instead of waiting for no reason while a workflow that must be adhered to catches up to the need, in order to get the workflow changed to include the exception that allowed for the creation of the tool (like anticipating that some engineering will need to be back-of-the-napkin, but keeping that amount to the things that don't affect the lives of people or the cost of assets, hence the modularity). But it doesn't go straight to the "Class" (which I am guessing is that really obscure answer) and then try to anticipate others like it, all of which take actual engineering to create, while none get used again?? Did I at least get close? Because I am trying to do a better job of convincing people (who protect the job tree from jobs being picked) that while I didn't do a lot of coding, I understand it more as an undertaking than I don't. Because if I got that all wrong, I am sorry.
Who said anything about convincing? Maybe it's not an option for everyone but if I believe that something makes no sense, even after I've been told to do it, I go back to my desk and keep working on the right priorities. Suits don't know tech - if they won't agree to you refactoring the app it's because they think your reasons aren't good enough. Ever read the Phoenix Project by Kim?
Why are you attacking me ^^ Seriously, one thing I want to add: sometimes this over-engineering already happens at the conception level, with bloated requirements that never really get used - because "we might need it later".
I think I'm attacking everyone ;-) Couldn't agree with you more, hence the power of saying no - OR - priorities. Suits want everything, but they have to prioritize, and you start with the most important thing. Would a suit get married and fix issues with the bride later? Well, we don't write software like that either.
There is future-proofing, and then there is future-proofing. The right kind of future-proofing is: avoid third-party dependencies wherever possible, because they will be a maintenance nightmare in the future. Your own code is future-proof, because it's your own, you can fix it. You do not own dependencies and do not want to maintain them.
That’s a great point, and I 100% agree-avoiding unnecessary third-party dependencies is absolutely a form of smart future-proofing. The kind where you protect your codebase from external chaos, not where you overcomplicate your own. But I’d argue this still ties into the core idea of speculative generality: adding dependencies you don’t truly need today is just another version of ‘what if we need this later?’ Instead of over-engineering with abstractions, you’re over-relying on tools you don’t own. So yeah, future-proof by keeping control where it matters-your own codebase. Because you can fix your code; you can’t always fix that package that hasn’t been updated in two years. Thanks for bringing this up!
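The comment argues for avoiding dependencies outright; a related compromise, offered here as an editorial sketch rather than the commenter's position, is to confine an unavoidable dependency behind one interface you own, so a future replacement touches a single seam. All names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// The seam we own; callers never see the third-party API directly.
interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

// Stand-in for the adapter that would wrap an external library.
class InMemoryStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

public class StoreDemo {
    public static void main(String[] args) {
        KeyValueStore store = new InMemoryStore(); // swapping vendors means changing this line
        store.put("greeting", "hello");
        System.out.println(store.get("greeting"));
    }
}
```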
Funny comments? Absolutely-future devs deserve to laugh while crying over the code! Just remember, comments are like seasoning: too few, and it’s bland; too many, and you’ve turned your code into a novel. Aim for the sweet spot where they’re useful, hilarious, and don’t double as a stand-up routine.
@@theseriouscto I can only write comments and no code, so my opinion doesn't really matter. But as a problem solver, my view is that in the future I'm much more likely to go back to notes to see HOW I was thinking than WHAT I was thinking. In other words, I learn from the process more than from the results. Put another way: the way to future-proof code is to make it so you're better at coding after every project.
Not intentionally, but I’ll take it if it makes me sound smarter! 😄 The idea does align with Altshuller’s TRIZ philosophy: the best solution is one that eliminates the problem entirely. In coding, that often means solving the problem so elegantly-or questioning whether it needs solving at all-that you don’t end up writing unnecessary code. Truly a case of less is more!
@theseriouscto It's in my email signature: "the ideal system is when there is no system", along with Churchill's exhortation "no idea is so outlandish..." etc.
Hahaha! Fortunately, the number is holding steady at zero-and now that I’m retired, I’d say it’s a streak I don’t plan on breaking. 😉 But on a serious note, this isn’t about ignoring best practices; it’s about striking a balance and not letting over-engineering create unnecessary headaches. Clean, simple code is what keeps projects (and jobs!) safe. Thanks for the laugh, though! 😅
There’s a Venn diagram overlap between “smart enough to anticipate possible future changes in requirements” and “dumb or inexperienced enough to believe that you can mitigate this proactively”.
That said, some futures are worth planning for, if you know they’re coming (or at least very likely to).
That Venn diagram is where optimism goes to die! 😂 But you're spot on-there’s a difference between guessing wildly and making an informed bet on a likely future. Planning for the inevitable? Smart. Trying to outsmart the unknown? That’s how you end up with a Rube Goldberg machine in your codebase.
Design patterns-wise, I've found that the most productive one that generally future proofs code without much hassle is the strategy pattern. You can address exactly what you're trying to do without over-generalizing, while also allowing additions in the future without much effort.
Couldn’t agree with you more
Very flexible pattern
Too underutilized to be honest
Even dependency injection is basically the strategy pattern with dynamic loading in some cases
strategy pattern is just a way to pass functions as parameters within a strictly OOP environment, so yeah, it's great
@@Iloerk I prefer to say "Behaviour" but Tomato/Tomato - 100% agree
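That "functions as parameters" framing is easy to show in modern Java, where the strategy can literally be a lambda argument. A minimal sketch with invented names (Checkout, the discount rules):

```java
import java.util.function.DoubleUnaryOperator;

public class Checkout {
    // The pricing strategy is just a function passed in; no class hierarchy needed.
    static double total(double base, DoubleUnaryOperator discount) {
        return discount.applyAsDouble(base);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator noDiscount = p -> p;
        DoubleUnaryOperator tenPercentOff = p -> p * 0.9;
        System.out.println(total(100.0, noDiscount));    // 100.0
        System.out.println(total(100.0, tenPercentOff)); // 90.0
    }
}
```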
I was working on an amateur project for a guy that had already built out a foundation, and his database had like 48 tables in it. MAYBE 5 had data, the rest were for "just in case" and "I need this later." It was a PITA to work in, and that is before getting to the mess in his website that interacted with it. That project made me stop "building ahead" when the Good Idea Fairy would whisper "You might need this later."
Ah the classic case of "just in case" tables! Built on the ideology of inspector gadget ;-)
THANK YOU - that term "speculative generality" is what I needed: a big-brain sounding term to whip out when arguing with the architecture astronauts about why we should just build the thing instead of debating design patterns and UML diagramming
If I could invent a code smell called "architecture debate" and get away with it - I would ;-)
That being said, I'm not running out of fish any time soon. 25 years is a long time to write down every possible frustration that's ever happened.
I think there is a place for future proofing: Knowing you will actually need these abstractions. E.g. you wrote similar code for different clients before and thus have some experience where it will end up.
But if you solve a new problem, you don't know what your code has to solve in the future. In this case, abstraction is just hubris. You think you know how to describe the world in abstractions. And then the exceptions hit you like a truck.
It's not speculative then
If you’re basing your abstractions on known patterns or prior experience, it’s not speculative-it’s informed design. The distinction lies in whether you have concrete evidence or are just guessing about future needs.
Speculative generality becomes a problem when developers try to anticipate future requirements without sufficient context or data to justify those decisions. This often leads to over-engineered solutions that solve problems no one actually has, making the codebase harder to work with.
On the other hand, leveraging prior experience to create abstractions that address recurring patterns is a smart way to future-proof in a controlled, meaningful way. The key is knowing when you’re dealing with a recurring problem versus stepping into the unknown.
Or just having feature cards in place, where you know you'll need it in the very near future for the feature you're working on right now.
I started working on a game 1 month ago. And I didn't need a state machine when I started. But once I started working on character states, I didn't start writing code without a pattern, I looked at solutions that gracefully solve it: state machines, behavior trees and GOAP and went with a state machine: it's the one I know best and looks the least complex to implement.
While I didn't need it for the first 1-2 character states, and you could say I still don't absolutely need it with 20 states, right about now I'd be dying in endless state checks all over the place. So I implemented it right away. Well, I implemented it after I added a single state and started adding new states. So you could say I added it when I needed it.
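A minimal sketch of the state-machine shape this comment describes; the states and inputs here are invented for illustration, not taken from the commenter's game:

```java
// Each state decides its own transitions, replacing scattered if-checks.
interface CharacterState {
    CharacterState handle(String input); // returns the next state
}

class Idle implements CharacterState {
    public CharacterState handle(String input) {
        return input.equals("move") ? new Walking() : this;
    }
    public String toString() { return "Idle"; }
}

class Walking implements CharacterState {
    public CharacterState handle(String input) {
        return input.equals("stop") ? new Idle() : this;
    }
    public String toString() { return "Walking"; }
}

public class StateDemo {
    public static void main(String[] args) {
        CharacterState state = new Idle();
        for (String input : new String[] { "move", "stop" }) {
            state = state.handle(input); // a 21st state is one new class, not 20 new ifs
            System.out.println(input + " -> " + state);
        }
    }
}
```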
The big thing to learn is not "just build the thing" but, "how to convince others to let me just build the thing". I think that's where a lot of developers get stuck in design pattern hell, because at some point it's not about writing good code anymore, but about convincing coworkers that you're writing good code and that's an entirely different game.
You hit the nail on the head! In the world of coding, sometimes it's less about the code and more about the code-versation. Convincing others can be a real game-changer! Maybe ask them what choice will guarantee they don't get emergency calls at 3am?
Good point. Charles H. Moore stated this pretty clearly in his 1970 book, Programming a Problem-Oriented Language: "Do Not Speculate!" It's the second rule, after "Keep it Simple".
When you think about it, Do Not Speculate is a great way to Keep it Simple
100% true! Abstract base class with interface for data access for, you know, when you might change the DB. Because that happens all the time...
I know right? I can usually debunk that one just by looking at the queries and seeing if they use DB-specific functions ;-)
@@theseriouscto you have to be prepared. You might get a call in the 3am and you have to change the DB immediately :)
I've actually done that. One of our products used to sync through Dropbox, and we moved it to sync via S3. Because of the abstract base class, it was relatively non-invasive. That said, it's not a sure thing that the base class helped. I could just have changed all references from Class A to Class B, and implemented functions until it compiled again. And it was a one-time deal.
@@ivanmaglica264 Definitely, my DevOps team got those calls from clients all the time
@@HollywoodCameraWork If the base class didn't really help then it wasn't done right and didn't need to be done.
Sounds like you deal with some strange issues.
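For reference, roughly the shape this sub-thread describes, as a hedged sketch: one abstract sync API with a Dropbox and an S3 backend. The real product's class names aren't given, so these are placeholders with the SDK calls stubbed out:

```java
abstract class SyncBackend {
    abstract void upload(String path, byte[] data);
    abstract byte[] download(String path);
}

class DropboxSync extends SyncBackend {
    void upload(String path, byte[] data) { /* Dropbox SDK call would go here */ }
    byte[] download(String path) { return new byte[0]; }
}

class S3Sync extends SyncBackend {
    void upload(String path, byte[] data) { /* AWS S3 SDK call would go here */ }
    byte[] download(String path) { return new byte[0]; }
}

public class SyncDemo {
    public static void main(String[] args) {
        SyncBackend backend = new S3Sync(); // the migration point: one constructor swap
        backend.upload("settings.json", new byte[] { 1, 2, 3 });
        System.out.println("synced via " + backend.getClass().getSimpleName());
    }
}
```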
I think we get this from tutorials, textbooks, and classes where the example is very simple. This simple example becomes a vehicle for illustrating every kind of technique. This teaches us that every kind of technique should be piled onto even simple code.
You make a point that is so good
The lazy reusable example that ends up teaching the next generation of developers this horrible mindset
When they should do the reverse and stick to simple demonstrations that encourage simplicity
But repeat that during your job interview, and they will "call you later".
In front of a junior dev or HR - both without significant experience in the programming field? 1000% agree
In front of someone like myself? I'd hire you on the spot
I have worked on so many "enterprise" projects as a junior dev that fell into this particular trap to an insane degree. A debugging and refactoring nightmare that's impossible to upgrade to modern standards, interfaces upon interfaces; I had to step through 10+ abstraction layers every time I wanted to know what was actually being called. I will never make that mistake myself: simple and to the point. When I expect future extension will be needed, I create a single interface that allows outsourcing that particular problem to external modules, and that's it.
Complexity is the refuge of the clueless; simplicity is the signature of the clever. :-)
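A minimal sketch of the "one interface for the part I expect to change" approach from the comment above; the exporter example is hypothetical:

```java
import java.util.List;

// The single extension point; everything else in the app stays concrete.
interface Exporter {
    void export(String report);
}

class CsvExporter implements Exporter {
    public void export(String report) { System.out.println("csv: " + report); }
}

public class ReportRunner {
    public static void main(String[] args) {
        // External modules can contribute new Exporter implementations later.
        List<Exporter> exporters = List.of(new CsvExporter());
        exporters.forEach(e -> e.export("Q3 totals"));
    }
}
```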
You should absolutely anticipate about what comes next. Think about how the requirements might change, what might be asked, and then think about what *would have to be done* to your design to accommodate that change. The key point is *think about what would have to be done*, but *don't actually do it*.
In particular, if you anticipate there will be some requirement down the line that will require introducing some abstraction to accommodate, *don't actually introduce that abstraction*, just satisfy yourself that down the line when you have to, you will be able to introduce that abstraction.
Thanks for your thoughtful comment! I completely agree that anticipating future requirements and considering how your design might need to adapt is an important part of the design process. Your point about focusing on what could be done while holding off on implementing abstractions until they're actually needed is spot on-it's a great way to balance flexibility and simplicity.
I think this mindset is key to avoiding speculative generality while still being mindful of potential future needs. By keeping designs extendable but not overcomplicated, we can ensure the code evolves naturally as requirements become clearer.
If you'd like, feel free to share an example of where this approach has worked well for you (or even where it hasn't). I'm sure many viewers would benefit from hearing your experience!
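For concreteness, here's what "satisfy yourself you could introduce the abstraction later, without actually doing it" can look like; an editorial sketch with a hypothetical EmailNotifier, kept concrete on purpose:

```java
public class EmailNotifier {
    // If SMS or push notifications ever become a real requirement, this class is
    // where a Notifier interface could be extracted. Until then there is exactly
    // one implementation and deliberately no interface.
    public void notify(String user, String message) {
        System.out.println("email to " + user + ": " + message);
    }

    public static void main(String[] args) {
        new EmailNotifier().notify("alice", "build passed");
    }
}
```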
This was certainly a problem in the 90s when OO was really starting to pick up, the GOF book had just come out etc. There was a lot of talk at the time about writing reusable code, libraries etc. and generalising things for reuse that I think most of us fell into to some extent. It wasn't until later that people started to realise that the time to abstract/generalise was when you finally had a use-case that needed those things.
Listening to your excellent description, it sounds like it became trendy before people understood it and when to use it. Then it became better knowledge. In the last 25 years, however, you'd be astonished at the percentage of my college/university hires (I'm in Canada) whom I had to teach the GoF patterns.
@@theseriouscto I don't think there's time in a typical 3 year (in the UK) BSc. Comp. Sci curriculum to cover more than a fraction of what you need to be a competent programmer/developer. Even back in the 90s, a surprisingly small number of graduates were really competent junior level programmers.
Most of the people who were good by the time they left were already programming before they joined the course and spent a lot of their time during the course doing "extra-curricular study" of one form or another.
It's even worse now than it was when I graduated in the mid 90s because there's so much more abstraction and complexity to get your head around.
I think it's inevitable you have to do a lot of heavy lifting on your own time over and above what they teach you in BSc.
I learned of the GoF book in my second job, as a senior because people had copies of it and were talking about it.
I was and I still somewhat am guilty of this due to having to work in a context without good internal standards.
I realize that my intent, simplifying my future self's life, is at odds with the strategy.
However I think that many devs do this because they do *recognize* a problem, and they solve said problem with the tool they know best: code.
Even though said problem is often best approached from a different angle: getting everybody on board with a style, and adding tools to enforce that style (I don't care where brackets are written, as long as it's consistent across the whole codebase).
Likewise, refactoring and tests should be embraced, but in many companies management is pushing for more features because that's where the money comes from, ignoring the economic aspects of unmaintained code.
Teaching non-technical people that *not* refactoring is *expensive* is a challenge. Some don't know and can learn; some know but don't care, because they'll already be polishing their CV after they get their 'unexpected productivity' bonus.
Thank you for the thoughtful comment! You bring up an excellent point about how developers often default to solving problems with code because that's the toolset they know best. I agree-sometimes, these issues are better addressed through team standards and tools, but that requires organizational buy-in, which isn't always easy to get.
In my experience, most products don't even make it to the shelf, so the priority is often building a minimal viable product as quickly as possible. While I don't advocate writing bad code, there are so many great refactoring tools available now that cleaning up the code later isn't as intimidating as it once was. So, for early-stage projects, I recommend focusing on getting the product out the door without worrying about future-proofing. After all, if the product doesn’t succeed, all that extra effort was wasted.
That said, explaining this approach to management can be a challenge. I’ve found that construction analogies work well for this. For example, you can compare bad code to poorly placed electrical wiring: it works initially, but if you later need to add plumbing in the same spot, you’ll have to redo the wiring first. This helps non-technical stakeholders understand why cleanups like refactoring are necessary.
Finally, when you have these conversations, it’s important to establish your role as the technical expert. A little humor helps: I like to tell management, 'I won’t mess with accounting or marketing if you don’t tell me how to architect software or write code!' It’s lighthearted but gets the point across.
"Premature optimization is the root of all evil" indeed.
Absolutely! Premature optimization is like packing for a trip to Mars when you're just planning a weekend road trip-you'll probably end up hauling a lot of unnecessary baggage and still forget your toothbrush.
@@theseriouscto Abso-fkn-lutely.
I used to see “every dependency must be an interface” in Java projects so we’d have something like an IBeanFactory and BeanFactoryImpl for every single class that got defined. Thankfully this kind of cargo cult standard seems to have gone by the wayside.
It's interesting how certain practices can become ingrained in the development community. It's great to see a shift towards more practical approaches!
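The ceremony that comment describes, next to the plain alternative, sketched side by side; BeanFactory is used here as a generic placeholder name, not Spring's actual interface:

```java
// The cargo-cult version: an interface with exactly one implementation, forever.
interface IBeanFactory { String makeBean(); }
class BeanFactoryImpl implements IBeanFactory {
    public String makeBean() { return "bean"; }
}

// The plain version: just the class; an interface can be extracted if a second
// implementation ever actually shows up.
class BeanFactory { String makeBean() { return "bean"; } }

public class FactoryDemo {
    public static void main(String[] args) {
        IBeanFactory ceremonious = new BeanFactoryImpl();
        BeanFactory plain = new BeanFactory();
        System.out.println(ceremonious.makeBean() + " / " + plain.makeBean());
    }
}
```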
One example of this I see all too often in the game modding community is the "library mod." A mod that only contains common code for all of that modders other mods. Of which there will be ONE. >.>
I hadn’t thought about that
Very good point
All that bloatware adds up
I'm glad that the senior developer I was working with a lot when I started steered me away from speculative generality early. I've definitely still made some overgeneralizations, but nowhere near as much as I would have without that guidance. We're still working on a suite of rather complex LoB tools, so there's only so simple things can be, but that's even more reason to be careful not to overcomplicate.
Granted one of the other devs there, who has since left, was very keen on generality to a fault, even if the solution didn't correctly address the problem. In at least one case so far I've rewritten something he made because fixing it to actually solve the problem would have required significantly more work, because the generality was so baked in that trying to address it would have broken more things than it fixed unless I did it exactly right.
I'm happy you had someone to guide you - the right mentor can make all the difference.
Incidentally, that's one of the reasons why I started this channel, too many junior devs won't have a "GOOD" one.
Simplicity is king. I estimate about 2/3 of code in most codebases isn’t actually needed. I’ve done plenty of refactors where a small amount of code is added and 3-4x the amount of equivalent code is removed.
That's exactly the point-we're often overbuilding solutions to problems that don't exist yet! We all know that one day that special person in our lives may need bigger clothes... But let's not buy those clothes until they are needed ;-)
Another thing that can happen is this: one developer is comfortable with a certain level of complexity or abstractness in a design, so it doesn’t feel over-engineered to them, and they build a complex solution that handles problems just fine as long as they are the one maintaining it. But then you bring in more junior developers who need the code to be super easy to comprehend, and they either make a mess or just can’t touch the “over-engineered” code. What makes it over-engineered in some cases is the mismatch between the complexity of the solution and the actual developers who will be maintaining it.
Or maybe developers are just used to a different set of design patterns. I think it's key to have a common understanding of how code is developed in a dev team. This might change with time, but it should not change overnight.
Ah, the classic 'over-engineered code handoff'-where one dev’s masterpiece is another dev’s nightmare. Complexity is relative, isn’t it? What feels like elegant design to one person might look like a crime scene to someone else. 😂
That’s why shared team standards are so critical. A consistent approach gives everyone a common language to work with, minimizing that ‘what fresh hell is this?’ moment for whoever inherits the code. And yeah, evolving those standards is fine, but doing it overnight? That’s how you give your team whiplash!
I instead like to fully investigate the possible futures, and then just ensure that we're not abjectly working against those futures and preventing them. I'm happy to not build the future until we get there. But ignoring known futures is the road to hell. We're just coming out of a TWO YEAR refactor that was 90% downtime, and it was all from ignoring known futures while writing the code. Totally avoidable. We refused to look far enough into the future, and it cost us 1-2 years of full team downtime.
Wow, two years of refactoring sounds like a tough lesson-I'm sure that’s not an experience you want to repeat! You make a great point about known futures. Ignoring clear signals of where the code might need to go can create massive technical debt and costly downtime, as you've experienced.
I also think it’s important to distinguish between speculating on features and knowing about them. Speculation often leads to over-engineering for hypothetical needs, while ignoring concrete, foreseeable changes can have consequences like the ones you described.
Your approach of investigating possible futures without overcommitting to them strikes a great balance. It’s about acknowledging likely changes and ensuring you’re not actively working against them while still focusing on solving today’s problems.
Thanks for sharing-it’s a valuable perspective and a reminder for everyone to keep their eyes open for those 'known futures' without falling into the trap of speculation.
Yes, excellent video. I definitely went through a phase like this in my career. In the last two-three years I have noticed myself finally stepping away from this, and it's much better.
However, I think because I wrote (stupid) abstractions, I have learnt how to do it. Many people I know never learned to do it, and now they can't do it. Maybe there is some value in this, although I still feel bad that someone needs to maintain my bad code.
Maintaining bad code is like cleaning up after a party you didn't throw! At least now you know how to throw a better one next time!
I hate future proofing code almost as much as i hate premature optimization! **adjusts monocle**
Ha! The monocle touch really elevates the comment. 😄 Totally agree-future-proofing and premature optimization can both be traps that overcomplicate things without adding real value. The best code is the kind that solves today’s problems cleanly and evolves naturally with tomorrow's needs. Thanks for the laugh!
Depends entirely upon whether you are building libraries or products. If you want to implement something complex as a library, having the right design patterns is nonnegotiable, but you also have to design it for a specific set of expected use cases (typically informed by previous experiences and foreseen new requirements). When building products, the right design patterns become obvious as you build imo, but you will only know which to use if you have studied and practiced with a variety of them already.
It sounds like you've found the sweet spot between "too much design" and "too little" - that's the key!
I found another gold mine of a channel. Thank you, sir
You're welcome, my friend! Just trying to save the world, one code smell at a time.
Every codebase should have a DonaldTrump class to make it idiot-proof.
That would certainly make your code great again
Great one-liners!
For vacations, I had modular pants made. In case I lose a limb, or gain one. Or my spouse and I want to wear the same pair of pants. 🧠
🤣😂🤣🤪
Speculative pant-ability?
I guess this isn't as much the case in game development. Say you're making an RPG: it would be good to "future proof" the components/character classes so that all new players, enemies, NPCs etc. are easy to implement in the future and act consistently.
True, in game dev, future-proofing is like setting the rules for a board game before anyone starts playing-it keeps things consistent and makes adding new pieces a breeze. But even then, the trick is to future-proof just enough. Build for expansion, not for every possible DLC idea that might pop into your head at 3 a.m. Otherwise, you’re just creating a boss-level challenge for yourself!
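To make that concrete, here's a minimal sketch (Python; all names like Health, Mover, and Character are hypothetical) of the composition approach: entities are bags of shared components, so a new player, enemy, or NPC type is mostly configuration rather than a new class hierarchy.

```python
# Minimal sketch: composition over deep hierarchies, so new entity
# types stay cheap to add. All names here are hypothetical.

class Health:
    def __init__(self, hp: int):
        self.hp = hp

    def take_damage(self, amount: int) -> None:
        self.hp = max(0, self.hp - amount)

class Mover:
    def __init__(self, speed: float):
        self.speed = speed

class Character:
    """Players, enemies and NPCs are all just bags of components."""
    def __init__(self, name: str, **components):
        self.name = name
        self.components = components

    def get(self, key: str):
        return self.components.get(key)

# Adding a new entity type is configuration, not a new subclass:
player = Character("hero", health=Health(100), mover=Mover(5.0))
turret = Character("turret", health=Health(40))  # static enemy: no mover

player.get("health").take_damage(10)
print(player.get("health").hp)  # 90
```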
Kotlin has final as the default and is not future-proof by design. Apple products (like the iPhone) are not future-proof, as even the battery is glued in place.
100% agree
the "better" at coding i get, the less productive i'm getting for this very reason. time to drop the habit.
With great power comes great responsibility - Voltaire
Think it comes from the notion of refactor bad, and tech debt bad. Thus abstract to avoid both.
Ah, the classic dance of abstraction and the fear of tech debt! It's like trying to dodge a rainstorm by hiding under a leaky umbrella-eventually, you have to face the weather!
People fear refactoring and tend to future-proof the code because they fear they might break functionality. If you fear breaking functionality, then you should write a test rather than future-proofing your work.
100% in agreement
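For anyone curious what that looks like in practice, here's a minimal sketch (Python with pytest; apply_discount is a hypothetical function): a characterization test pins down today's behaviour so tomorrow's refactor can be done without fear.

```python
# Minimal sketch: a test as a refactoring safety net, instead of
# speculative abstraction. `apply_discount` is hypothetical.

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_keeps_known_behaviour():
    # Lock in current behaviour; a refactor that changes it fails loudly.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
```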
Fear of the unknown, future-proofing breeds; future-proofing to bloated code leads; bloated code, the path to the dark side, it is.
I agree except if you have any downstream users you really need to have solid API versioning in place ahead of time so that "refactor it later" is an option.
Absolutely! If there are downstream users, solid API versioning is non-negotiable. It’s like laying down train tracks-you don’t want to suddenly realize mid-journey that you’re driving a bullet train on tracks meant for a steam engine.
Good versioning turns 'refactor it later' from a dream into an actual strategy instead of a nightmare for your users. It's one of those rare cases where a little future-proofing upfront pays massive dividends. Thanks for bringing that up-it’s a lesson every dev learns sooner or later (hopefully sooner)! 🚂💻
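As a rough illustration of that point, here's a minimal sketch (Python/Flask; the endpoints are hypothetical): versioning the URL from day one means /v1 keeps serving downstream users while /v2 is free to be refactored.

```python
# Minimal sketch: URL versioning so 'refactor later' doesn't break clients.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/users")
def users_v1():
    # Frozen contract: downstream clients depend on this exact shape.
    return jsonify([{"id": 1, "name": "Ada"}])

@app.route("/v2/users")
def users_v2():
    # Refactored internals and a richer shape, without breaking v1.
    return jsonify({"items": [{"id": 1, "name": "Ada", "active": True}]})

if __name__ == "__main__":
    app.run()
```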
Almost all the places I have worked have not budgeted time for refactoring when their "build for now" code has to be extended, repurposed and redesigned. As a result, it has been hell to get up to speed as a new hire amidst a hodgepodge of things cobbled together like a Jenga tower assembled in a gale. DO THINK ABOUT THE REALISTIC FUTURE. USE THAT DESIGN PATTERN THAT IS VERY LIKELY TO BE NEEDED. YOUR BOSS HAS NOT THOUGHT ABOUT IT. YOUR SALES TEAM HAS NOT THOUGHT ABOUT IT. YOU WON'T GET THE TIME TO DO IT PROPERLY LATER. (And yes, you will say that's a shitty company. Those exist. We all have to work in some.)
As the person who made the video, I think you know where I stand.
As a technical expert, it's your duty to educate people who don't know any better. Otherwise, you're just watching the problem and doing nothing.
People will respect you more for it, and they will eventually listen. I have a great video on the art of saying No.
"Are we opening an art studio? no"
I came here regarding developing for an art studio... aww...
These are powerful tips for sure.
A lot of over-engineering compounds tech debt, or gatekeeps junior devs from being able to help out when suddenly their jobs become too abstract.
As a dev with ADHD, though, I will say: having enough structure to build a factory pattern paired with a strategy pattern helps me maintain focus when I inevitably jump around.
But that is usually an organic result of the problems I face, not me actively slapping patterns on just because.
I'm glad you found the tips helpful, even if they weren’t directly aimed at art studios-your perspective adds a great layer to the discussion.
You’re absolutely right that over-engineering can lead to compounded tech debt and make it harder for junior developers to contribute. Striking that balance between structure and simplicity is so important.
I love your point about using patterns like Factory and Strategy to maintain focus as a dev with ADHD. Patterns can be a fantastic tool when they emerge organically from the problem at hand, rather than being forced into the design. It sounds like you’ve found a great way to align your workflow with your needs while keeping the codebase practical.
I've been pondering a great way to discuss the Strategy pattern for a while, as so many smells that I'm talking about could be avoided with that pattern - it's crazy. Just requires a bit of a mindset change.
Thanks for sharing your insights-it’s always great to hear how different devs approach these challenges!
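For readers who haven't met the combo mentioned above: here's a minimal sketch (Python; ExportStrategy, JsonExport, and friends are hypothetical names) of a Strategy behind a Factory. Each behaviour is a small swappable object, and the factory is the single place that grows when a real new need arrives.

```python
# Minimal sketch: Strategy pattern with a Factory as the single
# extension point. All names here are hypothetical.
import json
from abc import ABC, abstractmethod

class ExportStrategy(ABC):
    """Strategy: one interchangeable behaviour per class."""
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExport(ExportStrategy):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExport(ExportStrategy):
    def export(self, data: dict) -> str:
        return ",".join(str(v) for v in data.values())

def make_exporter(kind: str) -> ExportStrategy:
    # Factory: the one spot to extend when a real new format shows up.
    strategies = {"json": JsonExport, "csv": CsvExport}
    return strategies[kind]()

print(make_exporter("json").export({"id": 1, "name": "Ada"}))
```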
I was actually dismissed from art class for not being able to draw any geometric shapes without help ;-)
@@theseriouscto Sounds like you were just embracing the fun part of art. Ditch the rules and structure! haha
For me, strategy patterns tend to be a good way to set up modular pieces for SOME important steps along the way.
Trying to depict a scenario from my head here; hopefully it conveys.
From a 3D scene in Maya: rendering out low-res, no-color 3D assets for easy previewing; rendering out mid-res but well-lit videos for handing off to creatives for various purposes; rendering out high-res for approvals and handoffs to editors to use as proxies; and rendering out max-res for final frames. Each can be a Lego piece within a category of rendering components, each with unique sets of actions, settings, and pre & post logic.
But on the implementation side, it looks closer to a simple, legible to-do list for a particular action. So if show1 needs renderingA, but show2 needs renderingA & renderingB, it's a very minimal change for a junior dev to go in and customize the behavior.
something like that haha
Again, great video man! thanks for your wonderful knowledge dump
@@moo_goo You are most welcome and thanks for the Strategy pattern example - I wish more devs understood the power of some patterns
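A rough sketch of moo_goo's idea (Python; RenderPass and the show names are hypothetical): each render pass is a strategy object, and a show's pipeline reads like the legible to-do list described above.

```python
# Minimal sketch: render passes as strategies, composed per show.

class RenderPass:
    def __init__(self, name: str, resolution: str):
        self.name = name
        self.resolution = resolution

    def run(self, scene: str) -> None:
        # Real code would invoke the renderer with pass-specific settings
        # plus its own pre/post logic; here we just trace the plan.
        print(f"{scene}: {self.name} at {self.resolution}")

rendering_a = RenderPass("preview", "low-res")
rendering_b = RenderPass("creative handoff", "mid-res")

# Per-show config a junior dev can safely edit:
shows = {
    "show1": [rendering_a],
    "show2": [rendering_a, rendering_b],
}

for show, passes in shows.items():
    for render_pass in passes:
        render_pass.run(show)
```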
I would say I’m probably the type of developer who tries to implement abstraction and modularity whenever possible. And I can see how this might increase complexity and waste time for both me and my teammates. However, I’ve also experienced difficulties with code that doesn’t have enough structure. I’m talking about hardcoded spaghetti with functions containing 1000+ lines. It was a nightmare to add features when there was no underlying framework whatsoever. Any advice on finding a healthy middle ground?
Great point! Striking the right balance between abstraction and simplicity is like walking a tightrope-lean too far either way, and things get messy.
On one hand, too much abstraction can lead to unnecessary complexity that bogs down the whole team. On the other, too little structure turns the codebase into a plate of spaghetti no one wants to touch.
My advice? Start simple. Build just enough structure to solve the problem at hand while keeping things flexible for future changes. Use refactoring as your safety net-let the code evolve as the requirements become clearer. And don’t be afraid to involve your teammates in these decisions; shared understanding is the key to maintainable code.
Oh, and if a function starts approaching 1000 lines, that’s probably your code tapping you on the shoulder saying, 'Help me, I’m drowning!' 😂
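One concrete way to get that middle ground, sketched in Python (process_order and its steps are hypothetical): carve the big function into named steps. That's usually structure enough; interfaces and frameworks can wait until a second real variant appears.

```python
# Minimal sketch: named steps instead of one 1000-line function,
# and no speculative interfaces until a second variant exists.

def process_order(order: dict) -> dict:
    validated = validate(order)
    priced = apply_pricing(validated)
    return persist(priced)

def validate(order: dict) -> dict:
    if "item" not in order:
        raise ValueError("order needs an item")
    return order

def apply_pricing(order: dict) -> dict:
    return {**order, "total": order.get("qty", 1) * order.get("price", 0)}

def persist(order: dict) -> dict:
    print(f"saved order: {order}")  # stand-in for a database write
    return order

process_order({"item": "book", "qty": 2, "price": 9.5})
```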
Is getting a refactoring tool an option for you? I'd be curious to see if a good one spots patterns.
@theseriouscto I don’t have any experience with refactoring tools, but if you have any recommendations let me know! I finished my internship at the company I was working at, and I’m not sure if it would be an option anyways, but would be nice to know.
@@owenm3112 What tech stack do you use?
Future proofing is important if you're making an API or a framework, etc.
Otherwise, get the job done, and do it with consistency in style and code patterns.
100%
Yup - you'll be preparing for anticipated changes that will never arrive, while the real changes will require a refactoring anyway.
Happy we're on the same page. Out of curiosity, what content would you suggest?
I love this. I spend lots of time ripping all this junk out to make legacy apps more maintainable.
For some reason I have the maid speech from Mr Incredible playing in my head after you just said that ;-)
This is a fantastic philosophy that im trying to encourage in our teams. It seems many engineers write complex abstractions for no other reason than enjoying writing code. 😂
Absolutely! It’s like some engineers think they’re composing Shakespeare when they’re really just writing a grocery list. Let’s aim for clarity over complexity!
I'll be pedantic and say that you're really complaining about premature abstraction. Premature abstraction is bad; future proofing is good. Premature abstraction is just future proofing gone wrong.
We are 100% on the same page, and that's the definition of Speculative Generality. I think the word Speculative gives it away ;-)
This should only be the Achilles' heel of the junior.
Can't agree more
But I've seen otherwise... occasionally
It never happens until it happens. Before avoiding it, you should take the real project context into consideration. The problem I see with these ideas is that no one suggests a realistic alternative. Yes, you can add noise and unnecessary complexity to the system, but not doing it can also add complexity and bottlenecks, and "you can refactor later" is, in most cases, a very big lie. Before taking any decision, you should first analyse the context.
The real context will draw a BIG LINE between speculation and the actual needs of the project.
Given that most software projects never get finished, I think it's important to always leave the fluff for later
Refactor later is not a lie, it's work
Get it working, get it working well, enhance it, repeat
Yeah. This is how I ended up building an SPA for a simple homepage.
Ah yes, the classic 'I built a spaceship to deliver pizza' moment. 😂 Single Page Applications for something as simple as a homepage is like bringing a bazooka to a snowball fight-it works, but at what cost?
Jokes aside, it’s a relatable trap. Sometimes, the allure of shiny tools or over-preparing for hypothetical future needs can lead to...overkill. At least you walked away with a lesson and, hey, probably a really snappy homepage! 😄
@@theseriouscto I realized my mistake far too late. By that time I was too entrenched in code to just throw it away.
@@LedoCool1 I feel for you - you still have to maintain it?
@@theseriouscto it's still up, but I'm not adding that many things to it, so I'm not upgrading anything.
Yeah, no. I tried the no-future-proofing thing. It only works on fire-and-forget projects; I suffered greatly from that ideology, and it forced me into a rewrite.
Here's an engineering lesson one Boeing failed to learn: if you plan on maintaining anything in the future, future-proof it. Otherwise it will lead to a janky retrofit, aka the MCAS.
Fair point! Future-proofing does have its place, especially for projects that need long-term maintenance or are core to your operations. It’s a fine line to walk, though-overdoing it can lead to overengineering, while skipping it entirely can turn into a painful rewrite (or, as you aptly pointed out, a 'janky retrofit' à la MCAS).
The trick, I think, is knowing what to future-proof and balancing it with simplicity. Focus on designing for change in the areas most likely to evolve, and leave the rest as lean as possible. Thanks for sharing the hard-earned lesson-it's always great to hear real-world examples!
How on Earth is it not a good thing to acknowledge that changes eat up the most time and are the most difficult part of the design process? How can you say it's wrong to anticipate that fact and design so things are flexible enough that changes won't be difficult or annoying? The next person who comes in is going to thank their lucky stars that the scaffolding they need to start making changes is already there. That's what future proofing is. You're not predicting the future; you're using your experience to know with absolute certainty that people are going to want to change things, that a lot of the changes are going to be stupid, and which aspects of the design need to be most flexible. Experience gives you the ability to know what the next person working on this thing will be asked to mess around with first, so they can come in and not have to start from scratch or decode something impossible to figure out. It's already been made flexible and object-oriented enough that if they don't like something, it's easy to remove it altogether and replace it with something else, because you've made sure everything is modular. Even if they keep nothing, they're still working within that modular framework, so the more people who work on it, the less has to be figured out all over again. And by the time it makes its way back around to you, it's not really an issue; it's already following the flexibility and workflow you provided, because you knew from experience that this is how it works.
Maybe design is just that different from code, but I can't imagine that coding to anticipate change is a bad thing. I just can't imagine it.
Thanks for commenting, I love being challenged.
Let's take an analogy: driving a car
What I'm basically saying is that future-building slows you down, and you should avoid it when you're speculating, so you can go faster.
That doesn't mean I'm saying drive at 180 km/h (or mph).
I'm saying that if you do it when you don't need it, it will take you longer to get to your destination (shipping the product).
If you know what the future holds then of course you should plan and put stuff for it, but only when you are absolutely sure. And even then for me that's a bit of a grey zone.
What if the product flops? Ships and then is abandoned? What does all this future-proofing give you? The product is dead...
@@theseriouscto Haven't you ever gone back into a dead project for that one amazing thing that absolutely fits the issue you are currently having?
Basically, what I mean is: future proofing is ensuring that not only you, but other people you may never actually meet, won't have to solve the problem from scratch a second time.
What you’re explaining is writing clear and clean code
Not implementing a whole scaffolding of abstractions or interfaces that are only used in one straight line to a single class
Different things, no?
Ok, catch me up. When you say abstractions, are we talking "cuil theory" (I swear it's a real thing), or, umm... a new standard that just rearranges how things are done to hopefully catch a new trend in workflow? Or something else entirely?
"one straight line to a single class" are you basically saying that this is stupid because it DOESN'T do any of the things I have been in support of so far because it assumes the issues of one designer... (programmer) are for some reason going to be the issues of every programmer? When obviously that isn't true. This isn't making it easier to grab a hammer it is basically taking up time making a super hammer mega right hander left hammer welder at the same timer... um, I don't know what they call those things for programmers, bug in a fab hard, they are just called the freedom to solve a problem in the least inefficient way, which is wait for no reason while a workflow must be adhered to catches up to the need for it in order to get the WF changed to include the exception that allowed for the creation of the tool (like it anticipates some engineering will need to be napkin, but tries to keep that amount to the things that don't affect the lives of people or the cost of assets - hence the modularity) but it doesn't go straight to the "Class" (which is I am guessing that really obscure answer) and then try to anticipate others like it, all that take actual engineering to create, while none get used again??
Did I at least get close? I am trying to do a better job of convincing people (the ones who protect the job tree from jobs being picked) that while I haven't done a lot of coding, I understand it as an undertaking more than I don't.
Because if I got that all wrong, I am sorry.
Listen to the greybeards.
Thank you for your comment! I completely agree-listening to the experienced voices in our community can guide us in the right direction.
Yeah, fat chance of convincing business you need to refactor the app.
Who said anything about convincing? Maybe it's not an option for everyone but if I believe that something makes no sense, even after I've been told to do it, I go back to my desk and keep working on the right priorities.
Suits don't know tech - if they won't agree to you refactoring the app, it's because they think your reasons aren't good enough. Ever read The Phoenix Project by Gene Kim?
I love saying: no, we will not support that now; we will add it when we need it.
Precisely: great, stable software, or buggy software that does nothing more but has twice the complexity.
Why are you attacking me? ^^ Seriously, one thing I want to add: sometimes this over-engineering already happens at the conception level, with bloated requirements that never really get used - because "we might need it later".
I think I'm attacking everyone ;-)
Couldn't agree with you more, hence the power of saying no - OR - priorities.
Suits want everything, but they have to prioritize, and you start with the most important thing.
Would a suit get married and fix issues with the bride later? Well, we don't write software like that either.
There is future-proofing, and then there is future-proofing. The right kind of future-proofing is: avoid third-party dependencies wherever possible, because they will be a maintenance nightmare in the future. Your own code is future-proof, because it's your own, you can fix it. You do not own dependencies and do not want to maintain them.
That’s a great point, and I 100% agree-avoiding unnecessary third-party dependencies is absolutely a form of smart future-proofing. The kind where you protect your codebase from external chaos, not where you overcomplicate your own.
But I’d argue this still ties into the core idea of speculative generality: adding dependencies you don’t truly need today is just another version of ‘what if we need this later?’ Instead of over-engineering with abstractions, you’re over-relying on tools you don’t own.
So yeah, future-proof by keeping control where it matters-your own codebase. Because you can fix your code; you can’t always fix that package that hasn’t been updated in two years. Thanks for bringing this up!
The only future proofing code needs is comments.
Lots and lots of comments. Ideally funny ones.
Funny comments? Absolutely-future devs deserve to laugh while crying over the code! Just remember, comments are like seasoning: too few, and it’s bland; too many, and you’ve turned your code into a novel. Aim for the sweet spot where they’re useful, hilarious, and don’t double as a stand-up routine.
@@theseriouscto I can only write comments and no code so my opinion doesn't really matter.
But as a problem solver, my view is that in the future I'm much more likely to go back to notes to see HOW I was thinking than WHAT I was thinking.
In other words, I learn from the process more than from the results.
Put another way: the way to future proof code is make it so you're better at coding after every project.
@ I have a great video coming up on a comment smell scheduled for Jan 5th
I think you will enjoy the value it will bring
That spoiler warning came too late; I couldn't click off the video. :(
So sorry about that.
Out of curiosity, what content would you suggest?
I tried not to click on this 😂
We's got da click precious, did we got da sub?
"The best code is the code that doesn't exist", are you paraphrasing Genrich Altshuller?
Not intentionally, but I’ll take it if it makes me sound smarter! 😄 The idea does align with Altshuller’s TRIZ philosophy: the best solution is one that eliminates the problem entirely. In coding, that often means solving the problem so elegantly-or questioning whether it needs solving at all-that you don’t end up writing unnecessary code. Truly a case of less is more!
@theseriouscto It's in my email signature: "the ideal system is when there is no system", along with Churchill's exhortation "no idea is so outlandish..." etc.
Absolutely...
okay daddy i will do as you say
I appreciate the trust! Just don't be surprised if I start charging for my "dad advice" soon! 😄
It is interesting: how many times in your life were you laid off or fired due to your views on coding? 😁
Hahaha! Fortunately, the number is holding steady at zero-and now that I’m retired, I’d say it’s a streak I don’t plan on breaking. 😉
But on a serious note, this isn’t about ignoring best practices; it’s about striking a balance and not letting over-engineering create unnecessary headaches. Clean, simple code is what keeps projects (and jobs!) safe.
Thanks for the laugh, though! 😅
Very concise
Thank you my friend
I'm guilty as charged 😂
May I suggest lowering the volume of the background music? (Or perhaps even removing it completely.) Less is more ;)
I sentence you to… listening to background music that’s too loud!
Good suggestion. The next two will be the same, but the ones after that will be less loud.