Hi Derek. Nice video, appreciate it. However, ISmsService in your example is pretty OK, since you can mock or fake it in tests. My only note is that I would call it ISmsSender. I hate the ...Service naming.
One drawback is that, by mocking/faking ISmsSender, you don't get to test the Twilio class or the library's integration. It's a similar problem to mocking database access at the repository level: your tests won't tell you whether the SQL your app sends to the database is faulty. Depending on what your app does, it can make your test strategy overcomplicated.
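A minimal sketch of the trade-off discussed above, in Python rather than C#: a thin sender protocol with a recording fake lets you test the notification logic, while the real provider call stays untested. All names here (`SmsSender`, `notify_order_shipped`, the fake) are hypothetical, not from the video.

```python
from typing import Protocol

class SmsSender(Protocol):
    def send(self, phone: str, body: str) -> None: ...

class FakeSmsSender:
    """Test double that records messages instead of calling a real provider."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, phone: str, body: str) -> None:
        self.sent.append((phone, body))

def notify_order_shipped(sender: SmsSender, phone: str, order_id: str) -> None:
    # Business logic under test; the real Twilio integration is never
    # exercised here, which is exactly the blind spot described above.
    sender.send(phone, f"Order {order_id} has shipped")

fake = FakeSmsSender()
notify_order_shipped(fake, "+15551234", "A-42")
assert fake.sent == [("+15551234", "Order A-42 has shipped")]
```

The fake verifies your code's behaviour; a separate integration test against the real provider is still needed to catch a faulty call, just as with SQL behind a mocked repository.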
Of course you need abstraction, because as a good developer you want to be able to unit test without heavy E2E. The YAGNI principle is often overrated and used as an excuse to go fast, lowering the quality and maintainability of the code. Abstraction has to be considered and, as always, you should think before coding. But at the same time, I totally agree that people overuse genericity for almost everything. It's the "one size fits all" syndrome. But that's more over-engineering than actual YAGNI, imho.
Hi Derek. I too follow, and try to teach, when and how to apply YAGNI with my teams and projects. However, I also use Ports and Adapters (aka Hexagonal), in which the ports are usually interfaces. Would you consider those interfaces to be unnecessary abstraction/indirection? I'm asking because I have been challenged on this perspective before. I usually counter that argument with the fact that these interfaces (i.e. the ports) are there to ensure that the core (whether you're using DDD or some other pattern is irrelevant) is agnostic to how it's driven and what it drives. But sometimes I feel like it too is generally unnecessary. What are your thoughts?
It depends on the level and degree of coupling between them. There isn't a clear cut answer because your tolerance to coupling is dependent on the context
When I started my job doing APIs, I tried to write a generic action, for example for deleting a database entity. Simple, right? Yeah... until different entities started to have their own validations. Say you want to delete an article category: first you have to fetch the category and check whether it is assigned to some article, by including its articles. So: no generic delete this time. In the end it may mean breaking the DRY rule in many places, but at the same time it's much easier to make changes.
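The breakdown described above can be sketched in a few lines. This is a toy in-memory model, not the commenter's ORM code; `delete_category` and the data shapes are hypothetical.

```python
# Hypothetical in-memory store standing in for the database.
articles = [{"id": 1, "category_id": 10}]
categories = {10: "News", 11: "Sports"}

def delete_entity_generic(store: dict, entity_id: int) -> None:
    # The "simple" generic delete: works until entities grow their own rules.
    store.pop(entity_id, None)

def delete_category(category_id: int) -> None:
    # The per-entity version the commenter ended up needing:
    # fetch first, validate it isn't referenced by any article, then delete.
    if any(a["category_id"] == category_id for a in articles):
        raise ValueError("Category is assigned to an article")
    categories.pop(category_id, None)

delete_category(11)          # fine: nothing references Sports
try:
    delete_category(10)      # blocked: article 1 uses News
except ValueError:
    pass
assert 10 in categories and 11 not in categories
```

Once one entity needs a referential check, the generic handler either grows flags and hooks or gets replaced by explicit per-entity handlers like this one.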
On the subject of DRY, I think people go wrong by assuming that it refers to duplicate code structure. In my understanding, DRY is about spotting duplicate behaviour inside the same context rather than duplicate code structure. Duplicate code structure can often be a temporary coincidence, especially early on in a codebase when not much of the uniqueness of different types or use cases has been captured, leading to artificial coupling of unrelated types that will cause problems down the line. If you apply DRY inappropriately, your code becomes flammable.
YAGNI implies that changes are easy and isolated: that the code written today has little significance for the code written tomorrow. That's all true for an experienced developer. Which leads me to a very general criticism: I see a lot of videos on YouTube about XP, CI, advanced architecture, and sophisticated tooling. What coaches regularly seem to miss is that most devs are not even remotely that good. It took me 20 years to get to a level where I can comfortably discuss architecture, and I had the luck of a project leader who put an emphasis on on-the-job training. But most teams have poorly trained devs mixed in, who often also perform a variety of very specialized tasks. At least in my experience, no team can afford the level of professionalism that would be necessary for advanced concepts.
I think this is one of the biggest problems of the industry. I had the good fortune to be starting as a junior developer just as a surge in interest in XP and agile (lower case a) swept across the city I worked in. And by that I mean a surge in interest from the dev community, not a surge in interest from management that their devs should be using this Agile (upper case A) thing that was going to make them more money without the business having to change its ways of working. I was very lucky to sit near people who understood these things and were willing to patiently explain them to a junior who thought that learning how to program ended at learning the syntax. And I got to be part of the transformation of a codebase that had no automated tests, no IoC, etc., and which was released once a month (and usually had to be rolled back), into a far better place where teams quite happily performed releases multiple times a day and had the confidence to change and improve the code base without fearing they were breaking... something. So I have a deep appreciation of the value and limitations of coding principles, automated testing, etc., and by that I mean I don't dogmatically apply this stuff whilst chanting "For the greater good!" But I use this stuff because I've seen the hellscape that can come from not using it. I feel we have a generation of developers now who are just copying what they see already in codebases with no real understanding of why. I guess I'm asking: how do you train a developer to produce habitable code and good architecture if they've never experienced what happens when you don't?
I always find it very difficult to get this right. Sometimes I've wished I had written the abstraction or genericised earlier, because the functionality I'm now writing would benefit massively and take less time. On the other hand, I've also spent days abstracting something for the possible 2nd and 3rd use cases the sales team are chirping on about, only to find they never come to fruition.
Hi Derek, I agree with you. However, I still haven't figured out how to write unit tests for some edge cases, like "Twilio returning 503", without creating an interface and mocking it. Of course, I could create a mock service and run some integration tests against it, but that is more costly. What's your opinion on that?
I get your point, but I completely disagree with your example. As somebody who has had to replace an attribution SDK 3 times, a mediator SDK 3 times, a push notification SDK 2 times, a crash reporting SDK 2 times, and a VCS system once, I have learned that it's critical to put a strong abstraction between your code and those SDKs. Not just the SDKs, but the aspect as a whole. And this abstraction needs to be so generic that its interface would fit any SDK. In practice this means you even have to split some SDKs across multiple aspects. But I can promise you, do it correctly and it pays off massively.
Very nice video. I agree with the first point (about creating unneeded features). However I don't have a definitive opinion about abstractions. Don't you think it's worth it for automated tests, for example?
There are other forms of abstraction you can use besides an interface; in my example I was also using a delegate as a dependency, which is simple to create a fake for. Also, as mentioned, what are you trying to test? If it was sending out an SMS, you want to verify you're actually hitting Twilio correctly; the point of the functionality was to send an SMS, not to verify you're calling an abstraction.
@@CodeOpinion got you. So, in your example the use case is very simple and small, and that's why it's not worth adding an abstraction, right? In the case of a more complex use case, like a PlaceOrder, if we had to send an SMS at the end of the process and we don't have an event-driven structure, do you think an abstraction would help?
@@betopolione.laura.gil.1 I see it like this: 1) Func/Action injection is a *single* function pointer with *unnamed* parameters 2) Named delegates is a *single* function pointer with *named* parameters 3) Interface is a *multiple* functions pointer with *named* parameters So you just pick whatever is right for your case at hand and start from the simplest option which is a Func/Action.
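The three-step ladder above (bare function pointer, named delegate, interface) translates roughly to Python as: bare callable, named `Callable` alias, `Protocol`. A sketch under that analogy; all names are illustrative, not from the thread's .NET code.

```python
from typing import Callable, Protocol

# 1) Bare callable: a single function with unnamed parameters (like Func<...>)
def send_with(sender: Callable[[str, str], None]) -> None:
    sender("+1555", "hi")

# 2) Named alias: still a single function, but the signature now has a name
SmsSend = Callable[[str, str], None]

# 3) Protocol: multiple named methods (the interface rung of the ladder)
class SmsGateway(Protocol):
    def send(self, phone: str, body: str) -> None: ...
    def status(self, message_id: str) -> str: ...

# The simplest rung is often enough: inject a lambda as the dependency.
calls: list[tuple[str, str]] = []
send_with(lambda phone, body: calls.append((phone, body)))
assert calls == [("+1555", "hi")]
```

Starting at rung 1 and only climbing when a second method or named contract is genuinely needed mirrors the "start from the simplest option" advice.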
@@betopolione.laura.gil.1 not necessarily. A common email/SMS abstraction makes sense if users can have email/SMS/both notifications depending on runtime context. Abstracting services allows you to decouple the event happening from the nature and count of the implementations used. You could also make an abstraction over the Twilio implementation if your organisation is in the process of changing SMS provider but you don't know the target system, or if it is common practice to change providers frequently. But if your app needs to send email or SMS notifications on events in a compile-time deterministic manner, don't bother with the overhead; simply reference your service implementation. In .NET it's not very difficult to extract an interface and swap the reference later, *IF AND WHEN* needed.
I can't quite grasp how this goes together with the open/closed principle of SOLID. What I'm usually trying to do is leave open all the opportunities for extension, which is kind of counteracted by YAGNI. And in cases when you end up actually needing it after all, upgrading the original code usually turns ugly. Thoughts?
Upgrading ugly code is less ugly than trying to implement something for a future condition you don't yet know. You ALWAYS get the abstraction wrong in some way, and then either have to change the abstraction anyway, or you try to build around the bad abstraction to fit your new use case, again causing issues. Better off writing the abstraction when you know what you need, not before.
I have a multi-module java project where I split things in a way where my domain doesn't depend on anything. That said, all the business logic including sending out a notification for some event, is typically part of my domain. So for something like using Twilio to send out an SMS notification, I usually do exactly the thing you gave as an example of what not to do. I define the interface for TextMessageSender in my domain, and I implement it in my infrastructure layer. Am I being silly to think that this is actually a good thing to do? I'm not exactly doing it to accommodate for future use cases, but my code looks almost 1:1 like the first example you gave.
Also, I should add that I do a similar thing for things like Amazon SQS, or services that would make local development (especially offline) harder. Why not have a simple interface that you can replace with a mock or in-memory queue or whatever? Again, I'm not trying to abstract things away for the purpose of predicting future use cases, but in general it still feels like good design to me to at least have something like Twilio or SQS behind an interface.
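The in-memory-queue idea above can be sketched with a two-method protocol. This is a hypothetical Python analogue, not real SQS client code; a production adapter would wrap the actual SQS API behind the same two methods.

```python
from typing import Optional, Protocol

class MessageQueue(Protocol):
    def publish(self, body: str) -> None: ...
    def receive(self) -> Optional[str]: ...

class InMemoryQueue:
    """Drop-in stand-in for SQS during offline local development."""
    def __init__(self) -> None:
        self._items: list[str] = []

    def publish(self, body: str) -> None:
        self._items.append(body)

    def receive(self) -> Optional[str]:
        return self._items.pop(0) if self._items else None

# Production would supply an SqsQueue with the same two methods;
# the rest of the app only ever sees MessageQueue.
q: MessageQueue = InMemoryQueue()
q.publish("order-created")
assert q.receive() == "order-created"
assert q.receive() is None
```

The interface here earns its keep not by predicting a provider swap but by making offline development and testing cheap, which matches the commenter's motivation.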
Better to make a generic notification JSON and have the receiving code pattern-match on the data structure, so that whatever understands the specific fields does the right thing.
@@iantabron not string pattern matching, but pattern matching of data structures. Do you know what I mean by pattern matching as a feature of the language? Not regex. There are some ECMA proposals to add it to JavaScript, but it currently exists in other languages, for instance C#, F#, Scala, OCaml, Haskell, Rust, and so on.
@@supercompooper yes, I understand the feature of the language you are referring to. I'm referring to your example of using it with a JSON object to determine which type to deserialize to. How does pattern matching do this?
I've definitely fallen into the YAGNI trap too many times. Even if you know for certain there will be multiple versions of something similar, that doesn't mean you're going to derive the right abstraction. I've learned to build the thing 1 or 2 times, then refactor out the common abstraction between them and build the 3rd or 4th+ from there. A few months ago I did some thread executor work where I wanted practically the same threading logic but a different execution sequence. So after I built it once, I refactored out the generic pieces, injected the executor impl into an ExecutionManager with a couple of other properties, and all I ever needed to write was a single execution handler, and not any of the threading manager for concurrent executions (minus a small bootstrapping class/Spring bean to inject a non-managed executor impl).
Hi. Maybe you're right... maybe not. A little abstraction doesn't hurt much and 'can' (yes... if... if...) help a lot in the future. I remember a project where we had to change our Excel export code in hundreds of files because we didn't abstract it. You can say "oh, when you detect the code is used a lot, then abstract", but you don't always detect this: many developers, a lot of changes in the code, a lot of time between changes. I really think interface abstractions don't hurt much and the benefits 'can' (if... if...) help a lot. Thanks for the comments.
My random and rather unorganized thoughts on abstractions, generic code and frameworks:
1. Don't write them, don't plan them, don't design them. They should be GATHERED, from existing code that ran in production. You have 5 places with similar or identical code? You can gather them into a generic solution once you've had a few situations where they had to change at the same time for the same reason. Mostly, as it turns out, this is not the case, and you will find that 2 of the 5 change for different reasons. If they do change for the same reason, then maybe there is some business concept you are missing.
2. In my eyes the most important thing about good code is that it should be intention-revealing. If your abstraction or generic solution hides the business intent, I am pretty sure it does more harm than good. The most important measure of good code is that you can reason about the business problem it solves in plain and simple English. That's hard to do when you are 12 abstractions deep with 4+ generic parameters being passed around. I avoided the word "clean" on purpose.
3. The number 1 killer of software is the continuous increase of complexity, one step at a time. Death by a thousand papercuts. We've said it a thousand times that our industry is a trade-off game. If the trade-off price you pay is increased complexity, the price might be too high to be justified.
Exactly. Don't be DRY, be WET (Write Everything Twice). And then when that 3rd implementation comes, you have a way better idea of what kind of abstraction you need, instead of being coupled to some gross god class that does everything.
All I know is that anytime I write generic code it's never used and anytime I don't, it ends up getting copy/pasted/modified 10 times across the code base. Probably not true but it's what it feels like.
I did a video about DRY recently; whether you should re-use code and ultimately couple to it really depends on what the code relates to. th-cam.com/video/znpdlYgvU3M/w-d-xo.html
I agree wholeheartedly about YAGNI and useless/misleading abstraction. At first, features are so similar you are tempted to factor out the code; then requirements specific to each case are added and the abstraction leaks/breaks. Junior devs add ifs galore just to keep the abstraction alive, and you're in deep trouble. I did not do it, I inherited it. It's better to let the system mature, then unearth useful abstractions, IF any ever appear. I had to de-tangle a "generic" implementation of "similar" "products". When more and more details were added, it became clear all the "similar" products had their own plot twists. I made one "service" per product and the code got all flat, down to the deepest helper methods. No more pesky ifs, switches, or glut of parameters. All methods are named per the lingo used by the business. The good life.
What comes to my mind a lot is that we developers love writing nice code, code that makes us feel good, and I think this might be one reason why we tend to make it so generic. It's just satisfying. However, I also like to remind myself of a tip I heard that was targeted at book authors: "murder your darlings". Get rid of the beautiful sentence you wrote, your darling, if it doesn't add any value to the story or its readability. For me it's the same for us developers, but in terms of code. Oftentimes I have found myself writing a piece of code that looks or feels so elegant, only to realize that there is a much simpler way. And more often than not, it is also a less generic way. In a sense, also YAGNI.
Great example, never heard that reference before.
I fall into this trap a lot as well. Python is notorious for letting you write a beautiful, complex one-liner or decorated, context-managed generator that looks sleek and flows perfectly in the editor. Commit and feel like a king. Then come back literally two days later completely confused about what is going on, unable to debug through the layers and nesting, and proceed to break it apart into saner pieces of functional code while weeping tears of bittersweet sadness for your lost love.
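A tiny made-up example of the pattern described above: the same logic as a dense one-liner and as the boring, debuggable version you actually want to find two days later.

```python
orders = [{"id": 1, "total": 50}, {"id": 2, "total": 150}, {"id": 3, "total": 200}]

# The "commit and feel like a king" version: correct, but dense.
big_ids = [o["id"] for o in sorted((o for o in orders if o["total"] > 100),
                                   key=lambda o: -o["total"])]

# The version you can still step through two days later.
def ids_of_large_orders(orders, threshold=100):
    large = [o for o in orders if o["total"] > threshold]
    large.sort(key=lambda o: o["total"], reverse=True)
    return [o["id"] for o in large]

assert big_ids == ids_of_large_orders(orders) == [3, 2]
```

Both compute the same thing; only the second one gives you named intermediates to inspect in a debugger.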
"It's gross and you're not gonna need it" sums it very well.
I agree completely, when the premise is that you’re creating the abstraction to prevent some future pain of migration (because that migration almost never happens in practice, and even in those rare cases when it does, changing the code is almost never the bottleneck/issue). However, I often see the premise for creating these abstractions as being a way to buffer an external dependency, define the failure modes in the domain, bake in resilience protocols (failover, retry with backoff, basically anything you’d use Polly for), and being able to mock/stub the external dependency to test those mechanisms.
Knowing that, I’d say that the answer isn’t really “You ain’t gonna need it” it’s more like “You ain’t done designing this integration yet.”
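The "bake in resilience protocols" point above (the kind of thing Polly does in .NET) can be sketched as a retry-with-exponential-backoff wrapper around a flaky external call. This is an illustrative Python sketch, not Polly itself; the names and the simulated 503 are invented.

```python
import time

def with_retry(operation, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Call operation; on ConnectionError, back off 1x, 2x, 4x... the base delay."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_send():
    # Simulates a provider that fails twice (e.g. returning 503), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("provider returned 503")
    return "sent"

# sleep injected as a no-op so the sketch runs instantly
assert with_retry(flaky_send, sleep=lambda _: None) == "sent"
assert calls["n"] == 3
```

Putting this policy at the boundary is the "buffering the dependency" design the comment describes: the domain sees a sender that either succeeds or fails definitively, with the retry mechanics testable in isolation.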
Ya, that's a very good point. It's also worth pointing out that it depends on the degree of coupling you have.
Even if you don't necessarily have to abstract or generalize in advance, I think it's worth at least thinking about it first. Because if you take a few steps back, you often understand better what problem you are actually trying to solve. This can also lead to better code, better structure and better documentation.
Agree
Couldn’t agree more! Context is always key but Yagni is still a great starting point.
I would go one step further with your NotificationService and create a factory based on the NotificationType. Is it an SMS? Generate an SmsService. Will there be an Email or PushNotification in the future? Simply extend the enum and add a different implementation for the factory to return. Of course, this is based just on your example; it depends on the use case. If one wants to do abstractions, one would need to create them in such a way that they will not affect the current code and allow extension over modification, similar to what you mentioned.
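The enum-driven factory proposed above might look like this in Python. All class and enum names are illustrative stand-ins for the video's NotificationService example.

```python
from enum import Enum, auto

class NotificationType(Enum):
    SMS = auto()
    EMAIL = auto()   # extend the enum, register a sender below, done

class SmsSender:
    def send(self, to: str, body: str) -> str:
        return f"sms:{to}:{body}"

class EmailSender:
    def send(self, to: str, body: str) -> str:
        return f"email:{to}:{body}"

_SENDERS = {NotificationType.SMS: SmsSender, NotificationType.EMAIL: EmailSender}

def sender_for(kind: NotificationType):
    # Open for extension (add a mapping entry), closed for modification
    # of the calling code: callers never switch on the type themselves.
    return _SENDERS[kind]()

assert sender_for(NotificationType.SMS).send("+1555", "hi") == "sms:+1555:hi"
assert sender_for(NotificationType.EMAIL).send("a@b.c", "hi") == "email:a@b.c:hi"
```

Whether this factory is worth it up front is exactly the YAGNI question of the video; with a single channel, directly referencing SmsSender is simpler.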
The problem is that when you do not write the code in a more generic way, you are seen as not as good or not as seasoned a developer. I hate this need for generics everywhere.
Yikes, that's unfortunate if that's been your experience. Clearly not in my eyes.
mmm, that's concerning.
I'm a senior.
I love flat code, as few ifs as possible, no else.
If a junior writes a switch, they better know why. Nah, I pair program and we go on a switch hunt ;-)
In Agile, more often than not, PMs fill the sprint to the brim with features, so it's pretty obvious why devs are so tempted to "pre-abstract": they know for a fact they won't be allowed to refactor later. Every time a dev wants to add a refactoring task to the sprint, it's knocked back with things like "done is done, move on", "if it ain't broken...", "the business wants features, not more costs, QA time, risks", etc. It's bad, but many companies still roll like that. I'm not saying devs should "pre-abstract", only pointing at a possible, bad, reason why they do it.
Interesting reasoning/insights
This is great when you’re working on personal projects. However, in the real world, the best time to invest in architecture is at the beginning of the project, as fixing the architecture later can become very expensive. Furthermore, most of the investment in time and resources happens at the start, and later on, more people can benefit from the architecture by implementing features and reusing what has already been done. The "we'll do it later" mindset often doesn't materialize.
It's not about "do it later"; it's about doing what is required today without handcuffing yourself later. Handcuffing yourself can also mean creating useless indirection that hampers future development. I'm not suggesting a free-for-all; rather, what I'm really describing is: pay attention to coupling, as it's often the root of what makes changes difficult.
You can't foresee the needs. The BAs, the business, the customers don't know the features they want or need. You've got to get a few features done before refactoring. The reasons why "do it later" does not materialize are: 1) business people are impatient, pushy, and mighty; 2) devs can't explain convincingly enough 2.1) why "done" needs to be redone (yes, several times) and 2.2) why iterative exploration beats planning any day.
Your code base can be a whole lot simpler IF it's only used internally by you. In other words, if you are building a public API, SDKs or services, you have to be a lot more particular about edge cases etc., and abstractions may be needed for different scenarios.
I think there is nothing wrong with always abstracting away technologies. That's not what causes the complexity when we talk about YAGNI, abstractions and generic code. Nobody is complaining about the standard 3 or 4 layers. The problem is these solutions with 10-plus layers and code so generic it has no meaning, e.g. heavy use of reflection, rules engines, etc. If you can't easily see what the code is meant to do by looking in one place, it's difficult; e.g. a domain class should tell you what the domain does and, preferably IMO, what it is capable of.
I wouldn't even do the in process event until it's needed. MediatR is really overrated, for example. It can make it hard to follow the logic in the code. If you really need something like that later then add it later, but in my experience you ain't gonna need it.
If I wanted to return a response immediately before the operation is completed then I would use a more durable message broker or event topic, rather than doing it in-process.
IMHO EDA makes more sense for microservices where the cost of changing the API of the service is high, and you can get loose coupling between teams etc.
I think you should almost always have at least two different uses of the abstraction before you design it. Otherwise you just end up modifying the abstraction when a new use case actually pops up.
Another good video on this topic is the talk The Wet Codebase from Dan Abramov.
Correct. Unless you already have experience around the abstraction you're trying to create and have an idea of what multiple implementations look like, you'll derive your abstraction around the sole implementation you know. Worth noting that that can be absolutely fine if you're trying to simplify the API for the single implementation.
Hi Derek. Nice video, appreciate it. However, ISmsService in your example is pretty OK, so you can mock or fake it in tests. My only note is that I would call it ISmsSender. I hate the ...Service thing.
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
@@maushax Absolutely agree
I mock (pun intended) the usage of "service" all the time so hopefully people didn't think I would actually name it that.
@@CodeOpinion Yep. That Service suffix is absolutely the most awful thing. Don't know how to name your class? Name it Service))
One drawback is that, by mocking/faking ISmsSender, you don't get to test the Twilio class or the lib's integration. It's a similar problem to mocking database access at the repository level: your tests won't tell you whether the SQL your app sends to the database is faulty or not. Depending on what your app does, it can make your test strategy overcomplicated.
Of course you need abstraction because as a good developer, you want to be able to unit test without doing heavy E2E....
So often the YAGNI principle is overrated and just used as an excuse to (sometimes) go fast and lower the quality of the code and its maintainability.
Abstraction has to be considered, and, as always; think before coding.
But at the same time, I totally agree that people overused genericity for almost everything. It's the "one size fits all" syndrome. But it's more over-engineering than actual YAGNI; imho.
Hi Derek. I too follow and try to teach about when and how to apply YAGNI with my teams and projects. However, also use Ports and Adapters (aka Hexagonal) wherein which the ports are usually interfaces. Would you consider those interfaces to be unnecessary abstraction/indirection?
I’m asking because I have been challenged with this perspective before. I usually counter that argument with the fact that these interfaces (ie the ports) are there to ensure that the core (whether you’re using DDD or some other pattern is irrelevant) is agnostic to how it’s driven and what it drives. But sometimes I feel like it too is generally unnecessary.
What are your thoughts?
It depends on the level and degree of coupling between them. There isn't a clear cut answer because your tolerance to coupling is dependent on the context
When I started my job, doing APIs, I tried to write a generic action, for example for deleting a database entity. Simple, right?
Yeah... until different entities started to have their own validations. Let's say you want to delete an article category: first you have to fetch the category and check that it isn't assigned to some article, by including the articles. So, no generic this time.
In the end it maybe means breaking the DRY rule in places, but at the same time it's much easier to make changes.
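The scenario above can be sketched roughly like this (an illustrative TypeScript sketch; the entity shapes and function names are made up for the example):

```typescript
// Hypothetical sketch: a generic delete works until one entity type
// needs its own pre-delete validation.
type Article = { id: number; categoryId: number };
type Category = { id: number; name: string };

// The "generic" action: delete any entity by id, no entity-specific rules.
function deleteById<T extends { id: number }>(store: Map<number, T>, id: number): boolean {
  return store.delete(id);
}

// The category-specific version the comment describes: refuse to delete
// a category that is still assigned to an article.
function deleteCategory(
  categories: Map<number, Category>,
  articles: Article[],
  id: number
): { ok: boolean; reason?: string } {
  if (articles.some(a => a.categoryId === id)) {
    return { ok: false, reason: "category is still assigned to an article" };
  }
  return { ok: categories.delete(id) };
}
```

Once one entity needs its own rule like this, the generic handler either grows hooks and flags for every special case or gets bypassed, which is the point being made.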
On the subject of DRY, I think people go wrong by assuming that it refers to duplicate code structure. In my understanding, DRY is about spotting duplicate behaviour inside the same context rather than duplicate code structure. Duplicate code structure can often be a temporary coincidence, especially early on in a code base when not much of the uniqueness of different types or use cases has been captured, leading to artificial coupling of unrelated types that will cause problems down the line. If you apply DRY inappropriately, your code becomes flammable.
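A tiny sketch of that point (hypothetical names and rates): two functions that share structure today but change for different business reasons, so merging them would couple unrelated rules:

```typescript
// Hypothetical sketch: identical *structure*, different *reasons to change*.
// Folding both into one generic applyRate(net, 1.2) would couple tax law
// to carrier pricing, even though they only coincide today.
function addSalesTax(net: number): number {
  return net * 1.2; // changes when tax law changes
}

function addShippingSurcharge(net: number): number {
  return net * 1.2; // changes when the carrier renegotiates
}
```

The duplication here is coincidence, not shared behaviour; when one rate changes, deduplicated code would force an awkward un-merge.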
Yagni implies that changes are easy and isolated, that the code written today has little significance for the code written tomorrow.
That's all true for an experienced developer.
Which leads me to a very general criticism: I see a lot of videos on yt about XP, CI, advanced architecture, sophisticated tooling.
What coaches regularly seem to miss: most devs are not even remotely that good. It took me 20 years to get to a level where I can comfortably discuss architecture, and I had the luck of a project leader who put an emphasis on in-work training. But most teams have poorly trained devs mixed in, that often also perform a variety of very specialized tasks.
At least in my experience, no team can afford the level of professionalism that would be necessary for advanced concepts.
I think this is one of the biggest problems of the industry. I had the good fortune to be starting as a junior developer just as a surge in interest in XP and agile (lower case a) swept across the city I worked in. And by that I mean a surge in interest from the dev community, not a surge in interest from management that their devs should be using this Agile (upper case A) thing that was going to make them more money without the business having to change their ways of working.
I was very lucky to be sat near people who understood these things and were willing to patiently explain them to a junior who thought that learning how to program ended at learning the syntax. And I got to be part of the transformation of a code base that had no automated tests, no IoC etc., and which was released once a month (and usually had to be rolled back), into a far better place where teams quite happily managed to perform releases multiple times a day and had the confidence to change and improve the code base without fearing they were breaking... something.
So I have a deep appreciation of the value and limitations of the coding principles, automated testing etc. and by that I mean I don't dogmatically apply this stuff whilst chanting "For the greater good!"
But... I'm using this stuff because I've seen the hellscape that can come from not using it. I feel we have a generation of developers now who are just copying what they see already in code bases with no real understanding of why.
I guess I'm asking: how do you train a developer to produce habitable code and good architecture if they've never experienced what happens when you don't?
I always find it very difficult to ever get this right. Sometimes I've wished I had written the abstraction or genericised earlier, because the functionality I'm now writing would benefit massively and take less time. On the other hand, I've also spent days abstracting something for the possible 2nd and 3rd use cases the sales team are chirping on about, only to find they never come to fruition.
Hi Derek, I agree with you. However, I still haven't figured out how to write unit tests for some edge cases like "Twilio returning 503" without creating an interface and mocking it. Of course, I could create a mock service and run some integration tests against it, but that is more costly. What's your opinion on that?
Good use case for wrapping it in something meaningful that you can test.
I get your point, but I completely disagree with your example.
As somebody who has had to replace the attribution SDK 3 times, a mediator SDK 3 times, the push notification SDK 2 times, the crash reporting SDK 2 times, and the VCS system once, I have learned that it's critical to put a strong abstraction between your code and those SDKs. Not just the SDKs, but the aspect as a whole. And this abstraction needs to be so generic that its interface would fit any SDK. In practice this means you even have to split some SDKs across multiple aspects.
But I can promise you, do it correct and it pays off massively.
Very nice video. I agree with the first point (about creating unneeded features). However I don't have a definitive opinion about abstractions. Don't you think it's worth it for automated tests, for example?
There are other forms of abstraction you can use besides an interface; in my example I was also using a delegate as a dependency, which is simple to create a fake for. Also, as mentioned, what are you trying to test? If it was sending out an SMS, you want to verify you're actually hitting Twilio correctly; the point of the functionality was to send an SMS, not to verify you're calling an abstraction.
@@CodeOpinion got you. So in your example the use case is very simple and small, and that's why it's not worth adding an abstraction, right? In the case of a more complex use case, like a PlaceOrder, if we had to send an SMS at the end of the process and we don't have an event-driven structure, do you think an abstraction would help?
@@betopolione.laura.gil.1 I see it like this:
1) Func/Action injection is a *single* function pointer with *unnamed* parameters
2) Named delegates is a *single* function pointer with *named* parameters
3) Interface is *multiple* function pointers with *named* parameters
So you just pick whatever is right for your case at hand and start from the simplest option which is a Func/Action.
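For readers outside .NET, a rough TypeScript analogue of those three options might look like this (all names are illustrative; a bare function type stands in for Func/Action, a named type alias for a named delegate):

```typescript
// 1) Bare function type: one function, intent not obvious from the shape.
type Handler = (a: string, b: string) => string;

// 2) Named alias with descriptive parameter names: still one function,
//    but the signature now reveals intent.
type SendSms = (phoneNumber: string, message: string) => string;

// 3) Interface: several related operations behind one contract.
interface SmsGateway {
  send(phoneNumber: string, message: string): string;
  status(messageId: string): "queued" | "sent";
}

// A use case that only needs to send can depend on the narrowest shape:
function notifyUser(send: SendSms, phone: string): string {
  return send(phone, "Your order shipped");
}
```

Starting with the narrowest dependency (option 1 or 2) and widening to an interface only when multiple operations genuinely belong together matches the "start from the simplest option" advice.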
@@betopolione.laura.gil.1 not necessarily. A common email/sms abstraction makes sense if users can have email/sms/both notifications depending on runtime context. Abstracting the services decouples the event happening from the nature and count of the implementations used. You could also make an abstraction over the Twilio implementation if your organisation is in the process of changing SMS provider but you don't know the target system, or if it is common practice to change providers frequently.
But if your app needs to send email or SMS notifications on events in a compile-time deterministic manner, don't bother with the overhead; simply reference your service implementation. In dotnet it's not very difficult to extract an interface and swap the reference later, *IF AND WHEN* needed.
Usually when I'm writing an interface for tests, it's to avoid sending an SMS while running the tests, not to "test the SMS service".
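A minimal sketch of that idea (hypothetical names, not the real Twilio SDK): the interface exists so tests can swap in a recording fake instead of actually sending anything:

```typescript
// Hypothetical sketch: the interface exists so tests can substitute a fake
// that records messages instead of hitting a real provider.
interface SmsSender {
  send(to: string, body: string): void;
}

class RecordingSmsSender implements SmsSender {
  sent: Array<{ to: string; body: string }> = [];
  send(to: string, body: string): void {
    this.sent.push({ to, body }); // record, don't send
  }
}

function sendOrderConfirmation(sms: SmsSender, phone: string, orderId: string): void {
  sms.send(phone, `Order ${orderId} confirmed`);
}
```

The test then asserts on what was recorded, which keeps the test fast and offline while the real provider integration is covered separately.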
I can't quite grasp how this goes together with the open/closed principle of SOLID. What I usually try to do is leave all the opportunities for extension open, which is kind of counteracted by YAGNI. And in cases when you end up actually needing it after all, upgrading the original code usually turns ugly. Thoughts?
Upgrading ugly code is less ugly than trying to implement something for a future condition you don't yet know. You ALWAYS get the abstraction wrong in some way, and then you either have to change the abstraction anyway, or you try to build around the bad abstraction to fit your new use case, again causing issues. Better to write the abstraction when you know what you need, not before.
I have a multi-module java project where I split things in a way where my domain doesn't depend on anything. That said, all the business logic including sending out a notification for some event, is typically part of my domain. So for something like using Twilio to send out an SMS notification, I usually do exactly the thing you gave as an example of what not to do. I define the interface for TextMessageSender in my domain, and I implement it in my infrastructure layer. Am I being silly to think that this is actually a good thing to do? I'm not exactly doing it to accommodate for future use cases, but my code looks almost 1:1 like the first example you gave.
Also, I should add that I do a similar thing for things like Amazon SQS or services that would make local development (especially offline) harder. Why not have a simple interface that you can replace with a mock or in-memory queue or whatever? Again, I'm not trying to abstract away for the purpose of predicting future use cases, but in general it still feels like good design to me to at least have something like Twilio and SQS behind an interface.
This is an awesome video
Interface for only one implementation and only one method, o sheet
Better to make a generic notification JSON and pattern match on the notification data structure, so whatever understands the specific fields can do the right thing.
Ewww. You would rather "pattern match" (read: do string comparison) than define explicit functionality? Crazy sauce
@@iantabron Not string pattern matching, but pattern matching on data structures. Do you know what I mean by pattern matching as a feature of the language? Not regex. There are some ECMA proposals to add it to JavaScript, but currently it exists in other languages, for instance C#, F#, Scala, OCaml, Haskell, Rust and so on.
@@supercompooper yes, I understand the feature of the language you are referring to. I'm referring to your example of using it with a JSON object to determine which type to deserialize to. How does pattern matching do this?
@@iantabron It's based on the presence of certain keys in the data structure.
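A small illustrative sketch of that idea in TypeScript (hypothetical shapes; not one of the pattern-matching languages named above, but the `in` operator narrows a union based on which keys are present, which is the mechanism being described):

```typescript
// Hypothetical notification shapes; routing is decided by which keys exist.
type SmsNotification = { phoneNumber: string; message: string };
type EmailNotification = { emailAddress: string; subject: string; body: string };
type AppNotification = SmsNotification | EmailNotification;

function route(n: AppNotification): string {
  if ("phoneNumber" in n) {
    return `sms to ${n.phoneNumber}`; // narrowed to SmsNotification
  }
  return `email to ${n.emailAddress}`; // narrowed to EmailNotification
}
```

Languages with first-class pattern matching (C#, Scala, Rust, etc.) express the same dispatch more directly, with the compiler checking that every shape is handled.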
Goddamn your videos are always poignant
That's not good