The problem is that people create abstractions for the wrong reasons, and the overuse of this approach leads a lot of other people to consider abstractions worthless. This couldn't be further from the truth. There's a whole thesis that could be written about why and what should be abstracted, but to keep it short, let's start with how not to create an abstraction. If you create an implementation first and then wrap an interface around that implementation, you haven't actually abstracted anything. This is clearly the wrong way of creating an abstraction, and its uses are very limited indeed.

However, if I create an abstraction around the requirements for a subset of use cases, with the aim of formally specifying those needs first, and then later provide a concrete implementation that realizes that abstraction, I've done a better job of creating an actual abstraction. This sort of abstraction is much more useful to me and will pay dividends in the long run. It also provides immediate benefits: it helps me understand more precisely the requirements for which I'm creating an implementation (which is not too far off from what test-driven development is all about), and because I'm using the language of the domain (since I'm specifying requirements, after all) as opposed to implementation-specific language (e.g. tables, DbSet, foreign keys), I'm going to write better, intention-revealing code. In doing so, I describe the abstract behavior that I rely upon, agnostic of the low-level implementation details. In other words, I've created an actual abstraction.
Having said this, though, I have to point out that abstractions are tactical decisions and as such cannot protect us from the wide-reaching consequences of strategic changes. As you show, if we change from relational storage to event storage mid-project, there will be far-reaching, behavior- and expectation-breaking consequences that no amount of abstraction can shield us from. That is not why we create abstractions.
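A minimal sketch of the contrast drawn above, in C# since the thread revolves around EF; all names here are hypothetical and invented for illustration, not taken from the video:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Extracted from an existing EF implementation after the fact: it leaks
// persistence vocabulary (queryables, change tracking) and abstracts nothing.
public interface IOrderData
{
    IQueryable<Order> Orders { get; }
    Task SaveChangesAsync(CancellationToken ct = default);
}

// Specified from the use case first, in the language of the domain.
// Any implementation (EF, Dapper, an event store) realizes it later.
public interface IOrderFulfillment
{
    Task<Order?> FindPendingOrder(Guid orderId, CancellationToken ct = default);
    Task MarkAsShipped(Guid orderId, string trackingNumber, CancellationToken ct = default);
}

public class Order
{
    public Guid Id { get; set; }
    public bool Shipped { get; set; }
}
```

The first interface forces every caller to think in persistence terms; the second states what the use case needs and leaves the rest to whichever implementation arrives later.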
Agree. Thanks for the comment. At the very beginning, they are about simplifying the concept and API surface for our need/focus. I didn't push hard enough on the point that creating an abstraction after the fact, or for a single implementation, can be a net negative. But the message got across, because your comment covers all of what I was trying to convey!
His problem is that he is too focused on the technical aspects, instead of doing design based on the use case (e.g. the obsessive rationale and examples around Entity Framework).
Better: for CRUD, use other means to automatically generate the code necessary to do those operations, from the frontend almost straight to the database, so that no CRUD appears in your pure domain model as pure fabrications (think GRASP). Then the model really is domain oriented.
But yeah, the CodeOpinion guy doesn't seem to get the deeper connections. He still gets a lot, though, enough to attract mid-level coders and architects.
The examples of Entity Framework are because they are in a sample application that's on GitHub, and most developers in the .NET space know it. Feel free to point me to another sample app, and I'll use that next time. I'm genuinely curious if you've watched my videos? I kind of find the idea that I focus on technical aspects pretty hilarious, given that a good chunk of my videos talk about focusing on business processes, workflows, business concepts, language, and boundaries. Either way, thanks for the comment.
What about the SOLID principles, that is, depending on an abstraction instead of on something concrete?
@@thedacian123 Yeah, that too. The idea that you never have more than one implementation of the storage adapter is also wrong.
Any clean architecture has at least two: the production implementation and the testing implementation, made of test doubles (yes, not just mocks; frankly, avoid mocks)
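A sketch of what that second implementation can look like, assuming a hypothetical `IInventory` port (names invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The port. The production adapter would talk to the real store.
public interface IInventory
{
    Task<int> QuantityOnHand(string sku);
    Task Reserve(string sku, int quantity);
}

// The testing implementation: a simple in-memory fake, not a mock.
// It behaves like the real thing from the application's point of view.
public sealed class InMemoryInventory : IInventory
{
    private readonly Dictionary<string, int> _stock = new();

    public InMemoryInventory(params (string Sku, int Qty)[] seed)
    {
        foreach (var (sku, qty) in seed) _stock[sku] = qty;
    }

    public Task<int> QuantityOnHand(string sku) =>
        Task.FromResult(_stock.GetValueOrDefault(sku));

    public Task Reserve(string sku, int quantity)
    {
        if (_stock.GetValueOrDefault(sku) < quantity)
            throw new InvalidOperationException($"Not enough {sku} on hand.");
        _stock[sku] -= quantity;
        return Task.CompletedTask;
    }
}
```

Because the fake enforces the same rules as production, tests exercise behavior rather than verifying call sequences the way mocks do.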
Great video - absolutely agree. I would like to add one more good case (in my opinion) for abstraction - and that’s for unit testing - or even integration testing if you wanted to mock infrastructure.
Abstractions are often misunderstood. Often we hear devs talk in terms of language features and syntax, but what matters are the actual patterns, not the fact that you are creating a C# interface as an abstraction for a class. These are just tools of the language that take you part of the way; to realize the full potential you have to understand software architecture, design, and modeling, whether as classes composed into aggregate types or as events in an event-driven system.
I think that event-stream and state-based persistence would be two different ports (in ports & adapters). They are conceptually different enough that they would affect the design of the application anyway. Some parts of the app will deal with events, others will deal with state. There may be a "StateProvider" that aggregates/projects a state from the event stream, but that is just "code doing data things". :shrug:
"They are enough conceptually different that they would affect the design of the application anyway". Exactly. However, the common argument would be to toss persistence behind an abstraction, and my point was that you'll still be constrained to the same overall model of your abstraction. That's fine, but so as long as people realize that when they create abstractions.
Hi man, you're absolutely right!
In many cases, the abstraction is tied to the implementation to some degree, especially for complex interactions (with different paradigms) like persistence.
Although it's nice to have an interface that normalizes interaction/access between layers, for logging and mocking.
Yes, it will be tied, which is fine so long as you realize that.
Even something as simple as `void doSomething()` implies that it has *some* side effect and that the next line of code can observe the effect. It's as much of a contract as the actual code.
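To make that observation concrete, a small illustration with hypothetical signatures; the return type alone already communicates part of the contract:

```csharp
public interface IExample
{
    // Returning nothing implies a side effect somewhere: why else call it?
    // Callers will assume the effect is observable on the next line.
    void DoSomething();

    // Returning a value implies the result is the point, and callers may
    // assume no hidden effects beyond producing it.
    int Compute(int input);
}
```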
I feel that designing abstractions to enable testing is often a good approach. For moving from SQLite to Postgres? Not so much.
You forgot the biggest reason for the abstraction: "So we can swap out EF for NHibernate" :)
😂
I was lucky to be able to learn this lesson pretty early on in my career. We had to convert a flat file database to Oracle. The sheer number of changes in the function interfaces (it was C) showed me that it's pretty foolish to try and do this unless you actually have to.
Turning a class into an interface doesn't make it a better abstraction. Having a cohesive set of methods that makes it easy to use makes it a better abstraction, regardless of the keyword you use to define it.
In general I agree with the pinned comment, but I want to point out that in this example there is nothing stopping me from having an ORM-based implementation for the second interface. It fits perfectly.
But to be fair, mention all the cases where it is useful; I think you will find more pros than cons if you do it right. You can never know everything upfront, but you are looking for a design that has the best chance of being able to adapt to future requirements. My abstractions have seriously saved me a massive amount of dev time. You do need useful abstractions, and you do need to avoid leaks.
What about the branch by abstraction pattern?
I've talked about this in another video. The difference is that you create the abstraction when you know both the existing and the new target implementations.
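A rough sketch of branch by abstraction under that condition (all names hypothetical): the abstraction is introduced once both implementations are known, callers migrate to it, and a toggle routes between old and new until the old path and the toggle can be deleted.

```csharp
using System;
using System.Threading.Tasks;

public interface IDocumentStore
{
    Task Put(string key, byte[] content);
    Task<byte[]?> Get(string key);
}

public sealed class MigratingDocumentStore : IDocumentStore
{
    private readonly IDocumentStore _old;
    private readonly IDocumentStore _new;
    private readonly Func<bool> _useNew; // e.g. a feature flag check

    public MigratingDocumentStore(IDocumentStore oldStore, IDocumentStore newStore, Func<bool> useNew)
    {
        _old = oldStore;
        _new = newStore;
        _useNew = useNew;
    }

    // Route every call through the toggle while the migration is in flight.
    public Task Put(string key, byte[] content) => Target.Put(key, content);
    public Task<byte[]?> Get(string key) => Target.Get(key);

    private IDocumentStore Target => _useNew() ? _new : _old;
}
```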
I agree, but this example might be a bit misleading, as you are not just replacing the implementation but the whole paradigm.
That's my point, actually. There's this assumption that because you hide the implementation behind some abstraction, you can change the implementation. Your abstraction sets the assumptions and the model within which you can implement behind it.
If you are not unit testing, then don't abstract. Make everything concrete and see how that works for you when you do unit test.
Say you have TypeA that depends on TypeB, and TypeB is deterministic and doesn't have any external side effects. That's not going to work out without abstracting?
@@CodeOpinion Not exactly sure what you mean by side effects. If you're not making I/O or network calls, then of course there isn't any need to abstract.
In the video, you used storage as an example against abstracting when it is actually a reason to abstract. Network and I/O calls should always be abstracted if you are going to properly unit test.
@@awmy3109 Now I better understand your first comment, because I think there's some confusion. The video was about multiple implementations of an abstraction and the assumptions being made. I don't think I made a comment about not abstracting.
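A sketch of the deterministic-dependency case from the exchange above, using xUnit and hypothetical names: the dependency is pure, so the test uses it concretely with no interface in sight.

```csharp
using Xunit;

// "TypeB": deterministic, no external side effects.
public sealed class TaxCalculator
{
    public decimal TaxOn(decimal amount) => amount * 0.13m;
}

// "TypeA": depends on TypeB directly, no abstraction needed.
public sealed class InvoiceTotaler
{
    private readonly TaxCalculator _tax = new();

    public decimal Total(decimal subtotal) => subtotal + _tax.TaxOn(subtotal);
}

public class InvoiceTotalerTests
{
    [Fact]
    public void Adds_tax_to_subtotal() =>
        Assert.Equal(113m, new InvoiceTotaler().Total(100m));
}
```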
Definitely not everything should be an interface. BUT... just using all the magic a framework throws at you is a really bad idea. Imagine using a package for handling your payments, e.g. Laravel Cashier. Down the road the business has some special needs that are not supported by Cashier. Now you're screwed.
This is where the phrase "Code to an interface, not an implementation" really shines!
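A sketch of that idea (Cashier is PHP, so this is the same pattern in C# with hypothetical names): the application depends only on its own contract, and the third-party package lives in one adapter.

```csharp
using System.Threading.Tasks;

// The only payment type the rest of the codebase ever references.
public interface IPaymentGateway
{
    Task<PaymentResult> Charge(string customerId, decimal amount, string currency);
}

public record PaymentResult(bool Succeeded, string? FailureReason = null);

// One adapter wraps the vendor package; when the business outgrows it,
// the special-needs implementation replaces this class, not the app.
public sealed class ThirdPartyPaymentGateway : IPaymentGateway
{
    public Task<PaymentResult> Charge(string customerId, decimal amount, string currency)
    {
        // Calls into the vendor SDK would go here (omitted in this sketch).
        return Task.FromResult(new PaymentResult(true));
    }
}
```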
Don’t introduce unnecessary abstraction
"thought" leaders and other gurus (who are promoting these abstraction everywhere teachings), always forget to miss the type of software in which those abstractions are useful and that is libraries, frameworks or OS.
For building business applications (The boring CRUD) adding abstractions is worthless and abstractions cost more to maintain them.
In your example about caching, at least in Java Hibernate has transparent way to add second level caching without changing tone of code.
You have to find cache library that has Hibernate L2 cache support, configure a specific hibernate property and annotate classes you want to be store in Hibernate L2 cache
No interface, no metaprogramming (DIY)
Learning de-facto standard frameworks, libraries in-depth is useful to understanding that you only need abstraction if the framework/library doesn't provide for you.
Frameworks/libraries are already fill with ton of abstractions, just reuse them
A lot of the reason for this video is that there's this tendency to create an abstraction around a single implementation, and you only know that single implementation. Creating an abstraction with a single implementation won't land where you think it will when you do need to create another implementation. You'll be influenced by your single implementation and then be handcuffed by it.
@@CodeOpinion I'm exploring the same topic in one of the comments I shared. The gist of the issue is that abstractions should be created around the behavior that the application needs, and implementations should follow AFTER to provide a concrete realization of that behavior. The value of creating an interface around an existing implementation is limited indeed, and I don't consider that an actual abstraction.

Just because it's an interface doesn't make it an abstraction, and just because it's a class doesn't make it not an abstraction. For example, the EF DbContext provides us with a set of abstractions (e.g. DbSet) that are disconnected from the actual implementations (e.g. tables, if we're using an RDBMS). Just because DbContext and DbSet are classes doesn't make them not abstractions; they provide an abstract set of functionality for data access and manipulation that is disconnected from any of the concrete implementations underneath them.

However, while the DbContext is itself an abstraction, it is what I would call a platform-level abstraction rather than an application-specific abstraction. I would still create an abstraction around the specific application behavior, which I would then implement using the DbContext. It's layer upon layer of abstractions all the way down to the physical layer, where we're pushing electrons through wires. The problem is which abstractions are useful and why they are useful :).
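A sketch of that layering, assuming a hypothetical application-specific abstraction implemented on top of the platform-level DbContext (interface and entity names invented for illustration):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Application-specific abstraction: stated in the language of the behavior.
public interface ICustomerCredit
{
    Task<bool> HasAvailableCredit(Guid customerId, decimal amount);
}

// Realized on top of the platform-level abstraction (DbContext/DbSet).
public sealed class EfCustomerCredit : ICustomerCredit
{
    private readonly AppDbContext _db;

    public EfCustomerCredit(AppDbContext db) => _db = db;

    public async Task<bool> HasAvailableCredit(Guid customerId, decimal amount)
    {
        var customer = await _db.Customers.SingleAsync(c => c.Id == customerId);
        return customer.CreditLimit - customer.CreditUsed >= amount;
    }
}

public class AppDbContext : DbContext // provider configuration omitted
{
    public DbSet<Customer> Customers => Set<Customer>();
}

public class Customer
{
    public Guid Id { get; set; }
    public decimal CreditLimit { get; set; }
    public decimal CreditUsed { get; set; }
}
```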
@CodeOpinion the tendency is to create abstractions around technical concerns instead of around use cases.
That's where problems start to creep in.
The ORM, or whatever you use, should be just an implementation detail of the storage adapter.
Frankly, most database abstractions are horrible in common languages that don't support mixins, and even in those that do, they're mostly crap, because they force themselves into being superclasses. That's vendor lock-in by the book, too; how convenient (for them).
Scala (3) could be a nice language judging by the constructs it supports; not sure about its ecosystem, though.
@@andreipacurariu2013 That's "Deep" 😉