He's using this term wrong. He's manually creating new instances of objects and passing other objects as arguments. Dependency Injection handles that instantiation for you: you can just use the objects without worrying about instantiating whatever they need in their parameters. It automatically wires up everything you need. With dependency injection you don't do something like the following pseudocode to get a Controller:
DB db = new DB(url)
Service s = new Service(db)
Controller c = new Controller(s)
You create the instance like this:
@AutoWire
Controller c
Then the DI framework will try to create the instance automatically. It sees that your Controller needs a Service, so it will try to create that one, but it sees that this Service needs a DB, so it will create that one as well. The DB needs a URL string, so in that class you specify from which property or environment variable it gets that value. That's Inversion of Control: instead of you having to pass in the arguments to instantiate objects manually, the framework takes care of it automatically. He's using JavaScript, but he's not using any Dependency Injection framework. There's no @injectable annotation anywhere. He doesn't specify how to wire things. He's not using Dependency Injection. He's just talking about Composition; he hasn't talked about Dependency Injection at all.
@@Domo3000 You are describing injection *frameworks*, not the concept of dependency injection itself. This video is correct: dependency injection is just a name for the concept of passing a dependency in from outside the thing that uses it. That's all it is. You don't need a framework; you can do it manually. Frameworks just automate this pattern on your behalf. Composition can be achieved without dependency injection (i.e., without passing dependencies in from outside); DI helps enable composition that can be controlled/changed at higher levels of your codebase.
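What this comment describes, manual constructor injection with no framework, can be sketched like this (all class and method names are invented for illustration):

```typescript
// The dependency (a logger) is passed in from outside rather than
// constructed inside ReportService. That alone is dependency injection.
interface Logger {
  log(msg: string): void;
}

class ConsoleLogger implements Logger {
  log(msg: string): void {
    console.log(msg);
  }
}

// A test double that records messages instead of printing them.
class MemoryLogger implements Logger {
  messages: string[] = [];
  log(msg: string): void {
    this.messages.push(msg);
  }
}

class ReportService {
  // The dependency arrives through the constructor; no framework involved.
  constructor(private logger: Logger) {}

  generate(name: string): string {
    this.logger.log(`generating ${name}`);
    return `report:${name}`;
  }
}

// Wiring is done by hand at the composition root:
const service = new ReportService(new ConsoleLogger());
```

Swapping `ConsoleLogger` for `MemoryLogger` in a test is the "controlled/changed at higher levels" part: the caller, not `ReportService`, decides which implementation is used.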
@@MajesticNubbin The frameworks take care of the Injector part of the principle: the part that's responsible for creating instances and their dependencies. They just make it easier, but even if you don't use a framework, Dependency Injection is still something different from what's shown in the video. If you look at the video, there's no Injector to be found. At 9:39 he's doing all the initialization of the dependencies himself. With Dependency Injection you shouldn't have to create the image_scaler to pass it to the creation of the image_generator; your Injector should do that for you. Here are some definitions:
"The interface only decouples the usage of the lower level class but not its instantiation. At some place in your code, you need to instantiate the implementation of the interface. That prevents you from replacing the implementation of the interface with a different one. The goal of the dependency injection technique is to remove this dependency by separating the usage from the creation of the object. This reduces the amount of required boilerplate code and improves flexibility."
"With the dependency injection pattern, objects lose the responsibility of assembling the dependencies. The Dependency Injector absorbs that responsibility."
"In software engineering, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service)."
"We will need to recreate the car object with a new Yokohama dependency. But when using dependency injection (DI), we can change the Wheels at runtime (because dependencies can be injected at runtime rather than at compile time). You can think of DI as the middleman in our code who does all the work of creating the preferred wheels object and providing it to the Car class. It makes our Car class independent from creating the objects of Wheels, Battery, etc."
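The "Injector" role this comment insists on can be sketched as a toy container that recursively builds the Controller -> Service -> DB chain from registered factories. This is a hand-rolled illustration, not how any real framework (Spring, NestJS, etc.) is implemented internally, and all names are invented:

```typescript
// A toy Injector: factories are registered under string keys, and resolving
// one key recursively resolves everything it depends on.
type Factory<T> = (c: Container) => T;

class Container {
  private factories = new Map<string, Factory<unknown>>();
  private instances = new Map<string, unknown>();

  register<T>(key: string, factory: Factory<T>): void {
    this.factories.set(key, factory);
  }

  resolve<T>(key: string): T {
    if (!this.instances.has(key)) {
      const factory = this.factories.get(key);
      if (!factory) throw new Error(`no factory for ${key}`);
      // The factory may call back into the container, which is what
      // recursively builds the whole dependency graph.
      this.instances.set(key, factory(this));
    }
    return this.instances.get(key) as T;
  }
}

class DB {
  constructor(public url: string) {}
}
class Service {
  constructor(public db: DB) {}
}
class Controller {
  constructor(public service: Service) {}
}

const container = new Container();
container.register("url", () => "postgres://localhost/app");
container.register("db", (c) => new DB(c.resolve("url")));
container.register("service", (c) => new Service(c.resolve("db")));
container.register("controller", (c) => new Controller(c.resolve("service")));

// One call builds the whole chain, which is the effect @AutoWire gives you:
const controller = container.resolve<Controller>("controller");
```

Instances are cached, so resolving the same key twice yields the same object, mimicking the singleton scope most DI frameworks default to.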
Literally every definition describes something different from what's shown in this video. There's no Injector taking care of creating the dependencies and handling instantiation. He's not making use of Dependency Injection; he's just following the Dependency Inversion Principle.
@@MajesticNubbin Yup. And that framework even uses a "dark" pattern: Service Locator. But at least it's hidden from the user (programmer), so it's acceptable ^^ But I have seen people routinely pulling objects from the container... Then they wonder why you want to kill them when you have to test ^^ Still doable, but now you have to do more of an integration test ^^ To prevent surprises, in Spring I create configuration classes per test when needed, so the test blows up if an unexpected dependency is unmet, and I can investigate why it is required. I have been a QAM, so I saw quite some horrors and held sit-down sessions to fix programmers' minds. 😂 QAM is a funny role because devs usually think "just a tester" until you read their code and explain how it can be improved. (QAM is not QC, but in smaller companies QAM does QC.)
agreed. if you're having trouble committing time due to the wimpy monetization, make something like a shirt for 10 bucks. tell everyone to buy it at the end of a video with a link in a comment or description, and charge 20.
Summary of the wisdom I have collected in my 20-year journey into the mind of the computer: 1) Favor composition over inheritance. 2) Couple to interfaces rather than implementations. 3) Premature abstraction is often a bigger problem than premature optimization. 4) DRY is only a rough guideline. If you repeat something more than twice, then refactor it out.
Premature abstraction in a way *is* premature optimization: you're abstracting without a reason, in case something *might* change, in the same way you're making code obtuse instead of readable in case it *might* need to be faster. I think Code Aesthetic laid out good rules here: 1. If you're introducing more than 2-3 paths in your code, that's a smell it might need an abstraction. 2. If you have a dependency that is difficult to initialize for testing, that's a smell. 3. If you have a resource that is killing your startup time but is only occasionally used, that's a sign it should be lazily initialized, and a DI/factory is a good way to help with that.
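Point 3 can be sketched as injecting a factory instead of the instance, so the expensive construction is deferred to first use. All names here are invented for illustration:

```typescript
// Stand-in for a resource that is slow to build at startup.
class ExpensiveIndex {
  static buildCount = 0;
  constructor() {
    ExpensiveIndex.buildCount++; // represents the slow startup work
  }
  lookup(key: string): string {
    return `result:${key}`;
  }
}

class SearchService {
  private index?: ExpensiveIndex;
  // Inject a factory, not the instance: construction is deferred.
  constructor(private makeIndex: () => ExpensiveIndex) {}

  search(key: string): string {
    this.index ??= this.makeIndex(); // built once, on first call only
    return this.index.lookup(key);
  }
}

// Startup stays fast: nothing expensive is built here.
const search = new SearchService(() => new ExpensiveIndex());
```

The factory parameter also doubles as the test seam: a test can pass a factory returning a stub index, covering point 2 at the same time.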
1+2) Data-oriented designs are better than both, but there's a scarcity of good libraries, so it can be limiting in the beginning. 3) Hard agree (depending on your exact definition), but also: 3a) Implement the usage code ASAP (before the functional code if possible); that's the fastest way to realize what your code needs to do. 3b) Don't try to cover edge cases before you've actually made a working implementation. Once you've made working code, a lean solution will be much easier to see. Even the best programmers forget this sometimes. 4) I typically agree, but I don't think it's an important guideline. 5) Don't be afraid to question norms. Computer science is still a very young field and it's *full* of dogma, and most of the commonly accepted guiding principles read like advice from humanities papers rather than quantifiable and falsifiable ideas. If you follow this rule, you're virtually guaranteed job security, judging by the current degrading state of software.
@@Muskar2 could you expound upon data-oriented design a bit? I have an intuitive idea about what it is just by reading the words, but I'm unfamiliar with it as a specific, named design principle/strategy.
Great video on an interesting topic! One note: when you put text at the bottom like you did at ~1:22, consider that if the text disappears too quickly or appears while you're speaking, the viewer will probably have to pause the video to read it. But if they pause, the YT media controls won't disappear and will cover the part of the screen where the text is.
As soon as a minute passes in the video, I begin to understand why you don't upload as frequently as other channels we are used to. Creating such perfect videos requires time. Thank you
It was a well-made video, but it has several mistakes. First, literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. That's not inherently better or worse; it completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach. Grouping code by service provider makes it harder to find out what a specific piece of functionality does, as all the upload functions are split into different classes. A better teacher would explain the different trade-offs, not just preach that "this is the best" without mentioning any drawbacks. Finally, absolutely never use NODE_ENV to check for the production environment. That's such a rookie mistake. If you're doing your testing and staging properly, you should be running NODE_ENV=production in some of your development environments: Node itself and your dependencies use it to switch some runtime optimizations on and off, and you need to test the production build in dev. Always use a separate custom flag to determine the deployment environment.
Dependency Injection itself does not favor Composition over Inheritance, it just externalizes the composition. However, you need Composition to do DI. There's actually some Inheritance in here with the use of the interfaces, but it is done the right way. (The wrong way would be if the different storage methods had been added by inheriting directly from the main class. Yeah, I've seen some such code…)
@@Cau_No what was the point of creating the base storage class since none of his inherited storage classes seemed to call any base storage class methods?
@@MaxwellORoark The base 'class' is an interface: unlike an abstract class, it has no implementations at all, only the methods that the inheriting classes must implement. It's used to make the subclasses pluggable and therefore testable.
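The interface-vs-implementations relationship described here can be sketched like this (the storage names are illustrative, not the ones from the video):

```typescript
// The "base class" is only a contract: no behavior, just method signatures.
interface FileStorage {
  save(name: string, data: string): void;
}

// One concrete backend, handy for tests: keeps everything in memory.
class MemoryStorage implements FileStorage {
  files = new Map<string, string>();
  save(name: string, data: string): void {
    this.files.set(name, data);
  }
}

// Another backend satisfying the same contract.
class LoggingStorage implements FileStorage {
  log: string[] = [];
  save(name: string, data: string): void {
    this.log.push(`${name}: ${data.length} bytes`);
  }
}

// Callers depend on the interface, so any backend is pluggable here:
function upload(storage: FileStorage, name: string, data: string): void {
  storage.save(name, data);
}
```

Because `upload` never names a concrete class, swapping `MemoryStorage` for `LoggingStorage` (or a real cloud backend) requires no change to it, which is exactly what makes the subclasses "pluggable and therefore testable".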
This is hands down the best explanation of dependency injection I've ever seen. That "puzzle" where you kept connecting those services made it extremely easy to reason about what's happening!
This is soooo good. I’m an SDE at Amazon, and basically you’ve summed up 99 percent of what I’ve learned about writing clean code while here. Well done, this is awesome.
Literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. It's not inherently better or worse, and completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach.
@@Asijantuntia You're right. In TypeScript the more common way, given a finite number of configuration types, is to define the config types individually, union them into a discriminated union, and use type guards. But type guards are harder to extend when you don't have a finite number of possibilities. Alternatively, to pass a unit of computation, one can simply pass a callback function instead of an object; this is more commonly seen in TypeScript libraries.
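A user-defined type guard, the mechanism this comment refers to, can be sketched like this (the config shapes are invented for illustration):

```typescript
interface AwsConfig {
  bucket: string;
}
interface SftpConfig {
  host: string;
}
type Config = AwsConfig | SftpConfig;

// The 'config is AwsConfig' return type is what makes this a type guard:
// inside a branch where it returned true, TS narrows Config to AwsConfig.
function isAws(config: Config): config is AwsConfig {
  return "bucket" in config;
}

function destination(config: Config): string {
  // After the guard, only the matching fields are accessible in each branch.
  return isAws(config) ? `s3://${config.bucket}` : `sftp://${config.host}`;
}
```

The comment's caveat shows up here too: every new config shape means touching the union and the guard, whereas an injected interface implementation is open-ended.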
For a long time, people would ask me "Are you using dependency injection?" and try to make it sound like the fanciest thing ever, something that should always be used. Whether I was aware of the term or not, I was already employing the practice to organize my code. That said, it's best used when it makes sense. If it drives up complexity just to "plan for later", it fights against simplicity. Use it when needed; otherwise it can create additional arbitrary interfaces that drive up the project's complexity. Years ago I remember making completely useless interfaces to try to "be versatile", but it just made things much worse 😅
Dependency Injection is fancier than what's presented here. He just talks about Composition and falsely calls it Dependency Injection. He manually creates new instances of classes and passes them as arguments. With Dependency Injection you just specify what arguments are required and how to construct them. So something like:
DB db = new DB(url)
Service s = new Service(db)
Controller c = new Controller(s)
isn't dependency injection. That's just composition. With Dependency Injection you would just write:
@AutoWire
Controller c;
to get an instance of the controller. Dependency Injection is about inverting the flow. Instead of you manually creating the required instances and passing them as arguments, the DI framework looks at which arguments are required and recursively spins them all up as needed. So in this case it would try to create a new Service, then notice that the Service needs a DB, so it would try to spin that up as well and notice that it needs a URL. To make that work, you would specify somewhere which property or environment variable it reads that String from. Then the Dependency Injection framework can create a new instance of the Controller without you having to manually instantiate all its dependencies and their dependencies first. This video completely missed the point.
Yeah, one instance where it doesn't make sense is when injecting simple objects that don't have any kind of side effect. Why not simply instantiate them inside? Unless they're using unknown data.
Absolutely. Usually DI simplifies the code, but creating a single implementation of an interface might not be worth it wrt building the full dependency tree. I often resort to creating double constructors in Java: one that has everything passed into it, and one (the default) that constructs the dependencies that don't really change (like System.clock), so that I can inject a fixed clock in tests while keeping a simpler DI setup for production.
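The Java "double constructor" trick translates to a default parameter in TypeScript: production code gets a real clock for free, and tests inject a frozen one. The names here are illustrative:

```typescript
// A clock is just a function returning the current time.
type Clock = () => Date;

class Stamper {
  // Default argument plays the role of the second (default) constructor:
  // callers that don't care get the real clock automatically.
  constructor(private clock: Clock = () => new Date()) {}

  stamp(msg: string): string {
    return `[${this.clock().toISOString()}] ${msg}`;
  }
}

// Production: no wiring needed at all.
const prod = new Stamper();

// Test: inject a frozen clock for deterministic output.
const fixed = new Stamper(() => new Date("2024-01-01T00:00:00Z"));
```

This keeps the dependency tree trivial for the common case while preserving the test seam, exactly the trade-off the comment describes.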
This is a great video with tons of great explanations and visual aids. However, throughout the whole video I kept asking myself how you would handle errors in this situation. The more I thought about it, the more complex it got, so I think it would be great if you could make a video talking about how you could handle errors in a situation like this, and potentially in other situations as well.
While I understand the question, I think the better question is not how you handle errors but where. Error context should be scoped per method. So you might have guards in your constructor and then logical error handling in the specific method to validate that the functionality worked correctly. As mentioned, if you stream the data and there was an error chunking it, that error should be handled there. What you show the end user is decided wherever the result gets presented. I know many people building web applications talk about a layered approach: Presentation, Backend, Infrastructure, and so on. The only way the user ever sees an error is in the Presentation layer. This does not mean the frontend: Presentation can also be any endpoint being hit, so in Node.js frameworks this is normally the route (Django -> View, .NET -> Controller, and so on). It is how you are consuming requests and then presenting the result, even if that result is an error.
Using dependency injection doesn't really change anything for error handling. Whether you are working directly with a class or through an interface is basically the same: it returns or throws an error, and you check or catch it.
This explanation was EVERYTHING. The clearest explanation at start, examples that make sense, a quick and engaging way to show the code, and clear animation. Now I need to watch everything you've ever made. Thank you!
Great use of the visual diagrams, this is one of the best examples that I have seen for visually showing code concepts alongside the actual code. Keep up the good work!
This is an awesome example and explanation of the benefits DI provides. The not-so-good part of the DI is that people really tend to use DI for everything and start abstracting things that they would never change.
That's me. When given so many options, sometimes it's hard to know what you need. But over time, as the requirements change, it's easier to write up DI for each scenario.
This finally makes sense to me. During my courses at school, the teacher would give vague answers when talking about Component Based Software Engineering, and how you would write components / dependencies up on a 'white model', and how they each require and provide for each other. He did not want to elaborate on dependency injection, and faulted us for it. Watching this made me realize I did not have a basic understanding of the technology, and how powerful it actually is - You explained in 1 video what a whole course could not. Thank you!
Take this video with a huge grain of salt. First, literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. That's not inherently better or worse; it completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach. Grouping code by service provider makes it harder to find out what a specific piece of functionality does, as all the upload functions are split into different classes. A better teacher would explain the different trade-offs, not just preach that "this is the best" without mentioning any drawbacks. Finally, absolutely never use NODE_ENV to check for the production environment. That's such a rookie mistake. If you're doing your testing and staging properly, you should be running NODE_ENV=production in some of your development environments: Node itself and your dependencies use it to switch some runtime optimizations on and off, and you need to test the production build in dev. Always use a separate custom flag to determine the deployment environment.
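The "separate custom flag" advice in this comment can be sketched as follows. The variable name `APP_ENV` is an assumption (any dedicated name works); the point is that it is distinct from `NODE_ENV`, which stays reserved for Node and its libraries:

```typescript
type DeployEnv = "production" | "staging" | "development";

// Decide deployment behavior from a dedicated variable, never from NODE_ENV.
function deployEnv(env: Record<string, string | undefined>): DeployEnv {
  switch (env.APP_ENV) {
    case "production":
      return "production";
    case "staging":
      return "staging";
    default:
      return "development"; // safe fallback when the flag is unset
  }
}

// NODE_ENV=production with APP_ENV=staging is now a valid combination:
// a staging deployment exercising the optimized production build.
const env = deployEnv({ NODE_ENV: "production", APP_ENV: "staging" });
```

With the two variables decoupled, you can run the production build (`NODE_ENV=production`) in every pre-production environment while still keying feature behavior off the deployment flag.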
@@Asijantuntia I both agree and completely disagree. This is an introduction to the dependency injection design pattern, not the whole dependency injection paradigm. For that he would need to cover runtime vs. build-time dependency injection, DI frameworks, and a full discussion of environment variables and their usage in delivering software products, which is complete overkill in an introduction to a design pattern. I agree that he could probably cover some shortcomings of the pattern, like how cluttered your codebase becomes with the interfaces, or how the control flow becomes more indirect even as each piece becomes easier to read. The second part of your comment seems more like a rant than anything. Why shouldn't you use NODE_ENV to check your environment? Node sets the environment variable at runtime, defaults it to "development", and it's a fairly common practice. What would your alternative be?
I'm genuinely impressed by the quality of this video: the visuals are nice, the delivery is spot on, and the explanations and examples are super clear. Well done!
I'm so glad you take the time to fully think out an idea and use great, real examples. Can you do one on testing? That's a topic that felt easy to do but very hard to master.
I cannot express how amazing these videos are dude. The ability to show a concept theoretically with such a short amount of time and short amount of code explanation is astounding
While dependency injection makes code clean, I found the obsession with it causes incredibly abstract codebases that become hard to navigate. If you're not familiar with the codebase, you are lost as to what actually happens during a request. Almost every function does a little and then calls into a dependency that does... Something. You then have to figure out what dependency. In this way, you move the code from having concise local static semantics, to having very global dynamic semantics. Nowadays, when I absolutely must use dependency injection, each dependency interface and all implementers will live next to each other. And the rules for choosing that implementer will live in the same folder/file. This way, it's somewhat more localised and the cognitive overhead is reduced
This is especially true when the dependencies are just cast out into the ether and pulled down “magically” to fit an interface the dependent code needs (e.g., Spring, Laravel). “Oh, this thing needs one of those things. But which one of those things is it using? Where is that configured?” 🫠
I think where dependency injection really shines is resource access, and especially I/O. These things can be very hard to debug and test, but with dependency injection you can easily mock the dependencies: for development, to make some parts static and easier to get running, and especially for testing. In that way it's similar to classes: they really shine for RAII, but if you use them for arbitrary abstract concepts (especially modeling interactions between real-world objects), code gets messy.
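The "mock the I/O" idea here can be sketched by injecting the file access behind a tiny interface; the names are invented for illustration:

```typescript
// The I/O boundary is an interface, so tests never touch the real filesystem.
interface FileSource {
  read(path: string): string;
}

class ConfigLoader {
  constructor(private files: FileSource) {}

  // Parses a JSON config file and extracts the port number.
  port(path: string): number {
    return Number(JSON.parse(this.files.read(path)).port);
  }
}

// A canned fake: no disk, no flakiness, trivially debuggable.
const fakeFiles: FileSource = {
  read: () => '{"port": 8080}',
};
const loader = new ConfigLoader(fakeFiles);
```

In production you would pass an implementation backed by `fs.readFileSync`; the point is that `ConfigLoader` itself never knows or cares which one it got.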
I agree. Although maybe not as severe as a video about why code comments are awesome or something, I would still say this upload was years out of date the moment it came out. DI often suffers from a lot of OOP issues, to the point where we need crutches like DI frameworks, DI patterns, and project standards just to ease some of the burden. Hell, hardcore functional programmers will often say the only "injection" you need is passing parameters to functions.
I think this is more of a design problem. In fact, most libraries or frameworks that heavily use DI with strong typing are often way simpler to understand by navigating through the return types and interfaces than by reading the docs. So...
This is truly genius, and I like the abstraction level: not going too much into unimportant trivial details but rather getting to the key points of the concept.
I wouldn't call it "the best pattern". Not after all the hours I've spent trying to debug complex application bugs related to DI. The problem is: if the injected dependency is the wrong one (I mean not a bug inside the dependency object, but an incorrect dependency value itself), it can become very hard to debug. Especially when DI is combined with patterns like factory, auto-factory, service provider, and other automation. First you need to identify the fact that the dependency is wrong, then find where it came from, then find where it came into the factory/auto-factory/service provider. And it could be that you then find yet another layer of DI there.
I've been noticing that a lot of programming patterns simply boil down to polymorphism. I once thought this "Strategy pattern" was something difficult and mysterious, but it turned out to simply be polymorphism. Now this menacing-sounding term, "Dependency Injection", also turns out to be polymorphism. I am at a loss for words. Thanks for a great video!
all "design patterns" boil down to function application (sometimes partial). but java programmers invented names for them to torture other programmers with "do you know how to write a decorator factory that computes a price for different types of coffees with sugar?".
You’re quickly becoming one of my favorite channels on best coding practices. Lot of stuff I’ve preached for years all packed into well made terse videos
Passing parameters to a function has never been so complex as with these dependency injection frameworks. This video shows how it actually should be done: just super simple code!
Thank you for clarifying that "dependency injection" is such a simple concept, I had always assumed it was something more complicated always involving frameworks or reflection.
That's a beautiful extension of ideas in the second half of the video. Essentially, mocks and dependency injection are a way of viewing code in components and using the component you need for the use case you are addressing. It's all just function chunks.
I just made a database system based on generics and a ton of interfaces. To me, as a noob, it was a challenge to set up, but once it was working, damn, easy to expand and customize. This workflow is great once it's up and running :) The abstract nature of interfaces can be hard to get your head around at first, but it's worth it.
The visuals, the explanation, the code... all of it was just so straightforward and understandable that I now finally get just how misleading "dependency injection" as a term really is. The use case for testing also really helped me understand why I should be doing this even for my pet projects at home 😅
I always found dependency injection intuitive, but that comes from having had to solve the same problems without it. This can be a difficult topic to explain to a new developer, or even one who just hasn't worked in a modern software stack. I'll be passing this video along to anyone who asks me to explain DI from now on. :)
I struggled to understand interfaces and DI in my first years, until I had a job interview where they requested a console app that uses multiple search engines to search for a string and compare the results. I ended up with multiple implementations of the same interface, using the strategy pattern without knowing it, and it finally clicked for me. DI is a solution to a problem, so I agree that the best way to understand a solution is to face the problem in the first place.
Outstanding. This is the best explanation and most visual description of why basic design and TDD are important to software I've ever encountered. Extremely well done.
Your videos are the most helpful on the internet because you use real-world code. Most channels (and textbooks) would have used Dogs and Cats (or worse, Customers and Banks) with implementations that are so trivial they're not even helpful. I'd watch all your videos because of that alone, but the cool and helpful animations, chill voiceover, and unexpected death metal send them to another level. Thanks for another great video! 😃
This is the best walk-through of dependency injection I've ever seen. Everything from the example chosen, to the visuals, and the pacing/tone of your voice is excellent. You are doing incredible work on this channel and have earned my support on Patreon.
Thank you so much for this beautiful video. I always hated learning how dependency injection works and why one would need it, until a year ago when I tried to tamper with some web APIs. Now I cannot develop any apps without DI. This video, while a bit fast in some spots, is a perfect example-driven tutorial on DI. Many tutorials oversimplify the basics, and the step to understanding how it becomes useful is blurred by overly simple examples. This is just well made.
Looks like Dependency Injection is also good for scaling. As in the example in the video, even if you have only one type of Encryption, it's still good to have an interface in case you ever need to add another type of encryption to your system.
remember YAGNI: you ain't gonna need it. only abstract what will actually be needed; DI is great, but it still adds cognitive load for those reading it.
@@mattpen7966 I mean, adding an interface that specifies the public API of a class doesn't add that much abstraction, and it actually makes clear what you intend to offer to the consumer of said API.
@@aruZeta exactly!! Actually defining an interface helps you understand much better how you should implement the service. It's just a piece of boilerplate that you can also use to draft ideas, and once you have it, you can start implementing the actual service. I agree that creating interfaces is neither a useless nor an expensive abstraction.
When I was new to interfaces, I thought they were a waste of time, but as my experience has grown, investing the time to make interfaces really pays off when you invite others into your projects. People have different opinions, and enforcing a set of rules for your services makes collaboration easier.
When I was using Lua, we didn't have proper types and interfaces, and I didn't know much about interfaces, but I did something similar using a Manager object that takes 3 objects, dynamically checks that they have the correct type, "requires" them into its own object, and lets users select one via an enum. I used the pattern a lot: in a construction game, Build Mode, Edit Mode, and Delete Mode were handled with this model, and in a Discord bot, commands to the bot were handled with it. I didn't even know about interfaces or dependency injection, but after many mistakes with the many if statements and whatnot, I ended up remaking the same thing.
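The Manager pattern this comment describes translates directly to TypeScript; the mode names come from the comment, while the method names and return strings are invented for illustration:

```typescript
// In typed TS, the "enum" can be a union of literals and each mode is an
// object satisfying a common shape, replacing Lua's runtime type checks.
type Mode = "build" | "edit" | "delete";

interface ModeHandler {
  onClick(target: string): string;
}

class ModeManager {
  private active: Mode = "build";
  // All handlers are injected up front, keyed by mode.
  constructor(private handlers: Record<Mode, ModeHandler>) {}

  select(mode: Mode): void {
    this.active = mode;
  }

  click(target: string): string {
    // Table dispatch replaces the long if/else chains the comment mentions.
    return this.handlers[this.active].onClick(target);
  }
}

const manager = new ModeManager({
  build: { onClick: (t) => `built ${t}` },
  edit: { onClick: (t) => `edited ${t}` },
  delete: { onClick: (t) => `deleted ${t}` },
});
```

`Record<Mode, ModeHandler>` also gives a compile-time guarantee that every mode has a handler, which the Lua version had to verify dynamically.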
I see a new Code Aesthetic video and literally CANNOT click on it until I know I'm in a space where I can give it some focus. 3 Days later, here I am, let's go!
this reminds me of these videos about cells and biology I used to be obsessed with. Everything about this video - the explanations, content, insights, music, animations, are perfect
This is excellent. I use dependency inversion by default; I use TDD pretty much constantly, which naturally leads to this sort of thing. What always baffles me is that folks seem to think DI is only for Java/C# and other statically typed languages, when everything benefits from it. Another great effect of inverting dependencies is that you can pull "use case" classes into existence. "What happens when we upload?" The code tells us, without bothering us with every single detail.
The problem mentioned at roughly 3:10 could also be solved in TypeScript with a discriminated union type: 'type A = { destination: "aws", aws_stuff: string } | { destination: "sftp", sftp_stuff: string }'. If you pass "aws" as the destination, TS will narrow the type and only allow the properties that were defined together with the "aws" destination. In that case there's no need to document which other properties go with each destination, because TS narrows it down for you. The dependency injection refactor done in the video is still better here, but there are cases where a discriminated union is the right tool.
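The union from this comment, expanded into a runnable sketch (the field names are the comment's own placeholders, not the video's):

```typescript
// 'destination' is the discriminant; each variant carries only its own fields.
type A =
  | { destination: "aws"; aws_stuff: string }
  | { destination: "sftp"; sftp_stuff: string };

function target(config: A): string {
  // Switching on the discriminant narrows the type in each branch.
  switch (config.destination) {
    case "aws":
      return config.aws_stuff; // 'sftp_stuff' would be a compile error here
    case "sftp":
      return config.sftp_stuff; // and 'aws_stuff' here
  }
}
```

The compiler also checks exhaustiveness: add a third variant to `A` and this `switch` stops compiling until it handles the new case, which is exactly the documentation-by-types benefit the comment describes.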
Exactly my thought. The classic case is when you're designing a simple interface for a library. Within the library you might want to use dependency injection and many other concepts to keep your code solid, but simplifying the I/O interfaces between the library and the end user, so your methods are reduced to their abstract goal, makes them easier to use (and better recognized/handled by IntelliJ) than having multiple implementations the user needs to know specifically. Those probably add significant time to the learning process (through docs) rather than offering an intuitive use case. Me monkey, me see parameters, me fill parameters, me service works, me happy.

As often in coding, everything has its pitfalls, and it's a lot about choosing to do the right thing at the right time. (It's kind of like, earlier today on another video about Assembly, I saw someone saying they did a bunch of Dart and hated that people used Electron to build Windows apps, because of how unoptimized it is. Except, for example, I coded an app in Flutter (Dart) and had to do a lot of the styling of the app myself, because Flutter barely has community libraries; and when I got to specific use cases (like decoding music metadata specifically), I realized there was no solid library at the time that could offer a simple approach to getting the data, forcing me to implement every use case myself by reading and decoding byte arrays: a massive time loss, as you can imagine, with all the different standards. On the other hand, I made a significantly more complex app with Electron that took half the time to style and let me use significantly more powerful libraries that simplified most of the work. Not only was the technical debt significantly lower, but that app consumes 2-3% of my RAM and negligible amounts of processing power except during some very mild peaks; the optimization gains would have been entirely pointless.)
I've come back to watch this video like, 4 times now. I can't tell you how amazing you are at explaining things. And the visuals *chefs kiss*. Keep it up dude!
My last job I did a bunch of Elixir and Rust programming. Of course, we used dependency injection all over the place because it's extremely useful. It's really nice that you get it out of the box in functional languages, since instead of passing interface objects you are just passing in functions. So in Rust we were passing in trait objects, and then in Elixir we were passing in functions and also doing process-level dependency injection. Testing on a system like this is just extremely easy, and so is adding new parts and expanding the overall functionality. Being able to mock up a part of your code or change the connections of your code on the fly is a very powerful abstraction.
I've used it extensively over all my designs throughout my software development career. The upgradability, ease of testing, and readability this provides (not to mention the easily configurable code that can be controlled just using configs) is so great that this truly deserves the title of the best pattern.
One of my favorite things about your videos is that the examples are not "We have an animal interface with an eat() method and monkey implements it and prints 'banana' on the screen" type of examples.
I find the topic of dependency injection very interesting! I would love for you to make a video about the technical details of how dependency injection works in common frameworks. Not how to use them, but how the framework manages to get the right arguments to the right functions. I've always wondered how this works and googling about it only ever gives tutorials on how to use the framework(s).
I think it's actually as simple as what he explained: just take a bunch of configuration as input and from that decide what implementation to return. You have 3 services: A1, A2 and B. B depends on A. If the configuration has the flag a1 set to true, you create an instance of A1 and pass it as a parameter to the constructor of B; if it is a2, then you create A2 and pass it as a parameter. (The configuration should be an enum, not a boolean, since a1 and a2 are mutually exclusive.)
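The A1/A2/B wiring described above could be sketched like this (an illustrative shape, not code from the video):

```typescript
// The A1/A2/B example: pick the implementation of A from configuration,
// then hand it to B's constructor. All names follow the comment above.
interface A {
  name(): string;
}

class A1 implements A {
  name(): string {
    return "A1";
  }
}

class A2 implements A {
  name(): string {
    return "A2";
  }
}

class B {
  constructor(private a: A) {}
  describe(): string {
    return `B using ${this.a.name()}`;
  }
}

// Enum-like union instead of two boolean flags: the variants are
// mutually exclusive by construction.
type Variant = "a1" | "a2";

// The "injector" part: read configuration, pick the implementation, wire B.
function buildB(variant: Variant): B {
  const a: A = variant === "a1" ? new A1() : new A2();
  return new B(a);
}
```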
Yeah, I wouldn't worry too much about using a DI framework. Just start out doing DI manually like in this video until you internalize the concept. A lot of people have a hard time separating DI from DI containers.
Your videos are the best programming / coding / engineering / computer science (probably other categories too) video essay tutorials I have ever seen. Astounding, one of a kind work. Not to mention the name, I’ve been thinking about how much of what I value is some kind of “optimization aesthetic” or “aesthetic optimization.”
0:18 "We inject the dependent code into the code that uses it." I would argue that it is the opposite... it is precisely that code that uses it that must be termed the "dependent code", because _that_ is the code that depends on the injected code (the dependency) in order to do its work.
Was looking for this comment, I watched only the first 30 seconds which explains how it's implemented but doesn't explain the principle behind it. Thanks for pointing it out
I clicked on this video out of curiosity for something that sounded weird but it turns out it mostly resembles what I called making "generic interface classes" (or something similar in English) Those really are a must in an environment with a lot of rapid evolution of solutions like a research project where you change libraries or implementations all the time ! Really clean video !
Switching logic is fine when used sparingly, but excessive use results in code that's impossible to read or maintain. If left unchecked, even small changes end up requiring modifications to 30+ files, and finding bugs becomes a game of whack-a-mole.
@@LimitedWard >Switching logic is fine when used sparingly, but excessive use results in code that's impossible to read or maintain. How do you know? Did you check? Or are you repeating the same programming propaganda from other people?
that visual interpretation of function parameters is what I've always used when designing, even factories, so the dependency injection explanation snapped into my brain the moment I saw the visuals. 10/10
Problem with dependency injection is the memory footprint. It makes things dynamic when they could be static at compile time. It might not be as important in TypeScript. Thanks for this great video!
Dependency injection is a powerful tool that can greatly improve the modularity and testability of a system, but like any tool, it comes with trade-offs that need to be considered.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. > > Yet we should not pass up our opportunities in that critical 3%. -Sir Tony Hoare People conveniently forget the context of the quote to justify sloppily inefficient production code. That optimization is the last mile - get it working first, then make it scale - does not mean that we should ignore it.
@@ghost_mall Repeating tired clichés should be considered the root of all evil. The number of times I've seen people say "I optimize for readability", or "It's best practice" and still produce the most brain dead code 🙄
@@integerdivision Agreed - Make it work - Make it stable - Make it fast *IN THAT ORDER* That doesn't mean you can't *design for fast* at the make it work stage, and DI is great for that because it promotes lazy initialization, but don't sweat the small stuff out the gate.
Dependency injection can be done statically, to resolve all virtual methods at compile time, by using generics rather than base classes. This works best in languages like Rust, where generic type parameters have explicit type bounds, but can also work in languages like C++, where the type bound is implicit.
I just watched all 7 of your videos. Thank you for those. I found myself generally agreeing with or already practicing everything you advocated. And you explained it well.
GoLang devs unite! One of the things that I love about Go is the requirement of DI for unit testable code. Of course, that just means that every contributor needs to have knowledge of DI 😂
Great video and excellent explanation. The animations are perfect for making it even easier to understand how everything connects, specially the interfaces, implementations and mocks. This is by far the best video I have seen about dependency injection.
I'm at war with dependency injection to change runtime behavior. Trying to understand what an app is doing "without running the app or even stepping through it via debugger" becomes very difficult. Dependency injection all the way through for testability: yes please. But runtime magic: It may be elegant to write and work with if you understand everything, but very complex and opaque if you don't touch the codebase on a regular basis. So e.g. I'd rather inject the factory instead of the final service so you can jump into the creator.
DI should decompose parts that work in isolation. Either you are debugging problems with chat code (in which case you can ignore file uploads), or with file upload (in which case you'd be looking at file upload code). Sure, it's possible to use DI too much, but for larger chunks of functionality it's indispensable.
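The "inject the factory instead of the final service" idea mentioned above could look like this (a sketch with invented names):

```typescript
// Inject a factory rather than the finished service, so object creation
// happens visibly at the constructor and is easy to find in a debugger.
interface Service {
  run(): string;
}

class RealService implements Service {
  run(): string {
    return "real work";
  }
}

class App {
  private service: Service;

  // A factory function is injected; App decides when to call it.
  constructor(makeService: () => Service) {
    this.service = makeService();
  }

  execute(): string {
    return this.service.run();
  }
}

const app = new App(() => new RealService());
```

Swapping `() => new RealService()` for a factory returning a mock keeps the testability benefit, while the creation site stays in plain sight instead of behind container magic.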
I was so excited to see that a new video was out! A colleague of mine suggested this channel and I'm HOOKED! This is exactly the topic I wanted to understand more profoundly.
Whenever I am talking to my guys to build "sinks" for testing, the fake functionality, they never understand me. Now I know why. This video is a work of a genius! Thank you.
Coming from Rust, that bit at 2:55 could be massively improved using a tagged/discriminated enum. At least from the caller's perspective. It definitely does not solve the inner complexity of the class, and DI is the right choice, but tagged enums can simplify that "a bunch of optional variables" problem quite frequently.
Could also just have used a union of different object types, since they were using TypeScript. But yes this doesn't solve all the problems that dependency injection does in this case.
This isn't exactly dependency injection, more a mix of dependency injection and dependency inversion. Dependency inversion is about passing dependencies to the pieces of code that use them, whether that's through function arguments, global variables, constructors, fields, etc. Dependency injection generally refers to frameworks that inject the dependencies for you. You might, for example, be able to create a class with specific constructor parameters and then get an instance of that class using a dependency container; the container will detect the parameters and create instances of those classes to pass into the constructor. These frameworks generally let you specify the lifetime of dependencies as well: in some cases you might only want one instance of a specific class, which you could, for example, indicate by passing some specific argument to the method that lets you register a dependency. Or you may want a different instance of a dependency wherever it's used. Decent video if you're new to this concept, but always make sure to do your own research as well
and this is the problem with this architectural circlejerk. people nitpicking and inventing new terms and overcomplicating stuff. like i always say: people that make these architectures are people trying to sell you books on that. just use whatever works and don't feel bad that your code doesn't do "big boy" dependency injection. I'm a senior dev and I see a LOT of "new architects" making incredibly complicated solutions to something that's just a glorified CRUD.
@@squishy-tomato uncle bob isn't saying inverting is traditional, rather that the inversion is in respect to the traditional structure. I agree, the terms "Dependency injection" and "Dependency inversion" are just too similar and could probably be merged into one, but sadly that's just not gonna happen; once terminology gets created people start using it, and once people are using it they won't stop.
@@83hjf a good developer knows of these patterns, a great developer knows when to use which. Knowing about dependency injection doesn't mean you have to use it everywhere. Knowing about DI frameworks doesn't mean you have to use them everywhere. As long as the code is as maintainable as it needs to be.
Since I see some people disagree with you, I'm assuming that either it is exactly as you say, or "dependency injection" is specifically the practice of passing an interface as a parameter; which still seems to me kinda unnecessary to give it such a name. Or we're both taking crazy pills 😅
The reason I care at all about the naming is because "dependency" usually refers to external code like a library that you import, especially when dealing with web service stuff like in the video. I find this usage of "dependency" in its pure abstract meaning of "line of code that depends on another line of code" confusing because it's sort of unexpected. I used to assume "dependency injection" referred to something along the lines of injecting a library at runtime as opposed to at build time, like in the middle of your app it downloads jquery and starts using it or whatever. The pattern described here could have a clearer name like "shared api" with an honorable mention to "duck typing".
I agree that "interfaces" is not the best alternative, especially since it's an actual language feature in Java, not just a pattern. But he did mention that it's a similar concept in the video.
Your video editing style when visualizing code snippets is the best I've seen in years. Make it everything so much more understandable, thank you very much!
You literally just moved the same issues into multiple files, which makes it harder to read and harder to debug for someone else. Breaking everything up into different classes or interfaces is not always the best solution.
My replies keep being deleted by YouTube in another thread where @k98killer asked me to elaborate on data-oriented design. Hopefully that won't happen here:
Firstly, I’ll do you one up and cover much more than you probably expected, hopefully without ranting too much (no promises 😉 ), because it’s not as easy to find material on this unless you’re already fortunate enough to work with very experienced developers (not of the car salesperson type) - and I think that’s a huge shame. I hope to convince you that this is very much worth your time to explore, and perhaps even that it has the potential to revolutionize most software and give you the power to one-up almost every company simply by programming this way and copying their current feature sets. But I’ll let you be the judge of that. Secondly, I’d add the disclaimer that I have 10 years of experience with SOLID but only about a year with DOD - and I still have much to learn, but I’m so much happier and more productive coding this way. Regardless, I encourage you to try it out yourself, verify etc.
Thirdly, most popular design principles are exclusively about trying to optimize developer time, at the expense of users (which eventually includes ourselves - just look at the sorry state of Visual Studio, compilers etc.). However, I want to be upfront that data-oriented design can sometimes take longer to develop - especially until you get good at it (unsurprisingly). It may require a lot of unlearning, simply because many of us have spent a long time learning ecosystems which are pretty distant from this way of thinking. For instance, DOD is fundamentally against adding a ton of libraries that add thousands of unnecessary features at huge complexity and performance costs. It's about *staying lean* and not adding more than you need, before it's actually needed. And making solutions that *combine the necessary components to fit the use case,* not dividing everything into small abstractions which are then arbitrarily reassembled to try to prematurely cover a near-infinite number of unrealistic future use cases. For a moment, let’s assume there’s a hypothetical ideal design for an application that does what you need. That solution has an inherent need for data structures and abstractions to get the job done - and so the idea is that it should be your goal not to inflate that ideal solution with more additional abstractions than what it takes for you to understand the final solution when you read it. Because *additional abstractions mean additional complexity,* lower readability, higher maintenance, (often orders of magnitude) slower performance and higher resistance to more complex features in the long run. *Don't generalize, specialize.* You’ll find this is controversial advice - but when you ask critics “why”, you’ll mostly get answers that aren’t grounded in anything quantifiable. At least that’s been my experience.
Now, this video about dependency injection attempts to remedy some of the common adaptability problems by making it easy to see all important functionality in one place (9:13), but even their simple demo makes it hard to keep an overview of the rest of the relevant code for the use cases, and eventually you just have to hope that it won’t become too complex in the long run (spoiler alert: extremely hard), and that you won’t need to shoehorn functionality into the “pattern” to add simple features that weren’t originally thought of. Instead you can literally see what it does, with the relevant code coupled to what it needs for the use cases covered. When I first heard this, it was hard for me to imagine what anything other than OOP clutter looks like, but think of the kind of simple functions/programs you probably wrote when you were first starting out - simple code coupled with a ‘main’ function. It’s very close to that, but with more files and lines of code - and useful abstractions that naturally evolve with experience. The process of adding functionality with DOD is usually to write a monolith function that does whatever job you need done, and then the necessary data structures will naturally appear out of necessity - coupling that with the advice I gave in the previous comment about writing to the usage and not overextending yourself too early. Then, *when you get it working, you compress it* down to the simplest form you can with your current level of experience - and then you’re done. Another common remedy to the unnecessary complexity is to try to cover everything with tests - because eventually the only thing you'll really be able to understand is simple snippets of functionality. And when they're scattered and used like tiny runtime Lego pieces that can be used literally anywhere, you have barely any chance to actually understand what's going on.
You basically have to guess, assume, hope, trust and take a long time to try to build an understanding of it. Or test to an extreme amount. Many developers have probably heard this one: A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 9999999999 beers. Orders a lizard. Orders -1 beers. Orders uiasdaisfduo. First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone. My point is, relying exclusively on tests to trust that your code is still 100% working after a change is not ideal. Because plans usually don’t pan out to reality, and humans are fallible. And when you try to make everything generalized, you’re introducing exponentially more ways for your application to go wrong - and eventually you’ll have so much trouble understanding the application that you can’t really diagnose it. I think there’ll always be a complexity limit to any application humans have written, but when you have fewer unnecessary abstractions, it’s my claim - and DOD’s - that the ceiling is higher. I assume most developers have tried pulling their hair out on something where you just can't figure out what's going on, that isn't really strongly related to the fundamental problem you're trying to solve. Not like “what is this algorithm?” or “how do I best integrate this new feature with the existing functionality?”, but more like “why does the .save() function of my ORM sometimes crash?” or “there’s a random memory leak somewhere, perhaps I misused one of my many libraries or something?”. If you've ever tried what's called an "object calisthenics challenge", like me, which is an exercise to teach you how to do SOLID, you might also have sensed that it just turns simple things into a complete mess for reasons that are all about unfalsifiable assumptions of what's better (often comparing it to unstructured messes of OOP applications which are even worse).
At the time I just thought SOLID was the best of many bad solutions, and was a necessary cost of high-level programming. But I no longer think that’s the case at all. When you eventually get into the more advanced aspects of DOD, it's also about being aware of the underlying hardware, libraries, disassembly etc. and writing the code in a way that is designed to run on those instead of trying to force your mental model of the world into the program. Things like knowing that RAM is physically much further away from the CPU than its internal cache, which becomes an inherent performance bottleneck if your code constantly jumps to obfuscated code that isn’t cached. Contrary to common misconception, it’s *not* about hand-rolling assembly code - the goal isn’t to mimic the demoscene (but they do rock), and there’s no need to abandon high-level languages for most developers - although there’s also plenty of room to make more aesthetic programming languages and libraries that make good programming as frictionless as possible (Jai has promise imo). The unquestionable reality is that there’s disk storage, memory, internal caches, network connections etc. Everything above that is abstractions, some more necessary than others. Like “files”, which is definitely a very useful one. “Resource” isn’t, because it’s very ambiguous and general. In fact, “resource handling” is a good example of a common abstraction that has no relation to anything fundamental other than a mental model of it. It’s about being myopic and making everything into small “resources” that can be individually constructed/initialized and then eventually deconstructed/cleaned up. But it’s actually almost always better (in every way) to handle data life cycles grouped together with the other relevant data that needs them.
It’s about *avoiding the tendency to want to prematurely get your code ready to reuse virtually everything.* Once you get familiar with this concept, you'll find that your program expands into far less files, lines of code and it's easy to follow what's going on in a debugger, add new features or even have some hardcore devs go in to optimize things without needing to rewriting the entire application from scratch. Another really important aspect is: Don't assume, measure/try. DOD is fundamentally about not assuming something is better because it has 100 upvotes on StackOverflow or you heard an engineer at Google say it (or me). Many solutions aren’t great when applied generally, and you need to design it in a way that fits your application's needs - not everything adjacent to it. If you're familiar with thinking of time complexity (also known as "big O") and picking a dynamically sized array one place and a hash table another place, then this is just one step further than that.
In summary, I think the benefits are: • Easier to understand more complex applications, but maybe not if you’re afraid of occasional basic arithmetic and geometry (or more advanced math if you’re doing 3D graphics). • Easier to get right (when your program does anything more than a prototype) - at least if you’re a programmer who tries to understand code, rather than copy, paste and pray. And I don’t expect that’s a very high bar, except perhaps for absolute beginners with no real interest in the field and/or who only see coding as an easy way to get bread on the table. • Much better base performance, and easy to optimize further for the few expert developers that your company might (eventually) employ. If you don’t think performance is relevant, try noting every time you’re waiting more than you’d truly like on software after your input, just for one day. I think you’ll find it happens an abundant amount of times (for me it was about two dozens before I stopped). And if you’re old enough, think of when what most use for directions won over the big one from 1996 or the less capable Fruit Phone 1 took down the dark berries and Not-Doors CE. I think there’s many good reasons to believe that users (and wallets for using big servers) care very much - I certainly care as a user. • More adaptable to (realistic) change. Your bike renting website won’t be easily adaptable to handle running a nuclear reactor, but hopefully this hyperbole illustrates that much premature generalization isn’t rooted in specific industry experience. • It took me a few months of practicing it to be about as productive as I was in the OOP ways. YMMV of course. But unlike SOLID, it was fun from day one and still is. • Coding this way gives you knowledge that doesn’t stop being relevant anytime soon - unless perhaps something drastic happens like we replace having a CPU, RAM etc. with something fundamentally different. 
And in that case, all developers will probably need to relearn many things anyway. Some of the disadvantages may be: • Powerful libraries done with DOD can be hard to find or scarce. I think this is because so much software from the last 20 years completely disregards the cost of complexity and ignores the foundation you’re building on - I think we’ve often lived in a “software layer”-land where it’s all some abstract ideas stuffed into a compiler that then magically makes the thing work on our devices. But DOD-friendly libraries do exist. Just look at immediate-mode GUI as an example - I had no idea it existed until very recently, but so far I’ve found it’s generally a much easier way to make complex native user interfaces than the most common OOP ways. • Sometimes you’ll have been forced to make a premature design decision which was done on incorrect assumptions about what your application will need, and that can require a major refactor. E.g. imagine that you thought Google Maps would never need 3D rendering so you built the site around always being 2D. Many developers are scared of such refactors, but in my anecdotal experience it’s much less cumbersome and infrequent than the reality, where simply reordering Black Mirror episodes on Netflix literally is a project that takes a team of developers months to accomplish (see “MicroServices (and a story about Netflix)”). I actually think it’s healthy to do refactors (and code deletes) every now and again, so it doesn’t get too bloated, and you can get a lot better at it over time - especially if better generally available tools and languages are eventually built. This concept can be hard to digest if you’re a business that lives on promising quickly built custom features with little motivation for the salespeople to say ‘no’ when the features are pretty far out of your expertise. I think it’s a balance of quality and quantity that’s very subjective.
However, not everyone will agree that this is what DOD is - certainly not those who dismiss it without ever trying it out. Mike Acton’s talk from CppCon 2014 is the first place I heard this - he’s a very talented game programmer and goes deep into the weeds of how he programs, but he isn’t always friendly. The talk is very informative, even if you have no interest in game development. One of the better online resources I’ve encountered is Casey Muratori - also a game engine developer, whom you might know for his criticisms of “clean code”/SOLID - perhaps from his clash with “Uncle Bob”. Casey has some good showcases of what a DOD codebase can look like in the episodes of “Handmade Hero”, and he now has more generally applicable educational material that is more relevant to enterprise code (which most of us work with), but I’m still unsatisfied with the scarcity of material out there, so I hope those of you reading this aren’t majorly discouraged by that, even though you have fair reasons to be. My advice is to just start small and take it in steps. Leave things better than you got them etc. Lastly, I no longer have a dream of a one-size-fits-all solution where I could reuse all my code to make huge programs that were orders of magnitude more feature-rich and powerful than the individual parts. But maybe one day we can get quality software as the norm again, where most developers actually find it fun like when I first started out with the basics. Good luck.
I wanted to add that while this video uses an OO approach to explain the idea of dependency injection, the idea goes beyond a single paradigm. In FP, for instance, when you make a function f that takes a function g as input, you are actually doing dependency injection: f is dependent on some function with signature g that the caller is supplying! Actually, if you are making an interface with a single method and the implementations you are using are stateless, it may make more sense to do functional dependency injection instead, depending on your language
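A minimal sketch of that functional style (names are illustrative, not from the video):

```typescript
// Functional dependency injection: the "interface" is just a function type,
// and the caller supplies the implementation.
type Logger = (msg: string) => void;

// `processOrder` depends on *some* Logger; it never constructs one itself.
function processOrder(orderId: number, log: Logger): string {
  log(`processing order ${orderId}`);
  return `order ${orderId} done`;
}

// In production you might inject console.log; in a test, a capturing fake:
const captured: string[] = [];
const result = processOrder(42, (msg) => {
  captured.push(msg);
});
```

The capturing fake plays the same role as a mock class implementing an interface, with far less ceremony.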
Thank you so much for making this video. I've heard this term many times but I didn't have a great way to visualize the concept, and you did a great job with this video.
This video is just amazing. This is one of the best patterns out there, if not the best. It is simple and clean. It provides a system of plug and play. You develop the business logic once based on interfaces, and you might not even have any code yet that does any of the steps each single service is responsible for, but you are already able to draft the main business logic of your application. Then developing each adapter is just a breeze, and no other part of the code is affected by its development. You can have multiple developers working on the same codebase without interfering with each other, and that is just fantastic. It also makes testing so much better: you develop the tests based on the interfaces again, and you just have to run the same tests against each adapter to make sure all the integrations work.
Also, I would add that this feels so much better combined with a statically typed language, where the compiler helps you so much with the implementation of the interfaces. I love applying this pattern in Rust
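The "run the same tests against each adapter" idea mentioned above could be sketched as a contract check written once against the interface (interface and class names are invented for illustration):

```typescript
// A contract test written once against the interface, then run against
// every adapter that implements it.
interface KeyValueStore {
  put(key: string, value: string): void;
  get(key: string): string | undefined;
}

class MemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  put(key: string, value: string): void {
    this.data.set(key, value);
  }
  get(key: string): string | undefined {
    return this.data.get(key);
  }
}

class ObjectStore implements KeyValueStore {
  private data: Record<string, string> = {};
  put(key: string, value: string): void {
    this.data[key] = value;
  }
  get(key: string): string | undefined {
    return this.data[key];
  }
}

// One contract check, applied to any implementation of the interface.
function passesContract(store: KeyValueStore): boolean {
  store.put("k", "v");
  return store.get("k") === "v" && store.get("missing") === undefined;
}
```

Each new adapter only has to pass `passesContract`; the business-logic tests never change.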
In the first 15 seconds, you provided the best explanation of dependency injection I've ever seen. Well done
He's using this term wrong.
He's manually creating new instances of objects and passing other objects as arguments.
Dependency Injection handles this instantiation for you - you can just use them without having to worry about instantiating whatever they need in their parameters. It automatically wires up everything you need.
With dependency injection you don't do stuff like the following pseudocode to get a Controller:
DB db = new DB(url)
Service s = new Service(db)
Controller c = new Controller(s)
You create the instance like this:
@Autowired
Controller c
Then the DI framework will try to create the instance automatically. It sees that your Controller needs a Service, so it will try to create that one, but sees that this Service needs a DB, so it will create that one as well. The DB needs a URL string, so in that class you specify which property or environment variable it gets that value from.
That's Dependency Inversion - you are inverting the flow: instead of you having to pass in the arguments to instantiate objects manually the framework takes care of it automatically.
He's using JavaScript, but he's not using any Dependency Injection framework.
There's no @injectable annotation anywhere. He doesn't specify how to wire things. He's not using Dependency Injection.
He's just talking about using Composition, but he hasn't talked about Dependency Injection at all.
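A toy sketch of the injector role described in the comment above — real frameworks like Spring discover dependencies via reflection and annotations, so the explicit `register` calls here are purely illustrative:

```typescript
// Minimal hand-rolled injector, illustrating the Controller -> Service -> DB
// resolution described above. Class names follow the pseudocode; the
// registration API is invented for this sketch.
class DB { constructor(public url: string) {} }
class Service { constructor(public db: DB) {} }
class Controller { constructor(public service: Service) {} }

class Container {
  private factories = new Map<string, (c: Container) => unknown>();
  register<T>(name: string, factory: (c: Container) => T): void {
    this.factories.set(name, factory);
  }
  resolve<T>(name: string): T {
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`no registration for ${name}`);
    return factory(this) as T;
  }
}

const container = new Container();
// In a real framework the URL would come from a property or env variable.
container.register("db", () => new DB("postgres://localhost/app"));
container.register("service", (c) => new Service(c.resolve("db")));
container.register("controller", (c) => new Controller(c.resolve("service")));

// One call builds the whole graph, analogous to what @Autowired triggers.
const controller = container.resolve<Controller>("controller");
```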
@@Domo3000 You are describing injection *frameworks* not the concept of dependency injection itself. This video is correct, dependency injection is just a name to describe the concept of passing in a dependency from outside the thing that uses it, that's all it is. You don't need a framework, you can do that manually. Frameworks just automate this pattern on your behalf. Composition can be achieved without using dependency injection (i.e. without passing in dependencies from outside), DI helps to enable composition that can be controlled/changed at higher levels of your codebase.
@@MajesticNubbin the frameworks take care of the Injector part of the principle -> the part that's responsible for creating instances and their dependencies. They just make it easier, but even if you don't use a framework Dependency Injection is still something different than what's shown in the video.
If you look at the video there's no Injector to be found. Like at 9:39 he's doing all the initialization of the dependencies himself.
In Dependency Injection you shouldn't have to create the image_scaler to pass it to the creation of the image_generator. Your Injector should do that for you.
Here's some definitions:
"The interface only decouples the usage of the lower level class but not its instantiation. At some place in your code, you need to instantiate the implementation of the interface. That prevents you from replacing the implementation of the interface with a different one. The goal of the dependency injection technique is to remove this dependency by separating the usage from the creation of the object. This reduces the amount of required boilerplate code and improves flexibility."
"With the dependency injection pattern, objects lose the responsibility of assembling the dependencies. The Dependency Injector absorbs that responsibility."
"In software engineering, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service)."
"We will need to recreate the car object with a new Yokohama dependency. But when using dependency injection (DI), we can change the Wheels at runtime (because dependencies can be injected at runtime rather than at compile time). You can think of DI as the middleman in our code who does all the work of creating the preferred wheels object and providing it to the Car class. It makes our Car class independent from creating the objects of Wheels, Battery, etc."
Literally every definition describes something different from what's shown in this video.
There's no Injector taking care of creating the dependencies and instantiation. He's not making use of Dependency Injection. He's just following the Dependency Inversion Principle.
@@MajesticNubbin Yup. And that framework even uses a "dark" pattern: Service locator. But at least it's hidden from the user (programmer), so it's acceptable ^^
But I have seen people routinely pulling objects from the container...Then they wonder why you want to kill them when you have to test^^
Still doable, but now you have to do more of an integration test ^^
To prevent surprises, in Spring, I create configuration classes per test when needed, so the test blows up if an unexpected dependency is unmet, and I can investigate why it is required.
I have been a QAM, so I saw quite some horrors and had sitting sessions to fix programmers' minds. 😂
QAM is a funny role because devs usually think "just a tester" until you read their code and explain how it can be improved. (QAM is not QC, but in smaller companies, QAM does QC.)
@@Domo3000 The video is correct. You don't need DI framework to do Dependency injection. Just as you don't need React to create reactive applications.
I was genuinely scared that you’d never put out another video. I’m so excited that you did. Your work is so valuable to the community. Thank you.
Yes, we deserve at least 1 video a month.
I see myself coming back to his channel every 2 weeks to check for updates. 😀
Me too. His work is just incredible. Is there any other channel or platform where I can learn such high quality stuff?
Agreed. If you're having trouble committing time due to the wimpy monetization, make like a shirt for 10 bucks. Tell everyone to buy it at the end of a video with a link in a comment or description and charge 20.
????
Summary of the wisdom I have collected in my 20 year journey into the mind of the computer:
1) Favor composition over inheritance.
2) Couple to interfaces rather than implementations.
3) Premature abstraction is often a bigger problem than premature optimization.
4) DRY is only a rough guideline. If you repeat something more than twice, then refactor it out.
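Items 1 and 2 above can be shown in a few lines of TypeScript (the `Logger`/`App` names here are made up for illustration): behavior is composed in through an interface rather than inherited, and the consumer is coupled only to that interface.

```typescript
// A minimal sketch of "favor composition" and "couple to interfaces".
interface Logger {
  log(msg: string): void;
}

// One concrete implementation; others could be swapped in freely.
class ListLogger implements Logger {
  public lines: string[] = [];
  log(msg: string): void {
    this.lines.push(msg);
  }
}

// App is composed with a Logger and coupled only to the interface,
// not to ListLogger or any other concrete class.
class App {
  constructor(private logger: Logger) {}
  run(): void {
    this.logger.log("started");
  }
}

const logger = new ListLogger();
new App(logger).run();
```

Swapping `ListLogger` for any other `Logger` implementation requires no change to `App`, which is the payoff of coupling to the interface.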
Premature Abstraction in a way *is* premature optimization, you're abstracting without needing a reason to in the event something *might* change, in the same way you're making obtuse instead of readable code in the event it *might* need to be faster. I think Code Aesthetic laid out good rules here:
1. If you're introducing more than 2-3 paths in your code, that's a smell it might need an abstraction
2. If you have a dependency that is difficult to initialize for testing, that's a smell
3. If you have a resource that is killing your startup time but is only occasionally used, that's a sign it should be lazy initialized, and a DI/Factory is a good way to help with that.
1+2) Data-oriented designs are better than both, but there's a scarcity of good libraries so it can be limiting in the beginning
3) Hard agree (depending on your exact definition), but also:
3a) Implement the usage code asap (before the functional code if possible), that's the fastest way to realize what your code needs to do.
3b) Don't try to cover edge cases before you've actually made a working implementation. Once you've made working code, a lean solution will be much easier to see. Even the best programmers forget this sometimes.
4) I typically agree but I don't think it's an important guideline.
5) Don't be afraid to question norms. Computer science is still a very young field and it's *full* of dogma. And most of the commonly accepted guiding principles read like advice from humanities' papers, rather than quantifiable and falsifiable ideas. If you follow this rule, you're virtually guaranteed job security, judging by the current degrading state of software.
@@Muskar2 could you expound upon data-oriented design a bit? I have an intuitive idea about what it is just by reading the words, but I'm unfamiliar with it as a specific, named design principle/strategy.
@@jgrote I couldn't agree more. Do you have a blog/Twitter or somewhere that you post regularly? I would subscribe.
yeah that doesn’t work in practice, there are no rules
Great video on an interesting topic!
One note - when you put text at the bottom like you did at ~1:22, consider that if the text disappears too quickly or appears while you're speaking, the viewer will probably have to pause the video to read it. But if they pause it, the YT media controls will not disappear, and they'll cover the part of the screen where the text is.
true
Yeah, I noticed that when I had to go back to read it
Great point, thanks
Interestingly, it's not a problem on mobile.
The return of the King
Yes a king has returned
I love the fancy beat at the beginning
He's overtaking the throne
seriously, I can’t find anyone else doing what this guy does
The Two Towers
As soon as a minute passes in the video, I begin to understand why you don't upload as frequently as other channels we are used to. Creating such perfect videos requires time. Thank you
It was a well made video, but it has several mistakes. First, literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. It's not inherently better or worse, and completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach. Grouping code by service provider makes it harder to find out what a specific functionality does, as all the upload functions are split into different classes. A better teacher would tell about the different aspects, not just preach that "this is the best" without telling anything about its drawbacks.
Finally, absolutely never use NODE_ENV for checking for production environment. That's such a rookie mistake. If you're doing your testing and staging properly, you should be running NODE_ENV=production in some of your development environments. Node itself and your dependencies use it for switching some runtime optimizations on and off and you need to test the production build in dev. Always use another custom flag for determining the deployment environment.
Any design pattern that promotes composition over inheritance is worth making a video about. Top notch work as always , the animations are amazing.
Dependency Injection itself does not favor Composition over Inheritance, it just externalizes the composition. However, you need Composition to do DI.
There's actually some Inheritance in here with the use of the interfaces, but it is done the right way.
(The wrong way would be if the different storage methods had been added by inheriting directly from the main class. Yeah, I've seen some such code…)
@@Cau_No what was the point of creating the base storage class since none of his inherited storage classes seemed to call any base storage class methods?
@@MaxwellORoark The base 'class' is an interface, which does not have any implementations but only defines methods for the subclasses to implement, unlike an abstract class.
It is used to make the subclasses pluggable and therefore testable.
@@Cau_No
It's subtyping, not inheritance. Though subtyping is sometimes considered a type of inheritance ("interface inheritance").
"formal duck typing"@@Hwyadylaw
This is hands down the best explanation of dependency injection I've ever seen. That "puzzle" where you kept connecting those services made it extremely easy to reason about what's happening!
This is soooo good. I’m an SDE at Amazon, and basically you’ve summed up 99 percent of what I’ve learned about writing clean code while here. Well done, this is awesome.
Literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. It's not inherently better or worse, and completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach.
@@Asijantuntia Yeah, this all seems exciting in the beginning, but it quickly becomes tiring and feels unnecessary.
@@Asijantuntia You're right.
In TypeScript, the more common way, given a finite number of different configuration types, is to define the config types individually and union them all up to a discriminated union and use type guards.
But using type guards is harder to extend when you don't have a finite number of possibilities.
Or, to pass a unit of computation, one can simply pass a callback function instead of an object. This is more commonly seen in TypeScript libraries.
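The callback variant mentioned above can be sketched like this (the `Upload`/`publishReport` names are illustrative, not from the video): the dependency is just a function, and a test supplies a stub function instead of a real uploader.

```typescript
// Function-style dependency injection: inject a callback, not an object.
type Upload = (name: string, data: string) => string;

// The consumer only knows it can call upload; it never constructs one.
function publishReport(upload: Upload): string {
  return upload("report.txt", "contents");
}

// A test (or an alternate backend) just passes a different function.
const result = publishReport((name) => `uploaded:${name}`);
```

This achieves the same decoupling as an interface-typed parameter, with less ceremony when the dependency has a single operation.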
For a long time, people would ask me "Are you using dependency injection?" and try to sound like it's the most fancy thing ever, and that it's something that should always be used.
Whether I was aware of the term or not, I was already employing this practice to organize. That said, using it when it makes sense is best. If it drives up complexity just for the purpose of using it to "plan for later", then it fights against simplicity.
Use when needed, else it can create additional arbitrary interfaces to drive up the project's complexity. Years ago, I remember making completely useless interfaces to try and "be versatile" but it just made things much worse 😅
Dependency Injection is more fancy than what's presented here.
He just talks about Composition and falsely calls it Dependency Injection.
He manually creates new instances of classes and passes them as arguments.
With Dependency Injection you just specify what arguments are required and how to construct them.
So something like:
DB db = new DB(url)
Service s = new Service(db)
Controller c = new Controller(s)
Isn't dependency injection. That's just composition.
In Dependency Injection you would just write:
@AutoWire
Controller c;
To get an instance of the controller. Dependency Injection is about inverting the flow. Instead of you manually creating the required instances and passing them as arguments, the DI framework will look at which arguments are required and will recursively spin them all up as needed.
So in this case it would try to create a new Controller, then it notices that the Controller needs a Service, so it will try to spin that up as well and notices that the Service needs a DB, which in turn needs a URL.
So in order to make that work you would have to specify somewhere which property or environment variable it reads that String from. Then the Dependency Injection framework can just create a new instance of the Controller without you having to manually instantiate all its dependencies and their dependencies first.
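The recursive resolution being described can be sketched as a toy injector in TypeScript. This is only an illustration of the idea, not a real framework: real containers use annotations/reflection, while this one uses explicit factory registrations, and all the names (`Container`, `DB`, `Service`, `Controller`) are made up.

```typescript
// A toy injector: factories are registered per token, and resolve()
// recursively builds the dependency graph on demand.
class Container {
  private factories = new Map<string, (c: Container) => unknown>();

  register<T>(token: string, factory: (c: Container) => T): void {
    this.factories.set(token, factory);
  }

  resolve<T>(token: string): T {
    const factory = this.factories.get(token);
    if (!factory) throw new Error(`No factory registered for "${token}"`);
    return factory(this) as T;
  }
}

// The same graph as the pseudocode above: Controller -> Service -> DB -> url.
class DB { constructor(public url: string) {} }
class Service { constructor(public db: DB) {} }
class Controller { constructor(public service: Service) {} }

const c = new Container();
// The URL would normally come from a property or environment variable.
c.register("url", () => "db://example");
c.register("db", (con) => new DB(con.resolve<string>("url")));
c.register("service", (con) => new Service(con.resolve<DB>("db")));
c.register("controller", (con) => new Controller(con.resolve<Service>("service")));

// The caller asks only for the Controller; the container recursively
// spins up the Service, the DB, and the URL on its behalf.
const controller = c.resolve<Controller>("controller");
```

The caller never writes `new Service(...)` or `new DB(...)` itself, which is exactly the inversion being argued about in this thread.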
This video completely missed the point.
Default arguments help here.
Yeah, one instance where it doesn't make sense, is when injecting simple objects that do not have any kind of side effect.
Why not simply instantiate them inside? Unless they're using unknown data.
Absolutely. Usually DI simplifies the code, but creating a single implementation of an interface might not be worth it wrt creating the full dependency tree. I often resort to creating double constructors in Java: one that has everything sent into it, and one (the default one) that constructs the dependencies that don't really change (like System.clock), so that I can inject a fixed clock in tests while having a simpler DI setup for production.
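The double-constructor idea above can be expressed in TypeScript with a default parameter (the `Greeter` class here is a made-up example): production code gets a real clock for free, while a test injects a frozen one.

```typescript
// Clock injection via a default parameter: the production default is a
// real clock, and tests pass a fixed one to make results deterministic.
class Greeter {
  constructor(private clock: () => Date = () => new Date()) {}

  greeting(): string {
    return this.clock().getHours() < 12 ? "Good morning" : "Good afternoon";
  }
}

// Production call site stays simple: new Greeter().
// A test pins the time (local time, 09:00 on Jan 1, 2024).
const tested = new Greeter(() => new Date(2024, 0, 1, 9, 0, 0));
```

The test never depends on the wall clock, yet the production code path carries no extra DI setup.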
This is a great video with tons of great explanations and visual aids. However, throughout the whole video, I kept asking myself how you would handle errors in this situation. The more I thought about it, the more complex it got so I think it would be great if you could make a video taking about how you could handle errors in a situation like this and potentially in other situations as well.
+1 for error handling videos that would be so cool
this can be solved by having a global function that sets a callback for errors. this is how glm and juce handle errors.
While I understand the question, I think the better question is not how you handle errors but where. Error context should be scoped per method. So you might have guards in your constructor, and then logical error handling in a specific method to validate that functionality worked correctly. As mentioned, if you stream the data and there was an error chunking the data, then that error should be handled there. What you show the end user is a matter of where it will be handled. Many people in web applications talk about a layered approach: Presentation, Backend, Infrastructure, and so on. The only way the user ever sees an error is in the Presentation layer. This does not mean the frontend: Presentation can also be any endpoint being hit, so in Node.js frameworks this is normally called the route (Django -> View, .NET -> Controller, and so on). It is how you are consuming requests and then presenting the result, even if that result is an error.
Using dependency injection doesn't really change anything for error handling. Whether you are working directly with a class or with an interface is basically the same: it returns or throws an error and you check or catch it.
don't use errors
This explanation was EVERYTHING. The clearest explanation at start, examples that make sense, a quick and engaging way to show the code, and clear animation. Now I need to watch everything you've ever made. Thank you!
Great use of the visual diagrams, this is one of the best examples that I have seen for visually showing code concepts alongside the actual code. Keep up the good work!
what a beautiful concept, i worked with this at my last job a year after college and i never understood its true power, great teachings!
This is an awesome example and explanation of the benefits DI provides. The not-so-good part of the DI is that people really tend to use DI for everything and start abstracting things that they would never change.
That's me. When given so many options, sometimes it's hard to know what you need. But over time, as the requirements change, it's easier to write up DI for each scenario.
Man, those drawings representing what's happening in the code help me so much to understand this kind of thing. I wish teachers were more like you.
Knowing is one thing. Teaching in such an elegant way is another thing. Another thing from another dimension! Good Job! Love your work.
This finally makes sense to me. During my courses at school, the teacher would give vague answers when talking about Component Based Software Engineering, and how you would write components / dependencies up on a 'white model', and how they each require and provide for each other. He did not want to elaborate on dependency injection, and faulted us for it. Watching this made me realize I did not have a basic understanding of the technology, and how powerful it actually is - You explained in 1 video what a whole course could not. Thank you!
Take this video with a huge grain of salt. First, literally the only thing he did was switch from grouping code by functionality to grouping code by service provider. It's not inherently better or worse, and completely depends on the use case and personal preference. Type guards would have completely solved the issue of multiple optional parameters, and he could have organized the code better in the first place. Needlessly messy code and poor type definitions cannot be used as an argument for why dependency injection is a superior approach. Grouping code by service provider makes it harder to find out what a specific functionality does, as all the upload functions are split into different classes. A better teacher would tell about the different aspects, not just preach that "this is the best" without telling anything about its drawbacks.
Finally, absolutely never use NODE_ENV for checking for production environment. That's such a rookie mistake. If you're doing your testing and staging properly, you should be running NODE_ENV=production in some of your development environments. Node itself and your dependencies use it for switching some runtime optimizations on and off and you need to test the production build in dev. Always use another custom flag for determining the deployment environment.
@@Asijantuntia I do agree, and completely disagree. This is a complete introduction to the dependency injection design pattern, not to the whole dependency injection paradigm. For that he would need to cover run time vs build time dependency injection, dependency frameworks, as well as fully discuss environment variables and their usage in delivering software products - which is complete overkill in an introduction to a design pattern. I agree that he could probably cover some shortcomings of the design pattern, like how your codebase becomes cluttered with interfaces, or that your code becomes increasingly indirect even if easier to read.
The second part of your comment seems more like a rant than anything. Why shouldn't you use NODE_ENV to check your environment? Node sets the environmental variable at runtime and defaults it to "development", and is a fairly common practice. What would your alternative be?
I'm genuinely impress by the quality of this video, the visual are nice the delivery is spot on and the explanation and example are super clear. Well done!
Im so glad you take the time to fully think out an idea and use great and real examples. Can you do one on testing, because that is a topic that to me felt easy to do and very hard to master.
I cannot express how amazing these videos are dude. The ability to show a concept theoretically with such a short amount of time and short amount of code explanation is astounding
Great video! I worked with Spring Boot for about a year and never really understood anything going on, and this video answered all my questions!
I cant get enough of your content. Some of the best coding content out there. Beautifully animated and very clearly explained. Thank you.
Interfaces and DI becomes a habit when you do C# for quite a long time. Awesome video, best explanation I have ever seen, thank you
While dependency injection makes code clean, I found the obsession with it causes incredibly abstract codebases that become hard to navigate.
If you're not familiar with the codebase, you are lost as to what actually happens during a request. Almost every function does a little and then calls into a dependency that does... Something. You then have to figure out what dependency. In this way, you move the code from having concise local static semantics, to having very global dynamic semantics.
Nowadays, when I absolutely must use dependency injection, each dependency interface and all implementers will live next to each other. And the rules for choosing that implementer will live in the same folder/file. This way, it's somewhat more localised and the cognitive overhead is reduced
This is especially true when the dependencies are just cast out into the ether and pulled down “magically” to fit an interface the dependent code needs (e.g., Spring, Laravel). “Oh, this thing needs one of those things. But which one of those things is it using? Where is that configured?” 🫠
I think where dependency injection really shines is resource access and especially I/O.
Because these things can be very hard to debug and test, but with dependency injection you can easily mock these dependencies.
Especially for development to make some parts static and easier to get running, but especially for testing.
In that way it's similar to classes, they really shine for RAII, but if you use them for arbitrary abstract concepts (especially modeling interactions between real world objects), code gets messy.
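The I/O point above is where mocking becomes concrete. A sketch, with illustrative names (`FileStore`, `Uploader`, `MemoryStore` are not from the video): the code depends on a narrow storage interface, so a test can swap real I/O for an in-memory fake.

```typescript
// A narrow interface over storage I/O, injected into the consumer.
interface FileStore {
  save(name: string, data: string): void;
}

class Uploader {
  constructor(private store: FileStore) {}
  upload(name: string, data: string): void {
    this.store.save(name, data);
  }
}

// In-memory mock: no network or disk, and fully inspectable afterwards.
class MemoryStore implements FileStore {
  public files = new Map<string, string>();
  save(name: string, data: string): void {
    this.files.set(name, data);
  }
}

const mock = new MemoryStore();
new Uploader(mock).upload("photo.png", "bytes");
```

The test then just inspects `mock.files` instead of hitting a real backend, which is the debuggability win described above.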
I agree. Although maybe not as severe as a video about why code comments are awesome or something, I would still say this upload was years out of date the moment it came out.
DI often suffers from a lot of OOP issues, to the point where we need crutches like DI frameworks, DI patterns, and project standards just to ease some of the burden. Hell, hardcore functional programmers will often say the only "injection" you need is using parameters in functions.
I think this is more of a design problem. In fact, most libraries or frameworks that heavily use DI with strong typing are often way simpler to understand by navigating through the return types and interfaces than by reading the docs. So...
Stronger typing and a good IDE are needed then. ... But you're right, when DI is done everywhere it slows you down when reading the code.
This is truly genius and I like the abstraction level: not going too much into unimportant trivial details but rather getting to the key points of this concept.
I wouldn't call it "the best pattern". Not after all the hours I've spent trying to debug complex application bugs related to DI. The problem with it is - if your dependency is an incorrect one (I mean, not a bug inside the dependency object, but an incorrect dependency value itself), it may become very hard to debug. Especially when DI is combined with patterns like factory, auto-factory, service provider, and other automation. Because first you need to identify the fact that the dependency is wrong, then you need to find where it came from, then you need to find where it got into the factory/auto-factory/service provider. And it could be that you'd then find yet another layer of DI there.
Over a quarter of a million subscribers with SEVEN videos. Content is incredible. Thank you so much for saying what you say so well.
I've been noticing that a lot of programming patterns simply boil down to polymorphism. Once I thought that this "Strategy pattern" is something difficult and mysterious, but it turned out to simply be polymorphism. Now this menacingly sounding term, "Dependency Injection", also turns out to be polymorphism. I am at loss of words. Thanks for a great video!
The strategy pattern is kind of a special case of this, the strategy can be a dependency injected into a method or class
all "design patterns" boil down to function application (sometimes partial). but java programmers invented names for them to torture other programmers with "do you know how to write a decorator factory that computes a price for different types of coffees with sugar?".
People have been making up fancy terms for OOP for decades. OOP = message passing and late binding = polymorphism.
I'd argue that a lot of programming concepts are given unnecessarily scary and mysterious names for some reason. It's very obfuscated.
i like that there are so many real-world examples in the video, very helpful to understand.
You’re quickly becoming one of my favorite channels on best coding practices. Lot of stuff I’ve preached for years all packed into well made terse videos
This is a beautiful diagram to explain the whole process and really helps to understand something otherwise very complex.
Passing parameters to a function has never been so complex as with these dependency injection frameworks. This video shows how it actually should be done: just super simple code!
Thank you for clarifying that "dependency injection" is such a simple concept, I had always assumed it was something more complicated always involving frameworks or reflection.
Does JS even have reflection functionality, the way Java does?
@@lightyear3429 not yet, but there's a polyfill for it and it's supposedly coming to the language proper soon-ish (it's a stage 3 TC39 proposal)
That's a beautiful extension of ideas in the second half of the video.
Essentially, mocks and dependency injection are a way of viewing code in components and using the component you need for the use case you are addressing.
It's all just function chunks.
I just made a database system based on generics and a ton of interfaces. To me, as a noob, it was a challenge to set up, but once it was working, damn, easy to expand and customize. This workflow is great once it's up and running :) The abstract nature of interfaces can be hard to get your head around at first, but worth it.
This is amazing, I love how low level you go then bring it back to the surface instantly.
The power of polymorphism. Thanks for the video, I really liked the visuals that went with the explanation.
The visuals, the explanation, the code... all of it was just so straightforward and understandable that I now finally get just how misleading "dependency injection" as a term really is. The use case for testing also really helped me understand why I should be doing this even for my pet projects at home 😅
I always found dependency injection intuitive, but that comes from having had to solve the same problems without dependency injection. This can be a difficult topic to explain to a new developer, or even one who just hasn't worked in a modern software stack.
I'll be passing this video along to anyone who asks me to explain DI from now on. :)
I struggled to understand interfaces and DI in my first years, until I had this job interview where they requested a console app that uses multiple search engines to search for a string and compare the results. I ended up with multiple implementations of the same interface, using the strategy pattern without knowing it, and it finally clicked for me. DI is a solution to a problem, so I agree that the best way to understand a solution is to face the problem in the first place.
Outstanding. This is the best explanation and most visual description of why basic design and TDD is important to software ive ever encountered. Extremely well done.
Your videos are the most helpful on the internet because you use real-world code. Most channels (and textbooks) would have used Dogs and Cats (or worse, Customers and Banks) with implementations that are so trivial they're not even helpful. I'd watch all your videos because of that alone, but the cool and helpful animations, chill voiceover, and unexpected death metal send them to another level. Thanks for another great video! 😃
This is the best walk-through of dependency injection I've ever seen. Everything from the example chosen, to the visuals, and the pacing/tone of your voice is excellent. You are doing incredible work on this channel and have earned my support on Patreon.
finally you remembered your yt password
Thank you so much for this beautiful video. I had always hated learning how dependency injection works and why one would need it, until a year ago when I tried to tamper with some web APIs. Now I cannot develop any apps without DI. This video, while a bit fast in some spots, is a perfect example-driven tutorial on DI. Many tutorials oversimplify the basics, and the step to understanding how it becomes useful is blurred by too-simple examples. This is just well made.
Your editing and messaging is always so clear. Great video, as always
Expected DI but got a beautiful video on clean code and principles. I love it 🥰.
Jeez, I usually hate code videos, but this one? I even shared it with two people. The music, the animation, the humor! You live up to your name man!
Finally someone spoke my thoughts. One of the best thing is that it makes it so easy to test any architecture
Looks like Dependency Injection is also good for scaling. As in the example in the video, even if you have only one type of Encryption, it's still good to have an interface in case you ever need to add another type of encryption to your system.
remember YAGNI: you aint gonna need it. only abstract what you think will be needed, DI is great but it still adds cognitive load for those reading it.
@@mattpen7966 I mean, adding an interface which specifies the public API of a class doesn't add that much abstraction, and actually makes clear what you intend to offer to the consumer of said API.
@@aruZeta Exactly!! Actually defining an interface helps you understand much better how you should implement the service. It's a piece of boilerplate that you can also use to draft ideas, and once you have it you can start implementing the actual service. I agree that creating interfaces is neither a useless nor an expensive abstraction.
When I was new to the Interface thing I thought it was a waste of time, but as my experience went up, investing my time to make Interfaces really pays off when you invite others into your projects. People have different opinions, and enforcing a set of rules for your services will make collaboration easier.
When I was using Lua, we didn't have proper types & interfaces and I didn't know much about interfaces, but I did something similar to them using a Manager object that takes 3 objects, dynamically checks that they have the correct type, then "requires" them into its own object, and lets users select one using an enum. I used this pattern a lot: in a construction game, Build Mode, Edit Mode, and Delete Mode were handled using this model, and in a Discord bot, commands to the bot were handled with this model.
I didn't even know about interfaces or Dependency Injection, but I ended up remaking the same thing after many mistakes with the many if statements and whatnot.
I see a new Code Aesthetic video and literally CANNOT click on it until I know I'm in a space where I can give it some focus. 3 Days later, here I am, let's go!
Great video, bro. Keep going, I was expecting a new one for like 5 months lol
this reminds me of these videos about cells and biology I used to be obsessed with. Everything about this video - the explanations, content, insights, music, animations, are perfect
I did not expect 10:26, that was outstanding 😂
This is excellent. I use Dependency Inversion by default - I use TDD pretty much constantly, which naturally leads to this sort of thing.
What always baffles me is that folks seem to think DI is only for Java/C# and other statically typed languages when everything benefits from it.
Another great effect of inverting dependencies, is that you can now pull "use case" classes into existence.
"What happens when we upload?"
The code tells us - without bothering us with every single detail.
Great content as always.
I like the visualizations. Animating code to move into place makes it really easy to understand what changed. Have a like and sub
The problem mentioned at roughly 3:10 could also be resolved in TypeScript with a discriminated union type. So you have 'type A = { destination: "aws", aws_stuff: string } | { destination: "sftp", sftp_stuff: string }'. If you pass "aws" as the destination, TS will narrow the types down and allow only the properties that were defined together with "aws" as the destination. In that case there is no need to specify in documentation which other properties are used with that destination, because TS will narrow it down for you.
Of course in this case the refactor that was done to use dependency injection is better, but there are some cases where a discriminated union type is beneficial.
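Spelled out, the discriminated union from the comment above looks like this (the `describeUpload` function is illustrative; the `aws_stuff`/`sftp_stuff` field names echo the comment):

```typescript
// The literal "destination" field is the discriminant: TypeScript
// narrows the type inside each branch.
type UploadOptions =
  | { destination: "aws"; aws_stuff: string }
  | { destination: "sftp"; sftp_stuff: string };

function describeUpload(opts: UploadOptions): string {
  switch (opts.destination) {
    case "aws":
      // opts is narrowed here; accessing opts.sftp_stuff would not compile.
      return `aws:${opts.aws_stuff}`;
    case "sftp":
      return `sftp:${opts.sftp_stuff}`;
  }
}

const described = describeUpload({ destination: "aws", aws_stuff: "bucket" });
```

Because the switch is exhaustive over the union, the compiler also catches any destination added later but left unhandled.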
Exactly my thought. The classic case would be if you're designing a simplistic interface as a library. While within the library you might want to use dependency injection and many more concepts to make your code solid, simplifying the I/O interfaces between the library and the end user to allow your methods to be reduced to their abstract goal makes them easier to use (and better recognized/handled by IntelliJ) than having multiple implementations that you need to know specifically. Those probably add significant amounts of time to the learning process (through docs) rather than offering an intuitive use case. Me monkey, me see parameters, me fill parameters, me service works, me happy. As often in coding, everything has its pitfalls and it's a lot about choosing to do the right thing at the right time.
(It's kind of like, earlier today on another video about Assembly I saw someone saying they did a bunch of Dart and hated that people used Electron to build Windows apps, because of how unoptimized it is. Except for example I coded an app in Flutter (Dart) and I had to do a lot of the styling of the app, because Flutter barely has community libraries; and when I got to specific use cases (like, decoding music metadata specifically) I realized there was no solid library that could offer me a simplistic approach to getting the data at the time, forcing me to implement every use case myself by reading and decrypting byte arrays - a massive time loss, as you can imagine, with all the different standards. On the other hand, I made a significantly more complex app with Electron, that took half the time to style, and allowed me to use significantly more powerful libraries that simplified most of the work. Not only was the technical debt significantly lower, but that app consumes 2-3% of my RAM and negligible amounts of processing power except during some (very mild) peaks; the optimization gains would be entirely pointless.)
I've come back to watch this video like, 4 times now. I can't tell you how amazing you are at explaining things. And the visuals *chefs kiss*. Keep it up dude!
At my last job I did a bunch of Elixir and Rust programming. Of course, we used dependency injection all over the place because it's extremely useful. It's really nice that you get it out of the box in functional languages, since instead of passing in interface objects you are just passing in functions. So in Rust we were passing in trait objects, and then in Elixir we were passing in functions and also doing process-level dependency injection. Testing on a system like this is extremely easy, and so is adding new parts and expanding the overall functionality. Being able to mock up a part of your code or change the connections of your code on the fly is a very powerful abstraction.
I've used it extensively over all my designs throughout my software development career. The upgradability, ease of testing, and readability this provides (not to mention the easily configurable code that can be controlled just using configs) is so great that this truly deserves the title of the best pattern.
I love how you present code. What software do you use for code change and type animation?
power point
I think he said before that he just has a Python script to do it.
One of my favorite thing about your videos is that the examples are not "We have an animal interface with eat() method and monkey implements it and prints 'banana' on the screen" type of examples.
I find the topic of dependency injection very interesting! I would love for you to make a video about the technical details of how dependency injection works in common frameworks. Not how to use them, but how the framework manages to get the right arguments to the right functions. I've always wondered how this works and googling about it only ever gives tutorials on how to use the framework(s).
I think it's actually as simple as what he explained: take a bunch of configuration as input and, from that, decide which implementation to return.
You have 3 services: A1, A2 and B. B depends on A
If the configuration has the a1 flag set to true, you create an instance of A1 and pass it as a parameter to the constructor of B; if it's a2, then you create A2 and pass that instead. (The configuration should be an enum, not a boolean, since a1 and a2 are mutually exclusive.)
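A rough TypeScript sketch of that wiring step; all of the names (ServiceA, A1, A2, B, createB) are invented for illustration:

```typescript
// The two interchangeable implementations share one interface.
interface ServiceA {
  name(): string;
}

class A1 implements ServiceA {
  name() { return "A1"; }
}

class A2 implements ServiceA {
  name() { return "A2"; }
}

// B declares what it needs; it never constructs an A itself.
class B {
  constructor(private a: ServiceA) {}
  describe() { return `B using ${this.a.name()}`; }
}

// The "injector" is just a function that reads config and wires things up.
// A union type is used instead of a boolean, since a1/a2 are mutually exclusive.
type Config = { implementation: "a1" | "a2" };

function createB(config: Config): B {
  const a: ServiceA = config.implementation === "a1" ? new A1() : new A2();
  return new B(a);
}

console.log(createB({ implementation: "a2" }).describe()); // "B using A2"
```

A DI framework essentially generates `createB` for you from annotations and registrations, but the mechanics are this simple.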
Yeah, I wouldn't worry too much about using a DI framework. Just start out doing DI manually like in this video until you internalize the concept. A lot of people have a hard time separating DI from DI containers.
Binge watched your channel rn, your way of explaining, cadence, tone and animations are top tier for coding guides, thanks for the videos
Shoutout to all my bell notification friends
Your videos are the best programming / coding / engineering / computer science (probably other categories too) video essay tutorials I have ever seen. Astounding, one of a kind work. Not to mention the name, I’ve been thinking about how much of what I value is some kind of “optimization aesthetic” or “aesthetic optimization.”
0:18 "We inject the dependent code into the code that uses it."
I would argue that it is the opposite... it is precisely that code that uses it that must be termed the "dependent code", because _that_ is the code that depends on the injected code (the dependency) in order to do its work.
I don't know why but seeing actual code samples (even though they're simplified) makes it so satisfying to watch these videos.
This is the principle of dependency inversion; dependency injection is the implementation. Still a great vid.
Was looking for this comment, I watched only the first 30 seconds which explains how it's implemented but doesn't explain the principle behind it. Thanks for pointing it out
I clicked on this video out of curiosity for something that sounded weird but it turns out it mostly resembles what I called making "generic interface classes" (or something similar in English)
Those really are a must in an environment with a lot of rapid evolution of solutions, like a research project where you change libraries or implementations all the time!
Really clean video !
I prefer if statement
Switching logic is fine when used sparingly, but excessive use results in code that's impossible to read or maintain. If left unchecked, even small changes end up requiring modifications to 30+ files, and finding bugs becomes a game of whack-a-mole.
@@LimitedWard “let’s sweep the logic under dynamic dispatch. yep that looks better”
@@sporefergieboy10 99% of the time I will happily accept the minor performance hit of dynamic dispatch over making my code unmaintainable.
@@LimitedWard
>Switching logic is fine when used sparingly, but excessive use results in code thats impossible to read or maintain.
How do you know? Did you check? Or are you repeating the same programming propaganda from other people?
Literally the best explanation of DI I've seen. Amazing job
Great video! Just hope that you will make music quieter in the next one. It was really loud, especially at 10:26.
That's the point
Scrolled to find this exact comment, made me pucker just a little bit, but was pretty funny!
that visual interpretation of function parameters is what I've always used when designing, even factories, so the dependency injection explanation snapped into my brain the moment I saw the visuals. 10/10
Problem with dependency injection is the memory footprint. It makes things dynamic when they could be static at compile time. It might not be as important in TypeScript. Thanks for this great video!
Dependency injection is a powerful tool that can greatly improve the modularity and testability of a system, but like any tool, it comes with trade-offs that need to be considered.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
>
> Yet we should not pass up our opportunities in that critical 3%.
-Sir Tony Hoare
People conveniently forget the context of the quote to justify sloppily inefficient production code. That optimization is the last mile - get it working first, then make it scale - does not mean that we should ignore it.
@@ghost_mall Repeating tired clichés should be considered the root of all evil.
The number of times I've seen people say "I optimize for readability", or "It's best practice" and still produce the most brain dead code 🙄
@@integerdivision Agreed
- Make it work
- Make it stable
- Make it fast
*IN THAT ORDER*
That doesn't mean you can't *design for fast* at the make it work stage, and DI is great for that because it promotes lazy initialization, but don't sweat the small stuff out the gate.
Dependency injection can be done statically to resolve all virtual methods at compile by using Generics rather than base classes.
This works the best in languages like Rust where generic type parameters have explicit type bounds, but can also work in languages like C++ where the type bound is implicit.
I just watched all 7 of your videos. Thank you for those. I found myself generally agreeing with or already practicing everything you advocated. And you explained it well.
GoLang devs unite! One of the things that I love about Go is the requirement of DI for unit testable code. Of course, that just means that every contributor needs to have knowledge of DI 😂
Great video and excellent explanation. The animations are perfect for making it even easier to understand how everything connects, specially the interfaces, implementations and mocks.
This is by far the best video I have seen about dependency injection.
I'm at war with dependency injection used to change runtime behavior. Trying to understand what an app is doing "without running the app or even stepping through it via debugger" becomes very difficult. Dependency injection all the way through for testability: yes please. But runtime magic: it may be elegant to write and work with if you understand everything, but it's very complex and opaque if you don't touch the codebase on a regular basis. So e.g. I'd rather inject the factory instead of the final service, so you can jump into the creator.
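One way to sketch that "inject the factory" idea in TypeScript; the names here (Notifier, AlertService, etc.) are hypothetical, not from the video:

```typescript
interface Notifier {
  send(msg: string): string;
}

class EmailNotifier implements Notifier {
  send(msg: string) { return `email: ${msg}`; }
}

// Instead of injecting a ready-made Notifier, inject a factory function.
// Anyone debugging can jump straight into the factory to see how the
// concrete service gets created.
type NotifierFactory = () => Notifier;

class AlertService {
  constructor(private makeNotifier: NotifierFactory) {}
  alert(msg: string) {
    const notifier = this.makeNotifier(); // the creation point is explicit
    return notifier.send(msg);
  }
}

const service = new AlertService(() => new EmailNotifier());
console.log(service.alert("disk almost full")); // "email: disk almost full"
```

The trade-off: construction is now visible at the call site, at the cost of the consumer knowing that construction happens at all.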
DI should decompose parts that work in isolation. Either you are debugging problems with chat code (in which case you can ignore file uploads), or with file upload (in which case you'd be looking at file upload code).
Sure, it's possible to use DI too much, but for larger chunks of functionality it's indispensable.
I was so excited to see that a new video was out! A colleague of mine suggested this channel and I'm HOOKED! This is exactly the topic I wanted to understand more profoundly.
dependency injection indeed is a very powerful pattern but it too has its downsides. You should talk about them too!
Whenever I am talking to my guys to build "sinks" for testing, the fake functionality, they never understand me. Now I know why. This video is a work of a genius! Thank you.
music at 0:50 ? please
Anatoly Boardman - Boy's Heart
This music is deceptively not what you expect past the intro, so you might prefer the f2nn edit version
@@fredV35 thanks!
You really need to continue with these videos! They are amazing!! Please keep creating this awesome content for all CS enthusiasts
Coming from Rust, that bit at 2:55 could be massively improved using a tagged/discriminated enum. At least, from the caller's perspective. It definitely does not solve the inner complexity of the class, and DI is the right choice, but tagged enums can simplify that "a bunch of optional variables" problem quite frequently.
Could also just have used a union of different object types, since they were using TypeScript. But yes this doesn't solve all the problems that dependency injection does in this case.
bro the intro had no reason to go this hard :D You got me seriously pumped up for a programming video, love it!! :D
This isn't exactly dependency injection; it's more a mix of dependency injection and dependency inversion. Dependency inversion is about passing dependencies to the pieces of code that use them, whether that's through function arguments, global variables, constructors, fields, etc. Dependency injection generally refers to frameworks that inject the dependencies for you: you might, for example, create a class with specific constructor parameters and then get an instance of that class from a dependency container, which detects the parameters and creates instances of those classes to pass into the constructor. These frameworks generally let you specify the lifetime of dependencies as well; in some cases you might want only one instance of a specific class, which you could indicate by passing a specific argument to the method that registers the dependency, or you might want a fresh instance of a dependency wherever it's used. Decent video if you're new to this concept, but always make sure to do your own research as well.
"Dependency inversion via dependency injection" would just be a pedantic mouthful
And this is the problem with this architectural circlejerk: people nitpicking, inventing new terms, and overcomplicating stuff. Like I always say: the people who make these architectures are trying to sell you books on them. Just use whatever works and don't feel bad that your code doesn't do "big boy" dependency injection. I'm a senior dev and I see a LOT of "new architects" making incredibly complicated solutions to something that's just a glorified CRUD.
@@squishy-tomato Uncle Bob isn't saying inverting is traditional, rather that the inversion is with respect to the traditional structure. I agree, the terms "Dependency injection" and "Dependency inversion" are just too similar and could probably be merged into one, but sadly that's just not gonna happen; once terminology gets created people start using it, and once people are using it they won't stop.
@@83hjf A good developer knows of these patterns; a great developer knows when to use which. Knowing about dependency injection doesn't mean you have to use it everywhere. Knowing about DI frameworks doesn't mean you have to use them everywhere. As long as the code is as maintainable as it needs to be.
You explained dependency injection using minimal jargon with code and visuals. Well done!
I don't understand why we're calling interfaces "dependency injection". Can't we just call this "using interfaces"? Am I taking crazy pills?
Programmers are bad at naming things.
Because you can "use interfaces" without doing dependency injection
Since I see some people disagree with you, I'm assuming that either it is exactly as you say, or "dependency injection" is specifically the practice of passing an interface as a parameter; which still seems to me kinda unnecessary to give it such a name. Or we're both taking crazy pills 😅
The reason I care at all about the naming is because "dependency" usually refers to external code like a library that you import, especially when dealing with web service stuff like in the video. I find this usage of "dependency" in its pure abstract meaning of "line of code that depends on another line of code" confusing because it's sort of unexpected. I used to assume "dependency injection" referred to something along the lines of injecting a library at runtime as opposed to at build time, like in the middle of your app it downloads jquery and starts using it or whatever. The pattern described here could have a clearer name like "shared api" with an honorable mention to "duck typing".
I agree that "interfaces" is not the best alternative, especially since it's an actual language feature in Java, not just a pattern. But he did mention that it's a similar concept in the video.
Your video editing style when visualizing code snippets is the best I've seen in years. Make it everything so much more understandable, thank you very much!
You literally just moved the same issues into multiple files, which makes it harder to read and harder to debug for someone else. Breaking everything up into different classes or interfaces is not always the best solution.
Amazing explanation of the code, with visuals that make it clear what dependency injection is and what its benefits are.
I used to agree, but dependency injection is a solution to OOP problems that can be entirely avoided if you don't use OOP at all.
My replies keep being deleted by TH-cam in another thread where @k98killer asked me to elaborate on Data-oriented design. Hopefully that won't happen here:
Firstly, I’ll do you one up and cover much more than you probably expected, hopefully without ranting too much (no promises 😉 ), because it’s not as easy to find material on this unless you’re already fortunate enough to work with very experienced developers (not of the car salesperson type) - and I think it’s a huge shame.
I hope to convince you that this is very much worth your time to explore, and perhaps even convince you that it has the potential to revolutionize most software and give you the power to one-up almost every company simply by programming this way and copying their current feature sets. But I’ll let you be the judge of that.
Secondly, I’d add the disclaimer that I have 10 years of experience with SOLID but only about a year with DOD - and I still have much to learn, but I’m so much happier and more productive coding this way. Regardless, I encourage you to try it out yourself, verify, etc.
Thirdly, most popular design principles are exclusively about trying to optimize developer time, at the expense of users (which eventually includes ourselves - just look at the sorry state of Visual Studio, compilers etc.). However, I want to be upfront that data-oriented design can sometimes take longer to develop - especially until you get good at it (unsurprisingly). It may require a lot of unlearning simply because many of us have spent a long time learning ecosystems which are pretty distant from this way of thinking.
For instance, DOD is fundamentally against adding a ton of libraries that add thousands of unnecessary features at huge complexity and performance costs. It's about *staying lean* and not adding more than you need, before it's actually needed. And making solutions that *combines the necessary components to fit the use case,* not divide everything into small abstractions which are then arbitrarily reassembled to try to prematurely cover a near infinite amount of unrealistic future use cases.
For a moment, let’s assume there’s a hypothetical ideal design for an application that does what you need. That solution has an inherent need for data structures and abstractions to get the job done - and so the idea is that your goal should be not to inflate that ideal solution with more additional abstractions than what it takes for you to understand the final solution when you read it.
Because *additional abstractions means additional complexity,* lower readability, higher maintenance, (often orders of magnitude) slower performance and higher resistance to more complex features in the long run. *Don't generalize, specialize.* You’ll find this is controversial advice - but when you ask critics “why”, you’ll mostly get answers that aren’t grounded in anything quantifiable. At least that’s been my experience.
Now, this video about dependency injection attempts to remedy some of the common adaptability problems by making it easy to see all important functionality in one place (9:13), but even their simple demo makes it hard to keep an overview of the rest of the relevant code for the use cases, and eventually you just have to hope that it won’t become too complex in the long run (spoiler alert: extremely hard), and that you won’t need to shoehorn functionality into the “pattern” to add simple features that weren’t originally thought of. Instead, you could literally see what the code does, with the relevant code coupled to what it needs for the use cases it covers.
When I first heard this, it was hard for me to imagine what anything other than OOP clutter looks like, but think of the kind of simple functions/programs you probably wrote when you were first starting out: simple code coupled with a ‘main’ function. It’s very close to that, but with more files and lines of code - and useful abstractions that naturally evolve with experience.
The process of adding functionality with DOD is usually to write a monolith function that does whatever job you need done, and then the necessary data structures will naturally appear out of necessity - coupling that with the advice I gave in the previous comment about writing to the usage and not overextending yourself too early. Then *when you get it working, you compress it* down to the simplest form you can with your current level of experience - and then you’re done.
Another common remedy to the unnecessary complexity is to try to cover everything with tests - because eventually the only thing you'll really be able to understand is simple snippets of functionality. And when they're scattered and used like tiny runtime Lego pieces that can be used literally anywhere, then you have barely any chance to actually understand what's going on. You basically have to guess, assume, hope, trust and take a long time to try to build an understanding of it. Or test to an extreme amount. Many developers have probably heard this one:
A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 9999999999 beers. Orders a lizard. Orders -1 beers. Orders uiasdaisfduo.
First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone.
My point is, relying exclusively on tests to trust that your code is still 100% working after a change is not ideal, because plans usually don’t pan out in reality and humans are fallible. And when you try to make everything generalized, you’re introducing exponentially more ways for your application to go wrong - and eventually you’ll have so much trouble understanding the application that you can’t really diagnose it. I think there’ll always be a complexity limit to any application humans have written, but when you have fewer unnecessary abstractions, it’s my claim (and DOD’s) that the ceiling is higher.
I assume most developers have tried pulling their hairs out on something where you just can't figure out what's going on, that isn't really strongly related to the fundamental problem you're trying to solve. Not like “what is this algorithm?” or “how do I best integrate this new feature with the existing functionality?”, but more like “why does the .save() function of my ORM sometimes crash?” or “there’s a random memory leak somewhere, perhaps I misused one of my many libraries or something?”.
If you've ever tried what's called an "object calisthenics challenge", like me, which is an exercise to teach you how to do SOLID, you might also have sensed that it just turns simple things into a complete mess for reasons that are all about unfalsifiable assumptions of what's better (often comparing it to unstructured messes of OOP applications which are even worse). At the time I just thought SOLID was the best of many bad solutions, and was a necessary cost of high-level programming. But I no longer think that’s the case at all.
When you eventually get into the more advanced aspects of DOD, it's also about being aware of the underlying hardware, libraries, disassembly etc. and writing the code in a way that is designed to run on those instead of trying to force your mental model of the world into the program. Things like knowing that RAM is physically much further away from the CPU than its internal cache, which becomes an inherent performance bottleneck if your code constantly jumps to obfuscated code that isn’t cached. Contrary to common misconception, it’s *not* about hand-rolling assembly code - the goal isn’t to mimic the demoscene (but they do rock), there’s no need to abandon high level languages for most developers - although there’s also plenty of room for opportunity to make more aesthetic programming languages and libraries that makes good programming as frictionless as possible (Jai has promise imo).
The unquestionable reality is that there’s disk storage, memory, internal caches, network connections etc. Everything above that is abstractions, some more necessary than others. Like “files” which is definitely a very useful one. “Resource” isn’t, because it’s very ambiguous and general.
In fact, “resource handling” is a good example of a common abstraction that has no relation to anything fundamental other than a mental model of it. It’s about being myopic and making everything into small “resources” that can be individually constructed/initialized and then eventually deconstructed/cleaned up.
But it’s actually almost always better (in every way) to handle data life cycles grouped together with the other relevant data that needs it. It’s about *avoiding the tendency to want to prematurely get your code ready to reuse virtually everything.*
Once you get familiar with this concept, you'll find that your program expands into far less files, lines of code and it's easy to follow what's going on in a debugger, add new features or even have some hardcore devs go in to optimize things without needing to rewriting the entire application from scratch.
Another really important aspect is: Don't assume, measure/try. DOD is fundamentally about not assuming something is better because it has 100 upvotes on StackOverflow or you heard an engineer at Google say it (or me). Many solutions aren’t great when applied generally, and you need to design it in a way that fits your application's needs - not everything adjacent to it. If you're familiar with thinking of time complexity (also known as "big O") and picking a dynamically sized array one place and a hash table another place, then this is just one step further than that.
In summary, I think the benefits are:
• Easier to understand more complex applications, but maybe not if you’re afraid of occasional basic arithmetic and geometry (or more advanced math if you’re doing 3D graphics).
• Easier to get right (when your program does anything more than a prototype) - at least if you’re a programmer who tries to understand code, rather than copy, paste and pray. And I don’t expect that’s a very high bar, except perhaps for absolute beginners with no real interest in the field and/or who only see coding as an easy way to get bread on the table.
• Much better base performance, and easy to optimize further for the few expert developers that your company might (eventually) employ. If you don’t think performance is relevant, try noting every time you’re waiting more than you’d truly like on software after your input, just for one day. I think you’ll find it happens an abundant amount of times (for me it was about two dozens before I stopped). And if you’re old enough, think of when what most use for directions won over the big one from 1996 or the less capable Fruit Phone 1 took down the dark berries and Not-Doors CE. I think there’s many good reasons to believe that users (and wallets for using big servers) care very much - I certainly care as a user.
• More adaptable to (realistic) change. Your bike renting website won’t be easily adaptable to handle running a nuclear reactor, but hopefully this hyperbole illustrates that much premature generalization isn’t rooted in specific industry experience.
• It took me a few months of practicing it to be about as productive as I was in the OOP ways. YMMV of course. But unlike SOLID, it was fun from day one and still is.
• Coding this way gives you knowledge that doesn’t stop being relevant anytime soon - unless perhaps something drastic happens like we replace having a CPU, RAM etc. with something fundamentally different. And in that case, all developers will probably need to relearn many things anyway.
Some of the disadvantages may be:
• Powerful libraries done with DOD can be hard to find or scarce. I think this is because so much software from the last 20 years completely disregards the cost of complexity and ignores the foundation you’re building on - I think we’ve often lived in a “software layer”-land where it’s all some abstract ideas stuffed into a compiler that then magically makes the thing work on our devices. But DOD-friendly libraries do exist. Just look at immediate-mode GUI as an example - I had no idea it existed until very recently, but so far I’ve found it’s generally a much easier way to make complex native user interfaces than the most common OOP ways.
• Sometimes you’ll have been forced to make a premature design decision which was done on incorrect assumptions about what your application will need, and that can require a major refactor. E.g. imagine that you thought Google Maps would never need 3D rendering so you built the site around always being 2D. Many developers are scared of such refactors, but in my anecdotal experience it’s much less cumbersome and infrequent than the reality, where simply reordering B l a c k M i r r o r episodes on N e f l i x literally is a project that takes a team of developers months to accomplish (see “MicroServices (and a story about N e t f l i x)”). I actually think it’s healthy to do refactors (and code deletes) every now and again, so it doesn’t get too bloated, and you can get a lot better at it over time - especially if better generally available tools and languages are eventually built. This concept can be hard to digest, if you’re a business that lives on promising quickly built custom features with little motivation for the salespeople to say ‘no’ when the features are pretty far out of your expertise. I think it’s a balance of quality and quantity that’s very subjective.
However, not everyone will agree that this is what DOD is - certainly not those who dismiss it without ever trying it out. Mike Acton’s talk from CppCon 2014 is the first place I heard this - he’s a very talented game programmer and goes deep into the weeds of how he programs, but he isn’t always friendly. The talk is very informative, even if you have no interest in game development.
One of the better online resources I’ve encountered is Casey Muratori - also a game engine developer, whom you might know for his criticisms of “clean code”/SOLID - perhaps from his clash with “Uncle Bob”. Casey has some good showcases of what a DOD codebase can look like in the episodes of “Handmade Hero”, and he now has more generally applicable educational material suited to enterprise code (which most of us work with), but I’m still unsatisfied with the scarcity of material out there, so I hope some of you reading this aren’t majorly discouraged by that, even though you have fair reasons to be. My advice is to just start small and take it in steps. Leave things better than you found them, etc.
Lastly, I no longer have a dream of a one-size-fits all solution where I could reuse all my code to make huge programs that were orders of magnitude more feature rich and powerful than the individual parts. But maybe one day we can get quality software as the norm again, where most developers actually find it fun like when I first started out with the basics.
Good luck.
Your explanation is very useful for me as a CS student, thanks!
I'm a dude who's about 10y in development and so on, and I genuinely think that you, dude, have the best content around. ty!
I wanted to add that while this video uses an OO approach to explain the idea of dependency injection, the idea goes beyond a single paradigm. In FP, for instance, when you make a function f that takes a function g as input, you are actually doing dependency injection: f depends on some function with signature g that the caller supplies! Actually, if you are making an interface with a single method and the implementations you are using are stateless, it may make more sense to do functional dependency injection instead, depending on your language.
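In TypeScript terms, the functional version might look like this hypothetical sketch (the names `Logger` and `processOrder` are invented for illustration):

```typescript
// Instead of an interface with one method, the dependency is just a function type.
type Logger = (msg: string) => void;

// `processOrder` depends on *some* logger; the caller decides which one.
function processOrder(orderId: string, log: Logger): string {
  log(`processing ${orderId}`);
  return `done: ${orderId}`;
}

// Production: inject a real logger.
processOrder("order-42", (msg) => console.log(msg));

// Test: inject a fake that just records its calls.
const recorded: string[] = [];
processOrder("order-42", (msg) => { recorded.push(msg); });
// recorded now contains ["processing order-42"]
```

This is the same inversion as the interface version, just with a function type standing in for a single-method interface.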
Thank you so much for making this video. I've heard this term many times but I didn't have a great way to visualize the concept, and you did a great job with this video.
This video is just amazing. This is one of the best patterns out there if not the best.
It is simple and clean.
It provides a system of plug and play. You develop the business logic once based on interfaces, and you might not even have any code yet that does any of the steps each single service is responsible for, but you are already able to draft the main business logic of your application.
Then developing each adapter is just a breeze, and no other part of the code is affected by the development of it.
You can have multiple developers working on the same codebase without interfering with each other, and that is just fantastic.
Also makes testing so much better, you develop the tests based on the interfaces again, and you just have to run the same tests against each adapter to make sure all the integrations work.
Also I would add this feels so much better combined to a statically typed language where the compiler helps you so much on the implementation of the interfaces. I love applying this pattern in Rust
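A sketch of that "same tests against each adapter" idea in TypeScript; everything here (KeyValueStore, InMemoryStore, testRoundTrip) is invented for illustration:

```typescript
interface KeyValueStore {
  save(key: string, value: string): void;
  load(key: string): string | undefined;
}

// A real adapter would talk to S3, disk, etc.; an in-memory one
// doubles as a test mock.
class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  save(key: string, value: string) { this.data.set(key, value); }
  load(key: string) { return this.data.get(key); }
}

// One test written once against the interface; run it against every adapter.
function testRoundTrip(store: KeyValueStore): boolean {
  store.save("greeting", "hello");
  return store.load("greeting") === "hello";
}

console.log(testRoundTrip(new InMemoryStore())); // true
```

Each new adapter only has to pass `testRoundTrip` (and its siblings) to prove the integration works, without touching the business logic's tests.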
It's simply the best video I've seen about this subject. So simple and beautiful... OK, I think every person working with code should see this video.
After I understood Dependency Injection I realized I've been doing it in my Lua scripts for ages, and I love it.