This finally made testing as a whole concept make sense...
This video should be shared more. Everyone needs to go back to basics.
Audacity has a built-in noise-reduction effect that can remove the background hum: you give it a sample of the hum from a stretch of the recording where there are no other sounds, and it uses that as the noise profile.
My favorite idea is PLL-based hum cancellation. Please tell me others have thought of this too?
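Roughly what I have in mind, as a toy sketch rather than a true PLL: I assume a fixed mains frequency and use an LMS fit for the hum's amplitude and phase, whereas a real PLL would also track frequency drift (and this version ignores harmonics). All names and parameters here are my own:

```typescript
// Adaptive hum canceller sketch: synthesize sin/cos references at the
// assumed mains frequency and use LMS to fit their amplitudes, then
// subtract the fitted hum from the signal.
function cancelHum(
  samples: Float32Array,
  sampleRate: number,
  humHz = 50,   // 60 in North America
  mu = 0.01     // LMS step size; too large and the fit goes unstable
): Float32Array {
  const out = new Float32Array(samples.length);
  let wSin = 0, wCos = 0; // adaptive weights for the two references
  for (let n = 0; n < samples.length; n++) {
    const phase = (2 * Math.PI * humHz * n) / sampleRate;
    const refSin = Math.sin(phase);
    const refCos = Math.cos(phase);
    const humEstimate = wSin * refSin + wCos * refCos;
    const e = samples[n] - humEstimate; // error = signal with hum removed
    // LMS update: nudge the weights toward a better hum fit
    wSin += mu * e * refSin;
    wCos += mu * e * refCos;
    out[n] = e;
  }
  return out;
}
```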
Thanks for the reminder to focus on behavior. Working in a modern .NET code base, we do DI all over the place and mostly fall back on mocks. I'd love to understand the alternatives better.
Full end-to-end and integration testing.
Use DI when needed (e.g. for the strategy pattern, or other cases where you don't know the concrete dependency until runtime, or where the dependency varies based on conditions in the layers above). And stop writing unit tests for internal or private classes. Test public classes, more specifically the public methods of those public classes. Mocking couples implementation details to the test, which makes the test fail when you change the implementation, even if the behavior stays the same.
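A sketch of what I mean, with invented names and numbers; the test only touches the public method and asserts on observable behavior, so any internal refactoring that preserves that behavior keeps it green:

```typescript
import { strict as assert } from "assert";

// Hypothetical public class: how the discount is computed internally
// (helper classes, inlined logic, whatever) is invisible to the test.
class PriceCalculator {
  totalFor(unitPrice: number, quantity: number): number {
    const subtotal = unitPrice * quantity;
    return quantity > 10 ? subtotal * 0.9 : subtotal; // 10% bulk discount
  }
}

// Behavior-only assertions: no mocks, no peeking at internals.
const calc = new PriceCalculator();
assert.equal(calc.totalFor(5, 2), 10);  // no discount below the threshold
assert.equal(calc.totalFor(5, 20), 90); // discount kicks in above 10 items
```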
I tend to mock, fake, or stub only at the edges for tests that last. I may use a mock when I'm discovering the internal implementation and just need something to pretend to work, but those tests tend to get removed over time.
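For what it's worth, here is the shape of an edge fake I'd keep around, with hypothetical names: an in-memory store standing in for the database adapter, so the domain code under test is all real:

```typescript
import { strict as assert } from "assert";

interface UserStore {
  save(id: string, name: string): void;
  find(id: string): string | undefined;
}

// Hand-rolled fake at the I/O boundary; only this edge is replaced.
class InMemoryUserStore implements UserStore {
  private rows = new Map<string, string>();
  save(id: string, name: string) { this.rows.set(id, name); }
  find(id: string) { return this.rows.get(id); }
}

// Real domain logic, exercised unchanged in the test.
function registerUser(store: UserStore, id: string, name: string): void {
  if (store.find(id)) throw new Error("duplicate user");
  store.save(id, name);
}

const store = new InMemoryUserStore();
registerUser(store, "u1", "Ada");
assert.equal(store.find("u1"), "Ada");
assert.throws(() => registerUser(store, "u1", "Ada again"));
```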
FYI: The hum seems worse in the left channel. Turn off the left channel, or use headphones/earbuds and only have the right side on/in your ear.
I love the "using the word micro in microservices" bit 👏👏👏
I'm confused about "only test the public interface that other people depend upon" at 28:04. The implication of that seems to be that if the entire system is developed by one team with full collective code ownership, then you should only test at the user or other external interface of the application. But that can't be right, since testing at the user interface level is generally hard work and fragile.
So maybe it means splitting the app into modules and testing at the module boundaries. But then the problem of what to test isn't really answered; it's just replaced with the problem of how to decompose the app into modules, and if you change that arrangement of modules you'll still need to change the tests. And not all languages have a native module system where a module is distinct from a class or a function.
And writing a test for a module may be a lot more complicated and require a lot of irrelevant details that get in the way. E.g. in the example of calculateFoo, if Foo is something that the customer understands (e.g. foo is the postageAndPackagingCostEstimate for display on a shopping cart), then having a focused test that calls that function, or something close to it, seems a lot simpler than a test at a bigger scale where more setup is required so we can call a bigger function that e.g. also has to calculate the taxes.
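To make that concrete with a toy version of the hypothetical estimate function (all names and numbers invented):

```typescript
import { strict as assert } from "assert";

interface CartItem { weightKg: number; }

// The customer-visible "foo": a flat fee plus a per-kilo rate.
function postageAndPackagingCostEstimate(items: CartItem[]): number {
  const totalWeight = items.reduce((w, i) => w + i.weightKg, 0);
  return 2.5 + totalWeight * 1.2;
}

// Focused test: no tax tables, no customer address, no payment stubbing,
// unlike a test that has to drive a whole checkout total.
const estimate = postageAndPackagingCostEstimate([
  { weightKg: 1 },
  { weightKg: 2 },
]);
assert.equal(estimate, 2.5 + 3 * 1.2);
```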
> The implication of that seems to be that if the entire system is developed by one team with full collective code ownership, then you should only test at the user or other external interface of the application. But that can't be right, since testing at the user interface level is generally hard work and fragile.
These days I try to structure my applications so that all of the functional requirements can be expressed in abstract form by some data/object model one layer below the user interface. In a web API that's already pretty common, as I think you generally see people writing a service layer with thin controllers on top. The one difference I've seen is that IME people want to put all the validation logic in the controller, but to the extent that those validations need to go in the test suite, I pull them down into the service/domain model. In UIs I'm a big advocate of state managers like Redux with thin rendering logic on top, and that rendering logic can go completely untested as far as I'm concerned. Maybe a Storybook for some manual testing of the different component states.
I don't find writing tests at this level to be super complicated, although I do usually wind up with helper methods at the top of several of my test files that set up common boilerplate scenarios, on top of which each test can then tweak the one or two inputs that it cares about.
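For example, a typical helper of that kind might look like this (names invented): one builder produces the boilerplate scenario, and each test overrides only the input it cares about:

```typescript
import { strict as assert } from "assert";

interface Order { items: number; memberTier: "none" | "gold"; }

// Behavior under test, one layer below the UI.
function shippingCost(order: Order): number {
  if (order.memberTier === "gold") return 0; // free shipping for gold
  return order.items > 5 ? 4 : 8;            // bulk orders ship cheaper
}

// Shared boilerplate: a sensible default order, overridable per test.
function makeOrder(overrides: Partial<Order> = {}): Order {
  return { items: 1, memberTier: "none", ...overrides };
}

assert.equal(shippingCost(makeOrder()), 8);
assert.equal(shippingCost(makeOrder({ items: 6 })), 4);
assert.equal(shippingCost(makeOrder({ memberTier: "gold" })), 0);
```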
23:51 Thirty seconds is still too much for TDD. The whole suite should run in under a second. Good talk.
Merging development and QA was a very bad decision by corporations. Their agenda is obvious: more profit. But the outcome is lower-quality software. QA is a different mindset than development, and having the developer be the only one testing their own code is just asking for trouble because of all sorts of biases.
20 years as an IT consultant and I've yet to meet a QA department that adds anything of value to the process of developing and releasing working software. At my previous client, I was told that QA must be involved in the testing of the features I developed. When I asked them what they needed to test, they couldn't specify it. When I asked how they would test it, they couldn't specify. When I asked what and how they test in general, they couldn't specify.
I agreed with some of this, but the idea of using TDD to drive design is a recipe for anti-patterns and inelegant code. We shouldn't bypass the developer's ability to use experience and their mental models to design the abstractions to support the public interface.
Wait, do Europeans still drive stick?
Sure do! We had a continent wide vote on it a while ago and stick won out.
Electric and hybrid cars are often automatic only, I think, so the manual gearbox is probably going away slowly.
Electric cars are "automatic" in the sense that they have no need for a gearbox.
That means all your tests are going to be integration tests... strongly disagree with this guy.
The audio makes the video unbearable.
Didn't make sense to me.
- Testing behaviour as a black box is basically E2E testing. It's a known fact that E2E tests are expensive, so you don't want 100% coverage from them, only the happy paths. The rest should be covered by other kinds of tests: unit, integration, etc.
- How are you expected to get 100% coverage "out of the box" if you're not supposed to TDD the whole codebase? The causal relationship is self-referential here, and therefore invalid.
- What exactly are you testing with TDD, if it's not IO, HTTP, database, UI, or a defined spec? If it's not unit tests and not integration tests, then is it E2E? (see point 1)
Are you building features without a specification? How is that supposed to even work?
What are you discovering there? Funny though, the TDD crowd usually says it's a skill, but if you're discovering the implementation through TDD, that looks like a skill issue.
I think there's a lack of practical examples.
It makes sense to me. Here's why.
Some prerequisites about who I am:
- I don't follow anything dogmatically; I make tradeoffs.
- I don't care about the details, all that IO and UI stuff. I only care about domain logic.
For your second point: I don't force 100% coverage on everything, I force 100% on the domain logic that I care about.
For your first point: I have a clearly defined inner layer in my codebase, which is my domain. Inside the domain there are properly divided modules; I treat each module as a black box and test the public methods that other modules will call (see the sketch below). For this, you may want to read up on DDD and components/modules.
Hope my experience helps.
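The sketch I mentioned, with made-up names: one domain module exposes a single public method, another module depends only on that method, and the tests treat both as black boxes:

```typescript
import { strict as assert } from "assert";

// Domain module: quoteFor is the only public surface.
class Pricing {
  quoteFor(quantity: number): number {
    const unit = quantity >= 100 ? 0.9 : 1.0; // internal rule, free to change
    return quantity * unit;
  }
}

// Another module depends only on the public method, not the internals.
class Checkout {
  constructor(private pricing: Pricing) {}
  totalFor(quantity: number): number {
    return this.pricing.quoteFor(quantity);
  }
}

// Black-box tests against public methods; internals can be refactored freely.
const pricing = new Pricing();
assert.equal(pricing.quoteFor(10), 10);
assert.equal(new Checkout(pricing).totalFor(100), 90);
```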
Go read Test-Driven Development by Kent Beck. It contains some clarifications to your questions...