Your videos are timeless David. And they’re so inspiring. I am 40 and loving software engineering!
This channel is a gold mine!
Thanks 😎
Wow, what an underrated video! Thank you very much, Sir!
You're very welcome!
I am really thankful for your videos, they are amazing! Thank you!
Thanks
Been watching these. Excellence personified. I'm speechless!!!!
Wow, thank you!
I’ve bought your latest book and have been watching loads of your videos - and think they’re brilliant. I just had to write to say I unexpectedly burst out laughing at 13:56 - thank you 🙂 Edit - and laughing again at the Jenkins test example.
Hi Dave, I see the rationale for using the "Should" prefix, in order to trick our minds into thinking about the spec from the right perspective.
However, because it is repetitive and does not add information, we soon start mentally discarding it as noise.
RSpec, for example, used to adopt the "should" style, but it switched to encouraging a more terse and assertive style ("it returns the result" instead of "it should return the result").
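For comparison, here is roughly how the two styles might look in JUnit 5 terms (a made-up Calculator example, using @DisplayName to mirror RSpec's descriptions; just a sketch, not something from the video):

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorSpec {

    // "Should" style: the prefix nudges us to read the test as a specification.
    @Test
    @DisplayName("should return the sum of two numbers")
    void shouldReturnTheSumOfTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }

    // Terse, assertive style, as modern RSpec encourages: "it returns the sum".
    @Test
    @DisplayName("returns the sum of two numbers")
    void returnsTheSumOfTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}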
What do you think?
In my practice I use "Should" as the suffix of the class, not the prefix of the method. So I would make a ListShould class rather than ListTest. Each @Test method then does not need a prefix.
This also gives the test class a less technical name, signalling that it is the specification and not just the test. An additional benefit shows up in reports, where it is grouped as follows:
List Should:
- keep an order of...
- ...
Or at least close to it ;)
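A minimal sketch of the layout, assuming JUnit 5 (the List behaviour here is only an illustration):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

// The class name carries the "Should", so each method reads as "List should ...".
class ListShould {

    @Test
    void keepTheOrderInWhichElementsWereAdded() {
        List<String> list = new ArrayList<>();
        list.add("first");
        list.add("second");
        assertEquals(List.of("first", "second"), list);
    }

    @Test
    void startEmpty() {
        assertTrue(new ArrayList<String>().isEmpty());
    }
}

In the test report this is grouped under ListShould, which reads close to the "List Should: keep the order..." layout above.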
Hi Dave, I love your channel, and I'm often (kind of) binge-watching. I'd like to hear your opinion about a style of scenario titles I started using for some higher-level (integrated) tests. I'm not using the "Should" form for those. Instead I name them after what makes the scenario unique, for example "WhenAccountBalanceIsInsufficient" (the fixture name could be something like "ChargePaypalAccountFixture"). My thinking is that integrated tests are slow, so I don't want to run the same scenario over and over just to do all the assertions. If you use the right assertion "DSL", it's still easy to read what the scenario is trying to prove. (For pure unit tests, I do use the Should...When naming for the test methods.)
Just name it after the main thing, like "ShouldRejectWhenAccountBalanceIsInsufficient".
There is no problem having multiple asserts, as long as there is a block of "Given" followed by a block of "When" followed by a block of "Then", not multiple interleavings of the three. Given, Given, When, Then, Then, Then. Not Given, When, Then, When, Then, When, Then.
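As a sketch of that shape (the Account and PaymentService classes here are made up, just to show one block of Givens, one When, and a block of Thens):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Hypothetical collaborators, only here so the sketch is complete.
class Account {
    private final long balanceInCents;
    Account(long balanceInCents) { this.balanceInCents = balanceInCents; }
    long balance() { return balanceInCents; }
}

record PaymentResult(boolean accepted, String reason) {}

class PaymentService {
    PaymentResult charge(Account account, long amountInCents) {
        if (account.balance() < amountInCents) {
            return new PaymentResult(false, "INSUFFICIENT_FUNDS");
        }
        return new PaymentResult(true, "OK");
    }
}

class PaymentShould {

    @Test
    void shouldRejectWhenAccountBalanceIsInsufficient() {
        // Given: an account holding less money than the charge
        Account account = new Account(10_00);
        PaymentService payments = new PaymentService();

        // When: we try to charge it
        PaymentResult result = payments.charge(account, 50_00);

        // Then: a block of assertions about the single outcome
        assertFalse(result.accepted());
        assertEquals("INSUFFICIENT_FUNDS", result.reason());
        assertEquals(10_00, account.balance());
    }
}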
The most important thing is: the purpose of your specifications is NOT for scripting automated tests. This is not a test automation framework. You aren't writing test scripts. You are writing specifications of behavior.
The fact that you are mentioning something like *how* the tests run indicates to me you are most likely just thinking of Gherkin as a way to write your test scripts so that Cucumber will run them for you.
If you do implement your scenarios as executable specifications, and you need them to run faster, then think more carefully about how you implement all the stages. Can you implement your "givens" such that a resource can be reused across runs without corrupting the test? Can you implement your "givens" such that they don't have to wait on a network connection? And so on.
The Givens/Whens/Thens that you are building are meant to form a domain-specific language for talking clearly about the high-level behavior of your system. You have total freedom in the way you implement the elements of that language to make your tests run fast.
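For instance (a hypothetical sketch, not a prescription): a "given" that needs some expensive reference data can build it once and share it across scenarios, as long as the scenarios don't mutate it:

// Imagine ReferenceData is slow to build (parsing files, seeding a database, etc.).
class ReferenceData {
    static ReferenceData loadSlowly() {
        return new ReferenceData(); // stand-in for the expensive work
    }
}

// The "given" step asks for the data; it is built once and reused after that.
class SharedReferenceData {
    private static volatile ReferenceData instance;

    static ReferenceData get() {
        if (instance == null) {
            synchronized (SharedReferenceData.class) {
                if (instance == null) {
                    instance = ReferenceData.loadSlowly();
                }
            }
        }
        return instance;
    }
}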
Question about the Manager convention: I have incorporated the term "Manager" heavily into my architecture. Each Manager has a very specific goal and acts as a communication channel between different ViewModels. I heavily test the Manager, and since the ViewModel uses the Manager I believe that I have sped up my testing time for ViewModels. The complex code is done in the Manager, so at that point I'm testing whether I've passed the right parameters, properly fed the data, or whether changes were made to the Manager functionality that affect the ViewModel usage. Do you agree with my assessment? - Sincerely, a young Software Engineer
My point is that “Manager” doesn’t tell us what that code is doing.
It’s hard to tell from a brief description, but what you describe sounds like an implementation of the Model View Controller pattern (en.m.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller). I think that “Controller” is a slightly better name than “Manager”.
“Controller” is a bit generic, but it is useful to use conventional names when you are implementing common patterns.
More generally though, ‘cos particularly when you are starting out you may not always know the names, pick a name that describes what the thing is doing. If it is routing events, call it an “EventRouter”; if it is handling UI events, maybe an “EventHandler”; and so on. A good check is to explain, or imagine explaining, what each module or class is doing to someone else. For extra credit, could you explain it to someone who isn’t technical?
@@ContinuousDelivery Thank you for the excellent points! I did your extra credit, and you are correct: it seems that I have some work to do with my design. The feedback I got was that my use of "Manager" was not as straightforward as when I explained it with "Controller". With good naming I can clearly see how easy it is to walk through the code; it is much more fluid and logical. I took a look at my design and I think I see the MVC pattern in it, though probably a modified version. This has been an excellent discussion!
@@chrisjohnson7255 MVC is a fairly complex pattern; there are lots of variants, and people tend to invent their own versions. It is worth reading a bit about, to help you decide what you want to do.
I am very pleased that you found this helpful.
Love your videos! Must’ve watched a dozen of them by now.
The concept of testing behaviour instead of implementation details (what vs how) makes intuitive sense to me. I’m still confused about what a DSL is and how it interacts with actual tests that do the assertions.
Also why are we naming our tests “Should…”? Why not use Given-When-Then in our test names instead?
Hey, I'm pretty new to this, but I'll try to explain how I understood it. Let's say you are working on a project that involves managing the crew of a ship. As you might guess, on the ship you will find a captain, sailors, chefs, etc. It would be odd to name the sailors just "members of the crew", the captain "admin", and the chefs "auxiliaries"; by doing that we create chaos, and a mental mapping becomes necessary to translate from the problem domain to the system. By using a DSL it's easier to define a system that behaves as required: if we wanted to test the behavior of the system when a sailor is fired, we would write a test like "shouldRemoveSailorFromCrewWhenFired", which clearly states what we are trying to do and how the system should behave when that happens.
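A rough sketch of what that could look like in Java (Crew and Sailor are made-up classes, just to show the domain language carrying through into the test):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

import java.util.HashSet;
import java.util.Set;

class Sailor {
    final String name;
    Sailor(String name) { this.name = name; }
}

class Crew {
    private final Set<Sailor> sailors = new HashSet<>();
    void hire(Sailor sailor) { sailors.add(sailor); }
    void fire(Sailor sailor) { sailors.remove(sailor); }
    boolean includes(Sailor sailor) { return sailors.contains(sailor); }
}

class CrewShould {

    @Test
    void shouldRemoveSailorFromCrewWhenFired() {
        Crew crew = new Crew();
        Sailor jack = new Sailor("Jack");
        crew.hire(jack);

        crew.fire(jack);

        assertFalse(crew.includes(jack));
    }
}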
Very insightful video. Thank you very much David!
Glad you enjoyed it!
What do you think about modern RSpec's DSL, with "it" for the example block and "expect" instead of the "should" keyword?
I haven't seen them, I will take a look.
19:55 So what do the people of the Jenkins project say about their tests? :-)
Great video, keen to see more about how you think about / apply BDD at a meta / architectural level (almost high-level functions), as I find these are quite useful when talking with customers / business who are looking for abilities to do things (capabilities) rather than more precise specifications. What are your thoughts on handling that? I've found user stories are ok, but I'm keen to explore them with BDD alongside.
I have come to the conclusion that the trick for requirements is to focus exclusively on "What" rather than "How". That is pretty fractal: it works at all resolutions of detail. When we built our financial exchange at LMAX, I suggested that we work hard to avoid any technical stories of any kind. At one stage we implemented an architecture to support fault tolerance and clustering based only on what the user observed - albeit while the system was in trouble. We defined the requirements to describe a scenario where the user was carrying out some actions while parts of the system were failing; we simulated this in automated tests (executable specifications), and our assertions were based on what the user wanted to see: no loss of data, experiential consistency.
It worked really nicely.
I have done similar things with a few clients since. I think it is a very broadly-applicable approach. It does take some ingenuity to think about what this really means sometimes, and it requires a fairly sophisticated dev team who can take such requirements and appropriately interpret them into technical outcomes.
@@ContinuousDelivery awesome info Dave, if you are keen to explore some examples in a subsequent video that would be much appreciated.
I'm shocked that Dave said that the book "Growing Object-Oriented Software, Guided by Tests" is fantastic. The book advocates the London School of testing, which has caused a lot of money to be wasted on mock maintenance and on frozen implementation details (hard to refactor). In other videos Dave praised the Chicago School (testing many objects at once and avoiding mocks). So it's a bit of an inconsistency.
I don't think that I said that. I am in favour of using mocks, and I disagree with your statement that mocks add to maintenance; of course you can misuse any tool or approach. I do recommend that care is taken when using mocks though: if your design is poorly abstracted, then mocks are a bad tool to use. I am also not against using several real classes within a test. So, for example, I never return a mock from a mock; I may mock a provider class and then return what it would return.
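A minimal sketch of what I mean, assuming Mockito and a made-up PriceProvider/Portfolio pair:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.math.BigDecimal;

// Hypothetical collaborators, only here to illustrate the shape of the test.
interface PriceProvider {
    Price priceOf(String symbol);
}

record Price(String symbol, BigDecimal amount) {}

class Portfolio {
    private final PriceProvider prices;
    Portfolio(PriceProvider prices) { this.prices = prices; }
    BigDecimal valueOf(String symbol, int quantity) {
        return prices.priceOf(symbol).amount().multiply(BigDecimal.valueOf(quantity));
    }
}

class PortfolioShould {

    @Test
    void shouldValuePositionsUsingProvidedPrices() {
        // Mock the provider at the boundary of the unit under test...
        PriceProvider prices = mock(PriceProvider.class);
        // ...but have it return a real value object, never another mock.
        when(prices.priceOf("ACME")).thenReturn(new Price("ACME", new BigDecimal("10.00")));

        Portfolio portfolio = new Portfolio(prices);

        assertEquals(new BigDecimal("100.00"), portfolio.valueOf("ACME", 10));
    }
}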
@@ContinuousDelivery In your video "When Test Driven Development Goes Wrong • Dave Farley" there is a section about "Mockery" that I totally agree with. Unfortunately that's what the GOOS book shows.
I view TDD as a process. I view BDD as putting semantics on top of the TDD process.
All code is based upon the assumptions and intentions of the developer. Those assumptions and intentions often reside only in the developer's head. Sometimes they reside in design documents, which never stay up to date and which no one reads anyhow. They may be added as comments, but those are rarely kept up to date either.
BDD allows developers to document their assumptions and intentions in the tests. This creates living specification documentation. When the code or tests start to diverge, then the tests start to fail immediately. This will never occur when a design document or even code comments diverge from the implementation.
When tests fail, either the code is violating an assumption and intention, which must be corrected in the code, or the assumption or intention is out of date, in which case the test needs to be updated.
I think BDD is a bit more than semantics, but otherwise I agree completely.
TDD doesn't really say much about the nature of the test; BDD does. "Good TDD" is certainly opinionated about the tests, but that was the original point of BDD - a way to teach people so they got to "Good TDD" sooner. Inevitably it has morphed a bit since then, and there is plenty of "Bad BDD" around too, but I still think that it takes the discipline a step further.
So, certainly, as I say in this video, when we started out with BDD the intent was to "get the words right", so there is certainly a level at which semantics matter, but I think that the focus on tests as genuinely "executable specifications" amplifies your later points.
In addition to that, the focus on the desired behaviour, consciously aiming to exclude any notion of how the system works, makes BDD a step further than TDD in guiding us to a better "TDD process".
@@ContinuousDelivery It seems to me that building a ubiquitous language is one of the enablers (or one of the blockers, if you don't have it) for running BDD within the team. Without it, your specs, code, and tests become a mess of terminology and are hard to understand and verify. Thus, one will not be able to build either good TDD or good BDD practices without a ubiquitous language. The importance and process of building a ubiquitous language are covered in Eric Evans's book on Domain-Driven Design.
@@TheOptiklab I certainly think that BDD and DDD are related. I think that BDD is a tool that can promote the use of DDD thinking and that ideas like Ubiquitous Language help us to focus on some real value in the analysis and design of our software and its testing.
@Jim, I completely agree with you. BDD is much more in line with how a developer reads their code, and it works better for modern frameworks where testing tools may be limited, non-existent, or impractical.
awesome man, keep going
Thanks, will do!
Please do not say "Assert one thing per test" without explaining it! It can be just as harmful as an unexplained SRP: "Let one function/class do one thing only and do it well." People often interpret it as exactly one assertion statement, but that's not correct! We want to test the outcome of one test case, but sometimes it's not possible to do that with just one primitive assertion statement. In that case you either extract the checks into your own special assertion method, which can be overkill (too many special cases, and extra abstractions/indirections in tests make for hard-to-read specifications), or you just use multiple assertions. And when you do use multiple assertions, you had better use a test tool that reports all the assertion results collectively, rather than stopping at the first failure!
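For example, JUnit 5's assertAll groups the primitive assertions for one outcome and reports every failure in the group instead of stopping at the first one (the Order record here is made up):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical result object, just to show the shape of a grouped assertion.
record Order(String status, int totalInCents, String currency) {}

class OrderShould {

    @Test
    void shouldBeConfirmedWithTheExpectedTotal() {
        Order order = new Order("CONFIRMED", 49_99, "EUR");

        // One test case, several primitive assertions, all reported together.
        assertAll(
            () -> assertEquals("CONFIRMED", order.status()),
            () -> assertEquals(49_99, order.totalInCents()),
            () -> assertEquals("EUR", order.currency())
        );
    }
}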
This is such a thoughtful and experienced response
Why write cumbersome Java code like this?
Map<Item, Set<String>> perms = new HashMap<>();
perms.put(Item.READ, Collections.singleton("alice"));
The amount of duplication, and the low-level description of the "how" instead of the "what", is overwhelming. This is not a readable test, let alone production code. Learn the new ways, embrace the new features of Java, and write:
var perms = Map.of(
Item.READ, Set.of("alice"));
Alternatively, Kotlin is nice too...
Note: I know it was just a counter-example.