For me, the barriers I faced with TDD have been my co-workers. It was easy to figure out what tests to write as I drove development - impossible to convey that practice to others. Challenges getting code past code reviews quickly enough to make rapid progress. Clueless managers who just didn't get that churning out tests and code results in predictable progress, who couldn't differentiate it from the unstable progress of cowboys delivering untested code full of surprises, and who rewarded the heroic efforts that periodically saved an inherently chaotic situation.
When I faced this issue I doubled my estimates, because I needed to test my code with a debugger instead of a unit test. They soon came round. Ask your managers whether they want their developers to be fixing bugs or delivering functionality to the stakeholders.
If you are that interested, do some pair programming with one of your coworkers to share the value. Show them how much more frequently you test than others, giving confidence during development about delivery quality.
You should pair program. It doesn't have to be a policy; just do it long enough to train others how to do it right.
One thing that really made TDD (or rather BDD) click for me, was the concept of programming by wishful thinking (which I think is the same as programming by intention). This technique has massively improved my ability to write simple and loosely coupled code. I think this is a topic you could cover quite well, maybe even with a small workshop video to go along with it 🙂
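For anyone unfamiliar, the gist of wishful thinking is writing the calling code first, against the API you wish existed, and letting the compiler and tests tell you what to build next. A tiny sketch in C# - every name here is invented:

    public class ReportService
    {
        // ISalesRepository doesn't exist yet when this is first written;
        // we "wish" it into being by using it.
        private readonly ISalesRepository sales;

        public ReportService(ISalesRepository sales) => this.sales = sales;

        public string MonthlyReport(int year, int month)
        {
            // Written against the ideal API, not an existing one:
            var total = sales.TotalFor(year, month);
            return $"Sales {year}-{month:D2}: {total:C}";
        }
    }

    // Only afterwards do we define the collaborator the code demanded:
    public interface ISalesRepository
    {
        decimal TotalFor(int year, int month);
    }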
Michael C. Feathers' book on working with legacy code helped me tremendously - not only in working with legacy code, but in getting stuff into a test harness. The legacy system I am working on is now under test and constantly refactored. The tests guided us there; they made us realize a lot of flaws. TDD and unit tests are "simple", but they change a lot of things for the better. I will never again do without unit tests; the feeling of safety and feedback is too nice. I still write bad tests from time to time, but they keep getting better, as they allow easier refactoring and redesign of the code under test. I wish I had realized that earlier. And now I am using TDD more and more; it's so much more fun because of the instant feedback :). Your video motivated me to keep using TDD.
So far my biggest barrier to practicing TDD has just been setting up my local dev env to be able to run the TDD cycle. Getting closer each day though, and learning a lot. Great content! Appreciate it 💪
That's why I prefer IDEs over text editors, because they integrate with many test libraries and I don't have to set them up. And I don't even have to manually run the tests sometimes, options exist to enable running in the background. .NET 6 even has hot-reload for tests so even changing test code doesn't require a rebuild or complete rerun.
Refreshing! I've seen coworkers advocate for several of the anti-patterns you mentioned, glad to know I'm not the only one who thinks they're anti-patterns.
I've been looking for explanatory videos for a couple of days now and find that you have the best way of explaining things in a simple and concise, yet professional and challenging way so far. Thank you!
Tried watching this a month or two back and had no idea what you were talking about. Spent time actually trying to do TDD like Uncle Bob instructs, and now it's all much clearer. TDD cannot be done backwards. You cannot write the tests afterwards without an incredible amount of pain.
Unit tests are made hard by one major thing: not having the entire domain model in a completely independent component. In the domain model, introduce pure fabrications (for example `interface AccountRepository`) to achieve dependency inversion. Then only test the domain model through the boundary, all in memory. These are called unit tests. They have the advantage that you're not bound to the implementation details of the domain; you just feed it data and check what you get back (from the return value, or via a test double). I make only one exception to the "unit test at the boundary" rule: when testing more convoluted algorithms. Other than that, do a couple of sanity integration tests and the like.
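As a minimal C# sketch of that pure fabrication idea (names illustrative):

    using System.Collections.Generic;

    public record Account(string Id, decimal Balance);

    // The domain depends on this abstraction, never on a concrete database.
    public interface IAccountRepository
    {
        Account? FindById(string id);
        void Save(Account account);
    }

    // In tests, an in-memory fake stands in for real persistence,
    // so the whole domain model runs in memory through its boundary.
    public class InMemoryAccountRepository : IAccountRepository
    {
        private readonly Dictionary<string, Account> store = new();

        public Account? FindById(string id) =>
            store.TryGetValue(id, out var account) ? account : null;

        public void Save(Account account) => store[account.Id] = account;
    }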
Thank you for all of your outstanding content! One small thing: please make the font of your code examples bigger. It would be a great favor to people watching your videos on their phones, like me. Again, great material, like all the previous ones 🙏
TDD clicked for me when I started writing tests consistently. Once I caught my first bug in the pipeline from a unit-test break, I was hooked. Now my test suite is mature and secure.
When you're testing write-back cache modules with cache-line eviction (in Verilog), you really, really want to run through a lot of tests checking for race conditions - (for i = 0; i < 16 clocks; i++) { write; wait i clocks; read } and so on. But this is because correctness isn't just a function of inputs, but a function of the sequence of inputs over time, where timing is absolutely critical to triggering edge cases.
Would love an example video on when mocks are used wrong and how one might consider redesigning. I do run into scenarios where my tests for the return of a mock
Great material; really liked the summation. TDD, implemented well, will shout at us when we are doing something wrong; we just need to listen to what it is saying to us.
2:13 / 2:40 What TDD is really about: thinking about how our software works from the perspective of users
3:25 (Key) point
3:39 Meaning of the interface mentioned
4:58 Simple example
6:35 The problem if we write code first
6:58 If we write code first, *we are going to expose the details of our thinking in the interface to our code*
7:57 Important technique
8:27 What we are really interested in
8:40 Three different types of TDD
9:08 Type 1: return value or exception
9:25 Example
9:40 Common mistakes
10:06 *TIP*: the time for testing different inputs
10:58 Type 2: test state change
11:21 The problem with this kind of test is more to do with *iteration*
11:33 *TIP*: don't iterate in a test
12:12 Type 3
12:24 Tests that validate that your code interacts with other code
12:30 Way to test such things: insert something
12:38 Confusing terms: stub, mock, spy, fake
13:20 Code example
13:40 Fake
14:02 Spy and mock
14:50 Mock
15:46 Trouble of using spy and mock
Hello. I agree with you on the points you make in the video. I have noticed newer developers get caught up on the complexity of the unit test. The rule of 'no loops' in a unit test is a good one.
Testing graphics seems rather difficult. I think this is part of the reason game development is behind on good practices. It would be fascinating to see you interview a game programmer who would answer obvious questions on the practical side of TDD in game development.
I was really curious about how that line should be tested (other than by visual inspection) without pretty much the same logic as the one which should be creating it. Like, am I supposed to just assert the state of some pixels given an input, maybe? I'm not sure how feasible that gets with anything more complex.
@@user-sl6gn1ss8p yeah. Testing geometry and physics is rather difficult. It's hard to know what to watch out for until you've seen it in person, so errors are harder to predict and avoid
Best video I've seen from you so far, and the rest are really good too! The graphics make the content simpler to digest (just try to keep the font size mobile-friendly if possible), and I love the playful animations! The common mistake of adding tests that don't add new behavior was a great highlight, and I will probably reference this video a lot because of it. Thank you!
I use unit tests for basic functionality and then to add regression test cases. Since at the start you will never predict all the test cases, the regression tests are the win.
The most useful part I have found with TDD is that it eliminates redundant code, whether at the unit or acceptance level. If you can't find a way to test it, the code shouldn't exist. The greatest myth of TDD is that it is slower... in my experience it makes you orders of magnitude faster.
I agree. Once your test is written, you hardly have to do anything other than what your compiler and tests tell you to do. In addition to that, the amount of time you can spend manually testing various quirks every time you change something is much, much bigger than many people think.
People say it's slower because they add the learning-curve time to it. People who are just starting feel less productive as they learn how to design for testability. Once you get the hang of it, it's all the same, because you feel productive.
Liked many of the points mentioned here. But for me the main benefit of TDD is producing a safety net that makes refactoring easy (only when done right, though). For validating design there are other ways - and can TDD really say whether a design is good? I have doubts; that's mainly about cohesion and coupling.
The moment you start doing functional programming and passing all data by value, you no longer have the three types of tests you specified; you have only two: the return value and the interaction. A change in state is also represented by returning a new value of an object, so it's the same thing as the first type. Interaction tests are left to represent interactions with by-reference external systems, e.g. databases, UI, files; these are the sources and sinks of pure data transformations. This is why writing tests is much easier in functional, by-value, data-flow languages than in by-reference languages.
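To illustrate with a hypothetical type in C#:

    // By-value style: the "state change" is just another return value,
    // so testing Deposit is an ordinary type-1 (return value) test.
    public record Wallet(decimal Balance)
    {
        public Wallet Deposit(decimal amount) => this with { Balance = Balance + amount };
    }

    // In a test:
    //   var wallet = new Wallet(10m).Deposit(5m);
    //   Assert.That(wallet.Balance, Is.EqualTo(15m));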
Something to consider with mocks is "can I use the real thing?". Often the answer is "no because it would be slow/expensive/etc.". But sometimes you can just run a copy of the real thing and end up with more confidence in your tests. e.g. You can often just run a SQLite database during tests and avoid mocking the database.
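For example, in .NET with the Microsoft.Data.Sqlite package (the table here is invented), the "real thing" is one connection string away:

    using Microsoft.Data.Sqlite;

    // A real SQL engine, in memory: created fresh per test,
    // gone when the connection is disposed.
    using var connection = new SqliteConnection("Data Source=:memory:");
    connection.Open();

    using var create = connection.CreateCommand();
    create.CommandText = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)";
    create.ExecuteNonQuery();

    // Now exercise the real data-access code against the real engine
    // and assert on what actually ended up in the database.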
From my understanding, spies are used to spy on objects, so that we can observe the calls being made and verify them, including the parameters passed in. With a spy, though, we can still choose to mock specific calls or methods. If we want to keep our tests smaller (if that is what is meant by a simple test), then we don't mock the method; but then it's no longer a unit test, more an integration test, which means it's not independent. I used to not mock these methods, but over time I found that mocking them makes my tests more independent and less complex, albeit with more code to read; the mock gives a better understanding of, and focus on, the current unit under test.

Once I was reviewing someone else's code where there were no mocks, only a spy, verification of the calls, and then an assertion on the final result. It took some discussion to understand what the call was supposed to do and how it would have affected the result, from a reviewer's or non-code-owner's perspective.

I don't know if there is such a thing as right or wrong here, or even how to debate what is better or more effective. I've encountered team members who think very little of tests, those who often complain about their complexity, and those who have written very simple tests that aren't useful when refactoring. I don't enforce what my team should do; I only encourage them, ask questions, give examples, and explain why I wrote something a certain way and how it has helped me so far, especially when refactoring or revisiting a requirement when there is a problem.

One important thing I've learned is to always look back at your old tests and review them again. I often find myself improving tests and still finding ways to make them simpler. We also discuss as a team how to improve our tests, but given real business-case scenarios, it's often more difficult than the typical scenarios from trainings or tutorials. We try not to overthink it, but if our goal is proper coverage and less manual testing, we should cover scenarios like null values, empty strings, invalid values, etc.
I have started using Docker or in-memory versions of most dependencies for testing the integration or interaction code. The benefit is I don't have to understand how things would work in the actual implementation, and I can completely rely on the state of the dependencies to verify my code. Alternatively, when these things are not available, I would choose a stub generated by a mocking library.
When you mock out, say, hardware for TDD, when do you actually test the code that interacts with the hardware? Say I have a fake file system to interact with, for a system that will eventually interact with the physical disk. What would be the case/steps here? This is the question I have been struggling to answer for a while now. I've been practicing TDD for a bit. I watched a talk about how those are then integration tests. I follow the idea that TDD unit tests are short and quick because you want fast feedback. Would love to get your thoughts on this.
Others may disagree, but there's nothing wrong with writing to a file system as part of a unit test; as long as the file interaction is independent, i.e. not tied to a specific environment, you will be fine (most test suites provide this). It does become a bit more involved when you need to start working with email, databases, FTP, other APIs... you need to start thinking about the test dependencies, which become integration tests. Your design should abstract the dependency behind an interface. Your unit tests will stub/mock/whatever the interface to ensure your business logic (the logic one level higher) is correct; then you have an integration-type test which calls the actual class/module interacting with the dependency to ensure that is correct. The integration tests can break, can be fragile, but if your system is reliant on these then you should have them.
I think that one of the best examples that we have of how to architect for hardware is an OS. We don't often think of it that way, but that's what an OS does, it provides an insulation layer of code between our apps that do useful things, and the hardware that they run on. I recommend that you architect for bespoke hardware similarly. Establish well defined interfaces at the boundaries and test apps to those interfaces. If I am writing a Windows app or a Mac app I don't worry about testing it with every last detail of every printer that may be connected. OS designers design an API that abstracts printing, we call them print device drivers, and then we write to those abstractions. The people that write the printer drivers don't test their driver with every app that uses it. They will have an abstract test suite that validates that the driver works with their printer. Their tests will be made-up cases that exercise the bits that the driver writers are worried about. My recommendation for hardware based systems is work hard to define, and maintain a clean API, at the point where the SW talks to the HW. Write layers of code, firmware and drivers perhaps, that insulate apps from the HW, test the apps against fake versions of that API, under test control. Test the driver layer in the abstract in terms of does the "driver work" rather than "does an app work". It's not perfect, you may not trust it enough, but this is a MUCH more scalable approach to testing and a version of this is how, for example, the vast majority of testing in a Tesla is done.
@@ContinuousDelivery and @jimiscott Thank you both for your responses! I think I see what you are both saying: it is about having a good API design. The details of how it does things aren't necessarily important and, as you said, you might not trust it enough, but I agree it is much more scalable. jimiscott touched on this, but for the longer tests against email, database, etc., these are the implementations instead of the stubs, correct? These use the API designed, say in your example, by the OS, and call those abstraction layers to do the work. You use the stubs or mocks to guide the design of your API, then implement your API to use the API of the OS, manufacturer, or custom driver to call their code to do the work. With this you won't get full test coverage, but it will be good enough, and it is scalable using this technique. Giving yourself layers seems to be the key to all of this: use the layers as points to stub in at the HW boundary, and replace them with real calls using other APIs, firmware, drivers, etc., which may need to be manually tested depending on the situation (ADC, DAC, SPI, etc.). So I guess if I am finding it hard to test, or need too many stubs/mocks, that is telling me there is something in my design I should look at - perhaps I am missing a layer of abstraction. I really appreciate the feedback here! I am loving and hating TDD all at the same time, haha. It really changes how I think and write software.
You probably don’t need to be using mocks. They have a time and place, but it’s unlikely that you’ve correctly determined that your scenario needs one. It usually ends up being like using a broadsword to carve a turkey.
Unfortunately, if you work on things like firmware, mocks are, I think, necessary, and they have to do more than simple things, since you need to mimic e.g. HW device behavior. This can get a bit complex if you want to test your code, but I don't know another way to test it.
Think you nailed it with the painful mocking tests description, though in my experience people are instinctively more likely to blame the pain on mocking itself rather than treat it as a signal to review/redesign the code interactions. Have you seen this, Dave? What's the answer? Thanks :)
Also, talking about loops and simplicity: let's take a very simple example of a method that tries to find a user ID based on a given string. Say we want to return null if the string parameter is null or an empty string; otherwise, it calls the repository to do the find. We might need at least three tests: (1) if the string parameter is not empty, verify the call to the repository and ensure the result returned is the repository's result, unaltered (we would need to mock the repository return value); (2) and (3) have the same expected behavior, where null or an empty string should result in the repository not being called and the result always being null.

Now, the only difference between (2) and (3) is the value passed in. If we use a loop over the values, we only need one test; but what if we force ourselves not to use a loop? It seems redundant to have two test cases that differ only by input value. I've discussed this with every developer in my team and they agree; some still try to follow the rule of one test case per scenario, but eventually they find themselves copying and pasting, which makes them question the rule. Imagine having to do that for every 'findBy' (e.g. user, products, etc.): how much code duplication?

I've found ways to rewrite the code to reduce this problem, whereby I delegate the null-or-empty-string check to a separate function, so I only need to test it once. However, in some cases it feels redundant to do that every time, since it introduces more code, which discourages other developers from practicing this.
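Parameterized test features are one way to square this: each value runs and is reported as its own case, with no loop in the test body and no copy-paste. A sketch with NUnit and Moq; the UserLookup and IUserRepository names are made up to match the example above:

    using Moq;
    using NUnit.Framework;

    public interface IUserRepository
    {
        string? FindUserId(string name);
    }

    public class UserLookup
    {
        private readonly IUserRepository repository;
        public UserLookup(IUserRepository repository) => this.repository = repository;

        // Null/empty input short-circuits; anything else goes to the repository.
        public string? FindUserId(string? name) =>
            string.IsNullOrEmpty(name) ? null : repository.FindUserId(name);
    }

    public class UserLookupTests
    {
        // One test method, two reported cases, zero duplication:
        [TestCase(null)]
        [TestCase("")]
        public void ReturnsNullWithoutCallingRepository(string? input)
        {
            var repository = new Mock<IUserRepository>();
            var lookup = new UserLookup(repository.Object);

            Assert.That(lookup.FindUserId(input), Is.Null);
            repository.Verify(r => r.FindUserId(It.IsAny<string>()), Times.Never());
        }
    }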
I would like to ask a late question below this film. There are two schools of TDD: classicist, and London school (outside-in, mockist, with two loops of TDD). Could you make a film to explain the differences? Are they relevant to you? If you prefer BDD and acceptance tests, I guess you prefer the second school? Also, when I read about TDD and testing strategies, I see two contradictory pieces of advice: one, that you should test all user requirements on the core of your system, and only some happy paths end-to-end; two, that you should test all requirements from the outside (whether via UI or API), and all components with unit tests. I see here some differences between DDD and BDD. Do you agree that there are differences, and what is your preferred strategy? By the way, thank you for your materials and your latest book, "Modern Software Engineering".
Yes, I am a London-school person. Good idea to talk about the difference though, thanks for the suggestion, I will think about that. I think it works best if you start with an "executable specification" then do TDD underneath to evolve the solution that meets the spec. I don't see differences between DDD and BDD, I think that you can use BDD to reinforce DDD.
I have a function in a project that calculates scores for submissions to a contest. It looks at what a submission was scored by every judge, then adds the scores up. The setup for testing this function is a little complicated, because it involves creating all the entities and their relationships (create a submission category, create judges in the category, create submissions in the category, assign the judges to submissions). This results in multiple loops in the setup code. Should there be an easier way to write tests with complex setups? Is it fine to have complex setup if multiple classes and relationships need to be present for the function to run?
Based on the video and your description, I'm not sure why you'd have loops in your setup:

    var categoryOne = new Category("cat1");
    var categoryTwo = new Category("cat2");
    var judgeOne = new Judge("one");
    var judgeTwo = new Judge("two");
    var submissionOne = new Submission("John Doe");
    var submissionTwo = new Submission("Janet");

    categoryOne.addJudge(judgeOne);
    categoryOne.addSubmission(submissionOne);
    // This feels awkward and is probably a result of your design,
    // but I'm basing it on your description:
    categoryOne.assignJudgeToSubmission(judgeOne, submissionOne);

    // Check that submissionOne has judgeOne assigned
    // Check that categoryOne has submissionOne and judgeOne

Repeat for the 0, 1, 2 situations of judges and submissions to categories. He stated this in the video: 2 is perfectly acceptable for the "many" test. You shouldn't need a loop to verify two things are there.
Large setup code is usually a smell. However, sometimes it's the only way; have a look at the TestDataBuilder pattern for those cases, it might help. Happy designing.
This sounds like a good candidate for a fixture, basically a function that's re-used in multiple tests. Create a fixture for one category, returns categoryOne with a name, judge and submission assigned. Create a second fixture for categoryTwo. Then re-use the fixtures instead of duplicating the complex setup in numerous tests.
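Roughly like this, with names made up to match the contest example:

    // One re-usable fixture builds a fully wired category, so tests
    // don't repeat the entity/relationship setup.
    private static Category CreateJudgedCategory(string name)
    {
        var category = new Category(name);
        var judge = new Judge(name + "-judge");
        var submission = new Submission(name + "-submission");
        category.AddJudge(judge);
        category.AddSubmission(submission);
        category.AssignJudgeToSubmission(judge, submission);
        return category;
    }

    [Test]
    public void SumsScoresAcrossJudges()
    {
        var category = CreateJudgedCategory("cat1");
        // ...score the submission and assert on the calculated total...
    }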
@@chaunceyphilpot3986 so I'll have 2 categories, 2 judges per category (4), and 2 submissions per judge (8). The number increases with any one-to-many relationship.
Hmm, so today I wanted to see whether or not each buffer full of data in a file read straight from a filesystem was the same as the corresponding buffer of data read from an encrypted zip file (after being decrypted and decompressed). I guess this is an interaction with an external component. The test worked and, thankfully, passed. So this helps me have more confidence in the zip library I'm using and in my use of it, and it provides a tested sample/reminder of a rudimentary way to read a buffer at a time from a music file in an encrypted zip file using my "Archive" class. (When I actually go to do it I'll have to read an mp3 frame at a time.) This seems like a worthwhile test, I think, but... this is not really proper TDD, I guess, because it uses a loop? I suppose I could extract the loop into an external function, like "StreamsMatch(stream1, stream2)" or something, but I don't really have any use for such a function outside of this test.

--------------
[Test]
public void TestBinaryStreaming()
{
    var reader1 = folderArchive.GetBinaryReader(MusicFilePath);
    var reader2 = zipArchive.GetBinaryReader(MusicFilePath);
    var fileSize = reader1.BaseStream.Length;
    var data1 = new byte[16384];
    var data2 = new byte[16384];
    var bytesRead1 = 0;
    var bytesRead2 = 0;
    var totalBytesRead = 0;
    do
    {
        bytesRead1 = reader1.Read(data1, 0, 16384);
        bytesRead2 = reader2.Read(data2, 0, 16384);
        if (bytesRead1 != bytesRead2)
            Assert.Fail("number of bytes read were not the same");
        totalBytesRead += bytesRead1;
        if (!data1.IsEqual(data2, bytesRead1))
            Assert.Fail("data1 and data2 did not match");
    } while (bytesRead1 != 0);
    if (totalBytesRead != fileSize)
        Assert.Fail("not all bytes were read");
    Assert.Pass();
}
--------------

Too long... magic numbers... a loop... multiple Asserts... but it seems like breaking it up would be more work for little gain. I did make that IsEqual extension method to compare the contents of two byte arrays. Maybe I should ask for advice on SO...
Talking about loops and verification of the number of calls: suppose you had a feature that accepts a list of values and calls update if a record exists and create if it doesn't. The simple approach is to write two tests, one for the update scenario and another for the create scenario. But how do we know the implementation handles multiple records correctly? Would it be better to write one test that passes in, for example, 7 existing records and 3 non-existing records, then verify that update was called 7 times and create 3 times? If I wanted a more comprehensive test, to minimize mistakes, I could also write a loop and verify every call, to ensure the item passed in is the correct item for update or create. This could be extreme, but it has helped in preventing cases where a mistake was made through copy-pasted code or an overlooked wrong parameter; that would not be caught by an "X times" verification alone, since it uses an any() matcher for the parameter.
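For what it's worth, here is what that per-item verification looks like with Moq in C# (IRecordStore and Item are invented names):

    using System.Linq;
    using Moq;
    using NUnit.Framework;

    public record Item(string Id);

    public interface IRecordStore
    {
        void Update(Item item);
        void Create(Item item);
    }

    [Test]
    public void UpdatesExistingAndCreatesMissingRecords()
    {
        var store = new Mock<IRecordStore>();
        var existing = Enumerable.Range(1, 7).Select(i => new Item("e" + i)).ToList();
        var missing = Enumerable.Range(1, 3).Select(i => new Item("n" + i)).ToList();

        // ...run the feature over existing plus missing here...

        // Counts alone would pass even with swapped arguments, so also
        // verify each call with the exact item it should have received:
        store.Verify(s => s.Update(It.IsAny<Item>()), Times.Exactly(7));
        store.Verify(s => s.Create(It.IsAny<Item>()), Times.Exactly(3));
        foreach (var item in existing) store.Verify(s => s.Update(item), Times.Once());
        foreach (var item in missing) store.Verify(s => s.Create(item), Times.Once());
    }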
This, however, needs an explanation of what a unit is. A lot of people misunderstand the word "unit", thinking a class or a method is a unit, which leads to the overuse of mocks in tests. Instead, a unit should be the class together with its real dependencies; only the outside stuff, like code that accesses a database or makes calls to a server, should be faked or mocked.
I taught high school students basic programming a few years ago. I wrote tests to help grade the student submissions. But some crafty students decided to submit functions that always return the expected output for each corresponding input. I'm not sure if I should have given them an A or a D 😭
Oh dear. Mocks which return mocks. That literally was the last code I wrote yesterday! In this case, I was trying to mock the .NET framework "get registry key" and subsequently the "get registry value" methods in order to allow the actual code to find the "Default Browser". At the time, it felt overly complicated. I suppose I could create another class to wrap up that behaviour, but it would seem way too trivial.
That's what I would do, and "way too trivial" is exactly what you want from that kind of insulation-code. It will make your code more testable and less fragile, and even though this bit of code is pretty uninteresting, it will stop you having to do "interesting things" with mocks. 😁😎
@@ContinuousDelivery Well, I took your advice to heart, and I can say that I feel happier for it. That extra level of abstraction made the unit tests much simpler to read. It went from 389 lines to around 252. Deleting code always makes me happy. 🙂
Excellent video. I do feel that your advice to abstain from iteration in tests is misguided: say you're in a dynamically typed language and have a function that accepts an argument and should deal sensibly with it regardless of the type of the data (thinking mostly of handling scalars here). Iterating through examples of each type of data and the expected outcome seems quite reasonable there. I will also counter the no-iteration stricture with this: at times you may want to make relatively strong assertions about the execution time of a particularly busy piece of code (as measured and determined to be necessary - no premature optimization) as part of your testing. This too will often require iteration followed by elapsed-time evaluation, to assert that the code is executing as efficiently as possible and to catch any changes to the CUT that impact performance, since that's one of the characteristics of the function's interface.
2:30, I think this is the weak point of TDD - I'm not used to it, though. To me, it seems to damage the flow of thinking through the design, in trade for giving solid ground at each step. But I do reckon that if the thinking is too foggy, TDD may help.
I didn’t get what you see as weak. It’s not the fact that TDD guides the immediate design, is it? BTW, I say immediate because, for me, TDD helps in designing in the “small scale”, the class or function being written. It’s not practical to use TDD to design at the system scale (bounded contexts and alike). It’s TDDevelopment, not TDDesign after all.
@@antoruby The design of an f() has a bunch of lines that you keep in mind for some time - the algorithm. TDD may damage this thinking by slowing the process and losing focus, making you forget an idea, confuse some of them, and so on. I foresee this kind of issue - I may be wrong, though.
@@MrAbrazildo I don’t think you’re wrong in this statement; there is always some level of personal preference when it comes to “how I like to reason”. Taking the example of writing an algorithm, the time spent in the test will be way less than in the algorithm, especially if it has an easy-to-verify solution. E.g., I can quickly assert that sorted([3,1,2]) == [1,2,3], but writing a quicksort will take more time. TDD then helps in defining the interface of the function, but its internals are free to evolve. Anyway, there are cases in which TDD is a perfect fit, and others where it is only as fit as your familiarity with using it (we’ll never find a one-size-fits-all).
@@antoruby You are talking about unit tests, which test results from an f(). TDD is meant to write a test _before writing each line of code_! That's why it's supposed to help in developing anything. This may be true if your boss wants you in front of the computer all the time - leading to health issues, btw. I tend to take a walk and come back with the solution, or the path straight to it, most of the time. Sometimes, when I'm inspired, I'd rather directly write something that I think is promising. It starts as a mess, but has a direction, and some unit tests can fix the route. TDD may slow any of these alternatives.
@@MrAbrazildo Hmm, in my view you’re overthinking the “before writing each line” part hehe. I’d say that testing the internals of an algorithm is not good/productive. It can probably start with trivial cases (empty container, one value only, etc.), but then the internals of an algorithm require exactly the walking away you mentioned. And I don’t see this conflicting with using TDD. Besides that, at least in my work, I write much more of other kinds of code (not dense algorithms) that definitely benefit from the red, green, refactor cycle :)
Private methods are implementation specifics. Tests should focus on the public contract. If a private method is really important, then it will be covered by the tests of the public methods, when those public methods call the private ones. If all interface/contract behavior is covered and there are private methods that are not covered, then you have one of two basic cases: you've missed a behavior, possibly an edge case, and need more tests; or the code is dead and can be removed. However, sometimes I've had to work with some really nasty legacy code, with methods that are hundreds of lines long. Unit tests are next to impossible for these. I'll extract chunks of code into private methods, but it's still a mess to test. So I'll make some of those private methods package-private. This allows me to override/mock them to test the "larger" method, but I can also test them in isolation. I consider this a stepping stone: extracting a method and making it package-private is the first step in acquiring some control over a large method/class. Additional refactoring will probably be needed, but this provides a bit of a safety net to get started.
I noticed something about using a mock library. I recently started practicing TDD and I’ve done 2 projects so far. On the first, I didn’t use a mock library and I had to use a lot of interfaces in order to create stubs. In the second project, I used a mocking library and I noticed that all the interfaces disappeared because I could easily mock the class behavior directly. In your talk with Martin Fowler, he mentioned that one of the nice side effect benefits of TDD is it guides you to create interfaces (just like I saw in my first project), but when using the mocking library I didn’t need to. Is that a drawback to mocking libraries? Should one mock only interfaces even when using a library, or are interfaces not as big of a deal?
Maybe, perhaps, this is confusing because there are two overlapping things here: (1) an interface construct, such as a Java interface; (2) the concept of an interface, being the contract of a unit of code. The necessity of creating (1) is language-, tool- and context-specific. The conceptual thinking in (2) is about focusing on the contract and not overreaching into the internal implementation of units, which I think is where TDD, Dave and Fowler are really coming from. IMHO the thinking in (2) is consistently important for managing and minimizing the overall complexity of your code by defining reasonable units; the actual creation of an interface (1) is a lesser concern, often strictly unnecessary where there is a single implementation, although there is potentially some value in consistency and self-documentation in a particular situation. Hth :)
It is stated that TDD helps you de-couple the test from the implementation code, but actually in order to isolate the test you have to use mocks or stubs to stop the code from interacting with external interfaces such as other components or database. So, one has to be aware of which DB calls are made in the depth of the code under test. So how do you actually avoid implementation awareness?
My preference is to isolate the core of my code at the edges of the system like this: write your own adapter that insulates the body of your code from the details of "interacting with external interfaces". Your abstraction will almost always be simpler, because we don't use every feature in every case. Unit test, with mocks if you like, to this simpler interface. The code in this adapter is usually pretty generic and specific to the tech it integrates with, so you can cover that with a few generic, rather than case-specific, integration tests.
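In sketch form, with invented names:

    public record Order(string Id);

    // The body of the code sees only this narrow, domain-shaped interface...
    public interface IOrderStore
    {
        Order? Find(string orderId);
        void Save(Order order);
    }

    // ...and one thin adapter owns every technology-specific detail.
    // Unit tests mock IOrderStore; a few generic integration tests
    // cover SqlOrderStore against the real database.
    public class SqlOrderStore : IOrderStore
    {
        private readonly string connectionString;

        public SqlOrderStore(string connectionString) => this.connectionString = connectionString;

        public Order? Find(string orderId) { /* SQL specifics live here */ return null; }
        public void Save(Order order) { /* SQL specifics live here */ }
    }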
@@ContinuousDelivery My question was a bit different. I mean that if your code fetches some data from the database, for instance, then you need to mock that internal method so it returns the static "db objects" your code under test expects. If someone refactors the implementation and, for instance, renames the mocked internal method, then your test's mock also needs to change. This means the test is aware of the implementation details.
@@yonisim30 I don't think that your question is different. In the case you describe, I'd write a thin layer of code that read the stuff from the DB and translated it into a form more useful for my code. I'd test most of my code around faked versions of that translated input. The "thin" layer of translation is more generic and the only bit of code that is then coupled to the detail of the DB implementation. So for most of my code I'd mock the input of the translated stuff, and I'd do some basic "contract testing" against the part of the code that read the DB.
@@ContinuousDelivery I understand what you mean, but I wasn't aiming at the DB-layer implementation; I was talking about the logic implementation. Let's consider the following method:

    def perform_action(param_a):
        # do some stuff
        return return_val

As a test, I shouldn't know what's going on inside the "do some stuff" block. But let's say that inside that block there is a call that writes something to the audit log, which may be maintained in a log file, a database, or an external service. As I understand it, my test shouldn't even know about that auditing line, but I have to mock the audit service, because if I don't, the auditing action will probably fail - there is no external audit service or database in the test context. Maybe you can say that the auditing is a side effect and shouldn't be in the tested code in the first place, but there can be a whole lot of other examples, such as fetching objects from the DB or referencing an in-memory variable. All this is internal logic which the test should be blind to, except where it is an integral part that influences the behavior of the specific scenario. You can often find yourself mocking or creating data that isn't really of interest to the specific test scenario but is needed by the code under test. Would like to know what you think of that. Thanks a lot, I really enjoy and appreciate your lovely content.
If a function should call an external interface, then this call is not an implementation detail; you must know it in advance. The implementation detail in this example is the implementation of the interface: in memory, DB, service call or whatever. But it is not the fact that it uses an external interface to do its logic.
Somehow, in reality, we have to decide whether to write very specific test cases or to combine several scenarios, which results in more complex tests but less repeated code and fewer tests to maintain. Also, test simplicity really depends on the scenario and how much coverage you want. For example: if you have a requirement to remove an item or object from a list, a simple test would just ensure the size decreased by one. The test passes and everything looks good, but is it enough? What if the item removed wasn't the correct one? The developer could have written code to remove the first item, and the test would still pass. A better test ensures the correct item or object was removed and the other items were not. Additionally, we might want to cover the scenario where the item removed was the last one, and what we expect the result to be (empty list or null). See the sketch after this comment.

Another, more complex example: you have a generic function that accepts a generic interface. If many objects implement this interface and we want to test each scenario, we would need to write many duplicate tests of the same thing. Alternatively, we could mock a list of items that implement the interface and run a loop to test all of them, avoiding many similar test cases. In this case we're sacrificing test readability and failure detection for less code. To improve debugging and know which case failed, we log information in the loop identifying which object is being tested; it's similar to having a list of test cases and identifying which one fails, except we have one test case that covers multiple sub-scenarios identifiable in the test logs.

This may not be good practice, but for teams that write a lot of tests, our test classes are huge, with far more test code than implementation code; and if we separated each scenario into its own method, we would need to copy and paste the mocking of data, the mocking of return values for dependent function calls, and so on. In many scenarios we find that making use of some loops and if/else is preferred. We made these decisions together as a team and we didn't restrict how anyone should write tests; we do encourage splitting scenarios as the better practice, but most developers prefer not to split too much, which would otherwise result in many lines of code and repeated mocked data and mocked returns. We even tried moving some of it into a setup method using a 'before' annotation, but it sometimes confuses developers because it is not immediately obvious when reading the test case alone.

All in all, I wouldn't call more complex tests wrong, but I would definitely prefer simple tests whenever possible. I'm also someone who prefers good coverage, because having tests is like having insurance: you can have simple coverage or very comprehensive coverage. I cover to the level at which I'm confident my code will work across many different scenarios. Questions I used to get are "how confident are you with your code?" and "how do you measure quality?". With test coverage, you can confidently state which scenarios are covered and that the code will not fail in those scenarios.
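In code, the remove-an-item point looks like this (NUnit, with an invented Playlist type): a size-only assertion would pass even if the wrong item were removed, so the test pins down exactly which item is gone:

    using System.Collections.Generic;
    using NUnit.Framework;

    public class Playlist
    {
        private readonly List<string> items;
        public Playlist(IEnumerable<string> items) => this.items = new List<string>(items);
        public IReadOnlyList<string> Items => items;
        public void Remove(string item) => items.Remove(item);
    }

    public class PlaylistTests
    {
        [Test]
        public void RemovesTheRequestedItemOnly()
        {
            var playlist = new Playlist(new[] { "a", "b", "c" });

            playlist.Remove("b");

            // Not just "one item fewer": the right item is gone, the rest survive.
            Assert.That(playlist.Items, Is.EqualTo(new[] { "a", "c" }));
        }
    }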
There could be scenarios you've missed, and when you or anyone reports the issue, all you need to do is add the coverage.
Well, my experience is that the more complex test can sometimes feel like a shortcut when it isn't. Tests like this are usually much slower to execute because they are much more complex to set up; as a result they are less likely to be able to run in parallel with other tests, and will be slow to initialise. My experience is that a focus on very short, very simple tests, even if it means executing the same code path to get to the point of the test, brings faster, more atomic tests which shout out the reason for any failure. I have never seen a team with tests like this complain about test performance, but I see teams with the kind of tests that you describe complain about test performance all the time.
@@ContinuousDelivery We have discussions about performance which touch on unnecessary tests, which could be these complex scenarios; but if we ignore complex scenarios, how do we ensure our code is properly covered, and be confident that it works and, more importantly, doesn't break if someone decides to refactor? Additionally, how can it help other developers taking over the project understand the requirements clearly? I was once asked by managers, "how can I measure your code quality?" I got a good insight from a scrum trainer: by having scenarios covered in tests, we can confidently say that our application is covered for those scenarios and will not fail under them.

Let's take the example below (I can't recall the more complex scenarios now). Imagine we're using a document-based database and we receive an event that is supposed to update the name of all matching IDs in the subcollection of a document. In our test, we would need to mock the document containing the subcollection and ensure that ONLY the matching IDs in the subcollection have the name updated. A simple test would assert that the name of the matching ID matches the expected name. Typically, we would also write a negative scenario to ensure the name is not updated if the ID doesn't match.

Now the question is: how do we test the case where the subcollection contains more than one item matching the ID? One way is to use a for loop in the test, which simplifies the problem. Alternatively, we could break the name update into a separate function and verify that it was called x number of times (if there should be 3 matching IDs in a list of 5 items, it should be called 5 times); then technically we no longer need a for loop. However, how do we ensure those 3 executions happen only for the matching IDs? To make the test more specific, I'd again write a for loop and verify the update function is passed the expected parameter when the ID matches and is NOT called when it doesn't. This also involves if/else in the loop. Now we're covering the positive and negative scenarios in one test; if we wanted to separate them, we'd have very similar code duplicated across two tests, with the same mock data and so on, the only difference being the verification or assertion. In a written test the preparation or mocking can involve several lines of code, and duplicating them seems redundant just to separate more specific scenarios. In this case the team agreed that combining them into one is better, understanding the consequence of sacrificing readability.

In terms of performance, back to the question of how much we should cover: we don't force any developer to write complex coverage, as long as there is some coverage, because everyone thinks differently. My goal is simply to eliminate manual testing as much as possible, so that my code is always in a releasable state that I can confidently push to production without any manual test. My current project still has to perform manual tests on the way to continuous deployment. Most of our developers first test in their local environment, then push the code to master, where it is deployed to dev for another developer to test.
Personally, I do not like the idea of having to test locally; I'm too lazy to bring up my local environment, deploy to it, set up manual test data, etc. I prefer that my tests cover everything possible, so that when I push the code I can confidently tell the other developer who will be testing it to go ahead without me first verifying it works as expected. I've done this several times and I'm pretty satisfied when it works; when issues are reported, I know exactly what scenario I missed, so I add the test coverage, fix the problem, immediately push the code, and ask the dev to retest without verifying the change manually. I want the confidence that the tests have things covered and that I can trust them.

I often refer to test coverage as an insurance policy for my team, and I've been pushing for continuous delivery; but to achieve that, my question is: how can we be confident in our tests if the coverage is not good enough, and what is "good enough" if we focus only on simple coverage? We often face bugs despite simple coverage. We have some developers with very good testing skills who can think of many scenarios and often find new surprises; but why do we need those manual steps if we could cover them with automated tests? Many developers still think we should have fewer tests and less coverage, but then they can't answer whether we would need more comprehensive manual testing, or how we can achieve continuous delivery.
I'm involved in Blazor websites and have found that designing for unit tests makes the unit-test code for Razor pages larger than the website code itself, and making all the Razor code unit-testable obfuscates the Blazor page code to the point where it's hard to understand. I don't mind having component-based unit tests, but there are people in my organization who want all the code unit-testable, with almost religious zeal. It's difficult to be agile and get stuff done when unit testing doubles the code size and doubles the execution time. I find it never finds bugs anyway, as the tests are made to meet the code's expectations. Another pet peeve of mine is that code reviews become exercises in refactoring, and there's always someone who thinks everything can be converted to LINQ queries of the smallest size. These code reviews tend to inject bugs into the code, and the reviewers never find real bugs because they are only focused on the code constructs. Thoughts?
When I have code that does nothing more than coordinate between two mocks/stubs/fakes, in my mind testing that code is a waste of time; basically I'm just testing my mocks/stubs/fakes at that point. If someone is advocating that you replace readable, functioning code with terse LINQ queries, I'd say they are in the wrong. However, there are times when I would consider a LINQ query the idiomatic way of expressing an idea in C#. One thing to watch out for with LINQ is that if, outside of testing, the LINQ runs against a data source other than in-memory objects, then unit testing that code can give a false sense of security. The semantics of LINQ differ at runtime depending on the data it operates on.
May I ask what these unit tests look like? Are there unit tests for the controllers/presenters asserting on the data map? Are there separate unit tests on the Razor templates that parse the output HTML and assert on it, based on input map data? Are "page objects" being used?
I would like someone to answer this question if possible: in which layers of an application should I do TDD? If I am in a Spring Boot application, for example, should I test controllers, repositories, configuration...? Thanks.
Realistically, all of them, the idea is to use TDD to design the code that you write, whatever that code is. You may need to adapt your designs and your testing to make that practical, but my default starting point for any code I write is "how will I test this".
The most DEPRESSING thing about TDD is when you are given code that already exists and is basically untested.. It becomes an uphill battle to convince people that the untested broken stuff they already have is rubbish
First off, this was fascinating, so thank you. Secondly, there are obviously a lot of benefits to TDD, but there are downsides as well. I do think TDD makes delivery slower, which might not be viable for every product/company; TDD works best when the deadlines are quite large or a little more relaxed. I like the idea of designing your code and really thinking about it before development.
TDD means more typing at the point of production, but estimates on bug reduction range from 74% up. So it is quite significantly faster overall. The DORA reports say that high performers on scores of Stability & Throughput (which are highly correlated with people practicing CI, and that is usually linked with TDD) spend 44% more of their time on new features than people with average scores. So even if you do type a bit more (which is arguable, because the code you write is simpler when you get good at TDD usually) then the time you spend doing that is paid back many times over in the time you don't spend diagnosing and fixing bugs.
I have a question: do you use all tests written during implementation in CI, or are there some tests not worth running in CI? If so, where is the border beyond which a test is worth running regularly?
I'd personally just run everything. If the tests take too long, make sure they can run in parallel and run them in parallel. The extra hardware is not expensive considering how valuable instant feedback is.
It depends on your system. If it is big and complex, you probably want to create a deployment pipeline, which is an effective way to optimise more complex collections of tests. The test, I think, is that you need a commit stage cycle to take less than 5 minutes. So if you can run every test that will give you a definitive statement on the releasability of your system in under 5 minutes, then great. If not you need a pipeline!
The barrier to TDD for me is that TDD seems to operate on the idea that the programmer writing the code doesn't ever actually RUN it to see if it works or not. By the time I finish a function, or a module, class or method, I've run that code dozens, if not hundreds, of times during the edit/run/repeat cycle. Testing is built into, and is an integral part of, the design process. And since I am always working off specifications I've gotten from the user during numerous design meetings, I know what it is that I am trying to accomplish: namely, what the user wants to accomplish. That's why I'm writing the code in the first place. I'm not just writing code and then seeing if I can find a user who might use it to solve some problem they have. All code is written to solve a user problem that we are aware of before we start writing code. We aren't writing aimlessly with no idea where we are trying to go. We have a design in mind, a design we got from a user describing their workflow or their process. We are writing code to implement that process or workflow, and we write, compile, run, over and over again until it produces the results it needs to produce to solve the problem.

TDD seems to assume that programmers are only writing code from a strict specification that was given to them, which they had no part in designing in the first place. But in my experience, there just are not very many programmers like that. We tend to be involved in the design phase, with the users, from the beginning, and we are as aware of the needed outcome as the user is before we even begin coding. So treating TDD as a step separate from the implementation just never seems to materialise. At least in my experience.
@@GDScriptDude Agreed. There is a segment of programmers who are little more than "coders". _"Here, take these instructions and write the code."_ Such programmers really are divorced from the bigger picture. But I think that only applies in "big corporate" IT departments. I don't see it much in consulting, or in small development firms.
Shouldn’t acceptance criteria be agreed before a user story is worked on? Otherwise you’re going to be writing something the user doesn’t want. I also don’t really think the user cares about the implementation. With TDD you are running the code to see if it works; that’s what the tests do! How else are you running it? Through a UI and a debugger? That just slows you down.
"The barrier to TDD for me is that TDD seems to operate on the idea that the programmer writing the code doesn't ever actually RUN it to see if it works or not. " Running the tests does RUN the code.
I was expecting to find TDD masturbation in this video - I was told that's what modern TDD looks like - but actually found common sense. Where I work (a major telecom company) developers are devoid of common sense; they design interfaces with unit tests in mind, to make almost every class mockable (in their view it makes testing easy). It ends up with abstract factories and other nonsense everywhere, and the tests are code-change detectors, tightly coupled to the production code. It makes refactoring such code close to impossible and very expensive. I wish people were taught to use common sense more, rather than design patterns.
What are the three things?... I'm a long way in and have not heard the three points. So? No, no: negative conditions must be tested and, by extension, implemented in the production code.
Once upon a time I wanted to start a new Java project with TDD, but then I discovered that private methods cannot be tested by normal means. That means I can't use TDD for developing complex algorithms without exposing their internals, at least within a package, or using reflection. How do you think one should approach testing implementation details?
th-cam.com/video/KyFVA4Spcgg/w-d-xo.html Maybe this can give you some insight into it. You have to separate the domains of complexity in your algorithm into testable pieces, so each is simple. It also will probably involve dependency inversion/injection.
You should not be testing private methods directly. You only need to test the public interface to a unit: A return value or thrown exception, a state change/side effect, or an external interaction. If none of these things are observable by a test, then whatever private method being called has no effect on the behavior of your application.
Think of it this way: You have an interface with two methods. There is a shared calculation between the two methods in implementation, so you factor that out into a private method that is called by each. As far as your interface is concerned, that private method does not exist. You write tests for your public interface as if you are entirely ignorant to the private implementation. If that private method is modified and is no longer correct, your tests of your public methods SHOULD fail. If they do not fail, then you do not have good tests, or your private method is unnecessary because it has no effect on the behavior of the system.
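A small sketch of that situation (C# here, but the idea is the same in Java; all names invented):

    public interface IPricing
    {
        decimal NetPrice(decimal gross);
        decimal Vat(decimal gross);
    }

    public class Pricing : IPricing
    {
        public decimal NetPrice(decimal gross) => gross - Vat(gross);
        public decimal Vat(decimal gross) => RoundToCents(gross * 0.2m);

        // The shared calculation is invisible to the interface, but if it
        // breaks, the tests of NetPrice and Vat both fail; that is all
        // the coverage it needs.
        private static decimal RoundToCents(decimal value) => decimal.Round(value, 2);
    }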
@@gabrielvilchesalves6406 Great video, thanks. There are algorithms that are complex and within one domain. I will give it some more thought, but it simply breaks my way of thinking about encapsulation - about what should be private in a class and what should be separated into a separate class as a public method.
@@pchasco I sometimes have a single public method that uses five or ten private methods, each doing important and complex operations. I will think some more on this, since TDD is an established way of doing things, but it seems to be forcing me to change some ways of thinking about how software should be written. I will have to review my code and see it again. Thank you for your advice.
Would you write a loop in the test? I don't think you would. Therefore, I don't think that property-based testing violates the cyclomatic complexity of one in the test. The underlying tool probably has loops, but they are not part of your test. If you have a loop or condition in the test, then it's another thing.
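For anyone curious what that looks like in practice, here is a minimal property-based test sketch, assuming the jqwik library on the JVM (the property itself is a made-up example). The test body has a cyclomatic complexity of one; the generation loop lives entirely in the tool:
--------------
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ReverseProperties {
    // jqwik generates many input lists and runs this method for each;
    // the test itself contains no loop and no condition.
    @Property
    boolean reversingTwiceRestoresTheOriginal(@ForAll List<Integer> original) {
        List<Integer> copy = new ArrayList<>(original);
        Collections.reverse(copy);
        Collections.reverse(copy);
        return copy.equals(original);
    }
}
--------------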
I still have a hard time wrapping my head around this. If I have a WPF application that only does CRUD operations against a WebApi, what is there to test then? And how?
Your WebAPI clients should implement interfaces which are injected into your WPF components, preferably via constructors. The reason for this is that you don't want to be making network, database or any system calls in unit tests. You create mocks of the interface in your unit tests and pass them into the WPF components you want to test.
@@awmy3109 Yes, I have understood that I shouldn't unit test the database traffic. Then it's not unit testing. I have an httpClientFactory and I'm injecting through an interface in the constructor of my ViewModels. So, what's left to test? I do some unit testing of my base classes for ModelWrappers used for validation, but what more? What I have noticed is that when I introduce a bug while developing it further, it is usually a xaml binding. Or that I forget to implement a property on my Wrapper class that I've just implemented on my Model class and database. Or forget to register an interface in my Autofac BootStrapper.
@@tobiasjohansson1256 If you've done all that then you are fine. Honestly, there isn't much to unit test in the UI. Maybe end-to-end testing might be better, but it will definitely take more time and resources.
First of all, it's almost impossible not to do test-driven development, because the first time you run your program it's probably going to give you an error message. So do I write a test to tell me if there's an error or not, when I can see it? I suppose I could turn on error messages. I don't have a problem with doing it. It's coming up with tests that I find hard to wrap my head around. It feels like asking Harlan Ellison where he gets his ideas. If I could come up with tests I wouldn't need to, because I'd be able to come up with the next Amazon. I think I just need to start out with doing more testing than I am. Then maybe I'll work my way up so that I can write a test for something I'm not sure what it's going to do. A lot of programs don't have simple outputs that you can test. Some just output information. You're trying to get to the information, and you will know what it is when you see it, because it will look like the sort of thing you expected to see.
I don't even know unit testing. I should probably learn that first, but every tutorial involves downloading Unit.. and setting up who knows what to do who knows what for hours. I wish I could just learn the vanilla language testing functions. Like, couldn't I start with assert, try/catch and things like that?
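You can: before adopting any framework, it's possible to test with nothing but the language itself. A minimal no-framework sketch in Java (the checks are arbitrary examples):
--------------
// No test framework: just a main method, plain conditionals and exceptions.
public class VanillaTests {
    public static void main(String[] args) {
        check("empty string has length 0", "".length() == 0);
        check("parseInt reads decimal", Integer.parseInt("42") == 42);
        try {
            Integer.parseInt("not a number");
            check("parseInt rejects junk", false); // should not get here
        } catch (NumberFormatException expected) {
            check("parseInt rejects junk", true);
        }
        System.out.println("all tests passed");
    }

    static void check(String name, boolean condition) {
        if (!condition) throw new AssertionError("FAILED: " + name);
        System.out.println("ok: " + name);
    }
}
--------------
Test frameworks mostly add discovery, reporting and better assertion messages on top of this idea.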
There is a lot wrong with mocks: using them, you assume you actually know everything about the thing you mock, including its bugs :) And if you can't test against the real thing, for example if you can't install a test instance of the service you want to mock, you are in a lot of trouble.
Wrong design if a mock returns a mock, you say? If I have a Java component using a RestTemplate to get some response with data from an external service, and I want to test that my component acts upon the external data, I need to mock the RestTemplate, the response, and the getBody method on the response, to return the data I want to test. How can this be done differently?
Wrap the RestTemplate in a simpler HTTP client design of your choosing, with concrete response classes if necessary. "Don't mock what you don't own" might be a useful search. Hth
@@danm6189 That is one way to do it, but when I unit test my code, I want to make sure all of my written code is correct, because that is something I can fix if it isn't. If I introduce a wrapper, I may introduce a bug in that wrapper. But mocking standard Java functionality should be OK, even if the structure does imply situations where a mock needs to return a mock.
@@JohnnyNilsson83 Yeah, myself I'd probably integration test the client against a basic HTTP server using something like WireMock, but I agree that "never say never"; these things are always contextual :) One reason I suggested the split is that I'm not the biggest fan of the RestTemplate interface, and it sounds like you're already dealing with parsing responses in your component - myself I love a bit of divide and conquer :)
@@danm6189 I haven't really found any option for the RestTemplate. Do you have one I have missed? But yes, we use gson to parse the returned json. The RestTemplate is wrapped in an integration client that handles all the integration details with url, api keys, headers etc. But in the spirit of OO and DI, we inject the RestTemplate instead of creating it within the integration client.
@@JohnnyNilsson83 I'm assuming this is the Spring RestTemplate? Java stacks were mainly Apache HttpClient back in the day, and there's a newer standard Java async client, etc., fwiw. I'm not against using RestTemplate in itself; I just like to limit its reach into my system, keeping use of broad or slightly ugly interfaces to small, low-logic adaptors, e.g. an HTTP client at the edge of the system. I'd try not to deal with the specifics of parsing/validation etc. within the same unit that uses the RestTemplate itself. The benefit of that is simplifying the testing and handling of specific requests and responses, increasingly useful as the system gains multiple outbound calls to different endpoints, or more complex queries where the HTTP basics are relatively similar, which seems the norm in most cases I see. Anyhow, again: if you're happy with your setup and are not feeling pain from it, and your tests are readable and simple, then it sounds like it's inherently working for you, so maybe no great value here. The rule of thumb is still valuable though, as I understand it, which is to take some time to reflect when you're returning mocks from mocks - I've defo done it before and will do so again, but usually I will end up moving code around to make things cleaner/simpler :)
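A minimal sketch of the "wrap it in something you own" suggestion above (assuming Spring's RestTemplate; AccountGateway and Account are invented names). The component under test depends only on the small interface, so its tests stub one method and no mock ever returns a mock:
--------------
import org.springframework.web.client.RestTemplate;

// A plain response object the rest of the code works with.
class Account {
    public String id;
    public long balance;
}

// The narrow interface the rest of the code depends on.
interface AccountGateway {
    Account fetchAccount(String accountId);
}

// Thin adapter: the only place that knows about RestTemplate.
class RestTemplateAccountGateway implements AccountGateway {
    private final RestTemplate rest;
    private final String baseUrl;

    RestTemplateAccountGateway(RestTemplate rest, String baseUrl) {
        this.rest = rest;
        this.baseUrl = baseUrl;
    }

    @Override
    public Account fetchAccount(String accountId) {
        return rest.getForObject(baseUrl + "/accounts/" + accountId, Account.class);
    }
}
--------------
Unit tests stub AccountGateway in memory; a couple of integration tests (e.g. against WireMock) cover the adapter itself.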
I just wish my lead developer would stop insisting that he needs to test his function that accepts 5+ parameters, 4 of which have nothing to do with what the function actually does. Then he tells me that it's too trivial to separate the actual core part of the function, without all the extra parameters, and write a test for specifically that. We need to keep things high level, because otherwise we're adding too much work for nothing. 🤦 I even had a lot of very non-trivial examples of what a good unit test actually looks like, and showed how they tend to force you to break down your highly complicated logic into simpler "units" so you don't have to test a wider space of inputs than you have to. Yes, we could include a Properties object as a parameter because we're trying to read a CSV-formatted list from it, but how about we just have a function that takes a string and parses the CSV format, and test that? It shouldn't matter where the string is from. But that's getting too into the weeds.
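For what it's worth, the extraction described there is tiny; a sketch in Java (names invented; uses Java 16+ for toList()):
--------------
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class CsvList {
    // The core logic: a pure function from String to List, trivially testable.
    public static List<String> parseCsv(String raw) {
        if (raw == null || raw.isBlank()) return List.of();
        return Arrays.stream(raw.split(","))
                     .map(String::trim)
                     .filter(s -> !s.isEmpty())
                     .toList();
    }

    // The thin wrapper keeps the Properties plumbing out of the logic.
    public static List<String> fromProperties(Properties props, String key) {
        return parseCsv(props.getProperty(key, ""));
    }
}
--------------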
The greatest contribution of TDD is obviously reinforcing the concept and advantages of interfaces. TDD forces you to think in terms of interfaces since once you get started with TDD, you run into the need for mocking (to simulate expensive/external interactions) which is best done with interfaces - enter DI and IOC. To do this properly, you have to abstract out interfaces if the codebase doesn't have it already, or improve upon existing interfaces. That's the biggest payoff using TDD.
Certainly one of the benefits. Personally I think the biggest benefit is having no fear of refactoring code. We all know that over time code changes and gets added to and becomes difficult to read and understand. With unit tests you can refactor code to make it cleaner without fear of breaking it.
What wasn't mentioned is that testing interactions using mocks couples the test to the implementation. This violates the black-box testing principle of only testing the interface. A better approach would be to test state changes through the interface on the affected object: Kent Beck-style TDD. He hardly ever uses mocks; he uses real objects as they would be used in code.
Respectfully, I disagree with the blanket categorisation, in the practical, useful sense. To me, use of mocks does not necessarily couple tests to implementation. If I define a unit that calls one of two functions, pass() or fail(), and I test that unit, checking that it behaves as expected for given input conditions, calling pass or fail as expected, I am testing that the unit as designed within the system does its job and fulfils its contract. For me, the contract is the useful bit of the "interface". This to me is a completely different situation from asserting on a unit making multiple specific calls to a broad interface we do not own, e.g. calls to a library, where we might be wastefully over-constraining the implementation. Again, with respect.
Totally agree. The big problem is creating interfaces for every single class, as the London school advocates. Your refactorings will be limited to what that structure allows, so good luck with refactors more complex than extract method. Interfaces should be defined at the boundaries of the process or the module.
True, developers are often victims of those "showing the way". Amen!
For a while I used TDD with small unit tests for a web app, but I found it was too much work to test individual smaller units. Most of the units worked fine by themselves; the bugs were more often in the chains between them. A date formatting function might work perfectly in isolation, but when called via my html view the input had been formatted differently by a viewhelper. I also ended up changing the units quite a bit, so I often found that the unit I had written tests for was no longer needed, as it had become something else entirely. What I do now is primarily write end-to-end tests where the input is api data and the output is an entire html page, where I expect the relevant content.
On type 2, the change-state test, I sometimes find myself asserting the state before and after the method call. Is it unnecessary to assert before? I loved the video. Thank you.
@@hansoloo98 You won't always get 3 lines, of course. For example, if you're testing a guard clause, depending on your test framework you might be able to write it in one line. The point I was trying to make was: don't write a lot of lines of setup and then a lot of assertions in one test. If you have too much setup then you probably need to rethink your design, and if you have too many assertions then you probably need to split the test into separate tests for the separate behaviours you care about. Specifically regarding the question, you don't assert your initial state, you set it in the test.
Not really: if the answer is wrong, i.e. doesn't meet the expectations, the test will fail. There is a danger that the question is wrong, but that is always true. If the "question" is "what should my code achieve?" then it may be wrong, but if we get that wrong in a test, it means that we have misunderstood the problem we are trying to solve, and so will always have that misunderstanding however we capture it, and so will encode the wrong solution. At least this way the code definitely solves the problem that we think we have. That is a step forward.
How do you do TDD if your software is just a glorified database? I.e. your code is 98% reading data, writing data and glue, with almost no nontrivial logic? Is it even the right tool for that kind of software?
I find your example quite confusing: drawing a line is not something I can write a unit test for! It is 100% a side-effect. It is fundamentally impossible to test this without a human (or AI?) and a screen.
I don't like TDD because it slows down design itself. Without TDD, I can test designs by writing code and discover some hurdles - maybe the library I intend to use doesn't have enough functionality or doesn't work in this particular case? Maybe the communication with other components will be insufficient or faulty? Maybe some other component that I intend to use has some forgotten code that makes the design useless? Maybe some part of the code needs to be overly complicated because of interactions with other components? Those things come out only in the implementation phase. You can write tests independent of the implementation, but you cannot write tests that are independent of the design. The problem is that if I write tests for a design and the design is wrong, I will only discover it when I start the implementation, by which time I will have already spent a lot of time writing tests that reflect the design.
As you’d probably expect, I see this very differently. TDD is ALL ABOUT DESIGN; the tests embody my design choices, leaving me able to change the implementation detail. I don’t know how to get faster, clearer feedback on the quality of my design than TDD gives me.
One suggestion if you don't mind: maybe stop animating things on screen for so long while you're speaking. It can distract the mind from following your speech. For example, those arrows that keep flashing towards the types of unit test. They could have flashed once and then stopped, or not flashed at all. I know it looks pretty but I find it distracting... maybe I have ADHD or something, IDK.
For those who are concerned the video starts at 8:40. The rest is Dave's usual effusive surrounding. Oh and thx Dave for the vid.
I will borrow this phrase "wishful thinking driven development" for anyone that doesn't do TDD. They are also making a choice.
Happy to help you for an hour or two when I am free, as community help. Let me know if you want something.
It's annoying at the start, but pays off over the lifetime of every project, especially if it's not just a simple script.
This channel deserves way more views. Way better than most of the channels out there.
Wow, so precise and informative! As a senior dev I can relate to everything that's being said. Thank you very much!
Thank you for all of your outstanding content! One small thing. Please make the font of your code examples bigger. This would be a great favor for people watching your videos on the phones, like me. Again, great material like all the previous ones 🙏
TDD clicked for me when I started writing them consistently. Once I caught my first bug in the pipeline from a unit test break, I was hooked. Now my test suite is mature and secure.
Your explanations help me a lot.
When you're testing write-back cache modules with cache-line eviction (in Verilog), you really, really want to run through a lot of tests checking for race conditions: (for i = 0; i < 16 clocks; i++) { write, wait i clocks, read } and so on. But this is because correctness isn't just a function of inputs, but a function of the sequence of inputs over time, where timing is absolutely critical to triggering edge cases.
Would love an example video on when mocks are used wrong and how one might consider redesigning. I do run into scenarios where my tests for the return of a mock
You've probably seen it but that video now exists! Dave released a video a month or so ago called "Don't Mock 3rd Party Code" that does go into that.
Great material; I really liked the summation. TDD implemented well will shout at us when we are doing something wrong; we just need to listen to what it is saying to us.
Great video. Love to see you do a live TDD session so that we can see exactly how it is done correctly.
Try this: courses.cd.training/courses/tdd-tutorial
@@ContinuousDelivery Thanks will do.
This video arrived in my feed at the precise moment. Thanks Dave. And happy holidays!
Great as always
Because of your videos I started with TDD and can't imagine how I ever wrote code in a different way...
2:13
2:40 what is TDD really about: thinking about how our software works from the perspective of users
3:25 (key) point:
3:39 meaning of the interface mentioned:
4:58 simple example:
6:35 : problem of if we write code first
6:58 if we write code first, *we are going to expose the detail of our thinking in the interface to our code*
7:57 important technique:
8:27 what we are really interested in:
8:40 3 difference types of TDD
9:08 type 1: return value or exception
9:25 example
9:40 common mistakes:
10:06 **TIP** : the time for testing different input
10:58 type 2: test state change
11:21 problem with this kinda test is more to do with *iteration*
11:33 *TIP:* don't iterate in a test
12:12 type 3: 12:24 tests that validate that your code interacts with other code
12:30 way to test such things: insert sth
12:38 confusing terms: stub, mock, spy, fake
13:20 code example
13:40 fake
14:02 spy and mock
14:50 mock
15:46 trouble of using spy and mock
I enjoy watching Mr. Dave at 1x
Thank you for the content
One of the few SE channels i watch at normal speed lol. Tim Corey, for example, is always played on 2x speed
Hello. I agree with you on the points you make in the video. I have noticed newer developers get caught up on the complexity of the unit test. The rule of 'no loops' in a unit test is a good one.
Fantastic Video. I really got some good input of how I should approach TDD in my own projects :)
I’m so glad I found your channel. Thank you for all the great content!
You are very welcome!
Testing graphics seems rather difficult.
I think this is part of the reason game development is behind on good practices.
It would be fascinating to see you interview a game programmer who would answer obvious questions on practical side of TDD in game development.
I was really curious about how that line should be tested (other than by visual inspection) without using pretty much the same logic as the code that should be creating it. Like, am I supposed to just assert the state of some pixels given an input, maybe? I'm not sure how feasible that gets with anything more complex.
@@user-sl6gn1ss8p yeah. Testing geometry and physics is rather difficult. It's hard to know what to watch out for until you've seen it in person, so errors are harder to predict and avoid
Best video I've seen from you so far, and the rest are really good too! The graphics make the content simpler to digest (just try to keep the font size mobile-friendly if possible), and I love the playful animations!
The common mistake of adding tests that don't add new behavior was a great highlight, and I will probably reference this video a lot because of it.
Thank you!
I second the mobile formatting, code was hard to read
Gold. Dave needs to be protected at all costs, treasure.
I use unit tests for basic functionality and then to add regression test cases. As you will never predict all the test cases at the start, the regression tests are the win.
The most useful part I have found with TDD is it eliminates redundant code, whether that's at the unit or acceptance level
If you can't find a way to test it, the code shouldn't exist
The greatest myth of TDD is it is slower...in my experience it makes you magnitudes faster
I agree. Once your test is written, you hardly have to do anything other than what your compiler and tests tell you to do.
In addition to that, the amount of time you can spend manually testing various quirks every time you change something is much, much bigger than many people think.
@ Absolutely agree. People don't see the value in cases like catching regressions early, or simplifying testing hypotheses about the system.
People say it's slower because they add the learning curve time to it.
People who are starting out feel less productive as they learn how to design for testing.
Once you get the hang of it, it's all the same because you feel productive.
@@SujaiSD I honestly think it's much faster. I was about 50 percent slower without it.
It does seem slower at the start. It's like a person sprinting off while you are getting into your car, putting on your belt and then starting it to move off.
Liked many points mentioned here. But for me the main benefit of TDD is producing a safety net that makes refactoring easy (only when done right, though). For validating design there are other ways. And can TDD really say if a design is good? I have doubts; design is mainly about cohesion and coupling.
Very useful ! Thanks a lot Dave !
The moment you start doing functional programming and passing all data by value, you no longer have the 3 types of tests you specified. You have only 2 types of test: the return value and the interaction. The change in state is also represented by the return of a new value of an object, so it's the same thing as the 1st type. The interactions with external components are left to represent interactions with by-reference external systems, e.g. databases, UI, files etc. They are the sources and the sinks of pure data transformations.
This is the reason why in functional by-value data flow languages writing tests is much easier than in by-reference languages.
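As a small illustration of that collapse of "state change" into "return value" (a sketch using a Java record and JUnit 5; the Account type is invented):
--------------
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Immutable value: a "state change" becomes "return a new value".
record Account(long balance) {
    Account deposit(long amount) { return new Account(balance + amount); }
}

class AccountTest {
    @Test
    void depositReturnsANewAccountWithTheIncreasedBalance() {
        Account before = new Account(100);
        Account after = before.deposit(50);
        assertEquals(150, after.balance());
        assertEquals(100, before.balance()); // the original is untouched
    }
}
--------------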
Something to consider with mocks is "can I use the real thing?". Often the answer is "no because it would be slow/expensive/etc.".
But sometimes you can just run a copy of the real thing and end up with more confidence in your tests. e.g. You can often just run a SQLite database during tests and avoid mocking the database.
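In Java the same trick is commonly done with an embedded database such as H2 (a sketch; assumes the H2 driver on the classpath, and uses a plain main method rather than a test framework to stay self-contained):
--------------
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbExample {
    public static void main(String[] args) throws Exception {
        // In-memory database: real SQL, no server, gone when the connection closes.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO users VALUES (1, 'Alice')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE id = 1")) {
                rs.next();
                if (!"Alice".equals(rs.getString("name"))) throw new AssertionError("wrong name");
            }
        }
        System.out.println("ok");
    }
}
--------------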
That's a really cool thing to do.
From my understanding, spies are used to spy on objects, so that we can spy on the calls being made and do verification on those calls, including the parameters passed into them.
However, with a spy, we can choose to mock specific calls or methods. If we wanted to keep our test to less code (if this is what is meant by a simple test), then we don't need to mock the method; however, then it's no longer a unit test but more of an integration test, which means it's not independent. I used to not mock these methods, but over time I found that mocking them makes my tests more independent and less complex, albeit with more code to read; by having the mock, it gives better understanding and focus on the current unit being tested. Once I was reviewing someone else's code and there were no mocks, only a spy and verification on the call and then an assertion on the final result. It took some discussion to understand what the call was supposed to do, and how it would have affected the result, from a reviewer or non-code-owner perspective.
I don't know if there is such a thing as right or wrong, or even how to debate what is better or more effective. I've encountered team members who think very little of tests, those that often complain of the complexity, and those that have written very simple tests that aren't useful when refactoring. I don't enforce what my team should do; I only encourage them, asking questions, giving examples, and so on, about why I wrote it that way and how it has helped me so far, especially for refactoring, or for understanding the requirement when there is a problem or a need to revisit the requirement.
One important thing I've learned is to always look back at your old tests and review them again. I often find myself improving the tests while still finding ways to make them simpler. I've often discussed with my team how we can improve the tests, but given the business-case scenarios, it's often more difficult than the typical scenarios from training or tutorials. We also try not to overthink, but if our goal is to ensure proper coverage and to reduce manual testing, we should cover possible scenarios, for example null values, empty strings, invalid values, etc.
I have started using docker or in-memory versions of most dependencies for testing the integration or interaction code. The benefit is I don't have to understand how things would work in the actual implementation, and I can completely rely on the state of the dependencies to verify my code. Alternatively, when these things are not available, I would choose a stub generated by any mocking library.
This is so good and soo true. Thanks Dave!
When you mock out, say, hardware for TDD, when do you actually test the code that interacts with the hardware? Say I have a fake file system to interact with, for a system that will interact with the physical disk eventually. What would be the case/steps here? This is the question I have been struggling to answer for a while now. I've been practicing TDD for a bit. I watched a talk about how those are then integration tests. I follow the idea that TDD unit tests are short and quick because you want fast feedback. Would love to get your thoughts on this.
Others may disagree, but there's nothing wrong with writing to a file system as part of a unit test; as long as the file interaction is independent, i.e. not tied to a specific environment, you will be fine (most test suites provide this).
It does become a bit more involved when you need to start working with email, database, FTP, other APIs... you need to start thinking about the test dependencies, which become integration tests. Your design should abstract the dependency with an interface. Your unit tests will stub/mock/whatever the interface to ensure your business logic (the logic one level higher) is correct; then you have an integration-type test which should call the actual class/module interacting with the dependency to ensure that is correct. The integration tests can break, can be fragile, but if your system is reliant on these then you should have them.
I think that one of the best examples that we have of how to architect for hardware is an OS. We don't often think of it that way, but that's what an OS does, it provides an insulation layer of code between our apps that do useful things, and the hardware that they run on. I recommend that you architect for bespoke hardware similarly. Establish well defined interfaces at the boundaries and test apps to those interfaces.
If I am writing a Windows app or a Mac app I don't worry about testing it with every last detail of every printer that may be connected. OS designers design an API that abstracts printing, we call them print device drivers, and then we write to those abstractions.
The people that write the printer drivers don't test their driver with every app that uses it. They will have an abstract test suite that validates that the driver works with their printer. Their tests will be made-up cases that exercise the bits that the driver writers are worried about.
My recommendation for hardware-based systems is to work hard to define and maintain a clean API at the point where the SW talks to the HW. Write layers of code, firmware and drivers perhaps, that insulate apps from the HW; test the apps against fake versions of that API, under test control. Test the driver layer in the abstract, in terms of "does the driver work" rather than "does an app work". It's not perfect, you may not trust it enough, but this is a MUCH more scalable approach to testing, and a version of this is how, for example, the vast majority of testing in a Tesla is done.
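A minimal sketch of such a seam (the FileStore name and methods are invented for illustration): the application depends on a narrow port it owns, unit tests use an in-memory fake, and a small separate suite exercises the real disk-backed implementation:
--------------
import java.util.HashMap;
import java.util.Map;

// A narrow port the application owns; real and fake implementations plug in behind it.
interface FileStore {
    void write(String path, byte[] content);
    byte[] read(String path);
}

// In-memory fake for the app's unit tests; the real implementation is
// covered separately by a few generic driver-level tests.
class FakeFileStore implements FileStore {
    private final Map<String, byte[]> files = new HashMap<>();

    public void write(String path, byte[] content) {
        files.put(path, content.clone());
    }

    public byte[] read(String path) {
        byte[] content = files.get(path);
        if (content == null) throw new IllegalArgumentException("no such file: " + path);
        return content.clone();
    }
}
--------------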
@@ContinuousDelivery and @jimiscott Thank you both for your responses! I think I see what you both are saying here. It is about having good API design, then. The details of how it does that aren't necessarily important and, as you said, you might not trust it enough. I agree, though, it is much more scalable.
I think jimiscott touched upon this some, but for longer tests against email, database, etc., it seems these are the implementations instead of the stubs, correct? These use the API designed, say in your example, by the OS, and call those abstraction layers to do it. You use the stubs or mocks to guide the design of your API. Then implement your API to use the API of the OS, manufacturer, or custom driver to call their code to do the work. At this point you won't get full test coverage, but it will be good enough and is scalable using this technique.
Giving yourself layers seems to be the key here to all of this. Use the layers as a point to stub in at the HW boundary, and replace them with real calls using other APIs, firmware, drivers, etc., which may need to be manually tested depending on the situation (ADC, DAC, SPI, etc.). So I guess if I am finding it hard to test, or I need too many stubs/mocks, that is telling me there is something in my design I should look at. Perhaps I am missing a layer of abstraction.
I really appreciate the feedback here! I am loving and hating TDD all at the same time haha. It really changes how I think and write software.
You probably don’t need to be using mocks. They have a time and place, but it’s unlikely that you’ve correctly determined that your scenario needs one. It usually ends up being like using a broadsword to carve a turkey.
Unfortunately, if you work on things like firmware, the mocks are, I think, necessary, and will do more than simple things, as you need to mimic e.g. HW device behavior. This can sometimes be a bit complex if you would like to test your code. I don't know another way to test that.
Think you nailed it with the painful mocking tests description, though in my experience people are instinctively more likely to assign the pain to mocking itself rather than treat it as a signal to review/redesign code interactions: have you seen this, Dave? What's the answer? Thanks :)
Yes, I have seen it, I think the answer is for people to learn how to TDD and learn to listen to the signals it sends to us as we do it.
Say one more word and i will add TDD to my 2022 resolution! :) Thank You!
Also, talking about loops and simplicity, let's take a very simple example of a method that tries to find a userid based on a given string.
In this method, let's say we want to return null if the string parameter is null or an empty string; otherwise, it calls the repository to do the find.
In the test, we might need at least 3 tests:
1. If the string parameter is not empty, verify the call to the repository and ensure that the result returned comes from the repository without further alteration (we would need to mock the repository's return value).
2 and 3 would have the same behavior: we want to ensure that null or an empty string results in the repository not being called and the result always being null.
Now, the only difference between 2 and 3 is the value passed in. If we use a loop over the values to be passed in, we only need 1 test. But what if we force ourselves not to use a loop? It seems redundant to have 2 test cases that differ only by the input value. I've discussed this with every developer in my team and they agree; some still try to follow the rule of 1 test case per scenario, but eventually they find themselves copying and pasting, which makes them question the rule.
Imagine having to do that for every 'findBy' (e.g. user, products, etc.): how much code duplication?
I've found ways to rewrite the code that reduce these problems, whereby I delegate the null-or-empty-string check to a separate function, so I only need to test it once. However, in some cases it feels redundant to do that every time, since it introduces more code, which discourages other developers from practicing this.
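Most test frameworks have a middle ground for exactly this case: parameterized tests, where the "loop" lives in the framework rather than in the test body. A sketch assuming JUnit 5 and Mockito (UserFinder and UserRepository are invented names standing in for the scenario above):
--------------
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verifyNoInteractions;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.NullAndEmptySource;

interface UserRepository {
    String findByName(String name);
}

class UserFinder {
    private final UserRepository repository;
    UserFinder(UserRepository repository) { this.repository = repository; }

    String findUserId(String name) {
        if (name == null || name.isEmpty()) return null;
        return repository.findByName(name);
    }
}

class FindUserIdTest {
    UserRepository repository = mock(UserRepository.class);
    UserFinder finder = new UserFinder(repository);

    // One test, two generated cases (null and ""): no loop, no copy-paste.
    @ParameterizedTest
    @NullAndEmptySource
    void returnsNullAndSkipsTheRepositoryForBlankInput(String input) {
        assertNull(finder.findUserId(input));
        verifyNoInteractions(repository);
    }
}
--------------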
I would like to ask a late question below this film. There are two schools of TDD: classicist and London school (outside-in, mockist, with two loops of TDD). Could you make a film to explain the differences? Are they relevant for you? Since you prefer BDD and acceptance tests, I guess that you prefer the second school?
Also, when I read about TDD and testing strategies, I see two contradictory pieces of advice. One: that you should test all user requirements on the core of your system, and only some happy paths end-to-end. Two: that you should test all requirements from the outside (whether through UI or API), and all components by unit tests. I see here some differences between DDD and BDD. Do you agree that there are some differences, and what is your preferred strategy?
By the way, thank you for your materials and your last book, "Modern Software Engineering".
Yes, I am a London-school person. Good idea to talk about the difference though, thanks for the suggestion, I will think about that.
I think it works best if you start with an "executable specification" then do TDD underneath to evolve the solution that meets the spec. I don't see differences between DDD and BDD, I think that you can use BDD to reinforce DDD.
I have a function in a project that calculates scores for submissions to a contest. It looks at what a submission was scored by every judge, then adds the scores up. The setup for testing this function is a little complicated, because it involves creating all the entities and their relationships (create a submission category, create judges in the category, create submissions in the category, assign the judges to submissions). This results in multiple loops in the setup code.
Should there be an easier way to write tests with complex setups? Is it fine to have complex setup if multiple classes and relationships need to be present for the function to run?
Based on the video and your description, I'm not sure why you'd have loops in your setup
var categoryOne = new Category("cat1")
var categoryTwo = new Category("cat2")
var judgeOne = new Judge("one")
var judgeTwo = new Judge("two")
var submissionOne = new Submission("John Doe")
var submissionTwo = new Submission("Janet")
categoryOne.addJudge(judgeOne)
categoryOne.addSubmission(submissionOne)
categoryOne.assignJudgeToSubmission(judgeOne, submissionOne) //this feels awkward and prob a result of your design. But I'm basing it on your description
//check that submissionOne has judgeOne assigned
//Check that categoryOne has submissionOne and judgeOne
Repeat for 0,1,2 situations of judges and submissions to categories.
He stated this in the video. 2 is perfectly acceptable for the "many" test. You shouldn't need a loop to verify two things are there
Large setup code is usually a smell.
However, sometimes it's the only way; have a look at the TestDataBuilder pattern for those cases, it might help.
Happy designing.
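A sketch of the TestDataBuilder pattern mentioned above (names invented to roughly match the contest example; uses a Java record, so Java 16+): defaults for everything, so each test overrides only what it cares about:
--------------
record Submission(String author, String category, int score) {}

class SubmissionBuilder {
    private String author = "Default Author";
    private String category = "Default Category";
    private int score = 0;

    SubmissionBuilder byAuthor(String author) { this.author = author; return this; }
    SubmissionBuilder inCategory(String category) { this.category = category; return this; }
    SubmissionBuilder scored(int score) { this.score = score; return this; }

    Submission build() { return new Submission(author, category, score); }
}

class Example {
    // In a test, the setup noise disappears and the relevant detail stands out:
    Submission sevenPointSubmission = new SubmissionBuilder().scored(7).build();
}
--------------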
This sounds like a good candidate for a fixture, basically a function that's re-used in multiple tests. Create a fixture for one category, returns categoryOne with a name, judge and submission assigned. Create a second fixture for categoryTwo. Then re-use the fixtures instead of duplicating the complex setup in numerous tests.
@@rogertunnell5764 I do have fixtures to create related entities, and they are definitely a big help.
@@chaunceyphilpot3986 so I'll have 2 categories, 2 judges per category (4), and 2 submissions per judge (8). The number increases with any one-to-many relationship.
Well done Sir! 👏
Also, I think having bad unit tests is better than not having any at all. Perfectionism is the root of all evil.
I am really enjoying TDD. The only thing that confuses me is what to test, so that the tests don't break when I change or refactor my code.
Always love the intro ..
Hmm, so today I wanted to see whether or not each buffer full of data in a file read straight from a filesystem was the same as the corresponding buffer of data read from an encrypted zip file (after being decrypted and decompressed). I guess this is an interaction with an external component. The test worked and, thankfully, passed. So this helps me have more confidence in the zip library I'm using and in my use of it, and it provides a tested sample/reminder of a rudimentary way to read a buffer at a time from a music file in an encrypted zip file using my "Archive" class. (When I actually go to do it I'll have to read an mp3 frame at a time.) This seems like a worthwhile test, I think, but... it's not really proper TDD, I guess, because it uses a loop? I suppose I could extract the loop into an external function, like "StreamsMatch(stream1, stream2)" or something, but I don't really have any use for such a function outside of this test.
--------------
[Test]
public void TestBinaryStreaming()
{
    const int BufferSize = 16384;

    var reader1 = folderArchive.GetBinaryReader(MusicFilePath);
    var reader2 = zipArchive.GetBinaryReader(MusicFilePath);
    var fileSize = reader1.BaseStream.Length;
    var data1 = new byte[BufferSize];
    var data2 = new byte[BufferSize];
    var totalBytesRead = 0;
    int bytesRead1;
    do
    {
        // Read the next chunk from each archive and compare the two.
        bytesRead1 = reader1.Read(data1, 0, BufferSize);
        var bytesRead2 = reader2.Read(data2, 0, BufferSize);
        if (bytesRead1 != bytesRead2) Assert.Fail("number of bytes read were not the same");
        totalBytesRead += bytesRead1;
        if (!data1.IsEqual(data2, bytesRead1)) Assert.Fail("data1 and data2 did not match");
    } while (bytesRead1 != 0);
    if (totalBytesRead != fileSize) Assert.Fail("not all bytes were read");
    Assert.Pass();
}
--------------
Too long... magic numbers... a loop... multiple Asserts... but it seems like breaking it up would be more work for little gain. I did make that IsEqual extension method to compare the contents of two byte arrays. Maybe I should ask for advice on SO...
Awesome! I learned a lot.
Thank you! Cheers!
Talking about loops and verifying the number of calls: suppose you had a feature that accepts a list of values and calls update if the record exists and create if it doesn't.
A simple approach would be to write 2 tests, one for the update scenario and another for the create scenario.
But how do we know the implementation handles multiple records correctly?
Would it be better to write 1 test that passes in, for example, 7 existing records and 3 non-existing records, then verify that update was called 7 times and create 3 times?
If I wanted a more comprehensive test, to ensure or minimize mistakes, I could also write a loop and use verification on every call to ensure that the item passed in is the correct item for update or create. This could be extreme, but it has helped in preventing cases where a mistake was made by copy-pasting code, or overlooked by passing the wrong parameter. That would not have been caught by the times-called verification alone, since it uses an any() matcher for the parameter.
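One loop-free alternative for that last concern, assuming Mockito (Repository and Item are invented names): capture every argument and assert on the whole captured list at once, rather than verifying with any() in a loop:
--------------
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.List;
import org.mockito.ArgumentCaptor;

record Item(String id) {}

interface Repository {
    void update(Item item);
    void create(Item item);
}

class UpdateVerification {
    void verifyUpdates(Repository repository, List<Item> expectedUpdates) {
        // Capture every argument passed to update(), then assert on the whole list.
        ArgumentCaptor<Item> updated = ArgumentCaptor.forClass(Item.class);
        verify(repository, times(expectedUpdates.size())).update(updated.capture());
        if (!updated.getAllValues().equals(expectedUpdates)) {
            throw new AssertionError("updated the wrong records: " + updated.getAllValues());
        }
    }
}
--------------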
Thank you!
Is the test visible at 10:38 also bad because it implies that 'self.AssertEqual(10 * 200/200, Calculator().percentage(10,200))' should work?
This, however, needs an explanation of what a unit is. A lot of people misunderstand the word unit by thinking a class or a method is a unit, which leads to the overuse of mocks in tests. Instead, a unit should be the class together with its real dependencies; only the outside stuff, like the code that accesses a database or makes calls to a server, should be faked or mocked.
Thank you
I taught high school students basic programming a few years ago. I wrote tests to help grade student submissions. But some crafty students decided to submit functions that always return the expected output for its corresponding input. I'm not sure if I should have given them an A or a D 😭
Oh dear. Mocks which return mocks. That literally was the last code I wrote yesterday!
In this case, I was trying to mock the .NET framework "get registry key" and subsequently the "get registry value" methods in order to allow the actual code to find the "Default Browser". At the time, it felt overly complicated. I suppose I could create another class to wrap up that behaviour, but it would seem way too trivial.
That's what I would do, and "way too trivial" is exactly what you want from that kind of insulation-code. It will make your code more testable and less fragile, and even though this bit of code is pretty uninteresting, it will stop you having to do "interesting things" with mocks. 😁😎
@@ContinuousDelivery Well, I took your advice to heart, and I can say that I feel happier for it. That extra level of abstraction made the unit tests much simpler to read. It went from 389 lines to around 252. Deleting code always makes me happy. 🙂
@@markbertenshaw3977 Great! I am pleased that you found my advice helpful, and thanks for the feedback.
Excellent video. I do feel that your advice to abstain from doing iterations in the tests is misguided: let's say you're in a dynamically typed language. You have a function that accepts an argument and should deal sensibly with it regardless of the type of the data (thinking mostly of handling scalars here). Iterating through examples of each type of data and the expected outcome seems to be quite reasonable here.
I will also counter the no-iteration stricture with this: at times you may want to make relatively strong assertions about the execution time taken by a particularly busy piece of code (as measured and determined to be necessary - no premature optimization) as part of your testing. This too will often require the use of iteration followed by elapsed-time evaluation, to assert that the code is executing as efficiently as possible and to catch any changes to the CUT that impact the performance, since that's one of the characteristics of the function's interface.
2:30, I think this is the weak point of TDD - I'm not used to it, though. To me, it seems to damage the flow of thinking through the design, in trade for giving solid ground at each step. But I do reckon that if the thinking is too foggy, TDD may help.
I didn’t get what you see as weak. It’s not the fact that TDD guides the immediate design, is it?
BTW, I say immediate because, for me, TDD helps in designing in the “small scale”, the class or function being written. It’s not practical to use TDD to design at the system scale (bounded contexts and alike). It’s TDDevelopment, not TDDesign after all.
@@antoruby The design of an f() has a bunch of lines that you are keeping in mind for some time - the algorithm. TDD may damage this thinking, by slowing the process and losing focus, making you forget the idea, confuse some of them, and so on. I foresee this kind of issue - I may be wrong, though.
@@MrAbrazildo I don’t think you’re wrong in this statement; there is always some level of personal preference when it comes to “how I like to reason”. Taking the example of writing an algorithm, the time spent on the test will be way less than on the algorithm, especially if it has an easy-to-verify solution. E.g., I can quickly assert that sorted([3,1,2]) == [1,2,3], but writing a quicksort will take more time. TDD then helps in defining the interface of the function, but its internals are free to evolve. Anyway, there are cases in which TDD is a perfect fit and others that are only as fit as your familiarity with using it (we’ll never find a one-size-fits-all).
@@antoruby You are talking about unit tests, which test results from an f(). TDD is meant to write a test _before writing each line of code_! That's why it's supposed to help in developing anything. This may be true, if your boss wants you in front of the computer all the time - leading to health issues, btw.
I usually take a walk, and come back with the solution or the path straight to it, most of the time. Sometimes, when I'm inspired, I'd rather write directly something that I think is promising. It starts with a mess, but has a direction, and some unit tests can fix the route. TDD may slow any of these alternatives.
@@MrAbrazildo Hmm, in my view you're overthinking the "before writing each line" part hehe. I'd say that testing the internals of an algorithm is not good/productive. It can probably start with trivial cases (empty container, one value only, etc.), but then the internals of an algorithm require exactly the walking away you mentioned. And I don't see this conflicting with using TDD. Besides that, at least in my work, I write much more of other kinds of code (not dense algorithms) that definitely benefit from the red, green, refactor cycle :)
How do you test private methods? It's so annoying working with Java.
Private methods are implementation specifics. Tests should focus upon the public contract. If a private method is really important, then it will be covered by the tests of the public methods, when those public methods call the private methods.
If all interface/contract behavior is covered, and there are private methods that are not covered, then you have one of two basic cases: you've missed a behavior, possibly an edge case, and you need more tests; or the code is dead, and it can be removed.
However, sometimes I've had to work with some really nasty legacy code, that is, methods that are hundreds of lines long. It's next to impossible to cover these with unit tests. I'll extract chunks of code into private methods, but it's still a mess to test. I'll make some of those private methods package-private. This allows me to override/mock them to test the "larger" method, but I can also test them in isolation.
But I consider this a stepping stone. Extracting a method and making it package-private is the first step in acquiring some control over a large method/class. Additional refactoring will probably be needed, but this provides a bit of a safety net to get started.
I noticed something about using a mock library. I recently started practicing TDD and I’ve done 2 projects so far. On the first, I didn’t use a mock library and I had to use a lot of interfaces in order to create stubs. In the second project, I used a mocking library and I noticed that all the interfaces disappeared because I could easily mock the class behavior directly. In your talk with Martin Fowler, he mentioned that one of the nice side effect benefits of TDD is it guides you to create interfaces (just like I saw in my first project), but when using the mocking library I didn’t need to. Is that a drawback to mocking libraries? Should one mock only interfaces even when using a library, or are interfaces not as big of a deal?
Maybe, perhaps, this is confusing because there are two overlapping things here:
1. an interface construct, such as a Java interface;
2. the concept of an interface, being the contract to a unit of code.
The necessity of creating (1) is language-, tool- and context-specific. The conceptual thinking about (2) is about focusing on the contract and not overreaching into the internal implementation of units, which I think is where TDD, Dave and Fowler are really coming from. IMHO the thinking in (2) is consistently important to manage and minimize the overall complexity of your code by defining reasonable units; the actual creation of an interface (1) is a lesser concern, which is often strictly not necessary where there is a single implementation, although there is potentially some value in consistency and self-documentation in a particular situation. Hth :)
It is stated that TDD helps you de-couple the test from the implementation code, but actually, in order to isolate the test, you have to use mocks or stubs to stop the code from interacting with external interfaces such as other components or a database. So one has to be aware of which DB calls are made in the depths of the code under test.
So how do you actually avoid implementation awareness?
My preference is to isolate the core of my code from the edges of the system like this: write your own adapter that insulates the body of your code from the details of "interacting with external interfaces". Your abstraction will almost always be simpler, because we don't use every feature in every case. Unit test, with mocks if you like, to this simpler interface. The code in this adapter is usually pretty generic and specific to the tech it integrates with, so you can cover that with a few generic, rather than case-specific, integration tests.
@@ContinuousDelivery My question was a bit different. I mean that if your code fetches some data from the database, for instance, then you need to mock that internal method, which will return the static "db objects" your code under test expects. If someone refactors the implementation and, for instance, renames the mocked internal method, then your test's mock also needs to change. This means the test is aware of the implementation details.
@@yonisim30 I don't think that your question is different. In the case you describe, I'd write a thin layer of code that read the stuff from the DB and translated it into a form more useful for my code. I'd test most of my code around faked versions of that translated input. The "thin" layer of translation is more generic and the only bit of code that is then coupled to the detail of the DB implementation. So for most of my code I'd mock the input of the translated stuff, and I'd do some basic "contract testing" against the part of the code that read the DB.
@@ContinuousDelivery I understand what you mean, but I wasn't aiming at the db layer implementation. I was talking about the logic implementation.
Let's consider the following method:
def perform_action(param_a):
    # do some stuff
    return return_val
As a test, I shouldn't know what's going on inside the "do some stuff" block.
But let's say that inside that block there is a call that writes something to the audit log, which could be maintained in a log file, a database or an external service.
Anyway, as I understand it, my test shouldn't even know about that auditing line, but I have to mock the audit service, because if I don't then the auditing action will probably fail, as there is no external audit service or database in the test context.
Maybe you can say that the auditing is a side effect and shouldn't be in the tested code in the first place, but there can be a whole lot of other examples, such as fetching objects from the db or referencing an in-memory variable. All this is internal logic which the test should be blind to, except when it is an integral part that influences the behavior of the specific scenario.
You can often find yourself mocking or creating data that isn't really of interest to the specific test scenario but is needed by the code under test.
I would like to know what you think of that.
Thanks a lot and I really enjoy and appreciate your lovely content.
If a function should call an external interface, then this call is not an implementation detail. You must know it in advance. The implementation detail in this example is the implementation of the interface: in memory, db, service call or whatever. But it is not the fact that it uses an external interface to do its logic.
"The test is simply saying that the code I wrote is the code I wrote" 🤣🤣🤣
Somehow, in reality, we have to decide whether to write very specific test cases or to combine several scenarios, which results in a more complex test but less repeated code and fewer tests to maintain.
Also, regarding test simplicity, it really depends on the scenario and how much coverage you want, for example:
If you have a requirement to remove an item or object from a list, a simple test would simply ensure the size decreased by one. The test passes and everything looks good, but is it enough? What if the item removed wasn't the correct one? The developer could have written code that removes the first item, and the test would still pass. A better test ensures the correct item or object was removed and the other items or objects were not.
Additionally, we might also want to cover the scenario where the item to be removed is the last item, and what we would expect the result to be (an empty list or null).
Another, more complex example: you have a generic function that accepts a generic interface. If we have many objects that implement this interface and we would like to test each scenario, we would need to write many duplicate tests of the same thing. Alternatively, we could simply mock a list of items that implement the interface and run a loop to test all the items, thus avoiding many similar test cases. In this case, we're sacrificing test readability and failure detection for less code. To improve test debugging and to know which case failed, we use logging in the loop to identify which object is being tested, so we can easily know which one fails; it's similar to having a list of test cases and identifying which fails, except we have 1 test case that fails, covering multiple sub-scenarios that can be identified in the test logs. This may not be good practice, but for teams that write a lot of tests, our test classes are huge, with far more lines than the implementation classes, and if we separated each scenario into its own method, we would need to repeat and copy-paste the mocking of data, the mocking of return values for dependent function calls, and so on; in many scenarios we find that making use of some loops and if-elses is preferred. We made these decisions together as a team and we didn't restrict how one should write it; we do encourage splitting scenarios as the better practice, but most of the developers prefer not to split too much, which would result in a lot of lines of code and repeated code like mocked data and mocked returns. We even tried implementing some of it in a setup method using the Before annotation, but it sometimes confuses developers because it is not immediately obvious when reading the test case alone.
All in all, I wouldn't call the more complex test wrong, and I would definitely prefer a simple test whenever possible, but I'm also someone who prefers good coverage, because having tests is like having insurance coverage. You can have simple coverage or very comprehensive coverage. I would cover it to the level at which I'm confident my code will work for many different scenarios.
Some of the questions I used to get are 'how confident are you in your code?' and 'how do you measure quality?'.
By having test coverage, you can confidently state that you have covered these scenarios and that the code will not fail on them. There could be scenarios you've missed, and when you or anyone reports the issue, all you need to do is add the coverage.
Well, my experience is that the more complex test can sometimes feel like a shortcut when it isn't. Tests like this are usually much slower to execute because they are much more complex to set up; as a result they are less likely to be able to run in parallel with other tests, and will be slow to initialise. My experience is that a focus on very short, very simple tests, even if it means executing the same code path to get to the point of the test, brings with it faster, more atomic tests which shout out the reason for any failure. I have never seen a team with tests like this complain about test performance, but I see teams with the kind of tests that you describe complain about test performance all the time.
@@ContinuousDelivery We have discussions about performance which talk about unnecessary tests, which could be these complex scenarios. But in this case, if we ignore complex scenarios, how do we ensure our code is properly covered, and be confident it works and, more importantly, doesn't break if someone decides to refactor? Additionally, how can it help other developers taking over the project understand the requirements clearly? I was once asked by managers, how can I measure your code quality? I had a good insight from a scrum trainer: by having scenarios covered in tests, we can confidently say that our application is covered for these scenarios and that it will not fail under them.
Let's take a look at the example below (I can't recall the more complex scenarios now):
Imagine we're using a document-based database and we receive an event that is supposed to update the name of all matching IDs in a subcollection of the document.
In our test, we would need to mock the document containing the subcollection and ensure that ONLY the matching IDs in the subcollection have the name updated.
A simple test would be to assert that the name of the matching ID matches the expected name.
Typically, we would also write a negative scenario to ensure that the name is not updated if the ID doesn't match.
Now the question is, how can we test the case where the subcollection contains more than one item that matches the ID? One way is to use a for loop in the test, which simplifies the problem. Alternatively, we could break the update of the name into a separate function and verify the method was called x number of times (if there should be 3 matching IDs in a list of 5 items, it should be called 3 times); then technically we no longer need a for loop. However, how can we ensure that those 3 calls happen only for the matching IDs? If we want to make the test more specific, I'd again write a for loop and verify the update function is passed the expected parameter when the ID matches and is NOT called when the ID doesn't match. This would also involve an if/else in the loop.
Now we're covering the positive and negative scenarios in one test, and if we want to separate them, that means we would have very similar code duplicated into 2 tests, with the same mock data and so on; the only difference is the verification or assertion. In a written test, the preparation or mocking could involve several lines of code, and duplicating them just to separate more specific scenarios seems redundant. In this case the team agrees that combining them into 1 is better, understanding the consequence of sacrificing readability.
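A rough sketch of what I mean (all names hypothetical, with the matching logic pulled into a pure function so the test needs no database):

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.List;
import org.junit.jupiter.api.Test;

class RenameMatchingTest {

    record Item(String id, String name) {}

    // Hypothetical extraction of the event handler's core logic.
    static List<Item> renameMatching(List<Item> items, String id, String newName) {
        return items.stream()
                .map(i -> i.id().equals(id) ? new Item(i.id(), newName) : i)
                .toList();
    }

    @Test
    void renamesOnlyMatchingIds() {
        List<Item> before = List.of(
                new Item("1", "old"), new Item("2", "other"),
                new Item("1", "old"), new Item("3", "other"),
                new Item("1", "old"));

        List<Item> after = renameMatching(before, "1", "new");

        // Combined positive/negative check: the if/else in the loop
        // asserts renamed-when-matching, untouched-when-not.
        for (int i = 0; i < after.size(); i++) {
            if (before.get(i).id().equals("1")) {
                assertEquals("new", after.get(i).name(), "index " + i);
            } else {
                assertEquals(before.get(i).name(), after.get(i).name(), "index " + i);
            }
        }
    }
}
```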
In terms of performance, and back to the question of how much we should cover: we do not force any developer to write complex coverage, as long as there is some coverage, because everyone thinks differently. My goal is simply to eliminate as much manual testing as possible, so that my code is always in a releasable state that I can confidently push to production at any time without any manual test.

My current project has not reached continuous deployment, and we still have to perform manual tests. Most of our developers will first test in their local environment, then push the code to master to be deployed to dev for another developer to test. Personally, I do not like the idea of having to test locally; I'm too lazy to bring up my local environment, deploy to it, set up manual test data, etc. I prefer that my tests cover everything possible, so that when I push the code I can confidently tell the other developer to perform the test without me verifying that it works first. I've done this several times and I'm pretty satisfied when it works; when issues are reported, I know exactly what scenario I missed, I add the test coverage, fix the problem, immediately push the code and ask the dev to retest without verifying the change manually. I want the confidence that the tests cover the behaviour and that I can trust my tests.

I often refer to test coverage as an insurance policy to my team, and I've been trying to push for continuous delivery. But to achieve that, my question is: how can we be confident in our tests if the coverage is not good enough, and what counts as good enough if we focus only on simple coverage? We often face bugs despite simple coverage; we have some developers with very good testing skills who can think of many scenarios and often find new surprises, but why do we need those manual steps if we could cover them in automated tests? Many developers still think we should have fewer tests and less coverage, but then they can't answer whether we would need more comprehensive manual testing, or how we can achieve continuous delivery.
I'm involved in Blazor websites and have found that designing for unit tests makes the unit test code for Razor pages larger than the website code itself, and making all the Razor code unit-testable obfuscates the Blazor page code to the point where it's hard to understand. I don't mind having component-based unit tests, but there are people in my organization who want all the code unit-testable with an almost religious zeal. It's difficult to be agile and get stuff done when unit testing doubles the code size and doubles the execution time. I find it never finds bugs anyway, as the tests are made to meet the code's expectations.
Another pet peeve of mine is that code reviews are exercises in refactoring code, and there's always someone who thinks everything can be converted to LINQ queries of the smallest size. These code reviews tend to inject bugs into the code, and the reviewers never find real bugs because they are only focused on the code constructs. Thoughts?
When I have code that does nothing more than coordinate between two mocks/stubs/fakes, in my mind testing that code is a waste of time. Basically I'm just testing my mocks/stubs/fakes/whatever at that point.
If someone is advocating for you to replace readable, functioning code with terse LINQ queries, I'd say they are in the wrong. However, there are times when I would consider a LINQ query the idiomatic way of expressing an idea in C#. One thing to watch out for with LINQ is that if the LINQ is being performed on a data source other than in-memory objects when run outside of testing, then unit testing that code can give a false sense of security. The semantics of LINQ are different at runtime depending on the data it's operating on.
May I ask what these unit tests look like? Are there unit tests for the controllers/presenters asserting on the data map? Are there separate unit tests on the Razor templates that parse the output HTML and assert on it based on the input map data? Are "page objects" being used?
I would like someone to answer this question if possible: in which layers of an application should I do TDD? In a Spring Boot application, for example, should I test controllers, repositories, configuration...? Thanks
Realistically, all of them. The idea is to use TDD to design the code that you write, whatever that code is. You may need to adapt your designs and your testing to make that practical, but my default starting point for any code I write is "how will I test this?".
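For example, here is a minimal sketch of what that can look like for a service-layer unit (the CustomerRepository/GreetingService names are hypothetical, not from the video): the domain logic is designed through its boundary with a test double, and no Spring context is needed at all. Controllers and repository implementations then get their own tests at their own level.

```java
import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

import java.util.Optional;
import org.junit.jupiter.api.Test;

class GreetingServiceTest {

    // The boundary the domain depends on; the real implementation
    // might be a Spring Data repository.
    interface CustomerRepository {
        Optional<String> findName(long id);
    }

    static class GreetingService {
        private final CustomerRepository repository;

        GreetingService(CustomerRepository repository) {
            this.repository = repository;
        }

        String greetingFor(long id) {
            return repository.findName(id)
                    .map(name -> "Hello " + name)
                    .orElse("Hello guest");
        }
    }

    @Test
    void greetsKnownCustomersByName() {
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findName(42L)).thenReturn(Optional.of("Ada"));

        assertEquals("Hello Ada", new GreetingService(repository).greetingFor(42L));
    }
}
```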
The most DEPRESSING thing about TDD is when you are given code that already exists and is basically untested. It becomes an uphill battle to convince people that the untested, broken stuff they already have is rubbish.
0:06 why is the green one yellow?
This was fascinating, first off, so thank you. Secondly, there are obviously a lot of benefits to TDD, but there are downsides as well. I do think TDD makes delivery slower, which might not be viable for every product/company. TDD works best when the deadlines are quite large or a little more relaxed. I like the idea of designing your code and really thinking about it before development.
TDD means more typing at the point of production, but estimates of bug reduction range from 74% up. So it is quite significantly faster overall. The DORA reports say that high performers on scores of Stability & Throughput (which are highly correlated with people practicing CI, and that is usually linked with TDD) spend 44% more of their time on new features than people with average scores. So even if you do type a bit more (which is arguable, because the code you write is usually simpler when you get good at TDD), the time you spend doing that is paid back many times over in the time you don't spend diagnosing and fixing bugs.
I have a question: do you use all tests written during implementation in CI, or are there some tests not worth running in CI? If so, where is the border where a test is worth running regularly?
I'd personally just run everything. If the tests take too long, make sure they can run in parallel and run them in parallel. The extra hardware is not expensive considering how valuable instant feedback is.
It depends on your system. If it is big and complex, you probably want to create a deployment pipeline, which is an effective way to optimise more complex collections of tests. The test, I think, is that you need a commit stage cycle to take less than 5 minutes. So if you can run every test that will give you a definitive statement on the releasability of your system in under 5 minutes, then great. If not you need a pipeline!
I like your channel. By the way, I like your shirts! 🙂
The barrier to TDD for me is that TDD seems to operate on the idea that the programmer writing the code doesn't ever actually RUN it to see if it works or not. By the time I finish a function, or a module, class or method, I've run that code dozens, if not hundreds of times during the edit/run/repeat cycle. Testing is built into, and is an integral part of, the design process. And since I am always working off specifications I've gotten from the user during numerous design meetings, I know what it is that I am trying to accomplish: namely, what the user wants to accomplish. That's why I'm writing the code in the first place. I'm not just writing code and then seeing if I can find a user that might use it to solve some problem they have.

All code is written to solve a user problem that we are aware of before we start writing code. We aren't just writing aimlessly with no idea where we are trying to go. We have a design in mind, a design we got from a user describing their workflow or their process. We are writing code to implement that process or workflow. And we write, compile, run, over and over again until it produces the results it needs to produce to solve the problem.

TDD seems to assume that programmers are only writing code from a strict specification that was given to them, which they had no part in designing in the first place. But in my experience, there just are not very many programmers like that. We tend to be involved in the design phase, with the users, from the beginning, and we are as aware of the needed outcome as the user is before we even begin coding. So, trying to treat TDD as a step separate from the implementation just never seems to materialise. At least in my experience.
The code that I have been writing recently is heavily enveloped in the design process, so I understand your point of view.
@@GDScriptDude Agreed. There is a segment of programmers who are little more than "coders". _"Here, take these instructions and write the code."_ Such programmers really are divorced from the bigger picture. But I think that only applies in "big corporate" IT departments. I don't see it much in consulting, or in small development firms.
Shouldn't acceptance criteria be agreed on before a user story is worked on? Otherwise you're going to be writing something that the user doesn't want. I also don't really think that the users care about the implementation either. With TDD you are running the code to see if it works; that's what the tests do! How else are you running it? Through a UI and a debugger? That just slows you down.
"The barrier to TDD for me is that TDD seems to operate on the idea that the programmer writing the code doesn't ever actually RUN it to see if it works or not. "
Running the tests does RUN the code.
I was expecting to find in this video the "TDD masturbation" that, as I was told, is what modern TDD looks like, but actually found common sense. Where I work (a major telecom company) developers are devoid of common sense: they design interfaces with unit tests in mind, to make almost every class mockable (in their view it makes testing easy). It ends up with abstract factories and other nonsense everywhere, and the tests are code change detectors, tightly coupled with production code. It makes refactoring of such code close to impossible and very expensive.
I wish people were taught more to use common sense rather than design patterns.
What are the three things?... I'm a long way in and have not heard the three points. So?
No, no, no: negative conditions must be tested and, by extension, implemented in the production code.
Once upon a time I wanted to start a new Java project with TDD, but then I discovered that private methods cannot be tested by normal means. This means I can't use TDD for developing complex algorithms without exposing their internals at least within a package, or using reflection. How do you think one should approach testing implementation detail?
th-cam.com/video/KyFVA4Spcgg/w-d-xo.html
Maybe this can give you some insight into it. You have to separate the domains of complexity in your algorithm into testable pieces, so each is simple.
It also will probably involve dependency inversion/injection.
You should not be testing private methods directly. You only need to test the public interface to a unit: A return value or thrown exception, a state change/side effect, or an external interaction. If none of these things are observable by a test, then whatever private method being called has no effect on the behavior of your application.
Think of it this way: You have an interface with two methods. There is a shared calculation between the two methods in implementation, so you factor that out into a private method that is called by each. As far as your interface is concerned, that private method does not exist. You write tests for your public interface as if you are entirely ignorant to the private implementation. If that private method is modified and is no longer correct, your tests of your public methods SHOULD fail. If they do not fail, then you do not have good tests, or your private method is unnecessary because it has no effect on the behavior of the system.
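A small sketch of that idea (hypothetical PriceCalculator; the private round() helper is only ever exercised through the two public methods):

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    static class PriceCalculator {
        double gross(double net) { return round(net * 1.2); }
        double discounted(double net) { return round(net * 1.2 * 0.9); }

        // Shared private helper: never tested directly. If it breaks,
        // the tests on the public methods fail.
        private double round(double value) {
            return Math.round(value * 100) / 100.0;
        }
    }

    @Test
    void grossAddsTax() {
        assertEquals(12.0, new PriceCalculator().gross(10.0));
    }

    @Test
    void discountedAppliesTenPercentOff() {
        assertEquals(10.8, new PriceCalculator().discounted(10.0));
    }
}
```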
@@gabrielvilchesalves6406 Great video, thanks. There are algorithms that are complex yet within one domain. I will give it some more thought, but it simply breaks my way of thinking about encapsulation, about what should be private in a class and what should be separated into another class as a public method.
@@pchasco I sometimes have a single public method that uses five or ten private methods, each doing important and complex operations. I will think some more on this, since TDD is an established way of doing things, but it seems to be forcing me to change some ways of thinking about how software should be written. I will have to review my code and see it again. Thank you for your advice.
What about property-based testing? Isn't this a 4th type, and doesn't it break the rule of no loops?
Would you write a loop in the test? I don't think you would. Therefore, I don't think that property-based testing violates the cyclomatic complexity of one in the test. The underlying tool probably has loops, but they are not part of your test.
If you have a loop or condition in the test itself, then that's another thing.
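To make that concrete, a small sketch using jqwik (just one example of such a library; any property-based tool works similarly): the loop over generated inputs lives in the tool, while the test body itself stays straight-line.

```java
import static org.junit.jupiter.api.Assertions.*;

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class ReverseProperties {

    // jqwik generates many String inputs; the test body has
    // cyclomatic complexity of one.
    @Property
    void reversingTwiceIsIdentity(@ForAll String input) {
        String reversed = new StringBuilder(input).reverse().toString();
        assertEquals(input, new StringBuilder(reversed).reverse().toString());
    }
}
```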
9:45 ha… Me thinking of bombarding my ML Model with different values for testing. My ML Model: „See! He says I‘ll give you the same output. Promise 🤞“
I still have a hard time wrapping my head around this. If I have a WPF application that only does CRUD operations against a WebApi, what is there to test, and how?
Your WebAPI clients should implement interfaces which are injected into your WPF components via constructors preferably. The reason for this is that you don't want to be making network or database or any system calls in unit tests.
You create mocks of the interface in your unit tests and pass them into the WPF components you want to test.
What is the domain? WPF and CRUD are infrastructure, not domain.
@@awmy3109 Yes, I have understood that I shouldn't unit test the database traffic; then it's not unit testing. I have an HttpClientFactory and I'm injecting it through an interface in the constructor of my ViewModels. So, what's left to test? I do some unit testing of my base classes for the ModelWrappers used for validation, but what more?
What I have noticed is that when I introduce a bug while developing further, it's usually a XAML binding. Or I forget to implement a property on my Wrapper class that I've just implemented on my Model class and database. Or I forget to register an interface in my Autofac BootStrapper.
Be pragmatic. Write some integration tests which test key or complex functionality.
@@tobiasjohansson1256 If you've done all that then you are fine. Honestly, there isn't much to unit test in the UI. Maybe end-to-end testing would be better, but it will definitely take more time and resources.
First of all, it's almost impossible not to do test driven development, because the first time you run your program it's probably going to give you an error message. So do I write a test to tell me if there's an error or not when I can see it? I suppose I could turn on error messages. I don't have a problem with doing it; it's coming up with tests that I find hard to wrap my head around. It feels like asking Harlan Ellison where he gets his ideas. If I could come up with tests I wouldn't need to, because I'd be able to come up with the next Amazon. I think I just need to start out by doing more testing than I am. Then maybe I'll work my way up so that I can write a test for something when I'm not sure what it's going to do. A lot of programs don't have simple outputs that you can test. Some just output information. You're trying to get to the information, and you will know what it is when you see it, because it will look like the sort of thing you expected to see.
I don't even know unit testing. I should probably learn that first, but every tutorial involves downloading Unit.. and setting up who knows what to do who knows what for hours. I wish I could just learn the vanilla language testing functions. Couldn't I start with assert, try/catch and things like that?
There is a lot wrong with mocks: using them assumes you actually know everything about the thing you mock, including its bugs :)
And if you can't test against the real thing, for example if you can't install a test instance of the service you might want to mock, you are in a lot of trouble.
Wrong design if a mock returns a mock, you say?
If I have a Java component using a RestTemplate to get a response with data from an external service, and I want to test that my component acts upon the external data, I need to mock the RestTemplate, the response, and the getBody method on the response to return the data I want to test. How can this be done differently?
Wrap the rest template in a simpler http client design of your choosing, with concrete response classes if necessary. "Don't mock what you don't own" might be useful search. Hth
@@danm6189 That is one way to do it, but when I test my unit of code, I want to make sure all of my written code is correct, because that is something I can fix if it isn't. If I introduce a wrapper, I may introduce a bug in that wrapper. But mocking standard Java functionality should be OK, even if the structure does imply situations where a mock needs to return a mock.
@@JohnnyNilsson83 Yeah, myself I'd probably integration test the client against a basic HTTP server using something like WireMock, but I agree that "never say never"; these things are always contextual :) One reason I suggested the split is that I'm not the biggest fan of the RestTemplate interface, and it sounds like you're already dealing with parsing responses in your component - myself I love a bit of divide and conquer :)
@@danm6189 I haven't really found any alternative to the RestTemplate. Do you have one I have missed?
But yes, we use Gson to parse the returned JSON. The RestTemplate is wrapped in an integration client that handles all the integration details: URL, API keys, headers etc. But in the spirit of OO and DI we inject the RestTemplate instead of creating it within the integration client.
@@JohnnyNilsson83 I'm assuming this is Spring's RestTemplate? Java stacks used to be mainly Apache HttpClient back in the day, and pretty sure there's a newer standard Java async client, etc., fwiw. I'm not against using RestTemplate in itself; I just like to limit its reach into my system, keeping use of broad or slightly ugly interfaces to small, low-logic adaptors, e.g. an HTTP client at the edge of the system. I'd try not to deal with the specifics of parsing/validation etc. within the same unit that uses the RestTemplate itself. The benefit of that is simpler testing and handling of specific requests and responses, increasingly useful as the system gains multiple outbound calls to different endpoints, or more complex queries where the HTTP basics are relatively similar, which seems to be the norm in most cases I see. Anyhow, again, if you're happy with your setup and are not feeling pain from it, and your tests are readable and simple, then it sounds like it's working for you, so maybe there's no great value in it for you. The rule of thumb is still valuable, as I understand it: take some time to reflect when you're returning mocks from mocks. I've definitely done it before and will do so again, but usually I end up moving code around to make things cleaner/simpler :)
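To make the "wrap it" suggestion concrete, a minimal sketch (WeatherGateway and the endpoint are made-up names, not your code):

```java
import org.springframework.web.client.RestTemplate;

// The small interface the domain code depends on and that tests stub
// with a single plain mock: no mock returning a mock.
interface WeatherGateway {
    String temperatureFor(String city);
}

// Thin, low-logic adaptor at the edge of the system; this is the only
// place RestTemplate appears, and it gets a couple of integration
// tests (e.g. against WireMock) rather than mock-based unit tests.
class RestWeatherGateway implements WeatherGateway {
    private final RestTemplate restTemplate;
    private final String baseUrl;

    RestWeatherGateway(RestTemplate restTemplate, String baseUrl) {
        this.restTemplate = restTemplate;
        this.baseUrl = baseUrl;
    }

    @Override
    public String temperatureFor(String city) {
        return restTemplate.getForObject(baseUrl + "/weather/{city}", String.class, city);
    }
}
```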
I just wish my lead developer would stop insisting that he needs to test his function that accepts 5+ parameters, 4 of which have nothing to do with what the function actually does. Then he tells me that it's too trivial to separate out the actual core part of the function, without all the extra parameters, and write a test specifically for that. We need to keep things high level, because otherwise we're adding too much work for nothing. 🤦
I even had a lot of very non-trivial examples of what a good unit test actually looks like, and showed how they tend to force you to break down your highly complicated logic into simpler "units" so you don't have to test a wider space of inputs than necessary. Yes, we could pass in a Properties object as a parameter because we're trying to read a CSV-formatted list from it, but how about we just have a function that takes a string and parses the CSV format, and test that? It shouldn't matter where the string came from. But that's getting too into the weeds.
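A sketch of the kind of extraction I was proposing (names are made up): the parser no longer cares whether the string came from a Properties object, a file, or anywhere else.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.Arrays;
import java.util.List;
import org.junit.jupiter.api.Test;

class CsvListTest {

    // The extracted core: one parameter, no Properties object needed.
    static List<String> parseCsvList(String raw) {
        if (raw == null || raw.isBlank()) return List.of();
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .toList();
    }

    @Test
    void parsesTrimmedValues() {
        assertEquals(List.of("a", "b", "c"), parseCsvList(" a, b ,c"));
    }

    @Test
    void emptyInputYieldsEmptyList() {
        assertEquals(List.of(), parseCsvList(""));
    }
}
```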
The greatest contribution of TDD is obviously reinforcing the concept and advantages of interfaces. TDD forces you to think in terms of interfaces since once you get started with TDD, you run into the need for mocking (to simulate expensive/external interactions) which is best done with interfaces - enter DI and IOC. To do this properly, you have to abstract out interfaces if the codebase doesn't have it already, or improve upon existing interfaces. That's the biggest payoff using TDD.
Certainly one of the benefits. Personally I think the biggest benefit is having no fear of refactoring code. We all know that over time code changes and gets added to and becomes difficult to read and understand. With unit tests you can refactor code to make it cleaner without fear of breaking it.
What wasn't mentioned is that testing interactions using mocks couples the test to the implementation. This violates the black-box testing principle of only testing the interface. A better approach would be to test state changes through the interface on the affected object: Kent Beck-style TDD. He hardly ever uses mocks; he uses real objects as they would be used in code.
Respectfully, I disagree with the blanket categorisation in the practical, useful sense. To me, the use of mocks does not necessarily couple tests to implementation. If I define a unit that calls one of two functions, pass() or fail(), and I test that the unit behaves as expected for given input conditions, calling pass or fail as expected, I am testing that the unit as designed within the system does its job and fulfils its contract. For me the contract is the useful bit of the "interface". This is a completely different situation from asserting on a unit making multiple specific calls to a broad interface we do not own, e.g. calls to a library, where we might be wastefully over-constraining the implementation. Again, with respect.
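A minimal sketch of that pass()/fail() contract (all names hypothetical): the interaction is the observable behaviour here, so asserting on it is testing the contract, not the internals.

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class ValidatorTest {

    // A narrow collaborator we own; the unit's job is to call
    // exactly one of these for any input.
    interface Outcome {
        void pass();
        void fail();
    }

    static void validate(int value, Outcome outcome) {
        if (value >= 0) outcome.pass(); else outcome.fail();
    }

    @Test
    void negativeInputFails() {
        Outcome outcome = mock(Outcome.class);

        validate(-1, outcome);

        verify(outcome).fail();
        verify(outcome, never()).pass();
    }
}
```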
Totally agree. The big problem is creating interfaces for every single class as the London school advocates. Your refactorings will be limited to what that structure allows, so good luck with more complex refactors than extract method
Interfaces should be defined at the boundaries of the process or the module
True developers are often victims of those "showing the way". Amen!
0:05 speak for yourself. I wrote unit, integ, and e2e tests before I cried as a newborn baby 😤
For a while I used TDD with small unit tests for a web app, but I found it was too much work to test the individual smaller units.
Most of the units worked fine by themselves; the bugs were more often in the chains between them. A date formatting function might work perfectly in isolation, but when called via my HTML view the input had been formatted differently by a view helper.
I also ended up changing the units quite a bit, so I often found that the unit I had written tests for was no longer needed, as it had become something else entirely.
What I do now is primarily write end-to-end tests where the input is API data and the output is an entire HTML page in which I expect the relevant content.
On type 2, the change-state test, I sometimes find myself asserting the state both before and after the method call. Is it unnecessary to assert before? I loved the video. Thank you.
Arrange: set the initial state
Act: call the code under test
Assert: check the final state
Ideally the test should be 3 lines long.
3 lines is pretty arbitrary mate. Arrange act assert is a fine pattern but there's nothing wrong with having one or two sanity check asserts
@@hansoloo98 you won't always get 3 lines of course.
For example, if you're testing a guard clause, depending on your test framework you might be able to write it in one line.
The point I was trying to make was: don't write a lot of lines of setup and then a lot of assertions in one test. If you have too much setup then you probably need to rethink your design, and if you have too many assertions then you probably need to split the test into separate tests for the separate behaviours you care about.
Specifically regarding the question, you don't assert your initial state, you set it in the test.
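For example, a minimal arrange/act/assert test might look like this (hypothetical ShoppingCart, just to show the shape):

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    @Test
    void addingAnItemIncreasesTheTotal() {
        ShoppingCart cart = new ShoppingCart();  // Arrange: set the initial state
        cart.add("book", 10.0);                  // Act: call the code under test
        assertEquals(10.0, cart.total());        // Assert: check the final state
    }

    // Tiny stand-in implementation so the sketch is self-contained.
    static class ShoppingCart {
        private double total;
        void add(String name, double price) { total += price; }
        double total() { return total; }
    }
}
```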
"These are good questions to answer"... Yes they are, but if we come up with a wrong answer it will be wrong in the code and in the test.
Not really: if the answer is wrong (doesn't meet the expectations), the test will fail. There is a danger that the question is wrong, but that is always true. If the "question" is "what should my code achieve?" then it may be wrong, but if we get that wrong in a test, it means that we have misunderstood the problem we are trying to solve, and so we will always have that misunderstanding however we capture it, and will encode the wrong solution. At least this way the code definitely solves the problem that we think we have. That is a step forward.
Hyper is the best
Dang, $500 for a TDD course? Wow, that's expensive.
How do you do TDD if your software is just a glorified database? I.e. your code is 98% reading data, writing data and glue, with almost no non-trivial logic. Is TDD even the right tool for that kind of software?
Just do integration tests
I find your example quite confusing: drawing a line is not something I can write a unit test for! It is 100% a side-effect. It is fundamentally impossible to test this without a human (or AI?) and a screen.
Can we just call TDD, Test Driven Design, please? :)
I don't like TDD because it slows down design itself.
Without TDD, I can test designs by writing code and discover some hurdles: maybe the library I intend to use doesn't have enough functionality, or doesn't work in this particular case? Maybe the communication with other components will be insufficient or faulty? Maybe some other component that I intend to use has some forgotten code that makes the design useless? Maybe some part of the code needs to be overly complicated because of interactions with other components?
Those things come out only in the implementation phase. You can write tests independent of the implementation, but you cannot write tests that are independent of the design. The problem is that if I write tests for a design and the design is wrong, I will only discover it when I start the implementation, and by that time I will have already spent a lot of time writing tests that reflect the design.
As you'd probably expect, I see this very differently. TDD is ALL ABOUT DESIGN: the tests embody my design choices, leaving me able to change the implementation detail. I don't know how to get faster, clearer feedback on the quality of my design than TDD gives me.
A mock from a mock 😂, LMAO!
One suggestion, if you don't mind: maybe stop animating things on screen for so long while you're speaking. It can distract the mind from following your speech. For example, those arrows that keep flashing towards the types of unit test could have flashed once and then stopped, or not flashed at all. I know it looks pretty, but I find it distracting... maybe I have ADHD or something, IDK.