Even though I'm an experienced programmer, I have done very little TDD and never really missed it. I'm open to the idea, and I get the theory, and it works out well in simple cases like this. The problem I have is that so much code isn't testable: user interfaces, or code that assumes certain large data-sets, just to name a few. You end up using fake mock-ups and fake data. I've seen people go to great lengths doing it, and in the end they have tested absolutely nothing real. The code may be fairly bug-free, but hey, they spent 3 times the time on it. That time could as well have been spent fixing the one or two bugs. Well, I'm just one of those people who values practical real-world situations over theory.
Very useful to see you thinking the process through, and the common mistakes. One aspect that I think is worth separating out, especially for beginners (I'm trying to find material to help an actuary get their head around development), is the background on 'separation of concerns'. The emergent design may come across as too complicated to understand quickly.
As always, TDD is a practice; it doesn't necessarily mean you must follow it. In my experience, TDD is quite difficult to apply when the codebase is old and I don't have the necessary business knowledge and logic to drive the tests. Hence, sometimes we create the tests after we write the code. It is also sometimes quite difficult and tedious to change the code very often when the project is run in an "agile" way that lets requirements change in the middle (sometimes after you finish the test and the code). It's nice to use TDD when you own the codebase, know well what to build, and have clear, concise requirements that make the tests very easy and simple, but that is quite rare in some parts of software development IMO.
This is the same thing I've encountered. It's relatively easy to do TDD when the design of the system is known, as well as the inputs, outputs and shapes of the data. But in my experience so far these are rarely known when we are told to start implementing "something", and they also constantly change. I don't think I've seen agile done properly, as we start coding before we even know what we are building, or who for. The product people seem to forget that we engineers can also define and design systems, not just bang out code. I don't know how to explain this to them; they don't seem to understand and just assume we will figure it out as we go along. We always end up with an unmanageable, unmaintainable mess that gets scrapped, and we start over again without a solid plan. Suffice it to say I'm looking for another job, but this practice seems common in the industry.
I understand the basic steps involved in creating a failing test and then writing just enough implementation code to make all the tests pass. However, the refactoring after having written all the tests is the tricky part: this involves careful design of the system.
Where can I see a more complex example of TDD practice being followed? This one is quite simple, as there is no interaction with an external API, no dependence on a DB, no multi-class interaction, etc.
Great advice! I would have liked to see a bit more about the transition in the code from simple cases to the general answer. Following the absolutely simplest approach that you mention, we test 2, then 3, then 5, then maybe 15, and at that point we have an implementation that is effectively a lookup table of the correct answers for our test cases. Each time we add a new test, it seems that the simplest thing to do is to add another if check that just outputs the correct answer for that specific test case, which brings us no closer to actually solving the problem. I'd love to hear your perspective on when/where/how we transform a lookup table of answers into a solution that works for general input. Is this the purpose of the "refactor" stage you mention?
Take a look at the whole exercise here: courses.cd.training (it's free, but does need a registration with your email - you can unsubscribe afterwards if you like).
Good. But what I want to see is tests for when you must read/write variables that are not passed as arguments to the function. 19:46, I'm against that. If the code is holding a variable to exit later, on a later reading you won't know in advance what will be done with that variable. Only after reading the whole function, losing time, will you eventually realize that the code should just exit right away.
You touch very briefly on something interesting, I think: multiple returns. I can see why you don't like them, but I always struggle with excessive nesting (and therefore cyclomatic complexity) when I don't use early-out input validation. Perhaps you'd like to touch on this in a future video?
I agree. I remember when "one input, one output" was king. When I heard this mentioned in this video my mind immediately rewound to "The Design of the UNIX Operating System" by Maurice J. Bach for AT&T 1986. The style is very much to exit on faulty input data or conditions. As you say, this makes for much cleaner, and I would argue more easily understood, code. I've no idea if Linux has followed suit. Nothing would surprise me on that front.
I am a fan of using guard clauses in code, where I return early and often in a function. Once I get beyond the guards, I like to have only a single return. I feel that if the code fails the guard it's useless to continue. If the data passes the guard, then I can start to think of the operation I want to execute. I do use multiple returns elsewhere when it makes more sense, but I often find myself following the above pattern.
Well there is a compilation of these videos into a free course on my training site: courses.cd.training/courses/tdd-tutorial and we will be releasing a full paid-for course, "TDD & BDD Design Through Testing" in the next couple of weeks.
I used to develop multi-year projects as a hobby. After years of experience, I can easily say testing is vital to any project. Recently, I made a career shift and started working as a senior software engineer at a local company. In the first month, they asked me to drop all the different types of tests I was writing and only write functional tests for API endpoints. They will pay the price for that faster development in the near future 😅
15:44 going a bit to an extreme, the _minimal_ amount of work to make that first test pass is to just return "2". Then a subsequent test could check that the result is actually related to the input value, so we'd have assertEqual("5", FizzBuzz().fizzbuzz(5)). At that point the fixed return value "2" would be replaced with str(number). Maybe too extreme :) but I see _some_ value in this if I think about a more complex real-world scenario where I want to be absolutely certain that different inputs produce appropriately different results.
I think it depends on how you see the process. I see it as a tool to help me design my solution so that it works. Sure, the "Green" step may be to hard-code a return value, but I want my test to be expressing my desired design of how I will interact with the, as yet unwritten, code. The job here is not to take my brain out; it is to proceed in VERY small steps, but based on my current best understanding of the problem I am trying to solve with my new test, and any that existed before.
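For anyone following along, a minimal sketch of that progression (hypothetical Python, in the spirit of the video's FizzBuzz class, not Dave's exact code):

    import unittest

    class FizzBuzz:
        def fizzbuzz(self, number):
            # After the second test this can no longer be the hard-coded "2";
            # str(number) is the smallest generalisation keeping both green.
            return str(number)

    class FizzBuzzTest(unittest.TestCase):
        def test_reports_2_for_2(self):
            self.assertEqual("2", FizzBuzz().fizzbuzz(2))

        def test_reports_5_for_5(self):
            # This is the test that forces 'return "2"' into str(number).
            self.assertEqual("5", FizzBuzz().fizzbuzz(5))

    if __name__ == "__main__":
        unittest.main()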
Thanks for this nice introduction to TDD and for making me aware of cyber-dojo, a great resource! As a TDD noob, one question: should I create tests for input parameter validation, i.e. checking that the values passed to my code are in the correct range, etc.? If this shouldn't be tested as part of TDD, where, if anywhere, should it be tested?
I would try to focus on this as an approach to design, rather than an approach to testing. So if you are writing the code that does the validation, then yes, you should test it. Testing at the edges of the system, where it touches real I/O, is more complicated than other parts, so the trick is to design your system to minimise the code that actually deals with I/O and thoroughly test everything else. So one of the strategies that I take for stuff like input validation is to separate the validation from the code that captures the input. Then I can test the validation code thoroughly without needing any UI (or other input) code in my validation tests.
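A sketch of that separation (my own hypothetical Python example, not from the video): the validation logic is a pure function, tested directly, while the input-capture code stays a thin wrapper.

    def validate_quantity(text):
        # Pure validation: no UI, no I/O, easy to test exhaustively.
        if not text.strip().isdigit():
            return None
        value = int(text)
        return value if 1 <= value <= 100 else None

    def read_quantity():
        # Thin input-capture wrapper; deliberately too simple to need a unit test.
        return validate_quantity(input("Quantity (1-100): "))

    # Tests touch only the pure function:
    assert validate_quantity("42") == 42
    assert validate_quantity("0") is None
    assert validate_quantity("abc") is None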
Hi. I just discovered your channel. What came to my mind almost right away was: of course you do it this way!! Isn't this how it's done anyway? I was in the general IT field years ago, never really became a programmer per se, but I did do some C, even COBOL (yikes!), and in recent years I created a few VBA solutions in Excel just for my own business and interest purposes. I have an electronics tech background, and one of the first things we learned was proper troubleshooting. How else can real programming be done, I would say? You don't start banging away with a bunch of code with a great idea in your head, only to end up with a slew of compile and syntax errors! Is this how the "kids" are doing it?? Or is this how it's taught?! Eeeaach. Even if you fixed those things, you'd likely end up with something that compiles but doesn't do what it was intended to do anyway! Now what? If you have a problem or a challenge with a piece of electronics, you don't make 3 or 4 changes and see what happens. You make 1. Make a prediction. Then test it. If you make multiple changes and it works, fine, but you don't know which change was the clincher. Or worse, you might end up changing the behaviour, still with a problem, just a different one, still malfunctioning and not doing what it is supposed to do. Now you are in a real pickle, because you have no idea what your changes did, and which one or two are responsible for the new behaviour! Just my 2 cents. Nice to see this kind of stuff. :)
@@jlou888 I've been looking for someone to say this. All these evangelists preaching about their perfect little methodologies that will fix all your problems, in trivial example projects like cats-and-dogs or FizzBuzz. Why don't they show us how to do it in a real-world project with complex business domains, highly concurrent and distributed systems, and requirements changing on the fly? Or at least let us see them try it on a project that is a little older than a few weeks and has more than 2 developers working on it.
Hi, good video, but I would like to see an example of how to use TDD when you have a database application and an event-driven architecture, using a framework like Spring. I would like to see TDD used in a more realistic example, which doesn't necessarily have to be something big.
Well, in theory, you can do the same thing; however, as you need a database connection, you have to provide that to the test instance. Ideally, using the provided annotations (@DataMongoTest for example) along with a base dataset to work on (or an empty dataset, depending on your circumstances) should suffice as preparation. After that base setup, the behaviour is the same: prepare the entry point (if event-driven, there should be a separation between accepting the event and accessing the database, but let's ignore that for now), call the entry point with a fitting input, and, depending on the behaviour, either expect an outcome or verify the modified data in the database with the Spring-provided database access. The only differences here are the setup and the verification, IF you need to verify in the database, as the actual call does not yield any results. So basically you'd have:

    @ExtendWith(SpringExtension.class)
    @DataMongoTest // configure here
    class TestClass {
        @Autowired MongoTemplate mongoTemplate;
        @Autowired DBAccessClass sut;

        @Test
        void shouldDoSomething() {
            sut.save(newObject);
            Object result = mongoTemplate.find(criteria);
            assertThat(result).satisfies(condition);
        }
    }

The database connection itself would have to be asserted in another place, most likely a Configuration class test, if you want to go there; however, that is a tiny bit more complicated and requires the use of, for example, @Nested with JUnit Jupiter.
I talk about this a bit in my episode called "Testing at the edge" th-cam.com/video/ESHn53myB88/w-d-xo.html The trouble with demonstrating this, or teaching it, with real-world code is that the complexity of the code gets in the way of the ideas. There are a few examples of some real systems, and as it happens some event-driven systems, being worked on this way, but TDD isn't the only or primary focus; try this one: th-cam.com/video/bHKHdp4H-8w/w-d-xo.html
I struggle with writing tests first because I use typed languages, and if I don't write at least the function first (empty, with a "return false" statement) then I can't compile my tests. Is this still the right way to go? Or should I write the test, call a function that doesn't exist, and have as my first red not a failing test result but a compile error?
The idea is that you use the act of writing the test to design the interface to your code. So write the test first, write the interface that you need to make the test make sense, then do just enough work to make everything compile and to get to the stage where the test can run and fail. So yes, write the test first.
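A tiny sketch of that sequence (my own Python, hypothetical names, not from the video): the test is written first, then just enough of a stub that everything runs and the test fails honestly.

    import unittest

    class FizzBuzz:
        def fizzbuzz(self, number):
            # Just enough interface for the test to run; it must still fail.
            raise NotImplementedError

    class FizzBuzzTest(unittest.TestCase):
        # Written first; its shape defines the interface of the class above.
        def test_reports_1_for_1(self):
            self.assertEqual("1", FizzBuzz().fizzbuzz(1))

    if __name__ == "__main__":
        unittest.main()  # red: NotImplementedError, our first honest failure

In a compiled language the compile error plays the same role as this first red: it tells you the test is driving an interface that doesn't exist yet.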
I demonstrate a very simple version of that at the end of this exercise; you can see the whole thing on my training site at courses.cd.training - it's free to access. I also talk about this in more detail in this episode: th-cam.com/video/ESHn53myB88/w-d-xo.html Fundamentally it's pretty simple: push the IO to the edges of your system, write your own abstraction of what you care about for that IO, and test to that. Then write small, simple bits of code to translate between your abstraction and the IO.
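A tiny sketch of that shape (hypothetical Python, my own names, not from the episode): the core logic takes plain values, and only a thin translation layer touches the real IO.

    def summarize(lines):
        # Core logic: pure, trivially testable without any files.
        return sum(int(line) for line in lines if line.strip())

    def summarize_file(path):
        # Thin edge: the only code touching real IO, kept too simple to hide bugs.
        with open(path) as f:
            return summarize(f.readlines())

    # Tests exercise the core against our own abstraction (a list of strings):
    assert summarize(["1", "2", "", "3"]) == 6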
Thanks for the feedback. I got lots of requests to do something at the beginner level, so I kept it simple and took my time, so that hopefully people new to coding and writing tests could follow.
This kind of content is great! Getting started with TDD is hard. I'd love to pay for a TDD course which focuses less on the classical katas and more on backend web development. What happens when we have a /fizzbuzz POST endpoint and save the result to a database, for example?
My preference when teaching TDD inside companies is to do a couple of days on katas and exercises, and then demonstrate this in the codebase that people are working in. The difference isn't really about TDD, it's about refactoring to get to the place where you can do TDD. You can check out some of my advice on that in the free workshop on refactoring on my training website courses.cd.training
It's very similar to the katas. You shouldn't be focusing on a specific problem such as TDD for backend web development, but more on the process, because once you get that down you are able to generalise it and apply it to many other domains. However, to answer your question, this is how I would approach it.

Say your POST endpoint just saves whatever is sent in the body directly to the table foo in the database. Let's assume you are using some sort of web framework: Spring, ASP.NET, Express, Flask, etc. It doesn't really matter too much; they all follow the same principles. I am not sure about your level of experience, but I am assuming you have a basic understanding of how modern backend web frameworks work.

I would start by writing the smallest amount of "boilerplate" code possible, so that you are able to assert something and there is a complete chain of calls from the endpoint all the way to your database. For example, you create a controller class, let's call it FizzBuzzController, which will handle the incoming request. Then you create a class called FizzBuzzRepository which deals with saving to the database. Depending on your setup you might have another class that deals with the actual database logic, but for simplicity let's just say the repository will handle all the connections and saving, through some library/object created beforehand that calls a stored procedure on some database.

Make a method on the controller class that you will use to test. This method should most likely return some sort of response object; refer to the framework docs for details. Let's take the Java HttpResponse object, which has a body method that returns the response body and a statusCode method that returns the status code. Simply make your method throw an exception here, to indicate we have no code written and to guarantee that the test we are about to write will fail. In C# a good exception to throw is NotImplementedException; in Java it would be UnsupportedOperationException. It doesn't really matter, we are going to change it soon anyway. Do the same on the repository class; that method should have return type void, and we indicate an unsuccessful action by throwing an error.

Now that we have the classes created, make the repository a dependency of the controller. I strongly advise using a dependency injection library here (chances are the framework you are using ships with one). This is probably the most crucial step, because you do not need to worry about how or what the other dependencies do; instead you just mock them out.

Now create a test class for the controller. This might have to be an "integration test" which spins up an actual web server and allows you to handle a real request. Create a test with a name like shouldReturn200Code_whenFizzBuzzSaved(), which will assert that a 200 status code is returned when you post some data and it is saved to the database successfully. Following the Arrange, Act, Assert (AAA) method, and assuming you have created a new test server on localhost port 8080, like server = new FizzBuzzServer("localhost", 8080) (the framework should provide some examples of how to write an integration test), it would go something like this:

    // Arrange
    var httpClient = new HttpClient("localhost", 8080);
    var request = new FizzBuzzRequest("my data");

    // Act
    var response = httpClient.post("localhost:8080", request);

    // Assert
    assertEquals(200, response.statusCode());

This test should fail, because we threw an error.
Now, go and fix the test. You could do this in many different ways, but I would just return an HttpResponse object with a 200 status code. Then write another test; perhaps this one checks that we call the repository class with the data that we got in the request body.

Let's now move on to the repository class. This one can be more of a pure unit test, as we will mock out the database, so we don't have to worry about connections or anything like that, just the logic and the problem we are trying to solve. Say the repository class has a dependency called database, which has a method save("some data") that will save anything you give it into a table. Once again, make sure the save method on the repository class just throws an exception at this stage. Let's write a test called shouldSaveGivenStringToDatabase(), which will be something like:

    // Set up a mock of the Database class
    var database = mock();

    // Arrange
    var repository = new FizzBuzzRepository(database);
    var dataToSave = "my data";

    // Act
    repository.save(dataToSave);

    // Assert. In case you are not familiar with mocking libraries, this is
    // the Mockito way of verifying that the save method on the database
    // object was called with the argument dataToSave, i.e. "my data"
    verify(database).save(dataToSave);

Now, go and fix the test. This should be as simple as changing the save method in the repository class to something like database.save(data).

Do you see how this quickly begins to turn into the katas? We are not worrying about what the program actually is; we are creating good classes through separation of concerns, and using dependency injection combined with mocking to split the code into small chunks which essentially are small katas that form a larger backend webservice application. Maybe I will write a blog post or create some YouTube videos someday about doing TDD with webservices, because it is something I do quite regularly at work, and I think you write very good code if you practise it. I think the pure TDD way can sometimes be too much, and cutting corners to write a bit of code first is fine, as long as you know why it is fine.
@@ContinuousDelivery yeah, refactoring to be able to TDD is exactly what I'm doing myself. Then it can make sense to start with something simpler and cleaner.
@@z3nkoded178 Huge props for this reply. I've re-read it a couple of times to soak it all up. I totally think you should create some content on it. Personally, I stay away from DI libs and even mocking libs, but I've mostly worked in JS/TS; I will re-evaluate mocking libs when I try this in Rust. There is a viewpoint that DI systems are anti-patterns, as you create too much distance between the injector and the receiver. I agree, and tend to inject normal arguments into constructors. I also tend to unit test web servers and abstract away the framework. A smaller benefit is that we can write our fizzbuzz logic first and then construct router endpoints on top of it. There are more complications to solve though, so starting with integration tests is a really solid approach, which can black-box away a lot of smelly code in the API. If you ever do make a video or write an article, please let me know :-) (@marcusradell on twitter)
The important missing step is how to go from hard-coding a test and a fix for each case to a solution that generalises to any conceivable test.
I always wanted to try TDD but could never get my head around it, and got stuck every time. How does one turn specific primitive implementations that pass the first tests into general algorithms? I needed to write a function the other day that takes an ordered, unbounded array of integers and returns a string that represents the array using ranges when possible; a range is represented as its lower and upper bounds separated by a dash, and single values and ranges are separated by commas. I decided to try TDD again. Implementations that worked with 0-, 1- and 2-item arrays were simple and straightforward. A 3-item array needed a check whether it's a range or not, but when I got to a 4-item array, things started to get uglier. I saw that my case/switch statement kept growing, and every branch was getting more and more complex. I didn't even want to start on a 5-item branch because of its complexity, and there was no sign of a general algorithm. I got frustrated again, sighed, and wrote the function "the usual" way...
The real trick to TDD is to pick your test cases well. In an ideal world each test demands a small increment in the behaviour of the code; after every new test is passing, refactor the code to make it more generic, simpler and easier to read. Your problem seems well suited to this approach to me. What do you want the code to do if the 4 items in the array don't form a natural range? What should it do when they do? What when they form 2 ranges? A test for each is probably enough, depending on how you write the code. You almost certainly don't need to add tests for 5-place arrays, because that is just a repeat of the same cases, I'd assume.
@@ContinuousDelivery I thought that was precisely the way I was building it. I created the following test cases one by one: { 1, 2, 3, 4 } = '1-4'; { 1, 2, 3, 5 } = '1-3,5'; { 1, 3, 4, 5 } = '1,3-5'; { 1, 2, 4, 5 } = '1,2,4,5'. The implementation was growing along with the tests, so the case branch for 4 items looked at four options in the end: 4 consecutive numbers; first 3 consecutive; last 3 consecutive; and no consecutive numbers. But that was a dead end, because this approach did not scale: one check for 3 items (whether they are consecutive or not), the 3 checks described above for 4 items, and way too many checks for 5 items (I didn't even go there). There was no sign of a generic solution emerging. I watched Robert Martin showing a solution for the prime factors problem grow naturally from some initially primitive code, and I understood that this is the promise of TDD. But I never got to experience it myself.
@@mister-kay Certainly if your examples above are in order, you jumped in too soon; you needed simpler tests to begin with. I prefer to start with the simplest case that I can think of; often that is a null-test - what happens if the inputs are wrong? I am not sure if I would have started with the null-test of {} or the simplest range, in this case a single integer, but certainly one of those two. I think I would have picked these, in this order, but without writing the code, I may have thought of other tests along the way...
{} = ''
{1} = '1'
{1,3} = '1,3'
{1,2} = '1-2'
{1,2,4} = '1-2,4'
{1,2,3,4} = '1-4'
{ 1, 2, 3, 5 } = '1-3, 5'
{ 1, 3, 4, 5 } = '1, 3-5'
{ 1, 2, 4, 5 } = '1-2, 4-5'
@@ContinuousDelivery Yes, thank you, I did that. Sorry if my first comment was not clear, but these were my first tests:
{} = ''
{ 1 } = '1'
{ 1, 2 } = '1,2'
{ 1, 2, 3 } = '1-3'
{ 1, 2, 4 } = '1,2,4'
And then the tests with 4 items mentioned above. Along the way the if() in the implementation changed into a switch() that checked the number of items, and I wrote branches for 0, 1, 2, 3, and 4 items. The check for whether the items are consecutive numbers appeared in the case of 3 items for the first time, and it looked like item[1] + 2 == item[3] (the simplest thing I could think of). And then I used the same approach for the case of 4 items.
@@mister-kay OK, it sounds like you ended up with good tests, but got hung up on the wrong thing, to me. Why is the number of parameters relevant at all, other than as a side effect of being able to supply "interesting" examples? So I'd certainly question the design choice of either a switch or an 'if' statement. This sounds more like it needs a loop that simply processes all of the inputs in the list and identifies ranges where it can. Given your good tests, this should be an easy refactor, since your tests don't assume anything about the implementation!
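For what it's worth, a sketch of the loop-shaped solution being described (my own hypothetical Python, using test cases from this thread; note the thread itself disagrees on whether {1,2} is '1,2' or '1-2' - this version treats any run of 2+ consecutive numbers as a range):

    def summarize(numbers):
        # Walk the sorted list once, growing a run while numbers stay consecutive.
        parts = []
        i = 0
        while i < len(numbers):
            j = i
            while j + 1 < len(numbers) and numbers[j + 1] == numbers[j] + 1:
                j += 1
            if j > i:
                parts.append(f"{numbers[i]}-{numbers[j]}")  # a run: lower-upper
            else:
                parts.append(str(numbers[i]))               # a lone value
            i = j + 1
        return ",".join(parts)

    assert summarize([]) == ""
    assert summarize([1]) == "1"
    assert summarize([1, 3]) == "1,3"
    assert summarize([1, 2]) == "1-2"
    assert summarize([1, 2, 4]) == "1-2,4"
    assert summarize([1, 2, 3, 4]) == "1-4"
    assert summarize([1, 2, 3, 5]) == "1-3,5"
    assert summarize([1, 3, 4, 5]) == "1,3-5"
    assert summarize([1, 2, 4, 5]) == "1-2,4-5"

Note how the number of items never appears in the logic; the tests only needed enough items to force the loop into existence.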
I probably wouldn't use the term "bad practice", but yes, I think that this is a worse way to do things. The problem with an in-memory DB is that it is not the real thing, there are differences, and it is also tightly coupled to the code at the level of implementation, rather than at the level of behaviour. I don't really care that I need a particular form of "select" statement or "update" statement if I am processing orders; all I care about is that I can retrieve the order, do something useful to it, and save it again for later. So my preference is to create an abstraction of the Store and the Order and deal with those in most of my tests. For testing the actual translation of store.storeOrder(...), what I am really interested in is: can I talk to the real DB, and is it all configured correctly? So, once again, I'd generally use Acceptance Tests for most of that, rather than unit tests (there may be unit tests for components of the 'Store' component). That allows me to test the things that matter, in terms of my interaction with the 3rd-party DB code, without having to deal with the complexity of all of that in my faster, lighter-weight, TDD code. So I have no real need for an in-memory DB for testing. The time when I may resort to that is when dealing with some poorly designed legacy code, where I may use an in-memory DB as a pragmatic hack to make some progress.
@ContinuousDelivery Thanks a lot for your detailed response. I will have to figure out how to do what you're suggesting in the case of an ORM, i.e. Entity Framework, since I am not a big fan of the repository pattern (adding one more layer of abstraction on top of EF).
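A minimal sketch of the kind of Store abstraction being described (hypothetical Python names, not Dave's code): the domain logic depends on our own interface, and the TDD tests use a trivial in-memory fake of that interface rather than an in-memory database.

    class OrderStore:
        # Our abstraction: only the operations the domain actually needs.
        def fetch_order(self, order_id): ...
        def store_order(self, order): ...

    class InMemoryOrderStore(OrderStore):
        # Test double for TDD; a thin SQL-backed adapter would be tested
        # separately with acceptance tests against the real DB.
        def __init__(self):
            self.orders = {}
        def fetch_order(self, order_id):
            return self.orders[order_id]
        def store_order(self, order):
            self.orders[order["id"]] = order

    def apply_discount(store, order_id, rate):
        # Domain logic under TDD: no SQL, no DB config, just behaviour.
        order = store.fetch_order(order_id)
        order["total"] = round(order["total"] * (1 - rate), 2)
        store.store_order(order)
        return order

    store = InMemoryOrderStore()
    store.store_order({"id": 1, "total": 100.0})
    assert apply_discount(store, 1, 0.1)["total"] == 90.0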
Another interesting video Dave, thanks for sharing. Hope that cold's all cleared up now! You made a little mention in the video that I found really interesting - that you don't like many return paths from your functions. I wonder if you could expand on that sometime with an opinionated piece around functions and returns? For myself, I use "early returns", "guards", or whatever people like to call them as a way to keep processing to a minimum during execution.
I was rather surprised by his comment. I have found that doing returns wherever appropriate makes for much clearer code. You see immediately what is going to happen. If you put the return value in a variable first, you'll have to read the rest of the function to know if that value is going to be changed (sometimes it changes unintentionally). Moreover, you'll often need a large if block to skip the code that doesn't need to be executed; not to mention multiple of them if you have multiple exit paths. Deeper nesting makes for less clear code. Since I keep my functions short anyway (a few screens maximum), and thanks to syntax highlighting, I see absolutely nothing wrong with return statements. Rather the opposite, actually. I see very few benefits in holding to the rule he mentions. I guess it comes down to style and taste, and it is tough to argue about that.
Not to criticize, but the audio quality seems to have dropped a bit in the last few videos. You might want to double-check your settings or something. It's definitely "fuzzier" than I remember from a few weeks ago.
Funny enough, I have found that you best understand your code when you are debugging an actual error. You have to analyze every line of code to figure out the problem. So starting with the failure helps you figure out exactly what you are doing.
So you wrote code to test code, but you never wrote code that tests that code; it was up to the compiler to catch it. But what if you put in 6 instead of 5, for example? It wouldn't get caught. Also notice the method encourages you to think of the program in portions rather than in its entirety. So you end up with crap programming, such as using a lot of if statements, or worse, nested if-else. Not worth anything if you need performance. So let's say you get to the end of creating FizzBuzz and want to boost the performance. Well, you are going to need to move away from that if-else mentality, which means a complete rewrite of the fizzbuzz function. So those nice small incremental steps you boasted about suddenly become pointless, a waste of time. TDD might be good if you are one of these people who can only focus on small issues or one thing at a time. There are better options besides TDD: knowing how to program worth a damn is one of them, and reusing proven code works well for eliminating errors. Also, you end up with multiple times the amount of code, but it isn't any better; in fact it is worse performance-wise. The best you can hope for is a doubling of code, where you reuse the same code to test itself.
Hey Dave! It's nice to see a hands-on video about TDD! I have a question, though. I've been trying to bring the culture of TDD to the company I currently work at, but I receive a lot of complaints about how costly it is, and, when I look at your example, I can see where they are coming from. Not generalizing the cases for 3 and 5 in the first tests gives an impression of exaggerated simplification, and I confess that if I were developing this class at my job I'd never create those tests. Do you think it is OK to take the "shortcut" if the developer is confident, or is it a big no-no, and should all tests always be done as simply as possible? Thanks!
The problem is that the data says that developer confidence is usually unfounded. Research on production failures says that 58% of failures in production are due to the common errors that all programmers put into code. It is not that the programmers don't know how to solve the problem; it is that they don't do what they thought they did. TDD fixes this problem, and so dramatically eliminates defects as a result. The data also says that teams that practice the kind of stuff that I talk about, including TDD, spend 44% MORE time on developing new features than teams that don't. So while it may feel slow at the point at which you write the code, it really isn't when you take everything else, including fixing all the bugs that we put into code in the absence of TDD, into account. TDD is like double-entry bookkeeping, but for code. It helps us to know that our code, at least, does what we meant it to do; code without TDD doesn't achieve that for 58% of production failures. You can read the research here: www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
I don't even do TDD yet, but this specific example does help me see the value in starting with those simpler explicit tests. Because if you happen to see all three tests fail, the failure of the simpler test alerts you quickly to a whole different type of issue than you might start thinking of if only your fizzbuzz algorithm test failed.
@@OggerFN Yes? Fast, simple, statically typed language with amazing first-party tooling, easy concurrency and, let's not forget, statically linked, cross-compilable binaries.
@@OggerFN I mean, there's not a lot I could say in a YouTube comment to convince you that a language is good. You'll have to try it out :). Personally, I appreciate their goal of simplicity and no hidden magic like operator overloads. Since the language is rather small, it's very quick to get productive and very easy to understand someone else's code.
I guess it is a judgement thing. You could certainly do that. Somewhere you have to draw the line between specificity and generality. Lots of people that teach TDD talk about that, about moving from specific examples to more general solutions, and I think that it is a good guide. If I had done that here, I'd probably have had a test for "1 reports '1'" and then one for "a number reports 'number-as-string'". I do this for "3" and "multiples of 3". My preference, and I should have demonstrated this but didn't, would have been to return "2" first, then change it to str(number) in the refactoring step.
It's a bit too slow and laborious. But having you take your time to talk about something in depth is the great added value of this channel. So IDK. It is good for beginners to see this; for more advanced practitioners, less so.
I disagree; once you get good at it, it helps you write faster, in my experience. All code should be tested in a professional environment, and writing code first then going back to write tests is less fun and leads to poorer-quality tests that cover fewer cases.
@@TamDNB Plus, automated tests also alert you whenever something that was right in the past starts to go wrong as a side effect of current modifications. For instance (19:50), if he had written the 'case 3' before, and then put the 'return str' above it, the previous test for 'number == 3' would flag the error.
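To make that concrete, a hedged sketch (my own Python, only approximating the video's code): moving the general return above the check silently breaks the 'Fizz' behaviour, and the earlier test catches it immediately.

    # Before the careless edit:
    def fizzbuzz(number):
        if number % 3 == 0:
            return "Fizz"
        return str(number)

    # After moving the default return to the top, fizzbuzz(3) says "3":
    def broken_fizzbuzz(number):
        return str(number)  # the 'Fizz' check below is now unreachable
        if number % 3 == 0:
            return "Fizz"

    assert fizzbuzz(3) == "Fizz"           # green
    # assert broken_fizzbuzz(3) == "Fizz"  # would go red - the alarm sounds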
It makes more sense for complex logic. You break down every part, and it helps you confirm that you got it right, instead of writing code and then trying to debug what's wrong.
Alternatively:

    from functools import reduce

    print(reduce(lambda a, b: str(a) + ' ' + str(b),
                 ["FizzBuzz" if x % 3 == 0 and x % 5 == 0
                  else "Fizz" if x % 3 == 0
                  else "Buzz" if x % 5 == 0
                  else x
                  for x in range(1, 101)]))
I think a bit too much weight is being put on the yellow result. Compilation errors happen, and putting such value on not having them seems almost elitist ("you're not a good programmer unless your code always compiles the first time", "syntax checkers are for noobs, and if you fail you are bad and should feel bad").
Sorry, I think that you misunderstood. The emphasis on avoiding the yellow balls, representing compilation errors, is that they show you were trying to make progress in steps that were too big. The odd one for a typo is OK, but the idea is to try to work in ways that prevent you from making big mistakes. Working to avoid "yellow balls" really means working in tiny steps. I use syntax checkers all the time, except when using Cyber-Dojo.
I have done embedded firmware development for 30 years, mostly in C... and cannot for the life of me understand the confusing metaphors (dojos, antipatterns, etc.) when trying to understand how to employ TDD in embedded work. Way more confusing and complicated than necessary. Are all you gurus just working on coding websites in Python? Your tests still look like you are writing code, so what is the point when you still have to write the code... more code, more mistakes. More pseudo-techy kool-aid that I can't justify drinking. Can't see trying to do this for writing stepper-motor-driving code.
Well, lots of orgs that write embedded code do. Tesla, for example. I have worked with teams writing firmware for scientific instruments, medical devices and FPGAs for financial systems. I agree that the terminology is a bit arcane, but then jargon that you are not familiar with always is. Since you call them out: dojo = place to practice, antipattern = don't do this! ...and yes, I could easily see myself writing tests for code that drove a stepper motor. Done that; it would have been better with tests!
This is going straight to the dev chat channel of my team 😄
I see here a comment that TDD is slower than regular development. It's a demo, guys!
IRL, TDD is much faster than anything else, since you spend almost no time fixing stupid mistakes. Actually, I hardly ever use a debugger, because there's no need.
It is also faster because you don't make your solution too complicated to write, test and maintain.
You don't even spend time trying to sort out what is happening in your own code, because it's always refactored to the best design you could imagine at the moment.
And if you think that your old design sucks - go on and refactor it. Everything is covered, so it's 100% safe to refactor.
Yes exactly! I don't have data for only TDD, but teams that practice the surrounding things like CI, TBD, TDD, CD and so on, spend "44% more time on new work" as reported by the "State of DevOps reports" and in the "Accelerate" book.
@@ContinuousDelivery Any good books you'd recommend for TDD?
@@mana20"Test Driven Development: By Example (The Addison-Wesley Signature Series)", Kent Beck
"Growing Object Oriented Software Guided by Tests", By Nat Price & Steve Freeman
"Fifty Quick Ideas To Improve Your Tests", Gojko Adzic
Well, that also is not proof. I spent 12 years on a quite large project (1.4 million lines of C++ code when I left). The measured time spent fixing bugs was roughly 4% of the team's development time, and we did not use TDD. So one cannot assume that not using TDD generates buggy code.
Quality of code comes from the quality of the team, not a process. TDD is a way to force a minimum standard on teams, approaching the issue from another angle.
1,000% correct 👍
This was wonderful! In answer to your question at the end, yes please to publishing the second half of the video, if you see fit.
I am a baby developer (just started 3 months ago) but I naturally started doing exactly this. I think it is great not only for catching early errors and mistakes, but it also deepens the joy of creating functional code, because you get the feedback so frequently.
Sadly, I am no longer a "baby developer" but I agree entirely. I love the fast, clear feedback on my work that I get every few minutes working this way.
"Experience is a hard teacher because she gives the test first, the lesson afterwards." - Vern Law 😁
What a great and apt phrase. Going to use this 🤩
As a big advocate and teacher of TDD, and an interviewer of candidate engineers for my teams, it always amazes me just how many people claim to do TDD but don't actually have a clue. Even then, the difference between good (useful) TDD and poor (fragile) TDD is a big gap. Once the penny drops it is normally plain sailing, and any tool that supports that good behaviour has to be a worthy addition to the toolbox, to get everyone across the line in a shorter time. Doing all of this in practice tends to be the big bugbear though, where TDD skills blend with unit testing and people go back to old, bad habits just to get work done, rather than follow the path and make sure the code is built the right way.
Great comment. I find exactly the same thing. Many devs put TDD on their CVs and claim in interviews to be able to use TDD, but once you dig even just below the surface, they crumble.
would love to see more of these hands-on videos, Dave! Thank you for all your inspiring work!
I would love to see more of these as well. You're explaining your thinking process, which is so valuable to a newbie.
Did he make more? If yes, please post links.
I'm trying to sell TDD as specification-driven development: if you can agree on what a program should output, and on the API, the work to be done is extremely clear-cut. Would like to see more of this type of content as well!
This is what BDD was originally invented to do: to change the language around TDD to more accurately reflect what we are doing. The idea got hijacked a bit by the functional testers, but I think it still has a lot of merit. I think of my approach to TDD as being behaviourally focused.
@@ContinuousDelivery That's where I see the value, but sadly TDD has gained a bad reputation. In my business, much is done waterfall, and saying you're writing to meet a specification finds favor with management.
As an aside, if you have opinions on how to unit test GPU code, my coworkers and I would appreciate them.
I would love to see more tutorials like this or courses where you build a full project with TDD!
God, that bookshelf! All legendary books together!
Many years ago I was told to be lazy as a developer and return as soon as possible. I agree you create multiple exit points, but your code gets 'flat' (no/fewer nested ifs). Keeping in mind that a 10-line function can be considered long, it should be safe to say that code should be easy to read anyway. As always a pleasure to learn - thank you for sharing your knowledge.
The 10 lines of code must be taken with a pinch of salt, though (i.e. it does not hold for 100% of cases). For example, if you are writing code that has a directly mapped meaning, splitting it can cause confusion. One example is math code. If you take matrix-inversion code and start to fragment it, you are splitting a well-defined and well-known concept, and that is very likely to cause MORE confusion (because you start to create arbitrary cut points that do not exist in the concept).
Definitely agree with this. I had a teacher or two in college who mandated "always return once", and at the time it sounded nice, but I've since grown to dislike it more and more.
It's especially true for guard clauses. You don't want to slowly nest your method code deeper and deeper whenever you have to test for a bad value, and if you only return at the end, it forces you to do this. It also forces you to create unnecessary variables like IsErrorRaised. Sometimes coding this way is a good training exercise, but in practice it usually makes code harder to read.
It's usually good not to be an absolutist; "always do X" will usually cause you unnecessary pain somewhere down the line.
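A small illustration of the difference (my own hypothetical Python, not from the video): the single-return version nests a level per check and needs a result variable, while guard clauses keep the happy path flat.

    # Single exit: one return, but nesting grows with every validation.
    def ship_order_single_exit(order):
        result = None
        if order is not None:
            if order["items"]:
                if not order["shipped"]:
                    result = f"shipping {len(order['items'])} items"
        return result

    # Guard clauses: bail out early on bad input; the happy path stays flat.
    def ship_order_guarded(order):
        if order is None:
            return None
        if not order["items"]:
            return None
        if order["shipped"]:
            return None
        return f"shipping {len(order['items'])} items"

    order = {"items": ["book"], "shipped": False}
    assert ship_order_single_exit(order) == ship_order_guarded(order)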
Mr. Farley, this is amazing. I usually listen to you for your theoretical content, but I wanted to practice TDD so that I can start following it. I sat down to start, couldn't figure out what my first test should be, so I did a YouTube search.
I just followed along on my own in cyber-dojo, and now I'm moving on to part 2. This is great, practical content. Thanks for everything you do!
Thanks, I am pleased that you liked it. I do have a much more detailed course on this topic coming out soon.
Saving to watch later. This video goes to my "must see" playlist.
Watch it ASAP. Don't put it on a 'do sometime' list - rather, DO IT.
Sometimes I feel the discovery becomes the code. People feel their way to find something that works, and then it's off to production.
That's the kind of video I love. It's very instructive to have tutorials on such advanced topics from someone as experienced as you, because all the tutorials nowadays focus on technologies and tools rather than software engineering best practices.
I encountered this site about six years ago as part of a job interview process, wonderful site.
It's pay now or pay later. I admit I've never done this in its purest form; however, I once had the opportunity of a client BA giving us the inputs and outputs for a particular system, and while he was doing this I thought to myself that I could finally do some TDD. It went great! There were 2 glitches found: the first was a scenario that the BA left out, and the second was when a developer wrapped my code and did not do TDD on his wrapper (i.e. my code behaved as expected!).
Yes, please do upload the 2nd part!
Loved the practical example, great video. I just started reading Kent Beck's TDD By Example book a couple of nights ago, so this has helped remind me of some of the principles. Cheers!
Glad it was helpful!
This is a great video. Thanks again.
One value of TDD is the ability to keep only one thing in mind at a time...
red - specifying what a thing ought to do
green - do the thing
refactor - improve the thing and the tests
BUT... something great from the original TDD by Example book that I think is one of my favorite things...
the "TODO" list.
As we think of things (design related, test related, etc...) that we are not currently in the process of doing we can write them down to keep them out of our heads.
so the todo list may read (in comment)
- test 1 returns '1' - x
- test for 3 - x
- test for 5 - x
- refactor for single return path?
Just wanted to share one of my favorite parts that I use constantly as I practice TDD. I feel like it helps me produce better software design - which I hope someday lives up to what I regularly see on your channel and trainings.
Thanks again, Dave
Good tutorial. I finished the exercise, although I don't think I was able to take such small steps towards getting fails, greens and refactoring small portions of code, but I did manage small functions that let me test things like an isMultipleOfThree method and an isMultipleOfFive method, first failing and then passing. Thank you!
Really glad you gave it a go! Hope it helped!
Please second part Dave : )
Great shirt! I'd like to see more of this sort of thing as well. I try to write tests first as much as I can, but where I feel I struggle is in breaking down bigger problems into simpler parts, and moreover in how best to organise breaking a problem apart into different layers of tests, e.g. functional vs unit tests. The latter is much easier to grok than the former. I guess it's about practice, as you said.
Love your videos! Thanks for helping us to learn and grow!
I wanted to test the rules of our TDD discipline, to see if a certain rule can be followed in all scenarios, to no ill effect. That rule is, never to change the code, unless we have a failing test.
At 12:48, we know that we're going to get a compilation error. But how do we truly know? I think we need to run the test. Put the compiler to work, while we ourselves think of next steps.
We are not allowed to write any production code, until we run a test that fails.
We are not allowed to write any more of a test than is sufficient to fail, and compilation failures, are failures.
We are not allowed to write any more code than is sufficient to pass the one failing test.
Following all three guidelines traps us in a cycle, but are we really trapped? I would hazard a guess that we're not, because TDD is a virtuous cycle.
TDD gives us sure and repeatable proofs that our system works as intended.
Sorry, I don't mean to be overly pedantic, but I want to push the state of the TDD art to its logical conclusion, wherever it may lead us.
Thanks for showing us the FizzBuzz Kata, I had an excellent time watching.
Yes and no. To be more correct is to *not write new logic*, without a failing test first. TDD actively encourages us to change the code frequently (this is the 'refactor' part of the loop), so long as we keep the tests passing after each change.
@@oliverhughes169 Thanks, excellent point! I misjudged which step of the TDD cycle we were in. Refactoring is much easier, with a set of automated tests. Tests keep the code malleable.
To the extent that software is hard to change, we have re-invented hardware. Tests allow us to be very casual and even flippant about code changes, so long as we always write the tests first and stop writing once we pass the test.
Ultimately, the only reason our job exists is because the boss wants more changes. There's no reason to keep us around unless it's always time for a software change. So optimize for continuous change! Changes that we haven't even thought about yet.
Only 3 concepts: Parameters, Subject and Informational Individual! This is the future in software!
I think you hit the nail right on the head: the hard part is being good at design. So here's the thing: can good software be created without TDD? Can bad software be created with TDD? I believe the answer to both questions is "yes". With this in mind, therefore, is there too much emphasis on TDD? I think yes. I don't believe that TDD is the panacea that we'd like to think it is. In fact, TDD has nothing to do with good software design, and thus well crafted software. It has become a mantra - which brushes good design under the carpet.
The D language has unit test support built into the compiler, which in turn motivates us to really write the tests instead of adopting any complicated setup.
These days most IDEs will set up the leading test frameworks for your language at the click of a button. It's not a significant issue unless you insist on using a text editor.
Even though I'm an experienced programmer, I have done very little TDD and never really missed it. I'm open to the idea and I get the theory, and it works out well in simple cases like this. The problem I have is that so much code isn't testable: user interfaces, code that assumes certain large data-sets, just to name a few. You end up using fake mock-ups and fake data. I've seen people go to great lengths doing it, and in the end they have tested absolutely nothing real. The code may be fairly bug-free, but hey, they spent 3 times the time on it. That time could as well have been spent fixing the one or two bugs. Well, I'm just one of those people who values practical real-world situations over theory.
Very useful to see you thinking the process through, and the common mistakes. One aspect that I think is worth separating out, especially for beginners (I'm trying to find collateral to help an actuary get their head around development), is the background on 'separation of concerns'. The emergent design may come over as too complicated to understand quickly.
As always, TDD is a practice; it doesn't necessarily mean you must follow it.
In my experience, TDD is quite difficult to apply when the codebase is old and I don't have the necessary business knowledge and logic to drive the tests. Hence, sometimes we create the tests after we write the code.
It is also sometimes quite difficult and tedious to change the code very often when the project is quite agile, which means requirements often change in the middle (sometimes after you finish the test and the code).
It's nice to use TDD when you own the codebase, know well what to build, and have clear, concise requirements which make the tests very easy and simple, but that is quite rare in some parts of software development IMO.
This is the same thing I’ve encountered
It’s relatively easy to do TDD when the design of the system is known, as well as the inputs and outputs and shapes of the data
But in my experience so far these are rarely known when we are told to start implementing “something”, and they also constantly change.
I don’t think I’ve seen agile done properly as we start coding before we even know what we are building or who for
The product people seem to forget that us engineers can also define and design systems, not just bang out code
I don’t know how to explain this to them, they don’t seem to understand this and just assume we will figure it out as we go along
We always end up with an unmanageable, unmaintainable mess that gets scrapped, and we start over again without a solid plan.
Suffice to say I’m looking for another job, but this practice seems common in the industry
A fascinating insight. Thank you. Cyber-dojo looks like a great place to start my second career.
Best of luck!
I understand the basic steps involved in creating a failing test and then writing just enough implementation code to make all the tests pass. However, the code refactoring after having written all the tests is the tricky part; this involves careful design of the system.
very interesting - the small moves ... thanks!
Where can I see a more complex example of TDD practice being followed? This one is quite simple, as there is no interaction with an external API, no dependence on a DB, no multi-class interaction, etc.
Probably never. That would ruin the hype around TDD as a silver bullet that you should definitely buy this new course on 😂
Very nice video and great tool. I'll be testing it out later
This was a great video, thanks for sharing this it was excellent
Glad you enjoyed it
Great advice! I would have liked to see a bit more about the transition in the code going from simple cases to the general answer. Following the absolutely simplest approach that you mention, we test 2, then 3, then 5, then maybe 15, and at that point we have an implementation that is effectively a lookup table of the correct answer for our test cases. Each time we add a new test, it seems that the simplest thing to do is to add another if check that just outputs the correct answer for that specific test case, which brings us no closer to actually solving the problem. I'd love to hear your perspective on when/where/how we transform a lookup table of answers into a solution that works for a general input. Is this the purpose of the "refactor" stage you mention?
Take a look at the whole exercise here: courses.cd.training (it's free, but does need a registration with your email - you can unsubscribe afterwards if you like).
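To sketch the generalisation step here, though (illustrative Python, not the exact code from the course): the lookup table collapses during the refactor step, and the existing tests prove the behaviour is preserved.

# After tests for 3, 6 and 5 pass, the "lookup table" might look like this:
def fizzbuzz_specific(number):
    if number == 3:
        return "Fizz"
    if number == 6:
        return "Fizz"
    if number == 5:
        return "Buzz"
    return str(number)

# The refactor step spots the duplication between 3 and 6 and generalises:
def fizzbuzz_general(number):
    if number % 3 == 0:
        return "Fizz"
    if number % 5 == 0:
        return "Buzz"
    return str(number)

The tests don't change during that refactor; that is what makes it safe.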
Good. But what I want to see is tests for when you must read/write variables that are not passed as arguments to the f().
19:46, I'm against that. If the code is holding a variable to exit later, then on a later reading you won't know in advance what will be done with that variable. Only after reading the whole thing, losing time, will you eventually realize that the code should just exit right away.
He was talking about bigger functions but don't let that stop you getting your important opinion across!
You touch very briefly on something interesting I think: multiple returns. I can see why you don't like them, but I always struggle with excessive nesting (and therefore cyclomatic complexity) when I don't use early-out input validation. Perhaps you'd like to touch on this in a future video?
I agree. I remember when "one input, one output" was king. When I heard this mentioned in this video my mind immediately rewound to "The Design of the UNIX Operating System" by Maurice J. Bach for AT&T 1986. The style is very much to exit on faulty input data or conditions. As you say, this makes for much cleaner, and I would argue more easily understood, code. I've no idea if Linux has followed suit. Nothing would surprise me on that front.
I am a fan of using guard clauses in code where I return early and often in a function. Once I get beyond the guard I like to only have a single return.
I feel that if the code fails the guard it's useless to continue. If the data passes the guard then I can start to think of the operation I want to execute.
I do use multiple returns as well, where they make more sense, but I often find myself following the above pattern.
Very helpful! Would love to see more.
Well there is a compilation of these videos into a free course on my training site: courses.cd.training/courses/tdd-tutorial
and we will be releasing a full paid-for course, "TDD & BDD Design Through Testing" in the next couple of weeks.
David Farley, keep up the good work :)
I used to develop multi-year projects as a hobby. After years of experience, I can easily say testing is vital to any project. Recently, I made a career shift and started working as a senior software engineer at a local company. In the first month, they asked me to drop all the different types of tests I was writing and only write functional tests for API endpoints. They will pay the price of that faster development in the near future 😅
15:44 going a bit to an extreme, the _minimal_ amount of work to make that first test pass is to just return "2".
Now a subsequent test could be checking that the result is actually related to the input value. So we'd have assertEqual("5", FizzBuzz().fizzbuzz(5))
At that point the fixed return value "2" would be replaced with str(number).
Maybe too extreme :) but I see _some_ value in this if I think about a more complex real-world scenario where I want to be absolutely certain that different inputs produce appropriately different results.
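As a sketch (Python, mirroring the video's names; I've used 7 rather than 5 for the second test, since 5 later becomes "Buzz"):

class FizzBuzz:
    def fizzbuzz(self, number):
        # Was `return "2"` until the second test failed; deriving the
        # result from the input is now the minimal way to pass both tests.
        return str(number)

def test_2_reports_2():
    assert FizzBuzz().fizzbuzz(2) == "2"

def test_7_reports_7():
    assert FizzBuzz().fizzbuzz(7) == "7"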
I think it depends on how you see the process. I see it as a tool to help me design my solution so that it works. Sure, the "green" step may be to hard-code a return value, but I want my test to express my desired design of how I will interact with the, as yet unwritten, code. The job here is not to take my brain out; it is to proceed in VERY small steps, but based on my current best understanding of the problem I am trying to solve with my new test, and any that existed before.
Really thx for sharing your knowledge.
My pleasure
Thanks for this nice introduction to TDD and for making me aware of cyber-dojo, a great resource! As a TDD noob, one question: should I create tests for input parameter validation, i.e. checking that the values passed to my code are in the correct range etc.? If this shouldn't be tested as part of TDD, where, if anywhere, should it be tested?
I would try to focus on this as an approach to design, rather than an approach to testing. So if you are writing the code that does the validation, then yes, you should test it. Testing at the edges of the system, where it touches real I/O, is more complicated than other parts, so the trick is to design your system to minimise the code that actually deals with I/O and thoroughly test everything else.
So one of the strategies that I take for stuff like input validation is to separate the validation from the code that captures the input. Then I can test the validation code thoroughly without needing any UI (or other input) code in my validation tests.
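A minimal sketch of that separation (assumed Python; the names are illustrative, not from the video):

# Pure validation logic: easy to test thoroughly, no UI or I/O involved.
def validate_quantity(value):
    if not value.isdigit():
        return "quantity must be a whole number"
    if int(value) < 1:
        return "quantity must be at least 1"
    return None  # valid

def test_rejects_non_numeric_input():
    assert validate_quantity("abc") == "quantity must be a whole number"

def test_rejects_zero():
    assert validate_quantity("0") == "quantity must be at least 1"

def test_accepts_valid_input():
    assert validate_quantity("3") is None

# The thin edge just captures input and delegates to the tested logic:
def read_quantity():
    value = input("Quantity: ")
    error = validate_quantity(value)
    if error:
        print(error)
        return None
    return int(value)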
Hi. I just discovered your channel. What came to my mind almost right away was: of course you do it this way!!
Isn't this how it's done anyway? I was in the general IT field years ago. I never really became a programmer per se, but I did do some C, even COBOL (yikes!), and in recent years I created a few VBA solutions in Excel just for my own business and interest purposes. I have an electronics tech background, and one of the first things we learned was proper troubleshooting. How else can real programming be done, I would say? You don't start banging away with a bunch of code with a great idea in your head, only to end up with a slew of compile and syntax errors! Is this how the "kids" are doing it?? Or is this how it's taught?! Eeeaach... Even if you fixed those things, you would likely end up with something that compiles but doesn't do what it was intended to do anyway! Now what? If you have a problem or a challenge with a piece of electronics, you don't make 3 or 4 changes and see what happens. You make 1. Make a prediction. Then test it. If you make multiple changes and it works, fine, but you don't know which change was the clincher. Or worse, you might end up changing the result, still with a problem, just a different one, still malfunctioning and not doing what it's supposed to do. Now you are in a real pickle, because you have no idea what your changes did and which one or two are responsible for the new behaviour! Just my 2 cents. Nice to see this kind of stuff. :)
More TDD cookbook videos please.
We love TDD and TBD
where did you get those glasses? great video by the way
My problem with these examples is that they're always trivial. Can't we get examples of things that are more akin to the real world?
@@jlou888 I've been looking for someone to say this. All these evangelists preaching about their perfect little methodologies that will fix all your problems in trivial example projects like cats-and-dogs or fizzbuzz. Why don't they show us how to do it in a real-world project with complex business domains, highly concurrent and distributed systems, and requirements changing on the fly? Or at least let us see you try it on a project that is a little older than a few weeks and has more than 2 developers working on it.
Hi, good video, but I would like to see an example of how to use TDD when you have a database application and an event-driven architecture, using a framework like Spring. I would like to see TDD used in a more realistic example, which is not necessarily something big.
Well, in theory you can do the same thing; however, as you need a database connection, you have to provide that to the test instance.
Ideally, using the provided Annotations (@DataMongoTest for example) along with a base dataset to work on (or an empty dataset, depending on your circumstances) should suffice as preparation.
After that base setup, the behavior is the same - prepare the entry-point (if event-driven, there should be a separation between accepting event and accessing database, but let's ignore that for now), call entry point with a fitting input, and - depending on the behavior - either expect an outcome or verify the modified data on the database with the spring-provided database-access.
The only differences here are the setup and the verification, IF you need to verify in the database, as the actual call does not yield any results.
So basically you'd have (a sketch; names like newObject, the query and the condition are placeholders):
@ExtendWith(SpringExtension.class)
@DataMongoTest // configure here
class DBAccessTest {

    @Autowired
    MongoTemplate mongoTemplate;

    @Autowired
    DBAccessClass sut;

    @Test
    void shouldDoSomething() {
        // Act: exercise the class under test.
        sut.save(newObject);
        // Assert: check the data really landed, using Spring's own DB access.
        MyObject result = mongoTemplate.findOne(query, MyObject.class);
        assertThat(result).satisfies(condition);
    }
}
The database connection itself would have to be asserted in another place, most likely a Configuration class test, if you want to go there; however, that is a tiny bit more complicated and requires the use of, for example, @Nested with JUnit Jupiter.
I talk about this a bit in my episode called "Testing at the edge" th-cam.com/video/ESHn53myB88/w-d-xo.html
The trouble with demonstrating this, or teaching it, with real-world code is that the complexity of the code gets in the way of the ideas. There are a few examples of some real systems, and as it happens some event driven systems, being worked on this way, but TDD isn't the only or primary focus, try this one: th-cam.com/video/bHKHdp4H-8w/w-d-xo.html
Now the second part is released, it would be useful to have a link to it in the show notes :-).
Thanks for the suggestion, done!
I struggle with writing tests first because I use typed languages, and if I don't write at least the function first (empty, with a return false statement) then I can't compile my tests. Is this still the right way to go? Or should I write the test, call a function that doesn't exist, and have as my first red not a failing test result but a compile error?
The idea is that you use the act of writing the test to design the interface to your code. So write the test first, write the interface that you need to make the test make sense, then do just enough work to make everything compile and to get to the stage where the test can run and fail.
So yes, write the test first.
@@ContinuousDelivery wow thank you for your answer!
I need that shirt! ... Also great video!
How does this approach translate to code that needs IO or needs to communicate with other systems?
I demonstrate a very simple version of that at the end of this exercise, you can see the whole thing on my training site at courses.cd.training - it's free to access.
I also talk about this in more detail in this episode: th-cam.com/video/ESHn53myB88/w-d-xo.html
Fundamentally it's pretty simple, push the IO to the edges of your system, write your own abstraction of what you care about for that IO, and test to that. Then write small simple bits of code to translate between your abstraction and the IO.
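A minimal sketch of that shape (assumed Python; Notifier and the other names are invented for illustration):

# The abstraction the core logic depends on - no real I/O behind it in tests.
class Notifier:
    def send(self, message): ...

# Core logic, TDD'd without any network in sight:
def charge_order(order, notifier):
    if order["total"] <= 0:
        return False
    notifier.send(f"charged {order['total']}")
    return True

# In tests, a tiny spy stands in for the real edge:
class SpyNotifier(Notifier):
    def __init__(self):
        self.messages = []
    def send(self, message):
        self.messages.append(message)

def test_charging_sends_notification():
    notifier = SpyNotifier()
    assert charge_order({"total": 42}, notifier)
    assert notifier.messages == ["charged 42"]

The only code left untested at the unit level is the thin adapter that wraps the real email/HTTP client behind the same Notifier interface.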
Surely we would want to see more videos like this; you could just speed up a bit, since your other videos aren't that trivial :)
Thanks for the feedback. I got lots of requests to do something at the beginner level, so I was keeping it simple and taking my time, so that hopefully people new to coding and writing tests could follow.
@@ContinuousDelivery That's reasonable. Are you also going to create a TDD tutorial with integration tests? Or does TDD only apply to unit testing?
This kind of content is great! Getting started with TDD is hard.
I'd love to pay for a TDD course which focuses less on the classical katas and more on backend web development. What happens when we have a /fizzbuzz POST endpoint and save the result to a database, for example?
My preference when teaching TDD inside companies is to do a couple of days on katas and exercises, and then demonstrate this in the codebase that people are working in. The difference isn't really about TDD; it's about refactoring to get to the place where you can do TDD. You can check out some of my advice on that in the free workshop on refactoring on my training website courses.cd.training
It's very similar to the katas. You shouldn't be focusing on a specific problem such as TDD for backend web development, but more on the process, because once you get that down you are able to generalize it and apply it to many other domains. However, to answer your question, this is how I would approach it. Say your POST endpoint just saves whatever is sent in the body directly to the table foo in the database. Let's assume you are using some sort of web framework: Spring, ASP.NET, Express, Flask, etc. It doesn't really matter too much; they all follow the same principles. I am not sure about your level of experience, but I am assuming you have a basic understanding of how modern backend web frameworks work.
I would start by writing the smallest amount of "boilerplate" code possible in order for you to be able to assert something, and for there to be a complete chain of calls from the endpoint all the way to your database. For example, you create a controller class, let's call it FizzBuzzController, which will handle the request coming in. Then you create a class called FizzBuzzRepository which deals with saving to the database; depending on your setup you might have another class that deals with the actual database logic, but for simplicity let's just say the repository will handle all the connections and saving, through some library/object that has been created before, which calls some stored procedure on some database.
Make a method on the controller class that you will use to test; this method should most likely return some sort of response object, so refer to the framework docs for details. Let's take the Java HttpResponse object, which has a body method that returns the response body and a statusCode method that returns the status code. Simply make your method throw an exception here to indicate we have no code written, and to guarantee that the test we will write will fail. In C# a good exception to throw here is NotImplementedException; in Java it would be UnsupportedOperationException. It doesn't really matter, we are going to change it soon anyway. Do the same on the repository class; this method should have return type void, and we indicate an unsuccessful action by throwing an error.
Now that we have the classes created, you should make the repository a dependency of the controller. I strongly advise using a dependency injection library here (chances are the framework you are using ships with one). This is probably the most crucial step, because you do not need to worry about how or what the other dependencies do; instead you just mock them out.
Now create a test class for the controller. This might have to be an "integration test" which spins up an actual webserver and allows you to handle a real request. Create a test with a name like shouldReturn200Code_whenFizzBuzzSaved(), which will assert that a 200 status code is returned when you post some data and it is saved to the database successfully. For example, it would go something like the following, using the Arrange, Act, Assert (AAA) method:
Assume you have created a new test server on localhost port 8080, like: server = new FizzBuzzServer("localhost", 8080); The framework should provide some examples of how to write an integration test.
// Arrange
var httpClient = new HttpClient("localhost", 8080);
var request = new FizzBuzzRequest("my data");
// Act: post to the endpoint under test
var response = httpClient.post("/fizzbuzz", request);
// Assert
assertEquals(200, response.statusCode());
Then this test should fail, because we threw an exception. Now go and fix the test. You could do this in many different ways, but I would just return an HttpResponse object with a 200 status code.
Now write another test. Perhaps this one now checks that we call the repository class with the data that we got in the request body.
Let's now move on to the repository class. This one can be more of a pure unit test, as we will mock out the database so we don't have to worry about connections or anything like that, just the logic and the problem we are trying to solve. Say the repository class has a dependency called database, which has a method save("some data") that will save anything you give it into a table. Once again, make sure the save method on the repository class just throws an exception at this stage.
Let's write a test called shouldSaveGivenStringToDatabase(), which will be something like:
// Arrange: set up a mock of the Database class
var database = mock(Database.class);
var repository = new FizzBuzzRepository(database);
var dataToSave = "my data";
// Act
repository.save(dataToSave);
// Assert. In case you are not familiar with mocking libraries, this is Mockito syntax which verifies that the save method on the database object was called with the argument dataToSave, i.e. "my data"
verify(database).save(dataToSave);
Now go and fix the test. This should be as simple as changing the save method in the repository class to something like:
database.save(data);
Do you see how this is quickly beginning to turn into the katas? We are not worrying about what the program actually is; we are creating good classes through separation of concerns, and using dependency injection combined with mocking to split the code into small chunks which essentially are small katas that form a larger backend webservice application. Maybe I will write a blog post or create some TH-cam videos someday about doing TDD with webservices, because it is something I do quite regularly at work and I think you write very good code if you practise it. I think the pure TDD way can sometimes be too much, and cutting corners to write a bit of code first is fine, as long as you know why it is fine.
@@ContinuousDelivery yeah, refactoring to be able to TDD is exactly what I'm doing myself. Then it can make sense to start with something simpler and cleaner.
@@z3nkoded178 huge props for this reply. I've re-read it a couple of times to soak it all up. I totally think you should create some content on it.
Personally, I stay away from DI libs and even mocking libs, but I've mostly worked in JS/TS. I will re-evaluate mocking libs when I try this in Rust.
There is a viewpoint that DI systems are anti-patterns, as you create too much distance between the injector and the receiver. I agree, and tend to inject normal arguments into constructors.
I also tend to unit test web servers and abstract away the framework. A smaller benefit is that we can write our fizzbuzz logic and then construct router endpoints on top of it. There are more complications to solve though, so starting with integration tests is a really solid approach which can black-box away a lot of smelly code in the API.
If you ever do make a video or write an article, please let me know :-) (@marcusradell on twitter)
The important missing step is how to go from hard-coding a test and fix for each case to a fix that is a more general solution to any conceivable test.
this was great!
I always wanted to try TDD but could never get my head around it and got stuck every time. How does one turn specific primitive implementations that pass the first tests into general algorithms? I needed to write a function the other day that takes an ordered, unbounded array of integers and returns a string that represents the array using ranges when possible; a range is represented as its lower and upper bounds separated by a dash, and single values and ranges are separated by commas. I decided to try TDD again. Implementations that worked with 0, 1 and 2 item arrays were simple and straightforward. A 3 item array needed a check whether it's a range or not, but when I got to a 4 item array, things started to get uglier. I saw that my case/switch statement kept growing, and every branch was getting more and more complex. I didn't even want to start on a 5 item branch because of its complexity, and there was no sign of a general algorithm. I got frustrated again, sighed, and wrote the function "the usual" way...
The real trick to TDD is to pick your test cases well. In an ideal world each test demands a small increment in the behaviour of the code; after every new test is passing, refactor the code to make it more generic, simpler and easier to read. Your problem seems well suited to this approach to me. What do you want the code to do if the 4 items in the array don't form a natural range? What should it do when they do? What when they form 2 ranges? A test for each is probably enough. Depending on how you write the code, you almost certainly don't need to add tests for 5 place arrays, because that is just a repeat of the same cases, I'd assume.
@@ContinuousDelivery I thought it was precisely the way I was building it. I created the following test cases one by one: { 1, 2, 3, 4 } = '1-4'; { 1, 2, 3, 5 } = '1-3,5'; { 1, 3, 4, 5 } = '1,3-5'; { 1, 2, 4, 5 } = '1,2,4,5'. The implementation was growing along with the tests, so the case branch for 4 items looked at four options in the end: 4 consecutive numbers; the first 3 consecutive; the last 3 consecutive; and no consecutive numbers. But that was a dead end, because this approach did not scale well: one check for 3 items (whether they are consecutive or not), the 3 checks described above for 4 items, and way too many checks for 5 items (I didn't even go there). There was no sign of a generic solution emerging. I watched Robert Martin show a solution for the prime factors problem grow naturally from some initially primitive code, and I understood that this is the promise of TDD. But I never got to experience it myself.
@@mister-kay
Certainly, if your examples above are in order, you jumped in too soon; you needed simpler tests to begin with. I prefer to start with the simplest case that I can think of. Often that is a null-test - what happens if the inputs are wrong? I am not sure if I would have started with the null-test of {} or the simplest range, in this case a single integer, but certainly one of those two.
I think I would have picked these, in this order, but without writing the code, I may have thought of other tests along the way...
{}=''
{1}= '1'
{1,3}='1,3'
{1,2}='1-2'
{1,2,4}='1-2,4'
{1,2,3,4}='1-4'
{1,2,3,5}='1-3,5'
{1,3,4,5}='1,3-5'
{1,2,4,5}='1-2,4-5'
@@ContinuousDelivery Yes, thank you, I did that. Sorry if my first comment was not clear, but these were my first tests:
{} = ''
{ 1 } = '1'
{ 1, 2 } = '1,2'
{ 1, 2, 3 } = '1-3'
{ 1, 2, 4 } = '1,2,4'
And then the tests mentioned above with 4 items. Along the way, the if() in the implementation changed into a switch() that checked the number of items, and I wrote branches for 0, 1, 2, 3, and 4 items. The check whether the items are consecutive numbers appeared in the case of 3 items for the first time, and it looked like item[1] + 2 == item[3] (the simplest thing I could think of). And then I used the same approach for the case of 4 items.
@@mister-kay OK, it sounds like you ended up with good tests, but got hung up on the wrong thing, to me. Why is the number of parameters relevant at all, other than as a side effect of being able to supply "interesting" examples? So I'd certainly question the design choice of either a switch or an 'if' statement. This sounds more like it needs a loop that simply processes all of the inputs in the list and identifies ranges where it can. Given your good tests, this should be an easy refactor, since your tests don't assume anything about the implementation!
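For illustration, the loop might look something like this (a sketch only, assuming the '1-2' convention from my list above, not a definitive solution):

def compress(numbers):
    # Walk the sorted list once, collecting runs of consecutive values.
    parts = []
    i = 0
    while i < len(numbers):
        start = numbers[i]
        while i + 1 < len(numbers) and numbers[i + 1] == numbers[i] + 1:
            i += 1
        end = numbers[i]
        parts.append(str(start) if start == end else f"{start}-{end}")
        i += 1
    return ",".join(parts)

assert compress([]) == ""
assert compress([1]) == "1"
assert compress([1, 3]) == "1,3"
assert compress([1, 2, 3, 5]) == "1-3,5"
assert compress([1, 2, 4, 5]) == "1-2,4-5"

Note there is no branching on the number of items at all; the tests above stay exactly as they are.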
I tried cyber-dojo, however it does not seem to recognise ES6. I get your point though, so I am practicing using VSC.
I would like to see the second part
The link to "Part 2" is in the description of the video now.
Test-Driven Design ♥️
A general query: would you consider it bad practice to use an in-memory DB instead of mocking the DB interactions while doing TDD?
I probably wouldn't use the term "bad practice", but yes, I think that this is a worse way to do things. The problem with an in-memory DB is that it is not the real thing; there are differences, and it is also tightly coupled to the code at the level of implementation, rather than at the level of behaviour. I don't really care that I need a particular form of "select" statement or "update" statement if I am processing orders; all I care about is that I can retrieve the order, do something useful to it, and save it again for later. So my preference is to create an abstraction of the Store and the Order and deal with those in most of my tests. For testing the actual translation of store.storeOrder(...), what I am really interested in is whether I can talk to the real DB and whether it is all configured correctly, so once again I'd generally use acceptance tests for most of that, rather than unit tests (there may be unit tests for components of the 'Store' component).
That allows me to test the things that matter, in terms of my interaction with the 3rd party DB code, without having to deal with the complexity of all of that in my faster, lighter-weight, TDD code. So I have no real need for an in-memory DB for testing.
The time when I may resort to that is when dealing with some poorly designed legacy code, where I may use an in-memory DB as a pragmatic hack to make some progress.
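A sketch of the kind of abstraction I mean (assumed Python; Store, Order and storeOrder come from my answer above, the rest is illustrative):

from dataclasses import dataclass

@dataclass
class Order:
    id: int
    total: float

class Store:
    # The behaviour-level abstraction the domain code talks to.
    def retrieveOrder(self, order_id): ...
    def storeOrder(self, order): ...

class FakeStore(Store):
    # A trivial fake for TDD - not an in-memory DB, just a dict.
    def __init__(self):
        self.orders = {}
    def retrieveOrder(self, order_id):
        return self.orders[order_id]
    def storeOrder(self, order):
        self.orders[order.id] = order

def test_processing_updates_the_order():
    store = FakeStore()
    store.storeOrder(Order(id=1, total=100.0))
    order = store.retrieveOrder(1)
    order.total += 50.0
    store.storeOrder(order)
    assert store.retrieveOrder(1).total == 150.0

The real, DB-backed implementation of Store is then exercised by a handful of acceptance tests, rather than by every unit test.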
@ContinuousDelivery Thanks a lot for your detailed response. I will have to figure out how to do what you're suggesting with an ORM, i.e. Entity Framework, since I am not a big fan of the repository pattern (adding one more layer of abstraction on top of EF).
always copy-paste variable names; type once - copy-paste many! ;-)
Another interesting video Dave, thanks for sharing. Hope that cold's all cleared up now!
You made a little mention in the video that I found really interesting - that you don't like many return paths from your functions. I wonder if you could expand on that sometime with an opinionated piece around functions and returns? For myself, I use "early returns", "guards", or whatever people like to call them as a way to keep processing to a minimum during execution.
I was rather surprised by his comment. I have found that by doing returns whenever appropriate, the code is much clearer. You see immediately what is going to happen. If you put the return value in a variable first, you have to read the rest of the function to know if that value is going to be changed (sometimes it changes unintentionally). Moreover, you'll often need a large if block to skip the code that doesn't need to be executed, not to mention multiple of them if you have multiple exit paths. Deeper nesting makes for less clear code. Since I keep my functions short anyway (a few screens maximum), and thanks to syntax highlighting, I see absolutely nothing wrong with return statements. Rather the opposite, actually. I see very few benefits in holding to the rule he mentions. I guess it comes down to style and taste, and it is tough to argue about that.
Not to criticize, but the audio quality seems to have dropped a bit in the last few videos. You might wanna double-check your settings or something. It's definitely "fuzzier" than I remember from a few weeks ago.
Thanks for the feedback. I will check - but it may be something to do with recording at my desk, and recovering from a head cold!
Part 2 please!
Funnily enough, I have found that you understand your code best when you are debugging an actual error. You have to analyze every line of code to figure out the problem. So starting with the failure helps you figure out exactly what you are doing.
So you wrote code to test code, but you never wrote code that tests that code; it was up to the compiler to catch it. But what if you put in 6 instead of 5, for example? It wouldn't get caught.
Also notice that the method encourages you to think of the program in portions rather than in its entirety. So you end up with crap programming, such as using a lot of if statements or, worse, nested if-else. Not worth anything if you need performance. So let's say you get to the end of creating fizzbuzz and want to boost the performance. Well, you are going to need to move away from that if-else mentality, which means a complete rewrite of the fizzbuzz function. So those nice small incremental steps you boasted about suddenly become a pointless waste of time.
TDD might be good if you are one of those people who can only focus on small issues or one thing at a time.
There are better options besides using TDD; knowing how to program worth a damn is one of them, and reusing proven copy-paste code works well for eliminating errors...
Also, you end up with multiple times the amount of code, but it isn't any better; in fact it is worse performance-wise.
The best you can hope for is a doubling of code, where you reuse the same code to test itself.
Hey Dave! It's nice to see a hands-on video about TDD!
I have a question though. I've been trying to bring the culture of TDD to the company I currently work at, but I receive a lot of complaints about how costly it is, and when I look at your example I can see where they are coming from. Not generalizing the cases for 3 and 5 in the first tests gives an impression of exaggerated simplification, and I confess that if I were developing this class at my job I'd never create those tests. Do you think it is OK to take the "shortcut" if the developer is confident, or is it a big no-no, and should all tests always be done as simply as possible?
Thanks!
The problem is that the data says that developer confidence is usually unfounded. Research on production failures says that 58% of failures in production are due to the common errors that all programmers put into code. It is not that the programmers don't know how to solve the problem; it is that they don't do what they thought they did. TDD fixes this problem, and so eliminates defects dramatically as a result. The data says that teams that practice the kind of stuff that I talk about, including TDD, spend 44% MORE time on developing new features than teams that don't. So while it may feel slow at the point at which you write the code, it really isn't when you take everything else, including fixing all the bugs that we put into code in the absence of TDD, into account.
TDD is like double-entry bookkeeping, but for code. It helps us to know that our code, at least, does what we meant it to do; code without TDD doesn't achieve that for 58% of production failures. You can read the research here: www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
@@ContinuousDelivery thank you for your thorough answer!
I don't even do TDD yet, but this specific example does help me see the value in starting with those simpler explicit tests. Because if you happen to see all three tests fail, the failure of the simpler test alerts you quickly to a whole different type of issue than you might start thinking of if your fizzbuzz algorithm test failed.
SAVED!
13:04 shouldn't it be fizzbuzz(2) ?
Yes, 14:31
I just posted this in my team’s dev chat
Thank You!
Would have loved to see this written in Go = )
Is there any purpose to writing anything in Go nowadays?
@@OggerFN Yes? Fast, simple, statically typed language with amazing first-party tooling, easy concurrency, and let's not forget statically linked, cross-compilable binaries.
@@OggerFN is there any purpose at all to programming nowadays ? th-cam.com/video/YnEXEIp5vB8/w-d-xo.html
@@juliankandlhofer7553
Cross-compilable binaries sound interesting.
The other features are prominent in most modern languages.
@@OggerFN I mean there's not a lot I could say in a youtube comment to convince you that a language is good. You'll have to try it out :).
Personally, I appreciate their goal of simplicity and no hidden magic like operator overloading. Since the language is rather small, it's very quick to get productive and very easy to understand someone else's code.
Shouldn't the first passing test just have returned a static "2", because that is actually MUCH simpler than str(number)?
I guess it is a judgement thing. You could certainly do that. Somewhere you have to draw the line between specificity and generality. Lots of people that teach TDD talk about that, about moving from specific examples to more general solutions, and I think that it is a good guide. If I had done that here, I'd probably have had a test for "1 reports '1'" and then "number reports 'number-as-string'". I do this for "3" and "multiples of 3".
I think that, by preference, and I should have demonstrated this but didn't, I should have returned "2" first, then changed it to str(number) in the refactoring step.
Thank you for the quick and comprehensive answer :)
TDD seems scary, hard and slow to those who don't use it. But once you commit to it, you'll be like "what have I been doing all along without you?" 😂
Showing TDD with Python, you should have picked Pytest as a proven standard :)
18 minutes in I suddenly realise it's TDD, not BDD (which is what I want to know about!)
Sadly this stops when you transition from "if 3" and "if 5" to the actual algorithm... like all the other TDD tutorials out there.
Shouldn't it be "Test-Driven" with a hyphen?
It's a bit too slow and laborious. But having you take your time to talk about something in depth is the great added value of this channel. So IDK. It is good for beginners to see this; for more advanced practitioners, less so.
I disagree; once you get good at it, it helps you write faster, in my experience. All code should be tested in a professional environment, and writing code first then going back to write tests is less fun and leads to poorer-quality tests that cover fewer cases.
@@TamDNB Plus, automated tests also alert you whenever something that was right in the past starts to go wrong as a side effect of current modifications.
For instance (19:50), if he had written the 'case 3' check earlier, and then put the 'return str' above it, the previous test for 'number == 3' would flag the error.
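Concretely (a Python sketch echoing the video's example; the bug is deliberate):

def fizzbuzz(number):
    return str(number)     # new line mistakenly added above the existing check...
    if number % 3 == 0:    # ...making this branch unreachable
        return "Fizz"

def test_3_reports_fizz():
    # The test written earlier now fails immediately, flagging the mistake.
    assert fizzbuzz(3) == "Fizz"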
@@TamDNB I have to clarify something: I have nothing against TDD. I meant THIS video is a bit slow and laborious.
It makes more sense for complex logic.
You break down every part, and it helps you confirm that you get it, instead of writing code and then trying to debug what's wrong.
@@mhcbon4606 fair play, my bad!
Alternatively:
from functools import reduce
print(reduce(lambda a, b: str(a) + '\n' + str(b), ["FizzBuzz" if x % 3 == 0 and x % 5 == 0 else "Fizz" if x % 3 == 0 else "Buzz" if x % 5 == 0 else x for x in range(1, 101)]))
I think a bit too much weight is being put on the yellow result. Compilation errors happen, and putting such value on not having them seems almost elitist ("you're not a good programmer unless your code always compiles the first time", "syntax checkers are for noobs and if you fail you are bad and should feel bad").
Sorry, I think that you misunderstood. The emphasis on avoiding the yellow balls, representing compilation errors, is that they show you were trying to make progress in bigger steps. The odd one for a typo is OK, but the idea is to try to work in ways that prevent you from making big mistakes. Working to avoid "yellow balls" really means working in tiny steps. I use syntax checkers all the time, except when using Cyber-Dojo.
How is this a "Tutorial for beginners"? Too many prerequisites are not addressed, in my opinion.
I have done embedded firmware development for 30 years, mostly in C, and cannot for the life of me understand the confusing metaphors (dojos, antipatterns, etc.) when trying to understand how to employ TDD in embedded work. Way more confusing and complicated than necessary. Are all you gurus just working on coding websites in Python? Your tests still look like you are writing code, so what is the point when you still have to write the code? More code, more mistakes. More pseudo-techy koolaid that I can't justify drinking. Can't see trying to do this for writing stepper-motor-driving code.
Well, lots of orgs that write embedded code do. Tesla, for example. I have worked with teams writing firmware for scientific instruments, medical devices and FPGAs for financial systems. I agree that the terminology is a bit arcane, but then jargon that you are not familiar with always is. Since you call them out: dojo = place to practice, antipattern = don't do this!
...and yes, I could easily see myself writing tests for code that drove a stepper motor. I've done that, and it would have been better with tests!
How often I wrote hundreds of lines before even attempting to compile the code =) I am such a nasty coder.
It's acceptable to not write tests. But it's mandatory to compile often.
You cheeky little bugger
It's always painful to watch someone typing code slowly, char by char, and explaining things for half an hour, when it could be a 5-minute video.
gg
Wooooooooooow, I didn't know about cyber-dojo. Nice video and approach for beginners!
Glad you liked it!
Anyone interested in this stuff should ALSO check out anything by JBrains (J B Rainsberger). Dave and JB are a fearsome duo!