There is a valid use case for mocked APIs: One time I was writing a shipment tracking service. It would connect to the APIs of various shipping providers, check the status of packages, and trigger notifications to users' devices. While some of the shipping companies provided testing servers, those servers never updated the package status, since there was no actual package and no real delivery process. So I couldn't use the actual API even if I wanted to. Instead I created a series of responses, stored them in a bunch of files, and served these files iteratively. A big benefit is that your tests run really fast without the network latency. But you have to maintain the mocked responses, which is kind of a bummer.
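(For the curious, the shape of that in Go with net/http/httptest; the file names and NewTrackingClient are invented for this sketch:)

import (
	"net/http"
	"net/http/httptest"
	"path/filepath"
	"testing"
)

func TestTrackingStatusProgression(t *testing.T) {
	// Canned provider responses, served in order; the last one repeats.
	responses := []string{"in_transit.json", "out_for_delivery.json", "delivered.json"}
	i := 0
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, filepath.Join("testdata", responses[i]))
		if i < len(responses)-1 {
			i++
		}
	}))
	defer srv.Close()

	c := NewTrackingClient(srv.URL) // hypothetical client, pointed at the fake server
	_ = c                           // poll three times, assert a notification fires on "delivered"
}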
Perfectly valid - there are all sorts of scenarios like this where you (a) cannot call the service because it would trigger a real event, or (b) cannot call the service because it simply would not act the way you need for the test. As you've mentioned, you keep the response files and serve them up - a common pattern, and there are frameworks that do exactly this... WireMock, for example. Is this a unit test or an integration test? Does it matter? I feel we waste too much time classifying these types of things when all you're trying to do is validate the system.
Normally those kinds of services have sandbox modes where you can test your code against the real backend. If they don't, maybe it's a good idea to look for an alternative.
@@adriankal Did you read my comment? A few of them had a sandbox, but they didn't bother to simulate someone scanning the package's barcode at various locations, so it was insufficient for the purposes of the tests I had to run. And I couldn't just say oh well, I guess I won't use DHL or UPS. I also had to deal with much smaller shipping companies that didn't even have a test database. I would have had to test those with real tracking numbers.
Making your tests depend on an API that you do not own is a pretty bad idea, and imo I would take mocks over direct API calls in this situation any day, especially if you run those tests in CI/CD pipelines. Pretty interesting that you've mentioned shipping providers btw; the company where I work also deals with those, and mocks (such as the gomock or httptest libraries) are actually invaluable for our needs. There is no other way to ensure that your code is not going to break and your tests are not flaky. Also, the sandbox APIs that shipment providers have usually either don't work half of the time or just break quite often.
I love Go's named returns. Keep in mind you absolutely don't have to use them naked. They are extremely useful as quick bits of documentation if you have helpful names for the return variables, especially if you are returning more than one variable of the same type. They also really help with reducing complexity because they guarantee those variables will be declared at the very top of scope, which can really help with refactoring and reasoning about the function. Kind of like defer guaranteeing stuff happens before exit, it makes it easier to write and refactor code while maintaining guarantees that your declarations haven't been moved around and caused some insidious bug. Even naked returns with stuttering names like "err error" can be very useful in private implementation code and should be used freely there. The only caveat with naked returns is don't add stuttering named return values in public methods: your public methods are primarily meant to be understood by the people using them, and stuff like (string string, err error) adds needless complexity to reading the function signature.
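A toy sketch of that documentation benefit, with two results of the same type (names invented):

import (
	"errors"
	"strings"
)

// The names in the signature say which string is which.
func splitHostPort(addr string) (host, port string, err error) {
	i := strings.LastIndex(addr, ":")
	if i < 0 {
		err = errors.New("address has no port")
		return // naked return: fine in a small private helper
	}
	host, port = addr[:i], addr[i+1:]
	return
}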
I'm a desktop developer and disagree with this. Testing a desktop application without mocks is a nightmare hellscape from which there is no return. I understand that backend is different, but there are many more dependencies in this world than the db. What about calling other APIs, report generation, interfacing with other applications, or any of the many scenarios that would break a unit test without a mock?
Hey Prime, not usually a commenter but I think this hate towards mocking is misleading to newer devs. Overall love the advice on code structure. Would really appreciate a response to the below :)

In the example in the article there is a critical piece missing about how to glue things together. This is a transitive problem at each level of abstraction (normally several of these in any large code base), and these glue layers often contain small bits of logic themselves. Also, not all dependencies are deterministic (i.e. http timeouts, db trx contention, etc.) and it can be useful to model these situations to verify your code behaves as expected (i.e. propagating the right errors or retrying a certain way).

For example, consider a common API handler function:
1. Stateless request validation (pure function)
2. Fetch some data (function with network dependency)
3. Stateful validation (pure function)
4. Example twiddle: some type mappings between request domain and data domain (pure function)
5. Store some data (function with network dependency)
6. Map stuff to a public facing response (pure function)

This "glue function" has logic as to how it fulfils these steps and deals with potential failures (i.e. if we fail a validation step, further functions should not be called). When it comes to testing this, I want to test the wiring logic WITHOUT rehashing the behaviors of each of the dependent functions. I could do this by "injecting" my various dependent functions as well-typed parameters and then "mock" the behaviors such that I can ensure certain branches get hit. This is all decoupled from the implementations; I can change the validation logic and it doesn't change the test. Call it mocks and DI or call it function composition, it's all the same at the end of the day.

You might argue all the glue branches get covered by integ tests, but this is almost never true at scale. These UTs are so easy to write too and in my experience have huge ROI for mature code bases.
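A minimal Go sketch of such a glue function, with function-typed dependencies injected as parameters (every name here is invented):

import (
	"context"
	"errors"
)

type Req struct{ Email string }
type User struct{ Email, Plan string }

func CreateUser(
	ctx context.Context,
	req Req,
	fetchPlan func(context.Context, string) (string, error), // network dependency
	store func(context.Context, User) error, // network dependency
) error {
	if req.Email == "" { // pure validation
		return errors.New("email required")
	}
	plan, err := fetchPlan(ctx, req.Email)
	if err != nil {
		return err // a test injects a failing fetchPlan and asserts store was never called
	}
	return store(ctx, User{Email: req.Email, Plan: plan}) // pure twiddle + store
}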
3. Stateful validation --- This is where you messed up. Parse, don't validate. Especially if you're doing HTTP requests to third-party services on the server side for some reason. This makes your "wiring" completely deterministic and testable. 5. Store some data --- Testing this is pointless, if a third-party dependency fails there's nothing you can do about that. The best approach is to branch the computation into two deterministic functions and test them both in isolation.
@@andrueanderson8637 You have a different understanding of "parse" and "validate" than I do. Could you explain the difference?

>Especially if you're doing HTTP requests to third-party services on the server side for some reason

Yes I do; how else would you save data in the user's country first to follow GDPR? Do everything asynchronously (obviously a bad idea)? What if you're using external secret managers or anything that is complex and you don't want to write your own code for?

>if a third-party dependency fails there's nothing you can do about that

Yes you can. You can use an alternative mechanism on failure (instead of logging to monitoring services you write to a file), you might retry, or anything else. Most of the time those are just unimportant details, but in some cases they're important. Things are not always deterministic unless you live in Haskell land.

>branch the computation into two deterministic functions and test them both in isolation

How would you do this with saving info to a database? Anything that deals with the real world won't be deterministic, unless by deterministic you mean "I know what happens if saving fails and I know what happens when saving works and both cases are tested".
Totally agree with this: you want to test things without relying on your dependencies, which, in integration envs, are often broken. Of course you want integration tests, but that doesn't mean you don't want functional tests that exercise your service's functionality with all your deps mocked. The example in the article is too trivial; even if you can relate for simple unit tests, it doesn't work once layers of abstraction start to grow. And btw it's 100% better (for dev confidence) to have functional tests that work 100% of the time with mocks than integration tests that work 85% of the time because of network/database/other-service issues independent of your service.
Yeah, I don't get the hate for DI & mocking. In my experience a sufficiently large codebase will have things with tons of dependencies. The only feasible way to test them is using DI & mocks. Mocking is akin to doing an experiment where we assume everything else is correct & test with that assumption in mind.
100% agree! Mocks are a great way to fk up your test suite. Where I work, my boss mocked all his internals and taught the rest of the engineering team to do it. I never gave in. After he left I was finally able to convince the team to stop doing it. Your tests should describe a narrative of how the code behaves, not how it's implemented.
How would you approach testing a function with multiple dependencies, where the code execution takes different paths based on various conditionals? The function's behavior might change if, for example, a validation step fails or an HTTP request encounters an error, so some further code may not be executed while other code still is. I want to ensure comprehensive testing to cover all possible scenarios. Given the complexity and potential branching, how would you design tests to verify that the function behaves correctly in different conditions? I don't think it's possible to achieve this without relying on mocking.
Hello Prime, I don't know if you are ever going to read this, but as someone who just finished two years of training for this job in Germany and is trying to find her way through all this content, you are a great help. As a trainee I didn't learn much programming; I was basically just prepared for the exams, and that is about it. Your content helps me. If you or anyone here has some kind of advice on what to learn first and what I need to know, it would be sooooo helpful.
I was taught mocking in school, and it made a lot of sense to me. You want to decouple the code in tests. If module A depends on module B, you unit test B, then mock B for A's unit tests. That way, if a dummy breaks B, only B's tests fail, and it's way easier to find the bug than if all tests fail together. Also, mocking is a necessity in Test Driven Development. Not saying whether TDD is a good idea or not, I really don't know. It's a good exercise though.
It depends: if your assert at the end is "function X was called exactly 2 times with the following parameters", then it doesn't bring that much value. I prefer to mix in a bit of functional programming - all the "logic", like validation and actually modifying the data, lives in public static functions (not async, if your language supports async/await). Async means you are doing something - a db call, network call, etc. You unit test the pure functions, without any mocks. Then you integration test the big async functions that call the db (TestContainers are awesome for that), and only check that the output matches the expectations for the input - nothing else. That way you know it works, and you can easily change the implementation, while the unit tests keep the internal contracts intact.
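A tiny example of that split (pure function, mock-free test):

import (
	"errors"
	"strings"
	"testing"
)

func NormalizeEmail(s string) (string, error) {
	s = strings.TrimSpace(strings.ToLower(s))
	if !strings.Contains(s, "@") {
		return "", errors.New("not an email")
	}
	return s, nil
}

func TestNormalizeEmail(t *testing.T) {
	got, err := NormalizeEmail("  Bob@Example.COM ")
	if err != nil || got != "bob@example.com" {
		t.Fatalf("got %q, %v", got, err)
	}
}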
@@Qrzychu92 There is (like everything) some validity in asserting that a method on a mocked property is called exactly X times... not all the time, but sometimes.
If "B" breaks, that should have been caught by B's tests before it was even submitted. If it wasn't caught, it's good your tests are catching it and now blocking your release of a broken product.
If you have to use TDD, just write an integration test rather than a unit test. I don't understand why TDD advocates don't seem to think that's a good idea, or don't advocate for it over unit tests where it is useful.
Many great points from you, the article, and many comments, but I don't think mocks are a problem *if* you prefer, as I do, to test only public interfaces. In some cases those, while exercising the underlying code, will need to call outside the library, which shouldn't happen in a unit test. Mocking should be done only when necessary, but it is sometimes necessary.
I wanted to skip the video when I heard "never mock", but didn't... and in the end the final thought was not to use mocks for integration tests. And here I 100% agree with you. But I do think that mocking is a powerful tool for state testing. And to be honest, you need to test the contracts and the states; if both are tested you don't even need integration tests.
As someone who uses C# professionally: why the fuck does someone want exceptions and try/catch in Go? Errors as values are far superior to a debugger jumping into a catch block at the slightest hint of trouble.
@@angelocarantino4803 Yeah basically. I am heavily using Rust nowadays and Options are great. Though in the context of Godot Rust I need to use expect() instead of unwrap() since the debugging can be pretty bad there. I have learned my lesson and will now use expect instead lol
Remember the old adage, “Use the right tool for the job.” Mock is just another tool in the development and testing toolbox. The bigger problem in the examples was the poor design of the system, coupling data validation to the save function.
Here's how I prefer to test my backend handlers:
1) I use Docker (specifically docker-compose) to run an ephemeral database just for tests.
2) I create the database and run migrations at the start of the tests, and then destroy the container when they finish.
3) I test HTTP requests to a handler, which allows me to refactor the inner logic of the requests later.
It's not exactly unit testing, but it works fine, especially in the microservice world.
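Roughly, in Go, assuming docker-compose already exposes Postgres on localhost:5433, and assuming hypothetical Migrate and NewRouter helpers:

import (
	"database/sql"
	"log"
	"net/http"
	"net/http/httptest"
	"os"
	"strings"
	"testing"

	_ "github.com/lib/pq" // one possible driver
)

var testDB *sql.DB

func TestMain(m *testing.M) {
	var err error
	testDB, err = sql.Open("postgres", "postgres://test:test@localhost:5433/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if err := Migrate(testDB); err != nil { // hypothetical migration runner
		log.Fatal(err)
	}
	os.Exit(m.Run())
}

func TestCreateUserHandler(t *testing.T) {
	srv := httptest.NewServer(NewRouter(testDB)) // hypothetical http.Handler constructor
	defer srv.Close()
	resp, err := http.Post(srv.URL+"/users", "application/json",
		strings.NewReader(`{"email":"a@b.com","password":"longenough"}`))
	if err != nil || resp.StatusCode != http.StatusCreated {
		t.Fatalf("resp=%v err=%v", resp, err)
	}
}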
@pycz I think this approach is pretty standard in e.g. Django ecosystem, because there are a lot of places using a database (Models etc.). That line between integration and unit-testing is very thin in such cases.
That's just a normal integration test. Unit tests test your units of code. Integration tests cover code that depends on external units (the database) whose internals (the database engine) you cannot test yourself.
@@QckSGaming I realize in recent years that most people in this industry don't even understand the basic difference between a unit test and an integration test.
The one area I like to use mocks is to put the code in an unexpected state and see if it handles the error and gives a reasonable error message. If the program is logging errors, then I will mock out the logger to catch the error directly and compare it to what is expected, rather than actually write the error to the log. The other area is when I have to fix legacy code and I need to mock out the stuff that is irrelevant or that creates expensive connections. Sometimes you can break the code out into a separate function and not have to mock it, but I have run into cases where that's not possible without a major refactoring of the code.
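For the logger case, a hand-written spy behind a small interface is usually all it takes (a sketch; processBrokenInput and the expected message are invented):

import (
	"fmt"
	"strings"
	"testing"
)

type Logger interface {
	Errorf(format string, args ...any)
}

type spyLogger struct{ msgs []string }

func (s *spyLogger) Errorf(format string, args ...any) {
	s.msgs = append(s.msgs, fmt.Sprintf(format, args...))
}

func TestLogsUsefulError(t *testing.T) {
	spy := &spyLogger{}
	processBrokenInput(spy) // hypothetical code under test, takes a Logger
	if len(spy.msgs) != 1 || !strings.Contains(spy.msgs[0], "unexpected state") {
		t.Fatalf("expected one descriptive error, got %v", spy.msgs)
	}
}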
I think mocks are useful when your functions make calls to third party things. Like for example if you have a route that does some stuff and makes a call to an email client. You have to mock the email client response because you shouldn’t actually be calling a third party service in your unit tests.
Ideally you shouldn't be calling random third-party things directly in the backend call; these things can be scheduled with a message queue, and then the test only needs to check that an email-notification event is in the test queue.
@@samanthaqiu3416 Not always. There are lots of scenarios where you need to call third-party services because you need data from them that is later used in your func. You can't put that in a queue.
I guess in that case shouldn't the data manipulation be separated out into its own function? That way the main function would be 1) preprocess data logic, 2) third-party call, 3) post-process logic. That way you can actually unit test 1 and 3, and save running the full function in a test environment for the integ tests.
I only mock things that make HTTP requests to external things like Google services. 99% of the time you don't need a real response from Google to test. Everything else is filled with fake/incorrect/empty data, with a real database.
@@alanonym8972 if their APIs aren't versioned or they make a breaking change without announcing it months prior, your production will crash regardless of your tests. So there's no reason to test 3rd-party services.
@@spicynoodle7419 I mean, it does not necessarily crash; it might just not work as intended. It is also not necessarily a bug that will be detected immediately by your users, or maybe they won't report it for some time. I have spotted a lot of bugs before my users do (or at least before they decided to report them). I feel like it is much more important to contract test things that are likely to change rather than, say, the file system.
Keep in mind, that while using SQLite for testing is great, it does not share all features with postgres. So then you have integration tests, which use different database than your production environment. I would sleep better at night, if the testing DB engine is the same as the production one :)
The lesson here is not that mocking is bad. The lesson is don't mix high level policy code with infrastructure code. High level policy code pairs well with unit testing, dependency inversion, and fakes of various kinds, including mocks when appropriate. Infrastructure code pairs well with integration testing and using real, as close to production as possible, collaborators.
Someone claiming to have "fixed" how any language does its error handling is hilarious. The hoops some people invent and then jump through just so they can use something they are more familiar with never fail to amaze. I had to work with Mocha/JavaScript unit tests once, and they connected to and manipulated a MongoDB. The DB setup and cleanup after every test made up at least 75% of the test code. It would have been a lot faster to just set up a local Docker container and do a quick reset before each test run (it's like 2 bash commands). But nooooo, none of the other devs knew how to use Docker, so it was not allowed.
yikes this is why i love sqlite, you can use a file for testing which means that you can just cp a test file which has the perfect database setup and use that for integration / e2e tests. that is my fav :)
Tests don't test current code. Tests test future changes to your code. Your tests are bad if your mocks are able to hide future code changes. However, you can use them to also assure that future code changes don't break the expectations of current code.
The main problem is that people just don't realise what exactly they can test with different kinds of tests. It's damn useful to have unit tests on sql queries using mocks (sql.Mock) to test marshalling and unmarshalling of structures, to test passing parameters in queries. It's just hilarious how many silly bugs, panics, etc. there can be at these levels. But that only tells you that the data flow in your code seems to be correct. Apply encapsulation and be fine.
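Assuming "sql.Mock" means something like DATA-DOG/go-sqlmock, the shape is roughly this (GetUser is a hypothetical function under test):

import (
	"testing"

	sqlmock "github.com/DATA-DOG/go-sqlmock"
)

func TestGetUserScansRow(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()
	mock.ExpectQuery("SELECT id, email FROM users").
		WithArgs(int64(1)).
		WillReturnRows(sqlmock.NewRows([]string{"id", "email"}).AddRow(1, "a@b.com"))
	u, err := GetUser(db, 1)
	if err != nil || u.Email != "a@b.com" {
		t.Fatalf("u=%+v err=%v", u, err)
	}
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Error(err)
	}
}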
I like to write in-memory repositories to "mock" the database, but I'm actually using them in "prod" as well when it's only a prototype. The key idea is that I can run a bazillion tests within a second or three, but if I want to test against a real database - also no problem, it just takes a little longer. For APIs my team also wrote an in-memory service to pretend it's the real service. Usually these in-memory things are enough, but several times the real database and real API discovered some bugs. We run all of the tests against the real things before merging to master, but the in-memory ones are used in development and give us instant feedback. That's the compromise, I believe.
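The shape of those in-memory repos, for anyone curious (all names invented):

import "sync"

type User struct{ Email string }

type UserRepo interface {
	Save(u User) error
	ByEmail(email string) (User, bool)
}

// MemUserRepo satisfies the same interface as the SQL implementation.
type MemUserRepo struct {
	mu    sync.Mutex
	users map[string]User
}

func NewMemUserRepo() *MemUserRepo { return &MemUserRepo{users: map[string]User{}} }

func (r *MemUserRepo) Save(u User) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.users[u.Email] = u
	return nil
}

func (r *MemUserRepo) ByEmail(email string) (User, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	u, ok := r.users[email]
	return u, ok
}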
90% of our test code is integration tests, and CI has to install MariaDB in the container. The main reason is that the code is ass, but the second is that there's tons of MariaDB-specific code that would simply fail on Sqlite or H2 and wouldn't be tested at all if we used mocks for everything.
Beeceptor has been a godsend for mocking, debugging endpoints and timeout problems, testing adjustments pre/post production, CORS support, customized dynamic responses, and mock serving for OAS specs. It's fantastic.
I would say there is one case where mocking is both trivial and definitely should be done: In Rust when parsing messages from some kind of data stream, when it is written correctly, it is easy to just get the test data and put it into a Cursor. That way you can just unit test the logic without having to do anything complicated.
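(The same idea works in Go, for comparison, with bytes.Reader standing in for Rust's Cursor; the two-byte version header here is invented:)

import (
	"bytes"
	"encoding/binary"
	"io"
	"testing"
)

func ParseVersion(r io.Reader) (uint16, error) {
	var v uint16
	if err := binary.Read(r, binary.BigEndian, &v); err != nil {
		return 0, err
	}
	return v, nil
}

func TestParseVersion(t *testing.T) {
	v, err := ParseVersion(bytes.NewReader([]byte{0x00, 0x02}))
	if err != nil || v != 2 {
		t.Fatalf("v=%d err=%v", v, err)
	}
}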
I love mocks in unit testing. If you use them properly, you can test how your code should behave when different values are returned from mocked dependencies.
I find the "abuse" of mocks an issue. But I do see scenarios where you simply don't have the real thing to test or you do have an unstable environment with no valid test cases for you to check contour scenarios. I was one of the guys who worked with a code coverage target. At a project I even got at 100% code coverage. I still look at code coverage, to see "hey, there is this scenario that we totally did miss in our tests". And there are situations when what we need is a failure. Which in some situations may be difficult to test. Such as if the code itself is a wrapper around a real database that is supposed to handle different database responses, or a production webservice that we cannot really mess with or the customer doesn't have a stable uat environment, or we lack access to that environment at some point. In that case, either we create a mock server. Or stop development totally. At this moment mocks comes in handy. IN the video itself, there is a mention on "using an SQLLite" database. Is it the real production database? No, right? I do consider that a mock. I work with developing a framework that other people uses to make real life integrations. I don't have nor want access to the customer database or services, nor do I want to test them. But I do want to receive a json payload or an invalid json payload os some times sobre issues with payload content to test how my framework reacts to that situation. For these there is no way around: I need a mock server. In short: I think mocks have a niche use. At least at some steps of development.
@@d_6963 Exactly. When the feature to be tested is just how to handle a database response from a customer who has that mainframe-based Oracle database, is paying millions for it, and has sensitive personal data inside it, we can't just say, "Hey, I need to introduce an outage in that database of yours so I can run a test. Short thing, like a downtime of 2 minutes, some 20 times per day. Is that ok?" 😁
You could argue the stub vs mock difference, but I think that would be splitting hairs. I agree though, that most of the hate for things comes from weird management rules.
@@HrHaakon I think it's just a bunch of misnomers, but if you describe the behavior of the test double, the context makes it easy to find out whether it's a stub or a fully fledged mock object.
This is not a great take. tl;dr version; Mocks are good. The way people abuse mocking libraries is bad. What Prime is recommending "instead of mocking" is literally mocking. By the dictionary definition. And also by what you will be taught in any book about mocking, or from any of the people who developed these techniques. What we have here is what we pretty much always get in this industry: another programming cargo cult. People claiming to be using techniques they neither know nor understand, and the thing gets equated with whatever horrible nonsense people are doing just because they falsely claim to be doing the thing. A mock is supposed to be an input that you can run your code on which satisfies the interface you're programming to in a very simple/degenerate way. You absolutely should program to simple interfaces. That is the way to do testing and also to do design. The reason "mocking" sucks is because people have developed extensive "mocking libraries" which allow them to quickly and automatically produce objects that satisfy interfaces *without reducing the complexity of the thing being mocked*. The mocks have extensive, complicated behavior that makes the tests hard to reason about. Because the programmers don't redesign their code to simple and elegant interfaces, they instead leave the code complex and reach for some giant mocking tool so they can carefully orchestrate their rube-goldberg test. Whether he realizes it or would use the same terms or not, what Prime was recommending about separating the fiddling bit from the bit that fetches the stuff to fiddle *is in fact the essence* of unit testing and mocks in the actual definition of these terms. He's saying instead of writing input to the whole program, like a filename or an id for a row in a database, instead you put an interface between the fetching and the fiddling, and then you test the fiddling code with *a test thing substituted in for what in production will be coming out of the database*. Since "a-test-thing-substituted-in-for-what-in-production-will-be-coming-out-of-the-database" is very long and cumbersome to say, we introduce the word "mock" and define it to refer to the things you fiddle on in tests. Mocks are good. Mocking libraries are bad. If you have to reach to a mocking library to write a test, simplify your interfaces so you can write the mock by hand simply.
Was a long scroll to find exactly what I was thinking. Nowhere in the video was mocking proven to be bad. Instead I saw a weird tangent about how people don’t understand features. The SQLite example is an advanced form of mocking in my mind. Taking some library that bruteforces a mock is ridiculous, people do that..?
Prefer (in-memory) fakes over mocks any day. You should invest in a db fake, and then you can test the write and the read (100 percent of the backend) in unit tests like this: write op 1, write op 2, write op 3, read, assert output. Of course, in addition to the advice peddled here to break out the fiddling into isolated functions to test.
Oh, and I would also add that many devs do not know how to write INT tests responsibly. A good integration test should leave no trace, or at least try to leave no trace it was there, and not cost you tons of money by connecting to a third-party API or hammering a database when you don't need to. Unit tests at the base of the pyramid help ensure the integration tests on top are minimized and as unintrusive as possible. Or am I crazy in thinking you can have unit tests and int tests, with and without mocks, and be perfectly fine?
In agreement on mocks. However, my employer requires us to have 80% "code coverage". It does not matter to them that it's type strict and compiled. Since the "fiddly" bits are not 80% of the code base, I am either forced to mock, or add a file with an unused function that does nothing but exceeds the length of the rest of the project so I can fulfill an arbitrary metric for management. I am also in the camp of "unit tests are also still code, equally prone to errors, and often more lines than the actual code they are testing".
>It does not matter to them that it's type strict and compiled Well obviously, why would it? Just cuz the app is type-safe doesn't mean it's behaving correctly. By that logic, all apps written in java are bug-free : )
@gentooman When your objective is "code coverage", then behavior validation and logic error checks need not apply. If it compiles, we know it is free of syntax errors, which, with the exception of some logic errors, means the code would execute. You might inadvertently find logic or behavioral errors in the process of achieving higher code coverage, but that's no guarantee. I'm not against unit testing, but I think it offers significantly less value for compiled, type-strict languages. Tests are also code, and if you mess up the implementation you can't expect your behavior-checking code to somehow be less error-prone. Even when checking inputs and outputs there are logic errors you may not catch.
@@gentooman I'm an architect and tests are usually a net negative, especially if one is chasing arbitrary code coverage. Why don't you test your unit tests? 😂
Imo, test writing is a team leader/project manager's job. Replace design briefs with actual specifications in the form of well-designed tests. It would make it a lot easier to teach juniors too.
At a company I used to work for, we tested everything, even logs. Currently we only test functionality that means something, has a purpose, and needs to return/receive/modify values in a certain way. Now I am not sure the author is talking about mocks. A mock is there to remove the need for a dependency in classes that have one, while still preserving what a function inside it will do (you can write whatever you want as the functionality the mocked class provides). In a way, you can write a mock that behaves exactly the same as a class you obviously wrote unit tests for and know works properly. Of course testing code in its natural environment, some integration or interactive tests, can be done on top of that, but I usually find mocks and unit tests combined with a CI pipeline good enough, and I don't face unforeseen issues (if the tests and mocks were written correctly). You should always do stuff like stress tests etc. if your application needs to handle certain traffic, and prepare for it beforehand.
I love external service mocks in integration testing. I am using boto, localstack and docker compose to mock databases and aws service apis. This way I can develop a lambda function, ecs service or whatever locally and directly integrate it with surrounding services with the iteration time of a few seconds. It is awesome as it speeds up development significantly, makes me confident that the code and infrastructure code will run fine in cloud environments (yes you can run terraform or aws cdk against it too) and it is a clear working documentation of the API and how other services interact with it. It can even be used to test IAM policies locally without a single deploy to AWS!!!!!1111
I didn't think code coverage is problematic until I ran into a very pleasant and necessary test in a Django project.. basically the framework has a reverse() function, which takes the name of a view and returns its url, as configured in the routing layer, and a resolve() function, which does exactly the opposite of that. The unit test was taking 3 screens "testing the urls", ensuring that resolve(reverse(x)) == x. I'm so glad the url configuration had 100% coverage, I don't know how I would've rested if that test wasn't present in it.
Putting code that fetches data into a separated function is a double-edged sword, because some people don't care about the underlying code, so instead of writing their own SQL query that does a join, they use your "abstraction functions" and overfetch data af. Personally, I prefer always fetching in the function that handles the API request. All other functions that fiddle with data can be on their own, but once I'm passing a db connection to a function, something, somewhere went terribly wrong xd
And that’s why you don’t “over expose” your functions. The only functions callable should be the functions that operate exactly as intended. i.e. don’t make every function public. It’s okay to have your logical bits separated, and then create a public function that chains them together in the correct order. This documents how it is supposed to be used, and prevents misuse.
One thing not mentioned: the "integrated function" returned both an error for bad user/passwords and for database problems in the same variable... This is criminal, but it's also the kind of code a try/catch pattern encourages. Think about what the code looks like to handle the error of this function (lifted from the article):

func saveUser(db *sql.DB, user User) error {
	if user.EmailAddress == "" {
		return errors.New("user requires an email")
	}
	if len(user.Password) < 8 {
		return errors.New("user password requires at least 8 characters")
	}
	hashedPassword, err := hash(user.Password)
	if err != nil {
		return err
	}
	_, err = db.Exec(`
		INSERT INTO users (password, email_address, created)
		VALUES ($1, $2, $3);`,
		hashedPassword, user.EmailAddress, time.Now(),
	)
	return err
}
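One fix that keeps errors-as-values: a sentinel the caller can branch on, so validation failures stop sharing one undifferentiated variable with DB failures. A sketch (ErrValidation and the split are mine, not the article's):

var ErrValidation = errors.New("validation failed")

func saveUserChecked(db *sql.DB, user User) error {
	if user.EmailAddress == "" {
		return fmt.Errorf("%w: user requires an email", ErrValidation)
	}
	// ... hashing and the INSERT as above; DB errors come back unwrapped ...
	return nil
}

// caller: if errors.Is(err, ErrValidation) -> respond 400;
// any other non-nil err -> 500, retry, alert, etc.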
Depends on how much actual logic you have in there. In the case of an HTTP backend with a database, I usually skip the unit tests altogether and test externally against the API, which drops the need for logic separation. The main problem here is usually that you have external side effects which require context knowledge that you should not have in a unit test. It's just not worth writing testable code if the unit test will cover 5% of the possible errors and you have to write an external test anyway, which, when run against the API, can also test correct parameter processing, correct return values, correct permission checks, and possible side effects (e.g. a missing WHERE clause in a DELETE statement). When actually mocking, do it right, meaning: the mock must mimic the behaviour towards the caller exactly like the original object does. But in general I agree with you; avoid mocks when possible, they can be tricky to get right.
I’m learning like totally 4 months in at this point and your videos are so advanced but still helpful. Just wanted to let you know that some of your viewers are totally ignorant of most of the shit you say 😂 but anyway I’m trying to understand Jasmine now
I sometimes need to unit test my sql and Testcontainers is an excellent option to do that. Plus, mocking external dependencies is important. Example: Calls into your cloud provider. Mocking AWS SQS and asserting the data sent to it is extremely important. Internal dependencies? You need to have a good reason to have an interface and mock for an internal dependency. A real good reason because you usually won't find that reason. Mocking internal code usually just reduces your code coverage and forces you to write more tests. Why?
I will say that using property tests is far less code than manual unit tests and it is general enough to cover unit tests to mocks. The only time example based makes sense is end-to-end or if you have specific cases that need to be checked that are extremely hard to reach randomly. If you are still a JS dev check out fast-check.
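Go's stdlib has the same idea in testing/quick; a self-contained toy property:

import (
	"testing"
	"testing/quick"
)

// Byte-wise reverse; reversing twice must round-trip for any input.
func Reverse(s string) string {
	b := []byte(s)
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
	return string(b)
}

func TestReverseRoundTrips(t *testing.T) {
	prop := func(s string) bool { return Reverse(Reverse(s)) == s }
	if err := quick.Check(prop, nil); err != nil {
		t.Error(err)
	}
}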
The problem is not even mocking in general (it's just a symptom), but the stupid ass way of writing tests for implementation details, instead of units of expected application behavior, and the absolutely insane obsession with microscopic testing that provides almost zero level of confidence that the actual expected application behavior is satisfied. With the cherry on top that after writing all those pointless idiotic unit tests you will never want to touch the (probably shitty) code and improve it ever again, because then all those tests written against the implementation details will break, even if the application behavior is implemented perfectly. And you can quite reasonably extrapolate this effect into the codebase at large... Leading to a frozen architecture that won't be able to evolve iteratively to follow the changes of the domain and the improvements of the team's understanding.
Great that you separated out your validation logic and wrote unit tests for the validation logic. But how do you validate that your http handler is actually calling validation logic at all? Surely not as part of your end-to-end tests?
But what if the "filddling" part requires querying the db for example? Maybe I need to validate something against some configuration or value not already in memory. Even if I'm not making external calls, I might call some other internal module which I then need to isolate from the system under test. Domain logic can still have dependencies which need to be mocked. What am I missing here?
If it's actually a backend that has to serve responses ASAP, you have 3 things besides the fiddling logic: existing data, an update info, and what needs to be updated. All can be represented by simple structures. (Existing data + update info) feed into the fiddling logic, which spits out (what needs to be updated). What needs to be updated is a structure; you can test it. Or you can run all the necessary queries to do what the "what needs to be updated" structure says. There are always exceptions of course, and that's normal; in those cases mocks are fine. All absolute "rules" in development are bad.
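A sketch of those three structures with a pure middle step (all names invented):

type User struct{ Email, Name string }

// Plan is the "what needs to be updated" structure.
type Plan struct {
	Insert []User
	Delete []string // emails
}

// Pure: existing data + update info in, plan out. Unit-test this directly.
func Reconcile(existing, incoming map[string]User) Plan {
	var p Plan
	for email, u := range incoming {
		if _, ok := existing[email]; !ok {
			p.Insert = append(p.Insert, u)
		}
	}
	for email := range existing {
		if _, ok := incoming[email]; !ok {
			p.Delete = append(p.Delete, email)
		}
	}
	return p
}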
@@OrbitalCookie okay, so domain logic or "fiddling" would need to keep no dependencies of its own and get everything it needs via function arguments? So it would be made up of pure functions with no state. That's fine but then the definition of domain logic from the article no longer applies since there may be an arbitrary number of abstraction levels within it that have their own separate fiddling and side effect parts. So if you still want to avoid mocking you would only be able to test the pure parts but not the stateful parts that use them, since those would be considered integration tests, even though they're part of the domain logic, leaving holes in the test coverage at different levels. I would just avoid mocking that validates against implementation, to avoid coupling the tests to it, but stubbing dependencies within domain logic should be fine imo.
This approach is fine, until your pure logic _depends_ on data that comes from the DB. For example (in TS):

function saveUser(user: User, db: Database) {
  let sameEmail = db.getUserByEmail(user.email)
  if (sameEmail) {
    throw new Error("Email already in use")
  }
  if (user.password.length < 8) {
    throw new Error("Password is too short");
  }
  /* more validation */
  db.saveUser(user)
}

When this happens, your unit tests end up reimplementing the code inside `saveUser`, creating a dependency on the implementation. If you can, you always want to write pure functions, but sometimes there are hard logic dependencies that you cannot just ignore, otherwise you end up coupled to the implementation. Using mocks, you can actually test the expected behavior without this coupling.
Another situation (I've been there when writing Haskell) is that in order to keep the business logic pure, you end up defunctionalizing the codebase, separating what should be done from the actual execution. This makes things far more complicated (involves writing an interpreter, defining all possible actions in form of data types, etc) than just using mocks.
I'm a total nobody who writes garbage code - slowly. A 0.1x developer. I like unit testing because I'm not smart enough to hold all the other dependencies in the working memory in my head. Must be nice to be smart enough to not have to mock everything... 😔
I am beginning to wonder if Prime's self-professed knowledge of development is actually pretty narrow. If you're throwing around blanket statements like "never mock", then perhaps you've worked on simple systems that never required mocks in the first place?
Meanwhile I had the joy of finding some tests in the java codebase at work where the database is mocked, and the database mock returns a mock of a plain java object. Sometimes I wonder where people got the idea that the new keyword is an antipattern
At a guess, the constructor/factory had some logic in it that talked to something and used the results to build the object. Is that stupid? Yes. But that's what a lot of people did. Hey, a constructor is just a function, right? The opposite idea is called IoC or DI: if your object needs, say, a DAO (those things that prevent @ThePrimeTimeagen from having to make 10kloc pull requests because a data service changed), you go and get the DAO and then hand it to the constructor. The constructor does not go out and do things. The object, after it's constructed, should (to the extent it's reasonable) not go and find things to talk to. Those things either get injected into it (through, say, the method call, or the constructor if they're used many times), or those things go and talk to the object. The whole injection-framework thing should not be confused with the concept of DI as a coding style. They're two very different things.
@@HrHaakon Not at all, in this case it was a simple Spring Data JPA repository like

public interface SomeRepository extends CrudRepository<MyEntity, Long> {
    public MyEntity findBySomething(Long value);
}

public class MyEntity {
    private Long id;
    private String someProperty;
    // Setters and stuff
}

And a test like

void test() {
    var repo = mock(SomeRepository.class);
    var entityMock = mock(MyEntity.class);
    when(repo.findBySomething(anyLong())).thenReturn(entityMock);
    when(entityMock.getSomeProperty()).thenReturn("Hi!");
    var service = new MyService(repo);
    var entity = service.getById(17);
    assertEquals("Hi!", entity.getSomeProperty());
}
@@WindupTerminus your example uses DI (repo is injected into service instead of service creating repo via new Repo) and not the new keyword, so not really sure what you wonder about.
9:08 you should not write mundane tests. Arguably you shouldn't write out simple data validation, but instead use some kind of schema validator akin to json schema or protobuf (yeah, protobuf is secondarily a schema validator). Perhaps the example was made simple for ease of reading, fine, but I have seen a lot of code written like that and I would maintain that if it's as simple as what's there, it should not have dedicated tests. On the other hand, I completely agree with how bad mocks are. I absolutely agree integration tests are very useful.
How would you approach testing a function with multiple dependencies (database service, logging service, HTTP request service, etc.), where the code execution takes different paths based on various conditionals? The function's behavior might change if, for example, a validation step fails or an HTTP request encounters an error, so some further code may not be executed while other code still is. I want to ensure comprehensive testing to cover all possible scenarios. Given the complexity and potential branching, how would you design tests to verify that the function behaves correctly in different conditions? I don't think it's possible to achieve this without relying on mocking those dependencies, or at least I can't figure out how.
This is why we should follow community conventions on things. Don't fundamentally change how something works, pls. Bill needs to change jobs and work in a language that isn't Go.
I'll also argue that you need to verify everything (pure functions) is called, and in the order you expect... because it is great to UT, but it's also great to check that what you tested is actually used. And for this you need to mock your pure functions. In short: yes, when you mock too much, something smells in your code.
At my job, we work mainly in Rails... our biggest mantra is, "if you find yourself fighting rails, you're probably doing something wrong".... this is what I think of when I see someone circumventing a Go feature to add.... a tryCatch function? That just sounds awful.
If a database is part of your tests, then your tests are wrong. The database has already been tested; you don't need to waste infrastructure on testing a database. Mocking a database call should be straightforward and obvious. A test that completes in a microsecond is the test that actually gets run on every commit. A test that takes 2 seconds is the test that gets forgotten until it fails in CI/CD.
I don't see any problem with using a mock in the described example. Every time you use a mock in your test, you just have to be aware that the mocked part ALWAYS has to be tested in a separate test. Whether you use a mock or design your code so it doesn't require one (e.g. extract the problematic part from the code, as proposed in the example), in both situations you are avoiding using the problematic part of the code in your test, so the outcome is the same, and which solution you choose doesn't really matter. I advocate distinguishing the "impure code" (= connections with external systems) from the "pure code" (testable, e.g. data transform functions), but in some situations (e.g. in OOP) you just need mocks.
I'm brand new to go, learning it now thanks to Prime's videos. Hadn't heard of Naked returns yet. I googled it after he mentioned it because I couldn't tell what they were from that brief mention. I literally stood up in my chair when I read the documentation. I don't like that. I will not be using that feature lol
Mocks are the part of that fiddling function you don't test. Usually that function is not just a map; it has some logic, needs to fetch data from a dependency only in specific cases, and stuff like that. You can solve it with function composition or delegates too, but in the world of dependency injection we live in, I believe mocks are pretty useful. Why would you test again a dependency that is already tested? Why would you take on the burden of initializing everything that dependency needs in your test again? Just mock it and test the fiddling part; it's a couple lines of code versus many lines of initialization and the danger of tests overlapping each other. When someone says that overused phrase "if you mock you are not testing anything", I always ask if they are doing functional programming or what kind of dependency inversion they are using. If they use dependency injection, that sentence makes no sense to me at all.
I feel like people misunderstand the point of a unit test. It's not supposed to test the real-world use case; it's a sanity check for your small unit of code. It should be small and quick to execute, and catch unexpected edge cases or breakage when the 'unit' is modified later on. It's only the first step of testing.
Then how would you structure your codebase? File structure and MVC logic? Please inform me on what's a good way to develop, let's say, a Rust HTTP service.
This absolutely 👏🏿👏🏿👏🏿 As soon as I learned about mocks, I intuitively thought something was off about them. I came to see limited value in them while using a very difficult coding framework (AWS SWF's Flow Framework), but outside of that I saw a bunch of devs pretend that they needed to test that database and other remote-call parameters were set exactly. If anything changed out of necessity while the API stayed functionally the same, you broke the test for sure, because it was validating the internal procedure, not the behavior. The most substantial benefit I see to mocks is when you need to test exception-handling behavior. With a real database or remote connection, testing against unstable or failed network conditions is too annoying to simulate. You can have a remote call trigger an exception a number of times before returning, to verify that internal retries are happening too.
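That last trick, sketched in Go with a hand-rolled flaky fake (all names invented):

import (
	"errors"
	"testing"
)

type User struct{ Email string }

type saver interface{ Save(User) error }

// flakySaver fails twice, then succeeds.
type flakySaver struct{ calls int }

func (f *flakySaver) Save(User) error {
	f.calls++
	if f.calls <= 2 {
		return errors.New("transient: connection reset")
	}
	return nil
}

func saveWithRetry(s saver, u User, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = s.Save(u); err == nil {
			return nil
		}
	}
	return err
}

func TestSaveRetriesTransientErrors(t *testing.T) {
	f := &flakySaver{}
	if err := saveWithRetry(f, User{}, 3); err != nil {
		t.Fatal(err)
	}
	if f.calls != 3 {
		t.Fatalf("want 3 attempts, got %d", f.calls)
	}
}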
Same thing happened here! There's no way to mock the Snowflake data warehouse, so some people said "why not let the unit tests access a toy Snowflake account in the cloud?" And then it was done that way.
I would focus on integration tests, and only unit test when you have to. Mocks aren't production code and require maintenance, so you only want to use those when you can't avoid it.
Yeah, basically most backend code is like this: Controller -> BusinessLogic (some people call this 'Service'; just call it BusinessLogic ffs) -> Repository (database or another API). You only unit test the BusinessLogic. An integration test can cover the entire path. But I still don't agree with "never mock". Tried it, and the codebase accumulated so many bugs over time due to poor coverage.
Mocks are tricky for sure. Just a few days ago I ran into a set of tests with so much mocking going on, I couldn't figure out if there was any production code left that was effectively tested. Despite all that, I still think they have many uses. You just need to know when to stop.
the irony of listening to this while actively writing mocks and excessively unit testing
just today i wrote a single test with three mocks in it...
The things we do to try staying sane.
@@Kalasklister1337 No idea what's the problem with that, if the shit is not in your unit under test it should be mocked. Otherwise, you're testing other people's code and making your own tests flaky since they no longer test just your implementation but also other people's implementations on top
When I write mocks, I do it in Rust.
@@liquidsnake6879 yeah, there is no problem with it; people should remember that this isn't the word of god
These blogs always pick an easy example. Most validation also requires examining the database. A common one would be, in this scenario, a requirement where user email address needs to be unique.
That’s a good point
This also somehow assumes you will never forget to call that validate function before putting it in the database -- solve one problem (testability) while creating another (reliability).
yeah well. Just put a list of all users in as a parameter. There, problem solved. Skill issue
/s
@stevemartin2981 How would a Factory help with validation that depends on context stored in the database?
@stevemartin2981 That doesn't answer the question, now does it. You can make the fanciest factory pattern, but without using the database you cannot enforce constraints that require data stored in the database.
5:41 well that works until you hit a compatibility issue, because not all features are ported to SQLite in the exact same way PG does them.
You mean, any non-absolutely-trivial thing? I completely agree with you.
I've mostly worked with Oracle, but have used PSQL a lot too, and things that I know are different in Oracle and SQLite are:
- Text vs varchar2; clob and char have different semantics, and Oracle infamously treats "" as null. It's kinda important that your program picks up on this...
- integer types are different, and Oracle has a NUMBER type that includes support for decimal numbers, which you don't want truncated to ints or screwed up into 64-bit longs.
- date handling is pretty different; Oracle supports Julian days, ISO strings, and Unix time, but every DBA I've ever known has broken out in a sweat if you don't specify the date format in Oracle first. Why wouldn't you document your intention?
- floats are 126 bits in Oracle, 64 in SQLite
And that's just the basic datatypes. SQLite is cool, I have nothing against it in any way, but it's not a great way to test against your database. Use the free version of the database for that. Oracle, Microsoft, even PostgreSQL offer free versions of their databases that are really useful as testing tools.
(Yes, the psql mentioned was a joke. I know it's all free.)
I find a lot of mocking comes from arbitrary code coverage percentages (80%-100% unit test code coverage). Also seen with: "unit test confirms exact log output", and "unit test getter/setter/properties".
My favorite part of arbitrary code coverage percentages is when you add something small with a test but still don't have enough coverage. Naturally you add a test to some code you didn't write, using doc tags and comments to guide you. At this point you find out the code or documentation has always been bugged, and you just make the test match the current output. Now the tests make fixing the bug a regression. And all I wanted to do was add a feature that scratches an itch I have, or upstream patches I'm sitting on.
I was going to say this as well: mocks are needed when unit tests are expected to cover 80+% of your code by company policies. :)
I wouldn't say they test nothing like the article though: you can still do some checks on inputs, control return values and errors... which can be useful to some extent.
@@Jeryagor Covering 80+% of the code is just crazy to begin with. Particularly if you're using an ORM. Your database queries should be automatically tested, which means only the logic parts (the fiddling) need testing that, lo and behold, shouldn't need mocks if written well.
@@CottidaeSEA You still have to somehow test if your ORM returns the things you expect depending on the ORM, e.g. Laravel's Eloquent can also be used as a Query Builder and so you might want to ensure that the correct data gets returned...
If people are mocking their own code, then they're doing it wrong, simple.
Mocks are for things outside your unit-under-test exclusively, everything within your code should be tested and anything outside it should be mocked.
Even if it's a database query, you might not test the DB query itself (your ORM handles this), but you definitely want to make sure your ORM command is requesting the right data. So you'd mock the ORM itself and check that people are making the right query to it and including all the things you expect
No pgcrypt, client-side timestamps and hashing, no RETURNING id, no error handling (duplicate email? or a transient error that needs a retry?), no prepared statements, timeouts, or stats collection. We don't even look at the response. So what do we do? Extract two of the four non-db-related steps into a separate function, of course! Now we're cooking on gas! No need to run expensive integration tests to test those 2 ifs! The main problem of the project has been solved.
For testing library-type code (e.g. data structures) in dynamic languages, mocks can be useful for verifying that your fancy data structure isn't accidentally doing something untoward with the objects you're entrusting to it -- you can verify that only functions that are part of the interface that algorithm requires are called, and that things that are part of only _most_ objects (e.g. random toString logging calls) aren't used.
I know you said dynamic languages, but this all seems like non-problems that the compiler should enforce
I highly recommend reading "Unit Testing Principles, Practices, and Patterns" by Vladimir Khorikov. It's probably one of the best software design books.
Thank you!
Yep, read that one, it's really good.
Bwhahaha. I second that. For years I've been recommending this (Khorikov) book at the first opportunity to anyone getting started with unit testing. (And then Kent Beck's TDD book as a second, more practical read, to get a feel for how long you can let yourself go between pieces of written, working code.)
There is a valid use case for mocked APIs: One time I was writing a shipment tracking service; it would connect to the APIs of various shipping providers, check the status of packages, and trigger notifications to users' devices. While some of the shipping companies provided testing servers, those servers didn't update the package status, since there was no actual package and no real delivery process. So I couldn't use the actual API even if I wanted to. Instead I created a series of responses and stored them in a bunch of files. Then I served these files iteratively. A big benefit is that your tests can run really fast without the network latency. But you have to maintain the mocked responses, which is kind of a bummer.
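A minimal sketch of that "serve stored responses iteratively" idea, using Go's httptest from the standard library; the endpoint path and response bodies here are made up:

package tracking_test

import (
    "io"
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestStatusProgression(t *testing.T) {
    // Canned carrier responses, served in order, one per request.
    responses := []string{
        `{"status":"picked_up"}`,
        `{"status":"in_transit"}`,
        `{"status":"delivered"}`,
    }
    i := 0
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(responses[i]))
        if i < len(responses)-1 {
            i++
        }
    }))
    defer srv.Close()

    // The client under test would be pointed at srv.URL instead of the real carrier API.
    resp, err := http.Get(srv.URL + "/track/123")
    if err != nil {
        t.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if string(body) != `{"status":"picked_up"}` {
        t.Fatalf("unexpected body: %s", body)
    }
}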
Perfectly valid - there are all sorts of scenarios like this where you (a) cannot call the service because it would trigger a real event, or (b) cannot call the service because it simply won't act the way you need for the test. As you mentioned, you keep the response files and serve those up - a common pattern, and there are frameworks that do exactly this... WireMock for example. Is this a unit test or an integration test? Does it matter? I feel we waste too much time classifying these types of things when all you're trying to do is validate the system.
Normally those kinds of services have sandbox modes where you can test your code against the real backend. If they don't, maybe it's a good idea to look for an alternative.
@@adriankal Did you read my comment? A few of them had a sandbox, but they didn't bother to simulate someone scanning the package's barcode at various locations, so it was insufficient for the tests I had to run. And I couldn't just say oh well, I guess I won't use DHL or UPS. I also had to deal with much smaller shipping companies that didn't even have a test database. I would have had to test those with real tracking numbers.
So contract testing. That's pretty much the only reason to mock. APIs should be contract tested at a minimum, unit tested if it's your API
Making your tests depend on an API that you do not own is a pretty bad idea, and imo I would take mocks over direct API calls in this situation any day. Especially if you run those tests in CI/CD pipelines.
Pretty interesting that you've mentioned shipping providers btw; the company where I work also deals with those, and mocks (such as the gomock or httptest libraries) are actually invaluable for our needs. There is no other way to ensure that your code is not going to break and your tests are not flaky.
Also sandbox APIs that shipment providers have usually either don't work half of the time or just break quite often.
The first time I heard about JavaScript on the back end, I laughed thinking it *was a joke*... I don't regret how hard I laughed
Right? I absolutely agree it would be awesome to use the same language throughout the stack... but who thought JAVASCRIPT should be that language???
“… fixed TypeScript’s biggest problem” is the summoning ritual for Matt
Exactly what i thought 😂 If you say you've fixed a language's biggest problem...then you're Matt Pocock
What’s up wizards?
you gotta appreciate the Ryan George "super easy, barely an inconvenience" quote
I love Go's named returns. Keep in mind you absolutely don't have to use them naked. They are extremely useful as quick bits of documentation if you have helpful names for the return variables, especially if you are returning more than one variable of the same type.
They also really help with reducing complexity because they guarantee those variables will be declared at the very top of scope, which can really help with refactoring and reasoning about the function. Kind of like defer guaranteeing stuff happens before exit, it makes it easier to write and refactor code while maintaining guarantees that your declarations haven't been moved around and caused some insidious bug.
Even naked returns with stuttering names like "err error" can be very useful in private implementation code and should be used freely there. The only caveat with naked returns is don't add stuttering named return values in public methods: your public methods are primarily meant to be understood by the people using them, and stuff like (string string, err error) adds needless complexity to reading the function signature.
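A quick sketch of that style with a hypothetical function: the named results document what comes back, while the final return stays explicit rather than naked.

package stats

import "errors"

// minMax uses named returns purely as documentation; note the
// explicit (non-naked) return at the end.
func minMax(xs []int) (min, max int, err error) {
    if len(xs) == 0 {
        return 0, 0, errors.New("empty slice")
    }
    min, max = xs[0], xs[0]
    for _, x := range xs[1:] {
        if x < min {
            min = x
        }
        if x > max {
            max = x
        }
    }
    return min, max, nil
}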
so which is more idiomatic go? is it both?
I do use them while being naked tbh. In fact, most languages in general I program while naked
Great advice if all your functions are independent. That might be the case for 1% of all written code.
LOVE TO SEE IT. Thanks for the read Prime
You have excellent articles bro!! :D
@@SkillTrailMalefiahs yoooo thanks
I'm a desktop developer and disagree with this. Testing a desktop application without mocks is a nightmare hellscape from which there is no return. I understand that backend is different, but there are many more dependencies in this world than the db. What about calling other APIs, report generation, interfacing with other applications, or any of the many scenarios that would break a unit test without a mock?
Backend unit testing without mocking is hell.
Hey Prime, not usually a commenter but I think this hate towards mocking is misleading to newer devs. Overall love the advice on code structure. Would really appreciate a response to the below :)
In the example in the article there is a critical piece missing about how to glue things together. This is a transitive problem at each level of abstraction (normally there are several of these in any large code base), and these glue layers often contain small bits of logic themselves. Also, not all dependencies are deterministic (e.g. HTTP timeouts, db transaction contention, etc.) and it can be useful to model these situations to verify your code behaves as expected (e.g. propagating the right errors or retrying a certain way)
For example, consider a common API handler function:
1. Stateless request validation (pure function)
2. Fetch some data (function with network dependency)
3. Stateful validation (pure function)
4. Example Twiddle: Some type mappings between request domain and data domain (pure function)
5. Store some data (function with network dependency)
6. Map stuff to a public facing response (pure function)
This "glue function" has logic as to how it fulfils these steps and deals with potential failures (i.e. if we fail a validation step, further functions should not be called). When it comes to testing this, I want to test the wiring logic WITHOUT rehashing the behaviors of each of the dependent functions. I could do this by "injecting" my various dependent functions as well typed parameters and then "mock" the behaviors such that I can ensure certain branches get hit. This is all decoupled from the implementations; I can change the validation logic and it doesn't change the test. Call it mocks and DI or call it function composition, its all the same at the end of the day.
You might argue all the glue branches get covered by integ tests, but this is almost never true at scale. These UTs are so easy to write too and in my experience have huge ROI for mature code bases.
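A minimal Go sketch of that glue-function idea, with everything hypothetical: the steps are injected as plain function parameters, so a test can force any branch without a mocking library.

package handler

import (
    "context"
    "errors"
    "testing"
)

type Request struct{ UserID string }
type Record struct{ Name string }
type Response struct{ Greeting string }

// Handle is the glue: it wires the steps together and decides what
// happens on failure. Dependencies come in as function values.
func Handle(
    ctx context.Context,
    req Request,
    validate func(Request) error,
    fetch func(context.Context, string) (Record, error),
    store func(context.Context, Record) error,
) (Response, error) {
    if err := validate(req); err != nil {
        return Response{}, err // later steps must not run
    }
    rec, err := fetch(ctx, req.UserID)
    if err != nil {
        return Response{}, err
    }
    if err := store(ctx, rec); err != nil {
        return Response{}, err
    }
    return Response{Greeting: "hello " + rec.Name}, nil
}

// The wiring test: a failing validation must short-circuit the pipeline.
func TestValidationFailureStopsPipeline(t *testing.T) {
    fetched := false
    _, err := Handle(context.Background(), Request{},
        func(Request) error { return errors.New("bad request") },
        func(context.Context, string) (Record, error) { fetched = true; return Record{}, nil },
        func(context.Context, Record) error { return nil },
    )
    if err == nil || fetched {
        t.Fatal("expected validation to fail before fetch ran")
    }
}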
was thinking the same thing, I fully agree on separating "stateful" and pure logic, but testing the separated stateful logic is still worth it imo
"3. Stateful validation" --- This is where you messed up. Parse, don't validate. Especially if you're doing HTTP requests to third-party services on the server side for some reason. This makes your "wiring" completely deterministic and testable.
"5. Store some data" --- Testing this is pointless; if a third-party dependency fails, there's nothing you can do about that. The best approach is to branch the computation into two deterministic functions and test them both in isolation.
@@andrueanderson8637 You have a different understanding of "parse" and "validate" than I do. Could you explain the difference?
>Especially if you're doing HTTP requests to third-party services on the server side for some reason
Yes I do; how would you otherwise save data in the user's country first to comply with GDPR? Do everything asynchronously (obviously a bad idea)? What if you're using external secret managers or anything else that is complex and that you don't want to write your own code for?
>if a third-party dependency fails there's nothing you can do about that
Yes you can. You can fall back to an alternative mechanism on failure (instead of logging to a monitoring service you write to a file), you might retry, or anything else. Most of the time those are just unimportant details, but in some cases they're important. Things are not always deterministic unless you live in Haskell land.
>branch the computation into two deterministic functions and test them both in isolation
How would you do this when saving to a database? Anything that deals with the real world won't be deterministic, unless by deterministic you mean "I know what happens if saving fails and I know what happens when saving works, and both cases are tested"
totally agree with this; you want to test things without relying on your dependencies, which in integration envs are often broken. Of course you want integration tests, but that doesn't mean you don't want functional tests that test your service's functionality with all your deps mocked. The example in the article is too trivial; even if you can relate for simple unit tests, it doesn't work when layers of abstraction start to grow. And btw it's 100% better (for dev confidence) to have functional tests that work 100% of the time with mocks than integration tests that work 85% of the time because of network/database/other-service issues independent of your service.
Yeah, I don't get the hate for DI & mocking. In my experience a sufficiently large codebase will have things with tons of dependencies. The only feasible way to test them is using DI & mocks. Mocking is akin to doing an experiment where we assume everything else is correct & test with that assumption in mind.
100% agree! Mocks are a great way to fk up your test suite. Where I work, my boss mocked all his internals and taught the rest of the engineering team to do it. I never gave in. After he left I was finally able to convince the team to stop doing it. Your tests should describe a narrative of how the code behaves, not how it's implemented.
How would you approach testing a function with multiple dependencies, where the code execution takes different paths based on various conditionals? The function's behavior might change if, for example, a validation step fails or an HTTP request encounters an error, so some further code may not be executed while other code still will. I want to ensure comprehensive testing to cover all possible scenarios. Given the complexity and potential branching, how would you design tests to verify that the function behaves correctly in different conditions? I don't think it's possible to achieve this without relying on mocking
Hello Prime,
I don't know if you are ever going to read this, but as someone who just finished two years of training for this job in Germany and is trying to find her way through all this content, you are a great help. As a trainee I didn't learn much programming; I was basically just prepared for the exams, and that is about it. Your content helps me.
If you or anyone here has any advice on what to learn first and what I need to know, it would be sooooo helpful
Kinda same here. Did you find something?
Start writing small applications for yourself/to learn/to get projects done, would be my idea...
Yo dawg, we heard you like mocks....
...so we mocked your mocks.
@@Thomas_Loso you can mock your mocks while you are mocking your mocks.
I was taught mocking in school, it made a lot of sense to me. You want to decouple the code in tests. If module A depends on module B, you unit test B, then mock B for A's unit tests. That way, if a dummy breaks B, only B's tests fail and it's way easier to find the bug than if all tests fail together.
Also, mocking is a necessity in Test Driven Development. Not saying whether TDD is a good idea or not, I really don't know. It's a good exercise though.
Yep - Fine with mocks - particularly on complex systems - yes they can get complex.
it depends; if your assert at the end is "function X was called exactly 2 times with the following parameters", then it doesn't bring that much value. I prefer to mix in a bit of functional programming - all the "logic", like validation, actually modifying the data etc., lives in public static functions (non-async if your language supports async/await). Async means you are doing something - a db call, network call etc.
You unit test the pure functions, without any mocks. Then you integration test the big async functions that call the db (TestContainers are awesome for that), and only check whether the output matches the expectations for the input - nothing else. That way you know it works, and you can easily change the implementation while the unit tests keep the internal contracts intact.
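A tiny sketch of the pure half of that split (hypothetical discount logic; production function and test shown together for brevity):

package pricing

import "testing"

// ApplyDiscount is pure "fiddling" logic: no I/O, so it needs no mocks.
func ApplyDiscount(total, loyaltyYears int) int {
    switch {
    case loyaltyYears >= 5:
        return total * 80 / 100
    case loyaltyYears >= 1:
        return total * 95 / 100
    default:
        return total
    }
}

func TestApplyDiscount(t *testing.T) {
    cases := []struct{ total, years, want int }{
        {100, 0, 100},
        {100, 1, 95},
        {100, 5, 80},
    }
    for _, c := range cases {
        if got := ApplyDiscount(c.total, c.years); got != c.want {
            t.Errorf("ApplyDiscount(%d, %d) = %d, want %d", c.total, c.years, got, c.want)
        }
    }
}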
@@Qrzychu92 There is (like everything) some validity in asserting that a method on a mocked property is called exactly X times... not all the time, but sometimes.
If "B" breaks, that should have been caught by B's tests before it was even submitted. If it wasn't caught, it's good your tests are catching it and now blocking your release of a broken product.
If you have to use TDD, just write an integration test rather than a unit test. I don't understand why TDD advocates don't seem to think that's a good idea, or don't advocate for it over unit tests where it is useful.
Many great points from you, the article, and many comments, but I don't think mocks are a problem *if* you prefer, as I do, to test only public interfaces. Exercising those will in some cases call outside the library, which shouldn't happen in a unit test. Mocking should be done only when necessary, but sometimes it is necessary.
I wanted to skip the video when I heard "never mock", but didn't... and in the end the final thought was not to use mocks for integration tests. And here I 100% agree with you. But I do think that a mock is a powerful tool for state testing. And to be honest you need to test the contracts and the states. If both are tested you don't even need integration tests.
Shout out to the author for using "Super easy, barely an inconvenience" 😂 Pitch Meeting has reached the far reaches of the internet 🙏🏿
as someone who uses C# professionally, why the fuck does someone want exceptions and try/catch in Go? errors as values are far superior to the debugger jumping into a catch block at the slightest hint of trouble
That works till you have one or two errors. When you have to handle 5+ probable errors your code becomes ugly.
@@banatibor83 but is it better to try to handle 5+ probable errors that you don't even know about?
@@AJ213Probablyit's better to have the best of both. Options for the win :)
@@angelocarantino4803 Yeah basically. I am heavily using Rust nowadays and Options are great. Though in the context of Godot Rust I need to use expect() instead of unwrap() since the debugging can be pretty bad there. I have learned my lesson and will now use expect instead lol
4:05 One doesn't simply insert Screen Rant memes into dev content.
Bootdev: Actually it's super easy. Barely an inconvenience.
Im gonna need you to get alll the way off his back
OH REALLY
It's definitely tight
@@sutirk ok well let me climb off that thing
yeah yeah yeah
Remember the old adage, “Use the right tool for the job.” Mocks are just another tool in the development and testing toolbox.
The bigger problem in the examples was the poor design of the system, coupling data validation to the save function.
Here's how I prefer to test my backend handles:
1) I use Docker (specifically docker-compose) to run an ephemeral database just for tests.
2) I create the database and run migrations at the start of the tests, and then destroy the container when they finish.
3) I test HTTP requests to a handler, which allows me to refactor the inner logic of the requests later.
It's not exactly unit testing, but it works fine, especially in the microservice world.
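A rough Go sketch of what step 2 can look like, assuming docker-compose has already started the database and a TEST_DB_DSN environment variable points at it (both assumptions; the CREATE TABLE stands in for real migrations):

package store_test

import (
    "database/sql"
    "log"
    "os"
    "testing"

    _ "github.com/lib/pq" // assumed Postgres driver; any driver works
)

var testDB *sql.DB

func TestMain(m *testing.M) {
    dsn := os.Getenv("TEST_DB_DSN")
    if dsn == "" {
        log.Fatal("TEST_DB_DSN not set; start the compose test-db first")
    }
    var err error
    testDB, err = sql.Open("postgres", dsn)
    if err != nil {
        log.Fatal(err)
    }
    // Stand-in for running the real migrations.
    if _, err := testDB.Exec(`CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, email TEXT UNIQUE)`); err != nil {
        log.Fatal(err)
    }
    code := m.Run()
    testDB.Exec(`DROP TABLE IF EXISTS users`) // cleanup; the container is destroyed afterwards anyway
    os.Exit(code)
}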
@pycz I think this approach is pretty standard in e.g. Django ecosystem, because there are a lot of places using a database (Models etc.). That line between integration and unit-testing is very thin in such cases.
@@jarosawsmiejczak1138 Rails takes a similar approach iirc
testcontainers is great for this case
That's just a normal integration test. Unit tests test units of code. Integration tests test code that needs external units (a database) that you cannot test yourself (the database engine)
@@QckSGaming I've realized in recent years that most people in this industry don't even understand the basic difference between a unit test and an integration test.
The one area where I like to use mocks is to put the code in an unexpected state and see if it handles the error and gives a reasonable error message. If the program is logging errors, then I will mock out the logger to catch the error directly and compare it to what is expected, rather than actually write the error to the log. The other area is when I have to fix legacy code and I need to mock out the stuff that is irrelevant or that creates expensive connections. Sometimes you can break the code out into a separate function and not have to mock it, but I have run into cases where that's not possible without a major refactoring of the code.
as an aspiring backend dev, I am so happy i understand this
Check your unit tests privilege! Some of us are happy if there are any automated tests at all.
:D
Love the Ryan George "Super easy, barely an inconvenience".
When you said "never mocker" I pressed the subscription button and I have just around 10 subscribed channels after using youtube more than 10 years.
I think mocks are useful when your functions make calls to third party things. Like for example if you have a route that does some stuff and makes a call to an email client. You have to mock the email client response because you shouldn’t actually be calling a third party service in your unit tests.
ideally you shouldn't be calling random third party things directly in the backend call; these things can be scheduled with a message queue, and then the test only needs to check that an event for the email notification is in the test queue
@@samanthaqiu3416 not always. There are lots of scenarios where you need to call third party services because you need data from them that is later used in your func. You can't put that in a queue.
I guess in that case shouldn't the data manipulation be separated out into its own functions, so that the main function is:
1) pre-process data logic
2) third party call
3) post-process logic
That way you can actually unit test 1 and 3, and save running the full function against a test environment for the integration tests
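Sketching that split in Go (all names hypothetical): steps 1 and 3 are pure and directly unit-testable, and only the thin glue touches the third party, behind a one-method interface.

package signup

import "context"

// EmailSender is the third-party boundary, reduced to one method so a
// test can substitute a trivial fake.
type EmailSender interface {
    Send(ctx context.Context, to, subject, body string) error
}

// Step 1: pure pre-processing.
func buildWelcomeBody(name string) string {
    return "Welcome, " + name + "!"
}

// Step 3: pure post-processing.
func receiptFromErr(err error) string {
    if err != nil {
        return "queued for retry"
    }
    return "sent"
}

// Step 2 lives only in this thin glue function.
func SendWelcome(ctx context.Context, s EmailSender, name, addr string) string {
    err := s.Send(ctx, addr, "Welcome", buildWelcomeBody(name))
    return receiptFromErr(err)
}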
A more accurate statement would be you shouldn’t mock in unit tests
@@ashvinnihalani8821 Sure, but if you need to test the whole flow of the main function then you would need to use a mock.
I only mock things that make http requests to external services like Google services. 99% of the time you don't need a real response from Google to test. Everything else is filled with fake/incorrect/empty data, with a real database.
That is the only time when you have to mock in a well designed system, when you need to fake responses from outside your code.
That would be the one thing that I wouldn't mock, to be sure that the interface did not change from their side and to keep up to date with them
@@alanonym8972 if their APIs aren't versioned or they make a breaking change without announcing it months prior, your production will crash regardless of your tests. So there's no reason to test 3rd-party services.
@@spicynoodle7419 I mean it does not necessarily crash, it might just not work as intended. It is also not necessarily a bug that will be detected immediately by your users, or maybe they won't report it for some time. I have spotted a lot of bugs before my users did (or at least before they decided to report them).
I feel like it is much more important to contract test things that are likely to change rather than the file system for example
@@alanonym8972 you should use something like Sentry to report exceptions, no need to rely on users.
Keep in mind that while using SQLite for testing is great, it does not share all features with Postgres.
So then you have integration tests which use a different database than your production environment.
I would sleep better at night if the testing DB engine were the same as the production one :)
Docker solves this. Most languages have some integration with test containers, if not, it’s straightforward to roll your own.
The lesson here is not that mocking is bad. The lesson is don't mix high level policy code with infrastructure code. High level policy code pairs well with unit testing, dependency inversion, and fakes of various kinds, including mocks when appropriate. Infrastructure code pairs well with integration testing and using real, as close to production as possible, collaborators.
Someone claiming to have "fixed" how any language does its error handling is hilarious.
The hoops some people invent and then jump through just so they can use something they are more familiar with never fails to amaze.
I had to work with Mocha/JavaScript unit tests once, and they connected to and manipulated a MongoDB.
The DB setup and cleanup after every test made up at least 75% of the test code.
It would have been a lot faster to just set up a local docker container and do a quick reset before each test run (it's like 2 bash commands to do so).
But nooooo, none of the other devs knew how to use docker, so it was not allowed.
yikes
this is why I love sqlite: you can use a file for testing, which means you can just cp a test file that has the perfect database setup and use that for integration / e2e tests.
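That trick is only a few lines in Go (a sketch; assumes the mattn/go-sqlite3 driver and a seeded testdata/template.db, both assumptions):

package e2e_test

import (
    "database/sql"
    "os"
    "path/filepath"
    "testing"

    _ "github.com/mattn/go-sqlite3"
)

// openTestDB copies the pre-seeded template so every test starts from
// the same state and can never corrupt the template itself.
func openTestDB(t *testing.T) *sql.DB {
    t.Helper()
    src, err := os.ReadFile("testdata/template.db")
    if err != nil {
        t.Fatal(err)
    }
    path := filepath.Join(t.TempDir(), "test.db")
    if err := os.WriteFile(path, src, 0o600); err != nil {
        t.Fatal(err)
    }
    db, err := sql.Open("sqlite3", path)
    if err != nil {
        t.Fatal(err)
    }
    t.Cleanup(func() { db.Close() })
    return db
}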
that is my fav :)
How do you now test that the validation is actually performed before saving anything to the database without using a real database or mocks?
Tests don't test current code. Tests test future changes to your code. Your tests are bad if your mocks are able to hide future code changes.
However, you can use them to also assure that future code changes don't break the expectations of current code.
The main problem is that people just don't realise what exactly they can test with different kinds of tests.
It's damn useful to have unit tests on sql queries using mocks (sql.Mock) to test marshalling and unmarshalling of structures, to test passing parameters in queries.
It's just hilarious how many silly bugs, panics, etc. there can be at these levels.
But that only tells you that the data flow in your code seems to be correct.
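For reference, this kind of test with github.com/DATA-DOG/go-sqlmock (presumably the sort of thing "sql.Mock" refers to; the query and column names here are invented) looks roughly like:

package repo_test

import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
)

func TestGetEmailScansRow(t *testing.T) {
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatal(err)
    }
    defer db.Close()

    // Expect the query and feed back a canned row to exercise the scan.
    rows := sqlmock.NewRows([]string{"email"}).AddRow("a@example.com")
    mock.ExpectQuery("SELECT email FROM users").WithArgs(int64(1)).WillReturnRows(rows)

    var email string
    if err := db.QueryRow("SELECT email FROM users WHERE id = $1", int64(1)).Scan(&email); err != nil {
        t.Fatal(err)
    }
    if email != "a@example.com" {
        t.Fatalf("got %q", email)
    }
    if err := mock.ExpectationsWereMet(); err != nil {
        t.Error(err)
    }
}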
Apply encapsulation and be fine.
I like to write in-memory repositories to "mock" the database, but I actually use them in "prod" as well when it's only a prototype. The key idea is that I can run a bazillion tests within a second or three, but if I want to test against a real database - also no problem, it just takes a little longer. For APIs my team also wrote an in-memory service to pretend it's the real service. Usually these in-memory things are enough, but several times the real database and real API uncovered some bugs. We run all of the tests against the real things before merging to master, but the in-memory ones are used in development and give us instant feedback. That's the compromise, I believe.
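A minimal Go sketch of such an in-memory repository (hypothetical interface and types), sharing one interface with the real SQL-backed implementation:

package memory

import (
    "errors"
    "sync"
)

type User struct {
    ID    int
    Email string
}

// UserRepo is implemented by both the SQL repository and this one.
type UserRepo interface {
    Save(u User) error
    ByEmail(email string) (User, bool)
}

type InMemoryUserRepo struct {
    mu    sync.Mutex
    users map[string]User
}

func NewInMemoryUserRepo() *InMemoryUserRepo {
    return &InMemoryUserRepo{users: map[string]User{}}
}

func (r *InMemoryUserRepo) Save(u User) error {
    r.mu.Lock()
    defer r.mu.Unlock()
    if _, ok := r.users[u.Email]; ok {
        // Mirrors the real repository's unique-email constraint.
        return errors.New("email already taken")
    }
    r.users[u.Email] = u
    return nil
}

func (r *InMemoryUserRepo) ByEmail(email string) (User, bool) {
    r.mu.Lock()
    defer r.mu.Unlock()
    u, ok := r.users[email]
    return u, ok
}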
1:36 minutes into the video, am already very much entertained. I love this man :X (Disclaimer: In a very platonic way)
90% of our test code is integration tests, and CI has to install MariaDB in the container. The main reason is that the code is ass, but the second is that there's tons of MariaDB-specific code that would simply fail on Sqlite or H2 and wouldn't be tested at all if we used mocks for everything.
Beeceptor has been a godsend for mocking, debugging endpoints and timeout problems, testing adjustments pre/post production, CORS support, customized dynamic responses, and mock serving for OAS specs. It's fantastic
I would say there is one case where mocking is both trivial and definitely should be done: In Rust when parsing messages from some kind of data stream, when it is written correctly, it is easy to just get the test data and put it into a Cursor. That way you can just unit test the logic without having to do anything complicated.
I love mocks in unit testing. if you use it properly you can test how your code should behave when different values are returned from mocked dependencies.
I find the "abuse" of mocks an issue. But I do see scenarios where you simply don't have the real thing to test or you do have an unstable environment with no valid test cases for you to check contour scenarios.
I was one of the guys who worked with a code coverage target. On one project I even got to 100% code coverage. I still look at code coverage, to see "hey, there is this scenario that we totally missed in our tests". And there are situations where what we need is a failure, which can be difficult to produce in a test.
Such as when the code itself is a wrapper around a real database and is supposed to handle different database responses, or a production webservice that we cannot really mess with, or the customer doesn't have a stable UAT environment, or we lack access to that environment at some point. In that case, either we create a mock server or we stop development totally. This is where mocks come in handy.
In the video itself, there is a mention of "using an SQLite database". Is it the real production database? No, right? I do consider that a mock.
I work on developing a framework that other people use to build real-life integrations. I don't have nor want access to the customer's database or services, nor do I want to test them. But I do want to receive a json payload, or an invalid json payload, or sometimes some issue with the payload content, to test how my framework reacts to that situation. For these there is no way around it: I need a mock server.
In short: I think mocks have a niche use. At least at some steps of development.
@@d_6963 Exactly. When the feature to be tested is just how to handle a database response from a customer who has that mainframe-based Oracle database, is paying millions for it, and has sensitive personal data inside it, we can't just say, "Hey, I need to introduce an outage in that database of yours, so I can make a test. Short thing, like, we need a downtime of 2 minutes, some 20 times per day. Is that ok?" 😁
You could argue the stub vs mock difference, but I think that would be splitting hairs.
I agree though, that most of the hate for things comes from weird management rules.
@@HrHaakon I think it's just a bunch of misnomers, but if you describe the behavior of the test double, the context makes it easy to find out whether it's a stub or a fully fledged mock object.
This is not a great take.
tl;dr version; Mocks are good. The way people abuse mocking libraries is bad.
What Prime is recommending "instead of mocking" is literally mocking. By the dictionary definition. And also by what you will be taught in any book about mocking, or from any of the people who developed these techniques.
What we have here is what we pretty much always get in this industry: another programming cargo cult. People claiming to be using techniques they neither know nor understand, and the thing gets equated with whatever horrible nonsense people are doing just because they falsely claim to be doing the thing.
A mock is supposed to be an input that you can run your code on which satisfies the interface you're programming to in a very simple/degenerate way.
You absolutely should program to simple interfaces. That is the way to do testing and also to do design.
The reason "mocking" sucks is because people have developed extensive "mocking libraries" which allow them to quickly and automatically produce objects that satisfy interfaces *without reducing the complexity of the thing being mocked*. The mocks have extensive, complicated behavior that makes the tests hard to reason about. Because the programmers don't redesign their code to simple and elegant interfaces, they instead leave the code complex and reach for some giant mocking tool so they can carefully orchestrate their rube-goldberg test.
Whether he realizes it or would use the same terms or not, what Prime was recommending about separating the fiddling bit from the bit that fetches the stuff to fiddle *is in fact the essence* of unit testing and mocks in the actual definition of these terms.
He's saying instead of writing input to the whole program, like a filename or an id for a row in a database, instead you put an interface between the fetching and the fiddling, and then you test the fiddling code with *a test thing substituted in for what in production will be coming out of the database*. Since "a-test-thing-substituted-in-for-what-in-production-will-be-coming-out-of-the-database" is very long and cumbersome to say, we introduce the word "mock" and define it to refer to the things you fiddle on in tests.
Mocks are good. Mocking libraries are bad. If you have to reach to a mocking library to write a test, simplify your interfaces so you can write the mock by hand simply.
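What "write the mock by hand" can look like in Go, with an invented one-method interface: the double is a handful of lines, and the test can see exactly what the code under test did with it.

package notify

// Sender is the narrow interface production code programs against.
type Sender interface {
    Send(msg string) error
}

// fakeSender is a hand-written double: it records calls and returns a
// preset error, and that is all the behavior it has.
type fakeSender struct {
    sent []string
    err  error
}

func (f *fakeSender) Send(msg string) error {
    f.sent = append(f.sent, msg)
    return f.err
}

// In a test: f := &fakeSender{}; hand f to the code under test as a
// Sender; then assert on f.sent.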
100% agree with you. On this one I very much disagree with Prime's take.
Was a long scroll to find exactly what I was thinking. Nowhere in the video was mocking proven to be bad. Instead I saw a weird tangent about how people don’t understand features.
The SQLite example is an advanced form of mocking in my mind.
Taking some library that bruteforces a mock is ridiculous, people do that..?
Prefer (in-memory) fakes over mocks any day. You should invest in a db fake, and then you can test the write and the read (100 percent of the backend) in unit tests like this: write op 1, write op 2, write op 3, read op 1, assert output. Of course, this is in addition to the advice peddled here to break the fiddling out into isolated functions to test.
Oh, and I would also add that many devs do not know how to responsibly write int tests. A good integration test should leave no trace, or at least try to leave no trace it was there, and not cost you tons of money by connecting to a third-party API or hammering a database when you don't need to. Unit tests at the base of the pyramid help ensure the integration tests on top are minimized and as unintrusive as possible.
Or am I crazy in thinking you can have unit tests and int tests, with mocks and without, and be perfectly fine?
In agreement on mocks. However, my employer requires us to have 80% "code coverage". It does not matter to them that it's type-strict and compiled. Since the "fiddly" bits are not 80% of the code base, I am either forced to mock, or to add a file with an unused function that does nothing but exceed the length of the rest of the project, so I can fulfill an arbitrary metric for management.
I am also in the camp of "unit tests are also still code, equally prone to errors, and often more lines than the actual code they are testing".
this second idea is a really good idea
>It does not matter to them that it's type strict and compiled
Well obviously, why would it? Just cuz the app is type-safe doesn't mean it's behaving correctly.
By that logic, all apps written in java are bug-free : )
@gentooman When your objective is "code coverage", behavior validation and logic-error checks need not apply. If it compiles we know it is free of syntax errors, which, with the exception of some logic errors, means the code will execute. You might inadvertently find logic or behavioral errors in the process of achieving higher code coverage, but that's no guarantee. I'm not against unit testing, but I think it offers significantly less value for compiled, type-strict languages. Tests are also code, and if you mess up the implementation you can't expect your behavior-checking code to somehow be less error-prone. Even when checking inputs and outputs there are logic errors you may not catch.
@@LordOfElm You sound just like every other junior developer who hates writing tests.
@@gentooman I'm an architect and tests are usually a net negative. Especially if one is chasing arbitrary code coverage.
Why don't you test your unit tests? 😂
Imo, test writing is a team leader's/project manager's job. Replace design briefs with actual specifications in the form of well-designed tests. Would make it a lot easier to teach juniors too.
At a company I used to work for, we tested everything, even logs. Currently we only test functionality that means something, has purpose, and needs to return/receive/modify values in a certain way. Now, I am not sure the author is talking about mocks. A mock is there to remove the need for a dependency in classes that have one, while still preserving what a function inside it will do (you can write whatever you want as the functionality the mocked class provides). In a way, you can write a mock that behaves exactly the same as a class that you wrote unit tests for and know works properly.
Of course, testing code in its natural environment - some integration or interactive tests - can be done on top of that, but I usually find mocks and unit tests combined with a CI pipeline good enough to not face unforeseen issues (if the tests and mocks were written correctly). You should always do stuff like stress tests etc. if your application needs to handle certain traffic, and prepare for it beforehand.
I’m always trying to explain this to junior guys. “We don’t need to tea the c++ stl. You only need to be testing your logic.”
I love external service mocks in integration testing. I am using boto, localstack and docker compose to mock databases and aws service apis.
This way I can develop a Lambda function, ECS service or whatever locally and directly integrate it with surrounding services, with an iteration time of a few seconds. It is awesome as it speeds up development significantly, makes me confident that the code and infrastructure code will run fine in cloud environments (yes, you can run Terraform or AWS CDK against it too), and it is clear, working documentation of the API and how other services interact with it. It can even be used to test IAM policies locally without a single deploy to AWS!!!!!1111
I didn't think code coverage was problematic until I ran into a very "pleasant" and "necessary" test in a Django project. Basically, the framework has a reverse() function, which takes the name of a view and returns its URL as configured in the routing layer, and a resolve() function, which does exactly the opposite. The unit test took three screens "testing the urls", ensuring that resolve(reverse(x)) == x. I'm so glad the url configuration had 100% coverage; I don't know how I would've rested if that test wasn't present.
Putting code that fetches data into a separate function is a double-edged sword, because some people don't care about the underlying code, so instead of writing their own sql query that does a join, they use your "abstraction functions" and overfetch data af. Personally, I prefer always fetching in the function that handles the api request. All other functions that fiddle with data can be on their own, but once I'm passing a db connection to a function, something, somewhere went terribly wrong xd
And that’s why you don’t “over expose” your functions.
The only functions callable should be the functions that operate exactly as intended. i.e. don’t make every function public.
It’s okay to have your logical bits separated, and then create a public function that chains them together in the correct order.
This documents how it is supposed to be used, and prevents misuse.
One thing not mentioned: the "integrated" function returned both the errors for bad users/passwords and the errors for database problems in the same variable... This is criminal, but it's also the kind of code a try/catch pattern encourages. Think about what the code that handles this function's error looks like: (lifted from the article.)
func saveUser(db *sql.DB, user User) error {
    if user.EmailAddress == "" {
        return errors.New("user requires an email")
    }
    if len(user.Password) < 8 {
        return errors.New("user password requires at least 8 characters")
    }
    hashedPassword, err := hash(user.Password)
    if err != nil {
        return err
    }
    _, err = db.Exec(`
        INSERT INTO users (password, email_address, created)
        VALUES ($1, $2, $3);`,
        hashedPassword, user.EmailAddress, time.Now(),
    )
    return err
}
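One way to stop conflating the two failure kinds, sketched with a hypothetical ErrValidation sentinel (not from the article), so callers can pick between a 400 response and a retry:

package user

import (
    "database/sql"
    "errors"
    "fmt"
)

type User struct {
    EmailAddress string
    Password     string
}

// ErrValidation marks "bad input" errors, as opposed to infrastructure
// failures, which pass through unwrapped.
var ErrValidation = errors.New("validation failed")

func saveUser(db *sql.DB, user User) error {
    if user.EmailAddress == "" {
        return fmt.Errorf("%w: user requires an email", ErrValidation)
    }
    if len(user.Password) < 8 {
        return fmt.Errorf("%w: password requires at least 8 characters", ErrValidation)
    }
    // ... hash and INSERT as above ...
    return nil
}

// Caller: if errors.Is(err, ErrValidation) { respond 400 } else if err != nil { retry / 5xx }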
Depends on how much actual logic you have in there. In the case of an HTTP backend with a database, I usually skip the unit tests altogether and test externally against the API, which drops the need for logic separation. The main problem here is usually that you have external side effects which require context knowledge that you (should) not have in a unit test. It's just not worth writing specially testable code if the unit test will cover 5% of the possible errors and you have to write an external test anyway - one which, run against the API, can also test correct parameter processing, correct return values, correct permission checks and possible side effects (e.g. a missing WHERE clause in a DELETE statement). When you do mock, do it right: the mock must mimic the behaviour towards the caller exactly as the original object does. But in general I agree with you: avoid mocks when possible, they can be tricky to get right.
I love all of it, from the article and your reaction. 🙂
8:44 you don't specifically need an ORM, just an encapsulating function so you're not doing it raw - a function which generates the SQL.
I’m learning like totally 4 months in at this point and your videos are so advanced but still helpful. Just wanted to let you know that some of your viewers are totally ignorant of most of the shit you say 😂 but anyway I’m trying to understand Jasmine now
jasmine? damn, you are out of date :)
also, appreciate you
@@ThePrimeTimeagen I’m in the launchcode course they make us learn Jasmine.
I've run "unit tests" with in memory sqlite, it was super fast and easy and with much of the logic being in the SQL queries it added a lot of value.
I sometimes need to unit test my sql and Testcontainers is an excellent option to do that.
Plus, mocking external dependencies is important. Example: Calls into your cloud provider. Mocking AWS SQS and asserting the data sent to it is extremely important.
Internal dependencies? You need to have a good reason to have an interface and mock for an internal dependency. A real good reason because you usually won't find that reason.
Mocking internal code usually just reduces your code coverage and forces you to write more tests. Why?
I will say that property tests are far less code than manual unit tests, and general enough to cover everything from plain unit tests to mock-style checks.
The only time example-based testing makes sense is end-to-end, or if you have specific cases that need to be checked that are extremely hard to reach randomly.
If you are still a JS dev check out fast-check.
The problem is not even mocking in general (it's just a symptom), but the stupid ass way of writing tests for implementation details, instead of units of expected application behavior, and the absolutely insane obsession with microscopic testing that provides almost zero level of confidence that the actual expected application behavior is satisfied. With the cherry on top that after writing all those pointless idiotic unit tests you will never want to touch the (probably shitty) code and improve it ever again, because then all those tests written against the implementation details will break, even if the application behavior is implemented perfectly. And you can quite reasonably extrapolate this effect into the codebase at large... Leading to a frozen architecture that won't be able to evolve iteratively to follow the changes of the domain and the improvements of the team's understanding.
Great that you separated out your validation logic and wrote unit tests for the validation logic. But how do you validate that your http handler is actually calling validation logic at all? Surely not as part of your end-to-end tests?
But what if the "filddling" part requires querying the db for example? Maybe I need to validate something against some configuration or value not already in memory. Even if I'm not making external calls, I might call some other internal module which I then need to isolate from the system under test. Domain logic can still have dependencies which need to be mocked. What am I missing here?
If it's actually a backend that has to serve responses ASAP, you have 3 things besides the fiddling logic: the existing data, the update info, and what needs to be updated. All can be represented by simple structures. (Existing data + update info) feed into the fiddling logic, which spits out (what needs to be updated). "What needs to be updated" is a structure; you can test it. Then you just run the queries that do what the "what needs to be updated" structure says. There are always exceptions of course, and that's normal. In those cases mocks are fine. All absolute "rules" in development are bad.
@@OrbitalCookie okay, so domain logic or "fiddling" would need to keep no dependencies of its own and get everything it needs via function arguments? So it would be made up of pure functions with no state. That's fine but then the definition of domain logic from the article no longer applies since there may be an arbitrary number of abstraction levels within it that have their own separate fiddling and side effect parts. So if you still want to avoid mocking you would only be able to test the pure parts but not the stateful parts that use them, since those would be considered integration tests, even though they're part of the domain logic, leaving holes in the test coverage at different levels.
I would just avoid mocking that validates against implementation, to avoid coupling the tests to it, but stubbing dependencies within domain logic should be fine imo.
Title: "Thoughts About Unit Testing"
Thumbnail: "Stop doing this"
You got it boss, no more unit testing for me!
The thing about mocks and unit tests: they look useless until you run them 😂 then you are happy you have written them
This approach is fine, until your pure logic _depends_ on data that comes from the DB. For example (in TS):
function saveUser(user: User, db: Database) {
    let sameEmail = db.getUserByEmail(user.email)
    if (sameEmail) {
        throw new Error("Email already in use")
    }
    if (user.password.length < 8) {
        throw new Error("Password is too short");
    }
    /* more validation */
    db.saveUser(user)
}
When this happens, your unit tests end up reimplementing the code inside `saveUser`, creating a dependency to the implementation.
If you can, you always want to write pure functions, but sometimes there are hard logic dependencies that you cannot just ignore, otherwise you end up coupled to the implementation. Using mocks, you can actually test the expected behavior without this coupling.
Another situation (I've been there when writing Haskell) is that in order to keep the business logic pure, you end up defunctionalizing the codebase, separating what should be done from the actual execution. This makes things far more complicated (it involves writing an interpreter, defining all possible actions as data types, etc.) than just using mocks.
@@valcron-1000 I agree, I've also been there, and it can make the code a lot more complicated sometimes than just using a mock.
I'm a total nobody who writes garbage code - slowly. A 0.1x developer. I like unit testing because I'm not smart enough to hold all the other dependencies in the working memory in my head. Must be nice to be smart enough to not have to mock everything... 😔
god to the jr... media personality to the sr.
I am beginning to wonder if Prime's self professed knowledge of development is actually pretty narrow. If throwing around blanket statements like 'never mocking' then perhaps you've worked on simple systems that never require mocks in the first place?
You mean like that super simple codebase called Netflix... they only have a few hundred users I heard, that's not a complex codebase at all....
Meanwhile I had the joy of finding some tests in the java codebase at work where the database is mocked, and the database mock returns a mock of a plain java object. Sometimes I wonder where people got the idea that the new keyword is an antipattern
hahahaha
also sorry
At a guess, the constructor/factory had some logic in it that talked to something, and used the results to get the object.
Is that stupid? Yes.
But that's what a lot of people did. Hey, a constructor is just a function right?
The opposite idea is called IoC or DI: if your object needs, say, a DAO (those things that prevent @ThePrimeTimeagen from having to make 10kloc pull requests because a data service changed), you go and get the DAO and then hand it to the constructor. The constructor does not go out and do things. The object, after it's constructed, should (to the extent it's reasonable) not go and find things to talk to. Those things either get injected into it (through, say, the method call, or the constructor if it's used many times), or those things go and talk to the object.
The whole injection framework thing should not be confused with the concept of DI as a coding style. They're two very different things.
@@HrHaakon Not at all, in this case it was a simple Spring Data JPA repository like
public interface SomeRepository extends CrudRepository<MyEntity, Long> {
    MyEntity findBySomething(Long value);
}

public class MyEntity {
    private Long id;
    private String someProperty;
    // Getters, setters and stuff
}
And a test like
void test() {
    var repo = mock(SomeRepository.class);
    var entityMock = mock(MyEntity.class);
    when(repo.findBySomething(anyLong())).thenReturn(entityMock);
    when(entityMock.getSomeProperty()).thenReturn("Hi!");

    var service = new MyService(repo);
    var entity = service.getById(17);
    assertEquals("Hi!", entity.getSomeProperty());
}
@@WindupTerminus your example uses DI (repo is injected into service instead of service creating repo via new Repo) and not the new keyword, so not really sure what you wonder about.
9:08 you should not write mundane tests. Arguably you shouldn't hand-write simple data validation either, but instead use some kind of schema validator akin to JSON Schema or protobuf (yeah, protobuf is secondarily a schema validator).
Perhaps the example was made simple for ease of reading, fine, but I have seen a lot of code written like that and I would maintain that if it's as simple as what's there, it should not have dedicated tests.
On the other hand, I completely agree with how bad mocks are. I absolutely agree integration tests are very useful.
Raw-dogged squeel is my new favourite thing to say
How would you approach testing a function with multiple dependencies (database service, logging service, http request service, etc), where the code execution takes different paths based on various conditionals? The function's behavior might change if, for example, a validation step fails or an HTTP request encounters an error, so some further code may not be executed while other code still will. I want to ensure comprehensive testing to cover all possible scenarios. Given the complexity and potential branching, how would you design tests to verify that the function behaves correctly in different conditions? I don't think it's possible to achieve this without mocking those dependencies, or at least I can't figure out a way.
This is why we should follow community conventions on things. Don't fundamentally change how something works, pls. Bill needs to change jobs and work in a language that isn't Go.
I'll also argue that you need to verify everything (pure functions) is called, and in the order you expect... because it's great to unit test, but it's also great to check that what you tested is actually used.
So for this you need to mock your pure functions.
In short, yes when you mock too much, something smells in your code.
At my job, we work mainly in Rails... our biggest mantra is, "if you find yourself fighting rails, you're probably doing something wrong".... this is what I think of when I see someone circumventing a Go feature to add.... a tryCatch function? That just sounds awful.
8:20, that 8 is the real bad guy here.
9:30, my code is beautiful. I'm never afraid to add/change things.
8:00 The sound of silence can be pretty intense, though notably he changed to more silence.
A RYAN GEORGE REFERENCE IN MY PRIMEAGEN VIDEO? IT'S MORE LIKELY THAN YOU THINK
If a database is part of your tests, then your tests are wrong. The database has already been tested; you don't need to waste infrastructure on testing a database. Mocking a database call should be straightforward and obvious. A test that completes in a microsecond is the test that actually gets run on every commit. A test that takes 2 seconds is the test that gets forgotten until it fails in CI/CD.
agreed
agreed
agreed
agreed
Depends on what you're mocking, and how.
I frequently mock network components
I don't like mocks but I especially don't like them when we are mocking a service or code that we own. Mocks are tech debt as they are written.
I don't see any problem with using a mock in the described example. Every time you use a mock in your test, you just must be aware that the mocked part ALWAYS has to be tested in a separate test. Whether you use a mock or design your code so it doesn't require one (e.g. extract the problematic part from the code, as was proposed in the example), in both situations you are avoiding using this problematic part of the code in your test, so the outcome is the same, and which solution you choose doesn't really matter. I advocate distinguishing the "impure code" (= connections to external systems) from the "pure code" (testable, e.g. data transform functions), but in some situations (e.g. in OOP) you just need mocks.
I'm brand new to go, learning it now thanks to Prime's videos. Hadn't heard of Naked returns yet. I googled it after he mentioned it because I couldn't tell what they were from that brief mention. I literally stood up in my chair when I read the documentation. I don't like that. I will not be using that feature lol
Mocks stand in for the parts of that fiddling function you don't test. Usually that function is not just a map; it will have some logic, need to fetch data from a dependency only in specific cases, and stuff like that. You can solve it with function composition or delegates too, but in the world of dependency injection we live in, I believe mocks are pretty useful.
Why would you test a dependency again when it is already tested? Why would you take on the burden of initializing everything that dependency needs in your test again? Just mock it and test the fiddling part; it's a couple lines of code versus many lines of initialization and the danger of tests overlapping each other.
When someone says that overused phrase "if you mock you are not testing anything", I always ask if they are doing functional programming, or what kind of dependency inversion they are using; if they use dependency injection, that sentence makes no sense to me at all.
I feel like people misunderstand the point of unit tests. They're not supposed to test the real-world use case; they're a sanity check for your small unit of code. They should be small and quick to execute, and catch unexpected edge cases or a "unit" that is somehow modified later on. They're only the first step of testing
Then how would you structure your codebase? File structure and MVC logic? Please inform me: what's a good way to develop, let's say, a Rust http service?
This absolutely 👏🏿👏🏿👏🏿 As soon as I learned about mocks, I intuitively thought something was off about them. I came to see limited value in them while using a very difficult coding framework (AWS SWF's Flow Framework), but outside of that I saw a bunch of devs pretend that they needed to test that database and other remote-call parameters were set exactly so. If anything changed with the API while it remained functionally the same, you broke the test for sure, because it was validating the internal procedure, not the behavior.
The most substantial benefit I see to mocks is when you need to test exception handling behavior. With a real database or remote connection, testing against unstable or failed network conditions is too annoying to simulate. You can have a remote call trigger an exception a number of times before returning to verify that internal retries are happening too.
3:54 That reference was TIGHT!
My PR is done. But now I need to add unit tests. So the motivation hits hard
Same thing happened here! There's no way to mock the Snowflake data warehouse, so some are saying "why not let the unit tests access a toy Snowflake account in the cloud?" Then it's done that way.
Smol Ame writing unit tests? Let's ground pound some bits!
I would focus on integration tests, and only unit test when you have to. Mocks aren't production code and require maintenance, so you only want to use those when you can't avoid it.
Yeah basically most backend code is like this:
Controller -> BusinessLogic (some people call this 'Service' just call it BusinessLogic ffs) -> Repository (Database or another API)
You only unit test the BusinessLogic.
Integration test can cover the entire path.
But I still don't agree with "never mock". Tried it, and the codebase got so many bugs over time due to poor coverage.
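A sketch of the "only unit test the BusinessLogic" layering in Go, with everything hypothetical: the Repository boundary is an interface, and a tiny stub drives the error path.

package business

import (
    "errors"
    "fmt"
    "testing"
)

type User struct{ ID int }

// Repository is the boundary the business logic depends on.
type Repository interface {
    GetUser(id int) (User, error)
}

// GreetUser is the business logic under test.
func GreetUser(r Repository, id int) (string, error) {
    u, err := r.GetUser(id)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("hello user %d", u.ID), nil
}

// stubRepo lets a test drive both the happy path and the error path.
type stubRepo struct {
    u   User
    err error
}

func (s stubRepo) GetUser(int) (User, error) { return s.u, s.err }

func TestGreetUserPropagatesRepoError(t *testing.T) {
    if _, err := GreetUser(stubRepo{err: errors.New("db down")}, 1); err == nil {
        t.Fatal("expected error")
    }
}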