Testing in .NET is About to Change
- Published 20 Sep 2024
- Until the 30th of September get 30% off any course on Dometrain with code BTS30: dometrain.com/...
Subscribe to my weekly newsletter: nickchapsas.com
Become a Patreon and get special perks: / nickchapsas
Hello, everybody, I'm Nick, and in this video I will introduce you to a brand new testing library in .NET called TUnit. TUnit aims to be the new way of doing testing in .NET and I am very excited about its future.
Give TUnit a star on GitHub: github.com/tho...
Workshops: bit.ly/nickwor...
Don't forget to comment, like and subscribe :)
Social Media:
Follow me on GitHub: github.com/Elf...
Follow me on Twitter: / nickchapsas
Connect on LinkedIn: / nick-chapsas
Keep coding merch: keepcoding.shop
#csharp #dotnet #codecop
Personally I prefer PUnit.
Testing in Production :3
🤣🤣🤣
@@IsaacOjeda The Crowdstrike method
This comment made my day thank you
Are you a Microsoftie?
We have shitty tests that randomly fail and depend on one another. Now we've got a perfect framework to hide it instead of having to fix these damn tests. YAY!!!
Now I can over-engineer my tests easier than ever! ;)
We need *test* frameworks not just *unit* test frameworks (we use them for more than just unit tests)
This is pretty cool. If I'm writing unit tests, it's true I might not see the value of this framework if all my tests are perfectly isolated. But for integration tests/UI tests, I can finally stop running everything in one giant method and finally have test order control! This is really useful for different types of testing.
Before assembly, what a fantastic way to hide weird behaviour in a large test suite.
Because doing things before tests is weird. And how is it going to hide it if the code that runs is annotated with Before(Assembly)?
Not true for acceptance tests where you're spinning up a big suite of tests.
You've got a large number of files in an assembly. A problem in one file, and this thing in an unrelated file is causing the issue. Seems like it would be annoying.
Most existing tests I've run across just leave the weird behavior in place and let their pipelines fail occasionally. People hate troubleshooting tests because most of the time the issue is in the test code and not the code being tested, so they view it as a waste of time. I personally will fix them as I come across them, but I definitely respect the opinion that if you could just add a DependsOn attribute to make the order explicit and quickly fix the issue, that might be the worthwhile solution. I'll still come by and fix it later for you, but not everyone needs to be a testing expert.
One usage where this can be beneficial is running your tests against an in memory server. You can start it before your tests, and dispose it after.
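A minimal sketch of that in-memory-server pattern, using the `Before(Assembly)`/`After(Assembly)` hooks mentioned elsewhere in this thread. The `InMemoryServer` class is hypothetical, and the exact TUnit hook signatures should be checked against its docs:

```csharp
// Sketch only: wiring an in-memory server to assembly-level hooks.
// "InMemoryServer" stands in for whatever test server you use.
public static class ServerHooks
{
    public static InMemoryServer? Server { get; private set; }

    [Before(Assembly)]
    public static void StartServer()
    {
        Server = new InMemoryServer();
        Server.Start(); // runs once, before any test in this assembly
    }

    [After(Assembly)]
    public static void StopServer()
    {
        Server?.Dispose(); // runs once, after the last test has finished
    }
}
```

Tests can then reach the shared instance through the static `Server` property instead of each spinning up their own.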
Hmmm, maybe over time it will show more advantages over other testing frameworks, but NUnit/xUnit are both decent, well supported, and feature rich, so it does make it more difficult to switch at the moment. But that being said, it's new and will no doubt get better, so who knows 😊
So much hate in these comments. If you've never run into the limitations of xUnit then you wouldn't realize the benefits. A lot of people override the standard xUnit framework because parallelization in xUnit is bad and not as configurable as it should be. Once you do that, it's extremely difficult to use SpecFlow or some other framework on top of it.
It's excellent to see that it's source generated. That's the way to go.
If you have parallelisation issues with unit tests you definitely have bigger problem than xUnit
I love this framework and I love the assertions too, because I like my assertions to both start with "Assert" so that I can scan them in the code easily and also to be fluent. I can proudly say that I am this project's first sponsor!
All they did is copy java
Liked for "Assert DOT THAT DOT IS DOT " -Dance
There is no AI in it? Pfffff, what a waste. [ɯsɐɔɹɐs].
AIUnit next
He makes a good point: ChatGPT might not know how to write my tests in it. Yet.
That sarcasm tag is brilliant! Don't know why but I've never seen it written like that before
This is awesome as we don't need vstest host anymore and I personally hate it.
2:15 I love how nick chooses "random" numbers
If only the test ran for 260 seconds too
Thanks Nick. It's interesting to discover new libraries, though I don't see myself using this one for years yet.
The Retry feature is a real plus. When a 2-hours long CI/CD fails for the third time because of flaky unit tests (among ~40,000 unit tests in the solution) you want to see the world burn.
All the rest, I don't think I've encountered situations where Xunit was limited.
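The `Retry` feature described above might look something like this sketch; the queue service, test name, and timeout are all hypothetical, and the attribute usage should be verified against the TUnit docs:

```csharp
public class QueueTests
{
    [Test]
    [Retry(3)] // re-run up to 3 times before reporting a failure
    public async Task Message_Is_Delivered_Within_Timeout()
    {
        // FlakyQueue is a stand-in for whatever occasionally-flaky
        // dependency keeps failing the pipeline.
        var delivered = await FlakyQueue.WaitForDeliveryAsync(TimeSpan.FromSeconds(5));
        await Assert.That(delivered).IsTrue();
    }
}
```

Scoping the retry to the one known-flaky test keeps the rest of the suite strict.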
If you have flaky code, then you need to fix the code, not adapt the freaking tests, dude.
It is not acceptable to have flaky code in the first place.
I think it's better to add these features to xUnit or NUnit instead of recreating a library (or maybe ship a package for xUnit or NUnit that adds these features), because most of us don't like moving to a new library.
What about support for build pipelines? Are there any pros and cons there?
This gets me excited.
I'd love to see more stuff about parallel execution of tests and how this stuff works with Azure pipelines.
The biggest problem I have with test arguments is that if you are using a matrix of test arguments and expected results, most frameworks see that as one test. I want to use the matrix of arguments to cover the cases, but if one edge case fails, the entire test fails. What would make something like this more ideal for me is if the multiple test arguments were broken out as individual test cases, at least for reporting when there's a failure.
I think the idea of the Before decorator is good, but it would also be great if you could define categories of tests which depend on one or more of those setup methods, so that state might be reused but different fixtures might require different setup states, and that could be used to apply what is needed for each test. There's a condition where one test might inherit two different conflicting states, so some thought needs to be put into that type of system, but I see a lot of potential.
I'm happy with NUnit
Yeah, I expected something really flashy. But aside from using the new test runner, it mostly feels like NUnit. Maybe it has a few more options for "Before", but otherwise, it feels the same.
xUnit is streets ahead of NUnit
@@Maloooon The benefit of this new unit testing framework is that it's much faster than both nUnit and xUnit, and supports AoT, which is becoming a bigger and bigger thing in *some* use cases.
For the projects I'm working on at work it probably wouldn't matter much, but I can see some that could really use those benefits.
I use GUnit
TestsSucksUnit =)
I see the same problem with TUnit as with NUnit, one that isn't present with xUnit: xUnit is a lot easier for coding tests quickly with ReSharper. With just base ReSharper, if I decide some line in my code needs to be setup before the test, a quick keyboard shortcut automatically moves it to the constructor. This makes writing tests a lot faster and easier, since you don't need to write boilerplate code by hand.
Also XUnit assert syntax is a lot better in my opinion
You can do the same thing in TUnit
Thanks for the info. As always, refactor all your unit tests and use it in production even if it's a preview library, you will thank me later.
I would like to see simpler scaffolding for testing databases and apis, especially when it comes to delays, intermittent connections, corrupted data, partial data, really slow responses, connection strings, security [certs, encryption,etc] and more. While things like an in-memory database are good, it'd be nice to test more realistic scenarios with a project, so you can handle things better.
Looks exciting, but I was hoping to see property-based unit testing here. Patiently waiting for the first library that makes it really accessible for .NET.
What's wrong with CsCheck?
Really, most projects just need a library that can assert things and a framework that can run those assertions. I've never met a real-life scenario where unit-test-related reflection was the bottleneck making unit tests run slowly.
Also these days you might want to use a library that LLMs know about, so that you could autogenerate test cases sometimes. But that's about it, it's not that complicated.
I must admit, I've never used any 3rd-party library for unit testing in .NET, so every library is most likely better than the one we use: MSTest. But TUnit looks good so far, and I really like that you have lots of control over it. I don't like the assertions, though. I would rather have manual Assert.IsTrue, IsFalse, AreEqual, etc. If I wanted that chaining thing, I would use Fluent Assertions. But I would never use that myself, because I really hate chaining APIs in general. I think it's a stupid idea and I see no benefits to that pattern at all. It looks nice as a one-liner, but as soon as your line exceeds the editor width or spans more than two lines, it gets less readable and extremely hard to debug.
Thanks for the feedback. You're not tied to the TUnit assertions and can use whichever library you like!
I wish .Net had testing builtin like Rust and actually support the same way of having a testing module in the same file that is only compiled when running tests.
Assert.That(x).IsEqualTo(y) is actually the same approach as in the Java library AssertJ
All that hype in the beginning for a more granular [SetUp] attribute o_O?
DependsOn is pretty much what I've wanted for a number of my integration tests. Thanks for sharing!
Oh hell yes! I likey!!
Yeah, this should get some funding sent its way for sure.
Two things: When you do the full video, I would like to see how you would emulate collection fixtures from xUnit. My guess, looking at the preview, is that I would create a base class that implements `BeforeAssembly`, wire up a static property on the base class, and then all my child tests could access that. (E.g. for an EF context, some class that wraps API requests, etc.)
Second thing: what are your thoughts on creating an abstraction around asserts? It seems like every assert library decides to do things its own way, and if you switch libraries that ends up being the worst part of the migration: changing all the asserts. If you abstracted that out into your own assert classes, and had different concrete implementations for different test libraries, you could more easily swap the asserts out by just changing how you implement the abstractions. You'd still have to fix the test attributes, but that could probably be done through a mass find/replace.
reminds me of the spec flow library
I'm curious how the code generation performance works on large projects. I'd rather suffer through test discovery delays when I'm running tests than suffer through code generation delays every single time I build - but if the code generation/build step isn't particularly noticeable, that could be an interesting tweak.
Heya! Ive got a benchmark on GitHub for building but admittedly it is a very minimal test project. I'd be happy to extend it but writing dummy tests in itself is a bit of effort so isn't the highest on my priority list right now.
As for local development, it hopefully shouldn't affect you too much as it uses the newer incremental source generator. That means it only generates new source if it detects changes that would affect it. So hopefully it'll keep performance to an acceptable level!
Source generators don't normally run only at compile time; they run every time a file is changed. If you clone a repository, or clean the build artifacts directory, it takes a little time to generate the files, but once that's done they're normal C# files and shouldn't affect the build times much.
In theory source generators could cause a lot of work every time you type a character, but incremental source generators do a quick analysis to see if the change would require it to regenerate the source. Most of the time it doesn't.
I haven't read the source generators for TUnit, but I suspect it only needs to run the source generator when one of the attributes is changed or added.
Tldr; it shouldn't cause much of an increase in build times.
Nick, would you consider making a video about required CET in .NET 9 and the 10-100% method calling performance regressions that come with it on any CPU that isn't a modern intel processor?
Wait that’s a thing??
@@nickchapsas Yes, and it's causing major performance regressions for CPUs that do not have specialized hardware for CET. It also causes issues when interoping with other runtimes that don't support it, such as Java (the specific JVM is not named).
YouTube keeps removing my comments, probably because of my sources. You can check 107651 and 103654 in the dotnet runtime and 36619 in perf-autofiling-issues.
really cool feature. bout to start migrating code to TUnit now. XD
Are there any other unit testing libraries that use the new testing platform?
When AI writes the tests perfectly from specs, then I am happy... wait, that would mean the AI could most likely also write the software
Aren't tests that depend on other tests typically an anti-pattern? I don't see why I'd want to use the DependsOn attribute.
Integration testing or where state matters. Less relevant for unit tests.
@@Thomhurst sharing state between test runs is pure evil
TestContext is such an awesome concept. And no reflection omg, omg omg. Going to buy your testing course.
My favorite is still the G-G-G-G-G-Unit. Poppin tham thangs....
I wonder what would happen if there's a circular dependency of tests. A depends on B. B depends on A. Hopefully a runtime exception. Worst case is a deadlock...
Hey @JustArion there's a compile time analyzer and also a runtime exception (for if you'd disabled analyzers)
would love to see you do a video putting o1-preview to the test
100k, you deserve it !
This test ordering sounds cool, but it will lead to maintenance issues eventually. Tests should not depend on execution order.
Nice, it's closer to NUnit than to xUnit. I like that they put in `Before(Test)`, which is a feature I missed from NUnit that isn't in xUnit. But I have to await assertions? OK, time to get used to that.
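The NUnit-style setup being described could be sketched like this, using the `Before(Test)` hook and awaited assertion shown in the video (the `Calculator` class is hypothetical):

```csharp
public class CalculatorTests
{
    private Calculator _sut = null!;

    [Before(Test)]
    public void Setup()
    {
        // Runs before each test, much like NUnit's [SetUp]
        _sut = new Calculator();
    }

    [Test]
    public async Task Add_Returns_Sum()
    {
        var result = _sut.Add(6, 9);
        // TUnit assertions are awaited
        await Assert.That(result).IsEqualTo(15);
    }
}
```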
This opens up several windows where we had to create abominable structures. For flakiness, I had to develop a whole retry library...
Does this support Specflow?
I don't buy into stateful unit tests, before/after and all that goop. I have a static creating the SUT and its dependencies, then the test method does arrange-act-assert. Everything's pure. No state, no hidden stuff, no magic. For integration tests I use SpecFlow.
I like it. I hate tests being a different project
I completely agree that Unit tests being in the same project as the class being tested would be a game changer.
After digging into it a bit more I don't think that is how it works in TUnit. Much to my disappointment.
It would be so much easier to write and refactor code if the test code didn't have to be in a separate project.
Long time lover of xUnit, but can see the benefits you showed of TUnit. Downside is that greenfield development (using newer .net versions like 8) is rare in my experience.
Completely understand your frustration but older codebases likely already have massive established test suites that they're unlikely to invest in migrating to a new framework. That combined with the benefits of newer runtime and language features is why I decided to stick to lts.
Not a fan of the assertions part at all, but I do like the rest. I would prefer to use Shouldly for assertions and have sync or async tests at my own discretion, rather than being forced to be async.
It would be interesting if a test could return something that could be passed to the next test.. but that’s probably too crazy
Aren't test dependencies kind of an anti-pattern? Each test should highlight a single problem and should be independent and self-sufficient. Also, the "Before" initializers are too implicit in several ways: first, initialization logic could be spread among test classes and be "surprising"; second, it's not very clear what parameters can be passed to the "Before" attribute, since no code completion with choices can be used.
Performance is great btw.
There are analyzers that tell you what parameters to pass to each before/after method.
And sharing among multiple classes won't be a thing for before/after class, unless you use before every/after every. I tried to make the language obvious. If you use a normal before/after(class) it'll only affect tests in the class it's defined in
I'm excited about this, especially the direct control over parallelism, but I'll be sticking with Shouldly for my assertions. I can't stand Overly.Fluent.Interfaces.That.Create.A.Class.For.Every.Word.In.An.Assertion. If your framework has a class called "Is" or "It" and it's NOT a mocking library, you've made a wrong turn, and you need to reevaluate your decisions. "But it's so discoverable", you say. Great, that helps me for the first ten minutes of using the framework, and then it's a speed bump for the rest of my career. Just stop it. Give me a "ShouldBeLessThanOrEqualTo". Oh look, I just "discovered" the method I wanted in ONE auto-complete step instead of seven, and I only had to type ".sblto" in the IDE. Velocity matters.
I'm also not really following the power of this tool over other frameworks. In my testing flow, I wish there was a simple way to cache the dependencies between tests.
Source generated test library 👀
I have mixed feelings.
While I'm sold on supporting the new test platform and on source generation over reflection, I'm not sure the rest of the features are all good ideas. Maybe, maybe not; I'm not passing judgment yet. I do wonder if xUnit could be updated to support the test platform and source generation?
The title of the video and the example were a bit confusing to me. My initial reaction was "Is he advocating that we start writing our test code inside the actual classes? Dear lord, the mess!" Why would you test an Add method inside a class called Test inside an assembly with the name Tests?
All in all, I think a more real world class would have proven a better example of what TUnit is trying to bring to the table, or maybe it would just have made it a bit less confusing to me as to what the big change to testing was/is.
I found the documentation for TUnit, but their example was the same code you were showing so that didn't help.
Example is the same because probably it’s a paid ad ;)
Man.. just used NUnit 😢
Keep using it. TUnit will not exceed NUnit for a long time, and any experience you gain will be usable in TUnit just as much.
Keep using it, TUnit is not ready yet
It is ok to not be using every new thing all the time.
There are far more important things than test frameworks, don't worry
Good library. Can you compare it with the new xUnit v3? They also made a lot of changes compared to v2.
How quick is the source generation? Is it noticeable?
Ok, I watched a 12 minute long definitely paid ad, and didn’t see any selling feature I might actually like in this library. But I see some manipulation.
Code generation != blazing fast.
Having every test asynchronous is actually bad for performance.
They removed dependency on VSTest so…what?
You can have init and tear down…just like in other test libraries.
Assertions are too verbose.
Feature about tests depending on other test is dangerous, it CAN and WILL be used to cover up badly written tests.
I'm keen to know how people do versioning in .NET if they aren't using CI/CD. I can't find a way to do it that's not manual. I had a way that worked in the project file using dates, but that made the NuGet package manager behave strangely.
You can use gitversion and run it in your build script. Which is basically how we do it in our CI which is nothing more than an automated way to run your build script.
(just don't tell me you don't use version control either.)
@@pilotboba Thanks, I’ll look into that.
What is the point of DependsOn(OtherTest)? This is a violation of the FIRST principles in unit testing. Unit tests shouldn't depend on one another.
Like I said, it’s not for unit testing
Integration testing is different. You're testing suites of behavior. 1) Create a new user. 2) Retrieve the just-created user. 3) Update their email address. 4) Check that the change was persisted. 5) Clean up after yourself by deleting the user. 6) Check that the user was deleted.
These are six different tests that have to run in a specific order. Yes, you can cram all of these into a single "CRUD" test, but the granularity of 6 individual tests is better for visibility. Now, what I need is the ability to say that #3 depends on #2, and #4 depends on #3, but #5 should only run after #3 & #4 even if they fail. I want to know what the "CRD" parts work, but the "U" is currently broken.
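That workflow could be sketched with the `DependsOn` attribute from the video. The client and test names here are hypothetical, and whether TUnit can run a dependent cleanup step even when its dependencies *fail*, as asked for above, is something to verify against its docs:

```csharp
public class UserCrudTests
{
    // _api is a hypothetical client wrapping the HTTP calls.
    private static readonly UserApiClient _api = new();
    private static Guid _userId;

    [Test]
    public async Task CreateUser()
        => _userId = await _api.CreateAsync("test@example.com");

    [Test, DependsOn(nameof(CreateUser))]
    public async Task RetrieveUser()
        => await Assert.That(await _api.GetAsync(_userId)).IsNotNull();

    [Test, DependsOn(nameof(RetrieveUser))]
    public async Task UpdateEmail()
        => await _api.UpdateEmailAsync(_userId, "new@example.com");

    [Test, DependsOn(nameof(UpdateEmail))]
    public async Task ChangeIsPersisted()
        => await Assert.That((await _api.GetAsync(_userId)).Email)
                       .IsEqualTo("new@example.com");

    // Ideally this cleanup test would run even if the two above fail.
    [Test, DependsOn(nameof(UpdateEmail)), DependsOn(nameof(ChangeIsPersisted))]
    public async Task DeleteUser()
        => await _api.DeleteAsync(_userId);
}
```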
@@MelGrubb
Awesome so you can make 6 bad interrelated tests instead of 6 independent ones. And for some reason feel good about it because they're integration tests, not unit tests
@@IronDoctorChris Integration tests are sometimes about entire workflows. This, then this, then this. It's not all the time, certainly, but I definitely write workflow/scenario-driven tests on occasion. It would be nice to have a testing framework that supports this by design rather than me having to write one multi-step test. If you don't like it, or can't wrap your head around complex testing scenarios, then nobody is trying to force you. I'm just saying built-in support for those of us that can would be nice.
Thanks for introducing a new tool to our toolbelt. Now that we know it exists, we can decide if and when it might solve a problem for us. Cool library, thanks.
Even if i wanted to use it, Projects will still just have N/X Unit 😂
Was it necessary to mention how handsome he is!! 😅😅
Great library will give it a try
Was it necessary to comment on the mention of how handsome he is? (inception activated)
@@pilotboba 😂😂😂
Nah, this is just a conspiracy to cover the entire alphabet
I wished for G Unit but I wasn’t lucky this time
Playwright?
No thanks. Migrating to this would be so much pain and expensive, while it has no real benefits.
Add(6,9) nice 🤣
IMO too much source generation here. I can understand it for production code, but for tests?
They just made it look like junit and assertj... lol
Nice
I see no value bar it being a bit faster; change for the sake of change. In the end you should be running the tests for what you're working on, then a final full run before you push. After that the pipeline does the testing, and really, who cares about the speed of that, since if it's spec'd correctly it will be no issue. If my app is so large and complex that testing is way too slow, either you're writing tests wrong or you're at the point where you need to rearchitect, or move to Go or something that better suits larger projects.
Is that a paid ad?
It would have to be disclosed if it was, so no.
So it runs a little faster, meaning my 6k tests take less than 30 seconds to run, but I lose Assert.AreEqual in exchange for a much more verbose setup.
I have thousands of asserts and write those most of the time. Why is this better for me on a day-to-day basis other than a speedup? I can always use slow tests as an excuse to optimise my code...
You can use whatever assertion library you like. You're not tied to one or the other.
@@Thomhurst at the same time I can also use any unit testing library I like, the point of switching something so foundational is to give an advantage vs what I'm already using.
If that thing is speed then this is great but if I now need to supplement the 'superior' thing with more dependencies or write wrappers to get the correct behaviour then it might not be worth the effort since it's very possible that my development workflow and this library are just not aligned.
If they committed to providing both styles (with await if that's needed here) then that would be great and would make the conversion process more palatable.
@@wiipronhi if converting to this provided challenges for you, I'd be open to hearing them so I could hopefully help alleviate them!
@@Thomhurst It's not the conversion process itself, that's (hopefully) a one time operation that can be sped up with some regex (and manual fixes). The issue is mostly the difference in style between fluent and non-fluent assertions in day to day work.
Assert.AreEqual("A", "B");
Assert.That("A").IsEqualTo("B");
Both of these do the same thing, but now I don't know all of the capabilities of the built-in assertion framework, because some functionality will be hidden behind various operators. I'm not familiar with TUnit, but usually there is some level of nesting before all the capabilities can be found, which leads to devs searching through various sub types ("Is", "Is.Not", etc.) or giving up and using Google.
If you are looking at that code above in code review, it looks more straightforward the first time you read it because it reads more like a sentence... great... but I am going to read and write a lot of these things. As far as I can tell there is not really a good use case for all of this method chaining when I can do it all in one call, which is faster to write and faster to read.
Maybe TUnit hasn't gone as far down the fluent rabbit hole as some other frameworks (.That(X).IsEqualTo(Y) vs .That.Is.EqualTo(Y) ) but if they are going down that path then the developers of that library are making the decision to support what I see as usability issues rather than helping me keep my code small and the code I have to review minimal.
You seem to somewhat be pro fluent assertions, can you give me a use case where the fluent style is actually an advantage vs just having all accessors available on the Assert class? Builders have their use case but once I'm chaining 3-4 of them together it always feels like I need to go over multiple lines and that calls could have just been optional parameters or scopes.
@@Thomhurst YouTube deleted my reply, but in short it's not about the conversion; that's a one-time thing that is just the grunt work of getting through it.
The real issue is day to day usage where the fluent assertions make it harder to find APIs as they are hidden in sub types (typically, not necessarily here). They also don't make it faster as it's more code to write and in code review it's more code to read which only benefits from the more verbose syntax the first time you read it after that it just takes longer to read compared to normal statements.
If this is the focus of the library devs to provide this style then they seem to have different priorities which might mean that I find other issues down the line that I have to work around. Yes I can get a library but then I'm dependent on the library exposing all the functionality and I can write my own wrappers but that has the same issues and takes time to write and potentially more time to maintain as TUnit evolves.
You seem to be pro fluent assertions; can you give me an example where the more declarative style is beneficial? I usually find that with this style of programming it's more text, which feels like it could have just been optional parameters on the main assert method, or a scope for grouped things.
Why do I feel like I've used something like this in Java, christ I hate Java.
Really underwhelming after that introduction, lol. Sounds like problems that existing testing libraries should be solving, no need for a new one
Can we stop adding more ways to write tests? Focus on writing fewer and fewer tests. Let AI write tests for us.
The best way to write integration tests is not use C# to write them. I simply run my C# app and the environment in docker and send requests from jest tests in js. Tests are very short and run concurrently. Jest looks miles better than this.
Usually you want to test the final effects of the execution, such as if the data is correctly stored in the database. I'm not sure if this is easy with Jest but with standard integration tests you just query the same DB that was used for testing.
There is no way JavaScript could ever look better than C#.
You can do queries in js of course but you don’t necessarily have to query the database manually. For example if you test POST endpoint you can check the result via GET endpoint
@@Sergio_Loureiro it’s definitely better for tests. You don’t need static typing in tests.
Source generation is lame…
There is nothing brilliant about this. This is all potato-potAto.
Use MsTest Marco
Are you advocating for testing due to benefits of it, or because you have a course on it, and it brings you a lot of m**effing money?
Both due to the benefits of it and because it brings me a lot of motherfucking money
Offers nothing really useful; most features allow you to do what you usually shouldn't be doing anyway.
What a weak, narrow minded argument
First!
Offers basically nothing interesting over xUnit/NUnit. I don't know what I'm supposed to be excited about here.
If you can’t see it then it’s not for you
Same here. I can't see anything that I can't do in NUnit.
Found two nunit developers. It is very different.
You probably work with trivial business logic.
If you have 500 tests, it makes a difference in testing speed. If you have nontrivial tests, it helps simplify setup.