Enroll course *Beginning Object-Oriented Programming with C#* ► codinghelmet.com/go/beginning-oop-with-csharp
Become a sponsor and gain access to additional resources ► www.patreon.com/zoranhorvat
Join Discord server with topics on C# ► discord.gg/SFu7pSGq
Your speaking manner is like listening to a story :) Thanks for such informative videos!
Love this presentation style - Zoran, you are the C# Socrates gadfly that we needed. :)
Excellent insights. Though I would disagree a bit with "Everyone must learn programming before doing programming". Sometimes the best way of learning things is just making an attempt to build something without diving too deep into theory. Even though you know it will be garbage in the end.
Thanks for sharing and keep it up with the great content!
Hi Zoran, great video! Could you elaborate a little bit on why putting collections in records should be avoided?
Because a record implements GetHashCode and Equals, and collections don't. There are two ways to compare two collections: in order or out of order. Since a collection cannot decide which one is right, it cannot implement Equals.
@@zoran-horvat Regular C# classes also implement these methods by default.
@@zoran-horvat so the solution is to use a collection type that does unambiguously implement GetHashCode? what are the (best) candidates?
@@qorxmazmaharram8300 They do, but the default is not appropriate for value-typed semantics. Two identical records would say they differ if they contained collections, even if the collections have the same content.
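A minimal sketch of that pitfall; the type and property names are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var a = new Person("Ana", new List<string> { "chess" });
var b = new Person("Ana", new List<string> { "chess" });

// The compiler-generated Equals compares the List<string> references, not their content,
// so two records carrying identical data report as different.
Console.WriteLine(a == b);                             // False
Console.WriteLine(a.Hobbies.SequenceEqual(b.Hobbies)); // True: the content is equal

public record Person(string Name, List<string> Hobbies);
```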
@@zoran-horvat What if the collection was an implementation wrapper around ImmutableList that also did a sequence compare, a ValueCollection if you like? I've implemented this myself for a document hierarchy that I needed to check for changes, including the order of elements in some sub-property collections.
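A rough sketch of the wrapper this comment describes, assuming in-order comparison is the equality you want; the name ValueCollection is the commenter's, everything else is illustrative:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

public sealed class ValueCollection<T> : IReadOnlyList<T>, IEquatable<ValueCollection<T>>
{
    private readonly ImmutableList<T> _items;

    public ValueCollection(IEnumerable<T> items) => _items = items.ToImmutableList();

    public T this[int index] => _items[index];
    public int Count => _items.Count;

    // In-order, element-by-element equality; a different ordering means a different value.
    public bool Equals(ValueCollection<T>? other) =>
        other is not null && _items.SequenceEqual(other._items);

    public override bool Equals(object? obj) => Equals(obj as ValueCollection<T>);

    public override int GetHashCode()
    {
        var hash = new HashCode();
        foreach (var item in _items) hash.Add(item);
        return hash.ToHashCode();
    }

    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

With something like this as the property type, a record regains value equality for its collection part, at the cost of committing to one specific definition of collection equality.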
I'm new to the bad practice(?) of using collections as part of a record. I've tried to read up on it, but I don't quite understand. Is it as simple as: I shouldn't use a record if I need my type to hold a list?
As always, great video and content!
Another amazing lesson. Thank you.
Linq isn't slow anymore, but it still isn't where it would need to be for applications that need to squeeze hard.
The people who have reason to care take more issue with how easy it is to make accidental allocations when the code is required to allocate at most a few KB per second, preferably none.
Any discussion in a specific project is worthless if you don't profile and know how to do it right.
I wish C# could have a pipe operator like F#'s. I don't think that is possible given how the language is designed. It would be a game changer.
I don't fully get why a record should not contain an ID. Can you please explain a bit further? Thanks
ID implies mutation. It is not a problem to have it in the record per se, but be prepared to turn it into a full-blown class (and remove equivalence members!) if it turns out that you will have to mutate the object.
For example, I implement quite a few entities in my designs as records, even with EF Core, because they are insert-only. But sometimes it turns out that I must implement mutation on some of them because there comes a request to support edit, e.g. to let the user fix typos. That is how I let a record with ID evolve into a common class later.
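A compressed sketch of that evolution, with illustrative type names (not taken from any real project):

```csharp
using System;

// Insert-only: value equality is harmless because the object never changes after creation.
public record AuditEntry(Guid Id, string Message, DateTime CreatedAt);

// After the "let users fix typos" request arrives: the same kind of entity as a plain class,
// identified by its Id alone, with the equality members gone and the editable field mutable.
public class Note
{
    public Guid Id { get; init; }
    public string Text { get; set; } = string.Empty;
}
```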
I would like to see you work with data coming from database queries instead of using managed code to manipulate EF results.
You mean like using Dapper? That requires a justification, because infrastructure code would grow by a factor of ten.
I am still using c# 7.3 because I cannot figure out how to update my compiler in VS.
How can you keep objects out of the Gen 1 heap?
What do you mean by that?
@@zoran-horvat I'm asking what makes an object "short lived" as mentioned near 7:30 in the video. Presumably this is the gen 0 heap you're talking about and the GC gets complicated for longer lived objects that move to the gen 1 heap.
We all need more education )
Totally agree with you, but have called to mind this: "Hey, Teacher, leave those kids alone!"(c) ))
@@vladislavzhuravlev6440 We have a proverb: "Live a hundred years, learn a hundred years, and still die a fool"
What you are doing is not teaching, it is a very cool product. )
Why is it not correct to have a collection within a record?
Because a record defines value-typed semantics and collections don't. That would break the record.
@@zoran-horvat If we use a record to serialise some data when passing it around, how do we consolidate a 'DTO' that might contain data along with collections of data?
Regarding DTOs; should we make them mutable as suggested in your example? And I understand that having a collection in a record messes up value equality, but then what should we do instead?
The purpose of a DTO is to transfer data. It is not a model, and immutability plays no role in its design.
Regarding the question about collections, they violate the value typed semantics, and hence the containing class should not implement GetHashCode and Equals methods. It can be a common immutable class.
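A minimal sketch of that alternative, with made-up names: an immutable class carrying a collection, deliberately without Equals/GetHashCode overrides, so no value semantics are promised.

```csharp
using System.Collections.Generic;

// No value equality is implied here; two OrderDto instances are equal only if they are
// the same reference, which is all a transfer object needs.
public sealed class OrderDto
{
    public OrderDto(int id, IReadOnlyList<string> lines) => (Id, Lines) = (id, lines);

    public int Id { get; }
    public IReadOnlyList<string> Lines { get; }
}
```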
7:52 Is it slow? No, when it is done right ©
That's actually the reason for choosing the right design for the thing you are implementing. I think people tend to classify immutable design as "slow" because they just don't have an application for it in their day-to-day work.
Thank you
Do you have a good in depth video about immutable design?
Most of the code I made in my previous videos is immutable.
You can also read Scott Wlaschin's great book: "Domain Modeling Made Functional"
Comparing a struct to passing PersonId, Firstname and Lastname to a method, is not a fair comparison (6:54). Passing a single class reference to a method would be faster.
There are no structs in that portion of code. It's all values and classes.
@@zoran-horvat Oh, you are right, my bad. But when you mention programmers saying immutable is slow, they often mean that struct copying is slow, not the new class records.
Hi Zoran, how should I know when a record isn't the right choice and change it to a class type?
I mean, records are used for immutable structures and for value-type equality. My question is: when I need more methods that change the logic, that breaks the immutable state and it should be changed to a class. Am I right?
Try a functional approach where your objects stay immutable. If you need to change the state, try to create a function that returns a new immutable object.
@@Dr-Zed should I use class or records in that case?
@@alfonsdeda8912 records of course. The "with" keyword is what you'll want to look at
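For reference, a tiny example of the non-destructive update the reply refers to; the Person type is illustrative:

```csharp
using System;

var original = new Person("Ana", "Doe");
var renamed  = original with { LastName = "Smith" };  // new instance, original untouched

Console.WriteLine(original);  // Person { FirstName = Ana, LastName = Doe }
Console.WriteLine(renamed);   // Person { FirstName = Ana, LastName = Smith }

public record Person(string FirstName, string LastName);
```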
I think gen 0 GC does look at short-lived objects, because it has to determine what is rooted in order to promote to gen 1. Still, it's a micro-optimisation, and as with all optimisations, you should profile before optimising. Unfortunately, most C# devs I know don't do either.
The assumption that gen 0 is free and IO is expensive is a reasonable starting point when you're first writing the code though.
There is the sweep step in between, where the GC does consult unreachable objects, but only as an optimization of the memory management itself, making its operation faster overall.
"Value types are stored on the stack, reference types are stored on the heap" is the most common mistake I see when people try to optimise, when in reality it will often be used on a reference type or boxed, but they don't really understand when boxing happens
Great video :)
The collections on records thing annoys me.
If I have a record with a collection of records on it, I would have liked it to keep the value equality, without having to write custom equality or custom collection types etc. It also trips people up because there is no warning that the collection is compared by reference
There is no definition of equality among collections: there are at least two valid, and different, equality comparisons for any two collection instances.
I think that the issue that annoys you comes from attempting to put a collection onto a record in the first place - that is needless and cumbersome, also making it very difficult for the consumer to use. Collections simply do not belong there.
How would you model 1:m composition?
My argument is that if the properties of a record are compared by value, then when those properties are collections they should also be compared by value.
For example if a book has a title then you could use a record to represent it, but if it has a title and a set of authors then you couldn't use a record to represent a book? That doesn't make sense to me. The part that annoys me is that you need to know an implementation detail to know that it won't work.
@@georgehelyar Records are shallowly immutable, i.e. if a record has a mutable attribute such as a List, it's not automatically immutable anymore. However, there are some third-party implementations out there that could possibly help with your scenario, for example ImmutableListWithValueSemantics, ValuesCollection, etc.
@@johnnykeems2911 What about an array? Is a record with an immutable property still comparable?
In the context of DTOs and records, I'm always unsure if I should use the default record (class) or a readonly record struct. When should I choose which?
The decision record class vs. record struct is the same as class vs. struct. The default decision is usually record class.
I'd use required and init keywords for DTOs' properties.
That is just details. The substance is in equality comparisons.
@@zoran-horvat I agree, and that's where the C# designers somewhat mixed things up. IMO the primary constructor combined with property declaration is one thing, and value semantics another. In almost 100% of cases people use positional records for DTOs (they are extremely concise). Hardly anyone uses a record instead of a class and then declares properties as in old classic DTOs.
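The two styles being contrasted, side by side, with hypothetical DTO names:

```csharp
// Positional record: concise, constructor and init-only properties generated from the parameter list.
public record PersonDto(int Id, string FirstName, string LastName);

// Nominal record: same value semantics, but declared like a classic DTO,
// using required/init to keep it immutable after construction (C# 11+).
public record PersonDtoNominal
{
    public required int Id { get; init; }
    public required string FirstName { get; init; }
    public required string LastName { get; init; }
}
```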
yes, linq is not slow, but code written by hand can be faster and avoids this garbage
great video like always :)
What garbage?
@@zoran-horvat don't remember which linq methods were affected ... the bad classics would be .ToList and .ToArray
but for most/nearly all people Linq is great
@@zoran-horvat 4:25 "create an object ... and leave it to the GC"
@@fritzfahrmann4730 ToList and ToArray are not actually LINQ. Those are helper methods, literally shortcuts to calling the List's constructor and Array.Copy.
Proper LINQ methods differ in no way from foreach, both in terms of performance and the amount of objects they produce while operating. You cannot beat LINQ by writing the loops manually.
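A sketch of that equivalence, with an assumed Person type: the LINQ pipeline and the hand-written iterator below both defer execution, walk the source once, and allocate a comparable handful of small objects (enumerators, a lambda, an iterator state machine).

```csharp
using System.Collections.Generic;
using System.Linq;

public record Person(string Name, int Age);

public static class AdultNames
{
    // LINQ version: Where/Select build lightweight lazy wrappers around the source.
    public static IEnumerable<string> WithLinq(IEnumerable<Person> people) =>
        people.Where(p => p.Age >= 18).Select(p => p.Name);

    // Manual version: the compiler turns this into a heap-allocated state machine as well.
    public static IEnumerable<string> WithForeach(IEnumerable<Person> people)
    {
        foreach (var p in people)
            if (p.Age >= 18)
                yield return p.Name;
    }
}
```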
@@fritzfahrmann4730 Same thing in foreach, literally. How else would you work with objects if you don't create them, and what else can you do after you're done with them but leave them to the garbage collector?
Anyway... In some cases LINQ is not the best choice. From my gamedev experience: you need to avoid using LINQ in methods which are called every tick - it's expensive for the GC (memory allocation + defragmentation) -> decreases performance. Hello Unity3d and LINQ in Update()
Agree, but game development is very specific.
The performance problem with LINQ is more related to the extra objects it creates on the heap. Some extra objects are created each time you run a LINQ query.
You can use a lazy enumerable without LINQ and you will just save the extra object(s).
But unless it's in an extra-hot path, the readability/re-use benefits outweigh the slight performance degradation.
Another reason is using the wrong method for the job
Most problems with LINQ I have seen are when you re-iterate the same query multiple times, like calling .Count() and then a foreach on the same IEnumerable.
Sure, if your data is small enough not to cause cache issues and is contained in a ready-made list, a simple for loop will often be faster than a LINQ query, but as explained in the video, it is going to be the rare case when that difference is actually measurable.
In most cases, spending the same time improving DB queries and looking for more ways to cache data that does not change much will usually win you much more performance.
So using LINQ can add performance, even if the specific LINQ query could be made faster in itself, just by the time you get to spend on other performance problems :)
I have seen monster LINQ queries that really did have huge performance costs, but that was not good code by any standard, unless obfuscation and one-liners were the design goal.
Just breaking them apart and simplifying usually resolved all problems, while still using LINQ for 90% of the original one-liner.
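A sketch of that re-iteration pitfall with made-up types: the first method filters twice (and with a deferred EF query would hit the database twice), the second materializes the result once and reuses it.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Order(int Id, decimal Total);

public static class Report
{
    // Two passes over the query: Count() enumerates it, then foreach enumerates it again.
    public static void Slow(IEnumerable<Order> orders)
    {
        IEnumerable<Order> large = orders.Where(o => o.Total > 1000m);
        Console.WriteLine(large.Count());
        foreach (var o in large) Console.WriteLine(o.Id);
    }

    // One pass: materialize the filtered set once, then reuse the list.
    public static void Better(IEnumerable<Order> orders)
    {
        List<Order> large = orders.Where(o => o.Total > 1000m).ToList();
        Console.WriteLine(large.Count);
        foreach (var o in large) Console.WriteLine(o.Id);
    }
}
```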
Wow that record DTO stone landed in my backyard with a loud thud...
Never thought about collection value equality in DTOs. Probably because I don't often have collections in DTOs.
So, ok records are only valuable as value-object types? But value object types need to be constructed in a valid state, cause they represent a value...
So the primary constructors make even less sense on a record, since they don't validate anything except possible nulls, which can be easily ignored with the bang (!) operator.
Same for records with only init-only properties: while they do eliminate the clutter of backing fields, they miss the opportunity to validate the data set in those properties.
So the only other explanation for this mess is indeed catering for FP. However in that case the validity of data in those records can only be guaranteed by the correctness of the function producing them... which is quite a big ask.
The whole "immutable design done right" argument is also very frustrating. Immutable design done right can indeed be powerful, but doing it wrong is so easy that I'm not even sure it's worth attempting in most cases. Sure, if most developers were true professionals this wouldn't be an issue, but that couldn't be further from reality.
The more I think of it, the more frustrated I get
Validation does not belong in a record the same way it does not belong in an int or in a string.
@@zoran-horvat Yes, this is what I concluded above. The way records are structured discourages any kind of guards before the arguments are accepted, so they're not useful as (DDD) value objects either. The only remaining use is something like a structure in FP (e.g. F# records).
Though you can't create a Cartesian product of two records (yet?), they're still immutable (by default, though not mandatorily), meaning they can enable some FP practices, and maybe thread safety.
As of now the records are an overhyped and vastly misunderstood concept in C#, that's not really applicable to most code-bases.
@@aivascu Is int also unusable for DDD?
@@zoran-horvat this is not what I meant. I said they don't fit the value object description.
I'm no DDD expert by any means, but in my understanding in DDD protecting domain invariants is mandatory, and mostly done by the domain entities themselves. In my understanding this means that no invalid values are allowed to exist inside the Domain. The only use I see at the moment for records in DDD is maybe as temporary representations of values while they are being processed and before they are applied back on the domain.
Perhaps they can be used in domain and integration services? I'm curious to know what you think.
@@zoran-horvat For the hardcore, yes! 'int' has no semantic value, what does it represent? So they'd ultimately wrap that int in a 'value object' like `FavoriteNumber` which would prevent things like nulls or negative values (arbitrary constraint)
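A sketch of the wrapper this comment describes; FavoriteNumber and the non-negative rule are the commenter's own example, everything else is illustrative.

```csharp
using System;

public readonly record struct FavoriteNumber
{
    public int Value { get; }

    public FavoriteNumber(int value)
    {
        // The arbitrary constraint from the comment: reject negative values at construction.
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value), "A favorite number cannot be negative.");
        Value = value;
    }
}
```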
With regard to point two, I think the statement "LINQ is slow" is a bad one. What is slow? Where does it start and where does it end?
Is LINQ slower (!) than if you don't use it? Yes, of course. LINQ is just candy for the developer to be able to write many lines in a few short ones. Does this sugar cost time? Yes. Does it make it slower? Yes. Is the resulting readable code more important than performance in many cases? Probably, but not always.
Whether LINQ is slow cannot be answered and is not answered in the video. It depends on too many factors, which are individual for each case.
The video also only presents one case. It would have been better if the video had gone into more detail and raised awareness of what LINQ means and what the advantages and disadvantages are. So you do exactly the same bullshit as the "LINQ is slow" community: you founded the "LINQ is not slow" community. What nonsense.
"LINQ is slow" is a misconception.
Your last point perfectly nails my pet peeve with a lot of development practices - the general idea that allocations of any kind are bad. According to the generational hypothesis, allocations are either quickly discarded in gen 0 or they last very long and are not a massive concern to the GC. If I hear someone talking about avoiding allocations for performance reasons, and I know they are not a game dev and have no perf measurements to back it up, then I immediately raise my eyebrows, because it's usually just a feeling and not a fact they are concerned about.
That is exactly the point I tried to communicate in this video. Allocations alone are not the problem. They can become a problem when combined with some other factor.
I like your videos, but these times at 5:20 are a joke! A real web request would never be so fast across the world; not even within Europe is it so fast. In particular, this code would be slow as hell if it needs to retrieve one result from a DB and yield it across the world! A better strategy would be some kind of pagination.
The times assume a fairly small response (which is a reasonable assumption). The consequence is that TTFB and TTLB can be considered equal.
The times I have used in the calculation are optimistic, because my goal is to estimate the worst case for the value under investigation.
I am not sure what you find funny in that process, since it is following regular engineering practices, namely to estimate the upper bound for a measure of interest.
BTW, the estimates I have calculated are ignoring the possibility of distributing the servers geographically. The "time across the world" and "time within Europe" can be surprisingly and arbitrarily short. For example, I am measuring round-trip time to google.com consistently under 10ms, which makes response transfer time below 5ms.
@@zoran-horvat I appreciate your detailed explanation regarding the timing estimates and assumptions made in the video. While I understand your approach in considering a small response size for the calculation, real-world scenarios often involve larger data sets or varied network conditions that might impact performance differently. The optimistic estimations, while helpful to visualize best-case scenarios, might not always align with practical experiences, especially when dealing with diverse geographic locations and varying database loads.
Geographical distribution of servers can indeed improve response times significantly, as you've pointed out with Google's impressive metrics. However, in scenarios involving database queries and transmitting data across distant locations, considering the potential latency and actual response times becomes crucial.
Pagination or other optimization strategies might indeed be more advisable in scenarios where database operations and global data transmission are involved, to manage the potential slowdowns due to network latency and larger data payloads.
Remember, it only has to be *effectively* immutable, which may not require copying. A substring might just refer to the same underlying object with different offsets instead of copying. If you are really nervous, look into persistent data structures (nothing to do with databases).
If you don't have metrics and SLAs, stfu about performance.
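On the substring point above, a small .NET-flavoured sketch (assumed, not from the comment): slicing with AsMemory reuses the original string's buffer instead of allocating a copy, whereas string.Substring itself does copy.

```csharp
using System;

string text = "immutable design done right";

// Refers into the existing string: no new string is allocated for the slice itself.
ReadOnlyMemory<char> word = text.AsMemory(10, 6);

Console.WriteLine(word.Span.ToString()); // "design" (this ToString does allocate a string)
```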
The worst video from Zoran I've ever seen. I need more education to understand it.
Is it the worst video, or do you need more education? You need to make a choice first.
Very clickbaity. Records are not DTOs, LINQ has its place, immutables have their place -> if none of this shocks you, keep scrolling... I wish creators with respect for viewers used clearer titles.
You are so lucky to live in the world in which programmers around you don't come around every day just to say that records are DTOs and nothing more, that LINQ is slow and useless and that immutable design is crap.
You could have read all that in comments on my other videos, but you did not bother to.
You could even visit my channel to see that I don't do click baits, but you didn't bother to.
Still, you couldn't resist but to post.
@@zoran-horvat Come on, '3 shocking misconceptions' is as much of a clickbait as it can be - a title that tells little about the content while making you want to open it just to see whether you know what those 'shocking' things are. And well, none of them is shocking.
The fact that your channel has people agreeing with you doesn't mean that a) most of your viewers do -> most will just ignore it, or b) those that stick with this channel are representative -> it's actually the opposite; those that stick to your channel are a filtered group of people thinking alike (the big, SHOCKING mystery of social media / YouTube / profile-targeted bubbles).
@@kocot. You have so many misconceptions about so many people.
@@zoran-horvat lol
@@zoran-horvat Come on man.
DTOs are for small payloads. Nobody thinks records are DTOs. A DTO is an output from a Model or record.
LINQ can be slow if it's used out of place.
How many times do programmers work with immutable objects? That's a question that I would like you to answer.
Records are bad DTOs