@@MaximilienNoal I'm not sure how you would jump to that conclusion. LINQ is a library, like anything else. What version you target has no bearing on whether you use the library. My code uses the latest features, just like I'm sure many other people do...it just happens that that doesn't include LINQ in the vast majority of cases.
@@fusedqyou Just as an exercise, compare what I said to what you said. Me: "LINQ is great for what it is". You: "LINQ is a bad choice". Me: "I almost never use LINQ". You: "drop LINQ altogether". Me: "comes with its own tradeoffs...YMMV" You: "you're a bit paranoid".
Actually, I don't see any reasonable scenario in your business logic where you should use Single(). The method does too many things at once; it breaks the rule of pure functions.
I usually really enjoy your videos but this could easily have been a YT short. I am slightly concerned that the performance implications are not painfully obvious to developers.
I don't get the point of this video. Single and First have different use cases. They are not interchangeable, so the performance difference shouldn't be compared. You either need to be sure there is only one result, or you don't care and just want the first one.
The reason is that many do not understand that difference and trigger a lot of unnecessary processing where a First would be not only enough but also a lot faster.
@@nickchapsas I did, but the database constraint argument doesn't convince me. DB is sort of a dependency for DbSet in my repository and I don't trust it to have that constraint. Maybe if it's code first and it's somehow tested and ensured, but even then I should be writing "dependency agnostic" code.
@@B1aQQ It doesn't matter. If your app's business logic is to only have one of something, then First is sufficient. Your reads aren't your profilers and they have no job troubleshooting your app. You should have appropriate safeguards and not make your queries exponentially slower as your data grows.
I think you forget something important: if you use First you should use it after an OrderBy. Maybe if you have the data pre-ordered you can omit it, but in general you don't know the order, and if you're using LINQ to SQL it's mandatory to put the OrderBy. You can trust the constraints and be sure there is only one item, but this is something you know, not something "the code says". I prefer to use Single whenever I can; I think it is much clearer. If there are performance problems, then refactor. Great video and channel!
Absolutely not. Adding an OrderBy will make you go through the whole collection first to make sure it's ordered and then get the first value. It totally misses the whole point, since it will perform as slowly as Single, and if the person writing the code doesn't know how to properly handle enumerables it can also allocate the whole collection again. The whole point of the video is that on unordered data sets, First will have better performance on average.
This is important: your benchmarks are using a presorted list, which means the result for First[OrDefault] will be the first ID that matches. I'd like to see the benchmarks repeated to find a full name, requiring a sort of the collection, as this emulates the real-world case more closely.
@@nickchapsas I disagree. In the case where you actually get a result instead of default, you have no idea what you get without OrderBy(). Of course, if you don't care about the order, that's fine. But in many cases it's an anti-pattern. It might as well be called RandomOrDefault(). Anyway, they serve different purposes: one is for getting the first if any, the other is for ensuring that there is exactly one or zero. Both have their uses and they don't replace each other. And of course, performance is not everything. Expressivity and statement of intent are also important. SingleOrDefault() clearly states intent and is by its nature agnostic about ordering; FirstOrDefault() is clearly about ordering (if you need it). Also, there is a separate discussion when we're talking LINQ to EF.
@@nickchapsas Obviously you are right in the performance comment, but in general you need the OrderBy. For example, in your little example: a customer with X emails, and you want to send only one email. You can use First if you don't care about the order, but maybe the customer emails have a "PreferredOrder" field, so you must use it to order the emails. In any case I think we are on the same page; the only problem is the naming: there should be an extension method called ".SingleAndITrustInMyDataDontCheck()" (but shorter) that internally calls First :)
@@pinkfloydhomer You don't need to sort anything when you want to get the only element. And even if you want the top 1 from multiple matches, unless you are using SQL, using MaxBy will be much better than sorting, because max is O(n) while sort is O(n log n).
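To illustrate the point above, a rough sketch (the PreferredOrder field is just an example value from this thread) showing how MinBy/MaxBy, available from .NET 6, pick one element in a single pass instead of sorting:

using System;
using System.Collections.Generic;
using System.Linq;

var emails = new List<(string Address, int PreferredOrder)>
{
    ("b@example.com", 2),
    ("a@example.com", 1),
};

// Expresses "sort, then take the first" - on older runtimes this pays for a full sort.
var viaOrderBy = emails.OrderBy(e => e.PreferredOrder).First();

// Single O(n) pass, no sort (MinBy/MaxBy exist since .NET 6).
var viaMinBy = emails.MinBy(e => e.PreferredOrder);

Console.WriteLine(viaOrderBy.Address); // a@example.com
Console.WriteLine(viaMinBy.Address);   // a@example.com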
Great video! The same applies to Count(x => x == 1) > 0 vs. Any(x => x == 1). Rider also suggests refactoring Count() > 0 to Any().
Actually (correct me if I am wrong), in an EF context using Any() is even superior to FirstOrDefault() != null, because FirstOrDefault() has to query all columns and do the deserialization.
Count is faster on List and Any is faster on IEnumerable
@@vladyslavhrehul2185 That's because a List has a property called Count that stores the value, whereas IEnumerable has a method called Count() that iterates the rows in order to count them, so your comparison is a property versus a method.
@@iOLlVER Using Any() is always faster since it doesn't have to iterate over the whole enumeration.
@@vladyslavhrehul2185 Count, full stop, isn't an operation, it's a property (like Length on an array), so it's a bit of an unfair comparison. Any() will be faster than Count() > 0 in all cases, although I wouldn't know how an Oracle optimizer would execute your SQL'ed lambda expression.
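A tiny sketch of the Count(...) > 0 vs Any(...) point from this thread, on a plain in-memory sequence (nothing EF-specific here):

using System.Collections.Generic;
using System.Linq;

static class AnyVsCountSketch
{
    // Count(predicate) has to walk the entire sequence before the comparison happens.
    public static bool HasMatchViaCount(IEnumerable<int> source) =>
        source.Count(x => x == 1) > 0;

    // Any(predicate) short-circuits and returns as soon as the first match is found.
    public static bool HasMatchViaAny(IEnumerable<int> source) =>
        source.Any(x => x == 1);
}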
Great content. I appreciate the videos on the latest and greatest features as well, but I would love to see more like this.
just learned you can add a default overload! Thanks!
@@AlbertoMonteiro In previous versions you can do .DefaultIfEmpty(obj).First()
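For reference, a small sketch of both approaches mentioned here (the "N/A" fallback mirrors the video's example; the overload taking an explicit default value was added in .NET 6):

using System.Linq;

var names = new[] { "Alice", "Bob" };

// .NET 6+: FirstOrDefault overload with an explicit fallback value.
var newer = names.FirstOrDefault(n => n.StartsWith("Z"), "N/A");

// Older versions: the same effect via DefaultIfEmpty + First.
var older = names.Where(n => n.StartsWith("Z")).DefaultIfEmpty("N/A").First();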
Nitpick: It will not be “exponentially slower”, asymptotically it will have the same worst case behavior, but best case is much better, and average better by a factor of 2.
From memory, EF will use a TOP(2) with Single. First isn't safe without assuming uniqueness; if you cannot assume it, First should also always be accompanied by an OrderBy clause.
As mentioned in the video, for you to be able to reap the benefits of First, uniqueness needs to be enforced on a different level (inserts, constraints, etc.).
Top 2 can still be safe IF you add a where for the match.
If that returns 2 items it's not unique; it should always return only one row to be true.
But that would still have the DB do the processing of the whole dataset, but hopefully hitting only indexes.
@@davidmartensson273 Generally in MS SQL, TOP 2 by index is almost as performant as TOP 1. The difference is negligible.
@@andreybiryulin7207 Yes, I think I actually misread the Steve Py comment, so disregard my comment ;). In later comments I explain that a DB design that needs Single is, in my opinion, a bad design: if you need uniqueness it should be enforced when adding to the database, not when reading, so that you trap the error when it's committed and, in the best case, can report it back to whatever source you got it from. And if you actually do not know this when accepting data, I would probably add some intermediate processing and a secondary table with a uniqueness constraint to solve this.
I have actually had similar cases where we needed one column to be unique for some rows but not for others, and we solved it with SQL constraints, so we never had to consider it when reading :)
@@davidmartensson273 this makes sense. Also consider, if you were to use Single, how would you realistically recover from the exception at read time? The ship to properly resolve it has already sailed at write time.
Personally I always ensure a unique constraint in the column definition for such cases. You should be doing that anyway, to ensure data integrity.
...but if you have such a constraint, the perf difference should be negligible. I can't really imagine a reasonable scenario where First makes significant gains, and is actually unproblematic.
Sure, a single table can have a unique constraint easily. But when you are joining a lot of crap in a view, single becomes a lot more important
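A minimal sketch of the "enforce uniqueness at the database level" idea from this thread, assuming EF Core and a made-up User/Email model:

using Microsoft.EntityFrameworkCore;

public class User
{
    public int Id { get; set; }
    public string Email { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<User> Users => Set<User>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Duplicates now fail at insert/update time with a constraint violation,
        // so reads can safely use First/FirstOrDefault without hiding bad data.
        modelBuilder.Entity<User>()
                    .HasIndex(u => u.Email)
                    .IsUnique();
    }
}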
Sas Nickchap is my choice!
Updating my code again after another video. Thanks Nick!
Thank you Nick, great video as always.
I don't think I've ever used the `Single` methods before. I know of them obviously, but I'm yet to encounter a logical use case for it
Let's say you have something like this:
public IObservable<Factory> Create("honda");
If you use Single you will get an exception if there is more than one factory returned; using First would potentially hide an underlying defect.
If there's supposed to be only one item, you should never use .First(). Let's say you have a couple of different indexes on a table, one query uses one index and a different query uses a different index, and one index is ordered by a column in a different direction from the other: .First() is going to give you a different item for one query than for the other query.
There are times where it's fine to just use .First(), but you have to be absolutely certain that there's no way for there to be a second one.
Or if you know there are multiple and you just don't care which one you get. Because there's no guarantee that an IEnumerable enumerates the collection in the same order every time.
@@OhhCrapGuy seems like you could resolve those two issues with an OrderBy.
@@SelmirAljic I've not used IObservable before, but can you not constrain the "create" to be unique by checking first whether there is already any item named "honda"?
Seems to me trying to enforce uniqueness after populating data is too late.
@@JetsetDruid Why is there an additional entry in your enumerable that you never access? Hiding that problem with OrderBy is just hiding a bug that's gonna bite you in the ass later. You should only use First() when you're logically guaranteed that only one item is in the enumerable that matches your predicate.
You are the man Nick! Great video!!
Love the videos, keep it up! I would be interested to see some Rider tips and tricks and strengths/weaknesses compared to Visual Studio 2022 if you get to it 😀
Great video! I tend to use Single when querying EF at work where we have huge data sets and I never considered this.
Usually, Single is practically just as fast because you have the right indexes. And when you don't, First is usually wrong, because it's non-deterministic. You're almost always better off using Single.
Great video Nick. SemaphoreSlim and Channels could be a good topic to cover.
I have a SemaphoreSlim video in the backlog I just need to find the perfect example for the video
@@nickchapsas When handling an API access token that you want to refresh, and you don't want all methods to refresh it at the same time, might be a good idea.
Good video as always, just a few additional remarks:
- when using first() and you are not 100% sure that there can be only one result, try to at least ensure that the collection is sorted in a deterministic fashion. Otherwise, multiple runs will produce different results, and that's a horrible place to be.
- Single() is most useful to extract the single value of a collection which you previously checked to have a count of exactly 1. First() would obviously work here as well, but I find single() to better convey the idea.
Ordering defeats the purpose. Order doesn't matter when you are querying data that shouldn't have duplicates. If there are duplicates then you have a data integrity issue in your system and your querying isn't responsible for finding it. If you assume uniqueness then first should be used over single
@@nickchapsas for in-memory, I agree. No point in saving performance by using first() if you sort the collection beforehand. But if the data is coming from a database, maybe a NoSQL DB or a distributed system where the constraint can't be enforced, you can at least try to make your query somewhat more resilient by using first() over single() - but you'll need an ordering for consistent results, which can be provided without in-memory sorting, e.g. by using an index. It's "pick your poison". Would you prefer to fail immediately (single) or let it continue to run (first)?
@@AlanDarkworld That's even worse. NoSQL databases are partitioned, so things like ordering a dataset or using Single will actually do a cross-partition scan of the whole database and kill your performance completely. With proper data modeling you will never have that issue.
Interesting, I never considered that .SingleOrDefault requires checking the whole dataset. As I understand it, basically you should only use SingleOrDefault if it is POSSIBLE for the dataset being queried to have multiple matches. If it is not possible for the dataset to return multiple matches you should always use FirstOrDefault.
This actually makes it a lot easier to choose which one to use IMO.
If the column is indexed (and when you have use-cases where you need FirstOrDefault or SingleOrDefault it most likely should be) there is no "checking the whole dataset" involved.
I still think SingleOrDefault is better if the data model / use case matches.
Really the other way around. If it's possible to have multiple matches, use FirstOrDefault, or you'll get an exception. SingleOrDefault? Yeah, don't use that, except possibly to check for uniqueness on input, and always catch the exception.
This video is dangerous advice; avoid First(OrDefault) always. The cases when First are significantly faster are exceptionally rare in practice - and when they do occur, usually First is semantically _wrong!_ Almost all cases people say First, they're _assuming_ there's one match, and when there are multiple, you're much better served by failing fast with an exception than performance-optimizing returning the wrong result.
There are potential exceptions such as when you have a total order and want the first in that order - but those are best served by MinBy or MaxBy, so even there First isn't useful.
Great video. When can we get a guest appearance from Chap Nicksas?
Wait till I invite Napsas Chack
Thanks for the explanation! Refactored all my .SingleOrDefaultAsync database LINQ queries! I often encounter "long running requests", for example, Razor Pages on first database call. But also on my API.
You shouldn't do that. Index your database properly, that is what your problem is. A TOP1 vs a TOP2 is negligible at the database level on an indexed column.
First() might need an order by... and Single() might not make sense without a distinct or group by.
The important part is to think about what you are trying to achieve, rather than just doing what you usually do.
Video gets nice @ 8:14
I regularly fight to get my coworkers to use "Single" and "SingleOrDefault" because if there is more than one of a thing (when there should only be one thing), then it demonstrates that there's a problem in the database/upstream. I get constant push back saying "Well, the database tables permit duplicates but it will never happen." That phrase "it will never happen" is the biggest lie you will ever hear. Using the appropriate filter method consistently safeguards your code against bad data elsewhere in your system and you will find these errors immediately instead of months later when some serious damage has already been done.
Your coworkers are right. Think about it. Is it the responsibility of your queries to detect data integrity issues with your data layer? Of course not, that's a write/update concern.
@@nickchapsas I wish I worked in the environment that you work in where you have good controls around data integrity and your coworkers actually care about that. When you have different coworkers of different skill levels making changes to different parts of the larger machine, having safeguards at all levels of the system is essential. The database _should_ care about uniqueness if it's supposed to be a unique column, but that's not always the case and pretending like it just-so-happens to be unique because the interface doesn't (currently) permit it *always* bites you when someone else makes a change somewhere unexpected for their own use case.
tl;dr; my coworkers are wrong because our database is wrong.
Thanks for your video and reply, btw. I couldn't hit Subscribe fast enough.
@@nickchapsas Reply #2
I'll give you a paraphrased, badly designed, but informative use case, if you think that I'm still mistaken.
The interface ultimately saves a user with name and email, the email is separated out into a separate table for our purposes. The email should be unique, the save function makes sure of that upon save, but the database permits non-unique email addresses.
The SendEmailToUsers application-level function ultimately pulls out all users from the database with the goal of sending an email to each user. It iterates over every user, pulling back their single email address by calling the GetEmailOfUser method in our repository class.
The repository layer's function GetEmailOfUser is supposed to return a single email address, but two could be returned from the stored procedure call since the database technically permits that data state.
Why would I *not* want to have a "Single" right here? Why is "First" better?
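A hedged sketch of the GetEmailOfUser scenario described above, with the stored-procedure call stubbed out as an in-memory list purely for illustration:

using System.Collections.Generic;
using System.Linq;

public class UserEmailRepository
{
    // Stand-in for the rows the stored procedure would return: (UserId, Email).
    private readonly List<(int UserId, string Email)> _rows;

    public UserEmailRepository(List<(int UserId, string Email)> rows) => _rows = rows;

    // Single() surfaces the data-integrity problem (two emails for one user) immediately,
    // whereas First() would silently pick one of the duplicates and carry on.
    public string GetEmailOfUser(int userId) =>
        _rows.Where(r => r.UserId == userId)
             .Select(r => r.Email)
             .Single();
}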
Entity Framework doesn’t need to query the whole table etc when using SingleOrDefault.
It outputs a “SELECT TOP 2” query. If it finds two matches, then it throws. It does not need to enumerate the whole table.
Maybe that’s not the case in other providers besides SQL Server, but that is literally what it does against SQL databases.
In the worst case scenario (there is only one matching element), it does have to enumerate the whole table.
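A sketch of the behaviour described above, assuming a hypothetical Users set in a SQL Server-backed EF Core context (the SQL in the comments is the rough shape of the translated query, not captured output):

using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class User
{
    public int Id { get; set; }
    public string Email { get; set; } = "";
}

public static class UserQueries
{
    public static Task<User?> FindByEmailAsync(IQueryable<User> users, string email) =>
        // Roughly: SELECT TOP(2) ... FROM [Users] WHERE [Email] = @email
        // Throws if two rows come back; with exactly one match on an unindexed
        // column the server may still scan the whole table to prove there is no second.
        users.SingleOrDefaultAsync(u => u.Email == email);
}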
Single = I want to make sure there is only ONE item. Zero or more than one - that's an error. I wonder why they added the overloaded versions with the predicate. For predicates we already have Where(...) and that is expressive and clear. First(x => ...) - what is it? You have to look at the name of the parameter to find out - it's a predicate.
Where will go through the entire dataset. First with a predicate will stop on the first hit.
It took me a long time to understand that there is nothing wrong with being Single()!
I had the exact same thought a few hours ago. Almost as if you read my mind 😅
My biggest take from this is the FirstOrDefault overload! :o
I recently worked in an API that was riddled with SingleOrDefault calls, frequently without exception handling. It seems that the developers took the view that "there should only be one of these, so SingleOrDefault it is". But at some point, due to a questionable ETL script or whatever, that assumption proved to be incorrect and boom. Personally, I never use Single or SingleOrDefault, because imposing those limitations is something that should happen on input, or possibly via a unique constraint in the database.
I'm the opposite. I use SingleOrDefault where I expect there to be only 1. I handle the "OrDefault" scenario (as always and as you would with First). Using First, where there should be only 1, often hides a problem. If Single throws an exception, where there should be only 1, you have a data problem which can quickly be identified by good logging and exception notification code. Then you can launch an investigation as to how the data got corrupted. Probably a lack of business validation, or some data mutation code was not atomic. I'd rather go through that path, rather than have First hiding a problem which could go unnoticed for a long time because ... First.
I never had a reason to use Single just because I don't see a use case. When I only want the first of something I'll use FirstOrDefault, and when I need to make sure that only one of this exists I'll validate using Count(). I'm not using huge data sets so this is perfectly fine for me. I don't see why Single even exists especially when you shouldn't provoke exceptions.
What if you want to get the only item that satisfies a predicate, and check the precondition that there is only one item that satisfies that predicate?
When using Java I kept reinventing Single myself, especially in tests, so it's nice to have it as a built-in in C#.
When you are using Count you are executing the query twice (one for the count, one for the result); with Single you can do it with one query.
@@SimonClarkstone In tests, the testing framework usually has a specific assertion for a single element that not only makes the assurance but also, out of the box, provides good explanations for when it fails.
@@SimonClarkstone In any case where I have had to ensure singularity, it's either a database where I can use a unique index (much safer than trusting every place where you add things) or something like a dictionary that also fails if you try to add duplicates. It's safer, catching the problem on addition instead of consumption.
Sure, there could be a case where you might allow for some duplicates but not for others, but in that case I would still try to enforce it in the storage to catch it on addition.
@@davidmartensson273 Those do apply in many cases, yeah. I do like how C# dictionaries make it easier to detect collisions rather than silently overwriting.
I've recently been writing some parsers for input that could be poorly formatted, which needs to be matched up to existing data that it might not match, and Single was well-suited to some of the tasks.
We need more about linq please
What is the difference between ordinal ignore case and invariant ignore case and when can we use one over the other?
Do not use InvariantCultureIgnoreCase. For example, what goes first: á or a, or the Swedish å and the US a? InvariantCulture will treat all of those as not "a", when in reality they are all "a".
This is from the Microsoft website: "Avoid the following practices when you compare strings: Do not use string operations based on StringComparison.InvariantCulture in most cases. One of the few exceptions is when you are persisting linguistically meaningful but culturally agnostic data."
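To make the ordinal vs. culture-aware distinction concrete, a small sketch using the soft hyphen (U+00AD), one of the characters that linguistic (culture-aware) comparisons treat as ignorable while ordinal comparisons do not:

using System;

string withSoftHyphen = "co\u00ADop"; // "co" + soft hyphen + "op"

// Culture-aware comparison applies linguistic rules and ignores the soft hyphen.
Console.WriteLine(string.Equals(withSoftHyphen, "coop",
    StringComparison.InvariantCultureIgnoreCase));

// Ordinal comparison looks at raw code points, so the strings differ.
Console.WriteLine(string.Equals(withSoftHyphen, "coop",
    StringComparison.OrdinalIgnoreCase));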
Nick Chapsas, ok, but what about Find for an id? It's reasonable not to use SingleOrDefault for checking by id, for performance, but isn't Find the right option? And what about LINQ to SQL?
Find is the same as FirstOrDefault
How is Single() implemented, when using IQueryable with EF?
I haven't tried to see the query generated, but I would assume it adds a TOP 2 to the query, which will probably lead to a full scan if you only have one item. Haven't tried it however, since I don't use EF, so you might wanna give it a go and see.
@@nickchapsas TOP 2 is exactly what EF/SqlServer 6.0 does
I've been getting my team away from using First() to always using FirstOrDefault() and handling the null case. The "Sequence contains no elements" error can be hard to debug when you have a bunch of First() calls in the constructor for a DTO and you don't know which one failed. When you check for null after the call to FirstOrDefault() you can then create a more meaningful error message.
I was wondering why the constructor is where you are using any amount of First or FirstOrDefault; it's not really the purpose of a constructor to have collections being iterated through inside it, is it?
@@alexbarac You are building a smaller data object from a larger set so you only want a subset of the lists inside your new object. Since it's a DTO it's a dumb object with no logic inside of it. So you don't filter inside the DTO. You filter in the provider that is building it.
Even if you filtered into local variables you can have multiple lists getting filtered on lines right next to each other. Since sometimes the line number on the stack trace is off by one or two, it can be hard to know which one failed.
The big win is that the error messages you can surface can be unique for your application and help the users or devs solve the problem quicker.
@@JoeStevens Then why not extract each filter into its own method. If any of the filters breaks, it will break inside the method and the stack trace will contain said method.
The reason I asked is that
I only use '...Default' if I expect to execute different pieces of logic if the returned object is null or not, but not for error checking.
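For context, a hypothetical sketch of the pattern this thread is debating: the provider filters with FirstOrDefault and turns a missing element into a meaningful error instead of "Sequence contains no elements":

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class Address
{
    public string Kind { get; init; } = "";
    public string City { get; init; } = "";
}

public sealed record AddressDto(string City);

public static class CustomerDtoProvider
{
    public static AddressDto BuildPrimaryAddress(IEnumerable<Address> addresses)
    {
        var primary = addresses.FirstOrDefault(a => a.Kind == "Primary");
        if (primary is null)
            throw new InvalidOperationException(
                "Customer has no primary address; cannot build AddressDto.");
        return new AddressDto(primary.City);
    }
}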
If you have a list type it's better to use Find; only use FirstOrDefault on IEnumerables.
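A quick sketch of that suggestion: List<T>.Find has the same "first match or default" semantics but is an instance method on the list itself, while FirstOrDefault works on any IEnumerable<T>:

using System.Collections.Generic;
using System.Linq;

var people = new List<string> { "Anna", "Bob", "Carol" };

var viaFind = people.Find(p => p.StartsWith("B"));           // List<T> instance method
var viaLinq = people.FirstOrDefault(p => p.StartsWith("B")); // works on any IEnumerable<T>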
Great video as always. What is this framework you use to generate your data? Faker?
It's called Bogus. I have a video on it: th-cam.com/video/T9pwE1GAr_U/w-d-xo.html
@@nickchapsas Thank you, you're the best 🦾
Do you know if EF takes a unique index into account when creating a query for a single statement?
Great video as always!
one thing that doesn't work for me is sending a default value as a parameter to the FirstOrDefault method (like you did with "N/A")
is this a c# 10 feature?
I was thinking to myself that Microsoft had to build in a check, as each result of .Single() is found, to throw an exception immediately at the second item. If not, it would iterate the entire collection, then see there was more than one and throw an exception, which would result in the worst possible performance.
I checked the reference source. When there is no predicate, it looks good. If the collection can be converted to a list, it checks the count first. If the collection cannot be converted to a list (therefore it doesn't have a .Count property), it fails as soon as it tries to move to a second result.
However, the predicate version is much worse! It is actually iterating the entire collection, then checking the count of all results after that! If they added a test for count > 1 right after incrementing count++ and exited the loop, it could save countless iterations!
(EDIT: I was looking at .NET Framework 4.8 reference source. It is fixed in the latest .NET source and more efficient.)
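A rough sketch (not the actual BCL source) of the "fail fast on the second match" shape described above for the predicate overload:

using System;
using System.Collections.Generic;

static class SingleSketch
{
    public static T SingleWithPredicate<T>(IEnumerable<T> source, Func<T, bool> predicate)
    {
        bool found = false;
        T result = default!;
        foreach (var item in source)
        {
            if (!predicate(item)) continue;
            if (found)
                throw new InvalidOperationException("Sequence contains more than one matching element");
            found = true;
            result = item; // keep scanning: we still have to prove there is no second match
        }
        if (!found)
            throw new InvalidOperationException("Sequence contains no matching element");
        return result;
    }
}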
Thanks for the content Nick, little topics like this are really valuable. I have always used .FirstOrDefault() as instructed when I first got into programming, however videos like this really help give a good understanding as to why! You continue to expand my mind.
There is probably too little real merit in comparing a single call using method A to a single call using method B. In real scenarios, you want to optimize your queries across all the vectors. So how about using a grouped lookup in the comparison? Group once, look up once, and benefit from knowing the number of results plus the results themselves, no matter the number of queries. Surely, crossing boundaries like LINQ2SQL is another topic. I'd so much like to see whether you give it a try. :)
I don't know, it's a kind of random approach; I would have made sure to separate the discussion of LINQ from the LINQ expressions used in EF, because here you are using just an array. I mean, I get that videos should be short or fewer people would watch them, but there are not only other types of collections (i.e. a sorted list) on which you can use LINQ, but the DbSet too, and that's pretty common.
I explained in the video how EF is different and what query it will generate
What about First vs Find
Find is the same as FirstOrDefault
It's a whole different ball game when the data is in a relational database that may or may not have indexes and may or may not be clustered.
Why when searching for the last Id, is Single slightly faster than First? Or is that random?
It's within margin of error because the dataset is big. It's run-to-run variance, but in reality it's the same.
Great effort, Imma stick to FirstOrDefault though
Well that's what the video was about
I wonder what Chap Nicksas thinks about this.
What about a cached hash set for the O(1) flex instead of iterating the list like a fool? xD
Database engineers invented indices for a reason. Why not let the database do the heavy lifting? :D
The difference at the database level is negligible.
If you are doing a query on an indexed field, TOP 1 and TOP 2 use the same execution plan and the same index, and the data will be located in the same node of the B-tree. In terms of execution, a TOP 1 is a single index seek/scan then a single data file jump to get the data. A TOP 2 is the same single index scan, but now jumps to two places to get the real data.
If your column is not indexed.... Then index it.
So it really is not a problem.
This is a non existing problems for databases that are indexed.
That’s such bad advice that I’ll make a dedicated video on it. Brute forcing a solution to a problem that doesn’t need to exist doesn’t scale. There is a reason why you opt into indexing and it’s not the default behaviour, and there is a reason why indexing can cause a plethora of problems in itself, let alone the space.
@@nickchapsas Instead of using a memory dataset, on which a First() will be better than Single(), give it a try with an indexed column on a sql/postgres database, and see if the theory holds. Also try it with an unindexed column and compare the results.
@@nickchapsas I just completed a benchmark test against a table with 500k records, grabbing the first, the middle, and the last. The results are the same: there is almost no difference when you are accessing an indexed column with a sargable query. First always wins by 10 microseconds. That is an insignificant amount. It doesn't justify sacrificing readability and risking data duplicates for such a small gain. Granted, you don't always have the option of controlling your database, and if you are forced to only use data in memory, then yes, First is better. Also, you don't provide the reason for not using indexes. You should always analyze your data queries and apply indexes as necessary. They are an integral part of an RDBMS and they offer the best way to increase performance.
@@battarro Indexing isn't free
@@nickchapsas You are correct, and there would be performance issues when updating/inserting if you decide to index every column on a given table. But a good rule of thumb is that if you are doing a sargable query on a column, such as email/username on the users table, you will benefit greatly from indexing. I ran the same benchmark as before, a table with 500k rows, and did a First/Single, and the results were horrible compared with the index. It took GetRecordByFirst 129,662.1 us to get to the last item in the table; before, with the index, it took 150.5 us. And without the index, that number increases steadily as O(N) as your table grows, while with the index it only increases as O(log n), flattening out. I'm not saying index everything, but you should index the columns you use in sargable operations.
Btw, if you want to check whether a collection has only one element and then get it, it will be faster to call .Count() < 2 and then .FirstOrDefault() with a check of value != null, than just a call to .SingleOrDefault(), even if you don't catch exceptions.
That doesn’t make sense since count will enumerate the full collection before it does the check
@@nickchapsas I agree that it's inefficient. However, the behavior is provider-specific; EF will generate a count statement and then a select with a top.
I wasn't even aware of Single. Great video! Makes perfect sense. Also, make sure to use a dictionary when possible to go from O(n) to O(1) lookups.
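A small sketch of the dictionary suggestion above (field names made up), worthwhile when the same collection is probed repeatedly:

using System.Collections.Generic;
using System.Linq;

var users = new List<(int Id, string Name)> { (1, "Anna"), (2, "Bob") };

// O(n): scans the list until the id is found.
var viaFirst = users.First(u => u.Id == 2);

// One O(n) build, then O(1) per lookup afterwards.
var byId = users.ToDictionary(u => u.Id);
var viaDictionary = byId[2];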
Single
or default
#NoLinqMovement - LINQ makes it fast to develop, but if you want to go fast, don't use LINQ :)
I hate LINQ. #AllSQL.
Can you actually put this data into a real database like MySql or MSSql? I would be very surprised and frankly disappointed if "first" operated faster for querying primary key integers from a database. My expectation would be they are the same.
I don't know if EF Core is smart enough to understand that Id is a primary key and limit the query accordingly, but for any other non-constrained value First will be faster.
@@nickchapsas By convention a column named Id is assumed to be the primary key. From the docs: Code First infers that a property is a primary key if a property on a class is named “ID” (not case sensitive), or the class name followed by "ID". If the type of the primary key property is numeric or GUID it will be configured as an identity column.
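A minimal sketch of that convention as it also appears in EF Core; the Movie/MoviesContext names are made up, and no explicit key configuration is needed:

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical entity: no [Key] attribute or fluent configuration required.
// By convention a property named "Id" (or "MovieId") is picked up as the
// primary key, and a numeric or GUID key is configured as an identity column.
public class Movie
{
    public int Id { get; set; }              // inferred primary key
    public string Title { get; set; } = "";
}

public class MoviesContext : DbContext
{
    public DbSet<Movie> Movies => Set<Movie>();
}
```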
I see in the comment section some people still have wrong opinions even after "watching the whole video".
First ladies > Single ladies
There are multiple single ladies, but there is only a single First lady.
TIL n/a is "not applicable" and not "not available".
Great Video!, Thanks Nick. Again 69 :) Maestro!
re: Single() when you only have one match: famous last words, "trust me bro, this is unique". Are you going to bet the data integrity of your system on every future code bug? But yes, push the unique constraint to the database where possible.
Or even better, push the where clause down to the DB to retrieve only one row, and .Single() that.
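A sketch of what that looks like with EF Core; ShopContext, Order and OrderNumber are made-up names used only for the example:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical model, only here to make the sketch compile.
public class Order
{
    public int Id { get; set; }
    public string OrderNumber { get; set; } = "";
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public static class OrderLookups
{
    // The predicate is translated into the SQL WHERE clause, so the database
    // narrows the candidates down before Single enforces "exactly one"
    // (TOP(2) on SQL Server).
    public static Task<Order> GetByNumber(ShopContext db, string orderNumber) =>
        db.Orders.SingleAsync(o => o.OrderNumber == orderNumber);
}
```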
Your querying isn’t responsible for finding bugs on your inserts and updates. Coding defensively is fine but coding naively against your own code is not
The real way to solve this performance issue is to use a better data structure.
The better data structure is what allows you to use First and improve your performance
I'd rather be first, because I'm now single and I'm not enjoying it
Came for a small LINQ analysis, left with a new library to try (Faker)
…pull request. Remove faker! 😂
I don't really agree with this. Single is useful as an assertion that there should logically be only one item in the result set, and to throw otherwise. What you're really doing here is a linear search/filter on a 10,000-item collection, and that is true for both First and Single. First can exit early if the item it wants is near the top, but regardless, if you are searching through a large enough list you should be using a database index, a dictionary, or some other more efficient way of finding a match. Choosing between First and Single is not a performance decision; in most cases there is a logically correct choice (usually Single/SingleOrDefault).
I'm probably the outlier here, but it's because of gotchas like these that I almost never use LINQ. LINQ is great for what it is, but I've found that most of the time I'm better off writing and optimizing my own code than trusting a generalized library to do so. Of course, that comes with its own tradeoffs in terms of time taken to write and optimize that code, so YMMV.
Same here in avoiding LINQ. I stay away from EF as well. One thing I've picked up from this channel is to actually benchmark and check the performance of something when I'm not sure, rather than assume something will work this or that way and should be faster or slower - especially when benchmarks are fairly quick to write using that library. Had the same feeling when I read the comments on the video suggesting not to use exceptions for the application flow - there seemed to be a lot of people in the comments using exceptions in that way.
I guess that makes your code compatible with C# 2.0. That's OLD.
@@fusedqyou As Nick demonstrated, there's a huge performance drop if you don't use Single the right way. Another one I've seen a couple of times is iterating through an IEnumerable repeatedly in a loop. It's not that LINQ itself is bad, but it gives a lot of opportunities for unwitting programmers to be...extremely unwitting.
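A small sketch of that repeated-enumeration gotcha and the usual fix; the data and the "expensive" projection are made up:

```csharp
using System.Collections.Generic;
using System.Linq;

// Deferred query: nothing runs until it is enumerated.
IEnumerable<int> expensive = Enumerable.Range(0, 1_000)
    .Select(n => n * 2); // imagine a costly projection or database call here

// Gotcha: every Contains call re-enumerates (and re-computes) the whole query.
for (var i = 0; i < 10; i++)
{
    var found = expensive.Contains(i * 2);
}

// Fix: materialise once, then query the in-memory copy.
var cached = expensive.ToList();
for (var i = 0; i < 10; i++)
{
    var found = cached.Contains(i * 2);
}
```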
@@MaximilienNoal I'm not sure how you would jump to that conclusion. LINQ is a library, like anything else. What version you target has no bearing on whether you use the library. My code uses the latest features, just like I'm sure many other people do...it just happens that that doesn't include LINQ in the vast majority of cases.
@@fusedqyou Just as an exercise, compare what I said to what you said.
Me: "LINQ is great for what it is". You: "LINQ is a bad choice".
Me: "I almost never use LINQ". You: "drop LINQ altogether".
Me: "comes with its own tradeoffs...YMMV" You: "you're a bit paranoid".
Actually, I don't see any reasonable scenario in your business logic where you should use Single(). This method is doing too many things; it just breaks the rule of pure functions.
Yes, I think most already use .FirstOrDefault() but Nick is here to teach and he does a good job of that and shows it using benchmarks.
I usually really enjoy your videos but this could easily have been a YT short.
I am slightly concerned that the performance implications are not painfully obvious to developers.
First
OrDefault
@@ryanzwe OrDefault - or default will not compile.
This must be the first time I ever gave a "First" comment a thumbs up 😂
@@CRBarchager just edited it, originally didn't care but pair programming saves lives
I don't get the point of this video. Single and First have different use cases. They are not interchangeable, so the performance difference shouldn't be compared. You either need to be sure there is only one result, or you don't care and just want the first one.
Did you watch the full video? It answers your question
The reason is that many do not understand that difference and trigger a lot of unnecessary processing where a First would be not only enough but also a lot faster.
@@nickchapsas I did, but the database constraint argument doesn't convince me. DB is sort of a dependency for DbSet in my repository and I don't trust it to have that constraint. Maybe if it's code first and it's somehow tested and ensured, but even then I should be writing "dependency agnostic" code.
@@B1aQQ It doesn't matter. If your app's business logic is to only have one of something, then First is sufficient. Your reads aren't your profilers, and they have no job troubleshooting your app. You should have appropriate safeguards and not make your queries exponentially slower as your data grows.
@@nickchapsas Yea, you're right. Doesn't make sense to do a sort of "runtime validation" for my data cohesion on each read.
I think you forget something important: if you use First, you should use it after an OrderBy. Maybe if you have the data pre-ordered you can omit it, but in general you don't know the order, and if you're using LINQ to SQL it is mandatory to include the OrderBy. You can trust the constraints and be sure there is only one item, but this is something you know, not something "the code says".
I prefer to use Single whenever I can; I think it is much clearer. If there are performance problems, then refactor it.
Great video and channel!
Absolutely not. Adding an OrderBy before First makes you go through the whole collection first to make sure it's ordered, and only then get the first value. It totally misses the point, since it will perform as slowly as Single, and if the person writing the code doesn't know how to properly handle enumerables it can also allocate the whole collection again. The whole point of the video is that on unordered data sets, First will have better performance on average.
This is important: your benchmarks use a presorted list, so the result for First[OrDefault] will be the first ID that matches. I'd like to see the benchmarks repeated to find a full name, requiring a sort of the collection, as this emulates the real-world case more closely.
@@nickchapsas I disagree. In the case where you actually get a result instead of default, you have no idea what you get without OrderBy(). Of course, if you don't care about the order, that's fine, but in many cases it's an anti-pattern; it might as well be called RandomOrDefault(). Anyway, they serve different purposes: one is for getting the first item if any, the other is for ensuring that there is exactly one or zero. Both have their uses, and they don't replace each other. And of course, performance is not everything; expressivity and statement of intent are also important. SingleOrDefault() clearly states intent and is by its nature agnostic about ordering; FirstOrDefault() is clearly about ordering (if you need it). Also, there is a separate discussion when we're talking LINQ to EF.
@@nickchapsas Obviously you are right in the performance comment, but in general you need the OrderBy.
For example, in your little example of a customer with X emails where you want to send only one email: you can use First if you don't care about the order, but maybe the customer's emails have a "PreferredOrder" field, so you must use it to order the emails.
In any case I think we are on the same page; the only problem is the naming: there should be an extension method called ".SingleAndITrustInMyDataDontCheck()" (but shorter) that internally calls First :)
@@pinkfloydhomer You don't need to sort anything when you want to get the only element. And even if you want the top 1 of multiple matches, unless you are using SQL, using MaxBy will be much better than sorting, because max is O(n) while sort is O(n log n).
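A quick sketch of that trade-off with made-up data; MinBy/MaxBy require .NET 6 or later:

```csharp
using System.Linq;

// When you only need the single "top" element by some key, MinBy/MaxBy do
// one O(n) pass instead of an O(n log n) sort followed by First.
var emails = new[]
{
    (Address: "work@example.com", PreferredOrder: 2),
    (Address: "home@example.com", PreferredOrder: 1),
};

var bySort = emails.OrderBy(e => e.PreferredOrder).First(); // O(n log n)
var byMin  = emails.MinBy(e => e.PreferredOrder);           // O(n), same element
```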
th-cam.com/video/ZTWl2s8ScMc/w-d-xo.html Great video but I think you meant to say 'SingleOrDefault' 🤔
*Looks in the comments* seems I am the only one who uses .Find() 😅
Find will work the same way as First
@@nickchapsas FirstOrDefault actually (and in EF if the entity is already present in the context then Find won't query the database)
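A sketch of that difference, assuming the same hypothetical User/AppDbContext shape as in the earlier snippet:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class UserLookupSketch
{
    // Find is a keyed lookup: it checks the change tracker first and only
    // queries the database if the entity isn't already being tracked.
    public static ValueTask<User?> ByFind(AppDbContext db, int id) =>
        db.Users.FindAsync(id);

    // FirstOrDefault always produces a query (TOP(1) on SQL Server) and
    // returns null rather than throwing when there is no match.
    public static Task<User?> ByFirstOrDefault(AppDbContext db, int id) =>
        db.Users.FirstOrDefaultAsync(u => u.Id == id);
}
```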