A few people pointed out that the second version is "slower", which you can see around the 4:30 mark.
That was a request hitting a cold start, unfortunately.
I tested it now and:
- Single query: ~8ms
- Split query: ~5ms
I admit that I may have made an error here, leading you to believe it is in fact slower with query splitting.
As I said in the video, measure everything and never take anyone's (including mine) word as the "truth".
I'll try to do a better job with this in future videos!
Stay awesome 😁
Thanks for the video.
Just wanted to mention one thing that's hard for me to hear :) :
why are you pronouncing it "SEQUEL" instead of "SQL"?
You can check the wiki, where it's explained why "SEQUEL" hasn't been the correct name for a long time now.
@@vladyslavhrehul2185 You are the very first one to bring that up 😅
@@vladyslavhrehul2185 It's a long-standing tradition at Microsoft to pronounce acronyms word-like.
how did you setup your project such that you can see the LINQ-to-SQL translated code on the console? do you have a tutorial how to set that up?
From 21 seconds to 5 seconds. I am pretty impressed by this feature. Thank you for showing.
It can really be a lifesaver
Never noticed this capability before, so I appreciate it being called out. I often just split queries by hand and join in memory. As you say, often a complex query is slower than multiple smaller ones. I will certainly look at this splitting capability though; it might save me some effort.
Not strictly .NET related, but since you mentioned the potential cost of multiple queries depending on the proximity of the data, I'll mention the most impressive complex-query performance remedy I've been using quite a lot recently: FOR JSON. In a corporate environment with an MS SQL cluster spanning datacenters, consumed by APIs also spanning datacenters, we aren't always guaranteed a short trip from the client to the current primary node.

By structuring the data as JSON in SQL we reduce the replicated column data, and the resulting payload is a fraction of the original rowset. Since JSON is a bit chatty itself, we actually use a SQL scalar function to generate the raw JSON result (text) and then compress the result (byte array). It's simple in code to decompress what SQL has compressed with the stock .NET GZip (IIRC) implementation, and deserialize to the resulting object type.

You end up with a structured result like what you would get from an EF projection, but tightly compressed to flow back across the wire to the client - at the cost of CPU rather than IO. I have yet to dig into how SQL does it, but in all tests to date a FOR JSON query with many subqueries performs best (for us, YMMV) compared to a single query with numerous joins to related data expressed as a rowset.
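For anyone curious, the client side of that pattern might look roughly like this - a sketch only, assuming a hypothetical scalar function `dbo.GetGatheringJsonCompressed` that returns the GZip-compressed JSON as VARBINARY, and a `GatheringDto` type matching the JSON shape (all names here are illustrative, not from the original comment):

```csharp
using System.IO;
using System.IO.Compression;
using System.Text.Json;
using Microsoft.Data.SqlClient;

// Fetch the compressed JSON payload produced by the (hypothetical)
// SQL scalar function. connectionString and gatheringId are assumed
// to be defined by the caller.
byte[] compressed;
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT dbo.GetGatheringJsonCompressed(@id)", connection))
{
    command.Parameters.AddWithValue("@id", gatheringId);
    connection.Open();
    compressed = (byte[])command.ExecuteScalar();
}

// Decompress with the stock .NET GZipStream (SQL Server's COMPRESS
// produces GZIP-format data), then deserialize the JSON.
using var input = new MemoryStream(compressed);
using var gzip = new GZipStream(input, CompressionMode.Decompress);
var gathering = JsonSerializer.Deserialize<GatheringDto>(gzip);
```

The trade the commenter describes is visible here: CPU spent on compression and deserialization in exchange for far fewer bytes crossing datacenter boundaries.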
That sounds super interesting! I'm going to have to explore this topic deeper
Can you guys please give any documentation where i can read about this method in detail, thanks in advance
@@bloopers2967 learn.microsoft.com/en-us/sql/relational-databases/json/format-query-results-as-json-with-for-json-sql-server?view=sql-server-ver16
Every Saturday morning, I send one .NET tip in my newsletter.
*2625 .NET engineers* already read it. I would love to have you with us.
www.milanjovanovic.tech/
This feature greatly improved my performance - 10x! Maybe not quite that much, but it was immediately noticeable
Wow, that's an amazing improvement! You must've had a lot of JOINs in that query
First time I've come across this. Looks like it could be useful for some larger ef queries. Much appreciated
I wrote a blog post also with some examples:
www.milanjovanovic.tech/blog/how-to-improve-performance-with-ef-core-query-splitting
@@MilanJovanovicTech The page doesn't exist
@@bloopers2967 Fixed it!
Wow I am discovering your channel and I have found extremely useful things
Thanks Sergio, hopefully the algorithm will get me in front of more and more people 😁
Very interesting, this lesson improves a lot in query performance. Thank you very much for sharing Milan👏😁
Thank you Fernando. Do you have an idea where you could apply this?
@@MilanJovanovicTech Of course, I would apply it to all my queries where I do joins with other entities. I find it great to be able to add it by behavior and for simple queries define them right there with the "AsSingleQuery()" method. Correct me if I'm wrong please :D
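For reference, applying split queries as the default behavior and opting back in to a single query per-query looks something like this (a sketch; the context and entity names are assumptions):

```csharp
// Make split queries the default for the whole DbContext, then opt
// back in to a single query for simple cases. AppDbContext and
// Gathering are illustrative names.
services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(
        connectionString,
        sql => sql.UseQuerySplittingBehavior(
            QuerySplittingBehavior.SplitQuery)));

// A simple query can still override the global default:
var gathering = await dbContext.Gatherings
    .Include(g => g.Attendees)
    .AsSingleQuery()
    .FirstOrDefaultAsync(g => g.Id == id);
```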
very informative, will implement in my project. thanks for sharing another wonderful video
Make sure to measure your performance!
Thanks for this video explanation
You're welcome :)
Thank you and awesome explanation.
Glad you liked it! Do you think you can use this in your project somewhere?
Ace Milan Jovanović
Thanks a lot!
Entity could just use the gathering id to fetch attendees and invitations from database, instead of doing a join with inner query 😅, nice video by the way, keep up the good work! 🚀
It is interesting how it translates the SQL indeed!
I think this might be deliberate, as joins are more performant than saying "WHERE GatheringId = {0}" on the Attendees table. I haven't tried this myself, but I'm betting that if projection is used, the split queries won't select every column on the Gathering table either.
I wish you could provide us with videos, best practices, how to call stored procedures, views and DbFunctions, when to use them and when not to use them with EF core.
Ok
Hi, Just one question, I can see multiple includes are being used, what if I want a parameter of list for all these 3 includes and include them dynamically? How to achieve it?
Construct the query dynamically by iterating that list to determine what to include
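A minimal sketch of that idea, assuming the includes arrive as a list of navigation property names (EF Core's string-based `Include` overload accepts these; entity names are illustrative):

```csharp
// Build up Include calls from a runtime list of navigation names,
// e.g. includes = ["Attendees", "Invitations"].
IQueryable<Gathering> query = dbContext.Gatherings;

foreach (string navigation in includes)
{
    query = query.Include(navigation);
}

var gathering = await query
    .AsSplitQuery()
    .FirstOrDefaultAsync(g => g.Id == id);
```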
@@MilanJovanovicTech Would you mind sending me the code for that, please?
great feature!
Did you try it out?
@@MilanJovanovicTech Yes, of course. This is good for speeding up requests, especially under load.
I usually do load nested objects when it is required one by one. After just getting an object by Id, I do some checking, if it passes, then I load the nested object when it is required. It saves some time, I think. I never benchmarked though.
This could be more performant indeed.
please make a detail video on layered architecture using identity and ef core
Sure!
Nice 👏
Thank you! Cheers!
What did you do to get the sql in the console/output of the IDE?
Just change the log level in appsettings.json
How do you test the split query vs single query performance properly locally? What if the actual app service and db servers are close but the local environment is on another server with more latency?
Then test it with the remote database?
@@MilanJovanovicTech The latency to the remote DB is high from the local environment because each is in a different region. But the API is hosted in the same region as the DB, so the actual latency is low.
So a split query could perform badly in the local environment, with round trips to the remote DB, but it could actually perform better on the server.
@@tasin5541 I have the same issue, writing queries locally and testing them against a remote DB: the response time isn't that great, but if we test from a server where the latency is low, the response time is significantly better.
How can we generate the SQL query in the command line window?
I'm not sure what you mean 🤔
Nice tutorial, you got a new subscriber 👍
How did you enable logging SQL queries in console? I need this
I set the log level to Information in appsettings.json
A video on Logging is actually coming out in an hour 😁
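For anyone looking for the exact setting, raising the EF Core command log level in appsettings.json looks something like this (a sketch of the relevant fragment):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.EntityFrameworkCore.Database.Command": "Information"
    }
  }
}
```

With that category set to Information, every executed SQL statement shows up in the console output.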
Thanks for this Milan!
Is this effectively the same as doing 1 query for Gathering and then doing:
await _dbContext.Entry(gathering).Collection(g => g.Attendees).LoadAsync()
await _dbContext.Entry(gathering).Collection(g => g.Invitations).LoadAsync()
Yes, but a split query will likely be more performant. A split query also stops when it runs into a null principal.
Another question please: can you tell me the difference between these two queries?
var result1 = dbContext.Authors.Include(x => x.Books);
var result2 = from author in dbContext.Authors
from book in dbContext.Books.Where(x => x.AuthorId == author.Id)
select new { author, book };
One is LINQ method syntax, the other is query syntax. And I think they get compiled to the same SQL, so no difference in theory.
@@MilanJovanovicTech Is the one with query syntax lazy loaded?
@@ramytawfik9168 They're both eager loading
@@MilanJovanovicTech thanks a lot bro
Does placement matter? If I place AsSplitQuery at the bottom (before materializing), does it change anything?
I'm not sure, to be honest. It shouldn't matter most likely.
Well done.
I created separate queries for both (single and AsSplitQuery); I just invoke the needed one.
Nice work!
I'm having trouble scaling .NET EF Core - I can't do more than 200 RPS. With NestJS, with the same data structure, DB, etc., I'm doing 5000 QPS. Can you show us how to tune it for heavy loads? I'm using PostgreSQL
What queries are you running? How big is your database?
@@MilanJovanovicTech They are 5 string columns with a UUID, no joins or complexity, with a limit of 25 characters
Hi Milan,
I have a question: I'm currently working on a big API project, building an API for a database with 114 tables.
So:
1. How can I generate DTOs for all entities?
2. Do I need to build controllers for all tables? All entities? Or just the important ones?
3. Any design guide or ideas would be appreciated.
Thx.
1. Write code? Or scaffold from the database
2. I don't know. Depends on your application
3. Use a layered architecture
Hi! I'm here again with the "scalability" question. EF manages its own connection pool, different from ADO. What should we do to configure the max connection pool size and queueing, to prevent EF from crashing? A colleague found an issue where I access an endpoint simulating 1000 concurrent users with a simple query, done with both ADO and EF. ADO gets to 100% pool usage but always returns an answer, while EF gets to 100% and FAILS to return - it looks like it manages the queue wrong. Any help here?
By the way, if you select all columns in EF it will also be way slower than selecting only the ones you use, and it will use way more memory (for one-to-many relations with big results this makes a very big difference).
"but EF gets to 100% and FAILS to return" - what is the error you are getting in that case?
@@MilanJovanovicTech System.AggregateException: One or more errors occurred. (Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.) This error does not happen with ADO/Dapper. The test was about 1000 virtual users making requests continuously for 1 minute.
The first time you sent it, Postman recorded 30ms; then with the split query it became 638ms. How is this an improvement? Just curious.
That was a request hitting a cold-start, actually.
I tested it now and:
- Single query: ~8ms
- Split query: ~5ms
I admit that I may have made an error here, leading you to believe it is in fact slower with query splitting.
But you should measure everything.
Listen from this part:
th-cam.com/video/GY7QwSFeVBQ/w-d-xo.html
@@MilanJovanovicTech thanks for the update 👍
@@phugia963 I added a pinned comment also, so that everyone is aware. Thanks for pointing that out. 😁
How about if we use an implicit join instead of Include? Does AsSplitQuery have any benefit in that case? I assume that when we use an implicit join, "AsNoTracking" isn't required either - am I right?
AsNoTracking is for when we don't want EF to track changes.
In this case, it's not required.
You mean the LINQ Join method?
I don't expect it to make much difference, since it's translated to more or less the same SQL statement.
I mean the .Select() method, where we use DTO(s) to transfer only the required fields instead of the whole entity via the .Include() method. I'm not sure if AsSplitQuery has any effect in that case.
@@cyrildouglas9262 I'm not sure, going to check.
@Cyril Douglas .Select does nothing for the problem that cartesian explosion poses; it is just an additional (and I recommend it as well) way to reduce DB traffic.
AsSplitQuery has an effect in both cases, because after .Select you still have the remaining parent-child relationships, which in a flattened response increase the amount of data.
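To illustrate the point: a projection can still carry nested collections, and AsSplitQuery applies to it just the same (a sketch; the entity and DTO shapes are assumptions):

```csharp
// A projection with a nested collection still suffers from cartesian
// explosion in a single query - each Gathering row is repeated per
// attendee. AsSplitQuery applies here too.
var gatherings = await dbContext.Gatherings
    .Select(g => new GatheringDto
    {
        Id = g.Id,
        Name = g.Name,
        Attendees = g.Attendees
            .Select(a => new AttendeeDto { Id = a.Id, Name = a.Name })
            .ToList()
    })
    .AsSplitQuery()
    .ToListAsync();
```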
@@dmitrywho7990 Thanks for the reply. However, there are some drawbacks to using SplitQuery: Microsoft states that there is no guarantee of data consistency with this method, and it implies more roundtrips to the database. Even they recommend using it carefully.
time goes from 30ms to 652ms as soon as you add the split query
I said you have to measure if it makes sense for your use case. Not that it will *always* improve performance 😁
@@MilanJovanovicTech 😂
Not so sure about this improvement. Sending multiple queries to the engine involves the latency needed to process each one.
There is not only the latency introduced by the network (which in some situations means not a local server but a TCP connection over the network), but also the time needed to go through the parse-optimize-reduce-generate-plan query cycle. For relatively small queries this time is comparable to the time to actually execute the query itself.
Plus, with multiple queries you tend to prevent some optimizations the SQL query engine could perform when the query was specified only once.
Agreed, and I mentioned this could be a potential deal breaker for using this feature.
That's why although Query Splitting may seem like a good idea on the surface, you have to be careful with it and measure the performance changes.
Please measure the speed for us?
Ok
Or we could actually write SQL and not worry about performance :) ?
Sure, I was never against writing SQL. SQL is more important than any ORM.
Very informative video, but what a horrible hack to bypass SQL faults
What would you propose?
@@MilanJovanovicTech Idk, I just came here to criticize without offering any workaround 🤣🤣. But I truly believe SQL engines should deal with it more elegantly
Over the past year I've started moving away from EF. I love the code first concept and how easy EF is to use. However, that ease of use is (from what I'm seeing) making a lot of devs lazy and they end up not understanding what the tool is doing when it builds the queries.
With a bit of effort, you can make it work easily. Even for complex queries. You mentioned that you're moving away from EF - into what?
@@MilanJovanovicTech I should clarify a little better. I've started moving away from EF Core on the query side, to Dapper. I still use EF Core for migrations and writes. When building my queries I start in SSMS to make sure I'm squeezing all the performance I can out of raw SQL. The problem with using EF Core is that it's not always so simple to translate that query over. I've been using .NET Core since the RC days, and don't get me wrong, EF Core has come far, but I think it took too long for basic functionality to make its way from EF to EF Core
@@guava_dev I agree, I also prefer Dapper these days.
When you do this you are no longer guaranteed that you'll get a consistent result. If something has changed that data between your two calls you'll get weird results.
What's the chance of that happening? And you can solve it with a transaction or FOR UPDATE / WITH (UPDLOCK)
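Wrapping the query in an explicit transaction is one hedge against that inconsistency - a sketch under the assumption of a serializable isolation level (which carries its own locking costs; entity names are illustrative):

```csharp
using System.Data;
using Microsoft.EntityFrameworkCore;

// Run a split query inside an explicit transaction so that both of
// the resulting SELECT statements see a consistent view of the data.
// Serializable is the heavy-handed option; snapshot isolation is an
// alternative where the database supports it.
await using var transaction = await dbContext.Database
    .BeginTransactionAsync(IsolationLevel.Serializable);

var gathering = await dbContext.Gatherings
    .Include(g => g.Attendees)
    .Include(g => g.Invitations)
    .AsSplitQuery()
    .FirstOrDefaultAsync(g => g.Id == id);

await transaction.CommitAsync();
```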
@@MilanJovanovicTech If used incorrectly, transaction isolation levels and locks like this can lead to deadlocks, or simply not do what you want. It is a much more complex subject - worthy of 20 such videos.
For a program to be efficient, it shouldn't use any framework like EF. Only something like Dapper comes close to real efficiency.
EF is awesome 🦄
😂 query splitting…
Eh?
👋