@@nickchapsas "fixing your database performance issues" implies that you do something to your database to fix the performance issues that the database has. An example of a similar situation would be if I have a car that has an underpowered engine and therefore cannot go up steep hills. And I would then say: I fixed the car's underpowered engine by avoiding steep hills.I didn't fix the engine, I just circumvented the issue. This doesn't take away that it is a valid solution. And potentially a great solution at that!
@@dondernerdnot really, compatible by saying “fix your long commute“ by buying a flat next to the office. No mention of travelling faster in the original statement so would not assume travelling at a faster speed.
Aside from just the client speed gain, this can also reduce backend load significantly. I can think of a dozen projects at work where this will come in handy already. Thanks!!!
Great for the (contrived) read-heavy scenario identified, but developers should be careful about using caching like this to mask real database performance issues. If you have a very expensive query and lean on this type of browser caching to prevent heavy database load, you're at the mercy of well-behaved clients. A bad actor could simply ignore the etag and hammer the endpoint to potentially DDoS your server. Not saying we shouldn't leverage all the tools in our belt, but fixing fundamental database/query performance issues should be top of mind, and then solutions like Delta are a "yes and" addition.
ETags are a great tool - but that timestamp field does more than enable deltas. It allows for optimistic concurrency for writes to the database, which can be super useful as well!
He's searching if a username contains "nick", which means that he doesn't look for an exact "nick" string, but "nick" can be any part of the username string. An index wouldn't save you here unless you want to implement full-text search functionality on a username column
This is interesting. But I wonder, what the limitations are. As i understood it, the cache happens in the users browser based on the ETAG. So if you have an API with permission based on user/role that returns different data and a user uses different accounts in the same browser, the user would see invalid cached data. edit: nvm after the reading the project page. the ETAG consist of three values. AssemblyWriteTime/SQL Timestamp/ optional Suffix..... so you have to set the suffix to the userId/tenantid or whatever is suitable
So, I'm guessing that Delta is checking and comparing the maximum versionrow value in a table, but what if a row is deleted? Will the change be detected by Delta and force a requery? Also, could it just grab and update those rows whose rowverson has been updated and is now > the last max cashed value?
Deleting an row does not update the max row version (in most cases unless you delete the tow with the current max version) What you would normally do in the case of deleting is check the count of the items in your “cache” if the max rowversion is the same by but the count is different then you know the data has changed.
You can check for this change faster than checking delta, also rows are not usually actually removed even when they are deleted, they are just marked as deleted. Permanently removing rows is a much bigger thing and almost always leads into big problems if done outside of controlled runs
This seems neat but also, as other people have said, perhaps limited usefulness because the first client load is still slow and it doesn't improve much of the backend if you have a lot of clients. It would be nice to see some kind of memcached that was aware of rowversion so the cache would be distributed to all users querying the same data.
Does this work if I'm loading an entire object graph via orm? Wouldn't it essentially have to do this for every relation in the graph that's being accessed? How would it determine that?
This is not practical if i cannot change the table structure in the database. I just want to speed up my queries, not change any tables as that is out if my access.
Would like to see a video on how to upload multiple photos to Azure blob storage as fast as possible. Would like to see how you would do that, Nick. Thanks for the helpful videos.
From what I can see, the library is using SQL Change Tracking, so if the tables in the joins haven't changed, then no need to run again. You can even set what tables to track, so that one table that updates every second, can be ignored.
So this only comes in if the same user does the same search with the data being exactly the same? Then I don't see this having a big impact generally at all. It would be interesting if the data was patched with the updated values, instead of retrieving everything when a single value has changed. This would probably only be useful for non-index searches though, such as name like '%nick%'.
is there a security concern having the data stored in the browser? I assume there is probably a time limit as well of how long we can store the data in cache.
No this is for read heavy scenarios. That being said, I'm sure every DB has at least a few tables that are more read heavy than others
5 ชั่วโมงที่ผ่านมา
@@nickchapsas Isn't rowversion database global? Meaning that any write to any rowversion table will increase the global rowversion which would 'break' any caching. Or does Delta do MAX(rowversion) on the individual table?
What if I don't have a UI and I just have a background service calling a stored procedure, which has to Insert or Update?
10 ชั่วโมงที่ผ่านมา +1
Nice package but there are a few solutions that immediately came to my mind that is better than this. - If table is for mostly reading, adding key to DB is gonna make querying faster and better. - For your example, only the first query should be taking 1 second for the user. Because even if the row is changed, we still have the cached version of the row. So we have an id and using that we can access the row again in a millisecond unless user changed the username (which is very rare). I don't know if there are customizations in the package for that but it could make so much difference.
This doesn't optimize queries that much, it's more about not sending a json body over the wire if your user already has the most up to date data in his browser. Although it does speed up queries in the sense that data isn't fetched from the db until the rowversion is compared to the request header value. And it's only actually loaded when it's out of date.
So am I missing something? For example I have 3 tables that have no relation to each other. All of them have RowVersion. Changing 1 row in a table causes the etag for all requests to any tables become invalid.
I wonder how much additional storage would a new column like this require overall if you were to add it to most of your tables and they have millions of records.. and would there be a performance hit when updating/recalculating the column value when something has changed in the row (ok that part is probably fine if it's just a timestamp essentially)
You'll probably have a version column anyway for concurrency issues (optimistic/pessimistic locking). Only thing I don't like is it seems tied to the rowversion timestamp mechanic of SqlServer and I prefer Postgres and Sqlite.
I think this selects the MAX(rowversion) and if data wasn't changed after the last fetch (which is specified by a request header) the browser just gets a 304 Not Modified response instead of a whole body. Otherwise the request is handled the same as if Delta wasn't used (well other than an extra ETag header added to the response for the next fetch request). I guess delete could just be handled by also selecting COUNT(*) together with the MAX.
5 ชั่วโมงที่ผ่านมา +1
That is a good question. If delete does not increase the rowversion then cached version is invalid.
@@DavidSmith-ef4eh I did go through the documentation on github and there's no mention of a count lol. It also only works with SqlServer with it's rowversion timestamp mechanism.
@@gileee that optimistic update system from mssql. never used it tbh. but it seems a good system, microsoft surely has reasons for using it instead of locking rows and tables.
And what about client-server desktop apps? Is there anything like this for desktop approach? I've made my own solution with caching that behaves very similar to Delta but maybe there's someone smarted that did it better and I do not have to play with cache every time
This was way more efficient for your architecture because it's on the user's browser instead of something you have to manage, and you don't really have to worry about how much data you are caching
@@nickchapsas Thanks. I think a combination of both seems like a really good stuff (redis for other thing). But the boss may not be willing to apply this new right away, lol
Does this work with rowversion columns which are date type ? I have been working with a lot of ERP stuff and we always have column ROWVERSION which is date type. If this works with such columns, this is really great.
Yes it should work with these columns too, but maybe check yourself. It can work with two different ways of such mechanisms actually as descibed in the docs
While an interesting concept, it's a pass. Would implement this functionality differently, and using an approach that fits each specific project more closely.
Another video with a clickbait title where DB performance is actually not improved at all and the idea like "just install that package" instead of telling how to actually make it work.
This is very hacky! Someone not familiar with project will ask question: "what is that rowversion? how is it hooked up?" If you are not familiar with "Delta" - good luck. I'm wondering how Delta performs when you have joins? Do you need rowversion for each table? There are other mechanisms for dealing with wildcard queries.
@@gileee "Enterprise software has a "version" column anyway" some does and some doesn't. What you are referring to is optimistic locking. There are other ways. Problem: "You should never ever tie your FE to database directly. Otherwise you are up for surprise down the road"
@@ostaporobets7313 I know the name. A version column is the standard method since it not only works with users actually hitting an update at the same time (like an actual race condition), but also prevents users from fetching an entity, waiting a month and then pushing an update that then overwrites all the other changes that occurred in the meantime. Transactional locks can't help with that.
@@ostaporobets7313 The are no other ways when talking about distributed systems where users don't hold connections open to the db until completion. I think Nick used a web app here to drive that point home. Maybe not intentionally, but it shows a specific scenario where a method like this works well. I called it what I did because saying optimistic locking makes people think it's only about row locks.
Looks like another useless whistle. Its just common cache, but in case you have higher load and fast changing data, then you will not have any benefit, because it will invalidate cache every second or so.
I think the "rowversion" field should be named "ModifiedDateTime", "ChangedDateTime" or "UpdatedDateTime" or short versions "Modified", "Changed", "Updated", "LastUpdate" and use single attribute for that for any field instead of create new field
Dear god no. Database calls should be via stored procedure wherever possible. That way, database devs can then work on the database. They know the best way to minimise memory use and speed up calls. My job as a C# dev is C# efficiency. Furthermore, where is/ how is security implemented?
This is an optimization which makes it easy to use the browsers built in cache, so data doesn't even need to be re-fetched if it didn't change. The only db thing it uses is the rowversion from SqlServer.
@@gileee no I get that. What I do not get is how does my browser know that the data in a database has not changed? There has to be some mechanism to check, or I am basically flagging some data as no recheck needed?
@@saberint When you do the first fetch you get a header (ETag I think which is automatically added by Delta) that specifies the last change date of that data. Your browser knows how to use this header for subsequent requests automatically and the Delta lib intercepts the request, fetches the MAX(rowversion) from your table that the endpoint is targeting and compares it to the header. If the header is the same as the max, you just get a Not Modified response, instead of the whole json body. If the data was changed then you get the whole request like Delta wasn't even there, but it again automatically sets the header in the response.
@@gileee ok, so from your explanation it still hits my db, it just does the lookup on the rowversion to see if the data has changed. So I can see it speeding up data transmission, but its not speeding up my sql query which on a properly structured database with 1 mill records should be 1ms anyway. Stored procedures all the way for me. The SQL pre-compiles and you as the database dev can lock down access outside of stored procedures. Its decoupled from any business logic which means its just like an API, just for you db. Thanks for the extra information about the product. I sound negative (but that's because I cant think of any use cases I could use this for). But your insight was much appreciated. Cheers 🍻
@@saberint That's true, but like I said to me this isn't about speeding up queries, it's about speeding up response times. For large json bodies most of the time of the request is spent serializing, then transmitting the data over the internet, then deserializing on the client side. This prevents all of that and just returns a head basically.
This is a very handy package, but I would argue that it is not really improving database performance; it is just avoiding hitting it every time.
It's reducing DB usage, fixing your database performance issues, which is accurate to the title
Not to mention that it's also reducing network io. Which might reduce cost as well.
@@nickchapsas "fixing your database performance issues" implies that you do something to your database to fix the performance issues that the database has.
An example of a similar situation would be if I have a car that has an underpowered engine and therefore cannot go up steep hills. And I would then say: I fixed the car's underpowered engine by avoiding steep hills. I didn't fix the engine, I just circumvented the issue.
This doesn't take away that it is a valid solution. And potentially a great solution at that!
@@dondernerd Not really, it's comparable to saying “fix your long commute“ by buying a flat next to the office. No mention of travelling faster in the original statement, so I would not assume travelling at a faster speed.
The term should be "workaround" rather than "fixing"
I think it is a very good package, but I hope that the maintainer will also do an implementation for PostgreSQL. Nice video, Nick!
Simon will read the comments so I'm sure this will come :D
Concurred
100%
Same actually, that would be huge.
Meanwhile I hope for something for MongoDB. I wanted to implement some ETag mechanism myself, but found that it might not be too easy.
So sad there is no Postgres implementation 😞
Aside from just the client speed gain, this can also reduce backend load significantly. I can think of a dozen projects at work where this will come in handy already.
Thanks!!!
Title is kinda wrong, but the package seems awesome
It's always Simon Cropp. One of three 'Permanent Patrons' on the Fody project here. It's always Simon Cropp.
Great for the (contrived) read-heavy scenario identified, but developers should be careful about using caching like this to mask real database performance issues. If you have a very expensive query and lean on this type of browser caching to prevent heavy database load, you're at the mercy of well-behaved clients. A bad actor could simply ignore the etag and hammer the endpoint to potentially DDoS your server. Not saying we shouldn't leverage all the tools in our belt, but fixing fundamental database/query performance issues should be top of mind, and then solutions like Delta are a "yes and" addition.
ETags are a great tool - but that timestamp field does more than enable deltas. It allows for optimistic concurrency for writes to the database, which can be super useful as well!
I've built stuff like this manually. This will save a ton of time.
good man
You do not have an index on the username field. Your query seems to be doing a full table scan, and then you're hiding the real issue using caching.
Aside from that, the title is misleading
You cannot (or I guess you technically could but good luck with writes) have indices for all possible columns the user can search on.
He's searching if a username contains "nick", which means that he doesn't look for an exact "nick" string, but "nick" can be any part of the username string. An index wouldn't save you here unless you want to implement full-text search functionality on a username column
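For anyone wondering what that query looks like in code, here's a minimal EF Core sketch of that kind of contains-search (the User/AppDbContext names are made up for the example, not taken from the video). The Contains call translates to LIKE '%nick%', and the leading wildcard is exactly what stops a normal B-tree index from being used.

```csharp
// Hypothetical EF Core query illustrating the point above; the entity and
// context names are invented for the example.
using Microsoft.EntityFrameworkCore;

public class User
{
    public int Id { get; set; }
    public string UserName { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<User> Users => Set<User>();
}

public static class UserSearch
{
    public static Task<List<User>> SearchAsync(AppDbContext db, string term) =>
        // Contains(...) translates to: WHERE UserName LIKE '%nick%'.
        // The leading wildcard means SQL Server can't seek a B-tree index on
        // UserName and falls back to scanning, which is why caching the result
        // (rather than indexing) is what helps in this scenario.
        db.Users.Where(u => u.UserName.Contains(term)).ToListAsync();
}
```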
Someone should submit this to Code Cop for review...
😂😂😂😂
I'll make that video and double dip
Agree with most of the comments, it's a handy package. But I'd rather say it's a fix to the API performance.
I'm sitting on this title for the next video
This is interesting. But I wonder what the limitations are.
As I understood it, the cache happens in the user's browser based on the ETag. So if you have an API with permissions based on user/role that returns different data, and a user uses different accounts in the same browser, the user would see invalid cached data.
edit: nvm, after reading the project page: the ETag consists of three values (AssemblyWriteTime / SQL timestamp / optional suffix), so you have to set the suffix to the userId/tenantId or whatever is suitable
Yes you can heavily customize the logic to your needs
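For the user/tenant scoping mentioned above, the wiring could look roughly like the sketch below. This is only a sketch based on the project page's description of the optional suffix; the exact UseDelta overload and parameter name should be checked against the Delta readme, and the connection-string name and claim type are placeholders.

```csharp
// Sketch only: scoping Delta's ETag per signed-in user via the optional suffix.
// The suffix callback shape is an assumption - verify against the Delta docs.
using Delta;
using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);

// Delta resolves the SqlConnection from DI; "Default" is an illustrative name.
builder.Services.AddScoped(_ =>
    new SqlConnection(builder.Configuration.GetConnectionString("Default") ?? ""));

var app = builder.Build();

// Bake the user (or tenant) id into the ETag so two accounts sharing a browser
// can never serve each other's cached responses.
app.UseDelta(suffix: httpContext =>
    httpContext.User.FindFirst("sub")?.Value ?? "anonymous");

app.Run();
```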
Nick established Dometrain just to get one million users and take the data for this demo :) Respect!
🤫🤫🤫
So, I'm guessing that Delta is checking and comparing the maximum rowversion value in a table, but what if a row is deleted? Will the change be detected by Delta and force a requery?
Also, could it just grab and update those rows whose rowversion has been updated and is now > the last max cached value?
@@ChrisWalshZX it's looking at the max version of the db. I'm guessing a delete increases the db version.
Deleting a row does not update the max row version (in most cases, unless you delete the row with the current max version). What you would normally do in the case of deleting is check the count of the items in your “cache”: if the max rowversion is the same but the count is different, then you know the data has changed.
you can just mark your data as deleted by modifying it. you'll have the timestamp changed this way
You can check for this change faster than checking Delta. Also, rows are not usually actually removed even when they are deleted; they are just marked as deleted. Permanently removing rows is a much bigger thing and almost always leads to big problems if done outside of controlled runs.
@@ChrisWalshZX If you're doing a MAX(rowversion) it's easy to add an extra COUNT(*) to the select.
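To make that suggestion concrete, the check could be a single round trip like the sketch below. This is not necessarily what Delta does internally, just an illustration of combining MAX(rowversion) with COUNT(*) so deletes change the token too; the table and column names are hypothetical.

```csharp
// Illustration of the suggestion above (not necessarily Delta's internals):
// an update bumps MAX(rowversion), a delete changes COUNT(*), so the combined
// token changes whenever the data changes in either way.
using Microsoft.Data.SqlClient;

static async Task<string> GetCacheTokenAsync(string connectionString)
{
    const string sql = @"
        SELECT CONVERT(varchar(50), ISNULL(CONVERT(bigint, MAX([RowVersion])), 0))
             + '-' + CONVERT(varchar(20), COUNT(*))
        FROM dbo.Users;";

    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();
    await using var command = new SqlCommand(sql, connection);

    // Compare this token with the one the client sent; if they differ, the
    // cached data is stale and the full query should run again.
    return (string)(await command.ExecuteScalarAsync())!;
}
```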
This seems neat but also, as other people have said, perhaps limited usefulness because the first client load is still slow and it doesn't improve much of the backend if you have a lot of clients. It would be nice to see some kind of memcached that was aware of rowversion so the cache would be distributed to all users querying the same data.
Now this is actually very cool, even if limited to specific scenarios. Still, ingeniously simple.
Does this work if I'm loading an entire object graph via orm? Wouldn't it essentially have to do this for every relation in the graph that's being accessed? How would it determine that?
What about relationships (many-to-one, etc)? For example "user" is linked to "group" and someone changed the group name
I have the same question. 😀
In postgres, you can use gin/gist indexes if you don't want to deal with cache. This will speed up queries a ton in cases like this one
Thanks! I’ll look into it!
What about PostgreSQL, MySQL or Sqlite?
Wow a fantastic and easy to use library, thanks for sharing Nick
This one is pretty cool, thanks Nick and Simon
Now that was a great discovery, thanks for the share!!
This is not practical if I cannot change the table structure in the database. I just want to speed up my queries, not change any tables, as that is outside of my access.
Would like to see a video on how to upload multiple photos to Azure blob storage as fast as possible. Would like to see how you would do that, Nick. Thanks for the helpful videos.
Great tip! Will check it out.
The title should be about improving request performance, not database performance. You'd add an index for that.
Ok, that explains why I've seen this library on my Github wall...
Another use: AI could just look at the cache time and warn if a change likely occurred because it ran at 1000 ms instead of an estimated less than 50 ms.
Thank you, very useful library
How does this work if you are using a graph of different tables in a single query?
Don't do this anymore please :)
Don’t do what? Join tables?
@@weluvmusicz what don't include data from multiple tables?
Perfect as always. Thnx!
Nick, are you planning to do a video on the new ms testing platform and x/n unit integrations and if it's actually ready for use?
I am
Great stuff but what about more complex scenarios with joins?
Same thing. It doesn’t matter. As long as you have the column in the tables of interest, this will work with joins, stored procedures or anything else.
From what I can see, the library is using SQL Change Tracking, so if the tables in the joins haven't changed, then there's no need to run again. You can even set which tables to track, so that one table that updates every second can be ignored.
Kinda what Blazor is now doing for static file caching with ETags, genius. Btw, rowversion is used for optimistic concurrency in general.
So this only comes in if the same user does the same search with the data being exactly the same? Then I don't see this having a big impact generally at all. It would be interesting if the data was patched with the updated values, instead of retrieving everything when a single value has changed. This would probably only be useful for non-index searches though, such as name like '%nick%'.
So basically a single modification in any table resets the cache for all the db?
Effectively yeah but it doesn't really "reset" the cache rather than "data has changed in some way so read the new version"
@nickchapsas OK. And yeah, it doesn't go in every browsers to reset the cache 😁
The same thing can be done using UUID v7 as the primary key in your tables. You don't need an extra field then.
is there a security concern having the data stored in the browser? I assume there is probably a time limit as well of how long we can store the data in cache.
You already serve the data to the browser. If it was ok to serve it the first time, it is ok to cache it.
5:29 This is a 304 request...😂😂😂😂😂
Will this add much value if the DB table has fast changing data?
No this is for read heavy scenarios. That being said, I'm sure every DB has at least a few tables that are more read heavy than others
@@nickchapsas Isn't rowversion database global? Meaning that any write to any rowversion table will increase the global rowversion which would 'break' any caching. Or does Delta do MAX(rowversion) on the individual table?
holy mother of performance
As I understand it, RowVersion is calculated from the entire database? Is it possible to map a particular endpoint only to the RowVersion from a specific table?
Client side caching nice!
What if I don't have a UI and I just have a background service calling a stored procedure, which has to Insert or Update?
Nice package, but there are a few solutions that immediately came to my mind that are better than this.
- If the table is mostly for reading, adding a key to the DB is gonna make querying faster and better.
- For your example, only the first query should be taking 1 second for the user. Because even if the row is changed, we still have the cached version of the row. So we have an id, and using that we can access the row again in a millisecond unless the user changed the username (which is very rare). I don't know if there are customizations in the package for that, but it could make so much difference.
This doesn't optimize queries that much, it's more about not sending a json body over the wire if your user already has the most up to date data in his browser. Although it does speed up queries in the sense that data isn't fetched from the db until the rowversion is compared to the request header value. And it's only actually loaded when it's out of date.
So am I missing something? For example I have 3 tables that have no relation to each other. All of them have RowVersion. Changing 1 row in a table causes the etag for all requests to any tables become invalid.
Very misleading title. ”How to hide bad db performance for a user that makes multiple requests” would be more true.
So caching should never be referred to as a performance improvement but rather just hiding bad performance. Got it
Very interesting, even if SQL Server actually does the magic.
It would be interesting to have a [delta]-like decorator to apply to endpoints
You have that with the shouldExecute as well as the Group UseDelta method
How about memory consumption? Is there a noticeable difference?
how would this work with EF with migrations scenarios?
Can you implement this with Angular?
What if you layer your db rows over DTOs?
That's totally fine. This will still work
I wonder how much additional storage would a new column like this require overall if you were to add it to most of your tables and they have millions of records.. and would there be a performance hit when updating/recalculating the column value when something has changed in the row (ok that part is probably fine if it's just a timestamp essentially)
It's a timestamp column, so 8 bytes per row
You'll probably have a version column anyway for concurrency issues (optimistic/pessimistic locking). Only thing I don't like is it seems tied to the rowversion timestamp mechanic of SqlServer and I prefer Postgres and Sqlite.
@@gileee yeah true, I guess if it can be configured to rely on an existing column/setup without the need of DB changes it would be pretty good
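For reference on the storage question: a SQL Server rowversion is a fixed 8 bytes per row and the engine bumps it automatically on every insert/update, so there's no recalculation logic to maintain yourself. If EF Core happens to be the stack, the mapping is a one-liner; the entity and context names below are just for illustration.

```csharp
// EF Core mapping for a SQL Server rowversion column (names are illustrative).
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

public class CatalogDbContext : DbContext
{
    public DbSet<Product> Products => Set<Product>();

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.Entity<Product>()
            .Property(p => p.RowVersion)
            .IsRowVersion(); // maps to SQL Server's 8-byte 'rowversion' type
}
```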
Do you get paid or any financial benefit for promoting this package?
If I did I would have to legally disclose it. I just like the package and Simon.
does it work on delete? I assume he sorts the row version desc and fetches all the row versions that are greater than the largest one cached..
I think this selects the MAX(rowversion) and if data wasn't changed after the last fetch (which is specified by a request header) the browser just gets a 304 Not Modified response instead of a whole body. Otherwise the request is handled the same as if Delta wasn't used (well other than an extra ETag header added to the response for the next fetch request).
I guess delete could just be handled by also selecting COUNT(*) together with the MAX.
That is a good question. If delete does not increase the rowversion then cached version is invalid.
they probably get a count I guess.. Tbh, I've been using a similar system for years. My row_version number is just called updated_at :D
@@DavidSmith-ef4eh I did go through the documentation on GitHub and there's no mention of a count lol. It also only works with SqlServer with its rowversion timestamp mechanism.
@@gileee That's the optimistic update system from MSSQL. Never used it tbh, but it seems a good system; Microsoft surely has reasons for using it instead of locking rows and tables.
Zoomers invented indexes and caches?
Guessing this would not work with server-side pagination.
It can
A cache does not fix your db performance... it might reduce network IO, but that's it. The db is still slow
"Database performance issues" not "database performance". Important difference there
And what about client-server desktop apps? Is there anything like this for the desktop approach? I've made my own solution with caching that behaves very similarly to Delta, but maybe there's someone smarter that did it better and I don't have to play with the cache every time
You can use Replicant there to have the same logic on the HttpClient
This is only for SQL server. Dang
Great. How about this one vs Redis?
This was way more efficient for your architecture because it's on the user's browser instead of something you have to manage, and you don't really have to worry about how much data you are caching
@@nickchapsas Thanks. I think a combination of both seems like really good stuff (Redis for other things). But the boss may not be willing to apply this new thing right away, lol
got me all excited :) :(
Does this work with rowversion columns which are date type? I have been working with a lot of ERP stuff, and we always have a ROWVERSION column which is date type. If this works with such columns, this is really great.
Yes, it should work with these columns too, but maybe check yourself. It can actually work with two different ways of such mechanisms, as described in the docs.
Is this only supported for .NET 9?
Does the name of the "timestamp" column necessarily have the name "RowVersion" or can it have any other name?
AFAIK you don't have to call it rowversion
This delay fix ? Would like fix many aka bugs
Misleading title but nice package.
So kinda like memorycache
Wow bravo
While an interesting concept, it's a pass. Would implement this functionality differently, and using an approach that fits each specific project more closely.
This will only work in web applications?
What about mobile applications?
Check the video until the end
Another video with a clickbait title where DB performance is actually not improved at all and the idea like "just install that package" instead of telling how to actually make it work.
You can't have DB Performance issues when you don't call the DB anymore :)
I don't see this being useful in real applications with many data modifications in different tables.
Real applications also have read heavy scenarios for which this is perfect
This is very hacky! Someone not familiar with the project will ask the question: "what is that rowversion? how is it hooked up?" If you are not familiar with "Delta" - good luck. I'm wondering how Delta performs when you have joins? Do you need a rowversion for each table?
There are other mechanisms for dealing with wildcard queries.
Don't you document architecturally significant decisions?
Enterprise software has a "version" column anyway for catching concurrency issues. What's the problem?
@@gileee "Enterprise software has a "version" column anyway" some does and some doesn't. What you are referring to is optimistic locking. There are other ways.
Problem: "You should never ever tie your FE to database directly. Otherwise you are up for surprise down the road"
@@ostaporobets7313 I know the name. A version column is the standard method since it not only works with users actually hitting an update at the same time (like an actual race condition), but also prevents users from fetching an entity, waiting a month and then pushing an update that then overwrites all the other changes that occurred in the meantime. Transactional locks can't help with that.
@@ostaporobets7313 There are no other ways when talking about distributed systems where users don't hold connections open to the db until completion. I think Nick used a web app here to drive that point home. Maybe not intentionally, but it shows a specific scenario where a method like this works well. I called it what I did because saying optimistic locking makes people think it's only about row locks.
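A small sketch of the lost-update protection being described, using EF Core's optimistic concurrency; the Order/ShopDbContext names and the "ship order" scenario are hypothetical, not from the video.

```csharp
// Minimal sketch: a rowversion concurrency token prevents a stale client from
// silently overwriting changes made since it last read the row.
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
    public byte[] Version { get; set; } = Array.Empty<byte>(); // concurrency token
}

public class ShopDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.Entity<Order>().Property(o => o.Version).IsRowVersion();
}

public static class OrderUpdates
{
    public static async Task<bool> TryShipAsync(
        ShopDbContext db, int orderId, byte[] versionClientRead)
    {
        var order = await db.Orders.SingleAsync(o => o.Id == orderId);

        // Replay the version the client originally read, even if that was a month ago.
        db.Entry(order).Property(o => o.Version).OriginalValue = versionClientRead;
        order.Status = "Shipped";

        try
        {
            // EF issues: UPDATE ... WHERE Id = @id AND Version = @originalVersion
            await db.SaveChangesAsync();
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone else changed the row in the meantime; report a conflict
            // instead of overwriting their changes.
            return false;
        }
    }
}
```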
Looks like another useless whistle. It's just a common cache, but in case you have higher load and fast-changing data, you will not have any benefit, because it will invalidate the cache every second or so.
Yes, that's what the documentation says
That's really not a good title.
You clicked
I think the "rowversion" field should be named "ModifiedDateTime", "ChangedDateTime" or "UpdatedDateTime" or short versions "Modified", "Changed", "Updated", "LastUpdate" and use single attribute for that for any field instead of create new field
You can use a date field as well if you want. You're not limited to RowVersion
@@nickchapsas Interesting. Does this mean we can tie Delta to a temporal table field like Period Start?
- Doubtful?
- Yeah, it's client-side only.
- So useless!
Dear god no. Database calls should be via stored procedure wherever possible. That way, database devs can then work on the database. They know the best way to minimise memory use and speed up calls. My job as a C# dev is C# efficiency. Furthermore, where is/ how is security implemented?
This is an optimization which makes it easy to use the browsers built in cache, so data doesn't even need to be re-fetched if it didn't change. The only db thing it uses is the rowversion from SqlServer.
@@gileee no I get that. What I do not get is how does my browser know that the data in a database has not changed? There has to be some mechanism to check, or I am basically flagging some data as no recheck needed?
@@saberint When you do the first fetch you get a header (ETag I think which is automatically added by Delta) that specifies the last change date of that data. Your browser knows how to use this header for subsequent requests automatically and the Delta lib intercepts the request, fetches the MAX(rowversion) from your table that the endpoint is targeting and compares it to the header. If the header is the same as the max, you just get a Not Modified response, instead of the whole json body. If the data was changed then you get the whole request like Delta wasn't even there, but it again automatically sets the header in the response.
@@gileee ok, so from your explanation it still hits my db, it just does the lookup on the rowversion to see if the data has changed. So I can see it speeding up data transmission, but it's not speeding up my sql query, which on a properly structured database with 1 million records should be 1ms anyway. Stored procedures all the way for me. The SQL pre-compiles, and you as the database dev can lock down access outside of stored procedures. It's decoupled from any business logic, which means it's just like an API, just for your db.
Thanks for the extra information about the product. I sound negative (but that's because I can't think of any use cases I could use this for). But your insight was much appreciated. Cheers 🍻
@@saberint That's true, but like I said to me this isn't about speeding up queries, it's about speeding up response times. For large json bodies most of the time of the request is spent serializing, then transmitting the data over the internet, then deserializing on the client side. This prevents all of that and just returns a head basically.
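To make that flow concrete, here's a hand-rolled sketch of the same conditional-request mechanics. This is not Delta's source code, just the pattern it automates; the table name, connection-string name and the placeholder payload are assumptions.

```csharp
// Sketch of an ETag / If-None-Match endpoint keyed off MAX(rowversion).
using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
var connectionString = builder.Configuration.GetConnectionString("Default") ?? "";

app.MapGet("/users", async (HttpContext ctx) =>
{
    // 1. Compute the current "version" of the data, e.g. MAX(rowversion) on the table.
    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();
    await using var cmd = new SqlCommand(
        "SELECT ISNULL(CONVERT(bigint, MAX([RowVersion])), 0) FROM dbo.Users", connection);
    var etag = $"\"{(long)(await cmd.ExecuteScalarAsync() ?? 0L)}\"";

    // 2. If the browser already holds this version (If-None-Match), short-circuit with
    //    304: no payload query, no JSON serialization, no body over the wire.
    if (ctx.Request.Headers.IfNoneMatch == etag)
        return Results.StatusCode(StatusCodes.Status304NotModified);

    // 3. Otherwise serve the real payload and hand back the new ETag for next time.
    ctx.Response.Headers.ETag = etag;
    var users = /* run the real SELECT here and map it to DTOs */ Array.Empty<object>();
    return Results.Ok(users);
});

app.Run();
```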
first.
ordefault
Single
InvalidOperationException
@@ml_serenity you rock!!
@@nickchapsas Async
Is this only supported for .NET 9??
Yes, the nuget packages only target .net 9