The Fix For Your Database Performance Issues in .NET

  • Published Nov 27, 2024

Comments • 175

  • @fedayka
    @fedayka 11 hours ago +120

    This is a very handy package, but I would argue that it is not really improving database performance; it is just avoiding hitting it every time.

    • @nickchapsas
      @nickchapsas  11 hours ago +27

      It's reducing DB usage, fixing your database performance issues, which is accurate to the title

    • @Tsunami14
      @Tsunami14 9 hours ago +10

      Not to mention that it's also reducing network io. Which might reduce cost as well.

    • @dondernerd
      @dondernerd 9 hours ago +61

      @@nickchapsas "fixing your database performance issues" implies that you do something to your database to fix the performance issues that the database has.
      An example of a similar situation would be if I have a car with an underpowered engine that therefore cannot go up steep hills, and I then say: I fixed the car's underpowered engine by avoiding steep hills. I didn't fix the engine, I just circumvented the issue.
      This doesn't take away that it is a valid solution. And potentially a great solution at that!

    • @Salvotation
      @Salvotation 6 hours ago +5

      @@dondernerd Not really; it's comparable to saying "fix your long commute" by buying a flat next to the office. There's no mention of travelling faster in the original statement, so I would not assume travelling at a faster speed.

    • @AlanDias17
      @AlanDias17 6 hours ago +12

      The term should be "workaround" rather than "fixing"

  • @andreistelian9058
    @andreistelian9058 11 hours ago +42

    I think it is a very good package, but I hope that the maintainer will also do an implementation for PostgreSQL. Nice video, Nick!

    • @nickchapsas
      @nickchapsas  11 hours ago +9

      Simon will read the comments so I'm sure this will come :D

    • @user-uo7ch2lf3z
      @user-uo7ch2lf3z 10 hours ago +2

      Concurred
      100%

    • @SpaceTrump
      @SpaceTrump 7 hours ago +1

      Same actually, that would be huge.

    • @TehGM
      @TehGM 5 hours ago +1

      Meanwhile I hope for something for MongoDB. I wanted to implement some ETag mechanism myself, but found that it might not be too easy.

    • @carlosdelvalle5417
      @carlosdelvalle5417 3 hours ago

      So sad there is no Postgres implementation 😞

  • @TheSilent333
    @TheSilent333 11 hours ago +10

    Aside from just the client speed gain, this can also reduce backend load significantly. I can think of a dozen projects at work where this will come in handy already.
    Thanks!!!

  • @lucasmicheleto2722
    @lucasmicheleto2722 9 hours ago +25

    Title is kinda wrong, but the package seems awesome

  •  2 hours ago +1

    It's always Simon Cropp. One of three 'Permanent Patrons' on the Fody project here. It's always Simon Cropp.

  • @james-not-jim
    @james-not-jim 8 hours ago +5

    Great for the (contrived) read-heavy scenario identified, but developers should be careful about using caching like this to mask real database performance issues. If you have a very expensive query and lean on this type of browser caching to prevent heavy database load, you're at the mercy of well-behaved clients. A bad actor could simply ignore the etag and hammer the endpoint to potentially DDoS your server. Not saying we shouldn't leverage all the tools in our belt, but fixing fundamental database/query performance issues should be top of mind, and then solutions like Delta are a "yes and" addition.

  • @serverlesssolutionsllc8273
    @serverlesssolutionsllc8273 9 hours ago +7

    ETags are a great tool - but that timestamp field does more than enable deltas. It allows for optimistic concurrency for writes to the database, which can be super useful as well!
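
A quick sketch of that optimistic-concurrency use in EF Core (assuming EF Core on SQL Server; the entity and property names here are illustrative, not from the video):

```csharp
using System.ComponentModel.DataAnnotations;

public class User
{
    public int Id { get; set; }
    public string Username { get; set; } = "";

    // [Timestamp] maps this property to a SQL Server rowversion column.
    // EF Core includes it in the WHERE clause of UPDATE/DELETE statements
    // and throws DbUpdateConcurrencyException when another writer changed
    // the row in the meantime.
    [Timestamp]
    public byte[] RowVersion { get; set; } = [];
}
```

On SaveChanges, a stale RowVersion means zero rows matched the UPDATE, which EF Core surfaces as a DbUpdateConcurrencyException you can handle or retry.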

  • @iandrake4683
    @iandrake4683 9 hours ago +3

    I've built stuff like this manually. This will save a ton of time.

    • @tdgeeee
      @tdgeeee 8 hours ago

      good man

  • @icecreamstickmodel
    @icecreamstickmodel 10 hours ago +36

    You do not have an index on the username field. Your query seems to be doing a full table scan, and then you're hiding the real issue using caching.

    • @benwagner7422
      @benwagner7422 6 hours ago +3

      Aside from that, the title is misleading

    • @viktorsafar
      @viktorsafar 3 hours ago +1

      You cannot (or I guess you technically could, but good luck with writes) have indices for all possible columns the user can search on.

    • @antonzhernosek5552
      @antonzhernosek5552 20 minutes ago

      He's checking whether a username contains "nick", which means he doesn't look for an exact "nick" string; "nick" can be any part of the username. An index wouldn't save you here unless you want to implement full-text search functionality on the username column
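
The thread's point about infix matching can be illustrated with a plain sorted list, which is roughly what a B-tree index gives you (a sketch with made-up data):

```csharp
using System;
using System.Linq;

// A B-tree index is essentially sorted data. A prefix pattern ("nick%")
// can seek straight to the first candidate; an infix pattern ("%nick%")
// gives no ordering to exploit, so every entry must be examined,
// which is the same work as a full scan.
string[] sortedIndex = { "alice", "bob", "nickc", "nicolas", "zoe" };

// Prefix search: binary-search to the insertion point, then read forward.
int pos = Array.BinarySearch(sortedIndex, "nick");
int first = pos < 0 ? ~pos : pos;
bool prefixHit = first < sortedIndex.Length && sortedIndex[first].StartsWith("nick");

// Infix search: unavoidable scan of every entry, like LIKE '%nick%'.
string[] infixHits = sortedIndex.Where(u => u.Contains("nick")).ToArray();

Console.WriteLine($"{prefixHit} {string.Join(",", infixHits)}"); // prints "True nickc"
```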

  • @Arcadenut1
    @Arcadenut1 6 hours ago +6

    Someone should submit this to Code Cop for review...

    • @john5516
      @john5516 6 hours ago

      😂😂😂😂

    • @nickchapsas
      @nickchapsas  5 hours ago +1

      I'll make that video and double dip

  • @MahmoudBakkar
    @MahmoudBakkar 5 hours ago +7

    Agree with most of the comments; it's a handy package. But I'd rather say it's a fix for API performance.

    • @nickchapsas
      @nickchapsas  5 hours ago +2

      I'm sitting on this title for the next video

  • @dBug404
    @dBug404 12 hours ago +8

    This is interesting. But I wonder what the limitations are.
    As I understood it, the caching happens in the user's browser based on the ETag. So if you have an API with user/role-based permissions that returns different data, and a user uses different accounts in the same browser, the user would see invalid cached data.
    edit: nvm, after reading the project page: the ETag consists of three values, AssemblyWriteTime / SQL timestamp / optional suffix... so you have to set the suffix to the userId/tenantId or whatever is suitable

    • @nickchapsas
      @nickchapsas  11 hours ago +1

      Yes you can heavily customize the logic to your needs
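
The suffix idea from this thread could look roughly like this (a hedged sketch: the UseDelta call and its suffix parameter follow the Delta README as paraphrased above; check the current docs before relying on the exact signature):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The ETag is composed of AssemblyWriteTime / SQL timestamp / optional
// suffix. Folding the current user's identity into the suffix keeps one
// account's 304 responses from serving another account's cached data
// in the same browser.
app.UseDelta(suffix: httpContext => httpContext.User.Identity?.Name);

app.MapGet("/users", () => Results.Ok(Array.Empty<object>()));
app.Run();
```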

  • @jarek_rudnik
    @jarek_rudnik 10 hours ago +9

    Nick established Dometrain just to get one million users and take the data for this demo :) Respect!

    • @nickchapsas
      @nickchapsas  10 hours ago +4

      🤫🤫🤫

  • @ChrisWalshZX
    @ChrisWalshZX 11 hours ago +11

    So, I'm guessing that Delta is checking and comparing the maximum rowversion value in a table, but what if a row is deleted? Will the change be detected by Delta and force a requery?
    Also, could it just grab and update those rows whose rowversion has been updated and is now > the last max cached value?

    • @BillyBraga
      @BillyBraga 11 hours ago +2

      @@ChrisWalshZX it's looking at the max version of the db. I'm guessing a delete increases the db version.

    • @FrancoisduPlessisIsAwesome
      @FrancoisduPlessisIsAwesome 9 hours ago +2

      Deleting a row does not update the max row version (in most cases, unless you delete the row with the current max version). What you would normally do in the case of deletes is check the count of the items in your "cache": if the max rowversion is the same but the count is different, then you know the data has changed.

    • @timramone
      @timramone 6 hours ago

      You can just mark your data as deleted by modifying it; you'll have the timestamp changed that way

    • @Songfugel
      @Songfugel 6 hours ago

      You can check for this change faster than checking the delta. Also, rows are usually not actually removed even when they are deleted; they are just marked as deleted. Permanently removing rows is a much bigger operation and almost always leads to big problems if done outside of controlled runs

    • @gileee
      @gileee 6 hours ago

      @@ChrisWalshZX If you're doing a MAX(rowversion) it's easy to add an extra COUNT(*) to the select.
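
The MAX-plus-COUNT workaround discussed in this thread could be handwritten like so (this reflects the commenters' suggestion, not Delta's documented behaviour; the table name and SQL are illustrative):

```csharp
using System;
using Microsoft.Data.SqlClient;

// Build a cache token that changes on inserts/updates (MAX advances) and
// also on deletes (COUNT drops), covering the case where removing a row
// does not advance the maximum rowversion.
static string ComputeUsersEtag(SqlConnection connection)
{
    using var command = new SqlCommand(
        "SELECT MAX(RowVersion), COUNT(*) FROM dbo.Users", connection);
    using var reader = command.ExecuteReader();
    reader.Read();
    string maxVersion = reader.IsDBNull(0)
        ? "empty"
        : Convert.ToHexString((byte[])reader.GetValue(0));
    int rowCount = reader.GetInt32(1);
    return $"\"{maxVersion}-{rowCount}\"";
}
```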

  • @silentdebugger
    @silentdebugger 7 hours ago +1

    This seems neat but also, as other people have said, perhaps of limited usefulness, because the first client load is still slow and it doesn't help the backend much if you have a lot of clients. It would be nice to see some kind of memcached that was aware of rowversion, so the cache could be distributed to all users querying the same data.

  • @trojakm
    @trojakm 3 hours ago

    Now this is actually very cool, even if limited to specific scenarios. Still, ingeniously simple.

  • @adambickford8720
    @adambickford8720 9 hours ago +4

    Does this work if I'm loading an entire object graph via an ORM? Wouldn't it essentially have to do this for every relation in the graph that's being accessed? How would it determine that?

  • @pablodemono6831
    @pablodemono6831 8 hours ago +5

    What about relationships (many-to-one, etc.)? For example, "user" is linked to "group" and someone changed the group name

    • @johanheyvaert
      @johanheyvaert 4 hours ago

      I have the same question. 😀

  • @przemek265
    @przemek265 8 hours ago +1

    In Postgres, you can use GIN/GiST indexes if you don't want to deal with a cache. They will speed up queries a ton in cases like this one

    • @carlosdelvalle5417
      @carlosdelvalle5417 3 hours ago

      Thanks! I’ll look into it!

  • @weluvmusicz
    @weluvmusicz 11 hours ago +7

    What about PostgreSQL, MySQL or Sqlite?

  • @BasuraRatnayake
    @BasuraRatnayake 10 hours ago

    Wow, a fantastic and easy-to-use library, thanks for sharing Nick

  • @lalithprasadsrigiriraju
    @lalithprasadsrigiriraju 10 hours ago

    This one is pretty cool, thanks Nick and Simon

  • @NOG6669
    @NOG6669 8 hours ago

    Now that was a great discovery, thanks for the share!!

  • @foonlam7134
    @foonlam7134 9 hours ago +2

    This is not practical if I cannot change the table structure in the database. I just want to speed up my queries, not change any tables, as that is outside my access.

  • @macmcmillen6282
    @macmcmillen6282 4 hours ago

    Would like to see a video on how to upload multiple photos to Azure blob storage as fast as possible. Would like to see how you would do that, Nick. Thanks for the helpful videos.

  • @keithjairam8452
    @keithjairam8452 7 hours ago

    Great tip! Will check it out.

  • @theMagos
    @theMagos 3 hours ago

    The title should be about improving request performance, not database performance. You'd add an index for that.

  • @local9
    @local9 11 hours ago

    Ok, that explains why I've seen this library on my GitHub wall...

  • @Karol-g9d
    @Karol-g9d 14 minutes ago

    Another use: AI could just look at the cache time and warn that a change likely occurred because the query ran at 1000 ms instead of an estimated less than 50 ms

  • @mohammadtoficmohammad3594
    @mohammadtoficmohammad3594 9 hours ago

    Thank you, very useful library

  • @mattbristo6933
    @mattbristo6933 11 hours ago +4

    How does this work if you are using a graph of different tables in a single query?

    • @weluvmusicz
      @weluvmusicz 11 hours ago

      Don't do this anymore please :)

    • @Adiu72
      @Adiu72 11 hours ago +4

      Don’t do what? Join tables?

    • @mattbristo6933
      @mattbristo6933 10 hours ago

      @@weluvmusicz What, don't include data from multiple tables?

  • @Vosoo-e9r
    @Vosoo-e9r 12 hours ago

    Perfect as always. Thnx!

  • @Thompsoncs
    @Thompsoncs 10 hours ago

    Nick, are you planning to do a video on the new MS testing platform and x/n unit integrations, and whether it's actually ready for use?

    • @nickchapsas
      @nickchapsas  10 hours ago +1

      I am

  • @adamstawarek7520
    @adamstawarek7520 12 hours ago +2

    Great stuff but what about more complex scenarios with joins?

    • @nickchapsas
      @nickchapsas  12 hours ago +2

      Same thing. It doesn't matter. As long as you have the column in the tables of interest, this will work with joins, stored procedures, or anything else

    • @local9
      @local9 11 hours ago

      From what I can see, the library is using SQL Change Tracking, so if the tables in the joins haven't changed, there is no need to run again. You can even set which tables to track, so that one table that updates every second can be ignored.

  • @AtikBayraktar
    @AtikBayraktar 6 hours ago

    Kinda what Blazor is now doing for static file caching with ETags, genius. Btw, rowversion is used for optimistic concurrency in general.

  • @Reellron
    @Reellron 6 hours ago

    So this only comes in if the same user does the same search with the data being exactly the same? Then I don't see this having a big impact generally at all. It would be interesting if the data was patched with the updated values instead of retrieving everything when a single value has changed. This would probably only be useful for non-indexed searches though, such as name like '%nick%'.

  • @BillyBraga
    @BillyBraga 12 hours ago +2

    So basically a single modification in any table resets the cache for all the db?

    • @nickchapsas
      @nickchapsas  11 hours ago

      Effectively yeah, but it doesn't really "reset" the cache so much as say "data has changed in some way, so read the new version"

    • @BillyBraga
      @BillyBraga 11 hours ago

      @nickchapsas OK. And yeah, it doesn't go into every browser to reset the cache 😁

  • @АндрейЧепкунов-и3н
    @АндрейЧепкунов-и3н 27 minutes ago

    The same thing can be done using UUIDv7 as the primary key in your tables; you don't need an extra field then.

  • @KrisTenRob
    @KrisTenRob 5 hours ago

    Is there a security concern with having the data stored in the browser? I assume there is probably also a time limit on how long we can store the data in the cache.

    • @nickchapsas
      @nickchapsas  5 hours ago +3

      You already serve the data to the browser. If it was ok to serve it the first time, it is ok to cache it.

  • @Sergio_Loureiro
    @Sergio_Loureiro 1 hour ago

    5:29 This is a 304 request...😂😂😂😂😂

  • @nathanharris3916
    @nathanharris3916 10 hours ago +1

    Will this add much value if the DB table has fast changing data?

    • @nickchapsas
      @nickchapsas  10 hours ago +4

      No this is for read heavy scenarios. That being said, I'm sure every DB has at least a few tables that are more read heavy than others

    •  5 hours ago

      @@nickchapsas Isn't rowversion database global? Meaning that any write to any rowversion table will increase the global rowversion which would 'break' any caching. Or does Delta do MAX(rowversion) on the individual table?

  • @FrancescoCaprio-l6t
    @FrancescoCaprio-l6t 10 hours ago

    holy mother of performance

  • @micha3712
    @micha3712 8 hours ago

    As I understand it, the RowVersion is calculated from the entire database? Is it possible to map a particular endpoint only to the RowVersion from a specific table?

  • @justgame5508
    @justgame5508 6 hours ago

    Client-side caching, nice!

  • @payamism
    @payamism 8 hours ago

    What if I don't have a UI, and I just have a background service calling a stored procedure which has to insert or update?

  •  10 hours ago +1

    Nice package, but there are a few solutions that immediately came to my mind that are better than this.
    - If the table is mostly for reading, adding a key (index) to the DB will make querying faster.
    - For your example, only the first query should take 1 second for the user, because even if the row has changed, we still have the cached version of the row. So we have an id, and using that we can access the row again in a millisecond, unless the user changed the username (which is very rare). I don't know if the package has customizations for that, but it could make a big difference.

    • @gileee
      @gileee 6 hours ago +2

      This doesn't optimize queries that much; it's more about not sending a JSON body over the wire if your user already has the most up-to-date data in their browser. Although it does speed up queries in the sense that data isn't fetched from the DB until the rowversion is compared to the request header value, and it's only actually loaded when it's out of date.

  • @sadralatif
    @sadralatif 5 hours ago

    So am I missing something? For example, I have 3 tables that have no relation to each other, and all of them have RowVersion. Changing 1 row in one table causes the ETags for requests to any of the tables to become invalid.

  • @JohanNordberg
    @JohanNordberg 2 hours ago +1

    Very misleading title. ”How to hide bad db performance for a user that makes multiple requests” would be more true.

    • @nickchapsas
      @nickchapsas  2 hours ago

      So caching should never be referred to as a performance improvement but rather just hiding bad performance. Got it

  • @Trinita75
    @Trinita75 11 hours ago

    Very interesting, even if SQL Server actually does the magic.
    It would be interesting to have a [delta]-like decorator to apply to endpoints

    • @nickchapsas
      @nickchapsas  11 hours ago

      You have that with shouldExecute as well as the group UseDelta method

  • @WeaselGreasel
    @WeaselGreasel 4 hours ago

    How about memory consumption? Is there a noticeable difference?

  • @heliogatts
    @heliogatts 11 hours ago +1

    How would this work with EF migration scenarios?

  • @trink7703
    @trink7703 2 hours ago

    Can you implement this with Angular?

  • @alizia7114
    @alizia7114 11 hours ago +1

    What if you layer your db rows over DTOs?

    • @nickchapsas
      @nickchapsas  11 hours ago +1

      That's totally fine. This will still work

  • @mamicatatibe
    @mamicatatibe 10 hours ago

    I wonder how much additional storage a new column like this would require overall if you added it to most of your tables and they have millions of records... and would there be a performance hit when updating/recalculating the column value when something changes in the row (ok, that part is probably fine if it's essentially just a timestamp)

    • @nickchapsas
      @nickchapsas  10 hours ago

      It's a timestamp column, so 8 bytes per row

    • @gileee
      @gileee 6 hours ago +1

      You'll probably have a version column anyway for concurrency issues (optimistic/pessimistic locking). The only thing I don't like is that it seems tied to the rowversion timestamp mechanic of SQL Server, and I prefer Postgres and SQLite.

    • @mamicatatibe
      @mamicatatibe 6 hours ago +1

      @@gileee yeah, true. I guess if it can be configured to rely on an existing column/setup without the need for DB changes, it would be pretty good

  • @TechAndMath
    @TechAndMath 3 hours ago

    Do you get paid or any financial benefit for promoting this package?

    • @nickchapsas
      @nickchapsas  3 hours ago +1

      If I did I would have to legally disclose it. I just like the package and Simon.

  • @DavidSmith-ef4eh
    @DavidSmith-ef4eh 9 hours ago +1

    Does it work on delete? I assume it sorts the rowversions descending and fetches all the rowversions that are greater than the largest one cached..

    • @gileee
      @gileee 6 hours ago +1

      I think this selects the MAX(rowversion), and if the data wasn't changed after the last fetch (which is specified by a request header), the browser just gets a 304 Not Modified response instead of a whole body. Otherwise the request is handled the same as if Delta weren't used (well, other than an extra ETag header added to the response for the next fetch request).
      I guess deletes could be handled by also selecting COUNT(*) together with the MAX.

    •  5 hours ago +1

      That is a good question. If a delete does not increase the rowversion, then the cached version is invalid.

    • @DavidSmith-ef4eh
      @DavidSmith-ef4eh 5 hours ago +1

      They probably get a count, I guess.. Tbh, I've been using a similar system for years. My row_version number is just called updated_at :D

    • @gileee
      @gileee 5 hours ago +1

      @@DavidSmith-ef4eh I did go through the documentation on GitHub and there's no mention of a count lol. It also only works with SQL Server, with its rowversion timestamp mechanism.

    • @DavidSmith-ef4eh
      @DavidSmith-ef4eh 3 hours ago

      @@gileee That's the optimistic update system from MSSQL. Never used it tbh, but it seems like a good system; Microsoft surely has its reasons for using it instead of locking rows and tables.
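
The handshake described in this thread boils down to a small decision, sketched here as a pure function (a mirror of the behaviour described above, not Delta's actual source):

```csharp
using System;

// The client echoes the last ETag it saw via If-None-Match; the server
// derives the current ETag (e.g. from MAX(rowversion)) and short-circuits
// with 304 Not Modified when they match, skipping query and body entirely.
static int StatusFor(string? ifNoneMatch, string currentEtag) =>
    string.Equals(ifNoneMatch, currentEtag, StringComparison.Ordinal) ? 304 : 200;

Console.WriteLine(StatusFor("\"8-42\"", "\"8-42\"")); // prints 304: serve from browser cache
Console.WriteLine(StatusFor("\"7-42\"", "\"8-42\"")); // prints 200: data changed, full body
Console.WriteLine(StatusFor(null, "\"8-42\""));       // prints 200: first request, body + ETag
```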

  • @kpakozz96pyc
    @kpakozz96pyc 11 hours ago +2

    Zoomers invented indexes and caches?

  • @RudiOnRails
    @RudiOnRails 6 hours ago

    Guessing this would not work with server-side pagination.

    • @nickchapsas
      @nickchapsas  6 hours ago

      It can

  • @dominic3606
    @dominic3606 1 hour ago

    A cache does not fix your DB performance.. it might reduce network IO, but that's it. The DB is still slow

    • @nickchapsas
      @nickchapsas  1 hour ago

      "Database performance issues" not "database performance". Important difference there

  • @mDoThis
    @mDoThis 11 hours ago

    And what about client-server desktop apps? Is there anything like this for the desktop approach? I've made my own caching solution that behaves very similarly to Delta, but maybe someone smarter did it better so I don't have to play with the cache every time

    • @nickchapsas
      @nickchapsas  11 hours ago +1

      You can use Replicant there to have the same logic on the HttpClient

  • @onlythestrongsurvive
    @onlythestrongsurvive 1 hour ago

    This is only for SQL Server. Dang

  • @thang.huynh.2608
    @thang.huynh.2608 12 hours ago

    Great. How about this one vs Redis?

    • @nickchapsas
      @nickchapsas  12 hours ago +5

      This is way more efficient for your architecture because it's in the user's browser instead of something you have to manage, and you don't really have to worry about how much data you are caching

    • @thang.huynh.2608
      @thang.huynh.2608 11 hours ago

      @@nickchapsas Thanks. I think a combination of both seems like really good stuff (Redis for other things). But the boss may not be willing to apply something this new right away, lol

  • @davidjackson148
    @davidjackson148 4 hours ago

    got me all excited :) :(

  • @stranger0152
    @stranger0152 12 hours ago

    Does this work with rowversion columns that are a date type? I have been working with a lot of ERP stuff, and we always have a ROWVERSION column that is a date type. If this works with such columns, this is really great.

    • @nickchapsas
      @nickchapsas  12 hours ago

      Yes, it should work with these columns too, but maybe check yourself. It can actually work with two different such mechanisms, as described in the docs

  • @venussmodzhd1886
    @venussmodzhd1886 10 hours ago

    Is it only supported on .NET 9?

  • @adriano.digiere
    @adriano.digiere 11 hours ago

    Does the "timestamp" column necessarily have to be named "RowVersion", or can it have any other name?

    • @nickchapsas
      @nickchapsas  11 hours ago +1

      AFAIK you don't have to call it rowversion

  • @Karol-g9d
    @Karol-g9d 22 minutes ago

    This delay fix ? Would like fix many aka bugs

  • @estebanpacheco7102
    @estebanpacheco7102 8 hours ago

    Misleading title but nice package.

  • @MrMattberry1
    @MrMattberry1 3 hours ago

    So kinda like MemoryCache

  • @ndasss9563
    @ndasss9563 10 hours ago

    Wow bravo

  • @andersborum9267
    @andersborum9267 10 hours ago

    While an interesting concept, it's a pass. I would implement this functionality differently, using an approach that fits each specific project more closely.

  • @RedEye_Developers
    @RedEye_Developers 12 hours ago

    Will this only work in web applications?
    What about mobile applications?

    • @nickchapsas
      @nickchapsas  12 hours ago +3

      Check the video until the end

  • @vitalydushkin
    @vitalydushkin 3 hours ago

    Another video with a clickbait title, where DB performance is actually not improved at all and the idea is "just install that package" instead of telling you how to actually make it work.

    • @nickchapsas
      @nickchapsas  3 hours ago

      You can't have DB Performance issues when you don't call the DB anymore :)

  • @timramone
    @timramone 6 hours ago

    I don't see this being useful in real applications with many data modifications in different tables.

    • @nickchapsas
      @nickchapsas  6 hours ago

      Real applications also have read heavy scenarios for which this is perfect

  • @ostaporobets7313
    @ostaporobets7313 7 hours ago +1

    This is very hacky! Someone not familiar with the project will ask: "what is that rowversion? how is it hooked up?" If you are not familiar with Delta - good luck. I'm wondering how Delta performs when you have joins. Do you need a rowversion for each table?
    There are other mechanisms for dealing with wildcard queries.

    • @krccmsitp2884
      @krccmsitp2884 7 hours ago +2

      Don't you document architecturally significant decisions?

    • @gileee
      @gileee 7 hours ago +2

      Enterprise software has a "version" column anyway for catching concurrency issues. What's the problem?

    • @ostaporobets7313
      @ostaporobets7313 6 hours ago +1

      @@gileee "Enterprise software has a "version" column anyway" - some does and some doesn't. What you are referring to is optimistic locking; there are other ways.
      Problem: you should never ever tie your FE directly to the database, otherwise you are in for a surprise down the road

    • @gileee
      @gileee 6 hours ago

      @@ostaporobets7313 I know the name. A version column is the standard method since it not only works with users actually hitting an update at the same time (like an actual race condition), but also prevents users from fetching an entity, waiting a month and then pushing an update that then overwrites all the other changes that occurred in the meantime. Transactional locks can't help with that.

    • @gileee
      @gileee 6 hours ago

      @@ostaporobets7313 There are no other ways when talking about distributed systems where users don't hold connections open to the db until completion. I think Nick used a web app here to drive that point home. Maybe not intentionally, but it shows a specific scenario where a method like this works well. I called it what I did because saying optimistic locking makes people think it's only about row locks.

  • @mauno525
    @mauno525 8 hours ago

    Looks like another useless bell and whistle. It's just a common cache, and in case you have higher load and fast-changing data, you will not see any benefit, because the cache will be invalidated every second or so.

    • @gileee
      @gileee 7 hours ago

      Yes, that's what the documentation says

  • @TheJubeiam
    @TheJubeiam 6 hours ago

    That's really not a good title.

    • @nickchapsas
      @nickchapsas  6 hours ago

      You clicked

  • @МаксимЧистяков-ч7ц
    @МаксимЧистяков-ч7ц 11 hours ago +1

    I think the "rowversion" field should be named "ModifiedDateTime", "ChangedDateTime" or "UpdatedDateTime" (or the short versions "Modified", "Changed", "Updated", "LastUpdate"), using a single attribute for that on any field instead of creating a new field

    • @nickchapsas
      @nickchapsas  11 hours ago +1

      You can use a date field as well if you want. You're not limited to RowVersion

    • @onistag
      @onistag 9 hours ago

      @@nickchapsas Interesting. Does this mean we can tie Delta to a temporal table field like Period Start?

  • @pierre9368
    @pierre9368 7 hours ago

    - Doubtful?
    - Yeah, it's client-side only.
    - So useless!

  • @saberint
    @saberint 9 hours ago +1

    Dear god, no. Database calls should be via stored procedures wherever possible. That way, database devs can work on the database; they know the best way to minimise memory use and speed up calls. My job as a C# dev is C# efficiency. Furthermore, where and how is security implemented?

    • @gileee
      @gileee 7 hours ago

      This is an optimization that makes it easy to use the browser's built-in cache, so data doesn't even need to be re-fetched if it didn't change. The only DB thing it uses is the rowversion from SQL Server.

    • @saberint
      @saberint 6 hours ago

      @@gileee no, I get that. What I do not get is: how does my browser know that the data in the database has not changed? There has to be some mechanism to check, or am I basically flagging some data as needing no recheck?

    • @gileee
      @gileee 6 hours ago +1

      @@saberint When you do the first fetch, you get a header (ETag, I think, which is automatically added by Delta) that specifies the last change date of that data. Your browser knows how to use this header for subsequent requests automatically, and the Delta lib intercepts the request, fetches the MAX(rowversion) from the table the endpoint is targeting, and compares it to the header. If the header is the same as the max, you just get a Not Modified response instead of the whole JSON body. If the data was changed, then you get the whole response as if Delta wasn't even there, but it again automatically sets the header in the response.

    • @saberint
      @saberint 6 hours ago +1

      @@gileee ok, so from your explanation it still hits my DB; it just does the lookup on the rowversion to see if the data has changed. So I can see it speeding up data transmission, but it's not speeding up my SQL query, which on a properly structured database with 1 mill records should be 1 ms anyway. Stored procedures all the way for me: the SQL pre-compiles, and you as the database dev can lock down access outside of stored procedures. It's decoupled from any business logic, which means it's just like an API, just for your DB.
      Thanks for the extra information about the product. I sound negative (but that's because I can't think of any use cases where I could use this), but your insight was much appreciated. Cheers 🍻

    • @gileee
      @gileee 5 hours ago

      @@saberint That's true, but like I said, to me this isn't about speeding up queries, it's about speeding up response times. For large JSON bodies, most of the time of the request is spent serializing, then transmitting the data over the internet, then deserializing on the client side. This prevents all of that and basically just returns headers.

  • @devmanasseh
    @devmanasseh 12 hours ago +5

    first.

    • @ml_serenity
      @ml_serenity 12 hours ago +9

      ordefault

    • @nickchapsas
      @nickchapsas  12 hours ago +5

      Single

    • @sunefred
      @sunefred 12 hours ago +1

      InvalidOperationException

    • @onlythestrongsurvive
      @onlythestrongsurvive 1 hour ago

      @@ml_serenity you rock!!

    • @Sergio_Loureiro
      @Sergio_Loureiro 1 hour ago

      @@nickchapsas Async

  • @josedavidriosgerena7225
    @josedavidriosgerena7225 10 hours ago

    Is it only supported on .NET 9??

    • @onistag
      @onistag 9 hours ago +1

      Yes, the NuGet packages only target .NET 9