High Performance Postgres with Rust, Cloudflare & Hyperdrive

  • Published 17 Dec 2024

Comments • 11

  • @kooshini • 2 months ago • +2

    Thanks for this, very useful

  • @ryanswatson • 2 months ago

    Is 600ms considered performant for database queries?
    Perhaps I am missing something, but local Postgres queries should be < 10ms.

    • @kooshini • 2 months ago

      I agree, it’s certainly not fast, but it’s connecting to a 3rd party each time. I’m using D1, so at least everything stays on Cloudflare’s network, but Neon is much nicer. I’m going to test Hyperdrive with some queries that can be easily cached to see how much faster it can be, but if you just spun up a VPS with Postgres and Rust and ran the same API, you’d probably see ~50ms. A comparison would make a nice video for the channel.

    • @ryanswatson • 2 months ago

      @kooshini Agreed, I understand it's connecting to a 3rd party every time.
      I just couldn't imagine making a site wait 0.6 seconds before it gets the results of an API query to process. On a shared VPS, I imagine it would be sub-10ms every time. Just pinging one of my own servers (in my local country) is under 20ms every time... so I'm not sure why connecting to a 3rd party should take so long.
      I mean, I just pinged a server on the _absolute_ opposite side of the world from me, the furthest server I could find, and I got 341ms.

    • @kooshini • 2 months ago • +2

      @ryanswatson Pinging is just a single packet, though; this is an actual response that fetches the data from the database and returns it, not just an ICMP echo. Using a worker or lambda, what you see is roughly: network latency + worker cold-boot time + function run time + latency to the remote DB + speed of the DB query + the time to get the response back (see the sketch below the thread for one way to measure the pieces).

    • @ryanswatson • 2 months ago

      @kooshini My point wasn't that it should take the same time as a ping. But given a local ping is ~20ms, spending 30× that to process a request over the network and return the response, for a query that would take under 10ms locally, seems entirely unreasonable under any conditions. If there's a cold-boot time involved, that's understandable... but it's something I'd imagine is best avoided for website API requests unless there were a local cache of the data kept fresh by webhooks.

    • @serverlessjames • 2 months ago • +1

      @ryanswatson I mean, comparing something to running locally isn't a fair test. Even with a shared VPS, if you're running your application and database on the same virtual server, you now have a single point of failure for your application and your state, plus the overhead of managing the server yourself. I've been there, and you're one misconfiguration away from everything going away.
      600ms is an absolute steal for a database that auto-scales, lets me create dev branches, and leaves me with next to zero operational overhead. Add Cloudflare Workers on top, still with zero operational overhead, and I'll take that over a shared VPS any day of the week.
      If a project genuinely needed 20ms latency, sure, go wild operating virtual servers. On almost every project I've ever worked on, a ~500ms response was imperceptible to most people and worked just fine, and I'd guess that applies to most projects out there. But it's all trade-offs, right? Getting close to the metal for a 20ms response is great, but there are things you lose by doing that.
      Thanks for your input @kooshini
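
A minimal sketch of how you might measure the breakdown kooshini describes above. It is not code from the video: it assumes a plain Rust binary using the tokio and tokio-postgres crates, with a DATABASE_URL environment variable pointing at the Hyperdrive (or direct Neon) connection string. NoTls is used for brevity; a real Neon connection would need a TLS connector such as the postgres-native-tls crate. The point is to time the connection handshake, which Hyperdrive's pooling exists to amortise, separately from the query round trip:

```rust
// Hypothetical latency probe (not from the video): times the Postgres
// connection handshake and the query round trip separately.
use std::time::Instant;
use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed env var: the Hyperdrive (or Neon) connection string.
    let url = std::env::var("DATABASE_URL")?;

    // Connection setup: TCP plus the Postgres handshake. This is the cost
    // Hyperdrive's connection pooling is designed to pay once, not per request.
    let t0 = Instant::now();
    let (client, connection) = tokio_postgres::connect(&url, NoTls).await?;
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {e}");
        }
    });
    let connect_ms = t0.elapsed().as_millis();

    // Query execution: one round trip to the database plus the query itself.
    let t1 = Instant::now();
    let row = client.query_one("SELECT 1", &[]).await?;
    let query_ms = t1.elapsed().as_millis();

    let one: i32 = row.get(0);
    println!("connect: {connect_ms} ms, query: {query_ms} ms, result: {one}");
    Ok(())
}
```

Run from a VPS sitting next to the database, both numbers should be single-digit milliseconds; run from a distant worker, the connect time typically dominates, which is what makes the ~600ms end-to-end figure discussed above plausible.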