A Video About Queues

  • Published on 23 Nov 2024

Comments • 153

  • @spageen
    @spageen 5 หลายเดือนก่อน +295

    I'll save this video for later

    • @amsyarasyiq
      @amsyarasyiq 4 หลายเดือนก่อน +3

      video queued

  • @davidsiewert8649
    @davidsiewert8649 5 หลายเดือนก่อน +123

    We don't have to care about dropping requests, because our app with only

    • @saelorasinanardiel8983
      @saelorasinanardiel8983 4 หลายเดือนก่อน +4

      three days later: "So our little app went viral"

  • @abhisheknavgan6829
    @abhisheknavgan6829 5 หลายเดือนก่อน +50

    this was the first time I had fun reading and understanding a blog post.

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน +12

    • @yt-sh
      @yt-sh 5 หลายเดือนก่อน +4

      @@samrosewho it must have been tough to make that article bug free, how did you do it

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน

      @@yt-sh It isn't bug free, but I think it's reasonably close. You do actually see one bug in the video, I'll leave it as an exercise to the reader to find it.
      I spend a lot of time on these and test them extensively on a range of devices and browsers. I also have wonderful friends who help me test them, too. There are a few automated tests but nothing substantial.

  • @roganl
    @roganl 4 หลายเดือนก่อน +4

    As you mentioned in passing, it would seem trivial to add an initial processing branch that filters "expired/timed-out" requests providing an early exit, and rapid dequeuing of these abandoned requests. Similarly, why isn't there a comparison and drop on enqueue, that prioritizes oldest entries, which at worst would restart the TTL for its queue slot. Yet another thought, dequeue capacity is the metric to gauge - how long does a request take to process? Once that time, multiplied by the depth has elapsed, the system has gone pear shaped and additional capacity should be allocated.
    I am reminded of all the priority queuing knobs at the network layer (including WRED and RED) that exist but are both largely unused on the "public" net, and used only in limited fashion on corporate networks, to accommodate RTP media traffic. It was telling that in the US and parts of Europe, ISPs strongly preferred simply adding capacity, rather than tinker with queuing. This was in large part because non-FIFO queues were hard to reason about, and turning the knobs only makes a difference on the edge cases - and when I say edge cases, think about the last person being shoved onto the subway by a train attendant in Tokyo. If you wanted a better "experience" then adding capacity, i.e. ability to handle/dequeue requests, is the only game in town.
    One more thought; Fry's Electronics utilized a single queue for its dozens of checkout counters - effectively a shared scalable processing queue. This was effectively human scale load balancing.
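
    A minimal sketch of the "early exit for expired requests" idea above, in Python. The deadline field and queue layout are made up for illustration; the point is just that checking a timestamp at dequeue time is far cheaper than doing the real work:

      import time
      from collections import deque

      queue = deque()  # each entry: (deadline, request); deadline set at enqueue time

      def dequeue_next():
          """Pop entries until we find one whose client is still waiting."""
          while queue:
              deadline, request = queue.popleft()
              if time.monotonic() >= deadline:
                  continue  # expired or abandoned: drop it cheaply, no processing
              return request
          return None  # queue is empty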

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      What do you mean by "why isn't there a comparison and drop on enqueue"? And what do you mean by "dequeue capacity"?

    • @ShinyVeggie
      @ShinyVeggie 4 หลายเดือนก่อน +1

      Mythbusters did an episode on using one long queue for multiple store checkouts, and the results were that average wait time was higher, but average satisfaction was also higher due to fairness.

  • @kenscode
    @kenscode 5 หลายเดือนก่อน +17

    We just finished learning priority queues and now working with weighted graphs and trees in school. A lot of people say you don't need to know these things, but they really are used in a lot of applications

    • @lazymass
      @lazymass 5 หลายเดือนก่อน +5

      You don't need to learn them all, you can definitely do some quick research before deciding the architecture of whatever you are working on. You should do that research anyway, since you can always find a better solution. So I kinda get why people say that. But it's still highly preferable to at least know this stuff exists.

    • @kenscode
      @kenscode 4 หลายเดือนก่อน +2

      @@lazymass I agree 100%. I got lucky and was able to go back to college and study this stuff, which I am highly grateful for because there is so much information out there now it's hard for me to stay focused and know what to study. It kind of keeps me structured. Lots of cool stuff out there 🙏

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +3

      I've long been a defender of learning data structures and algorithms. I don't think you necessarily need to have implemented them, and you certainly don't need to be able to do it from memory like so many companies ask you to in interview, but knowing what tools you have available to you helps you solve problems better.

    • @hellowill
      @hellowill 4 หลายเดือนก่อน

      i've yet to see a priority queue used in system design... we just have a high priority queue and normal queue.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@hellowill yep, a common approach due to its simplicity, and in a way it is a form of priority queue (if slightly crude). If I was going to go back and change one thing about this post it’d be to include this because so many people keep bringing it up 😅

  • @mduvigneaud
    @mduvigneaud 4 หลายเดือนก่อน +2

    I wrote a queuing system some years ago that allowed a mix of all of those strategies plus some other features. It was for data processing, though. I was converting a large, monolithic batch process (a couple hours to run) to be parallel and closer to realtime (sub second.)

  • @cherubin7th
    @cherubin7th 4 หลายเดือนก่อน +6

    I wonder about using multiple queues in parallel? Like the priority queue on 2B2T.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +8

      Yeah, not an uncommon strategy. I didn't include it because it would have been a bit more difficult to visualise alongside the single queues 😅

  • @bone_broth_000
    @bone_broth_000 4 หลายเดือนก่อน +3

    The amount of care and polish Sam puts into his posts cannot be overstated. Really solid stuff!

  • @rashim
    @rashim 5 หลายเดือนก่อน +15

    Main thing I am taking away from this video...
    Sam is a FKKING LEGEND !!!

  • @danielgilleland8611
    @danielgilleland8611 4 หลายเดือนก่อน +17

    "... unless it was Jira and it was my job..." 🤣

  • @tuna5618
    @tuna5618 5 หลายเดือนก่อน +14

    this article was incredible

  • @FryGuy1013
    @FryGuy1013 4 หลายเดือนก่อน +1

    This reminded me a lot of the visualizations on the Nicky Case blog (particularly, the "build a better ballot" one)

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +1

      I’m a big Nicky Case fan 😁

  • @thisaintmyrealname1
    @thisaintmyrealname1 4 หลายเดือนก่อน +1

    absolutely loved this, and thank you for introducing me to Sam! all blogs and web based learning material should be as interactive as this!!

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Nice to meet you 😊

  • @Paul-di9xv
    @Paul-di9xv 5 หลายเดือนก่อน +3

    I don't think FIFO is from the engineering world though; I think it was originally used as an inventory method, in retail for example

    • @ufffd
      @ufffd 4 หลายเดือนก่อน +1

      yep, it was required knowledge on the food handlers test at least a decade ago

  • @Winnetou17
    @Winnetou17 4 หลายเดือนก่อน +3

    Theo: Check out Sam!
    Me: Sam who ?
    Theo: Yeah!

  • @tangentfox4677
    @tangentfox4677 4 หลายเดือนก่อน

    The thing I'm stuck on is "what is a priority request" at the moment. I guess because I was only thinking of a basic user collab app, there is no priority.
    I guess if I have moderators, they should have priority at least!

  • @ShinyVeggie
    @ShinyVeggie 4 หลายเดือนก่อน

    Seems like a priority LIFO would be a smart choice. If the queue is full, you could also drop a normal request inside the queue if a priority request is coming in. Also instead of dropping new requests, maybe you could instead drop the oldest request in the queue, or have a 50/50 chance of dropping either.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      I also thought about this but only once I was near the end of writing the post. Never did try it, but it’s hard to imagine it being bad isn’t it? Priority LIFO with RED.

    • @ShinyVeggie
      @ShinyVeggie 4 หลายเดือนก่อน

      @@samrosewho Is RED better than dropping requests already in the queue? I suppose it's better to fail fast rather than waiting a long time and still failing in the end.
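
    A rough sketch of the "priority LIFO with RED" idea floated in this thread, in Python. The two priority levels, the capacity, and the drop rule are all illustrative assumptions, not how the article implements it:

      import random
      from collections import deque

      CAPACITY = 50
      stacks = {"high": deque(), "low": deque()}  # one LIFO stack per priority level

      def enqueue(request, priority="low"):
          size = sum(len(s) for s in stacks.values())
          # RED-style early drop: the fuller we are, the more likely we drop on arrival.
          if size >= CAPACITY or random.random() < size / CAPACITY:
              return False
          stacks[priority].append(request)
          return True

      def dequeue():
          for level in ("high", "low"):       # serve high priority first...
              if stacks[level]:
                  return stacks[level].pop()  # ...and newest-first within a level (LIFO)
          return None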

  • @IdoAloni123
    @IdoAloni123 4 หลายเดือนก่อน +2

    What if we have the priority queue, but when a priority request arrives we drop a low-priority request from the back of the queue, then add the new one after the last priority request? That is, if the queue is not already filled with priority requests.
    I think this may be better than the queues in this article.
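
    A rough sketch of that eviction rule in Python, modelling the single queue as two internal FIFOs and assuming just two priority levels (the names and the capacity are illustrative):

      from collections import deque

      CAPACITY = 10
      high, low = deque(), deque()  # served high-first, so together they act as one priority queue

      def enqueue(request, priority):
          if len(high) + len(low) < CAPACITY:
              (high if priority else low).append(request)
              return True
          if priority and low:
              low.pop()             # evict the newest low-priority request from the back
              high.append(request)  # admit the priority request in its place
              return True
          return False              # queue is full of priority requests: drop the arrival

      def dequeue():
          return high.popleft() if high else (low.popleft() if low else None)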

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +2

      In practice this queue management makes the most sense under load, and under load you probably don't want to be wasting cycles doing that sort of queue management. Simple usually trumps complex here.

    • @julesjacobs1
      @julesjacobs1 4 หลายเดือนก่อน

      @@samrosewho This type of queue management literally takes nanoseconds. Spending a few cycles is absolutely worth it.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@julesjacobs1 I’d love to see examples of it being done in the wild. I’ve no doubt people are doing it and there are workloads where it’s an appreciable optimisation, I just haven’t seen it myself.

    • @julesjacobs1
      @julesjacobs1 4 หลายเดือนก่อน

      @@samrosewho I'm pretty sure everyone who uses priority queues or even LIFO queues does it that way. The way it's implemented in this blog post is simply wrong.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@julesjacobs1 entirely possible. Dropping at the other end of the LIFO queue does seem objectively better. I tried, I guess 😅

  • @sarjannarwan6896
    @sarjannarwan6896 4 หลายเดือนก่อน

    Would be an interesting addition to account for request processing time. E.g. A ping vs something that does a bunch of database stuff / processing.

  • @JJCUBER
    @JJCUBER 4 หลายเดือนก่อน

    How about priority + RED, but LIFO for lowest priority? (Or, we could use LIFO within each priority bracket, especially if we have multiple priority levels.)

  • @samcalder6946
    @samcalder6946 5 หลายเดือนก่อน +10

    Queueing is Britain's national sport!

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน +4

      I’m glad someone said it. This article is a point of national pride.

  • @JS-fd5oh
    @JS-fd5oh 4 หลายเดือนก่อน

    There's an error with the Priority Queue demonstration and explanation. A priority queue will not let a checkout request with high priority drop, because it will replace a low-priority request in the queue. That's how a bounded PQ works by definition. It's impossible to lose a priority request if there's a low-priority request in the queue. You can only lose a priority request if ALL requests in the queue are priority requests.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +2

      I actually hadn’t considered this, believe it or not. I’m struggling to find a formal definition of priority queues that mandates pushing out lower priority items when full but it’s logical and strikes me as good default behaviour. No one has called me out for it yet but this article desperately skirts around having to explain TCP global synchronisation and congestion control, the primary motivations for RED.

  • @ivanmaglica264
    @ivanmaglica264 4 หลายเดือนก่อน

    LIFO is not a good option for PUT requests (or any state update), since it makes them out of order. You can't afford to switch cancel and resume actions, or updates to the same field with different data.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      The network isn’t going to guarantee you an ordering for PUT requests even if you issue them in sequence, so you’d want to have another mechanism for those anyway (revision ID, deterministic conflict resolution, etc)
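
    One common shape for the revision-ID approach mentioned above, sketched in Python against a made-up in-memory store: each update carries a revision number and stale writes are simply ignored, so out-of-order delivery stops mattering.

      store = {}  # field -> (revision, value); purely illustrative state

      def apply_put(field, value, revision):
          """Accept a write only if it is newer than what we already have."""
          current_rev, _ = store.get(field, (-1, None))
          if revision <= current_rev:
              return False  # stale or duplicate update: ignore it
          store[field] = (revision, value)
          return True

      apply_put("status", "resumed", revision=2)
      apply_put("status", "cancelled", revision=1)  # arrives late, rejected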

  • @NickHamilton88
    @NickHamilton88 5 หลายเดือนก่อน +25

    Queue the video

  • @Primalmoon
    @Primalmoon 4 หลายเดือนก่อน

    16:40 "Let's be real a priority request should never take 18 seconds. Like imagine hitting a checkout button, and 18 seconds later you finally get to checkout. That's not acceptable"
    Surprisingly, this is the opposite of how I would feel.
    While I wouldn't like it, I'm much more forgiving of such delays on stuff like a checkout because at that point I'm committed. I've been trained to not try to refresh the page or hit the button again when doing the scary operations for fear of getting duplicate charges to my cards and what not, so I'm much more likely to actually see the request to completion.
    But if I'm getting such delays on the non-important stuff, like simply showing me product pages in the first place or on adding stuff to my cart, then I'm much more likely to bounce off the website entirely since I'm not committed to anything yet.

  • @mcgruff0972
    @mcgruff0972 4 หลายเดือนก่อน +1

    FIFO was a term way before engineering. Great vid though!

  • @JimmyNguyen-h6o
    @JimmyNguyen-h6o 4 หลายเดือนก่อน

    doesn't Apache Kafka handle queues under the hood? // I'm still learning more about this stuff.

  • @codeChuck
    @codeChuck 4 หลายเดือนก่อน

    Those interactive examples for teaching stuff - they are GOLDeN :) Good work!
    Dropped requests - very fun :)

  • @eth3792
    @eth3792 4 หลายเดือนก่อน

    Fun drinking game: take a shot every time he geeks out over the visualizations (no but seriously they are incredible)

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +1

      Thank you ❤

  • @NithinJune
    @NithinJune 4 หลายเดือนก่อน +1

    british person’s dream

  • @ryanmatthews5771
    @ryanmatthews5771 4 หลายเดือนก่อน

    Was FIFO really a tech thing first? I learned about it for the first time prepping food for freebirds (restaurant)

    • @RyanAustinx98
      @RyanAustinx98 4 หลายเดือนก่อน +1

      No, it’s been used in asset management and inventory management long before.

    • @ryanmatthews5771
      @ryanmatthews5771 4 หลายเดือนก่อน

      @@RyanAustinx98 yeah that’s kinda what I expected

  • @Oktokolo
    @Oktokolo 4 หลายเดือนก่อน

    And then there is the elephant in the room:
    Don't actually process a disconnected or nearly timed-out request. Just drop it instead. This is a trivial optimization and should be the first thing to do.
    When trying to enqueue any request while the queue is full, replace the request waiting the longest with the new request. That way, it's still fair FIFO but when not everyone can be served, you still get a chance as the requests in front of you might be about to time out.
    You totally can scan the entire queue every time a request arrives to determine which one to drop. Worst you get is O(N). It's fine. Don't go all-in optimization fetish on request queues. The CPU shouldn't be the bottleneck in your load balancers.
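
    A sketch of the "replace the request waiting the longest" rule, assuming a plain bounded FIFO in Python. In a FIFO the longest-waiting entry is simply the head, so this particular case needs no scan at all:

      from collections import deque

      CAPACITY = 10
      queue = deque()

      def enqueue(request):
          if len(queue) >= CAPACITY:
              queue.popleft()    # drop the request that has been waiting the longest
          queue.append(request)  # the new arrival always gets a slot

      def dequeue():
          return queue.popleft() if queue else None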

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Trade-offs, innit. I'd love to see more things dropping timed-out requests by default, not sure why they don't. Not quite as sure about scanning the queue for each new request, it wouldn't surprise me if that has an overall negative effect under load and only marginal benefit at lower scale.

  • @aghileslounis
    @aghileslounis 5 หลายเดือนก่อน +1

    What an incredible article!

  • @anon_y_mousse
    @anon_y_mousse 4 หลายเดือนก่อน

    I guess this explains why all the balls at the bottom had stripes, because I kept forcing priority requests to drop just for funsies. Going into the hundreds kind of breaks it, though.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +1

      It should cap out at 100 and remove older requests to keep the CPU down. I did notice it can chug a bit when full on less powerful devices, which I wasn’t happy with but decided it was okay enough to ship.

    • @anon_y_mousse
      @anon_y_mousse 4 หลายเดือนก่อน

      @@samrosewho You don't have to tell me, I already know I've got a potato. Cool interactive article though.

  • @anwiseru9064
    @anwiseru9064 5 หลายเดือนก่อน +2

    omg the end of the article was so clever

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน

      It was my friend Colin’s idea. He said “wouldn’t it be fun if all my dropped requests were lying at the bottom” like… 2 days before the article was scheduled to be published. Had to rush to learn Matter.js and get it in, too good an idea not to. 😁

  • @abhisheknavgan6829
    @abhisheknavgan6829 5 หลายเดือนก่อน +4

    Your videos are getting high priority in my subscriptions nowadays 😂😂 (dropping other videos)

  • @drachebrume3257
    @drachebrume3257 5 หลายเดือนก่อน +4

    Video’s title brings a different vibe in French

    • @Deemo_codes
      @Deemo_codes 5 หลายเดือนก่อน

      What?

    • @tipeon
      @tipeon 5 หลายเดือนก่อน +2

      Ah ah! You're telling so much about yourself in this single comment.
      Don't forget to delete your browser history 😅

    • @MrDadidou
      @MrDadidou 5 หลายเดือนก่อน

      @@Deemo_codes YouTube auto-translates video titles sometimes. I'm in France but I switched my system to English. I end up with French video titles translated into English sometimes

  • @tangentfox4677
    @tangentfox4677 4 หลายเดือนก่อน

    I have one problem with RED: If the queue is below like 5% full or something, instead of dropping 5% of requests, none should be dropped. Why always have a minimum level of failure when you probably would always be able to handle all of those requests?
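
    For what it's worth, classic RED does have exactly this kind of floor: below a minimum threshold nothing is dropped, and the drop probability only ramps up between a minimum and a maximum threshold. A simplified sketch in Python (using instantaneous queue length rather than the averaged length real RED uses; the numbers are arbitrary):

      import random

      CAPACITY = 100
      MIN_THRESHOLD = 0.2 * CAPACITY  # below this, never drop
      MAX_THRESHOLD = 0.9 * CAPACITY  # at or above this, always drop
      MAX_DROP_PROB = 0.5             # drop probability just below the upper threshold

      def should_drop(queue_length):
          if queue_length < MIN_THRESHOLD:
              return False
          if queue_length >= MAX_THRESHOLD:
              return True
          # Ramp the drop probability linearly between the two thresholds.
          fill = (queue_length - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
          return random.random() < fill * MAX_DROP_PROB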

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      I didn’t really talk about it but you could set whatever thresholds you want. I’d recommend simulating them first, this sort of stuff has a habit of not being intuitive. For the post, I kept it simple 😊

  • @DMZT_dhruv
    @DMZT_dhruv 5 หลายเดือนก่อน +2

    I thought you started data structures lol

  • @FlorianEbeling
    @FlorianEbeling 4 หลายเดือนก่อน

    great post, great discussion of it

  • @sayemprodhanananta144
    @sayemprodhanananta144 4 หลายเดือนก่อน

    An engineer became bored, so he became a doctor 🙃

  • @JagdeepSinghKalsi
    @JagdeepSinghKalsi 4 หลายเดือนก่อน

    ThE pHp ExPeRiEnCe...
    Bah!
    Try sending many (hundreds or thousands or more) requests to a PHP server, and share the percentage of dropped requests.

  • @Kane0123
    @Kane0123 5 หลายเดือนก่อน

    Had to go and search for the other video to make sure I wasn't going crazy... loving Sam Rose's content

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน +1

      Thank you ❤

  • @fev4
    @fev4 4 หลายเดือนก่อน

    damn, this is an insta-save article and video

  • @codeChuck
    @codeChuck 4 หลายเดือนก่อน

    Drop the timed-out requests from the queue, don't let them go to the server.

  • @JohnMullee
    @JohnMullee 4 หลายเดือนก่อน

    watch queueing video? = 'at least once'

  • @pramudyaindra
    @pramudyaindra 4 หลายเดือนก่อน

    finally, a video thumbnail without a face

  • @Pekz00r
    @Pekz00r 4 หลายเดือนก่อน

    Fantastic visualizations, but the article misses the mark quite a lot IMO.
    Two things that do way more than anything in the examples here are:
    - Drop the requests that are considered timed out.
    - Have a separate queue for priority requests and serve that queue first.
    The visualization also misrepresents what would happen in a real-life scenario where you have multiple servers/workers processing requests in parallel. The scenarios where the server capacity is so under-dimensioned are also very extreme and not realistic at all.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Dropping timed out requests would make a big difference, you’re not wrong.
      Here’s a fun thought experiment: how do two queues differ from a single priority queue in practice?
      I’m also interested in how multiple servers would make a meaningful difference to the post. Abstractly requests get taken off the queue at some rate and the number of consumers is mostly irrelevant. The only relevance I could think of is if I were modelling queues at both the load balancer and server level, which does get a bit hairy. Was beyond the scope of what I wanted to talk about though.

    • @Pekz00r
      @Pekz00r 4 หลายเดือนก่อน

      @@samrosewho Two queues would make a lot of difference. Mainly because you can balance how many jobs are popped from each queue. You can, for example, take 3 jobs from the priority queue for every job you take from the normal queue, and thus the normal queue will not starve out completely.
      But the biggest problem is that the whole setup is very unrealistic. In pretty much any system with high throughput you measure the wait time for each queue (how long it takes between a job being added to the queue and it being handled). When the wait time exceeds some threshold, you just spin up more workers for that queue. Then you will be fine with a FIFO queue and all this would be pretty much irrelevant.
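
    A sketch of that 3:1 weighting between two FIFOs, in Python; the ratio and the fallback-to-the-other-queue behaviour are illustrative choices, not something from the article:

      from collections import deque
      from itertools import cycle

      priority_q, normal_q = deque(), deque()
      schedule = cycle(["p", "p", "p", "n"])  # serve 3 priority jobs per 1 normal job

      def dequeue():
          preferred = next(schedule)
          order = (priority_q, normal_q) if preferred == "p" else (normal_q, priority_q)
          for q in order:  # fall back to the other queue if the preferred one is empty
              if q:
                  return q.popleft()
          return None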

  • @Aditya_Vyas
    @Aditya_Vyas 5 หลายเดือนก่อน +1

    I'll watch it now

  • @sub-harmonik
    @sub-harmonik 5 หลายเดือนก่อน +1

    wait, why are LIFOs better for most workloads? did I miss something? Seems like FIFOs are much better queues..
    edit: ok it's because fewer 'timed out' but what scenario would that be the most important consideration in?

    • @samrosewho
      @samrosewho 5 หลายเดือนก่อน +1

      For HTTP requests it is, generally, better to reject a request than to make it wait a long time. It’s a broad general statement and there are going to be HTTP workloads where it’s not true, but I wish LIFO was the default everywhere instead of FIFO.

    • @Holobrine
      @Holobrine 4 หลายเดือนก่อน

      @@samrosewho Tbh yeah, an explicit “nope try again” is better than leaving you hanging

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@Holobrine There are certainly exceptions, but the worst case in FIFO if you've got no logic to drop timed out requests means you can get into a totally broken state that's hard to recover from. At least with LIFO the worst case still has you serving users.

  • @pawan29121991
    @pawan29121991 4 หลายเดือนก่อน

    I have seen this nice interactive one

  • @oaklyfoundation
    @oaklyfoundation 4 หลายเดือนก่อน

    This post is amazing.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Thank you 🙏

  • @andrewkosenko2757
    @andrewkosenko2757 4 หลายเดือนก่อน +1

    For the Americans out there: Lines
    Thank me later 😂

  • @МаксШинкаренко-з1ъ
    @МаксШинкаренко-з1ъ 5 หลายเดือนก่อน

    Open for the listen queue spelling on english 🎉 thank you!

  • @smoked-old-fashioned-hh7lo
    @smoked-old-fashioned-hh7lo 4 หลายเดือนก่อน

    your video quality has increased in the last few videos. when you're not constantly shilling vercel and react it's much better.

  • @SecretMoose
    @SecretMoose 4 หลายเดือนก่อน +1

    0:05 speak for yourself, I’m British. 😂

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Same :)

    • @yosefberger6259
      @yosefberger6259 2 หลายเดือนก่อน +1

      I got a two volume series of books on queuing theory from my university library for free
      When anyone asks what they're about, I just say I'm studying to get British citizenship

  • @codeChuck
    @codeChuck 4 หลายเดือนก่อน

    If you have good reach, like Theo, you may cover the work of other specialized devs, like Sam, who have good content. A mutually beneficial thing :)

  • @RandomGeometryDashStuff
    @RandomGeometryDashStuff 5 หลายเดือนก่อน

    07:58 what if random request processed?

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      What do you mean?

    • @RandomGeometryDashStuff
      @RandomGeometryDashStuff 4 หลายเดือนก่อน

      @@samrosewho process a random item from the queue instead of the first or last

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@RandomGeometryDashStuff ahh hm, not sure. I've never seriously considered it. What problem would it solve?

    • @RandomGeometryDashStuff
      @RandomGeometryDashStuff 4 หลายเดือนก่อน

      @@samrosewho I figured out how to put visualizations in my own local web page.
      I added s-multi-queues id="a" timeout="6" with FIFO, LIFO, FIRO (first in, random out), FIRFOLO (first in, randomly first or last out).
      I added s-wait-time-bar-graph id="ag" target="a".
      I added s-latency-percentile-toggle target="ag" options="50 75 90 95 99".
      I added s-timeouts-served-graph target="a".
      I only spam click yellow ball.
      Results are inconsistent between attempts.
      s-wait-time-bar-graph:
      50th: FIFO 7.0s, LIFO 1.3s, FIRO 6.3s, FIRFOLO 4.7s
      75th: FIFO 7.1s, LIFO 1.4s, FIRO 11.1s, FIRFOLO 14.1s
      90th: FIFO 7.1s, LIFO 31.0s, FIRO 12.8s, FIRFOLO 14.5s
      95th: FIFO 7.1s, LIFO 32.5s, FIRO 13.9s, FIRFOLO 17.0s
      99th: FIFO 7.1s, LIFO 33.7s, FIRO 17.6s, FIRFOLO 18.1s
      s-timeouts-served-graph:
      FIFO 21
      LIFO 4
      FIRO 13
      FIRFOLO 11

  • @TokyoXtreme
    @TokyoXtreme 4 หลายเดือนก่อน

    People lining up to watch this video.

  • @mohammedgoder
    @mohammedgoder 4 หลายเดือนก่อน

    This article grossly misrepresents how long timeouts take to process in proportion to requests.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      Could you go into more detail here? I'm not sure what you mean.

    • @mohammedgoder
      @mohammedgoder 4 หลายเดือนก่อน

      @@samrosewho Certainly.
      The receiving end of a networking API can associate a timestamp with when a packet is received and dumped into a queue. The processing end of the API can immediately drop the packet if the timestamp is too old.
      It's literally the following:
      if (timeout < (currentTime - packetTimestamp)) continue;
      That's easily less than 4-8 clock cycles on any server produced in the last 7 years.

    • @mohammedgoder
      @mohammedgoder 4 หลายเดือนก่อน

      @@samrosewho YT refuses to post my reply.
      The summary is processing a timeout takes less than 4 clock cycles whilst processing a request could be in the order of hundreds or thousands of clock cycles depending on what the request is.

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@mohammedgoder Do you have a code example showing what you mean here? I tested a variety of setups and nothing I tested drops timed out requests by default.

    • @mohammedgoder
      @mohammedgoder 4 หลายเดือนก่อน

      ​@@samrosewho if (timeout < (currentTime - packetTimestamp)) continue;
      This is something that has to be done once the packet reaches userspace.
      My point was that larger FIFO queues are not going to cause older packets to timeout.
      What do you mean a variety of setups?

  • @willcoder
    @willcoder 5 หลายเดือนก่อน

    FIFO is pronounced "F-eye Foe" rhyming with High-Low. And rhymes with LIFO.

    • @ItsUhMeIbraheem
      @ItsUhMeIbraheem 5 หลายเดือนก่อน

      The way everyone around me has always pronounced it is "Fee Foo" and "Leaf Ooh"

    • @mathijsriezebos
      @mathijsriezebos 5 หลายเดือนก่อน

      You're both wrong. It's _f_irst _i_n _f_irst _ou_t, so it should be pronounced "fì-fow" rhyming with "give now". /s

    • @willcoder
      @willcoder 4 หลายเดือนก่อน +1

      it's a nitpick, but pronunciation isn't defined by the letters in a word, but by how a consensus of people pronounce it, like all language. maybe Fee-Foe is a dialect, but I've only heard it as F-eye-Foe for the last ~30 years.

  • @Salbeira
    @Salbeira 4 หลายเดือนก่อน

    I am so not the target demographic to be told "waiting longer than 5 seconds makes users leave the page" - I am sorry but I am so used to the internet being spotty at times I will always wait as long as I have to for a page to load. What arrogance leads people to quit interacting with something they actively chose to interact with if they need to wait a bit? Screw statistics and user surveys, if a user quits when they have to wait, I do not want to serve them.

  • @thebahrimedia
    @thebahrimedia 5 หลายเดือนก่อน

    Queues

  • @chad_giga6934
    @chad_giga6934 5 หลายเดือนก่อน

    Really nice video

  • @cerulity32k
    @cerulity32k 4 หลายเดือนก่อน

    kwewes

  • @josedallasta
    @josedallasta 4 หลายเดือนก่อน

    14:15 bruh
    really

  • @JMCV2
    @JMCV2 3 หลายเดือนก่อน

    ReactiveX ?

  • @jeroenw9853
    @jeroenw9853 4 หลายเดือนก่อน

    Hi Twitch!

  • @mihirphadnis2
    @mihirphadnis2 5 หลายเดือนก่อน

    Caught in time..

  • @ProSureString
    @ProSureString 5 หลายเดือนก่อน

    Ok

  • @redhood7105
    @redhood7105 4 หลายเดือนก่อน

    You are so beautiful when you talk about tech ☺☺☺☺☺☺

  • @ILsupereroe67
    @ILsupereroe67 5 หลายเดือนก่อน

    What http servers are so stupid as to serve a request from a client that has already disconnected?

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      I tested quite a few and they all processed requests from clients that had disconnected. I'd love to see examples where that's not true.

    • @ILsupereroe67
      @ILsupereroe67 4 หลายเดือนก่อน

      @@samrosewho Is Apache one of those you tested? 🤔

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน +1

      @@ILsupereroe67 No.

    • @ILsupereroe67
      @ILsupereroe67 4 หลายเดือนก่อน

      I know PHP by default stops execution if the client disconnects in the middle, I'd assume that would also apply if the client has disconnected before the script even starts but I never tested that

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน

      @@ILsupereroe67 That would make sense to me as PHP is typically writing the response back to the user during execution. When a client disconnects it sends a TCP FIN and gets an ACK back from the server (this happens when you close a tab in your browser or kill curl on the CLI), from that point on any data sent to the client will result in the client sending an RST back to the server and I suspect the server would stop sending the response. I think this likely happens in most of the ones I tested but the tests I did had a bunch of processing followed by a short response. Until you start trying to write to the broken connection, you'll keep processing. I'd be surprised to see PHP behaving differently to this. If you have time, I'd love to see a small project displaying otherwise.

  • @collinoly
    @collinoly 4 หลายเดือนก่อน

    Why do you say FEFO

  • @AK-vx4dy
    @AK-vx4dy 4 หลายเดือนก่อน

    Author should receive some prize, together with this Bytes2Go guy

    • @samrosewho
      @samrosewho 4 หลายเดือนก่อน