Deep dive on how static files are served with HTTP (kernel, sockets, file system, memory, zero copy)

  • Published on Jul 6, 2024
  • In this video I do a deep dive on how serving static files works in web servers.
    0:00 Intro
    2:00 Overview
    3:00 Request handling and Receive Queue
    8:50 Reading file from disk
    13:50 Response and the Send Queue
    24:00 Sending Response to the Client
    Discovering Backend Bottlenecks: Unlocking Peak Performance
    performance.husseinnasser.com
    Fundamentals of Backend Engineering Design patterns udemy course (link redirects to udemy with coupon)
    backend.husseinnasser.com
    Fundamentals of Networking for Effective Backends udemy course (link redirects to udemy with coupon)
    network.husseinnasser.com
    Fundamentals of Database Engineering udemy course (link redirects to udemy with coupon)
    database.husseinnasser.com
    Follow me on Medium
    / membership
    Introduction to NGINX (link redirects to udemy with coupon)
    nginx.husseinnasser.com
    Python on the Backend (link redirects to udemy with coupon)
    python.husseinnasser.com
    Become a Member on YouTube
    / @hnasr
    Buy me a coffee if you liked this
    www.buymeacoffee.com/hnasr
    Arabic Software Engineering Channel
    / @husseinnasser
    🔥 Members Only Content
    • Members-only videos
    🏭 Backend Engineering Videos in Order
    backend.husseinnasser.com
    💾 Database Engineering Videos
    • Database Engineering
    🎙️Listen to the Backend Engineering Podcast
    husseinnasser.com/podcast
    Gears and tools used on the Channel (affiliates)
    🖼️ Slides and Thumbnail Design
    Canva
    partner.canva.com/c/2766475/6...
    Stay Awesome,
    Hussein
  • Science & Technology
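
A rough sketch of the request path the chapters above walk through: accept a connection, read the request off the socket's receive queue, open and stat the file, write the headers, and hand the body to the kernel with sendfile() so it never passes through a user-space buffer (the zero-copy path). Port, file name, and the lack of error handling are illustrative assumptions, not details from the video.

    /* Sketch: serve one static file over HTTP using sendfile() for the body (zero copy).
       Illustrative assumptions: port 8080, file "index.html", one request per connection,
       no request parsing, no error handling. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);          /* listening socket */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 128);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);              /* per-connection socket */
            char req[4096];
            read(cfd, req, sizeof req);                     /* drain the request from the receive queue */

            int ffd = open("index.html", O_RDONLY);
            struct stat st;
            fstat(ffd, &st);                                /* file size for Content-Length */

            char hdr[256];
            int n = snprintf(hdr, sizeof hdr,
                             "HTTP/1.1 200 OK\r\nContent-Length: %ld\r\n\r\n",
                             (long)st.st_size);
            write(cfd, hdr, n);                             /* headers still go through user space */
            sendfile(cfd, ffd, NULL, st.st_size);           /* body: page cache -> socket, no user-space copy */

            close(ffd);
            close(cfd);
        }
    }

The contrast with a read()-into-a-buffer-then-write() loop is the zero-copy point: with sendfile() the file's pages move from the page cache to the socket's send queue without ever being copied into the process.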

Comments • 38

  • @hnasr
    @hnasr  9 months ago +8

    fundamentals of backend engineering course
    backend.win

    • @AbdoTawdy
      @AbdoTawdy 9 months ago

      I appreciate your effort and work in explaining stuff. I wish that in college I had been taught this deeply; I would have been more interested in understanding why C/C++ and what system calls are.

    • @AmadeusMoon
      @AmadeusMoon 8 months ago

      I am just 6 months into coding, but I have been watching you since my first month. I have to say I really enjoy your content; you are probably one of the reasons I leaned into backend. Thank you for the effort you put into always finding and bringing up such topics.

  • @gorangagrawal
    @gorangagrawal 9 months ago +11

    I like to digest the information as slowly as possible, and your explanations are what I love to watch. Thanks for being slow.
    Slow is smooth, smooth is fast.

  • @juniordevmedia
    @juniordevmedia 9 months ago +63

    Aah, my favourite 1.5x playback speed guy

    • @herrxerex8484
      @herrxerex8484 9 months ago +1

      same

    • @ShubhamGhuleCode
      @ShubhamGhuleCode 9 months ago +4

      2x 😂

    • @cbrunnkvist
      @cbrunnkvist 9 months ago +1

      Aahahahah he would retain only 10% of viewers had it not been for the Playback Speed settings for sure 😂
      hashtag 1.5x engineer

    • @mousquetaire86
      @mousquetaire86 9 months ago

      2.5x for me

    • @shujamigo
      @shujamigo 9 months ago

      1.75x

  • @imanmokwena1593
    @imanmokwena1593 9 months ago +1

    Man. This came out the hour after I stopped working on my side project to learn the first principles of how HTTP and node really work... without all the fancy abstractions from the libraries.

  • @andresroca9736
    @andresroca9736 6 months ago

    Very good walkthrough! I like things done that way. It builds intuition around the subject and encourages you to think more carefully about the elements and problems involved.

  • @thewave2118
    @thewave2118 9 months ago

    Very good, look forward to more videos

  • @leonzer8257
    @leonzer8257 9 months ago

    Nice content every time!!! Thanks!

  • @ryanseipp6944
    @ryanseipp6944 9 months ago +2

    Would love a video on io_uring. epoll doesn't have to be chatty, as you can let the process block until an fd is ready, but you still do a lot of syscalls, which is the main thing io_uring gets rid of. Currently looking into registered buffers, which, if I understand correctly, can eliminate a copy, since the kernel can theoretically place socket data directly in your buffer (after it assembles the packets, of course). No idea yet if it actually does or not.
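
For anyone curious, a minimal liburing sketch of the submit/complete flow that comment describes, reading a file with a single submission syscall instead of an epoll-plus-read loop; the file name is made up, and registered buffers (io_uring_register_buffers) are left out:

    /* Sketch: one asynchronous file read with io_uring via liburing (cc file.c -luring). */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);                 /* small submission/completion rings */

        int fd = open("data.bin", O_RDONLY);              /* assumed file name */
        char buf[4096];

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);  /* queue a read at offset 0 */
        io_uring_submit(&ring);                           /* one syscall submits the work */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                   /* block until the completion arrives */
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        close(fd);
        io_uring_queue_exit(&ring);
    }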

  • @prathameshgharat7772
    @prathameshgharat7772 9 months ago +3

    For me it has mostly been about the basics: RAM vs. disk and SSL termination; those are the bottlenecks in simple content websites with huge traffic. The disk/RAM control Varnish Cache offers is great IF there is ever a need for it. There is always a RAM disk too. Add Cloudflare on top of that.

  • @nhancu3964
    @nhancu3964 8 months ago

    Your content is so awesome, Hussein. While watching this video, I had a question about throughput and latency with chunked streaming (like WebSocket, since it uses HTTP underneath). My question is whether chunking messages affects the total latency; for example, the total latency of sending a large file's bytes in one WebSocket message versus sending them in multiple WebSocket messages back to back (chunked). Thank you

  • @WeekendStudy-xo6lq
    @WeekendStudy-xo6lq 9 months ago

    Can you show the source code of how write buffer / read file is actually sync or async in the kernel and Node.js, so this would really sink into my memory?
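
Not the actual kernel or Node.js source, but a small C sketch of the two shapes that question is about: a plain blocking read(), where the process sleeps until the data is in the buffer, versus POSIX aio_read(), which returns immediately and is polled for completion. (Node itself, via libuv, does blocking reads on a thread pool rather than POSIX AIO, so treat this only as an analogy.) File name is illustrative:

    /* Sketch: blocking read() vs POSIX aio_read() (may need -lrt on older glibc). */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];
        int fd = open("file.txt", O_RDONLY);              /* assumed file name */

        /* 1) Synchronous: the process sleeps in read() until the data is copied into buf. */
        ssize_t n = read(fd, buf, sizeof buf);
        printf("blocking read: %zd bytes\n", n);

        /* 2) Asynchronous: submit the read, keep running, poll for completion later. */
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;
        aio_read(&cb);                                    /* returns immediately */
        while (aio_error(&cb) == EINPROGRESS) {
            /* ... do other useful work here ... */
        }
        printf("async read: %zd bytes\n", aio_return(&cb));

        close(fd);
    }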

  • @vivkrish
    @vivkrish 7 months ago

    How are huge responses served? Suppose a huge JSON file is the response to the HTTP request.
    What I am asking is: does the socket buffer start sending packets before the node process has finished writing everything into it?
    Also, how big is that buffer?
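
On the buffer-size part of that question: the kernel keeps a per-socket send buffer (the send queue from the video), starts transmitting as soon as data lands in it, and makes further write()/send() calls block (or return EAGAIN on a non-blocking socket) once it is full, so transmission does overlap with the process still writing the rest of a huge response. A small sketch of inspecting and resizing that buffer, with the socket created only for illustration:

    /* Sketch: inspect and raise a TCP socket's kernel send-buffer size (SO_SNDBUF). */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int sndbuf = 0;
        socklen_t len = sizeof sndbuf;
        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
        printf("default send buffer: %d bytes\n", sndbuf);   /* Linux reports a doubled value */

        int want = 1 << 20;                                  /* request ~1 MiB; capped by net.core.wmem_max */
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof want);
        close(fd);
    }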

  • @MinatoCreations
    @MinatoCreations 8 months ago

    What if the server (user process) read the file from disk at server startup (before receiving any requests), pre-processed the file content (for headers), and pre-compressed it?
    This way, we'd save the time needed for the read syscalls, writing headers, compressing content, etc.
    Just receive the request in the user process and issue the syscall to respond directly.
    Would that be possible?

    • @lakhveerchahal
      @lakhveerchahal 8 months ago

      It can increase the startup time (cold starts), which is very critical for serverless applications.
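
It is possible, and it is essentially what an in-memory static cache does; the reply above points at the cost, which is paying the disk reads (and compression) at startup instead of per request. A toy sketch of the pre-loading part, with the file name as an illustrative assumption and the compression step omitted:

    /* Sketch: read the file and build the full HTTP response once at startup,
       so the request path needs no read() syscalls at all.
       (A pre-compressed variant would gzip `resp` here as well.) */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static char *resp;          /* headers + body, built before any request arrives */
    static size_t resp_len;

    static void preload(const char *path) {
        int fd = open(path, O_RDONLY);
        struct stat st;
        fstat(fd, &st);                                  /* size for Content-Length */
        char hdr[128];
        int h = snprintf(hdr, sizeof hdr,
                         "HTTP/1.1 200 OK\r\nContent-Length: %ld\r\n\r\n",
                         (long)st.st_size);
        resp = malloc(h + st.st_size);
        memcpy(resp, hdr, h);
        read(fd, resp + h, st.st_size);                  /* the only file read, ever */
        resp_len = h + st.st_size;
        close(fd);
    }

    int main(void) {
        preload("index.html");                           /* the startup cost the reply mentions */
        /* in a real server, each accepted connection would just get this one write(): */
        write(STDOUT_FILENO, resp, resp_len);
        free(resp);
    }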

  • @ivankraev4264
    @ivankraev4264 9 months ago

    Awesome
    One question - what is the lifecycle of those read/write queues? I suppose they live in the server's memory, but at what point are they destroyed? Do they live there for one request/response cycle?

    • @hnasr
      @hnasr  9 months ago +2

      Good question. I guess it really depends on the implementation, but I don't see a reason to keep the request packets after the request has been processed.

  • @rodstephens6612
    @rodstephens6612 9 months ago +2

    This covers the caching-at-the-user-process (web server) scenario. How does this translate when a reverse proxy is inserted into the mix? Does the reverse proxy perform a READ against its own disk cache looking for the file, or does it have an implementation of GET that evaluates whether the request can be served locally rather than reaching out to a backend web server?

    • @hnasr
      @hnasr  9 months ago

      Exactly, it becomes even more interesting. Thinking through it, you will have to go through the same layers.
      A reverse proxy is even more complex, as it also needs an upstream connection.

  • @SeunA-sr2ss
    @SeunA-sr2ss several months ago

    I guess one question is: is this the same on Windows servers?

  • @biswaMastAadmi
    @biswaMastAadmi 9 months ago

  • @ahmedyasser571
    @ahmedyasser571 9 months ago +1

    we really need this content in Arabic

  • @sshirgaleev
    @sshirgaleev 9 months ago

    😊

  • @WeekendStudy-xo6lq
    @WeekendStudy-xo6lq 9 months ago

    Slack is the root of all evil

  • @King311___
    @King311___ 9 months ago

    Bro please buy me an Alienware ❤