Deep dive on how static files are served with HTTP (kernel, sockets, file system, memory, zero copy)

  • Published: Feb 1, 2025
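The video's headline idea, zero-copy serving, can be sketched with Python's `os.sendfile`, which asks the kernel to move file bytes straight from the page cache into a socket buffer without a round trip through user-space memory. This is a minimal illustration, not the video's actual code; the file contents and socket pair are stand-ins for a static asset and an accepted HTTP connection.

```python
import os
import socket
import tempfile

# Create a file to serve; in a real server this would be the static asset.
payload = b"hello, static world\n" * 100
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

# A connected socket pair stands in for an accepted HTTP connection.
server_sock, client_sock = socket.socketpair()

# Zero copy: the kernel moves bytes from the page cache to the socket
# buffer directly; the user process never read()s the file contents.
with open(path, "rb") as src:
    sent = 0
    while sent < len(payload):
        sent += os.sendfile(server_sock.fileno(), src.fileno(),
                            sent, len(payload) - sent)

server_sock.close()
received = bytearray()
while chunk := client_sock.recv(65536):
    received += chunk
os.unlink(path)
```

Contrast this with the classic `read()`-then-`write()` loop, which copies every byte kernel → user buffer → kernel before it reaches the wire.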

Comments • 39

  • @hnasr
    @hnasr  1 year ago +9

    fundamentals of backend of engineering course
    backend.win

    • @AbdoTawdy
      @AbdoTawdy 1 year ago

      I appreciate your effort and work in explaining stuff. I wish that in college I had been taught this deeply; I would have been more interested in understanding why C/C++ and what system calls are.

    • @AmadeusMoon
      @AmadeusMoon 1 year ago

      I am just 6 months into coding but I have been watching you since my first month, I have to say I really enjoy your content, you are probably one of the reasons I leaned into backend. Thank you for the effort you put into always finding and bringing up such topics.

  • @gorangagrawal
    @gorangagrawal 1 year ago +12

    I like to digest the information as slow as possible and your explanations are what I love to watch. Thanks for being slow.
    Slow is smooth, smooth is fast.

  • @imanmokwena1593
    @imanmokwena1593 1 year ago +1

    Man. This came out the hour after I stopped working on my side project to learn the first principles of how HTTP and node really work... without all the fancy abstractions from the libraries.

  • @andresroca9736
    @andresroca9736 1 year ago

    Very good walkthrough! I like things that way. It builds intuition around the subject and induces better thinking about its elements and problems.

  • @juniordevmedia
    @juniordevmedia 1 year ago +65

    Aah, my favourite 1.5x playback speed guy

  • @darshansharma_
    @darshansharma_ 5 months ago

    Amazing Hussain ❤❤

  • @thewave2118
    @thewave2118 1 year ago

    Very good, look forward to more videos

  • @prathameshgharat7772
    @prathameshgharat7772 1 year ago +3

    For me it has mostly been about the basics: RAM vs. disk, and SSL termination; those are the bottlenecks in simple content websites with huge traffic. The disk/RAM control Varnish Cache offers is great IF there is ever a need for it. There is always a RAM disk too. Add CloudFlare on top of that.

  • @ryanseipp6944
    @ryanseipp6944 1 year ago +2

    Would love a video on io_uring. Epoll doesn't have to be chatty, since you can let the process block until a fd is ready, but you still do a lot of syscalls, which is the thing io_uring gets rid of the most. Currently looking into registered buffers, which, if I understand correctly, can eliminate a copy, since the kernel can theoretically place socket data directly in your buffer (after it assembles the packets, of course). No idea yet if it actually does or not.
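The point about epoll not needing to busy-poll can be illustrated with Python's `selectors` module, which wraps epoll on Linux (kqueue on BSD/macOS). The socket pair and the writer thread here are illustrative stand-ins for a client connection; the sketch just shows that the process sleeps inside `select()` until the kernel marks the fd readable, while each wakeup still costs the syscalls (epoll_wait plus recv) that io_uring's shared submission/completion rings are designed to amortize.

```python
import selectors
import socket
import threading
import time

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

r, w = socket.socketpair()
r.setblocking(False)
sel.register(r, selectors.EVENT_READ)

def writer():
    time.sleep(0.1)      # data arrives "later", from the kernel's view
    w.sendall(b"ping")

t = threading.Thread(target=writer)
t.start()

# select() blocks here -- no busy polling -- until the fd is readable.
# Each wakeup is still one syscall (epoll_wait), plus the recv() that
# follows; io_uring batches those submissions and completions instead.
events = sel.select()
for key, _ in events:
    data = key.fileobj.recv(4096)

t.join()
sel.close()
```

The same readiness loop is what epoll-based servers (nginx, Node's libuv) run under the hood, just over thousands of registered fds.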

  • @NikolaosZer
    @NikolaosZer 1 year ago

    Nice content every time!!! Thanks!

  • @nhancu3964
    @nhancu3964 1 year ago

    Your content is so awesome, Hussein. While watching this video, I had a question about throughput and latency with chunked streaming (like WebSocket, since it uses HTTP underneath). My question is whether chunked messages affect the total latency: for example, the total latency of sending a large file's bytes in one WebSocket message versus sending them in multiple WebSocket messages back to back immediately (chunked). Thank you

  • @ivankraev4264
    @ivankraev4264 1 year ago

    Awesome
    One question - what is the lifecycle of those read/write queues? I suppose they live in the server's memory, but at what point are they destroyed? Do they live there for one request/response cycle?

    • @hnasr
      @hnasr  1 year ago +2

      Good question. I guess it really depends on the implementation, but I don't see a reason to keep the request packets after processing the request.

  • @vivkrish
    @vivkrish 1 year ago

    How is huge content served? Suppose a huge JSON file is the response to the HTTP request?
    What I am asking is: does the socket cache start sending the packets before the node process finishes writing to the file cache?
    Also, how big is the file cache?
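On the first part of the question above, a general property of stream sockets is that the kernel can begin transmitting bytes as soon as the first `send()` lands in the socket buffer; the sender never needs the whole body in memory at once. A minimal sketch, with a socket pair and an in-memory buffer standing in for the HTTP connection and the huge JSON file (the chunk size is an arbitrary assumption):

```python
import io
import socket
import threading

CHUNK = 64 * 1024
big_body = b"x" * (1024 * 1024)   # stand-in for a huge JSON response

srv, cli = socket.socketpair()

def consume(sock, out):
    # The receiver drains bytes as they arrive, concurrently with the
    # sender still writing later chunks.
    while chunk := sock.recv(65536):
        out.write(chunk)

received = io.BytesIO()
t = threading.Thread(target=consume, args=(cli, received))
t.start()

# The server hands one chunk at a time to the kernel socket buffer;
# transmission of early chunks overlaps with writing later ones.
src = io.BytesIO(big_body)
while chunk := src.read(CHUNK):
    srv.sendall(chunk)
srv.close()
t.join()
```

Without the concurrent reader, `sendall` would block once the socket buffer (typically tens to hundreds of KB, tunable via `SO_SNDBUF`) filled up, which is the backpressure mechanism in action.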

  • @SeunA-sr2ss
    @SeunA-sr2ss 8 months ago

    I guess one question is: is this the same on Windows servers?

  • @rodstephens6612
    @rodstephens6612 1 year ago +2

    This covers caching at the user-process (webserver) level. How does this translate when a reverse proxy is inserted into the mix? Does the reverse proxy perform a READ against its own disk cache looking for the file? Or does it have an implementation of GET that evaluates whether the request can be served locally rather than reaching out to a backend webserver?

    • @hnasr
      @hnasr  1 year ago

      Exactly. It becomes even more interesting. Thinking through it, you will have to go through the same layers.
      A reverse proxy is even more complex, as it needs an upstream connection.
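The cache-then-upstream decision discussed above can be sketched in a few lines. All names here are hypothetical (`fetch_upstream`, the dict cache); real proxies like nginx or Varnish key their caches on method/URL/headers and store entries on disk or in memory, but the control flow is the same: evaluate the request locally first, and only pay for the upstream connection on a miss.

```python
# Hypothetical in-memory cache; real proxies may also hit their own
# disk cache (a READ through the same kernel/file-system layers).
cache = {}

def fetch_upstream(path):
    # Stand-in for opening an upstream connection and issuing a GET.
    return f"origin response for {path}".encode()

def handle_request(path):
    # The proxy evaluates the request against its own cache first...
    if path in cache:
        return cache[path], "HIT"
    # ...and only on a miss does it pay for the upstream round trip,
    # traversing the socket/read/write layers a second time.
    body = fetch_upstream(path)
    cache[path] = body
    return body, "MISS"

body1, status1 = handle_request("/index.html")
body2, status2 = handle_request("/index.html")
```

The first request misses and populates the cache; the second is served locally without touching the origin.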

  • @MinatoCreations
    @MinatoCreations 1 year ago

    What if the server (user process) read from disk on server startup (before receiving any requests), pre-processed the file content (for headers), and pre-compressed it?
    This way, we'd save the time needed for the read syscalls, writing headers, compressing content, etc.
    Just receive the request at the user process and directly issue the syscall to respond.
    Would that be possible?

    • @lakhveerchahal
      @lakhveerchahal 1 year ago

      It can increase the startup time (cold starts), which is very critical for serverless applications.
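The startup pre-compression idea in the thread above is entirely possible and is what many static servers do. A minimal sketch, assuming a gzip-capable client and a temp file standing in for a real asset (the URL key and header names are illustrative): pay the read syscall and compression cost once at startup, then serve requests from memory with no disk I/O.

```python
import gzip
import os
import tempfile

# Build a stand-in static file; in a real server this exists on disk.
html = b"<html>" + b"static content " * 1000 + b"</html>"
with tempfile.NamedTemporaryFile(delete=False, suffix=".html") as f:
    f.write(html)
    path = f.name

# Startup phase: one read per file, compress once, precompute headers.
precompressed = {}
with open(path, "rb") as src:
    body = gzip.compress(src.read())
precompressed["/index.html"] = {
    "headers": {
        "Content-Encoding": "gzip",
        "Content-Length": str(len(body)),
    },
    "body": body,
}

# Request phase: no disk read, no compression -- just copy bytes
# from memory to the socket (a single write/send syscall in practice).
def serve(url):
    entry = precompressed[url]
    return entry["headers"], entry["body"]

headers, served = serve("/index.html")
os.unlink(path)
```

The reply's caveat applies: the whole corpus is read and compressed before the first request, so startup time and resident memory grow with the number of files, which hurts cold-start-sensitive (serverless) deployments.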

  • @WeekendStudy-xo6lq
    @WeekendStudy-xo6lq 1 year ago

    Can you show the source code of how the write buffer / read file is actually sync/async in the kernel and Node.js, so this would really sink into my memory?

  • @ahmedyasser571
    @ahmedyasser571 1 year ago +1

    we really need this content in Arabic

  • @biswaMastAadmi
    @biswaMastAadmi 1 year ago

  • @WeekendStudy-xo6lq
    @WeekendStudy-xo6lq 1 year ago

    Slack is the root of all evil

  • @sshirgaleev
    @sshirgaleev 1 year ago

    😊

  • @boomerang0101
    @boomerang0101 1 year ago

    Bro please buy me an Alienware ❤