Can you Max-out the Connections between Load Balancer and Backend Servers?

  • Published: 15 Dec 2024

Comments •

  • @hnasr
    @hnasr  2 years ago

    Get my Fundamentals of Networking for Effective Backends udemy course Head to network.husseinnasser.com (link redirects to udemy with coupon)

  • @codygaudet8071
    @codygaudet8071 3 years ago +2

    Bro, you explain things so clearly. This was a great explanation.

  • @rajeshramakrishnan4121
    @rajeshramakrishnan4121 3 years ago +1

    The way you explain sounds like listening to a friend at a coffee shop... The way you deliver the message sets you apart from other YouTubers. Keep up the good work 😶

    • @hnasr
      @hnasr  3 years ago +1

      Thank you so much! Means a lot ❤️

  • @RayZde
    @RayZde 3 years ago +2

    Your videos are very useful for systems development.

  • @RavianXReaver
    @RavianXReaver 3 years ago +4

    Watching this video during work break 😌😌😌

  • @jackedelic9188
    @jackedelic9188 3 years ago +6

    Can I say that a TCP connection is identified by source IP+port and dest IP+port? But as can be seen in devtools, fetching a web page over HTTP/1.1 results in many TCP connections between my machine and the server. We know the browser makes 6 parallel TCP connections for HTTP/1.1. Then what identifies these 6 parallel TCP connections? How does the server distinguish these 6 TCP connections if a connection is only identified by source IP+port and dest IP+port?

    • @yashthatte6137
      @yashthatte6137 3 years ago +4

      The client will make the 6 requests from the same IP but different ports, I guess.

    • @jackedelic9188
      @jackedelic9188 3 years ago

      Yeah, I think when the browser makes a system call to open a TCP connection, the OS will assign a different port for each connection. But then if I'm behind a Wi-Fi router, the router will need to maintain a lot of mappings from internal ports to external ports. I wonder if I can see the list of mappings automatically created by the Wi-Fi router.

    • @hnasr
      @hnasr  3 years ago +6

      Let's say the client IP is C, the server IP is S, and the port is 443.
      The client will establish up to 6 TCP connections to the server as follows. Each time, the client generates a random source port to make the connection unique:
      C | 1111 | 443 | S
      C | 2222 | 443 | S
      C | 3333 | 443 | S
      C | 4444 | 443 | S
      C | 5555 | 443 | S
      C | 6666 | 443 | S
      The server uses the source port to route the response back to the exact TCP connection on the client.
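The tuple scheme described above can be sketched in a few lines of Python. The addresses are the placeholders C and S from the example itself; the point is simply that six connections to the same destination IP:port remain distinct as long as each gets its own source port:

```python
# A TCP connection is identified by the 4-tuple
# (src_ip, src_port, dst_ip, dst_port). Six parallel browser
# connections to the same server stay distinct because the OS
# assigns a different ephemeral source port to each one.

client_ip = "C"        # placeholder client address, as in the example
server = ("S", 443)    # placeholder server address and port

connections = set()
for src_port in (1111, 2222, 3333, 4444, 5555, 6666):
    connections.add((client_ip, src_port, server[0], server[1]))

# All six tuples differ only in the source port, yet each is unique.
print(len(connections))  # 6
```

A set is used deliberately: if two connections shared all four tuple fields they would collapse into one entry, which is exactly why the kernel refuses to create such a duplicate connection.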

    • @jackedelic9188
      @jackedelic9188 3 years ago

      @@hnasr I see, so different ports are assigned for different TCP connections. Now I see how h2 can save some ports by using only one TCP connection for multiple requests going to the same destination.

    • @asimarunava
      @asimarunava 3 years ago

      @@hnasr With the above example, if I think of C as a reverse proxy R, then R can make ~65k requests to server S1. If I add one more server S2, in this case can R make new requests to S2, given that R has used all its port numbers with S1?
      R | 111 | S1:443
      R | 222 | S1:443
      R | 65** | S1:443
      With all its ports taken, can R make a new request to S2?
      Thanks

  • @javedutube10
    @javedutube10 3 years ago

    Hussein, learning different things is not a problem, but remembering them is a huge problem for me. How do you do that? Huge amount of love and respect to you.

    • @hnasr
      @hnasr  3 years ago +1

      Repetition, and understanding the fundamentals so you can derive the pieces you don't remember. It's like math: you don't memorize how to solve every possible equation; you understand the basics and apply the rules.

    • @javedutube10
      @javedutube10 3 years ago

      @@hnasr Thanks a lot. I'll apply it ASAP.

  • @iam_kundan
    @iam_kundan 3 years ago +1

    Great explanation!!

  • @sumitkumarmishra3140
    @sumitkumarmishra3140 3 years ago +5

    Hello Hussein, curious to know: did you leave your job or something? Because you are making so many videos in such a short period of time. Either way, still a big fan of your videos.

    • @hnasr
      @hnasr  3 years ago +14

      Nope, still employed. I just took a 2-week leave. Thanks!

  • @zephyrus.9
    @zephyrus.9 3 years ago

    Can you explain SNAT port exhaustion as well?

    • @hnasr
      @hnasr  3 years ago

      I explained it @ 2:00, when the client source port is exhausted.

  • @Openspeedtest
    @Openspeedtest 3 years ago

    Started watching.. my answer is YES.. let's see

  • @jjames7206
    @jjames7206 3 years ago +1

    Thank you bro

  • @Alex00082
    @Alex00082 3 years ago

    Great video!!!! Could you recommend some reference material? Thank you

    • @hnasr
      @hnasr  3 years ago +3

      Of course! For sure watch the OSI video
      and the NAT one:
      ruclips.net/video/7IS7gigunyI/видео.html
      ruclips.net/video/RG97rvw1eUo/видео.html

  • @afrozalam5389
    @afrozalam5389 3 years ago +1

    wow! this was dense

  • @navjot7397
    @navjot7397 3 years ago

    Can you tell us how we can avoid the load balancer becoming a single point of failure?

    • @hnasr
      @hnasr  3 years ago +3

      You put multiple load balancers and have them share the same virtual IP address: ruclips.net/video/d-Bfi5qywFo/видео.html

  • @paaticcio
    @paaticcio 3 years ago

    Thank you!

  • @backendengineering007
    @backendengineering007 3 years ago

    Great explanations and content on your channel Hussein, big fan!!!! I have a small doubt about this video: in the scenario where the load balancer IP is fixed, even though we have multiple backend services with ~65000 ports each, the load balancer as the source has only ~65000 random/dynamic ports available for responses to come back from any of the backends and eventually be routed back to the UI client. Is my understanding correct, or have I missed something? Meaning, irrespective of how many backend services we have, we can only establish ~65000 TCP connections from one load balancer, right? So just increasing the number of backend services will not increase the TCP connection capacity, as the load balancer itself has only ~65000 ports.
    So the scenario being:
    There are 10 backend services and 1 load balancer. The load balancer gets 65000 requests from different clients; say all 65000 are redirected to backend 1 (for simplicity, though in reality they would be equally distributed), so all ~65000 random ports of the load balancer are used up. Now for any new request coming the load balancer's way, it does not have an available port (assuming none of the previous requests completed). Even though there are 9 more backend services available with 65000 ports each, the load balancer cannot make a connection to any of the 9, as it does not have an available port to assign for the new TCP connection, right??

    • @hnasr
      @hnasr  3 years ago

      If you have 65k different clients hitting your LB (each creating a new connection), then technically the LB will load balance them across all backends; it will not redirect them all to one backend.
      It also depends on whether the LB is a layer 7 LB, which allows it to share connections.

    • @backendengineering007
      @backendengineering007 3 years ago

      @@hnasr Got it, it load balances across multiple backends; I mentioned one backend in the above comment only for simplicity. So now all 65000 are redirected to multiple backend devices, and the load balancer assigns a random port for each redirection. So now it is out of dynamic random ports, as it has assigned all 65000 to the backends. When the 65001st request hits the load balancer, there are backend services that could process it, but the load balancer does not have a random port available to assign in the NAT table so that the backend server can respond back after processing (assuming all previous 65000 requests are still being processed). I hope I was able to explain and did not confuse you :)

    • @anupmehta9504
      @anupmehta9504 2 years ago

      @@backendengineering007 Did you get an answer to this question?

    • @aniruddhpandya5999
      @aniruddhpandya5999 2 years ago

      @@backendengineering007 I think you can have multiple connections from the same client random port to different backend services. This is similar to how a single backend service port (e.g. 80) can host millions of connections.

    • @backendengineering007
      @backendengineering007 2 years ago

      @@aniruddhpandya5999 So you mean to say that each IP address & port combination has a limit of ~65k; hence if there are 'n' IP addresses or backends, then n * 65k is the limit? Is my understanding correct?
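The arithmetic in this thread can be sanity-checked with a short sketch. The addresses below are made up, and the usable ephemeral range varies by OS (1024–65535 is a common assumption). Because the destination IP is part of the connection tuple, the same source port can be reused toward a different backend, so the tuple-space ceiling scales with the number of backends:

```python
# Source ports must be unique only per (dst_ip, dst_port) pair,
# so each extra backend adds another full ephemeral range of
# possible connections from the same load balancer IP.

EPHEMERAL_PORTS = range(1024, 65536)   # ~64k usable source ports
lb_ip = "10.0.0.100"                   # hypothetical load balancer IP
backends = [("10.0.1.1", 443), ("10.0.1.2", 443)]  # hypothetical backends

tuples = {(lb_ip, p, ip, port)
          for (ip, port) in backends
          for p in EPHEMERAL_PORTS}

print(len(EPHEMERAL_PORTS))  # 64512 possible connections per backend
print(len(tuples))           # 129024 in total with two backends
```

Note that this only models the TCP tuple space. A NAT device that tracks every translation in a single table can still exhaust its state earlier, which is the SNAT port-exhaustion concern raised earlier in the comments.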

  • @kyojindev3978
    @kyojindev3978 3 years ago

    It would be very useful to show a load balancer (Nginx) -> real server Nginx config with all the security pitfalls.

  • @megazord5696
    @megazord5696 3 years ago +2

    I'll watch until I understand it. 2 years later... Okay, I got it.

  • @arminrosic
    @arminrosic 3 years ago +3

    Noooo, not the ducks :/ 🤣 🦆

    • @zedzpan
      @zedzpan 3 years ago +1

      No-one thinks about the ducks. This guy gets it!

  • @ajaychavda2826
    @ajaychavda2826 3 years ago

    Wow ❤❤❤

  • @jithin_zac
    @jithin_zac 3 years ago +3

    Let me be honest. It was difficult for me to understand this in one go!

    • @MAK28031991
      @MAK28031991 3 years ago

      don't worry. He will make another series to explain this video in detail. :)

    • @hnasr
      @hnasr  3 years ago +4

      It is a difficult topic to grasp; that's why I put it on the advanced backend playlist. It took me 2 years to understand the inner workings of the networking stack well enough to explain this. Ask questions about anything that is not clear and I will try my best. And yes, this won't be my last video on the topic.
      Once you understand the fundamentals of basic TCP connections and how traffic is routed, it becomes easier.

    • @MAK28031991
      @MAK28031991 3 years ago +1

      @@hnasr I don't even need to ask what I need to know. You are posting in the exact sequence I need to learn. It's like I just need to keep following you.
      It's really time-consuming to read books and curate it all in one place in a structured manner.
      Thanks a lot from the community. 🙏

    • @RayZde
      @RayZde 3 years ago

      @@hnasr It would be easier with a diagram.

  • @javedutube10
    @javedutube10 3 years ago

    After 10 minutes it was too heavy for me to consume :).

    • @hnasr
      @hnasr  3 years ago +1

      It is a beefy and deep topic. I'll do better to simplify next time and take it piece by piece.

  • @mohamedhabas7391
    @mohamedhabas7391 3 years ago

    Hahaha, this topic was my first successful attack on a production server :)
    You gotta love HAProxy.
    If you wanna have fun, use Siege with Slowloris to flood the network; as long as you stick to the limits in the HAProxy configs,
    HAProxy will break the setup for you.
    Anyways, that was in 2018.

  • @dinakaranonline
    @dinakaranonline 3 years ago +3

    To be honest , I could not follow this :)

  • @sudipto.m
    @sudipto.m 3 years ago +1

    First to view.