Get my Fundamentals of Networking for Effective Backends Udemy course. Head to network.husseinnasser.com (the link redirects to Udemy with a coupon)
Bro, you explain things so clearly. This was a great explanation.
The way you explain sounds like listening to a friend at a coffee shop... The way you get the message across sets you apart from other YouTubers. Keep up the good work 😶
Thank you so much! Means a lot ❤️
Your videos are very useful for systems development.
Watching this video during work break 😌😌😌
Can I say that a TCP connection is identified by source IP+port and destination IP+port? But as can be seen in DevTools, fetching a web page over HTTP/1.1 results in many TCP connections between my machine and the server. We know the browser makes 6 parallel TCP connections for HTTP/1.1. Then what identifies these 6 parallel TCP connections? How does the server distinguish these 6 TCP connections if a connection is only identified by source IP+port and destination IP+port?
The client will make the 6 connections from the same IP but different ports, I guess.
Yeah, I think when the browser makes a system call to open a TCP connection, the OS assigns a different source port to each connection. But then if I'm behind a Wi-Fi router, the router needs to maintain a lot of mappings from internal ports to external ports. I wonder if I can see the list of mappings the router creates automatically.
Let's say the client IP is C, the server IP is S, and the server port is 443.
The client will establish up to 6 TCP connections to the server as follows. Each time, the client generates a random source port to make the connection unique:
C | 1111 | 443 | S
C | 2222 | 443 | S
C | 3333 | 443 | S
C | 4444 | 443 | S
C | 5555 | 443 | S
C | 6666 | 443 | S
The server uses the source port to return the response back to the client over that exact TCP connection.
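A minimal sketch of that behavior (my own illustration, not from the video), assuming a reachable host (example.com below is just a placeholder): each new socket to the same destination gets its own ephemeral source port from the OS, and that port is what keeps the 4-tuples distinct.

```python
# Sketch only: open several TCP connections to one destination and print the
# (source IP, source port) the OS picked for each. Swap in a host you control.
import socket

DEST = ("example.com", 443)  # placeholder destination

conns = []
for i in range(6):
    s = socket.create_connection(DEST, timeout=5)
    conns.append(s)
    src_ip, src_port = s.getsockname()[:2]
    print(f"connection {i + 1}: {src_ip}:{src_port} -> {DEST[0]}:{DEST[1]}")

# Same source IP, destination IP, and destination port on every line;
# only the randomly assigned source port differs, which is what makes
# each 4-tuple (and therefore each connection) unique.
for s in conns:
    s.close()
```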
@@hnasr I see, so different source ports are assigned to the different TCP connections. Now I see how HTTP/2 can save some ports by using only one TCP connection for multiple requests going to the same destination.
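A small sketch of that idea, assuming the third-party httpx library with its http2 extra installed (pip install httpx[http2]) and a server that supports HTTP/2; the URL is a placeholder. Several requests issued through one client are multiplexed over a single underlying TCP connection instead of six.

```python
# Sketch only: reuse one HTTP/2 connection for multiple requests.
import httpx

with httpx.Client(http2=True) as client:
    for _ in range(3):
        resp = client.get("https://example.com/")  # placeholder endpoint
        # http_version reads "HTTP/2" when negotiation succeeds; these requests
        # ride on the same TCP connection, so only one source port is consumed.
        print(resp.status_code, resp.http_version)
```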
@@hnasr With the above example, if I think of C as a reverse proxy R, then R can make ~65k requests to server S1. If I add one more server S2, can R still make new requests to S2 once R has used all of its port numbers with S1?
R | 111 | S1:443
R | 222 | S1:443
R | 65** | S1:443
With all its ports taken, can R make a new request to S2?
Thanks
Hussein, learning different things is not a problem, but remembering them is a huge problem for me. How do you do that? Huge amount of love and respect to you.
Repetition, and understanding the fundamentals so you can derive the pieces you don't remember. It's like math: you don't memorize how to solve every possible equation, you understand the basics and apply the rules.
@@hnasr Thanks a lot. I'll apply it ASAP.
Great Explanation !!
Hello Hussein, curious to know: did you leave your job or something? Because you are making videos in such a short period of time. Still a big fan of your videos.
Nope, still employed. I just took a 2-week leave. Thanks!
Can you explain SNAT port exhaustion as well?
I explained it @ 2:00, when the client source ports are exhausted.
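A minimal sketch of where that limit comes from (my own illustration, assuming a Linux machine): the kernel's ephemeral port range caps how many outbound connections one source IP can hold toward a single destination IP:port, which is exactly what gets exhausted in SNAT port exhaustion.

```python
# Sketch only (Linux-specific): read the ephemeral port range the kernel
# assigns to outbound connections.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())

print(f"ephemeral ports {low}-{high}: at most {high - low + 1} concurrent "
      f"outbound connections per source IP to one destination IP:port")
```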
Started watching.. my answer is YES.. let's see
Thank you bro
Great video!!!! Could you recommend some reference material? Thank you
Of course for sure watch the OSI video
And NAT
ruclips.net/video/7IS7gigunyI/видео.html
ruclips.net/video/RG97rvw1eUo/видео.html
Wow! This was dense.
Can you tell how we can avoid the load balancer becoming a single point of failure?
You put multiple load balancers and have them share the same virtual IP address: ruclips.net/video/d-Bfi5qywFo/видео.html
Thank you!
Great explanations and content on your channel Hussein, big fan of your channel!!!! I have a small doubt/clarification about this video: in the backend scenario where the load balancer IP is fixed, though we have multiple backend services with ~65,000 ports (give or take) each, the load balancer as the source only has ~65,000 random/dynamic ports available for responses to come back to it from any of the backends before being routed back to the UI client. Is my understanding correct, or have I missed something? Meaning, irrespective of how many backend services we have, we can only establish ~65,000 TCP connections from one load balancer, right? So just increasing the number of backend services will not increase the TCP connection capacity, since the load balancer only has ~65,000 ports itself.
So the scenario being:
There are 10 backend services and 1 load balancer. The load balancer gets 65,000 requests from different clients, and all 65,000 are directed to backend 1 (for simplicity, even though in reality they'd be equally distributed), so all ~65,000 of the load balancer's random ports are used up. Now any new request coming the load balancer's way has no available port (assuming none of the previous requests completed). So even though there are 9 more backend services available with ~65,000 ports each, the load balancer cannot make a connection to any of them because it has no available port to assign to the new TCP connection, right??
If you have 65k different clients hitting your LB (each creating a new connection), then technically the LB will load balance them across all backends; it will not direct them all to one backend.
That also depends on whether the LB is a layer 7 LB, which allows it to share connections.
@@hnasr Got it, it load balances across multiple backends; I mentioned one backend in the comment above for simplicity. So now all 65,000 are directed to multiple backend devices, and the load balancer assigns a random port for each forwarded connection. So now it is out of dynamic random ports, since it has assigned 65,000 ports toward the multiple backends. Now when a 65,001st request reaches the load balancer, there are backend services that could process the request, but the load balancer does not have a random port available to assign in the NAT table so that the backend server can respond back to the load balancer after processing the request (assuming all of the previous 65,000 requests are still being processed). I hope I was able to explain and did not confuse you :)
@@backendengineering007 did you get an answer to this question?
@@backendengineering007 I think you can have multiple connections from the same client random port to different backend services. This is similar to how a single backend service port (e.g. 80) can host millions of connections.
@@aniruddhpandya5999 So you mean to say that each IP address & port combination has a limit of ~65k, hence if there are 'n' backend IP addresses, then n * 65k is the limit? Is my understanding correct?
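A small sketch of that arithmetic (my own illustration, not from the video): connections are tracked by the full 4-tuple, so the same load-balancer source port can be reused toward a different backend IP, and the theoretical ceiling grows with the number of backends.

```python
# Sketch only: model connections as 4-tuples and show that one source port
# can be reused as long as the destination differs.
connections = set()

def connect(src_ip, src_port, dst_ip, dst_port):
    four_tuple = (src_ip, src_port, dst_ip, dst_port)
    if four_tuple in connections:
        raise RuntimeError("4-tuple already in use")
    connections.add(four_tuple)

LB = "10.0.0.1"                       # hypothetical load balancer IP
BACKENDS = ["10.0.1.1", "10.0.1.2"]   # hypothetical backend IPs
EPHEMERAL = range(32768, 61000)       # a typical ephemeral port range

for backend in BACKENDS:
    for port in EPHEMERAL:
        connect(LB, port, backend, 80)  # same source port, different dst IP: OK

print(f"{len(connections)} distinct connections "
      f"({len(BACKENDS)} backends x {len(EPHEMERAL)} ephemeral ports)")
```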
It would be very useful to make a load balancer (nginx) -> real server nginx config walkthrough with all the security pitfalls.
I'll watch until I understand it. 2 years later... okay, I got it.
Noooo, not the ducks :/ 🤣 🦆
No-one thinks about the ducks. This guy gets it!
Wow ❤❤❤
Let me be honest. It was difficult for me to understand this in one go!
don't worry. He will make another series to explain this video in detail. :)
It is a difficult topic to grasp; that's why I put it on the advanced backend playlist. It took me 2 years to understand the inner workings of the networking stack well enough to explain this. Ask questions about anything that is not clear and I will try my best. And yes, this won't be my last video on the topic.
Once you understand the fundamentals of basic TCP connections and how traffic is routed, it becomes easier.
@@hnasr I don't even need to ask what I need to know. You are posting in the same sequence I need to learn. It's like I just need to keep following you.
It's really time-consuming to read books and curate everything in one place in a structured manner.
Thanks a lot from the community. 🙏
@@hnasr it would be easier with a diagram.
After 10 minutes it was too heavy for me to consume :).
It is a beefy and deep topic. I'll do better to simplify it next time and take it piece by piece.
Hahaha, this topic was my first successful attack on a production server :)
You gotta love HAProxy.
If you wanna have fun, use Siege with Slowloris to flood the network; as long as you stick to the limits in the HAProxy configs, HAProxy will break the setup for you.
Anyway, that was in 2018.
To be honest, I could not follow this :)
First to view.