Learn the fundamentals of the backend, scaling and load balancing with my Introduction to NGINX udemy course nginx.husseinnasser.com
This playlist just keeps getting better and better. Honestly, it may be the greatest channel in the tech field.
Glad you enjoy it! Thanks Bahaa
"More we repeat more we learn!" that's the way to teach - thank you so much Nasser, you are the best!
dude, i simply love the way you articulate. It is like listening to a story. I tried to look up some other videos on this topic. But frankly speaking, the difference between Layer 4 and 7 was never explained so easily. Hats off to you Hussein
I randomly came here from your NAT video just for fun. I didn't expect to leave with a clear understanding of this. You're awesome.
Great, I am a unix sysadmin and you helped me understand some old stuff making it easier. Those good old days!
29:50 "I dont want the load balancer to look at my data." Sir, the fact that you dont have a million subscribers is a crime on humanity.
Agreed
I think the data here is the URL you just entered in the browser, which resolves to an IP address, so it makes sense that you wouldn't want them to see this URL for privacy reasons.
I don't know if I could have understood L4 and L7 reverse proxies any better...
THANKS A LOT!
Bow to you!
the best channel ever with real-world application of tech
this helped me have a better understanding of the difference between a Layer 4 Load Balancer and a Layer 7 Load Balancer.
Now I understand that an ingress is a Layer 7 Load Balancer
Very well explained, thanks so much! Your way of explaining with an intent to capture audience attention but the same time not compromising the technical details is very nice.
Was just looking for websockets and ended up watching 7 hrs in a row. You're awesome and you're just the teacher I wanted, no 5-min videos. Though it was quite hard to click on those 40-min videos, once I did it didn't feel like I was watching something for 40 mins. Well, it's just interest and great content quality, but dude, hats off to you. Thanks for all this awesome knowledge in one place.
❤️❤️ that is awesome 👏 thank you for your kind words and glad you enjoyed the content 🙏🙏
@@hnasr Still counting; after learning from all the linked and suggested videos I'm finally watching the first websockets video I originally came for 😂 Came for websockets, became a network engineer and a more aware back-end engineer. You're awesome dude, I have no words for the content quality and availability.
Hi Hussein!
I recently came across your channel and now I wish I had found this earlier. Thanks for the amazing informatic videos.
Ur content is super great! And ur narration 😂 just when I’m losing my attention you say something funny and then I’m paying attention again 😆😆 many thanks!
Most load balancers (and especially HAProxy and nginx!) still use two different TCP connections in L4 mode with potentially different timeouts, window sizes etc. There are load balancers that simply forward packets (e.g. Linux ipvs) and only have a single end-to-end TCP connection between the client and the backend, but these are more uncommon.
Hi, could you please share any documentation to verify this information?
Absolutely true. I just captured packets on the interface where HAProxy runs and it turns out it uses different TCP connections for the backend servers.
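For anyone who wants to check this themselves, here is a minimal sketch of an L4 (tcp mode) HAProxy setup; the listen port and the 4444/5555 backends mirror the demo but are illustrative, not a verified configuration:

```
# illustrative haproxy.cfg fragment -- L4 / tcp mode
frontend tcp_in
    mode tcp
    bind *:8080
    default_backend app_servers

backend app_servers
    mode tcp
    balance roundrobin
    server app1 127.0.0.1:4444 check
    server app2 127.0.0.1:5555 check
```

Capturing traffic on the proxy host while a client connects, you would expect to see one client-to-proxy TCP connection and a separate proxy-to-backend connection, each with its own handshake, sequence numbers and window sizes, which is what the packet capture above showed.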
This guy will be a great dad 😂😁
An excellent demonstration of the difference between the two load balancers, good job Hussein!!
Thanks Jeetendra!! appreciate it
HE PUT A THAT'S WHAT SHE SAID JOKE IN THERE. ABSOLUTE LEGEND.
He killed me with the "pew pew pew" at the round robin demonstration
Thumbs up for the clear explanation of this topic and also for the super funny comment “that’s what she said.” 14:11
A great explanation with a lot of energy. Love it!
What you're describing around the 30 minute mark - sharing the pool of connections - is exactly what nginx does. It's sometimes called multiplexing. In our case, this causes an issue, since the application behind the LB needs to recognize the client and attempts to set a very long cookie, which the client truncates.
BTW, you are clearly the best teacher on youtube.
I think you can teach anything actually ;).
You are making learning so much joyful.
Iliasbhal aww 😊 thank you so much I am glad you enjoy the content
Your content is gold.
Hi Hussein! I'm so happy to have found a channel with such great content. I have noticed that besides the videos, your slides are also very clear and concise. It would be really helpful if you could also share a link to them! Keep up the good work
Glad you like them! thanks Erlad!
guys not only is he extremely helpful but he also loves The Office :'( amazing
If you are an Office fan, you will like this HTTP/2 video ruclips.net/video/fVKPrDrEwTI/видео.html
Can you create a video about the Denial of Service features of a load balancer, and talk about how an ADC is the same or like a load balancer? Very cool delivery and the humor is appreciated and very good!
Thanks Brian, I talked about DOS here ruclips.net/video/4I7tPW8of2g/видео.html
MashAllah you are so good and professional in your area.
Good to see configuration in the video
Hey man, thanks for the video, it was informative.
Your funny style made it even more interesting.
Thanks for making our life easy and also for making your videos a lot more entertaining. :D
very crisp & informative
Great Explanation, however you said you love to repeat, then you do not follow DRY principles :)... Keep posting such great videos. Appreciate !!
Was the page not changing at 21:37 because of the browser cache?
thank you, for your hard work. you are such an amazing person, sharing all this wonderful knowledge.
U are AWESOME. U made this video even though this is not most voted topic in ur last survey.
PK CC you guys are awesome! Of course I will make videos on topics you guys are interested in. I'll just adjust priorities. Hope you enjoy it and thanks for commenting! Stay awesome 😎
thanks for this great video, helps a lot for preparing system design interviews!
Thanks, Abu Ali
You're welcome
"..that's what she said!!!..." Ha - someone is a Micheal Scott fan!!
You think? ruclips.net/video/fVKPrDrEwTI/видео.html
TLDR?
that cracked me up lol
The man the legend!
Again a very nice video. A detailed tutorial video of haproxy would be great.
Thanks 😊 haproxy tutorial is requested a lot! I'll need to make it soon. Have so many other videos on my backlog
thank you a lot.
Thank you for the great post!
yeeahhh...!! This was fun .. :) .. great video sir !
Enjoyed this video. Learning made fun!! :)
Thanks George! Glad it was
Kristen n charles
Great placement of that's what she said! Great tutorial @Hussein
This tutorial is great
Keep up the good Tutorials.. thanks for sharing :)
Glad you like them!
Great video
great explanation
Glad you liked it
wonderful video!
Loved it. Great content.
Glad you enjoyed it!
Oh I bought your lecture on udemy, teacher
Just amazing ....
Great content!
Thanks Dan!
Great stuff indeed
This was a hell of video!
great video and explanation. the flying red dot is a bit distracting
thanks and apologies for the red dot, I try to get better as I make more videos
@@hnasr but overall great work. keep'em coming. it's been so fun and informational watching your videos. right to the point with demos. I teach in college part time and I wish I could be as a good speaker as you.
Thank you Peng for your kind words. I find making videos help improve my skills. I still need lots of work, particularly getting to the point quicker
Thanks Sir!!
awesome!
Great video.Thank You
Thanks Nafas!
Great video. great content, very well explained. thank you for your effort :-)
you are the best
Great! :) funny and clear
😊 thanks !
Hi Hussein, your method of explanation is amazing, simple and logical. You are right bang on it. Amazing. I am really impressed, because I have always been confused with LBs. I have a few doubts (and probably some ideas for content enhancement based on my doubts), is there a way I can connect with you. Any support is helpful. Thanks.
Good one.
I have a small question, also thank you for answering my question on your other video!!! So essentially, my thinking is that you'd need to balance your load if one server is.. well, being overloaded. So in this instance, if I do set up a load balancer that doesn't redirect me but actually funnels the data through itself, then what's preventing the load balancer from being overloaded itself? It's handling all the TCP connections of both the servers in your example, right? Sure, it's not doing CPU work, but there's some I/O throttling that'll happen eventually, right? I'm just confused here, because if that's the case, would you put a load balancer in front of that load balancer? There's still going to be a single point of failure if the balancer dies/overloads.
Correct me if I'm wrong in assuming that the I/O load could be high; maybe funneling bytes isn't that tiring and the load balancer could do it no problem. (Or) maybe the TCP connection splits away after the initial hit and the data doesn't go through the balancer anymore?
I'd love for you to answer this since I've not been able to wrap my head around this part.
Your concerns are totally valid. If there are too many concurrent connections, it can throttle the load balancer itself, bringing down the overall availability of the system. Hence, heavy-traffic applications like Google and Facebook opt for distributed load balancing where the load balancer is not a single server. Google offers one such service called GCLB - Google Cloud Load Balancer. You can find some info on it here: landing.google.com/sre/workbook/chapters/managing-load/
Maybe you have already found your answer; if not, see Hussein's keepalived video. You will get an idea.
Edit: keywords VIP (virtual IP), VRRP
i love you man
Ayala Giny ❤️❤️❤️
I understand the concept of the two flavors of load balancer, but my question is: since a load balancer is basically software that processes the incoming request, why don't both work at a single layer (Layer 7)?
Hi, is there an example of the HAProxy load balancer using a Redis DB as a session store?
Great stuff!
i am a load balancer expert now!
How does the actual forwarding work? Does it create a new socket between the RP and the server? There have to be at least two open sockets involved from client to backend, don't there? It's just that the RP makes it look like one TCP connection.
Yeah, I was a bit confused about the '1 TCP connection' pro that was called out for Layer 4 LBs. The only way this could work is if you had 2 sockets for each logical connection: one for the client connection, and one for the back-end server connection. As soon as the proxy accepts a client connection it must store that socket along with a new socket connection to whichever back-end server was chosen. Bytes just get copied between the receive and send buffers of the socket pairs. All this IP swapping is automatically done through the socket API abstraction. The same would have to happen for Layer 7 LBs. Afaik, the only difference is that Layer 7 LBs deserialize the HTTP protocol. One huge benefit is per-HTTP-request routing, which can help even out load across your backends.
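To make the socket-pair picture above concrete, here is a toy sketch (not how HAProxy is actually implemented) of an L4-style proxy that accepts a client socket, opens a second socket to a hard-coded, made-up backend, and just copies bytes both ways:

```python
# Toy L4-style proxy sketch: one socket per side, bytes copied between them.
# The listen port and backend address are made up for illustration.
import socket
import threading

BACKEND = ("127.0.0.1", 4444)  # assumed backend address

def pipe(src, dst):
    # Copy bytes from src to dst until src signals EOF.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side
    except OSError:
        pass

def handle(client):
    upstream = socket.create_connection(BACKEND)  # the second TCP connection
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))
listener.listen()
while True:
    conn, _addr = listener.accept()
    handle(conn)
```

An L7 proxy would do the same socket juggling but parse the HTTP stream in between, which is what enables the per-request routing mentioned above.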
May God have mercy on your parents.
And yours too, Mahdi.. thank you, my dear.
Could you make a tutorial about an L7 load balancer with Envoy? I do think it has tremendous potential in this industry.
Minh Tran thanks Minh for the suggestion I agree. L7 load balancing is complex topic that needs its own video. Envoy is a good candidate
Caching for L4 can be done based on the hash of the request ... you don't need to understand the data to cache it.
The moment you said request you have moved to L7. There isn’t a concept of a request in L4, its just segments/packets with no context on higher layers.
E.g. One HTTP request = many TCP segments
@@hnasr Hi Hussein, By request I meant TCP block. But I guess it will be a bit awkward if it's made up of more than one segment. Nice video by the way.
@hnasr
If a Layer 4 load balancer heavily relies on the IP address, which is Layer 3, why do we still call it a Layer 4 load balancer? We could call it a Layer 3 load balancer, couldn't we?
Great explanation,
and very cool cursor movements.
What kind is it?
Rico Agung Firmansyah Thanks 🙏 I use google slides
How does the layer 7 LB know which client to respond to, when it receives the response from server? In layer 4, it is done by maintaining a NAT, and it is the same TCP connection.
Thanks for this great video, I have a question, what is the difference between using an SSL certificate on layer 4 load balancer vs using it on layer 7 load balancer?
Fantastic question. A Layer 7 load balancer must terminate TLS, while a Layer 4 load balancer doesn't have to. An L4 LB can terminate TLS, meaning it serves the certificate from the LB, which means it can decrypt and look at the content. It can also decide to pass the TLS Hello through all the way to the backend, which means it is end-to-end encryption and the cert is served from the backend.
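A rough sketch of what both options can look like in HAProxy config (certificate path, ports and backend addresses are placeholders; in practice you would pick one style per listening port):

```
# Option A: L7 termination -- LB serves the cert and can read the HTTP content
frontend https_terminate
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # placeholder cert path
    default_backend web_http

backend web_http
    mode http
    server w1 10.0.0.11:80 check

# Option B: L4 passthrough -- TLS flows untouched to the backend, cert lives there
frontend https_passthrough
    mode tcp
    bind *:443
    default_backend web_tls

backend web_tls
    mode tcp
    server w1 10.0.0.11:443 check
```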
Hey, thanks for this helpful and amazing tutorial and explanation. Sorry for my bad English.
I have a question: can the backend IP be found/detected by anyone? Is there a possibility to hide the TCP connection, using netcat to the load balancer proxy and connecting to the backend — could it be found with sniffing, maybe, or no?
Thanks to the people who respond and help!
Thanks for the content! Just sometimes the screen got chopped off 35:58
Jexxie Woo thanks 🙏 I did notice that after I posted the video. Thankfully nothing in the chopped screen is important. Appreciate your comment! And I'll make sure to avoid that in the future.
2023 Still relevant. A+
Im here cuz of ur poll
I have Nginx behind HAProxy; how can I pass the authentication of Nginx back to the Nginx server through HAProxy?
Have you done a video on H2C smuggling?
Textras I have not! Didn’t know about it thanks for sharing.. I only discussed HTTP 1.1 smuggling. I wouldn’t worry about h2 clear text though since most h2 setups are secure
@@hnasr saw it here twitter.com/theBumbleSec/status/1303305853525725184?s=19
It is actually a serious one if your backend supports h2c. I need to discuss this, thanks for sharing!
Hi Hussein, great content. Thanks!!
One question: in your example of Layer 7 load balancing with HAProxy, I did not see an SSL certificate mentioned in the configuration, so how was HAProxy able to work on Layer 7?
From my understanding, one HTTP request can potentially span multiple packets. If that is the case, how does a Layer 4 proxy take care of that? Like, if it does a round robin or something else and sends the packets to different servers, wouldn't the packets be useless?
How does the Layer 4 proxy handle this?
Edit: Ok so, I see that stickiness is employed. Just had to rewatch the video!
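Worth adding: at L4 the proxy picks a backend per connection, not per packet, so all segments of one request stay together anyway. If you additionally want the same client to keep hitting the same backend across connections, hashing the client's source IP is one common stickiness option. A minimal sketch with made-up ports:

```
backend app_servers
    mode tcp
    balance source                  # hash of the client source IP picks the backend
    server app1 127.0.0.1:4444 check
    server app2 127.0.0.1:5555 check
```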
Awesome tutorials, I've learned a lot from them about networking, thanks!! :) You seem to know a lot about networking, so I have one question (maybe not 100% related to this video, it's also related to previous videos that you released) - I have the following use case: I want to redirect traffic to my local private network from a public cloud provider VM, and I was thinking whether I should use iptables TCP forwarding (after seeing your other tutorial) or, for example, an nginx/haproxy ws tunnel. Do you happen to know the pros and cons of these approaches? What would be most reliable in terms of latency and security? Should I be just fine with iptables TCP forwarding, or should I go with a ws tunnel? (The next step would be to build a client/server app to automate the update of my private NAT IP address on that server so I can keep getting traffic from that "cloud static ip"; I would also make it open source with MIT once I get into implementation.)
I've tested the iptables TCP proxy from your other tutorial and it did in fact work; I could receive traffic and respond from a cloud instance that was transferring the TCP packets to my local network. I've never tried a ws tunnel though and would like to know your opinion.
To be more specific, I have a Kubernetes Ingress on my local network (Layer 7 load balancer) that the traffic would be transferred to using port forwarding on my router. So from the cloud instance all I need is the static IP address basically (and the point of it is that I would be able to use more resources by only creating one single instance for probably $5 a month); I just want to transfer the client to my private cluster on my private network, and the rest would be handled on my local private network.
Wondering if microservices can run behind Layer 4 LB by running the services on different ports?
nice
In Layer 4, do we have one connection between the client, the load balancer, and the server? How is that happening? What about the ACKs, and what happens if a packet is lost between one node and the other?
Can you do a video on log-format and logging (rsyslog) in HAProxy? I'd like to troubleshoot an HAProxy issue where it is dropping connections on rare occasions.
Something to do with a sticky-bit in HAProxy timing out. The HAProxy is used as a load balancer (reverse proxy) and runs as a container. We only connect using HTTPS to the load balancer, using ports 80, 443, and 7999.
From the logs, I'd like to see why and where it is dropping the connection. I would also like to see the log info on the time duration of the connection.
I am currently using version 3.2.14 of HAProxy.
Thanks!
Layer 4 Load Balancer Demo : How did HAProxy know 4444 is down? Was it the first failed request that told it or was it some heartbeat type of mechanism between HAProxy and the backend services? Also, I am curious about the "check" keyword mentioned in the cfg file.
HAProxy periodically does a health check against the backends to see if they are alive, and if they are not, it removes them from the backend pool.
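That is what the `check` keyword on a server line turns on; by default it is a plain TCP connect check, and you can tune how often it runs and how many failures mark a server as down. A hedged example (the /health endpoint and the timings are assumptions, not from the video):

```
backend app_servers
    mode http
    option httpchk GET /health                        # assumed health endpoint
    server app1 127.0.0.1:4444 check inter 2s fall 3 rise 2
    server app2 127.0.0.1:5555 check inter 2s fall 3 rise 2
```

Here `fall 3` means three consecutive failed checks take the server out of the pool, and `rise 2` brings it back after two successful ones.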
Great video Hussein! I was wondering if you could clear something up for me. In Layer 4 load balancing you mention that only one TCP connection is used - the connection between the client and the load balancer. How exactly does the load balancer communicate with the backend servers in this scenario? My naive assumption was that it'll need to maintain TCP connections to the backend servers in order to communicate with them, but this isn't the case for L4 load balancing.
tobiisurmaster thanks for your question! Again, it really depends on the implementation, but one implementation is to use NAT, or network address translation.
Think about your router: when you want to establish a TCP connection between your machine and a server, say google.com, you have one TCP connection between you and Google despite having your router in between.
Every single packet goes through your router, and your router maintains a table (NAT) and replaces the source IP address with its own. This is very similar to what a Layer 4 load balancer does. It's one connection between you and the final destination backend server; the only difference is that you don't know which destination server you will hit, and the LB replaces the destination IP address in this case. And when the backend responds to the LB, the LB replaces the source IP from the backend IP to its own IP. So the LB just plays with the table.
Again, this is one implementation; there are many others.
@@hnasr I see. Thanks for clarifying!
@@hnasr thanks, perfectly explained
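For the NAT flavor described above, the address rewriting can be pictured with plain Linux NAT rules; the IPs below are made-up example addresses, purely illustrative of the idea rather than a production setup:

```
# Rewrite the destination of incoming client packets to a chosen backend,
# then rewrite the source so replies flow back through the load balancer.
# 203.0.113.10 = LB address, 10.0.0.2 = backend (both made up).
iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.2 --dport 80 -j SNAT --to-source 203.0.113.10
```

The client still sees a single TCP connection to the LB's address; the kernel's connection-tracking table plays the role of the NAT table Hussein describes.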
Hussein, what is the use of a load balancer if I can use a reverse proxy, since a reverse proxy is a load balancer? Is it any more efficient?
Would like to see how HA proxy works :) A bit more in depth maybe?
Qasim Albaqali hey Qasim, check out my in depth HAPROXY Video HAProxy Crash Course (TLS 1.3, HTTPS, HTTP/2 and more)
ruclips.net/video/qYnA2DFEELw/видео.html
The sneaking in of that's what she said :)
Layer 7 is like ingress "path routing" + load balancing between servers
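Right — and in HAProxy terms that path routing is just ACLs on the URL path in an http-mode frontend. A small sketch (the paths and backend names are invented for illustration):

```
frontend http_in
    mode http
    bind *:80
    acl is_api  path_beg /api            # illustrative path rules
    acl is_blog path_beg /blog
    use_backend api_servers  if is_api
    use_backend blog_servers if is_blog
    default_backend web_servers
```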
Great tutorial. One question: if the load balancer algorithm is round robin, then why was it sticking to 4444 or 5555 only, without you killing one of the servers?
I'm wondering about that too. It's so weird.