Great to see nice demo explanation. 👍
Glad you liked it!
Links:
NGINX Configuration Context: ruclips.net/video/C5kMgshNc6g/видео.html
NGINX Reverse Proxy: ruclips.net/video/lZVAI3PqgHc/видео.html
This was a great explanation!
Jay is the man! Thank you.
Amazing explanation!
Please note that sticky sessions are not free; they come with NGINX Plus.
This is a great demo. Just one question: is session persistence available in the non-Plus NGINX edition?
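(Not from the video, just a sketch for anyone else wondering the same thing: the cookie-based sticky directive is NGINX Plus only, but the open-source build can approximate session persistence with ip_hash, which always sends the same client IP to the same upstream. The backend addresses below are placeholders.)

upstream backend {
    ip_hash;                  # same client IP -> same upstream server
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}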
My gosh, it was a great explanation!
thanks for this training, it was very good
Good work Jay!
Thank you for the video. In studying the NGINX Cookbook and trying to further understand the hash directive from this video, it's not 100% clear to me how it works. Do I create a hash of the specific data I want to match, so that if it appears anywhere in the request, the request goes to a specific server? Or do I have to create a hash of the entire request I'm looking to match? Any help in this area is greatly appreciated. 🙏
I think I understand it a little better now (after digesting content from different sources on this subject). I don't create the hash; I simply provide the key data that may be in a request I want forwarded to a specific server. NGINX hashes that key for every request, and every request that produces the same hash goes to the same server, so a session keeps landing on the server it started on. I presume it requires the administrator to be familiar with the structure of the requests they're trying to match, and with what kind of data might be in them, to use the generic hash directive effectively. Anyway, thanks YouTube comment section for being my rubber ducky! :P
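(For anyone else landing here, a minimal sketch of how I now understand the generic hash directive; the key and backend addresses are just examples, not from the video. You choose the key, NGINX hashes it for every request, and the same key always maps to the same upstream server.)

upstream backend {
    # The hash key can be any text and/or variables, e.g. the request URI.
    # "consistent" enables ketama consistent hashing so that adding or
    # removing a server remaps as few keys as possible.
    hash $request_uri consistent;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}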
Love the explanation, hate the indentations.
It's a very nice presentation.
This is a very good catch, thank you, sir!
All three ports (9001, 9002 and 9003) point to different project paths. My question: can we point them to the same project folder path, and does pointing to the same folder path cause performance degradation?
From Vietnam. Thanks
Thanks for the helpful session.
Thank you. great demo.
I'm using NGINX to load balance my 4-server setup for a custom application. The application has a media share option; when it is disabled, the forward option does not allow people from other networks to see the media. It uses the user's IP for this. The problem is that with load balancing, all the hits to my web servers come from the load balancer's IP. So even if a user from an external network tries to access the media after that option is disabled, he can still access it, because the server only sees the IP of the NGINX setup. Is there any feature to overcome this issue?
Hash the request
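(One common approach, not shown in the video, so treat this as a sketch: have NGINX pass the original client address along in headers and let the application read it from there instead of the socket IP. The upstream name "backend" is hypothetical.)

location / {
    proxy_pass http://backend;
    # Forward the real client address to the upstream application.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}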
Is there any way to make the load balancer not be a single point of failure? If it goes down, all the healthy machines and services become useless, right?
Create two load balancers and add two different IP addresses in the A records of your domain.
Thanks for teaching.
very useful, thanks
The workload I need load-balanced is VoIP streamed over UDP. Can NGINX Open Source forward UDP, or does it need to be Plus?
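(As far as I know, UDP load balancing is available in the open-source build via the stream module; it lives outside the http {} context. A rough sketch with made-up ports and backend addresses:)

stream {
    upstream sip_backends {
        server 10.0.0.11:5060;
        server 10.0.0.12:5060;
    }
    server {
        listen 5060 udp;          # "udp" makes this a UDP listener
        proxy_pass sip_backends;
        proxy_responses 1;        # expected response datagrams per client datagram (tune for your protocol)
    }
}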
This LB isn't DSR (Direct Server Return), or is it? 🤨
And can NGINX do that kind of load balancing?
Great video
Thank you, amazing content
Awesome video ++++++++++++ 🙂
Epic explanation.. made it very easy.
Thanks!
Could you add the links you mentioned in the video?
Thanks Felix. The links are included in the "GitHub Repo" link included with the comments: github.com/jay-nginx/load-balancing
Thank you
I can't see it; please increase the font size (at no. 20).
HA for NGINX? How do we do that?
is there a way to install a web GUI to administer the load balancer?
15:48 You need to pay for that.
Allow me to learn from this, sir.
Oh, so the correct way to say it is "engine-x". I was mistaken my whole life.
Why wouldn’t you demo this with SSL?
No sir. Linux (Ubuntu) has a built-in, much easier and more secure way to load balance.
These three web servers still share one physical server, right?
Please, I need to contact you.