Thanks a ton for the tutorial. Got it up and running rather quickly with the examples you provided, now to take that knowledge into my own ingress ventures.
Hi, how do requests flow from haproxy to port 80 on the worker nodes? In my case, when I configured haproxy backends with the worker IPs on port 80, it reported "connection refused" on port 80. How does an ingress controller open port 80 on the worker nodes? Any suggestions?
Your videos are the best. Easy to follow and very clear. The context you provide at the beginning of each video is perfect. I have learned so much from your instructions in the last couple of weeks setting up my K8s infrastructure. Thanks for such great quality content.
This is a great video that explains nginx ingress very well. Some viewers might be trying to run this on VPS servers in the cloud using the new lxd/lxc version and can't get haproxy to work. You run lxc config device add haproxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 (note: "haproxy" in the command is the name of the lxc container). So if you followed the video and haproxy is not forwarding traffic, you may need this command, or check whether a firewall is enabled. Another helpful command, run from inside the haproxy container, is haproxy -c -V -f /etc/haproxy/haproxy.cfg, which checks that your configuration is valid before starting/restarting the haproxy service. Thank you for putting this video series together; it is one of the best ones out there.
Hi Felipe, many thanks for watching. Bear in mind some of the videos might be outdated, and I rely on viewers to tell me when something is broken so that I can do a follow-up video with the latest software versions. Cheers.
Just me and Opensource, are you planning to release a Traefik v2 tutorial? There are big changes compared to v1, and there are also problems with the API versions in Kubernetes 1.16.2, where many things are deprecated. I can't get Traefik v2 up and running as a DaemonSet. Thank you in advance for your reply.
@@martin_mares_cz I don't have Traefik v2 on my list but will add it. I have videos scheduled for the next two months, and a lot more in the pipeline to be recorded. Thanks.
Great video, thanks. One quick question: is the ingress controller pod exposed to HAProxy directly? I don't see you use "hostNetwork: true".
Hi Xiuhua, thanks for watching. If you do kubectl describe on the ingress controller daemonset, you will see that it binds to the host port on the underlying worker node.
Hi there, nice video bro! I have one question: in HAProxy you configure all the IP addresses of the worker nodes. What if you scale the cluster out or in (add or remove worker nodes)? Do you then have to manually change the configuration in HAProxy? Also, if the worker nodes get their addresses via DHCP and an IP changes, the config needs updating too. Do you have a solution for this? Thank you very much.
Hello Sir, I just watched your video and will follow your instructions to try it tomorrow and get back to you. From what I've seen, you've made an excellent tutorial on the Ingress Controller as an application load balancer and HAProxy as a network load balancer for a bare-metal Kubernetes cluster. That's exactly what I am looking for at this moment. You're very hands-on. Great job. Subscribed. Thank you.
Thank you very much ❤ for providing such a good video. In this video you used haproxy for routing and exposed it over a private IP; how will clients reach the nginx application using a domain name?
How does haproxy discover the ingress pods through the node-ip:80 lines in haproxy.cfg without any service defined with a NodePort set to 80?
Hi Walid, thanks for watching. When you deploy ingress controllers in your cluster, the ingress controller pods bind to port 80 on the worker nodes they are running on. HAProxy load balances traffic across all worker nodes. When a request is received, HAProxy routes it to one of the worker nodes on port 80, where an ingress controller pod is listening, which in turn routes it to the appropriate service. The service then routes it to one of the backend pods. Cheers.
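To make that flow concrete, here is a minimal sketch of the relevant haproxy.cfg sections (the backend names and IP addresses are placeholders; substitute your own worker node addresses):

```
frontend http_front
    bind *:80
    mode http
    default_backend http_back

backend http_back
    mode http
    balance roundrobin
    # each worker node runs an ingress controller pod bound to host port 80
    server kworker1 172.16.16.201:80 check
    server kworker2 172.16.16.202:80 check
```

With this in place, HAProxy health-checks each worker on port 80 and round-robins incoming requests between them.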
Hello Venkat, again, excellent explanation of the topic. I read the documentation about ingress, where they mention nginx, ingress controller, load balancer, etc. It was all with respect to some cloud provider and not about a bare-metal k8s cluster, so it was all very confusing. Your component and flow diagram made the concept crystal clear. Since it is bare metal, I can practice in my home lab. Today your video quality was max 360p, making the text difficult to read; maybe it was just uploaded. Tomorrow I will do the hands-on in my home lab. One suggestion on the demo container/pod: I generally use the hashicorp/http-echo image to show different pods, or different containers in a single pod, as below. It might make your demos easier than using nginx.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fruit-deployment
  labels:
    app: fruit
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fruit
  template:
    metadata:
      labels:
        app: fruit
    spec:
      containers:
      - name: apple-app
        image: hashicorp/http-echo
        args:
        - "-text=response from apple-app"
        - "-listen=:6000"   # default container port is 5678
        ports:
        - containerPort: 6000
      - name: banana-app
        image: hashicorp/http-echo
        args:
        - "-text=response from banana-app"
        - "-listen=:6001"   # default container port is 5678
        ports:
        - containerPort: 6001
Hi Ajit, thanks for the http-echo container suggestion; looks good. I just checked my video and I can see all the playback qualities, and can switch to 720p or 1080p for high resolution. Have you checked whether you can change the video quality setting? Depending on your internet connection speed, YouTube automatically selects an appropriate quality.
@@justmeandopensource Strange. Regarding resolution, I watched your video in the Chromium browser on Windows, where it has a max resolution of 360p, whereas other channels have the normal higher resolutions. I checked your video in Google Chrome and it has the higher resolutions, up to 1080p. I will use that browser :)
Hello Venkat, I have a question regarding HAProxy. Can this load balancer be provisioned as a pod inside the k8s cluster? I saw that you made a separate VM for it. I'm asking because I use VPSs for my k8s cluster. Thanks and regards.
Hi Knight, thanks for watching this video. Although I haven't tried it, haproxy can be provisioned inside the cluster itself as a pod, but it involves a lot of configuration to make it work. Deploying haproxy as a container/pod isn't difficult; you then have to create a service to expose it outside the cluster, and there are lots of port mappings involved. The link below might give you some direction. www.bluematador.com/blog/running-haproxy-docker-containers-kubernetes You mentioned you are using a VPS. You can install haproxy on the master node itself and don't have to use a separate VM for it. Thanks
Fantastic video. I've seen your MetalLB videos too. My question is: if I deploy nginx-ingress-controller as a daemonset on the 4 physical nodes of my cluster at home, expose the ingress deployment as a NodePort service on port 31111, and then attach haproxy to this, why do I need MetalLB to load balance?
Thanks Venkat. It's such great stuff; I can't believe I missed it this long. I think in your k8s installation process you install the latest k8s version; you might want to pin it to a specific version, something like below. I'm using Ubuntu, so it looks like this:

# Install Kubernetes
echo "[TASK 9] Install Kubernetes kubeadm, kubelet and kubectl"
apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
apt-mark hold kubelet kubeadm kubectl
Hi Sesh, thanks for watching. Yes, I could have locked it down to a specific version. I have different Kubernetes setup videos, and I think in some of them I do lock it down to a known working version of docker and kubernetes. I will have to update the GitHub docs. Cheers.
Thanks Venkat. Yeah, I realised that later while going through your other videos. Also, I have a scenario here and I'm not sure if you've covered it; if so, could you please point me to the correct clip? I have a k8s cluster (with 3 nodes) running in my local wifi network; the Vagrant network looks as below. I chose this setup because I built another db server (postgres) as a standalone box running outside k8s, in the same wifi network (192.168.1.x subnet). I'd like the pods to communicate with it, and it works fine using IP and port from the pod. But if I create a headless service like the one below, it doesn't work when I use the service name from my pod. I'd like to use a name instead of the IP of my db server. Any suggestions please?

apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  type: ExternalName
  externalName: 192.168.1.13

Vagrant:
kmaster.vm.network "public_network", bridge: "en0: Wi-Fi (Wireless)", ip: "192.168.1.30"
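One note that may help here: an ExternalName service must point at a DNS name, not an IP address, so externalName: 192.168.1.13 will not resolve. A common pattern for reaching an external server by name is a regular service plus a manually managed Endpoints object with the same name, sketched below (the port is an assumption based on postgres defaults):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgre          # must match the service name exactly
subsets:
- addresses:
  - ip: 192.168.1.13     # the standalone db server outside the cluster
  ports:
  - port: 5432
```

Because the service has no selector, Kubernetes does not manage the Endpoints for you; pods can then reach the external db at postgre:5432 via cluster DNS.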
Just found your videos on kubernetes, kubespray and nginx ingresses. You are very good at explaining the default behaviours, which gives the highest chance of success. The nginx docs explain that in the default server secret file they provide a default self-signed cert and key, and they recommend using your own certificate. Things to note: the cert and key are base64 encoded (again), so keep this in mind when you add the cert to the default-server-secret.yaml file. Also, if you are using Windows to generate the keys, make sure you remove the CR characters (^M) before base64 encoding the cert and key; otherwise you'll get an error when trying to start the nginx-ingress pods.
Hi Kunchala, thanks for watching. Yes, you can run haproxy on the master node itself or on any of your existing Kubernetes nodes, if it's for learning or development purposes.
Hey, fantastic content, I'm a fan! Just one question: how would you manage things if the worker nodes get scaled out or in, or if the IP addresses change? Is there a way for the HAProxy config to automatically stay in sync with the cluster?
Hi, thanks for watching. In this video, I used HAProxy for proxying to worker nodes where ingress controllers are listening. But in recent versions of ingress, you don't need this external load balancer. You can make use of MetalLB. So don't worry about configuring and maintaining the haproxy with dynamic worker node details.
Excellent session! I do hit a small snag when deploying nginx-ingress: creating the DaemonSet (kubectl apply -f daemon-set/nginx-ingress.yaml) as shown in your demo works. On the other hand, if I create a Deployment instead (kubectl apply -f deployment/nginx-ingress.yaml), then all requests via HAProxy fail with 503! Is there a tweak that needs to be applied? Thank you Venkat
Hi, thanks for watching. I haven't actually tried the deployment type; I've always gone for the daemonset as my dev cluster has only a few nodes. I think it's the haproxy configuration that needs to be tweaked, but I'm not entirely sure.
Completed the exercise "without" HAProxy for now (by referring to the nodes directly). One question: there is a lot of hard coding in this solution. 1. In HAProxy, we need to hard code the node IPs. 2. In the ingress, we need to hard code the service and port details. What if the nodes are dynamic (addition, deletion, replacement)? Similarly, what if the services are dynamic? How do we tackle these situations?
Hi Ajit, yes, node IPs are hardcoded in the HAProxy configuration, and you need to update it when you make changes to your nodes. There are automated dynamic solutions for this: you can have a script do it, or you can use Hashicorp's Consul architecture. I haven't explored either. www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/ www.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/ About your second point: that's the way ingress works. You create an ingress resource specifying where to route the traffic, i.e. which service to route to and on which port. Your application will always run on the same port; you decide this and create the service and ingress resource accordingly. I haven't come across any best-practices documentation for ingress in Kubernetes; most articles describe the basic setup at a very high level. I just want to introduce the concept to viewers. All my videos are for beginners, so I don't go in depth into any topic; they are just getting-started guides. Thanks, Venkat
@@justmeandopensource Thanks for the resources for dynamic HA Proxy load balancing. Would look into that. I would let you know if I come across the solution for dynamic ingress update.
Hi Venkat, great video! I am planning to use a public DNS service like noip.com and set up port forwarding on my router to reach a microservice backend. How can HAProxy reach the service if the service has an internal IP from the cluster?
Hi Venkat, thanks for this great video. One question though: I still could not understand how haproxy is able to connect to the worker nodes on port 80. We only have the ClusterIP service created, and the ingress resource has routing pointing to that ClusterIP service. There is no NodePort or LoadBalancer service to access it from outside the kubernetes cluster. I was trying to get it working by following your video. If I check get all -n nginx-ingress after the steps, I see only the nginx-ingress pods and the daemonset in the nginx-ingress namespace. The get all (without namespace) only shows the nginx pod and the ClusterIP service pointing to it. I am wondering how this works without a NodePort or LoadBalancer service to connect to the worker node from haproxy. As per the haproxy configuration, it directly uses the IP addresses of the worker nodes and port 80. It looks like I'm missing something...
Hi Nevin, thanks for watching. Have a look at the output of kubectl describe daemonset . The ingress controller pods are deployed as a daemonset, so there will be one ingress controller pod on each worker node. They use hostPort to bind to ports 80 and 443. This will be clear when you look at the kubectl describe output. Cheers.
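For reference, the relevant (abridged) fragment of the nginx-ingress DaemonSet manifest looks roughly like this; check the manifest version you actually deploy, as details may differ:

```yaml
# abridged sketch of daemon-set/nginx-ingress.yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80    # binds port 80 on the worker node itself
        - name: https
          containerPort: 443
          hostPort: 443   # binds port 443 on the worker node
```

The hostPort entries are why no NodePort or LoadBalancer service is needed: the pod is reachable directly on the node's own ports 80 and 443.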
Hi Venkat, thank you for another excellent video. I got it working in your vagrant environment. I also tried it on a cluster created via the-hard-way (Kelsey Hightower), but that didn't work; it looks like iptables is blocking port 80. I'm just wondering how the iptables rules are set up (probably by kube-proxy). It's hard to find this info. Maybe a suggestion: make a video about the network setup, the protocols, ports and their flow, and how the iptables rules are set up. But again, thank you for taking the time to make these videos and share them with us.
Hi Venkat, I've found the issue. It is actually quite simple. I got tipped off while doing your Prometheus video on my "the-hard-way" cluster: I noticed the prometheus-node-exporter pods got the IP addresses of the worker nodes. Normally it is more secure to have the pod IP address range used for the pods. So I realised that if the hostNetwork parameter is set to true, the IP addresses of the hosts are used! I changed the ingress file daemon-set/nginx-ingress.yaml by adding this parameter and now it all works!
Hi Venkat, when we are doing this on AWS, do we need to set up the haproxy server? I have set up app1 as Jenkins and app2 as Nexus, and they got their own external IPs. Do I need to create haproxy for these two apps on AWS? Any input would be much appreciated. Thank you for sharing your knowledge; it's helping a lot.
Hi Sunny, thanks for watching this video and taking the time to comment. I am just starting to explore the public cloud, especially AWS; most of my Kubernetes videos are around bare metal. Logically, if you are using a public cloud you may not need a haproxy service: in AWS you can use elastic load balancers for that. The following article might clear your doubts around traffic routing. medium.com/@chamilad/load-balancing-and-reverse-proxying-for-kubernetes-services-f03dd0efe80 Thanks
Hi Saurabh, thanks for watching. At what point in this video are you getting this error? I have never seen it before. github.com/nginxinc/kubernetes-ingress/issues/783 Maybe try a different version of kubernetes-ingress from GitHub.
I had the same issue, using v0.3 of the ingress controller. Follow the instructions to install the latest version from : docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ Then test with the following rules: kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ That solved it for me
I see haproxy is outside the K8s cluster, running on port 80 and load balancing to port 80 on the workers, but nothing should be reachable on port 80 of a worker from outside. The ClusterIP service on port 80 is internal, so how can HAProxy from outside reach the nginx ClusterIP service running on port 80?
Hi Pankaj, thanks for watching. The ingress controllers you deploy bind to port 80 on the worker nodes. HAProxy sends traffic to one of the worker nodes on port 80, where the nginx ingress controller is listening. The ingress controller then routes the traffic to the appropriate (ClusterIP) service, which sends it to the backend pods. Cheers.
Hi, thanks for the video. I have followed the exact same steps to create an nginx ingress controller using a daemonset, but I am not able to browse the app deployed in the pods. I have noticed that ports 80 and 443 are not getting exposed on the worker nodes despite creating the daemonset multiple times. What could be the reason for this? I am using Weave Net.
Hi Venkat, I tried ingress in GKE (Google Kubernetes Engine). 1) Created my pods. 2) Exposed them through a LoadBalancer service type (it was working with http). 3) Then I configured ingress, and it says "Some backend services are in UNHEALTHY state". Can you please suggest how to configure it with https?
Hi Surendar, thanks for watching. I haven't used this setup in Google Cloud yet, so I can't be sure of your problem. If I get some time I will test this.
When my cluster has 2 worker nodes it's successful, but when I grow it to 3 worker nodes and run "kubectl create -f daemon-set/nginx-ingress.yaml" (daemonset), there are always two containers in the Running state while the other one is stuck in ContainerCreating. I don't know why. Can you give me some suggestions? Thank you!
Hi Vu, thanks for watching this video. It doesn't matter how many worker nodes you have in your cluster; as we are deploying the ingress controller as a daemonset, it should get deployed on all worker nodes. You mentioned it is in "ContainerCreating" state on the 3rd node, which means the daemonset controller is doing its job. The pod could be stuck in ContainerCreating for various reasons. Check the status of the daemonset using "kubectl describe ds nginx-ingress" and look at the events section at the bottom. You can also get the name of the pod that is in ContainerCreating state and use "kubectl describe pod " to see what it is trying to do. Is the 3rd node okay with other deployments? Run the command below to see if a simple nginx container can run on that node: "kubectl run nginx --image nginx --replicas=3" and check whether an nginx container runs on the 3rd worker node. Otherwise you might have network problems on the 3rd node. Also check whether you can reach the internet (ping the outside world) from the 3rd node. Thanks, Venkat
Hello, thanks for your content. I have 2 small questions. First: what is the point of haproxy, since even if we point to a single node, the NodePort service will load balance between pods? Second: as a Mac user, how can we communicate from the host to the cluster? On Mac, Kubernetes (Docker for Mac) uses a hidden VM.
Hi Gilles, thanks for watching. 1. Yes, but if you want to expose your application with a DNS name (e.g. myapp.example.com), what entry would you add in your DNS? Would you add myapp.example.com with the IP address of one of the worker nodes? What if that worker node goes down? You would then have to update the DNS entry for myapp.example.com with the IP address of another worker node. To simplify this, we use HAProxy or any other load balancer, so we don't have to worry about the underlying servers (worker nodes) and don't have to keep updating DNS for myapp.example.com. 2. I haven't tried this on Mac with Docker for Mac, so I'm afraid I can't comment on that. I am a Linux person by birth.
Hello Venkat, it is me again :) with yet another question :). I was wondering how, with Kubernetes, I can make a request to a pod from another pod. I have a Python Flask web service inside a container that I can reach thanks to a NodePort, but I want that web service to also send requests to a TensorFlow Serving container (which, when requested, returns a series of probabilities). Should I expose a service for the TF Serving container too?
Hi Mike, thanks for watching. If you take a look at the kubectl describe output of one of the nginx ingress controller pods, you will notice that it binds to the host port on the worker node it is running on. And haproxy's backend configuration points to these worker nodes on the ports where the ingress controller pods are bound.
Hi bro, in the load-balancing pool of HAProxy you used node-IP:80 as the pool members. Is there a NodePort-type service for the NGINX ingress controller? I didn't see you create a service for the ingress controller in the k8s cluster. BTW, nice video, many thanks.
Hello. After installing the nginx-ingress controller, I get an nginx-ingress-nginx-ingress service of type LoadBalancer and the EXTERNAL-IP stays pending, waiting for an LB to be created (which won't happen). In this case I understand that the service should be a NodePort; however, I didn't see any modifications during the video. I am using a BigIP LB. Is this still valid, or is there any advice on this? Thanks
@@justmeandopensource Hey, thank you for your prompt reply. In the case of using HAProxy as an LB, does the nginx-ingress-nginx-ingress service need to be set up as a LoadBalancer or a NodePort service? Thanks!
@@pablodamico5729 The HAProxy load balancer sits outside the kubernetes cluster, load balancing between the k8s nodes. Within the cluster you need a load balancer capable of handing out IP addresses to LoadBalancer-type services. MetalLB is one such solution; there is also kube-vip. This is different from HAProxy. Cheers.
Thanks very much for your tutorials. I have a question: I'm struggling to deploy an ingress controller service of type LoadBalancer and have haproxy give it an IP and connect to it. Any pointers?
You have given :80 in the HAProxy default backend config, but where have we configured the Nginx ingress controller to listen on port 80 of the worker nodes for incoming traffic from the load balancer? Thanks Venkat.
Have a look at the definition of the ingress daemonset: kubectl describe daemonset . You will find that the ingress controller pods on each worker node use hostPort to bind to ports 80 and 443. (github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml)
Hi Nagaraju, thanks for watching this video. I was playing with path-based routing while recording this demo but it didn't work for me either. I will have to play with it bit more to get it working and then if I get anywhere I will definitely do a video of it. You could also try the Traefik ingress video and see if it works with it. ruclips.net/video/A_PjjCM1eLA/видео.html I haven't verified it with Traefik, but worth trying. Thanks.
I have seen other videos where they expose the ingress deployment through a NodePort service (so load balancers can hit the nodes, from where traffic is routed based on ingress resources). Here I see the load balancer hitting the worker nodes on port 80 (as defined in the haproxy config) even though a NodePort service was never created. Is it because we are using a daemonset instead of a deployment to set up the ingress controller? My question is basically: how can the load balancer access the worker nodes on port 80?
Hi Siddiqui, as commented previously, here is the explanation. The nginx ingress controllers are pods running on each of the nodes (since it is a daemonset). The ingress controller pods bind to ports 80 and 443 on their respective worker nodes. HAProxy, which is outside the cluster, load balances the traffic between the worker nodes on port 80, where the ingress pods are listening. The ingress pods then route the traffic to the appropriate service, which in turn sends it to the backend pods. Hope this makes sense. Cheers.
@@justmeandopensource Thanks for the reply. I researched it and found that if you deploy the ingress controller as a daemonset, it makes sure any traffic on port 80 reaches it (listening on port 80 basically comes out of the box). This wouldn't work with a deployment in place of the daemonset, as we would need one more service to expose our ingress.
@@sariksiddiqui6059 Yup. You are right. Since my cluster was small I decided to deploy it as a daemonset instead of deployment. I never tried the deployment type for ingress controller.
Hi Venkat, wonderful session again, with hands-on ingress setup. I tried similar things using a Vagrant setup instead of lxd, and it worked well. I found one issue with Vagrant though: simply hitting the hostname in the browser on a VM instance doesn't display anything on Windows. But from within the HAProxy instance, if I simply curl the 3 hostnames, I get the expected output as shown in the session. How can I access a Vagrant VM instance by hostname instead of the private_network IP in the browser?
Hi Bhalchandra, if you were using a Linux machine, then you can update /etc/hosts file with IP address and VM name and then you can access it through the name. Similarly you can do it in Windows as well. The below link might help you. www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/ Cheers.
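For example, adding a line like this to the hosts file (the IP and hostname here are hypothetical; use your VM's private_network IP and name):

```
# /etc/hosts on Linux/Mac, or C:\Windows\System32\drivers\etc\hosts on Windows
192.168.1.30   kmaster
```

After saving the file, the browser on that machine resolves kmaster to the VM's IP without any DNS server involved.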
What if you are running the ingress controller only on worker1 and haproxy hits worker2? Secondly, what if we run the ingress controller on the master node (non-HA)? In that case, should we provide only the IP address of the master in the haproxy backend?
Hi Venkat, thanks for this class. I have tried this tutorial on AWS instances, but I am getting a "site can't be reached" error. Which IP (the private or the public IP of the HAProxy server) do I have to place in /etc/hosts? Or should I do any other configuration since I am using AWS instances? I am using a security group in which all ports are open.
Hi Venkat, if we have set up a rule for node provisioning based on CPU, memory, or user requests, our total number of nodes will not be the same all the time. How do we maintain the haproxy entries then?
Hi Kunal, thanks for watching this video. That's a good question; another viewer asked something similar, I think. I haven't researched this much, but the following reddit post discusses a few possibilities. www.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/
Hey, one request: can you make a video on how to attach a load balancer like an NLB in front of our Kubernetes cluster, so that it can load balance between different nodes?
Hi Mihir, thanks for watching. You need some form of load balancing. Take a look at the recent updated video on this topic ruclips.net/video/UvwtALIb2U8/видео.html. You can use load balancer solution like metallb and can get away without haproxy stuff.
At about 16:02 in the video, you show the search results for ingress and choose the second one. Why not the first one? What is the difference? Can the first one, from Kubernetes, be used in the same way?
Hi, thanks for watching this video. Since I am doing an Nginx ingress video, I preferred to use the second GitHub link, from nginxinc. Functionality- or feature-wise, I don't think there are any differences. I hadn't tried the other link you mentioned; I just went through it now and don't see anything different. Thanks, Venkat
Hi Venkat, thanks for the eye opener. I have a question here: how are ingress controllers bound to port 80 of the worker nodes, given that we expose ingress as a NodePort and NodePorts only support ports in the 30000+ range?
Hi, thanks for watching. Have a look at the definition of the ingress daemonset: kubectl describe daemonset . You will find that the ingress controller pods on each worker node use hostPort to bind to ports 80 and 443. Cheers.
@@justmeandopensource

spec:
  template:
    spec:
      hostNetwork: true

These lines should be in the YAML file of that daemonset, right? Sorry for the series of questions, and thanks for your response.
If I don't have multiple nodes, just one master node (like minikube or microk8s), can I just have the ingress controller take requests from my hostname, instead of going through the load balancer?
I am having a hard time configuring this for TCP. I was able to configure it for HTTP, but the instructions here aren't clear for TCP. I defined a ConfigMap and am not using a load balancer, as I'm on a single node. HTTP works, but TCP connections just say "Connection closed". Do you have an example of TCP ingress?
@@justmeandopensource I was able to get it working with a tcp config map and following certain examples www.ibm.com/support/knowledgecenter/ru/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_expose_probe.html
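For anyone else looking for this: with the kubernetes/ingress-nginx controller, plain TCP services are exposed through a ConfigMap that the controller reads via its --tcp-services-configmap flag. A rough sketch (namespace, service name and ports are placeholders; the linked guide has the full procedure):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format is "<external port>": "<namespace>/<service>:<port>"
  "5432": "default/postgres:5432"
```

You also need the controller's own service (or host ports) to expose the external port, since ingress rules themselves only cover HTTP/HTTPS.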
Thank you for the tutorial. May I know how to set up sticky sessions (for a stateful application) in this environment? Should I configure it in haproxy or in the ingress?
Actually it can be done at the ingress level as well, it seems, by adding appropriate annotations to the ingress resource. kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
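A sketch of an ingress resource using the cookie-affinity annotations from that page (these annotations belong to the kubernetes/ingress-nginx controller; host, service name and port here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc
            port:
              number: 80
```

The controller then sets a "route" cookie on the first response and pins subsequent requests from that client to the same backend pod.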
How does the connection happen between the haproxy load balancer and the ingress controller when someone hits the haproxy IP? As I see it, haproxy load balances to the worker nodes only, not to the ingress controller. Correct me if I am wrong.
Ingress controllers are bound to host ports on their respective worker nodes. HAProxy sends traffic to port 443 on the worker nodes, where the ingress controller is bound. Hope it makes sense.
Hello, this setup works locally, right? I want to expose my application over the internet. I tried with a basic ingress YAML file, deployed a Laravel application and created a service. To expose it on the internet I just ran minikube tunnel, exposed the external IP and tried it in the browser, but the app is not loading. Is my approach correct, or what do I have to do to expose my app on the internet with minikube? Please guide me.
Hi! Nice video. I have a question; could you help me please? Is it possible to make an IP whitelist with ingress for TCP at layer 4? I used annotations, but they only worked at layer 7. In my k8s cluster I have a postgres db on Stolon and I would like to limit access to it by IP. I use kubernetes/nginx-ingress on bare-metal k8s.
Hi Martin, I think pod network policies might help you achieve what you want. I have it in my queue to be recorded. Basically, using pod network policies you can define network access between pods, both ingress and egress. Thanks, Venkat
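As a rough illustration of the idea (labels, CIDR and port are placeholders, and this only takes effect if your CNI plugin enforces NetworkPolicy), a policy restricting ingress to the db pods to a single source IP might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-postgres-from-one-ip
spec:
  podSelector:
    matchLabels:
      app: postgres           # placeholder: label of the db pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.50/32   # the only client allowed in
    ports:
    - protocol: TCP
      port: 5432
```

Unlike ingress annotations, this filters at the network layer, so it applies to plain TCP traffic as well as HTTP.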
Hi Venkat, thanks for the wonderful video. I have created instances in GCP and the ingress setup is done. Please advise me on how to check whether the setup is working in GCP. I have used 1 HAProxy server, 1 master and 2 worker nodes.
Very helpful tutorial, just one quick question. I assume that since you have the cluster running on containers, the reason you are able to execute kubectl commands from the host machine is some sort of rule on your .zshrc file? If so, could you please explain how that is accomplished? I tried using an alias such as alias kubectl='lxc exec kmaster kubectl'. And while this works just fine for listing resources and what not, the forwarding of the command breaks when you need to add flags. So while I can run 'kubectl get nodes', if I try to run 'kubectl get nodes -o wide' it breaks.
Hi Jose, thanks for watching this video. I covered the kubeconfig details in various other cluster provisioning videos. I had an assumption that viewers watched all my previous videos. That's why I don't repeat all the information in every video. So you are using lxc containers for your Kubernetes cluster? I copy the /etc/kubernetes/admin.conf file from the master node to my host machine as $HOME/.kube/config. I also download the kubectl binary and move it to /usr/local/bin. Hope this helps. If you are stuck, give me a shout again. Thanks.
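For the alias problem in the question above, a shell function (instead of an alias) forwards flags correctly — a sketch assuming the master container is named kmaster:

```shell
# An alias cannot forward arguments reliably; a function passes "$@" through,
# so flags like "-o wide" reach kubectl inside the container.
# "kmaster" is the LXC container name used in this series.
kubectl() {
  lxc exec kmaster -- kubectl "$@"
}
```

That said, copying the kubeconfig out of the container (e.g. lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config) and using a local kubectl binary, as described above, is the cleaner approach.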
Hi, what do you use for the zsh terminal prompt? It looks nice, especially how the history commands come up automatically. I have been using python powerline, but haven't configured the internals.
Hi Ashish, thanks for your interest in this video. Actually I have done a video on my terminal setup. ruclips.net/video/soAwUq2cQHQ/видео.html But this was a long time ago. I have since moved to a whole different setup using the i3 tiling window manager. ruclips.net/p/PL34sAs7_26wOgqJAHey16337dkqahonNX Cheers.
How do Nginx Ingress and MetalLB compare? Can I use both together? MetalLB alone in layer 2 mode suffers slow failover; will ingress help to solve this?
Hi Chai, thanks for watching. Fundamentally they both serve different purposes. Please check the below discussion for more understanding. superuser.com/questions/1522616/metallb-vs-nginx-ingress-in-kubernetes#:~:text=Metallb%20is%20a%20load%20balancer,provides%20routing%20to%20different%20routes.
Hi Venkat, can we use the same haproxy server that we are using for the control plane in a multi-master Kubernetes architecture, or do we have to create a new haproxy server for the workers?
Hi Atul, thanks for watching. If they are on different ports then you can use the same haproxy and you can configure additional backends. For control plane, you must have configured the haproxy frontend to listen on 6443 and load balance it with backend control planes on 6443. So you can add another frontend/backend block for your worker nodes for ingress controller. Thanks.
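A sketch of what that haproxy.cfg could look like — the IP addresses and server names are placeholders for your own nodes:

```
# Existing control plane load balancing on 6443
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check

# Additional frontend/backend pair for the ingress controllers on the workers
frontend http-in
    bind *:80
    mode tcp
    default_backend ingress-backend

backend ingress-backend
    mode tcp
    balance roundrobin
    server kworker1 172.16.16.201:80 check
    server kworker2 172.16.16.202:80 check
```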
Hi Christian, thanks for watching. Apologies for the delay in responding. The ingress controllers on the worker nodes bind to ports 80 and 443 on the worker node through hostPort. You can check that by looking at the output of kubectl describe on the ingress controller daemonset/deployment and searching for hostPort.
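For reference, the relevant part of the container spec in the daemonset looks roughly like this (an excerpt; exact fields vary between ingress controller versions):

```yaml
ports:
- name: http
  containerPort: 80
  hostPort: 80    # binds port 80 on the worker node itself
- name: https
  containerPort: 443
  hostPort: 443   # binds port 443 on the worker node itself
```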
Hi Michael, Thanks for watching this video. Metallb is a load balancing solution specifically for bare metal clusters. If you happen to have your cluster with one of the cloud provider, you can just create a service of type LoadBalancer and the cloud provider will automatically create load balancer. Metallb is for load balancing and Nginx ingress is for traffic routing. To use nginx ingress controller, the service will have to be of type ClusterIP. The load balancer will be external to the cluster. Hope this makes sense. Thanks.
@@justmeandopensource I'm in the process of setting up a production on premise cluster. I have the cluster set up (using kubespray), but now I'm trying to merge the concepts of video 31 and 32 together so I have metallb and an ingress-controller. Thanks for the great resources you provide.
Hi Sudhesh, thanks for watching. Both of these are developed and maintained by different communities. Otherwise the concepts/features are quite similar. I used to use the ingress from Nginx Inc but later started using the Kubernetes-based nginx-ingress. I haven't used them extensively enough to see the difference.
Hi Hari, thanks for watching. The idea is to access your app via a DNS name (for eg: nginx.example.com). Since we don't have a DNS server for this demo, we can make a DNS entry in our local /etc/hosts file on the machine from which you are trying to browse nginx.example.com. In this video, it was my Linux host machine where I edited the /etc/hosts file.
@@justmeandopensource Thanks for the quick reply. My k8s nodes are VMs. So, in that case, what would be the way to access it (for eg: nginx.example.com)? Please suggest.
@@harihari579 From where will you be accessing nginx.example.com? So you will be opening a web browser and entering nginx.example.com. On what machine will you be doing this? On that machine you will have to make sure nginx.example.com resolves to the IP address of the haproxy. If it's a Linux machine, like what I have shown in this video, update /etc/hosts. If it's a Windows machine, you can still update the hosts file, but it will be located somewhere else. I am sure you can google it.
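For example, assuming the haproxy machine has the IP 192.168.1.40 (a placeholder), the /etc/hosts entry would be:

```
192.168.1.40    nginx.example.com
```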
Thanks for the tutorial. I am facing one issue, getting the error "I0211 06:49:24.970683 1 manager.go:215] Starting nginx 2022/02/11 06:49:24 [emerg] 10#10: open() "/etc/nginx/conf.d/default.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:92 nginx: [emerg] open() "/etc/nginx/conf.d/default.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:92". Any idea what the problem is? I am following the steps as per your tutorial. Thanks.
Very nicely explained. I just wanted to know about the system information widget (battery life, networking, processor) and which package is required to install it on Ubuntu.
Hi Hanuma, thanks for watching. The widget that you see on the right side of my screen that shows various system information is conky. You have to install conky software and have a conky configuration file. You can search online for ready to use conkyrc configuration or you can customize as per your need. Cheers,
Thanks a ton for the tutorial. Got it up and running rather quickly with the examples you provided, now to take that knowledge into my own ingress ventures.
Hi Anthony, many thanks for watching this video and taking time to comment. Cheers.
Hi,
How does the request from haproxy to the worker nodes flow to port 80? In my case, when I configured haproxy backends with the worker IPs on port 80, it reports connection refused on 80. How does an ingress controller open port 80 on the worker nodes? Any suggestions?
Interesting, informative and really illuminating for me as a K8s learner! Thanks!
Glad it was helpful! Thanks for watching.
Your videos are the best. Easy to follow and very clear. The context you provide at the beginning of each video is perfect. I have learned so much from your instructions in the last couple of weeks to set up my K8s infrastructure. Really, thanks for such great-quality content.
No worries. Thanks for watching.
This is a great video that explains nginx ingress very well. Some viewers might be trying to run this on VPS servers in the cloud using the new lxd/lxc version and can't get haproxy to work. You run lxc config device add haproxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 (note: haproxy in the command is the name of the lxc container). So if you followed the video and haproxy is not forwarding the traffic, you may need this command, or check whether a firewall is enabled. Another helpful command, from inside the haproxy container, is haproxy -c -V -f /etc/haproxy/haproxy.cfg, which checks that your configuration is valid before starting/restarting the haproxy service. Thank you for putting this video series together; it is one of the best ones out here.
Hi James, many thanks for sharing this info. It will really be of great help to others looking for a proper implementation. Cheers.
Thank you so much for all your tutorials. You do a fantastic job. I look forward to continue learning from you.
Many thanks for watching. Cheers.
This is a brilliant tutorial. I really enjoyed the simple, nicely paced, step-by-step approach. Great work.
Hi, Thanks for watching.
AMAZING work, your didactic is on point and doing it hands-on is exactly what I needed. Gonna watch the whole playlist for sure!
Hi Felipe, many thanks for watching. Bear in mind some of the videos might be outdated and I am relying on viewers to tell me whether something is broken so that I can do a follow up video with latest versions of softwares. Cheers.
Nice video, bro!! Your content on bare metal Kubernetes helps me a lot.
Hi Diego, Thanks for watching.
Excellent! Simply wow. நன்றி (Thank you!)
Thanks for watching this video. மகிழ்ச்சி (Glad to hear it!)
I loved it. Please post videos frequently with real-time scenarios, and I request you to do videos on Jenkins as well.
I will continue to do my best. Thanks for watching.
Great complete ingress controller. Thank you.
Hi Godfrey, many thanks for watching. Cheers.
Glad that you are making these great videos for us 💐💐
Hi Siva, thanks for watching. Cheers.
Very good tutorial, thank you for sharing. Tested on a Kubernetes cluster that runs behind Rancher 2 on Hetzner servers. Next step is to test Traefik.
Hi, Thanks for watching this video.
Thanks a lot, bro, for this tutorial. All my questions are cleared.
Glad to hear that. Thanks for watching.
Your channel is a life saver
Hi Yohan, thanks for watching.
All your videos are excellent. Keep up your good job
Hi Arun, thanks for watching.
You're simply the best🤟
Hi Rayehan, thanks for watching. Cheers.
Great tutorial, clearly explained
Hi Ale, thanks for watching.
If I understand you correctly, we need HAProxy for Ingress to work? Is HAProxy a prerequisite for Ingress?
No. HAProxy isn't a requirement for ingress.
Thanks a lot for this video; very clear, it helps me a lot !
Hi Chris, thanks for watching. Cheers.
This video is magic. The best explanation of an Ingress Controller for bare metal Kubernetes!!
Hi Cedrick, thanks for watching.
Best ingress tutorial I've ever seen. Great, man!
Hi Martin, thanks for watching. Cheers.
Just me and Opensource Are you planning to release the Traefik v2 tutorial? There are big changes compared to v1, and there are also problems with the API versions in Kubernetes 1.16.2 where many things are deprecated. I can't get Traefik v2 up and running as a DaemonSet. Thank you in advance for your reply.
@@martin_mares_cz I don't have Traefik v2 in my list but will add. I have videos scheduled for the next two months. And lot more videos in the pipeline to be recorded. Thanks.
Thank for sharing your knowledge
Hi Abdul, thanks for watching. Cheers.
amazing video
Hi, thanks for watching. Cheers.
Great video, thanks. One quick question: is the ingress controller pod exposed to HAProxy directly? I don't see you use "hostNetwork: true"?
Hi Xiuhua, thanks for watching. If you do kubectl describe on the ingress controller daemonset, you will see that it binds to the host port on the underlying worker node.
Hi There,
Nice video bro!
I have one question: on the HAProxy you configure all the IP addresses of the worker nodes.
What if you scale the cluster out or in (add or remove worker nodes)? Do you then have to manually change the configuration on the HAProxy?
Also, if the worker nodes get their addresses via DHCP and somehow an IP changes, the config needs changing too. Do you have a solution for this?
Thank you very much.
Hello Sir, I just watched your video and will follow your instructions to try it tomorrow, then get back to you. From what I've seen, you've made an excellent tutorial on the Ingress Controller (application load balancer) and HAProxy (network load balancer) for a bare-metal Kubernetes cluster. That's exactly what I am looking for at this moment. You're very hands-on. Great job. Subscribed. Thank you.
Hi Michael, thanks for watching. Cheers.
Hello thank you for this video !
Do we need to link the load balancer only to the nodes that have an ingress controller, or to all of them?
Thank you very much ❤ for providing such good video..
In this video you have used haproxy for routing, exposed over a private IP. How will clients access the nginx application with a domain name?
That will be via the ingress resource, which defines the hostname-to-service mapping. Thanks for watching.
How does the haproxy discover the ingress pods through the node-ip:80 lines in the haproxy.cfg without any service defined with the nodeport set to 80?
Hi Walid, thanks for watching. When you deploy ingress controllers in your cluster, the ingress controller pods bind to port 80 on the worker nodes they are running on. HAProxy load balances the traffic across all worker nodes. When a request is received, HAProxy routes it to one of the worker nodes on port 80, where the ingress controller pod is listening, which in turn routes it to the appropriate service. And the service routes it to one of the backend pods. Cheers.
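The last hop in that chain is defined by the ingress resource. A minimal sketch using the current networking.k8s.io/v1 API (the video used an older apiVersion; hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy-main   # placeholder ClusterIP service
            port:
              number: 80
```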
Hello Venkat, again, excellent explanation of the topic. I read the documentation about ingress, where they mention nginx, ingress controller, load balancer etc. It was all with respect to some cloud provider and not about bare metal k8s clusters. It was all so confusing.
Your component and flow diagram made the concept crystal clear. Since it is bare metal, I can practise it in my home lab. Today your video quality was max 360p, so it was difficult to read the text; maybe because it was just uploaded. Tomorrow I will do the hands-on in my home lab.
One suggestion on the demo container/pod. I generally use the hashicorp/http-echo image to show different pods, or different containers in a single pod, as below. It might make your demos easier than using nginx.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fruit-deployment
  labels:
    app: fruit
spec:
  replicas: 4
  selector:
    matchLabels:
      app: fruit
  template:
    metadata:
      labels:
        app: fruit
    spec:
      containers:
      - name: apple-app
        image: hashicorp/http-echo
        args:
        - "-text=response from apple-app"
        - "-listen=:6000" # default container port is 5678
        ports:
        - containerPort: 6000
      - name: banana-app
        image: hashicorp/http-echo
        args:
        - "-text=response from banana-app"
        - "-listen=:6001" # default container port is 5678
        ports:
        - containerPort: 6001
Hi Ajit, Thanks for the http-echo container suggestion. Looks good.
I just checked my video and I can see all the video playback qualities. I can switch to 720p or 1080p for high resolution. Have you checked if you can change the video quality setting? Depending on your internet connection speed, youtube will automatically select appropriate quality.
@@justmeandopensource Strange. Regarding resolution, I watched your video in the Chromium browser on Windows, where your video has a max resolution of 360p, whereas other channels have the normal higher resolutions. I checked your video on Google Chrome and it has higher resolutions, up to 1080p. I will use that browser :)
Yeah, I just googled the issue and there were a lot of discussions about it, where not all the video qualities are listed on some browsers.
Hello Venkat, I have a question regarding the HAProxy. Can this load balancer not be provisioned as a pod inside the k8s cluster?
I saw that you made a separate VM for it.
I'm asking because I use VPSs for my k8s cluster.
Thanks and regards.
Hi Knight,
Thanks for watching this video. Although I haven't tried it, haproxy can be provisioned inside the cluster itself as a pod, but it involves a lot of configuration to make it work. Deploying haproxy as a container/pod isn't difficult; you will then have to create a service for it to expose it outside of the cluster. Lots of port mappings involved.
The below link might give you some direction.
www.bluematador.com/blog/running-haproxy-docker-containers-kubernetes
You mentioned you are using VPS. You can install haproxy on the master node itself, and don't have to use a separate VM for it.
Thanks
Wonderful work
Thanks for watching.
Great tutorial, thank you.
Hi Aryadi, thanks for watching.
Great demo!! Thank You
hi, thanks for watching.
Fantastic video. I've seen your MetalLB videos too. My question is: if I deploy nginx-ingress-controller as a daemonset on the 4 physical nodes of my cluster at home, expose the ingress deployment as a NodePort service on port 31111 and then attach haproxy to this, why do I need MetalLB to load balance?
Thanks Venkat. It's such great stuff that I've missed this long.
In your k8s installation process, you're installing the latest k8s version; you might want to pin it to a specific version, something like below. I'm using Ubuntu, so it looks as below.
# Install Kubernetes
echo "[TASK 9] Install Kubernetes kubeadm, kubelet and kubectl"
apt-get install -y kubeadm=1.17.1-00 kubelet=1.17.1-00 kubectl=1.17.1-00
apt-mark hold kubelet kubeadm kubectl
Hi Sesh, thanks for watching. Yes, I could have locked it down to a specific version. I have different Kubernetes setup videos, and I think on some of them I do lock it down to a known working version of Docker and Kubernetes. I will have to update the GitHub docs. Cheers.
Thanks Venkat. Yeah, I realised it later while covering your other videos.
Also, I have a scenario here and I'm not sure if you've covered it; if so, could you please point me to the correct clip.
I have a k8s cluster (with 3 nodes) running in my local wifi network. The vagrant network looks as below. I've chosen this way because I've built another db server (postgres) as a standalone box running outside k8s in the same wifi network (192.168.1- subnet). I'd like the pods to communicate with it, and it works fine using IP and port from the pod.
If I try to create a headless service something like below, it doesn't work. I use the service name from my pod. I'd like to use the name instead of the IP of my db server.
Any suggestions please.
apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  type: ExternalName
  externalName: 192.168.1.13
Vagrant
kmaster.vm.network "public_network",bridge: "en0: Wi-Fi (Wireless)",ip:"192.168.1.30"
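One likely cause: type ExternalName expects a DNS name, not an IP address. The usual pattern for a bare IP is a selector-less Service plus a manually managed Endpoints object — a sketch, assuming Postgres listens on 5432:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgre
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgre          # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.13     # the external Postgres box
  ports:
  - port: 5432
```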
Just found your videos on Kubernetes, Kubespray and nginx ingresses. You are very good at explaining the default behaviours, which gives the highest chance of success.
The nginx docs explain that in the default server secret file they provide a default self-signed cert and key, and they recommend using your own certificate. Things to note: the cert and key are base64 encoded (again), so keep this in mind when you add the cert to the default-server-secret.yaml file.
Also, if you are using Windows to generate the keys, make sure you remove the CR characters (^M) before base64 encoding the cert and key. Otherwise you'll get an error when trying to start the nginx-ingress pods.
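A sketch of that preparation step (filenames are placeholders; base64 -w0 avoids line wrapping in the Secret value):

```shell
# Placeholder cert file for illustration; in practice this is your real PEM cert.
printf -- '-----BEGIN CERTIFICATE-----\r\ndummydata\r\n-----END CERTIFICATE-----\r\n' > tls.crt

# Strip Windows CR (^M) characters, then base64-encode without line wrapping;
# the result is what goes into the Secret's data field.
tr -d '\r' < tls.crt | base64 -w0 > tls.crt.b64
```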
Hi Rik, thanks for watching and sharing your thoughts.
Hi sir, just wondering if I can run haproxy on the master node itself, or do I need a separate VM for this?
Hi Kunchala, thanks for watching. Yes, you can run haproxy on the master node itself, or on any of your existing Kubernetes nodes, if it's for learning or development purposes.
Thanks in advance for the great tutorials. By any chance, is there any load balancer that can support Diameter and be used inside a K8s cluster?
Dude, you helped me a lot, thanks!
No worries. You are welcome.
Hey, fantastic content, I’m a fan!
Just one question: how would you manage if the worker nodes get scaled out or in or if the IP addresses change? Is there a way that the HaProxy Config automatically stays in sync with the cluster?
Hi, thanks for watching. In this video, I used HAProxy for proxying to worker nodes where ingress controllers are listening. But in recent versions of ingress, you don't need this external load balancer. You can make use of MetalLB. So don't worry about configuring and maintaining the haproxy with dynamic worker node details.
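For reference, a minimal MetalLB layer 2 sketch using the older ConfigMap-based configuration (newer releases use IPAddressPool/L2Advertisement CRDs instead; the address range is a placeholder from your LAN):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder free range on your network
```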
@@justmeandopensource thanks, very helpful tip! So you would consider MetalLB already fit for production environments?
@@justmeandopensource Thanks Venkat!
@@sticksen you are welcome
Excellent session! I do come across a small snag when deploying nginx-ingress: creating the DaemonSet (kubectl apply -f daemon-set/nginx-ingress.yaml) as shown in your demo works. On the other hand, if I choose to create a Deployment (kubectl apply -f deployment/nginx-ingress.yaml), then all requests via HAProxy fail with 503! Is there a hack that needs to be applied? Thank you Venkat
Hi, thanks for watching. I haven't actually tried the deployment type. I always went for the daemonset as my dev cluster has only a few nodes. I think it's the haproxy configuration that needs to be tweaked, but I'm not entirely sure.
Why is port 80 not listening when using the ingress controller?
Hi Mohammad, thanks for watching. I have replied to this in a different comment.
Completed the exercise "without" HAProxy for now (by referring to the nodes directly).
One question. There is a lot of hard coding in this solution:
1. In HAProxy, we need to hard code the node IPs.
2. In Ingress, we need to hard code the service and port details.
What if the nodes are dynamic (addition, deletion, replacements)? Similarly, what if the services are dynamic?
How to tackle the above situation?
Hi Ajit,
Yes node IPs are hardcoded in HAProxy configuration. You need to update the configuration if you make changes to your nodes.
There are automated dynamic solutions for this. You can have a script to do it or you can use Hashicorp's Consul architecture. I haven't explored either.
www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
www.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/
About your second point. That's the way ingress works. You create an ingress resource specifying where to route the traffic: which service to route to and on which port. Your application will always be running on the same port. You have to decide this and create the service and ingress resource.
I haven't come across any best-practices documentation/article for ingress in Kubernetes. Most articles describe the basic setup at a very high level. I just want to introduce this concept to the viewers. All my videos are for beginners, so I don't go in depth into any of the topics. Just a getting-started guide.
Thanks,
Venkat
@@justmeandopensource Thanks for the resources for dynamic HA Proxy load balancing. Would look into that. I would let you know if I come across the solution for dynamic ingress update.
Hi Venkat,
Great video! I am planning to use a public DNS such as noip.com and set up port forwarding on a router to reach a microservice backend. How can the HAProxy reach the service if the service has an internal IP from the cluster?
Hi Venkat, thanks for this great video. One question though: I still couldn't understand how haproxy is able to connect to the worker nodes on port 80. We only have the ClusterIP service created, and the ingress resource has routing in it pointing to the ClusterIP service. There is no NodePort or LoadBalancer service to access it from outside the Kubernetes cluster. I was trying to get it working by following your video. If I run kubectl get all -n nginx-ingress after the steps, I see only the nginx-ingress pods and the daemonset in the nginx-ingress namespace. get all (without a namespace) only gives the nginx pod and the ClusterIP service pointing to it. I am wondering how it works without a NodePort or LoadBalancer service for connecting to the worker nodes from haproxy? As per the haproxy configuration, it directly uses the IP addresses of the worker nodes and port number 80. Looks like I am missing something ...
Hi Nevin, thanks for watching. Have a look at the output of kubectl describe on the daemonset. The ingress controller pods are deployed as a daemonset, so there will be one ingress controller pod on each worker node. They use hostPort to bind to ports 80 and 443. This will be clear when you look at the kubectl describe output. Cheers.
excellent video, thank you!
Hi Larper, thanks for watching.
Hi Venkat, thank you for another excellent video. I got it working in your vagrant environment. I also tried it on a cluster created via the-hard-way (Kelsey Hightower), but that didn't work; it looks like iptables are blocking port 80. I'm just wondering how the iptables rules are set up (probably done by kube-proxy). It is hard to find this info. Maybe a suggestion: make a video about the network setup, the protocols, ports and their flow, and how the iptables rules are set up. But again, thank you for taking the time to make these videos and sharing them with us.
Hi Michel, thanks for watching this video. Thats interesting. When I get some time I will test ingress on the cluster set up the hard way. Thanks.
Hi Venkat, I've found the issue. It is actually quite simple. I got triggered when I was doing your Prometheus video on my "the-hard-way" cluster. I noticed the prometheus-node-exporter pods got the IP addresses of the worker nodes. Normally it is more secure to have the pod IP address range used for the pods. So I noticed that if the hostNetwork parameter is set to true, the IP addresses of the hosts are used! So I changed the ingress file daemon-set/nginx-ingress.yaml by adding this parameter and now it all works!!!
@@michelbisschoff6993 Thats great. I learnt something new today. Thanks for that. Cheers.
Hi Venkat,
when we are doing it on AWS, do we need to set up the haproxy server?
I have set up app1 as Jenkins and app2 as Nexus; they got their own external IPs. Do I need to create haproxy for these two apps on AWS? Any input would be much appreciated.
Thank you for sharing your knowledge; it's helping a lot ...
Hi Sunny, thanks for watching this video and taking time to comment. I am just starting to explore public cloud especially AWS. Most of my Kubernetes videos are around bare metal. Looking at the logic, if you are using a public cloud, you may not need to use a haproxy service. In AWS you can use elastic load balancers for that. The following article might clear your doubts around traffic routing.
medium.com/@chamilad/load-balancing-and-reverse-proxying-for-kubernetes-services-f03dd0efe80
Thanks
@@justmeandopensource thanks for the article. ,🙂
@@sunnynehar You are welcome. Cheers.
I am getting the error "Failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)"
Hi Saurabh, thanks for watching. At what point in this video you are getting this error? I have never seen this before.
github.com/nginxinc/kubernetes-ingress/issues/783
May be try using different version of kubernetes-ingress from github.
I had the same issue, using v0.3 of the ingress controller. Follow the instructions to install the latest version from :
docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
Then test with the following rules:
kubernetes.github.io/ingress-nginx/user-guide/basic-usage/
That solved it for me
Typo — version 1.6.3 at the time of writing.
@@michaeldundek9573 Thanks..but I have already resolve :D
@@sourabh729 Perfect.
I see haproxy is outside the K8s cluster, running on port 80 and load balancing to port 80 of the workers, but nothing is reachable on port 80 of a worker from outside. The ClusterIP service on port 80 is internal; how can HAProxy from outside reach the nginx ClusterIP service running on port 80?
Hi Pankaj, thanks for watching. The ingress controllers you deploy bind to port 80 on the worker nodes. HAProxy sends the traffic to one of the worker nodes on port 80, where the nginx ingress controller is listening. The ingress controller then routes the traffic to the appropriate service (ClusterIP), which sends the traffic to the backend pods.
Cheers.
@@justmeandopensource I see, probably I missed the ingress controller..
@@PankajGupta-jz2gb no worries.
Hi, thanks for the video. I have followed the exact same steps to create an nginx controller using a daemonset; however, I am not able to browse the app deployed in the pods. I have noticed that ports 80 and 443 are not getting exposed on the worker nodes despite creating the daemonset multiple times. What can be the reason for this? I am using Weave Net.
Are all your ingress related pods running fine? Have you setup haproxy as shown in this video?
Hi Venkat,
I tried ingress in GKE (Google Kubernetes Engine).
1) Created my pods.
2) Exposed them through a LoadBalancer service type (it was working with http).
3) Then I configured ingress; it gives the error "Some backend services are in UNHEALTHY state".
Can you please suggest any possibilities for configuring it with https?
Hi Surendar, thanks for watching. I haven't used this setup in Google Cloud yet, so I can't be sure of your problem. If I get some time I will test this.
Great content, greatly appreciate all the kubernetes tutorials!
Hi Edwin, thanks for watching.
When my cluster has 2 worker nodes, it's successful. But when I scale it to 3 worker nodes and run "kubectl create -f daemon-set/nginx-ingress.yaml" (daemonset), there are always two containers in the Running state while the other is stuck in ContainerCreating. I don't know why. Can you give me some suggestions? Thank you!
Hi Vu, thanks for watching this video.
It doesn't matter how many worker nodes you have in your cluster. As we are deploying the ingress controller as a daemonset, it should get deployed on all worker nodes. You mentioned it is in "ContainerCreating" state on the 3rd node, which means the daemonset controller is doing its job.
The reason it's still in ContainerCreating state might be one of various things.
You can check the status of the daemonset using "kubectl describe ds nginx-ingress" and check the events section at the bottom. You can also get the name of the pod that is in ContainerCreating state and run "kubectl describe pod" on it to see what it is trying to do.
Is the 3rd node okay with other deployments?
Just run the below command to see if a simple nginx container can be run on that node.
"kubectl run nginx --image nginx --replicas=3"
And check if the nginx container is running on the 3rd worker node.
Otherwise you might have some network problems on the 3rd node.
Also check if you can reach internet (ping outside world) from the 3rd node.
Thanks,
Venkat
Hello thanks for your content.
I got 2 small questions.
First: what is the point of an ha-proxy, since even if we point to the same node, the svc of type NodePort will load balance between pods?
Second: as a Mac user, how can we communicate from the host to the cluster? On Mac, Kubernetes (Docker for Mac) uses a hidden VM.
Hi Gilles, thanks for watching.
1. Yes but if you want to expose your application with a DNS name (eg: myapp.example.com), what entry would you add in your DNS? Would you add myapp.example.com with an IP address of one of the worker nodes? What if that worker node goes down? You will then have to update DNS for myapp.example.com with the ip address of another worker node. Just to simplify this process, we use HAproxy or any other load balancer so we don't have to worry about underlying servers (worker nodes) and you don't have to update DNS for myapp.example.com often.
2. I haven't tried this on Mac with Docker for mac. So I am afraid I can't comment on that. I am a Linux person by birth.
Hello Venkat, it is me again :) with yet another question :). I was wondering how, with Kubernetes, I can manage to request a pod from another pod. I have a web service in Python with Flask inside a container that I can request thanks to NodePort, but I want that web service to also send requests to TensorFlow Serving (a container that, when requested, returns a series of probabilities). Should I expose a service for the TF Serving too?
Yeah, that's the way to do it — expose the TensorFlow pod as a service and access it from the other pod via the service name.
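A sketch of such a service for the TensorFlow Serving pod — the name and labels are placeholders; 8501 is TF Serving's default REST port (8500 for gRPC):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tf-serving
spec:
  selector:
    app: tf-serving        # placeholder: must match the serving pod's labels
  ports:
  - name: http
    port: 8501             # TF Serving REST API
    targetPort: 8501
```

The Flask pod can then call it at http://tf-serving:8501 via cluster DNS.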
Hi, since the Nginx controller pod is inside the cluster, how can haproxy reach the ingress controller pods? Thanks
Hi Mike, thanks for watching. If you take a look at the output of kubectl describe of one of the nginx ingress controller pod, you will notice that it binds to the host port on the worker node it is running. And haproxy's backend configuration points to these worker nodes on the ports where the ingress controller pods are bound.
Hi,Bro
In the load balancing pool of HAProxy you used node IP:80 as the pool members. Is there a service (type=NodePort) for the NGINX ingress controller? I didn't see you create a service for the ingress controller in the k8s cluster. BTW, nice video, many thanks.
Hi Yi, thanks for watching. If you look at the nginx ingress daemonset that you deployed in your cluster, the pods do hostPort binding on ports 80 and 443.
@@justmeandopensource aha,I saw it , many thanks
@@yijiang6037 No worries. You are welcome.
Hello. After installing the nginx-ingress controller I get a nginx-ingress-nginx-ingress service of type LoadBalancer and the EXTERNAL-IP stays in pending, waiting for an LB to be created (which won't happen). In this case I understand the service should be a NodePort; however, I didn't see any modifications during the video. I am using a BigIP LB. Is this still valid, or is there any advice on this? Thanks
I haven't used F5 Big-IP load balancer before. I am afraid I can't comment on this.
@@justmeandopensource Hey, thank you for your prompt reply. In the case of using HAProxy as the LB, does the nginx-ingress-nginx-ingress service need to be set up as a LoadBalancer or a NodePort service? Thanks!
@@pablodamico5729 The HAProxy load balancer sits outside the Kubernetes cluster and balances between the k8s nodes. Within the cluster you need a load balancer capable of handing out IP addresses to LoadBalancer-type services. MetalLB is one such solution; there is also kube-vip. This is different from HAProxy. Cheers.
Thanks very much for your tutorials. I have a question: I'm struggling to deploy an ingress controller of type LoadBalancer and have HAProxy give it an IP and connect to it?
Excellent 👍
Thanks for watching. Cheers.
Great video Venkat
Hi Gary, thanks for watching. Cheers.
You have given :80 in HAProxy default backend config but where have we configured Nginx ingress controller to listen on port 80 of worker nodes for incoming traffic from Load balancer? Thanks Venkat.
Have a look at the definition of the ingress daemonset (kubectl describe daemonset). You will find that the ingress controller pods on each worker node use hostPort to bind to ports 80 and 443. (github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml)
Thanks for the video. Can you tell us about other ways of creating a cluster instead of LXC?
Hi Venkat, I followed the same procedure, but path-based routing is not working. Could you give any inputs?
Hi Nagaraju, thanks for watching this video. I was playing with path-based routing while recording this demo, but it didn't work for me either. I will have to play with it a bit more to get it working, and if I get anywhere I will definitely do a video on it.
You could also try the Traefik ingress video and see if it works with it.
ruclips.net/video/A_PjjCM1eLA/видео.html
I haven't verified it with Traefik, but worth trying.
Thanks.
Many thanks for your quick reply Venkat. I am pleased :)
@@nagarajuyarlagadda4997 No worries. You are welcome.
Shouldn't there also be a NodePort service created to allow access from HAProxy to the ingress controller?
Good video, but it would be great if you could explain a bit more about the ingress controller.
Yeah, maybe in another video. Thanks for watching.
I have seen other videos where the ingress deployment is also exposed through a NodePort service (so load balancers can hit the nodes, from where traffic is routed based on ingress resources). Here I am seeing that the load balancer can hit the worker nodes on port 80 (as defined in the HAProxy config) even though a NodePort service was never created. Is it because we are using a daemonset instead of a deployment to set up the ingress controller? My question is basically: how can the load balancer access the worker nodes on port 80?
Hi Siddiqui, as commented previously, here is the explanation.
The nginx ingress controllers are pods running on each of the nodes (since it is a daemonset). The ingress controller pods bind to ports 80 and 443 on their respective worker nodes. HAProxy, which is outside the cluster, load balances the traffic between the worker nodes on port 80, where the ingress pods are listening. The ingress pods then route the traffic to the appropriate service, which in turn sends the traffic to the backend pods. Hope this makes sense. Cheers.
@@justmeandopensource Thanks for the reply. I researched it and found that if you deploy the ingress controller as a daemonset, it makes sure any traffic on port 80 reaches it (basically, listening on port 80 comes out of the box). This wouldn't work with a deployment in place of a daemonset, as we would need one more service to expose the ingress.
@@sariksiddiqui6059 Yup. You are right. Since my cluster was small I decided to deploy it as a daemonset instead of deployment. I never tried the deployment type for ingress controller.
@@justmeandopensource cool man...really like your videos.One of the best on kubernetes handsdown
@@sariksiddiqui6059 Thanks for your interest in my videos.
hi Venkat,
Wonderful session again; with hands on ingress setup.
I tried similar things using a Vagrant setup instead of LXD; it worked well.
I found one issue with Vagrant though: simply hitting the hostname in the browser on the Windows host doesn't display anything. But from within the HAProxy instance, if I simply curl the 3 hostnames, I get the expected output as shown in the session.
How do I access a Vagrant VM instance in the browser using the hostname instead of the private_network IP?
Hi Bhalchandra, if you are using a Linux machine, you can update the /etc/hosts file with the IP address and VM name and then access it through the name. Similarly, you can do it on Windows as well. The link below might help you.
www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/
Cheers.
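For example, a hosts-file entry along these lines (the IP is a placeholder for your HAProxy/VM address in your setup):

```
# Linux/macOS: /etc/hosts
# Windows: C:\Windows\System32\drivers\etc\hosts
172.16.16.100   nginx.example.com blog.example.com
```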
What if you are running the ingress controller only on worker1 and haproxy hits worker2?
Secondly, what if we run ingress controller on the master node (non-HA)? In that case, should we only provide the IP address of the master in the haproxy backend?
Hi Venkat, thanks for this class. I have tried this tutorial on AWS instances but am getting "site can't be reached". Which IP (private IP or public IP of the HAProxy server) do I have to place in /etc/hosts? Or should I do any other configuration since I am using AWS instances? I am using a security group in which all ports are open.
Hi Raghu, Have you fixed it? If yes.. what have you done? Thanks
Hi Venkat
If we have set up a rule for node provisioning based on CPU, memory, or user request, the total number of nodes will not be the same all the time. In that case, how do we maintain the HAProxy entries?
Hi Kunal, thanks for watching this video. That's a good question.
One other viewer also asked similar question I think.
I haven't researched much about this, but the following reddit post seems to discuss a few possibilities.
www.reddit.com/r/devops/comments/50df4d/ways_to_dynamically_add_and_remove_servers_in/
Hey, one request: can you make a video on how to attach a load balancer like an NLB in front of our Kubernetes cluster so it can load balance between the different nodes?
Awesome!! Thanks a ton!
Hi Aromal, thanks for watching.
Do we always need to register ingress controllers with load balancers (HAProxy in this case)?
Hi Mihir, thanks for watching. You need some form of load balancing.
Take a look at the recent updated video on this topic ruclips.net/video/UvwtALIb2U8/видео.html.
You can use load balancer solution like metallb and can get away without haproxy stuff.
At about 16:02 in the video, you show the search results for ingress and you choose the second one. Why not the first one? What is the difference? can the first one from Kubernetes be used in the same way?
Hi, thanks for watching this video. Since I am doing an Nginx ingress video, I preferred to use the second github link from nginxinc. Functionality or feature wise I don't think there are any differences. I haven't tried the other link you mentioned. I just went through it now and don't see anything different.
Thanks,
Venkat
Hi Venkat, thanks for an eye opener. I am having a question here: how are ingress controllers bound to port 80 of the worker nodes, since we are exposing ingress as a NodePort and NodePort only supports ports in the 30000+ range?
Hi Venkat, thanks for watching. Have a look at the definition of the ingress daemonset (kubectl describe daemonset). You will find that the ingress controller pods on each worker node use hostPort to bind to ports 80 and 443. Cheers.
@@justmeandopensource But the hostPort range is limited to 30000+ only, right?
@@venkatraman3653 It's nodePort whose range is limited to 30000-32767, not hostPort.
@@justmeandopensource
spec:
  template:
    spec:
      hostNetwork: true
These lines should be there in the YAML file of that daemonset, right? Sorry for the series of questions, and thanks for your response.
@@venkatraman3653 Yes. It uses hostNetwork to bind to ports 80 and 443.
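To summarise the distinction discussed in this thread, here is a sketch of the three ways a pod's port can surface on a node (depending on the manifest version, the ingress controller may use either of the first two):

```yaml
# 1. hostPort on a container port: binds that port on the node the pod
#    runs on; any port number is allowed.
ports:
- containerPort: 80
  hostPort: 80

# 2. hostNetwork on the pod spec: the pod shares the node's network
#    namespace, so its listening ports appear directly on the node.
spec:
  hostNetwork: true

# 3. A NodePort service: only this one is restricted to the
#    30000-32767 range by default.
```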
If I don't have multiple nodes, just one master node (like minikube or microk8s), can I just have the ingress controller take requests on my hostname, instead of using the load balancer?
Hi Senthil, thanks for watching. I haven't tried that. But yes you should be able to do that without a load balancer. Cheers.
I am having a hard time configuring this for TCP. I was able to configure it for HTTP, but the instructions here aren't clear. So I defined a config map and am not using a load balancer, as I am on a single node. HTTP works, but the TCP connection just says 'Closed Connection'.
Do you have an example of TCP ingress?
Hi Senthil, thanks for watching. I haven't done any work on using tcp ingress. I will have a play with it and find out.
@@justmeandopensource I was able to get it working with a tcp config map and following certain examples www.ibm.com/support/knowledgecenter/ru/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_expose_probe.html
@@SenthilRameshJV Perfect.
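For anyone else hitting this: the kubernetes/ingress-nginx controller exposes raw TCP through a ConfigMap referenced by its --tcp-services-configmap flag. A sketch (namespace, service name, and port are assumptions for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "5432": "default/postgres:5432"
```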
Thank you for the tutorial.
May I know how I can set up sticky sessions (stateful application) for this environment?
Should I configure it in HAProxy or in the ingress?
Hi, thanks for watching.
I believe it has to be done at the haproxy level.
thisinterestsme.com/haproxy-sticky-sessions/
Actually, it seems it can be done at the ingress level as well, by adding appropriate annotations to the ingress resource.
kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
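A sketch of what those annotations look like on an ingress resource (the host, service name, and cookie settings are assumptions; the annotation keys are from the ingress-nginx cookie-affinity docs):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```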
One pod of the ingress controller says `port 80 is already in use. Please check the flag --http-port`. Any solutions, please?
How does the connection happen between the HAProxy load balancer and the ingress controller when someone hits the HAProxy IP? Because from what I can see, HAProxy load balances to the worker nodes only, not to the ingress controller. Correct me if I am wrong.
The ingress controllers are bound to host ports on their respective worker nodes. HAProxy sends traffic to port 443 on the worker nodes where the ingress controller is bound. Hope it makes sense.
MetalLB and HAProxy are both load balancers? Can I use MetalLB instead of HAProxy?
Hello, this setup works locally, right? I want to expose my application over the internet. I tried with a basic ingress YAML file, deployed the Laravel application, and created a service. To expose it on the internet I just ran minikube tunnel, got the external IP, and tried it in the browser, but the app is not loading. Is my approach correct, or what do I have to do to expose my app on the internet with minikube? Please guide me.
Hi!
Nice video.
I have a question, could you help me, please?
Is it possible to make an IP whitelist with ingress for layer 4 TCP?
I used annotations but they only worked for layer 7.
In my k8s cluster I have a Postgres DB on Stolon and I would like to limit access to it by IP.
I use kubernetes/nginx-ingress on bare metal k8s
Hi Martin,
I think pod network policies might help you achieve what you wanted. I have it in my queue to be recorded.
Basically, using pod network policies you can define network access between pods both ingress and egress.
Thanks,
Venkat
@@justmeandopensource ty for tips!
@@martinambaryan9640 You are welcome. Cheers.
Hi Venkat, thanks for the wonderful video. I have created instances in GCP and the ingress setup is done. Please advise me on how to check whether the setup is working in GCP.
I have used 1 HAproxy server, 1 master and 2 worker nodes.
Very helpful tutorial, just one quick question. I assume that since you have the cluster running on containers, the reason you are able to execute kubectl commands from the host machine is some sort of rule on your .zshrc file? If so, could you please explain how that is accomplished? I tried using an alias such as alias kubectl='lxc exec kmaster kubectl'. And while this works just fine for listing resources and what not, the forwarding of the command breaks when you need to add flags. So while I can run 'kubectl get nodes', if I try to run 'kubectl get nodes -o wide' it breaks.
Hi Jose, thanks for watching this video. I covered the kubeconfig details in various other cluster provisioning videos. I had an assumption that viewers watched all my previous videos. That's why I don't repeat all the information in every video.
So you are using lxc containers for kubernetes cluster?
I copy the /etc/kubernetes/admin.conf file from the master node to my host machine as $HOME/.kube/config.
I also download the kubectl binary and move it to /usr/local/bin.
Hope this helps. If you are stuck, give me a shout again.
Thanks.
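For an LXC-based cluster like the one in this series, those steps roughly translate to the commands below (the container name kmaster is an assumption from the videos; adjust to your setup):

```
# Copy the admin kubeconfig out of the master container
mkdir -p ~/.kube
lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config

# Install kubectl on the host
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
```

With the kubeconfig on the host, plain kubectl works directly, including flags like -o wide, with no alias forwarding needed.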
Hi
What do you use for the terminal zsh prompt? It looks nice, especially the way history commands come up automatically.
I have been using python powerline, but haven't configured its internals.
Hi Ashish, thanks for your interest in this video. Actually I have done a video on my terminal setup.
ruclips.net/video/soAwUq2cQHQ/видео.html
But that was a long time ago. I have since moved to a whole different setup using the i3 tiling window manager.
ruclips.net/p/PL34sAs7_26wOgqJAHey16337dkqahonNX
Cheers.
@@justmeandopensource and your desktop theme? It looks cool too
How do Nginx Ingress and MetalLB compare? Can I use both together?
If MetalLB is used alone in layer 2 mode, it suffers from slow failover. Will ingress help solve this?
Hi Chai, thanks for watching. Fundamentally they both serve different purpose. Please check the below discussion for more understanding.
superuser.com/questions/1522616/metallb-vs-nginx-ingress-in-kubernetes
@@justmeandopensource Do you recommend, let's say, PHP web pods exposed with NodePort and using HAProxy as the load balancer, with no ingress controller?
Hi Venkat, can we use the same HAProxy server that we are using for the control plane in a multi-master Kubernetes architecture, or do we have to create a new HAProxy server for the workers?
Hi Atul, thanks for watching. If they are on different ports then you can use the same haproxy and you can configure additional backends. For control plane, you must have configured the haproxy frontend to listen on 6443 and load balance it with backend control planes on 6443. So you can add another frontend/backend block for your worker nodes for ingress controller. Thanks.
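A sketch of a single haproxy.cfg carrying both kinds of traffic (all IPs are placeholders; the API-server traffic must be mode tcp since it is TLS end to end):

```
frontend k8s_api
    bind *:6443
    mode tcp
    default_backend k8s_api_nodes

backend k8s_api_nodes
    mode tcp
    server kmaster1 172.16.16.101:6443 check
    server kmaster2 172.16.16.102:6443 check

frontend http_ingress
    bind *:80
    default_backend ingress_nodes

backend ingress_nodes
    server kworker1 172.16.16.201:80 check
    server kworker2 172.16.16.202:80 check
```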
How do the worker nodes expose port 80? Are there any iptables rules that the ingress controller creates in the background?
Hi Christian, thanks for watching. Apologies for delay in responding.
The ingress controllers on the worker nodes bind to port 80 and 443 on the worker node through hostPort. You can check that by looking at the output of kubectl describe ingress controller daemonset/deployment and search for hostPort.
@@justmeandopensource thanks for the answer. That solved my doubts
No worries.
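A quick way to check the hostPort binding yourself (the daemonset name and namespace here are assumptions; adjust to your deployment):

```
kubectl -n nginx-ingress describe daemonset nginx-ingress | grep -i hostport

# The DNAT rules themselves are typically created by the CNI portmap
# plugin; the chain name can vary depending on the CNI in use:
sudo iptables -t nat -L CNI-HOSTPORT-DNAT -n
```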
Would it make sense to use MetalLB AND an ingress-nginx controller? Or are these two different solutions to the same problem?
Hi Michael, Thanks for watching this video. Metallb is a load balancing solution specifically for bare metal clusters. If you happen to have your cluster with one of the cloud provider, you can just create a service of type LoadBalancer and the cloud provider will automatically create load balancer.
MetalLB is for load balancing and Nginx ingress is for traffic routing. To use the nginx ingress controller, the service has to be of type ClusterIP. The load balancer sits external to the cluster. Hope this makes sense.
Thanks.
@@justmeandopensource I'm in the process of setting up a production on premise cluster. I have the cluster set up (using kubespray), but now I'm trying to merge the concepts of video 31 and 32 together so I have metallb and an ingress-controller. Thanks for the great resources you provide.
@@michaelpuglisi5108 Alright. It all depends on individual usecases. Thanks.
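One way to merge the two concepts: give the ingress controller's own service type LoadBalancer so MetalLB hands it an IP, and keep the application services as ClusterIP behind ingress rules. A sketch (names and labels are assumptions; match them to your controller's pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: LoadBalancer       # MetalLB assigns an external IP from its pool
  selector:
    app: nginx-ingress     # must match the ingress controller pods
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```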
What is the main difference between the ingress controller maintained by Nginx and the one maintained by Kubernetes?
Hi Sudhesh, thanks for watching. They are developed and maintained by different communities; otherwise the concepts and features are quite similar. I used to use the ingress from Nginx Inc but later started using the Kubernetes-based nginx-ingress. I haven't used either extensively enough to see the difference.
Is there a reason why all the microservice ports are the same? What if the ports are different? Do we have to create that many backend entries in HAProxy?
How does the HAProxy LB point to the ingress controller? The configuration file only points HAProxy to the worker node IPs and port 80.
Thanks for the explanation. I have a concern here: may I know on which host we are adding the HAProxy IP to the /etc/hosts file? Is it the master node?
Hi Hari, thanks for watching.
The idea is to access your app via a DNS name (for eg: nginx.example.com). Since we don't have a DNS server for this demo, we can make a DNS entry in the local /etc/hosts file on the machine from which you are browsing nginx.example.com. In this video, it was my Linux host machine where I edited the /etc/hosts file.
@@justmeandopensource Thanks for the quick reply. My k8s nodes are VMs. In that case, how would I access (for eg:) nginx.example.com? Please suggest.
@@harihari579 From where will you be accessing nginx.example.com? You will be opening a web browser and entering nginx.example.com; on that machine you have to make sure nginx.example.com resolves to the IP address of the HAProxy. If it's a Linux machine, like what I have shown in this video, update /etc/hosts. If it's a Windows machine, you can still update the hosts file, but it is located somewhere else. I am sure you can google it.
@@justmeandopensource Yeah, I am using Windows 10 as my base host and I found a way to update the hosts file. Thank you once again.
@@harihari579 Perfect.
Thanks for the tutorial. I am facing one issue. Getting this error:
I0211 06:49:24.970683 1 manager.go:215] Starting nginx
2022/02/11 06:49:24 [emerg] 10#10: open() "/etc/nginx/conf.d/default.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:92
nginx: [emerg] open() "/etc/nginx/conf.d/default.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:92
Any idea what the problem is? I am following the steps as per your tutorial. Thanks
Very nicely explained. I just wanted to know how you display the system information (battery life, networking, processor) and which package is required to install it on Ubuntu.
Hi Hanuma, thanks for watching.
The widget that you see on the right side of my screen showing various system information is Conky. You have to install the conky package and have a Conky configuration file. You can search online for a ready-to-use conkyrc configuration, or customize it as per your needs. Cheers.