Hi man, thanks for the video. I followed your video and did the complete setup. After completion I powered off one master node to check if the HA is working, but as soon as I stop the node the cluster comes down and nothing is available.
vagrant@loadbalancer:~$ kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
HAProxy logs:
Jun 1 12:11:13 loadbalancer haproxy[13516]: [WARNING] 151/121113 (13516) : Server kubernetes-backend/kmaster1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 1 12:11:13 loadbalancer haproxy[13516]: Server kubernetes-backend/kmaster1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 1 12:11:13 loadbalancer haproxy[13516]: backend kubernetes-backend has no server available!
Jun 1 12:11:13 loadbalancer haproxy[13516]: [ALERT] 151/121113 (13516) : backend 'kubernetes-backend' has no server available!
Hi, I have followed your steps and am getting the below error. Do you have any idea about it?
sdv9719@kmaster:~$ kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Hi Venkat, thanks a lot for this video. I have a query. As per your video, I have created 4 VMs in GCP:
1. LoadBalancer
2. Master1
3. Master2
4. Worker1
I ran all the commands. Now the Master1 server has crashed/stopped, and when I try to run commands on master2 it is not working. Please help me figure out where I went wrong.
ansiblesrikanth@kmaster2:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This is the error I'm getting.
@@justmeandopensource Hi Venkat, thanks for the update. Please be informed that I am now facing another issue. I have created 3 master nodes, 2 worker nodes and 1 LB, and done all the settings as per the video. Now I have stopped 2 master nodes and trying to run kubectl commands does not work. I can see in the LB logs that all the master nodes are showing down. I seriously don't know what I'm missing. Please help me.
@@srikanthv8108 As per what you said, you have 1 of the 3 master nodes up and running. You also have a load balancer (which your kubectl is pointing at). The load balancer should forward traffic only to the healthy nodes. In this case, clearly two nodes are down and the LB shouldn't be forwarding traffic to them. If it is forwarding to the 1 master node that is up and not getting any response back, check if the kube-apiserver process is running on that node. Otherwise it is something to do with the load balancer. Cheers.
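A quick sanity check on the surviving master, assuming the docker-based kubeadm setup from the video (note that with 2 of 3 masters down, etcd has also lost quorum, so the API may not respond even if the process is up):
sudo docker ps | grep kube-apiserver   # the API server container should be running
sudo ss -tlnp | grep 6443              # and listening on port 6443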
Hi sir, thanks for your reply. Could you please help me 🙏🏻🙏🏻 Shall I share my screen? Because I'm following your steps but still facing the issue. And one more thing: the LB automatically goes down. Please help me 🙏🏻🙏🏻🙏🏻 I beg you.
Excellent video. Kubernetes cluster installation made easy. Please keep up your good work. My only suggestion is to create a Vagrant image for the common installation steps like user creation and installation of Docker, kubelet, kubeadm and kubectl. That would avoid repeating the installation on more nodes.
Hats off to you, Venkat for your efforts. Your training materials are superb!!
Hi Akshay, many thanks for your interest in this channel. Cheers.
Brilliant! Amazing! Really good. Congratulations for the great work you've done here. Thank you.
Thanks for watching.
fantastic, I really like this series, thanks man!
Hi Liping, thanks for your interest in this channel. Cheers.
Great video. Saved my life.
Hi Arslaan, thanks for watching.
This is an awesome one, and much appreciation for your effort that helps us understand the configuration. Hoping for a session with the Nginx ingress controller instead of HAProxy :)
Thank you for the video. It's really helpful in scaling out the master nodes. But can you please make a tutorial on migrating applications from one on-premise k8s cluster to another on-premise k8s cluster? Let's say we have major hardware upgrades.
Hi Niharika, thanks for watching. You can use Velero to backup from current cluster and restore to another cluster. I have already done a video on Velero which might give you some directions hopefully.
ruclips.net/video/C9hzrexaIDA/видео.html
Really great tutorial!!
Thank you a lot!!! it works for me.
Thanks for watching.
Thanks. This video is useful and it resolved my issue!
Hi, thanks for watching.
Thanks for your video, it's really very helpful. But what would be the best approach for cluster backup, and also for cluster security?
Very good video as always. I followed your video about RKE2 and I managed to set up my cluster, kudos on that one too. I wanted to use HA on that cluster but I was afraid to mess it up since, although similar, the setup is not the same. Would you consider a video showing how to do that in RKE2? Even as a next step for the last video about HA and the RKE2 itself? Thanks! 👊
@Venkat, excellent illustration. Thanks for the detailed documentation as well.
Can you help me clarify the below questions?
1) Is the load balancer mandatory for the HA cluster?
2) In case I did not have the LB configured in my initial setup of the environment and plan it at a later stage, where in that scenario can I modify the setting --control-plane-endpoint=":6443" on both master nodes?
3) Is the same LB going to be used for service type LoadBalancer when we expose a pod/deployment?
Thank you!
Hi Vamsi, thanks for watching.
1. Yes. In an HA multi-master cluster, each control plane runs the API server component. You can access any of the masters directly without using a load balancer, but the whole point of HA is to be able to use the cluster even when one of the masters fails. So if you don't use a load balancer, you will have to manually switch to another master node when the one you were using goes down.
2. You can't do that without downtime, as far as I know. You will have to do a kubeadm reset on all master nodes and re-initialize the cluster with the control-plane-endpoint option.
3. No, this LB is for load balancing the master nodes. The load balancer type you use with the service is applicable inside the cluster. If you are not using any cloud provider, and are using on-prem bare metal servers, you can use MetalLB for this purpose. I have done a few videos on MetalLB if you are interested.
ruclips.net/video/xYiYIjlAgHY/видео.html
ruclips.net/video/zNbqxPRTjFg/видео.html
Cheers.
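For point 2, a rough sketch of what that re-initialization could look like (the load balancer address is only an example, and this does mean downtime):
sudo kubeadm reset -f        # on every master, wipes the current control plane
sudo kubeadm init --control-plane-endpoint="172.16.16.100:6443" --upload-certs   # on the first master, pointing at the LB
# then join the remaining masters using the printed "kubeadm join ... --control-plane" command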
Hi Venkat, thank you very much for the clarification.
@@vamseenath1 No worries. You are welcome.
Hello, thanks for the video. Do we have to install metallb here? Does HAProxy replace it?
First, I want to say thank you for all the great videos which you provide to the community! Secondly, is there any option that you create ActiveMQ Artemis broker service for k8s at HA mode? This topic is now more and more needed in the world of microservice architecture. Wish you all the best!
Hi Igor, thanks for watching. I haven't tried any message queueing systems in k8s yet. I will have to spend some time learning about it. Thanks for suggesting this. Cheers.
@@justmeandopensource I hope this topic will arrive at your channel and be well explained like other videos. Cheers friend!
Great demo! Is the load balancer working at HTTP level, e.g. as an HTTP proxy?
Hi Antonio, thanks for watching. Yes, the haproxy load balancer acts just as a proxy forwarding requests to the control planes.
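For reference, a minimal haproxy.cfg sketch for this kind of setup - plain TCP passthrough on port 6443, with the API server TLS terminated on the masters (the backend IPs are examples):
frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend
backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.16.16.101:6443 check fall 3 rise 2
    server kmaster2 172.16.16.102:6443 check fall 3 rise 2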
Hi there, first of all I should thank you for such helpful videos. I have a question for you: I'm trying to create an ecosystem with 3 master nodes and 8 worker nodes, using HAProxy for the master nodes. All of the parts are running fine, but I want to know how to have the etcd cluster separately, like an external etcd cluster. Thanks a lot.
Hi Farshad, thanks for watching. I am yet to explore the possibilities of setting up and running an external etcd cluster. It's on my list, as requested by many viewers. I will try and do it soon. Cheers.
Very nice content. Can you make a video on "How to convert a single master cluster to an HA cluster using kubeadm?", making sure current resources (pods, svc) are not impacted?
Hi, thanks for watching. I don't think that's possible without downtime. To be honest, I haven't actually tried it. You will have to make a decision beforehand whether to go with a single master or an HA cluster. In production, you would never start with a single master. This is a very rare scenario. Development clusters are fine with a single master.
With a single master cluster, during cluster initialization, the kubeadm init command is slightly different to a cluster with multiple masters. You can do a kubeadm reset and then re-initialize the cluster with the right options to the kubeadm init command and then add the other masters. But I am not sure whether the currently running workloads are affected during this process. Cheers.
@@justmeandopensource Thanks for the reply. Downtime is fine but resources like pods and svc should not be deleted. That is our goal. Actually we have a cluster with a single master and a lot of deployments and svc. Of course that is non-prod. But I am trying to upgrade to an HA cluster.
@@saibbsr03 It would be quite simple to build a new k8s cluster with multiple masters and then use Velero to back up workloads from the old cluster and restore them to the new one.
Hi Venkat, nice presentation. Which screen capture software do you use while making these videos? Is it some built-in tool in Ubuntu Linux?
SimpleScreenRecorder.
Perfect man :)
Awesome! I do have 2 questions tho,
1. I get met with a page that just says "Client sent an HTTP request to an HTTPS server" when I try to access the LB on port 6443. What's the cause of that?
2. how would I make my deployments like nginx publicly available like on a regular cluster?
Thanks for the video(s), very helpful. I got a bunch of errors about certs when the master2 node joined master1. I don't know what the problem is but I'm trying to figure it out. Since you mentioned it in this video, can you make a tutorial about how to set up a NAT network and a bridge? Where I work is air-gapped and can't have an internet connection; in fact there is no internet connection.
Thank you for the video. It's very helpful in scaling out the master nodes to support HA by using a software load balancer.
However, I have a question on hardware load balancers. How can I build the multiple master nodes using a hardware load balancer?
Can I use the same steps as this video, or does it require specific steps depending on the hardware vendor?
Hi, thanks for watching. The concept is going to be the same. You will have a load balancer (software or hardware, it doesn't matter) configured with all your control planes as the backend servers. And while initializing the Kubernetes cluster with the kubeadm binary, you will specify the control-plane-endpoint option to be the IP address of the load balancer.
@@justmeandopensource Thank you for your answer. I'll try following your suggestion, and thank you again for every good video that you have done here. You explain step by step clearly and make it easy to understand. I'll recommend your videos to my friends who want to learn Kubernetes.
@@eddy0055 Much appreciated. Thanks
Thanks man, superb tutorial.
Can you please tell me, is kubeadm good for a production-ready Kubernetes setup?
Like in terms of security, authorization, high availability. Or should I use microk8s or anything else you can suggest? Thanks.
How does the flow go if we have a multi-master infra (multiple masters & multiple workers) with ingress?
1. LB > Master > HAPROXY > Worker
OR
2. LB > Worker Nodes > HAPROXY > Masters?
Please also add INGRESS in the above design & let me know. Thanks.
As an example, we create an nginx deployment and a service. Should we create a rule in the balancer for nginx?
Hi German, thanks for watching. I didn't get your question. Can you rephrase it please. Cheers.
@@justmeandopensource I solved my question. Thanks for the instructions. Can you tell me a better way to run php-cli workers, for example for yii2-queue, in Kubernetes? Using supervisord on Kubernetes didn't look like a good approach to me.
Hi, can we do this without configuring the load balancer on a separate machine? If so, are there any issues configuring it that way in a production environment?
Hi Suraj, I posted an updated video today. Here is the link: ruclips.net/video/SueeqeioyKY/видео.html
Again, I used separate load balancer nodes for the HA setup. However, it's possible to get a similar setup by just using the master nodes and not requiring separate load balancer nodes. I am hoping to do that in my next video. Cheers.
Hi, have a nice afternoon. I'm planning to set up a Kubernetes cluster from scratch, and I like your content. Could you please tell me whether you are setting up all this on your PC or a laptop? What resources are needed for setting up this infrastructure, e.g. how many physical CPUs do you have? You say to have at least 8 CPUs to set this up, but is that 8 CPUs in a physical laptop or in a VM? Please assist me by telling me the resources you have in your PC or laptop.
Hi Jesus, thanks for watching. This set of virtual machines I used in this video in total needs 6 cpu and 6G memory. I ran this on my Laptop where I had 8 cpu cores and 16G memory. Cheers.
@@justmeandopensource Thanks a lot. By the way, do you have any suggestion about which laptop is OK for these types of projects?
Thanks. Can you please share how to sync the commands in multiple terminal tabs?
Hi, thanks for watching. I used tmux to open multiple panes and synchronise them. Useful feature.
Hi Venkat, really appreciate your efforts. One quick question - if we add an additional master node at a later point in time, do we need to sync the etcd data to the new master, or will it be auto synced so it remains in the same state as the existing master's DB? Or is a single etcd pod sufficient for serving all the transactions? Can you please make a video on HA for etcd as well?
Hi Kishore, thanks for watching. There are two topologies here. You can either set up an external etcd cluster separate from the control plane nodes, or you can set up a stacked topology where each control plane has an etcd service. So in the stacked topology, when you add a new master node, its etcd will join the existing etcd cluster formed by the other master nodes and will be synced automatically.
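A sketch of what joining a new master into that stacked topology looks like (the address is an example load balancer endpoint; the token, hash and certificate key are placeholders printed by kubeadm):
sudo kubeadm join 172.16.16.100:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <certificate-key>
# the local etcd member is created and joins the existing etcd cluster automatically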
Sirs, thanks sirs.
Hi, thanks for watching.
Hello Venkat,
Thanks for posting. It's a very detailed and good guide to HA Kubernetes Cluster setup.
I have a question: I have to install Kubernetes 1.12.2 and Docker version 18.06.1~ce (a specific requirement), but with the installation steps given for Docker, I am not able to install this version (even though I edit the version field to docker-ce=18.06.1~ce~3-0~ubuntu). Can you suggest how I can do that?
Hi Shresthi,
What version of Ubuntu are you using for this setup?
If you are on Ubuntu 20.04 Focal Fossa, then you can only install from one of these versions.
root@kmaster:~# apt list -a docker-ce
Listing... Done
docker-ce/focal 5:20.10.5~3-0~ubuntu-focal amd64
docker-ce/focal 5:20.10.4~3-0~ubuntu-focal amd64
docker-ce/focal 5:20.10.3~3-0~ubuntu-focal amd64
docker-ce/focal 5:20.10.2~3-0~ubuntu-focal amd64
docker-ce/focal 5:20.10.1~3-0~ubuntu-focal amd64
docker-ce/focal 5:20.10.0~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.15~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.14~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.13~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.12~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.11~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.10~3-0~ubuntu-focal amd64
docker-ce/focal 5:19.03.9~3-0~ubuntu-focal amd64
I mean the oldest stable version that you can get in Ubuntu 20.04 is 19.03.9.
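If you do want to pin one of the versions from that list, something like this should work (the version string is just one entry from the list above):
sudo apt-get install -y docker-ce=5:19.03.9~3-0~ubuntu-focal docker-ce-cli=5:19.03.9~3-0~ubuntu-focal containerd.io
sudo apt-mark hold docker-ce docker-ce-cli        # optional, prevents unattended upgrades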
@@justmeandopensource
Thanks for the reply!
Oh, I see, you are correct, it is Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-58-generic x86_64).
Venkat, for my installation I need the specific 18.06.1 Docker version for two reasons: one, it's a requirement; two, Kubernetes 1.12.2 doesn't support 19.03 :(
Do I need to change the ubuntu version to have such a cluster?
@@sg04rocxx
The versions that I pasted in my previous comment are the ones that you can use with Ubuntu 20.04. It may be possible to install 18.06.1 on Ubuntu 20.04, but I wouldn't use it because it's not in the supported matrix.
Thank you for the video, I really like it, I still have a question regarding the IPs given when installing the clusters.
Does 'advertise-ip-address' have to be a public IP? What if I have the following scenario:
- haproxy is installed on AWS EC2 public instance
- all the other nodes are set up to run in a private subnet for more security
How should the init command be in this case?
Thank you again
I think as long as HAProxy has access to these private IPs, just use the private ones.
Beautiful - well demonstrated. The svc NodePort does not seem to appear on HAProxy - I have to connect directly to the master nodes to view the web page. Am I missing something? I am using Kubernetes v1.20 with containerd.
Did you solve that question? I have the same issue.
Is it possible to set up a private network on eth0 and NAT on eth1 in a Vagrant environment?
You should be able to but I haven't tried.
Hi Venkat.. Thanks for the amazing video. Currently I have a K8s cluster configured with a single master and 4 worker nodes. The cluster was configured 1 year ago and now I need to make the cluster highly available by adding another master node. Is that possible?
Hi Vivek, thanks for watching. You won't be able to make your existing single master cluster HA by adding more master nodes. The decision of whether to go with a single master or multi-master has to be made at cluster provisioning/initialization time, as you have to pass additional parameters during the kubeadm init phase if you want to set up multi-master. In your case, you might have to:
* set up a new multi-master cluster
* move workloads to the new cluster (you can use Velero to back up from the source and restore to the target cluster; see the rough sketch below)
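Roughly, the Velero part of that migration could look like this (assuming Velero is already installed on both clusters and pointed at the same backup location; names are placeholders):
velero backup create old-cluster-backup                   # run with the kubectl context set to the old cluster
velero restore create --from-backup old-cluster-backup    # run with the kubectl context set to the new cluster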
@@justmeandopensource Hi Venkat.. Thanks for replying... That was precisely what I was looking for..!! Looking fwd to more videos from u... Thank you very much..
@@vivekb1988 Cheers.
@@justmeandopensource Hi Venkat.. Just need a clarification on the above. Suppose I carry out the following steps:
1. Take an etcd backup including the certs and keys
2. Identify 2 extra nodes (1 for master and 1 for load balancer)
3. Perform a kubeadm reset
4. Run kubeadm init using the load balancer and also the restored etcd
Will this work for me? Will I suffer any data loss? Your comments are much appreciated.
Thanks, Vivek
Thank you for this amazing video. As I'm very new to Kubernetes, this video was very informative and easy to learn from. I have one basic query, i.e. where to install the Kubernetes Dashboard? (If it is on the load balancer, do I need to install kubectl again on the LB machine? If it is on any one of the master nodes, what will happen if that master node goes down?)
Hi Rohit, thanks for watching.
The Kubernetes dashboard is like any other deployment you deploy in your Kubernetes cluster. It's not a process you run on the load balancer or master nodes; it runs in your Kubernetes cluster as pods. So if a node goes down, the Kubernetes dashboard pod will get rescheduled on another node and you can continue to access it.
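For example, installing and reaching the dashboard is just a kubectl exercise against the cluster (the manifest URL/version below is an assumption; check the dashboard releases page for the current one):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy    # then browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/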
@@justmeandopensource Thank you so much...I'll follow the same to deploy dashboard.
@@RohitPatnaik7 No worries.
Thanks for the video. Can you please explain below.
1. Is it shared etcd across master1 and master2?
2. I shut down master1, then I tried to run kubectl commands on master2; it's not reachable. How do I resolve this?
Once again appreciate your efforts.
Hi Mogal, thanks for watching.
1. It's not a shared etcd. kubeadm init sets up an etcd member on each master node, and all of these etcd members form a cluster.
2. In a multi-master cluster, you normally won't connect to a specific master node by its IP. Instead you will connect through a load balancer in front of all the master nodes. In your kube config file, you will have the server URL set to the load balancer IP address. When you run kubectl commands, they will hit the load balancer, which in turn sends the traffic to one of the master nodes. If you take a master node down, the load balancer won't forward traffic to that master. It might take a few seconds for the load balancer to realize that one of the masters is down, and it may send traffic to the master that is down for a very brief time. Keep running the kubectl command.
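In other words, the kube config on your machine points at the load balancer rather than at any single master; a partial example (the address is an example LB IP):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-ca>
    server: https://172.16.16.100:6443   # load balancer address, not an individual master
  name: kubernetes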
@@justmeandopensource Thanks for the reply. Below are the details before and after shutting down kmaster1.
root@kmaster2:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster1 Ready master 22h v1.19.2
kmaster2 Ready master 22h v1.19.2
kworker1 Ready 22h v1.19.2
-----------------------------------------------------------------------shutdown now on master1------------
root@kmaster2:~# kubectl get nodes
Error from server: etcdserver: request timed out
root@kmaster2:~# kubectl get nodes
Error from server: etcdserver: request timed out
root@kmaster2:~# ping 172.16.16.100
PING 172.16.16.100 (172.16.16.100) 56(84) bytes of data.
64 bytes from 172.16.16.100: icmp_seq=1 ttl=64 time=0.580 ms
64 bytes from 172.16.16.100: icmp_seq=2 ttl=64 time=0.716 ms
64 bytes from 172.16.16.100: icmp_seq=3 ttl=64 time=1.08 ms
^C
--- 172.16.16.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2053ms
rtt min/avg/max/mdev = 0.580/0.792/1.081/0.211 ms
root@kmaster2:~# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
root@kmaster2:~# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
Hi, I performed the same setup, but after I ran the Calico installation command on master one, things went wrong. Earlier, kubectl get nodes was working from any master, but after the Calico setup kubectl commands are getting a net/http TLS handshake timeout. I couldn't figure out this issue.
My cluster state was "NotReady" before the network setup command.
Hi, thanks for watching. Can you paste the output of all commands in pastebin.com and share the link?
Hello Venkat, thanks a lot for the video. I configured a multi-master node cluster (2 masters) in my environment. But if I bring down any one master node, the other master is unable to run any kubectl commands, whereas you mentioned that with 3 master nodes, if one goes down, you are still able to run kubectl commands.
If you don't mind, may I know why the behavior is like this?
Thanks in advance
Hi, thanks for watching. Best practice for any cluster is to have at least 3 master nodes. Have you tried the same with 3 masters? I think I too faced a similar problem with 2 masters.
@@justmeandopensource I must admit that you are really great; I didn't expect such an early reply.
Yes, I did try with 3 masters and it worked fine.
@@madhusudhankasturi3371 cool.
As per my knowledge, the fault tolerance with 2 masters is zero, so if one is down the cluster is down. We need a minimum of 3 for setting up multi-master nodes in a production environment.
@@suraj2533 Yeah, minimum 3. But with 2 remaining you can run for a while as you fix and bring the 3rd broken one back online. Otherwise there would be no point in having a 3 node cluster.
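For anyone wondering why: etcd needs a quorum of floor(n/2) + 1 members to accept writes. With 2 masters that is floor(2/2) + 1 = 2, so losing one stops the cluster; with 3 masters the quorum is floor(3/2) + 1 = 2, so one master can fail and kubectl keeps working.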
Thanks !!
Hi, thanks for watching.
@justmeandopensource Is HAProxy mandatory for creating multi-master Kubernetes clusters?
Hi Venkat, thank you for the amazing video, everything worked smoothly! I'm trying to modify the load balancer as I would like to use an HA cluster to balance the Kubernetes. I have 2 load balancers and I installed keepalived on them. I can see the virtual IP up on the master node but I can't reach the Kubernetes cluster. Can you help? Much appreciated
Hi Gabriele, thanks for watching. That's a very valid point. HAProxy is a single point of failure. I haven't covered high availability of HAProxy itself in any of my videos. Yes, we could use keepalived with floating IPs. I will make a note of this topic and add it to my list. I can get to it once I finish my current pending list. Cheers.
Nice video. Can you show how to add more masters at a later stage, if you don't have the join command?
Hi Martin, thanks for watching. You will have to generate a new certificate key and bootstrap token to add additional nodes. I just finished recording a follow-up video where I explain it. It will be released tomorrow, Wednesday, at 7:30 British Summer Time. Stay tuned. Cheers.
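For reference, the commands involved are roughly these, run on an existing master (output values are placeholders):
kubeadm token create --print-join-command            # prints a fresh worker join command
sudo kubeadm init phase upload-certs --upload-certs  # re-uploads the control-plane certs and prints a new certificate key
# for a new master, append --control-plane --certificate-key <key> to the printed join command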
@@justmeandopensource thanks!
Venkat my bro, can you make a proper series on Haproxy?
Thanks!
Hi Ganesh, thanks for watching. My focus currently is on Kubernetes and other cloud native technologies. HAProxy is just a tool and I don't think it's worth starting a series on it.
@@justmeandopensource Thanks for the reply, Venkat. At least you could prepare a video, an hour or so, showing request distribution to databases, balancing apps, SSL, monitoring nodes, etc. I think you can fit all the things needed in production into a couple of videos. RUclips still doesn't have good content on HAProxy.
Thanks
Hello Venkat, a quick one: I have completed the Docker installation on the masters and HAProxy on the load balancer too, but when I am installing Docker on the worker1 node I am getting a hash mismatch error. I have verified and googled everything; it worked well on the master node but on the worker node it is not working. Please advise, Venkat.
kubectl logs and exec are not working, not sure if I missed something.
Hi Chadwick, thanks for watching. What errors do you see? Can you paste the relevant output in pastebin.com or somewhere and share the link? I will also give it a try later today.
@@justmeandopensource
executing `kubectl logs nginx-depl-5c8bf76b5b-sr5j2` outputs:
Error from server (NotFound): the server could not find the requested resource ( pods/log nginx-depl-5c8bf76b5b-sr5j2)
then executing `kubectl exec -it nginx-depl-5c8bf76b5b-sr5j2 -- echo hello` outputs
error: unable to upgrade connection: pod does not exist
I'm sure the pod exists and is running perfectly. However, inspecting the worker's kubelet journal gives many CNI errors such as:
CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container...
Error adding default_nginx-depl-5c8bf76b5b-sr5j2/4447d54ccaa8027d5d5ec38681dcb1a5262af6e977bcd461e76eccb0342714ea to network calico/k8s-pod-network...
RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container...
CreatePodSandbox for pod "nginx-depl-5c8bf76b5b-sr5j2_default(305ac619-9321-452c-9b7d-1eeebec6c841)" failed: rpc error...
Error syncing pod 305ac619-9321-452c-9b7d-1eeebec6c841 ("nginx-depl-5c8bf76b5b-sr5j2_default(305ac619-9321-452c-9b7d-1eeebec6c841)")...
failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "nginx-depl-5c8bf76b5b-sr5j2_default"...
Container "e38eca1ba2d7f2a5966d590fff7860fe043b90f7d5079b26dd8d6a0553f5929f" not found in pod's containers...
CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container....
I also checked the Calico nodes, and they're fine. Maybe there's a need for further configuration on Calico? I'm new to Kubernetes and am really not sure what I can do.
Hi, it seems the cluster is set up with multiple master nodes. Let's say the first master node goes down; will the 2nd master node take over the cluster? Currently I'm trying to create HA with multiple master nodes to eliminate a single point of failure.
Hi, thanks for watching. With this two-master setup it's not an active-passive kind of setup. Both masters are active all the time. There is a load balancer component in front of the masters which balances the traffic in a round-robin fashion. It won't forward traffic to a master if it's not reachable; it only forwards traffic to healthy master nodes.
@@justmeandopensource Hi, I have tried your tutorial and deployed nginx into the cluster. I shut down the master node where I ran kubeadm init. I am still able to access the nginx page with one of the master nodes down. However, I cannot control the cluster using kubectl commands; I receive errors related to the etcd server. I am only able to control the cluster once the master node I shut off comes up again. Is that normal?
Hi Venkat,
When I'm trying to join the third master node, I'm facing issues. Joining the first and second works fine. Don't know what's wrong. I'll post the log below. Any idea?
STDOUT: [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
STDOUT: [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
STDERR: error execution phase control-plane-prepare/download-certs: error downloading certs: error decoding secret data with provided key: cipher: message authentication failed
To see the stack trace of this error execute with --v=5 or higher
STDERR: error execution phase control-plane-prepare/download-certs: error downloading certs: error decoding secret data with provided key: cipher: message authentication failed
To see the stack trace of this error execute with --v=5 or higher
Thanks
Hi Dinesh, I will give it a try later today and let you know how it went. Cheers.
Great video!!! Did you test metrics-server in a multi-master cluster? I still can't get it to work!!!
Hi Gonzalo, thanks for watching.
May I know how exactly your k8s cluster was setup? I can do a quick testing from my side. I never had problems running metrics server on any of my cluster setup. Cheers.
@@justmeandopensource I set it up with kubeadm (as the kubeadm documentation says) and an HAProxy balancing the masters. It keeps saying it can't get the metrics, even after adding the metrics-server variables as the official documentation says for bare metal.
@@gouterelo Okay. I am going to try metrics server in this multi master setup shown in this video and will update you soon.
@@justmeandopensource Ok great! The machines are VMs on VMware ESXi... I guess I missed the line in kubeadm adding --apiserver-advertise... I guess that could be the error, because metrics-server can't connect to the API...
@@gouterelo I just tried it on my 1.19.2 kubernetes cluster and it worked fine as per the below video.
ruclips.net/video/PEs2ccoZ3ow/видео.html
I used v0.3.7 of metrics server
github.com/kubernetes-sigs/metrics-server/releases
As mentioned in my above video, all I had to do was add the below two arguments to the metrics-server pod:
--kubelet-preferred-address-types=InternalIP
--kubelet-insecure-tls
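If you'd rather not edit the manifest by hand, something like this should add those two flags (a rough sketch; it assumes metrics-server came from the official components.yaml, i.e. a Deployment named metrics-server in the kube-system namespace whose container already has an args list):
# appends the two flags to the first container's args
kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}
]'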
Wow.. hahaha, great dude!
Hi, thanks for watching. Cheers.
Hi,
Can you please provide all your Kubernetes videos in one place,
like Google Drive or a zip file, so that we can download them?
It would be helpful for everyone.
Hi Karthik, thanks for your interest in my channel. I don't have a local copy of all my videos. Once they are processed and uploaded to RUclips, I delete them. You can use the youtube-dl command line tool to download videos from RUclips. You can even download all the videos in a playlist with a single command. Cheers.
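For example, something along these lines should grab a whole playlist (the URL here is just a placeholder):
# -i skips over individual download errors; -o sets the output filename template
youtube-dl -i -o '%(playlist_index)s-%(title)s.%(ext)s' 'https://www.youtube.com/playlist?list=PLxxxxxxxx'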
Does kubectl still work if master 1 is down?
Hi Punna, thanks for watching. You should be able to use the kubectl command to interact with the cluster even when one of the masters is down; that is the point of an HA cluster, provided you have a load balancer in front of the multi-master setup and you access the cluster through that load balancer.
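A quick way to check that kubectl really goes through the load balancer and not a single master (the address printed should be whatever you passed as --control-plane-endpoint):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# expect something like https://<load-balancer-ip>:6443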
Hello Venkat, great video, thank you. Can we use an F5 load balancer for the multi-master setup? I am getting the error "kubelet_node_status.go:93] Unable to register node"; the kube-apiserver behind the load balancer cannot be reached.
I think the IP is not up/available for the node to connect to.
After running "Kubeadm init ....." command it got stuck and shows this.
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
Hi Tahsib, thanks for watching. How long has the command been stuck? Have you tried it on a different machine, if possible?
@@justmeandopensource It works, it just took some time to finish.
@@tahsib7 Perfect.
Where can I find the file you are referring to?
Hi Ashish, thanks for watching. Which file are you talking about?
I have set up HAProxy for the cluster with the HAProxy IP address as the control-plane-endpoint, but I would like to change the cluster's control-plane-endpoint to a DNS name because I am planning to have 2 HAProxy servers behind that DNS name. How can I modify the cluster's control-plane-endpoint?
A few A records on the domain will round-robin between the proxies.
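Just to sketch the DNS side (a hypothetical BIND-style zone file with made-up names and addresses; note this only covers the DNS round-robin part, the cluster still needs to have been initialised with that DNS name as its control-plane-endpoint, otherwise the API server certificates won't include it):
cat <<'EOF' >> /etc/bind/zones/db.example.com
; two A records for the same name, so resolvers rotate between the proxies
k8s-api    IN  A    192.168.56.110    ; haproxy1
k8s-api    IN  A    192.168.56.111    ; haproxy2
EOF
# clients would then use https://k8s-api.example.com:6443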
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
I am getting this error while running kubectl commands. Could you please help with this?
Hmm, never seen this before. Without knowing how your cluster was set up and its environment, I'm afraid I can't help. It looks like a proxy thing. Is your cluster behind a proxy?
HA is not functional. When one master node (e.g. kmaster2) is shut down, the whole cluster fails as below:
vagrant@kmaster1:~$ kubectl get node
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Any idea?
This might be an issue with your load balancer not being able to connect to the server; you can see the log with "systemctl status haproxy".
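To see it from the load balancer's side (backend/server names as used in this setup; depending on how logging is wired up, the same messages may land in /var/log/haproxy.log instead of the journal):
sudo systemctl status haproxy
sudo journalctl -u haproxy -f | grep -i kubernetes-backend
# "Server kubernetes-backend/kmaster1 is DOWN ..." means the TCP health check
# to that master's port 6443 is failing.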
I am following your steps on v1.21.0 but I am facing the following issue while joining a master node:
[discovery] Failed to request cluster-info, will try again: context deadline exceeded
sudo kubeadm init --kubernetes-version 1.21.0 --control-plane-endpoint="192.168.56.103:6443" --apiserver-advertise-address=192.168.56.100 --upload-certs
Hi man, thanks for the video. I followed your video and did the complete setup. After completion I powered off one master node to check whether HA is working, but as soon as I stop the node the cluster goes down and nothing is available.
vagrant@loadbalancer:~$ kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
HAProxy logs:
Jun 1 12:11:13 loadbalancer haproxy[13516]: [WARNING] 151/121113 (13516) : Server kubernetes-backend/kmaster1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 1 12:11:13 loadbalancer haproxy[13516]: Server kubernetes-backend/kmaster1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 1 12:11:13 loadbalancer haproxy[13516]: Server kubernetes-backend/kmaster1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-check", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jun 1 12:11:13 loadbalancer haproxy[13516]: backend kubernetes-backend has no server available!
Jun 1 12:11:13 loadbalancer haproxy[13516]: [ALERT] 151/121113 (13516) : backend 'kubernetes-backend' has no server available!
Jun 1 12:11:13 loadbalancer haproxy[13516]: backend kubernetes-backend has no server available!
Hi, I have followed your steps and am getting the below error. Do you have any idea about it?
sdv9719@kmaster:~$ kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Any luck, please?
Hi Venkat, Thanks a lot for this video. I have a query.
As per your video, I have created 4 VMs in GCP:
1. LoadBalancer
2. Master1
3. Master2
4. Worker1
I ran all the commands.
Now the master-1 server has crashed/been stopped, and when I try to run the command on master2 it is not working.
Please help me figure out where I went wrong.
ansiblesrikanth@kmaster2:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This is the error I'm getting.
Take a look at the kubeconfig file. I believe the kubeconfig you have might not be the right one.
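"localhost:8080" specifically means kubectl found no kubeconfig at all. On a control-plane node you can copy the admin kubeconfig that kubeadm generates (the standard kubeadm post-install steps, shown here as a sketch):
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
grep 'server:' $HOME/.kube/config   # should point at the load balancer, not localhost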
@@justmeandopensource Hi Venkat, thanks for the update.
Please be informed that I am now facing another issue.
I have created 3 master nodes, 2 worker nodes and 1 LB.
I have done all the settings as per the video.
Now I have stopped 2 of the master nodes, and when I try to run a kubectl command it is not working.
I can see in the LB logs that all the master nodes are showing as down.
I seriously don't know what I'm missing.
Please help me.
@@srikanthv8108 As per what you said, you have 1 of the 3 master nodes up and running. You also have a load balancer (which your kubectl is pointing at). The load balancer should forward traffic only to the healthy nodes; in this case two nodes are clearly down and the LB shouldn't be forwarding traffic to them. If it is forwarding to the one master node that is up and not getting any response back, check whether the kube-apiserver process is running on that node; otherwise it is something to do with the load balancer. Also bear in mind that with the stacked etcd topology kubeadm uses here, stopping 2 of the 3 masters means etcd loses quorum, so the API server on the surviving master cannot serve requests until at least one more master comes back. Cheers.
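A couple of quick checks on the master that is still up (a sketch; it assumes Docker as the container runtime, as used in this setup):
# are the control plane containers actually running?
sudo docker ps | grep -E 'kube-apiserver|etcd'
# the static pod manifests created by kubeadm should still be present:
ls /etc/kubernetes/manifests/
# kubelet logs usually show why the api server pod is failing, if it is:
sudo journalctl -u kubelet --since '10 minutes ago' | tail -n 50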
Hi sir, thanks for your reply. Could you please help me 🙏🏻🙏🏻? Shall I share my screen? Because I'm following your steps but still facing the issue. And one more thing: the LB automatically goes down. Please help me 🙏🏻🙏🏻🙏🏻, I beg you.