Venkat, this is gold content! Thank you for your effort!
Hi Oleh, many thanks for watching. Cheers.
Thank you for the video, it helped me add PDBs in our environment. We are now able to edit the PDB; I tried editing and the changes were reflected.
Hi Nani, Thanks for watching.
Excellent work Venkat
Thanks for watching.
Hi Venkat, thank you for your effort on all your videos. Great thanks.
Can you make videos on CI/CD using Jenkins and Kubernetes, with a sample frontend app and a backend with a DB?
Hi Hareesh, thanks for watching this video. I have planned a few videos around Jenkins CI/CD in Kubernetes. I am playing with it at the moment and will release a video on it soon. Thanks.
Since you had already uncordoned kworker2, why couldn't the pods be scheduled and created on kworker2 when you drained kworker1?
Also, IMO it seems conflicting: when upgrading the cluster to a major K8s version the nodes are restarted, so does the PDB mean the administrator cannot upgrade the cluster?
Never heard of it, interesting to watch :-)
Hi Rewanth, thanks for watching.
Hi Venkat,
Thank you for your effort on all your videos. Great thanks.
I have created a MongoDB StatefulSet in a Kubernetes cluster with one pod as primary and another as secondary. Now I need to enable the arbiter in Mongo. How can I do that? Should I use a Pod or a StatefulSet?
Hi Rajesh,
Thanks for watching this video. I was working on a separate MongoDB tutorial series and made around 9 videos. I am planning to do a video on deploying MongoDB in a k8s cluster; I have already played with it by deploying a simple MongoDB replica set. When I get some time, I will record and release the video. Other than that, I haven't tried much with MongoDB and am not sure about arbiter deployment. Maybe I will get some ideas when I try playing with it.
Thanks.
Useful. Thanks.
Hello, thank you for the video! Really helpful. I was trying to apply a Pod Disruption Budget to a StatefulSet; after I update my release, I can see the StatefulSet get updated one pod at a time instead of the 3 I have defined in my PDB.
This is what I was looking for. Thanks for sharing this, Venkat.
One thing: may I know what kind of zsh plugin you're using for splitting the terminal into output (top) and command (bottom)?
Hi, thanks for watching.
I use Arch Linux with the i3 tiling window manager:
- Zsh shell
- oh-my-zsh (zsh plugin manager)
- zsh-autosuggestions (plugin that suggests commands based on my history)
- zsh-syntax-highlighting (plugin)
- Termite as the terminal emulator
I am also planning to do a video on my latest terminal setup as many users requested.
Here is the one I did a while ago.
ruclips.net/video/soAwUq2cQHQ/видео.html
@@justmeandopensource thank you Venkat
I guess it is quite off topic, but does anyone know of a good website to stream new movies online?
Hi mate, thanks very much for taking the time to put together these helpful videos.
I tried your demo and got this error that I could not fix:
kubectl drain myk8s-cluster-worker3
node/myk8s-cluster-worker3 cordoned
error: unable to drain node "myk8s-cluster-worker3", aborting command...
There are pending nodes to be drained:
myk8s-cluster-worker3
error: the server could not find the requested resource: kindnet-d6897, kube-proxy-j7ff5
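In case it helps, kindnet and kube-proxy are DaemonSet-managed pods, and kubectl drain refuses to proceed unless it is told to ignore them; whether that is the cause of this particular error is only a guess:
kubectl drain myk8s-cluster-worker3 --ignore-daemonsets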
Finally it is working. I look forward to more videos. Thanks for sharing.
Hi Abdel, thanks for watching and glad that you managed to get it working. Cheers.
Hi Sir, as per my understanding, I'm currently running node1 with 2 pods and node2 with 2 pods. Now I'm going to drain node1, and the PDB is set so that the allowed disruption is 1.
Will it first evict one pod from node1 while the other pod keeps running on node1, and only move it to node2 once the previous pod has been scheduled on node2? Please help me.
Hi Tamil Selvan, thanks for watching. In your case, it will be fine. When you drain that node, the first pod will be evicted, and the last pod on the current node won't get evicted until the first one is successfully relaunched and running on the other node, because the PDB is set to 1, which means at any point you can afford to lose just one pod. Cheers.
@@justmeandopensource Hi sir, but I want to update the OS on node1. In this scenario, does the pod keep running on node1, or will the pod on node1 be rescheduled to some other node once pod 2 is launched and available for users? Kindly provide your suggestion for my clarification.
@@tamilselvan8343 You have the number of replicas set to 4 and the PDB (allowed disruptions) set to 1, and 2 pods are running on each of the two nodes, right? Here you are saying that at any point in time I need to be running at least 3 pods. As you drain the node, the first pod will get terminated and launched on another node. Then the other pod will get terminated and launched on another node. This simply means that draining won't evict both pods at the same time; a rough manifest sketch is below. Hope this makes sense. Cheers.
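A minimal sketch of that scenario (the name and label myapp are placeholders, not from the video):
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1   # use policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  maxUnavailable: 1     # with 4 replicas, at least 3 must stay up at any time
  selector:
    matchLabels:
      app: myapp        # must match your deployment's pod labels
EOF
# Draining now evicts the pods one at a time; each eviction is held back
# until the previously evicted pod's replacement is Ready on another node.
kubectl drain kworker1 --ignore-daemonsets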
@@justmeandopensource Now it's clear, sir. Thanks for your valuable answers.
@@tamilselvan8343 No worries. You are welcome. Cheers.
What is the best way to keep at least 1 pod? Is it via minReplicas:1 or via PDB set to 1?
Hi Gijo, Thanks for watching this video.
minReplicas and PDB are two different concepts for different use cases.
Replica set to 1:
This makes sure we always have at least one pod running. If the node that's running this pod crashes, the pod will get rescheduled on another node. If the cluster administrator drains the node this pod is running on, the pod will be evicted and created on another node. If you have only one pod in your deployment, your application will be down for a short period during this draining/evicting process, while the pod is created on another node.
PDB:
You declare that your deployment has to have at least 1 pod running at all times. This is in addition to setting the replica count to 1. When a node has to be drained and your pod is on that node, it won't be evicted, thus keeping your application up.
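A rough sketch of such a PDB (the label app=myapp is a placeholder, not from the video):
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1   # use policy/v1beta1 on clusters older than 1.21
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1       # voluntary evictions that would leave 0 running pods are blocked
  selector:
    matchLabels:
      app: myapp        # must match your deployment's pod labels
EOF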
Thanks
@@justmeandopensource Thanks a lot
@@gijovarghese7548 You are welcome. Cheers.
I was trying PDB in GKE, with a minimum of one WordPress instance that should always be live. I set a PDB for WordPress, created a new node pool, and then deleted the old node pool. However, WordPress was down during that time. Is that normal? How can I prevent downtime during these node upgrades?
Hi Gijo, deleting a node pool will delete the pod and create it on another node; a PDB won't help in this situation. You can cordon the old nodes and increase the replica count for your app, so the new replica gets scheduled in the new node pool. Then you can delete the old node pool, and finally reduce the replicas back to 1 (a rough command sketch follows). Thanks.
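Roughly, those steps could look like this (the node names, pool/cluster names and the deployment name wordpress are placeholders for illustration):
# 1. Stop new pods landing on the old nodes.
kubectl cordon old-node-1
kubectl cordon old-node-2
# 2. Add a replica so it gets scheduled onto the new (uncordoned) pool.
kubectl scale deployment wordpress --replicas=2
# 3. Once the new replica is Ready, remove the old node pool.
gcloud container node-pools delete old-pool --cluster my-cluster
# 4. Scale back down to a single replica.
kubectl scale deployment wordpress --replicas=1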
@@justmeandopensource Thanks :)
@@justmeandopensource is there any way to create new pods when a shutdown command is received? If so I could use GKE's preemptible VMs which are super cheap
@@gijovarghese7548 you are welcome. Cheers.