To change an existing label on a specific pod, the command is:
kubectl label pod <pod-name> <key>=<new-value> --overwrite
To add a new label, just remove the --overwrite flag from the above command.
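A quick sketch of the commands above in action, using a hypothetical pod named `web-0` and label key `env` (run against your own cluster and pod names):

```shell
# Add a new label (no --overwrite needed for a key that doesn't exist yet)
kubectl label pod web-0 env=staging

# Change the existing label's value; without --overwrite this errors out
kubectl label pod web-0 env=prod --overwrite

# Bonus: a trailing dash on the key removes the label entirely
kubectl label pod web-0 env-

# Verify the current labels
kubectl get pod web-0 --show-labels
```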
Thanks for this video sir
Amazing, Prerit brother. Thanks for the Hindi explanation. Great delivery and presentation.
Thanks man:)
Be consistent now brother. Great to see you back. I hope we create a community of the best engineers out there.🎉
I will try my best 😆
Great video! I look forward to more content like this. I have a request for a deep dive into Kubernetes after this series. It would be great to explore how controllers utilize Listers/Watchers internally, the server-side watch mechanism, deep etcd workings, etcd leases, the gRPC framework, advanced network policies, the end-to-end flow of a kubectl command request by a user, advanced scheduling strategies, Vault integration with k8s, best practices, and some project ideas (and advanced topics that I didn't mention here). I could use AI to gather this information, but your explanation style is awesome and I would love to see things from your perspective as an engineer. I'm just curious how k8s works internally 🫡😁.
Absolutely! This series is already an advanced series, and after it I will be regularly launching videos on internals and system design!!
Awesome content I am following from day 1 and I am utilizing the same in my Professional career as well ❤
Glad you liked it 🙌
Thank you prerit
Happy to see you back 😀. Great content as always.
Thanks man!!
thank you for this video/stream
Glad you liked the video :)
I have one question: How can downtime occur in a blue-green deployment when we are creating a separate environment for the new version and routing some test traffic to it for testing? Isn’t it highly available since our old environment is completely safe and all the traffic is still running on it? This seems like a safe solution. Could you please explain how downtime might happen?
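To make the question concrete: in blue-green the risk sits in the cutover itself, when the Service selector flips from the old version to the new one. A minimal sketch, assuming a hypothetical Service named `myapp` and Deployments labelled `version: blue` / `version: green`:

```shell
# Both environments run side by side; the Service selector decides
# which one receives live traffic.

# Check which version currently gets traffic
kubectl get service myapp -o jsonpath='{.spec.selector.version}'

# The cutover: flip the selector from blue to green.
# If green's pods are not actually ready (failing readiness probes,
# missing config, bad image), requests start failing from this instant --
# that is the downtime window, even though blue is still running untouched.
kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'

# Rollback is the mirror image, which is why blue-green recovers fast
kubectl patch service myapp -p '{"spec":{"selector":{"version":"blue"}}}'
```

So the old environment being safe doesn't by itself prevent downtime; a bad green rollout still drops traffic between the flip and the rollback.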
With create, if you edit something in the file, the change will not take effect until the pod is deleted and recreated. With apply, the changes are made on the fly, if I am not wrong @Prerit. After apply, it will show the changes made.
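The create-vs-apply difference above can be sketched like this, assuming a hypothetical manifest `pod.yaml` (note that many Pod spec fields are immutable, so "on the fly" updates work most cleanly for Deployments and similar objects):

```shell
# Imperative: creates the object, but fails if it already exists
kubectl create -f pod.yaml
kubectl create -f pod.yaml   # errors: the object already exists

# Declarative: creates the object if missing, otherwise patches the
# live object to match the file
kubectl apply -f pod.yaml

# Edit pod.yaml (e.g. bump the image tag), then re-apply:
kubectl diff -f pod.yaml     # preview what would change
kubectl apply -f pod.yaml    # the change is applied on the fly
```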
Welcome Back sir
Thank you :)
I think you forgot to add the important links in the description
Where can we practice Kubernetes commands?
Just create a cloud-based cluster and start applying.
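If a cloud cluster isn't handy, a local one works for practice too. A minimal sketch, assuming Docker is installed (browser playgrounds like Killercoda are another option):

```shell
# kind runs a Kubernetes cluster inside Docker containers
kind create cluster --name practice

# or use minikube instead
minikube start

# verify the cluster is reachable before practising commands
kubectl get nodes
```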
back in action
Glad you liked the video :)
Hey @techwithprerit brother, please update the notes.