Completed the video...!!!!!
More than any praise, maybe this one word will put a smile on your face - 'Subscribed'!
Thank you 😊🙏
Thanks
Just amazing, as are all of your videos!! Thanks for this, sir.
Thank you! Glad you're enjoying the series.
Generally we don't need to change both the mountPath and the --data-dir location in the etcd manifest. The --data-dir flag points at the mountPath location, and the mountPath is linked to the volume's hostPath, so we just need to change the volume's hostPath value and that's it - rough sketch below. Great tutorial!
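For reference, a rough sketch of that single change, assuming the default kubeadm etcd.yaml layout and the restored path used in the video:

```bash
# In /etc/kubernetes/manifests/etcd.yaml, change only the etcd-data hostPath:
#
#   volumes:
#   - hostPath:
#       path: /var/lib/etcd-restore-from-backup   # was: /var/lib/etcd
#       type: DirectoryOrCreate
#     name: etcd-data
#
# --data-dir and the mountPath stay /var/lib/etcd inside the container;
# the kubelet notices the manifest change and restarts etcd automatically.
sudo vi /etc/kubernetes/manifests/etcd.yaml
```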
Thanks for your clear explanation; it made things easier and more fun.
Awesome! I'm glad you found it fun and easy to understand. 😄
Awesome video
Thanks! 😊
Thanks Piyush for your wonderful videos
You're most welcome
Very informative.
Thank you!
Appreciate your efforts. Thank you.
Thanks for watching!
Appreciate your efforts ❤
Thank you
Super
You spoke about stopping the API server and etcd pod before taking the backup, but did not demonstrate it. Is there any doc related to that which you can suggest?
I have explained that part in one of the previous videos. You can move the manifest YAMLs of these two components out of /etc/kubernetes/manifests to a different directory, and the kubelet will stop those pods.
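A minimal sketch of that, assuming the default kubeadm manifests path (the "stopped" directory name is arbitrary):

```bash
# The kubelet watches /etc/kubernetes/manifests; moving a static pod's
# manifest out of it stops the pod, and moving it back starts it again.
sudo mkdir -p /etc/kubernetes/manifests-stopped
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests-stopped/
sudo mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/manifests-stopped/

# ... perform the backup/restore steps here ...

# Bring both components back:
sudo mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/
```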
Thanks for sharing.
Thanks for watching!
Thanks Piyush
Welcome :)
Hi, could you please explain peer etcd backup and restore? Nowadays they ask about it in the CKA exam.
Hello Vijay, peers are the members of an etcd cluster in HA mode. You can still take the snapshot in the same way; to restore, you restore on each of the members, changing the value of --initial-advertise-peer-urls to that member's IP. The rest of the process is the same.
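A rough sketch with hypothetical member names and IPs - run the restore once on each member, adjusting --name and the peer URL:

```bash
# Restore the same snapshot on member etcd-1 (repeat on etcd-2/etcd-3
# with their own --name and --initial-advertise-peer-urls values):
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --name etcd-1 \
  --initial-cluster etcd-1=https://10.0.0.1:2380,etcd-2=https://10.0.0.2:2380,etcd-3=https://10.0.0.3:2380 \
  --initial-cluster-token etcd-cluster-restore \
  --initial-advertise-peer-urls https://10.0.0.1:2380 \
  --data-dir /var/lib/etcd-restore-from-backup
```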
🤩
Hi Piyush, I need one clarification here. At 21:25 we are restoring the snapshot to a new directory, /var/lib/etcd-restore-from-backup, right? But can we restore an etcd snapshot directly into the existing etcd data directory? If that's possible, we wouldn't need to modify the etcd.yaml file. Waiting for your response on this. Thanks in advance 🙏
Hello buddy! Yes, we can restore it to the existing directory as well, but we restore to a new one to avoid overwriting the existing data directory.
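For reference, a minimal sketch of the flow from the video (cert paths assume a default kubeadm setup, and /opt/etcd-backup.db is a hypothetical backup path):

```bash
# Take the snapshot from the running etcd:
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore into a new directory so the original data dir stays untouched:
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir /var/lib/etcd-restore-from-backup
```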
@TechTutorialswithPiyush The overwriting problem will still come up if you restore in place. Instead of that, we can do the following:
First, check whether etcdutl and etcdctl work from any location; if not, download the etcdctl and etcdutl release binaries and move them to /usr/bin/.
After that, remove the etcd folder at /var/lib/etcd. Now if you run etcdutl, you can restore from the snapshot into that same path, and you don't need to change anything else. I have tried it in a killercoda exercise and it works; you can try it. Rough sketch below.
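A rough sketch of that in-place approach, assuming etcd v3.5 and a snapshot at /opt/etcd-backup.db (both hypothetical; adjust to your environment):

```bash
# Fetch the release tarball, which ships both etcdctl and etcdutl:
ETCD_VER=v3.5.16   # assumed version; match your cluster's etcd
curl -L "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz" -o /tmp/etcd.tar.gz
tar xzf /tmp/etcd.tar.gz -C /tmp
sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdutl /usr/bin/

# Stop the etcd static pod first (move its manifest out of
# /etc/kubernetes/manifests), then wipe the old data dir and restore
# into the same path so etcd.yaml never needs to change:
sudo rm -rf /var/lib/etcd
sudo etcdutl snapshot restore /opt/etcd-backup.db --data-dir /var/lib/etcd
```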
Such a beautiful session ❤❤❤❤❤
Glad you found it helpful
Hi! I took my CKA exam yesterday, and I got the question about SSHing into a node and performing a backup and restore of etcd with peer.key and peer.crt. If I passed those in when backing up, I received a "no such file or directory" error on the peer.crt/peer.key. Any idea what to do here? I can't find any examples out there that do this either.
I also tried server.crt and server.key, but all I got was "permission denied", so I suspect I have to authenticate with the peer flags somehow(?). These peer flags were also not present when checking the etcd pod's command. I do see in the docs, under Securing Communication, that there are some peer flags for configuring etcd with secure peer communication, but those flags were unknown to the etcdctl tool.
You might need to use sudo -i to elevate privileges, or run the command with sudo. Also, have you tried the etcdutl tool?
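For example, a hedged sketch: "permission denied" usually means the key files are only readable by root, so prefix the command with sudo. The paths below assume a default kubeadm layout; substitute peer.crt/peer.key (in the same pki/etcd directory) if the exam question names those files:

```bash
sudo ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```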
@TechTutorialswithPiyush Thanks for the answer! I passed the exam successfully, and I was able to perform the entire backup and restore process with sudo, although it was not stated in the question on the left that I should use sudo.
@erdi005 Congratulations 👏🏽👏🏽 Yes, they should have mentioned this. The same happened with me as well.
@@TechTutorialswithPiyush thank you for everything
Hi, why are we giving a new path as --data-dir when restoring? The backup is already stored at the /opt/*.db path. Can we use that /opt/*.db path in the etcd.yaml file? Please explain this.
Can you please share the exact timestamp?
@TechTutorialswithPiyush - 21:22, "--data-dir="
❤❤❤
Will this etcd backup and restore method restore Secrets and ConfigMaps along with their data, or will only the objects be restored?
Yes, as the config data also gets stored inside the etcd database.
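You can verify this yourself with a quick self-check (demo-cm is a hypothetical name):

```bash
kubectl create configmap demo-cm --from-literal=key=value
# ... take the etcd snapshot as shown in the video ...
kubectl delete configmap demo-cm
# ... restore the snapshot and point etcd at the restored data dir ...
kubectl get configmap demo-cm -o yaml   # the data should be back after the restore
```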
@@TechTutorialswithPiyush Thanks for the reply. I will practice it.
Comment for target.....!!!!!
I have deployed etcd as a service, not as a pod. What is the process for taking an etcd backup?
I have tried your process but was not able to restore etcd.
Can you please help?
Hello, it should not matter how etcd is deployed; the process is the same. Can you please share the steps you followed and the issue you are facing? It will be easier to troubleshoot if you join the Discord community (thecloudopscommunity.org) and share the details there.
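In general, a rough sketch for a systemd-managed etcd (the unit name, cert paths, and backup path are assumptions; adjust them to your setup):

```bash
# Take the snapshot while etcd is running:
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key

# Stop the service, restore into a new directory, and repoint the service:
sudo systemctl stop etcd
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir /var/lib/etcd-restore-from-backup
sudo chown -R etcd:etcd /var/lib/etcd-restore-from-backup  # if etcd runs as the etcd user

# Update --data-dir in the unit file (or its EnvironmentFile), then:
sudo systemctl daemon-reload
sudo systemctl start etcd
```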
A Krew plugin is required for smooth backup and restore.
Plugins can be used for anything, but we are learning things the cloud-native way. You can set up a cluster with just a few clicks in a managed cloud, or with a single click using Terraform; still, we need to understand how to set it up from scratch. Our main goal is learning, and that comes from doing things the hard way.
@TechTutorialswithPiyush Thanks. Recently we migrated from a 1.20 cluster to 1.31, and in fact Velero is an option, but somehow it failed to sync with the new cluster.
We did it using some plugins and it worked like a charm.
And in the video you were explaining restores through etcd - does that work with older versions as well, like 1.19 to 1.30?
Thanks
And does the etcd method ensure we keep the same ingress gateway, like the external IP?
Hi Piyush! Again, thanks for your guidance on etcd backup and restore; I learned a lot with your excellent pace in this course.
There is one issue I encountered during the assignment.
After performing the actions along with your video, I had the backup and restore working successfully, as you've shown. My cluster looked fine.
Then I followed the assignment steps and started by creating a new deployment. However, pod creation and termination processes were halted with the etcd-restore-from-backup setting.
After a few hours of troubleshooting attempts (deleting Calico pods, etc.), I managed to solve the problem by reverting the cluster from the "etcd-restore-from-backup" setting to the initial (original etcd) setup by modifying the etcd YAML.
I think this would probably cause downtime in a live system... In your case, were you able to successfully create/terminate pods after the restore?
OK, I tried again and this time it worked somehow :)
I also tried to reproduce the problem, but I couldn't :))
Thanks anyway, Piyush; you're welcome to share any comments on this one.
I am super happy to know that you were able to fix the issue. That's how a DevOps engineer should work: you might have missed a step earlier, but you did not give up; you tried again and fixed it, and more learning and understanding come with that. More power to you!