This is the most detailed and educational video about NFS provisioning on Kubernetes I've ever seen. Thanks a million.
Hi Serkan, thanks for watching.
Wow, you're faster than fast, quicker than quick, and yet informative, no BS, pure knowledge. Awesome! ❤
Glad you liked it! Thanks for watching.
Excellent video! I followed the instructions using "master" branch of kubernetes-sigs repo and everything ran smoothly!
Hi Angelos, thanks for watching.
Just a few minutes ago I was testing the deployment of the nfs provisioner with that first repo, and I got stuck on exactly that issue!! Thank you 🙏, your video has really saved me.
Glad it helped. Thanks for watching. Cheers.
this works so well and thank you for walking us through this!!!! seeing a PVC show up on my NAS was so exciting!!
Hi Jason, Thanks for watching.
@@justmeandopensource the only thing I would add is how to use an archived PVC after it has been deleted on the node. I see it on my NFS now renamed to "archived-nfs-test-claim-pvc"
@@Weirlive that archived volume/dir is more for the admin/user to recover any data if needed. Otherwise it can’t be reused within the cluster.
@@justmeandopensource interesting, well thanks for the info.
@@Weirlive no worries
Thank you for this video; as a token of gratitude I shared it on my LinkedIn profile. Keep up the great K8S videos. Wishing you all the best.
Hi Nahum, thanks for watching and sharing. Much appreciated.
The BEST tutorial on setting up NFS in kubernetes, thanks + subscribed
Hi Pezhvak, thanks for watching.
Thanks a lot.. I was frantically searching Google with AI and could not achieve what I wanted.. keep up the good work, Sir
Thanks for watching.
Thanks Venkat for making a video on this issue. Previously we needed to re-enable selfLink in the default k8s manifest file. 🤟🏼
No worries.
01:43 set up nfs server --> You set up the NFS server on a k8s node itself, correct me if wrong. Great video as usual. Thanks
Thank you for the support !
Hi David, thanks for watching.
Thanks so much bro. I was stuck with my NFS share after I upgraded my k8s version from 1.18 to 1.20 and had no idea why it couldn't create the pv. This video helped me a lot. Thanks again!
Can you let me know the name of the app (the command-line tool) you used in this video?
Sorry, I mean the terminal tool you used instead of the standard cmd tool. It's so cool.
Many doubts cleared in a single video.. Thanks..
Glad to hear that. Thanks for watching. Cheers 😊
Nice to see something setup before giving it a go, awesome video, thanks!
Thanks for watching.
Hi Venkat, thanks for the amazing video.
Thanks for watching.
This is the best and deeply explained! Thank you... saved ton of time..
Hi Krunal, Thanks for watching. Glad it helped
detailed and practical explanation, thank you for such quality content.
Cool. Thanks for watching. Cheers.
Lots of love from me. You have saved my prestige 😘😘😘😘
Hi Sumith, thanks for your interest in this content. Glad to hear you found it useful. Cheers.
Fantastic video, thanks so much!
Thanks for watching.
I was getting that self link error, thank you so much for this video, you saved a lot of time for me
Glad that it was helpful. Thanks for watching.
This video has literally saved my butt! Thank you so much... I was stuck with my deployments and everything was pending on volume provisioning. I'm subscribed and ready to learn more.
So glad it helped. Many thanks for watching and subscribing. Cheers.
This was very helpful. Thanks a bunch
Hi Ody, thanks for watching. Cheers.
this works perfectly, thanks for your help brother, finally got it after a week, thank you so much.
Hi Sagar, thanks for watching. Glad it helped. Cheers.
Thank you man, it helped a lot. I appreciate your efforts.
Hi, I like all your videos. I am using Longhorn v1.1.0 as the storage solution on my kubernetes homelab. It's a great solution and a powerful distributed block storage system. Longhorn now supports ReadWriteMany workloads, and you can create backups and snapshots. Think about making a video about Longhorn. Best regards.
Yeah. That's in my list. Cheers.
Excellent video
Thanks for watching.
Thank You, helped a LOT.
Thanks for watching Vitor.
Excellence in every video you do
Hi Roberto, thanks for watching.
@@justmeandopensource Can you do a knative video next :)? Thank you
@@RobertoFabrizi I can try.
Thanks for making such a useful video man!!!! Can you please make a video on kubernetes operators, CRDs and CRs?
Thanks for watching. I can try. Cheers.
@@justmeandopensource Thank you for your reply!!!
Wow great tutorial, I think you should make another k8s pv tutorial, how about rook/ceph?
Very nice and helpful video ...
Hi Amit, Thanks for watching.
Thanks for the detailed explanation. I followed the helm installation. After creating the PVC, I am trying to extend the storage size, but even though allowVolumeExpansion is set to true, the PVC expansion is not reflected. The change is there when I look at the manifest file. Any idea why the PVC expansion is not working?
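For context, a resize is requested by raising the claim's storage request, roughly like this (claim and class names are assumed from this thread's examples). Note that, to my knowledge, the NFS subdir provisioner does not actually enforce or expand sizes on the share, so the claim status may never reflect the new value even with allowVolumeExpansion set to true:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim                 # illustrative name
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi   # raised from 1Gi; only takes effect if the provisioner supports expansion
```

If size enforcement matters, quotas on the NFS server side are the usual answer, since the provisioner itself does not police how much a pod writes.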
thankyou so much venkat bro
You are welcome. Thanks for watching.
your videos are quite comprehensive and have helped me in setting up an ELK cluster in k8s. thanks a lot
i am facing an issue with the nfs snapshot repo and have sent a message in slack. could you pls take a look and provide some suggestions?
Just saw your message in Slack. Can you provide me with more information?
1. What does your kubernetes cluster look like (how did you provision it: cloud or local laptop, Windows or Linux host)?
2. What docs did you follow for deploying ELK?
Basically, if you could give me as much information as possible to help me recreate your setup in my environment, it would help with troubleshooting.
very detailed explanation, thanks for your help. keep doing the good stuff..
Great explanation. Loved all ur videos
Hi Rajesh, Thanks for watching.
This is a great explanation. The kubernetes nfs provisioner, and the pvc and pv under the nfs share, all got created successfully. However, upon deploying the test-pod.yaml file from the same github source with restartPolicy set to 'Always', the pod keeps crashing with a 'Restarting' message. I'd really appreciate any suggestions on the possible reasons.
Thanks for the great tutorials. Btw, your browser looks strange. What browser is it? 😅
Thanks for your effort
Hi, thanks for watching.
Thanks a bunch, mate. this tutorial saved me a lot of time 🙏
Nice explanation. Is there any native kubernetes dynamic host-based persistent volume provisioner?
Great content as usual :)
Any plans on a video (series) on securing K8S apps with a WAF like modsecurity?
Hi Henni, thanks for watching.. I can try..
Thanks man.
You are welcome. Thanks for watching.
Awesome !!!!
Thanks for watching.
you saved my life, thankyou
Hi Krittidet, thanks for watching. Glad it helped.
ty! But I want to add that in rbac.yaml (or whatever you may call it) you need to add rights to get, watch, list, create, delete and update endpoints. Without that I got errors in the provisioner pod.
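For anyone hitting the same thing, that extra rule would go into the provisioner's ClusterRole; a sketch using the verbs from the comment above (the role name is assumed, check your own rbac.yaml):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner   # assumed; match the name in your rbac.yaml
rules:
  # rule added per the comment above: without it the provisioner pod
  # may log RBAC permission errors on endpoints
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "watch", "list", "create", "delete", "update"]
```

Keep the rest of the stock rules as they are; this only appends the endpoints permissions.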
Amazing
Thanks for watching.
Could you please also create a session on rook-ceph provisioning and hook with prometheus, etc.
I will add it to my list. Cheers.
Thanks for this helpful tutorial. It is like finding a missing piece of a jigsaw. Do you know of any tutorial about iSCSI and Openfiler usage in kubernetes? Thanks again...
Hi Hasan, thanks for watching. I haven't come across any videos on iSCSI/Openfiler in kubernetes. Surely there will be some docs lying around on the internet. Cheers.
you're awesome dude
Hello Sir
After following the steps I am getting the below error.
Warning FailedScheduling 9s (x2 over 70s) default-scheduler 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
How do you do dynamic provisioning of PVCs for statefulset yaml resources in a bare-metal setup?
you are awesome thanks for your help
Hi, thanks for watching.
Best video collection for k8s. Could you please help me that which persistent storage solution I can use if I have a 3 nodes k3s Lab setup on my laptop ?
Hi Ashu, thanks for watching. You don't have many options if you want to do that on your laptop. You can use NFS as shown in this video. Some kubernetes distributions like KinD, K3d and minikube come with a localpath provisioner pre-deployed, so you can start requesting and using persistent volumes without much effort.
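As a concrete example, on K3s the bundled local-path provisioner means a claim like this works out of the box (the claim name is illustrative; local-path is the storage class K3s ships with):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim             # illustrative name
spec:
  storageClassName: local-path # provisioner bundled with K3s
  accessModes:
    - ReadWriteOnce            # local-path volumes are single-node
  resources:
    requests:
      storage: 1Gi
```

The resulting volume lives on the node's local disk, so it is ReadWriteOnce and tied to that node, unlike the NFS approach in the video.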
Amazing,
Thank you
Hi Mohamed, thanks for watching.
Thank you for this video! Do you know the (dis)advantages of rook-nfs over this approach?
Thanks for watching. Haven't tried rook-nfs yet to be able to comment. I might try it soon. Cheers.
hi sir, I always get this error after applying the claim:
controller.go:966] error syncing claim "704565bf-af3a-4a5f-89a0-fcb63ba50f34": failed to provision volume with StorageClass "managed-nfs-storage": unable to create directory to provision new pv: mkdir /persistentvolumes/pv-claim-pvc-704565bf-af3a-4a5f-89a0-fcb63ba50f34: read-only file system
How do I fix it?
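A "read-only file system" from mkdir usually points at the export or the mount being read-only. On the NFS server side the export needs the rw flag; a sketch of an /etc/exports entry (path and subnet are illustrative):

```text
# /etc/exports -- the export must include "rw", otherwise the provisioner's
# mkdir fails with "read-only file system"
/srv/nfs/kubedata  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, `exportfs -ra` re-reads the file; also check that the client-side mount options do not include `ro`.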
A Biiig Thanks
Thanks for watching
Hello, you have a very good command of kubernetes and an excellent way of explaining it; thank you for sharing your knowledge. One question: can this dynamic provisioning solution using NFS (with ReadWriteMany) that you explain in your video be used on an AWS EC2 instance with an EBS volume? And before installing NFS for dynamic provisioning, must the disk be prepared with LVM and an ext4 format? Thank you
Hi Juan, thanks for watching. I didn't dive deeper into this because in the real world (production), you mostly won't be using NFS as a dynamic volume provisioner. Clusters will be running in the cloud and make use of the dynamic provisioning that the cloud provider offers. NFS is not secure. There are also OpenEBS and Rancher's Longhorn; when it comes to storage provisioning, there are lots of projects. I only covered the basics. If you were to take it seriously, you would have multiple disks and some form of quota restrictions on the nfs server side, because even if you create a PV of 1Gi, there is nothing stopping the pod from writing more than 1Gi of data on that volume unless you have quotas set up on the nfs server side.
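To make the quota point concrete, one hedged sketch: if the export sits on an XFS filesystem mounted with the prjquota option, project quota config files like these (ids, names and paths are all illustrative) let you cap an exported directory:

```text
# /etc/projects -- map project id 10 to the exported directory
10:/srv/nfs/kubedata

# /etc/projid -- give the project a readable name
kubedata:10
```

With those in place, `xfs_quota -x -c 'project -s kubedata' /srv/nfs` marks the tree and `xfs_quota -x -c 'limit -p bhard=1g kubedata' /srv/nfs` enforces a 1Gi hard cap, so a pod can no longer write past what it claimed.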
good work there
Hi, thanks for watching.
hi venkat... your explanation is awesome... I have a query... I have created storage in the cloud, like amazon ebs, an azure file/disk or google cloud storage... can I use that storage in our on-premise kubernetes cluster as a dynamic volume?
Hi Uday, thanks for watching. You won't be able to achieve that from your on-prem servers unfortunately.
Thank you.
Hi Yong, thanks for watching.
Hi Venkat, thanks for the amazing video.
Can you give me some references on making dynamic provisioning allow expanding the disk size?
Thank you
What happens if the pod exceeds the amount claimed? Would be nice if this was tested.
I can't remember exactly, but there is nothing that stops the pod from using more than it has claimed. I think I tested it and noticed that the pod could indeed use more than requested.
Hey, can I share my local storage with k8s on AWS or another provider?
Nice
Thanks for watching. Cheers.
Thank you
You are welcome
fruitful as usual.. thanks! Now I have a question: when I delete a pvc while the pv is still there, how do I re-attach the pvc to the same pv again?! Trying to find any nfs solution providing this!
Hi, thanks for watching. What is the reclaim policy set to? There are Retain, Recycle and Delete. I guess yours is set to Retain, in which case the pv can't be reused for any other pvc. You can set it to Recycle, in which case the content of the pv will be deleted before it is bound to another pvc. If you want the content of the previous pv to be available for some reason, then you can't use dynamic provisioning; you will have to manage the pv yourself. Cheers.
Just me and Opensource I’ve figured it out finally. However recycle is now deprecated, but using retain and specifying the volumeName solved it for me.
@@nah0221 Perfect.
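For anyone else landing on this thread, a sketch of the fix described above: keep the PV's reclaim policy as Retain and create a fresh PVC that pins the old PV via volumeName (all names here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: managed-nfs-storage
  volumeName: pvc-existing-volume   # name of the retained PV to reattach
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi                  # must fit within the PV's capacity
```

One gotcha: a PV sitting in the Released state will not bind to a new claim until its old claimRef is cleared, e.g. `kubectl patch pv <pv-name> --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'`.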
Such a nice explanation. Just a small question: when archiveOnDelete: "true" is set, the volume directory is prefixed with 'archived' on deletion of the PVC. So is this volume reusable or not, given that it is prefixed with 'archived'?
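For context on where that flag lives: archiveOnDelete is a parameter on the StorageClass. A minimal sketch (the provisioner string is an assumption here; it must match whatever your provisioner deployment declares):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # assumed; match your deployment
parameters:
  archiveOnDelete: "true"  # rename the directory to archived-... instead of deleting it
```

As noted elsewhere in this thread, the archived directory only exists so an admin can recover the data manually from the NFS share; the cluster will not rebind it to a new claim.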
What mode are you using in the desktop environment for Arch Linux? I want to use that style.
Thank you
Hi Yassine, thanks for watching. What do you mean by mode? I used Archlinux with the i3 tiling window manager in this video. For the terminal, I used the Alacritty terminal with the ZSH shell and the zsh-autosuggestions and zsh-syntax-highlighting plugins, if that's what you were asking. Cheers.
@@justmeandopensource 😶🥴🥴
I just use the GUI in Rancher to create a PV based on an NFS share on my NAS. So far I have not been able to connect that share to my workload; the moment I create a PVC based on that Persistent Volume and connect it to my workload, the workload stops working. Do you have any idea what this could be? What is the difference between your method and creating a PV or PVC in the GUI of Rancher?
Hi,
Nice explanation and this works perfectly fine in most environments.
Do you think this approach will work if I have set up an NFS server on a WSL Ubuntu image and am deploying a K8s cluster using K3d?
Because in K3d we cannot log in to the nodes to install any packages, I am trying to use the nfs-client-provisioner Helm chart to install the client. But I am getting a connection refused error in return, even though I have disabled UFW.
Any comments on this scenario, or anything I should take care of while using this?
I like your video. Thanks you!
Hi, thanks for your interest. Cheers.
The old deployment didn't work because the selfLink option is disabled in Kubernetes API 1.20.x and will be removed in 1.21.
Hi, thanks for adding this information. Cheers.
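For anyone stuck on 1.20.x with the old provisioner, the widely shared stopgap was re-enabling the feature gate on the API server. A sketch of the flag in the kubeadm static-pod manifest (the path is the kubeadm default); the gate was removed entirely in later releases, so switching to the updated provisioner is the proper long-term fix:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default path)
spec:
  containers:
    - command:
        - kube-apiserver
        - --feature-gates=RemoveSelfLink=false   # stopgap for 1.20.x only
        # ...keep all existing flags unchanged...
```

The kubelet restarts the static pod automatically when this manifest changes; no manual restart is needed.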
Very nice video! However I have a problem installing it on my cluster: failed to provision volume with StorageClass "managed-nfs-storage" - permission denied
I am sure that it has permission to write as I am able to mount the NFS share on my computer. Do you know any way to fix it?
I got a Pod CrashLoopBackOff error, can you help me?
So, what does archive give you? How do you restore?
Would you be able to do some work on VMware, more specifically on vSphere Cloud Provider?
Thank you.
Hi, What exactly do you want me to try?
@@justmeandopensource Thanks for your reply, I'm currently trying to set up an HA cluster with the vSphere Cloud Provider in order to get persistent volumes across the nodes. I'm following these instructions: cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html
Now I'm stuck adding a new master node. I need to pass the cloud-provider flag through a yaml file with kubeadm init --config on the master and kubeadm join --config on the workers; however, I cannot do it for a second master.
My issue: serverfault.com/questions/1060197/kubernetes-vsphere-cloud-provider
Thanks again!
The thing is, it won't work when I change the namespace in rbac.yml. Can you help me? I updated everything with my own namespace: role, service account, deployment, everything. I think the problem is in rbac.yml, where I added my own namespace to every different kind. Is it fine to change or add a namespace in rbac.yml?
What I really want is for all these things to run in my own namespace, not default.
Hey, at that time I was new. It's really simple; it works!
Why did you mount and then unmount?
Please try and update all your videos, very good videos but all too old
Thanks for watching. Yes you are right. I need to update most of my videos which I’ll be doing going forward.
rpc error: code = Unknown desc = Error response from daemon: unauthorized: The client does not have permission for manifest
Hi, thanks for watching. How did you export your nfs share? I had to add insecure option in the /etc/exports entry.
@@justmeandopensource in our project we are taking images from a private registry
@@rajasekharreddy2377 Where exactly you are seeing this error. I mean when trying to do what you get this?
Getting an ImagePullBackOff error for the nfs pod
@@justmeandopensource the nfs pod is not up, and because of that we are not able to deploy the istio ingress gateway after istiod
Any slack group for discussion?
Check my channel banner. You will find a link to join my workspace there.
@@justmeandopensource Thanks Venkat. Joined the group.
@@nksajeer Thank you
I'm a new viewer, trying to install Traefik, First you send me to this video and then you send me back even further.
That's annoying and not very considerate on your part.
Hi, sorry about that. I didn’t have a clear plan as these videos were recorded at different times and i didn’t think it from the viewers point of view. Thanks for bringing it out. I’ll try and correct this in my future content.