[ Kube 20 ] NFS Persistent Volume in Kubernetes Cluster

  • Published: 12 Jan 2019
  • In this video I will show you how you can use NFS as a storage backend for your Kubernetes cluster.
    Github: github.com/justmeandopensourc...
    For any questions/issues/feedback, please leave me a comment.
    Thanks for watching this video and if you find it useful, please share it with your friends and subscribe to my channel for more videos.
    If you wish to support me:
    www.paypal.com/cgi-bin/webscr...
    Thanks,
    Venkat
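As a quick reference, the static NFS PV/PVC pattern the video walks through looks roughly like the sketch below (names, server IP and paths are illustrative, not necessarily those used in the video):

```yaml
# Editor's sketch of a static NFS PersistentVolume and a matching claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.16.16.1        # NFS server address (example)
    path: /srv/nfs/kubedata    # exported directory (example)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  storageClassName: manual     # must match the PV's storageClassName
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```

A pod then mounts the claim via `spec.volumes[].persistentVolumeClaim.claimName: pvc-nfs`.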

Comments • 202

  • @xedriq1
    @xedriq1 3 years ago +1

    Superb explanation and demos man, I'm really enjoying your vids! Thank you for sharing your skills.

  • @praveshjangra07
    @praveshjangra07 4 years ago +2

    Simply awesome, watched and practiced. Thank You.

  • @shivakumara6312
    @shivakumara6312 4 years ago +1

    Hi Venkat, I started to learn about PV and PVC, and the video is very good for beginners. I am going to do it on my cluster. Thanks for the video.

  • @ps100469
    @ps100469 4 years ago +2

    You sir are a rock star. Thanks for the great video. You have saved me tons of time!

  • @motarski
    @motarski 5 years ago +3

    This might be the most straightforward video on the net for k8s volumes; it's just awesome. Thank you, sir, for sharing it.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Ivan, thanks for watching this video and taking time to comment. Much appreciated.

  • @jhmnieuwenhuis
    @jhmnieuwenhuis 3 years ago +4

    Excellent video. I also watched the dynamic NFS one, also excellent stuff. Thanks!!
    You are a great instructor and you know what you are talking about.
    Learned a lot today.
    Regards, Hans

  • @rainbowp19
    @rainbowp19 3 years ago +1

    Thanks Venkat. Tried this lab today. I did it without any hiccups :)

  • @zaibakhanum203
    @zaibakhanum203 2 years ago +1

    Literally awesome, sir. Lovely work and lovely explanation.

  • @magicvoid
    @magicvoid 3 years ago +1

    Hi - very helpful! You did a really great job and helped me out on my journey!

  • @AtulkumarGupta080796
    @AtulkumarGupta080796 3 years ago +1

    Excellent presentation, really helped me. Thank you, keep up the good work.

  • @mrkumar9591
    @mrkumar9591 5 years ago +1

    Hey Venkat,
    You are doing such a fabulous job, man.
    I learned a lot of Kubernetes from your presentations.
    Your presentation was awesome.
    What a dedication... ahh
    The beauty here is how you find the time to reply to most of the audience [it's unbelievable].
    It's a really good attitude.
    Keep rocking! I have tried this exercise on CentOS 7 and it works as expected. Note: I just used the export share * (rw,sync) and I didn't get any error.

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      Hi Kumar, many thanks for watching this video and taking time to give feedback. Glad that you liked my videos. Just trying to give back to the community something that I have been learning.
      Thanks,
      Venkat

  • @atjhamit
    @atjhamit 3 years ago +1

    Wonderfully explained!!
    Thanks.

  • @openshores4288
    @openshores4288 3 years ago +1

    Love you man, I do love documentation, but sometimes it's better to watch a video :)

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi, thanks for watching. Yeah, with documentation you can quickly get through what you need, but sometimes if you want to really learn, videos are great. Cheers.

  • @abdulazizallan3634
    @abdulazizallan3634 4 years ago +2

    Very, very good. Your tutorial is very simple. Thank you so much for sharing this.

  • @2271masoud
    @2271masoud 2 years ago

    Thanks for the video. I had to install the NFS client packages on the worker nodes to get it up and running: sudo apt install nfs-common nfs4-acl-tools on Ubuntu, dnf install nfs-utils nfs4-acl-tools on CentOS.

  • @alekseylitvinov404
    @alekseylitvinov404 2 years ago +1

    thank you very much

  • @kdetony
    @kdetony 4 years ago +1

    Regards!!!!! .... & Congrats

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Anthony, thanks for watching this video.

    • @kdetony
      @kdetony 4 years ago +1

      @@justmeandopensource sure !!!! :)

  • @bhaskarreddy3201
    @bhaskarreddy3201 5 years ago +1

    Good video

  • @raghumca
    @raghumca 2 years ago

    Thank you

  • @MrKERTAK
    @MrKERTAK 4 years ago +2

    Thanks

  • @devmrtcbk
    @devmrtcbk 3 years ago +1

    thanks

  • @manalirahate3611
    @manalirahate3611 4 years ago +2

    Can you create a video on how to provision Ceph storage to a multi-master Kubernetes cluster using ceph-rbd and ceph-fs?
    It will be very helpful for us.
    All your videos have helped me learn many things.

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      Hi Manali, thanks for watching. I will certainly do a video on Ceph as that's on my list. Cheers.

    • @manalirahate3611
      @manalirahate3611 4 years ago

      @@justmeandopensource I have successfully attached ceph-rbd and ceph-fs to the multi-master setup. Now I am trying to provision Nextcloud with this Ceph storage and I am getting stuck. Can you help me with it, and with ingress for the same?

  • @PanosGeorgiadis
    @PanosGeorgiadis 5 years ago +2

    Instead of enabling and starting the NFS server in two steps, you can do 'systemctl enable --now', which does both in one step ;)

    • @justmeandopensource
      @justmeandopensource  5 years ago +6

      Hi Panos, thanks for watching this video and taking time to share your suggestion. Yes, I am aware of that, but I wanted to show each step for those who are not used to the "enable --now" option. Enabling and then starting makes more sense for new users. Thanks.

  • @anishaanil1
    @anishaanil1 3 years ago

    Hello Venkat,
    I've been following your tutorials and they are very informative. But a quick question:
    How does one do the same thing (PV, PVC and Deployment template) when there are directories and sub-directories involved? The single-file example is really easy, but in production an application is not bound to a single file but to directories, sub-directories and files inside those directories.
    How can one handle this situation?

  • @costalesea
    @costalesea 4 years ago +1

    Hi, Venkat. I appreciate so much your effort and dedication in making these videos.
    A little question: suppose I have a pre-existing NFS export with some static files, PDF files for example, that I need to mount in every replica of my app. How does it work with this provisioner if every pod of the deployment will claim a single volume? I need the same data in every pod, you understand?
    Sorry if the question is not so coherent.
    Thank you from Argentina!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Eduardo, thanks for watching. First create a PVC, which will create a PV for you if you have dynamic volume provisioning set up. Then copy the files onto the directory, and each of your pods using that PVC will have that data.

  • @Samos667
    @Samos667 4 years ago +1

    Very amazing, you have saved me with your video. Can you tell me how you made command completion display directly on the prompt? It's insane.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Sammy, thanks for watching. I use oh-my-zsh, and the command completion in the background is the zsh-autosuggestions plugin, which suggests commands based on your history. I have done a few videos on my terminal setup where I explained it.
      ruclips.net/p/PL34sAs7_26wOgqJAHey16337dkqahonNX
      Thanks

  • @karlovasiradio
    @karlovasiradio 3 years ago

    If we deploy our NFS server to a GCP instance, for example, should we expose any port through the firewall in order to access the server from anywhere? And must we define it in the YAML file for the persistent volume description?

  • @toroktechnology7420
    @toroktechnology7420 4 years ago

    How can I update a persistent volume or update the PVC configs?

  • @singaravelann3671
    @singaravelann3671 3 years ago +2

    Hi Venkat,
    Thanks for the great playlist on Kubernetes. How can we set up an HA (highly available) NFS server?

    • @Gersberms
      @Gersberms 3 years ago

      Excellent question! I've already tried HAProxy for the service load balancing; it's a great program and I'm very happy with his suggestion. But that's for publishing a service to an external IP address; it has nothing to do with the NFS server. This is an important step in high availability that I'm interested in.

  • @jamesaker7048
    @jamesaker7048 4 years ago +1

    I think around 8:54 you give a clue as to what the IP needs to be set to on the NFS server in /etc/exports. I used the subnet assigned to the LXC containers instead of *, so my /etc/exports looks like /srv/nfs/kubedata 10.105.38.0/24(rw,no_root_squash). I didn't have to use chmod 777, which I don't like to use. Let me know if this is helpful. Thanks again for all your work!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi James, thanks for watching and sharing your thoughts. What you have done is exactly what is needed. I wanted to cover a big audience, and a few people might have issues with their networking setup, so I used * for the IP range and the insecure option as well. Cheers.
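As an editor's sketch (paths and subnet are the illustrative values from James's comment, not a prescription), the subnet-scoped export line is written to a local file below just to show its format; on the real NFS server the line belongs in /etc/exports, applied with `sudo exportfs -rav`:

```shell
# Example /etc/exports entry scoped to the cluster subnet instead of '*'.
# With proper ownership on the directory, this avoids both the wide-open
# wildcard and 'chmod 777'.
cat > exports.example <<'EOF'
/srv/nfs/kubedata 10.105.38.0/24(rw,sync,no_root_squash)
EOF
cat exports.example
```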

  • @mommyandagastyaa
    @mommyandagastyaa 4 years ago

    Venkat, I want to do persistent volume encryption in Kubernetes. How can I do that? Can you please help me out with it?

  • @vudinhdai2638
    @vudinhdai2638 5 years ago +1

    Can I install nfs-server directly on the Kubernetes master node and implement dynamic provisioning on Kubernetes?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Vu,
      For a development/test environment, and for learning purposes, you can very well install nfs-server on the master node. All you need is an NFS server with exported shares that the worker nodes can access. But in a production environment this won't be practical. Either have a separate NFS server or go for a container-based storage solution like Portworx, OpenEBS or anything else.
      Thanks,
      Venkat

  • @meeraj2009
    @meeraj2009 4 years ago +2

    Do you have any examples of migrating PVs, PVCs and NFS shares from an old NFS server to a new NFS server without losing data? The migration also includes the docker-registry and metrics.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Raj, thanks for watching. I haven't done any such thing before. Sorry, no idea.

  • @saeidghafouri8501
    @saeidghafouri8501 2 years ago +1

    Thank you for the awesome explanation! Could you please let me know the name of the resource usage tool you have on the right of the screen (showing the system's networking and resource usage)?

    • @justmeandopensource
      @justmeandopensource  2 years ago

      Hi Saeid, thanks for watching. The stuff you see on the right side of my screen is a conky script, which can be configured to show pretty much anything from the system. You can download a sample conky configuration from the internet, modify it, install conky and use it. Cheers.

  • @hichammegari6926
    @hichammegari6926 3 years ago

    Great job, but I have a problem: my nginx pod is always in Pending state!?

  • @SonNguyen-pw8lm
    @SonNguyen-pw8lm 4 years ago +2

    Hello Admin, can you do a tutorial about GlusterFS? Thank you. The tutorial is very useful.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi, thanks for watching. I have done a series on GlusterFS which you can watch in the below playlist.
      ruclips.net/p/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq

  • @gurudath1000
    @gurudath1000 4 years ago +1

    How can we set up an NFS file server on AWS so it can be mounted on all nodes?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Guru, thanks for watching. You can use EFS (Elastic File System) in AWS as an NFS file server. More details in this link.
      aws.amazon.com/premiumsupport/knowledge-center/eks-pods-efs/

  • @ChristianAltamiranoAyala
    @ChristianAltamiranoAyala 4 years ago

    Hi Venkat, first of all, what a great video. I'm curious about the size of the NFS server. Is it possible to use an LVM volume for the NFS directory?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Christian, thanks for watching this video. Yes, you can use an LVM volume for the NFS share. Cheers.

  • @karlovasiradio
    @karlovasiradio 3 years ago

    My Ubuntu host cannot locate the nfs-utils pacman package... any ideas why? I have updated my system!

  • @naturevibezz
    @naturevibezz 2 years ago

    chown: changing ownership of '/var/lib/mysql/': Operation not permitted. Got this error when I tried it with a MySQL deployment.

  • @manikandans8808
    @manikandans8808 5 years ago +2

    This was so interesting to watch. It worked perfectly well. What are the drawbacks of using insecure in the exports? Can you tell us about the nobody:nogroup credentials? Thank you.

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      Hi Mani, the drawback of the insecure option is, as it says, that it is insecure. Anyway, NFS is insecure. I just wanted to get it working somehow, as my focus was not on NFS but on Kubernetes using NFS. If you read through Ajit Singh's comment, he has explained how he worked it out without the insecure option.
      Basically we need to set the ownership of the NFS exported directory to a more generic owner:group. Leaving it as root:root will cause permission problems when you try to access it from the pod for reading/writing.
      If you set the ownership to a generic account, all the data you write to that directory will inherit the ownership.
      For certain distros the generic user is nfsnobody, and for certain others it is nobody/nogroup.
      Thanks,
      Venkat

    • @manikandans8808
      @manikandans8808 5 years ago +2

      @@justmeandopensource perfectly explained. Thanks for that.

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      @@manikandans8808 No problem.
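The generic-ownership idea from the reply above can be sketched as follows (an editor's sketch with assumed paths; a demo directory under /tmp stands in for the real export, and the chown line is shown as a comment because it requires root on the NFS server):

```shell
# Give the exported directory a generic owner so pods can write without the
# 'insecure'/777 route. The generic user varies by distro: nobody:nogroup on
# Debian/Ubuntu, nfsnobody on some RHEL-family releases.
mkdir -p /tmp/kubedata-demo
# On the real NFS server this needs root:
#   sudo chown nobody:nogroup /srv/nfs/kubedata
chmod 2775 /tmp/kubedata-demo    # setgid bit: new files inherit the group
stat -c '%a' /tmp/kubedata-demo  # prints 2775
```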

  • @sufiaalmas5354
    @sufiaalmas5354 5 years ago +2

    Thank you for your video. I am trying to add a persistent volume to a Jenkins container. I am using /var/jenkins_home as the mount path, but when I create the container it goes into CrashLoopBackOff state, and in the logs I get a permission denied error: we cannot write to this path in the container. How do I resolve this error?

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      Hi Sufia, thanks for watching this video. I am not sure how you are deploying Jenkins and the persistent volumes. It's worth checking the logs of the Jenkins pod to see why it is failing.
      I have done a video on running Jenkins with persistence on a Kubernetes cluster. If you are interested, you can take a look at it in the below link.
      ruclips.net/video/ObGR0EfVPlg/видео.html
      Also for dynamic NFS provisioning, you can check the below video.
      ruclips.net/video/AavnQzWDTEk/видео.html
      Thanks

    • @sufiaalmas5354
      @sufiaalmas5354 5 years ago +2

      Thank you so much. Because of your dynamic NFS provisioning video I am able to do that 😊

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      @@sufiaalmas5354 You are welcome. Cheers.

  • @sivav3675
    @sivav3675 3 years ago +1

    Hey Venkat, thanks for your video. I appreciate your efforts in doing so. A quick doubt about the NFS volume type:
    I created an NFS server whose root disk is 10 GB (a GCE instance in GCP).
    I mounted that NFS server to my cluster (1 master and 2 worker nodes created using kubeadm on GCE).
    I created a PV of 1Gi, and a PVC of 500Mi.
    I created a pod for nginx and mounted this PVC. It's successful.
    But when I log in to the pod (using kubectl exec) and run "df -h", I see a 10 GB size for my /usr/share/nginx/html mount, even though I only provisioned 500Mi of my 1Gi PV. Why is it showing the 10 GB root mount of the NFS server?
    Thanks!

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi, thanks for watching. What you have mentioned is kind of expected. You would have to put some restrictions in place on the NFS server side, and it will be complex.
      You may want to configure an individual disk or partition on the NFS server side and then export it.

    • @sivav3675
      @sivav3675 3 years ago +1

      @@justmeandopensource Thanks Venkat, got it.

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      @@sivav3675 Cool.

  • @devanshuoza8561
    @devanshuoza8561 5 years ago +2

    Hi Venkat,
    Thank you for the video, it is awesome. It has given me more clarity regarding PVs in the context of k8s.
    Can you guide us: if I expand the size of the NFS directory, is the PV volume size updated automatically, or do we need to do it manually?
    For example:
    I have one NFS server with 50 GB, and I have created a PV and PVC with a 50 GB size. Now my storage is full and I want to expand it, so I update the volume on my NFS server, but k8s isn't aware of that change because we configured it with 50 GB. So will it pick up the change automatically, or do we need to make some change in the YAML and re-run it again?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Devanshu, if you expand the file system where you have the NFS exported directory, the extra space will be available to the k8s cluster. But the persistent volume will remain the same size as you requested in the PVC. Thanks.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      If you want to resize your persistent volume you have to delete and recreate it. Thanks.

  • @travelersnotebook3503
    @travelersnotebook3503 2 years ago

    Can we use this NFS persistent volume to run a database?

  • @emilpeychev8714
    @emilpeychev8714 1 year ago

    Do you think autofs may offer some advantages?

  • @fatimazahrabenyahya9112
    @fatimazahrabenyahya9112 3 years ago +1

    Hey Venkat, please, I want to ask if there is a possibility to install NFS with a Helm chart in Kubernetes without using this method.

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi Fatima, thanks for watching. I did a video recently on that.
      Here it is:
      ruclips.net/video/DF3v2P8ENEg/видео.html
      And you can use Helm to install the NFS provisioner in the cluster.

  • @ninja2807
    @ninja2807 4 years ago +1

    Thanks Venkat. Another great video. We are all learning a lot from you.
    The persistent volume worked perfectly. However, sometimes when I refresh the page the response I'm getting is the default nginx HTML page and not my own page. What do you think could be happening? It seems like the NFS connection is having some drops, but I'm not sure what is going on.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi, thanks for watching. I am not entirely sure why that would be happening. You are creating a deployment with a certain number of replicas, and you are exposing it through a service. Your deployment manifest specifies that the pods need volumes, so I guess you are mounting the same volume on all your nginx pods as read-only on multiple nodes. The service will load balance across all the pods based on the labels, so you should be seeing the same page.

    • @ninja2807
      @ninja2807 4 years ago +1

      @@justmeandopensource I figured out what the issue was, and I believe it was some type of bug, because to me it does not make sense anyway. From your previous video I had an nginx pod called "nginx2" running as a NodePort service on port 31600 in the default namespace. However, following this video we create another nginx pod with the NFS volume, which we called "nginx-deploy" and made a custom webpage for; I also exposed this as a NodePort, but on port 31700 in the default namespace. So, in short, when I requested the page from the browser (example URL "kworker1:31700"), sometimes the request was hitting the nginx2 service on port 31600, which did not have the custom webpage. This was very strange behavior in my humble opinion. Once I deleted the nginx2 service and deployment, I had no more problems.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@ninja2807 Cool.

  • @a13xhackintosh
    @a13xhackintosh 4 years ago +1

    Hi Venkat,
    I'm quite confused with PV and PVC.
    I'm going to deploy WordPress in my cluster. I've created 2 PVs; do I still need a PVC?
    alex@bionic30:~/yamls/wordpress$ cat 01_nfs-pv-wordpress-web.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv-wordpress-web
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /wordpress-web
        server: 192.168.0.20
      persistentVolumeReclaimPolicy: Retain
    alex@bionic30:~/yamls/wordpress$
    alex@bionic30:~/yamls/wordpress$ cat 02_nfs-pv-wordpress-mysql.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv-wordpress-mysql
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /wordpress-mysql
        server: 192.168.0.20
      persistentVolumeReclaimPolicy: Retain
    alex@bionic30:~/yamls/wordpress$

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Alex, thanks for watching. Yes you will still need PVC by which you are requesting the storage. You will then use the PVC in your pod definition.
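For illustration, a claim matching Alex's first PV above could look like the sketch below (the claim name is hypothetical; since the PV declares no storageClassName, the PVC uses an empty one so it binds only to statically created volumes):

```yaml
# Editor's sketch of a PVC for the nfs-pv-wordpress-web volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-wordpress-web
spec:
  storageClassName: ""        # bind to statically created PVs with no class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

The WordPress Deployment would then reference it via `spec.volumes[].persistentVolumeClaim.claimName: nfs-pvc-wordpress-web`.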

  • @david2358
    @david2358 3 years ago +1

    Is it necessary to mount the NFS export on the worker nodes, or can we just use the NFS share in the pods using a PV and PVC without mounting the NFS share on the worker nodes?

    • @justmeandopensource
      @justmeandopensource  3 years ago +2

      Hi David, you don't have to manually mount the NFS share on the worker nodes. Creating the PV will do that for you. I was just testing whether the worker nodes could mount the shares from the NFS server, to get the basic networking right.

    • @david2358
      @david2358 3 years ago +1

      @Just me and Opensource thank you so much for the fast response 👍👍

    • @justmeandopensource
      @justmeandopensource  3 years ago

      @@david2358 No worries. You are welcome.

  • @srout1000
    @srout1000 4 years ago +1

    Hi Venkat,
    I am getting the below error. Could you please fix the YAML file?
    [root@master yamls]# kubectl version --short
    Client Version: v1.16.0
    Server Version: v1.16.1
    [root@master yamls]# ls 4*
    4-busybox-pv-hostpath.yaml 4-nfs-nginx.yaml 4-pvc-hostpath.yaml 4-pvc-nfs.yaml 4-pv-hostpath.yaml 4-pv-nfs.yaml
    [root@master yamls]# kubectl create -f 4-nfs-nginx.yaml
    error: unable to recognize "4-nfs-nginx.yaml": no matches for kind "Deployment" in version "extensions/v1beta

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Santosh,
      Thanks for watching this video. In Kubernetes v1.16, the apiVersions for some of the resources have been deprecated and can no longer be used.
      If you are using a DaemonSet, Deployment, StatefulSet or ReplicaSet, update your YAML file and change the apiVersion to apps/v1 instead of extensions/v1beta1.
      I know a lot of people will have this issue. All the YAMLs I have in my GitHub repo use extensions/v1beta1 as the apiVersion. I don't want to change the files, as that might break things for people using older versions of Kubernetes.
      I have in fact recorded a video about these k8s v1.16 changes, which will be released soon.
      Thanks.
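The migration described above can be sketched as follows (resource and label names are illustrative, not from the repo); note that apps/v1 also makes spec.selector mandatory for Deployments:

```yaml
# Editor's sketch of the apiVersion migration for a Deployment (k8s >= 1.16).
apiVersion: apps/v1            # was: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-nginx
spec:
  replicas: 1
  selector:                    # required in apps/v1
    matchLabels:
      app: nfs-nginx
  template:
    metadata:
      labels:
        app: nfs-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```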

  • @makrand1584
    @makrand1584 4 years ago +1

    Hi Venkat,
    Off topic: that is a nice conky display you've got on the right. Can you share its config file? It should be /home/USER/.conkyrc
    Nice video, BTW.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Makrand, sorry, unfortunately I deleted the GitHub repo that had those config files. I was using that desktop setup only for a while and am not using it anymore. Many viewers asked for that conky config. I chose one from the internet and customized it to my liking.

  • @m8_981
    @m8_981 4 years ago +2

    Hi, is there a video about your terminal tweaks etc.?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi M8, thanks for watching this video.
      I recently started using the i3 tiling window manager. I've done some videos on my setup, if you are interested.
      ruclips.net/video/XpNcxzzkkT0/видео.html
      ruclips.net/video/SMfidTyrqDo/видео.html
      ruclips.net/video/omhky9FgViU/видео.html
      Or the old desktop environment and setup I used to use can be found in the below link
      ruclips.net/video/soAwUq2cQHQ/видео.html
      Thanks.

    • @m8_981
      @m8_981 4 years ago +2

      @@justmeandopensource Amazing, thanks! I'll look into them :)

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@m8_981 You are welcome. Cheers.

  • @nagendrareddybandi1710
    @nagendrareddybandi1710 3 years ago +1

    Hi Sir,
    It's superb.
    The PV was created with 1Gi, the PVC requested 500Mi and it got bound. All cool.
    If we create one more PVC with 500Mi, will it allocate from the same PV? As I've seen, it stays in a pending state.
    If that's the case, the remaining 500Mi of the first PV would be wasted, right?
    Or, if we need to increase the first PVC, can we increase it (as per LVM)?
    The pod was created on worker1 only, so when we expose it to the internet it should work only on worker1's IP, right? Why does it also work with worker2's IP?

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Hi Nagendra, thanks for watching. In the case of a manual persistent volume as shown in this video, the 1G PV won't be reused. If you requested just 500M from a PVC, then the remaining 500M on that PV is wasted. This is why there is dynamic volume provisioning: you don't have to create any PV in advance, and a PV gets created with the exact size requested by a PVC.
      You can edit the PVC and increase the size, but that will take effect only when you restart the pod.
      Cheers.

    • @nagendrareddybandi1710
      @nagendrareddybandi1710 3 years ago +1

      Cool. Thank you so much, sir, for the clarification.

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      @@nagendrareddybandi1710 You are welcome. Cheers.

  • @anjanpoonacha
    @anjanpoonacha 4 years ago

    Hi Venkat, thanks for the K8S playlist. It is very helpful.
    The mount is working fine. I created a persistent volume.
    nfsiostat
    > 10.128.0.8:/srv/nfs/kubedata mounted on /mnt:
    When I create a PersistentVolumeClaim, it stays pending forever, since it throws the error "storageclass.storage.k8s.io "manual" not found"
    kubectl get sc
    > No resources found
    Shouldn't the creation of the PV create a storage class?
    What could be the issue here?
    Please share resources where I can read more about it.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Anjan, thanks for watching.
      Starting at 11:15 in this video, I showed 4-pv-nfs.yaml, which creates the persistent volume. This manifest contains the storageClassName, and you need to use the same storage class in your PVC. Did you use the manifests in my GitHub repo or your own? Just double check that you are using the same storage class name in both the PV and the PVC.

    • @anjanpoonacha
      @anjanpoonacha 4 years ago

      @Just me and Opensource Thank you very much for replying.
      Yes, I used the same YAML files. It didn't work for me. I could see the persistent volume "manual", but could not create a PVC using "manual" as the storageClassName.
      I didn't modify anything in the manifest.
      Thanks to your dynamic provisioning video, it worked perfectly for me.

    • @justmeandopensource
      @justmeandopensource  4 years ago

      @@anjanpoonacha no worries.

  • @rahulmalgujar1110
    @rahulmalgujar1110 4 years ago +1

    Very nice video. I am trying to create a PV using AWS EFS, but when I create the PVC its state is pending, and when I describe the PVC it says the PV is not found, even though the PV has already been created. I want to deploy my InfluxDB inside k8s and mount it on EFS.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Rahul, I haven't used EFS as persistent volume storage. However, we can check it.
      Please provide a bit more detail on your setup:
      1) Is your cluster running locally on your laptop or in AWS
      2) How did you provision your Kubernetes cluster
      3) How did you setup dynamic volume provisioning? NFS-provisioning? Did you create a storage class?
      4) How did the persistent volume got created? Did you create the PV manually?
      Thanks.

    • @rahulmalgujar1110
      @rahulmalgujar1110 4 years ago +1

      @@justmeandopensource 1. Yes, it is running locally.
      2. We provisioned it using a script.
      3. I am using Windows and I have created an EFS in the AWS console. I am not sure whether I need to configure NFS locally. I read a document where he provides the IP of the NFS server in deployment.yaml, and I am providing the EFS server in my YAML. Is that correct?
      4. I created the PV and PVC manually, and now the PVC is bound to the PV I created.
      But when I apply the YAML file the pod is not running; its state is ContainerCreating. Not sure why it is stuck in that state?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@rahulmalgujar1110 Okay. Anyway, you have got the PV created manually and now the PVC is bound to that PV. Usually a pod will be in pending state if it is waiting for a persistent volume, but in your case the PV is already there and bound to a PVC. If the pod is in ContainerCreating state, it could be something else. Do the worker nodes have sufficient memory available to take this pod? Usually when you don't have enough memory on the worker node, pods will be stuck in ContainerCreating state. You can check the output of the describe command:
      $ kubectl describe pod <pod-name> or $ kubectl describe deploy <deploy-name>
      It will show you why the pod is in that state.
      Also you can check the output of "kubectl get events" immediately after deploying your pod.

    • @rahulmalgujar1110
      @rahulmalgujar1110 4 years ago +1

      @@justmeandopensource I am getting these two warnings when I describe the pod: Unable to mount volumes for pod "" and MountVolume.SetUp failed for volume "pv-efs" : mount failed: exit status 32

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@rahulmalgujar1110 Can you try mounting the EFS volume manually on the worker nodes?

  • @yuven437
    @yuven437 5 years ago +2

    Thank you for your videos! I am trying to follow what you do, but I keep getting [...] access denied by server while mounting 11.0.0.75:/mnt/nfs/var/nfsshare
    even though I have turned off the firewall and opened the NFS share to all.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Yuven, thanks for watching this video. I initially had this issue.
      When you exported the share from the NFS server, did you add the insecure option in the /etc/exports file?
      Thanks,
      Venkat

    • @yuven437
      @yuven437 5 years ago +1

      @@justmeandopensource /var/nfsshare 11.0.0.0/8(rw,sync,no_root_squash,no_all_squash,insecure)
      /home 11.0.0.0/8(rw,sync,no_root_squash,no_all_squash,insecure)
      /var/nfsshare *(rw,sync,no_root_squash,no_all_squash,insecure)
      This is my /etc/exports :)
      I really did not expect an answer so fast :0

    • @justmeandopensource
      @justmeandopensource  5 лет назад +1

      @@yuven437 So you have exported /var/nfsshare from the NFS server. From one of the worker nodes, try mounting it manually, maybe with the verbose option.
      From one of your worker nodes:
      $ showmount -e <nfs-server-ip>
      $ mkdir /mnt/tmp
      $ mount <nfs-server-ip>:/var/nfsshare /mnt/tmp
      Also, just noticed that you are trying to mount /mnt/nfs/var/nfsshare, but you have exported /var/nfsshare.
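      Once the manual mount works, double-check the pv definition too; its nfs section must use the exported path, not a local mount point. A minimal sketch (the capacity and access mode here are my assumptions; the server IP is taken from your error message):

      ```yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-nfs
      spec:
        capacity:
          storage: 1Gi            # assumed size
        accessModes:
        - ReadWriteMany
        nfs:
          server: 11.0.0.75       # NFS server address from your error message
          path: /var/nfsshare     # must match /etc/exports, not /mnt/nfs/var/nfsshare
      ```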
      Thanks,
      Venkat

    • @yuven437
      @yuven437 5 лет назад +1

      @@justmeandopensource Thank you very much! I will look into this as soon as I can! You are the best :)
      Do you have a Patreon?

    • @yuven437
      @yuven437 5 лет назад +1

      @@justmeandopensource It seems to me that the problem happens on the k8s side. I can mount and use the nfs storage from the node, but k8s still shows the same error :C

  • @richardwang3438
    @richardwang3438 4 года назад +1

    nice video, I wish I could subscribe twice.
    BTW, when you run 'kubectl version --short', what's the difference between the client version and the server version? I suppose the client version is the version of kubectl on your local machine, and the server version is the version on the cluster?
    But can you help me explain the output below? I ran it on the master node of my k8s cluster. Why is 'Server Version' different from 'VERSION'?
    /root [root@10.41.143.203] [20:59]
    > kubectl version --short
    Client Version: v1.11.10
    Server Version: v1.11.3
    /root [root@10.41.143.203] [20:59]
    > kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    10.41.143.203 Ready master 9d v1.11.10
    10.41.143.207 Ready <none> 9d v1.11.10
    10.41.143.209 Ready <none> 9d v1.11.10

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Richard, many thanks for watching. Client version is the version of the kubectl binary you are using on your machine. Server version is the Kubernetes API server version of the cluster.
      I have also noticed this difference occasionally: the server version reported by kubectl version and the VERSION column in kubectl get nodes can differ slightly. The VERSION column in kubectl get nodes is actually the kubelet version running on each node, so it can differ from the API server version when the components were upgraded separately.

  • @manikandans8808
    @manikandans8808 5 лет назад +1

    What will happen if the pods excites the pvc claim storage? Does the pod stop working?

    • @justmeandopensource
      @justmeandopensource  5 лет назад +2

      I didn't quite get you. What exactly do you mean by excites? Thanks

    • @manikandans8808
      @manikandans8808 5 лет назад +1

      @@justmeandopensource If the pod's storage usage on the pvc increases, what will happen? We requested only 500Mi, but what happens if the usage grows beyond that?

    • @justmeandopensource
      @justmeandopensource  5 лет назад +3

      @@manikandans8808 Good question, which I never thought of. Theoretically, it shouldn't let you use more than what is assigned. But I have never tried that. To find the answer you can just try it. I am gonna try it sometime this afternoon. Thanks

    • @manikandans8808
      @manikandans8808 5 лет назад +1

      @@justmeandopensource Sure Venkat, I'll also try it out, and if you get the answer please comment. It will be very helpful.

    • @justmeandopensource
      @justmeandopensource  5 лет назад +1

      @@manikandans8808 Sure will let you know. I am very interested in trying that. May be later tonight. Cheers.

  • @alexal4
    @alexal4 2 года назад +2

    Nice video as usual. The 172.42.42 addresses will not work as you are probably connecting from 192.168.x.x, which is eth0.

    • @justmeandopensource
      @justmeandopensource  2 года назад +2

      Yeah realized it later. Thanks for watching.

    • @alexal4
      @alexal4 2 года назад +1

      @@justmeandopensource
      Want to try Rancher and need a storage class to make it work. Don't remember anything I did before :) Thanks for your videos, great stuff.

    • @justmeandopensource
      @justmeandopensource  2 года назад +1

      @@alexal4 No worries.

  • @krishnaveni-cf5sy
    @krishnaveni-cf5sy 3 года назад +1

    Hello Mr. Venkat, you are doing great videos, they are really useful for us. Anyway, I have a doubt about this session. You are using your Linux base machine, right? And you install NFS there and export it to all nodes. I am using a Windows base machine, so I installed the NFS server inside my k8s cluster, which runs in VirtualBox. The cluster side works perfectly, but when I go to a worker node and try to mount NFS, it does not mount and I face the issue: incorrect mount type. Can I install NFS in the cluster, or should I not do that?

    • @justmeandopensource
      @justmeandopensource  3 года назад +2

      Hi Krishna veni, thanks for watching.
      Yes I used my Linux host machine as NFS server. You could use kmaster as your NFS server.
      Please let me know how you installed it and how you are trying to test mounting it from the other k8s nodes.

  • @hiteshzope296
    @hiteshzope296 4 года назад +1

    In /etc/exports, the deployment works for me even without the "insecure" option.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Hitesh, thanks for watching. When I tried, it didn't work. Good to know that it worked for you.

    • @hiteshzope296
      @hiteshzope296 4 года назад +1

      @@justmeandopensource Thanks Venkat, your videos are really helpful

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Most welcome.

  • @oraculotube
    @oraculotube 4 года назад

    Thank you for the amazing videos. I don't know if this is possible, but I got stuck: I created an image that successfully populates my /nfs when run with the docker run command, but if I use a Kubernetes yml, it no longer populates the /nfs. Is this possible? I'm actually losing my data inside the container..! Greetings from Birmingham-UK. Cheers

    • @justmeandopensource
      @justmeandopensource  4 года назад

      Hi Everton, thanks for watching. Using PV (persistent volumes) you can do that. If you could explain your problem a bit more with some details, like the yaml, it would be helpful. Cheers.

    • @oraculotube
      @oraculotube 4 года назад

      @@justmeandopensource Hi, thank you for replying.
      So, I'm trying to populate my NFS using the contents that are inside my image. For example, if you run nginx without a volume, you can see an index.html at /usr/share/nginx/html. But if I mount a PV at that same /usr/share/nginx/html, the index.html vanishes; it is no longer in /usr/share/nginx/html. I think it is something about permissions. My NFS share has 777 and I also tried securityContext in the yaml. See the yml below.
      apiVersion: v1
      kind: Pod
      metadata:
        name: containers-privileged
      spec:
        restartPolicy: Never
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          fsGroup: 0
        volumes:
        - name: shared-data
          nfs:
            server: 192.168.149.10
            path: /illumasnfs
        containers:
        - name: nginx-container
          image: nginx
          volumeMounts:
          - name: shared-data
            mountPath: /usr/share/nginx/html
          securityContext:
            privileged: true
      Thanks

    • @justmeandopensource
      @justmeandopensource  4 года назад

      @@oraculotube Okay. So you have the index.html in your image at the /usr/share/nginx/html directory, and you are trying to mount the nfs volume into the container at /usr/share/nginx/html, right? In this case, you are overlaying the nfs volume on top of /usr/share/nginx/html in the container, so you won't be able to see the index.html that was originally in the container.
      You will have to create index.html in the nfs volume.

    • @oraculotube
      @oraculotube 4 года назад

      @@justmeandopensource Thanks Venkat. I have this working using docker-compose, but not in Kubernetes. Actually, my image has lots of files; I containerized an application for the company I'm working for, and now I'm trying to use the image in Kubernetes and populate my nfs, but I can't see any way to do it.

    • @justmeandopensource
      @justmeandopensource  4 года назад

      @@oraculotube How about mounting the nfs volume in a different place and copying the files into it?
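      For example (a rough sketch, reusing the server and path from your yaml; the /seed staging path and the copy command are my assumptions), an initContainer can mount the nfs volume somewhere else and copy the image's files into it before nginx starts:

      ```yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-seeded
      spec:
        volumes:
        - name: shared-data
          nfs:
            server: 192.168.149.10
            path: /illumasnfs
        initContainers:
        - name: seed-content
          image: nginx                  # same image that already contains the files
          command: ["sh", "-c", "cp -a /usr/share/nginx/html/. /seed/"]
          volumeMounts:
          - name: shared-data
            mountPath: /seed            # nfs mounted elsewhere, so the image files stay visible
        containers:
        - name: nginx-container
          image: nginx
          volumeMounts:
          - name: shared-data
            mountPath: /usr/share/nginx/html
      ```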

  • @naganaga3731
    @naganaga3731 4 года назад +2

    Hi, can you make a video on glusterfs on kubernetes?

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      Hi Naga, thanks for watching. That's already on my list but I didn't get a chance to do it. I will see if I can do it. Cheers.

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@naganaga3731 I can play with it this weekend.

    • @naganaga3731
      @naganaga3731 4 года назад +1

      Thank you so much

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@naganaga3731 You are welcome.

    • @naganaga3731
      @naganaga3731 4 года назад +1

      Can you make a video on glusterfs?

  • @SonNguyen-pw8lm
    @SonNguyen-pw8lm 4 года назад +2

    Hello Admin, can you do a tutorial about glusterfs? Thank you. The tutorial is very useful.

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      ruclips.net/p/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq

    • @SonNguyen-pw8lm
      @SonNguyen-pw8lm 4 года назад +2

      @@justmeandopensource Hi, thanks for responding. Do you have a tutorial about kubernetes and glusterfs?

    • @justmeandopensource
      @justmeandopensource  4 года назад +1

      @@SonNguyen-pw8lm Not yet. Planning to do soon.

    • @SonNguyen-pw8lm
      @SonNguyen-pw8lm 4 года назад +1

      @@justmeandopensource thank you so much :D

    • @justmeandopensource
      @justmeandopensource  4 года назад +2

      @@SonNguyen-pw8lm You are welcome.