[ Kube 97 ] Live switching of Kubernetes container runtime | From Docker to Containerd

  • Published: 7 Sep 2024
  • In this video, I will show you how to change the container runtime on your existing Kubernetes cluster with workloads running.
    [ Kube 93 ] Kubernetes drops docker? What you need to know
    • [ Kube 93 ] Kubernetes...
    Learn Kubernetes Playlist:
    • Learn Kubernetes
    Github:
    github.com/jus...
    Hope you enjoyed this video. Please share it with your friends and don't forget to subscribe to my channel. For any questions/issues/feedback, please leave me a comment and I will be happy to help.
    Thanks for watching.
    If you wish to support me:
    www.paypal.com...

Comments • 121

  • @justmeandopensource  3 years ago +19

    Hello Viewers,
    Apologies that I forgot to do a final step of uncordoning kmaster.
    $ kubectl uncordon kmaster

    • @krishnarajan319  2 years ago

      How can you do that, brother? The master was cordoned, so how was that possible?

    • @normanyang2425  1 year ago

      @@krishnarajan319 Thank you so much. It looks like this works well for a K8s cluster not managed by Rancher. In my case (Rancher 2.6.9 with four K8s 1.23.6 nodes on Docker 19.3.10: 1 master, 3 workers), when I followed this method to switch worker nodes 1, 2 and 3 to containerd 1.6.4 (master still on the Docker runtime), all the pods kept being evicted and recreated, the total number of pods kept increasing, and that eventually triggered a disk-space issue on the nodes.

  • @akashroy1618  2 years ago +1

    Your way of explaining things is way better than others'.

  • @selvakumars6487  1 year ago +1

    You are a Samurai with a keyboard !!!

    • @justmeandopensource  1 year ago

      Everything comes with practice 😇. Thanks for watching.

    • @selvakumars6487  1 year ago +1

      Thank you for all the content, Venkat. If I want to check something, I come to your channel first: no nonsense, to the point, clear explanations! Thanks again!

    • @justmeandopensource  1 year ago

      @@selvakumars6487 happy to hear that. Thank you.

  •  3 years ago +1

    I followed your steps and was able to migrate from Docker to containerd. I had to change one line: kubectl drain NODE --ignore-daemonsets --delete-local-data. Now it's time to test it.

    •  3 years ago +1

      Finally done, after a lot of problems. I needed to set up a private registry, among other things.
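For readers following along, the per-node switch shown in the video boils down to a handful of commands. This is a rough sketch, not a copy-paste script: the node name, package manager, and file paths are assumptions for a kubeadm cluster on Ubuntu, and flag names vary across kubectl versions.

```shell
# Sketch: switching one worker node from Docker to containerd.
# "kworker1" is a placeholder node name.

# 1. Evict workloads (newer kubectl uses --delete-emptydir-data;
#    older releases used --delete-local-data).
kubectl drain kworker1 --ignore-daemonsets --delete-emptydir-data

# On the node itself:
# 2. Stop kubelet and remove Docker.
sudo systemctl stop kubelet
sudo apt-get remove -y docker-ce docker-ce-cli

# 3. Enable the CRI plugin in containerd (it ships disabled when
#    installed alongside Docker) and restart containerd.
sudo sed -i 's/^disabled_plugins.*/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd

# 4. Point kubelet at the containerd socket and start it again.
echo 'KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock' | sudo tee /etc/default/kubelet
sudo systemctl start kubelet

# Back on the admin machine:
# 5. Return the node to service.
kubectl uncordon kworker1
```

After step 5, kubectl get nodes -o wide should show containerd in the CONTAINER-RUNTIME column for that node.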

  • @awspractice931  3 years ago +1

    Thanks for the video. It helped me to handle my cluster with containerd.

  • @ourjamie  3 years ago +1

    Thanks, the kubelet config is just what I needed to know.

  • @a143r  3 years ago +2

    Hi Venkat, hope you are doing well. It's been days since you posted any videos; all of your viewers are waiting for you. Please come back. Thank you.

    • @justmeandopensource  3 years ago +2

      Hi Rutvick, thanks for checking on me. I promise I will resume posting videos from next week. I had to pause for a while because I broke my laptop, and it took some time to sort out the new one. All good now. You will see my videos from next week. Cheers.

  • @deepdeep4629  2 years ago +1

    You're the Bruce Lee of typing on the keyboard! lol

  • @CharlesVosloo  3 years ago +1

    Great stuff! Just follow the steps. Thank you.

  • @ifergus3790  2 years ago +1

    Excellent. It worked perfectly! Thanks a lot.

  • @janl.6568  3 years ago +1

    Worked seamlessly!

  • @TechGamerRomance  2 years ago +1

    Thanks, it is very useful!

  • @darylkupper9339  3 years ago +1

    Awesome Video, I greatly appreciate it. Thank you so much!

  • @pjj7466  1 year ago

    Mind-blowing

  • @Dan-zw4jz  3 years ago +1

    Great work. Thanks for your videos!

  • @nah0221  3 years ago +1

    Super!

  • @MrPeterJohan  3 years ago +3

    Great video and very helpful, thank you. Pretty much the only video I've found on how to do the docker --> containerd switch. Did you reference any particular documentation for this?

    • @justmeandopensource  3 years ago +5

      Hi Peter, thanks for watching. There was no solid documentation on this topic, so I researched it myself. Cheers.

  • @pratsgl123  3 years ago +1

    Hi Venkat, thanks for this great video. I was able to change the CONTAINER-RUNTIME from docker to containerd successfully. Can you make a video on a bootstrap file that uses containerd as the CONTAINER-RUNTIME by default, when the cluster comes up with v1.20?
    Thx,
    Pradeep

  • @dhanushkasamarasinghe7785  3 years ago +1

    This was very helpful. Thanks! :)

  • @getrobbed7818  3 years ago

    Do you have any videos on adding a Windows worker node to a cluster with containerd as the runtime? Looking forward to that. Thanks!

  • @petrivanov1598  3 years ago

    Great as always. Thank you for the videos.

  • @johnclarkson6120  3 years ago

    Hello, long time no see.
    Nice course.
    If you get free time, please do a dual-stack course.
    Thanks a lot.

  • @faridakbarov4532  3 years ago +2

    Hi Venkat, where are you, bro? Is everything OK? Two months with no videos from you on YouTube (((

    • @justmeandopensource  3 years ago +5

      Hi Farid, thanks for checking. I broke my Dell XPS, which served me well the last couple of years. I have been trying to get a new one but have already sent two of them back due to hardware issues, hence the delay. Hopefully I will get back on track soon.

  • @PRASHANTHKUMAR-vu2xx  3 years ago

    Hi Venkat,
    Can you make a full video on installing openunison-orchestra on Kubernetes
    and integrating it with a K8s cluster for IAM?

  • @rahulakkineni7640  3 years ago

    Great video. I am trying to deploy JupyterHub on Kubernetes, but I am unable to because of some version and other issues. Can you please make a video on deploying JupyterHub in Kubernetes?

  • @xitrumgmail  2 years ago

    Thanks for the video, Venkat. I followed the steps and everything seemed to work, until I removed Docker from the master node; nothing works after that. When I try to run 'kubectl get nodes' I get 'Unable to connect to the server: Gateway Timeout'. Somehow, kubectl still depends on Docker.

  • @kamalraj2213  2 years ago

    How can I change the containerd path from one location to another? Where is that config file, and what steps should be followed to change the path?
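For anyone with the same question: containerd's data directory is controlled by the top-level root key in its config file. A sketch, assuming the default /etc/containerd/config.toml and an example target path of /data/containerd:

```shell
# Move containerd's data root (paths are examples; drain the node first
# on a live cluster).
sudo systemctl stop containerd
sudo mkdir -p /data/containerd
sudo rsync -a /var/lib/containerd/ /data/containerd/   # preserve existing images/snapshots
# In /etc/containerd/config.toml, change the top-level key:
#   root = "/var/lib/containerd"   ->   root = "/data/containerd"
sudo sed -i 's|^root = .*|root = "/data/containerd"|' /etc/containerd/config.toml
sudo systemctl start containerd
```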

  • @farukabdullah2607  3 years ago +1

    Successfully done. From which registry will containerd pull images?

    • @justmeandopensource  3 years ago +1

      It will pull from the registry where your image is stored. It's that simple.

  • @premierde  1 year ago

    K8s 1.24 via Ansible (Kubespray) comes with containerd.
    The snapshots subfolder got deleted, so the control-plane kube-scheduler is not coming up. Here is the message; any suggestion on how to recover?
    kube-scheduler: failed to create containerd container: failed to create prepare snapshot dir : stat /data/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots: no such file or directory Warning BackOff kubelet Back-off restarting failed container.

  • @premierde  1 year ago +1

    K8s 1.24.6 spun up via Kubespray, which also installs containerd.
    This containerd does not respect the https_proxy set in the shell environment. ctr is also not found / permission denied, even though I am executing ctr as root and status shows containerd running. Due to the proxy, pods are now in ImagePullBackOff. Any feedback on this? On the same node podman can access the external registry, but containerd fails to reach outside. 🥴

    • @justmeandopensource  1 year ago

      Hmm, interesting. I haven't tried using a proxy in an air-gapped environment. I need to read the containerd documentation.

    • @premierde  1 year ago +1

      @@justmeandopensource
      Okay, one needs to set http_proxy and no_proxy in the containerd proxy.cfg file.
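On systemd-based hosts, the common way to hand a proxy to containerd is an environment drop-in for its service unit, since the daemon does not inherit your login shell's environment. A sketch (proxy addresses and the drop-in file name are placeholders):

```shell
# Give the containerd daemon proxy settings via a systemd drop-in.
sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd
```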

  • @sergeidjprime8349  3 years ago +2

    Hello. Thanks for the video!!
    Does it work only with Kubernetes 1.20.0? I tried it with 1.17.3 and 1.19.3 provisioned by Kubespray and it didn't work for me.

    • @justmeandopensource  3 years ago +4

      Hi Sergei, thanks for watching. Kubespray is an automation tool; if you are using Kubespray to manage your k8s cluster, then you shouldn't follow this process. The process there is different: you change the container runtime in the config file and rerun the playbook. However, when I tried that, it didn't work.

    • @sergeidjprime8349  3 years ago

      Hi Venkat. Been a long time :-)
      Do you have any plans to make videos on GitLab CI? It would be interesting to take a look at how to create a Docker image from scratch, push it to the GitLab Docker registry, and deploy it to Kubernetes.
      Well, if you have time to make a series of videos on GitLab CI/CD it would be just great. Thank you. ;-)

  • @vietdevops  1 year ago +1

    May I know what terminal you are using for the SSH connections in this video?

    • @justmeandopensource  1 year ago

      Hi, thanks for watching. I used the Alacritty terminal in this video, with the i3 tiling window manager on Arch Linux.

  • @sudheer157  3 years ago

    Excellent!!
    Does this approach work for an RKE/Rancher-provisioned cluster?
    Have you tried the new RKE2?

  • @nitinkansal  3 years ago +1

    Great video. I am curious about the scenario where I have a Kubernetes cluster running with more than 50 nodes. Do I still need to log in to each node and make the changes? Apart from this, how do I enforce that new worker nodes come up with containerd, not Docker?

    • @justmeandopensource  3 years ago +2

      Thanks for watching. I was just demonstrating that there is a way to change the container runtime on a running cluster; this doesn't mean you have to follow this approach. You would normally provision new worker nodes with containerd installed, migrate the workloads to those new nodes, and get rid of the old nodes.

  • @vishavjeetrohilla6053  2 years ago

    Hey bro, I want the Docker runtime instead of the containerd runtime. How does that work? Please help me; if I use the containerd runtime it throws ImagePullBackOff errors on the nodes.

  • @raoufmnif6569  3 years ago +1

    Hi Venkat, good as always. I know that we are switching to containerd, so what is the best way to build a Docker image with CRI-O or containerd?

  • @Fickysyahreza  3 years ago +2

    with CRI-O please

  • @bobreselman5731  2 years ago +1

    I have a question specific to K8s and container runtimes. I want to experiment with running a variety of OCI-compliant runtimes across worker nodes under K8s; for example, Worker1 runs containerd and Worker2 runs gVisor. Tell me please, is my assumption that I can do this accurate? PS: Thanks for this great video.

    • @justmeandopensource  2 years ago +1

      Hi Bob, I know I have already mentioned this in the Slack workspace, but for the benefit of YouTube viewers I am replying here too.
      I am not entirely sure if that's literally possible, like running containerd on one node and something else on another node; I haven't tried that. But I think you could configure containerd differently on your nodes to use different downstream runtimes that are OCI compliant. The default is runc, and I guess you could configure containerd to use something else too.
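As a follow-up to the reply above: the usual pattern is to keep containerd on every node but register extra runtime handlers, then select one per pod with a Kubernetes RuntimeClass. A sketch assuming gVisor's runsc shim is already installed on the node (handler and object names are illustrative):

```shell
# 1. On the node, add a second handler in /etc/containerd/config.toml:
#
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
#     runtime_type = "io.containerd.runsc.v1"
#
sudo systemctl restart containerd

# 2. Map a RuntimeClass to that handler and reference it from a pod.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
```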

  • @rupeshchowdary5969  3 years ago

    Hello sir, if possible could you please make a video on Istio? Thank you.

  • @jinkahari  1 year ago

    We get 'Error: failed to start containerd task "win-cal": hcs::System::CreateProcess' when deploying a Windows pod in a Windows node pool.

  • @faridakbarov4532  3 years ago +1

    Hi Venkat, why don't we see fresh videos from you? Is all good?

    • @justmeandopensource  3 years ago +5

      Hi Farid, many thanks for checking on me. All good. I was just taking some time off YouTube for a few weeks.
      I will be back with Kubernetes videos next week. Cheers.

  • @shaikjakeer8556  3 years ago

    Hi Venkat, could you please make a video on Kube Monkey and chaos engineering?

  • @thaleseduardo4670  3 years ago

    Hello. I searched the net but did not find anything. Do you know how to change the container runtime on AWS EKS? Thank you.

  • @ccnankai5591  3 years ago +1

    After I uninstalled Docker, kubectl can't connect anymore. The following error appears: The connection to the server 192.168.1.1:6443 was refused - did you specify the right host or port?

    • @justmeandopensource  3 years ago +1

      Hi, did you check if the API server is running on the master node?

    • @ccnankai5591  3 years ago

      @@justmeandopensource I uninstalled Docker directly on the master node, and kubectl get commands can no longer be used; nothing can be seen. My OS version is Ubuntu 16.04, the k8s version is 1.20.4, and on the master node the containerd version is 1.5.2. Is my version wrong?

    • @xitrumgmail  2 years ago

      @@ccnankai5591 I am having the same issue. Mine are Ubuntu 20.04, k8s 1.23.3, containerd 1.5.5.

  • @madrag  3 years ago +1

    I have k8s on a Raspberry Pi with containerd as the runtime, and in config.toml, disabled_plugins = ["cri"] is not commented out; Docker is also installed... do you know how that is working :D ?

    • @justmeandopensource  3 years ago +1

      Hi, thanks for watching. That's kind of expected. This link may give you some information:
      github.com/kinvolk/Flatcar/issues/283

    • @madrag  3 years ago +1

      @@justmeandopensource Thank you, I didn't know that's the default setup.

    • @justmeandopensource  3 years ago

      @@madrag No worries, you are welcome.

  • @ratnakarreddyg3851  3 years ago +1

    Hello Venkat,
    My application was working fine until today, when I upgraded the cluster from 1.18.14 to 1.19.9. After the upgrade, K8s is not able to pull and launch images from the private registry. I am getting the below error in the kubectl describe output:
    'Failed to pull image "": rpc error: code = Unknown desc = failed to pull and unpack image "": unexpected end of JSON input'
    Public images like MongoDB, the ELK stack, and RabbitMQ are working fine without any issues; I get the error only for my own Docker images. Do we need to make any changes to a Docker image before using it with containerd? Could you please help me?

    • @justmeandopensource  3 years ago +1

      Hmm, that is strange. I haven't encountered that before. Can you spin up a new k8s 1.19.9 cluster and test this to confirm?

    • @premierde  1 year ago

      Similar situation; try setting http_proxy & no_proxy on each node?

  • @magrag1987  3 years ago +1

    If we use a cloud Kubernetes service (Azure, AWS, etc.), do we still have to do this?

    • @justmeandopensource  3 years ago +1

      Hi Raghu, thanks for watching. If you are using one of the managed Kubernetes services in the cloud, your control plane will be handled for you automatically, but you will have to do this on worker node pools/groups. The steps in the cloud will obviously differ from the ones I explained in this video. Cheers.

  • @SivaKumar-vo9cj  3 years ago

    Can you please show the switching process on CentOS 7?

  • @karlovasiradio  3 years ago +1

    I have an LXD infrastructure and I want to install a Kubernetes cluster on it.
    I followed your latest video about it and ran the ./kubelx command.
    It seems that the CNI and flannel networks were not created, and when I run a kubectl command it shows me the following message:
    The connection to the server 10.211.7.165:6443 was refused - did you specify the right host or port?
    I defined the kubeconfig parameter for the admin config but the behavior stays the same.
    Any other ideas about what might be the problem?

    • @justmeandopensource  3 years ago +1

      Did it work? And what's your problem? I think I responded to this query on the other video where you originally asked.

    • @karlovasiradio  3 years ago

      @@justmeandopensource I answered there what happened! Unfortunately it did not work. I don't know why the CNI network is not created, and I cannot run any kubectl command because the connection is lost.

  • @weitanglau162  3 years ago +1

    Great video!
    You said that when master is down, there will be a short downtime. Is there any way of mitigating that other than having 2 masters?

    • @justmeandopensource  3 years ago +2

      Hi, thanks for watching. There is absolutely no way with a single master. It's not really downtime, as the workloads on the worker nodes continue to run; only kubectl commands won't work.

    • @weitanglau162  3 years ago +1

      @@justmeandopensource Thanks for replying! Is scaling to 2 master nodes as easy as scaling up worker nodes?
      PS: Sorry, I have yet to watch your video on multi-master setups.

    • @justmeandopensource  3 years ago +2

      @@weitanglau162 You have to do the groundwork when you initialize the cluster for the first time. Once you have set up a multi-master cluster, it will be easy to scale masters up or down, but you won't be able to convert a single-master cluster to multi-master easily.

    • @darylkupper9339  3 years ago +2

      Also, it is recommended to use an odd number of masters so that the cluster can determine quorum (I think that is the word). Basically, the etcd database needs an odd number just in case one gets out of sync. By "needs," I mean it is best practice/recommended.

    • @darylkupper9339  3 years ago +1

      Also, I forgot to mention: you can actually run a master and worker as the same node, that is, run workloads on a master.
      Microk8s is one way of doing that.
      Alternatively, you can manually allow your Kubernetes master to take workloads by adjusting its taints with kubectl. This lets you set up either a single-node cluster or a 3-node master cluster running the workloads on the masters. Just make sure you allow enough resources (RAM, CPU, and disk space) for running a node as both master and worker.
      I am actually using both methods right now to learn Kubernetes with limited resources, on virtual machines (sort of bare metal), as I don't want the costs of the cloud.
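The taint adjustment mentioned above looks roughly like this; the taint key changed between Kubernetes versions, so check your node's taints first (node name is a placeholder):

```shell
# Inspect the control-plane node's taints.
kubectl describe node kmaster | grep -A2 Taints

# Remove the taint so the master accepts regular workloads
# (the trailing "-" deletes the taint).
kubectl taint nodes kmaster node-role.kubernetes.io/control-plane:NoSchedule-
# On older clusters the key was:
# kubectl taint nodes kmaster node-role.kubernetes.io/master:NoSchedule-
```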

  • @garybennett5814  3 years ago +1

    What zsh theme are you using?

    • @justmeandopensource  3 years ago +3

      Hi Gary, thanks for watching.
      I use Powerlevel10k for the theme and a couple of plugins (zsh-autosuggestions & zsh-syntax-highlighting).

  • @aayush3377  3 years ago +1

    What is the difference between cordon and drain?

    • @justmeandopensource  3 years ago +2

      Hi Ayush, thanks for watching. Cordon just marks the node unschedulable so that no more pods get scheduled on it; the pods already running on that node continue to run. Drain additionally evicts the pods from that node. Hope it makes sense.

    • @LampJustin  3 years ago

      @@justmeandopensource Since draining a node also disables it for scheduling, you don't have to cordon it as well ;)
      You only need cordon on its own when you just want to stop Kubernetes from scheduling new pods on a node, for example :)

    • @justmeandopensource  3 years ago +1

      @@LampJustin You are right.
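In command form, the difference discussed above (node name is a placeholder):

```shell
# Cordon: mark the node unschedulable; pods already on it keep running.
kubectl cordon kworker1

# Drain: cordon the node AND evict its pods (daemonset pods excluded).
kubectl drain kworker1 --ignore-daemonsets --delete-emptydir-data

# Undo either operation:
kubectl uncordon kworker1
```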

  • @bodakuntlahemanthkumar4634  2 years ago

    Please, can anyone let me know how to switch from the CRI-O runtime to the Docker runtime using commands?

    • @justmeandopensource  2 years ago

      Hi Bodakuntla, thanks for watching. It's not good practice to live-switch the underlying container runtime in your Kubernetes cluster; I was just illustrating that it is possible. It is better and cleaner to bring up a new cluster with the desired runtime and migrate your workloads to it.

  • @marksuvi947  3 years ago

    Can you make videos on cobbler.github.io and Kickstart installations of Linux?

  • @mejesh1  3 years ago

    Hi,
    By following your video, I successfully migrated from Docker to containerd in my live cluster.
    Now I have tried to upgrade the cluster from 1.19.4 to 1.20.2,
    but I am facing the below problem.
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    W0211 17:56:45.651061 2118 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
    [preflight] Running pre-flight checks.
    [upgrade] Running cluster health checks
    [upgrade/version] You have chosen to change the cluster version to "v1.20.2"
    [upgrade/versions] Cluster version: v1.19.5
    [upgrade/versions] kubeadm version: v1.20.2
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
    [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
    [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
    [preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.20.2: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.20.2: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.20.2: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.20.2: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    It seems kubeadm is still referring to Docker during the upgrade.
    Please give me your valuable suggestion.

    • @justmeandopensource  3 years ago +1

      Hi Ganesh, thanks for watching.
      Can you try this instead by explicitly specifying the runtime?
      $ kubeadm config images pull --cri-socket /run/containerd/containerd.sock

    • @justmeandopensource  3 years ago +1

      And then try doing kubeadm upgrade plan and kubeadm upgrade apply, etc.
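Putting the two replies above together, the upgrade attempt might look like this (version and socket path taken from the thread; treat it as a sketch):

```shell
# Pre-pull images via containerd instead of Docker.
sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock

# Then plan and apply the upgrade.
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.20.2

# If kubeadm still probes Docker, check which socket it has recorded
# for the node in its annotation:
kubectl get node kmaster -o jsonpath='{.metadata.annotations.kubeadm\.alpha\.kubernetes\.io/cri-socket}'
```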

  • @darshandeshmukh7223  11 months ago

    @justmeandopensource - If we move from Kubernetes 1.23 to 1.28, will this continue to work? Or, if we want to continue using Docker Engine, do we need to migrate to a CRI-compatible adapter like cri-dockerd?

  • @abhishek9044855265  1 year ago

    Hi, is there any way to detect the container runtime endpoint after creating the cluster?
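One way to check this with plain kubectl (not covered in the video):

```shell
# The CONTAINER-RUNTIME column shows each node's runtime and version.
kubectl get nodes -o wide

# Or read it straight from node status:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```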