Thank you for this straight-forward, easy to understand tutorial! I have tried setting up many Kubernetes clusters on my own following online documentation, and it has never worked correctly for me. Now that I have watched your tutorial, I realize there were so many missteps I was making.
I love how you even go over the painful steps that are obvious to intermediate users just looking for a leg up, but not to beginners.
Wonderful teaching methods. I absolutely love to see it.
At around 46:00, when you are adding the first node to the cluster, I think the reason it didn't work is that you used the join command for the control plane instead of the worker node. It didn't appear to be because too much time had passed. When you regenerated the join command, it provided the correct one for a worker node.
You are absolutely right. It also caught my eye when I viewed the video a second time and saw that there are two commands for adding nodes to the cluster, one for control-plane nodes and one for worker nodes. The video initially uses the control-plane join command instead of the worker node command.
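For anyone else who hits this, a rough sketch of the difference (the endpoint, token, hash, and key below are placeholders, not the real values from the video):

# Worker node join, which is what was needed here:
sudo kubeadm join 192.168.1.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Control-plane join; the extra flags only work if the cluster certificates were uploaded/copied over first:
sudo kubeadm join 192.168.1.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# If the token really had expired, a fresh worker join command can be generated on the controller:
kubeadm token create --print-join-command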
THANK YOU! Honestly the best guide on YT. As someone who's versed in Docker, I found Kubernetes to require a lot of stuff based on what other YouTubers were doing, and I would constantly be overwhelmed! This was very simple; all requirements were stated at the beginning of the video and only those things were used for the video. Simple, to the point and just awesome!
Sending much love and many thanks
Nice video for learning Kubernetes, I am thankful I can watch this. I tried to learn it once by myself a long time ago, but the tutorials back then were so confusing to follow. I am glad to have this and get my foot in the door with Kubernetes. I finally know what it is and why it's so popular. I have finished the whole video and can confirm the setup in the video is still valid today.
I had trouble with the command to install kubelet, kubeadm, and kubectl, maybe just my mistake, but it was not hard to resolve. The kubelet, kubeadm, and kubectl packages weren't found in the default Ubuntu repo. I managed to follow the official Kubernetes guide, used the commands in the section "Install using native package management", and successfully installed the three packages.
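For reference, this is roughly what that section of the official docs walks you through (the repo URL and the v1.30 in the path are whatever minor version is current when you install, so treat this as a sketch rather than the exact commands):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings

# Download the Kubernetes signing key into a keyring:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes apt repository (the step several comments note is missing from the blog post):
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the three components:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl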
Took me a few days of frustration, but I finally got a working 3-node Kubernetes cluster using Hyper-V instead of Proxmox, lol... thank you Jay!
@31:16 - your echo "deb ..." command is not in the build doc, or am I missing something?
This series is golden. I wish I could contribute more, my budget is too tight this month, but I felt bad watching the series without giving something more than a like back. Fantastic job
That was plenty and appreciated. Thank you so much!
The timing is impeccable! I've just begun my CKA journey and was about to roll a lab out on my Proxmox. Can't wait to see this
Last week I tore down my bare metal Kubernetes cluster and installed Proxmox on the amd64 nodes. I set up two VMs on each, one for the control plane and one for a worker. I also wiped the arm64 nodes.
I used talos for each VM and amd64 node. It’s quite handy.
I can always rely on your videos. I never have a tough time following your videos. Super great job always, thank you for all the time and effort you put into teaching.
I am new to sysadmin work and homelabbing and I have learned so much from you. Thank you again
Awesome timing! We were just discussing k8s at work last week, and the need to start prep work to set up a development lab at the office to support new projects coming in Jan. Great walkthrough. Noticed a few typos on the blog (the echo "deb ..." command is missing, the pod.yml has some extraneous "Chapter 18 25" text, and in service-nodeport.yaml the last line is missing its indent), but they're easy enough to sort through. Can't wait for more k8s magic!
Thanks for all you do!
Jay, thank you so much for this timely tutorial! I have been trying to find a tutorial on how to set up Kubernetes on Proxmox for the past couple of weeks. And now you released this big guy, which is just as great as the rest of your Proxmox-related and other videos. Thanks!
Me too. Jay is a beast
Helped me a lot, minor suggestions:
Since the worker node becomes a template, 901 would be a more logical ID.
Setting the time zone for the template would have taken care of another tedious to-do.
In the blog post, make sure that copy-paste does not include additional line breaks.
Eventually you may want to fix the sudo typo ("suod chown" instead of "sudo chown").
There seems to be a long command missing in the GPG keyring setup process...
Always appreciate the clear and detailed explanations in your videos and the nicely judged pace. Thank you for all of them.
Hi Jay,
Great walkthrough, thank you. One thing to note is that there's currently an issue when running kubeadm init after installing v1.26 of kubeadm, kubectl and kubelet. For some reason when using that version kubelet fails to start. A workaround is to specify v1.25.5-00 when installing those components via apt.
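In case it helps anyone following along, the pinned install would look roughly like this (version string taken from the comment above; the exact package revision available to you may differ):

sudo apt-get install -y kubeadm=1.25.5-00 kubelet=1.25.5-00 kubectl=1.25.5-00
# Check which pinned versions apt actually offers if that exact revision is gone:
apt-cache madison kubeadm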
Just updated and I've found that issue too
At first I was against the template cloud instance, but after following your tutorial, I can't believe I wasn't doing this earlier on
Great stuff, nice run-through of getting a cluster up and running. The reason your first attempt didn't work is that you copied the "--control-plane" option, which means that certificates and keys from the first node have to be copied over before it can also become a controller.
I realized that afterwards, and forgot I left that in. Thank you so much for noticing, though; comments like those are very helpful 😃
I've literally been thinking about doing this for the past month. Thank you very much :)
Glad I could help!
The IP addresses for k8s-ctrlr and k8s-node will only become visible after the qemu-guest-agent has been started and enabled.
Cannot wait for the next instalment in the series on Kubernetes
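In other words, inside each VM (assuming the QEMU guest agent option is also enabled for the VM in Proxmox):

sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent   # start it now and on every boot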
I'd love to see a follow up to add a load balancer and additional control plane nodes
Hey Jay, thanks for the awesome content. However, I noticed there is a command missing in your blog post (after the curl gpg step) for adding the repository.
Great work! Thank you! It would be nice to see your video on creating HA k8s-cluster with 3 cp-nodes!
This is exactly the kind of detailed walkthrough I've been hoping for, thank you!
Step 2 would be to get MetalLB and a PVC provisioner ;) Most would recommend Longhorn but actually I'd like to suggest something else. The Piraeus Operator uses DRBD9 and is much faster, especially on 1Gig connections as reads always happen locally if possible. Longhorn just tanks when having to write a lot and will eventually fall behind in replication. Also don't use the NFS Ganesha server provisioner unless you absolutely have to. It's a chore and highly unmaintained. If you do, be sure to build the image to run it from my MR. That's at least a little more up to date
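For anyone wondering what the MetalLB half of that "step 2" might look like, a minimal layer-2 sketch (the address range is a made-up example for a home LAN, and the manifest version is whatever is current when you install):

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml

# Hand MetalLB a pool of spare LAN addresses and advertise them via L2:
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
EOF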
This is EXACTLY the series I have been waiting for. Thanks so much for the content.
Thanks for the nice tutorial. In the written version, you are missing the step to add the k8s repo; it jumps from adding the GPG key to installing the kubeadm packages.
Wow, this is an amazing video. Followed it step by step and am very, very contented. Thank you Jay
This data is gold!
You are the best Jay!
Good stuff. K8s is the future and I'm really excited to be learning it. Thanks Jay
Awesome, peaceful and positive energy! I enjoyed your video; it was such a quick way to get an overview of Kubernetes.
Hello sir, thanks so much for the video, it saved my life. I ran it on Ubuntu 24.04 and it works like a charm. But here is my question: I want to add high availability to my cluster with HAProxy.
Do I need to use the same command for my other control planes, or do I need another command?
Thanks a lot,
have a nice day
Thanks a lot for this, Jay. Very helpful. On a side note, I was sitting here getting completely triggered by your pronunciation of sudo and lib. After thinking about it, your way actually seems more correct. My entire Linux career has been a lie 😂
DHCP with DynDNS updates makes the early parts so much easier. As soon as I give my template clone a hostname and boot it, it gets a DNS entry. I still gave them static assignments too.
Does anyone have experience utilizing Rancher with Proxmox? I have been having difficulties. Would love the centralized management of Kubernetes that Rancher provides.
Just so you know the blog post is missing the step to add the k8s repository. You have the step to get the key but do not actually add the repo.
I absolutely love your style of presentation. Always enjoyable to watch and a huge inspiration for me! 💪
Thanks a lot for this valuable tutorial. However, part of it needs to be fixed or updated, as I have faced various errors along the way. Still, this helped me set up my first Kubernetes cluster!!
The issue wasn't that the token expired, but that you tried to add the other nodes as control planes. This only works if you copied the right CA certificates first. You should've used the second join command, which didn't include the --control-plane argument.
Hi Jay, this Kubernetes video is straight-forward & to the point. I was wondering if you have pointers on how to perform your steps but using Oracle VirtualBox. If I have 3 Ubuntu 22.04 VMs on VirtualBox with similar specs to your VMs and use /etc/hosts to network them, is this a good starting point?
The step to modify netplan requires nano, which is not part of ubuntu-22.04-minimal-cloudimg-amd64.img (used in the recommended/prior video) 'sudo apt install -y nano' works, just noting it here for folks who might get confused (and for any notes on a future version of this video).
I have a question - what's the advantage of going the route you're describing, as opposed to just installing microk8s from the distribution and calling it a day?
You get vanilla k8s vs. an opinionated/OEM-based version of Kubernetes; both have their pros and cons.
Thanks mate. This was a really good guide. It helped me build my setup. I also followed your other tutorial about creating templates. Thanks again.
This was a master class. Thank you Jay.
Jay, why not k3s in favour of Kubernetes, especially for a home/lab environment? People say k3s requires far fewer resources.
same question here 😉
... loving this... Curious, have you looked at doing this using Ansible? Thinking I might want to try that... take the base Ubuntu image and do all prep/deployment using an Ansible playbook.
As usual, your videos are fantastic, owing to the fact that you are a great teacher. If I have to be picky, in this video the audio is a bit out of sync. ;) Also, in the blog you forgot to add a command to update the k8s packages. Would you consider doing a video on Ansible AWX at levels 200-300?
Idk if it is desynchronized audio and video streams; he has a cadence and delivery that can seem delayed or belated.
Extremely well explained
Have you added Ingress to this setup so that you can access the cluster from outside your network? Setting this up in AWS is simple; you just add an LB. What about a home network? Have you used Nginx deployed in Proxmox as well, or maybe MetalLB?
Great video, got one question for you: you used VMs for the cluster nodes; is it possible to use Proxmox containers for this purpose?
Thanks for the demo and info, have a great day
Hello Jay, what about the database and a second ctrl-node? Is there a possibility of a video (for the integrated db shared across different nodes, not with an extra host running MySQL)?
Thanks for the video. Love your content. However, the blog post does not match the commands in the video at the point where you need to add the repos for apt. It does not work. It gives me errors that the key does not match and that the repository has no Release file. At this point, I am not able to install the kube utils with apt install. Tried using snap but that didn't work either. Could you update your blog and video to get us back up to current workings?
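If MetalLB is installed (see the pool example earlier in the comments), exposing something on the home network is just a Service of type LoadBalancer instead of the NodePort used in the video; a sketch, assuming your pods carry an app: nginx label:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer   # MetalLB assigns an address from its pool
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF

# The assigned LAN IP shows up under EXTERNAL-IP:
kubectl get svc nginx-lb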
So many thanks for these videos!!! It works so well.
Jay, can't thank you enough for this video. Brilliant
Great video ❤! I'd like to know if anyone has an opinion on MaaS + K8s vs PVE + K8s. Which one is better for production, based on your experience?
Thank you very much for the training you offer.
Great demo Jay. Can you cover ingress in a future video?
Awesome. The only comment I have is that you forgot to add the command to link the Kubernetes repositories in the blog post writeup. Other than that, continue the great work!
Thanks for a great video!!!
I did, however, run into a problem where I could not create my cluster. As it turns out, the issue lies with an incompatibility between Kubernetes version 1.26 and containerd version 1.5.9. The error looks like: "command failed err=failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint...."
The fix is quite easy: downgrade to Kubernetes 1.25 (sudo apt remove --purge kubelet && sudo apt install -y kubeadm kubelet=1.25.5-00) or manually upgrade containerd to 1.6.0.
Take a look at question "failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint" on ServerFault.
thank you!
@@kmedleiss of course!
thank you so much for mentioning the same.
Is this guide now OBE? The xenial keyring is no longer signed and the kubeadm/kubectl/kubelet install commands suggest installing as snaps. Thoughts?
Nice, you're sharing a lot of knowledge here
Thanks for this. It was really helpful.
Awesome explanations! Do you plan to release a video with a load balancer setup for the k8s cluster any time soon too? And maybe a Nextcloud server in the k8s cluster? Would be great for sure! 🤝♥️
Please start a playlist from there :D
Hello, I would like to have a TrueNAS course. I really enjoyed this Proxmox course, especially for its didactic and methodical nature, and the way of explaining and organizing the topics and each of the relevant aspects. While I've heard your recommendations about other channels that deal with TrueNAS, they clearly don't deal with it in the same way or as extensively as you do. Thank you very much.
The main issue with covering TrueNAS is that it's BSD and not Linux. But now we have TrueNAS Scale, so there's no reason not to consider covering it. I'll definitely consider that and it does sound like a great idea!
@@LearnLinuxTV You're awesome. Thank you very much. I'm waiting on my pay so I can support you.
Thank you for this wonderful content.
Hi there! Love your channel! Any plans to do a Gentoo install? I have been trying to install it on an Asus VivoBook 1TB NVMe setup and can't get it to boot because it gives an error saying the "block device is invalid". Makes me think there is a specialized driver that needs to be loaded by the initramfs before loading the kernel. Very weird… Thanks!
Great tutorial and fantastic content! Thank you!
How about using Talos VMs? It does the same thing, way quicker and easier to manage
Is it possible to run all the configuration of the nodes and controller using Ansible or Terraform?
Hi Jay, I bought the first edition of your book. Is it possible to update to the newest 22.04 version?
Jay, thanks for all your videos and your book. I have learnt a lot from you.
But one question:
Why do you double the amount of work up until the point where you make a template of the worker node?
The steps are identical for the worker node and the control node up until that point, so the template can be used to generate all four nodes.
I have actually recreated the control node from this template and it works perfectly.
Me too!!! This is by far the best kubernetes setup around. All others can be a little unreliable. Can't get k3sup to work for anything.
I've noticed you didn't mark a hold on kubectl and the other required packages for Kubernetes? Is that important, holding the versions?
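For what it's worth, holding just tells apt not to touch those packages during a routine upgrade, so your nodes only move to a new Kubernetes version when you deliberately upgrade them (kubeadm first, then kubelet/kubectl):

sudo apt-mark hold kubelet kubeadm kubectl
# Later, when intentionally upgrading the cluster:
sudo apt-mark unhold kubelet kubeadm kubectl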
Hi Jay, thanks for this useful guide! It's awesome for a noob like me who is learning!
Is it possible to manage and view the K8s cluster with an interface like, for example, OpenLens or something similar? Can it be installed on a dedicated VM or in a Docker container?
Thank you!!
Do you have anything that talks about storage and persistent volumes in Kubernetes? I followed this and have a working cluster, but some of the things I want to run like redis require persistent volumes, and I'm struggling to figure out the secret sauce.
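In case it points anyone in the right direction: on bare metal there is no default StorageClass, so you need a provisioner (Longhorn, an NFS provisioner, local-path, etc.) before a claim like the sketch below will bind; the longhorn class name here is an assumption, not something from the video:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # must match a StorageClass that actually exists in your cluster
  resources:
    requests:
      storage: 5Gi
EOF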
Maybe explain why we needed to edit SystemdCgroup? It would be handy to say why we need to set this to true...
To answer myself: with this set, containerd will use systemd cgroups under Linux (for resource control and the like), which matches the cgroup driver the kubelet expects.
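Roughly, the relevant part of /etc/containerd/config.toml ends up looking like the commented lines below (section path from the containerd 1.6-era default config; as noted elsewhere in the comments, older versions put it in a slightly different spot):

# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#   SystemdCgroup = true
# One way to flip it in place and apply it:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd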
This is a great video and a great channel
This is a fantastic introduction, but I'm confused as to why you would want multiple k8s nodes on the same VM server. Isn't the point fault tolerance and resource distribution? I assumed that it would be best practice to have a single k8s node per physical server and to disable HA for those particular VMs (because k8s does HA internally)... keeping HA on for the controller, because I don't think the controller is fault tolerant, if I'm not mistaken. I admit I'm a total noob with this stuff, but I'd like to know more.
As far as I know, distributing the workload on a single physical machine with virtual machines is a valid approach. That way you have virtual redundancy and scalability. If you want physical redundancy, you could replicate the virtual cluster on a second, third, and so on, physical machine. With this setup, you can distribute the application workload over multiple clusters on physical machines. So if one cluster should fail, you still have additional clusters that will handle the workload of the failed cluster. This multi-cluster architecture would also enable running your services on hybrid cloud, multi-cloud, and other infrastructure models.
Very nice tutorial. I will definitely give it another go after running into some issues in the past. Any idea if the same steps (with some minor changes) work on Debian as well?
Can confirm that it works for the most part. The only real difference is that the version of containerd was slightly different (1.4), so the option for the systemd cgroup is in a slightly different place; the problem is that there is one that looks like it, but it has to be the one that is in the runc.options section.
Thanks again for the awesome video
Once I start working as a system admin for Ubuntu servers, I would like to order your book.
Can k8s be setup on LXC instead of KVM nodes?
I don't see why not. I'm about to set one up.
... Do you have a video that discusses QEMU... why do I need it?
Dude, you are awesome! Thank you a lot!
This is great man thank you!
The blog article is missing the steps for adding the repository
Any chance you'd do a Proxmox with Ceph hyper-converged tutorial?
I remain unclear whether the Proxmox VMs are paravirtualized and if the k8s cluster can then use the full power of the underlying hardware. Can you elaborate on that?
Can we get a link to where you got that shirt from?
Something's not working correctly; can you make an update if it is convenient for you?
Unfortunately it's not working anymore :( Got stuck at initializing the cluster, which results in "[ERROR CRI]: container runtime is not running". Seems like there are some issues with the current versions. Just a heads-up for everyone following this tutorial! I am 50 minutes in and kinda need to restart from the beginning.
Not the fault of Jay, good tutorial in general! But to everyone trying this in Mar 2023: You might need to look for another guide.
Also be aware that some commands are just in the video and NOT in the linked blog article!
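For anyone hitting the same "[ERROR CRI]: container runtime is not running": in my experience this is usually the packaged containerd config shipping with the CRI plugin disabled (disabled_plugins = ["cri"]). A common workaround, though not guaranteed for every version, is to regenerate the default config and restart:

containerd config default | sudo tee /etc/containerd/config.toml
# Re-apply the SystemdCgroup = true change from the video, then:
sudo systemctl restart containerd
# and retry kubeadm init.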
Thank you so much
Why should I choose Kubernetes over Docker Swarm?
There is a typo in the pod.yml section on the blog right after metadata:
Great tutorial!!
Your join command failed at 46:35 because at 42:04 you saved the command for joining additional controllers (--control-plane) instead of the worker node join command, which was below it (and is the same command you obtained later at 47:04).
Great video! Keep up the great work!
Thanks!
Hello, I'm having some issues SSHing into my VMs for the master and nodes. I keep getting the error: No supported authentication methods available (server sent: publickey). I've googled this and tried a few things with no luck. What did I do wrong? I'm using PuTTY to SSH.
I followed the tutorial and it works great between the host machine and the cluster. However, if I try to curl the URL (subnet:30080) from another Proxmox VM on the same subnet, it is very slow, around 1.5 minutes. curl from the host machine or any of the nodes takes less than 1s. Does anyone know what this could be? Thanks in advance