We use EKS/serverless at work, so after my dual E5-2660 R720 arrives this week I'll start moving some of my straight Docker stuff into k3s on that and my R9 3900X server.
I was actually going to ask about your thoughts on deploying an HA cluster using k3d; I saw a tutorial and it seems interesting, at least for practice. Great tutorials, so far my favourite channel: straight to the point, with useful tools that we all should be running. I was also wondering if you see value in doing "Kubernetes the Hard Way" to learn k8s, or is there a better resource? I feel I need to learn more to follow your tutorials better.
Just started setting up my Homelab and learning Kubernetes for work. Your videos are the best I have seen. Thank you for clear and detailed directions.
Great, great tutorial for deploying HA K3s, and after two years everything still works as per your explanations (with a couple of minor adjustments). Thanks a bunch for your excellent work!
I know this is a pretty old video, but I have some question marks about high availability. I get that with 2 master nodes you get redundancy for the Kubernetes management stuff, but how do we set up highly available Persistent Volumes? As I understand it, they are still a single point of failure. I mean, if you have a pod running MySQL, you can easily say "if it isn't running, spin it up on another node," but it relies on Persistent Volumes, which in my case are NFS mounts with fixed IP addresses. How do we make those highly available?
Not sure if something changed in k3s, but the tutorial sadly doesn't work for me. Installing a single master node works fine, but if I try adding another master node, the Kubernetes service crashes with "starting kubernetes: preparing server: bootstrap data already found and encrypted with different token". This can be bypassed by copying the token from the first node and using "--token TokenHere" when setting up the server, but this is not mentioned anywhere, not even in the original documentation! And even then, it still has some issues with K3s's metrics service not being able to access other nodes' metrics.
Another issue that I've found: the "@tcp(IpHere:PortHere)" format doesn't work for me for some reason; the service fails with "preparing server: creating storage endpoint: building kine: parse \"postgres://postgres:passwordhere@tcp(172.29.2.1:5432)\": invalid port \":5432)\" after host". Maybe because I'm not using a load balancer for my PostgreSQL server? I don't think that's the issue, though.
Smol update: looks like they mentioned the bootstrapping issue in k3s' changelog! I think it would be nice to update the docs in the description to mention that change: "If you are using K3s in a HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the --token CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores."
@@TechnoTim Submitted an issue with all the problems I commented here, plus solutions! :3 Currently my Kubernetes cluster is running fine and well, so I guess the fixes really worked :P
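For anyone hitting that same parse error: kine only understands the @tcp(host:port) wrapper in MySQL-style DSNs, while PostgreSQL endpoints are plain URIs. A sketch of both --datastore-endpoint formats, with placeholder credentials and IPs (check the k3s datastore docs for your version):

```
# MySQL / MariaDB: Go-style DSN, host wrapped in @tcp()
--datastore-endpoint="mysql://user:password@tcp(10.0.0.5:3306)/k3s"

# PostgreSQL: plain URI, no @tcp() wrapper
--datastore-endpoint="postgres://user:password@10.0.0.5:5432/k3s"
```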
It seems like the way of setting up 2 control servers has slightly changed. You need to obtain the server token from server 1 first and add it to the server 2 setup command, i.e. add the token param, e.g. "...server --token <token>".
I've been looking at lots of docs over the past few days to set up k3s in HA, and from what I remember it can tolerate one server failure if you have at least three of them; I may be mixing it up with k8s or something, though. Nice videos, this is the first one where I actually ended up with a working k3s cluster! Thank you
Thanks Tim, great tutorial. I only have one comment. With all the tabbed terminals and switching between them, it was sometimes a bit hard to follow where you were actually issuing the commands. Second point: your face was sometimes blocking the commands you were explaining. I guess using multiple windows would make this easier to follow. But then again, it's easy enough to go back a bit and look carefully. So again, thanks for creating this. Really helpful and still valuable after some years.
I had tried with a different collation, and this message appeared the moment the service started: "creating storage endpoint: building kine: Error 1071: Specified key was too long; max key length is 767 bytes". Great video!
Hi Timothy, I love your tutorials. Thanks for keeping the information so up to date. I am having trouble installing Rancher on my k3s cluster: 1 master node with the embedded datastore, 2 worker nodes, and an LB as well. I used k3sup to deploy the k3s cluster. I installed cert-manager and thereafter installed Rancher, but I can't access the Rancher UI or the Traefik dashboard.
@@fuzzylogicq It is easier because Traefik v1 is included, however the current version is v2. So I would suggest toggling the option not to install Traefik, then installing v2 via Helm charts; from there you can have it route anything, including individual static routes, with many options via the Traefik CRDs on your ingress and/or inside the Traefik config. Mastering ingress/Traefik can be challenging, but it's perhaps one of the most important learning steps of k8s.
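A rough sketch of that flow, with repo and chart names as of recent Traefik releases (hedged: verify flags and URLs against the current k3s and Traefik docs before running):

```shell
# Install k3s without the bundled Traefik
curl -sfL https://get.k3s.io | sh -s - server --disable traefik

# Then install Traefik v2+ from its official Helm chart
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik --namespace kube-system
```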
Trying to figure out how to type this. FIRST: your videos are the best. Most pertinent to my interests, most aligned with my current configuration, most detailed. But I'm not at your level. I don't fully understand nginx, mysql, certs, etc. (foundation issues)... but I'll get there! Thanks for your help. Saying thanks, and giving feedback for newbs like me!
With all the problems with Google Photos at the moment, a video on something like PhotoPrism would go a long way. Thank you for continuously making quality content, keep up the good work!
FYI, explaining the HA server configuration at a high level would have prevented me from going down a rabbit hole of silliness trying to set up an HA configuration without the datastore; specifically the part about HA server configuration requiring the datastore. Still though, as someone new to k8s, very helpful!
I tried to find material on performing rolling updates with K3s, but nothing comes up. Some say that K3s can't do rolling updates. I'm confused: high availability, but no rolling updates?
Though I have learned a lot from your videos, I wish you would come back to this video and go from A to Z, for alas, I still cannot get it to work. 1. Start with the load balancer once the VMs are up and IPs are available. 2. Show the installation of MySQL or MariaDB on a separate VM, and show which app you use (DBeaver). 3. Show the installation, and state which version of Docker you are putting on the master and worker nodes.
Thanks for the feedback! I try to break them up otherwise they would be an hour long and less consumable. Hop in our discord or live stream for questions.
Great tutorial! Been waiting for you to do something like this for a while. Thanks! Ps: your tutorials are amazing. Well structured and easy to follow. Keep up the good work!
I know it's just for demonstration purposes, but it's kinda funny that you're using Proxmox to run full VMs for Kubernetes and the load balancer, while Proxmox has a shiny button for creating LXC containers right on the dashboard.
Tim, great video! However, this took me days, not minutes, with many issues. Note that the uninstall scripts don't clear out the database, which was my problem. Once I did that and re-installed k3s, everything was fine.
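For anyone who hits the same wall: the k3s uninstall script cleans up the node but not the external datastore. A hedged sketch of the reset, assuming a MySQL datastore and a database named k3s (the host, user, and database name are placeholders, and back up anything you care about first):

```shell
# After running k3s-uninstall.sh on every node,
# wipe the old cluster state out of the external datastore
mysql -h 10.0.0.5 -u k3s -p -e "DROP DATABASE k3s; CREATE DATABASE k3s;"
```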
I just noticed you and Zack Nelson (JerryRigEverything) on YouTube have the same exact cadence when speaking. It's very unique, soothing, and somewhat unsettling (because of the timing). Moreover, it holds my attention, not because of the material but because of the almost 5/3 timing of your speech pattern.
You're becoming my favourite YouTube channel for this kind of stuff... a lot of others are just super annoying or try to sell you stuff all the time, like NetworkChuck etc. This is really useful stuff.
Any drawbacks to running the "load balancer" in front of the masters as, say, a standalone management server running Docker that runs not only the Nginx container but also the MariaDB container serving as the datastore DB? I wouldn't think so personally, but if we're talking about spinning up a fresh K3s cluster, it'd be nice to have everything under one roof. Great video BTW!
You say you have MySQL on a load balancer, a setup similar to the nginx one you showed in the video? I'm setting up my test lab on my i7 with 32 GB using Proxmox: what do you think of creating an NFS share for the storage part on the host itself, mounted on the agent nodes, so the storage is sure to be available well before the nodes are started? And how do you set up the boot order (and which is the correct one) for all those VMs in Proxmox? Thanks!
When you set up the taint on the control-plane nodes, this also prevents monitoring pods from scheduling on them (node-exporter). How could I configure the taints such that general workload pods aren't scheduled to the control plane but, say, monitoring-related pods are?
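One hedged way to get that behavior with the taint used in the video (CriticalAddonsOnly=true:NoExecute): leave the taint in place so general workloads stay off the servers, and add a matching toleration only to the monitoring pods, e.g. the node-exporter DaemonSet. A sketch of the toleration snippet for that pod spec:

```yaml
# Added to the monitoring pod/DaemonSet spec; lets these pods
# schedule onto nodes carrying the video's control-plane taint
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
    effect: "NoExecute"
```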
Can you demonstrate serving a simple webpage without needing to exec or proxy? Traefik is preinstalled, but maybe showing it with an nginx installation would be more helpful.
Great video! I am planning to deploy a Kubernetes cluster using K3s and I have some questions. Hope you will reply! 1) Should the external datastore (e.g. MySQL) be on a separate VM, or will living inside one of the master nodes do? 2) If I am planning on using the Nginx Ingress Controller (instead of having an external LB like you showed here), how should I go about doing this? Or are they actually different things?
1) Your database should not live in your cluster. 2) This load balancer is not my ingress controller; k3s comes with Traefik for that. Hope this makes sense!
Hi Tim, what are you hosting in your K3s Kubernetes cluster? Can you elaborate on the load balancers, both internal (Traefik, etc.) and external (nginx)? How do I configure the kubectl load balancers along with the Rancher HA LB? And why are the svclb pods in the k3s cluster not rolled out properly when I install an app via the Apps tab in the menu?
What exactly is the purpose of this "external datastore"? What is being stored in it? If I have an application running on my cluster, is there anything preventing me from talking to an external database that wasn't set with the --datastore-endpoint option?
Thanks for the guide Tim! As Tim said, pay attention to the database collation (latin1_swedish_ci). This can cause issues when deploying the server nodes that show up only in /var/log/syslog.
Excellent video! Loved the pace and quality of information. I subscribed and am looking forward to browsing around your channel further! Thank you Tim 👍
I just wanted to say how much I love your blog. The documentation to go along with the videos is spot on, and I love the layout. May I ask what it's built on? It definitely doesn't look like WordPress.
Tim, I was watching your K3s spin-up video, but I did not see the VM config for the 6 VMs. Currently I have 6 Ubuntu VMs with 1 socket, 1 CPU, and a 60 GB drive, but there was no VM spec defined in the video. Am I on the right track?
Hey Tim. I have a question about '12:17 - Get our k3s kube config and copy to our dev machine'. What machine is that? I guess it's not an agent? What do you mean by 'dev machine'?
Very clear to me. The only part I didn't get was how you defined your load balancer IP. Newer versions of K3s come with Traefik, and it assumes the load balancer IP is the IP of the server you are creating the cluster on. How do I assign a separate, exclusive IP to the default Traefik load balancer?
Great tutorial as ever Tim, love the bit from the live stream at the end too. It's always good to pay it forward. Quick question, were all the nodes virtualised in this tutorial or cloud hosted?
I am adding my vote for more on external load balancing. It sounds like you are running 3 different load balancers: for the API, the NodePorts, and MySQL. I suspect the API and MySQL ones would be similar, but you may want a layer 7 load balancer for the NodePorts. Are you running one load balancer or multiple?
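For reference, the API-server load balancer from the video is a plain layer 4 TCP pass-through; a minimal sketch of an nginx stream config for it, with placeholder server-node IPs:

```nginx
events {}

stream {
  upstream k3s_servers {
    server 10.0.0.11:6443;
    server 10.0.0.12:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}
```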
The main confusion for this guide is when you are suddenly able to hit the k3s dashboard from localhost... After a couple of hours, I finally figured out how to install/configure kubectl on Windows 10. It wasn't exactly straightforward, but it could do with a mention next time. (Total time for me to complete everything you did in the video was 10 hours.)
Sorry, yeah, if you are going to run kubectl from Windows I highly recommend WSL; then everything should just work. The proxy command should work too, however I'm not sure with WSL2 and its odd networking.
@@TechnoTim I missed what it was when you said it, but ended up getting there eventually. Seriously, thank you so much for these videos, they're amazing at getting through the majority of what needs to be done for these kinds of setups. Plus your documentation is a fantastic bonus! (Pretty sure I'm at least 50 of the views on this video, given the number of times I've run through some of the steps after breaking everything over and over.)
Why did you use an even number of worker nodes? I'm learning, and I read online that we are supposed to use an odd number of worker nodes to help with availability.
I think you forgot, or at least didn't show, the last load balancer that would access the 20 pods. Would you just create it like the first one, pointing at the agents/workers, and then set up an ingress?
Tim, great video. "Kubernetes in minutes" is a bit of a stretch though, hahaha; I say that after the 8 hours I just spent trying to get this up. I am stuck at getting the dashboard on my Windows machine. Are you using a Linux machine to install the dashboard, or one of the Windows options? Also, what GUI are you using for the MySQL DB? What would help me is if you were just a little more specific/clear about which machine you are installing each component on. Don't get me wrong, you're the best thing going on YouTube. I have no idea how you do it all, you put in a ton of hard work!! Truly grateful for your content!!
I would skip the k8s dashboard and get rancher instead :) I use HeidiSQL for my SQL client on Windows. Thank you!! ruclips.net/video/APsZJbnluXg/видео.html
What do you do when you have multiple nodes running the same application code with user-generated content? Do you use one application DB that all the nodes share (not referring to k3s itself)? And how do you handle uploads: shared storage that the application uses across all nodes?
Tim, this is a great video, thanks a lot! BTW, I have a question about the DB. You created an HA cluster with 3 servers, but isn't the DB still a point of failure? Wouldn't it be more appropriate to have an etcd DB created on each master and have them synchronized between them?
Sure, you can make your database HA. This is the HA install of k3s vs. single node; making your DB HA will be up to you. I've used etcd too, and it has its own problems as well.
Great video! I was trying to follow along, however my agents are not joining the cluster; it returns "Failed to connect to proxy". Do I have to set up the k3s external IP? If so, do I do it on one of my servers and put in the LB's IP? Thank you.
Very clear explanations, your channel is a gold mine!! I was curious though... is it possible to include a server (let's name it Alpha 1) which already has Docker applications running on it in a k8s cluster, so that these already-existing applications could be automatically recreated on any of the cluster's worker nodes should a problem occur? I understand that k8s solves this issue, but my question emphasizes the fact that the applications on the server "Alpha 1" were deployed before the cluster was created. In a nutshell, I would like to know if it is possible to include a standalone server in a cluster while making sure its already-existing applications can be handled within a freshly created k8s cluster. I hope my question is clear.
If I understand correctly, the external database and load balancer should reside on other machines and not be on a master node? Or are they only supposed to be outside the Kubernetes cluster?
NICE kickstart presentation!! THANKS! Especially for using 100% of your available screen area with a proper font size!! However, there are important points for me to put my finger on: where exactly is the valuable DATA of the apps I deployed stored? Are my files "highly available" too? And what about your single-point-of-failure services, like your separate MySQL DB and nginx load balancer? Shouldn't it be possible to add or "migrate" these services onto the K3s HA cluster somehow?
Man, really awesome tutorial!!! That was absolutely clear and easy to understand!! Keep going with your work, and I hope one day we can share more experiences like this one! Only one thing that I missed: is your load balancer another VM or CT on your Proxmox server, or is it installed inside each k8s server? Cheers man, awesome job!
Thank you! My LB is nginx running outside of the cluster (because I want to be able to communicate with it even if the nodes are down). To make things more manageable, my nginx runs in Docker on another VM, but it doesn't have to.
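For anyone wanting to replicate that setup, a hedged sketch of running such an nginx LB in Docker, assuming an nginx.conf with a stream block proxying port 6443 sits in the current directory:

```shell
# Run nginx as a TCP load balancer for the k3s API,
# mounting the stream config read-only
docker run -d --name k3s-lb --restart unless-stopped \
  -p 6443:6443 \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:stable
```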
@TechnoTim - this is an old video, but I was wondering: what are you running Postgres on? Is it a container or a VM, and in either case, how do you ensure that it is not your SPOF for K3s?
I have a question with regards to the load balancer to access the Kubernetes servers.....What happens if the load balancer goes down? How will you be able to access your Kubernetes cluster? Along the same line what happens if the DB goes down? Are these two components single points of failure?
Hello... great tutorial... I'm new to k3s and feeling my way around. Could you kindly point me to the kubectl installation link for the dev machine? You did mention you had done this in an earlier video. I'm trying to look it up but not finding it; probably me not looking correctly.
What is the average sizing for the K3s servers and the agents? I am talking about running VMs on Proxmox. Is there any documentation I can read to see how many vCPUs and how much memory I have to give each VM? I ask because I found out that Proxmox is using at least one third of the total installed memory, and that's without even one VM running.
Hey, I'm running k3s in Proxmox LXCs and I have these issues: - none of the mandatory pods are being scheduled on the first master (traefik, coredns, and metrics are all in Pending state) - when I join a second master, k3s won't start on the second master - when I join a worker node, still nothing is being scheduled - when getting the nodes I just get the response 'No resources found'; it's not even showing the first master I'm running the command on
Great tutorial as always! Question: is it possible to specify multiple values for the --tls-san option, i.e. an IP address and a domain name? If so, how would that be done?
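Per the k3s server docs, --tls-san can simply be repeated, one value per flag. A sketch with placeholder values:

```shell
# Both the IP and the hostname end up in the API server's certificate SANs
curl -sfL https://get.k3s.io | sh -s - server \
  --tls-san 192.168.1.100 \
  --tls-san k3s.example.com
```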
Thanks for the very clear video. One thing I did not get, though, is how to set up the networking to access the service (the nginx pods, i.e. the "hello nginx" page) from a client, say from your "personal host" machine. You show that the nginx server is running, but I'm not sure how it can be accessed from the outside network (maybe it is the LB on the right in the initial diagram, facing the internet?). Is there some doc you can point me to?
One thing that I feel could have had better explanation and consideration is the database. The video recommends 2x K3s server machines that use MySQL as the datastore. However, if we end up using a single MySQL database, doesn't that become a single point of failure?
@@TechnoTim Would love to have a video on that; it's what I was wondering about the most, the most efficient way to do it while having 2 to 3 server nodes lol
I'm having quite some issues configuring the nginx load balancer, and wouldn't mind a video on that. I'll probably sort it out before that comes out, but it would be nice to have.
Hey Tim, great video. In fact, great channel; I love all your videos, they are very well done. I plan to build myself a Proxmox HA cluster out of a few machines I have lying around at home and then build a k3s HA cluster like you show in this tutorial. However, one question has puzzled me since I first saw your tutorial and keeps me from starting: I could do this directly on the metal, with a Linux server + k3s, and not bother with Proxmox. What's the advantage of doing it with Proxmox, and is it worth it?
Hey Tim! Thank you, but something I don't understand: is it necessary to install K3s like you show here if Rancher is already installed (with K3s in it) as a container, as in your first video with Rancher, Kubernetes, and the Minecraft server? There's something I misunderstood...
This is the process for installing k3s (Kubernetes) first and then installing Rancher inside of it. It's the HA way of installing Rancher. The Docker way is a non-HA Rancher (but HA services if you add more nodes).
I think you might be referring to the ingress controller? See the load balancer and architecture section in this video for an explanation; it's there so we can use the Kubernetes API from the outside, regardless of which server is up or down.
Awesome vids! Thx! It's very funny because I've literally been doing this at work for a couple of months. I've been running Rancher 1.6 for a couple of years now: one virtualized/backed-up single node at work for dev purposes, and an HA install on production webservers. We've been planning to move up to Rancher 2.0 at work for a while now, but... I only have two physical servers for production webservers, and I ran into a mental roadblock with k8s' triple nodes and quorum requirements. The first plan was two etcd/control-plane VMs fixed on the hosts and a third etcd VM with vSphere HA balancing... a weird config. So here is my point: what do you think about k3s in production HA environments? It fits my physical infrastructure better, but it also seems like a young project. My first tests are mixed; I broke my k3s server with a local Helm install (I know, not recommended, but I like to try and break things so I know how to get them stable afterwards XD). Again, thanks a lot for all the work you share, very useful and interesting!
First: thanks Tim for your videos, very inspiring stuff you do with your homelab, and you give very good instructions! But I encountered a problem while following the steps in your video, on adding the second master node. It seems k3s version 1.24.3 needs the token of the initial server set in the curl command. As far as I investigated, earlier versions used an empty token by default, but that changed in a release after your guide. The token can be found with cat /var/lib/rancher/k3s/server/token on the first master node, and needs to be added to the curl command with --token XXX... after the server part of the command. After I fixed that, I had to delete some outdated certificates on the new node, and voila, my second node was up and running. Took me some time to figure that out. I don't know if it's a common problem, but maybe it's helpful!
Thanks man, you saved my bacon! :) For anyone else with this problem: to remove the outdated certificates, run this command on the server(s) with the error: rm -r /var/lib/rancher/k3s/server/tls. Then add the token into the curl command as Alex said, so your new command will look like this: ...server --token <token> --node-taint...
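Putting the thread's fix together as one hedged sketch (the token, IPs, and datastore endpoint below are placeholders; adjust to your cluster and current k3s docs):

```shell
# On the first server: read the cluster token
sudo cat /var/lib/rancher/k3s/server/token

# On the joining server: clear stale certs if you hit the bootstrap
# error, then install with the token included
sudo rm -r /var/lib/rancher/k3s/server/tls
curl -sfL https://get.k3s.io | sh -s - server \
  --token "<token from the first server>" \
  --datastore-endpoint "mysql://user:password@tcp(10.0.0.5:3306)/k3s"
```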
I have an application and a database container running. I can pass the database connection URL as an environment variable to the web server, but containers being such a volatile environment, the IP keeps changing every time it's rebuilt. How do you pass the IP of the database container to the web server?
Great video!! Do you not need to configure MetalLB for the HA cluster? Is that what the external Nginx provides instead? Can you use MetalLB instead of Nginx?
Tim, love the vidz. Great content. My company deploys 30-plus microservices, and as a team lead I would like to find an inexpensive solution for my team for debugging k8s on their development laptops. K3s seems like a great candidate for this, considering its lightweight footprint. At the moment we use Docker and docker-compose to model the k8s setup for the core 5-6 services that handle the majority of the work. I want my devs to understand how k8s works; knowing how Docker works is great, but it's not the same as k8s. Q: Have you compared Docker for Windows with k8s vs. k3s?
Hey Tim, thank you for this video. Can you please explain how to upgrade Traefik to the latest version? k3s uses 1.81 as default and I want to use Traefik v2.3. Can I just edit the Traefik yaml, or should I make a new deployment? Also, how do I browse to the nginx test deployment? At the moment only a curl from localhost works; browsing to the load-balancer IP does not.
What are you going to run your k3s cluster on?
Right now a Mac mini and a shuttle ;) works really well with about 30 containers
Clear tutorial as always! I'm actually already running k3s (with Rancher) on three Lenovo minis, must say that I'm really enjoying the platform. Some differences are, in my install, I started with installing etcd on all three nodes first, and I am using a virtual (floating IP address) rather than having an external load balancer. So the cluster is self contained and doesn't depend on an external database or LB. I am actually running workloads on all three masters, figured that the small overhead of control node stuff isn't that big.
@@pjotrvrolijk3537 Nice! Super flexible!
I was not able to follow along with this tutorial for three reasons: 1. I don't know how to configure a load balancer (is it on another VM, is it on my personal machine, etc.). 2. I don't know how to set up the MySQL database (again, where does this go?). 3. How do I set up the VMs for my servers and agents? I've already followed your tutorials on Proxmox, but I feel like that was left out here. Otherwise, thanks for this tutorial, you make great content, and I appreciate everything!!
I think a lot of things have been updated now in deploying K3s. Could you make an updated video for the newest version of K3s?
Excellent work. You keep it simple and accessible. I would never attempt these levels of sophistication in my work. You are boiling this down to the simplest terms. Much appreciated. You have earned a loyal follower
thank you!
@TechnoTim, you may add two things:
1. the k3s servers don't just pick up the K3S_DATASTORE_ENDPOINT variable; they need the --datastore-endpoint parameter
2. the second k3s server needs the token argument
In my setup, in April 2024, these are what made it work.
Thanks anyway for this awesome video. Do you have any plans for a Terraform-based k3s deployment on XO or Proxmox or whatever homelab owners have?
The video tutorials you make are gold.
I really appreciate this video! I've been struggling to get a k8s cluster up and running, and this video was exactly what I needed. Thanks so much!
Thank you!
Hey Tim, love your tutorials. If you want an idea for a video, there is one thing that is very tricky but could be extremely powerful to set up:
Can you add persistent storage on TrueNAS for Kubernetes nodes through NFS or iSCSI?
Yeah I would love to see some examples as well. I'm thinking about migrating to k8s too.
Just use the nfs client provisioner, many of our community members have already set this up. It comes up daily in our Discord! discord.gg/DJKexrJ
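For anyone searching for it, that provisioner now lives in the kubernetes-sigs org as nfs-subdir-external-provisioner. A hedged install sketch with placeholder NFS server and export path (check the chart's README for current values):

```shell
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.0.0.20 \
  --set nfs.path=/mnt/tank/k8s
```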
Nice tutorial on spinning up the Kubernetes cluster! But what about load balancing all these workloads from the client side?
Hey @Techno Tim, actually you need the server token to start the second/third/nth k3s server too (control-plane, master). Tested on version 1.21.5.
When I specify the --token [value] param on the 2nd master, it starts but never seems to actually join; both nodes show themselves when doing a kubectl get nodes, but they don't show the other. Any ideas?
@@jaycol12 hi, I'm experiencing the same problem right now. Did you by any chance solve this issue?
I don't know if it's just me... but for anyone else out there, for the 2nd server I had to use the /var/lib/rancher/k3s/server/node-token on the 2nd server install, not just the agents. journalctl output was saying a cluster was already bootstrapped with another token
Can you post a video about your db setup?
Whoa you are throwing hot tutos, been watching and following along for 2weeks now, homelab is starting to look nice now :)
Not sure if something changed in k3s, however the tutorial sadly doesn't work for me
Installing a single master node works fine, but if I try adding another master node, the Kubernetes service crashes with "starting kubernetes: preparing server: bootstrap data already found and encrypted with different token"
This can be bypassed by copying the token from the first node and using "--token TokenHere" when setting up the server; however, this is not mentioned anywhere, not even in the original documentation! And even then, it still has some issues with k3s's metrics service not being able to access other nodes' metrics.
Another issue that I've found: the "@tcp(IpHere:PortHere)" format doesn't work for me for some reason; the service fails with "preparing server: creating storage endpoint: building kine: parse \"postgres://postgres:passwordhere@tcp(172.29.2.1:5432)\": invalid port \":5432)\" after host". Maybe because I'm not using a load balancer for my PostgreSQL server? I don't know, but I don't think that's the issue.
Smol update: looks like they mentioned the bootstrapping issue in k3s' changelog! I think it would be nice to update the docs in the description to mention that change.
"If you are using K3s in a HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the --token CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores."
Thank you! Can you open an issue in github and I will check it out and add notes!
@@TechnoTim submitted an issue with all the issues I commented here + solutions! :3
Currently my Kubernetes cluster is running fine and well, so I guess the fixes really worked :P
Thanks buddy, you saved my afternoon!
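For reference on the endpoint format issue mentioned above: the `@tcp(host:port)` wrapper is Go MySQL DSN syntax, while a PostgreSQL endpoint is a plain URI. A sketch of both forms (hosts, ports, and credentials are placeholders):

```
# MySQL/MariaDB: DSN style, with the tcp() wrapper
--datastore-endpoint="mysql://user:pass@tcp(172.29.2.1:3306)/k3s"

# PostgreSQL: standard URI, no tcp() wrapper
--datastore-endpoint="postgres://user:pass@172.29.2.1:5432/k3s"
```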
It seems like the way of setting up 2 control servers has changed slightly. You need to obtain the server token from server 1 first and add it to the server 2 setup command. You need to add the token param, e.g. ....server --token
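For anyone following along, that looks roughly like this (a sketch; paths assume a default k3s install, and the endpoint and token values are placeholders):

```shell
# On the first server, read the cluster token:
sudo cat /var/lib/rancher/k3s/server/token

# On the second server, pass it explicitly when installing:
curl -sfL https://get.k3s.io | sh -s - server \
  --token "<token-from-server-1>" \
  --datastore-endpoint "mysql://user:pass@tcp(db-host:3306)/k3s"
```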
I've been looking at lots of docs the past few days to set up k3s in HA, and from what I remember it can handle 1 server failure if we have at least 3 of them; I may be mixing it up with k8s or something, though. Nice videos, that's the first one where I actually ended up with a working k3s cluster! Thank you
With k3s and an external datastore you need 2 at minimum :)
Thanks Tim, great tutorial. I only have one comment: with all the tabbed terminals and switching between them, it was sometimes a bit hard to follow where you were actually issuing the commands. Second point: your face sometimes blocked the commands you were explaining. I guess using multiple windows would make this easier to follow. But then again, it's easy enough to go back a bit and look carefully. So again, thanks for creating this. Really helpful and still valuable after some years.
I had tried another collation, and this message appeared the moment I started the service:
"creating storage endpoint: building kine: Error 1071: Specified key was too long; max key length is 767 bytes"
Great video!
Glad I checked the collation!
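For reference, the fix is to create the database with an explicit collation, roughly like this (the database name is whatever you configured; latin1_swedish_ci is the collation mentioned in the video's docs, and its 1-byte characters keep index keys under the 767-byte limit):

```sql
CREATE DATABASE k3s CHARACTER SET latin1 COLLATE latin1_swedish_ci;
```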
Hi Timothy, I love your tutorials. Thanks for keeping the information so up to date. I am having trouble installing Rancher on my k3s cluster: 1 master node with the embedded datastore, 2 worker nodes, and I have an LB as well. I used k3sup to deploy the k3s cluster. I installed cert-manager and thereafter installed Rancher, but I can't access the Rancher UI or the Traefik dashboard.
there is also an option for an embedded etcd so you don't need an external db :)
Performance issues are a concern
@@sexualsmile it's k3s… no one's gonna host business-critical infrastructure in a self-hosted homelab
@@TillmannHuebner making assumptions and not reading the docs.
😂.
You didn't even address my initial comment too. 😆
Q: Wouldn't you want to use MetalLB and then perhaps Traefik for more robust ingress/loadbalancer? (love your stuff)
Traefik is built into k3s
@@fuzzylogicq It is easier because Traefik v1 is included; however, the current version is v2. So I would suggest toggling the option not to install Traefik, then installing v2 via Helm charts. From there you can let it route anything, including individual static routes, with many options via the Traefik CRD on your ingress and/or inside the Traefik config/CRD. Mastering ingress/Traefik can be challenging, but it is perhaps one of the most important learning steps of K8s.
@@dougsellner9353 Isn't MetalLB better overall than Traefik?
Trying to figure out how to type this.
FIRST. Your videos are the best. Most pertinent to my interests. Most aligned with my current configuration. Most detailed.
But I'm not at your level. I don't fully understand nginx, mysql, certs, etc.
Foundation issues... but I'll get there! Thanks for your help.
saying thanks, and giving feedback for newbs like me!
You can do it!
@@TechnoTim I heard that in schwarzenegger voice. I do love schwarzenegger. Lol
With all the problems with Google Photos at the moment, a video on something like PhotoPrism would go a long way. Thank you for continuously making quality content, keep up the good work!
Thank you!
FYI, explaining the HA server configuration at a high level would have prevented me from going down a rabbit hole of silliness trying to set up an HA configuration without the datastore, specifically the part about the HA server configuration requiring the datastore. Still, as someone new to K8s, very helpful!
Hey Tim, You're an inspiration for sure!
I have been consuming content for long time and I am also looking to start paying that forward soon.
Would love to see a video on Metallb and OpenBGPD
I tried to find material on performing rolling updates with K3s, but nothing pops out. Some say that K3s can't do rolling updates. I'm confused - high availability, but no rolling updates...
Though I have learned a lot from your videos, I wish you would come back to this video and go from A to Z, for alas I still cannot get it to work.
1. Start with the load balancer once the VMs are up and IPs are available.
2. Show the installation of MySQL or MariaDB on a separate VM and the app you use (DBeaver).
3. Show the installation, and state what version, of Docker you are putting on the master and worker nodes.
Thanks for the feedback! I try to break them up otherwise they would be an hour long and less consumable. Hop in our discord or live stream for questions.
Great tutorial! Been waiting for you to do something like this for a while.
Thanks!
Ps: your tutorials are amazing. Well structured and easy to follow.
Keep up the good work!
Thank you so much!
I know it's just for demonstration purposes, but it's kinda funny that you're using Proxmox to run full VMs for Kubernetes and the load balancer, while Proxmox has a shiny button for creating LXC containers right on the dashboard.
Hi Tim! Do you have a tutorial on load balancing mysql servers? Thanks!
Tim great video! However, this took me days, not minutes with many issues. Note that the uninstall scripts don't clear out the database, which was my problem. Once I did that, and re-installed k3s, everything was fine.
I just noticed you and Zack Nelson (JerryRigEverything) on YouTube have the same exact cadence when speaking. It's very unique, soothing, and somewhat unsettling (because of the timing). Moreover, it holds my attention. Not because of the material but because of the almost 5/3 timing of your speech pattern.
Maybe I can work on the material holding your attention too😂! Thank you!
@@TechnoTim The material is pretty good too. I'm learning a bunch. Following your guide right now on setting up my K3S environment. Thank you!
you're becoming my favourite yt channel for this kind of stuff... a lot of others are just super annoying or try to sell you stuff all the time like network chuck etc.
This is really useful stuff.
Thank you. I love Network Chuck. Learned a ton of new topics from him!
Please make a video on how to allocate cpu and memory resources! I’m finding it hard to find a balance and what to watch for in a HA cluster
I just made a video on Monitoring and Alerting, check it out, it might help!
So well explained! Got my cluster up and running so well!!
Any drawbacks to running the "load balancer" in front of the masters as, say, a standalone management server running Docker that runs not only the Nginx container but also the MariaDB container serving as the datastore DB? I wouldn't think so personally, but if we're talking about spinning up a fresh K3s cluster, it'd be nice to have everything under one roof. Great video BTW!
You say you have MySQL behind a load balancer, a setup similar to the nginx one you showed in the video?
And I'm setting up my test lab on my i7 with 32 GB using Proxmox: what do you think of creating an NFS share for the storage part on the host itself, mounted on the agent nodes, so the storage is guaranteed to be available well before the nodes are started?
And, still, how do you set up the boot order, and which is the correct one, for all those VMs in Proxmox?
thanks!
storage soon!
@@TechnoTim great!
That always sounds easier when you explain it, but when I try it out it looks waaaay harder 😅👍
Thank you anyhow for this video
You're welcome 😊
When you set up the taint on the control-plane nodes, this also prevents monitoring pods (node-exporter) from scheduling to them. How could I configure the taints so that general workload pods aren't scheduled to the control plane but, say, monitoring-related pods are?
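One way to approach this (a sketch, assuming the `CriticalAddonsOnly=true:NoExecute` taint used for the k3s servers): keep the taint, but give the monitoring DaemonSet a matching toleration so it is exempt while ordinary workloads stay off the control plane:

```yaml
# Fragment of a node-exporter DaemonSet pod spec (names are illustrative)
spec:
  template:
    spec:
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
          effect: "NoExecute"
```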
Thanks for the video. How do I set up an ingress controller for path-based routing on a k3s cluster? Any documentation?
Can you demonstrate serving a simple webpage without needing to exec or proxy? Traefik is preinstalled, but maybe showing it with an nginx installation would be more helpful.
Great Video!
I am planning to deploy a kubernetes cluster using K3S and I have some questions. Hope you will reply!
1) Should the external datastore (e.g. MySQL) be at a separate VM? Or living inside one of the master node will do?
2) If I am planning of using Nginx Ingress Controller (instead of having an external LB like you showed here), how should I go about doing this? Or are they actually different things?
1) Your database should not live in your cluster
2) This load balancer is not my ingress controller. k3s comes with traefik for this.
Hope this makes sense
Hi Tim,
what are you hosting in your K3s Kubernetes cluster?
Can you elaborate on the load balancers, both internal (Traefik, etc.) and external (nginx)? How do I configure the kubectl load balancers along with the Rancher HA LB? And why are the svclb pods in the k3s cluster not rolled out properly when I install an app via the Apps tab in the menu?
What exactly is the purpose of this "external datasource"? What is being stored in it?
If I have an application running on my cluster, is there anything preventing me from talking to an external database not set with the --datasource option??
Thanks for the guide Tim! As Tim said, pay attention to the database collations (latin1_swedish_ci). This can cause issues when deploying the server nodes and shows up only in /var/log/syslog.
Thank you! I figured it was better to know for sure but was unsure if it affected anything. Thanks for confirming!
Excellent video! Loved the pace and quality of information. I subscribed and am looking forward to browsing around your channel further! Thank you Tim 👍
I just wanted to say how much I love your blog. The documentation that goes along with the videos is spot on, and I love the layout. May I ask what it's built on? It definitely doesn't look like WordPress.
Thank you! It’s open source too. You can clone and fork it! It’s in my GitHub which is included in the blog too!
Sounds like I might have a new project at work on Monday. Thx!!
Have fun!
Tim,
I was watching your K3s spin-up video, but I did not see the VM config for the 6 VMs.
Currently I have 6 Ubuntu VMs with 1 socket and 1 CPU, and a 60 GB drive, but the VM spec was not defined in the video.
Am I on the right track?
Hey Tim. I have a question about '12:17 - Get our k3s kube config and copy to our dev machine'. What machine is that? I guess it's not an agent? What do you mean by 'dev machine'?
Get the kube config from one of the servers and copy it to your local machine
Is there a way to run the external db in HA as well? So a replicated MySQL db in case one of them fail?
Yep. I would be great if there is a thorough guide on how to do it.
Very clear to me. The only part that I didn't get was how you defined your load balancer IP. Newer versions of K3s come with Traefik, and it assumes the load balancer IP is the IP of the server where you created the cluster. How do I assign a separate, dedicated IP to the default Traefik load balancer?
When you mentioned setting up a TCP load balancer with nginx, how would that work for DNS say with PiHole for example where you use UDP as well?
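For what it's worth, nginx's stream module can listen on UDP too, so DNS can be balanced by declaring both a TCP and a UDP listener (a sketch; the Pi-hole IPs are placeholders):

```nginx
stream {
    upstream pihole_dns {
        server 10.0.0.21:53;
        server 10.0.0.22:53;
    }
    server {
        listen 53;       # DNS over TCP
        proxy_pass pihole_dns;
    }
    server {
        listen 53 udp;   # DNS over UDP
        proxy_pass pihole_dns;
    }
}
```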
Hi, can you post a video of storage in Rancher? How to set it up in rancher and how the database to use it. Thanks
Great suggestion!
Great tutorial as ever Tim, love the bit from the live stream at the end too. It's always good to pay it forward. Quick question, were all the nodes virtualised in this tutorial or cloud hosted?
Thank you! All virtualized in my proxmox cluster!
I am adding my vote for more on external load balancing. It sounds like you are running 3 different load balancers for the API, NodePorts and MySQL. I suspect the API and MySQL would be similar, but you may want a layer 7 load balancer for the NodePorts. Are you running one load balancer or multiple?
I’ll be running 2, like in the diagram.
Could you do a video using K3OS?
My main confusion with this guide is when you are suddenly able to hit the k3s dashboard from localhost...
After a couple of hours, I finally figured out how to install/configure kubectl on Windows 10. It wasn't exactly straightforward, but it could do with a mention next time.
(total time for me to complete everything you did in the video was 10 hours)
Sorry, yeah, if you are going to run kubectl from Windows I highly recommend WSL; then everything should just work. The proxy command should work too, however I'm not sure with WSL2 and its odd networking.
@@TechnoTim I missed what it was when you said it, but ended up getting there eventually.
Seriously, thank you so much for these videos, they're amazing at getting through the majority of what needs to be done for these kinds of setups. Plus your documentation is a fantastic bonus!
(pretty sure i'm like... at least 50 of the views on this video, the amount of times i've run through some of the steps after breaking everything over and over)
Why did you use an even number of worker nodes? I'm learning, and I read online that we are supposed to use an odd number of nodes to help with availability.
I think you forgot, or at least didn't show, the last load balancer that would access the 20 pods.
Would you just create it like the first one, just accessing the agents/workers and then setup an ingress?
Tim, great video. "Kubernetes in minutes" is a bit of a stretch though, hahaha; I say that after the 8 hours I just spent trying to get this up. I am stuck at getting the dashboard on my Windows machine. Are you using a Linux machine to install the dashboard, or one of the Windows options? Also, what GUI are you using for the MySQL DB? What would help me is if you were just a little more specific/clear about which machine you are installing the components on. Don't get me wrong, you're the best thing going on YouTube; I have no idea how you do it all, you put in a ton of hard work!! Truly grateful for your content!!
I would skip the k8s dashboard and get rancher instead :) I use HeidiSQL for my SQL client on Windows. Thank you!! ruclips.net/video/APsZJbnluXg/видео.html
What do you do when you have multiple nodes running the same application code with user generated content?
Do you need one application DB that all the nodes will use for your application? (Not referring to k3s itself.)
How do you handle uploads? Shared storage that the application uses (across all nodes)?
Tim, this is a great video, thanks a lot! Btw, I have a question about the DB. You created an HA cluster with 3 servers, but isn't the DB a point of failure? Wouldn't it be more appropriate to have an etcd DB created on each master and have them synchronized between them?
Sure, you can make your database HA. This is the HA install of k3s vs single node; making your DB HA is up to you. I've used etcd too and it has its own problems.
Great video! I was trying to follow along, however my agents are not joining the cluster; it returns "Failed to connect to proxy". Do I have to set the k3s external IP? If so, do I do it on one of my servers and put in the LB's IP? Thank you.
Looks fine, but I don't think it has any advantages over Docker Swarm for simple projects.
Very clear explanations, your channel is a gold mine !! I was curious tho....
Is it possible to include a server (let's name it Alpha 1) which already has Docker applications running on it into a k8s cluster, so that these existing applications could be automatically recreated on any of the cluster's worker nodes should a problem occur? I understand that k8s solves this issue, but my question emphasizes the fact that the applications on the server "Alpha 1" were deployed before the cluster was created... In a nutshell, I would like to know if it is possible to include a standalone server in a cluster while making sure its already existing applications can be handled by the freshly created k8s cluster. I hope my question is clear.
Thank you!
If I understand correctly, the external database and load balancer should reside on other machines and not on the master node? Or are they only supposed to be outside the Kubernetes cluster?
NICE kickstart presentation!! THANKS! Especially for using 100% of your available screen area and a proper font size!! However, there are important points for me to put my finger on: where exactly is the valuable DATA of the apps I deployed stored? Are my files "highly available" too? And what about your single points of failure, like the separate MySQL DB and nginx load balancer? Shouldn't it be possible to add or somehow "migrate" these services onto your K3s HA cluster?
Thanks for the tips!
You set up 2 master servers. You said the token is the same on each; mine is different on each. How do both servers know about each other?
@TechnoTim Thanks for the great video. Is the nginx load balancer only in NGINX Plus, and not in the free version of nginx?
I think so but it’s included in the docker image last i checked!
Man, really awesome tutorial!!! It was absolutely clear and easy to understand!! Keep going with your work, and I hope one day we can share more experiences like this one! Only one thing I missed: is your load balancer another VM or CT on your Proxmox server, or is it installed inside each k8s server? Cheers man! Awesome job!
Thank you! My LB is nginx running outside of the cluster (because I want to be able to communicate with it if the nodes are down). To make things more manageable, my nginx runs in Docker on another VM, but it doesn't have to.
@TechnoTim - this is an old video, but I was wondering: what are you running Postgres on? Is it a container or a VM, and in both cases, how do you ensure it is not your SPOF for K3s?
Now I run it in Kubernetes, but I am running the etcd version of k3s (linked in the description). Otherwise you'll have to build a MySQL cluster for HA.
Having real issues with K3s connectivity into a postgress container. Tested connectivity with other apps and they're fine
Check out my latest video on installing k3s with etcd, it's 100% automated
Can you make a guide on how to install k3s in an air-gapped environment (without internet access)?
What a great video. Very much wow. Love your style of presenting the content. Sweet sweet.
I have a question with regards to the load balancer to access the Kubernetes servers.....What happens if the load balancer goes down? How will you be able to access your Kubernetes cluster? Along the same line what happens if the DB goes down? Are these two components single points of failure?
If it’s down, point your kube config directly at one of the k3s servers instead of the load balancer. If DB goes down, k3s goes down.
What’s the difference of this set up, vs using rancher, then adding nodes as worker to that?
So what is the difference between setting up the cluster this way vs the rancher cluster in the portal?
Hello... great tutorial... I'm new to k3s and feeling my way around... could you kindly point me to the kubectl installation link for the dev machine? You did mention you had done this in an earlier video... trying to look it up but not finding it... probably me not looking correctly.
What is the average sizing for the K3s servers and agents? I am talking about running VMs on Proxmox. Or is there any documentation I can read to see how many vCPUs and how much memory I have to give each VM? I ask because I found out that Proxmox is using at least one third of the total installed memory, even without a single VM running.
Keep the good work Tim, excellent content. Thanks so much!
hey, I'm running k3s in Proxmox LXCs
and I have these issues:
- none of the mandatory pods are being scheduled on the first master (traefik, coredns and metrics are all in Pending state)
- when joining a second master, k3s won't start on the second master
- when joining a worker node, still nothing is being scheduled
- when getting the nodes I just get the response 'No resources found'; it's not even showing the first master I'm running the command on
Sorry, I recommend against running k3s in lxc. You are basically containerizing a container platform. Nothing but troubles :)
@@TechnoTim I see thank you :)
Great tutorial as always!
Question: is it possible to specify multiple entries for the --tls-san option, i.e. an IP address and a domain name? If so, how would that be done?
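For reference, the flag can simply be repeated, and newer k3s also reads a config file where it becomes a list (values are placeholders):

```yaml
# /etc/rancher/k3s/config.yaml
tls-san:
  - 10.0.0.10
  - k3s.example.com
```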
Thanks for the very clear video. One thing I did not get, though, is how to set up the networking to access the nginx pods (i.e. the "hello nginx" page) from a client, say from your "personal host" machine. You are indeed showing that the nginx server is running, but I'm not sure how it can be accessed from the outside network (maybe it is the LB on the right in the initial diagram, to the internet?)... Is there some doc you can point me to?
Yes, exactly. You'll need to set up an ingress controller and MetalLB if you are going to expose these services outside of k3s.
One thing that I feel could have had better explanation and consideration is regarding the database.
It's recommended in the video to have 2x K3s Server machines, that use MySQL as its database.
However, if we end up using a single MySQL database, doesn't that create a single point of failure?
Yes, you will need to make your SQL DB HA by running replicas.
@@TechnoTim Would love to have a video on that; it's something I was wondering about, i.e. the most efficient way to do it while having 2 to 3 server nodes lol
Having quite some issues configuring the nginx load balancer; wouldn't mind a video on that.
I will probably sort it out before that comes out, but it would be nice to have.
Thanks! I have an example in the docs!
@@TechnoTim Do you have a link to a docker-compose.yml for the nginx setup you have here?
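Not Tim's exact files, but a minimal sketch of the layer 4 load balancer idea from the video (server IPs are placeholders; 6443 is the Kubernetes API port):

```nginx
# nginx.conf for a TCP (stream) load balancer in front of the k3s servers
events {}
stream {
    upstream k3s_api {
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
    }
    server {
        listen 6443;
        proxy_pass k3s_api;
    }
}
```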
Hey Tim, great video. In fact great channel I love all your videos, they are very well done.
So I plan to build myself a Proxmox HA cluster out of a few machines I have lying around at home and then build a k3s HA cluster like you show in this tutorial. However, I have a question that has puzzled me since I first saw your tutorial and keeps me from starting the work on this: I could do this directly on the metal, with a Linux server + k3s, and not bother with Proxmox.
What's the advantage of doing it with Proxmox, and is it worth it?
You can always go bare metal. I just use a hypervisor to virtualize all the things! That way I can share resources.
Hey Tim! Thank you, but something I don't understand: is it necessary to install K3s like you show here if Rancher is already installed (with K3s in it) as a container (your first video with Rancher, Kubernetes and the Minecraft server)? There's something I've misunderstood...
This is the process for installing k3s (Kubernetes) first and then installing Rancher inside of it. It's the HA way of installing Rancher. The Docker way is a non-HA Rancher (but HA services if you add more nodes).
What are the benefits of running an external load balancer? Is the Kubernetes one more difficult? I know I never understood it.
I think you might be referring to the Ingress Controller? See the load balancer and architecture section in this video for an explanation; it's there so we can use the Kubernetes API from the outside, regardless of which server is up or down.
awesome vids ! thx
it is very funny because I've literally been doing this at work for a couple of months.
I've been running Rancher 1.6 for a couple of years now:
1 virtualized/saved single node at work for dev purposes,
and an HA install on production web servers.
We've been meaning to move up to Rancher 2.0 at work for a while now, but...
I only have two physical servers for production web servers, and I hit a mental roadblock with K8s triple nodes and quorum requirements.
The first plan was two etcd/control-plane VMs fixed on the hosts and a third etcd VM with vSphere HA balancing... a weird config.
So here is my point: what do you think about k3s in production HA environments? It fits my physical infrastructure better, but it also seems like a young project.
My first tests are mixed; I broke my k3s server with a local Helm install (I know, not recommended, but I like to try and break things so I learn how to get them stable again XD).
Again, thanks a lot for all the work you share, very useful and interesting!
First: Thanks Tim for your videos, very inspiring stuff you do with your homelab! And you have very good instructions!
But I encountered a problem while following the steps in your video: adding the second master node failed.
It seems k3s version 1.24.3 needs the token of the initial server set in the curl command. As far as I investigated the issue, earlier versions used an empty token by default, but that changed in a release later than your guide.
This token can be found with a cat /var/lib/rancher/k3s/server/token on the first master node, and it needs to be added to the curl command with --token XXX... after the server part of the command.
After I fixed that, I had to delete some outdated certificates on the new node and, et voila, my second node was up and running. Took me some time to figure that out. I don't know if it's a common problem, but maybe this is helpful!
Thank you! Yeah k3s changed how tokens are handled! If you want an automated install see my other video!
Thanks man, you saved my bacon! :)
For anyone else with this problem, to remove the outdated certificates, run this command on the server(s) with the error:
rm -r /var/lib/rancher/k3s/server/tls
Then add the token into the curl command as Alex said, so your new command will look like this:
...server --token --node-taint...
I have an application and a database container running. I can pass the database connector URL as an environment variable to the web server, but containers being such a volatile environment, the IP keeps changing every time it's rebuilt. How do you pass the IP of the database container to the web server?
Great video!! Do you not need to configure MetalLB for the HA cluster? Is that what the external nginx provides instead? Can you use MetalLB instead of nginx?
You do need to configure MetalLB if you want to have an external load balancer.
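As a sketch of what that looks like with CRD-based MetalLB (the address range is a placeholder for your network):

```yaml
# A pool of addresses MetalLB may hand out to LoadBalancer services,
# plus an L2Advertisement so it answers ARP for them
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.200-10.0.0.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```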
Tim, love the vidz. Great content. My company deploys 30-plus microservices, and as a team lead I would like to find an inexpensive solution for my team for debugging k8s using their development laptops. K3s seems like a great candidate for this, considering its lightweight footprint. At the moment we use Docker and docker-compose to model k8s for the core 5-6 services that handle the majority of the work. I want my devs to understand how k8s works; knowing how Docker works is great, but it is not the same as k8s. Q: Have you compared Docker for Windows with k8s vs. k3s?
I would use WSL and take Windows out of the equation for local dev. Then it's just like a Linux server. Thank you!
Hey Tim, thank you for this video.
Can you please explain how to upgrade Traefik to the latest version? k3s uses 1.81 as the default and I want to use Traefik v2.3.
Can I just edit the Traefik yaml, or should I make a new deployment?
Also, how do I browse to the nginx test deployment? At the moment only a curl from localhost works, not browsing to the load balancer IP.
For people who don't have a cloud account where they can spin up instances, can you show how to do this with docker containers?
I think you are looking for a single node rancher install then ruclips.net/video/oILc0ywDVTk/видео.html
@@TechnoTim No, I was able to do pretty much your entire tutorial using k3d to create multiple server and worker nodes.