HIGH AVAILABILITY k3s (Kubernetes) in minutes!

  • Published: 23 Nov 2024

Comments • 471

  • @TechnoTim
    @TechnoTim  4 years ago +27

    What are you going to run your k3s cluster on?

    • @sirsirae343
      @sirsirae343 4 years ago +6

      Right now a Mac mini and a shuttle ;) works really well with about 30 containers

    • @pjotrvrolijk3537
      @pjotrvrolijk3537 4 years ago +11

      Clear tutorial as always! I'm actually already running k3s (with Rancher) on three Lenovo minis, and I must say I'm really enjoying the platform. A few differences in my install: I started by installing etcd on all three nodes first, and I'm using a virtual (floating) IP address rather than an external load balancer, so the cluster is self-contained and doesn't depend on an external database or LB. I'm actually running workloads on all three masters; I figured the small overhead of the control-plane stuff isn't that big.

    • @TechnoTim
      @TechnoTim  4 years ago

      @@pjotrvrolijk3537 Nice! Super flexible!

    • @morosis82
      @morosis82 4 years ago

      We use EKS/serverless at work, so after my dual E5-2660 R720 arrives this week I'll start moving some of my straight Docker stuff into k3s on that and my R9 3900X server.

    • @sidefxs
      @sidefxs 3 years ago +3

      I was actually going to ask about your thoughts on deploying an HA cluster using k3d; I saw a tutorial and it seems interesting, at least for practice.
      Great tutorials. So far my favourite channel: straight to the point, with useful tools that we all should be running.
      I was also wondering if you see value in doing “Kubernetes the Hard Way” to learn k8s, or is there a better resource? I feel I need to learn more to follow your tutorials better.

  • @duffyscottc
    @duffyscottc 11 months ago +3

    I was not able to follow along with this tutorial for three reasons: 1. I don't know how to configure a load balancer (is it on another VM, is it on my personal machine, etc.?). 2. I don't know how to set up the MySQL database (again, where does this go?). 3. How do I set up the VMs for my servers and agents? I've already followed your tutorials on Proxmox, but I feel like that was left out here. Otherwise, thanks for this tutorial, you make great content, and I appreciate everything!!
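
    For anyone else stuck on point 1: a minimal sketch of nginx doing TCP (stream) load balancing for the k3s API, assuming nginx runs on its own small VM and that 192.168.1.11/12 are placeholder k3s server IPs (some distros also need the stream module package, e.g. libnginx-mod-stream):

      # replaces the whole config on a box dedicated to load balancing
      echo '
      events {}

      stream {
          upstream k3s_servers {
              server 192.168.1.11:6443;   # k3s server 1
              server 192.168.1.12:6443;   # k3s server 2
          }

          server {
              listen 6443;                # agents and kubectl point at this VM:6443
              proxy_pass k3s_servers;
          }
      }
      ' | sudo tee /etc/nginx/nginx.conf >/dev/null
      sudo systemctl reload nginx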

  • @uuutttuuubbbee
    @uuutttuuubbbee 1 year ago +3

    I think a lot of things have changed now in deploying k3s. Could you make an updated video for the newest version of k3s?

  • @jessicalee462
    @jessicalee462 4 years ago +14

    Excellent work. You keep it simple and accessible. I would never attempt these levels of sophistication in my work. You are boiling this down to the simplest terms. Much appreciated. You have earned a loyal follower.

  • @JeffOwens
    @JeffOwens 3 years ago +2

    Just started setting up my Homelab and learning Kubernetes for work. Your videos are the best I have seen. Thank you for clear and detailed directions.

  • @gumtreeuser9768
    @gumtreeuser9768 7 months ago +2

    @TechnoTim, you might add two things:
    1. the k3s servers don't just pick up the K3S_DATASTORE_ENDPOINT variable; they need the --datastore-endpoint parameter
    2. the second k3s server needs the --token argument
    In my setup, in April 2024, these are what made it work.
    Thanks anyway for this awesome video. Do you have any plans for a Terraform-based k3s deployment on XO or Proxmox or whatever homelab owners have?
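
    Roughly, the server install ends up looking something like this (a sketch; IPs, the DB name, and the token are placeholders, and the token comes from /var/lib/rancher/k3s/server/token on the first server):

      curl -sfL https://get.k3s.io | sh -s - server \
        --datastore-endpoint="mysql://k3s:password@tcp(192.168.1.20:3306)/k3s" \
        --token="<token-from-server-1>" \
        --tls-san=192.168.1.10   # load balancer IP or hostname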

  • @harrycox6303
    @harrycox6303 3 years ago +7

    The video tutorials you make are gold.

  • @andrebalsa203
    @andrebalsa203 1 year ago +1

    Great, great tutorial for deploying HA K3s, and after two years everything still works as per your explanations (with a couple of minor adjustments). Thanks a bunch for your excellent work!

  • @pchasco
    @pchasco 3 years ago +2

    I really appreciate this video! I've been struggling to get a k8s cluster up and running, and this video was exactly what I needed. Thanks so much!

  • @sacha8416
    @sacha8416 4 years ago +8

    Hey Tim, love your tutorials. If you want an idea for a video, there is one thing that is very tricky but could be extremely powerful to set up:
    Can you add persistent storage on TrueNAS for Kubernetes nodes through NFS or iSCSI?

    • @helderferreira8498
      @helderferreira8498 4 years ago

      Yeah I would love to see some examples as well. I'm thinking about migrating to k8s too.

    • @TechnoTim
      @TechnoTim  3 years ago +1

      Just use the NFS client provisioner; many of our community members have already set this up. It comes up daily in our Discord! discord.gg/DJKexrJ
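
      A sketch of that with the current chart (nfs-subdir-external-provisioner is the successor to the old nfs-client provisioner; the NAS IP and export path are placeholders):

        helm repo add nfs-subdir-external-provisioner \
          https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
        helm repo update
        helm install nfs-provisioner \
          nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
          --set nfs.server=192.168.1.50 \
          --set nfs.path=/mnt/tank/k8s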

  • @lukasblenk3684
    @lukasblenk3684 6 months ago +1

    I know this is a pretty old video, but I have some question marks about high availability. I get that with 2 master nodes you get redundancy for the Kubernetes management stuff. But how do we set up highly available persistent volumes? As I understand it, they are still a single point of failure. I mean, if you have a pod running MySQL, you can easily say: if it isn't running, spin it up on another node. But it relies on persistent volumes, which in my case are NFS mounts with fixed IP addresses. How do we make them highly available?

  • @nwareroot
    @nwareroot 4 years ago +5

    Nice tutorial about spinning up the Kubernetes cluster! But what about load balancing all of these workloads from the client side?

  • @VladimirBezuglyi
    @VladimirBezuglyi 3 years ago +3

    Hey @Techno Tim, you actually need the server token to start the second/third/nth k3s server as well (control-plane, master). Tested it on version 1.21.5.

    • @jaycol12
      @jaycol12 3 years ago

      When I specify the --token [value] param on the 2nd master, it starts but it never seems to actually join; both nodes show themselves when doing a kubectl get nodes, but they don't show the other. Any ideas?

    • @earthisadinosaur2338
      @earthisadinosaur2338 1 year ago

      @@jaycol12 Hi, I'm experiencing the same problem right now. Did you by any chance solve this issue?

  • @tektech440
    @tektech440 3 years ago +1

    I don't know if it's just me... but for anyone else out there, for the 2nd server I had to use the token from /var/lib/rancher/k3s/server/node-token on the 2nd server install, not just for the agents. The journalctl output was saying a cluster was already bootstrapped with another token.

  • @HenrickSteele
    @HenrickSteele 2 years ago +1

    Can you post a video about your DB setup?

  • @shinzoken1
    @shinzoken1 1 year ago

    Whoa, you are throwing out hot tutorials. I've been watching and following along for 2 weeks now, and my homelab is starting to look nice :)

  • @MrPowerGamerBR
    @MrPowerGamerBR 3 years ago +1

    Not sure if something changed in k3s, however the tutorial sadly doesn't work for me.
    Installing a single master node works fine, but if I try adding another master node, the Kubernetes service crashes with "starting kubernetes: preparing server: bootstrap data already found and encrypted with different token".
    This can be bypassed by copying the token from the first node and using "--token TokenHere" when setting up the server, however this is not mentioned anywhere, not even in the original documentation! And even then, it still has some issues with k3s's metrics service not being able to access other nodes' metrics.

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago

      Another issue that I've found: using the "@tcp(IpHere:PortHere)" format doesn't work for me for some reason; the service fails with "preparing server: creating storage endpoint: building kine: parse \"postgres://postgres:passwordhere@tcp(172.29.2.1:5432)\": invalid port \":5432)\" after host". Maybe because I'm not using a load balancer for my PostgreSQL server? I don't know, but I don't think this is the issue.
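
      For anyone hitting the same parse error: as far as I can tell from the k3s datastore docs, the @tcp(host:port) wrapper is MySQL/MariaDB DSN syntax only; for PostgreSQL the endpoint is a plain URI. Rough examples (credentials, host, and DB name are placeholders):

        # MySQL / MariaDB style (note the @tcp(...) wrapper):
        --datastore-endpoint="mysql://user:pass@tcp(172.29.2.1:3306)/k3s"

        # PostgreSQL style (ordinary URI, no @tcp()):
        --datastore-endpoint="postgres://user:pass@172.29.2.1:5432/k3s"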

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago +1

      Smol update: looks like they mentioned the bootstrapping issue in the k3s changelog! I think it would be nice to update the docs in the description to mention that change:
      "If you are using K3s in a HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the --token CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores."

    • @TechnoTim
      @TechnoTim  3 years ago

      Thank you! Can you open an issue on GitHub and I will check it out and add notes!

    • @MrPowerGamerBR
      @MrPowerGamerBR 3 years ago

      @@TechnoTim submitted an issue with all the issues I commented here + solutions! :3
      Currently my Kubernetes cluster is running fine and well, so I guess the fixes really worked :P

    • @jeanmarcos8265
      @jeanmarcos8265 3 years ago +1

      Thanks buddy, you saved my afternoon!

  • @flobow8446
    @flobow8446 1 year ago

    It seems like the way of setting up 2 control servers has changed slightly. You need to obtain the server token from server1 first and add it to the server2 setup command. You need to add the token param, e.g. ...server --token

  • @EvilDesktop
    @EvilDesktop 4 years ago

    I've been looking at lots of docs the past few days to set up k3s in HA, and from what I remember it can handle 1 server failure if we have at least 3 of them; I may be mixing it up with k8s or something though. Nice videos, this is the first one where I actually ended up with a working k3s cluster! Thank you

    • @TechnoTim
      @TechnoTim  4 years ago +1

      With k3s and an external datastore you need 2 at minimum :)

  • @rstcologne
    @rstcologne 1 year ago

    Thanks Tim, great tutorial. I only have one comment. With all the tabbed terminals and switching between them, it was sometimes a bit hard to follow where you actually were issuing the commands. Second point, your face was sometimes blocking the commands you were explaining. I guess using multiple windows would likely make this easier to follow. But then again, it's easy enough to go back a bit and look carefully. So again, thanks for creating this. Really helpful and still valuable after some years.

  • @dimitryarmstrong
      @dimitryarmstrong 3 years ago

      I had tried with a different collation and this message appeared the moment I started the service:
    "creating storage endpoint: building kine: Error 1071: Specified key was too long; max key length is 767 bytes"
    Great video!

    • @TechnoTim
      @TechnoTim  3 years ago

      Glad I checked the collation!

  • @michaelkasede1489
    @michaelkasede1489 3 years ago +3

    Hi Timothy, I love your tutorials. Thanks for keeping the information so up to date. I am having trouble installing Rancher on my k3s cluster: 1 master node with the embedded datastore, 2 worker nodes, and I have an LB as well. I used k3sup to deploy the k3s cluster. I installed cert-manager and thereafter installed Rancher, but I can't access the Rancher UI or the Traefik dashboard.

  • @TillmannHuebner
    @TillmannHuebner 3 years ago +7

    There is also an option for embedded etcd, so you don't need an external DB :)

    • @sexualsmile
      @sexualsmile 2 years ago

      Performance issues are a concern

    • @TillmannHuebner
      @TillmannHuebner 2 years ago

      @@sexualsmile It's k3s... no one's gonna host business-critical infrastructure in a self-hosted homelab

    • @sexualsmile
      @sexualsmile 2 years ago

      @@TillmannHuebner Making assumptions and not reading the docs.
      😂.

    • @sexualsmile
      @sexualsmile 2 years ago

      You didn't even address my initial comment either. 😆

  • @dougsellner9353
    @dougsellner9353 3 years ago +5

    Q: Wouldn't you want to use MetalLB and then perhaps Traefik for a more robust ingress/load balancer? (love your stuff)

    • @ThePandaGuitar
      @ThePandaGuitar 3 years ago +3

      Traefik is built into k3s

    • @dougsellner9353
      @dougsellner9353 3 years ago

      @@fuzzylogicq It is easier because Traefik v1 is included; however, the current version is v2. So I would suggest toggling the option not to install Traefik, then installing v2 via Helm charts. From there you can permit it to route anything, including individual static routes with many options, via the Traefik CRD on your ingress and/or inside the Traefik config/CRD. Mastering ingress/Traefik can be challenging but is perhaps one of the most important learning steps of K8s.
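
      A rough sketch of that approach (the Traefik Helm repo URL is the one I believe they publish, so double-check it; the k3s flag can also go in the config file instead):

        # install k3s without the bundled Traefik
        curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -s - server

        # then install current Traefik v2 with Helm
        helm repo add traefik https://helm.traefik.io/traefik
        helm repo update
        helm install traefik traefik/traefik --namespace kube-system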

    • @eduardmart1237
      @eduardmart1237 1 year ago

      @@dougsellner9353 Isn't MetalLB better overall than Traefik?

  • @coletraintechgames2932
    @coletraintechgames2932 3 years ago

    Trying to figure out how to type this.
    FIRST: your videos are the best. Most pertinent to my interests. Most aligned with my current configuration. Most detailed.
    But I'm not at your level. I don't fully understand nginx, MySQL, certs, etc.
    Foundation issues... but I'll get there! Thanks for your help.
    Saying thanks, and giving feedback for newbs like me!

    • @TechnoTim
      @TechnoTim  3 years ago +1

      You can do it!

    • @coletraintechgames2932
      @coletraintechgames2932 3 years ago

      @@TechnoTim I heard that in a Schwarzenegger voice. I do love Schwarzenegger. Lol

  • @unmetplayer2727
    @unmetplayer2727 4 years ago +1

    With all the problems with Google Photos at the moment, a video on something like PhotoPrism would go a long way. Thank you for continuously making quality content, keep up the good work!

  • @AndrewBradTanner
    @AndrewBradTanner 2 years ago

    FYI, explaining the HA server configuration at a high level would have prevented me from going down a rabbit hole of silliness trying to set up an HA configuration without the datastore. Specifically the part about the HA server configuration requiring the datastore. Still, as someone new to K8s, very helpful!

  • @abrahamlora3650
    @abrahamlora3650 1 year ago

    Hey Tim, you're an inspiration for sure!
    I have been consuming content for a long time and I am also looking to start paying that forward soon.

  • @clumbo1567
    @clumbo1567 3 years ago +3

    Would love to see a video on Metallb and OpenBGPD

  • @shellcatt
    @shellcatt 2 months ago

    I tried to find material on performing rolling updates with k3s, but nothing comes up. Some say that k3s can't do rolling updates. I'm confused: high availability, but no rolling updates...
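
    For what it's worth, rolling updates of your workloads are a standard Kubernetes Deployment feature rather than anything distribution-specific, so they should behave the same on k3s. A minimal sketch (deployment/container names and the image tag are placeholders):

      # roll a new image out across the replicas and watch it happen
      kubectl set image deployment/my-app my-app=nginx:1.27
      kubectl rollout status deployment/my-app

      # roll back if something looks wrong
      kubectl rollout undo deployment/my-app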

  • @hackula8210
    @hackula8210 3 years ago +1

    Though I have learned a lot from your videos, I wish you would come back to this video and go from A to Z, for alas I still cannot get it to work.
    1. Start with the load balancer once the VMs are up and IPs are available.
    2. Installation of MySQL or MariaDB on a separate VM, and show what app you use (DBeaver).
    3. Show the installation and state what version of Docker you are putting on the master and worker nodes.

    • @TechnoTim
      @TechnoTim  3 years ago

      Thanks for the feedback! I try to break them up, otherwise they would be an hour long and less consumable. Hop in our Discord or live stream for questions.

  • @yeezul
    @yeezul 4 years ago +4

    Great tutorial! Been waiting for you to do something like this for a while.
    Thanks!
    Ps: your tutorials are amazing. Well structured and easy to follow.
    Keep up the good work!

  • @PascalBrax
    @PascalBrax 1 year ago

    I know it's just for demonstration purposes, but it's kinda funny you're using Proxmox to run full VMs for Kubernetes and the load balancer while Proxmox has a shiny button for creating LXC containers right on the dashboard.

  • @nbensa
    @nbensa 2 years ago +1

    Hi Tim! Do you have a tutorial on load balancing mysql servers? Thanks!

  • @robertdilworth1105
    @robertdilworth1105 3 years ago +1

    Tim, great video! However, this took me days, not minutes, with many issues. Note that the uninstall scripts don't clear out the database, which was my problem. Once I did that and re-installed k3s, everything was fine.

  • @betterwithrum
    @betterwithrum 2 years ago

    I just noticed you and Zack Nelson (JerryRigEverything) on RUclips have the same exact cadence when speaking. It's very unique, soothing, and somewhat unsettling (because of the timing). Moreover, it holds my attention. Not because of the material but because of the almost 5/3 timing of your speech pattern.

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Maybe I can work on the material holding your attention too😂! Thank you!

    • @betterwithrum
      @betterwithrum 2 years ago

      @@TechnoTim The material is pretty good too. I'm learning a bunch. Following your guide right now on setting up my K3S environment. Thank you!

  • @GabREAL1983
    @GabREAL1983 4 years ago

    You're becoming my favourite YT channel for this kind of stuff... a lot of others are just super annoying or try to sell you stuff all the time, like Network Chuck etc.
    This is really useful stuff.

    • @TechnoTim
      @TechnoTim  4 years ago +3

      Thank you. I love Network Chuck. Learned a ton of new topics from him!

  • @oughtington1628
    @oughtington1628 4 years ago +2

    Please make a video on how to allocate CPU and memory resources! I'm finding it hard to find a balance and know what to watch for in an HA cluster

    • @TechnoTim
      @TechnoTim  3 years ago

      I just made a video on Monitoring and Alerting, check it out, it might help!

  • @nischalstha9
    @nischalstha9 2 years ago

    So well explained! Got my cluster up and running so well!!

  • @bflnetworkengineer
    @bflnetworkengineer 11 months ago

    Any drawbacks to running the "load balancer" in front of the masters as, say, a standalone management server running Docker that not only runs the nginx container but also the MariaDB container to serve as the datastore DB? I wouldn't think so personally, but if we're talking about spinning up a fresh k3s cluster, it'd be nice to have everything under one roof. Great video BTW!

  • @squalazzo
    @squalazzo 4 years ago +3

    You say you have MySQL on a load balancer, a similar setup to the nginx one you showed in the video?
    And I'm setting up my test lab on my i7 with 32 GB using Proxmox: what do you think of creating an NFS share for the storage part on the host itself, mounted on the agent nodes, so the storage is sure to be available way before the nodes are started?
    And, still, how do you set up the boot order, and which is the correct one for all those VMs in Proxmox?
    Thanks!

  • @DJ-Manuel
    @DJ-Manuel 4 years ago +3

    It always sounds easier when you explain it, but after I try it out it looks waaaay harder 😅👍
    Thank you anyhow for this video

    • @TechnoTim
      @TechnoTim  4 years ago +1

      You're welcome 😊

  • @michaelb8302
    @michaelb8302 2 months ago

    When you set up the taint on the control plane nodes, this also prevents monitoring pods from scheduling to them (node-exporter). How could I configure the taints such that general workload pods aren't scheduled to the control plane but, say, monitoring-related pods are?
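
    One way (a sketch; the namespace, DaemonSet name, and taint key are assumptions for a typical kube-prometheus-stack install, so adjust them to your setup) is to keep the control-plane taint and give only the monitoring DaemonSet a matching toleration:

      kubectl -n monitoring patch daemonset node-exporter --type merge -p '
      spec:
        template:
          spec:
            tolerations:
              - key: "k3s-controlplane"   # whatever key you used in --node-taint
                operator: "Exists"
                effect: "NoExecute"       # match the taint effect you applied
      '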

  • @AsfarAsfar-g7b
    @AsfarAsfar-g7b 1 year ago

    Thanks for the video. How do I set up an ingress controller for path-based routing on a k3s cluster? Any documentation?
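
    k3s bundles Traefik as its ingress controller, so for plain path-based routing a standard Ingress is usually enough. A sketch (hostname and service names are placeholders, and the ingress class name assumes the default Traefik install):

      echo '
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example-paths
      spec:
        ingressClassName: traefik
        rules:
          - host: apps.example.com
            http:
              paths:
                - path: /api
                  pathType: Prefix
                  backend:
                    service:
                      name: api-svc      # placeholder service
                      port:
                        number: 80
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: web-svc      # placeholder service
                      port:
                        number: 80
      ' | kubectl apply -f -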

  • @JiffyJames85
    @JiffyJames85 3 years ago

    Can you demonstrate serving a simple webpage without needing to exec or proxy? Traefik is preinstalled, but maybe showing it with an nginx installation would be more helpful.

  • @weitanglau162
    @weitanglau162 3 years ago

    Great video!
    I am planning to deploy a Kubernetes cluster using k3s and I have some questions. Hope you will reply!
    1) Should the external datastore (e.g. MySQL) be on a separate VM, or will living inside one of the master nodes do?
    2) If I am planning on using the Nginx Ingress Controller (instead of having an external LB like you showed here), how should I go about doing this? Or are they actually different things?

    • @TechnoTim
      @TechnoTim  3 years ago +2

      1) Your database should not live in your cluster.
      2) This load balancer is not my ingress controller; k3s comes with Traefik for that.
      Hope this makes sense

  • @luca-leonhausdorfer8814
    @luca-leonhausdorfer8814 3 years ago

    Hi Tim,
    what are you hosting in your k3s Kubernetes cluster?
    Can you elaborate on the load balancers (both internal (Traefik, etc.) and external (nginx))? How do I configure the kubectl load balancers along with the Rancher HA LB? And why are the svclb pods in the k3s cluster not rolled out properly when I install an app via the Apps tab in the menu?

  • @agirmani
    @agirmani 2 years ago

    What exactly is the purpose of this "external datastore"? What is being stored in it?
    If I have an application running on my cluster, is there anything preventing me from talking to an external database not set with the --datastore-endpoint option??

  • @Clocen
    @Clocen 4 years ago

    Thanks for the guide Tim! As Tim said, pay attention to the database collation (latin1_swedish_ci). This can cause issues when deploying the server nodes that only show up in /var/log/syslog.
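
    If you are creating the datastore database yourself, one way to avoid the collation surprise is to set it explicitly up front; a sketch (the database name "k3s" and the root login are placeholders):

      mysql -u root -p -e "CREATE DATABASE k3s CHARACTER SET latin1 COLLATE latin1_swedish_ci;"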

    • @TechnoTim
      @TechnoTim  4 years ago +1

      Thank you! I figured it was better to know for sure but was unsure if it affected anything. Thanks for confirming!

  • @mbigras
    @mbigras 3 years ago

    Excellent video! Loved the pace and quality of information. I subscribed and am looking forward to browsing around your channel further! Thank you Tim 👍

  • @Error_404-F.cks_Not_Found
    @Error_404-F.cks_Not_Found 3 years ago

    I just wanted to say how much I love your blog. The documentation that goes along with the videos is spot on. And I love the layout. May I ask what it's built on? It definitely doesn't look like WordPress.

    • @TechnoTim
      @TechnoTim  3 years ago +1

      Thank you! It’s open source too. You can clone and fork it! It’s in my GitHub which is included in the blog too!

  • @jeffherdzina6716
    @jeffherdzina6716 4 years ago +1

    Sounds like I might have a new project at work on Monday. Thx!!

  • @dtippit324
    @dtippit324 1 year ago

    Tim,
    I was watching your k3s spin-up video, but I did not see the VM config for the 6 VMs.
    Currently I have 6 Ubuntu VMs with 1 socket and 1 CPU each and a 60 GB drive, but there was no VM spec defined in the video.
    Am I on the right track?

  • @eLCe_Wro
    @eLCe_Wro 3 years ago +1

    Hey Tim. I have a question about '12:17 - Get our k3s kube config and copy to our dev machine'. What machine is that? I guess it's not an agent? What do you mean by 'dev machine'?

    • @TechnoTim
      @TechnoTim  3 years ago

      Get the kube config from one of the servers and copy it to your local machine
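
      A sketch of that step, run from the local machine (the hostname and LB IP are placeholders; the file is root-owned on the server, so you may need to adjust permissions or copy it via sudo first):

        scp user@k3s-server-1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
        # point kubectl at the load balancer (or a server) instead of 127.0.0.1
        sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/config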

  • @itskagiso
    @itskagiso 2 years ago +1

    Is there a way to run the external DB in HA as well? So a replicated MySQL DB in case one of them fails?

    • @eduardmart1237
      @eduardmart1237 1 year ago

      Yep. It would be great if there were a thorough guide on how to do it.

  • @BrunoLessa
    @BrunoLessa 1 year ago

    Very clear to me. The only part that I didn't get was how you defined your load balancer IP. Newer versions of k3s come with Traefik, and it assumes the load balancer IP is the IP of the server you are creating the cluster on. How do I define a separate, dedicated IP for the default Traefik load balancer?

  • @itsathejoey
    @itsathejoey 2 years ago

    When you mentioned setting up a TCP load balancer with nginx, how would that work for DNS, say with Pi-hole for example, where you use UDP as well?

  • @yongshengyang8144
    @yongshengyang8144 3 years ago +1

    Hi, can you post a video about storage in Rancher? How to set it up in Rancher and how a database would use it. Thanks

  • @bitsbytesandgigabytes
    @bitsbytesandgigabytes 3 years ago +1

    Great tutorial as ever Tim, love the bit from the live stream at the end too. It's always good to pay it forward. Quick question, were all the nodes virtualised in this tutorial or cloud hosted?

    • @TechnoTim
      @TechnoTim  3 years ago

      Thank you! All virtualized in my proxmox cluster!

  • @wstrater
    @wstrater 4 years ago

    I am adding my vote for more on external load balancing. It sounds like you are running 3 different load balancers, for the API, NodePorts and MySQL. I suspect the API and MySQL ones would be similar, but you may want a layer 7 load balancer for the NodePorts. Are you running one load balancer or multiple?

    • @TechnoTim
      @TechnoTim  4 years ago +1

      I’ll be running 2, like in the diagram.

  • @charlesrodriguez3657
    @charlesrodriguez3657 4 years ago +2

    Could you do a video using K3OS?

  • @AlexanderDockham
    @AlexanderDockham 3 years ago

    The main confusion for this guide is when you are suddenly able to hit the k3s dashboard from localhost...
    After a couple of hours, I finally figured out how to install/configure kubectl on Windows 10. It wasn't exactly straightforward, but it could do with a mention next time.
    (total time for me to complete everything you did in the video was 10 hours)

    • @TechnoTim
      @TechnoTim  3 years ago +1

      Sorry, yeah, if you are going to run kubectl from Windows I highly recommend WSL. Then everything should just work. The proxy command should work too, however I'm not sure with WSL2 and its odd networking.

    • @AlexanderDockham
      @AlexanderDockham 3 years ago

      @@TechnoTim I missed what it was when you said it, but I ended up getting there eventually.
      Seriously, thank you so much for these videos, they're amazing at getting through the majority of what needs to be done for these kinds of setups. Plus your documentation is a fantastic bonus!
      (pretty sure I'm like... at least 50 of the views on this video, given the number of times I've run through some of the steps after breaking everything over and over)

  • @oldmanscreaming
    @oldmanscreaming 2 years ago

    Why did you use an even number of worker nodes? I'm learning, and I read online that we are supposed to use an odd number of worker nodes to help with availability

  • @Acentia
    @Acentia 3 years ago

    I think you forgot, or at least didn't show, the last load balancer that would access the 20 pods.
    Would you just create it like the first one, just pointing at the agents/workers, and then set up an ingress?

  • @chadmccluskey6465
    @chadmccluskey6465 3 years ago

    Tim, great video. Kubernetes in minutes is a bit of a stretch though, hahaha; I say that after the 8 hours I just spent trying to get this up. I am stuck at getting the dashboard on my Windows machine. Are you using a Linux machine to install the dashboard, or one of the Windows options? Also, what GUI are you using for the MySQL DB? What would help me is if you were just a little more specific/clear on which machine you are installing the components. Don't get me wrong, you're the best thing going on RUclips; I have no idea how you do it all, you put in a ton of hard work!! Truly grateful for your content!!

    • @TechnoTim
      @TechnoTim  3 years ago

      I would skip the k8s dashboard and get rancher instead :) I use HeidiSQL for my SQL client on Windows. Thank you!! ruclips.net/video/APsZJbnluXg/видео.html

  • @reesejenner3594
    @reesejenner3594 3 years ago

    What do you do when you have multiple nodes running the same application code with user-generated content?
    Do you need to use one application DB that all the nodes will use for your application? (Not referring to k3s itself.)
    How do you handle uploads? Shared storage that the application uses (across all nodes)?
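
    A common pattern for the uploads part is a ReadWriteMany volume that every replica mounts, backed by NFS (for example via the provisioner discussed earlier in the thread). A sketch, assuming a storage class named nfs-client exists (name and size are placeholders):

      echo '
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: app-uploads
      spec:
        accessModes:
          - ReadWriteMany            # shared across all nodes/replicas
        storageClassName: nfs-client # placeholder: your NFS-backed storage class
        resources:
          requests:
            storage: 10Gi
      ' | kubectl apply -f -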

  • @AndresGorostidi
    @AndresGorostidi 3 years ago +1

    Tim, this is a great video, thanks a lot! BTW, I have a question about the DB. You created an HA cluster with 3 servers, but isn't the DB a point of failure? Wouldn't it be more appropriate to have an etcd DB created on each master and have them synchronized between them?

    • @TechnoTim
      @TechnoTim  3 years ago

      Sure, you can make your database HA. This is the HA install of k3s vs single node. Making your DB HA will be up to you. I've used etcd too and it has its own problems.

  • @milakhan8734
    @milakhan8734 3 years ago

    Great video! I was trying to follow along, however my agents are not joining the cluster; it returns "Failed to connect to proxy". Do I have to set up the k3s external IP? If so, do I do it on one of my servers and put in the LB's IP? Thank you.

  • @Evteboda
    @Evteboda 2 years ago

    Looks fine, but I don't think it has any advantages over Docker Swarm for simple projects

  • @anthonymacle1880
    @anthonymacle1880 2 years ago

    Very clear explanations, your channel is a gold mine!! I was curious though...
    Is it possible to include a server (let's name it Alpha 1) which already has Docker applications running on it in a k8s cluster, so that these already existing applications could be automatically recreated on any of the cluster "worker nodes" should a problem occur? I understand that K8s solves this issue, but my question really emphasizes the fact that the applications on the server "Alpha 1" were deployed before the cluster was created... In a nutshell, I would like to know if it is possible to include a standalone server in a cluster and make sure its already existing applications can be handled within a freshly created K8s cluster. I hope my question is clear.

  • @ohwii
    @ohwii 3 years ago

    If I understand correctly, the external database and load balancer should reside on other machines and not be on the master nodes? Or are they only supposed to be outside the Kubernetes cluster?

  • @AxelWerner
    @AxelWerner 3 years ago

    NICE kickstart presentation!! THANKS! Especially for using 100% of your available screen area and a proper font size!! However, there are important points for me to put my finger on: where exactly is the valuable DATA of the apps I deployed stored? Are my files "highly available" too? And what about your "single point of failure" services, like your separate MySQL DB and nginx load balancer? Shouldn't it be possible to add or somehow "migrate" these services onto your k3s HA cluster?

    • @TechnoTim
      @TechnoTim  3 years ago

      Thanks for the tips!

  • @whatthefunction9140
    @whatthefunction9140 11 months ago

    You set up 2 master servers. You said the token is the same on each. Mine is different on each. How do both servers know about each other?

  • @boriss282
    @boriss282 11 months ago

    @TechnoTim Thanks for the great video. Is the nginx load balancer only in NGINX Plus, and not in the free version of nginx?

    • @TechnoTim
      @TechnoTim  11 months ago +1

      I think so, but it's included in the Docker image last I checked!

  • @xSnake75
    @xSnake75 4 years ago

    Man, really awesome tutorial!!! That was absolutely clear and easy to understand!! Keep going with your work, and I hope one day we can share more experiences like this one! Only one thing that I've missed: is your load balancer another VM or CT on your Proxmox server, or is it installed inside each k8s server? Cheers man! Awesome job!

    • @TechnoTim
      @TechnoTim  4 years ago

      Thank you! My LB is nginx running outside of the cluster (because I want to be able to communicate with it if the nodes are down). To make things more manageable, my nginx runs in Docker on another VM, but it doesn't have to.

  • @theobserver_
    @theobserver_ 8 months ago

    @TechnoTim - this is an old video, but I was wondering: what are you running postgres on? Is it a container or a VM, and in either case, how do you ensure that it is not your SPOF for k3s?

    • @TechnoTim
      @TechnoTim  8 months ago +1

      Now I run it in Kubernetes, but I am running the etcd version of k3s (linked in the description). Otherwise you'll have to build a MySQL cluster for HA

  • @madhudson1
    @madhudson1 2 years ago

    Having real issues with k3s connectivity into a postgres container. Tested connectivity with other apps and they're fine

    • @TechnoTim
      @TechnoTim  2 years ago

      Check out my latest video on installing k3s with etcd, it's 100% automated

  • @eduardmart1237
    @eduardmart1237 1 year ago

    Can you make a guide on how to install k3s in an air-gapped environment (without internet access)?

  • @jakubprogramming29
    @jakubprogramming29 2 years ago

    What a great video. Very much wow. Love your style of presenting the content. Sweet sweet.

  • @jairomartin645
    @jairomartin645 3 years ago

    I have a question with regard to the load balancer used to access the Kubernetes servers... What happens if the load balancer goes down? How will you be able to access your Kubernetes cluster? Along the same lines, what happens if the DB goes down? Are these two components single points of failure?

    • @TechnoTim
      @TechnoTim  3 years ago

      If it’s down, point your kube config directly at one of the k3s servers instead of the load balancer. If DB goes down, k3s goes down.

  • @benjamincabalonajr6417
    @benjamincabalonajr6417 2 years ago

    What's the difference between this setup and using Rancher, then adding nodes as workers to that?

  • @reasonmath
    @reasonmath 1 year ago

    So what is the difference between setting up the cluster this way vs the rancher cluster in the portal?

  • @leonpinto5693
    @leonpinto5693 3 years ago

    Hello... Great tutorial... I'm new to k3s and feeling my way around... could you kindly point to the kubectl installation link for the dev machine? You did mention you had done this in an earlier video... Trying to look it up but not finding it... Probably me not looking correctly...

  • @Equality-and-Liberty
    @Equality-and-Liberty 3 years ago

    What is the average sizing for the k3s servers and the agents? I am talking about running VMs on Proxmox. Or is there any documentation I can read to see how many vCPUs and how much memory I have to give each VM? I ask because I found out that Proxmox is using at least one third of the total installed memory, and that's without even one VM running.

  • @asdflkj3809fjlkd3
    @asdflkj3809fjlkd3 3 years ago

    Keep up the good work Tim, excellent content. Thanks so much!

  • @kaidobit6954
    @kaidobit6954 3 years ago

    Hey, I'm running k3s in Proxmox LXCs.
    I have these issues:
    - none of the mandatory pods are being scheduled on the first master (traefik, coredns and metrics are all in pending state)
    - when joining a second master, k3s won't start on the second master
    - when joining a worker node, still nothing is being scheduled
    - when getting the nodes I just get the response 'No resource found'; it's not even showing the first master I'm running the command on

    • @TechnoTim
      @TechnoTim  3 years ago

      Sorry, I recommend against running k3s in lxc. You are basically containerizing a container platform. Nothing but troubles :)

    • @kaidobit6954
      @kaidobit6954 3 years ago

      @@TechnoTim I see thank you :)

  • @QuantumDrift-u5k
    @QuantumDrift-u5k 3 years ago

    Great tutorial as always!
    Question: is it possible to specify multiple entries for the --tls-san option, i.e. an IP address and a domain name? If so, how would that be done?
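
    As far as I know the flag can simply be repeated, once per extra name or IP; a sketch (the values are placeholders):

      curl -sfL https://get.k3s.io | sh -s - server \
        --tls-san 192.168.1.10 \
        --tls-san k3s.example.com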

  • @fgamberini2
    @fgamberini2 4 years ago

    Thanks for the very clear video. One thing I did not get though is how to set up the networking to access the service (the nginx pods, i.e. the "hello nginx" page) from a client (let's say from your "personal host" machine). You are indeed showing that the nginx server is running, but I'm not sure how it can be accessed from the outside network (maybe it is the LB on the right in the initial diagram, facing the external network?)... Is there some doc you can point me to?

    • @TechnoTim
      @TechnoTim  3 years ago

      Yes, exactly. You'll need to set up an ingress controller and MetalLB if you are going to expose these services outside of k3s

  • @ralmslb
    @ralmslb 10 months ago

    One thing that I feel could have had better explanation and consideration is the database.
    The video recommends having 2x k3s server machines that use MySQL as the datastore.
    However, if we end up using a single MySQL database, doesn't that become a single point of failure?

    • @TechnoTim
      @TechnoTim  10 months ago

      Yes, you will need to make your SQL DB HA by running replicas.

    • @ralmslb
      @ralmslb 10 months ago +1

      @@TechnoTim Would love to have a video on that; it's something I was wondering about, the most efficient way to do it while having 2 to 3 server nodes lol

  • @darkgodmaster
    @darkgodmaster 3 years ago

    Having quite some issues configuring the nginx load balancer, wouldn't mind a video on that.
    I'll probably sort it out before that comes out, but it would be nice to have

    • @TechnoTim
      @TechnoTim  3 years ago

      Thanks! I have an example in the docs!

    • @MrVejovis
      @MrVejovis 3 years ago

      @@TechnoTim Do you have a link to a docker-compose.yml for the nginx setup you have here?
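
      Not an official file from the video, but a rough sketch of how it could be run (the official nginx image should include the stream module; nginx.conf would be a stream config like the one sketched earlier in the thread, and the names/ports are placeholders):

        echo '
        services:
          k3s-lb:
            image: nginx:stable
            restart: unless-stopped
            ports:
              - "6443:6443"
            volumes:
              - ./nginx.conf:/etc/nginx/nginx.conf:ro
        ' > docker-compose.yml
        docker compose up -d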

  • @soreLful
    @soreLful 3 years ago

    Hey Tim, great video. In fact, great channel; I love all your videos, they are very well done.
    I plan to build myself a Proxmox HA cluster out of a few machines I have lying around at home and then build a k3s HA cluster like you show in this tutorial. However, there is a question that has puzzled me since I first saw your tutorial and keeps me from starting the work on this: I could do this directly on the metal, with a Linux server + k3s, and not bother with Proxmox.
    What's the advantage of doing it with Proxmox, and is it worth it?

    • @TechnoTim
      @TechnoTim  3 years ago

      You can always go bare metal. I just use a hypervisor to virtualize all the things! That way I can share resources.

  • @hubertnelzi7560
    @hubertnelzi7560 3 years ago

    Hey Tim! Thank you, but something I don't understand: is it necessary to install k3s like you show here if Rancher is already installed (with k3s in it) as a container (your first video with Rancher, Kubernetes and the Minecraft server)? There's something I misunderstood...

    • @TechnoTim
      @TechnoTim  3 years ago

      This is the process for installing k3s (Kubernetes) first and then installing Rancher inside of it. It's the HA way of installing Rancher. The Docker way is a non-HA Rancher (but HA services if you add more nodes).

  • @Jimmy_Jones
    @Jimmy_Jones 4 years ago

    What are the benefits of running an external load balancer? Is the Kubernetes one more difficult? I know I never understood it.

    • @TechnoTim
      @TechnoTim  4 years ago

      I think you might be referring to the Ingress Controller? See the load balancer and architecture section in this video for an explanation; it's there so we can use the Kubernetes API from the outside, regardless of which server is up/down.

  • @vinc1793
    @vinc1793 4 years ago

    Awesome vids! Thx!
    It is very funny because I'm literally doing this at work and have been for a couple of months.
    I've been running Rancher 1.6 for a couple of years now:
    1 virtualized/backed-up single node at work for dev purposes.
    HA install on production webservers.
    We've been meaning to move up to Rancher 2.0 at work for a while now, but...
    I only have two physical servers for webserver production, and I ran into a breakdown in my head with K8s triple nodes and quorum things.
    The first plan was two etcd/ctrlplane fixed VMs on the hosts and a third etcd VM with vSphere HA balancing... a weird config.
    So here is my point: what do you think about k3s and production HA environments? It fits my physical infrastructure better, but it also seems like a young project.
    My first tests are mixed; I broke my k3s server with a local helm install (I know, not recommended, but I like to try and break things so I know how to get them stable afterwards XD).
    Again, thanks a lot for sharing all your work, very useful and interesting!

  • @alexh.3913
    @alexh.3913 2 years ago

    First: thanks Tim for your videos, very inspiring stuff you do with your homelab! And you have very good instructions!
    But I encountered a problem while following the steps in your video: I had a problem adding the second master node.
    It seems k3s in version 1.24.3 needs the token of the initial server set in the curl command. As far as I investigated, earlier versions did use an empty token by default, but that changed in a release after your guide.
    This token can be found with a cat /var/lib/rancher/k3s/server/token on the first master node and needs to be added to the curl command with a --token XXX... after the server part of the command.
    After I fixed that, I had to delete some outdated certificates on the new node, and et voila, my second node was up and running. Took me some time to figure that out. I don't know if that's a common problem, but maybe it's helpful!

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you! Yeah k3s changed how tokens are handled! If you want an automated install see my other video!

    • @timhowitz9405
      @timhowitz9405 1 year ago

      Thanks man, you saved my bacon! :)
      For anyone else with this problem, to remove the outdated certificates, run this command on the server(s) with the error:
      rm -r /var/lib/rancher/k3s/server/tls
      Then add the token into the curl command as Alex said, so your new command will look like this:
      ...server --token --node-taint...

  • @definat111
    @definat111 3 years ago

    I have an application and a database container running. I can pass the database connection URL as an environment variable to the web server, but with containers being such a volatile environment, the IP keeps changing every time it's rebuilt. How do you pass the IP of the database container to the web server?
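
    The usual Kubernetes answer is to not pass pod IPs at all: put a Service in front of the database and give the web server the Service's DNS name, which stays stable across rebuilds. A sketch (names, labels, and the port are placeholders):

      echo '
      apiVersion: v1
      kind: Service
      metadata:
        name: db
      spec:
        selector:
          app: my-database       # must match the labels on the DB pod
        ports:
          - port: 5432
            targetPort: 5432
      ' | kubectl apply -f -
      # the web app can then use DB_HOST=db (or db.<namespace>.svc.cluster.local)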

  • @SG-tq9tk
    @SG-tq9tk 3 years ago

    Great video!! Do you not need to configure MetalLB for the HA cluster? Is that what the external nginx provides instead? Can you use MetalLB instead of nginx?

    • @TechnoTim
      @TechnoTim  3 years ago

      You do need to configure MetalLB if you want to have an external load balancer.
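
      A sketch of the MetalLB side, assuming a recent MetalLB (v0.13+, CRD-based config) is already installed and the address range is free on your LAN; note that k3s ships its own ServiceLB (klipper-lb), which is typically disabled with --disable servicelb when using MetalLB:

        echo '
        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: homelab-pool
          namespace: metallb-system
        spec:
          addresses:
            - 192.168.1.240-192.168.1.250   # placeholder range
        ' | kubectl apply -f -

        echo '
        apiVersion: metallb.io/v1beta1
        kind: L2Advertisement
        metadata:
          name: homelab-l2
          namespace: metallb-system
        spec:
          ipAddressPools:
            - homelab-pool
        ' | kubectl apply -f -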

  • @jharding65
    @jharding65 3 years ago +1

    Tim, love the vids. Great content. My company deploys 30-plus microservices, and as a team lead I would like to find an inexpensive solution for my team for debugging k8s on development laptops. k3s seems like a great candidate for this, considering its lightweight footprint. At the moment we use Docker and docker-compose to model the k8s setup for the core 5-6 services that handle the majority of the work. I want my devs to understand how k8s works; knowing how Docker works is great, but it is not the same as k8s. Q: Have you compared Docker for Windows w/ k8s vs. k3s?

    • @TechnoTim
      @TechnoTim  3 years ago +1

      I would use WSL and take Windows out of the equation for local dev. Then it's just like a Linux server. Thank you!

  • @HellStorm666NL
    @HellStorm666NL 3 years ago

    Hey Tim, thank you for this video.
    Can you please explain how to upgrade Traefik to the latest version? k3s uses 1.81 as the default and I want to use Traefik v2.3.
    Can I just edit the Traefik yaml, or can I make a new deployment?
    Also, how do I browse to the nginx test deployment? At this time only a curl from localhost works, not browsing to the load balancer IP.

  • @damu6678
    @damu6678 3 years ago +1

    For people who don't have a cloud account where they can spin up instances, can you show how to do this with docker containers?

    • @TechnoTim
      @TechnoTim  3 years ago +1

      I think you are looking for a single-node Rancher install then ruclips.net/video/oILc0ywDVTk/видео.html

    • @damu6678
      @damu6678 3 years ago

      @@TechnoTim No I was able to do pretty much your entire tutorial using k3d to create multiple server and worker nodes.