The FASTEST Way to run Kubernetes at Home - k3s Ansible Automation - Kubernetes in your HomeLab

  • Published: 25 Dec 2024

Comments • 327

  • @fahadysf
    @fahadysf 2 years ago +51

    This setup is pure gold. I can't thank you enough; within a day I've understood how MetalLB is the LB alternative for self-hosted / bare-metal Kubernetes deployments, and this playbook has saved me many, many expensive hours that would have been needed to get my test lab up. Can't thank you enough!

    • @TechnoTim
      @TechnoTim  2 years ago +4

      Thank you!

    • @bradwilson3766
      @bradwilson3766 2 years ago +1

      I second this! This made me join as a member.

  • @unijabnx2000
    @unijabnx2000 2 years ago +39

    As someone who has been working on deploying an OVA (including application setup after the VM deploys) with Ansible... I can appreciate how much work you've put into this.

    • @TechnoTim
      @TechnoTim  2 years ago +11

      Thank you! I am standing on the shoulders of giants who have built most of this out before me!

  • @angelgonzalez2379
    @angelgonzalez2379 1 year ago +1

    Wow I set up this cluster having almost no idea what to do with it.
    After setting up the cluster I relied on various blogs to get services running. I'm now at a point where I've set up services using only docker documentation, docker-compose files, and Kompose.
    My latest project has been delving into using BGP on MetalLB to be able to direct traffic from certain pods to my VPN.
    Thank you so much Tim!!!

  • @Nighteater333
    @Nighteater333 2 years ago +6

    Hello from the Czech Republic, thank you for your work. I am so glad that I have found your channel. It's very helpful.

  • @NerdzNZ
    @NerdzNZ 1 year ago +2

    O.M.G this was amazing, in a single night. I set up an Ubuntu Server cloud-init template in Proxmox, built 9 VMs (3 masters, 6 workers), and ran through this video to get a fully HA k3s Kubernetes cluster installed.
    The best part: I am a freaking n00b at all of this. Such a great teacher, love your work and I am looking forward to consuming more of your content.

  • @johnjbateman
    @johnjbateman 2 years ago +4

    I love that you OSS guys are using each other’s work and shouting each other out. Stream on!

  • @suikast420
    @suikast420 2 years ago +3

    Great talk dude. I am exactly on this. I want to provide a fully secured k3s cluster for airgapped environments (for industrial production, for example).
    The final setup should look like this:
    1. Private registry with SSL setup
    2. Provide docker on a builder node for remote builds
    3. Sec Stack
    3.1 Cert manager
    3.2 Keycloak as OIDC provider
    4. Monitoring Stack
    4.1 Grafana
    4.2 Loki
    Currently my repo is private because it is in dev. After my first release I will share it with you. Maybe we can do more together ;-)

  • @testdasi
    @testdasi 1 year ago +1

    Just want to provide a testament to how good and useful Tim's work is. I messed up my K3s cluster pretty badly, so I decided to reset from scratch. 15 minutes and 2 commands ("ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini" and then "ansible-playbook site.yml -i inventory/my-cluster/hosts.ini") and I have a fresh start. Another 15 minutes of "kubectl apply -f" to reinstate my deployment yamls and everything is back to its original working state.
    Thanks a lot mate!
    👍

    • @TechnoTim
      @TechnoTim  1 year ago

      Glad you liked it! Thank you!
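    The two-command reset-and-rebuild flow described in this thread can be sketched as a short shell sequence. The inventory path is the one quoted in the comment; the manifests directory is a hypothetical example of where your own deployment YAMLs might live:

    ```shell
    # Tear the cluster down, then rebuild it from the same inventory
    ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
    ansible-playbook site.yml  -i inventory/my-cluster/hosts.ini

    # Re-apply your own manifests afterwards (directory name is an example)
    kubectl apply -f ./manifests/
    ```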

  • @chrisdelucatube
    @chrisdelucatube 1 year ago +1

    Inconceivable!! This worked really well right out of the box, as promised. I had a 3-node k3s Raspberry Pi cluster up and running in minutes - and I love the Ansible add-in. I was vaguely familiar with Ansible from an introduction about a year ago, but this took my understanding to a whole new level. Thank you very much!

  • @dominick253
    @dominick253 1 year ago +3

    My issue was the wrong LAN network! Mine is a 10.x network, not a 192.x. The three masters would work and join, but it'd hang on joining the agents. I changed everything, including the VIP and Flannel IP ranges, to my LAN and it worked like a charm! Also, I was using -K for the become password, but I tried it without that and it worked. Hope this helps anyone out who may have the same problem. Thanks for the work to get this going!!!

    • @gibransvarga8487
      @gibransvarga8487 1 year ago

      did you make the k3s API accessible from the internet?

    • @JamesBond-re2nt
      @JamesBond-re2nt 3 months ago

      In the case of using DigitalOcean virtual machines, how do I know which virtual IP and LB IP ranges will work properly?
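    The subnet mismatch described in this thread is cheap to sanity-check before running the playbook. A minimal Python sketch using the standard ipaddress module (the addresses below are hypothetical examples, not values from the video):

    ```python
    import ipaddress

    def in_lan(addr: str, lan_cidr: str) -> bool:
        """True if addr falls inside the given LAN subnet."""
        return ipaddress.ip_address(addr) in ipaddress.ip_network(lan_cidr)

    lan = "10.0.0.0/24"  # your actual LAN, not the repo's example network

    print(in_lan("10.0.0.30", lan))       # candidate apiserver VIP -> True
    print(in_lan("192.168.30.222", lan))  # wrong subnet: agents would hang -> False
    ```

    Run the same check against the apiserver VIP and every address in the MetalLB range before deploying, and the "masters join but agents hang" symptom never gets a chance to appear.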

  • @ryancoble8776
    @ryancoble8776 2 years ago

    I absolutely love you. I was following all your old stuff and just beating my head into my desk trying to get it all to work right for my situation. Thank you so much for this. This needs to be the first result when you search YouTube for k3s setup, for real.

  • @syedmwma
    @syedmwma 2 years ago +2

    Thanks! This will help spin up my Raspberry Pi clusters. Not having to use an external load balancer for Kubernetes is awesome! Thanks again.

  • @conorkeane
    @conorkeane 11 months ago

    Thanks Tim! Got a technical interview on Tuesday and you've just helped me prep for it!

  • @mitchross2852
    @mitchross2852 2 years ago

    There are a few assumptions in this video that a beginner will bang their head against the wall for days. I finally got this all working. This is awesome.

    • @TechnoTim
      @TechnoTim  2 years ago

      Nice work!

    • @TechnoTim
      @TechnoTim  2 years ago

      Out of curiosity, what were they?

    • @mitchross2852
      @mitchross2852 2 years ago

      @@TechnoTim
      If you're open to Discord DMs, I'll send you a friend request. I have some general feedback I think could help your channel / others troubleshoot. Else I can comment here, let me know!

  • @jpb2085
    @jpb2085 1 year ago +1

    So so awesome, just tried this out and works so well. Thanks for the supporting documentation as well!

  • @kgottsman
    @kgottsman 2 years ago +5

    Really killing it content wise... Your videos have been so helpful lately.

  • @spikeukspikeuk
    @spikeukspikeuk 1 year ago

    Hey Tim. I did see this some time ago and tried to run it, but had some issue. Don't remember what it was now, so it went on the back burner. Just updated the repo and ran it, and it works perfectly. Lots of thanks for this.

  • @ToGoMania19
    @ToGoMania19 1 year ago +1

    Thanks!

  • @ArifKamaruzaman
    @ArifKamaruzaman 1 year ago +2

    Hardware Haven sent me to a real expert 👍

  • @MonkeyD.Dragon
    @MonkeyD.Dragon 2 years ago +42

    Please add Rancher to this

    • @coletraintechgames2932
      @coletraintechgames2932 2 years ago +1

      Great idea

    • @JohnWeland
      @JohnWeland 2 years ago +1

      This! I wonder how this works in Rancher. Your preconfigured nodes - were they created in Rancher or by hand?

  • @willdrumforfood7371
    @willdrumforfood7371 2 years ago +5

    This is a super helpful video, thank you for putting this together! It would be great to have a followup where you add in some applications or perhaps even a container of one of your own apps. Thank you for all these great and helpful videos!

    • @TechnoTim
      @TechnoTim  2 years ago +3

      Great suggestion!

    • @thetruth3107
      @thetruth3107 2 years ago +1

      I agree with OP; a full prod WordPress / email server would be great...

  • @troybrocato
    @troybrocato 1 year ago

    This truly is pure gold. The only thing I would add to this is to also have forks for different hypervisors. Ansible is very friendly with all hypervisors and can create the k3s VMs automagically.

    • @TechnoTim
      @TechnoTim  1 year ago

      Thank you! This is hypervisor agnostic and even works with bare metal!

  • @SpadeQc123
    @SpadeQc123 2 years ago +4

    Amazing vid as always Tim! I did the same setup but with a Debian cloud image instead, and it works great.

    • @annusingh4694
      @annusingh4694 1 year ago

      Can you please share more? Did you set it up remotely?

  • @rickhernandez2114
    @rickhernandez2114 2 years ago

    dude!!
    I'm using Rocky Linux and this installed like an absolute dream.
    I'm ready to stop having all these pets in my homelab. Time for the cattle.
    Thank you

  • @zoejs7042
    @zoejs7042 2 years ago +17

    Tim, great work here. I'd really like to show you a similar way I did this with custom k3os images, proxmox and terraform.

    • @chrisa.1740
      @chrisa.1740 2 years ago +5

      I find this interesting. I have been working on a combination of Terraform and Ansible to spin up a k3s cluster on the Oracle Cloud Free Tier. Would love to see your ideas on how to get this working.

    • @TVfen
      @TVfen 2 years ago +1

      Me too. I'm working with a weird cluster of Raspberries, x32 laptops, and an x64 mini-PC, with Proxmox, k3s, Terraform, cloud-init, and Ansible.
      And I'm interested in your project too; could you leave us a link to it? (GitHub, a blog, even a Google Doc would be good.)
      Thanks!

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Thank you! Sounds awesome. I’ve made this to be a building block that can fit into any infra automation ☺️

    • @zoejs7042
      @zoejs7042 2 years ago +3

      @@chrisa.1740 okie, i'll make a video outlining how i did it :)

    • @2_wheels_fun
      @2_wheels_fun 2 years ago

      @@zoejs7042 Sounds good

  • @jabujavi
    @jabujavi 1 year ago +1

    Hardware Haven sent me to a real expert 👍

  • @xandercode
    @xandercode 2 years ago

    12:50 "Super, super awesome" - Tim's smile says it all. 😁👍 Well done; just downloaded to try out now. Thanks Tim

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you so much! I am glad you liked it. Let me know how it works out!

  • @jacktsmart
    @jacktsmart 2 years ago

    This is an awesome video. I have understood how HA and k3s work, but never understood how you could access the webserver from a single IP. Keep up the awesome work!

  • @coletraintechgames2932
    @coletraintechgames2932 2 years ago +1

    DUDE! I have not watched this video... But this seems exactly like a video I have needed for ever.... 4-ev-er.
    You're my boy, Blue!

  • @lechaldon
    @lechaldon 2 years ago +2

    Mate, thanks for this, now I need to go figure out how you wrote this playbook so I can understand how it all operates and how k3s works. I plan to migrate my entire docker-compose stack to HA k3s and this is perfect. Thanks again!

  • @Rockshoes1
    @Rockshoes1 5 months ago

    Works 100%. Running two masters on Debian and 1 master on a Debian VM.

  • @hotrodhunk7389
    @hotrodhunk7389 1 year ago +2

    So quick! Only spent a week to get it to work in a few minutes 😂😅😂

  • @meroxdev
    @meroxdev 11 months ago +1

    Is it possible to run Longhorn in an LXC container?

  • @andrewjohnston359
    @andrewjohnston359 2 years ago +21

    It would be great if you could begin testing these setups in a real-world environment. For instance, putting each VM on separate physical Proxmox nodes. Then testing both performance and data integrity of the HA mysql/mariadb, and maybe some kind of PHP web application - and start powering off VMs mid-use (while writing and reading lots of data). Any load balancer can show a static nginx page - but when you have fast-changing application data tied to user sessions (you'll want redis for this), all of a sudden you realise these magic clusters aren't as magic as they make out. Also, SQL databases have clustering capabilities baked in (sharding) - and the SQL agent itself is aware and in control of ensuring the integrity of the cluster. How do containers in a cluster ensure database integrity? I personally love the idea of a self-scaling, self-healing, automagical, high-performance container cluster - but all you ever see are demo examples that developers show off and then destroy. I guess what I'm getting at is that often the applications themselves need to be written to be cluster aware/compatible, and the architecture needs to be manually configured more often than not to make this stuff work - you can't generally just spin up 10 containers with a stock OSS container application image and have it 'just work'.

    • @TechnoTim
      @TechnoTim  2 years ago +21

      100% agree, and this topic comes up a lot RE scaling. I discussed it on stream today too. Applications need to be written and architected with scaling in mind; most of them are too stateful to run more than one. I'd love to dive deeper into more topics like this in the future if there's appetite for it!

    • @ekekw930
      @ekekw930 2 years ago

      @@TechnoTim I would love to see you cover something like this

    • @crazyglue1337
      @crazyglue1337 2 years ago

      The repo works with Raspberry Pis (I just set my cluster up!), so you could in theory just take 5 Pis and unplug them from the network stack whenever while the performance tests are running.
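    For anyone who wants to try the failure testing described above without pulling power cables, a gentler drill is to drain a node and watch the cluster reschedule its workloads (the node name is a hypothetical example):

    ```shell
    # Evict workloads from one node; DaemonSet pods are skipped
    kubectl drain k3s-worker-01 --ignore-daemonsets --delete-emptydir-data

    # Watch pods get rescheduled onto the remaining nodes
    kubectl get pods -o wide --watch

    # Bring the node back into scheduling rotation
    kubectl uncordon k3s-worker-01
    ```

    Drain is a graceful eviction, so it tests scheduling and replica placement but not crash recovery; pulling the plug mid-write, as the comment suggests, remains the honest test for data integrity.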

  • @islameldemery
    @islameldemery 2 years ago +1

    It just worked from the first time! complete awesomeness!!

    • @TechnoTim
      @TechnoTim  2 years ago

      That's awesome news!

  • @prabhujeeva2228
    @prabhujeeva2228 2 years ago

    Awesome, Tim! Thanks for automating the entire process.

    • @TechnoTim
      @TechnoTim  2 years ago

      Glad it was helpful!

  • @inversemetric
    @inversemetric 2 years ago

    Welp, you've done it now, Tim. Great job!

  • @jamesajohnson82
    @jamesajohnson82 2 years ago +1

    I have 8 old Mac minis that I have been working on to make into a K3s cluster using rancher, Ubuntu 20.04, and a ton of trial and error. I am just about to spin this up, but should I scratch that and go with Ansible? Dang, as soon as you think you have a grip on something, someone awesome like TechnoTim comes along and throws a new solution right at you. Thanks for all the great videos!

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Thanks! Haha! This is automating what you would otherwise copy and paste from docs and adds load balancers so you don't have to. :)

  • @l0gic23
    @l0gic23 2 years ago +1

    Micro center road trip... Who's with me?

  • @peace2941
    @peace2941 2 years ago +3

    Thank you so much. As a beginner, I was expecting to see how you add Traefik and configure it to proxy requests to the example service you had.

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you! I have docs on Traefik, I might break it out soon into its own playbook because not everyone will use Traefik!

    • @MrPatrickberry
      @MrPatrickberry 2 years ago +1

      @@TechnoTim Second this. Looking forward to a Traefik install with Helm guide.

  • @TechnoTim
    @TechnoTim  2 years ago +5

    How would you describe kubernetes? Wrong answers are OK too 😀

    • @syedmwma
      @syedmwma 2 years ago

      I've used it at my workplace, but most of it has been set up by the provider. I've seen the usefulness of it in making sure our applications scale, as well as zero-downtime deployment.

    • @therealjamescobb
      @therealjamescobb 2 years ago

      Not serverless :D

    • @EugeneBerger
      @EugeneBerger 2 years ago

      Don't use it, unless:
      1. You are 100% sure you need it for your specific use case.
      2. You have the needed skills in your team to set it up and maintain it.
      3. You have the time and patience for setting it up and automating the whole thing.

    • @TechnoTim
      @TechnoTim  2 years ago

      @@EugeneBerger thanks! But that’s why we have labs to test and learn ☺️

    • @everyhandletaken
      @everyhandletaken 2 years ago

      @@szymex8341 I agree; I have spent countless hours playing with it in the past, but never really reached a point of perfect completion.
      Too many moving parts is the thing; it is insanely complex and hard to fix when you don't have heaps of in-depth knowledge of all those components (which I don't).
      Going from just running some docker containers to k8s is like going from front desk receptionist to company CEO.

  • @eldaria
    @eldaria 1 year ago +1

    Although this setup was great and worked really fast, I don't think I want to use it right now. The reason is that I won't learn anything from it. So if something goes wrong, or I would like to tweak something, I would not have any clue how to do it. For example, I noticed in the settings that there was an entry for setting up BGP and an ASN; since I use pfSense, I figured that would be a great way of getting the routing for the k3s cluster working. But no matter what I tried, I could not get it working. But it has inspired me to actually start learning Kubernetes and start building my first k3s cluster from scratch.
    It would actually be great if you would break down, in a more in-depth video or series, the different parts that you used to get this running and the options one could use to tweak things. Because it is really cool how fast it goes to get the cluster up and running.
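    For reference, the BGP and ASN settings mentioned above map to MetalLB's configuration; at the time of this video MetalLB was configured through a ConfigMap in the metallb-system namespace. A sketch with hypothetical addresses and ASNs (a pfSense router would be the peer), not the playbook's actual defaults:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 10.0.0.1   # pfSense router (example)
          peer-asn: 64501          # router's ASN (example)
          my-asn: 64500            # cluster's ASN (example)
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 10.9.0.0/24            # range advertised to the router (example)
    ```

    Newer MetalLB releases replaced this ConfigMap with CRDs (IPAddressPool, BGPPeer, BGPAdvertisement), so check which version the playbook deploys before copying either form.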

  • @RicardoWagner
    @RicardoWagner 11 months ago

    Great job Tim... the following follow-ups would be great:
    a Rancher install on this cluster, and how about some Longhorn? Cheers

  • @grocerylist
    @grocerylist 2 years ago +1

    I keep getting an error for my masters when attempting to provision the cluster. Something about 'no usable temporary directory found in /tmp, /var/tmp, /usr/tmp or my home/user directory'. The directories exist, not sure what it means they're not usable. I tried pasting the full error here but my post keeps getting deleted.
    Any idea what might be causing this and how to resolve it?

  • @mitchross2852
    @mitchross2852 2 years ago +2

    Is there a follow-up video for Rancher and how to install k3s apps? Given that this is HA/VIP etc., it would be good to have a video on how to utilize all of this to deploy Traefik, maybe HA Pi-hole/AdGuard, etc. I know you have some topics on this already, but I don't know if they still apply with this setup. If they do, can you link to the next proper video for Rancher, HA AdGuard/Pi-hole, etc.?

  • @MrToup
    @MrToup 2 years ago

    Thanks for the video. I started the journey with Kubernetes for my homelab thanks to your videos.
    I ended up with the same result: having it automated. I use the pretty good template from k8s-at-home. They have it all set up, including SOPS and Flux.

  • @kjc420
    @kjc420 2 years ago +3

    Now wouldn't it be nice to have hypervisor support, like Proxmox?
    Spin up VMs on one or more hosts, retrieve the IPs for ansible, then deploy k3s on them automagically...
    Down the rabbit hole we go!

  • @Subbeh2
    @Subbeh2 2 years ago +2

    Love your work. Just managed to set all this up, but I'm still clueless about how to use it. Would be amazing if you could do a video on how you're using and deploying your stuff on this cluster. TA

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Thank you! I have tons of videos on kubernetes apps

  • @webwarriorc4683
    @webwarriorc4683 2 years ago

    Yay! This video helped me learn Ansible; it feels really good to make everything automated XD

  • @MoviesAndTraillersTZ
    @MoviesAndTraillersTZ 1 year ago +1

    Hardware heaven sent me

  • @peterkleingunnewiek5068
    @peterkleingunnewiek5068 10 months ago

    The k3s cluster runs very smoothly, thanks for your effort Tim. I only can't install Rancher on it; everything else works great. But Rancher crashes the whole cluster. Exposing the MetalLB IP to Rancher works, but the installation never finishes before the cluster crashes. I've watched different videos on how to install Rancher on a k3s cluster, but it never works.

  • @declanmcardle
    @declanmcardle 1 year ago +1

    @1:37 I see Hellmann's are now doing thermal paste...

  • @canishelix6740
    @canishelix6740 1 year ago +1

    Really appreciate this video. I definitely need to research your blogs and understand them; I know what I want, but the order of execution eludes me. I've got an HA SQL cluster already (so I want to use that instead of etcd), and I do want Longhorn, Rancher, and Traefik 2... If I'm right, I can just add the datastore param to the global_vars and it should use that SQL DB, but how do I stop it installing etcd? And I'm assuming the best order of events would be the Ansible playbook, then Longhorn, Rancher, and Traefik 2 (in that order)... as for cert-manager... I guess between Longhorn/Rancher?
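    On the external-SQL question above: upstream k3s supports an external datastore via its --datastore-endpoint flag, and when that is set, k3s uses the external database instead of embedded etcd, so there is nothing separate to disable. How the flag maps into this playbook's variables is worth verifying in the repo docs; the credentials and host below are placeholders:

    ```shell
    # Upstream k3s flag (not playbook-specific); MySQL endpoint shown,
    # PostgreSQL endpoints are also supported
    k3s server \
      --datastore-endpoint="mysql://username:password@tcp(dbhost:3306)/k3s"
    ```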

  • @roboto_
    @roboto_ 2 years ago

    Thanks so much for doing this; I started working on this exact problem like a year ago, but had to shelve it because I didn't have time anymore :( thanks so much!!!!!!!

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you! Happy to help!

  • @HootanHM
    @HootanHM 1 year ago

    👏👏👏 It'd have been amazing if you could make a video on how we can run Hadoop and PySpark on top of this kube cluster to have some data transformation at home 🤩

  • @rubenkhachaturov3309
    @rubenkhachaturov3309 8 months ago

    This is fantastic! Thank you so much!!!👍

  • @TVfen
    @TVfen 2 years ago +3

    By the way, Tim, could you show us some provisioning from scratch? Some bare-metal way to install it all with Ansible (and maybe Terraform, cloud-init, or something like that)?
    Your videos have inspired me to start my own cluster of servers (I have low-end hardware, but a bunch of it, so I'm trying to make it all work together).
    Thanks!

    • @Irish2086
      @Irish2086 2 years ago +1

      There is a nice Proxmox Terraform plugin available to provision LXC or VMs. Similar to cloud-init, but using TF. It is not perfect, but a nice way to learn TF without having to spend money on a cloud provider. I should reach out to him.

    • @TVfen
      @TVfen 2 years ago +1

      @@Irish2086 Yes, thanks, I saw those before, but... so far, from what I understand (and what I need):
      - The cloud-init plugin is only a "read" plugin. Meaning, I can read the file, store values read from that file into vars, then use them with other plugins.
      - The Proxmox one is great for the mini-PC that has Proxmox. ☺️
      - But for the Raspberries, I think it's best not to use any type of VMs (Proxmox or ESXi). I know there is a TF plugin for K8s and for docker, but at that point, I think I'd rather set it up with Ansible.

    • @TVfen
      @TVfen 2 года назад +1

      @@Irish2086 This is my "set up" path right now. I want to provision all the machines from scratch. But PXE is an awful, unsecure, and obsolete technology. So:
      1) Copy Linux image to the SD card.
      2) Copy, manually, Cloud-init to the card.
      3) Let the Raspberry, boot, install, and apply all the settings on the Cloud-init file (usually some basic accounts, IPs, Hostname, SSH, Firewall, and install Terraform).
      4) Download from a URL Terraform tf files, and apply. (These would finish setting the device, with secrets, better security (key-pairs, etc.), Install Ansible.
      5) Setup everything else with Ansible.(K8s, images, Jenkins).
      6) any future changes on the system, would be applied by Ansible (if it is setting the "apps"), Jenkins (if it is setting the contents of the apps (a website, microservice, webapps, Dbs, etc), TF (if it is the system itself).
      And that's as far as I have gotten so far! 🤷🏻‍♂️
      (With the concepts, but I haven't applied them yet, and still 'designing' the whole system in Terraform/Ansible files.

    • @TechnoTim
      @TechnoTim  2 years ago +2

      You can definitely combine them! This is just one building block, a LEGO if you will!

  • @kaelwang1251
    @kaelwang1251 2 года назад

    WOW, you make really good content, detail and well explained, thanks.

  • @Brainpitcher
    @Brainpitcher 2 years ago +1

    Have you managed to deploy a Rancher web ui on it?

  • @PCMagikHomeLab
    @PCMagikHomeLab 2 года назад

    Great job like always TIM :) thx for all you hard work!

  • @djonkoful
    @djonkoful 10 месяцев назад

    What do you think about talos kubernetes ? thanks

  • @TradersTradingEdge
    @TradersTradingEdge 2 года назад

    Super awesom, thanks Tim. 🖖

  • @etony3097
    @etony3097 2 года назад +2

    It's great! I have a question in site.yml. What is the purpose of raspberripy in there? I install k3s on CentOS so should I remove it?

  • @diamantin55
    @diamantin55 2 года назад +2

    Great video as usual Tim. "I love open source". It would be great if you add LongHorn support at that script. Additionally, it would be great if you can do a video on how to migrate a docker install that already have some data on a local volume to kubernetes...
    Thanks man for your videos!

    • @TechnoTim
      @TechnoTim  2 года назад +1

      Thank you! I’ve considered adding longhorn and rancher to the script but many may not need it. My other videos shows how to install these with a few commands! Will consider it in the future!

  • @jpconstantineau
    @jpconstantineau 2 years ago +3

    Great work! Have you looked into gitops with flux or argocd? I find it quite useful to simply push to git and have the cluster pick up the manifests and deploy them automatically. The first thing I do after installing a vanilla K3S install is to connect the cluster to a git repo (using flux) and send all my manifests by pushing to Git. The cluster automatically configured itself, including MetalLB. That makes it really easy to tear down a cluster and build it up again.

    • @TechnoTim
      @TechnoTim  1 year ago +1

      Yes, I have! I've done a video on flux! ruclips.net/video/PFLimPh5-wo/видео.html

  • @MartinHiggs84
    @MartinHiggs84 9 months ago

    Watching again 😊 any chance of one with Rancher/Longhorn added?

  • @christiandassy8128
    @christiandassy8128 2 years ago

    Keep it up man, your channel is just excellent

  • @jeffreyurlwin6212
    @jeffreyurlwin6212 2 года назад

    Awesome Tim -- just setup on 4 raspberry pi CM4s using Deskpi (need 2 more pis to fill!!!). I also have some NVME storage (overkill, I know), so I put k3s & containers on the storage instead of the emmc for speed/safety. I plan on watching your video on Longhorn to do that next! Huge Thanks!
    at the end, you had a reboot script -- I couldn't find that - and checked, i don't see an obvious project in your github repos where it may be. I wanted to use that as a basis for a reboot script. Thanks!

    • @jeffreyurlwin6212
      @jeffreyurlwin6212 2 года назад

      sad. replying to myself. :( I found it in launchpad! Somewhat anti-climactic in that it was / looks sooo simple. I think setting serial to 1 should work. :) testing time :)

  • @IcyTone1
    @IcyTone1 Год назад

    Thank you so much for your assistance in setting up k3s using Ansible. Could you possibly create an updated video on how to install Rancher along with Traefik + cert-manager? Additionally, could you demonstrate how to use this k3s cluster with a GitLab CI/CD pipeline? It would be of great help.

  • @jrdwiz
    @jrdwiz Год назад

    Hi Tim, is there an easy way to add more master nodes with etcd later on? Thank so much.
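    One possible answer to the question above, sketched from how the playbook's inventory is laid out (addresses are hypothetical; verify against the repo docs before relying on it): add the new host to the [master] group in hosts.ini and re-run site.yml against the same inventory.

    ```ini
    ; inventory/my-cluster/hosts.ini (hypothetical addresses)
    [master]
    192.168.30.38
    192.168.30.39
    192.168.30.40
    192.168.30.41   ; newly added master

    [node]
    192.168.30.42
    ```

    Keep an odd number of etcd members so quorum math works out (3 or 5 masters, not 4).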

  • @vikingthedude
    @vikingthedude 4 месяца назад

    do you need a load balancer in front of your two load balancers?

  • @thiagobarrichelo
    @thiagobarrichelo 2 года назад

    Hi Tim thanks for sharing definitely very helpful and great work there! Just forked your repo as I need my CNI to be Calico instead of Flannel. Thanks a lot!

  • @whh-bu7nj
    @whh-bu7nj 5 месяцев назад

    This process works great. However, the issues I ran into is when a node died, how do I add new master/worker node into the cluster to replace died node/pc. I didn't have luck with it.

  • @Josef-K
    @Josef-K Год назад

    How do I set this up with a FQDN in place of ip address for joining a cluster off network?

  • @sean1334
    @sean1334 2 years ago +3

    How long did it take to run the entire playbook? My 5-node cluster has been stuck on "Enable and check K3s service" for 25 minutes now, and I'm wondering if something is going wrong.
    Edit: the default disk size was like 1.9GB and I ran out of space on the 3 masters. Trying it again.
    Edit again: resizing the disk worked!

    • @TechnoTim
      @TechnoTim  2 years ago

      Should take no more than 2 minutes on normal hardware, if that.

    • @StevenMcCurdy
      @StevenMcCurdy 2 года назад +1

      I had this too. I hadn’t changed the flannel interface from eth0 in all.yml so the nodes couldn’t communicate. I did an ‘ip a’ on my servers and saw they were ens18.

    • @rossco7356
      @rossco7356 2 years ago

      @@StevenMcCurdy HERO
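    The interface check from this thread, spelled out (interface names vary: eth0 on Raspberry Pi OS, often ens18 on Proxmox cloud-init VMs):

    ```shell
    # List interfaces and addresses in compact form; use the name shown
    # here as the flannel_iface value in all.yml
    ip -brief addr show
    ```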

  • @iga3725
    @iga3725 2 years ago

    A masterpiece!! Thanks for sharing

  • @mikkel3135
    @mikkel3135 2 года назад

    Automated K0s and RKE2 ansible deployments the other day with some pretty barebones playbooks. It's kinda fun trying to automate and architecture everything.
    Want to get to a point that I can just setup a new installation of Proxmox using ansible, and have it create VMs and a cluster (or join existing).

  • @motu1337
    @motu1337 2 years ago +2

    Hey Tim, awesome guide and repo. This helped expand my Ansible knowledge and produce a useful outcome of a k3s cluster with which I can mess around. Question for you, when I deploy Rancher using Helm (following another guide of yours, thanks) it doesn't seem to be accessible externally. Is this a MetalLB and Rancher conflict? Any guides I can look to that would help me resolve this? Thanks!

    • @TechnoTim
      @TechnoTim  2 года назад +1

      Hi. Thanks! It shouldn’t be as long as you disabled the service load balancer using the arg

    • @TwoThreeXray
      @TwoThreeXray 2 years ago

      Did you happen to find a way to expose rancher? I am currently trying to figure this out as well

    • @motu1337
      @motu1337 2 years ago +1

      @Tristen I was able to get it exposed using a MetalLB address by running "kubectl expose deployment rancher -n cattle-system --type=LoadBalancer --name=rancher-lb --port=443" that should work for you too assuming you've already got rancher installed and it's running in the cattle-system namespace. I found this on another guide of Tim's on installing Rancher.

    • @TwoThreeXray
      @TwoThreeXray 2 года назад +1

      @@motu1337 I think I just cam across the guide you used! lol
      Thank you sir for the help! That worked!

    • @therus000
      @therus000 2 года назад

      @@motu1337 is it work ? without traefik

  • @raymondvanderwerf
    @raymondvanderwerf 2 years ago

    Can't wait to watch this one!!
    Saved...will watch and probably have a go at this coming Monday
    Thx Tim!! ✌️

  • @rodrimora
    @rodrimora 2 года назад

    It may be a silly question, but deplying with ansible is an alternative to rancher? I don't fully grasp the concept of both

    • @TechnoTim
      @TechnoTim  2 года назад

      This is a way to bootstrap a k3s cluster. You then can install rancher on it if you like

  • @AlexBF488
    @AlexBF488 2 года назад

    Im trying to deploy this cluster on a testing server, but when i start to install rancher i don´t have a dns to assign. How can i do this with this infraestructure?

  • @jeffrisdon2803
    @jeffrisdon2803 3 месяца назад

    Hi Time, I love your videos! Thanks for taking the time to share your knowledge! Ive been running the all.yml with ansible and get this error
    The offending line appears to be:
    ---
    k3s_version: v1.31.0+k3s1
    ^ here
    any ideas? thank you!

  • @szymex22
    @szymex22 Год назад

    Can I use the same node as master and worker in your playbook???

  • @psyman9780
    @psyman9780 2 года назад +1

    Would the "Configuring Traefik 2 Ingress for Kubernetes" doc page be preferred for getting Traefik going? Just curious on the whole MetalLB IP configuration in traefik-chart-values.yml. The comment says set it to the MetalLB IP address. But I'm not sure if that means the "apiserver_endpoint" or something else, because using that IP doesn't work. It throws an error about it not being allowed and being unable to assign an IP.

    • @keanu4773
      @keanu4773 2 years ago +1

      Currently stuck on the same problem at the moment. Can't figure out what the MetalLB IP is meant to be to get Traefik working.

    • @psyman9780
      @psyman9780 2 years ago +1

      @@keanu4773 Let me know if you figure it out! It seems like it could be just a static IP for Traefik or something i.e., setting it to something besides the MetalLB IP makes it work and assigns it that IP specifically. But I'm a bit behind the curve on whether or not that's the correct thing to do.

    • @K34nuT
      @K34nuT 2 years ago

      @@psyman9780 I did try that myself, but didn't manage to get it working!

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Yes, that's where you start, and the MetalLB IP is the one that gets created when setting up k3s with my script. You define this range, but you will need to pick one for Traefik to use in that range!
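Putting that answer into a file: the chart value below pins Traefik's Service to one address inside the MetalLB range defined in the playbook's group vars. A sketch — the field path follows the Traefik 2 Helm chart of that era, and the IP is a placeholder you must replace with an unused address from your own range:

```shell
# Sketch: traefik-chart-values.yml pinning the LoadBalancer service to a
# fixed address. The address must be inside the MetalLB range you defined
# (placeholder range here: 192.168.30.80-192.168.30.90) and must NOT be
# the kube-vip apiserver_endpoint.
cat > traefik-chart-values.yml <<'EOF'
service:
  spec:
    loadBalancerIP: 192.168.30.80
EOF
grep -q 'loadBalancerIP' traefik-chart-values.yml && echo "values written"
```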

  • @hb-cloud
    @hb-cloud 2 years ago

    This is great content, but could you tell me how I could create a k3s cluster with the Cilium CNI using this setup instead of Flannel?

  • @northcode_
    @northcode_ 2 years ago

    Hey Tim. I've been looking for a while now. Are there any load balancers for k3s other than klipper(svclb) that work over a layer3 VPN like wireguard?
    I've tried getting both metallb and kube-router working, but they won't route between nodes that are only accessible across the VPN. Probably because there's no layer2 connection between the nodes, only layer3.
    I'd love if there was some way to get metallb or some other lb working that can assign VPN-internal addresses to services.
    I'm working around it now with klipperlb and just using different ports and network policies for my internal services but it's not optimal.
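One avenue that may be worth testing for the layer-3-only case: MetalLB's BGP mode does not rely on a shared L2 broadcast domain, only on IP reachability to a BGP peer. The sketch below uses MetalLB's v0.13+ CRDs; the ASNs, peer address, and pool are placeholders and assume a router (or software BGP daemon) reachable across the tunnel:

```shell
# Sketch: MetalLB BGP-mode resources. Layer-2 mode needs ARP on a shared
# segment; BGP mode only needs L3 reachability to the configured peer.
cat > metallb-bgp.yaml <<'EOF'
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: vpn-router
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.8.0.1      # placeholder: router on the WireGuard side
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vpn-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.8.10.0/24           # placeholder: VPN-internal service range
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: vpn-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - vpn-pool
EOF
grep -c 'kind:' metallb-bgp.yaml   # prints 3: peer, pool, advertisement
```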

  • @LUISPLAPINO
    @LUISPLAPINO 7 months ago

    Hi!! Thank you so much!! This is just a work of art. Dude: how could I add a new node without removing and restarting the existing service?
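One approach that may work without touching the existing nodes: append the new host to the [node] group and re-run the playbook scoped to just that host with --limit. This is a sketch — the inventory layout follows the k3s-ansible repo, the IPs are placeholders, and whether a scoped run joins cleanly can depend on the playbook version:

```shell
# Sketch: add a new worker to the inventory, then re-run the playbook
# against only that host so existing nodes are left alone:
#   ansible-playbook site.yml -i hosts.ini --limit 192.168.30.45
cat > hosts.ini <<'EOF'
[master]
192.168.30.38

[node]
192.168.30.41
EOF
echo '192.168.30.45' >> hosts.ini   # new worker lands in the [node] group
grep -q '192.168.30.45' hosts.ini && echo "inventory updated"
```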

  • @junejuan8561
    @junejuan8561 2 years ago +1

    Wow! More solid content! Thank you very much.
    Is it possible to use RKE2 instead of k3s?

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Not with this playbook, but one might exist

  • @davidwilliss5555
    @davidwilliss5555 2 years ago +1

    I'm trying to get this to work, but the VIP never comes up, and the step that waits for all the servers to join the cluster times out because it ends up trying to access the control plane via the VIP. Oh, and my VMs are all based off the focal-server-cloudimg-amd64 image, on which I grew the partition and filesystem by 32 GB.

    • @davidwilliss5555
      @davidwilliss5555 2 years ago

      Update: It turns out that the VIP is coming up but only for a few seconds at a time. Checking the containerd.log, it looks like containerd is restarting every few seconds for no apparent reason. There's nothing in the containerd.log or syslog that says why it's restarting.

    • @SeanDion
      @SeanDion 1 year ago +1

      @@davidwilliss5555 Did you ever get yours working. I think I'm having the same thing happen.

  • @ElliotWeishaar
    @ElliotWeishaar 1 year ago

    Love the video Tim. I'm struggling to get this up and running on 5 ubuntu 22.04 machines. I've noticed that the args in your video don't match what's in your repo. Any reason for the change or are the original args listed anywhere? Wasn't sure if I should open an issue on GH or not.

    • @TechnoTim
      @TechnoTim  1 year ago +1

      The default args in the repo should work! If you want to see what was used in the video, just look at the first few commits; however, the latest repo is what you want!

    • @ElliotWeishaar
      @ElliotWeishaar 1 year ago

      @@TechnoTim Thanks! I think I found my issue. I'm running Ansible from an RPi 4, which was running Python 3.7 and ansible-core 2.11.5. I think 2.11.5 is not new enough for split to work. I'm upgrading, and hopefully that will fix it!

  • @ryanarnold2293
    @ryanarnold2293 2 years ago

    Thanks Tim! I was able to get my cluster up and running easily with this. Question: I installed Rancher and now need to access the UI. How can I configure the nginx ingress to route to the Rancher UI?
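For a cluster running ingress-nginx rather than Traefik, an Ingress pointing at the rancher Service can be sketched as below. This is an assumption-laden example: the hostname is a placeholder, the backend-protocol annotation assumes Rancher serves TLS internally, and Rancher's own Helm chart normally creates this Ingress for you when a hostname is set at install time:

```shell
# Sketch: route an nginx ingress to the Rancher UI service in cattle-system.
cat > rancher-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher
  namespace: cattle-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: rancher.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher
                port:
                  number: 443
EOF
grep -q 'ingressClassName: nginx' rancher-ingress.yaml && echo "ingress written"
```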

  • @docteur3805
    @docteur3805 2 years ago

    Thank you for this video! 😊

  • @elchupabara
    @elchupabara 1 year ago

    The sample group vars file doesn't match the video. For example, "kubelet-arg node-status-update-frequency=5s" is missing.
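If you want that flag from the video back, it can be re-added through the extra-args variables in all.yml. A sketch only — extra_server_args and extra_agent_args follow the repo's variable names, the version pin and other flags are placeholders, and the repo's current defaults remain authoritative:

```shell
# Sketch: re-adding the kubelet arg from the video to the k3s extra args.
cat > all.yml <<'EOF'
---
k3s_version: v1.25.5+k3s1
extra_server_args: >-
  --disable servicelb
  --disable traefik
  --kubelet-arg node-status-update-frequency=5s
extra_agent_args: >-
  --kubelet-arg node-status-update-frequency=5s
EOF
grep -q 'node-status-update-frequency=5s' all.yml && echo "args restored"
```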

  • @THEMithrandir09
    @THEMithrandir09 1 year ago

    Why use kube-vip or MetalLB instead of just Traefik everywhere? Doesn't k3s ship with Traefik by default?

  • @ameeno
    @ameeno 2 years ago

    Can you add ZeroTier or WireGuard clustering? Also, how about MetalLB external and internal address detection for nodes?
    Finally, would it be possible to have pods run on the control plane?
    I want an HA LB but don't want to lose worker nodes.

  • @yourpcmd
    @yourpcmd 2 years ago

    Tim, I have a question or two. First, your 1U Supermicro servers aren't available anymore. Can you recommend a similar server (1U, 4-bay)? I'll need to populate it with four 4-8TB drives. Secondly, and this is crucial: using Proxmox (which I love), how would one install it on an SSD and have 2 additional HDDs for all the VMs? There is no video out there that I have found that goes over this. Please consider doing one. Bonus question: with the scenario above (SSD + HDDs), how would one also back up not only the Proxmox drive but all VM data? Thanks.

    • @TechnoTim
      @TechnoTim  2 years ago

      I have many recommendations on my kit site. kit.co/TechnoTim

  • @Crimson_Tinted
    @Crimson_Tinted 2 years ago

    I use the Helm installer CRD that k3s offers and just have Ansible drop a YAML file in the respective directory to install kube-vip, personally. This approach of yours is equally valid, but mine lets me use the stock upstream module, which is nice. It also lets me install my CD tool of choice (Argo in my case; I saw you had a Flux guide too), and I drop everything else, including MetalLB, into a set of app-of-apps Argo CD Applications. I find that having k3s handle only the absolute minimum to make the control plane HA, and then letting my CD system take it the rest of the way, is the easiest strategy for me.
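The pattern this commenter describes can be sketched with k3s's helm.cattle.io HelmChart CRD: any manifest dropped into /var/lib/rancher/k3s/server/manifests/ is applied automatically by the embedded Helm controller. The chart repo URL and values below are placeholders, not a verified kube-vip chart configuration:

```shell
# Sketch: a HelmChart manifest that k3s's built-in helm controller deploys
# automatically once placed in /var/lib/rancher/k3s/server/manifests/.
cat > kube-vip-helmchart.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  repo: https://kube-vip.github.io/helm-charts   # placeholder repo URL
  chart: kube-vip
  targetNamespace: kube-system
  valuesContent: |-
    config:
      address: 192.168.30.50   # placeholder VIP
EOF
grep -q 'kind: HelmChart' kube-vip-helmchart.yaml && echo "manifest written"
```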

  • @TheJimNicholson
    @TheJimNicholson 2 years ago

    Why did you go with MetalLB over klipper, which comes with k3s? Was it just preference, or were there specific reasons you chose to deploy your own load balancer?

    • @TechnoTim
      @TechnoTim  2 years ago

      I prefer MetalLB over klipper. I've used it many times outside of k3s, so it's familiar and battle-tested.
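For reference, swapping klipper out for MetalLB comes down to disabling k3s's bundled servicelb; the playbook passes this through its extra server args, and in a plain k3s install it is the --disable servicelb flag:

```shell
# Sketch: recording the k3s server flag that turns off klipper (servicelb)
# so MetalLB can own LoadBalancer services instead. A real install would be:
#   curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb" sh -
echo 'INSTALL_K3S_EXEC="server --disable servicelb"' > k3s-install.env
grep -q 'disable servicelb' k3s-install.env && echo "flag recorded"
```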

  • @AI-Tech-Stack
    @AI-Tech-Stack 2 years ago

    Thanks for the videos. I have a 5-node clustered server rack and will be trying Harvester out on it. But in order to utilize it, do I need to set up k3s on an RPi and then deploy workloads to the Harvester cluster?

    • @TechnoTim
      @TechnoTim  2 years ago

      Set up your clusters first, then create VMs, then use this Ansible playbook to target all machines. Or create the k3s cluster in Harvester and join your Pis to the cluster manually.

    • @AI-Tech-Stack
      @AI-Tech-Stack 2 years ago

      @@TechnoTim Thank you for the help, can't wait to try it out

  • @JhonnyXpress
    @JhonnyXpress 1 year ago

    I'm thinking of using this model, but for 2 mini servers