The FASTEST Way to run Kubernetes at Home - k3s Ansible Automation - Kubernetes in your HomeLab

  • Published: 23 Nov 2024

Comments • 328

  • @fahadysf
    @fahadysf 2 years ago +51

    This setup is pure gold. I can't thank you enough; within a day I've understood how MetalLB is the LB alternative for self-hosted / bare-metal Kubernetes deployments, and this playbook has saved me many, many expensive hours which would have been needed to get my test lab up. Can't thank you enough!

    • @TechnoTim
      @TechnoTim  2 years ago +4

      Thank you!

    • @bradwilson3766
      @bradwilson3766 2 years ago +1

      I second this! This made me join as a member.

  • @unijabnx2000
    @unijabnx2000 2 years ago +39

    As someone who has been working on deploying an OVA (including application setup after the VM deploys) with Ansible... I can appreciate how much work you've put into this.

    • @TechnoTim
      @TechnoTim  2 years ago +11

      Thank you! I am standing on the shoulders of giants who have built most of this out before me!

  • @dominick253
    @dominick253 1 year ago +3

    My issue was the wrong LAN network! Mine is a 10.x network, not a 192.x. The three masters would work and join, but it'd hang on joining the agents. Changed everything including the VIP and flannel IP ranges to my LAN and it worked like a charm! Also, I was using -K for the become password, but I tried it without that and it worked. Hope this helps anyone out who may have the same problem. Thanks for the work to get this going!!!

    • @gibransvarga8487
      @gibransvarga8487 1 year ago

      Did you make the k3s API accessible from the internet?

    • @JamesBond-re2nt
      @JamesBond-re2nt 2 months ago

      In the case of using DigitalOcean virtual machines, how do you know which virtual IP and LB IP ranges will work properly?
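
    The subnet fix dominick253 describes can be sketched like this (a minimal sketch; the 10.0.0.x addresses are illustrative, not from the video):

    ```shell
    # Hypothetical excerpt of inventory/my-cluster/group_vars/all.yml moved onto
    # a 10.x LAN (addresses are examples; pick free IPs on your own subnet):
    printf '%s\n' \
      'apiserver_endpoint: "10.0.0.222"' \
      'metal_lb_ip_range: "10.0.0.80-10.0.0.90"' > /tmp/all-snippet.yml
    # Both the kube-vip VIP and the MetalLB pool must live on the nodes' subnet:
    grep -c '10\.0\.0\.' /tmp/all-snippet.yml
    ```

    The specific addresses don't matter; the point of the fix is that the VIP, the MetalLB pool, and the nodes must all share one network.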

  • @angelgonzalez2379
    @angelgonzalez2379 1 year ago

    Wow I set up this cluster having almost no idea what to do with it.
    After setting up the cluster I relied on various blogs to get services running. I'm now at a point where I've set up services using only docker documentation, docker-compose files, and Kompose.
    My latest project has been delving into using BGP on MetalLB to be able to direct traffic from certain pods to my VPN.
    Thank you so much Tim!!!

  • @suikast420
    @suikast420 2 years ago +2

    Great talk dude. I am exactly on this. I want to provide a fully secured k3s cluster for air-gapped environments (for industrial production, for example).
    The final setup should look like this:
    1. Private registry with SSL setup
    2. Provide Docker on a builder node for remote builds
    3. Sec stack
    3.1 cert-manager
    3.2 Keycloak as OIDC provider
    4. Monitoring stack
    4.1 Grafana
    4.2 Loki
    Currently my repo is private because it is in dev. After my first release I will share it with you. Maybe we can do more together ;-)

  • @syedmwma
    @syedmwma 2 years ago +2

    Thanks! This will help spin up my Raspberry Pi clusters. Not having to use an external load balancer for Kubernetes is awesome! Thanks again.

  • @conorkeane
    @conorkeane 10 months ago

    Thanks Tim! Got a technical interview on Tuesday and you've just helped me prep for it!

  • @chrisdelucatube
    @chrisdelucatube 1 year ago +1

    Inconceivable!! This worked really well right out of the box, as promised. I had a k3s 3-node Raspberry Pi cluster up and running in minutes - and I love the Ansible add-in. I was vaguely familiar with Ansible from an introduction about a year ago, but this took my understanding to a whole new level. Thank you very much!

  • @johnjbateman
    @johnjbateman 2 years ago +4

    I love that you OSS guys are using each other’s work and shouting each other out. Stream on!

  • @MonkeyD.Dragon
    @MonkeyD.Dragon 2 years ago +42

    Please add Rancher to this

    • @coletraintechgames2932
      @coletraintechgames2932 2 years ago +1

      Great idea

    • @JohnWeland
      @JohnWeland 1 year ago +1

      This! I wonder how this works in Rancher. Your preconfigured nodes, were they created in rancher or by hand?

  • @Nighteater333
    @Nighteater333 2 years ago +6

    Hello from the Czech Republic, thank you for your work. I am so glad that I have found your channel. It's very helpful.

  • @peterkleingunnewiek5068
    @peterkleingunnewiek5068 9 months ago

    The k3s cluster runs very smoothly, thanks for your effort Tim. I only can't install Rancher on it. Everything else works great, but Rancher makes the whole cluster crash. Exposing the MetalLB IP for Rancher works, but the installation never finishes before the cluster crashes. I've watched different videos on how to install Rancher on a k3s cluster, but it never works.

  • @testdasi
    @testdasi 1 year ago

    Just want to provide a testament to how good and useful Tim's work is. I messed up my K3s cluster pretty badly so decided to reset from scratch. 15 minutes and 2 commands ("ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini" and then "ansible-playbook site.yml -i inventory/my-cluster/hosts.ini") and I have a fresh start. Another 15 minutes of "kubectl apply -f" to reinstate my deployment yamls and everything is back to its original working state.
    Thanks a lot mate!
    👍

    • @TechnoTim
      @TechnoTim  1 year ago

      Glad you liked it! Thank you!

  • @spikeukspikeuk
    @spikeukspikeuk 11 months ago

    Hey Tim. I did see this some time ago and tried to run it but had some issue. Don't remember what it was now, so it went on the back burner. Just updated the repo and ran it, and it works perfectly. Lots of thanks for this.

    • @tcasex
      @tcasex 11 months ago

      I just got my environment set up with Ansible and Docker Compose files and then I ran across this... lol. I need a break before I embark on this journey and just enjoy the homelab. Sometimes I think I enjoy torturing myself.

  • @eldaria
    @eldaria 1 year ago +1

    Although this setup was great and worked really fast, I don't think I want to use it right now. The reason is that I won't learn anything from it. So if something goes wrong or I would like to tweak something, I would not have any clue how to do it. For example, I noticed in the settings that there was an entry for setting up BGP and an ASN; since I use pfSense I figured that would be a great way of getting the routing for the K3s cluster working. But no matter what I tried I could not get it working. But it has inspired me to actually start learning Kubernetes and start building my first k3s cluster from scratch.
    It would actually be great if you would break down in a more in depth video or series the different parts that you used to get this running and options one could use to tweak things. Because it is really cool how fast it goes to get the cluster up and running.

  • @ryancoble8776
    @ryancoble8776 2 years ago

    I absolutely love you. I was following all your old stuff and just beating my head into my desk trying to get it all to work right for my situation. Thank you so much for this. This needs to be the first result when you search YouTube for k3s setup, for real.

  • @Rockshoes1
    @Rockshoes1 3 months ago

    Works 100%. Running two masters on Debian and 1 master on a Debian VM

  • @RicardoWagner
    @RicardoWagner 10 months ago

    Great job Tim... the following follow-ups would be great:
    a Rancher install on this cluster, and how about some Longhorn? Cheers

  • @jpb2085
    @jpb2085 1 year ago +1

    So so awesome, just tried this out and works so well. Thanks for the supporting documentation as well!

  • @andrewjohnston359
    @andrewjohnston359 2 years ago +21

    It would be great if you could begin testing these setups in a real-world environment. For instance, putting each VM on separate physical Proxmox nodes. Then testing both performance and data integrity of the HA mysql/mariadb, and maybe some kind of PHP web application - and start powering off VMs mid use (while writing and reading lots of data). Any load balancer can show a static nginx page - but when you have fast-changing application data tied to user sessions (you'll want redis for this) all of a sudden you realise these magic clusters aren't as magic as they make out. Also SQL databases have clustering capabilities baked in (sharding) - and the SQL agent itself is aware and in control of ensuring the integrity of the cluster. How do containers in a cluster ensure the database integrity? I personally love the idea of a self-scaling, self-healing, automagical, high-performance container cluster - but all you ever see are demo examples that developers show off and then destroy. I guess what I'm getting at is that often the applications themselves need to be written to be cluster aware/compatible, and the architecture needs to be manually configured more often than not to make this stuff work - you can't generally just spin up 10 containers with a stock OSS container application image and have it 'just work'

    • @TechnoTim
      @TechnoTim  2 years ago +21

      100% agree and this topic comes up a lot, RE scaling. I discussed it on stream today too. Applications need to be written and architected with scaling in mind; most of them are too stateful to run more than one instance. I'd love to dive deeper into more topics like this in the future if there's appetite for it!

    • @ekekw930
      @ekekw930 2 years ago

      @@TechnoTim I would love to see you cover something like this

    • @crazyglue1337
      @crazyglue1337 2 years ago

      The repo works with Raspberry Pis (I just set my cluster up!), so you could in theory grab like 5 Pis and just unplug them from the network stack whenever while the performance tests are running

  • @ArifKamaruzaman
    @ArifKamaruzaman 1 year ago +2

    Hardware Haven sent me to a real expert 👍

  • @willdrumforfood7371
    @willdrumforfood7371 2 years ago +5

    This is a super helpful video, thank you for putting this together! It would be great to have a followup where you add in some applications or perhaps even a container of one of your own apps. Thank you for all these great and helpful videos!

    • @TechnoTim
      @TechnoTim  2 years ago +3

      Great suggestion!

    • @thetruth3107
      @thetruth3107 2 years ago +1

      I agree with OP, full prod WordPress / email server would be great...

  • @lechaldon
    @lechaldon 2 years ago +2

    Mate, thanks for this, now I need to go figure out how you wrote this playbook so I can understand how it all operates and how k3s works. I plan to migrate my entire docker-compose stack to HA k3s and this is perfect. Thanks again!

  • @troybrocato
    @troybrocato 1 year ago

    This truly is pure gold. The only thing I would add to this is to also have forks for different hypervisors. Ansible is very friendly with all hypervisors and can create the K3s VMs automagically.

    • @TechnoTim
      @TechnoTim  1 year ago

      Thank you! This is hypervisor agnostic and even works with bare metal!

  • @NerdzNZ
    @NerdzNZ 1 year ago +2

    O.M.G this was amazing: in a single night I set up an Ubuntu Server cloud-init template in Proxmox, built 9 VMs (3 masters, 6 workers) and ran through this video to get a fully HA k3s Kubernetes cluster installed.
    The best part: I am a freaking n00b at all of this. Such a great teacher, love your work and I am looking forward to consuming more of your content.

    • @TechnoTim
      @TechnoTim  11 months ago

      Nice work!! Thank you!

  • @mitchross2852
    @mitchross2852 2 years ago +2

    Is there a follow-up video for Rancher and how to install k3s apps? Being that this is HA/VIP etc., it would be good to have a video on how to utilize all of this to deploy Traefik, maybe HA Pi-hole/AdGuard, etc. I know you have some topics on this already, but I don't know if they still apply with this setup. If they do, can you link to the next proper video for Rancher, HA AdGuard/Pi-hole, etc.?

  • @peace2941
    @peace2941 2 years ago +3

    Thank you so much! As a beginner, I was expecting to see how you add Traefik and configure it to proxy requests to the example service you had.

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you! I have docs on Traefik, I might break it out soon into its own playbook because not everyone will use Traefik!

    • @MrPatrickberry
      @MrPatrickberry 2 years ago +1

      @@TechnoTim Second this. Looking forward to a guide on installing Traefik with Helm

  • @zoejs7042
    @zoejs7042 2 years ago +17

    Tim, great work here. I'd really like to show you a similar way I did this with custom k3os images, Proxmox, and Terraform.

    • @chrisa.1740
      @chrisa.1740 2 years ago +5

      I find this interesting. I have been working on a combination of Terraform and Ansible to spin up a k3s cluster on the Oracle Cloud Free Tier. Would love to see your ideas on how to get this working.

    • @TVfen
      @TVfen 2 years ago +1

      Me too. I'm working with a weird cluster of raspberries, x32 laptops, and an x64 mini-pc, with proxmox, K3s, terraform, cloud-init, and ansible.
      And I'm interested in your project too; could you leave us a link to your project? (GitHub, blog, even a Google Doc could be good)
      Thanks!

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Thank you! Sounds awesome. I’ve made this to be a building block that can fit into any infra automation ☺️

    • @zoejs7042
      @zoejs7042 2 years ago +3

      @@chrisa.1740 okie, i'll make a video outlining how i did it :)

    • @saltandsham
      @saltandsham 2 years ago

      @@zoejs7042 Sounds good

  • @etony3097
    @etony3097 1 year ago +2

    It's great! I have a question about site.yml. What is the purpose of the raspberrypi role in there? I install k3s on CentOS, so should I remove it?

  • @jamesajohnson82
    @jamesajohnson82 2 years ago +1

    I have 8 old Mac minis that I have been working on to make into a K3s cluster using Rancher, Ubuntu 20.04, and a ton of trial and error. I am just about to spin this up, but should I scratch that and go with Ansible? Dang, as soon as you think you have a grip on something, someone awesome like TechnoTim comes along and throws a new solution right at you. Thanks for all the great videos!

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Thanks! Haha! This is automating what you would otherwise copy and paste from docs and adds load balancers so you don't have to. :)

  • @jabujavi
    @jabujavi 11 months ago +1

    Hardware Haven sent me to a real expert 👍

  • @HootanHM
    @HootanHM 1 year ago

    👏👏👏 it'd have been amazing if you could make a video on how we can run Hadoop and PySpark on top of this kube cluster to have some data transformation at home 🤩

  • @Subbeh2
    @Subbeh2 2 years ago +2

    Love your work. Just managed to set all this up, but I'm still clueless about how to use it. Would be amazing if you could do a video on how you're using and deploying your stuff on this cluster. TA

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Thank you! I have tons of videos on kubernetes apps

  • @jpconstantineau
    @jpconstantineau 2 years ago +3

    Great work! Have you looked into gitops with flux or argocd? I find it quite useful to simply push to git and have the cluster pick up the manifests and deploy them automatically. The first thing I do after installing a vanilla K3S install is to connect the cluster to a git repo (using flux) and send all my manifests by pushing to Git. The cluster automatically configured itself, including MetalLB. That makes it really easy to tear down a cluster and build it up again.

    • @TechnoTim
      @TechnoTim  1 year ago +1

      Yes, I have! I've done a video on flux! ruclips.net/video/PFLimPh5-wo/видео.html

  • @mitchross2852
    @mitchross2852 2 years ago

    There are a few assumptions in this video that a beginner will bang their head against the wall for days. I finally got this all working. This is awesome.

    • @TechnoTim
      @TechnoTim  2 years ago

      Nice work!

    • @TechnoTim
      @TechnoTim  2 years ago

      Out of curiosity, what were they?

    • @mitchross2852
      @mitchross2852 2 years ago

      @@TechnoTim
      If you're open to Discord DMs I'll send you a friend request. I have some general feedback I think could help your channel/others troubleshoot. Otherwise I can comment here, let me know!

  • @canishelix6740
    @canishelix6740 1 year ago +1

    Really appreciate this video. I definitely need to research your blogs and understand them; I know what I want but the order of execution eludes me. I've got an HA SQL cluster already (so I want to use that instead of etcd), I do want Longhorn, and Rancher and Traefik 2... if I'm right I can just add the datastore param to the group_vars and it should use that SQL db, but how do I stop it installing etcd? And I'm assuming the best order of events would be the Ansible playbook, then Longhorn, Rancher and Traefik 2 (in that order)... as for cert-manager... I guess between Longhorn/Rancher?

  • @kjc420
    @kjc420 2 years ago +3

    Now wouldn't it be nice to have hypervisor support, like Proxmox?
    Spin up VMs on one or more hosts, retrieve the IPs for ansible, then deploy k3s on them automagically...
    Down the rabbit hole we go!

  • @IcyTone1
    @IcyTone1 1 year ago

    Thank you so much for your assistance in setting up k3s using Ansible. Could you possibly create an updated video on how to install Rancher along with Traefik + cert-manager? Additionally, could you demonstrate how to use this k3s cluster with a GitLab CI/CD pipeline? It would be of great help.

  • @kgottsman
    @kgottsman 2 years ago +5

    Really killing it content-wise... Your videos have been so helpful lately.

  • @grocerylist
    @grocerylist 2 years ago +1

    I keep getting an error for my masters when attempting to provision the cluster. Something about 'no usable temporary directory found in /tmp, /var/tmp, /usr/tmp or my home/user directory'. The directories exist, not sure what it means they're not usable. I tried pasting the full error here but my post keeps getting deleted.
    Any idea what might be causing this and how to resolve it?

  • @hotrodhunk7389
    @hotrodhunk7389 1 year ago +2

    So quick! Only spent a week to get it to work in a few minutes 😂😅😂

  • @SpadeQc123
    @SpadeQc123 2 years ago +4

    Amazing vid as always Tim! I did the same setup but with a Debian cloud image instead and it works great

    • @annusingh4694
      @annusingh4694 1 year ago

      Can you please share more? Did you set it up remotely?

  • @jacksmart2643
    @jacksmart2643 2 years ago

    This is an awesome video. I have understood how HA and K3s work, but never understood how you could access the webserver from a single IP. Keep up the awesome work!

  • @mikkel3135
    @mikkel3135 2 years ago

    Automated k0s and RKE2 Ansible deployments the other day with some pretty barebones playbooks. It's kinda fun trying to automate and architect everything.
    Want to get to a point that I can just setup a new installation of Proxmox using ansible, and have it create VMs and a cluster (or join existing).

  • @chrisd1243
    @chrisd1243 2 months ago

    Awesome vid, and for a n00b, easy to follow. But being a n00b I have all the K3s VMs set up with SSH keys. When I run the playbook it naturally errors out because it knows nothing of the keys. How exactly do I go about getting the playbook set with the right args for the keys? The only thing I know about Ansible is what I learned in this video LOL
    Update: Disregard, I figured out the SSH key. Running the playbook now with no issues

  • @jeffreyurlwin6212
    @jeffreyurlwin6212 2 years ago

    Awesome Tim -- just set this up on 4 Raspberry Pi CM4s using DeskPi (need 2 more Pis to fill!!!). I also have some NVMe storage (overkill, I know), so I put k3s & containers on that storage instead of the eMMC for speed/safety. I plan on watching your video on Longhorn to do that next! Huge thanks!
    At the end, you had a reboot script -- I couldn't find that, and I checked, I don't see an obvious project in your GitHub repos where it may be. I wanted to use that as a basis for a reboot script. Thanks!

    • @jeffreyurlwin6212
      @jeffreyurlwin6212 2 years ago

      Sad, replying to myself. :( I found it in launchpad! Somewhat anti-climactic in that it was / looks sooo simple. I think setting serial to 1 should work. :) Testing time :)

  • @LUISPLAPINO
    @LUISPLAPINO 6 months ago

    Hi!! Thank you so much!! This is just a work of art. Dude: how could I add a new node without removing and restarting the existing service?

  • @TVfen
    @TVfen 2 years ago +3

    By the way, Tim, could you show us some provisioning from scratch? Some bare-metal way to install it all with Ansible (and maybe Terraform, cloud-init, or something like that)?
    Your videos have inspired me to start my own cluster of servers (I have low end hardware, but a bunch of it, so I'm trying to make it all work together).
    Thanks!

    • @Irish2086
      @Irish2086 2 years ago +1

      There is a nice Proxmox Terraform plug-in available to provision LXC or VMs. Similar to cloud-init but using TF. It is not perfect, but a nice way to learn TF without having to spend money on a cloud provider. I should reach out to him.

    • @TVfen
      @TVfen 2 years ago +1

      @@Irish2086 Yes, thanks, I saw those before, but... so far, from what I understood (and what I need):
      - The cloud-init plugin is only a "read" plugin. Meaning, I can read the file, store values read from that file into vars, then use them with other plugins.
      - The Proxmox one is great for the mini-PC that has Proxmox. ☺️
      - But for the Raspberries, I think it's best not to use any type of VMs (Proxmox or ESXi). I know there is a TF plugin for K8s and for Docker, but at that point, I think I'd rather set it up with Ansible.

    • @TVfen
      @TVfen 2 years ago +1

      @@Irish2086 This is my "set up" path right now. I want to provision all the machines from scratch. But PXE is an awful, insecure, and obsolete technology. So:
      1) Copy Linux image to the SD card.
      2) Copy, manually, Cloud-init to the card.
      3) Let the Raspberry, boot, install, and apply all the settings on the Cloud-init file (usually some basic accounts, IPs, Hostname, SSH, Firewall, and install Terraform).
      4) Download the Terraform tf files from a URL, and apply. (These would finish setting up the device with secrets and better security (key-pairs, etc.), and install Ansible.)
      5) Setup everything else with Ansible.(K8s, images, Jenkins).
      6) any future changes on the system, would be applied by Ansible (if it is setting the "apps"), Jenkins (if it is setting the contents of the apps (a website, microservice, webapps, Dbs, etc), TF (if it is the system itself).
      And that's as far as I have gotten so far! 🤷🏻‍♂️
      (With the concepts, that is; I haven't applied them yet, and I'm still 'designing' the whole system in Terraform/Ansible files.)

    • @TechnoTim
      @TechnoTim  2 years ago +2

      You can definitely combine them! This is just one building block, a LEGO if you will!

  • @Crimson_Tinted
    @Crimson_Tinted 2 years ago

    I use the Helm installer CRD that k3s offers and just have Ansible drop a YAML file in the respective directory to install kube-vip, personally. This approach of yours is equally valid, but the other one lets me use the stock upstream module, which is nice. It also lets me install my CD of choice (Argo in my case; saw you had a Flux guide too), and I just drop everything else to install, including MetalLB, into a set of app-of-apps Argo CD Applications. I find that having k3s handle only the absolute minimum to make the control plane HA is the easiest strategy for me, and then I let my CD system take it the rest of the way.
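
    The manifest-drop pattern described here can be sketched like this (a hedged sketch: /tmp stands in for the real path /var/lib/rancher/k3s/server/manifests, and the chart/repo values are illustrative):

    ```shell
    # k3s auto-applies any manifest placed in its server manifests directory,
    # so configuration management only has to copy a HelmChart CRD there.
    mkdir -p /tmp/k3s-manifests
    printf '%s\n' \
      'apiVersion: helm.cattle.io/v1' \
      'kind: HelmChart' \
      'metadata:' \
      '  name: kube-vip' \
      '  namespace: kube-system' \
      'spec:' \
      '  repo: https://kube-vip.github.io/helm-charts' \
      '  chart: kube-vip' \
      '  targetNamespace: kube-system' > /tmp/k3s-manifests/kube-vip.yaml
    # k3s's deploy controller would pick this file up and install the chart.
    grep -c 'kind: HelmChart' /tmp/k3s-manifests/kube-vip.yaml
    ```

    The appeal of this design is that the playbook only copies files; the cluster itself reconciles them, so no Helm binary is needed on the nodes.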

  • @jeffrisdon2803
    @jeffrisdon2803 2 months ago

    Hi Tim, I love your videos! Thanks for taking the time to share your knowledge! I've been running the playbook with all.yml in Ansible and get this error:
    The offending line appears to be:
    ---
    k3s_version: v1.31.0+k3s1
    ^ here
    any ideas? thank you!

  • @rickhernandez2114
    @rickhernandez2114 2 years ago

    dude!!
    I'm using Rocky Linux and this installed like an absolute dream.
    I'm ready to stop having all these pets in my homelab. Time for the cattle.
    Thank you

  • @MartinHiggs84
    @MartinHiggs84 8 months ago

    Watching again 😊 Any chance of one with Rancher/Longhorn added?

  • @islameldemery
    @islameldemery 2 years ago +1

    It just worked the first time! Complete awesomeness!!

    • @TechnoTim
      @TechnoTim  2 years ago

      That's awesome news!

  • @thiagobarrichelo
    @thiagobarrichelo 2 years ago

    Hi Tim, thanks for sharing, definitely very helpful and great work there! Just forked your repo, as I need my CNI to be Calico instead of Flannel. Thanks a lot!

  • @declanmcardle
    @declanmcardle 1 year ago +1

    @1:37 I see Hellmann's are now doing thermal paste...

  • @whh-bu7nj
    @whh-bu7nj 4 months ago

    This process works great. However, the issue I ran into is when a node dies: how do I add a new master/worker node into the cluster to replace the dead node/PC? I didn't have luck with it.

  • @diamantin55
    @diamantin55 2 years ago +2

    Great video as usual Tim. "I love open source". It would be great if you added Longhorn support to that script. Additionally, it would be great if you could do a video on how to migrate a Docker install that already has some data on a local volume to Kubernetes...
    Thanks man for your videos!

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Thank you! I’ve considered adding Longhorn and Rancher to the script but many may not need it. My other videos show how to install these with a few commands! Will consider it in the future!

  • @sean1334
    @sean1334 2 years ago +3

    How long did it take to run the entire playbook? My 5-node cluster has been stuck on "Enable and check K3s service" for 25 minutes now, and I'm wondering if something is going wrong.
    Edit: the default disk size was like 1.9GB and I ran out of space on the 3 masters. Trying it again.
    Edit again: resizing the disk worked!

    • @TechnoTim
      @TechnoTim  2 years ago

      Should take no more than 2 minutes on normal hardware, if that.

    • @StevenMcCurdy
      @StevenMcCurdy 2 years ago +1

      I had this too. I hadn’t changed the flannel interface from eth0 in all.yml so the nodes couldn’t communicate. I did an ‘ip a’ on my servers and saw they were ens18.

    • @rossco7356
      @rossco7356 2 years ago

      @@StevenMcCurdy HERO
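
      StevenMcCurdy's fix can be sketched like this ("ens18" is just what his Proxmox VMs used; run `ip a` on a node to find your own NIC name):

      ```shell
      # The playbook's flannel_iface default (eth0) must match the node's
      # real network interface, or the nodes cannot talk to each other.
      printf 'flannel_iface: "eth0"\n' > /tmp/all.yml  # stands in for group_vars/all.yml
      sed -i 's/"eth0"/"ens18"/' /tmp/all.yml          # point flannel at the real NIC
      cat /tmp/all.yml
      ```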

  • @MrToup
    @MrToup 2 years ago

    Thanks for the video. I started the journey with Kubernetes for my homelab thanks to your videos.
    I ended up with the same result: having it automated. I use the pretty good template from k8s-at-home. They have it all set up, including sops and flux.

  • @JhonnyXpress
    @JhonnyXpress 1 year ago

    I’m thinking of using this model, but for 2 mini servers

  • @meroxdev
    @meroxdev 10 months ago +1

    Is it possible to run Longhorn in an LXC container?

  • @emilhuseynli
    @emilhuseynli 1 year ago

    Hi, first of all, thanks a lot for such a great tutorial! Can you please elaborate on why the netaddr dependency is needed? Where exactly is it used?

  • @junejuan8561
    @junejuan8561 2 years ago +1

    Wow! More solid content! Thank you very much.
    Is it possible to use RKE2 instead of k3s?

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Not with this playbook, but one might exist

  • @roboto_
    @roboto_ 2 years ago

    Thanks so much for doing this, I started working on this exact problem like a year ago but had to shelve it because I didn't have time anymore :( thanks so much!!!!!!!

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you! Happy to help!

  • @motu1337
    @motu1337 2 years ago +2

    Hey Tim, awesome guide and repo. This helped expand my Ansible knowledge and produce a useful outcome of a k3s cluster with which I can mess around. Question for you, when I deploy Rancher using Helm (following another guide of yours, thanks) it doesn't seem to be accessible externally. Is this a MetalLB and Rancher conflict? Any guides I can look to that would help me resolve this? Thanks!

    • @TechnoTim
      @TechnoTim  2 years ago +1

      Hi. Thanks! It shouldn’t be, as long as you disabled the service load balancer using the arg

    • @TwoThreeXray
      @TwoThreeXray 2 years ago

      Did you happen to find a way to expose rancher? I am currently trying to figure this out as well

    • @motu1337
      @motu1337 2 years ago +1

      @Tristen I was able to get it exposed using a MetalLB address by running "kubectl expose deployment rancher -n cattle-system --type=LoadBalancer --name=rancher-lb --port=443" that should work for you too assuming you've already got rancher installed and it's running in the cattle-system namespace. I found this on another guide of Tim's on installing Rancher.

    • @TwoThreeXray
      @TwoThreeXray 2 years ago +1

      @@motu1337 I think I just came across the guide you used! lol
      Thank you sir for the help! That worked!

    • @therus000
      @therus000 2 years ago

      @@motu1337 Does it work without Traefik?

  • @willrnsantana
    @willrnsantana 1 year ago

    I gotta take some time to debug this for my use case, as the kube-vip LB is not working on my ODROID-C2 (Armbian arm64) cluster. But thanks for the hard work to put it all together.

  • @roeidalm
    @roeidalm 2 years ago

    Great video! Thanks!
    About deploying Traefik: can you explain or point to an article that shows how to set up Traefik to work with k3s?

  • @xandercode
    @xandercode 2 years ago

    12:50 "Super, super awesome" - Tim's smile says it all. 😁👍 Well done, just downloaded it to try out now. Thanks Tim

    • @TechnoTim
      @TechnoTim  2 years ago

      Thank you so much! I am glad you liked it. Let me know how it works out!

  • @psyman9780
    @psyman9780 2 years ago +1

    Would the "Configuring Traefik 2 Ingress for Kubernetes" doc page be preferred for getting Traefik going? Just curious about the whole MetalLB IP configuration in traefik-chart-values.yml. The comment says to set it to the MetalLB IP address, but I'm not sure if that means the "apiserver_endpoint" or something else, because using that IP doesn't work. It throws an error about it not being allowed and being unable to assign an IP.

    • @keanu4773
      @keanu4773 2 years ago +1

      Currently stuck on the same problem at the moment. Can't figure out what the MetalLB IP is meant to be to get Traefik working.

    • @psyman9780
      @psyman9780 2 years ago +1

      @@keanu4773 Let me know if you figure it out! It seems like it could be just a static IP for Traefik or something i.e., setting it to something besides the MetalLB IP makes it work and assigns it that IP specifically. But I'm a bit behind the curve on whether or not that's the correct thing to do.

    • @K34nuT
      @K34nuT 2 years ago

      @@psyman9780 I did try that myself, but didn't manage to get it working!

    • @TechnoTim
      @TechnoTim  2 years ago +2

      Yes, that's where you start, and the MetalLB IP is one from the range that gets created during k3s setup with my script. You define this range, but you will need to pick one in that range for Traefik to use!
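
      What Tim describes can be sketched like this (a hedged sketch: the address is illustrative; pick any unused IP inside your configured MetalLB range, not the kube-vip apiserver_endpoint):

      ```shell
      # traefik-chart-values.yml pins Traefik's LoadBalancer service to one
      # free address from inside metal_lb_ip_range (example value below):
      printf '%s\n' \
        'service:' \
        '  spec:' \
        '    loadBalancerIP: 192.168.30.80' > /tmp/traefik-chart-values.yml
      grep -o 'loadBalancerIP: [0-9.]*' /tmp/traefik-chart-values.yml
      ```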

  • @coletraintechgames2932
    @coletraintechgames2932 2 years ago +1

    DUDE! I have not watched this video... But this seems exactly like a video I have needed forever... 4-ev-er.
    You're my boy, Blue!

  • @l0gic23
    @l0gic23 2 years ago +1

    Micro Center road trip... Who's with me?

  • @kevinyu9934
    @kevinyu9934 2 years ago

    There is another project called k0s, which also provides an option to provision an HA cluster. Feel free to check it out.

    • @TechnoTim
      @TechnoTim  2 years ago

      Thanks! Looks awesome although I am trying to stick to the base k3s with minimal additions. This playbook automates the same thing I was doing to install k3s manually rather than choose an entirely different stack. :)

  • @Brainpitcher
    @Brainpitcher 2 years ago +1

    Have you managed to deploy the Rancher web UI on it?

  • @subzizo091
    @subzizo091 1 year ago

    Thanks for the great video content! Does this setup work with VMware Workstation, and if it does, what parameters should be changed?

  • @MoviesAndTraillersTZ
    @MoviesAndTraillersTZ 1 year ago +1

    Hardware Haven sent me

  • @prabhujeeva2228
    @prabhujeeva2228 2 years ago

    Awesome, Tim! Thanks for automating the entire process.

    • @TechnoTim
      @TechnoTim  2 years ago

      Glad it was helpful!

  • @northcode_
    @northcode_ 2 years ago

    Hey Tim. I've been looking for a while now. Are there any load balancers for k3s other than Klipper (svclb) that work over a layer 3 VPN like WireGuard?
    I've tried getting both MetalLB and kube-router working, but they won't route between nodes that are only accessible across the VPN. Probably because there's no layer 2 connection between the nodes, only layer 3.
    I'd love it if there were some way to get MetalLB or some other LB working that can assign VPN-internal addresses to services.
    I'm working around it now with Klipper LB, just using different ports and network policies for my internal services, but it's not optimal.
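    MetalLB's layer 2 mode needs the nodes to share a broadcast domain, which a layer 3 VPN doesn't provide; its BGP mode can work over routed links if you have a router or route server to peer with. A sketch in MetalLB's older ConfigMap format, with the peer address, ASNs, and pool all as placeholders (I haven't verified this across WireGuard specifically):

    ```yaml
    # MetalLB BGP-mode sketch (older ConfigMap format; newer releases
    # use IPAddressPool/BGPPeer CRDs). Peer address, ASNs, and the
    # service pool are placeholders for your own routed network.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 10.8.0.1
          peer-asn: 64500
          my-asn: 64501
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 10.99.0.0/24
    ```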

  • @zenmaster24
    @zenmaster24 2 years ago +3

    The next level is using Ansible to provision the nodes in Proxmox and automatically configure them as master or worker nodes.
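    A rough sketch of that idea, using the community.general.proxmox_kvm module to clone node VMs from a template; the host, credentials, template, and node names are all placeholders:

    ```yaml
    # Sketch: clone one VM per planned k3s node from a Proxmox
    # template. All hosts, credentials, and names are placeholders.
    - name: Provision k3s node VMs on Proxmox
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Clone a VM for each node
          community.general.proxmox_kvm:
            api_host: pve.example.com
            api_user: root@pam
            api_password: "{{ proxmox_password }}"
            node: pve1
            clone: ubuntu-cloudinit-template
            name: "k3s-{{ item }}"
            state: present
          loop:
            - master-01
            - master-02
            - master-03
            - node-01
            - node-02
    ```

    From there the new VM IPs could feed straight into the k3s playbook's inventory.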

  • @ryanarnold2293
    @ryanarnold2293 2 years ago

    Thanks Tim! I was able to get my cluster up and running easily with this. Question: I installed Rancher and now need to access the UI. How can I configure the NGINX ingress to route to the Rancher UI?
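    One way to do that is a standard Ingress pointing at the rancher service in cattle-system. This is a sketch: the hostname and ingress class are placeholders, and Rancher's Helm chart can also create an Ingress for you via its hostname value.

    ```yaml
    # Sketch: route an NGINX ingress host to the Rancher UI service.
    # Hostname and ingressClassName are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: rancher
      namespace: cattle-system
    spec:
      ingressClassName: nginx
      rules:
        - host: rancher.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: rancher
                    port:
                      number: 80
    ```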

  • @yourpcmd
    @yourpcmd 2 years ago

    Tim, I have a question or two. First, your 1U Supermicro servers aren't available anymore. Can you recommend a similar server (1U, 4-bay)? I'll need to populate it with four 4-8 TB drives. Secondly, and this is crucial, using Proxmox (which I love), how would one install it on an SSD and have 2 additional HDDs for all the VMs? There is no video out there that I have found that goes over this. Please consider doing one. Bonus question: in the scenario above (SSD + HDDs), how would one back up not only the Proxmox drive but all the VM data? Thanks.

    • @TechnoTim
      @TechnoTim  2 years ago

      I have many recommendations on my kit site. kit.co/TechnoTim

  • @gustavoganso
    @gustavoganso 1 year ago

    Will you do a follow-up video on how to set up Rancher on this freshly deployed k3s cluster without the integrated Traefik? Your "High Availability Rancher on Kubernetes" video misses some details, as far as I can tell 🙂

    • @TechnoTim
      @TechnoTim  1 year ago

      I don't, unfortunately; that video should cover it! Be sure to use the documentation too when following that video!

  • @jrdwiz
    @jrdwiz 1 year ago

    Hi Tim, is there an easy way to add more master nodes with etcd later on? Thanks so much.

  • @enkaskal
    @enkaskal 1 year ago

    Thanks for the vid, and I appreciate you publishing your repo! :) It was very helpful, and I was able to use it along with the upstream k3s-ansible, Lempa's vid, and the k3s docs to pull it all apart, figure it out, and get my own k3s setup codified. However, I skipped all the MetalLB parts, since I found it trivial to get kube-vip to work as the load balancer for both the control plane and services. Curious as to what you got caught up on?

    • @chrisa.1740
      @chrisa.1740 1 year ago

      Would you mind sharing your setup that uses only kube-vip? I would like to compare it with this one, on my way to modifying the Ansible playbook to use Traefik for these tasks if possible.

    • @enkaskal
      @enkaskal 4 months ago

      @chrisa.1740 I don't know why YouTube keeps insta-deleting my replies :/ Doing some GitHub cleanup today, I made my main infrastructure repo public, so I'm deleting the old sample repo I made for you. It looks like YouTube deleted my previous comment (probably because it had a link in it), but just in case you're still using my code for reference, it can be found on my GitHub at enkaskal/range/tree/master/ansible/roles/k3s
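      For reference, the Service side of a kube-vip-only setup usually means enabling kube-vip's services controller (the svc_enable env var on its DaemonSet) and installing the kube-vip cloud provider, which reads its address pool from a ConfigMap. A sketch, with the range as a placeholder for your own network:

      ```yaml
      # Sketch: address pool for the kube-vip cloud provider, which
      # hands out Service IPs when kube-vip runs with svc_enable=true.
      # The range is a placeholder.
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: kubevip
        namespace: kube-system
      data:
        range-global: 192.168.30.80-192.168.30.90
      ```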

  • @ElliotWeishaar
    @ElliotWeishaar 1 year ago

    Love the video Tim. I'm struggling to get this up and running on 5 ubuntu 22.04 machines. I've noticed that the args in your video don't match what's in your repo. Any reason for the change or are the original args listed anywhere? Wasn't sure if I should open an issue on GH or not.

    • @TechnoTim
      @TechnoTim  1 year ago +1

      The default args in the repo should work! If you want to see what was used in the video, just look at the first few commits; however, the latest repo is what you want!

    • @ElliotWeishaar
      @ElliotWeishaar 1 year ago

      @@TechnoTim Thanks! I think I found my issue. I'm running ansible from an RPI 4, which was running Python 3.7 and ansible core 2.11.5. I think that 2.11.5 is not new enough for split to work. I'm upgrading and hopefully that will fix it!

  • @elchupabara
    @elchupabara 1 year ago

    The sample group vars file doesn't match the video. For example, "kubelet-arg node-status-update-frequency=5s" is missing.

  • @inversemetric
    @inversemetric 1 year ago

    Welp, you've done it now, Tim. Great job!

  • @webwarriorc4683
    @webwarriorc4683 2 years ago

    Yay! This video helped me learn Ansible. It feels really good to make everything automated XD

  • @TheJimNicholson
    @TheJimNicholson 2 years ago

    Why did you go with MetalLB over Klipper, which comes with k3s? Was it just preference, or were there specific reasons you chose to deploy your own load balancer?

    • @TechnoTim
      @TechnoTim  2 years ago

      I prefer MetalLB over klipper. I've used it many times outside of k3s so it's familiar and battle tested.

  • @rubenkhachaturov3309
    @rubenkhachaturov3309 6 months ago

    This is fantastic! Thank you so much!!!👍

  • @hb-cloud
    @hb-cloud 2 years ago

    This is great content, but could you tell me how I could create a k3s cluster with the Cilium CNI instead of Flannel using this setup?
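    Roughly: tell k3s not to ship its bundled Flannel, then install Cilium yourself once the cluster is up. A sketch of the group vars change; the flags are standard k3s server options, though whether extra_server_args is the right variable depends on the playbook version:

    ```yaml
    # Sketch (group_vars/all.yml): leave CNI installation to you.
    # --flannel-backend=none and --disable-network-policy are
    # standard k3s server flags.
    extra_server_args: >-
      --flannel-backend=none
      --disable-network-policy
    ```

    After the playbook runs, install Cilium with its CLI or Helm chart before scheduling workloads.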

  • @CTimmerman
    @CTimmerman 1 year ago

    HA = High Availability, presumably via automatic failover.
    k3s = a lightweight Kubernetes distribution, leaner and quicker to stand up because it strips out optional drivers and other bloat.
    k8s = Kubernetes; Ansible for container management?
    Ansible = a YAML-based script runner to install and configure software. I hear Terraform is better because it figures out execution order on its own.

  • @comprofix
    @comprofix 2 years ago

    Brilliant! Now we just need a role to install and expose the Kubernetes Dashboard for LAN access, and maybe Rancher as well?

    • @TechnoTim
      @TechnoTim  2 years ago +1

      or just do it with helm/kubectl!

    • @comprofix
      @comprofix 2 years ago +1

      @@TechnoTim Yeah, I followed the steps on the Rancher site and got it installed. I had to use kubectl to "edit" the Rancher services after they came up, to change them from ClusterIP to LoadBalancer. But it's all working :)
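      The same change can also be made non-interactively; a sketch, assuming a standard Rancher install in the cattle-system namespace:

      ```shell
      # Patch the service type instead of using `kubectl edit`.
      # Namespace and service name assume a default Rancher install.
      kubectl -n cattle-system patch svc rancher \
        -p '{"spec":{"type":"LoadBalancer"}}'
      ```

      MetalLB should then assign the service an address from its pool.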

  • @tommsla123
    @tommsla123 1 year ago

    Helpful video, Tim. Thank you so much. I have a question: is this prod-ready or for testing only? Should I adjust some params for a prod deployment?

  • @kaelwang1251
    @kaelwang1251 2 years ago

    WOW, you make really good content, detailed and well explained. Thanks!

  • @SuperMati9999
    @SuperMati9999 2 years ago

    First of all, thank you for the video and the code. I have one question: I was trying to set up HA Kubernetes until I saw this video, using some VPSes. Do I have to create a network between them to run this Ansible playbook? That's what I did when configuring HA Kubernetes before. On the other hand, is there a way to do it without a VPN? If not, what VPN would you recommend? I was using WireGuard with 2 servers (one as master, and the other nodes as clients), because I need to create a virtual IP, and I think that's only possible within a /24 network. Thanks for reading; sorry if you don't understand my problem at all.

  • @phantom6653
    @phantom6653 1 year ago

    So after some time, some aggravation, a bunch of dependencies, a lot of googling, and a bunch of extra steps, I finally got this up and running on my 3 Pi 4s.
    Thanks for the Git repo.
    ----------------------old below-------------------
    I tried this setup on 3 Pi 4s and ran into an issue.
    Original message: No filter named 'split' found.
    Come to find out, Stack Overflow says this is a missing Ansible filter.
    So I reinstalled Ansible, etc., then got a Python import error.
    I really wanted this to work, but this does not work with Pi 4s on Ubuntu. I will try Pi OS 64-bit, but I am unsure that will work either.

  • @Josef-K
    @Josef-K 1 year ago

    How do I set this up with an FQDN in place of an IP address for joining a cluster off-network?
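    The usual approach is to add the FQDN as a TLS SAN on the k3s servers, so the API certificate is valid for that name, and point a DNS record at the cluster VIP. A sketch: --tls-san is a standard k3s flag, while the variable name assumes the playbook's group_vars/all.yml and the hostname is a placeholder.

    ```yaml
    # Sketch (group_vars/all.yml): make the API server cert valid for
    # your FQDN; create a DNS record pointing that name at the
    # kube-vip apiserver_endpoint.
    extra_server_args: >-
      --tls-san k3s.example.com
    ```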

  • @ameeno
    @ameeno 2 years ago

    Can you add ZeroTier or WireGuard clustering? Also, how about MetalLB external address detection / internal address detection for nodes?
    Finally, would it be possible to have pods run on the control plane?
    I want an HA LB but don't want to lose worker nodes.

  • @jp_baril
    @jp_baril 1 year ago

    Hi Tim, I discovered your channel a year ago, have been following it since, and have basically watched all your videos. So well explained every time!
    If I were to try your Ansible script to test things out at a small scale at first, would the script work if I put the same few IP addresses both as masters and as workers?
    (I know it's not best practice.)
    Also, one thing I always notice in your videos is how many IP addresses you have, more precisely all the different subnets you use. It would be very useful to get a video on the segmentation logic you use, because when deploying this script I really don't have a clue which IPs (and ranges) to use so that they don't interfere with other devices, VMs, services, etc., and so that I don't have to redo the whole deployment in the future.
    Thank you.

    • @TechnoTim
      @TechnoTim  1 year ago +1

      Thank you! Yes, you can have all nodes use all roles! About networks: yes, coming soon!

    • @jp_baril
      @jp_baril 1 year ago

      @@TechnoTim Thank you for your answer! Can't wait to watch it! Keep up the good work!

  • @TheGiorgioRdz
    @TheGiorgioRdz 2 years ago +1

    Hi Tim, you might want to check out XanManning's Ansible playbook. It's kinda the same but with MetalLB.

    • @TechnoTim
      @TechnoTim  2 years ago

      Mine has MetalLB!

    • @TheGiorgioRdz
      @TheGiorgioRdz 2 years ago

      @@TechnoTim Hey Tim, sorry for that comment I made. I meant to say that you should check out XanManning's repo. It can be used as an Ansible role, and it also copies your kubeconfig onto your machine at the end of the installation.