The Simplest Kubernetes Deployment? K3S, HA, Loadbalancer! Kubernetes At Home: Part 3

  • Published: 27 Jul 2024
  • The third video in the 7-part mini-series detailing how to configure Kubernetes in your homelab.
    This video walks through a K3S script that automatically deploys a highly available Kubernetes cluster. Simply configure a few variables and run the script to get a highly available, load-balanced K3S cluster in a few minutes.
    Kubernetes Script:
    github.com/JamesTurland/JimsG...
    Recommended Hardware: github.com/JamesTurland/JimsG...
    Discord: / discord
    Twitter: / jimsgarage_
    Reddit: / jims-garage
    GitHub: github.com/JamesTurland/JimsG...
    00:00 - Introduction to Kubernetes Deployment Script
    01:30 - Script Dependencies
    03:09 - Script Variables
    08:44 - How the Script Works
    19:34 - Preparing to Run Script
    21:20 - Running the Script & Walkthrough
    25:15 - Checking Nginx Works
    25:53 - Outro & Next Videos
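
    A note on how the one-click approach works: the script exposes a small block of variables at the top; you edit those and run it. A minimal sketch of that block - KVVERSION and k3sVersion are variable names that appear in the real script, while the remaining names and all addresses here are illustrative, so check the repo for the actual file:

      #!/bin/bash
      # Release tags to install (check the k3s and kube-vip release pages)
      k3sVersion="v1.26.10+k3s2"
      KVVERSION="v0.6.3"

      # SSH user and key pair baked into the cloud-init template
      user="ubuntu"
      certName="id_rsa"

      # Node IPs: 3 masters for an etcd quorum, 2 workers
      masters=("192.168.3.21" "192.168.3.22" "192.168.3.23")
      workers=("192.168.3.24" "192.168.3.25")

      vip="192.168.3.50"                    # virtual IP shared by the masters (kube-vip)
      lbrange="192.168.3.60-192.168.3.80"   # pool handed out to LoadBalancer services
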
  • Science

Comments • 242

  • @draukuxan1081
    @draukuxan1081 9 months ago +22

    Your step-by-step walkthrough of the entire script is something more creators should do, never apologize for that.
    This 'alternate' one-click method does diverge from most of the other published guides out there, but I like it a lot. The fact that you created your own script to accomplish what other tools do, but in a much simpler manner, is great. I love that there are so many ways of accomplishing something similar. Finding and creating these methods is a big part of the 'fun' when it comes to setting up our home labs.
    Keep up the fantastic content, this series is amazing!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, really appreciate your feedback

    • @ganon4
      @ganon4 7 months ago

      Like he said, thank you for explaining everything, and putting in timestamps is awesome so everyone is free to jump to where they want.
      I had some problems running this and configuring the template + VM, but the fact that you describe everything helped me a lot to get it to the point where it's actually working :)
      On to the next video! Thank you again

  • @kearneyIT
    @kearneyIT 9 months ago +18

    You should have a million views on your videos. Man, I love your no-BS tutorials and the time taken to explain what is doing what... love it. Please keep this awesome work up. YouTube needed someone just like you. Thank you.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      That's very kind, thanks. The algorithm is cruel...

  • @petersimmons7833
    @petersimmons7833 17 days ago +1

    Jim, thanks a ton. This worked with VERY minimal changes to your script. I look forward to more. I created my VMs with Terraform so I have the luxury of blowing it all away and starting from scratch to make sure it does what I want. It takes a lot of planning to get this all going smoothly, but you really have put together some great tutorials in this series.

    • @Jims-Garage
      @Jims-Garage  17 days ago

      @@petersimmons7833 thanks 👍 check out my RKE2 with Ansible, that's more reliable.

  • @tomrogo87
    @tomrogo87 2 months ago +1

    Had to divide the script after step 7 in order to get it running on my setup, but in the end it did, so thank you very much. Too much detail cannot hurt here in understanding what's happening. Great work!

  • @comosaycomosah
    @comosaycomosah 2 months ago +1

    damn son, your videos are so in depth, appreciate you

    • @Jims-Garage
      @Jims-Garage  2 months ago

      Thanks, hopefully it's helpful. I have a new Kubernetes deployment with Ansible that I recommend over this (albeit this is still useful for understanding what's going on).

  • @jurie911
    @jurie911 9 months ago +2

    It is great to see that this channel is gaining popularity! I would like to request the addition of traefik, longhorn, and possibly vxlan wireguard connectivity to enable remote workers to connect with each other. Thank you!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks 👍 Traefik will be in the next video for kubernetes, as well as longhorn and fleet gitops (see video 1 for overview). Eventually we'll replicate our docker host in Kubernetes.

  • @ninja2807
    @ninja2807 9 months ago +3

    Great video... You are one of the few tech YouTubers that is actually teaching something, and at a pace where anyone can understand. I hope your channel grows and you get lots of subscribers.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, ninja. Appreciate your support.

  • @jainabaceesay5147
    @jainabaceesay5147 5 months ago +2

    Hello Jim. I am trying to break into a sysadmin job. You have been really helping newcomers like me. Thank you Jim

    • @Jims-Garage
      @Jims-Garage  5 months ago

      Thanks, appreciate the feedback. Good luck with pursuing your new role!

  • @Bealafolle
    @Bealafolle 4 months ago +2

    I can't thank you enough for putting this out there. Between your channel and a choice few others, I have really levelled up, and I sincerely appreciate the way that you present it without patronising or hyping anything. Just crisp, straight-to-the-point help & guidance in very complicated topics that leaves enough room for us to take it forward ourselves. I can't thank you enough for your deployment boost :)

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Wow, thank you! Really appreciate the feedback, I'm glad you have found it useful. Please consider hitting the subscribe button :)

  • @rmorris1968
    @rmorris1968 5 months ago +1

    You are the man!!! I have broken down so many deployments trying to get k3s up and running. It worked on the first try with your script. Thank you so much for a rock-solid script and tutorial.

  • @darksidephotography6768
    @darksidephotography6768 9 months ago +5

    Some of the best tech content on YT, and love the release cycle
    Take care not to burn out though 👍

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, really appreciate the feedback

  • @jonathanquinn3967
    @jonathanquinn3967 4 months ago +1

    Great work, thanks Jim! This worked first try. Appreciate your knowledge-sharing techniques. I appreciate the walk-through of the script. You have just the right cadence for me. Thanks again!!

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Thanks, really appreciate the feedback (hit that subscribe 😉)

  • @thespicehoarder
    @thespicehoarder 6 months ago +2

    This is amazing! I can't tell you how many times I've seen a "beginner's k3s tutorial" only to go to their repo to discover it's now this monstrous overcomplicated monolith. This is just what I needed!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks, really appreciate the feedback

  • @dan_d278
    @dan_d278 6 months ago +1

    Great video Jim. I really appreciate the time you've taken to explain every line of the script, and every output in the shell. Too many times other creators skip over vital parts and it ends up being 'monkey see, monkey do' which doesn't help my learning. I really get the feel that you've studied this thoroughly and make sure to understand it fully before teaching it out to others, which is very reassuring!

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      Really appreciate your feedback, I'm glad it was helpful! I too found that most tech channels simply gave you a list of actions without providing the necessary knowledge to effectively manage and run it.

  • @wotsthestory
    @wotsthestory 9 months ago +2

    Two minutes from running the script on my pre-configured nodes, and I was desperately waiting for this next step. Keep it up! EDIT: Meant to add, it worked first time!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Awesome! Well you're in luck, rancher just dropped 😎

  • @kf4bzt
    @kf4bzt 9 months ago +3

    Thank you so much. I have been needing this series for a while now. You rock sir. Thanks for all your hard work.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Most welcome, I hope it works for you!

  • @georgebobolas6363
    @georgebobolas6363 9 months ago +3

    1-click script, snapshots, and you don't have to worry about typing all those commands if you mess something up. Awesome content as always! Your content is on par with, if not better than, channels with thousands more subscribers.
    Keep up the good work and may the YouTube algorithm favor your views :D

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Really appreciate your feedback

  • @techdad6135
    @techdad6135 9 months ago +3

    Can't wait to give this a go!

  • @ve2tax
    @ve2tax 7 months ago +1

    Great video Jim! Thanks for sharing your knowledge and spending the time to develop and test that script. I've learned a lot!

    • @Jims-Garage
      @Jims-Garage  7 months ago

      You're welcome, thanks for the feedback

  • @marcelorruiz
    @marcelorruiz 9 months ago +2

    Jim, as I stated before, amazing content, well documented, and I appreciate the details and explanations of exactly what you have done. And yes, ran your script and we're off to the races! Keep it up mate, Cheers! 🚀

    • @Jims-Garage
      @Jims-Garage  9 months ago +2

      Amazing, have been waiting on tenterhooks all evening for someone to confirm!

  • @terjemoen8193
    @terjemoen8193 7 months ago +1

    This was awesome! Great script and tut - keep up the good work!

    • @Jims-Garage
      @Jims-Garage  7 months ago

      Thanks for the feedback

  • @DavidDavisL
    @DavidDavisL 6 months ago +1

    Great work on the script. The only item I questioned while watching the video was the hard coded certificate name - but I see the latest update of the script addresses that issue. I will be coming back to this when I am ready to give the install a try. Thank you!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks for the feedback, appreciated. Let me know how you get on.

  • @SOHOLAB
    @SOHOLAB 2 months ago +1

    Thanks a lot for your perfect work! Everything was set up and running; I just used the Ubuntu 24.04 cloud image and changed the version of k3s in your script

    • @Jims-Garage
      @Jims-Garage  2 months ago

      Great, that's good to hear. Thanks

  • @gmoore48
    @gmoore48 9 months ago +2

    Working perfectly. Thanks Jim!

  • @georgebobolas6363
    @georgebobolas6363 7 months ago +1

    Everything working flawlessly! Apart from me forgetting to increase the VM disk size of the templates from 2GB and wondering why those 20s to bring the pods online felt so long 😅 The script worked on first try, so thanks for the 1-click install solution!

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 9 months ago +3

    Great video 👍, will try it later, will rewatch to make sure I understand everything.

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Great, reach out if you have any issues.

    • @rudypieplenbosch6752
      @rudypieplenbosch6752 9 months ago

      @@Jims-Garage Hi Jim, I'm running this at this moment. I seemed to get stuck at first on copying the public key into that user's .ssh directory. It got into an endless loop, something with localhost 127.0.0.1 etc., saying something along the lines of: dial tcp 127.0.0.1:8080: connect: connection refused. So I copied those keys by hand and now your script seems to run OK 👍. So just to let you know, on my host ssh wasn't installed at first, but after installing it I got this endless loop error anyway.
      One problem however: my load balancer doesn't get an IP address, it's in pending status after 17 minutes? It does have a cluster IP

  • @MrWedge22
    @MrWedge22 6 months ago +1

    This worked first time, great work on the script and the video.

  • @phil2768
    @phil2768 6 months ago +1

    We need more people like Jim!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Jim-GPT in the works... 😂

  • @alexplane3279
    @alexplane3279 9 months ago +1

    Thanks Jim, it works fine for me, very good stuff

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Awesome, glad to hear it.

  • @CHLEE-ou6ub
    @CHLEE-ou6ub 7 months ago +2

    It works !! Thanks Jim😊

  • @bartosznitkiewicz9611
    @bartosznitkiewicz9611 9 months ago +1

    Awesome job. Thank You. Time to spin up my test setup :)

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Have fun! Let me know if you have any issues.

  • @unmatal
    @unmatal 2 months ago +1

    Thank you for all your effort. I have created a K3S cluster with your script. I used the latest KVVERSION="v0.8.0" and k3sVersion="v1.29.4+k3s1", and it came out OK

  • @BonesMoses
    @BonesMoses 5 months ago

    This is a great idea. I've submitted a PR because it does a few things that could be considered impolite to the deployment system.

  • @anibalandrade754
    @anibalandrade754 7 months ago +1

    Great tutorial!!! Thanks and congrats!

  • @speppucci
    @speppucci 8 months ago +1

    Applause for the clarity. I installed the k3s cluster on the first run of ./k3s.sh on a mini PC where I already had Proxmox. Great.

    • @Jims-Garage
      @Jims-Garage  8 months ago

      Thanks, appreciate the feedback. Glad it worked first time.

  • @MatrikServices
    @MatrikServices 8 months ago +1

    I was looking for a good tutorial to set up a Kubernetes cluster, found your channel, and it's amazing. I started watching from the beginning of all your videos (because everything is interesting) and learned a lot. I like your style of clearly explaining every step in detail. And the fun thing is that everything you showed has worked for me so far! So, a big thank you for all your effort in sharing your knowledge with us. I'm looking forward to seeing the rest of your videos. I have now reached the Kubernetes part of your channel, everything works out of the box, and I have set up the HA k3s cluster successfully. One question left (hopefully not a stupid question): I don't want to have my computer up 24/7, so I want to shut down my Proxmox server at the end of the day. Because of the HA setup of the cluster over multiple nodes, is there a graceful way of stopping (or suspending) the cluster? And how do I start the cluster up again? I know that this is not the purpose of HA ;-) but for my homelab it is not a problem. Thanks in advance.

    • @Jims-Garage
      @Jims-Garage  8 months ago

      Thanks, I really appreciate your feedback. You can do this, but it's not straightforward. Have a look at 'draining' nodes. You want to drain nodes before you shut down the VM.
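
      For anyone searching for the commands, a minimal sketch of that drain/shutdown cycle, assuming a working kubectl context (the node name is a placeholder):

        # Gracefully evict pods from a node before powering it off
        kubectl drain k3s-worker-01 --ignore-daemonsets --delete-emptydir-data

        # ...shut down the VM, do maintenance, boot it again...

        # Allow pods to be scheduled on the node once more
        kubectl uncordon k3s-worker-01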

  • @KilSwitch0
    @KilSwitch0 9 months ago +3

    I like the video and ideas of the script. I would prefer to see this done via Ansible or Rancher deployment. :) Keep the good videos coming! I enjoy a new creator in the space.

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Thanks. I can understand Ansible, but why through Rancher? That would be a lot slower and possibly introduce human error.

  • @EricLRyder
    @EricLRyder 6 months ago +1

    Following this 2 months later and not noticing that there was no longer a Docker install really threw me for a loop. Then all the errors started happening because of keys. I was unaware that the PuTTYgen file format for the private key was not compatible. So I had to correct that and then re-run the script. Second time through and everything worked perfectly. Need more memory on my node though, pushing 90% now. Although I can likely remove the admin/installer VM now, so that will help. Thanks Jim!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Awesome, glad you got it working 😄

  • @angelgonzalez2379
    @angelgonzalez2379 9 months ago +1

    This looks great!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      I'm also putting together another script for longhorn

  • @simuman
    @simuman 4 months ago +1

    Great video as usual, thanks for the script. Had a bit of an issue on first run using Jammy image VMs, but found the issue was virtual HD space; ran it again after a reset and all went to plan. Apart from that, all worked well and it's up and running. Looking forward to following the next videos.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Great, thanks for the feedback ☺️

  • @cyberjohn44
    @cyberjohn44 9 months ago +2

    Good video and script.

  • @aliaghil1
    @aliaghil1 6 months ago +1

    I appreciate your great work; keep it up. ❤❤❤

  • @muhammadirfanmalik9712
    @muhammadirfanmalik9712 9 months ago +2

    Hi Jim, great video as always, I always look forward to your content. You elaborate and document things very well, which is very helpful, and kudos for that! As for this script: the first time it was a breeze, it didn't take much effort to set everything up with small changes. The second time, however, I tried lowering the k3s version as you mentioned in your Rancher video and it gave me a headache; it turned out the version I chose wasn't a stable release (v1.25.9). Finally it worked, albeit without the kube-vip load balancer, which I had to install separately. I think it's because every time I ran this script I noticed this error, 'unknown command "k3s-ha" for kubectl', related to setting up the context for the cluster, I guess.
    I will give this script another shot, mixing amd64 & arm64 architectures.
    Thanks for your great work and keep it coming :)

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks for commenting. If you make any amendments to get it working on other distros, let me know and I'll add them to the repo.

  • @JustinJ.
    @JustinJ. 9 months ago +2

    You're spoiling us mate!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Rancher video soon 🥳

  • @looper6120
    @looper6120 5 months ago +1

    I love the script, it's rock solid haha.

  • @dusty2445
    @dusty2445 4 months ago +1

    Excellent video!

  • @DamjanKumin
    @DamjanKumin 9 months ago +1

    Jim, preparing an update for your script - I would suggest not doing anything using docker but rather ctr! Also, I am enjoying your videos A LOT! For me it's the perfect layout of data! Bravo! Keep them coming!

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Thanks, I originally used CTR in the script but went with docker for familiarity with the audience. I agree that CTR is a better choice, look forward to seeing what you can do, thanks!

  • @briancrouch4389
    @briancrouch4389 3 months ago +1

    Good tutorial. I used the script, it worked perfectly; I even added more nodes in the script for grins.

    • @Jims-Garage
      @Jims-Garage  3 months ago

      Great! I'll be replacing this soon with Ansible and hopefully adopting cilium CNI

    • @BenSmithuk
      @BenSmithuk 1 month ago +1

      @@Jims-Garage I'm just at this stage - is it worth looking at a different video instead of running this script? The only bit I'm stuck on is how to do this across 3 Proxmox nodes in a cluster, as if I copy the id_rsa to home from Proxmox node 1 to node 2, it'll overwrite the existing id_rsa in the home directory?

    • @Jims-Garage
      @Jims-Garage  1 month ago +1

      @@BenSmithuk the script should still work but Ansible is a better option. The simplest way to overcome this issue is to create all machines on a single node and migrate to other nodes.

    • @BenSmithuk
      @BenSmithuk 1 month ago +1

      @@Jims-Garage thanks Jim! I assume I just return to part 4 if I follow one of your Ansible videos? Aye, I've set up 6 nodes on one node and was going to transfer them; I assume I just update the cloud-init SSH key on the transferred machine (or do you manually replace it in the VM)? Really appreciate you supporting us all! (Your videos are fantastically detailed btw)

    • @Jims-Garage
      @Jims-Garage  1 month ago +1

      @@BenSmithuk Ansible videos will create a new SSH key and copy it to your hosts, it will then deploy RKE2. You should then be able to run the longhorn RKE2 script

  • @wiesawpeche7273
    @wiesawpeche7273 9 months ago +4

    Just tricking the algorithm for your 10k 😉

  • @ekloc
    @ekloc 9 months ago +1

    Thank you very much for the great video and instructions! I had some issues with the non-standard SSH key names (I like to generate single-purpose SSH keys) but I modified the script slightly to deal with it. I have one outstanding issue with the LoadBalancer's external IP address being stuck in pending status, but hopefully I will be able to resolve it soon. Thanks again.

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Check to make sure all nodes are available, and make sure each VM has enough space.
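
      A quick sketch of those checks (run wherever your kubeconfig lives, plus on each VM):

        kubectl get nodes -o wide   # every node should report STATUS Ready
        kubectl get pods -A         # look for Pending or CrashLoopBackOff pods
        df -h /                     # on each VM: confirm the root disk isn't full
        free -h                     # ...and that there's memory headroom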

    • @ekloc
      @ekloc 9 months ago +1

      @@Jims-Garage I have it fixed now. I ran the installation script on Ubuntu 23.10 (mantic) and the script failed to install Docker. Running on 23.04 is fine

  • @danmcdaniel709
    @danmcdaniel709 6 months ago +1

    Nice!

  • @urzalukaskubicek9690
    @urzalukaskubicek9690 8 months ago +1

    I did it! First time running kubernetes for me :) Thanks for this minimalistic approach. That is exactly what I needed - no ansible and such, just run one bash script and that's it. Now let's see if I can break it :)

    • @Jims-Garage
      @Jims-Garage  8 months ago

      Thanks 👍 don't worry, if you haven't already you will do soon 😂

    • @urzalukaskubicek9690
      @urzalukaskubicek9690 8 months ago +1

      @@Jims-Garage Jim, sorry for the stupid question, but should the nginx container after this script be HA? I turned off the node where the nginx container is running (I just turned it off in Proxmox) and then there is no response. I assumed (wrongly?) that Kubernetes would notice and spin it up on another worker node..

    • @urzalukaskubicek9690
      @urzalukaskubicek9690 8 months ago +1

      Ah never mind, it's up! It just took longer than I expected..

    • @urzalukaskubicek9690
      @urzalukaskubicek9690 8 months ago +1

      I have another question though - when master one is down, kubectl doesn't work (nginx still running with two other masters and workers online), can I tell kubectl to connect to another master instead of the first one?

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@urzalukaskubicek9690 It's not HA in my setup, but it's self-healing. If you want HA, increase the replica count. At the moment it always tries to have 1 available. If 1 goes down due to a node being offline, it'll recreate it on another available node.
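
      A sketch of both options, assuming the deployment is named nginx in the nginx namespace as in the video:

        # One-off: run 3 replicas so a single node failure doesn't drop the service
        kubectl scale deployment nginx -n nginx --replicas=3

        # Or declaratively in the deployment manifest:
        #   spec:
        #     replicas: 3

        kubectl get pods -n nginx -o wide   # verify the pods land on different nodes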

  • @hazemturki
    @hazemturki 5 months ago +1

    This part needs to be updated

    • @Jims-Garage
      @Jims-Garage  5 months ago

      Thanks, yes the script has changed slightly but the video is still largely relevant. I will do another video when I shift to ansible and cilium

  • @cubespawn261
    @cubespawn261 1 month ago +1

    I think the channel and the content are great, you're a natural teacher! Keep up the good work.
    I AM learning, but I'll probably spend plenty of time in the corner with the pointy hat on before it's over.
    So far, my experience with the script was not exactly as smooth, mostly, I am sure, due to 1) impatience and 2) ignorance: i.e. I built the template machine, spun up the 5 nodes, loaded keys and the script onto the admin node... many mistakes were made, such as not expanding the drives, an oversight soon corrected, but not until some pain and confusion was traversed.
    But there was much more to come: at the conclusion of the script I get this from
    sudo kubectl get nodes
    NAME     STATUS   ROLES                       AGE   VERSION
    k3s-02   Ready    control-plane,etcd,master   54m   v1.26.10+k3s2
    k3s-04   Ready    <none>                      54m   v1.26.10+k3s2
    ks3-01   Ready    control-plane,etcd,master   54m   v1.26.10+k3s2
    no idea why nodes 3 and 5 never showed up... or what node 4 is up to...
    The script veers off at " First Node bootstrapped successfully!"
    next line:
    error: error validating "https://kube-vip.io/manifests/rbac.yaml":
    and then shirtloads of additional errors
    concluding with an endless supply of:
    E0603 04:37:06.149788 2326 memcache.go:265] couldn't get current server API group list: Get "https://192.168.1.151:6443/api?timeout=32s": tls: failed to verify certificate: x509: certificate signed by unknown authority
    Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
    I'll nuke and rebuild the nodes again, and perhaps walk through the script steps line by line on the relevant nodes (master/worker) to try and see what's going on, but I'm a little lost without a map at present.
    I'm sure I have some underlying configuration issue but, as usual, don't know where it's coming from yet. And I WILL figure it out, but not tonight ;-)
    cheers!
    -James

    • @Jims-Garage
      @Jims-Garage  1 month ago +1

      Thanks, it's worth trying the RKE2 Ansible deployment in a more recent video. It's a lot more robust in terms of waiting for items to complete.

    • @cubespawn261
      @cubespawn261 1 month ago +1

      @@Jims-Garage and the 5th try was successful, all caused by oversights on my part. The fact that you showed it COULD work kept me circling back on "what did I forget to do THIS time?" - I'll get there, but it's not too pretty ;-)

    • @Jims-Garage
      @Jims-Garage  1 month ago +1

      @@cubespawn261 That's awesome, kudos on persevering!

    • @cubespawn261
      @cubespawn261 1 month ago

      @@Jims-Garage yeah, we get-er-done, just not quickly (yet). The end goal is to deploy containers to simulate a fairly complex industrial control network, and THAT I have built, in the old conventional way, with each piece of hardware running one piece of code. Swarm-y containers are a much cooler way to do it and will probably result in much higher utilization of the compute hardware, so it's just good, clean fun! For efficiency.

  • @viggyprabhu
    @viggyprabhu 4 months ago +1

    It worked just perfectly (bumped kube-vip to v0.7.2 and k3s to v1.28.7+k3s1). Thank you for explaining with great clarity !!!

  • @superdownwards
    @superdownwards 6 months ago +1

    Thanks!

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Thanks for the donation, that's very kind. Be sure to hop onto discord if you haven't already.

  • @godelrt
    @godelrt 7 months ago +1

    Just did this on Ubuntu 22.04 VMs running in XCP-ng/Xen Orchestra and it all worked. Wow Jim, you da man! I assume that if I were to spin up another 5 VMs on a separate machine and add the IP addresses to the script, it would add those nodes as well. A word of caution to others: make sure you run the script from the home directory like Jim says; I originally ran it from a folder I made and the load balancer IPs didn't work

    • @Jims-Garage
      @Jims-Garage  7 months ago

      Yes, you can add more IPs and values to the arrays at the top part of the script and it should scale indefinitely.
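
      In other words, scaling out is an edit to the arrays near the top of the script; a sketch with an illustrative array name and addresses:

        # Before: 2 workers
        workers=("192.168.3.24" "192.168.3.25")

        # After: append the new VMs' IPs (they must be reachable over SSH
        # with the same user and key as the existing nodes)
        workers=("192.168.3.24" "192.168.3.25" "192.168.3.26" "192.168.3.27")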

  • @user-gs6jl3jp4i
    @user-gs6jl3jp4i 4 months ago +1

    Great script. The only recommendation I have is to delete the .kube directory if it exists before creating the nodes.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Thanks, good suggestion. Ansible coming soon

    • @jdratlif
      @jdratlif 4 months ago

      That was a problem for me as well. A failed install before left me with a partial kubeconfig that was merged and then it couldn't connect to the cluster. After that, things seemed to work.
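
      A sketch of that suggested cleanup before a re-run - note it throws away any existing kubeconfig on the admin machine, so only do this if no other clusters are merged into it:

        # Remove the stale kubeconfig left behind by a failed run;
        # the next run of the script writes a fresh one
        rm -rf ~/.kube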

  • @fedefede843
    @fedefede843 9 months ago +1

    Well done!
    I wish you could do it using Kubespray, and maybe MetalLB for the ingress balancing. A tip: try Semaphore in front of Kubespray to make it friendlier for those with no Ansible background.
    Good effort!

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Thanks for the tips! I've used MetalLB for a while, but now that Kube-VIP can do both HA and LB it makes sense to remove additional apps (IMO).

    • @fedefede843
      @fedefede843 9 months ago

      @@Jims-Garage Yes, in that matter you will still need something else to achieve HA if you use MetalLB for balancing service ingress. One thing I have never seen so far is a proper BGP configuration for MetalLB. Also, I just read that Kube-VIP supports it; I did not know that. That could be a nice advanced topic to touch on, given the absence of content.
      Cheers!

  • @nicoladellino8124
    @nicoladellino8124 9 months ago +2

    THX👏

  • @MichaelRinghusGertz
    @MichaelRinghusGertz 9 months ago +2

    If I want to expose the services I host in this k3s cluster, I will run into an issue: I can only port-forward port 80 to one IP, and not all the IPs that kube-vip will use... If you ask me, kube-vip isn't the best solution... I would have used the regular servicelb and then put a small Ubuntu server with nginx configured as a load balancer in front of the cluster. That way, I can port-forward port 80 on my router to that load balancer and be able to expose the services.
    I will test the script tomorrow, and let you know if it works.
    Besides that, you are making some really good videos, keep up the good work. Glad I found this channel.

    • @Jims-Garage
      @Jims-Garage  9 months ago +3

      Thanks. We'll be fixing that issue with Traefik reverse proxy in the near future.

    • @bluesquadron593
      @bluesquadron593 9 months ago +1

      @@Jims-Garage I am waiting for the Traefik video as well

  • @kenny45532
    @kenny45532 7 months ago +1

    I struggled with this for a couple of hours not realizing I was running out of space on all of my clones. Come to find out, you need to expand the filesystem size prior to making the template.

    • @Jims-Garage
      @Jims-Garage  7 months ago +1

      Yes, although I prefer to keep it small and expand to the VM's needs. Most of the time I use a VM for a very short period (typically testing).
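
      On Proxmox, growing a clone's disk is a single command on the host (the VM ID and disk name below are placeholders); Ubuntu cloud images will typically expand the filesystem to match on the next boot:

        # Grow VM 101's first SCSI disk by 10 GiB
        qm resize 101 scsi0 +10G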

  • @SamuelGarcia-oc9og
    @SamuelGarcia-oc9og 5 months ago +1

    I followed all your videos and everything is working great.
    I wanted to ask how you can edit the lbrange after everything is working?
    Thank you.

    • @Jims-Garage
      @Jims-Garage  5 months ago +1

      Great 👍 edit and redeploy it. kubectl apply -f ipaddresspool.yaml
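
      For reference, a sketch of that file and the redeploy, assuming it follows the kube-vip cloud provider's ConfigMap format (the range here is illustrative):

        cat <<'EOF' > ipaddresspool.yaml
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: kubevip
          namespace: kube-system
        data:
          range-global: 192.168.3.60-192.168.3.80
        EOF
        kubectl apply -f ipaddresspool.yaml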

  • @linusfalkstal6450
    @linusfalkstal6450 2 months ago +1

    Thank you for all your insights. A lot of my own start with a homelab comes from your videos and guidance. I do have an issue that I would like some help with regarding this bash script for a cluster: I cannot seem to get it to work properly. It did work in my testing on a single bare-metal machine, but now, as I am trying to get it to work on my full setup, I get stuck on a problem with kube-vip that I cannot seem to get past. Is there some way to contact you, Jim, for some Q&A? That is, of course, if you have the time for it. Best regards, Linus Falkstål

    • @Jims-Garage
      @Jims-Garage  2 months ago

      Thanks. The best way is to create a help ticket on Discord and tag me. Include some logs etc. to assist

  • @thomaslyneborg9357
    @thomaslyneborg9357 5 months ago +1

    As always, great video 👍👍 Installed the cluster on 2x M73 Tiny | Intel i3-4330T with 16GB of RAM; have I done anything wrong? It looks like all the memory is used up before any applications are deployed, apart from Rancher

    • @Jims-Garage
      @Jims-Garage  5 months ago

      Sounds right. They need about 6-8GB each. Kubernetes is hungry for RAM

  • @gjte1
    @gjte1 8 months ago +1

    Hi, thanks for the clear, descriptive videos. But I was wondering if you needed to do anything extra to get the host addresses and the LB and VIP addresses to see each other. Did you use specific firewall rules?

    • @Jims-Garage
      @Jims-Garage  8 months ago

      No need, they're all on the same subnet. /24

    • @gjte1
      @gjte1 8 months ago +1

      @@Jims-Garage Aha, there is a difference in the file between the video and GitHub. On GitHub the LB range is 192.168.1.* while the others are in 192.168.3.*, and in the video they are all in 192.168.3.*

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@gjte1 correct, the script will evolve over time, but the core process will remain the same

    • @gjte1
      @gjte1 8 months ago +1

      @@Jims-Garage But now the lb range and vip address are not in the same subnet anymore as the hosts. I thought you meant that.

    • @Jims-Garage
      @Jims-Garage  8 months ago +1

      @@gjte1 those are just placeholder values, you need to amend to your setup. I'll tweak the script values to avoid confusion

  • @apatock
    @apatock 4 days ago +1

    Hello Jim, I somehow have the problem that none of the Kubernetes videos show how to create a service in Kubernetes behind the VIP. In your example, I want to reach the nginx on the VIP 192.168.3.50 and not behind whatever IP the LoadBalancer assigns (in your case 192.168.3.60). Am I making a mistake in my thinking, or is there something missing in the videos?

    • @Jims-Garage
      @Jims-Garage  4 days ago

      You're partly mistaken, the VIP is purely for node communication. The LoadBalancerIP is also a VIP though, it's shared across the cluster. So for instance, Traefik will be assigned a LoadBalancerIP that works across all worker nodes (that's the beauty of it - you can also host most of your services behind Traefik on that IP).
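
      To make that concrete, a sketch of a Service that pins a specific address from the pool (names and IP are illustrative; omit the field and the cloud provider picks the next free address):

        cat <<'EOF' | kubectl apply -f -
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx
          namespace: nginx
        spec:
          type: LoadBalancer
          loadBalancerIP: 192.168.3.60   # request a fixed IP from the kube-vip range
          selector:
            app: nginx
          ports:
            - port: 80
              targetPort: 80
        EOF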

  • @looper6120
    @looper6120 5 months ago +1

    Hey Jim, just a quick question: I'm wondering how we can add more nodes to the cluster. I guess we can always do a manual deployment, but are we looking to expand the script so it can do node expansion?
    btw, great series, thanks a lot for the effort you put in. It's amazing. Stuff like this used to be a Udemy course, for real. You are sharing it here; appreciate that!

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Thanks. I could create a simple script to add new nodes, would simply be a copy paste of the longhorn script for the most part. The grand plan is for Ansible which I'm just about to start.

    • @looper6120
      @looper6120 4 months ago +1

      @@Jims-Garage That's perfect! I'd love to follow your Ansible series too. I've been trying to write it up myself, but I don't have much knowledge of Ansible DevOps. Super excited to fully automate everything there.
      Perfect content, and you have excellent taste in technology too, lol. Almost have the same setup lollll.

  • @BromZlab
    @BromZlab 9 months ago +2

    Thank you 😊. Nice video. Everything works as it should 👍. One question: what is a decent size for the nodes? I have 3 GB now; it's fast to move around on different servers. Will 10 GB on 5 nodes mean 50 GB total?

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Yes, it's literally just multiplication. Typically the master nodes can be smaller as we'll be hosting all pods on the workers. So something like 20GB master, 50 GB workers would be sufficient.

    • @BromZlab
      @BromZlab 9 months ago

      @@Jims-Garage ok. Thank you

  • @kamleshpatel9152
    @kamleshpatel9152 2 months ago +1

    @Jim's Garage Can I use a password instead of SSH certificate files, as I have multiple Proxmox hosts in a cluster and the VMs are spread across them?

    • @Jims-Garage
      @Jims-Garage  2 months ago

      Yes, but not without altering the script. I recommend checking out my recent deployment using Ansible. It's more stable

  • @andremens8420
    @andremens8420 2 months ago +1

    Hi Jim. Following this tutorial, I have a question about KVVERSION. It is defined but never used. Now that version 0.8.0 is available I can change it in the .sh file, but in the kube-vip file the version is also hard-coded. Is this as it should be, or should it also be changed with the sed command?

    • @Jims-Garage
      @Jims-Garage  2 months ago +1

      You should be able to swap out the versions as needed. I recommend using the Ansible deployment though.

  • @addesigns2121
    @addesigns2121 9 months ago +1

    Jim, I want to start by thanking you for this mini-series. My question is: if I want to deploy this on multiple nodes for true HA, how can I accomplish this using this script? Thanks in advance

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Thanks, you're welcome. Don't change a thing. Just put the VMs on separate physical machines and ensure the IPs are as specified in the script. Just make sure the certs are all available.

  • @fishmeat69
    @fishmeat69 9 months ago +1

    Thanks so much for this Jim! I'm happy to be a tester and contributor for the script on Raspberry Pi. Just curious as to how you would amend the script to run on a Pi, since you wouldn't be creating a VM for each node on a Pi - the Pi would just be the node.
    I have 3 Pis, so for HA I assume all 3 Pis would need to be both master and worker, since that allows any Pi to go down. Thoughts?

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Fantastic! You don't need to worry about anything if you're using a physical device. Just change the IPs as you would normally. I'm hoping the script will detect the architecture and deploy the right executables. Having all nodes as Master and Worker isn't good at scale but should be fine for a small testing setup.
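
      A common shape for that architecture detection, as a sketch (the script's actual logic may differ):

        # Pick the right k3s build for the node's CPU architecture
        case "$(uname -m)" in
          x86_64)        arch="amd64" ;;
          aarch64|arm64) arch="arm64" ;;
          armv7l)        arch="armhf" ;;
          *) echo "unsupported architecture" >&2; exit 1 ;;
        esac
        echo "would fetch the ${arch} build of k3s"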

    • @fishmeat69
      @fishmeat69 9 months ago +1

      @@Jims-Garage 100% agree it's not the best idea at scale, but I think it'll be fine to experiment with in my homelab :)

  • @petel9919
    @petel9919 7 months ago +1

    Would it be better to add the IP address of the Proxmox node to k3s.sh and then scp the certs straight to where they are going?

    • @Jims-Garage
      @Jims-Garage  7 months ago

      It's subjective. Faster, yes, but I don't want anything except my personal machine to have access to Proxmox.

  • @EricVandenAkker
    @EricVandenAkker 6 months ago +1

    I'm deploying a home lab with 3 workers and 1 control node. Would you still recommend kube-vip, and using your script as is?

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      It should work by altering the array at the start.

  • @Superturisto
    @Superturisto 5 months ago

    Hey Jim, I've changed the versions of KUBE-VIP and k3s to the latest, 0.7.0 and 1.29 respectively, and I got
    Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
    Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
    Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
    during script run

  • @JailbreakNation
    @JailbreakNation 6 months ago +1

    This may be a dumb question but how are you able to sftp into your k3s-admin vm? When I try to do password authentication I get denied with the server returning "No supported authentication methods available (server sent:publickey)"

    • @Jims-Garage
      @Jims-Garage  6 months ago +1

      You need the keys on the machine you're connecting from. Check the script, you should see it copy the keys.
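
      A sketch of getting a key pair onto the admin VM so key-based SSH/SFTP works (user, host, and filenames are placeholders):

        # From the machine that holds the key pair:
        scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ubuntu@k3s-admin:~/.ssh/
        ssh ubuntu@k3s-admin 'chmod 600 ~/.ssh/id_rsa'

        # Or enrol your current public key for passwordless login:
        ssh-copy-id ubuntu@k3s-admin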

  • @furmek
    @furmek 8 months ago +1

    Great intro into k3s!
    Any idea what might be causing `kube-vip-ds-xxx` not showing up in pods?
    Which I guess is the root cause of the nginx external-ip remaining in pending state.
    Logs for kube-vip-cloud-provider have no errors and show that nginx is getting an IP from the LB range.

    • @Jims-Garage
      @Jims-Garage  8 months ago +1

      Do you have enough ram and disk space?

    • @furmek
      @furmek 8 months ago +1

      @@Jims-Garage Thanks mate for taking the time to respond.
      I was bitten by this along the way so I've adjusted my template - 4GB RAM, 4 cores, 8GB disk. `df` shows 30% used on `/` on all 5 instances and plenty of free RAM.
      The host itself still has over 32GB of free RAM and over 1TB free space on disk, load < 0.8.
      I was trying to find deployment logs for kube-vip but can't find those.
      BTW, should I be able to reach anything via that main VIP? It does not appear to respond to ping, nor is it listed under `ip a` on any of the instances.

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@furmek no, ignore the VIP. If you run kubectl and get a response you know it's working, because it's using the VIP.
      What happens if you go to the nginx load balancer IP? The one that kubectl get svc -n nginx shows

    • @furmek
      @furmek 8 months ago

      @@Jims-Garage I gave your RKE2 script a go, and it worked the second time; an 8GB disk is almost big enough.
      IDK what the root cause in k3s was. Might have been something with not enough disk or with ssh-agent...
      In any case, thanks for this great tutorial and script. Waiting for the next part.

    • @MatrikServices
      @MatrikServices 8 months ago

      I had the same. The issue was, I made a mistake in the name of the SSH key in the script, so the script was not able to SSH to the other nodes. The nginx external IP was remaining in pending state and the summary at the end of the script output didn't show the 3 kube-vip-ds-xx entries. Everything went well after I set the right name for my SSH key in the script.

  • @gp2254
    @gp2254 9 months ago +2

    My error starts to happen after the load balancer section pops up. Just FYI... not sure what to put in the virtual IP field or the load balancer IP range... nothing seems to work. Pls help!

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Can you describe the error?
      The virtual IP is just an IP address that is used by all of the nodes to communicate. It is shared/fails over between them. This enables high availability. Just use something that isn't being used.
      The loadbalancer IP is an IP address in your kube-vip range. This is the IP that the specific service will be available on.

    • @gp2254
      @gp2254 9 months ago +1

      @@Jims-Garage Thanks for the reply Jim! Much appreciated! So what happens is, after I run the script I start to get a non-stop scrolling error message saying that localhost:8080 is not reachable. I have this all set up on a separate subnet using pfSense on Proxmox, and I'm not sure if 8080 is being blocked by my Verizon router or by something on the pfSense side. I also created a static route from my home lab to my primary network, so everything is pingable and reachable between the 2 subnets. Lastly, I do have nginx configured and may have some port forwarding set up on my primary router, but I'm not sure. Any ideas?

    • @furmek
      @furmek 8 months ago

      @@gp2254 compare your output to the output of the script from the vid. I had similar issue but it turned out that the problem was way earlier with the ssh certs (ssh-agent was causing some issues - `IdentitiesOnly yes`)

    • @andreasweber2573
      @andreasweber2573 4 months ago

      @@gp2254 I got exactly the same problem and don't know how to fix it.

  • @declanmcardle
    @declanmcardle 9 months ago +1

    "My internet isn't that fast"
    Set up your own 'caching' registry?
    Then your next video can have the docker pull commands going at 1/2.5/10Gb/s like when someone is on an AWS/GCP host and they do a docker pull command 🙂

    • @Jims-Garage
      @Jims-Garage  9 months ago

      That would be a fun project! 😊

  • @phk-r
    @phk-r 8 months ago +1

    Great video as usual James!
    But I ran into an unexpected problem. It seems like my nginx pod doesn't get any external IP; it just sits as "pending". Are there any limitations as to which subnet I can use for the VIP and that range?
    How would I delete everything related to nginx and kube-vip to be able to re-run the script? (Forgot to take snapshots...)

    • @Jims-Garage
      @Jims-Garage  8 months ago

      Have you tried running kubectl get svc -n nginx again? It doesn't dynamically update.
      IMO, it would be quicker and safer to redo the VMs and make snapshots. Less chance of an error.

    • @phk-r
      @phk-r 8 months ago +1

      @@Jims-Garage yeah, I've been giving it some time but it's still pending. Went ahead and deployed Rancher; same thing, it doesn't get an external IP.
      Will probably go ahead and redo the whole thing and try with a VIP in the same subnet as my servers.

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@phk-r yes, VIP in the same subnet is recommended. Otherwise you'll need firewall rules.

    • @phk-r
      @phk-r 8 months ago

      @@Jims-Garage hmm, not sure where I'm going wrong with this. Did a full reinstall on new VMs, this time putting the VIP and VIP range in the same subnet as my k3s VMs and the "admin" VM.
      Still don't get nginx to get an IP. No errors shown while running the script either.
      Tried restarting nginx and also redeployed new pods. Didn't help.
      Gonna make some pizza for the kids and continue this later.
      Thanks for your feedback James. Really appreciate it!

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@phk-r interesting, I'll spin up tonight and see how I get on

  • @dev-videos
    @dev-videos 9 months ago +1

    very 👍

  • @Xmoo123
    @Xmoo123 4 месяца назад +1

    I’m getting “The connection to the server localhost:8080 was refused - did you specify the right host or port?”

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Copy over the kubeconfig from one of your master nodes, or access your cluster via kubectl on a master node.
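
      A sketch of copying it off a k3s master, assuming the default k3s paths (the IPs are placeholders; you may need root on the master to read the file):

        # k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml on each server node
        mkdir -p ~/.kube
        scp ubuntu@192.168.3.21:/etc/rancher/k3s/k3s.yaml ~/.kube/config

        # It points at 127.0.0.1 by default; aim it at a master (or the VIP) instead
        sed -i 's/127.0.0.1/192.168.3.50/' ~/.kube/config
        kubectl get nodes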

    • @Xmoo123
      @Xmoo123 4 months ago

      @@Jims-Garage New to this, but I googled the first part. It said the kubeconfig should be in $HOME/.kube/? I do see the directory but it is empty.

    • @Xmoo123
      @Xmoo123 3 months ago

      @@Jims-Garage So weird: installing with Proxmox, no problems at all. Trying to install it on XCP-ng, same error.

  • @lawrenceneo2294
    @lawrenceneo2294 3 months ago +1

    A bit confused as to which machine is the one that actually runs the script. Is it the Proxmox host machine or a separate 6th VM for this purpose? Can you provide a network architecture diagram in your GitHub to illustrate? Can you explain how the machine running the script connects to each of the 5 VMs (3 servers, 2 workers) - is it by SSH? How do we prep the 5 machines to ensure this script works? I'd like to quickly spin up 5 servers in my VMware Workstation to test this out, but I am struggling to figure out how to prep the 5 machines so that this script can run and connect to each of them to configure 3 masters and 2 workers.

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      Yes, this is a 6th VM. It's typical to administer a cluster remotely / use a bastion host etc.

    • @lawrenceneo2294
      @lawrenceneo2294 3 months ago +1

      1. In the part where you explain copying the certificate, I can't figure out which machine you are copying from and which machine you are pasting to. 2. There is a load-balancer machine, so which of the 5 machines is the load balancer? So in total, do we need 1 VM for the LB, 3 VMs for masters, 2 VMs for workers and 1 VM to run the script? Or 3 VMs to run the masters, 2 VMs to run the workers, 1 VM to run the script - and where does the LB run? Thank you so much if you can help to clarify. I know the answers are probably in the video, but if you can provide a simple architecture diagram it will be so much easier to visualize.

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      @@lawrenceneo2294 the certificate is the SSH certificate I think. It's needed during the script for remote machines to connect between each other.
      The loadbalancer works across Kubernetes. It's a concept as well as a deployment (like Traefik).

    • @lawrenceneo2294
      @lawrenceneo2294 3 months ago

      @@Jims-Garage The script can't run successfully because many of the git URLs are not alive anymore. Could you do an update video?

    • @Jims-Garage
      @Jims-Garage  3 months ago

      @@lawrenceneo2294 that's surprising, are you sure it's not a DNS issue on your VM? I'll check later. Can you add screenshots on Discord?

  • @scottcarez
    @scottcarez 7 months ago +1

    Almost everything seems to have worked correctly. The output showed the following error message which appears to have impacted the EXTERNAL_IP not getting set. error: the path "ipAddressPool.yaml" does not exist. This error message appears to line up with the kubectl apply -f ipAddressPool.yaml step on line 198 of the script. It looks like perhaps something went wrong when it tried to download the ipAddressPool up on line 180. There is no error reported there, but I am not sure what to do to correct the issue.

    • @scottcarez
      @scottcarez 7 months ago

      I can see that the ipAddressPool file was downloaded to the machine I ran the script on, which was a separate machine from the k3s cluster.

    • @scottcarez
      @scottcarez 7 months ago

      Alright, I think I understand why the issue happened. I had placed the main script file in a folder off of my home dir. So when it created the ipAddressPool.yaml file at $HOME/ipAddressPool.yaml, it failed to find it. I changed dir back to $HOME where the yaml file was located. When I then ran the kubectl apply -f ipAddressPool.yaml command I got the following error:
      error: error validating "ipAddressPool.yaml": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

    • @scottcarez
      @scottcarez 7 months ago +1

      Alright I got it resolved, had to build a small script to basically run all of Step 10 and now I can see the external ip address.

    • @Jims-Garage
      @Jims-Garage  7 months ago

      @@scottcarez great, I'll double check the script to see if I can amend it.

  • @RunTheTape
    @RunTheTape 5 months ago +1

    why not use ansible?

    • @Jims-Garage
      @Jims-Garage  5 months ago +1

      I wanted to make the barrier to entry as low as possible. I will be moving towards Ansible deployment in the future (once I've covered Ansible)

  • @thbe51
    @thbe51 8 months ago +1

    What's restorecon? Got an error on this!

    • @Jims-Garage
      @Jims-Garage  8 months ago

      It's used to set the security context of files. Where are you seeing this? What's the error message? What OS are you using?

    • @thbe51
      @thbe51 8 months ago

      I have followed your video instructions. It's the Ubuntu cloud image 'lunar' via template. The error message is simply "not found". restorecon can however be installed with policycoreutils (apt says!). Is this a Minecraft thing? The system seems to work anyway. Can see the nginx page 🙂 I might add that the whole thing is running on Proxmox 8.0.4.

    • @thbe51
      @thbe51 8 months ago +1

      Me again... I've found out that SELinux does not install. This is from the script output: [INFO] Skipping installation of SELinux RPM. Seems that k3sup could be it??!

    • @Jims-Garage
      @Jims-Garage  8 months ago

      @@thbe51 interesting, what Linux distro are you using?

    • @thbe51
      @thbe51 8 months ago

      @@Jims-Garage I'm using Ubuntu cloud 23.04 just as in the video. Perhaps this is nothing to bother with; the system is working. Just installed the Portainer agent and that is working too. Your tutorial is the only one that I got to work. There are several others on the net but most of them are obsolete today. Things are rapidly changing these days... 🙂

  • @gp2254
    @gp2254 9 months ago +1

    Jim, can you help? I am getting this error now in your script when going through the load balancer part > no kind "DaemonSet" is registered for version "apps/v1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50". Please help! TY

    • @Jims-Garage
      @Jims-Garage  8 months ago

      Hi, I've updated the script a little since then. Can you confirm if you still have the issue?

  • @maskon78
    @maskon78 5 months ago

    Attention! The command in the script that writes to ~/.ssh/config with > may destroy your existing SSH config file. :( Please use >> to append instead.
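
    The distinction being flagged: > truncates the target file while >> appends. A sketch of the safer pattern (the Host block is illustrative, not the script's exact contents):

      # Back up first, then append instead of overwriting
      cp ~/.ssh/config ~/.ssh/config.bak 2>/dev/null
      cat <<'EOF' >> ~/.ssh/config
      Host 192.168.3.*
          StrictHostKeyChecking no
      EOF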