Deploying Rancher to Manage Kubernetes. Kubernetes At Home - Part 4
- Published: 27 Jul 2024
- EDIT: Use the following command to expose Rancher: "kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system"
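For anyone copy/pasting the pinned fix, here it is with a quick verification step added as a sketch (it assumes MetalLB or another LoadBalancer provider is already running in the cluster):

```shell
# Expose the Rancher deployment as a LoadBalancer service
kubectl expose deployment rancher --name=rancher-lb --port=443 \
  --type=LoadBalancer -n cattle-system

# Confirm the service received an EXTERNAL-IP (it will show "pending"
# until the load balancer provider assigns one)
kubectl get svc rancher-lb -n cattle-system
```

The expose command is the one from the video; the `get svc` check is a standard kubectl verification.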
The 4th video in the 7-part mini-series detailing how to configure Kubernetes in your homelab.
This video deploys Rancher GUI to your existing cluster allowing you to manage your cluster through your browser!
Rancher Instructions:
github.com/JamesTurland/JimsG...
Rancher Page:
ranchermanager.docs.rancher.c...
Recommended Hardware: github.com/JamesTurland/JimsG...
Discord: / discord
Twitter: / jimsgarage_
Reddit: / jims-garage
GitHub: github.com/JamesTurland/JimsG...
00:00 - Introduction to Rancher
02:28 - Helm Install
03:28 - Installing Rancher
12:23 - Rancher GUI
17:01 - Outro
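The Helm install steps shown in the video broadly follow Rancher's official chart instructions; a minimal sketch (the hostname and bootstrap password below are placeholders, adjust them for your cluster):

```shell
# Add the Rancher Helm repository (the "latest" channel used in the video)
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# Rancher expects to be installed into the cattle-system namespace
kubectl create namespace cattle-system

# Install Rancher; hostname and bootstrapPassword are placeholder values
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set bootstrapPassword=admin
```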
Idk how you pump out videos this fast. I could get used to this. Haha. Good work Jim and thank you.
My pleasure! Don't want to leave people hanging with a cluster just waiting to be used ....
fantastic, this also installed with zero errors. in Italy we say smooth as butter :)
I saw that after installing Rancher the free resources on my PVE were significantly reduced. At the moment everything is running on a PVE N95 with 16GB RAM.
That's great, thanks for reporting back
After two failed installations of K3s I finally have a successful part 4. The reason was disk space. I now run nodes with a minimum of 20GB disk space.
You're an awesome teacher. I followed your instructions to the letter and got my rancher server working on my K3s home lab. You have a subscriber here.
Thanks, really appreciate the feedback
Just some advice for anyone learning Kubernetes: while it's totally possible to install Rancher onto the main cluster, it's not best practice. For a homelab, a separate Docker installation of Rancher to manage the productive cluster can be achieved easily and without much overhead. It's also stated like this in the Rancher docs under "Tips for Running Rancher":
Run Rancher on a Separate Cluster
Don't run other workloads or microservices in the Kubernetes cluster that Rancher is installed on.
otherwise great content and thanks for your videos, great stuff!
For production I agree, good point 👍 but a single node docker container is also risky as you can lose cluster data (if you're running from this container). I think this is an acceptable compromise for a Homelab.
One of the most amazing aspiring YouTubers. Jimmy, you are awesome; I've been following you for my brand new home lab. Love your work.
Thanks, Superman. Really appreciate the feedback and support
Flawless! I initially had an issue and things were not spinning up, but then I jumped over to your Discord and found a similar issue. It was a storage issue for me. After increasing that and trying again, Rancher was up and running beautifully.
I really do like the way you go through your steps and commands with explanations. This definitely helps if someone is new or if you need to research further an issue. I can see the amount of time you must take here. Much appreciated!
Great to hear!
Can you point me to the solution? I was running into a problem with space; I added space but still got an error about ephemeral storage, and I haven't found where to set the limits. Reading the comments to see if someone has the same issue.
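For anyone hitting the same ephemeral-storage evictions, a couple of standard kubectl checks, sketched here as a starting point (these are generic checks, not from the video):

```shell
# Show each node's allocatable ephemeral-storage and current allocations
kubectl describe nodes | grep -iA7 "allocated resources"

# Surface eviction events, which state which resource was exhausted
kubectl get events -A | grep -i evict
```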
Thank you Jim. good video again. now Rancher is up and running :)
That's great, good job!
Thanks for the videos Jim, they've been very clear and easy to follow so far.
I had a little glitch around the 12:00 mark, the command "kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system service/rancher-lb exposed" ran and displayed the same error on your screen, but didn't seem to do anything else (i.e. "kubectl get svc -n cattle-system" didn't show an EXTERNAL-IP).
This worked though: "kubectl expose deployment rancher --name=rancher-lb --port=443 --type=LoadBalancer -n cattle-system", which then displayed "service/rancher-lb exposed". So it almost seems that in your screen capture the command and its output ran together, making it look like the output "service/rancher-lb exposed" is part of the command's parameters.
Good job, sorry for the confusion
I’ve been waiting for this video. 🎉
Hope you enjoyed it!
Another excellent video to get Rancher up and running.
Thanks! More to come :)
Worked absolutely perfectly (after fixing some of my own mistakes, like trying to install the very latest stable version of k3s in the previous videos, which will cause issues later with Rancher).
But great K3s guides... Much appreciated... Nice to delve into this topic again after a long time with such an easy method of installation!!! :)
Only problem I'm facing now is that my old Ryzen 3600 with 48 gigs of memory and only 12 cores is too limited to do anything with Proxmox... It keeps crashing my virtualized TrueNAS Scale with those 5 k3s nodes up... I've noticed this before when you over-provision the cores and especially memory...
Guess I need to go shopping.
Thanks, appreciate the feedback
Your videos are awesome. After setting up the VM correctly with storage, all went well.
Great, glad it worked for you
Easy to follow and complete, awesome stuff!
Thanks 👍
Fantastic, excellent work, clear and to the point... keep it up Jim, you are awesome.
Thanks, appreciate it.
Really helpful! Thank you for this video series. Has provided me tremendous learning.
For my remote installation, I had to set an external IP myself after exposing the port (was stuck in the pending state)
kubectl patch service rancher-lb \
-n cattle-system \
-p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.3.61"]}}'
Then worked like a charm :) (Also important to note to set the server url to the correct one instead of localhost:XXXX if you're using SSH tunneling to get the webpage) :)
Very very nice, sir!
You're welcome 😁
No issue but wanted to thank you!
Much appreciated 👍
Really good video series, can't wait until Traefik and Longhorn hit. I'm planning on building a bare-metal cluster using this method, but I'm still waiting. Would be cool if you did some real-life homelab apps as well, for instance using tags for tagging hosts with USB devices (like Zigbee dongles and so on).
Thanks, I'll cover taints and labels in the next video.
ditto 😉
Amazing video series. Thank you, my friend. Is that rancher-lb resilient? In other words, is it always available on all nodes and if a node fails, then the rancher-lb is still up and alive on another node? Does it act like a pod? Not sure if there's more of that info in a prior or later video from you. Thanks in advance!
Yes, the loadbalancer is cluster wide. If a node fails the pod should migrate and be available
Hi Jim, thanks for your great work. I installed version v1.28.8+k3s1 of Kubernetes to use Rancher's latest repo. It did install MetalLB automatically, so I was wondering why it didn't when you installed yours. Is this because of the newer version of Kubernetes?
Hmm, not sure. It should have done.
Hi Jim, first of all, thank you very much for your effort to allow non-IT experts to try out the strengths of the Kubernetes cluster in their home lab.
I have run the suggested installation several times and always get an error with the following command: kubectl -n cattle-system rollout status deploy/rancher
After a few minutes I have got the following error message: "error: deployment "rancher" exceeded its progress deadline"
The first time, I ran out of the recommended 3.5GB of storage space on the Ubuntu virtual master1 server disk. Now that I have increased the storage space, I get the above error again. Should I increase the capacity further for a successful Rancher installation, or is there anything else that would help it succeed? Thank you for your effort!
Did you make sure to shut down the VM after altering the storage? A reboot will not work.
Thanks, I am going to check it.@@Jims-Garage
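Several threads in these comments trace back to node disk space. A quick way to check headroom on each node (the kubelet starts evicting pods once its disk-pressure thresholds trip):

```shell
# Run on each k3s node: confirm the root filesystem has free space left.
# The kubelet begins evicting pods under disk pressure.
df -h /
```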
Hey Jim, great work and thanks for the snippets on your GitHub.
All is working fine as expected. I shut down the masters one by one to make a snapshot, and the nginx site was available the whole time.
But when I shut down worker 1 (with nginx), the connection was broken; nginx didn't switch to worker 2. Shouldn't it?
Yes, it should failover to the other worker node. Can take a couple of minutes, I believe you can specify the tolerance (wait time).
7:04 you configured the cert-manager ... 12:25 shows the certificate is not valid, does the cert-manager not provide the certificate for the application on a browser level? How does cert-manager work in this setup?
Hey, thanks. Read up on self-signed certificates. The certificate is valid, just Google doesn't trust it because we created it ourselves. This is different to Traefik and letsencrypt where letsencrypt creates the certificate and is trusted by Google.
As a note, there seems to be a limit on which k3s versions the alpha repo will allow. I tried this with k3s v1.29 and it said I had to use 1.28 or below. Version 1.27 was the latest supported as of this comment.
Hi Jim,
First of all, "Merci Beaucoup" for your tutorial, really appreciated. I've followed your steps (the only difference is Ubuntu 22.04) and despite 4 cores per VM (3 masters, 2 workers) and 4GB RAM per VM on Proxmox, Rancher is displaying a total of 8 cores and 7.85GB RAM available. Very strange, as each node in Rancher is marked as 4 CPU and 7.85GB RAM. Any ideas or leads to be investigated?
I am using k3s v1.27.7+k3s2 and the "latest" Rancher image.
Thanks in advance for your help.
Thanks, other users have reported the same. I believe it's a bug with the latest rancher version.
So what happens if the IP assigned to the load balancer ever changes? I noticed on the rancher login page that it was set to the specific ip address.
Loadbalancer IPs should generally be static. All of mine are.
Hi Jim, great job!!! Everything works fine and without errors until I try to install Rancher - here I get the error "The connection to the server MY-FIRST-MASTER-IP:6443 was refused - did you specify the right host or port?" when I check the status of the installation. After a minute I get "Error from server (ServiceUnavailable): apiserver not ready". The only difference from your directions is that I use Debian 12 instead of Ubuntu. Do you (or someone else) have any idea about this? Thanks a lot.
One important piece of info: I resized the disks of all VMs to 6GB before I started deploying k3s, but during the installation of Rancher, master1 was 100% full... Next step is to resize the disks to 10GB, then let's see if the error comes up again.
@@Glatze603 yes, increase disk size and try again. Let me know if it doesn't work.
@@Jims-Garage the worker disk space was the problem. I had previously expanded this from 6 to 8 GB and the disk usage is now (after successful Rancher installation) at 74% (worker1), 67% (worker2) and 82% (worker3). Depending on what comes next, the hard drive space will have to be increased further.
I get to the rollout step and then I get this:
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "rancher" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
Those last two lines keep repeating over and over. It never finishes. Any ideas?
Do your nodes have enough resources?
@@Jims-Garage That was it. Had to increase disk size to 5GB on master nodes and to 8GB on the workers in order for it to work. On to part 5!
@@danmcdaniel709 you probably want a little more than that if you can stretch. Minimum of 10GB in my experience
@@Jims-Garage I had this same issue (seems like in the comments a lot of us did). You might want to just go over increasing the HDD size as part of this guide. I was stumped for a while on it but just figured out how to do it.
Upon installing rancher-latest I first get the status of the pods ContainerCreating, then ErrImagePull, then ImagePullBackOff, then ContainerStatusUnknown / Evicted. I've followed your guides step-by-step but I'm not sure how to proceed now. Any idea what could be the problem?
It might be a bug with the latest version. You can alter the script and set to stable instead, give that a try. Also, worth upping resources in case it doesn't have enough.
@@Jims-Garage I'm dumb. I only allocated 4 GB to the HDDs and of course it wasn't enough. After increasing to 20 GB rancher installed without a problem.
But now I'm getting the error message "Ensuring load balancer" and "Error syncing load balancer: failed to ensure load balancer: no address pools could be found". I can confirm that the address pool is available, and I can access NGINX on the first and Rancher on the second IP of the address pool. What am I doing wrong?
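Two standard checks for the symptoms in this thread, as a sketch (the `app=rancher` label and the `metallb-system` namespace are the usual defaults, but verify them against your install):

```shell
# Pod events usually name the root cause of ErrImagePull / Evicted states
kubectl -n cattle-system describe pods -l app=rancher

# For "no address pools could be found": list MetalLB's configured pools
kubectl get ipaddresspools.metallb.io -n metallb-system
```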
Hey Jim, thanks for the videos. I'm having an issue where the webpages for nginx and Rancher are not loading, although I can ping the assigned IPs successfully. This is strange, as I can reach other VMs on the same VLAN, including Proxmox itself. What could be the problem?
What does kubectl get svc -n nginx show? Hop into Discord if you can, easier to diagnose.
@@Jims-Garage that gives the message "no resources found in nginx namespace". Running it for the namespace cattle-system shows rancher, rancher-lb and rancher-webhook, with only the lb having an external IP. To see nginx I have to run "kubectl get svc", which shows the nginx-1 load balancer and kubernetes. Edit: after finding the command to list namespaces, nginx and kubernetes are in the default namespace.
@@Jr-hv1ct check that the traffic policy is cluster not local. (You'll see it in the updated GitHub manifest file)
@Jims-Garage do you mean the k3s script or the rancher file
@@Jr-hv1ct recommend you hop on Discord and share your configs
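If the traffic policy does turn out to be the issue, one way to flip it without editing the manifest - a sketch, assuming the service is named rancher-lb as in the video:

```shell
# Switch the service's external traffic policy from Local to Cluster so
# traffic is accepted on nodes that aren't running a rancher pod
kubectl patch svc rancher-lb -n cattle-system \
  -p '{"spec": {"externalTrafficPolicy": "Cluster"}}'
```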
I keep getting an error when I try to import the cluster: "no objects passed to apply". Any thoughts?
You shouldn't need to import. Does it not show local?
@@Jims-Garage Dear Sir,
I would like to express my sincere gratitude for your prompt response to my previous query. Upon careful consideration, I realize that my question may not have been entirely relevant, and for this, I offer my apologies for any inconvenience this may have caused. As someone who is still relatively new to the world of computers, I am continually in a learning process and exploring various approaches.
During my attempt to execute a simplified installation method on Rancher, I encountered an error message when adding a cluster. Currently, I am studying your videos to understand the process as you have suggested, but I find that some concepts still elude me. Nevertheless, I am determined to persevere and tackle this challenge. It involves a significant amount of concepts to learn, but nonetheless, I would like to thank you in advance for your support.
Yours sincerely,
@@levimeykens1614 you're most welcome. Hop into Discord if you'd like more support
How were you able to fix the error when exposing the Rancher service to a load balancer with rancher-lb? I am trying the code and getting the same error. The other thing is, I could edit the built-in Rancher service to LoadBalancer, but in this demo you have 3 Rancher services along with the built-in Rancher service. I am stuck there. Help. Thanks
Don't use a KVM image for the VM, use standard.
@@Jims-Garage Thank you for getting back to me. I use the standard image and also the stable Rancher Helm release, but the error still arises. I am currently trying to expose the service to a load balancer. I want to know if a 3rd Rancher service is required, or do I just change the built-in Rancher service deployment to a LoadBalancer?
@@victoranolu4376 yeah, you just need to specify a loadbalancer (with metallb)
@@Jims-Garage I am back again. I have deployed Rancher in Azure Kubernetes, but I also have this issue of Rancher not having a load balancer IP for its ingress. If I change the Rancher service from ClusterIP to LoadBalancer, it shows the Rancher homepage but fails to log in with the pre-defined password set in the deployment.
⚠ WARNING: `installCRDs` is deprecated, use `crds.enabled` instead.
cert-manager v1.15.1 has been deployed successfully!
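Per that deprecation warning, newer cert-manager charts want `crds.enabled` rather than `installCRDs`; an install sketch following the cert-manager Helm docs:

```shell
# Add the Jetstack repo and install cert-manager with CRDs enabled
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true
```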
I just wonder why…it’s ‘cattle system’??? 😂
Because the inventory is "cattle, not pets". Quite brutal, but you get the idea. Inventory should be disposable/replaceable.
@@Jims-Garage IP branding namespace 😝