How are you managing storage in kubernetes?
BTW if you're new here welcome! Be sure to subscribe for more content like this!
Hi Tim.
Thanks for the tutorial. I have created a custom cluster. When I try to install Longhorn, the longhorn-manager pods are in CrashLoopBackOff and the installation fails. Can you help?
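(If anyone else hits this, a generic first-pass debug sketch, assuming the default longhorn-system namespace and standard pod labels; a missing open-iscsi package on the nodes is a common cause. The pod name is a placeholder.)
kubectl -n longhorn-system get pods
kubectl -n longhorn-system describe pod -l app=longhorn-manager   # check Events for the crash reason
kubectl -n longhorn-system logs <longhorn-manager-pod> --previous # logs from the crashed container
sudo systemctl status iscsid                                      # run on each node; Longhorn needs open-iscsi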
Do you have an opinion on disk configuration?
For example, my server has five disks in a striped RAID, resulting in a 3TB ext4 filesystem.
Should I reconfigure as RAID 0 and let Longhorn manage the redundancy?
Random Q: have you been able to add Longhorn to your Ansible script? Just watching the k3s + Ansible video at the moment.
I tried OpenEBS, NFS, StorageOS, LocalProvisioner and they were ALL a pain to deploy and finicky to use. But so far Longhorn has been simple.
The dedicated UI helps a ton whenever I allocate a PVC and the pod can't attach to the PV.
Your video also showed other features that I never explored, like backups and snapshots. Keep up the great content! You are now one of my favorite channels. Subbed with Notification Bell
Thank you so much! Looks like you have quite the experience in Rancher and storage. Keep it up! Thanks for the comment!
Dude, how can I give two thumbs up!? I was trying to solve these volume issues for a while, and then you came and did it for me, again.
Thank you so much!!!
Like & Subscribe works :) Thank you!
Ahhhh yes! My favorite time of the week, when Techno Tim releases a new vid.
Woohoo! (Also Woohoo for Saturday!)
3 years later, still top TechnoTim. Awesome, thanks for your content!
What a coincidence... I was just reading the Longhorn docs to use it in our prod, and you just made it easier 😁 thanks
Nice!
@@TechnoTim I also wonder about the question Ariel asked
Thanks! A storage node manager, redundancy, and backups were exactly what I was looking to find. It's a great plus that it has a nice UI, too.
This is freakin amazing! Thank you SO much for this video!
Glad you liked it!
I have Longhorn installed on my 12-node Pi 4 cluster, with extra SSD storage on a few of the nodes. Works great!
@@notquitecopacetic oh lordy, that's a loaded question lol. OK, so I'll try to keep it brief on how I have my cluster set up. The whole thing is built with Ansible and k3s. 3 master nodes, booting off SSD. Those 3 master nodes also run as Longhorn storage nodes. The remaining nodes are currently running off SD cards. Those nodes are worker nodes only. The SSD boot is FAST for sure, but I don't have any issues (yet) with the SD cards.
Man, happy I found your channel. I'm planning a homelab for my projects, and you made my life easier.
Glad I could help!
Thanks Tim. This was very timely. I was just starting to think about the issue.
Glad it was helpful!
Thank you very much, Tim. You showed me a new approach to storage in k8s.
Very high-quality content, together with the documentation site. Amazing work!
Much appreciated!
Tim : "Thanks ahead of time for the likes"
Me : Instantly likes video.
Thanks after the time for the likes!
It is so interesting. I never thought I'd be so interested in Docker, Kubernetes, and Rancher until I started watching your videos. I've been in the IT business for 25 years, but now it's time to learn new things and set up my home cluster services instead of installing a new Ubuntu server every time for an application, in a containerized Docker cluster (based on Proxmox). Thank you so much Tim 👍🏻. Btw, is there a video on setting up the cluster? In the setup video (Docker, Kubernetes, Rancher) you didn't do it.
Thank you! Glad you like it! Here it is! ruclips.net/video/UoOcLXfa8EU/видео.html
Dude, you rock! Going straight to the point and showing all the good features.
Thank you!
More Rancher how-tos please. Great job on this.
Excellent video Tim! Demonstrating an automated disaster recovery would be an area to enhance this video further -- perhaps an idea for a follow-up video.
T-Tim is the man!
Just stumbled on this vid. Really nice explanation and pretty on time for me. Just about to start exploring Longhorn. Thank you!
Welcome aboard!
Thanks for the very useful content. Thanks a lot and Happy New Year!
Thank you! Happy new year!
Fine, you convinced me to use Rancher with k3s.
Same
Me too
Keep up the good work dude! Seriously, your channel targets what I'm working on so well, month after month! Is there a telepathy thing there? ;)
haha! Thank you! 🤯
Thank you!!
I love k3s and am looking forward to Longhorn.
I'm always learning a lot from your videos. Thanks for sharing!
Thanks so much, now I have a good option for my StorageClass.
Great!
Thanks a lot for this video Tim! 🔥
Love your videos mate, keep it up. You are amazing, clear, and understandable, and each topic is well explained. Can you please make a video just about Proxmox, on storage options, possibilities, and ways to configure it, mate?
Thank you! Possibly!
Great video and I like that you went more in depth. I'm trying to figure this out for an enterprise solution. Side note: love seeing your face, but sometimes the face cam blocks areas of the screen where you are typing or clicking. I've noticed this on a few vids. Not a big deal, but it would be helpful to see everything you are seeing.
Thanks for the feedback! I am usually pretty good about hiding the camera but missed a few scenes on this one. Noted!
Chaps, don't symlink anything from /var/lib/longhorn as I did; all volumes will stay unbound. Learned that the hard way. And thanks for the video, Tim!
Good call! I think you need to set it up in fstab maybe? Not sure. Let me know!
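(For reference, a minimal sketch of what that could look like, assuming a spare disk at /dev/sdb dedicated to Longhorn; the device name is an assumption.)
sudo mkfs.ext4 /dev/sdb                 # format the spare disk (destroys its contents!)
sudo mkdir -p /var/lib/longhorn         # Longhorn's default data path
echo '/dev/sdb /var/lib/longhorn ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount -a                           # mount it now; the fstab entry persists across reboots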
Well... I'm certain it is pronounced Maria and not Maria
Finally! I now know how to pronounce Maria! It's as simple as saying "Maria"!
Great job! And thanks for writing a good description. Appreciated.
Thank you!
Maaria is the official Luigi-approved pronunciation of that database.
Would you be able to make a tutorial for containerizing a non-catalog app and running it on a persistent volume? It would be super cool to see something like linuxserver/unifi running on Kubernetes. I don't think the community has done something like that before.
Thank you! Any app on my self-hosted playlist will work this way. Just choose Longhorn as your volume instead of bind mounting to a host path! ruclips.net/p/PL8cwSAAaP9W3bztxsC-1Huve-sXKUZmb5
@@TechnoTim support him, can you make a video explaining how to use Longhorn with a generic app like Docker Hub nginx or ubuntu on Rancher 2.6? I tried, unsuccessfully.
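(A minimal sketch of what that looks like, assuming Longhorn is installed and its StorageClass is named "longhorn"; the app and claim names are just examples.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn      # ask Longhorn to provision the volume
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # served content now persists across pod restarts
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-data
EOF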
i'm using longhorn right now !
Thanks you
Would you please make videos on MinIO whenever you figure out the cert piece and possibly a video on setting up a local Git instance?
I'd love to get MinIO working (and it does with TrueNAS) but I think the issue is with Longhorn. It doesn't like self-signed certs.
@@TechnoTim what does Longhorn have to do with certificates?
Coincidentally, half an hour ago I was able to get MinIO deployed using the Longhorn storage class, and it works like a charm when port-forwarding the MinIO and console services to my local machine. However, the MinIO operator is quite disorganized in terms of documentation, so I wasn't able to expose my MinIO with an ingress resource.
I'd appreciate it if you can help here. 🙏
Back again!
I was able to securely expose my MinIO (which is using Longhorn volumes) with nginx 😍😍😍 will try to share the manifests later
Great video. Unfortunately, I did not solve my issue with Navidrome and its SQLite database concurrency problem, even with all the nodes being VMs inside the same machine.
Thanks, I like your way of explaining things; this is to the point.
Glad it was helpful!
Great video! Short video but good explanation of everything important.
Thank you! I try not to make them too long!
Thx man. I've been quite curious about this.
Glad you enjoy it!
I've actually been using the nfs-subdir-external-provisioner storage class to automatically mount a subdirectory from an exported FreeNAS NFS share. It works, but longhorn seems a lot more robust!
Yeah, I too use nfs client provisioner but I don’t have HA NFS! This gives you HA block storage!
Great video! But btw, Longhorn is most likely not your default storage class! That's because you deployed k3s with the local-path storage provider, and k3s always reapplies the deployments in /var/lib/rancher/k3s/server/manifest (or it might be in /etc/rancher or something, I'm not quite sure right now). So even if you do kubectl edit storageclass local-path and set it to not be the default, k3s will automatically reapply the storage class YAML and set it back to the default. So either you edit it in there (.../manifest) or you just delete the file in there and use kubectl edit.
Good call! I've noticed too that I can have more than one default 🤔 ???
@@TechnoTim nope 😜
@@TechnoTim Did you ever discover a solution to this?
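(For anyone stuck here, a sketch using the standard Kubernetes default-class annotation; note that k3s can also be started with --disable local-storage so the local-path manifest is never re-applied.)
kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass longhorn -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# or: start the k3s server with --disable local-storage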
Excellent video Tim! How do you set up a StorageClass with failover capability using Longhorn?
Please make a video about MinIO. Thanks!
As always, great video! I've been trying to set up Longhorn for a while now and was lowkey hoping you'd make a vid so I could see how you did it.
The setup is, as you said, incredibly simple. Which is awesome! The hardest part for me has been volumes failing to attach. They'll just get stuck in an attaching/detaching loop. I assume it's something to do with my networking config, and networking is the bane of my existence.
Be sure your nodes have all the dependencies installed. They are in my docs! Thank you!
@@TechnoTim So I never found out what the root cause was, but I did find out RancherOS is explicitly not supported by Longhorn. Which is the OS my nodes were running. Re-upped with a less niche OS and things are running great :) Your docs on taints and tolerations were a lifesaver! Would have taken me hours to figure out otherwise.
What was the benefit of spinning up storage nodes versus attaching additional volumes to your existing agent nodes? That should keep the storage of Docker images and logs separate from Longhorn storage.
Good point. Dedicating these nodes to this role allows greater control and security over these nodes.
@@TechnoTim Have you looked at fsGroup within the securityContext? I have not used one, but you should be able to create a custom group/GID on the host which owns the Longhorn data directory and then modify the Longhorn DaemonSet or Deployment to use that GID, allowing it access. Other pods without the correct securityContext should be denied access.
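(An untested sketch of the fsGroup idea; the GID 2000 and the claim name are arbitrary examples.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000          # mounted volumes are group-owned by GID 2000
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data     # files created here get group 2000
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: some-longhorn-pvc   # hypothetical claim name
EOF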
What about setting up GitLab using Helm next? Love the channel btw ;D
Fantastic video, as always 👍 One thing I'm missing is a description of the lower layers. OK, we have these 4 worker nodes, which presumably run on top of VMs in Proxmox, but are they distributed across different physical servers? What are the underlying VM disk devices? Having 4 worker nodes and replicas=2, how do you prevent both the primary and the replica data from landing on worker nodes running on the same physical server? What is the minimum number of servers to provide redundancy and avoid split-brain? Asking these questions because this seems important from a resilience point of view.
Thanks for sharing! Great video, and I love your glasses! What make and model are they, please?
Warby Parker!
Please do a video on MinIO integration :)
Monty, the guy who wrote MySQL and did the MariaDB fork, says the name is "ma-ree-ah", not "ma-rai-a" :) Oh, and BTW, he also says it's "my s q l", not "my sequel" (Sequel is not the same as SQL). The two databases are named after his daughters, My (pronounced as the first part of myriad, but even monty says "mai s q l") and Maria.
This is great, man. Thank you. Can you make a comparison between it and Ondat? And what is your opinion?
Thanks
For somebody as paranoid as me, is there a quick and easy way to verify the integrity of a backed up volume?
Like, mount it as a regular volume and check a file...?
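(One low-tech way, sketched under the assumption that you restore the backup to a new volume from the Longhorn UI and bind it to a PVC named restored-pvc; that name is hypothetical.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: verify-restore
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: restored
      mountPath: /data
  volumes:
  - name: restored
    persistentVolumeClaim:
      claimName: restored-pvc
EOF
kubectl exec verify-restore -- sha256sum /data/somefile   # compare the checksum against the original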
Great intro video! Thanks!
Thank you for your video! It helped a lot!
I noticed that Longhorn acts as a block storage device -> it won't support
Sorry, what?
@@TechnoTim oops, sorry!
I meant to say:
to my knowledge, it's not possible to use a block storage device in a multi-pod read/write config.
For example: when scaling a Drupal/WordPress server, I would use a few webserver pods all accessing the same volume. This isn't possible with Longhorn. NFS acts on a file system level -> this would work.
I still have to find a solution similar to Longhorn but for multi-pod setups 😅
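(For what it's worth, newer Longhorn releases — 1.1 and later, if memory serves — added ReadWriteMany support by fronting the block volume with an internal NFS share-manager, so a sketch like this should work there; the claim name is an example.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-web
spec:
  accessModes: ["ReadWriteMany"]   # served via Longhorn's NFS share-manager
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
EOF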
Great stuff! Keep it going.
Great video indeed - I deployed Longhorn before this video (but not by much) and I always used worker nodes for storage. Now I wanted to set up, as you did, nodes dedicated only to storage workloads -- but regardless of how I use the taints, I cannot get it to work. If I set a taint, all nodes go red, no matter what I set the taint to or what I put in the web console settings input. Any light on that?
Did you figure taints out? My experience is the same. I did the stuff Tim mentioned in his docs. Nodes say "Down" until I remove the taint mentioned in the docs. The Longhorn docs say to remove all the disks and then edit the YAML. Haven't tried that.
Yep, felt that MinIO cert pain.
Great video! Typo in the thumbnail though: «longhoNrn» ;)
Good eye! Thank you, fixed!
Thanks Tim. Is your next step Harvester?
*3-node RKE cluster with Longhorn and Harvester installed* Who needs Proxmox?
You forgot to mention that the iscsi package must be installed on the k8s nodes to use Longhorn. Without it, it never comes up.
Thanks! This is in the docs but thanks for calling it out here!
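(On Debian/Ubuntu nodes, for example; package names vary by distro.)
sudo apt-get install -y open-iscsi     # required by Longhorn on every node
sudo systemctl enable --now iscsid     # make sure the daemon runs and starts on boot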
Great information on Longhorn. Can you point me to setup information on how we could use Kubernetes/Longhorn to create a development WordPress node that is disconnected from the production nodes, and how it can be deployed to the Kubernetes node setup when the changes are implemented? Thanks in advance.
Excellent video! loved it!
Thank you!
Great video! Thank you!
You are welcome!
Having revisited and used the taints in the docs, the storage nodes show as "DOWN" in the Longhorn dashboard now, but the storage capacity seems right. Weird.
🎯 Key Takeaways for quick navigation:
00:00 Challenges with Kubernetes storage.
01:46 Longhorn: Lightweight, reliable Kubernetes storage.
04:49 Installing Longhorn in Kubernetes.
10:28 Using tags instead of taints for storage nodes.
11:23 Setting up persistent volumes with Longhorn.
16:58 Creating backups and snapshots with Longhorn.
Made with HARPA AI
Hello Tim, your videos are so great! Can Longhorn be used in Docker Swarm?
Thank you! No, it's for Kubernetes!
Hi. Excellent work. Can you make a video on how to do backup and restore with Longhorn? I tried a few different ways and never succeeded. The Longhorn documentation is not very detailed and clear. With snapshots, I always successfully get the data back.
Not sure if you are aware of this but there is a typo in the video thumbnail @Techno Tim
Thanks so much, fixed but unfortunately it's cached, might not be next time you check!
Perfect
I learned a lot by watching the video. ❤ 🌹
Glad it was helpful!
Who the fuck thought it would be a good idea to call the things "taints" lol
At 6:53 you mention you can add a drive to any device on your network. If you have a NAS, I assume you link to its NFS path? Or how is this accomplished? Great video!
Yes! That's right. These storage nodes can mount an NFS path too!
Awesome video!!!
By the way, there's a typo in your thumbnail... ;-)
It might be cached! Try refreshing a few times!
Great video, a couple of questions:
* You show that you have 2 replicas per volume in the "table view", but once you go into the volume details one can see 3 replicas. Is that normal?
* If we use 3 storage nodes, can we achieve HA with only 2 replicas per volume, or does Longhorn calculate quorum on replicas rather than nodes?
* Pods: I see you drained a node and a new PV was created in Longhorn. Why so? Shouldn't it be possible to reuse the same PV on a different node? How do we know that PV1 and PV2 in your example are copies of each other? Is there any hint from Longhorn? And what happens if node 1 goes completely down, will the same principle apply?
I'm using nfs-provisioner because I don't want to use space from my Proxmox cluster. One big problem I see with Longhorn is replication taking too much space if the whole volume is duplicated on each node; in your case 4 times the space allocated, and it can add up fast. I suggest a 10Gb network and NFS behind a raidz of SSDs, or in my case I created 2 storage classes: the default uses HDDs, plus an nfs-ssd class.
Thank you! The only downside of the nfs-provisioner is that if my NFS goes down (reboot/upgrade/whatever) I lose the mounts for every pod in my cluster.
@@TechnoTim hmm, have you looked at Ceph?
Would love to see this revisited in the context of Harvester. Attempting to set it up now, and Harvester says the default storage class is harvester-longhorn. My Rancher install is a VM on Harvester, and I'm passing Harvester back through so Rancher can deploy to it. Rancher doesn't show Longhorn as installed (by default), but since it's running on Harvester, shouldn't it be Longhorn? IDK
If I have 5 servers, each with 1TB disks, and I run Longhorn on each, how much usable space do I have access to?
It will use the remaining space of each drive; it depends on how you use it. Longhorn will create 3 replicas by default, so each volume consumes 3x its size across the cluster: roughly, usable space = (drives x space) / replica count.
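(A worked example with the numbers above, assuming the default of 3 replicas per volume and ignoring the space Longhorn reserves on each disk:)
raw capacity:       5 nodes x 1 TB = 5 TB
effective capacity: 5 TB / 3 replicas ≈ 1.66 TB of schedulable volume data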
You mentioned MinIO in the video. MinIO itself is storage. Can you show how to use MinIO as the storage, replacing Longhorn?
Thank you for all the work you share!
What kind of files will you find on the NFS server when doing a backup?
Also, what do you think about the k3os ISO? I tried to work with it but really didn't get anywhere with Proxmox cloud-init... maybe an idea for a next video :-)
Thanks! I've always opted out of distros dedicated to kubernetes/k3s/rancher. Although I do gain some hardening, I lose more control over the OS than I'd like. Also, I am familiar with care and feeding of Ubuntu and not so much with k3os/rancherOS/etc...
Hey, how about Rook-Ceph storage in Kubernetes?
Is there a way to bring an iSCSI NAS into Longhorn? I have a Dell EqualLogic PS4000 (very old, I know) and I am having a hard time finding documentation on bringing that storage in so it's available for all my services. Thanks for all the great content!
There may be, but I know for sure you could use NFS
Thanks for all your hard work. Learned a lot by watching your videos. I need a little help with copying data/files to/from the PVs/replicas created by Longhorn.
Use kubectl cp. If you need the files there when the container starts, mount the file system into another generic container (like ubuntu or busybox) and then kubectl cp the files there.
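(For example; the pod name, namespace, and paths here are placeholders.)
kubectl cp ./settings.json default/my-pod:/data/settings.json   # host -> pod
kubectl cp default/my-pod:/data/db.sqlite ./db.sqlite           # pod -> host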
Again, great video, and with the videos you posted I was finally able to install Kubernetes (k3s), Rancher, and Longhorn. There are a couple of things I want to mention though.

First, about Longhorn: I created 3 more nodes for storage purposes and attached 150 gigs to each node, but in Longhorn I only see 128 gigs available. I thought it would be 450 gigs. What is the purpose of spinning up more nodes?

The second thing I want to mention is that when the load balancer was set up in the k3s video, it was a layer 4 LB. Launching WordPress gave me an error because the Rancher configuration page asked for a layer 7 load balancer. I don't have that, so I disabled that option. What will happen if the node where WordPress is running becomes unavailable, since I can connect to WordPress through the IP address of the worker node it is running on, with a port number? I thought the idea was to connect through the LB, and the LB brings you to the container you want to connect to independently of the worker node it is running on. Sorry for the long comment.
Hi Tim, I have a question regarding Longhorn. It's a good choice, but it still needs some tuning of the CPU and RAM resources: without any specification it consumes a lot of CPU and RAM when doing its job. So my question is whether you have an optimal configuration of CPU and RAM requests and limits. I have deployed it on k8s using Helm charts. Thank you and best regards!
ur plant on the gold thing looks very sad .... did u give it some love and petting?
It always gets pets!
Thanks for the great video Tim ;)
How do you think Longhorn compares to OpenEBS Jiva?
I really love how easy it is to manage volumes and backups in longhorn, but in the past Longhorn has been a bit unreliable for me, with volumes being disconnected on extensive writes, whereas OpenEBS has been rock solid.
Have you encountered similar issues?
I haven't tried OpenEBS, how is it?
You don't have to format your disks to use Longhorn.
Is there a way to deploy Longhorn without Rancher? I can't get Rancher to import my cluster.
Probably because my cluster masters are running on ARM processors.
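(Yes — Longhorn doesn't require Rancher. The standalone Helm install, as I recall it from the Longhorn docs, looks roughly like this:)
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace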
I'm using the latest Rancher version 2.6 and I don't see the WordPress app. Do I need to add a new repository?
How can I add more storage if it's full? Should I add more nodes to the cluster or just add SSDs?
Here is a challenging use case I'm working on solving... I have around 20TB of Longhorn storage in my cluster, spread across 5 worker nodes, with S3 backups enabled. I would like to somehow expose the Longhorn storage through Samba shares, NFS, or iSCSI to my VMware stack or desktops for a more reliable DR storage option than I have. Any ideas on how to accomplish this? I was thinking a container using a Longhorn PV, running an NFS server of some type and exposing it to my main network.
Is there any way we can increase the PV size?
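(Longhorn supports PVC expansion; a sketch with a hypothetical claim name — note that on older Longhorn versions the volume had to be detached before resizing.)
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'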
It works very well. Of course, it needs SSD disks, otherwise performance can drop a lot. It's not the fastest, and it doesn't yet support disk encryption. There are other solutions like Rook-Ceph or Trident by NetApp, but Longhorn, from my point of view, is the most reliable.
Good call!
What about Ceph?
1:48 Longhorn is Windows Vista Alpha :p
Haha! You guessed it!
So I have been running Longhorn for some time now and backing up to S3... somehow one of my PVs got corrupted and I accidentally deleted the PV. I can't figure out how to restore from backup, because when I click on backups it shows nothing.
Hmmm, not sure. I've always been able to restore a backup from the GUI and reconnect it to the container. Sometimes the service call fails and you have to click it multiple times; you can see the failures in the Chrome dev tools. It's kind of annoying because it fails silently.
How does Rancher Longhorn manage how much space is available across all of the nodes? I need more space. I added new hard drives that were twice the size of the previous ones.
OK, so ESXi was running 3 VMs. I had to go and expand the LVM space so that Rancher Longhorn could fully utilize the disk for /dev/sda3...
sudo fdisk -l                                            # confirm the new disk size and partition layout
sudo growpart /dev/sda 3                                 # grows partition 3; note the space before the number
sudo pvresize /dev/sda3                                  # lets LVM see the enlarged partition (easy to miss)
sudo lvextend -l +100%FREE -r /dev/ubuntu-vg/ubuntu-lv   # grows the LV and (-r) resizes the filesystem
Nice video, but how do you configure the AWS S3 bucket? Does someone have a video or something? I don't know how to configure an S3 bucket with keys for Longhorn.. :(
I did it with MinIO!
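(Roughly what that looks like, assuming a MinIO bucket named longhorn-backups and an endpoint of http://minio.example.com:9000 — both are placeholders.)
kubectl -n longhorn-system create secret generic minio-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-key> \
  --from-literal=AWS_ENDPOINTS=http://minio.example.com:9000
# then in Longhorn Settings -> General, set:
#   Backup Target: s3://longhorn-backups@us-east-1/
#   Backup Target Credential Secret: minio-credentials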
How are you getting your host disks mounted to your storage nodes? Mounting a host path from the hypervisor, or creating a VM disk? Also, side question: if you didn't have any workloads that required VMs, would you roll with Kubernetes on bare metal? You could try out another Rancher product called Harvester for VM management (it's technically HCI though).
I can only speak for myself, but I would definitely be running bare metal + KubeVirt. (Thanks for mentioning Harvester, I hadn't heard about that before!)
Especially when considering that all of Wikipedia is running on bare metal Kubernetes clusters. Niantic with Pokémon Go are doing something similar; they're running LXC containers as worker nodes, because they would otherwise run into the 100-pods-per-node limit.
So for that reason, why not? If they're doing it, it can't be that bad.
Harvester looks interesting for sure! My node disks are mounted via virtual disk. Since I dedicated 4 nodes to storage, I am just using the storage on those nodes. My PVCs are pretty small, I just need them available!
You can pass through the SSD/HDD to the VM; that's what I ended up doing after going crazy with Ceph. My mental illness is in recovery right now, thanks to Tim.
What do you think of Dell's Powerflex vs Longhorn?
I don't know enough about PowerFlex, but I do know that Longhorn gives you shared Kubernetes storage really easily!
Hi Tim,
In your setup, is it also possible to scale out the pod (WordPress)?
TIA
I am not sure! Most of my frontends are client-side apps so I can scale to 10,000 if I want!