I rarely comment and interact on RUclips but your videos are very thorough and easy to follow. So easy that I've signed up as a member (first time ever) to support your channel. Thank you for all that you do and keep up the good work helping the community.
Thanks, Jerome. I really appreciate it.
YES!!!! Jim, you are a machine!
Ha, you're welcome :)
I tried dabbling in the whole Kubernetes thing a few times and failed hard, so lost lol
this series has made it a cakewalk. Very clear explanations, keep it up bro :D
@@richay117 much appreciated, thanks
@@Jims-Garage as a dark mode fan, thanks for the headsup for sunnies, that did crack me up haha
Jim, life hasn't given me much time the last few weeks, but I discovered your channel not long ago and you post a lot of quality content very often.
Keep up the good work and I hope you don't get burned out from posting so much content :D
Thank you for sharing
You're welcome, thanks 👍 I'm pushing the Kubernetes mini-series out quickly as it's no fun having a half-finished cluster. After episode 7 I'll be taking a small break.
Your videos make this whole process seem so easy. Explaining each part as you go along, makes it even better!
Great, I'm glad that it's simple to understand.
Easy to understand, well executed, awesome tutorial. Thanks!
Thanks 👍
Thank you again 😊. I like the way you are doing this 😊👌
Cheers
Jim, your video helped me out with setting up a k3s cluster
Thanks, appreciate the feedback
Hey dude, loving this series, keep it up. ❤
No stopping yet! Thanks 👍
Good job man! I'm learning a lot! Thanks for your awesome work!
You're welcome 😁
FWIW I followed your guidelines using AlmaLinux 9.2 VMs … probably had to change 1 single line to make it work smoothly ootb. Great job as always!
Awesome, thanks for sharing
Thanks man 👍
You're welcome 😁
Thank you for this video and series!
I am planning to deploy Authentik but I want it to be HA. The problem is that it only uses Postgres, which is not natively HA. While there are many replication options, they all require careful design and monitoring.
Would Longhorn facilitate a multi-node, multi-cloud HA Postgres deployment?
It wouldn't give you HA, but it would give you failover. HA databases are extremely complicated (I wouldn't know where to begin).
@@Jims-Garage I have been looking into it and would be happy to share my thoughts if you are interested. If you are, let me know how you would like to connect.
@@munroegarrett feel free to submit a PR
I have been thinking about distributed storage, but for a home lab I am not sure it would make sense to have it on virtual machines: all 3 virtual machines could fail at once because of a hardware failure in the host node.
So, where I'm going with this is failure resistance: would it be better to have this storage distributed across different Proxmox nodes?
Yes, I run 2 physical Proxmox nodes and split VMs across them. I also have backups to a separate NAS.
Thank you for the great content.
I was thinking about the same thing. So, assuming someone has 2 Proxmox nodes, would it be as easy as migrating one of the Longhorn VMs to the other node?
@@DimitrisKatrinakis yes
@@Jims-Garage did you get it to live migrate to another Proxmox node? Or was it a case of taking a snapshot, stopping the VM, migrating it to the next node and restarting?
Another great video. The best so far in the series. Simpler maybe, but useful and straight to the point. I wonder what the specs are to get it working properly in terms of latency, bandwidth, etc. Ceph, for instance, is quite demanding in that regard. I believe this is more relaxed.
Thanks 👍 I can only make it work across physical nodes with NVMe and 10Gb. If I go lower (even 10Gb with SSD), it regularly breaks on large syncs over 20GB. With 10Gb and NVMe I haven't ever had an issue, even with 100GB syncs.
@@Jims-Garage Oh okay, that is quite a lot too. I imagined it would need a more basic config, maybe 1Gb LAN and a regular SSD. But maybe not. I really see the power of this, but mainly if it runs on different physical nodes, and that requires some horsepower.
Could you please describe why you're using a LoadBalancer for longhorn-frontend rather than an NGINX Ingress? Is it to keep Longhorn management separated from other workloads?
That would be a more cloud-native option; however, I'm using Traefik instead of NGINX and used the LoadBalancer as an example (from previous videos). You can access it a number of ways: through the Rancher UI, through an ingress, however you want really.
In an ideal world you'd want a separate cluster for management, but for a homelab I think this is fine (as long as you understand the previous point). People typically won't have the resources or appetite for 2 clusters.
@@Jims-Garage Sorry, maybe I was not clear about what I meant.
I meant that you're using a LoadBalancer type of resource rather than an Ingress. As far as I understand the concept, it requests a load balancer for your cluster, and if it uses the standard ports (80, 443) there's a chance those ports are already taken by Traefik or another ingress controller serving Ingresses for your web apps in a bare-metal setup. In a cloud environment a new, separate load balancer would be created, but in a home lab resources are limited and we probably don't have provisioning for extra load balancers.
An Ingress lets us use one instance of NGINX / Traefik / HAProxy / whatever to route all HTTP/HTTPS traffic to our services following Ingress rules (by domain name, path, etc.).
Of course we can (and we will) have multiple instances of our ingress controller, I said "one" just for simplicity here :)
Hi Jim, great video. With your help everything is up and running, but how can we access the Longhorn web GUI directly without going through Rancher?
You add an ingress, the same way you do with PiHole/Traefik in later videos.
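For reference, a minimal sketch of such an ingress, applied with kubectl. The hostname is a placeholder; it assumes the stock longhorn-frontend service on port 80 in the longhorn-system namespace, and that Traefik is the default ingress class (otherwise add an ingressClassName):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ui
  namespace: longhorn-system
spec:
  rules:
    - host: longhorn.example.lan   # placeholder; point DNS at your ingress/load balancer IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
EOF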
What could I replace this with? I don't have enough resources on my machine to spin up another 3 VMs for Longhorn. Should I use my existing RKE2 stack? Or should I use NFS mounts from my TrueNAS server?
@@ekekw930 skip creating 3 more VMs and install it on the workers
Okay, I will try that out. I also guess this is faster than iSCSI or NFS?
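If you do install Longhorn straight onto the existing workers instead of dedicated VMs, a rough sketch of the Helm install from the Longhorn chart repo (assuming Helm is set up and every worker already has open-iscsi installed):

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace

As for speed versus iSCSI or NFS, it depends heavily on disks and network; see Jim's 10Gb/NVMe comments earlier in the thread.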
Looking forward to Part 6, any ideas when that might drop? 🤔
@@neilbroomfield3080 I think that was the last in the follow-on series. Check the Kubernetes playlist though as I have more videos.
@@Jims-Garage Thanks Jim, ah maybe I misunderstood, sorry, in the outro, you mentioned the next videos and deployments. I'll check the Kubernetes playlist. Ta.
There is a next one, part 7: ruclips.net/video/ulKimzGWtqc/видео.html
Hi Jim 😊
I've successfully run the RKE2 scripts.
So I have a cluster of 3 masters and 2 workers.
Then I ran the Longhorn script for RKE2, not K3S.
I was expecting Longhorn to be installed on my three newly created VMs.
But instead Longhorn appears to be installed on my two worker nodes.
When I open the Longhorn GUI I only have 2 nodes rather than 3,
and these two nodes are my two worker nodes that were created with the RKE2 script.
I think this is something related to labels and the way the script looks for them.
What changes can I make to the script to avoid this?
Thank you 😊
Be sure to add the 3 new VMs to the script. The workers will always have Longhorn components on them; they need the drivers in order to use Longhorn volumes AFAIK. You can unschedule their use for storage in the Longhorn GUI (this is what I do).
Sorry for asking so much, I'm new to Kubernetes, Rancher and Longhorn. How can I expose Longhorn? Everything is working; Longhorn is, of course, configured to use a ClusterIP, but I wish it had its own IP so I could connect to it independently. Thank you!
Create a service and ingress similar to my PiHole video.
@@Jims-Garage I created the ingress and cloned the longhorn-frontend service, and everything is working. The only thing that didn't work for me was adding the basic-auth as indicated in the Longhorn documentation. I can open Longhorn via its IP without going through Rancher, but it doesn't have any kind of authorization; it just opens without any restrictions. If I can't figure out how to fix that, I'll have to decide whether to leave it this way or only access it through Rancher. Thanks a lot.
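Since this series uses Traefik rather than NGINX, the basic-auth recipe in the Longhorn docs (NGINX ingress annotations) won't take effect; a hedged sketch of the Traefik equivalent is a basicAuth middleware attached to the ingress. The names below are placeholders, and the apiVersion and secret key are assumptions worth checking against the Traefik docs for your version:

# create a user:password secret (htpasswd is in apache2-utils); credentials are placeholders
htpasswd -nb admin 'changeme' | kubectl -n longhorn-system create secret generic longhorn-auth --from-file=users=/dev/stdin

# define the middleware and reference it from the ingress (here named longhorn-ui, as in the sketch above)
kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: longhorn-basic-auth
  namespace: longhorn-system
spec:
  basicAuth:
    secret: longhorn-auth
EOF

kubectl -n longhorn-system annotate ingress longhorn-ui \
  traefik.ingress.kubernetes.io/router.middlewares=longhorn-system-longhorn-basic-auth@kubernetescrd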
Thanks Jim for the great info. Could you tell me how to move cluster images to Longhorn so that, if I remove or reset the cluster, they don't have to be downloaded again? In other words, can I create a local image registry using Longhorn? After installing a new cluster I just want to point to the local image registry and not download images again, like a disconnected or air-gapped cluster.
I'm not sure if you can do that. I think it's just for pod storage (I might be wrong).
0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling..
Check node labels and assign as required. The script should label them automatically. If not, add longhorn=true in the nodes section of Rancher.
Add the longhorn label on the 5 nodes I created in the beginning, correct? @@Jims-Garage
@@gp2254 no, you create 3 more nodes for longhorn. 8 total, as per the video instructions.
I did all that :) @@Jims-Garage
Where do I label the nodes with longhorn=true if they are not showing up in the Rancher UI?
Go to Nodes -> 3 dots -> edit.
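If the nodes aren't visible in the Rancher UI yet, the same label can be applied from the CLI instead (node names below are placeholders):

kubectl get nodes
kubectl label nodes longhorn-01 longhorn-02 longhorn-03 longhorn=true
kubectl get nodes --show-labels | grep longhorn=true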
I deployed three nodes with Longhorn, each with 200 GB of disk space. But after setting up Longhorn, I only have 70 GB available. Not sure what really happened; are we supposed to lose that much space with Longhorn? Thanks.
If I needed more space, would it work to resize the disk, then restart the VM and let the partition expand itself? Will Longhorn pick that up?
Check the reservation amount in the Longhorn GUI. You shouldn't lose that much; the default is 20% I think.
Would you use it as an alternative to S3 for example with nextcloud?
Not sure I understand, S3 as in AWS?
@@Jims-Garage I think he means generic object storage
Hello, just wondering if Longhorn can utilize shared storage, e.g. a SAN LUN?
Probably easiest to make the longhorn VM use it, then run Longhorn as normal
good channel thank you
Welcome!
Do you still need Longhorn if using ceph in a 3 node proxmox cluster?
I believe not, you could use the ceph CSI instead.
@@Jims-Garage I'll look into it. Thanks!
Why do we need these dedicated nodes? Why can't Longhorn run as a service on the existing worker nodes?
@@tolpacourt it can do, and that's how I've started running mine. However, in production it's good practice to split out compute, storage and management (even to separate clusters!).
Has anyone done this recently? I started from the k3s video and can't get past this one. Once I run the Longhorn script it basically fails and breaks everything. It also puts Longhorn on the two worker nodes, which I thought was supposed to be separate.
@@user-gr4vx8xz1l Longhorn needs to be on any node being used for storage, and on any worker node that is going to use the storage. You can disable provisioning on the worker nodes through the Longhorn GUI.
@@Jims-Garage I think my setup might just be too slow. I've tried deploying Longhorn through Apps in Rancher and it did work, at least it seems to have. It did take a while though. Thank you for the videos!
Hey! I just ran the script and all seems to have worked without a hitch. When I went into the Longhorn web UI, I noticed it was installed on all nodes.
Is there a fix to this? I've tried reading through the script and understand most of it except the longhorn.yaml file.
You can make the master nodes non-schedulable, which is the best way, or on the Nodes page in the GUI click edit node and make it non-schedulable.
@@Jims-Garage great, thanks! I love your videos by the way!!
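For reference, a CLI sketch of the same toggle Jim describes in the GUI, assuming current Longhorn CRD field names (the node name is a placeholder):

# Longhorn keeps its own per-node scheduling flag in a nodes.longhorn.io resource
kubectl -n longhorn-system get nodes.longhorn.io
kubectl -n longhorn-system patch nodes.longhorn.io k3s-master-01 --type merge \
  -p '{"spec":{"allowScheduling":false}}'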
To anyone trying this today who runs into my issue of the Longhorn nodes not appearing in Longhorn: mine were tainted. I found it after running “kubectl describe node” on the nodes.
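For anyone who hits the same thing, taints can be listed and cleared from the CLI (the node name and taint key below are only examples):

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
kubectl describe node longhorn-01 | grep -i taint
# remove a taint by repeating it with a trailing "-", e.g.:
kubectl taint nodes longhorn-01 example.com/some-taint:NoSchedule-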
Again, as always, very good video, so congrats!! After several failures I successfully installed it. I'm on RKE2 with 3 masters, 2 workers, 3 nodes for Longhorn, and 1 admin server. Now I have been trying to deploy a simple application; I'm looking to run Nginx with a persistent volume. I deployed WordPress as a test and I'm able to see its PVC, which means Longhorn is doing its job, and here is where I got stuck: I have no idea how to access the PVC. If I browse into the node that's holding the WordPress pods, where are all the folders and files of WP, and how can I get at them like a mount path? I plan to clone my repo into a PVC folder to run my application, but I have no idea where it is or how to access it. Thanks for all your help.
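One note that may help here: on the storage node itself the data sits under /var/lib/longhorn as replica block files, so it isn't browsable as ordinary files; the usual way in is through whatever pod mounts the PVC. A rough sketch, where the pod label and mount path are assumptions based on a typical WordPress deployment:

# the "app=wordpress" label is an assumption; match whatever your manifest uses
POD=$(kubectl get pods -l app=wordpress -o jsonpath='{.items[0].metadata.name}')
kubectl get pvc                                       # confirm the claim is Bound
kubectl exec -it "$POD" -- df -h                      # shows where the Longhorn volume is mounted
kubectl exec -it "$POD" -- ls /var/www/html           # browse the volume contents
kubectl cp ./my-repo "$POD":/var/www/html/my-repo     # copy files into the volume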
Hi Jim, update: I was able to get this working by manually adding the label (longhorn=true) in the Rancher UI. I clicked Nodes > edit under the first master node > "labels and annotations" > then on the next screen I added a label longhorn = true and clicked save, and everything started pulling down successfully... Is there a reason the script did not automate this process? Can you kindly explain what part of the script needed to be modified, as I am curious :) Please keep up the good work and keep the content coming!
The script should have labelled the nodes, I'll look into it. Glad you figured out how to manually label them, I cover that in the next video haha.
lol yep I just noticed this AM when following the next episode :) thanks for the help! @@Jims-Garage
Is it correct that I manually added the node labels, or should I remove them through the Rancher UI? @@Jims-Garage
Why would I choose Longhorn over external storage?
@@GreedoShot good question. It's more "cloud-like" with distributed storage, but a traditional NAS would also be fine. I use Longhorn and then back up to my NAS (belt and braces).
I tried adding the longhorn=true label to the master 1 node and things started to pull down, but the 3 Longhorn nodes still failed to show :/
You label the Longhorn nodes, not the master nodes. If the nodes aren't showing, it's likely an issue with either the certs or system resources.
Hi Jim, thanks for all this great material... I am getting this error when deploying Longhorn: "0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling." Also, when I click the Longhorn web UI management link from Rancher I get this: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service \"longhorn-frontend\"",
"reason": "ServiceUnavailable",
"code": 503
}
Any help appreciated. Thanks!
Did you increase the storage size of the VMs from the default 3GB?
Yes, I made the Longhorn VMs a total of 120GB and my other nodes are currently 20GB apiece. @@Jims-Garage
Do I also need to expand the size of all my master and worker nodes as well? @@Jims-Garage
@@gp2254 did you tag the nodes with longhorn=true?
@@gp2254 yes, for the masters I recommend 20GB, workers probably 50GB (for images)
Longhorn is one of dozens of storage solutions that have CSI (Container Storage Interface) drivers for Kubernetes. For home gamers, I would drop Longhorn in favor of TrueNAS (which has more general utility) and just use its CSI driver. Why make things harder than they need to be?
Thanks. I did mention that there are plenty of options, and I might come back to TrueNAS in the future as I do use it. I chose Longhorn as it's part of the Rancher stack, more "enterprise" aligned, and more interesting to configure.
Hmm, I have 5 nodes instead of 3: the three Longhorn nodes and 2 worker nodes. I used the RKE2 scripts.
That's fine, the worker nodes need the Longhorn drivers to use Longhorn. In the Longhorn UI, make them non-schedulable.
Could this be an issue from the k3s script? > # Array of all minus master
allnomaster1=($master2 $master3 $worker1 $worker2)
As I said, come onto Discord for support; it's very difficult in RUclips comments. I suspect there's something wrong in your config files, but I cannot diagnose without seeing them. Many people have it working with the script as it is.
Codename for Windows Vista was Longhorn…
I'm old enough to remember 😢
Well, in the last 3 months something must have changed, sadly. For me, this just goes into an infinite Error -> CrashLoopBackOff cycle. I'll have to look into what's going wrong.
Check you have at least 4 cores, 4GB RAM and 10GB of storage per node
@@Jims-Garage 4GB, 4 cores, and 1TB... 😂
@@Jims-Garage I found the problem. In your script, you only install open-iscsi wherever the script is run, _not_ on each of the Longhorn systems. The pod logs say that's what was missing, and indeed, after adding it to each VM, it started right up. You need to add a loop to SSH to each node and apt-get install open-iscsi.
I also found that I had to add open-iscsi to the other two worker nodes from the previous 5 nodes before it would successfully complete the setup. Possibly because those also have longhorn=true set.
@@BonesMoses thanks, double-check you're using the latest script; it does install open-iscsi on the Longhorn nodes. Good that you found it though.
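For anyone hitting the same CrashLoopBackOff, a sketch of the workaround described above: install open-iscsi on each Longhorn (and worker) VM over SSH. The IPs and username are placeholders:

for node in 192.168.3.31 192.168.3.32 192.168.3.33; do
  ssh "ubuntu@${node}" \
    "sudo apt-get update && sudo apt-get install -y open-iscsi && sudo systemctl enable --now iscsid"
done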
I'm thinking I may re-deploy my entire cluster without the longhorn tag set on the first two worker nodes. Perhaps that was an artifact of your previous experiments as you were getting the series ready, but it seems odd to put it on the smaller nodes not meant specifically for storage.