Great video!
Just as a heads-up: instead of initiating the migration via the command line, you can just click the Migrate button in the GUI.
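For reference, the CLI route shown in the video boils down to a single qm call; the VM ID 100 and target node name pve2 below are placeholders:

    # live-migrate VM 100 to node pve2 (equivalent to the GUI Migrate button)
    qm migrate 100 pve2 --online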
Thanks for another good video. I like the short and sweet approach. Mostly just teasing at what the system can do and driving me to investigate other videos and to review the documentation.
I've arrived late to the game and just started using Ceph in the last few months, but I'm all in now. I just love the concepts that Ceph is based upon: replication across hosts, across drives, and erasure coding for economy. It's really great stuff!
I'm not in the business, but to think that I can play with these tools at home and learn the latest tech for servers and storage just blows my mind. And I'm doing it with "trash" hardware... What fun!
@@Larz99 thanks. Exactly, that's what homelab is all about! Let me know how you get on!
@@Jims-Garage I've been getting on a treat! I've put together a four-node cluster with OS-oriented Ceph pools for the VMs. I run the requisite Home Assistant, Minecraft, Pihole and Unbound servers. I built the OS-oriented Ceph FS using largish NVMe partitions for replicated block storage spread across hosts. I also built a "fast" working-data file system using smallish NVMe partitions for metadata and 2TB SATA SSDs for the data pools. Those live on four nodes with one NVMe and one SATA drive each.
Last week I set up a 5-drive SATA USB enclosure and built a "slow" second-tier store with a replicated NVMe pool for metadata and an erasure-coded pool spread across the USB HDDs. That is the media library.
I've got both Linux and Windows clients using the Ceph file systems.
I'm just amazed that I got this all working as easily as I did. It's largely due to you and a few other YouTubers who inspire me. (And a wife who lets me geek out and spend too much time playing.)
My next project is to get a cloud backup scheme in place for off-site storage. I've got an iDrive S3 store good for 5TB and want to get it earning its keep.
Keep up the good work inspiring and teaching the rest of us!
Thanks for another straight-to-the-point video!
Thanks!
The live migration should have happened without a ping being dropped. The disconnect you saw was only the serial console cutting over to the other hypervisor. If you had done it over SSH, you would have seen no dropped pings, or at most one, depending on the speed of your switch.
Thanks, yes I did check the output again and saw no dropouts. The next test is to HA the firewall, wish me luck.
Hi Jim
Thank you for this video. We are waiting for your advanced SDN video, please.
Thank you.
Great content as usual. Just some notes on the Ceph cluster itself: you want to set the global flags noout, noscrub and nodeep-scrub when rebooting Ceph nodes, because the cluster will start to rebalance as soon as the first node goes down if you don't.
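For anyone following along, those are standard Ceph flags; they can be set from any node before the reboot and cleared afterwards, no cluster-specific names required (a minimal sketch):

    # before rebooting a node
    ceph osd set noout
    ceph osd set noscrub
    ceph osd set nodeep-scrub

    # once the node is back and the cluster is healthy again
    ceph osd unset noout
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub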
I bounce nodes up and down, take OSDs out, and do lots of monkey business all the time without turning off scrubs, etc. The system starts doing what it's supposed to and heals itself all nice and tidy, then reCephifies when things come back online.
Why bother?
Just a thought: a nicer test of the HA might be to run your ping command from another node rather than the one you migrated. That way you can see whether the service really is fully available to external clients.
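Something as simple as the following, run from a separate client machine (the VM's IP is a placeholder), gives a timestamped record of any drop during the cutover:

    # -D prefixes each reply with a timestamp (Linux iputils ping)
    ping -D 192.168.1.50 | tee migration-ping.log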
The fun project to cover would be how to shut down a Proxmox cluster with Ceph, as it doesn't seem to have an out-of-the-box solution.
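For what it's worth, the sequence usually suggested (treat this as a sketch, not an official procedure) is to stop the guests, freeze Ceph so nothing tries to rebalance, power the nodes down, and reverse it on the way back up:

    # after shutting down all VMs/containers, on any one node:
    ceph osd set noout
    ceph osd set norebalance
    ceph osd set norecover
    ceph osd set nobackfill
    ceph osd set nodown
    ceph osd set pause
    # power off the nodes one at a time, monitor nodes last
    # on power-up: wait for monitor quorum, then unset the flags in reverse order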
I would always perform a full backup first, just in case.
Every time I go to build out a project, you put out a similar video covering it. If you somehow put out a video on how to use the ZFS over iSCSI storage option in Proxmox, I'll be floored.
Hi Jim,
Thank you, helpful video again :)
May I ask you about physical and/or virtual network *separation* for HA Proxmox + Ceph on these three-or-more-node HA systems?
"Physical separation in an HA environment is highly recommended," say the Ceph documentation, the Proxmox docs and the forum topics.
Check earlier videos in the series. Ceph runs over the Thunderbolt 4 network, which is totally separate from the containers and VMs.
@@Jims-Garage good point, thanks
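For anyone wondering what that separation looks like on the Ceph side, it comes down to the public and cluster networks in ceph.conf; the subnets below are made up for illustration, with the cluster side standing in for something like the Thunderbolt mesh:

    # /etc/pve/ceph.conf (excerpt)
    [global]
        public_network  = 192.168.1.0/24   # monitors, clients, VM traffic
        cluster_network = 10.10.10.0/24    # OSD replication and heartbeats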
A few issues to think about when you do migration (live or offline):
1. Try to use the same hardware CPU generation and brand on the nodes. Live migration from Ryzen to older AMD CPUs does not work flawlessly: the destination VM will spike at 100% CPU and be unresponsive, and you will have to restart it, so no live migration in this case. Maybe it has been fixed in Proxmox 8; I used Proxmox 7. (A possible mitigation is sketched after this list.)
2. Live migration between different processor brands is not possible, so no live migration between AMD and Intel CPUs.
3. Migration (live or offline) of VMs with USB-attached devices is not possible. That ruined my idea of having a Home Assistant VM with failover, sigh.
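One mitigation for the generation mismatch in point 1 is to give the VM a generic CPU model instead of "host", trading newer instruction sets for portability; the VM ID below is a placeholder, and x86-64-v2-AES assumes Proxmox 8:

    # baseline model most modern x86 nodes can provide (Proxmox 8)
    qm set 100 --cpu x86-64-v2-AES
    # or the most conservative option on older releases
    qm set 100 --cpu kvm64

It won't make AMD-to-Intel live migration supported, but it stops the guest from relying on host-specific CPU features the destination lacks.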
❤
Can you show Proxmox High Availability with Home Assistant containers (LXCs or VMs) and a Zigbee stick?
It's possible, but complex without multiple Zigbee sticks.
@@Jims-Garage These sticks are cheap. Having to wait a few days for parts without working home automations is much worse.
Try a network-connected coordinator like the SLZB-06; that way there's no reliance on USB for Zigbee.
@@cossierob6143 Nice, that's something new to me. LAN access would probably make this much easier.
@@cossierob6143 Which dongle would you recommend for PoE? Too many models.
🎉
super!
+1
Why do you go to the effort of cloning and then moving the disks? You can choose the storage at the time you do the clone. Does that not work with Ceph?
Agreed, and I mentioned it on screen. It's for people with existing VMs that want to move to the new Ceph storage.
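For the record, both routes are a one-liner from the shell; the VM IDs, disk name and storage name below are placeholders:

    # clone straight onto the Ceph-backed storage
    qm clone 100 101 --full --storage ceph-vm

    # or move an existing VM's disk onto it (deletes the source copy on success)
    qm disk move 100 scsi0 ceph-vm --delete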
Now just make a video on migrating from VirtualBox/VMware Workstation/bare metal/ESXi to Proxmox.
I did Hyper-V, does that count? 😂
@@Jims-Garage lol, but really, it would help us. A friend and I have around 20-24 VMs on VMware Workstation, and I want to migrate them all to Proxmox.
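In the meantime, the manual route for Workstation/ESXi images is to create an empty VM and import the vmdk; the VM ID, path, bridge and storage name below are all placeholders:

    # create a shell VM, then import and attach the existing disk
    qm create 120 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
    qm importdisk 120 /path/to/disk.vmdk ceph-vm
    qm set 120 --scsi0 ceph-vm:vm-120-disk-0 --boot order=scsi0   # volume name comes from the import output

Newer Proxmox releases also ship an ESXi import wizard in the GUI that automates most of this for ESXi hosts.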
A small hiccup and voilà.
You know... HA, 25 Gbit and all sorts of things: although cute and nice to play around with, for me personally they are among the least interesting topics ever. I mean, talking from a homelab perspective, they're nice to play around with but absolutely not needed in that setting. OK, firewall HA will be useful, but all the Ceph/HA stuff... Just my personal $0.01. I would love to see a video on some of the 'promises' you made earlier, like installing TrueNAS on that NAS you reviewed last time.