That's how a tutorial should be done! Thoroughly explained and step-by-step detailed!!! THANK YOU SO VERY MUCH!!
Yes, and it may sound strange, but this step-by-step instruction really helped me understand the logic of Proxmox and Ceph.
This was a perfect tutorial, watched it once, built a test lab, everything worked as expected.
Awesome @substandard649, glad it was helpful! Be sure to sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
I can confirm this. I tried it and it worked on the first try. Really good.
Not only is this a perfect Proxmox/Ceph tutorial, it's also an amazing tutorial on how to make proper videos that deliver results! Thank you!
Great video and thoroughly detailed. My only advice for properly monitoring a "migrating VM" would be to send a ping to the 'Migrating VM' from a different machine/VM. When doing anything from the VM being migrated, the process will pause in order to be transferred over to the new host (thus not showing any dropped packets "from" the VM's point of view).
Keep up the good work!
Thank you @user-xn3bt5mz1x good point. Thanks for your comment.
The best Proxmox & Ceph tutorial, thank you.
@naami2004, awesome! Thank you for the comment and glad it was helpful.
Good job thank u
Oh man. Plain, simple language that novices like myself can understand. Thank you, this was very intuitive.
Best detailed how-to video in the Proxmox universe...
This is great. I would love to see a real-world homelab version using 3 mini PCs and a 2.5GbE switch. I think there are a lot of users like me running Home Assistant in a Proxmox VM along with a bunch of containers for CCTV / DNS etc. There are no videos covering this Ceph scenario and I need a hero 😊
@substandard649 sign up and join the VHT forums here and we can discuss any questions you have in more detail: www.virtualizationhowto.com/community
Good video sir, I played with this with a few Lenovo mini machines and loved it!!
Nice presentation and explanation of some key core steps of the procedure. Yet you omit to mention that:
- nodes should be the same from a hardware perspective, especially when the VMs running are Windows Servers, since you could easily lose your license just by transferring it to a different node with different hardware specs;
- even if someone might get it just from pausing the video and noticing that the 3 storages are the same on all 3 nodes, a mention of that wouldn't hurt;
- finally, a video like this could be a nice start for several others about maintaining and troubleshooting a cluster with Ceph, covering the usual situations like a node going down for good, or going down for a long time while replacement parts are ordered (which floods the user with syslog messages, and you might want to show how to stop or suppress them until the node is fixed), etc.
This is bullcrap. I have Proxmox running on 3 tiny PCs (TRIGKEY, Intel, and an older mini-PC board), all 3 of them once licensed for Windows 7, 10 and 11. I transferred all their activations to my Microsoft cloud account, which essentially happens just by activating while logged in with a MS account. I then installed Proxmox and erased the 3 machines. They even have different-sized boot SSDs; Proxmox and Ceph don't give a rat's ass. I can easily create a Win11 VM and transfer it without issues between the 3. Microsoft has all 3 hardware images in its database, so it's all fine with the OS moving from one to the other.
@@Meowbay Nice, but give it a try with Windows Server licences, not plain OSes. You mentioned you tried it with Win 7/10/11; I was talking about Windows Server OSes. I was on the phone with Microsoft for over an hour and they couldn't even give me a straight answer on whether the licence would be maintained after migration.
Finally, I was talking about production environments, where knowing what will happen is mandatory, not a homelab.
Ceph is an incredibly nice distributed object storage solution, and it's open source. I need to check it out myself.
I’ve been planning to move from VMware & VSAN to “Pmox” :) & ceph for a while now. I just need the time to set everything up and test. I love that you virtualized this first! My used storage is about 90% testing vm’s like these. 🤷♂️
The best tutorial for clustering, 😊thank you sir....We will try it on three server devices, to be applied to the Republic of Indonesia radio data center...
Did it work?
We definitely love the content, we appreciate your attention to detail!!!
Thanks SO MUCH for this video. It literally turned things around for me. Cheers from Panama.
Thanks for this awesome tutorial. It was easy to understand, even for a non-native English speaker.
FYI: when you click in a CMD window, its title changes to "Select" and the process running in the window pauses. For most of the demo it was in selection mode (so the ping command was paused); it would be interesting to see how it worked without the select. Otherwise, loved the demo, and the Ceph storage setup was exactly what I was looking for.
@PaulKling awesome! Thank you for the comment! Be sure to sign up on the forums: www.virtualizationhowto.com/community
Thank you! This was incredibly helpful with my setting up Ceph for the first time and showed all the details necessary to better understand it and test that it was working!
Super helpful, with really clear steps and explanations; it saved me a lot of time and I learnt a lot too - many thanks.
Thank you, that was very informative and spot-on...
One thing I did pick up, and this is my "weirdness": you might be trying a little too hard with the explicit descriptions. For example, in the migration testing you explicitly call out the full hostnames several times - at this stage in the video, viewers are intimately familiar with the servers, so saying "server 1 to 2" would feel more natural.
Could go both ways - as a newbie I appreciate the explicit details; it's exactly when presenters start saying generic-sounding "first box" or "the storage pool" that I often get lost!
Thank for the video. Great explanation and it works like a charm !
Great Video sir, I appreciate the work you put in. It is well explained. Thank you.
At 3:26 it would be useful to mention that Ceph and HA benefit greatly from a separate network for their data exchange. That would be the point where you should choose a different network (assuming, of course, there is one to choose from) than the management one. Yes, it will work with the same network for everything, but it won't be as performant as with a dedicated one.
Edit: I stand corrected - at 8:26 you do mention it.
Best video out on this. What do you think of having a bunch of random drives? How much should I care about same processors, same drive models, same drive sizes?
Excellent presentation. Thank you.
Great tutorial! I'm planning to buy some old thin clients (Ryzen A10) to test this Proxmox 8 Ceph config!
Wow, this is an excellent tutorial. Thanks!
That's really cool! Thanks for the vid
You have got a new subscriber. Awesome tutorial.
Advice: in production environments use 10Gbps links on all servers, or else a "bottleneck" is generated if the disks are running at 6Gbps speed
Do you really think 6 Gbps is the speed you will get from a HDD??
@@ErikS-SSD Perhaps. Or a RAID array, maybe.
Bro forgot RAID arrays can get PRETTY fast, so a single 100Gbps or dual 100Gbps happens to be the surefire way.
that was a perfect introduction. Thank you.
The best guide I found, thanks so much for your effort. Just a question related to Ceph: do you suggest/prefer Quincy or Reef?
tnx so much
Can't wait for the Proxmox dev team to decide to add fault-tolerance functionality to their product. It would be cool.
Great content, shoutoutz from Brazil...
Nice video, I'm planning on upgrading to a Proxmox Ceph cluster this holiday. Prompt result from the YT algorithm.
BTW, that nested cluster under vSphere....😮
@djstraussp Thank you for the comment! Awesome to hear.....also sign up for the forums, would like to see how this project goes: www.virtualizationhowto.com/community
Awesome tutorial thank you 😊
Great video. Thank you!
Wonderful Video! Thanks for your time and detailed explanations. I just found your YT channel and I am loving it so far.
Awesome @pedroandresiveralopez9148! So glad to have you and thank you for your comment. Also, join up on the Discord server, I am trying to grow lots of good discussions here: discord.gg/Zb46NV6mB3
Great tutorial !
Please make a Ceph cluster tutorial on a non-Proxmox distribution.
Your videos are a great help. P.S. I think light mode would make it easier to see details in tutorials.
Totally agree. Dark mode may be the personal preference of a majority of people for day-to-day work on their own screen, but for YouTube videos you should use light mode.
Love your content.
bro this is so cool
Great video. It's a shame you had the command prompt window in "Select" mode when you did the demo of live migration, as this would have paused the pings, but neat nonetheless.
Nice work, Ceph is really good, although I moved a VM from a different disk to the pool and it did not migrate seamlessly. Nevertheless, I like the idea. Can you make a video showing how to use Ceph with HA? Thank you.
Thanks. good stuff.
Great video, thanks! What are the prerequisites for installing Ceph? I have a number of NUCs running in a Proxmox cluster with only 1 locally installed NVMe per node. Can Ceph be installed and configured to run in my environment by partitioning the NVMe drives? I can install the Ceph components, but obviously there's no disk to select in the guide to create the shared pool... On the flip side, if it is not possible, how do you remove all Ceph traces? I can't seem to find an intuitive way to do it (or at least not as easy as adding it)...
Thank you so much, I think I now clearly understand how the storage requirements work. But what about CPU/RAM sharing? I'm planning to build a cluster with enough storage and run VMs on each node, fully utilizing the hardware on each one. I don't know how the cluster is going to behave when one of the nodes fails, or whether I should keep some RAM/CPU spare.
One question. Do we need to have the same shared storage space across all nodes for Ceph to work properly?
I have the same question - can I make one physical server have much larger storage (eg via external HBA/SAS 12*3.5” enclosure) than others, to use as extra file storage?
Just as a side note, that's not an encrypted string.
It is a JSON Web Token, or JWT for short. The payload is not encrypted at all; it is digitally signed but not encrypted.
It is plain text, base64-encoded.
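For anyone who wants to check that for themselves, a minimal sketch (the placeholder string is hypothetical - paste in the actual string you want to inspect; JWT segments use URL-safe base64 without padding, so they may need minor fix-ups before decoding):

JOIN_INFO='<paste-the-string-here>'   # hypothetical placeholder, not the string from the video
echo "$JOIN_INFO" | base64 -d         # prints readable JSON/plain text, showing it is encoded, not encrypted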
nicely done, subscribed now.
I don't have any storage available in the OSD step; how do I create it?
super!
Hi, it is the best tutorial I have seen so far on YouTube; it is complete. However, I have a question: since you said you are running each Proxmox node in VirtualBox, how did you manage to create a VM and not get the error message "KVM virtualisation configured, but not available"? Thank you for your help!
Great tutorial, thanks for sharing
Thanks for watching!
Excellent tutorial. Just 2 questions about Ceph: 1) What happens if the disks in my 3 servers have different sizes (e.g. 200, 500, 800GB), and if one of them is an SSD and the other two are mechanical? 2) Where does the VM's hard disk really live - in the volume spread across the 3 servers? Thanks for your help.
Unfortunately your ping test during migration is useless, since you clicked in the cmd box at 14:38 and the ping stopped. To resume it you would have had to press Enter, but you did not. You can see that the ping of 4 ms is frozen there. It would have been interesting to see whether at least one ping was lost.
Hi, what configuration should I apply to enable the minimum number of available copies to be 1 for a cluster with 2 nodes and a qdevice? Cheers.
Hi,
thanks for your great content, simple and well explained.
"Regarding Proxmox VE's High Availability features, if I have a critical Microsoft SQL Server VM, will the system effectively handle a scenario where one PVE node crashes or if there's a need to migrate the VM to another PVE? Specifically, I'm concerned about the risk of losing transactions during such events. How does Proxmox ensure data integrity and continuity for database applications like SQL Server in high availability setups?"
Its cool 😎
Hello, great video, I was able to follow along. Question: what's the difference between a cluster like this in Proxmox and a Kubernetes/K3s setup in Proxmox - the trade-offs and benefits of one versus the other, etc.? Also, could you list some examples of possible use scenarios and configurations? Thanks.
@markstanchin1692 thank you for the comment! Sign up on the VHT forums here and let's discuss it in more detail: www.virtualizationhowto.com/community
Interesting... adding the second node gets it into the cluster, but it stays red (like unavailable); when trying to add the third node I get an "An error occurred on the cluster node: cluster not ready - no quorum?" error and the cluster join aborts. I have reinstalled all three nodes from scratch a couple of times, and I have removed the cluster and redone it over and over again, to no avail. Not working on my side...
I went through this nicely last week, but after buying a ton of hardware and reconfiguring I went with a clean install, and now I can't even get an OSD made. It's like the initial configuration haunts the disks till the end of time.
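For what it's worth, the usual culprit in a case like this is leftover Ceph/LVM metadata on the previously used disks; a minimal sketch of wiping one, assuming the disk is /dev/sdb and holds nothing you want to keep (destructive):

ceph-volume lvm zap /dev/sdb --destroy   # removes the old Ceph LVM volumes and signatures
wipefs -a /dev/sdb                       # clears any remaining filesystem/partition signatures

After that, the disk should show up again when creating an OSD.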
Thanks for very clear and concise tutorial.
I had one question though. As the 'pool' is shared by three nodes, will it be possible to make the VM auto migrate to another host if one host goes down abruptly?
@subhajyotidas1609 Thank you for the comment! Yes, the Ceph storage pool acts like any other shared storage once configured. You just need to set up HA for your VMs, and if a host goes down, the heartbeat timer will note the host is down and another Proxmox host will assume ownership of the VM, which will be restarted on the other host. Hit me up on the forums if you have any other questions or need more detailed explanations. Thanks @subhajyotidas1609 ! www.virtualizationhowto.com/community
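As a rough illustration of that same setup from the CLI, a minimal sketch (the VM ID 100 and node names are placeholders, not taken from the video):

ha-manager add vm:100 --state started                  # put the VM under HA so it restarts on a surviving node
ha-manager groupadd prefer-pve2 --nodes "pve2,pve3"    # optional: restrict/prioritize which nodes may take it
ha-manager status                                      # shows quorum state and HA-managed resources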
nice tutorial. thanks. Is it possible to attach an external Ceph pool to Proxmox cluster?
Yes, you can mount external RBD or CephFS to Proxmox.
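A minimal sketch of how that attachment could look from the Proxmox CLI (storage ID, pool name and monitor IPs below are placeholders; the keyring is copied from the external cluster to /etc/pve/priv/ceph/ext-ceph.keyring first):

pvesm add rbd ext-ceph --monhost "10.0.0.1 10.0.0.2 10.0.0.3" --pool rbd --username admin --content images,rootdir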
Hello,
You explained just a home-lab-level configuration, but in production we need to add multiple monitors (on a subnet other than the nodes' public IPs), a separate Ceph cluster IP subnet, multiple MDS servers, and multiple Ceph managers.
And all this for high replication throughput, high resiliency, and high availability. Can you please share a proper enterprise-class network diagram for all Ceph services?
Wait, was the Windows cmd paused while it was migrating 🙈? (note the "Select" in the title bar)
Great video. Do the Ceph disks on each node need to be the same size? I have 2 Dell servers and was going to run a mini/micro PC as the 3rd node, with 2TB in each of the Dells but 1TB in the Dell mini PC. Would that work?
@valleyboy3613 thank you for the comment. See the forum thread here: forum.proxmox.com/threads/adding-different-size-osd-running-out-of-disk-space-what-to-look-out-for.100701/ as it helps to understand some of the considerations. Also, create a new Forum post on the VHT Forums if you need more detailed help: www.virtualizationhowto.com/community
Is a 2-node setup a possibility with an external VM for quorum monitoring, like a 2-node vSAN?
Which is better, VMware or Proxmox? I have 3 nodes with 4 SSDs each, and all three have 10GB NICs. But for a high-performance high-availability environment, which is the better option, especially when it comes to VM performance with Windows? In your experience, is Proxmox with Ceph better, or VMware with vSAN?
Can I use that Ceph cluster for storing data outside of Proxmox, and not just for Proxmox VMs?
Would this work with multiple locations? One environment at home and one at my parents' for HA and a universal setup / ease of use?
1. What about hosts with more than 1 HDD/SSD? What should they do in the OSD part?
One OSD per spinning disk, and one NVMe/SSD can be used as the WAL/DB device for multiple OSDs, I think.
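A minimal sketch of that layout with the Proxmox CLI, assuming two spinning disks and one shared NVMe for the DB/WAL (device names are placeholders):

pveceph osd create /dev/sdb --db_dev /dev/nvme0n1   # one OSD per spinning disk...
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1   # ...with the DB/WAL carved out of the shared NVMe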
One question: if I add a disk to the Ceph pool, is it wiped, or is the data kept? Thank you.
4:21 - I don't think that's an encrypted stream. That just looks like base64-encoded information.
That's what it is.
Thank you!! An insightful video. Can I configure a cluster and Ceph storage across 3 datacenters without a dedicated network link, only over the internet?
@gbengadaramola8581 Thank you for the comment, please sign up on the VHT forums and we can discuss it further: www.virtualizationhowto.com/community
I have the same question
Awesome! How about proxmox plus SAN storage?
What happens if pmox1 (the node that created the cluster) crashes and can't come back up? And what if I reinstall pmox1?
How many nodes are needed to implement Ceph? I have 2 nodes.
That's funny. I had always heard you couldn't do live migrations on a nested hypervisor setup.
I have this issue:
Ceph is not compatible with disks backed by a hardware RAID controller.
will this process work for Virtual Environment 6.2-4?
2. Why didn't you show the total storage of the pool? Can we add more storage later? How do we set that up?
Is anyone aware of a Proxmox/Ceph performance tuning guide? I have a 3-node Proxmox cluster with SSD storage that natively gets 500MB/sec when writing directly to the disks. I have a 10GbE network and high-end Xeon servers. When those disks are in a Proxmox/Ceph cluster and I'm reading/writing to Ceph storage, I get about 30-50MB/sec. The speed of Ceph is awful. I also have an SSD NAS over a 10GbE LAN, and the SSD NAS gets 450MB/sec on a RAID-5 setup. I'm considering dumping my entire Ceph cluster and just moving all the storage drives into a second NAS.
Very good tutorial. But I have a question: what kind of bandwidth should you have to use Ceph? I mean, is gigabit enough, or should one use 10 gig?
@niravraychura, thank you for the comment! Hop over to my Discord server to discuss this further either in the home lab discussion section or home-lab-pics channel: discord.gg/Zb46NV6mB3
If you're going to get serious about it you should have a 10G link and a dedicated Ceph network. Get a HW setup with 2x NICs in it so one of them can be dedicated to the Ceph network.
@@nyanates thank you for the answer 😇
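For reference, a minimal sketch of how that separation is expressed in /etc/pve/ceph.conf (the subnets are placeholders; public_network carries client/monitor traffic, cluster_network carries OSD replication over the dedicated NIC):

[global]
    public_network = 10.0.0.0/24      # management / client-facing subnet
    cluster_network = 10.10.10.0/24   # dedicated subnet for OSD replication traffic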
Thanks for the video. I am trying to set up clustering and Ceph on nodes that have previously been configured. I have succeeded with clustering. However, Ceph was installed, but when I try to set up an OSD I get the error "Ceph is not compatible with disks backed by a hardware RAID controller". My question is: what can I do to remedy this?
@bioduntube thank you for the comment! Hit me up on the forums with this topic and let's discuss it further www.virtualizationhowto.com/community
As you know, dude, your video is good, but this overused comment is driving me nuts.
I am planning on deploying multiple Dell R730XDs in a homelab environment and was looking for a storage solution / NAS. Would you recommend using TrueNAS or Ceph? Can we create SMB / iSCSI shares on a Ceph cluster? How do we add users / permissions?
Also, in the present video you've added just 1 disk per node. How can we scale / expand our storage? Is it as simple as plugging in new drives and adding them as OSDs? Do we need to add the same number of drives in each node?
Don't forget to give us your feedback if you used Ceph, and how it worked!
3. Can we upgrade the size of the Ceph disk, e.g. from 50GB to 1TB, if the 50GB is about to get full?
3a. How does one know the free space on each host if the HDD is in a Ceph pool?
@fbifido2, thanks for the comments and questions. Hop over to the Discord server and we can have more detailed discussions there: discord.gg/Zb46NV6mB3
good
Love your video.
However, I'm a bit disappointed in you.
You made your nested Proxmox on a VMware ESXi setup.
That should've been Proxmox :P
Good job nonetheless.
My storage added to node 1 works fine, but when I try to add the OSD on the other nodes it says no disks are available. Can the other 2 nodes share the USB drive connected to node 1? Or do the other 2 nodes need their own unused storage for Ceph to work? Thanks.
@KingLouieX thank you for the comment! Sign up on the forums and create a new topic under "Proxmox help" and let's discuss this further: www.virtualizationhowto.com/community
What happens if one of the servers in the cluster fails? Does the virtual machine keep running on another server (fault tolerance), or is there a failover?
Yes if you set up High Availability (HA) in the Proxmox UI.
Is it a hard requirement to have 3 nodes in order to form a functional PVE cluster?
Thank you for the comment! Sign up on the forums and I can give more personalized help here: www.virtualizationhowto.com/community
Poor man's hyperconverge-ish.. yes lets do it
So Ceph is "just" HA? Meaning, all nodes in the cluster basically see the same filesystem?
Sort of, but not really. Ceph is distributed storage across the cluster using dedicated drives for OSDs, with a minimum of 3 nodes. You have to have a cluster before you build the storage, and you have to have drives installed in the nodes to build the Ceph cluster. Data is distributed across the nodes so it remains readily available if a node or a drive/OSD fails. You then have the option of turning on HA for the VMs so they can always be available on top of the data.
@@MikeDeVincentis Thanks for the explanation. However, I still don't really understand. Does "distributed" mean that each node has an exact replica of a given data set, like a mirror? Or is it more like a RAID 0?
@@cheebadigga4092 More like RAID 10: 3 copies of the data blocks spread across the nodes. Think RAID, but spread across multiple devices, not just drives inside one system.
@@MikeDeVincentis ahhh thanks!
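For reference, the 3-copies behaviour described above is just the pool's size/min_size settings; a minimal sketch of setting and checking them (the pool name is a placeholder):

pveceph pool create vm-pool --size 3 --min_size 2   # 3 replicas, writes allowed while at least 2 are online
ceph osd pool get vm-pool size                      # verify the replica count on an existing pool
ceph osd pool get vm-pool min_size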
Having that nested inside VMware really shows that VMware is just godlike, except for the pricing....😂😂😂
Please have a look at Wazuh - the open-source security platform with Security Information and Event Management (SIEM).
Regards John 🤗
awesome tutorial!