Would like to see a performance comparison of a 3-node cluster running Ceph (PVE) between NVMe-, SAS-, and SATA-based setups using 8 to 12 drives per server.
Nice video. I'd like to see you do a demo of migrating a typical Windows VM estate off a small/medium vSphere cluster onto Proxmox. I'm starting to see on-prem VMware customers exploring their options for evacuating vSphere on their next refresh due to the uncertainty being caused by Broadcom at present...
Ask and we shall deliver! I've actually already made that video. Check it out here: m.ruclips.net/video/6jCEe4sfe_g/видео.html
Nice to have you back my friend. Another excellent video from 45Drives. Great topic to cover on a video! Thanks and keep up the good work.
Hi, thank you! I will show this to my manager. I love Proxmox, but I could not present it as well as you did :). Thank you, and I'm waiting for other videos on this subject!
Happy to help! If you are interested, we also have a Proxmox webinar we can deliver to your team! Just reach out.
@@mitcHELLOworld Is there a link or prerecorded demo which we can watch?
@@jcsyjc If you would like to join tomorrow's Proxmox webinar, feel free to register here: www.45drives.com/contact/public-webinar/proxmox-webinar.php
Nice video, and looking forward to SDN. SR-IOV with Intel iGPUs for guest video transcoding acceleration, or even for giving high-performance network access to VMs, is not very well documented for Proxmox and would be a great use case for a video.
Automated VM and config deployments would be awesome, like using Terraform and Packer to stand up domains easily.
Very easy to set up with Terraform and Ansible to automate deployment.
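For anyone curious, a minimal sketch of what that can look like with the community Telmate Terraform provider — the API URL, node name, template name, and credentials below are all placeholders:

```
terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://pve.example.com:8006/api2/json" # placeholder host
  pm_api_token_id     = var.pm_token_id
  pm_api_token_secret = var.pm_token_secret
}

# Clone a new VM from an existing template on the cluster
resource "proxmox_vm_qemu" "web" {
  name        = "web-01"
  target_node = "pve1"              # placeholder node name
  clone       = "debian12-template" # placeholder template name
  cores       = 2
  memory      = 2048
}
```

Pair it with Packer to build the template and Ansible to configure the guests after first boot.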
Longtime VMware user. Never heard of Proxmox. Looks interesting. Never heard of 45Drives either.
That said, you explained VMware features, but it would've been nice to explain Ceph, as that's another one I've never heard of.
Well awesome! Welcome 🙏! Happy to see some new faces. Hope you stick around.
Ceph is a unified, software-defined storage solution with native object, block, and file system interfaces. We have dozens of videos on Ceph if you'd like to check out more!
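For context, those three interfaces map to three everyday tools; a rough sketch against a running cluster (the pool name, image name, and monitor address are all placeholders):

```
# Object: put/get raw objects in a pool
rados -p mypool put greeting ./hello.txt

# Block: create an RBD image (this is what Proxmox VM disks use)
rbd create mypool/vm-disk --size 10G

# File: mount CephFS via the kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin
```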
Interesting. I love Proxmox, ZFS and Ceph and barely heard of vmware. A strange world we all live in.
I'd love to see the SDN features in use. I've been working a lot with EVPN VXLAN right now and would love to see how well it integrates and handles Type-5 routing, if it does that at all. Type-2 would be great as well.
Awesome overview. Can we get some benchmarks from inside the VMs with Ceph, on that crazy setup?
@45Drives can you please either remove the monitor from the desk or use a sturdier desk so that it doesn't shake when the presenter moves or touches the desk? Love the series.
*gets duct tape* No problem! We are always looking to improve our videos and this is great feedback. Thank you.
Looking forward to the SDN video. Like, really really looking forward to your SDN video!
I've got 64 ESXi 6.7 nodes running on older Dell M630s. Planning on switching over to Proxmox over the next couple of months and deploying more NVMe iSCSI storage servers. Honestly can't wait to try some of these features.
Great video. Looking forward to the next video.
Can't wait for that SDN.
Still trying to get to grips with Proxmox in an enterprise environment as a replacement for vSphere (I have worked with VMware products since the outset), and I wonder if anybody has converted yet. In a greenfield setup it's a no-brainer; as long as it's not VMware, you're going to be better off! The real challenge will be converting an existing client. For example, vSAN vs. Ceph: they are both a type of distributed storage, but would you architect them the same? I don't know Ceph that well (yet), but I suspect the disk choices, number, performance, and physical layout would be different. Most vSAN setups I have seen use just 2x NICs; VMware on AWS, for example, is 2x 25GbE in each host, using Distributed Switches with Network I/O Control so all the traffic uses those NICs, prioritised by vSphere. Ceph looks like it wants dedicated NICs for management/cluster, VMs, and storage, maybe pushing the logic up to the physical network, QoS for example. Lots to consider.
I am a big fan of Proxmox, but I don't know many clients that don't have other products in their ecosystem, be it Site Recovery Manager for DR, backup with Veeam or Commvault which integrate into the storage API, or even NSX. They all have replacements in some shape or form, but converting is another thing, especially if different server and network hardware are required. I don't think Proxmox SDN has T1/T2 routing like NSX? Interesting times!
I'm in a similar boat, but my company is considering Proxmox. I've never even heard of it, so it concerns me to use it, plus I'm not a fan of OSS in an enterprise environment: lack of support and mitigation of vulnerabilities.
@@mcdonamw The whole point of Proxmox is that you can get licenses with support, just like any other enterprise-ready software.
@@iankester-haney3315 A support option is nice, but it doesn't alleviate my concerns around vulnerability risk and its mitigation. I also did some research, and someone posted that it requires significant Linux experience to do a lot of things VMware abstracts away for users. I'm definitely not any kind of Linux person.
I'm another one in the same boat. I've been on VMware since 2005 and thought I'd be working with vSphere "private cloud" until my retirement or the death of x86, whichever came first, LOL. But I'm getting seriously concerned looking at the news that keeps coming, and I had my very first refresh customer this week asking about Proxmox migration/transformation. OSS is not usually a runner in my MS-aligned customer base, however. Strange times.
Great video comparing the two. Do you think the community is really missing anything going with Proxmox compared to VMware?
This is amazing quality content and information, thx for that
I love Ceph and Proxmox, but one thing makes me think: usually we never put the MON service on OSD nodes, which is what Proxmox does. Is this a big disadvantage? Or at what point does it become a disadvantage?
I rarely see anyone touch OVS in Proxmox. Can't believe everyone is happy with Linux Bridge.
@@wojtek-33 This is true! Open vSwitch is pretty sweet for sure, though!
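For anyone who wants to try it, Proxmox configures OVS through /etc/network/interfaces once the openvswitch-switch package is installed; a rough sketch, with the interface names, VLAN tag, and address as placeholders:

```
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vlan10
iface vlan10 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 10.0.10.2/24

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan10
```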
What about the newest option, dRAID? I know you said you can do all these other ones, but this one specifically is super nice.
Well, I did mention ZFS, so that's included :P But if you'd like to see some in-depth content on dRAID, let us know specifically what you'd like and what you'd like to compare it to. I'd be happy to dive in!
@@mitcHELLOworld I would be super interested to see how they compare, specifically in speed and also in rebuild time after a failed drive.
You are turning into a beefcake.
Looking good man!
Haha thank you! Always nice to have hard work recognized 🙏
Now we need to see the same video but adding Proxmox Datacenter Manager, the new feature that Proxmox released in alpha.
I haven't researched or studied it too much yet, but I found disk/storage management in Proxmox to appear difficult. I use it for a small lab, so I haven't made resolving it a priority.
Oh brother - storage is one of Proxmox’s absolute biggest highlights. They are so far ahead of other hypervisors with their storage support. I’d be happy to do a video on it!
@@mitcHELLOworld would like to see it.
Now, on the systems I have, it's not clear to me how Proxmox sized the partitions relative to the available storage, but it seemed suboptimal. The 7.x UI seemed limited.
I’m not talking about Ceph and all that.
It would be more useful if it presented all of the nodes as one supernode, then balanced the VMs across the nodes, leaving enough room for automatic migration...
I would like to see them compared on the same hardware, so "real world" test scenarios could be run involving simulated failures of things like disks, nodes, power outages, and whatever else might go wrong. We use VMware at my workplace, and I was surprised how unimpressed I found myself with VMware in general. I use Proxmox in a home lab, but I've not had the chance to compare the two on equal footing. We all know Proxmox is way cheaper and could potentially save millions in costs. The question is more about whether this is like getting a Windows user to switch to a Linux desktop or not. Is Proxmox a real-world replacement, what features does it have or not, and how well do they perform in comparison? My guess is Proxmox could do the job. AWS hypervisors are based on Linux Xen with hardware-supported features rather than doing pass-through. I am guessing Microsoft's own cloud tech is Linux-based as well; ESXi probably is too. So how much value did they add for what they charge, and can you build an equivalent enough system to replace it for "mainstream use" with Proxmox?
You had me at 64 cores lol 😊
Is it possible to spin up a fault tolerant VM with Proxmox? You know - the type that actually runs in parallel on two or more hosts in case one goes down...
Not that I'm aware of. Only regular HA so far
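(For reference, the regular HA mentioned here is driven by the ha-manager CLI or the GUI; a rough sketch, with the VM ID as a placeholder:)

```
# Put VM 100 under HA so it is restarted on a surviving node if its host dies
ha-manager add vm:100 --state started --max_restart 2

# Inspect cluster-wide HA state
ha-manager status
```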
@@AndersHellquist Not that it is a problem, at least for us; we have set up redundant systems at the OS level instead. I guess that's the better way, after all; just make sure those machines don't run on the same host ;)
@@roysigurdkarlsbakk3842 Yes, I have never seen a client using fault tolerance in practice either. Having smarter solutions is usually smarter.
Is there a functional replacement for DRS?
Not yet but it's in progress.
Granted, I only had access to ESXi, so no fancy features. I want to learn Proxmox, but anytime somebody does a tutorial on how to do stuff in Proxmox, it looks like a huge pain.
Proxmox does have ZFS support. And ZFS provides software RAID.
I think you must have misunderstood something if you think that I said Proxmox doesn't have ZFS support. I did mention that ESXi does not have software RAID support, however. We heavily support ZFS and Ceph as storage solutions at 45Drives and we heavily support Proxmox as well.
Apologies if that wasn't super clear to you!
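For the curious, ZFS software RAID of the kind mentioned above is declared at pool-creation time; a rough sketch, with placeholder device names:

```
# Mirror (RAID1-style) across two disks
zpool create tank mirror /dev/sda /dev/sdb

# RAIDZ2 (double parity, comparable role to RAID6) across six disks
zpool create tank2 raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
```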
Please, please, please offer shipping to India. We have basically zero options for cool hardware here in India. It's funny, and it makes me so sad and angry.
We ship to India! Give our team a shout: info@45drives.com
He's Canadian, eh?