I wrote some of the ISO installer and Windows code for Citrix XenServer back when it was around v6. That team was the smartest group of people I've ever had the privilege to work with.
Are you venting or trying to brag about stuff no one cares about?
Hi Tom! Been waiting for your coverage of 8.3, finally dropped, thanks! I hope to see much more in-depth coverage of 8.3. Cheers, Boris!
I’ve been using XCP-ng for about 2 years now in my home lab, have only two hosts but it’s been rock solid the whole time. Thanks for the heads up on this as I wasn’t aware 8.3 had dropped so I will give the update a try later today
I am more bullish on Proxmox. Run it at home for pfSense and a few VMs. Like how it uses the latest version of Debian along with the latest Linux kernel so hardware support is great.
XCP-ng still has a 2TB limit due to using decrepit VHD storage. This is a pain if you're dealing with large database VMs.
Absolutely right. We were migrating from ESXi to XCP and ran into this issue with our customers' big VMs.
It was a real blocker for us, and we finally switched to Proxmox.
This is great to know - thanks mate !
Well done!!!
That 2TB limit should have been addressed years ago by Citrix, but it never was. Vates have recently stated that a solution is due within a few weeks to possibly a few months, and it likely still uses SMAPI v1.
Upgraded to 8.3 after watching you do it on the livestream the other day. It went flawlessly. Been on XCP-ng for a number of years in my homelab now and it does what I want it to (for the most part, but it's usually my fault when something doesn't work).
The video I was waiting for... greetings from Brazil!
Huehuehue BRBR
A review of Incus would be helpful as well. Thank you for the educational content.
The reason Win 11 requires a TPM is simple: MSFT hardware partners needed a new hardware requirement to boost sales. They missed out on a sales boost when MSFT made 10 a free upgrade and screamed blue murder about it. Add to that almost a decade of resentment between 10 and 11.
Just need vGPU and VDI support, then I will be happy to move into XCP-NG.
By default no, but by copying a binary from XenServer you can get it.
I like it; however, hyper-convergence is the top priority for my current professional situation. Otherwise XCP-ng looks great! XO Lite looks like something I'd use a lot.
What are you on, VMware? Have you tried Proxmox?
I can relate to the expanded hardware compatibility as it relates to running on lab machines. I built a simple 2-host lab pool using a Dell OptiPlex 3080 Mini and an OptiPlex 5090 Micro. 8.2 installed without issue on the 3080, but I could not get any video when booting on the 5090; the 8.3 beta did work, so I had to update the other machine in order to get a pool working. Not anything I'd sell to a client as production, but that was my lab experience.
VMware users be advised - I love and continue to use XCP-NG in prod environments, but my storage needs have increased in complexity moving from VMware to XCP-NG. The way Veeam does snapshotting in VMware is more space-efficient than the XO delta backup. Per Vates, I need 2TB free to snap and back up a 2TB drive (4TB total). VMware+Veeam only needed about 2.4TB total. Maybe the new partnership with Veeam will help fix this.
Yeah, the super space-inefficient snapshots bit me in the butt in my little home lab. Apparently snapping an almost empty thin-provisioned 60 GB drive three times causes the machine to take up 180 GB of my 196 GB total... which of course makes it impossible to consolidate, because consolidation needs empty space. Way worse than VMware here, and they're both laughably crap compared to good snapshots, like ZFS.
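If you want to watch it happen, the per-disk numbers are visible from the CLI (a rough sketch; the SR uuid is a placeholder, and physical-utilisation is the field that actually eats your SR):

    # in dom0: list each VDI on the SR with real vs. virtual size
    xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size,physical-utilisation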
Nice video! Really helpful and clear. Thanks 👍. Could you do a video comparing it with Proxmox?
Just bought and installed a fanless "router" style N100 PC that came with four 2.5 Gig Intel i226 NICs - XCP-NG 8.3 worked perfectly out of the box. XO Lite is still very limited, but it's nice to have at least basic controls remotely - like starting up your Orchestra if that's down. Running pfSense virtualized on it with passed-through NICs, plus some other housekeeping-type home servers.
XO Lite is just a starting point which gives you a convenient way to set up Xen Orchestra, which is much more capable.
The problem with XCP-NG 8.3, and why I recently ditched it in a cluster for Proxmox VE, is that the CentOS version it is built on is so damn old you can't run any recent version of the Ceph packages / drivers on it (RBD or otherwise)!
I love XCP in most cases, but the super old CentOS base it uses is becoming a right pain in the ass in some respects.
If and when they fix that, I will seriously consider going back, as it has some features that Proxmox doesn't (like being able to live migrate between hosts NOT in the same cluster), but right now the pros just don't outweigh the cons sufficiently.
Dom0 isn't meant to be modified. Also, Xen is not KVM, it's vastly different (in XCP-ng, it's Xen handling all the important features, not the Dom0, unlike in KVM where it's the host itself). If you need to tinker or bend the solution to match your use cases, indeed it might not be the right fit :)
@@olivierlambert4101 Interesting to see a reply from a member of the XCP-NG Team themselves, thanks for that.
I made my point because it's actually in your documentation that ceph-common (needed for RBD), while not officially recommended, can be installed in dom0 (WITH INSTRUCTIONS ON HOW TO DO SO!) and used, which is great - except that the available packages will NOT talk to any recent version of Ceph, especially Ceph Reef.
That caveat is not mentioned at all, and only by messing around did I figure it out. I then went looking to see if any newer packages were available, but the latest I can get is ~14.x (15.x if I import from other sources), whereas the Ceph cluster I attempted to connect to is running 18.2.x Reef.
I would have reasonably expected that if the possibility of using Ceph RBD was mentioned, I could at least connect to a modern cluster; otherwise, what is the point of even mentioning it in the documentation?
The thing with the Ceph packages is that to get any reasonable performance they must run in kernel space, which to my knowledge implies they must run in dom0, as I don't recall any Xen-specific Ceph packages existing.
I was originally going to use iXsystems TrueCommand clustering to obtain redundancy via SMB / NFS, but iXsystems decided to deprecate that before it even got out of beta, and I needed storage redundancy on a budget (small cluster, limited budget), so Ceph became the next idea. When I discovered that XCP-NG simply would not talk to Ceph Reef, that was the nail in the coffin for XCP-NG. Yes, I looked at XOSAN v1, but just did not like it, and XOSAN v2 wasn't available at the time either (and I haven't rechecked since).
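For anyone hitting the same wall, the mismatch was easy to demonstrate once ceph-common was installed in dom0 per the docs (a rough sketch from memory; your versions may differ):

    # in dom0: the newest client the CentOS 7 base would give me
    ceph --version    # ~14.x (15.x from third-party repos)
    rbd --version     # same client library
    # the cluster I needed to reach was 18.2.x (Reef) - four major releases ahead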
@@KSSilenceAU I'm not just a member of the XCP-ng team; I'm also the creator of both the XCP-ng & Xen Orchestra projects, and the CEO and co-founder of the company behind them.
As for Ceph, our official documentation clearly states it's not officially supported (on the Storage page, see the table with the dedicated "Officially supported" column; Ceph isn't there). Also, specifically in the Ceph section, there's a big yellow warning along the lines of "it may or may not work and it's not supported".
If you aren't happy with the level of Ceph support, I can understand your frustration, but it's clearly documented that it's not supported and doesn't work out of the box. It might get better in the future, but for now we have to choose priorities, and sadly there's not enough demand for that (vs other more pressing things).
Also, if you think the documentation isn't clear enough, there's a link at the bottom of each page ("Edit") so you can improve it; contributions are welcome.
XCP-NG isn't meant to have a lot of crap hacked into it; it's a corporate virtualization platform meant to run stable workloads. Home use of it is an option, but supporting extra stuff bolted onto the hypervisor's dom0 can only make it less stable. If Proxmox serves better for that, then fine, but the much more "chaotic" approach also makes it less attractive for corporations.
The style of the response that you get from the CEO is a HUGE part of why I don't use xcp-ng over Proxmox VE.
His response basically summarises down to RTFM.
But that does nothing to address the actual, primary concern, which is that the source code base does not pull more up-to-date versions of Ceph.
(I recently finally had to migrate off of CentOS and on to Rocky Linux, *because* CentOS is too old now, which is rather unfortunate. (Thanks IBM! [/s]))
Nah. Proxmox. This stuff is old and crusty.
I’m excited for this, and the later addition of networking and VM creation. I’ve been looking for a replacement for ESXi 7.3 and this might do it.
Also Vates, if you guys offer a cheap subscription for us home users that just want to tinker and run like a dozen VMs I think that might be popular.
You can compile it from source, which will have almost all of the features enabled; you just won't get support.
I was going to say, yeah, it's literally free and open source for home usage, and their forums are pretty active if you need any support. There are even tools that will build the management (Xen Orchestra) for you with about 30 seconds of input.
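The from-source route is roughly this (a sketch from memory of the XO docs; check them for the current Node/Yarn requirements before copying):

    git clone -b master https://github.com/vatesfr/xen-orchestra
    cd xen-orchestra
    yarn && yarn build
    cd packages/xo-server
    yarn start    # then browse to the address it prints and register your hosts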
I think Proxmox can better capitalize on the VMware situation.
That's not the impression I'm getting at all. I spoke to a guy doing a demo who worked for a major server manufacturer, and he spoke of another large corporation testing alternatives to their VMware; they had already eliminated Proxmox but were cautiously optimistic about XCP-NG.
@@KimmoJaskari That might be true, but if high availability and reliability are a consideration you're still better off with Proxmox, cause it just works.
It would be great if the Terraform and Packer providers got some love, and some examples that work reliably with 8.3. I'm also looking for solid descriptions of how to deploy Flatcar Linux onto this platform.
The Packer builder for Xen is there, same as the Terraform provider. Probably not as feature-rich as alternative solutions, but it's there.
I currently work with ESXi and I'm surprised by how common upgrade issues breaking ESXi are. I thought running something expensive and supported like VMware would give you peace of mind, but this is not the case. You need to take care of a VMware cluster just the same way you take care of a Proxmox or XCP-ng cluster, with the difference being that you need support for VMware's products while you can repair XCP-ng and Proxmox yourself.
I concur, ESXi is surprisingly mediocre for an "industry leader" and "best of breed" and all the buzzwords. You can do 90% of what most companies use it for with Proxmox or XCP-ng or even Hyper-V and get a more sane and reliable host/cluster.
I used to use ESXi in my "home lab" setup and I lost 1.5TB of data because I didn't realize that by default ESXi deletes the VM's disks when you delete a VM, without confirming whether you want to do so... I was dumb.
To make things even worse, the partition style and the filesystem are a nightmare for recovery tools... I found just one tool that was able to read the partitions and recover my data, but since it costs $800 I couldn't afford it.
I was short on storage so I didn't have any extra backups! Lesson learned...
Details, details. Define the common issues with upgrades.
I used ESXi for at least 12 years, on at least 12-15 machines. A few were the same machine/hardware, but at least 8-10 were different machines, CPUs, ages, generations.
I never had an upgrade issue, so I want to know the details of your experience, as those issues can be operator errors or hardware choices.
@@marcogenovesi8570 Pls define the specific mediocre features of (FREE, before Broadcom) ESXi.
Thanks Tom. Always appreciated.
Switched to Proxmox, since the XCP-ng installer completely ignores my NVMe drives on a Genoa system.
Hardware support, in my opinion, is what makes Proxmox a far superior solution. Both hypervisors do basically the same stuff in practice, but having to worry about hardware support really makes XCP-ng a hard recommendation.
I don't really worry about hardware support. I have XCP-ng on lots of Dell, Supermicro, and Lenovo systems, and a variety of mini PCs.
Side question, does XCP-NG support Big/Little Intel CPU cores?
For anyone wondering I did successfully install it on an MS-01 with a 12900H, and had no issues with the CPU so far.
Hi Tom!! Love your videos. Finally got this feature, but how do you exclude raw disks that are passed through to a VM from backups or snapshots?
Is it just me or is Tom beginning to look like Mr. Miyagi?
I haven't tried XCP-NG yet, although it's looking like I need to. I am a huge Proxmox fan, and due to the stability I have experienced running it I have not been tempted to try another. Does it have distributed storage options like Ceph? I find this invaluable in Proxmox, being able to span many different disks across systems that don't have the same drive layout.
Some cool improvements. Will the VM snapshot disk exclusion work with other attached devices? For example, I have a USB Zigbee controller that I can't attach to my VM because of the snapshots that are created as part of my nightly backup job; I must attach it to another machine and then use USB over Ethernet. Will I be able to attach the Zigbee controller and then exclude it from the snapshots?
I'll take a look when it hits 8.6; the current interface requires too many clicks to get things done.
At work we are actually running multiple sites with Hyper-V failover clusters for our servers and VDI Win10 VMs, but I am starting to consider moving away from Microsoft since they are not licensed yet and the cost is insane. My only concern is Veeam support, which seems to still be in beta.
I don't understand how your costs will go down if you move. You still need to license your Windows servers; unless you're running Linux servers and Win10 VDIs only, then you can save on licensing for the Hyper-V host itself.
@@affieuk yes, that's what I meant: switch from Windows Server on the hosts to Linux. The VM servers will still remain on Windows if they already run on it, and be licensed by virtual core if I am not mistaken.
@@Heartl3ss21 Yeah, it'll be core-based. Last I looked a few years ago it was 8 minimum, going up from there.
Depending on the number of VMs, it's cheaper to move all Windows Server VMs to one node and license it with Datacenter. Automatic activation is a nice bonus, but not by much, since automation will take care of it either way.
@@affieuk true, but who uses a single host to run critical services anymore? You have to use at least two in a failover configuration, and in that case you will have to license both hosts with Datacenter, since they both can host the full number of VMs at any given time.
@@Heartl3ss21 Yup 100%, same goes if you run another hypervisor though. Microsoft licensing fees are crazy, but then there are lots of others that do the same. If you can use open source software for your needs and a support contract if needed, that would be the best outcome.
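Rough math on why Datacenter usually wins for a failover pair (hedging: per-core rules as I remember them - a 16-core minimum per host, Standard granting rights to 2 Windows Server VMs per full licensing of a host's cores, and each failover host licensed for every VM that could land on it):

    2 hosts x 16 cores, 10 Windows Server VMs that can fail over to either host
    Standard:   ceil(10/2) = 5 full core licensings per host -> 5 x 16 cores, on BOTH hosts
    Datacenter: 1 x 16 cores per host, unlimited VMs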
Only problem I had with this was no support for all of the cores on the newer i9 CPUs.
Proxmox is not an appliance!
How long, how many steps, and which other resources would you need if the Proxmox boot/system drive dies?
If it requires more than 10 minutes, more than 3 steps, or another system, it is not an appliance!
Thank you for the video. A question though: when it comes to passthrough, what about AMD X3D graphics?
I have not tested that
What are the hours of Vates support, since they are in France?
Still doesn't support disks of 2TB or more 🙃
The new storage server is in beta right now.
@LAWRENCESYSTEMS so looking forward to this; once the beta adds the ability to migrate those disks, I'm gonna be all over it (especially if it also increases that migration speed from 50MB/s)
A fix to the 2TB+ disk size limit according to Vates is due in the next few months.
@@y0jimbb0ttrouble98 yep, at the latest; I'm pumped!
Liked your videos; however, it's lagging behind Proxmox for multiple reasons, such as CPU support and the old host kernel. Passing through PCI devices should not require a host reboot once they're excluded from the host. Maybe it's stable, but it's lagging - not in performance for me, but in innovation and usability of components compared to Proxmox. It was my previous hypervisor, but it's still lagging behind the other competitors.
I often hear comments about "Proxmox having a more recent kernel," but it's worth clarifying that in XCP-ng the hypervisor itself is not Linux, so the kernel version isn't directly relevant to performance or functionality. This is a bit like focusing on the gas tank size of an electric car: it misses the key point. There are certainly meaningful discussions to be had about XCP-ng and Xen, and understanding these nuances helps keep the conversation relevant.
@@olivierlambert4101 thanks for addressing it. However, XCP-ng is based on CentOS, and that's a fact. Even if I installed Xen on Debian with the latest kernel, other features would still not work at the moment, such as support for vGPU, device passthrough without rebooting the host on every assignment of a PCIe device, and more.
Btw, I used to run Proxmox for 4 years at a company on enterprise gear and pivoted to XCP-ng, but have now moved back to Proxmox because of the simplicity of things such as deploying cloud-init templates in a few clicks; XCP-ng has no virtual sound device on a VM, no choice of disk type and controller, and lastly there isn't any good VDI for XCP-ng.
@@olivierlambert4101 actually it is related to the kernel, though only in certain cases; hope you can help clarify this one: if the host has an iGPU, you won't be able to split it between the host and multiple VMs unless your host kernel version is 6.0 or higher.
@@liora2k It's even more complex than that. Even a recent kernel doesn't have access to all the host memory or all the CPUs, because Dom0 is just a VM after all. So even when a newer kernel is required, it might not be enough.
So by design, the most important piece by far is the hypervisor itself.
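You can see this from inside dom0 itself; a quick sketch with the Xen toolstack CLI:

    # what the physical host has, as seen by the Xen hypervisor
    xl info | grep -E 'nr_cpus|total_memory'
    # what Domain-0 itself was given - typically only a handful of vCPUs and a few GiB
    xl list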
Naaah…..Proxmox Gang here fool, represent 😂😂
Proxmox needs quorumless clustering so badly
@@manitoba-op4jx Yeah, I've been bitten in the ass at least twice because of this.
I love how the comments suggest I translate to English 😮
Question on vTPM. Does your host hardware have to have its own supported hardware TPM in order to host VMs with vTPMs?
vTPM is completely virtualized and does not need a hardware TPM on the host. Afaik it stores the keys in a small virtual disk together with the virtual machine's disks. So it's not as "secure" as a hardware TPM, where the keys are stored inside a physical chip in the TPM device. But it's not meant to be. Its main goal is to make Windows 11 happy so you can install it in a VM.
@@marcogenovesi8570 yeah, if no one has access to the physical hypervisor machine, the vTPM virtual disk is "secure" enough. If you run a Win11 VM and malware gets installed in it, the malware won't be able to access any keys, as they are stored in a "TPM device".
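If I remember right, attaching one from the CLI is a one-liner (VM powered off; the uuid is a placeholder):

    xe vtpm-create vm-uuid=<vm-uuid>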
I would have changed to XCP-ng if it had FULL ZFS features.
Yes, you can use ZFS, but some actions/status/monitoring are still command driven and not implemented in the GUI.
That does not usually stop me from choosing a platform, but given that many (most) of the apps/services I use can be put into much faster and leaner Docker containers, I have been turning VMs off.
For VMs, I ditched ESXi for TrueNAS Scale and never looked back. It offers me enough VM support for what I needed: Windows and Ubuntu full installs.
All the services/apps that I had in the Ubuntu VMs have been migrated to Docker and the VMs shut down.
Still run W10 in a VM as it’s a critical part of my remote admin solution, but with an SSL VPN running in TN and Docker RDP solutions like Guacamole, it will be turned off in the next couple months as well.
Heck, you can run Windows or Ubuntu in a container if you need to.
All of this to say:
VM support is not as important anymore.
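The container route really is that simple for the Linux case (the Windows-in-a-container projects, as far as I know, wrap KVM, so the host still needs /dev/kvm):

    # throwaway Ubuntu shell in a container
    docker run -it --rm ubuntu:24.04 bash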
How do you think XCP-ng compares to Rancher Harvester?
I have never used Harvester but it looks pretty basic compared to XCP-ng
@@LAWRENCESYSTEMS oh what features do you see as lacking?
@@AndrewMorris-wz1vq Documentation, iSCSI support, NFS support.
@@LAWRENCESYSTEMS Huh, I'll have to do some digging. It uses Longhorn under the hood (though you can use other CSIs too, like rook-ceph), which supports iSCSI by default and NFS as an additional option.
How did you get the model of the server to show in the host page?
It does that on most systems.
is it better than QEMU?
Proxmox is so much cooler..
You refer to this as the "best free" virtualization solution but your demos only show the Premium XOA... that's not exactly free so I'm failing to make the connection.
All the features I show in the video can be done with the built-from-source XO: ruclips.net/video/2wMmSm_ZeZ4/видео.htmlsi=d-RvNTTY_JRe6o5z
Can't install it on a sub device, big issue.
Can XCP-ng be put directly on the net with restricted access to the management? I do this with Hyper-V and I am trying to find another hypervisor to replace it.
Smarter people would use a VPN for such things.
Running a VPN on the hypervisor machine wouldn't make any difference. Its Linux base is the reason I would consider doing this: I've got Windows locked down and haven't had any issues, so I was just curious whether, since it's Linux-based and there are millions of properly secured Linux machines directly on the internet, we could use the built-in firewall (unless they've disabled it) to do the same thing. So let me restate it: is the built-in Linux firewall enabled on XCP-ng? If so, then that answers my question.
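You can check that yourself from dom0 (a quick sketch assuming the CentOS 7 base; file paths may differ):

    # list the active rules
    iptables -L -n -v
    # the persistent ruleset, CentOS 7 style, if present
    cat /etc/sysconfig/iptables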
So are you telling me that I can now install XCP-ng like Proxmox, with a GUI out of the box, without what was needed before? :)
Eventually that is what XO Lite will provide; it won't be as full-featured as Xen Orchestra.
@@LAWRENCESYSTEMS I'm asking from a home user perspective - so it seems to be a pretty good alternative ;)
Does it come with a rich web UI out of the box, like Proxmox?
Keep waiting 😂
It's in beta right now, but they're working on it
It comes with a web UI now, as Tom demonstrated, but it's still quite limited. That doesn't really matter, though, since with the Xen Orchestra appliance you get extensive control of all your XCP-NG servers from one interface.
Aggravation Switch 🤣
You know too much. I hope you don't run into any Bond villains.
Does PCI Passthrough finally work for GPUs? Cause that would be a game changer.
I haven't tried with GPUs, but I've tried with other things and it's been pretty flawless, so I can't imagine it would be a problem. If you've had issues specifically with GPUs but other stuff's worked, I'd be happy to test it though.
@@joshuawaterhousify If you could. I've had success passing through GPUs via KVM and Proxmox to Linux VMs but it's never worked for me to Windows VMs.
Really need Windows VM with CUDA for local AI, RPA, AutoCAD & Premiere.
@ericneo2 may not be till the weekend, but I'll throw my 2070 Super in and see what I can do. I know nvidia blocked things on consumer GPUs with code 43 for a while, but I think they opened that up a bit ago? I've been meaning to give it a shot for a gaming VM for a little while.
Testing will be on games, DaVinci Resolve, and maybe some AI stuff, with a bit of Blender or something to make sure that side works as well.
Either way, if you're already on Proxmox and want to stick with KVM, check out Craft Computing; he's got tutorials for it for everything from direct pass through to vGPU
The stumbling block has been Nvidia literally blocking that on purpose on all consumer cards, I believe.
@KimmoJaskari I'm pretty sure they stopped actively blocking it though; I remember hearing that a while back.
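For reference, the classic XCP-ng passthrough flow goes roughly like this (paraphrasing the docs from memory - the PCI address and uuid are placeholders, and 8.3 exposes some of this in the UI now):

    # hide the GPU from dom0, then reboot the host
    /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"
    # attach the hidden device to the VM
    xe vm-param-set other-config:pci=0/0000:01:00.0 uuid=<vm-uuid>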
Sorry to tell you this, bro, but those are all features Proxmox has had for years. And Xen is notoriously unstable and hard to work with. The worst part of this OS is that the new web UI is pretty much a one-to-one copy of the Proxmox UI.
They didn't rework the UI; they basically stole it.
Thanks for making me laugh 😂
@@LAWRENCESYSTEMS Take a closer look at PVE and compare the web UI from XCP to it and you can clearly see that. I tried XCP, ran some stability tests, and compared it to Proxmox. The recovery time in case of a sudden host failure is much better on Proxmox; not only that, it is way harder to crash a Proxmox host compared to XCP. And believe me when I say I would love to find a good Proxmox alternative, but there is none.
You can fanboy as hard as you want, but when it comes to running business-critical applications anywhere, I prefer Proxmox over any other solution because it just works and is not a pain to set up and get going.
Use whatever makes you happy
Misleading title
How so?
Linux 4.19? XEN? What? Feels like 2010…
The 4.19 kernel is limiting some storage features; since this kernel is EOL in Dec. 2024, maybe we'll get something newer soon.
XCP-ng UI Very ugly.😅
First
Twirly
Well, like every big release I go into this with high hopes and come out fed up and stuck with another £37k bill for a year for VMware.
It just doesn't work: the whole storage subsystem is a joke, and the performance loss is criminal, especially on all-flash storage.
Support for 25, 50, and 100 Gig cards is laughable, and when you do make the thing work it just won't work.
No vGPU support at all. WTF, WHY NOT?
Passthrough works very well IF you can actually pass through the devices you want.
I just want my home lab to work without needing to pay for ESXi licenses; 40U of compute is expensive.
XCP-NG should support KVM virtualization!
The entire point of it is to support Xen virtualization...
@@KimmoJaskari I don't agree; they want to create an enterprise virtualization solution. They chose Xen as the tech to do that.
@@serdalo5035 Xen is an enterprise virtualization solution created a while before KVM became available.
This is nonsense. The entire solution is built around the Xen hypervisor. It's like saying XCP-NG should support Hyper-V virtualization.
Hmmm, can I now finally take my mini PC running Proxmox over to XCP-ng 8.3? ... 8.2 wouldn't install on it.