XCP-ng 8.3: Is it the Best Free and Open-Source Virtualization Solution Yet?

  • Published: Nov 24, 2024

Comments • 143

  • @dfgdfg_
    @dfgdfg_ 1 month ago +70

    I wrote some of the ISO installer and Windows code for Citrix XenServer back when it was around v6. That team was the smartest group of people I've ever had the privilege to work with.

    • @michalrybinski3233
      @michalrybinski3233 1 month ago

      Are you venting or trying to brag about stuff no one cares about?

  • @BorisGarami
    @BorisGarami 1 month ago +17

    Hi Tom! Been waiting for your coverage of 8.3; it finally dropped, thanks! I hope to see much more in-depth coverage of 8.3. Cheers, Boris!

  • @DanMackAlpha
    @DanMackAlpha 27 days ago +2

    I’ve been using XCP-ng for about 2 years now in my home lab; I have only two hosts, but it’s been rock solid the whole time. Thanks for the heads-up on this, as I wasn’t aware 8.3 had dropped, so I will give the update a try later today.

  • @adriftatlas
    @adriftatlas 29 days ago +35

    I am more bullish on Proxmox. I run it at home for pfSense and a few VMs. I like how it uses the latest version of Debian along with the latest Linux kernel, so hardware support is great.
    XCP-ng still has a 2TB limit due to using decrepit VHD storage. This is a pain if you're dealing with large database VMs.

    • @vincent3350
      @vincent3350 29 days ago +10

      Absolutely right. We were migrating from ESXi to XCP and ran into this issue with our customers' big VMs.
      It was basically holding us back, and we finally switched to Proxmox.

    • @oscarcharliezulu
      @oscarcharliezulu 28 days ago +1

      This is great to know - thanks mate !

    • @sacothemaster
      @sacothemaster 26 days ago

      Well done!!!

    • @y0jimbb0ttrouble98
      @y0jimbb0ttrouble98 22 days ago +1

      Citrix should have addressed that 2TB limit years ago, but they never did. Vates have recently stated that a solution is due within the next few weeks to possibly a few months, and it likely still uses SMAPI v1.

  • @awstott
    @awstott 29 days ago +2

    Upgraded to 8.3 after watching you do it on the livestream the other day. It went flawlessly. I've been on XCP-ng for a number of years in my homelab now and it does what I want it to (for the most part, but that's usually my fault when something doesn't work).

  • @decastroal
    @decastroal 1 month ago +9

    The video that I was waiting for... greetings from Brazil!

  • @JamesJosephFinn
    @JamesJosephFinn 19 days ago +1

    A review of Incus would be helpful as well. Thank you for the educational content.

  • @markkoops2611
    @markkoops2611 22 days ago +2

    The reason Win 11 requires a TPM is simple. MSFT hardware partners needed a new hardware requirement to boost sales. They missed out on a sales boost when MSFT made 10 a free upgrade and screamed blue murder about it. Add almost a decade of resentment between 10 and 11

  • @DangoNetwork
    @DangoNetwork 1 month ago +7

    Just need vGPU and VDI support, then I will be happy to move into XCP-NG.

    • @araa5184
      @araa5184 29 days ago

      By default no, but by copying a binary from XenServer you can get it.

  • @LIKKLEbitCsale
    @LIKKLEbitCsale 1 month ago +4

    Like it; however, hyper-convergence is the top priority for my current professional situation. Otherwise XCP-ng looks great! XO Lite looks like something I'd use a lot.

    • @LtdJorge
      @LtdJorge 29 days ago

      What are you on, VMware? Have you tried Proxmox?

  • @chrislex2598
    @chrislex2598 26 days ago

    I can relate to the expanded hardware compatibility as it relates to running on lab machines. I built a simple 2-host lab pool using a Dell Optiplex 3080 Mini and an Optiplex 5090 Micro. 8.2 installed without issue on the 3080, but I could not get any video booting on the 5090; the 8.3 beta did work, so I had to update the other machine in order to get a pool working. Not anything I'd sell to a client as production, but that was my lab experience.

  • @fairsitetechnologies9813
    @fairsitetechnologies9813 27 days ago +2

    VMware users be advised: I love and continue to use XCP-NG in PROD environments, but my storage needs have increased in complexity moving from VMware to XCP-NG. The way Veeam does snapshotting in VMware is more space-efficient than the XO delta backup. Per Vates, I need 2TB free to snap and back up a 2TB drive (4 TB total). VMware + Veeam only needed about 2.4 TB total. Maybe the new partnership with Veeam will help fix this.

    • @KimmoJaskari
      @KimmoJaskari 17 days ago

      Yeah, the super space-inefficient snapshots bit me in the butt in my little home lab. Apparently snapshotting an almost empty thin-provisioned 60 GB drive three times causes the machine to take up 180 GB of my 196 GB total... which of course makes it impossible to consolidate, because consolidation needs empty space. Way worse than VMware here, and they're both laughably crap compared to good snapshots, like ZFS.
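
      A rough back-of-the-envelope sketch of the space math quoted in the two comments above, assuming the behaviour they describe (an XO delta backup needing free space roughly equal to the disk being backed up, and each snapshot of a thin-provisioned disk reserving its full virtual size). The figures are simply the numbers quoted in the comments, not independent measurements:

          # Delta-backup scenario: a 2 TB disk reportedly needs ~2 TB free to snapshot and back up
          disk_tb = 2.0
          xcp_total_tb = disk_tb + disk_tb        # ~4 TB total, per the Vates guidance quoted above
          vmware_total_tb = 2.4                   # rough total reported for VMware + Veeam

          # Home-lab anecdote: a thin-provisioned 60 GB disk snapshotted three times
          virtual_gb = 60
          snapshots = 3
          worst_case_gb = virtual_gb * snapshots  # ~180 GB of a 196 GB datastore

          print(f"XO delta backup estimate: {xcp_total_tb:.1f} TB vs VMware+Veeam: {vmware_total_tb:.1f} TB")
          print(f"Snapshot worst case: {worst_case_gb} GB consumed")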

  • @dariocaputo1083
    @dariocaputo1083 29 days ago +2

    Nice video! Really helpful and clear. Thanks 👍. Could you do a video to compare it with proxmox?

  • @KimmoJaskari
    @KimmoJaskari 29 days ago

    Just bought and installed a fanless "router" style N100 PC that came with four 2.5 Gig i226 NICs; XCP-NG 8.3 worked perfectly out of the box. XO Lite is great but still very limited; even so, it's nice to have at least basic controls remotely, like starting up your Orchestra if that's down. Running pfSense virtualized on it with passed-through NICs, and some other housekeeping-type home servers.

    • @makeitcloudy
      @makeitcloudy 27 days ago

      XO Lite is just a starting point that gives you a convenient way to set up Xen Orchestra, which is much more capable.

  • @KSSilenceAU
    @KSSilenceAU 1 month ago +22

    The problem with XCP-NG 8.3, and why I ditched it in a cluster for Proxmox VE just recently, is that the CentOS version it is built on is so damn old you can't even run any recent version of the Ceph packages / drivers on it (RBD or otherwise)!
    I love XCP in most cases, but the super old CentOS base it uses is becoming a right pain in the ass in some respects.
    If and when they fix that, I will seriously consider going back, as it has some features that Proxmox doesn't (like being able to live migrate between hosts NOT in the same cluster), but right now the pros just don't outweigh the cons sufficiently.

    • @olivierlambert4101
      @olivierlambert4101 1 month ago +16

      Dom0 isn't meant to be modified. Also, Xen is not KVM, it's vastly different (in XCP-ng, it's Xen handling all the important features, not the Dom0, unlike in KVM where it's the host itself). If you need to tinker or bend the solution to match your use cases, indeed it might not be the right fit :)

    • @KSSilenceAU
      @KSSilenceAU 1 month ago +8

      @@olivierlambert4101 Interesting to see a reply from a member of the XCP-NG team themselves, thanks for that.
      I made my point because it's actually in your documentation that ceph-common (needed for RBD), while not officially recommended, can be installed (WITH INSTRUCTIONS ON HOW TO DO SO!) to dom0 and used, which is great, except that the available packages will NOT talk to any recent version of Ceph, especially Ceph Reef.
      That caveat is not mentioned at all, and only by messing around did I figure it out. I then went looking to see if any newer packages were available, but the latest I can get is ~14.x (15.x if I import from other sources), whereas the Ceph cluster I attempted to connect to is running 18.2.x Reef.
      I would have reasonably expected that if the possibility of using Ceph RBD was mentioned, I could at least connect it to a modern cluster; otherwise, what is the point of even mentioning it in the documentation?
      The thing with Ceph packages is that to have any reasonable performance, they must run in kernel space, which to my knowledge implies they must run in Dom0, as I don't recall any Xen-specific Ceph packages existing.
      I was originally going to use iXsystems TrueCommand clustering to obtain redundancy via SMB / NFS, but iXsystems decided to deprecate that before it even got out of beta, and I needed storage redundancy on a budget (small cluster, limited budget), so Ceph became the next idea; but when I discovered that XCP-NG simply would not talk to Ceph Reef, that was the nail in the coffin for XCP-NG. Yes, I looked at XOSAN v1, but just did not like it, and XOSAN v2 wasn't available at the time either (and I haven't rechecked since).

    • @olivierlambert4101
      @olivierlambert4101 1 month ago +25

      @@KSSilenceAU Beyond being a member of the XCP-ng team, I'm also the creator of both the XCP-ng & Xen Orchestra projects, and the CEO and co-founder of the company behind them.
      As for Ceph, our official documentation states clearly that it's not officially supported (in the Storage page, see the table with "Officially supported" in the dedicated row; Ceph isn't there). Also, specifically in the Ceph section, there's a big yellow warning about "it may work or not and it's not supported".
      If you aren't happy with the level of Ceph support, I can understand your frustration, but it's clearly documented that it's not something supported nor working out of the box. It might be better in the future, but for now we have to choose priorities, and sadly there's not enough demand for that (vs other more pressing things).
      Also, if you think the documentation isn't clear enough, there is a link at the bottom of each page ("Edit") so you can improve it; contributions are welcome.

    • @KimmoJaskari
      @KimmoJaskari 29 days ago +4

      XCP-NG isn't meant to have a lot of extra stuff hacked into it; it's a corporate virtualization platform meant to run stable workloads. Home use of it is an option, but support for bolting extra stuff onto the hypervisor's dom0 can only make it less stable. If Proxmox serves better for that, then fine, but the much more "chaotic" approach also makes it less attractive for corporations.

    • @ewenchan1239
      @ewenchan1239 28 days ago +6

      The style of the response that you get from the CEO is a HUGE part of why I don't use xcp-ng over Proxmox VE.
      His response basically summarises down to RTFM.
      But that does nothing to address the actual, primary concern, which is that the source code base does not pull more up-to-date versions of Ceph.
      (I recently finally had to migrate off of CentOS and on to Rocky Linux, *because* CentOS is too old now, which is rather unfortunate. (Thanks IBM! [/s]))

  • @NatesRandomVideo
    @NatesRandomVideo 24 days ago +8

    Nah. Proxmox. This stuff is old and crusty.

  • @napalmsteak
    @napalmsteak 29 days ago +1

    I’m excited for this, and the later addition of networking and VM creation. I’ve been looking for a replacement for ESXi 7.3 and this might do it.
    Also Vates, if you guys offer a cheap subscription for us home users that just want to tinker and run like a dozen VMs I think that might be popular.

    • @Darkk6969
      @Darkk6969 29 days ago +1

      You can compile from source, which will give you almost all of the features enabled; you just won't get support.

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago

      I was going to say, yeah, it's literally free and open source for home usage, and their forums are pretty active if you need any support. There are even tools that will build the management for you with about 30 seconds of input.

  • @Alex4n3r
    @Alex4n3r 29 days ago +9

    I think Proxmox can better capitalize on the VMware situation.

    • @KimmoJaskari
      @KimmoJaskari 29 days ago +4

      That's not the impression I'm getting at all. I spoke to a guy doing a demo who worked for a major server manufacturer, and he spoke of another larger corporation testing alternatives to their VMware; they had already eliminated Proxmox but were cautiously optimistic about XCP-NG.

    • @jacobscheit4128
      @jacobscheit4128 27 days ago

      @@KimmoJaskari That might be true, but if high availability and reliability are a consideration you're still better off with Proxmox because it just works.

  • @JamesWebster1975
    @JamesWebster1975 29 days ago +1

    It would be great if the Terraform and Packer providers got some love, and some examples that work reliably with 8.3. I'm also looking for solid descriptions of how to deploy Flatcar Linux onto this platform.

    • @danonh1
      @danonh1 29 days ago

      The Packer builder for Xen is there, same as the Terraform provider. Probably not as feature-rich as alternative solutions, but it's there.

  • @akahenke
    @akahenke 1 month ago +5

    I currently work with ESXi and I'm surprised by how common issues with upgrades breaking ESXi are. I thought running something expensive and supported like VMware would give you peace of mind, but this is not the case. You need to take care of a VMware cluster just the same way you have to take care of a Proxmox or XCP-ng cluster, with the difference being that you need support for VMware's products while you can repair XCP-ng and Proxmox by yourself.

    • @marcogenovesi8570
      @marcogenovesi8570 29 days ago +3

      I concur, ESXi is surprisingly mediocre for an "industry leader" and "best of breed" and all the buzzwords. You can do 90% of what most companies use it for with Proxmox or XCP-ng or even Hyper-V, and get a more sane and reliable host/cluster.

    • @jonylentz
      @jonylentz 29 days ago

      I used to use ESXi in my "home lab" setup and I lost 1.5TB of data because I didn't realize that by default ESXi deletes the VM's disks when you delete a VM, without confirming whether you want to do so... I was dumb.
      To make things even worse, the partition style and the filesystem are a nightmare for recovery tools... I found just one tool that was able to read the partitions and recover my data, but since it costs $800 I couldn't afford it.
      I was short on storage, so I didn't have any extra backups! Lesson learned...

    • @pepeshopping
      @pepeshopping 28 days ago

      Details, details. Define "common issues with upgrades".
      I used ESXi for at least 12 years, on at least 12-15 machines. A few were the same machine/hardware, but at least 8-10 were different machines, CPUs, ages, generations.
      I never had an upgrade issue, so I want to know the details of your experience, as those issues can be operator errors or hardware choices.

    • @pepeshopping
      @pepeshopping 28 days ago

      @@marcogenovesi8570
      Pls define the specific mediocre features of (FREE, before Broadcom) ESXi.

  • @markalmada9662
    @markalmada9662 29 days ago

    Thanks Tom. Always appreciated.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 29 days ago +3

    Switched to Proxmox, since the XCP-ng installer completely ignores my NVMe drives on a Genoa system.

  • @jttech44
    @jttech44 13 days ago

    Hardware support, in my opinion, is what makes proxmox a far superior solution. Both hypervisors do basically the same stuff in practice, but having to worry about hardware support really makes XCP-ng a hard recommendation.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  13 days ago +1

      I don't really worry about hardware support. I have XCP-ng on lots of Dells, Supermicro systems, Lenovo, and a variety of Mini-PC's

  • @napalmsteak
    @napalmsteak 29 days ago +6

    Side question, does XCP-NG support Big/Little Intel CPU cores?

    • @napalmsteak
      @napalmsteak 29 days ago

      For anyone wondering I did successfully install it on an MS-01 with a 12900H, and had no issues with the CPU so far.

  • @srikantas2460
    @srikantas2460 29 days ago

    Hi Tom!! Love your videos. Finally got this feature, but how do I exclude raw disks that are passed to a VM from backups or snapshots?

  • @martinkeatings7126
    @martinkeatings7126 29 days ago +1

    Is it just me or is Tom beginning to look like Mr. Miyagi?

  • @CyberSquatch007
    @CyberSquatch007 26 days ago

    I haven't tried XCP-NG yet, although it's looking like I need to. I am a huge Proxmox fan and due to the stability I have experienced running it I have not been tempted to try another. Does it have distributed storage options like CEPH? I find this invaluable in Proxmox being able to span many different disks in systems that don't have the same drive layout.

  • @jig1056
    @jig1056 29 days ago

    Some cool improvements. Will the VM snapshot disk exclusion work with other attached devices? For example, I have a USB Zigbee controller that I can't attach to my VM because of the snapshots that are created as part of my nightly backup job. I must attach it to another machine and then use USB over Ethernet. Will I be able to attach the Zigbee controller and then exclude it from the snapshots?

  • @allandresner
    @allandresner 29 days ago

    I'll take a look when it hits 8.6; the current interface requires too many clicks to get things done.

  • @Heartl3ss21
    @Heartl3ss21 29 days ago +1

    At work we are actually running multiple sites with Hyper-V failover clusters for our servers and Win10 VDI VMs, but I am starting to consider moving away from Microsoft since they are not licensed yet and the cost is insane. My only concern is Veeam support, which seems to still be in beta.

    • @affieuk
      @affieuk 29 days ago

      I don't understand how your costs will go down if you move. You still need to license your Windows servers, unless you're running Linux servers and Win10 VDIs only; then you can save on licensing for the Hyper-V host itself.

    • @Heartl3ss21
      @Heartl3ss21 29 days ago

      @@affieuk Yes, that's what I meant: to switch from Windows Server on the hosts to Linux. The VM servers will still remain on Windows if they already run on it, and be licensed by virtual core if I am not mistaken.

    • @affieuk
      @affieuk 29 days ago +1

      @@Heartl3ss21 Yeah, it'll be core based. Last I looked, a few years ago, it was 8 minimum, going up from there.
      Depending on the number of VMs, it's cheaper to move all Windows Server VMs to one node and license it with Datacenter. Automatic activation is a nice bonus, but not by much, since automation will take care of it either way.

    • @Heartl3ss21
      @Heartl3ss21 29 days ago

      @@affieuk True, but who uses a single host to run critical services anymore? You have to use at least two in a failover configuration, and in that case you will have to license both hosts with Datacenter since they both can host the full number of VMs at any given time (rough math sketched at the end of this thread).

    • @affieuk
      @affieuk 29 days ago

      @@Heartl3ss21 Yup, 100%; the same goes if you run another hypervisor though. Microsoft licensing fees are crazy, but then there are lots of others that do the same. If you can use open-source software for your needs, with a support contract if needed, that would be the best outcome.
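
      A rough sketch of the licensing math discussed in this thread, as best I understand current per-core Windows Server licensing (16-core minimum per host, Standard covering 2 VMs per full set of core licenses, Datacenter unlimited), and with made-up placeholder prices rather than real quotes; in a two-node failover cluster both hosts must be fully licensed, whichever edition you pick:

          import math

          STD_PER_2CORE_PACK = 100   # placeholder price, not a real quote
          DC_PER_2CORE_PACK = 650    # placeholder price, not a real quote

          def standard_cost(cores_per_host: int, vms: int) -> int:
              cores = max(cores_per_host, 16)    # 16-core minimum per host
              packs = math.ceil(cores / 2)       # licenses sold in 2-core packs
              stacks = math.ceil(vms / 2)        # each full stack of core licenses covers 2 VMs
              return packs * stacks * STD_PER_2CORE_PACK

          def datacenter_cost(cores_per_host: int) -> int:
              cores = max(cores_per_host, 16)
              return math.ceil(cores / 2) * DC_PER_2CORE_PACK   # unlimited VMs on the licensed host

          hosts, cores, vms = 2, 32, 20
          print("Standard, both cluster nodes:", hosts * standard_cost(cores, vms))
          print("Datacenter, both cluster nodes:", hosts * datacenter_cost(cores))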

  • @AlexanderDeWolf-v7q
    @AlexanderDeWolf-v7q 26 days ago

    The only problem I had with this was no support for all of the cores on the newer i9 CPUs.

  • @pepeshopping
    @pepeshopping 28 days ago +2

    Proxmox is not an appliance!
    How long, how many steps, and what other resources would you need if the Proxmox boot/system drive dies?
    If it requires more than 10 minutes, more than 3 steps, or another system, it is not an appliance!

  • @Lafiro
    @Lafiro 27 days ago

    Thank you for the video. A question though: when it comes to passthrough, what about AMD X3D graphics?

  • @mikeyfoofoo
    @mikeyfoofoo 29 days ago

    What are the hours for Vates support, since they are in France?

  • @jzcalderon
    @jzcalderon 29 days ago +4

    Still doesn't support disks of 2TB or more 🙃

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  29 days ago

      The new storage server is in beta right now.

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago

      ​@LAWRENCESYSTEMS so looking forward to this; once the beta adds the ability to migrate those disks, I'm gonna be all over it (especially if it also increases that migration speed from 50MB/s)

    • @y0jimbb0ttrouble98
      @y0jimbb0ttrouble98 22 days ago

      A fix to the 2TB+ disk size limit according to Vates is due in the next few months.

    • @joshuawaterhousify
      @joshuawaterhousify 22 days ago

      @@y0jimbb0ttrouble98 yep, at the latest; I'm pumped!

  • @liora2k
    @liora2k 1 month ago +7

    Liked your videos; however, it's lagging behind Proxmox for multiple reasons, such as CPU support and the old host kernel. Passing through PCI devices should not require a host reboot once you've excluded them from the host. Maybe it's stable, but it's lagging, not in performance for me, but in innovation and usability of its components compared to Proxmox, which was my previous hypervisor; it's still lagging behind the other competitors.

    • @olivierlambert4101
      @olivierlambert4101 1 month ago +7

      I often hear comments about 'Proxmox having a more recent kernel,' but it's worth clarifying that in XCP-ng, the hypervisor itself is not Linux, so the kernel version isn't directly relevant to performance or functionality. This is a bit like focusing on the gas tank size of an electric car: it misses the key point. There are certainly meaningful discussions to be had about XCP-ng and Xen, and understanding these nuances helps keep the conversation relevant.

    • @liora2k
      @liora2k 1 month ago

      @@olivierlambert4101 Thanks for addressing it; however, XCP-NG is based on CentOS and that's a fact. Even if I installed Xen on Debian with the latest kernel, other features would still not work at the moment, such as support for vGPU, device passthrough without rebooting the host on every assignment of a PCIe device, and more.
      Btw, I used to run Proxmox for 4 years at a company with enterprise gear and pivoted to XCP-NG, but I have now moved back to Proxmox because of the simplicity of things such as deploying cloud-init templates in a few clicks, plus XCP-ng's lack of a virtual sound device on VMs and of different disk types and controllers, and lastly because there isn't any good VDI for XCP-ng.

    • @liora2k
      @liora2k 25 days ago +1

      @@olivierlambert4101 Actually it is related to the kernel, but only in certain cases; hope you can help clarify this: if the host has an iGPU, you won't be able to split it between the host and multiple VMs unless your host kernel version is 6.0 or higher.

    • @olivierlambert4101
      @olivierlambert4101 25 days ago

      @@liora2k It's even more complex than that. Even a recent kernel doesn't have access to all the host memory, nor all the CPUs, because Dom0 is just a VM after all. So even if a newer kernel is required, it might not be enough.
      So by design, the most important piece by far is the hypervisor itself.

  • @seansingh4421
    @seansingh4421 29 days ago +10

    Naaah…..Proxmox Gang here fool, represent 😂😂

    • @manitoba-op4jx
      @manitoba-op4jx 29 days ago +5

      proxmox needs quorumless clustering so bad

    • @Jordan-hz1wr
      @Jordan-hz1wr 29 days ago

      @@manitoba-op4jx Yeah, I've been bitten in the ass at least twice because of this.

    • @markalmada9662
      @markalmada9662 29 days ago

      I love how the comments suggest I translate to English 😮

    • @pepeshopping
      @pepeshopping 28 days ago +1

      Proxmox is not an appliance.
      How long, how many steps, and what other resources would you need if the Proxmox boot/system drive dies?
      If it requires more than 10 minutes, more than 3 steps, or another system, it is not an appliance!

  • @john-r-edge
    @john-r-edge 1 month ago

    Question on vTPM. Does your host hardware have to have its own supported hardware TPM in order to host VMs with vTPMs?

    • @marcogenovesi8570
      @marcogenovesi8570 29 days ago +6

      vTPM is completely virtualized and does not need a hardware TPM on the host. Afaik it stores the keys in a small virtual disk together with the virtual machine's disks, so it's not as "secure" as a hardware TPM where the keys are stored inside a physical chip in the TPM device. But it's not meant to be. Its main goal is to make Windows 11 happy so you can install it in a VM.

    • @gabriellando1
      @gabriellando1 29 days ago

      @@marcogenovesi8570 Yeah, if no one has access to the physical hypervisor machine, the vTPM virtual disk is "secure" enough. If you run a VM with Win11 and malware gets installed, the malware won't be able to access any keys, as they are stored in a "TPM device".

  • @pepeshopping
    @pepeshopping 28 days ago +1

    I would have changed to XCP-ng if it had FULL ZFS features.
    Yes, you can use ZFS, but some actions/status/monitoring are still command driven and not implemented in the GUI.
    That does not usually stop me from choosing a platform, but given that many (most) of the apps/services I use can be put into much faster and leaner Docker containers, I have been turning VMs off.
    For VMs, I ditched ESXi for TrueNAS Scale and never looked back. It offers me enough VM support for what I needed: Windows and Ubuntu full installs.
    All the services/apps that I had in the Ubuntu VMs have been migrated to Docker and the VMs shut down.
    I still run W10 in a VM as it's a critical part of my remote admin solution, but with an SSL VPN running in TN and Docker RDP solutions like Guacamole, it will be turned off in the next couple of months as well.
    Heck, you can run Windows or Ubuntu in a container if you need to.
    All of this to say:
    VM support is not as important anymore.

  • @AndrewMorris-wz1vq
    @AndrewMorris-wz1vq 28 days ago

    How do you think XCP-ng compares to Rancher Harvester?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  28 days ago

      I have never used Harvester but it looks pretty basic compared to XCP-ng

    • @AndrewMorris-wz1vq
      @AndrewMorris-wz1vq 28 days ago

      @@LAWRENCESYSTEMS Oh, what features do you see as lacking?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  28 days ago

      @@AndrewMorris-wz1vq Documentation, iSCSI support, NFS support.

    • @AndrewMorris-wz1vq
      @AndrewMorris-wz1vq 28 days ago

      @@LAWRENCESYSTEMS Huh, I'll have to do some digging. It uses Longhorn under the hood (though you can use other CSIs too, like rook-ceph), which supports iSCSI by default and NFS as an additional option.

  • @thadrumr
    @thadrumr 20 days ago

    How did you get the model of the server to show in the host page?

  • @eduardojavier112
    @eduardojavier112 11 days ago

    is it better than QEMU?

  • @Ne0_Vect0r
    @Ne0_Vect0r 29 days ago +2

    Proxmox is so much cooler..

  • @teknologyguy5638
    @teknologyguy5638 7 days ago

    You refer to this as the "best free" virtualization solution but your demos only show the Premium XOA... that's not exactly free so I'm failing to make the connection.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  7 days ago +1

      All the features I show in the video can be done with the built-from-source XO: ruclips.net/video/2wMmSm_ZeZ4/видео.htmlsi=d-RvNTTY_JRe6o5z

  • @michaelshes9562
    @michaelshes9562 15 days ago

    Can't install it on a sub device, big issue.

  • @hescominsoon
    @hescominsoon 29 days ago

    Can XCP-ng be put directly on the net with restricted access to the management? I do this with Hyper-V and I am trying to find another hypervisor to replace it.

    • @pepeshopping
      @pepeshopping 28 days ago

      Smarter people would use a VPN for such things.

    • @hescominsoon
      @hescominsoon 28 days ago

      Running a VPN on the hypervisor machine wouldn't make any difference. The fact that it's Linux-based is the reason I would consider doing it. I've got Windows locked down and haven't had any issues, so I was just curious whether, since it's Linux-based and there are millions of properly secured Linux machines directly on the internet, we could use the built-in firewall (unless they've disabled it) to do the same thing. So let me restate it: is the built-in Linux firewall enabled on XCP-ng? If so, then that answers my question.

  • @RK-ly5qj
    @RK-ly5qj 1 month ago +1

    So are you telling me that I can now install XCP-ng like Proxmox, where the GUI is there out of the box, without what was needed so far? :)

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 month ago +3

      Eventually that is what XO Lite will provide; it won't be as full-featured as Xen Orchestra.

    • @RK-ly5qj
      @RK-ly5qj 1 month ago

      @@LAWRENCESYSTEMS I'm asking from a home user perspective, so it seems to be a pretty good alternative ;)

  • @elksalmon84
    @elksalmon84 29 days ago +2

    Does it come with a rich web UI out of the box, like Proxmox?

    • @jzcalderon
      @jzcalderon 29 days ago +1

      Keep waiting 😂

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago +1

      It's in beta right now, but they're working on it

    • @KimmoJaskari
      @KimmoJaskari 29 days ago +3

      It comes with a web UI now, as Tom demonstrated, but it's still highly limited. But of course that doesn't matter, since with the Xen Orchestra appliance you get extensive control of all your XCP-NG servers from one interface.

  • @dfgdfg_
    @dfgdfg_ 1 month ago +5

    Aggravation Switch 🤣

  • @Tgspartnership
    @Tgspartnership 1 month ago

    You know too much. I hope you don't run into any Bond villains.

  • @ericneo2
    @ericneo2 29 days ago +1

    Does PCI Passthrough finally work for GPUs? Cause that would be a game changer.

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago

      I haven't tried with GPUs, but I've tried with other things and it's been pretty flawless, so I can't imagine it would be a problem. If you've had issues specifically with GPUs but other stuff's worked, I'd be happy to test it though.

    • @ericneo2
      @ericneo2 29 days ago

      @@joshuawaterhousify If you could. I've had success passing through GPUs via KVM and Proxmox to Linux VMs, but it's never worked for me with Windows VMs.
      I really need a Windows VM with CUDA for local AI, RPA, AutoCAD & Premiere.

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago

      @ericneo2 may not be till the weekend, but I'll throw my 2070 Super in and see what I can do. I know nvidia blocked things on consumer GPUs with code 43 for a while, but I think they opened that up a bit ago? I've been meaning to give it a shot for a gaming VM for a little while.
      Testing will be on games, Davinci Resolve, and maybe some AI stuff, with a bit of blender or something to make sure that side works as well.
      Either way, if you're already on Proxmox and want to stick with KVM, check out Craft Computing; he's got tutorials for it for everything from direct pass through to vGPU

    • @KimmoJaskari
      @KimmoJaskari 29 days ago

      The stumbling block has been Nvidia literally blocking that on purpose on all consumer cards, I believe.

    • @joshuawaterhousify
      @joshuawaterhousify 29 days ago

      @KimmoJaskari I'm pretty sure they stopped actively blocking it though; I remember hearing that a while back.

  • @jacobscheit4128
    @jacobscheit4128 27 days ago

    Sorry to tell you that, bro, but those are all features Proxmox has had for years. And Xen is notoriously unstable and hard to work with. And the worst part of this OS is that the new web UI is pretty much a one-to-one copy of the Proxmox UI.
    They didn't rework the UI, they basically stole it.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  27 days ago

      Thanks for making me laugh 😂

    • @jacobscheit4128
      @jacobscheit4128 27 days ago

      @@LAWRENCESYSTEMS Take a closer look at PVE and compare the web UI from XCP to it and you can clearly see that. I tried XCP, ran some stability tests, and compared it to Proxmox. The recovery time in case of a sudden host failure is much better on Proxmox; not only that, it's way harder to crash a Proxmox host compared to XCP, and believe me when I say I would love to find a good Proxmox alternative, but there is none.
      You can fanboy as hard as you want, but when it comes to running business-critical applications I prefer Proxmox over any other solution, because it just works and is not a pain to set up and get going.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  27 days ago +2

      Use whatever makes you happy

  • @lewiskelly14
    @lewiskelly14 19 days ago

    Misleading title

  • @MadalinIgnisca
    @MadalinIgnisca 29 days ago +4

    Linux 4.19? XEN? What? Feels like 2010…

    • @minigpracing3068
      @minigpracing3068 29 days ago +2

      The 4.19.0+ kernel is limiting some storage features; since this kernel is EOL in Dec. 2024, maybe we'll get something newer soon.

  • @noidnobb
    @noidnobb 1 month ago +3

    XCP-ng UI Very ugly.😅

  • @TechySpeaking
    @TechySpeaking 1 month ago +3

    First

  • @AnIdiotAboard_
    @AnIdiotAboard_ 18 days ago

    Well, like every big release I go into this with high hopes and come out fed up and stuck with another £37k bill for a year of VMware.
    It just doesn't work: the whole storage subsystem is a joke, and the performance loss is criminal at best, especially on all-flash storage.
    Support for 25, 50 and 100 Gig cards is laughable, and when you do make the thing work, it just won't work.
    No vGPU support at all. WTF, WHY NOT?
    Passthrough works very well IF you can actually pass through the devices you want.
    I just want my home lab to work without needing to pay for ESXi licenses; 40U of compute is expensive.

  • @serdalo5035
    @serdalo5035 29 days ago

    XCP-NG should support KVM virtualization!

    • @KimmoJaskari
      @KimmoJaskari 29 days ago

      The entire point of it is to support Xen virtualization...

    • @serdalo5035
      @serdalo5035 29 days ago

      @@KimmoJaskari I don't agree; they want to create an enterprise virtualization solution. They chose Xen as the tech to do that.

    • @marcogenovesi8570
      @marcogenovesi8570 28 days ago +1

      @@serdalo5035 Xen is an enterprise virtualization solution created a while before KVM became available.

    • @marcogenovesi8570
      @marcogenovesi8570 28 days ago +2

      This is nonsense. The entire solution is built around the Xen hypervisor. It's like saying XCP-NG should support Hyper-V virtualization.

  • @diablobarcelona
    @diablobarcelona 29 days ago

    Hmmm, can I finally take my mini PC running Proxmox over to XCP-NG 8.3? ... 8.2 wouldn't install on it.