Yay, more TrueNAS videos from Tom! A must watch!
I had a working FreeNAS 11.3-U5 VM running iSCSI, using an LSI-flashed HBA with PCI passthrough. It worked fine.
I upgraded to the TrueNAS Core 12.0 release and added a TrueNAS Core 12 ISO to my local repo.
Now all sorts of (bleeding-edge) problems! Namely, the UI web server keeps going down when my other VMs try to access iSCSI.
Are there XCP guest tools for TrueNAS core? I didn’t see you install any.
123456... Man, now I have to change the combination on my luggage.
I wouldn't quite call it nested virtualization. Usually nested virtualization means a VM within a VM.
What PCI passthrough hardware have you had issues with? Any particular HBA PCI card?
I'd be interested to know this too. Coupling FreeNAS (which is already very picky about hardware) with XCP-ng sounds like it could introduce a problem for various hardware. I've been running FreeNAS 11.x virtualised in XCP-ng with an LSI 9207-8e passed through for a while without problems, but FreeNAS has great support for LSI HBAs, and I made sure to update the firmware on the 9207-8e to version 20.00.07.00, because I read that earlier versions (even including 20.00.06.00) had certain bugs which caused issues in a variety of situations.
I'd like to know as well! According to the FreeNAS documentation, PCI passthrough should work very well with most LSI HBAs.
Would passing in the SATA controller instead of individual SATA drives reduce overhead? Would the speed issue be less relevant to HDDs, where their max is lower than even the worst case in your benchmarks? Or would performance degrade relative to the max theoretical speed of the slower drive? Would setting up RAID in UEFI offload that work to the controller instead of FreeNAS, for some further offload? Thanks.
I was wondering that too. If PCI passthrough is problematic, could we just create logical volumes for each drive and then pass those to TrueNAS?
Has anyone tried to compare performance/stability/etc. for virtualization between TrueNAS and XCP-ng (or another type 1 hypervisor)?
Because as far as I understand, it's better to have TrueNAS running natively, and I'll mainly run simple stuff like Plex, Nextcloud, and some game servers like Minecraft, The Forest, and so on...
But I'm also worried about software/games that require a GPU (tested before on VirtualBox, SVGA worked fine, FYI), and now I'm asking myself whether I should continue with XCP-ng and TrueNAS in a VM, OR TrueNAS natively and anything else through bhyve on TrueNAS (if needed).
It would have been nice to see some iperf results between a VM and the TrueNAS VM directly... without the SR side of things, I mean. Can't help thinking that's adding complications and overhead.
Have you tested jails yet? I found that when I had this setup with FreeNAS, deploying a jail would randomly reboot the VM, with an error saying FreeNAS had recovered from an unscheduled reboot. Thanks
I've set up my XCP-NG/TrueNAS following this guide and it's working great. The only issue I'm having is that when the host reboots, it can't reconnect the NFS share SR (since the TrueNAS VM isn't running yet), but even when the VM has finished booting, the SR doesn't reconnect until I do it manually. Is there a way to get XCP-NG to keep trying to reconnect the SR, or even a timed script if it fails on boot? I've been searching around and I can't find anything.
I have never tried it, but you could probably put together a script in XCP-NG on a cron job.
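For anyone wanting to try that route, here is a minimal sketch of what such a dom0 script could look like, written in Python since that's what ships on the host. It assumes the xe CLI behaves as documented; the SR UUID and TrueNAS IP below are placeholders you would replace with your own values, and you would schedule it from cron (or an @reboot entry) so it runs after the host comes up.

#!/usr/bin/env python
# Minimal sketch for XCP-ng dom0: re-plug the PBDs of an NFS SR once the
# TrueNAS VM's NFS export starts answering. SR_UUID and NFS_HOST are
# placeholders for your own setup.
import socket
import subprocess
import time

SR_UUID = "00000000-0000-0000-0000-000000000000"  # your NFS SR UUID (placeholder)
NFS_HOST = "192.168.1.50"                         # TrueNAS VM IP (placeholder)

def nfs_port_open(host, port=2049, timeout=3):
    # Return True once the NFS TCP port answers.
    try:
        s = socket.create_connection((host, port), timeout=timeout)
        s.close()
        return True
    except socket.error:
        return False

def xe(*args):
    # Run an xe CLI command on dom0 and return its stdout.
    return subprocess.check_output(("xe",) + args).decode().strip()

# Wait up to ~10 minutes for the TrueNAS VM to start serving NFS.
for _ in range(120):
    if nfs_port_open(NFS_HOST):
        break
    time.sleep(5)

# Find this SR's PBDs that are still unplugged and plug them back in.
pbds = xe("pbd-list", "sr-uuid=" + SR_UUID,
          "currently-attached=false", "--minimal")
for pbd in filter(None, pbds.split(",")):
    subprocess.call(["xe", "pbd-plug", "uuid=" + pbd])

No guarantee this is the cleanest way, but re-plugging the SR's PBDs is the same thing the manual "reconnect" in the UI does.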
Thanks for the demo, have a great day.
Why are there no BSD templates on XCP-NG? :(
Is it possible that poor throughput was a result of resource starvation (RAM) in Dom0? Or perhaps context-switch thrashing?
A lot of reasons: 1. it's not real PCI passthrough (extra layers). 2. Your SR is itself on top of a VM, on top of a storage layer, on top of drives. So many layers easily explain the result (especially with the cost of each context switch now). You can't achieve great performance with that setup.
Also, it's not safe until we get "storage domains" (booting the storage domain/VM first and only then mounting the SR). Without this, on reboot, you won't be able to reach the NFS until the VM has booted. It will require manual intervention to "connect" the NFS after the TrueNAS VM is up.
In the end, it's good for experiments, but not for real usage. TrueNAS in a VM serving NFS for other stuff is a valid case, however. But not serving as an SR for the host it's running on.
Yes! When it comes down to production performance, having TrueNAS on real hardware acting as the back end for storage for XCP-NG is the best way to go.
@LAWRENCESYSTEMS What I do is install Dom0 on a SATADOM, present my disk controller via PCI passthrough by hiding it from Dom0, and attach it to my VM. Look into XCP-NG for how to present real hardware such as a controller!
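In case it helps anyone reading along, here is roughly what that procedure looks like on an XCP-ng host, wrapped in a small Python sketch so each step is explicit. The PCI address and VM UUID are placeholders for your own HBA and TrueNAS VM, and the xen-cmdline/xe calls follow the XCP-ng passthrough documentation as I understand it, so double-check against the current docs before running anything.

#!/usr/bin/env python
# Sketch of the XCP-ng PCI passthrough steps: hide the controller from dom0,
# then attach it to the VM. PCI_ADDRESS and VM_UUID are placeholders.
import subprocess

PCI_ADDRESS = "0000:01:00.0"                      # your HBA, as shown by `lspci -D` (placeholder)
VM_UUID = "00000000-0000-0000-0000-000000000000"  # TrueNAS VM UUID (placeholder)

# 1. Hide the controller from dom0 so it is left free for passthrough.
subprocess.check_call([
    "/opt/xensource/libexec/xen-cmdline", "--set-dom0",
    "xen-pciback.hide=(%s)" % PCI_ADDRESS,
])

# 2. Attach the hidden device to the TrueNAS VM.
subprocess.check_call([
    "xe", "vm-param-set",
    "other-config:pci=0/%s" % PCI_ADDRESS,
    "uuid=" + VM_UUID,
])

# 3. The hide only takes effect after a host reboot; start the VM afterwards.
print("Reboot the host, then start the VM; the controller should show up inside it.")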
Too bad we can't use Xen with FreeNAS/TrueNAS as Dom0. What an incredibly powerful combination that would be!
Agreed. Hmmmm... IIRC, TrueNAS Scale is on Debian, XCP-NG is on CentOS. Perhaps XCP-NG on Debian leads to possible convergence?
True-XCP-NAS-NG? A guy can dream...
Is it OK to run TrueNAS and pfSense on the same server as VMs?
I think TrueNAS in a VM is a bad idea; pfSense has some limitations.
@LAWRENCESYSTEMS Thank you for your answer.