Virtualise TrueNAS to Save Space & Power! SMB & NFS Guide with HBA!

  • Published: 17 Oct 2024

Comments • 148

  • @ichnafi8512
    @ichnafi8512 9 months ago +24

    Please let me clarify what "IT-Mode" is:
    "IT" in this case stands for "initiator target". In this mode evey disc is presented individually to the host.
    HBAs usually come in "IR" Mode, which is what ever Raid-Modes your HBA supports.

    • @Jims-Garage
      @Jims-Garage  9 months ago +3

      Thanks for adding this, pinned!

    • @JorgeGarciaM
      @JorgeGarciaM 9 months ago +1

      @@Jims-Garage @ichnafi8512 basically IT-Mode is non-RAID mode?

    • @nadtz
      @nadtz 9 months ago +2

      @@JorgeGarciaM IT mode basically just passes the disks along, as opposed to IR mode where the controller software 'controls' the disks. You could technically use an HBA in RAID mode and set it up as single drives, so it's a bit more than just 'non-RAID', and for most HBAs it also involves flashing the card to IT mode.

    • @joshhardin666
      @joshhardin666 4 months ago

      It looks like modern HBAs don't bother with separate IT-mode firmware anymore; IT mode is the default. From what I've seen, only older SAS2 HBAs ship with RAID mode as the default. I just picked up an LSI 9400-16i a couple of weeks ago - IT mode is the default and there's no non-IT firmware.
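
    A quick way to check which firmware an LSI card is actually running, from the Proxmox host shell (a minimal sketch; it assumes Broadcom's sas2flash utility for SAS2-generation cards, or sas3flash for SAS3-generation ones, is installed):

      lspci -nnk | grep -iA3 'SAS'   # identify the controller and the kernel driver bound to it
      sas2flash -list                # SAS2 cards: the firmware product line should end in (IT) rather than (IR)
      # SAS3 cards use sas3flash -list; newer 9400/9500-series cards ship IT-only and are managed with storcli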

  • @bluesquadron593
    @bluesquadron593 9 months ago +8

    TrueNAS rabbit hole is now wide open :) Snapshots, replication, 3-2-1 backup :)

    • @Jims-Garage
      @Jims-Garage  9 months ago +2

      Bingo! Left this too long, now it's done 👍

  • @BladeWDR
    @BladeWDR 9 months ago +7

    Hey Jim, just a few notes.
    If you disable pre-enroll keys under the UEFI settings for the virtual machine, you don't need to go through the whole rigmarole with disabling secure boot in the bios.
    It's also generally not necessary to change the boot order since the virtual machine disk is blank at the time of first boot.
    You can also just remove the disk image from the DVD drive rather than deleting the entire device from the hardware configuration. This means you don't have to fully shut down the vm.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, I'll give that a try and adopt in future videos

    • @jonnyzeeee
      @jonnyzeeee 9 months ago +1

      Agreed. And then you don't need to change the boot order of the ISO and remove it afterwards. Nice video, Jim!
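
    For anyone scripting this, the same tweaks can be applied from the Proxmox CLI; a sketch with a hypothetical VM ID of 100 (pre-enrolled-keys=0 is what the "pre-enroll keys" checkbox toggles):

      # Create the EFI disk without pre-enrolled Secure Boot keys
      qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0
      # After installation, eject the ISO rather than deleting the CD-ROM device
      qm set 100 --ide2 none,media=cdrom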

  • @georgekaravasilis9197
    @georgekaravasilis9197 7 months ago +2

    Hi Jim, I am commenting after watching just the first 3 minutes of your video to tell you how brilliantly you provide the answers to my concerns before you even start showing stuff....

    • @Jims-Garage
      @Jims-Garage  7 months ago +1

      Glad it was helpful!

  • @michaelprasuhn6590
    @michaelprasuhn6590 9 months ago +15

    Protip: The LSI cards use i and e as the last letter to indicate internal vs. external ports.

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Great, that's good to know, thanks!

  • @AaronMolligan
    @AaronMolligan 7 months ago +1

    I've been using TrueNAS as a VM for nearly 3 years now. It's the only VM on my host that has NEVER given problems; it just simply works, rock-solid software. It originally was a test project, but I started to store more important data on it, then fully committed and turned it into my main storage device. Did the same passthrough with my SSDs. It's an all-SSD build with 2 NVMes for dedup & special vdev data.

    • @Jims-Garage
      @Jims-Garage  7 months ago

      Awesome, that's great to hear

  • @NetBandit70
    @NetBandit70 9 months ago +7

    Such a quandary... Proxmox with a TrueNAS Core VM, or just run TrueNAS Scale, which has quite a bit of virtualization functionality and some nice docker applications made easy. I generally lean toward Proxmox but I would probably go with TrueNAS Scale in this particular situation. I also think TrueNAS Core is going to be sunsetted in the next year or two.

    • @Jims-Garage
      @Jims-Garage  9 months ago +2

      You can migrate your zfs so choose whatever fits best for the time. I'd probably run docker in a separate VM but no reason why you can't use scale.

    • @NFTwizardz
      @NFTwizardz 8 months ago

      Please help me lol. I was just about to try and run a NAS OS VM in Proxmox, then run Debian and CasaOS. I have a 12600K, 32GB RAM and 2x 8TB drives.
      Would like to run:
      Win11 VM
      Home Assistant
      Jellyfin / Plex
      Camera software one day
      But I need a RAID option with mirroring. I have other storage like an M.2 SSD for the boot drive etc.
      Should I not do this? 😢

    • @NetBandit70
      @NetBandit70 8 months ago

      @@NFTwizardz TrueNAS Core is probably your best bet. You can do a ZFS mirror for your storage drives.... You'll still need a boot drive, but any SATA or NVMe drive(s) should be OK for that.

    • @NFTwizardz
      @NFTwizardz 8 months ago

      @NetBandit70 hey thanks for replying - so run Proxmox, then VM or CT a TrueNAS Core? Then a Debian VM and install CasaOS?

    • @NetBandit70
      @NetBandit70 8 months ago

      @@NFTwizardz No. Run TrueNAS Scale on the bare metal. It can do all the containers and virtual machines you need, plus it has an easy to use application container (docker) interface and repository.

  • @aractor
    @aractor 2 months ago +1

    Thank you for this! I was struggling with my truenas VM hanging at the BIOS in proxmox, and couldn't find any solution. This fixed everything & passthrough is working great!

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@aractor that's great to hear, good job 👍

  • @ShadVonHass
    @ShadVonHass 5 months ago +2

    Thanks for the video! I've been running TrueNAS under Proxmox for ~2 years now, but have just made a lot of changes and needed a refresher on reinstalling, specifically the BIOS options and the SSD emulation part... not sure I had that set before, but I'm hoping things will be somewhat snappier on my relatively old Intel 4th-gen server because of it.

    • @Jims-Garage
      @Jims-Garage  5 months ago

      That's awesome, thanks for the comment 😊
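
    For reference, the SSD emulation (and discard) setting mentioned here is a per-disk flag on the VM; a sketch assuming a hypothetical VM ID of 100 and an existing disk on local-lvm:

      # Present the virtual disk to the guest as an SSD and pass TRIM/discard through
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,ssd=1,discard=on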

  • @happy9955
    @happy9955 7 months ago +1

    I feel peaceful when watching his videos

  • @dustinphillips605
    @dustinphillips605 1 month ago +1

    Thanks for the details on configuring the UEFI in the VM. I was getting stuck on that.

    • @Jims-Garage
      @Jims-Garage  1 month ago

      If you untick "pre-enroll keys" you can ignore all of it 😂 recently discovered that.

  • @slammedmkv8825
    @slammedmkv8825 9 months ago +1

    thank you SOOOO much for this video. I followed it and have everything set up and working.

  • @Maisonier
    @Maisonier 7 months ago +2

    Amazing video! liked and subscribed.

  • @shockinsid
    @shockinsid 5 months ago +1

    Excellent guide, very in depth. Thank you so very much!

  • @gaidin
    @gaidin 4 months ago +1

    I'd "Like" this video about 10 times if I could...it came in very handy! Even managed to passthrough a NVMe drive as L2ARC for the pool too :)

    • @Jims-Garage
      @Jims-Garage  4 months ago

      That's great, good job 👏

  • @remcolouter6899
    @remcolouter6899 1 month ago +1

    Amazing videos, Jim. I am learning a lot from them. I was thinking of utilising TrueNAS on Proxmox as well on a Zimacube Pro, because I really like Proxmox for the virtualisation. Then I saw your video on the HBA, so this probably saved me from some disaster. Do you think the HBA solution will work on a Zimacube?

    • @Jims-Garage
      @Jims-Garage  1 month ago +1

      Does it have a PCIe slot? If so it should do. Just be aware that it doesn't support full ECC memory.

    • @remcolouter6899
      @remcolouter6899 1 month ago

      @@Jims-Garage Thanks, yes it does! PCIe x16 even, I think. The challenge will be that all the disks are connected to a PCB behind the bay, and the PCB is connected to the motherboard with some unfamiliar connection. I think removal of the PCB is necessary; however, powering the disks... I am not sure.

  • @muneebabbas7141
    @muneebabbas7141 28 days ago +1

    There's a bit of misinformation at the start of the video about requiring an HBA card, as you can pass through the onboard SATA controller. There are definitely cases where there might be other devices in the same IOMMU group and it's not as clean, but it's definitely doable. Good video overall :)

    • @Jims-Garage
      @Jims-Garage  28 days ago

      @@muneebabbas7141 true, I guess. I can't say I've seen many machines with multiple onboard SATA controllers
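
    To see whether an onboard SATA controller is a clean passthrough candidate, list the IOMMU groups on the host; the controller needs its own group, or to share it only with devices you can also hand to the VM. A generic sketch:

      #!/bin/bash
      # Print every PCI device grouped by IOMMU group
      for dev in /sys/kernel/iommu_groups/*/devices/*; do
          group=$(basename "$(dirname "$(dirname "$dev")")")
          printf 'IOMMU group %s: ' "$group"
          lspci -nns "$(basename "$dev")"
      done | sort -V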

  • @aliwalil4160
    @aliwalil4160 9 months ago +1

    I have an old RAID card in my workstation that hits 90°C. I fastened a small fan to the heatsink with fishing line to get the temps under control. Great video anyway

  • @SSBelmont
    @SSBelmont 7 months ago +3

    Can you create a video using TrueNAS 20.10.2 and Proxmox, and most importantly demo how to mount SMB and NFS shares in Proxmox that are served up by the virtualized TrueNAS?

    • @Jims-Garage
      @Jims-Garage  7 months ago

      I've covered most of these topics already. Check my TrueNAS video for how to create SMB shares, then check my Proxmox Backup Server video for how to mount them in Proxmox.
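
    For completeness, a TrueNAS share can be added as Proxmox storage from the host CLI; a sketch with hypothetical IP, share and export names (flags may vary slightly between PVE versions):

      # SMB/CIFS share as a Proxmox storage entry
      pvesm add cifs truenas-smb --server 192.168.1.50 --share backups --username svc_backup --password 'changeme' --content backup,iso
      # NFS export served by the same TrueNAS VM
      pvesm add nfs truenas-nfs --server 192.168.1.50 --export /mnt/tank/backups --content backup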

  • @chrisumali9841
    @chrisumali9841 9 months ago +1

    Thanks for the video and demo, have a great day

  • @Th3K1ngK00p4
    @Th3K1ngK00p4 9 months ago +1

    Interesting setup. I've always run my file server on bare metal. Was tempted to try something like this when I built a new one this past year, but opted not to. But now I'm debating a separate Proxmox build for VMs 🤔

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      I run mine bare metal as well. It's a good option if you need to consolidate and keep power down.

  • @mirko1989
    @mirko1989 1 day ago

    Secure Boot only stops me if I put the EFI disk on different storage than the OS. Am I doing something wrong, or am I misunderstanding something?

  • @repairman2be250
    @repairman2be250 9 months ago +1

    Thank you for your video. It might just come in handy. I did find an IBM-rebranded LSI SAS3084E-R and, I think, an even better LSI SAS9217-8i in my junk box. I use Proxmox daily for various VMs. I also have a dedicated Linux box that runs my email server. Time to consolidate the email server and get TrueNAS virtualised. I just happen to have an X99 board, 128GB of RAM and an E5-2680 v4.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Nice, that should work well.

  • @gaidin
    @gaidin 5 months ago +1

    Were the Mirrored NVMe drives just used as a Proxmox boot drive, and space for VMs? They weren't used at all as a ZFS Cache in the TrueNAS pool?

    • @Jims-Garage
      @Jims-Garage  5 months ago

      No, the nvme is for VMs only. 2x SSDs for Proxmox and ISOs. TrueNAS doesn't really have a cache drive like unraid etc does. Always go for ram over cache from what I've read.

    • @gaidin
      @gaidin 5 months ago

      @@Jims-Garage Thank you! That would explain why I'm not finding much around the traps that covers passing through NVMes to TrueNas for a ZFS Cache. The short version is that I just add more RAM haha. Thanks again for all of your tireless videos....I really find you so clear in the way you lay out your process and explain your thinking!

  • @fearthesmeag
    @fearthesmeag 7 months ago +1

    Great video, Jim. As I'm currently looking into turning my existing PC (i9 13900K CPU / Z690-A MB & 64GB RAM) into a Proxmox host with TrueNAS on top, do I need an HBA for my drives? My mobo has 6 SATA ports; I have two NVMes on the board, and on SATA: 2 x 4TB 3.5" and 2 x 1TB SSD. I may increase the 3.5" storage down the road.

    • @Jims-Garage
      @Jims-Garage  7 months ago

      Bare metal won't require an HBA (you can add one if you run out of SATA ports though). An HBA is mainly for passthrough to a VM.

    • @fearthesmeag
      @fearthesmeag 7 months ago +1

      @@Jims-Garage thanks Jim, I'm just currently watching your budget NAS build, and you mentioned building TrueNAS out as bare metal (to minimise risk) rather than as a VM. I'm happy with a VM for TrueNAS as I don't mind the risk - I will ensure data is backed up (3-2-1). Currently I have an Intel NUC (Proxmox) with its dedicated drives running VMs for testing & home/work lab stuff, and it's running low on resources.... The new PC will be a Proxmox node added into a datacentre cluster, which will also be used for VMs / LXCs etc. and media stuff, hence the NAS requirement.

    • @Jims-Garage
      @Jims-Garage  7 months ago +1

      @@fearthesmeag OK, for a VM you will require a HBA.

  • @mintypockets8261
    @mintypockets8261 2 months ago +1

    Thanks, I've followed a lot of generic guides on virtualising TrueNAS and couldn't get a stable build (passing through drives manually). I used this method for the SCALE version and it worked well - for some reason the throughput between my storage devices saw a 4x boost (the VM settings/UEFI seemed to make a big difference).

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@mintypockets8261 that's great to hear

  • @settlece
    @settlece 9 months ago +1

    Hey Jim, another fantastic video

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      Thanks, much appreciated

  • @fearthesmeag
    @fearthesmeag 6 months ago +1

    Hey Jim, so I've managed to source an HBA, an LSI 9207-8i (IT mode) - I connected 2x new 4TiB WD drives to the P1 & P2 SATA cables from the card. I also have two SSDs connected to motherboard SATA ports 1 & 2 and two NVMes (2TiB each). Proxmox found the drives without any issues; I ran the IOMMU config settings as per your Double GPU Passthrough video, rebooted, and the drives are no longer visible - which is correct. However, prior to all of this I had a ZFS pool set up which is now in a health state of 'suspended'. I presume this was due to the config above, and for the life of me I'm unable to destroy/remove the ZFS pool and start fresh in TrueNAS. Error:
    "command 'zpool list -vHPL zpool01' failed: not a valid block device". Is there a shell command I can destroy it with, or any other way? Cheers.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      Can't you format the drives?

    • @fearthesmeag
      @fearthesmeag 6 months ago

      @@Jims-Garage thanks Jim. As I was unable to see them in Proxmox - which I think you mentioned in one of your vids, they will not show up under Disks - I just removed the drives, formatted them, and placed them back into the server. I can see both of them now, and the ZFS pool has been removed.
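
    If the old pool's member disks are visible to the Proxmox host (i.e. before the HBA is passed through), a stale pool can usually be cleared from the shell. A destructive sketch with placeholder device names - double-check them with lsblk first:

      zpool export zpool01 2>/dev/null   # release the pool if it is still imported
      zpool labelclear -f /dev/sdX       # wipe the ZFS label on each former member disk
      wipefs -a /dev/sdX                 # remove any remaining filesystem signatures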

  • @dominicpascal5512
    @dominicpascal5512 4 months ago +1

    How about passing through the raw disks?
    I've done it and it works well. Can mount the zpool either in TrueNAS or Proxmox if necessary (and the VM is turned off). Don't really see any downside.

    • @Jims-Garage
      @Jims-Garage  4 months ago

      Passing through a raw disk doesn't give TrueNAS control. Things like SMART wouldn't work.

  • @gnajmacz
    @gnajmacz 6 months ago +1

    I'm using TrueNAS as a VM in Proxmox, using PCI passthrough of the LSI 9211-8i card, and unfortunately I can't get rid of checksum errors detected during a scrub; they always appear. Do you have any idea what I can do?

    • @Jims-Garage
      @Jims-Garage  6 months ago

      It's possibly temperature related, add a fan to the HBA. Worth checking all cables as well.

    • @gnajmacz
      @gnajmacz 6 months ago +1

      @@Jims-Garage I have a server case with good airflow, and the card has an additional fan. I tried using a different cable.

    • @Jims-Garage
      @Jims-Garage  6 months ago

      @@gnajmacz hmm. Perhaps a damaged card, or the data itself was corrupted before copying? I'm no zfs expert though.
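
    When chasing checksum errors like this, ZFS will report which device and which files are affected; a short sketch assuming a pool named tank:

      zpool status -v tank   # per-device read/write/checksum counters and any damaged files
      zpool clear tank       # reset the counters after reseating cables / improving cooling
      zpool scrub tank       # re-run the scrub and watch whether new errors accumulate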

  • @ultravioletiris6241
    @ultravioletiris6241 7 months ago +1

    I have a couple of questions. I've been watching your videos but haven't seen them all yet.
    1. Can you pass through individual drives to the TrueNAS VM rather than an HBA like you demonstrate? I'm starting with a single large SSD to pass through, no RAID till next month.
    2. Do the solutions of setting up a TrueNAS VM or an unprivileged LXC NAS both work for using it as a central store for Docker and Kubernetes volumes? I'd like to have Docker services on multiple smaller PCs but only use one PC for the main storage for Docker and Kubernetes.
    3. Another thing I'd like to do with either a container or VM NAS is to centralise backups and snapshots before doing a single cloud backup (another reason I want my Docker services to access the NAS). Have you managed to make a NAS work well with cloud backups such as Kopia or Backblaze? Has it worked with everything that needs backing up, such as snapshots or backups sent over from other hosts?
    Sorry if my questions are newbish. I've done a lot of Proxmox passthrough and VMs but I've never used a NAS before, and even after multiple videos I'm still trying to figure out what a solution like this one can and cannot do. I also have never messed around with Docker enough to specify remote volumes or anything.
    If you read all this, thank you very much! My favorite homelab channel

    • @Jims-Garage
      @Jims-Garage  7 months ago

      1) No, you need an HBA AFAIK
      2) Yes, you can do that.
      3) Yes, I backup my NAS to GDrive. Check my 3 part backup series.

    • @ultravioletiris6241
      @ultravioletiris6241 7 months ago +1

      @@Jims-Garage Hey there, I just watched your video from a couple of months ago on the budget NAS/server build. You mentioned putting Proxmox on there and virtualizing TrueNAS, but how were you thinking of doing that without an HBA? Or were you thinking of adding an HBA?
      I'm thinking of building a machine with 4 slots for NVMe in ZFS RAID-Z, which is how you have your VM storage set up, right? What size NVMes are you running? I found the part you use - a PCIe x16 expansion card for 4x NVMe drives. Does this work like an HBA for passing through in Proxmox? Also, do you know what "gen" of NVMe your PCIe expansion card is?
      Is the integrated graphics (in your budget NAS video) mostly to be able to plug a monitor into the server/NAS? Personally I have had bad luck trying to pass through integrated AMD graphics in Proxmox, but I'm using a laptop-class CPU.
      Also, is your server the R730 or the R730xd? It sounds like you are also considering upgrading your main server. What direction have you been looking in for that?
      Sorry, one more question. Do you have any guesstimate on how large a VM needs to be to run the Docker containers you've featured in this series? Mostly curious about CPU threads and RAM.
      Thanks Jim, and sorry this got so long and demanding! I've learned so much from your series. Still trying to decide whether to host a lot of these services via Docker or via Kubernetes. I've done some of these in Docker last year, but some things I was too newb to figure out myself, and my current workstation (dual Xeons, HP Z620) is a little much to keep on all the time. I'm a lot more knowledgeable in IT generally now, and also ready to build a 24/7 server/cluster.

    • @Jims-Garage
      @Jims-Garage  7 months ago +1

      @@ultravioletiris6241 if you're doing a virtual TrueNAS you need an HBA, simple as that.
      I use the FireCuda 530 1TB (x4 with an Asus card). They're PCIe 4, but my Dell R730 is only PCIe 3.
      I have the Dell R730, not the R730xd. It's fine, but you might be better off these days with a modern Ryzen or Intel; it depends how many PCIe lanes you need.
      An iGPU is used for hardware acceleration (e.g. Jellyfin), it's not used for a monitor out. You will only connect via SSH/web UI.
      My Dell R730 is running two Kubernetes clusters and hits about 35% CPU usage; it's overkill.

  • @earthmusician
    @earthmusician 7 months ago

    Have you ever had issues where you try to mount the nfs share but you end up getting an error that says, 'can't find in /etc/fstab.' or 'No such file or directory'? When I do showmount -e nasaddress it shows it is indeed available. Do I need to add some sort of special permissions somewhere?
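
    That fstab message usually just means mount was given a mount point with no matching /etc/fstab entry; a sketch with a hypothetical server address and export path:

      showmount -e 192.168.1.50                                  # confirm the export is visible
      sudo mkdir -p /mnt/media
      sudo mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media  # explicit one-off mount
      # or persistently via /etc/fstab:
      # 192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0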

  • @RedLeg13B
    @RedLeg13B 8 months ago +1

    This was great! Thanks so much!

  • @berkano_plays
    @berkano_plays 5 months ago

    Would this setup work with an external USB drive bay? When I pass USB devices through I get a lot of disk errors in my dmesg output on the TrueNAS side...
    I want to get a dedicated USB-C PCIe card and pass that through instead of passing virtualised USB...
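
    Passing a whole USB controller through works the same way as the HBA passthrough shown in the video; a sketch with a hypothetical VM ID and PCI address (pcie=1 needs the q35 machine type):

      lspci -nn | grep -i usb                      # find the add-in card's PCI address
      qm set 101 --hostpci0 0000:04:00.0,pcie=1    # hand the entire controller to the VM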

  • @hanstolboom2527
    @hanstolboom2527 9 months ago +1

    Timely video, thanks very helpful

  • @ckthmpson
    @ckthmpson 9 months ago +1

    Curious... in the OS tab (Guest OS section), should the OS type be changed from Linux to perhaps "Other", as I believe TrueNAS Core is FreeBSD-based?

    • @Jims-Garage
      @Jims-Garage  9 months ago

      A good question, I believe it could, but that it doesn't make much of a difference anyway.

    • @ckthmpson
      @ckthmpson 9 months ago

      @@Jims-Garage likely not. I think it primarily changes some defaults in terms of hardware choices, such as the virtual NICs.
      Thanks for the video. I am in the process of virtualizing a TrueNAS Core instance. I was on a rather tight budget, so I'm using a generic ASMedia PCIe 4x SATA controller passed through to the TrueNAS VM.

    • @edgyjorgensen3286
      @edgyjorgensen3286 7 months ago +1

      TrueNAS SCALE is TrueNAS on Linux (Debian). 🥳

  • @darksidediver17921
    @darksidediver17921 6 months ago

    Hello Jim, I have been learning a lot from your videos as I'm just beginning to homelab. I am having a hard time getting my HBA card to pass over to the VM. I have followed your instructions to the "T", and when I go back to the PVE host my drives still show there and are not passed to the VM like in your video. I triple-checked that IOMMU is enabled in my BIOS, and I've confirmed that my HBA is indeed in IT mode (the BIOS also shows that it's in IT mode), but it is still not handing the drives to the VM. Could you or anyone give me advice on how to get this to work right?
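
    If the disks still show up on the PVE host after passthrough, the host driver (mpt2sas/mpt3sas) is usually still claiming the card at boot. A common workaround is binding the HBA to vfio-pci early - a sketch with placeholder vendor:device IDs, taken from lspci -nn on your own host:

      echo 'options vfio-pci ids=1000:0087' >  /etc/modprobe.d/vfio.conf
      echo 'softdep mpt3sas pre: vfio-pci'  >> /etc/modprobe.d/vfio.conf
      update-initramfs -u -k all && reboot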

  • @EduardoReyesDPM
    @EduardoReyesDPM 9 months ago +1

    amazing work on the video, ty

  • @Ecker00
    @Ecker00 9 months ago +1

    Is there any downside to letting Proxmox handle the ZFS and only presenting dumb virtual disks with ext4 formatting to TrueNAS? Thinking then it's more easily included in the Proxmox backup system, and I don't have to worry about backups inside TrueNAS and multiple layers of file system management.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      That should work. It would be zfs on zfs. I don't know if there's much wasted overhead because of that though, possible double write as well but I'd have to check.

    • @Ecker00
      @Ecker00 9 months ago

      @@Jims-Garage I mean don't use ZFS in TrueNAS, just Ext4 formatting. Double ZFS is usually a bad idea I've read. - I'm in the process of setting this up, just dealing with some networking first. Hopefully my approach is viable.

  • @DragoMorke
    @DragoMorke 2 months ago +1

    You don't need an HBA if you still have SATA connectors free on your motherboard. You can pass through the motherboard's SATA controller to the VM.
    I think it's important to mention this since this video is also helpful/targeted at users doing this for the first time, and why buy an extra controller if you still have free connectors on your mobo?
    I got the Asus Pro WS-W680 ACE mATX mobo and it has 1 SlimSAS connector for 4 drives and another 4 normal SATA connectors.
    That board also has ECC support and IPMI. It's just a bit pricey.
    I was researching how best to set up a NAS homelab system.

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@DragoMorke you are right, just that most consumer mobos won't have more than 1 controller.

    • @DragoMorke
      @DragoMorke 2 months ago +1

      @Jims-Garage sure, but they might still want to know how to pass it through, no matter if it's one or multiple. At least for me, your video was helpful; I just wasn't sure if it would apply to built-in controllers. I have some general experience (software developer, and running a NAS on Mint Linux the hard way), but Proxmox and TrueNAS are still new to me.
      I'm excited to get a proper NAS running with the Asus WS W680 ACE SE board and ECC RAM. I used more regular hardware before.

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@DragoMorke yeah it's a good motherboard, workstations are the sweet spot IMO. I considered that exact model for a time. Process for onboard controller should be the same, select it from the drop-down.

  • @NeonCoding
    @NeonCoding 9 months ago

    Hey, was wondering if you'd have any advice. I'm looking to run a 24/7, effectively idling live stream on the minimum possible hardware - if possible I want it to be able to queue up a playlist of videos, using something like VLC, and stream them to Twitch & RUclips (and maybe Facebook). I have a budget but not a lot of it, and have the options of a local Raspberry Pi 4 2GB, a Raspberry Pi 4 4GB, or a couple of command-line VPSs running Ubuntu Server. I could potentially install a desktop environment, but if I could run it command-line that'd be preferable. I could get a new VPS for the project, but again it's supposed to be for cost reduction, so if possible I'd like to use one of those rigs. Any advice?

  • @shabadabadoo4326
    @shabadabadoo4326 6 months ago

    This really confuses me, needing an HBA, because I was able to pass drives individually to my TrueNAS SCALE VM and it seemed to work fine (2x 8TB HDD in a mirrored ZFS pool). Though I didn't get into any of the fancy stuff like replication and snapshots. The serials were read without issue, and I was able to set up some containers in TrueNAS SCALE and put datasets in the pool. Is there something in particular I should be looking out for?

    • @Jims-Garage
      @Jims-Garage  6 months ago

      You're not passing the devices through, you're creating a virtual drive and giving it to the VM. I imagine you don't see SMART data in TrueNAS? For that you need an HBA, otherwise Proxmox has control over the disks.
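
    "Passing drives individually" in Proxmox means attaching the raw block device to the VM as a virtual SCSI disk, which is why the guest sees a QEMU drive rather than the physical one; a sketch with a hypothetical VM ID and disk serial:

      qm set 102 --scsi1 /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_EXAMPLE
      # Inside the guest the drive appears as a QEMU device, so smartctl generally
      # cannot read the real SMART data - hence the recommendation to pass an HBA instead.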

  • @HnKDKS
    @HnKDKS 5 months ago +1

    I would love to see pfSense and TrueNAS SCALE running under the same Proxmox... I know the hardware passthrough would be a lot, but it seems like that would work for me on one machine. Looking at running pfSense, TrueNAS and one Ubuntu Server machine to run Docker - one machine could help me downsize from 3 bare-metal machines to just one running a Ryzen 7900 CPU to rule them all. What are your thoughts? I only have 4 x 16TB HDDs for a NAS that could be used over SATA, but I would need dedicated NICs (one SFP+ and one 10GbE NIC for the pfSense).

    • @Jims-Garage
      @Jims-Garage  5 months ago

      That should be fine, I have videos on OPNSense and Sophos XG firewall virtual, those should be a good pointer. The machine you mention is more than enough to run it. You'll probably want a HBA for the drives and a couple of NICs.

  • @DSVWARE
    @DSVWARE 9 months ago +1

    Have not tested this myself, but you might not need to disable Secure Boot in the UEFI if you untick "pre-enroll keys" in the System tab when creating the VM.
    I am in the process of deploying a simple SMB server and am following apalard's approach via an LXC container

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, I'll give that a try.

  • @joshhardin666
    @joshhardin666 4 months ago +1

    I wouldn't personally recommend doing a virtualized storage server due to the high memory demands of ZFS. My storage server is running bare-metal TrueNAS SCALE with 8x 14TB drives in RAIDZ2; it's got 128GB of RAM, and the ZFS ARC takes up about 90% of the memory, which allows drastic improvements in caching and overall read speed. I'm in the middle of building out a much larger pool (16x 18TB drives as two 8-disk RAIDZ2 vdevs) and I'm sure it'll take even more advantage of having that additional memory for caching.

    • @Jims-Garage
      @Jims-Garage  4 months ago +1

      Interesting, I have 8x 8TB and 6x 16TB drives with 32GB RAM. Runs fine for what I need in a homelab, but it would probably benefit from more RAM in a multi-user setup
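
    For anyone weighing up RAM for a virtualised pool, the ZFS ARC is easy to inspect and cap on a ZFS-on-Linux / TrueNAS SCALE system; a sketch (the 16 GiB cap is an arbitrary example value, in bytes):

      arc_summary | head -n 25                                    # current ARC size, target and hit rate
      echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # cap the ARC at 16 GiB until reboot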

  • @bobkoss280
    @bobkoss280 9 months ago +1

    I got lost at the end. You created a separate dataset for nfs. Does that mean I have to duplicate the data?

    • @Jims-Garage
      @Jims-Garage  9 months ago

      You shouldn't share the same dataset by NFS and SMB, that will lead to problems. Instead, it's best to stick to one protocol (they both work in Windows and Linux). I use SMB for this reason.

  • @xgengamrgrl1591
    @xgengamrgrl1591 2 months ago +1

    FYI for anyone with a Lenovo P920, and possibly 720 and 520, the SATA controller for the eSATA port is separate from the backplane, along with the port next to it. You can safely pass that onboard controller to the VM. Attempting to pass the other controller crashes the host, obviously, so don’t do that.

    • @Jims-Garage
      @Jims-Garage  2 months ago

      @@xgengamrgrl1591 good to know, thanks for sharing

  • @danixdlolz
    @danixdlolz 9 months ago +2

    Any disadvantage to passing through the disk instead of the controller? Besides losing smart data (I think)

    • @zippi777
      @zippi777 9 months ago

      Yes, you lose smart data.....I don't think anything else.....

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Passing through a disk isn't doing what you think it is, certainly for Proxmox. All it's doing is mapping the folder structure. For proper ZFS management it needs to be the entire device (AFAIK).

  • @ierosgr
    @ierosgr 9 months ago +1

    Hi, a quick note: at 19:27 you left the ROM-Bar option enabled. There's no reason for that - it isn't a GPU that needs to load a ROM file, for example. No point leaving it checked.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      Thanks, I did miss that explanation on reflection. ROM BAR isn't just for GPUs, it's for any PCIe device and allows it to map a portion of its memory to the host. This can be beneficial for devices, and I like to assign it with an HBA for some overhead.

    • @ierosgr
      @ierosgr 9 months ago

      @@Jims-Garage Nice, but my experience with ROM-Bar (especially with GPUs) was that after checking it, it kept asking for a ROM file to load. Why did you mention overhead at the end?

  • @mikescott4008
    @mikescott4008 12 days ago

    Back looking at this and considering replacing my QNAP TS-873, but my instinct goes against this even though it saves power and I'd sell the QNAP off... Originally I had a Dell T340 with an HBA330 installed as a TrueNAS server, then changed it to Proxmox and got a separate QNAP. I think I've seen the option to also set Proxmox up as an SMB server too... Wish there was a decent iXsystems reseller in the UK, as I'd probably have gone a different route to the QNAP originally.

  • @the_mad_swimbaiter455
    @the_mad_swimbaiter455 3 months ago +2

    I was under the impression you could just run an instance of TrueNAS in Proxmox and give it passthrough to the physical drives? Yours seems to be the first video I've seen that says an HBA is needed... I'm a noob and now I'm confused....

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      @@the_mad_swimbaiter455 for zfs features to work it expects the disk passed through via HBA. If you do it within Proxmox you aren't passing the disk through, things like SMART won't work.

    • @the_mad_swimbaiter455
      @the_mad_swimbaiter455 3 months ago +1

      @@Jims-Garage so if Proxmox can do the ZFS and clusters across systems, is TrueNAS redundant? I was going to base my ZimaBlade server on Proxmox and run TrueNAS, Plex, and Vaultwarden with an additional Windows VM as my daily driver. I'm thinking ahead to clusters for redundancy. Thanks for your video, I'm just a hobbyist and I literally had never heard of an HBA lol. 🤦🏿‍♂️

    • @Jims-Garage
      @Jims-Garage  3 months ago +1

      @@the_mad_swimbaiter455 no, this is why I have TrueNAS on a dedicated machine.

    • @the_mad_swimbaiter455
      @the_mad_swimbaiter455 3 months ago

      @@Jims-Garage cool, good to know lol. I'm overcomplicating it. I tore apart my desktop and made a 2x 2TB TrueNAS server. I'm hooked on this stuff now and just trying to figure out how to tie it in. This all started with a Raspberry Pi 5 4TB SSD NAS running OMV. Lol. Thanks for the engagement, I'll stop bothering you now, but great content! I'm just hopping around your videos getting ideas lol.

    • @the_mad_swimbaiter455
      @the_mad_swimbaiter455 3 months ago +1

      @@Jims-Garage I have a ZimaBlade I've been playing with and it only has 1 PCIe slot, which is used for the M.2 NVMe storage/OS. I got TrueNAS SCALE running in a VM and SCSI'd my storage drives attached to the blade into TrueNAS. I'm just thankful it works, but I fumbled through it

  • @vidmonkey
    @vidmonkey 7 months ago +1

    Is the TrueNAS VM stored on the Proxmox boot drive?

    • @Jims-Garage
      @Jims-Garage  7 months ago

      It can be, but I prefer to place my VMs on nvme

  • @vidmonkey
    @vidmonkey 9 months ago +1

    Should the steps be the same if setting up Scale vs Core?

  • @brunekxxx91
    @brunekxxx91 9 months ago

    I use an LXC container with just CasaOS (a Docker management container) that has Samba shares. What do you think?

    • @keywal
      @keywal 9 months ago +1

      Nothing wrong with that! LXC has direct access to the kernel so there's literally no overhead. TrueNAS though has some features for preserving your data - snapshots and cloud backups built in - I use TrueNAS and a combo of LXC containers and CasaOS 😊

    • @richardbillington3185
      @richardbillington3185 9 months ago

      Great video! I spent many hours in that virtualise-TrueNAS "rabbit hole" and came out on the LXC side of the fence too (there is no one-size-fits-all answer here, I believe), with Proxmox handling the ZFS and, importantly, the memory management. For me it boiled down to the fact that I am comfortable with Linux and the command line; plus, TBH, the features in TrueNAS were well beyond what I required. Just a handful of SMB shares didn't warrant 50% of my 16GB of RAM - I run my LXC with just 1GB for Samba and Filebrowser (a web GUI is really useful on slow remote connections for uploading files). The rest of the RAM allows me to run many services as LXC, Docker or VMs in Proxmox that I miss from my old Synology (Nextcloud, Portainer, PhotoPrism, GitLab, a handful of WordPress and NGINX websites, and Traefik, to name a few); I was really surprised how much I actually ran on the Synology with so little resources. Reckon there might be enough resource left to run through testing Kubernetes following your tutorials - high on my list for 2024 :-)

  • @User-ec2bh
    @User-ec2bh 9 months ago +1

    In case anyone is interested: these HBAs consume a lot of power. Mine used 10W at idle with no HDDs connected and got very hot. Besides getting hot, it has no temperature sensor, so there was no way for me to know if the zip-tied fan on it was still working.
    Because of all that I ended up with a second ZFS pool inside Proxmox and created an SMB share via a Copilot LXC.

    • @Jims-Garage
      @Jims-Garage  9 months ago

      I would attach the fan to a mobo header, then you can monitor if it has failed.

  • @stephanc7192
    @stephanc7192 9 months ago +1

    Great video

  • @nyccontrabass3489
    @nyccontrabass3489 7 months ago +1

    Is there a Dell-to-LSI model number list?

    • @Jims-Garage
      @Jims-Garage  7 months ago

      You should be able to find the chip used on their website.

  • @_mult
    @_mult 9 months ago +1

    35:15 How, then, do you share the music catalog via NFS and SMB?

    • @Jims-Garage
      @Jims-Garage  9 months ago +1

      I show you how to mount it in Windows and Linux. Once it's mounted it's the same as accessing a normal folder.

    • @_mult
      @_mult 9 months ago

      @@Jims-Garage How do you mount one folder in Linux via NFS, and in Windows via SMB?
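
    Following the advice above to stick to one protocol, the same SMB share can be used from both platforms - Windows maps it via File Explorer or "net use", and Linux mounts it with cifs-utils. A sketch with hypothetical host and share names:

      sudo apt install cifs-utils
      sudo mkdir -p /mnt/music
      sudo mount -t cifs //truenas/music /mnt/music -o username=jim,uid=1000,gid=1000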

  • @gekl
    @gekl 9 months ago +2

    For those who want to flash the firmware to IT mode: ruclips.net/video/v5v8TCcvA8s/видео.htmlsi=zQF6dwYKDLmBFF71

  • @pepeshopping
    @pepeshopping 9 months ago +1

    Ah the experts teaching you wrong.
    WHY install VMs with Safe Boot?
    Maybe W11 if you don’t know how to bypass that check during install.
    Install in Legacy/BIOS mode, or do not select Safe Boot (Enroll Keys) to avoid all those unnecessary boot changes…

    • @Jims-Garage
      @Jims-Garage  9 months ago +2

      I've never stated I'm an expert. I will adopt this for future videos.
      I assume you mean secure boot, safe boot is entirely different. There are reasons for using secure boot but in a homelab probably not.

  • @zyghom
    @zyghom 9 months ago

    TrueNAS is not designed to be modified/customised by the end user - whatever you want to tune will NOT survive the next upgrade.
    I would really reconsider this decision.
    It's a different matter when you have a full bare-metal machine spare - then, probably, TrueNAS is the system of choice.
    But not on PVE, where 95% of the OS-level things are already done.

    • @pepeshopping
      @pepeshopping 9 months ago +1

      Not absolutely true. You can put your commands in the TrueNAS init/boot scripts, and no upgrade would delete your scripts, utilities or custom binaries, correct!?