EPYC TrueNAS Scale Build and VM Install
- Published: Jan 31, 2025
- Between the power draw and cooling requirements of running my own server rack, it's time for some much needed consolidation, starting with virtualizing my TrueNAS and Proxmox servers into one massively overkill box.
Oh, and I'll be installing TrueNAS Scale, based on Debian Linux!
But first... What am I drinking???
A two-fer of disappointments, unfortunately. Between the Plug and Play IPA from Matchless and the KCBC Kung Fu Karaoke, there just aren't a lot of good things I can say.
Links to items below may be affiliate links for which I may be compensated
Check out parts from today's build:
Supermicro MBD-H11SSL-I-O Socket SP3: amzn.to/3f8H79P
AMD Epyc 7601 32-Core CPU: ebay.to/3ngX3xc
I've got merch, and you can get it too!
craftcomputing...
Follow me on Twitter @CraftComputing
Support me on Patreon or Floatplane and get access to my exclusive Discord server. Chat with myself and the other hosts on Talking Heads all week long.
/ craftcomputing
www.floatplane...
Music:
Acid Trumpet by Kevin MacLeod
Link: incompetech.fi...
License: filmmusic.io/s...
I love tech youtubers that actually leave their mistakes in their videos. Mistakes are not a loss, they are just data conducive to performing better in the future.
Every time I watch one of your videos, I want more stuff... my wife and I are already talking about setting up a rack in the garage and have priced out some AC options... thanks a lot, you glorious, glorious bastard, you
I appreciate your humor and personality! I've been watching you for a while. Always enjoy it!
2 years later I finally found this video, right when I plan to do a very similar thing on a Xeon E5-2687W v4. I love all the information I get from your videos and watch all that I can, but this one was serendipitous to say the least! Keep up the awesome work!
Gotta say, you probably didn't intend it, but you have one of the only (and best) videos I could find for passing an HBA and its associated disks directly to a TrueNAS VM. You are the best, Jeff! I have SMART data!!!
Nice build!
If that is possible, it'd be really interesting to see a direct comparison of the wattage needed in the typical idle situation between all the servers you had before, and the one virtual host you have afterwards =)
One thing I learned when scaling our data center up: anything using DDR3 these days, while still great hardware, is far less efficient than modern server hardware, and the same work can be done with fewer chassis all around. It's wild that you can scale 7 servers down to 1 and the efficiency will be awesome!
And modern hardware also tends to have a way lower idle power draw. And it is somewhat likely that your home lab will spend a lot of its lifetime at low or even no load.
Agreed, my Ryzen 5800X runs circles around my Lenovo D30 (2x E5-1680v2). Still, for a cheap entry, these offer great performance.
I have an old client whose rack we condensed from 38U down to 10U last year, after we did a rack audit and realized how old some of their stuff was. Keep in mind that 8U of that replaced all the servers: 2x 2U machines, another 2x 2U for failover/load-bearing, plus 2x 1U offline backup machines we were able to throw in. It was a fun project for me, as I got to do all the testing before we rolled out, and a happy client is always a good thing. That said, my home machines are still old v2/v3 Xeon machines because my needs are minor: one test machine and a FreeNAS box.
If only there were some remote KVM based on a pi....and a video about how to do it....with beer...(drinkable beer at that)
I think that Supermicro board has a BMC that includes KVM with a dedicated network port, so there's no need for a separate KVM on a Pi. I don't know why Craft Computing doesn't use it, because that's the point of using a server motherboard.
Well, I guess there is PiKVM (but that takes a lot of resources, so a Raspberry Pi might struggle to run it, even if it's named PiKVM, so yeah)
What would I do in life without a new Craft Computing video? Got to feed my server need without spending more money ;)
I do gotta say, I'm glad the home lab era is here. I run quite a few servers in my own home lab, including 2x DL360 Gen8 (one with 352 GB DDR3, the other unused until I set up the HVM), a Synology RS2421+ NAS, a Dell PowerEdge R330, and an old HP DL380 Gen5, running off a Tripp Lite SMART1500LCD 1500VA UPS and an Alcatel 3750G 52-port switch. If you have time, I'd like some additional tips for setup. I've been playing around with enterprise servers for a while and found more efficient methods of hosting cloud services and VMs that I've been utilizing. Gotta love corporates who won't listen, lol.
This is a great evolution. I have a very similar setup to your old TrueNAS server, and I want to virtualize it, but I came to the same conclusion that Ivy Bridge wasn't enough horsepower.
The thing I'm most excited for is the Linux drivers for that Fusion ioDrive. I bought a 1.2 TB one a couple of years ago to use as a cache, only to find out it was basically impossible at that time. This will probably give me a reason to upgrade from the FreeNAS OS I'm still running on my server.
I knew this video was coming! Thank you Jeff!!
I used one of those ioDrives in my Lenovo ST550 as a storage volume; they worked well, and ESXi loves them. No issues at all.
I second all the others that say to create a bridge. I did that for a time to dedicate ports from my 4 port Gb card to TrueNAS, a Windows VM and Plex and then I decided to do a LAGG group of all 4 ports. I don't have 10Gb or a real need for it yet so I am sticking with what I have. I bought a second 4 port card for future expansion and will do that as a LAGG as well, once I get a 16 port switch that supports it.
more specifically, definitely prefer openvswitch bridges, the performance is much better than the default linux bridging.
Did you manage to make file sharing to a single client use these 4 ports, i.e. did you achieve a transfer of e.g. 400 MB/s to the client?
@@peny1981 I haven't done a full test, but the LAGG config doesn't really combine the ports to use as one. The way it works allows for multiple lanes: if one is busy with a large data transfer, the next flow moves to another. It's like the difference between one lane on an interstate or highway and multiple lanes.
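For anyone wondering what that looks like under the hood, Proxmox writes the LACP bond into /etc/network/interfaces roughly like this (a sketch, not gospel: the NIC names are examples, and the switch ports have to be configured for LACP too):
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
Since 802.3ad hashes each flow onto a single link, one client transfer still tops out at one port's speed, exactly as described above: more lanes, not a wider lane.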
Thanks for leaving in the learning process - It helps a ton when I'm doing research.
FreeNAS, er, TrueNAS is so set-it-(up)-and-forget-it that I often don't keep up with its new offerings. TrueNAS Scale sounds great, thanks for the info. I'm going to try this out for my new storage / Plex server.
Hi Jeff,
just wanted to add my findings on virtualized TrueNAS Scale with ZFS encryption:
As the CPU type you have to set "host", as otherwise no AES / AVX extensions work inside the TrueNAS VM.
With the default CPU type "kvm64" that Proxmox uses, AVX is not supported,
so you get a bad transfer rate and high CPU usage without this setting.
One can check whether the CPU instructions are available with:
cpuid | grep -Ei '(avx|aes)' | sort -u | grep true
came here to install truenas inside a virtual machine. stayed for a random dude running to his garage multiple times. cheers
I appreciate that you left in the part where it didn't work to pass through the NICs
Loving the server series!
Literally just did this and imported my pool that was originally on arch, then proxmox and then truenas scale on bare metal. I had some trouble with VM features I needed that were missing in scale but really wanted the appliance experience to manage, monitor and share my pools (via NFS, iscsi and democratic CSI). Proxmox to the rescue.
This is gonna be Epyc!
I appreciate the beer grading at the end. The computer talk was good too, but the beer made the video.
Got strong Oceans 11 vibes on that build montage.. 👌
From one Jeff to another... That was a good intro! 😂
This is one of the reasons I bought a new Epyc-based rackmount server: 24 SATA bays, 8 of which can also accept U.2 NVMe, all in a single box, and I can expand RAM to 1 TB as needed. Sure, it wasn't cheap hardware-wise, but I don't have to worry about lack of VMware processor support and can run a bunch of workloads.
Nice! Have been looking forward to this.
ballooning RAM off. Useful, thank you 👍
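In case anyone wants the CLI version of that tip, this should be the equivalent (VM ID 100 is an example):
qm set 100 --balloon 0    # disable memory ballooning so the guest's RAM (and the ZFS ARC inside it) stays fixed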
So was switching from TN-Core to TN-Scale more than anything else driven by wanting to accommodate those nice VSL3/4 expansion cards? Maybe I missed the justification from the video.
Kindest regards, friends and neighbours.
Thank you for this tutorial! I've been virtualizing TrueNAS Core in Proxmox, but I want to reinstall it bare metal due to Proxmox cluster issues, and I want TrueNAS on 24/7. Gonna try Scale for the Debian driver support!
These are my favorite kind of videos.
You can open a *second* beer? Dude, I'd be at least a 12-pack in by that point. :-)
Your videos are always good. I didn't even have a NAS, but I'm building one from scrap hardware I have.
That's where we all start :-)
I love the Level1Techs vibes ❤️❤️
Hey Jeff, in terms of TrueNAS Scale and reverting back to TrueNAS Core: the new beta of Scale uses a newer version of ZFS, with a pool upgrade that can't be undone. I'm sure someone mentioned this already, but just in case, here you go.
That's true as far as I know too, there's no undo button for the file system.
You missed the fact that the network devices share the same IOMMU group, which is the blocker for splitting them up. The solution would probably be to create a separate bridge and map the VM NIC to that.
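For reference, a minimal sketch of what that extra bridge could look like in /etc/network/interfaces on the Proxmox host (the NIC name enp5s0f1 is just an example):
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp5s0f1
    bridge-stp off
    bridge-fd 0
Then point the VM's virtio NIC at vmbr1. The physical port stays owned by the host, so IOMMU grouping never comes into play.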
In Virtualbox I was able to get networking working by using a bridged adapter instead of NAT in the networking settings. After that I was able to reach the web server at the IP address shown in the vm after startup.
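For anyone scripting that, the VBoxManage equivalent should be something like this (the VM name and host adapter are examples):
VBoxManage modifyvm "TrueNAS" --nic1 bridged --bridgeadapter1 eth0    # bridge the VM's first NIC to the host's eth0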
Ahh, yes, I was waiting for TrueNAS Scale to show up here. It certainly looks interesting based on what I saw on their site. That being said, I kinda expected TrueNAS Scale to be the bare-metal hypervisor in this situation... Certainly makes sense why you wouldn't do it _yet_ 😉
Oh don't worry, I will be testing out its VM chops before too long ;-)
I would love one of those Craft Computing glasses!
craftcomputing.store
@@CraftComputing but do you ship internationally?
@@StuMcDonaldStuey Of course! We ship to over 70 countries, and rates are VERY affordable.
Thanks for the videos. I am a fan of them. You're great. Watching them with a beer on my table :)
I am working on storage, and I was never able to get near-bare-metal disk access speeds with Proxmox: SSD speed 520 MB/s bare metal, but 200 MB/s under Proxmox. With a hypervisor like ESXi or Hyper-V it's much, much better. If a project is all about disk access, is it a good move to do it on Proxmox?
You can revert back to CORE, but ONLY if you DON'T perform the ZFS upgrade on the pools you imported from CORE.
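A quick way to check where you stand (pool name 'tank' is an example):
zpool upgrade        # with no arguments, lists pools that haven't enabled all supported features yet
zpool status tank    # also warns when newer features are available but not enabled
Running 'zpool upgrade tank' is the one-way step, so hold off until you're sure you're staying on SCALE.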
I need that beer! Purely for the nostalgic can.
When you are actually installing TN, at ~ 6:49 in, you jump over the setup of the OS, System, and Hard Disk. Can you talk about those parts? You can't install/create a VM without storage, and I'd like to have it work efficiently. Thanks.
Where is the video on splitting ports from a NIC? Having trouble finding it.
Awesome channel! Thanks for the tutorials. I got a question... Is it possible to do map a iSCSI drive from a TrueNAS server to the Proxmox hypervisor?
Jeff, Do you know: Will SCALE be free like CORE after the beta or will I have to kill the VM and make a new CORE-Deb VM after SCALE comes out of beta?
YES! Scale is free, AFAIK. Enterprise will continue to be the paid variant, and there will be a support plan available for Scale.
As usual, love the channel. If you're into it, I would love to see some Emby content.
Your videos are great, but sometimes leave out key info, like the details of the VM: which BIOS did you choose in your VM? etc.
-"I will be passing on more devices than just the HBA"
- Please do GPU Passthrough on TrueNAS Scale VM!!!
Also, should I go bare metal or the Proxmox way for a TrueNAS Scale installation?
I've been running Ubuntu with ZFS on my R720 for almost a year now. It has 8x 1TB SFF Enterprise SATA drives in RAIDZ2, connected to a HBA that is in passthrough to my vNAS VM. The rest of the box runs ESXi7, with dual E5-2680v2 and 256GB of DDR3 LRDIMM RAM. I'm kinda proud of my little creation :D
That's a dynamite setup!
Did I miss the instructions? I don't see them anywhere. Thanks!
You should tackle IPv6 for homelabs in a future video (or video series).
IPv6 isn't wise to use unless you have a good understanding of network security. Unlike IPv4, IPv6 isn't masked behind NAT (it doesn't use NAT at all), and each computer/device will receive both a public and a local IPv6 address. There are also no speed gains with IPv6; all it gives you is a public IP for each device, since the world isn't going to run out of those for the next million years or so (quite literally).
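That said, the usual fix is a stateful default-deny on the router rather than avoiding IPv6 entirely; a rough sketch with ip6tables (run on the router/firewall, not on each host):
ip6tables -P FORWARD DROP                                                     # drop unsolicited inbound traffic by default
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT     # allow replies to outbound connections
ip6tables -A FORWARD -p icmpv6 -j ACCEPT                                      # ICMPv6 is required for path MTU discovery and NDP
That gives you the same "only outbound-initiated traffic gets through" behavior people attribute to NAT, without the NAT.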
How do you implement the APC UPS auto shutdown under Proxmox?
Seems like this is trying to compete with Unraid, given its features seem very similar. Native ZFS support is a big feature over Unraid IMO. Very interested to follow its development.
Hey Jeff, any updates on running Scale on bare metal and hosting your VMs from there? I only host a couple of VMs in addition to Core, but more granular control and better PCI passthrough would be excellent.
Not quite feeling the love with a virtualized NAS setup. I get the need to consolidate HW if you’re low on real estate but isn’t there a significant i/o performance hit vs a dedicated box TrueNAS implementation?
Not to mention I was away from home for a month, and my Proxmox node was (and still is) unable to access remotely networked shares for some reason. I'm not sure I trust the Proxmox dependency for local storage access.
So when are we going to get the epic solar farm build?
Depends on the budget for my garage rebuild....
You may have said but I missed it, but what NAS chassis is that?
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup.
In a Proxmox + TrueNAS setup, what is best approach for ZFS Storage Pool. Is it best to setup the zpool in Proxmox for use by the NAS software or is it better to setup the zpool from within the NAS software?
Jeff, now that it has been some time I wanted to ask your thoughts on its use within Proxmox. Does it meet your goals and efforts? I ask because I am thinking about moving to TrueNAS scale from TNCore but want to apply this across a Proxmox cluster for HA. Thoughts on that approach? Or from a NAS perspective should I look to abstract the NAS from the cluster / HA and make it a data resource at the datacenter level of the cluster?
I've been looking at the same thing and thinking more like ceph is a better answer. HA storage to go with the HA compute. A bit easier to add new capacity also, unless you can expand ZFS vdevs? I know it's something that was being explored.
8:46 there are no instructions linked below... might want to link the docs or something.
Now I know where those 2650 v2's are coming from 😏
I think it's on my end, but for a second or two the audio and video looked out of sync, around 8:22 to 8:27 (where I stopped the video). Hopefully it's on my end only :)
❤️These videos!
6:59 damn... what settings were SKIPPED between GENERAL and CPU?
I've run into the same network passthrough problem on my HP ML350 Gen9 server. It has 4 GbE ports; I intended to dedicate one to a specific VM, but no luck. As far as I know it's some kind of IRQ management problem.
Wondering when you made this since TrueNAS Scale 21.08 BETA has been out for a few weeks now. I tried to sidegrade from TrueNAS Core on an older N54L microserver, but the DB migration kept failing. Waiting a bit longer, but really want the Linux version!
I had downloaded the 06 beta a couple weeks ago in prep for this video. I didn't see the 08 update until after I filmed.
But I was able to update to 08 direct from the GUI in about 5 minutes.
Early engagement, all hail the mighty algorithm
So with this setup you would use Proxmox for setting up VMs and not the virtualization inside TrueNAS Scale?
Were those 2 SATA drives on a PCIe card? I've never seen that before.
Me with a Sandy Bridge i5 NAS and half a laptop running on a shelf over a gigabit network: yeah, I like servers too!
Do you not use IPMI for remote server access? You could have rebooted from your desk...
Maybe I'm a simpleton, but what PCIe card is being populated at 4:45 in the video?
And is it possible to use it within a standard Windows 10 environment?
Super groovy background music.
10:36 this is actually exactly why I have a single fallback Ethernet card lol.
God damn it Jeff, I just finished a TrueNAS Core build, and suddenly I'm doing TrueNAS Scale. Well, it's the weekend, and there's no better time to play with new software and toys.
Question: since it's built on Debian, I'm assuming better community support is forthcoming. Assuming they don't lock down Scale?
So, where are you posting your WTS ads? :)
Was wondering when the 32 core Epyc was going to replace a bunch of the old lower core count servers in that rack. Only issue now is redundancy with one box running most of it.
There's a 64-core box right below it in the rack. I'll likely have the more critical VMs as cold standbys on that box.
SR-IOV for splitting devices, provided they support SR-IOV
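If the NIC supports it, carving out VFs is usually just a sysfs write (the interface name is an example; SR-IOV also has to be enabled in the BIOS):
echo 4 > /sys/class/net/enp5s0f0/device/sriov_numvfs    # create 4 virtual functions on this port
lspci | grep -i 'virtual function'                      # each VF shows up as its own PCI device, passable to a VM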
Hello, unfortunately my English is not good enough, so I may have missed this part in your video. Do you think this solution, Proxmox + TrueNAS Scale (virtualized), can be a good idea?
My original idea was to use TrueNAS Scale to do it all (NAS & VMs), but the actual "limitation" of VMs in TrueNAS is USB: if I want to use a USB stick or another USB device inside a VM, I have to pass through the entire controller, so I can't have 2 VMs that use different USB devices at the same time. I have never tried Proxmox, but I think (I hope) it is possible to pass only the device to a VM instead of the whole controller.
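If it helps: yes, Proxmox can attach a single USB device to a VM by its vendor:product ID instead of the whole controller. A sketch (the device ID and VM number are examples):
lsusb                              # find the device's vendor:product ID
qm set 100 -usb0 host=0781:5583    # hand just that one device to VM 100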
I try not to look at the power draw for my server. The dozen HDDs and two SSDs probably don't take up much, but I hate to think what the idle power draw of the 2x X5675 CPUs is. It did have 2x E5645 CPUs, but one of the services it's running needed faster cores. US$20 each for the 2x X5675 was a lot cheaper than a whole platform upgrade.
Sounds like you were running like 3.5kW draw, including cooling, to keep a home datacenter running, lol.
At what point does it become economically imperative to put a solar farm on your roof / yard, and become your own powerplant, just to get your power bill under control?
I'll be looking into solar next summer :-D
@@CraftComputing That sounds awesome!
Have you done a video on virtual machines? I had VMware on my last PC, but I don't remember how I set it up. I also didn't know how to use it.
I know this is kind of an old video now, but I used it to get TrueNAS working recently. Question: is the way you connected the HDDs to TrueNAS considered passthrough, or is true PCIe passthrough different again? Is there any downside to doing it this way? My board has 3 mini-SAS ports, so I'm not sure I can figure out how to pass those through, and why add another card if I don't need to? It's a server board with a Threadripper Pro CPU.
Why use Proxmox in this case? TrueNAS Scale has the same virtualization backend (KVM), so it would make more sense to simply skip Proxmox entirely.
Wouldn't it be better to skip TrueNAS instead and just use Proxmox?
@@Tanax13 No, Proxmox doesn't provide the proper NAS bits; TrueNAS Scale does, as well as KVM and Docker.
@@mikkelgeorgsen Right. In that case I second your first question: why use Proxmox?
Why are you putting TrueNAS Scale inside of Proxmox when TrueNAS Scale is also a hypervisor?
Because of my motherboard I couldn't pass through my HBA to my VM, so I had to add the disks as VirtIO KVM drives. Hopefully that doesn't bite me in the ass in the future.
That's a big ouch. "cache drives" aren't just cache drives. As I'm sure you discovered already, you should have really removed the SLOG devices from the pool before the migration.
Hi Jeff,
Thank you for the tutorial,
I virtualized TrueNAS on Proxmox and created an SMB share to use across my network, but I have a weird issue: any time I copy data to the shared folder, or copy anything out of it, everything gets slow on Proxmox and all the other VMs hang; once the copy is done, all the VMs go back to normal. Any idea what is going on?
Thanks
Epyc and Proxmox goodness.
Thank you for the video. I got an R720 for my home lab. I set the controller to IT mode and installed Proxmox on an SSD. It works well and I can see the disks. I tried to follow your guide, but after I installed TrueNAS as a VM, TrueNAS can't see the hard disks to create a pool. I am able to see them in Proxmox. Do you have any suggestions?
Just curious where the link is for those written instructions for Proxmox PCI passthrough?
I am trying to get my NAS (upgraded to Scale) and my Milestone 2019 server into one box. I was looking around, and this Proxmox looks like it may do it. Make sure you don't upgrade your pools: once you do that running Scale, you can't go back to Core... I was able to buy some drives and fix the issue, since mine were created 10 years ago when I started using FreeNAS, and they're still alive because I just resilvered as they failed. I was gonna do a VM in Scale, but this might be a better idea, as I'm running an older HP ProLiant G7 for Milestone and a newer Xeon for the NAS.
Oh, ADHD: I forgot to say thank you, which is why I commented in the first place. Never used Proxmox; if that's a bare-metal VM host, I'm going to try it.
I am new to this whole home server thing, and I may be a bit slow here, but: why do you run Proxmox, and TrueNAS inside Proxmox? Why not just TrueNAS?!
Hey Jeff! Nice video :D
Btw, would you be able to make a video of how to build a dedicated server with scalable resources?
For example: I want to host a dedicated Virtualmin machine. Some websites could potentially grow a lot, and I would need to add a new HDD/SSD. How could I add this HDD/SSD to the machine and let the storage capacity grow inside the mentioned system, without having to move files to the new storage unit? I've heard about storage pools, but I'm unsure how to use them.
This is different from having a Emby server for example where you can just point a new location for new files.
I was thinking about having a big LVM volume group with one logical volume for general purposes (like the OS) and another logical volume for the /home folder, which I could expand whenever needed just by adding new storage to the volume group. But how would this work in practice? Could you make a video about it?
Thanks ;D
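In practice, that part is pretty painless with LVM; a rough sketch, assuming a volume group named vg0, a /home logical volume, and a new disk at /dev/sdb:
pvcreate /dev/sdb                     # initialize the new disk for LVM
vgextend vg0 /dev/sdb                 # add it to the volume group
lvextend -r -L +500G /dev/vg0/home    # grow the LV and resize its filesystem in one step (-r)
Nothing moves: existing files stay where they are, and /home simply gets bigger.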
6:57 skips over the most important part of mounting the drives....
Anyone know what add-in card he used for those two 2.5" SSDs? You can see it at 4:42.
I've been running TrueNAS on ESXi for about 2 years. I decided a few weeks ago to switch to Proxmox. Total disaster: after a week of troubleshooting, I had to go back to ESXi. The throughput was just appalling. I run a 900P Optane partitioned into two 15 GB SLOGs and a 220 GB L2ARC. I THINK something funky was going on with the 900P's passthrough: when I removed it from the pools, performance returned, and performance to the 900P alone (using dd) was as expected, but when it was working as part of other pools, it fell apart. Maybe an IRQ issue? Gutted I never got it to work :( Hope you have more success!
Hey Jeff! Can you please provide a link for the PCIe card holding the dual SSDs you've put in the far-right PCIe slot?
You got it! Warning though, I couldn't get it working in Proxmox for some reason. It was running in my TrueNAS server for a year no problem though.
amzn.to/3lfK9ND
@@CraftComputing Thanks, I'll get back to you with the results if I can make it work!
Nice! What RAM and CPU heatsink did you use for the build?