I also built my TrueNAS Scale box, finally, one week ago. TBH: if I didn't have an old desktop to refurbish, I wouldn't go for your consumer PC build. A refurbished or used Supermicro board usually has a proper CPU and an IPMI port, maybe even a 10G NIC; much better ECC support, more reliable for a 24/7 job, and it needs much less power. Your CPU alone is listed at 65W, while an m-ITX board with a Xeon (8C/16T) is listed at 45W, for example. A used HP ProLiant MicroServer Gen8 is highly moddable and also offers more value on the used market thanks to iLO. So in the end I personally wouldn't recommend these parts, but everyone prioritizes different things, and it's nice that more people are getting into TrueNAS Scale in general.
About the storage controller: I never really understood those. At the moment I'm thinking about upgrading my existing Fujitsu Primergy TX1320 M3 server from the standard 4 to 8 connectable disks. The official data sheet of the server lists some controllers for upgrading the SAS connections, but I don't really understand: do I have to use a specific one from Fujitsu, or is any RAID controller with the same connections usable? I'd be happy if someone could explain what's important when choosing the right RAID controller. THX
Hello RUclips, here are the steps to get apt working on TrueNAS SCALE if you don't have the right permissions. First connect to a shell on your server, either over SSH or directly on the console. On SCALE, apt ships with the execute permission removed, so you have to restore it with `chmod +x` on the apt binaries; that gives your user access to the command. Then update your repositories with `sudo apt update` and run `apt-get upgrade`. If you want (and I highly recommend this), add official Debian/Ubuntu repos to the sources list by editing /etc/apt/sources.list with nano. Now you can choose where to download Debian-based applications from, which gives you endless possibilities for how you use your server; you can even pull in tools like beef-xss (an ethical-hacking tool) if you add the Kali repos, though be aware that mixing repos like that, and this whole procedure, is unsupported and a TrueNAS update can revert or break it. Thanks for reading this long comment, and I hope it helped you out.
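Tidied up, the unsupported sequence described above looks roughly like this on older SCALE releases; the exact paths differ between versions, and an update will revert the permission change:

```shell
# apt ships on SCALE with the execute bit removed; restore it
chmod +x /usr/bin/apt /usr/bin/apt-get /usr/bin/dpkg

# Then use it as on any Debian-based system
apt update
apt-get upgrade

# Optional: add extra Debian-based repositories
nano /etc/apt/sources.list
```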
How do you use the 48TB of storage space? Do you have a data replication server at a different place? I don't like data-replication services; I chose a power-switched USB hub with large-capacity SATA HDDs attached to periodically back up the most important files.
I quite recently tried to put all my data onto one giant HDD for archive purposes. rsync kept failing on verification. It turned out that one of the (non-ECC) memory modules in the system was failing. Without verification I would not have known and would have ended up with broken data. ECC all the way if you need reliable storage.
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup. In a Proxmox + TrueNAS or OMV setup, what is the best approach for the ZFS storage pool? Is it best to set up the zpool in Proxmox for use by the NAS software, or is it better to set up the zpool from within the NAS software?
One downside to using a desktop motherboard is the lack of a management interface. If something happens and the system is hard to get to, troubleshooting can be a pain. There are fairly reasonably priced boards from Supermicro or ASRock Rack.
Why get so many drive bays and use 4TB drives when 20-22TB drives are available? You could get the same capacity in a much smaller and less power-hungry system, and with an SSD cache it wouldn't be slower. Also, NAS drives are overpriced compared to enterprise drives like the Seagate X18/X20.
I have this CPU running, but with the current PVE kernel, ECC is not reporting correctly. It should be fixed around kernel 5.17. I tested the ECC function with the same mainboard in Win10.
@@peterfeurstein6085 Yeah, I guess it's down to the Linux kernel; I had no chance to get it working. If you have the same on PVE, hmm. Glad that I replaced it.
Even the co-founder / current developer of ZFS doesn't require/encourage people to use ECC, so I don't see a necessity to do so. There's also a Hacker News thread on this topic. Nonetheless, I enjoyed your recent videos. The Proxmox Packer one was really awesome; I combined it with a GitLab pipeline and now it spits out fresh new images once a week.
Sigh, why don't you stop arguing about ECC? It's recommended by iXsystems in the official docs, and by any IT professional. Btw, thanks for the positive feedback, but you need to understand that when you make a video like this, you can't skip over ECC.
@@christianlempa It wasn't meant to be rude. I thought it was worth mentioning it, since most of the concerns about ECC are regarding ZFS. Have a nice day anyway.
I built a NAS many years ago with a 24-bay Norco case, but I upgraded to a used Dell T630 server, which was around 1000 Euro, so much cheaper than a DIY build with much higher quality parts: a 12G SAS3 backplane and included controller. I upgraded to dual 14-core CPUs for 70 Euro, and it can fit so much more RAM (128GB, and I might add another 128GB). Best of all, this server is so quiet; I had it in my apartment's lounge room. You can get other Dell servers cheaper but with fewer bays (mine is 18-bay), and I used a cheap Sun F80 WarpDrive for the Proxmox datastore. Great vid, and I remember it was fun building my first server, similar to yours. Enjoyed your vid, Christian. Maybe you could do a video on buying a cheap Dell server, maybe a T320 or T620 or the like, and building it into a TrueNAS server for people who don't know much about building hardware? Used SAS drives are also much, much cheaper for these servers.
Very informative and interesting video. The only thing I don't think is so great is the hard disk choice. In price (€) per TB including shipping, 14TB drives are the sweet spot at the moment. In addition, they come with a 5-year warranty, are helium-filled and faster, and running only 5 drives 24/7 saves power.
Inter-Tech: I have some cases of this brand here. They are really nice and well priced. You can find them in the Netherlands (I ordered mine through Amazon Germany). Did you also order the rails for the case?
Fascinating! I'm not sure if I missed this in the video, but why wouldn't you go for the maximum available capacity per drive, say 18 or 20TB, to optimize costs and maximize the capacity per drive slot? Or was your main point to have as many drives as possible for the enhanced transfer speed?
In the US there are Norco chassis with the same interior, but all of them are from China. A Supermicro motherboard will cost the same amount of money, but they are only for Intel CPUs. For Ryzen CPUs there are ASRock server boards available. The main reasons to buy a server board are ECC memory and IPMI.
@@valleyboy3613 Airflow mostly depends on what fans you use. Stock Dell fans are very loud, and in a Norco case you can use whatever 80 or 120 mm fans you want.
Great video as always. I work in Enterprise Infrastructure and we have seen multiple drives fail before at nearly the same time and the added strain of a typical rebuild on the other drives increases the likelihood of another drive failing. As such, I would recommend at least ZFS RAIDZ2.
Can you run Plex GPU transcoding on TrueNAS Scale? It's never simple to run Plex in containers and access the GPU. Also, can you install the latest version of Plex? Usually the version provided is pretty old.
How did you configure the Adaptec asr-71605 so it detects the hard drives? I bought the same card and passed it through to the TrueNAS Scale VM. I can detect it using lspci but none of the drives are detected when I want to create my pool. Thanks.
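A guess, since I can't see the setup: drives behind an ASR-71605 won't show up as raw disks to TrueNAS unless the card is in HBA/pass-through mode (or each disk is exported as a simple volume). Adaptec's `arcconf` CLI can show what the card is doing; a sketch, with subcommand details varying by firmware:

```shell
# Show adapter info, including the current controller mode
arcconf getconfig 1 ad

# List the physical drives the controller actually sees
arcconf getconfig 1 pd

# Series 7/8 firmware can switch modes with SETCONTROLLERMODE;
# check its help output for the right mode value first, since
# changing modes can destroy existing array metadata.
```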
Can all the data also be uploaded automatically to Google Drive? So that if there is damage to the hard disk, we still have a backup of all data in the cloud.
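For what it's worth, TrueNAS ships Cloud Sync tasks (built on rclone) that can push a dataset to Google Drive on a schedule. Outside the UI, the rough equivalent would look like this; the remote name and paths here are placeholders:

```shell
# One-time: create a Google Drive remote interactively
rclone config

# Push a dataset to Drive; preview with --dry-run first
rclone sync /mnt/tank/important gdrive:truenas-backup --dry-run
rclone sync /mnt/tank/important gdrive:truenas-backup
```

Keep in mind a sync is a mirror copy, not versioned snapshots: a deletion or corruption on the NAS propagates to the cloud on the next run.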
Your rebuild time is going to suck, and you have a very high chance of a second failure during the rebuild. Z2 would be a better option IMO. I don't use RAIDZ at all; I always use mirrored vdevs. A rebuild only reads from the failed disk's mirror(s). Yes, it isn't "storage efficient", but I'll take the increased reliability and performance you get with mirrored vdevs.
8:11 That's because (at least in my country) ECC memory is not as readily available as regular memory, and it tends to be a lot more expensive, as much as twice or three times the cost of regular modules.
Can you provide more information on the fan controller you touched on in the video? I've followed your build spec to the letter and the fan controller is not something listed on your kit page. Thanks
Do you have any updates to this a year later? I'm considering building something like this for fun and for my Plex/Jellyfin server. Any recommendations for a chassis i can get in the US?
Not yet, I'm still trying to figure out what to do with my NAS project. But I'm working on some pretty heavy refresh as this project was just too power hungry for me :/
Ok. The suspense is killing me. I'm eagerly awaiting the refresh. As soon as you post, I'll start buying my parts. Finding a good chassis has been hard, as Inter-Tech is German and I'm in Los Angeles. @@christianlempa
BTW I just bought a Sysracks 42U rack and am running my old Intel MacBook Pro as a server with some Docker containers and Home Assistant. Unfortunately, even though it has 64 GB of RAM and a 6 TB HD, laptops don't make good servers.
I needed a network storage array, so I started down this road. I tried to find an affordable solution with ECC, but was told not to use Ryzen because it doesn't support hardware transcoding for a Plex server, so I would have needed a separate video card, and I really did not want that. I am looking to build a 150TB array. Older Xeons did not have Quick Sync, and I was unable to find any Intel Atom processors available, so I ended up going with an i3 and non-ECC memory.
Thank you so much for sharing this video, it's very helpful. Can you tell me how I can see all my hard drives and space in my TrueNAS interface? I have about 10TB across five drives, but I'm not seeing the full amount of disk space.
Great video. Nice build. Like you said software is also important. I'm wondering if you use any software to manage your personal photo/video libraries. If you do, what are they ?
I just built my first TrueNAS Core system (debating starting over and installing TrueNAS Scale instead) and have ~31TB of drives in mine, but have only set up a pool with 4x 3TB WD Red drives so far for media storage and streaming. For movie streaming and uploading to the server, would you recommend upgrading from my dual 1Gig onboard NICs, as I see you went with a 10Gig setup? I'm wondering if 10Gig would be overkill for my usage, but if it isn't, I'm curious what the cheapest compatible setup might be for a 10Gig connection. I'd need a 10Gig NIC, a 10Gig or multi-gig switch, and either CAT6A copper or SFP+ transceivers (seems like quite the investment), since it doesn't seem like there's a way to team/bond my two 1Gb onboard NICs (only aggregate them, assuming I have a switch that supports aggregation as well). Or should I just consider investing in a multi-gig (2.5Gb) NIC and switch? I do plan on creating other pools for backups, and possibly a pool for running VMs down the road, if that makes any difference.
Thanks for the video. Unfortunately, I can't use TrueNAS since it didn't have delete permissions in ACLs, which are needed in our case, so I'm stuck with Windows or XPEnology, since Synology has this option in the advanced permissions section.
Is it better to have many small HDDs or a few larger-capacity ones? For example, 6x14TB in RAIDZ1 vs 11x8TB in RAIDZ2, both giving approx. 70TB usable space.
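The raw parity arithmetic for the two layouts in the question can be sketched like this (it ignores ZFS padding/overhead and uses decimal TB):

```shell
# Usable capacity in TB, ignoring RAIDZ padding/overhead:
# (drives - parity) * drive_size
few_large=$(( (6 - 1) * 14 ))    # 6x14TB RAIDZ1
many_small=$(( (11 - 2) * 8 ))   # 11x8TB RAIDZ2

echo "6x14TB RAIDZ1:  ${few_large} TB usable, survives 1 failure"
echo "11x8TB RAIDZ2: ${many_small} TB usable, survives 2 failures"
```

Capacity comes out nearly identical, but the 11-drive RAIDZ2 survives any two failures, while the 6-drive RAIDZ1 dies on a second failure during a (long) 14TB resilver; fewer drives do win on power draw.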
I think I would have slid the hard drives into every other row, for temperature reasons. That way they don't sit right on top of each other and have a bit more room to breathe.
Hi, I'm from Indonesia. Nice content, you always create high-quality content. Before I watched this video, I had already installed TrueNAS Scale on my IBM System x3100 M4. I'd be interested in another TrueNAS Scale video. Thanks, Christian.
I've been seeing some fairly cheap 24-bay Supermicro combos (case, CPU, mobo and RAM) on eBay and have been thinking about picking one up. This is a nice setup though, and that is a nice case; I hadn't heard of that brand before.
Good video, but why did you not go the easy route? I have a Dell R420 with 196GB RAM, dual 10c/20t CPUs, and 4 12TB NAS drives, altogether at a cost of just under $1200.
The power supplies and fans produce too much noise, and the server rack is right beside my YT studio. So I needed to find a silent case with hardware that is also efficient.
Excellent content as usual! From the video, it seems like an Adaptec ASR 71605 - Can you please confirm/share the exact model of the RAID controller? Thank you in advance.
So what was the model number for the 16-port SAS card? My server is on X470 and it has a similar limitation on PCIe lanes, so that would be helpful. Many thanks.
Nice server ! Do you have any shared storage solution for a Proxmox cluster ? In my homelab I have NFS shares but the NAS becomes a Single Point Of Failure 😕 Maybe the scaling system of TrueNAS would help 🤔
RAIDZ1 on 12 drives?! I'd either do a single RAIDZ2 pool or split it into two combined 6-drive RAIDZ1 vdevs.
The important thing to realise is that when a drive fails (and it will), the pool has zero redundancy; any additional failure and you lose everything. And guess what: when you replace the failed drive, it does a big rebuild (many hours), putting the existing drives under significant load.
Killer build! I hadn't seen that server chassis before. Great value for the money there.
Thank you! 😉
Awesome, brother. It's awesome you do this. Love the support, and this makes me want to support your channel. I use TrueNAS Scale and love it. Smart people make great videos. TheDigitalLife is new to me, but he is very educated on the TrueNAS Scale ZFS system. LOVE IT. KEEP THEM COMING, BROTHER.
You're brave, running RAIDZ1 with 44TB and 12 disks!? I run RAIDZ2 on my 8-drive 20TB TrueNAS.
I'm a maniac 🤣
Great setup, however I would not use RAIDZ1 in a 12-disk pool; the risk of losing more than one disk is too high for me. I use RAIDZ3 for my 12-disk pools... just my 2 cents. Thanks for the vid.
Yeah, that's a valid point. Well, at least you can say in a few years "told you so", when I need to restore it from Cloud ;)
@@christianlempa with your internet connection how long does it take for a complete restore of the pool?
RAIDZ3 seems a bit of overkill to me. At work we run a self-built TrueNAS server for backing up Xen VMs, with around 270 TB net capacity in a RAIDZ2 setup. This allows two HDDs to fail while the array still functions. Not using fast enterprise-level SSDs as read/write cache for the pool seems like a no-no to me, though. ZFS in the end is not THAT memory-intensive unless you do deduplication; a fast CPU plus 8 GB of RAM will be fine to serve a ZFS pool.
@@HolgerBeetz what cpu and mb do you use?
@@PingPongOblong Dual Xeon Gold 5222 CPUs and a standard Supermicro mobo, which came with the storage package from the vendor.
Are you crazy? RAIDZ1 on a 12-drive array... you must hate your data! It is very common for a second (or third, or more) drive to fail during a rebuild, which guarantees data loss, and depending on when and how badly it fails, you could very well lose ALL of your data. On a 12-drive RAID array, I wouldn't ever consider using anything less than RAIDZ3. In reality, though, I would use two 6-drive RAIDZ2 arrays.
we can argue about raidz2, kinda makes sense, but raidz3? come on...
I also came to complain about your RaidZ1 choice.. Otherwise I approve 😂
Thanks! :D Yeah it's a valid point, I need to admit ;)
one of the few youtubers i watch that talks fast enough that I dont need to use 1.25x-1.5x speed while watching 😅😅 great build and very insightful walkthrough of the parts selection!!
Thank you 😂🙏
RAIDZ1 on 12 disks isn't recommended, and you're limited to one vdev's speed and IOPS, even with RAIDZ2. It would be better in your setup to make two 6-disk RAIDZ1 vdevs and combine them into a stripe; that is supported in ZFS.
You can tune ZFS's ARC memory usage; the default is 50% of memory.
Lawrence Systems has a video on that.
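For reference, on SCALE (Linux OpenZFS) the ARC ceiling is the `zfs_arc_max` module parameter; a sketch of checking and capping it from a shell, with the 16 GiB value purely an example:

```shell
# Current ARC size and ceiling, in bytes
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 16 GiB for the running system (resets on reboot;
# make it persistent via the TrueNAS UI rather than editing files)
echo $((16 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```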
Please do not use RAIDZ1 on a pool with that many disks; there is a good chance you will have 2 disks fail within a few days of each other, especially if you bought them at the same time.
If you lose a second disk before the first has resilvered, bye bye data.
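The risk described above can be put into rough numbers. The sketch below assumes a 3% annual failure rate, a 48-hour resilver, and the 1-per-10^14-bits unrecoverable-read-error (URE) spec typical of NAS drive datasheets; all of these are assumptions, not measurements from this build:

```shell
# Odds of a resilver going wrong on the 11 surviving drives of a
# 12x4TB RAIDZ1, under the stated assumptions
awk 'BEGIN {
  drives = 11; size_bytes = 4e12
  afr = 0.03; resilver_h = 48; ure_rate = 1e-14

  # P(at least one surviving drive fails outright during the resilver)
  p_one  = 1 - (1 - afr) ^ (resilver_h / 8760)
  p_mech = 1 - (1 - p_one) ^ drives

  # P(at least one URE while reading every survivor end to end)
  lambda = drives * size_bytes * 8 * ure_rate
  p_ure  = 1 - exp(-lambda)

  printf "mechanical 2nd failure: %.2f%%\n", p_mech * 100
  printf "at least one URE:       %.1f%%\n", p_ure * 100
}'
```

Even with optimistic independent-failure math, the URE term dominates: reading ~44TB against a 10^-14 spec makes hitting at least one read error close to certain, which is exactly the RAIDZ1 problem at this width (in practice drives often beat their spec, so treat this as a pessimistic bound).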
From a performance and reliability standpoint, it's better to use multiple vdevs; in your case, probably 2x RAIDZ1. Still, from my point of view it's better to use vdevs with multiple parity (Z2+); otherwise, with some bad luck, you can hit unrecoverable read errors while resilvering if one drive dies.
Still good content, thank you :)
Thanks mate, I guess I'll change my pool to RAIDZ2; that might be the better decision.
You really should not use RAIDZ1 on that amount of raw storage. If one drive fails and you swap it to rebuild the volume, you put a lot of pressure on the remaining drives, and for a long time (because of the huge drive capacity). There is a real chance that another drive fails during the rebuild process, and since you are only using RAIDZ1, all your data would then be lost.
Only use RAIDZ1 for small deployments (4 drives or fewer with low capacity), and have good backups.
I would suggest at least RAIDZ2. And as always, RAID is not a backup.
Still a good video, though. I have a similar setup myself, except I run TrueNAS virtualized in Proxmox with PCIe passthrough of the HBA.
For reference, consider the ASRock Rack MBs for home server use. They use standard desktop chipsets, but include some handy server features. For instance, the X570D4U-2L2T includes multiple 1Gb & 10Gb NICs, IPMI, SATA DOM, etc. crammed onto a MicroATX MB.
Wow interesting, thank you! I'll take a look at these boards
Do you have any cheaper or older boards that are similar that you could suggest?
@@PingPongOblong Unfortunately, I do not know of any. The ASRock Rack boards are really rather unique and as such they are $$.
I would always use ECC Memory when storing important files.
But if there isn't important data at stake, I still like to use ZFS, even without ECC.
Yeah I 100% agree on that!
Check out Lawrence's video on the (no) need for ECC when using ZFS.
It seems like you put a 1TB SSD in as the boot drive. WHAT A WASTE! One of the things that bothers me about TrueNAS is that it only uses about ~3-6GB of space on the boot drive and hides the rest of the drive so you can't use it for storage.
I recommend that you use a 64 or 128GB SSD for boot (maybe even two for a mirrored boot) if you don't want to waste all this space. I like how Proxmox does it, where it creates an LVM for its system and an LVM-thin pool on the rest of the drive that you can use for storage.
But fine, when you spend all that cash on 12 4TB drives, 1TB wasted is no big deal. But come on, why waste it?
Hmm good question, honestly, I don't have any valid reason for that :D
@@christianlempa I currently use a 64GB USB flash drive for my TrueNAS Core. The nice thing about this setup is that you back up the config file and can then reinstall on a different boot drive. Restore the configuration and boom, you're done.
@@Darkk6969 This is appealing. How's the perf running from the boot drive?
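The backup/restore flow described in this thread can be scripted. A sketch, assuming root SSH access and a hypothetical hostname; the config database path is the standard one on CORE/SCALE, but verify it on your version:

```shell
# Pull the config database ("truenas.local" is a placeholder hostname)
scp root@truenas.local:/data/freenas-v1.db "truenas-config-$(date +%F).db"

# To restore: fresh install, then upload this file through the web UI
# (System -> General -> Upload Config) and the box comes back configured.
```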
A 12-disk pool with RAIDZ1!!! A newbie demonstrating a very, very risky setup to other newbies... good luck!!
thanks, good luck to you too *g*
High Q-U-A-L-I-T-Y content as always!! Bravo!
Glad you enjoyed it!
(You maybe should have mentioned the reason why half your RAM is already in use by the cache. That's normal; ZFS automatically takes up to 50% of any amount of RAM.)
Thanks for sharing!
What the shit, why would you run RAIDZ1 on 12 disks?? Either RAIDZ2/3 or 2x vdev RAIDZ1/2/3.
- Resilvering a drive is known to destroy more drives.
Edit: Nevermind, just saw your new video haha.
XD
12 drives with RAIDZ1? Can't tell if it's brave or not smart... but... you do you man.
Let's call it Brave, okay? 😅
Extremely risky….
I don’t think you need to use ECC on TrueNAS… why are you so stubborn about it? And you went with a 12-drive Z1 setup; that's a lot more risky. ECC on one hand, but taking a lot more risk with Z1…
If you want to do it right, just use ECC, as that's the recommended way. About RZ1 you're absolutely right; that needs to be changed.
Hello 👋, can you tell me why I see "connection lost" on TrueNAS when I use iSCSI with Proxmox? iSCSI works perfectly, but TrueNAS shows me a connection lost for IP 192...... . Please 🙏 I've searched for a solution but haven't solved it.
Great video... but you set up 12 drives in a single vdev, RAIDZ1? I'd seriously reconsider that decision unless none of the data going to it is all that important; if a single drive fails, and then another fails during the intense 4-hour rebuild (as can happen)... well, you'd lose everything. And no one wants that! (Use RAIDZ2, minimum!)
Fair point, I probably will.
How much power does it consume on average? How fast is your Internet speed? Thanks
+1 for this question. I'm also very curious about the power consumption
In ZFS, the total pool IOPS equal 1 disk's IOPS * the number of vdevs, and since you have a single vdev, the IOPS are equal to a single WD Red Plus 4TB. So your config is pretty bad from a performance point of view too, besides being quite unreliable. The only thing that's not so bad is the streaming performance (as long as you are using a large block size).
With 12 disks, my choices would probably be: reliability: 2x (Z2 w/ 6 disks) performance: 3x (Z1 w/ 4 disks). You can also avoid parity arrays and get even more IOPS but the usable space decreases drastically.
P.S. Unless things have changed (I have not played with SCALE yet), that 1TB NVMe drive is completely wasted as a boot drive. Ideally the OS should sit on a small SATA SSD (best a couple in a mirror), and that NVMe would be better used as L2ARC cache.
Thanks, great feedback. I will probably change my config to something else once I have the chance. I still haven't decided what exactly, but 2x Z2 would mean I'm losing 4 disks, which is 30%. Seems like a lot to me. I'll probably go with a single Z2 or even Z3 for the 12x4TB as a big data pool for backups and video files, and add a second vdev with 4x SSDs. What do you think of that idea?
@@christianlempa The recommended number of disks per vdev is between 3 and 9, and more than 12 is not recommended, so a single Z2 with 12 disks is pretty much an explicitly unrecommended configuration. I know that losing all that space sucks, but this is the price if you want to do things the right way. Since you are not storing mission-critical data (right? 😛), you can configure the pool with two 6-disk Z1 vdevs and very frequent backups 🙂.
The question would be why it's not recommended, apart from the performance hit. Anyway, I might give the 2x Z2 idea a shot.
@@christianlempa For many reasons. The most obvious is rebuild time (it could take a week or more, and during the process you could lose more disks because they are under high stress), but also space efficiency (due to parity and padding complexity) and other joyful reasons (like further performance degradation) that you can discover by deep-diving into the technical documentation if you want 🙂
Of course, once you are aware of all the risks and limitations, if they are still within your "margins of acceptance" you are free to configure your pool as you wish, I just wanted to make you aware 😉
@@dariopetrusic4215 No worries mate, I appreciate useful feedback! That makes total sense to me. I guess I'll go with 2x Z2 then.
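For anyone wanting to try the 2x Z2 route on the command line rather than through the TrueNAS UI: vdevs listed in a single zpool create are striped together automatically. A sketch, where the pool name and device names are placeholders (real pools should use stable /dev/disk/by-id/ paths):

```shell
# Hypothetical pool "tank" made of two 6-disk RAIDZ2 vdevs, striped together.
# sda..sdl stand in for your actual 12 drives.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl

# Verify the layout: status should list raidz2-0 and raidz2-1 under "tank"
zpool status tank
```

In the TrueNAS Scale UI, the equivalent is creating the pool with two data vdevs of 6 disks each. (This sketch needs real disks, so it can't be run as-is.)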
It's a mistake to have more than 10 HDDs in a single vdev, sir.
You'd need to make 2 vdevs.
Yeah, I'm considering changing that soon! Thank you mate
Hi Christian,
do the HDD status LEDs in your Inter-Tech enclosure work out of the box with TrueNAS Scale? (Does that need any additional wiring, or is it a native SAS feature?)
In the next few days I'd like to send out my shopping list for an Inter-Tech or Supermicro CSE-216 case...
Thanks for the time you put into your YouTube channel, it's a great starting point day by day for my me-time... 🤟
Unbuffered ECC is the best/fastest, but registered (buffered) ECC lets you double the amount of RAM you can install. Buffered ECC is a bit slower, though. If you have enough RAM slots, you'll be OK with unbuffered...
20:20 When you connect two devices directly, do you use a crossover cable? I have a TrueNAS Mini X (diskless) on order and already have drives, plus an unmanaged switch that will connect the iXsystems machine and an Apple TV, which is also wired to an Eero 6 mesh network. My hope is to be able to watch media from the NAS on the TV even if the internet is down. Eero seems to need cloud/internet :(
I also finally built my TrueNAS Scale box a week ago. TBH, if I didn't have an old desktop to refurbish, I wouldn't go for your consumer PC build. A refurbished or used Supermicro board usually has a proper CPU, an IPMI port, and maybe even a 10G NIC; much better ECC support, more reliable for a 24/7 job, and needs much less power. Your CPU alone is listed at 65W; an mITX board with a Xeon (8C/16T) is listed at 45W, for example. A used HP ProLiant MicroServer Gen8 is highly moddable and also offers more value with iLO on the used market.
So in the end, I personally wouldn't recommend these parts, but everyone prioritizes different things, and it's nice that more people are getting into TrueNAS Scale in general.
About the storage controller: I never really understood those.
At the moment, I'm thinking about upgrading my Fujitsu Primergy TX1320 M3 server from the standard 4 to 8 connectable disks.
The official data sheet of the server lists some controllers for upgrading the SAS connections; however, I don't really understand:
do I have to use a specific one from Fujitsu, or is any RAID controller with matching connections usable?
I'd be happy if someone could explain to me what's important when choosing the right RAID controller. THX
Hello YouTube, here are the steps to get apt working on TrueNAS if you don't have the right permissions.
First, connect to your server's shell, either via SSH or directly on the server.
Then restore the execute permission on the apt binaries (something like chmod +x /usr/bin/apt*; plain "chmod apt" on its own does nothing). Now update your repositories with sudo apt update, then run apt-get upgrade. Then, if you want (and I highly recommend this), add the official Ubuntu repos to sources.list by typing:
nano /etc/apt/sources.list
Now you can edit where you download Debian-based applications from, and this gives you endless possibilities for how you want to use your server. You can even install and configure BeEF, which is a tool meant for ethical hacking, by doing:
sudo apt install beef-xss
Note you can only install beef-xss if you added the official Kali repos to your sources.list.
If you get an error while doing sudo apt update, you may need a ufw allow rule for the traffic and to tell apt to accept the unknown repo keys so the Kali repo works too.
Thanks for reading this long comment, and I hope it helped you out
How do you use the 48TB of storage space? Do you have a data-replication server at a different site? I don't like data-replication services; I chose a power-switched USB hub with large-capacity SATA HDDs to periodically back up the most important files.
I quite recently tried to put all my data onto one giant HDD for archival purposes. rsync kept failing on verification. It turned out one of the (non-ECC) memory modules in the system was failing. Without verification I would not have known, and I would have ended up with corrupted data. ECC all the way if you need reliable storage.
Why does every YouTuber build a NAS with 40+ terabytes? Come on, who on earth needs that much storage? :D
Thanks a ton for the great content. I have found your videos quite helpful as I find my way around this “new world” of self-hosting / home lab setup.
In a Proxmox + TrueNAS or OMV setup, what is the best approach for the ZFS storage pool? Is it best to create the zpool in Proxmox for use by the NAS software, or is it better to create it from within the NAS software?
😳👍
One downside to using a desktop motherboard is the lack of a management interface.
If something happens and the system is hard to get to, troubleshooting can be a pain.
There are fairly reasonably priced boards from Supermicro or ASRock Rack.
Do you have a local backup server? How do you back up something of this scale as a person who isn't a company?
I wouldn't use RaidZ1 across that many drives 😨
Well you don't need to :D
Because without ECC, at least for me, I had great problems with lag, and I want it to run 24/7 no matter what.
Actually the Ryzen Pro APUs support ECC.
super videos but you are ... shouting on all of them ;-) we hear you!
Why get so many drive bays and use 4TB drives when 20-22TB drives are available? You could get the same capacity in a much smaller and less power-hungry system, and with an SSD cache it wouldn't be slower. Also, NAS drives are overpriced compared to enterprise drives like the Seagate X18/X20.
Cheaper per drive and more redundancy, but unless it's mission critical I kind of agree with you
lol imagine a server with no BMC, false economy #1 lol.
Pretty sure this WD is SMR. It will cause problems with ZFS.
Another disadvantage of the Ryzen 7 5750G is that it's on PCIe 3.0 instead of 4.0, and I believe it also has fewer PCIe lanes.
Ryzen 7 PRO 5750G supports ECC memory
Didn't work for me
I have this CPU running, but with the current PVE kernel, ECC is not reported correctly. It should be fixed around kernel 5.17.
I tested the ecc function with the same Mainboard in Win10.
@@peterfeurstein6085 Yeah, I guess it comes down to the Linux kernel; I had no chance of getting it working. If you see the same on PVE, hmm. Glad that I replaced it.
Great video! Can you please do a tutorial on TrueNAS Scale ACL permissions?
Thanks, mate! Well, maybe at some point; I'll put that on the backlog
I would have taken the smallest helium (He) drives, as they're way, way less power-hungry at idle
TrueNAS is wonderful, but it takes over the whole boot disk when you install it on a physical machine. It's a waste of my SSD.
Even the cofounder/current developer of ZFS doesn't require/encourage people to use ECC, so I don't see a necessity to do so. There's also a Hacker News thread on this topic. Nonetheless, I enjoyed your recent videos. The Proxmox Packer one was really awesome; I combined it with a GitLab pipeline and now it throws me fresh new images once a week.
Sigh, why don't you stop arguing about ECC? It's recommended by iXsystems in the official docs, and by any IT professional. Btw, thanks for the positive feedback, but you need to understand that when you make a video like this, you can't skip over ECC.
@@christianlempa It wasn't meant to be rude. I thought it was worth mentioning it, since most of the concerns about ECC are regarding ZFS. Have a nice day anyway.
@@christianlempa Coming from a guy using RAIDZ1 on a 12 disk array production machine :)
@@deckardstp yeah don't worry, I just went over this discussion too many times, it's all good 😉
I've just seen this video after buying ECC RAM with a 3200G... I'm freaking mad
:(
I built a NAS many years ago with a 24-bay Norco case; however, I upgraded to a used Dell T630 server, which was around 1000 Euro, so much cheaper than a DIY build and with much, much higher quality parts: a 12G SAS3 backplane and an included controller. I upgraded to dual 14-core CPUs for 70 Euro, and it can also fit so much more RAM (128GB, and I might add another 128GB). Best of all, this server is so, so quiet; I had it in my apartment's lounge room. You can get other Dell servers cheaper but with fewer bays (mine is 18-bay), and I used a cheap Sun F80 WarpDrive for the Proxmox datastore. Great vid, and I remember it was fun building my first server, similar to yours. Enjoyed your vid, Christian. Maybe you could do a video on buying a cheap Dell server (a T320 or T620 or the like) and turning it into a TrueNAS server, for the people who don't know much about building hardware? Also, used SAS drives are much, much cheaper for these servers too.
Very informative and interesting video. The only thing I don't think is so great is the hard disk choice. On price (€) per TB including shipping, 14TB drives are the sweet spot at the moment. In addition, you get a 5-year warranty, helium filling, faster drives, and lower power consumption with only 5 hard drives running 24/7.
I found this to be one of the best price-per-TB values. Sure, I could have saved a little bit, but I wanted to see how this big pool of HDDs performs ;)
Inter-Tech, I have some cases of this brand here.
They are really nice and well priced.
You can find them in the Netherlands (I ordered mine through Amazon Germany).
Did you also order the rails for the case?
Interesting, yeah I also ordered the rails from them.
Fascinating! I'm not sure if I missed this in the video, but why wouldn't you go for the maximum available capacity per drive, say 18 or 20TB, to optimize costs and maximize the capacity per drive slot? Or was your main point to have as many drives as possible for the enhanced transfer speed?
In the US there are Norco chassis with the same interior, but all of them are from China. A Supermicro motherboard will cost the same amount of money, but they are Intel-only; for Ryzen CPUs there are ASRock Rack server boards available. The main reasons to buy a server board are ECC memory and IPMI.
I have a 24-bay Norco case, but the airflow is terrible... I bought a used Dell T630 instead, all up much cheaper than a DIY build like my first server
@@valleyboy3613 Airflow mostly depends on what fans you use. Stock Dell fans are very loud, and in a Norco case you can use whatever 80 or 120 mm fans you want.
iperf3 is ALWAYS single-threaded, even with parallel streams (-P).
how do you backup this machine?
Snapshots and Cloud Backup
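For anyone curious what "snapshots and cloud backup" looks like under the hood: ZFS snapshots are instant and can be replicated offsite with send/receive (TrueNAS wraps this in its periodic snapshot and replication tasks). A sketch where the dataset and host names are assumptions:

```shell
# Hypothetical dataset "tank/data" and backup host "backupbox".
# Take a recursive point-in-time snapshot:
zfs snapshot -r tank/data@nightly-2023-01-01

# Replicate the snapshot stream to another ZFS machine over SSH
# (-R sends the whole tree, -F rolls back the target, -u leaves it unmounted):
zfs send -R tank/data@nightly-2023-01-01 | ssh backupbox zfs recv -Fu backup/data
```

This needs a real pool on both ends, so treat it as an illustration rather than a copy-paste recipe.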
Great video as always. I work in enterprise infrastructure, and we have seen multiple drives fail at nearly the same time; the added strain of a typical rebuild on the other drives increases the likelihood of another drive failing. As such, I would recommend at least RAIDZ2.
Thanks mate!
Hi, please make a video on how to install a web server with Apache, PHP, and MariaDB or MySQL on TrueNAS Scale
Can you run Plex GPU transcoding on TrueNAS Scale? It's never simple to run Plex in containers and access the GPU. Also, can you install the latest version of Plex? Usually the version provided is pretty old.
How did you configure the Adaptec asr-71605 so it detects the hard drives? I bought the same card and passed it through to the TrueNAS Scale VM. I can detect it using lspci but none of the drives are detected when I want to create my pool. Thanks.
You need to set the controller in HBA mode, check the settings in your controller via the BIOS
@@christianlempa Thank you! Worked like a charm.
Can all the data also be uploaded automatically to Google Drive, so that if the hard disks are damaged, we still have a backup of all the data in the cloud?
I think you can do that
Scale is not up to snuff (yet) performance-wise compared to TrueNAS CORE 12/13...
Your rebuild time is going to suck, and you have a very high chance of a second failure during the rebuild. Z2 would be a better option IMO. I don't use RAIDZ at all; I always use mirrored vdevs. A rebuild reads only from the disk's mirror(s). Yes, it isn't "storage efficient", but I'll take the increased reliability and performance you get with mirrored vdevs.
Thanks mate, yeah I'm probably rebuilding the setup at some point
I agree. He made a whole video about his NAS build like a pro but screwed up on the storage pool like a noob
8:11 That's because (at least in my country) ECC memory is not as readily available as regular memory, and it tends to be a lot more expensive, as much as two or three times the cost of regular sticks
No, I don't believe it's just the availability and price. It's because some IT guys just like to argue about everything...
Throwing ECC memory in your home server is fine.. but that 5700G would have actual tangible benefits
I'm not sure, honestly, I regret building this server :/ I might do an update video on some of the mistakes and stuff soon
@@christianlempa Really looking forward to an updated video, as I would like to build a TrueNAS Scale server and I'm a total noob.
What do you think about DDR5 memory? There is no real ECC with it, just on-die ECC, which works differently
Can you provide more information on the fan controller you touched on in the video? I've followed your build spec to the letter, and the fan controller is not something listed on your kit page. Thanks
You said you'd link the zfs video in the description, but it isn't there.
What? Let me fix that! Thanks for the heads up :)
Do you have any updates to this a year later? I'm considering building something like this for fun and for my Plex/Jellyfin server. Any recommendations for a chassis I can get in the US?
Not yet, I'm still trying to figure out what to do with my NAS project. But I'm working on some pretty heavy refresh as this project was just too power hungry for me :/
@@christianlempa Ok, the suspense is killing me. I'm eagerly awaiting the refresh; as soon as you post, I'll start buying my parts. Finding a good chassis has been hard, as Inter-Tech is German and I'm in Los Angeles.
BTW, I just bought a Sysracks 42U rack, and I'm running my old Intel MacBook Pro as a server with some Docker containers and Home Assistant. Unfortunately, even though it has 64GB of RAM and a 6TB HD, laptops don't make good servers.
I understand that you run Proxmox and TrueNAS Scale on different servers. How do you add the ZFS pool from TrueNAS Scale to Proxmox?
Cool build but how does the power supply breathe? The case doesn't seem to have any ventilation holes for it
I needed a network storage array, so I started down this road. I tried to find an affordable solution with ECC, but was told not to use Ryzen because it doesn't support hardware transcoding for a Plex server, so I'd have needed a secondary video card, and I really did not want that. I am looking to build a 150TB array. Older Xeons did not have Quick Sync, and I was unable to find any Intel Atom processors available, so I ended up going with an i3 and non-ECC memory.
Wow, now I can see how expensive DDR4 ECC memory is O.o
If you run into issues with that Intel 10G NIC, pick up a Chelsio
How do I buy a case there? I can't find any way to order on the Inter-Tech website itself
There's also some SMB tuning; see Linus Tech Tips on Samba tuning
What should I tune here?
Thank you so much for sharing this video, it's very helpful. Can you tell me how I can see all my hard drives and their space in the TrueNAS interface? I have about 10TB across five drives, but I'm not seeing the full amount of disk space.
Thank you, glad you enjoyed it! I don't know what could be the cause of your issue; maybe check out the TrueNAS forums or our Discord?
Great video. Nice build. Like you said software is also important. I'm wondering if you use any software to manage your personal photo/video libraries. If you do, what are they ?
For personal stuff I use Google Drive
TrueNAS comes with support for packages.
Nextcloud is one that I recommend!
I just built my first TrueNAS Core system (debating starting over and installing TrueNAS Scale instead) and have ~31TB of drives in it, but have only set up a pool with 4x 3TB WD Red drives so far, for media storage and streaming. For movie streaming and uploading to the server, would you recommend upgrading from my dual 1Gig onboard NICs, as I see you went with a 10Gig setup? I'm wondering if 10Gig would be overkill for my usage, but if it isn't, I'm curious what the cheapest compatible setup might be for a 10Gig connection. I'd need a 10Gig NIC, a 10Gig or multi-gig switch, and either CAT6A copper or SFP+ transceivers (quite the investment), since there doesn't seem to be a way to team/bond my two 1Gb onboard NICs (only aggregate them, assuming I also have a switch that supports aggregation). Or should I just invest in a multi-gig (2.5Gb) NIC and switch? I do plan on creating other pools for backups, and possibly a pool for running VMs down the road, if that makes any difference.
Thanks for the video. Unfortunately, I can't use TrueNAS since it doesn't have delete permissions in its ACLs, which we need in our case, so I'm stuck with Windows or XPEnology, since Synology has this option in the advanced permissions section
Is it better to have many small HDDs or a few larger-capacity ones?
For example, 6x 14TB in RAIDZ1 vs 11x 8TB in RAIDZ2, both giving approx. 70TB of usable space.
It's better not to use as many hard drives as I did. At some point it becomes slow and unusable; split the pool into multiple smaller ones.
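Quick check of the raw numbers for the two layouts above (simple parity arithmetic, before ZFS overhead):

```shell
# 6x 14TB in RAIDZ1: one drive's worth of parity
raidz1_tb=$(( (6 - 1) * 14 ))
# 11x 8TB in RAIDZ2: two drives' worth of parity
raidz2_tb=$(( (11 - 2) * 8 ))

echo "RAIDZ1 6x14TB: ${raidz1_tb} TB usable"
echo "RAIDZ2 11x8TB: ${raidz2_tb} TB usable"
```

Both land around 70TB, so the trade-off really is redundancy and rebuild behavior rather than capacity.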
48TB! Amazing storage! But ehm... why not go for the round, psychologically more pleasing number of 50TB? LOL
Hm, computers don't do round numbers :D Thanks for the feedback btw!
I think I would have slid the hard drives into every second row, for temperature reasons. That way they don't sit right on top of each other and have a bit more room to breathe.
Hi, I'm from Indonesia. Nice content!
You always create high-quality content, and before I watched this video I had already installed TrueNAS Scale on my IBM System x3100 M4.
I'd be interested in another TrueNAS Scale video.
Thanks, Christian.
Thank you! Of course I'll do a second video about Kubernetes ;)
If you're after performance, a pool should have no more than 8 disks; that's usually the sweet spot.
BLAZINGLY FAST
At 20:53, you actually mean 10 Gbit, or 1 GB/s. 700 MB/s is a lot more than one gigabit per second :)
Yep, that's true ;)
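The conversion is just a factor of 8 between bytes and bits:

```shell
# 700 MB/s expressed in megabits per second (1 byte = 8 bits)
mbit=$(( 700 * 8 ))
echo "${mbit} Mbit/s"   # 5600 Mbit/s, i.e. ~5.6 Gbit/s, so a 10G link is needed
```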
I've been seeing some fairly cheap 24-bay Supermicro combos (case, CPU, mobo, and RAM) on eBay and have been thinking about picking one up. This is a nice setup though, and that's a nice case; I hadn't heard of that brand before.
How do you mount TrueNAS ZFS storage in Proxmox? Please explain.
Using NFS
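Concretely, once a dataset is shared over NFS from TrueNAS, it can be attached as Proxmox storage from the CLI. A sketch, where the storage ID, server address, and export path are placeholders (the same can be done via Datacenter > Storage > Add > NFS):

```shell
# Hypothetical values: adjust the server address and export path to your setup
pvesm add nfs truenas-nfs \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox \
  --content images,iso,backup

# The new storage should now show up in the list
pvesm status
```

This requires a Proxmox host and a reachable NFS export, so it's a configuration sketch rather than something to run blindly.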
Chris, which company makes that case?
Good video, but why didn't you go the easy route? I have a Dell R420 with 196GB RAM, dual 10C/20T CPUs, and 4x 12TB NAS drives, altogether at a cost of just under $1200.
The power supplies and fans produce too much noise, and the server rack is right beside my YT studio. So I needed to find a silent case with hardware that is also efficient.
I bet the ECC memory fight is because ECC memory costs more. Great video, thanks
I think so too :/ thank you bro!
amazing
Excellent content as usual! From the video, it looks like an Adaptec ASR-71605. Can you please confirm/share the exact model of the RAID controller? Thank you in advance.
Thanks! You should find the exact model on the kit page.
So what was the model number of the 16-port SAS card? My server is on X470 and has a similar limitation on PCIe lanes. Would be helpful, many thanks
You'll find it on my kit page
Nice server!
Do you have any shared storage solution for a Proxmox cluster?
In my homelab I have NFS shares, but the NAS becomes a single point of failure 😕
Maybe the scaling system of TrueNAS would help 🤔
Thanks! No I'm just running two machines, the TrueNAS and Proxmox.