What are your thoughts on drive sleeping? If I access my 40-50 drive NAS half a dozen times a day, would sleeping the drives when not in use be smart for power consumption, or am I risking their longevity by sleeping them? I have had NASs for over a decade and always disabled sleep, but after seeing your wattage measurements I'm thinking maybe I'm making the wrong choice with the number of drives I'm running.
I had a setup with 3x 24-drive SAS chassis attached, all running mirrored pairs, with no two drives of a pair in the same chassis. You could lose an entire chassis and the array would be badly degraded but still up and running. This served iSCSI/Fibre Channel storage for virtual machines, back before SSDs were large and affordable enough to just build the entire array out of SSDs.
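In ZFS terms, that layout is just striped mirrors where the two sides of each mirror live in different enclosures. A minimal sketch, assuming hypothetical by-id device names (chassis-a-* and chassis-b-* stand in for whatever your enclosures actually expose):

```
# Each mirror pairs one disk from chassis A with one from chassis B,
# so a whole-enclosure failure degrades every mirror but loses none.
zpool create tank \
  mirror /dev/disk/by-id/chassis-a-disk1 /dev/disk/by-id/chassis-b-disk1 \
  mirror /dev/disk/by-id/chassis-a-disk2 /dev/disk/by-id/chassis-b-disk2 \
  mirror /dev/disk/by-id/chassis-a-disk3 /dev/disk/by-id/chassis-b-disk3
```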
Most eye-popping part of this video (and I appreciate this so much) was the part about wattage and electricity pricing. Thanks for that pro tip, because I've been obsessing over larger drives. Does the pricing principle/outcome differ with SSDs?
Actually, when a hard drive is near failure, the RAID card (or HBA) will tell the server that the drive has a predicted failure. This happens when a drive takes too long to write to a specific sector, and the RAID card counts that toward the predicted-failure count. Though you can reset this count and make the PF light go off, that's a really bad idea, because the drive still has a bad sector and will eventually fail after some more hours of operation. If you have 2 PFs in a RAID 5 (or Z1), you'll have a hard time praying that no other drive fails during the rebuild. It's recommended to tolerate at most as many PFs as your array's redundancy can absorb (e.g. RAID 5 > 1 possible failure > 1 PF max). You should order a new drive immediately and substitute it.
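On plain HBAs with no PF counter, SMART gives roughly the same early warning. A minimal check, assuming smartmontools is installed and /dev/sda is the disk in question:

```
# Overall health verdict (PASSED / FAILED)
smartctl -H /dev/sda

# Raw attributes; rising Reallocated_Sector_Ct (5) or
# Current_Pending_Sector (197) are the classic pre-failure signs
smartctl -A /dev/sda
```

TrueNAS can also run scheduled SMART tests and alert you when these attributes move.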
Why didn't the mirrors scale up? Do you have any data on the internal PCIe architecture, i.e. how the drives are connected to the host? Is there more than one HBA in the machine? Curious whether mirror performance would increase if, for example, you set up the mirror so the drives were on different HBAs (assuming more than one HBA)...
I just set up a new TrueNAS server and went with a RAIDZ1 configuration: three 14TB Toshiba drives for data storage, plus a mirrored NVMe vdev for metadata and small files. This pool is for large amounts of data like video files or disk images. In addition I added an NVMe pool and an SSD pool for data that is accessed more often, for faster access and less power draw. For backup I run a second, smaller TrueNAS server, because RAID is not a backup.
This mostly applies to btrfs as well. Btrfs has a few advantages, like file-based RAID, which means drives can vary in size: with RAID1 across 3TB + 1TB + 1TB + 1TB, the default config balances writes across all drives, effectively making the 3TB drive a live mirror of the other three. Another awesome feature is dynamic arrays; done right, you can grow and shrink an active array. And I've never broken a btrfs raid5 or raid6, and I have tried, so I don't believe a regular user would trigger the mythical write hole; and even if you did, aren't you glad you had a backup?
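A sketch of that mixed-size, grow-as-you-go workflow (device paths and mount point are placeholders; run against your own disks):

```
# RAID1 for both data and metadata across four unequal drives
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Grow the live array by one disk, then rebalance chunks onto it
btrfs device add /dev/sdf /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# Shrink: btrfs migrates chunks off the device before removing it
btrfs device remove /dev/sdb /mnt/pool

# Periodic scrub to catch bitrot against checksums
btrfs scrub start /mnt/pool
```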
About your questions in the intro: 6 to 8 drives in a striped mirror is what I prefer. I normally use regular SATA hard drives but with a proper RAID controller and its features. I buy the cheapest drives I can find to match my target capacity, and I match my slowest drive's speed to the speed of my Ethernet connection.

In my case I have an old 3.5-inch 12-bay server with a single 6-core Xeon and a SATA (not SAS) RAID controller. The server has a 4-port 10Gbit NIC and my PC has a 5Gbit NIC, so 5Gbit is the minimum speed I try to reach. I use 8 drives at the moment, and a typical HDD reaches 80-160 MB/s. I could use SSDs, but I use Ubuntu's built-in SSH/SFTP file transfer, and because of how that works I don't expect much random access; only one file is transferred at a time. I use this method because it works seamlessly in the file manager, so I have no need for SMB or anything similar. In fact all of my servers and computers run Ubuntu.

I don't have other systems connected to the same NAS besides my phone and TV, and for those I use the Plex media server I installed for music and video. The TV uses a 1Gbit connection but for some reason can only manage about 80 MB/s, and my Wi-Fi to my phone is only 2.4GHz. So yes, my PC is the fastest device in my network, and my servers don't keep a constant connection between them, only for backups. In theory I could take advantage of 10Gbit, but at the moment it is only used between the servers for backups. I have connected 2 of the 10Gbit Ethernet ports to my switch in case I want to upgrade, but for now it is fast enough. Also, 10Gbit gear is expensive.
Great video! I am curious whether TrueNAS Scale would show different results, specifically in write performance. When I upgraded from TrueNAS Core to Scale on the same system, I went from 1.1GB/s to about 700MB/s. A lot of things could have been a factor, but I have done some research online and found others with the same conclusion.
Thank you so much! This is exactly the video I needed right now. I'm planning to buy 3x 18TB drives for RaidZ1 right now as electricity cost here is pretty high and I don't need a lot of resilience.
Any downside to using larger parity drives to future-proof expansion? Or is it better to just add drives in a set vdev size? My storage will consist mostly of media (movies, music, and TV) plus some security cam footage. I currently have six 12TB drives slated for the new build, and I've already filled nearly 12TB in my existing 4-bay NAS.
Loved the video. Tired of external USB drives eventually just failing and having to scramble; too many videos, photos, and such I need to keep. I have an 8-bay server (available) that I am going to configure, and I'm still torn between RAIDZ1 or RAIDZ2 with 2 vdevs of 2TB drives... Maybe go with 2 and get a JBOD later if necessary. But this did help narrow things down a LOT for me. Thanks again.
I'm starting from square one with learning how to build a NAS that I ultimately want to run Plex on. Starting out small with what I already have: three 1TB SATA SSDs, a 1TB external SSD that I'm not even sure I can use, and a 500GB SSD that I plan to use as the primary drive for the OS and any extra gobbledygook. A question I have is: would it even be possible to use the external SSD in RAIDZ1 with the internal drives, or would I be out of luck on that option? An extra terabyte of storage would be nice when starting at this low level.
It is not just the hard drive that can fail. The SATA controller can fail too, and if you are unlucky, one controller serves two ports and will take down your array.
Thanks for the vid, found it very helpful since I'm looking to expand my storage! I'm currently running a single RAIDz1 config with 4x8TB HDDs, but with a special metadata vdev on 2 mirrored NVME SSDs.
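For anyone curious how that special vdev gets attached: it's a one-liner on an existing pool, though note a special vdev is pool-critical (lose it and you lose the pool), hence the mirror. A sketch with placeholder pool and device names:

```
# Mirrored special vdev for metadata
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally route small records (<=64K here) to the NVMe mirror too
zfs set special_small_blocks=64K tank/data
```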
I have been thinking about this exact thing for my first at-home server setup. I am repurposing a 3770K build from back in the day, looking at getting four 12TB drives for storage, while downclocking the CPU and undervolting/underclocking the GPU for power efficiency, since that old a chip doesn't have Quick Sync... Based on what you've shown, it seems like RAID 5 may be the ticket for me :)
I have 7 HDDs in a RAIDZ2, and being in RAIDZ2 has already saved me from having to redownload files. I had 1 HDD fail, replaced it, and started to resilver, and then a 2nd failed about 70% of the way through; that was the quickest I've driven to my local Best Buy in years. I do have a copy of all that data on my desktop as well, but it's about 5TB of photos from the last 20 years, so you can imagine how long that would take to redownload if a 3rd drive had failed.
I went with StableBit DrivePool on Windows instead. 2x/3x duplication depending on the data; I don't care about parity or traditional RAID setups for the pool. Plug and play. Can change, add, or remove any drive; mixed drives, mixed sizes. Does read striping too. Currently at 50TB with 8 disks plus an NVMe drive for write caching/landing; just one big pool. A pool I can just add another disk to, and it will rebalance automatically and my total TB goes up. Also the data on the disks isn't unreadable, obfuscated, or proprietary; you can drop them into another system or go into the pool folders yourself, so it's another way to recover if something does go wrong. Super easy.
Wow, that was a really nice comparison of different RAID configs. I'm currently running a home server with 4x4TB drives in RAIDZ1. Afterwards, though, I threw in a crappy 128GB NVMe SSD that had been lying around for some time as L2ARC. I haven't done any testing, but it looks like Jellyfin started loading its web interface a bit faster. L2ARC (as well as the other vdev types) wasn't mentioned in the video, but I think it would be interesting to see how it and its size affect performance.
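For reference, adding (or later removing) L2ARC is non-destructive, since a cache vdev holds no unique data; a sketch with placeholder names:

```
# Attach an NVMe drive as L2ARC read cache
zpool add tank cache /dev/nvme0n1

# Remove it later with no risk to the pool
zpool remove tank /dev/nvme0n1
```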
Nice video, mate, very informative! One thing I don't like about ZFS and RAID-Z is that all disks need to be the same size, and once created it's hard to resize by adding an additional drive (say, in a RAID-Z1 you have 3 disks and want to add another because you're slowly running out of space). Personally, I prefer software RAID (like RAID 5 or 6) along with LVM, which I think offers more flexibility with disks of different sizes and configurations, plus dynamic reconfiguration when the physical drives change. You can assign your logical volumes to specific physical volumes based on the desired resilience or speed. Synology uses a similar approach with their SHR. What do you guys think?
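A minimal sketch of that mdadm + LVM stack, including the grow-by-one-disk step ZFS traditionally couldn't do (device and volume names are placeholders):

```
# 4-disk software RAID 5, then LVM on top
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 2T -n media vg0
mkfs.ext4 /dev/vg0/media

# Later: add a fifth disk and reshape the array in place
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
pvresize /dev/md0   # expose the new space to LVM
```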
Currently using Unraid for my NAS. I like the idea that if I fail on a rebuild I don't lose the whole array, just what was on that disk. I am looking into trying out TrueNAS Scale; going to have to see if I can find a good course slow enough for me. Setting up a machine for it as soon as I can get a case and a cache drive; I have everything else to try it. Never really worked with ZFS.
This was absolutely outstanding timing. I was just debating between going with 2 mirror drives in my HP EliteDesk Ubuntu server, or take a spare system in a Zalman Z9 Plus for a dedicated TrueNAS rig. Either way, I still don't have anywhere close to the amount of money I'd need for even 1 hard drive. So there's that 😂
I run btrfs raid1. I am on a budget, and being able to expand the RAID one disk at a time is invaluable. Also being able to shrink it (if a drive fails and there is space left, it is better to run smaller than degraded).
I like how Unraid manages drives because you can mix and match and expand by adding one drive at a time. As for how many drives you need, as many as you can afford. There's no such thing as too much storage.
Needs some definitions for us noobs.
* What is an IOP?
* What do the colors mean in your drive diagrams?
* I grasp the basic idea of a vdev - a virtual drive made up of one or more physical drives - but stuff you said implied you might mirror or stripe within the vdev and/or between vdevs? Did I understand that correctly? And if so, why or why not would I do either? Would I ever want to do both?
* What about drives of unequal size?
* What about used drives from eBay (that may or may not be enterprise grade)?
* A year in and I want to add capacity - assume I have more drive bays - what are the easy or best ways to do it? Am I stuck using the same arrangement I had when I started with a small budget? Is changing arrangement possible? Can it be done in-situ?
Awaiting my mini PC to arrive to build my home server/NAS with Unraid. It will have two 4TB NVMe SSDs: one will be a Time Machine backup for my Mac, the other will be the server, which I will experiment with... I may add further storage later (using a DAS). The mini PC (a Beelink EQR6) has a Thunderbolt 3 port and a couple of USB 3.2 Gen 2 ports, so there are some (external) upgrade options. There are dual gigabit LAN ports, but I could go for a 10Gb adapter if I want it later... though by the time I consider upgrading, my needs may have changed and I may opt for a different NAS. For now it's a relatively inexpensive, all-flash home server/NAS solution.
If I'm running anything important on storage and there is a chance I'll need decent IOPS, it's striped mirrors all the way for me. I can generally live with the lower capacity in order to get better performance and resilience. If it's mostly sequential reads, then a single- or double-parity setup is sufficient.
The only use I could really come up with for RAIDZ3 is a big media server with 12+ drives. Also, on a side note: when you're showing test results, say whether higher or lower is better. That would make it easier for your average Joe.
Wouldn't it be best to just use a Zx RAID with a single wide vdev? That makes it much simpler when you need to replace drives, and makes you less anxious about losing an extra drive in the Z2+ case. I think I would be pretty scared of losing data after losing a drive in a 2x Z1-vdev setup.
This video singlehandedly answered all the MOST OBVIOUS questions about ZFS and RAID that, for some reason, will not come up in web searches. Me: "What is the actual read/write performance of each raidz level?" Internet: "What is performance...? Anyway, then there's raidz3 with 3 parity drives..."
Can you explain how to network a NAS? I just bought a QNAP direct-attached storage device because I didn't want to get my NAS hacked and open to the Internet, but with the ability to attach it to a NAS down the road.
In my situation (plex server with 2-3 streams at most simultaneously, wanting to spend the least on power and hardware) it always comes back to using a single large drive (~20TB) with multiple external manual backups.
I am getting a 6-bay NAS from Ugreen next month. I have made every effort possible to use only flash storage; for the past 7 years of video editing I have been using NVMe drives. However, after upgrading to the Canon and shooting intraframe 4K, I am running out of space and options, lol. I want to get the highest-capacity drives, but it is literally insane for me to spend $1,400 on drives.
Great video! My NAS is in development: making choices, trying programs; eventually a backup for photos and streaming media. I was aiming for a low-power build and started with the ASRock N100M, with an HBA running at 50% due to PCIe constraints. So I net 6 drives, one of which is the SSD on the motherboard. Any suggestions would be appreciated.
I've been running a Z3 with 7x 8TB drives for about 29TB of storage. I'd like to grow it some, but I also need to update the drives, so I may just end up with two smaller Z3s mirrored so as to ensure my speeds stay up. Idk yet. The charts were helpful for recognizing trends; even if they aren't exactly true to what I would find, they're still helpful for deciding. I have archival video and audio that cannot be replaced; that's why the Z3. I prefer to be more paranoid about lost drives than about speed.
Sounds like a good setup. I have an 8x 4TB setup in RAIDZ2, replicated to a 6x 6TB RAIDZ2 and a 3x 14TB RAIDZ1 on two separate machines. I'm as paranoid as you, so it's reassuring to know I have 3 copies, and it's very easy to do; it just costs money, though you can spread drive cost over time, and that's probably safer as well (not getting a batch of drives and increasing the chance of multiple failures). Not necessarily advice, just saying what I do. Mine holds both personal and business data, and the business data goes to Backblaze as well.
This video was insanely helpful!! I didn't fully understand why avoiding wide vdevs was a good thing until you explained it in this video, so thank you for that. :)
4x 1TB Samsung 870 EVOs for VM storage in Z1, but I need a large backup volume. Currently I'm building an external DAS connected over SAS to an HBA passed through to a TrueNAS VM. I'm thinking 6-8 drives currently.
If you have 1 disk with a 5% failure rate per year, that is the chance it will fail in a year. If you have 100 of those, the chance that at least one of them will fail in a year is 99.4%.
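For anyone who wants to check the arithmetic (assuming independent, identical failure rates):

P(at least one failure) = 1 - (1 - p)^n = 1 - 0.95^100 ≈ 1 - 0.006 ≈ 99.4%

The same formula with n = 8 gives about 1 - 0.95^8 ≈ 34%, which is why even a small NAS should plan on a drive dying eventually.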
I run a 5-bay Synology using RAID 5 with 8TB drives, losing one drive's capacity to the RAID. I use Cloud Sync to synchronize my volume in real time to Backblaze. Then overnight I back up to a second Synology NAS at my second house 50 miles away over a VPN tunnel; that NAS uses RAID 5 over 4 disks. I also keep a hot-swap disk and sled unplugged next to each NAS (they are identical in model and size), so I can swap in a replacement as soon as a disk fails. Both NAS devices have 2x 2TB cache SSDs.
I run OMV in a Proxmox VM with direct access to two 12TB drives, which are mirrored. In the near future I would like to add a 22-24TB drive and mirror it against the two 12TB drives striped together. Is this a good setup idea? And how would I go about setting it up with the data still on there? Any help and pointers are appreciated!
I'm hoping someone here can answer my question. I plan to rebuild my NAS with 8x 16TB drives in total, across two striped vdevs. My problem is that I have 25-ish terabytes of data on a couple of drives that I want to use for the planned array. Can I build one vdev of four drives, copy the data across, then use the donor drives to create the second vdev and add it to the striped pool without losing the data stored on the first vdev? Or does creating a striped pool wipe all 8 drives?
Inspired by the Supermicro mini server iXsystems sells, I did my own DIY version. Based on a Supermicro X11SCL-iF ITX motherboard, an E-2236 6-core/12-thread Xeon, 32GB of ECC RAM, a 32GB DOM for boot, a 256GB NVMe drive for apps, an LSI 9300-8i HBA with 4 Seagate 6TB 12Gb SAS drives running in RAIDZ1, and 2x SSDs (1 for Windows, for certain things, 1 just blank for whatever), all in a Supermicro mini tower chassis.
I am currently using Unraid with XFS and drive pooling with a single parity drive. I started this way because I originally had a lot of different-sized drives. After a flood and rebuilding from insurance money, I could probably use ZFS in Unraid now. (I had an off-site backup of my data.)
Stop reading my mind.
It's like you're looking at my Google history and making videos about all the things I search for.
Also, striped, no mirror, is the best layout. Speeeeeed.
(obligatory /s since someone might think this recommendation is serious)
weeee
You are joking right? 😂 Stripe?
One man's search history is another man's treasures.
While being another man's nightmare.
True, RAID 5 is still a good option 😅
You should really specify that's a joke; some people will take you seriously.
me watching this video with no spare drives nor a spare device as a nas
Not as bad as watching it with 12 3TB drives full of data that you'd love to put in your machine, while still waiting for new drives to move the data to first before putting them into the array.
Same
Same, except I'm too broke to buy a few parity drives and too scared to move all my data to a single drive for the time being, because with my luck it will fail and I'll lose everything.
I have 4 4TB WD Red NAS hard drives but no device to put them in. My old PC doesn't turn on anymore.
@@somebody943 so 2 mirrored drives, for luck?
A very satisfying and comprehensive video. Not as exhaustive as some articles out there, but concise and comprehensive enough for 20 min. Good job with the tests.
Are there any more exhaustive articles out there that you can recommend to supplement this video? Thank you in advance!
@@kevinoneill2170 moderation doesn't let links through, sorry ):
Good informative vid! I myself am running Proxmox with a TrueNAS Core VM: six 4TB hard drives in a RAIDZ2 arrangement through a passed-through HBA. ~14.5 tebibytes with redundancy peace of mind, plus Backblaze cloud backup for the essentials like the family photo album. I'm a happy camper.
I am new to this, and I know what that means, but I have a hard time wrapping my head around how it works. What would you say is a good place to look for beginners?
@@wantu2much Hm, my reply kept getting deleted on mobile (ReVanced); hope this one sticks.
Depends on what your setup ultimately looks like. If you're like me and virtualize TrueNAS, it really wants bare-metal control of the drives, so you'll want something like an LSI host bus adapter (you can buy one on eBay), connect the drives to that, and pass the entire PCI card to your TrueNAS VM. I bought an LSI 9218-8i for about 66 bucks. Note it has to be in IT mode; the auction should say so.
If you connect the drives directly to the motherboard's SATA connectors, you'd need to pass the whole SATA controller to TrueNAS. I didn't want to do that, as I had SATA SSDs for VM and container storage for Proxmox to use, so HBA it was.
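For anyone trying this, the passthrough itself is just a couple of commands once IOMMU is enabled; a sketch assuming the HBA sits at PCI address 01:00.0 and the TrueNAS VM is VMID 100 (both placeholders for your own system):

```
# Find the HBA's PCI address
lspci | grep -i lsi

# Hand the whole card to the VM (requires intel_iommu=on or amd_iommu=on
# on the kernel command line, and the VM powered off)
qm set 100 --hostpci0 0000:01:00
```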
How did you decide between installing TrueNAS as a VM over bare metal? I'm trying to decide that very question. I'm not sure what other VMs I might run on the same box...
@@CptBlackEye Good question. I was deliberating between TrueNAS Scale and Proxmox for a good while as I was getting hardware together. On the one hand, I could have a fairly recently released NAS solution that can do hypervisor things in TrueNAS Scale, or I could opt for two solutions, a hypervisor and a NAS that only do those specific tasks, separately.
I decided on the latter because Proxmox has been around for 15 years, FreeNAS (now TrueNAS Core) a little longer than that, so they have plenty of stability and more importantly documentation and discussion. TrueNAS Scale is comparatively very young and when it comes to the backbone of my system I prefer to lean on projects that have long histories and are known to be stable and very good at their particular task, and again docs and forums should I need them for any "advanced maneuvers". So by using Proxmox for an OS and TrueNAS Core in a VM I could have the best of both worlds.
Thanks for asking!
@@CptBlackEye I had a long comment explaining it that appears to have been autofiltered or something. Short answer, then: I wanted a long-developed, stable hypervisor and a long-developed, stable NAS, with plenty of docs and community experience for each. So the answer was Proxmox with TrueNAS Core in a VM, for the best of both.
Honestly the best, most straightforward and simple video for understanding ZFS. It's pretty complicated and can be annoying, especially on the first deployment. I'm currently running a 5-wide RAIDZ1 with a hot spare, but I'm about to reconfigure to either two RAIDZ1s or a 3-way mirror; not sure which, since I run a mixed workload of VMs and movie storage.
Great video! I went through tons of research last year and decided to use 4 vdev mirrors in my 8 drive NAS. The 4TB drives were recycled from a mining venture I shut down, and I chose the 4 vdev mirrors because I can add capacity just by replacing 2 drives at a time, which I just did by replacing 2 of the 4TB drives with 10TB drives. Resilvering took about 7 hours for each drive swap. Unbalanced vdevs are not ideal, but it works fine for my usage.
Perfect timing! I'm sitting here with a NAS on my to do list. I picked up a LSI SAS 9300-16i (HBA) and 10 identical 500gb laptop drives to practice NAS building and operating. Being new to ZFS (first time installing), this video really helped!
There is a bit more risk of drive failure with identical drives, i.e. multiple failures around the same time.
@@michaelbouckley4455 Good observation; however, I believe that since these are used laptop HDDs, the likelihood of batch failure is reduced. I'm strongly leaning towards a pool with two 4-drive Z1 vdevs, leaving me 2 drives as spares. While these are 2.5" drives, the case does have space for me to change over to 4x 3.5" NAS drives later.
I worked for large corporations as a network administrator. The servers had various types of RAID configurations. I've also been an amateur photographer longer than I worked with computers. So, I have thousands of photographs. At first they were on a single hard drive on my computer. Then I also had them on an external drive. Now they reside on my local computer and a Synology NAS. The NAS is configured with mirrored drives. If I was a professional photographer I would back up my files regularly and also use off site storage. But, what I use now works well enough for me.
Get a cheap refurbished drive and make an offsite backup. Thank me later. 😊
Great video 👍
14:00 -- the way you laid out the four FIO Benchmark commands 'vertically' is so pleasing visually.
It perfectly exemplifies how you understand that refined artistry is needed with technical topics.
Kindest regards, neighbours and friends.
P.S. The production quality remains outstanding.
I currently have 3 drives and have bought 3 used drives of the same capacity, and I've been at an impasse over which direction to go, both in NAS OS and in resilience level.
The video was extremely helpful.
If you ask the datahoarder subreddit everybody should use striped vdevs because if your HDD fails you can just recover from your backup.
And there's logic to that. Question remains, what do you store your backup on :)
@@BoraHorzaGobuchulnot a striped backup pool 😂
Casually building a NAS with 2.5" 1TB HDDs. Hoping to use a 4TB drive for backups. I was wondering what to use as a third backup: Blu-rays or tape? Which is cheaper and easier?
@@MatteoComensoli Be advised optical recordable media might not be as great in terms of longevity as advertised. I've often encountered recordable optical media that failed to read after several years of proper storage. Unlike factory-pressed discs, recordable optical media relies on a different technology for its dye layer, which can deteriorate much sooner than expected.
Tape is also durable, until it isn't. Sometimes the rust starts flaking off the base layer.
So for backup, I'd use a third HDD.
Kind of agree. But I wouldn't trust a hard drive that hasn't been powered on for 20 years either...
This is outstanding timing. I needed to explain this to someone I know and taking the time was hard. With a video he can watch it until he understands all the details. Thank you so much!
Same. I was hoping to get a Dell PowerEdge R730, as it would be more than what I need, so I could use it for my media server, game servers, and even backup data, if I ever manage to sort through it first, that is... I was thinking of just buying several 4TB SSDs when I can and then running the RAID 50 that it supports.
I've been running TrueNAS for 5 or 6 years on an old Supermicro board in a Rosewill rackmount case. I started with an LGA1150 Celeron CPU and like 8GB of RAM, upgraded to 6x 14TB drives when spinning rust was cheap, and have recently updated it to 32GB (the max :D) and bought an E3-1271 to put in as well. SAS drives got cheap all of a sudden, so I now have 6x 4TB SAS drives, and I'm waiting for the SAS HBA card to run them with.
It's been an adventure, and I've learned a lot. I also still don't trust it, because I have all of my important data backed up to cold storage every month or so. I need a legit backup, but that requires more research! :D
Great video! I love the mix between benchmarking / deep dive and practicality. This is the kind of thing I come to YouTube for: a lucid, conversational explanation of high-level concepts and tradeoffs, with some helpful pointers to dive deeper if I need to. A+ stuff in my opinion. Exciting to see that MS-01 make another appearance as well. It seems like that machine made quite a splash in the homelab community; it's a pretty amazing little machine.
I'm running a 5-disk setup of (very cheap, very used) 2.5" drives in RAIDZ1 in a virtualized Truenas install on a proxmox VM. This setup has been hosting streaming media for a couple years without issue. Since I've never dealt with a drive failure seeing those resilvering times makes me a bit nervous, so that bit was helpful to see called out.
This video could not be more perfect. I'm currently in the process of figuring out a NAS build and have been back and forth about how to balance reliability, performance, capacity, and cost. The easy to understand explanations about # of drive failures, streams and IOPS, and capacity helps figure out which configurations offer the best balance for my needs! Obviously there's a lot more details that could be added, but for the purposes of an introductory explainer this hit the nail on the head. Seems like the 2x Raidz1 (3/4 drive vdevs) and 2x raidz2 offer the best protection while maintaining a reasonable speed and capacity.
As a person planning to build a DIY NAS, this is just what I was looking for. Nice!
Literally working on setting up/configuring my TrueNAS Scale server and needed this explanation. Thank you!
This was a great video. I feel like it cured some form of decision remorse I had when setting up my TrueNAS pool. Thanks for the video.
I moved to Unraid because it spins up fewer drives when I access data and makes less noise (I play/work in the same room the NAS lives in).
But it still spins up a minimum of 3 drives when set to dual parity.
Thank you for this super informative video. It really cleared up a lot of things for me. I have been struggling to decide if I even want to build a NAS in the first place, and there really isn't a perfect answer. It's been interesting going through this entire circle just to realize that DAS might still be the right thing for my use case.
This is a great explainer, however I'm mildly disappointed that backups only got a token mention at the end of the video. It would be better if it was emphasized that ZFS fault tolerance is not a substitute for backups, and terminology like "if you lose a vdev you lose the whole pool and your data" is better conveyed as "if you lose a vdev, you lose the pool and have to restore from your backups". I hang out quite a bit on zfs forums (because I'm a heavy ZFS user) and I can't tell you how many times posts pop up where the user had something bad happen to their pool and wants help to try not to lose their data because they didn't back anything up.
ZFS is great, but ZFS fault tolerance is not a replacement for proper backups!
Was going to comment something similar. "Raid is not a backup". Now uptime is great, of course, so is speed assuming you're not limited by your network speed, but backups are critical.
While of course your point is valid, I'd guess nearly all of the intended audience of this video is already well aware that "raid (of any type) is not a backup". Continuing to pound that drum honestly starts to get a little annoying after a while. At some point you have to move on to more advanced topics and trust that your viewers are keeping pace with you. If he covered all the basics in every video, they would be unwatchable to most viewers. I think he struck the right balance by reminding everyone to back up their data at the close of the video.
RAID is redundancy...
Plus...he needs another topic to cover for more content :)
@@haydenc2742 raid provides fault tolerance for up-time, not permanent data persistence. Backups (done properly) provide data persistence.
Really appreciate your 101 videos. You have such a knack for breaking down complex ideas into easy to understand terms.
It's interesting that most of your performance benchmarks have sequential write bandwidth higher than sequential read bandwidth. For most raw disk drives, the sequential read generally outperforms the sequential write performance while random writes can outperform random reads. As you stated, it looks like ZFS still has some caching and write coalescence going on. Excellent video. Thanks ☺️
Good video explaining the differences.
There's always a lot of factors to consider when setting up any type of NAS. One of those being how many physical drives you have in your NAS to begin with. What I've found with drive capacities is there is a sweet spot with cost vs capacity. That seems to shift up as larger drives get cheaper over the years.
The last time I set up a new NAS, a couple of years ago, 8TB was the sweet spot for cost per TB. As I only had 4 drives, I'm limited on capacity (unless I add an external cab, and that has its own difficulties). When I was purchasing drives, my local store was out of 8TB NAS drives, so I ended up buying a combination of NAS and desktop drives. Eventually, as they came back in stock, I replaced the desktop drives with NAS drives and put the desktop drives into an external cab. The NAS was configured as RAID 5, so if I lost a drive, no biggie. The external cab was configured as JBOD. My reasoning was that the external cab held a complete mirrored backup. Since it's not on all the time, those drives should outlast the NAS drives. So if one drive fails in the NAS RAID, it's easily replaced. If disaster happens and more drives fail, there's always the external cab, and since it's not in a RAID, it can be read by any OS to restore from.
And as you stated, the more drives you have running, the higher cost of electricity it uses. So while it might be nice to have your NAS running in RAID 6 or mirrored for redundancy, realistically you can achieve something similar except your backup is offline. It just means you need to be proactive in doing regular backups periodically.
And BTW With a lot of consumer NAS' now coming with SSD capabilities, it really helps eliminate a lot of the bottlenecks with reading and writing to disk arrays no matter how you have the drives configured.
What I've done before is use striped vdevs for active-use data, but with active replication to something with RAIDZ2 for immediate backup. That way all my capacity and performance goes to making work faster, but everything is immediately backed up to a more resilient pool. If the striped pool went down, I would work off the other until the weekend to rebuild. However, I wouldn't recommend this at all; the jank was honestly more effort than it was worth, and you should just work off something more resilient in the first place. Call the marginally increased access time the CODB and call it a day.
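That replication pattern takes only a couple of commands with native ZFS tools; a sketch assuming a fast striped pool named fast and a RAIDZ2 pool named safe (both names are placeholders):

```
# Initial full replication of a dataset snapshot
zfs snapshot fast/work@base
zfs send fast/work@base | zfs recv safe/work

# Later: send only the changes since the last common snapshot
zfs snapshot fast/work@hourly1
zfs send -i @base fast/work@hourly1 | zfs recv safe/work
```

TrueNAS wraps this same mechanism in its scheduled replication tasks.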
Last year, I rebuilt my server into a Proxmox host running a TrueNAS VM with a four-drive RAIDZ1 array of Toshiba 4TB drives, and it has been fantastic. And on older hardware, too! 😊
Another consideration is growing the pool over time without restarting from scratch. With striped mirrors it's just a matter of adding 2 more drives.
I've actually got a media server running 4 WD Red Plus 4TB drives in a RAIDZ1, going strong for the last two and a half years; best choice I made, imo. It's built from a bunch of leftover parts from Ryzen upgrades and an LSI SATA/SAS controller, no hardware RAID. It's gone through a lot of revisions: as I've upgraded my main PC, the server got the old CPUs (first 1800X, now 3900X) and RAM, and I added a used RTX A4000 after finally retiring the RX 550 that had been the center of Plex transcoding; overkill, sure, but definitely worth it. Plus, when transferring files, I've yet to hit the limit of the drives; the 1Gbps network gets maxed out first. My next upgrade is definitely going to be the network itself.
I am looking into building a new NAS. Currently the best option for me with 4x 16TB drives seems to be 2 mirrored vdevs, surviving up to two drive failures (one per mirror), with 32TB usable (with ZFS, really more like 29TiB). This way I have good failure tolerance (I will back up the data anyway), and I can extend the NAS easily with another 16TB mirror in a few years, ending up with 48TB (about 44TiB) while getting good performance. For me this seems like the sweet spot right now. If I had more drives I could think about a RAIDZ2 with multiple vdevs, but then expanding would be a lot more difficult. Also, electricity isn't cheap where I live, and with 4 drives that consume about 30 watts under load and probably around 4W when spun down (yes, I spin my drives down), this is not too bad imo.
Just need to think about my backup though...
Hey man, I think you need to double-check your test methodology, or maybe you swapped your read and write data in the charts, because it doesn't make sense for writes to be faster than reads. Writes require you to checksum the data and compute parity on top of the raw write, so they should take longer than just reading the data off the disks. I think it's likely some kind of write caching is keeping you from raw disk performance, and you're really seeing memory write speed, just as ARC gave you memory read speed.
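For what it's worth, ZFS buffers async writes in RAM and flushes them in transaction groups, so short write benchmarks really can post numbers above raw disk speed. A sketch of how the methodology could be tightened (pool/dataset names and sizes are placeholders; the file size should exceed system RAM):

```
# Keep ARC from serving reads back out of RAM
zfs set primarycache=metadata tank/bench

# fio run with an explicit flush at the end, so buffered
# writes are counted inside the elapsed time
fio --name=seqwrite --directory=/mnt/tank/bench \
    --rw=write --bs=1M --size=64G --ioengine=psync --end_fsync=1
```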
Perfect timing, I’m starting my build next week when the case arrives. I do wish you went more in detail about different types like what’s the difference between ZFS vs BTRFS (did I spell that right?)
I know you weren't looking for an answer from someone else, but ZFS is block-level RAID and it imposes limits around how it's implemented. BTRFS uses file-level RAID, which has a nice advantage: it's just working with files. So if you run RAID1 normally in BTRFS it creates a copy of each file across disks, always ensuring 2 copies, but you can make that 3 copies, and there is no issue with drive expansion since it's all based on files. You do have to rebalance the drives once a disk is lost so you can restore the RAID. Also you have btrfs scrub to prevent bitrot. I'm personally running BTRFS in RAID1 with just file mirrors across disks. I have used ZFS in the past, and I don't think either is better or worse than the other, but you don't have the RAM requirements with BTRFS that ZFS has.
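For anyone curious what that looks like in practice, creating a btrfs RAID1 across three disks is a single command (device names are just examples), and the raid1c3 profile keeps three copies on kernels that support it (5.5+):

  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
  mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd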
I've had a 24 bay NetApp storage shelf sitting for a few months that I've been meaning to set up on a TrueNAS server, but have been stuck on how best to set up the pool. This video helped a lot! I decided on 4 vdevs of 6 drives in RAIDZ1.
Thank you for the video, and your hard work. It's something I want to set up for myself but find kind of overwhelming. You make it easier to understand.
What we need is some filesystem that does different RAID levels on different directories, with a shared pool of storage. So you can (for example) effectively have RAID 1 for critical documents and RAID 0 for downloads.
It is called LVM thin provisioning.
@@terrydaktyllus1320 On an unlimited budget, we would indeed use RAID 1 for everything. But budgets in the real world are far from unlimited. For the most part, it would be silly to use extra disk space to back up downloads since if lost, they can simply be downloaded again, RAID 0 is perfect for that. (Archival downloads that are no longer available is a different matter.)
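ZFS can at least approximate this per dataset with the copies property. It's not true RAID 1, since the extra copies can land on the same disk, but it does give different directories different redundancy levels on one shared pool (pool and dataset names made up):

  zfs set copies=2 tank/documents   # every block stored twice
  zfs set copies=1 tank/downloads   # no extra copies, just re-download if lost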
I generally advise most people to run a simple mirror till their capacity needs outstrip the highest or second highest capacity drive they can buy. Having said that, this is a great breakdown once people push past that threshold. One other thing to consider might be the impact of network links on all this, or maybe a follow-up video that went deeper about the impacts of caching on saturating network links.
This was very helpful! But I have a couple of questions left: What are my options (and limitations) when I want to expand my storage? Can I add one disk to a vdev? Do I have to create a new vdev with multiple disks? Do they need to be the same size? What are the differences between TrueNAS and Unraid? Maybe a follow-up video? ;-)
There are lots of videos on the topic. So far you can't add disks to vdevs (it was mentioned in the video); you can only add vdevs, and they should have the same number of drives. There's talk that there will be easier expansion functionality in the future, but it's only talk so far.
One point I would make from experience: with hot swap bays the operative word is HOT. High capacity drives physically fill the entire sled and there's basically zero airflow, so they will get pretty toasty, especially when doing parity checks. It got so bad that I switched away from my case with hotswap bays in favour of a custom mounting solution that stays nice and cool even when it's absolutely thrashing the drives. As a bonus I took the opportunity to switch to a better SATA card, which cut the parity check time by 75%.
Literally just received 4 extra HDDs to expand my 4 drive TrueNAS Core setup. The info in this video came in very handy. I had 95% decided on the config I was going to go for, but your results helped me confirm what I wanted to do. Think I'll also go from Core to Scale and swap out the motherboard and CPU for something a little newer that I have from an old build.
Absolutely one of your best videos to date, sir! Well done!
What are your thoughts on drive sleeping? If I access my 40-50 drive NAS half a dozen times a day, is sleeping the drives when not in use smart due to power consumption, or am I risking their longevity by sleeping them? I have had NASes for over a decade and always disabled sleep, but after seeing your wattage measurements I'm thinking maybe I'm making the wrong choice with the amount of drives I'm running.
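If you do decide to try spindown on Linux, it's typically just an hdparm standby timer per drive; values 241-251 map to 30-minute steps, so 241 means standby after 30 minutes (device path is an example):

  hdparm -S 241 /dev/sda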
I had a setup with 3x 24-drive SAS chassis attached. They all ran mirrored pairs, and no two drives of a pair were in the same chassis, so you could lose an entire chassis and the array would be highly compromised but still up and running. This ran iSCSI/Fibre Channel storage for virtual machines before SSDs were large/affordable enough to just make the entire array out of SSDs.
Thanks for this overview!
Did you mix up writes and reads in all the diagrams of the fio tests? Reads should almost always be equal to or higher than writes.
That's what I was wondering. It looks like he flat out swapped them...
Most eye-popping part of this video (and I appreciate this so much) was the part about wattage and pricing on electricity. Thanks for that pro-tip because I've been obsessing over larger drives. Does the pricing principle/outcome differ with SSDs?
Great video!
I think that this is actually one of the best explainer videos, on this topic, that I've seen.
Amazing video. I'm about to set up my first NAS and this helped me A LOT with deciding on the storage layout.
Actually, when a hard drive is close to failure, the RAID card (or HBA) will tell the server that the drive now has a predicted failure (PF). This happens when a drive takes too long to write to a specific sector; the RAID card notices and adds 1 to the predicted failure count. Though you can reset this count and make the PF light go off, that is a really bad idea, because the drive still has a bad sector and will eventually fail after some more hours of operation. If you have 2 PFs in a RAID 5 (or Z1), you'll be praying hard that another drive doesn't fail during the rebuild process. It's always recommended to keep at most as many PFs as your RAID's redundancy allows (e.g. RAID 5 > 1 tolerated failure > 1 PF max). You should order a new drive immediately and substitute it.
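On Linux you can also check what the drive itself is reporting, rather than trusting only the card's counter, with smartctl from smartmontools (device path is an example):

  smartctl -H /dev/sda   # overall health verdict
  smartctl -A /dev/sda   # attributes like Reallocated_Sector_Ct and Current_Pending_Sector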
What are you using to encode your videos? The audio has compression artifacts I rarely hear on YouTube.
This was a very good introduction to storage and you covered a lot of important topics. I will definitely be sharing this one.
Why didn't the mirrors scale up? Do you have any data on the internal architecture of the PCI bus and how the drives are connected to the host? Is there more than one HBA in the machine? Curious if the mirror performance would increase if you, for example, set up the mirror such that the drives were on different HBAs, assuming there is more than one HBA.
Brilliant discussion. Gets me a long way toward setting up my NAS. Thank you!
I just set up a new TrueNAS server and went with a RAIDZ1 configuration with three 14TB Toshiba drives for data storage and a mirrored NVMe vdev for metadata and small files. This pool is for large amounts of data like video files or disk images. In addition I added an NVMe and an SSD pool for data which is accessed more often, for faster access and less power draw. For backup I run a second, smaller TrueNAS server, because RAID is not a backup.
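For reference, a special vdev like that is added with something along these lines (pool/dataset names and the small-file cutoff are just examples):

  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
  zfs set special_small_blocks=64K tank/media   # blocks up to 64K land on the NVMe mirror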
The power draw difference between an HDD and an SSD is minimal. SSDs only make sense if they're datacenter class; QLC consumer ones fail much faster than HDDs.
This for the most part also applies to btrfs. Btrfs has a few advantages, like file-based RAID, which means drives can vary in size, like RAID1 with a 3TB + 1TB + 1TB + 1TB; the default config will balance writes across all drives, making the 3TB drive a live mirror of the other 3. Another awesome feature is dynamic arrays: done right, you can grow and shrink an active array. And I've never broken a btrfs raid5 or 6, and I have tried, so I don't believe a regular user would trigger the mythical write hole; and even if you did, aren't you glad you had a backup.
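Growing such an array is pleasantly boring: add the device, then rebalance so the RAID1 constraint is re-satisfied across all members (paths are examples):

  btrfs device add /dev/sde /mnt/pool
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool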
About your questions in the intro: 6 to 8 drives in a striped mirror is what I prefer. I normally use regular SATA hard drives, but with a proper RAID controller and its features.
I buy the cheapest drives I can find to match my targeted capacity, and I match my slowest drive's data speed to the speed of my Ethernet connection.
In my case I have an old 3.5 inch 12-bay server with a single 6-core Xeon and a SATA RAID controller (SATA, not SAS). The server has a 4-port 10gbit NIC and my PC has a 5gbit NIC, so 5gbit is the minimum speed I try to reach. I use 8 drives at the moment, and a typical HDD reaches 80-160 MB/s.
I could use SSDs, but I use the SSH/SFTP file transfer built into Ubuntu, and because of how that file transfer works I don't expect much random access; only one file is transferred at a time.
I use this method because it works seamlessly in the file manager, so I have no need for SMB or anything similar. In fact all of my servers and computers run Ubuntu.
I don't have other systems connected to the same NAS besides my phone and TV, and for those I use the Plex media server I have installed for music and video.
The TV uses a 1gbit connection but can't use it at full speed; for some reason it only manages about 80 MB/s, and my WiFi to my phone is only 2.4GHz. So yeah, my PC is the fastest device in my network, and my servers don't keep a constant connection between them, only for backups. In theory I could take advantage of 10gbit, but at the moment it is only used between the servers for backups.
I have connected 2 of the 10gbit Ethernet ports to my switch in case I want to upgrade, but for now it is fast enough. Also, 10gbit gear is expensive.
Great video! I am curious if TrueNAS Scale would have different results, specifically in write performance. When I upgraded from TrueNAS Core to Scale on the same system, I went from 1.1GB/s to about 700MB/s. A lot of things could have been a factor, but I have done some research online and found others with the same conclusion.
Thank you so much! This is exactly the video I needed right now.
I'm planning to buy 3x 18TB drives for RaidZ1 right now as electricity cost here is pretty high and I don't need a lot of resilience.
The best youtube video layout needs to involve Hardware Haven
Any downside to using larger parity drives to future-proof expansion? Or is it better to just add drives in a set vdev size? My storage will consist mostly of media (movies, music, and TV), plus some security cam footage. Currently have (6) 12TB drives slated for a new build; already filled nearly 12TB in my existing 4-bay NAS.
Loved the video. Tired of external USB drives eventually just failing and having to scramble. Too many videos, photos and such I need to keep.
Have an 8-bay (available) server that I am going to configure and still torn between RAIDZ1 or RAIDZ2 with 2 vdevs of 2TB drives... Maybe go Z2 and get a JBOD later if necessary. But this did help narrow things down a LOT for me. Thanks again.
RAIDZ2 is sometimes the best precaution to take, but if it's your first experience, practice and don't trust the machine/drives until you're ready.
I'm starting from square one with learning how to build a NAS that I ultimately want to run Plex on. Starting out small with what I already have, so I have three 1TB SATA SSDs and a 1TB external SSD that I'm not even sure I can use, and a 500GB SSD that I plan to use as the primary drive for the OS and any extra gobbledygook.
A question I have is: would it even be possible to use the external SSD in RAIDZ1 with the internal drives, or would I be outta luck on that option? An extra terabyte of storage would be nice when starting at this low level.
At 14:07 it seems the seq. write parameters are not correct; they seem to be the same as the seq. read ones.
It is not just the hard drive that can fail. The SATA controller can fail too, and if you are unlucky, a single controller serving two ports can take two drives down with it and bring down your array.
Thanks for the vid, found it very helpful since I'm looking to expand my storage!
I'm currently running a single RAIDZ1 config with 4x8TB HDDs, but with a special metadata vdev on 2 mirrored NVMe SSDs.
I have been thinking about this exact thing for my first at-home server setup. I am repurposing a 3770K build from back in the day, looking at getting four 12TB drives for storage, while downclocking the CPU and undervolting/underclocking the GPU for power efficiency, since a chip that old doesn't have Quick Sync... Based on what you've shown, it seems like RAID 5 may be the ticket for me :)
Hey thanks for the really interesting NAS drive video. When I get to build a NAS I think going with mirrored vdevs makes a lot of sense.
I have 7 HDDs in a RAIDZ2, and being in RAIDZ2 has already saved me from having to redownload everything: I had 1 HDD fail, I replaced it and started to resilver, and a 2nd failed about 70% of the way through. That was the quickest I've driven to my local Best Buy in years.
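For anyone who hasn't been through it, the replace-and-resilver dance in ZFS is roughly this (pool and device names made up):

  zpool replace tank /dev/sdc /dev/sdf   # failed drive, then its replacement
  zpool status tank                      # watch the resilver progress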
I do have a copy of all that data on my desktop as well, but it's about 5TB of photos from the last 20 years, so you can imagine how long that would take to copy back if a 3rd drive had failed.
I went with StableBit DrivePool on Windows instead. 2x / 3x duplication depending on the data; don't care about parity or traditional RAID setups for the pool. Plug & play. Can change, add, remove any drive; mixed drives, mixed sizes. Does read striping too. Currently at 50TB with 8 disks plus an NVMe drive for write caching / landing; just one big pool. A pool that I can just add another disk to, and it will rebalance automatically and my total TB goes up. Also the data on the disks isn't unreadable, obfuscated, or proprietary; you can drop them into another system or go into the pool folders yourself, so it's another way to recover if something does go wrong. Super easy.
Wow, that was a really nice comparison of different RAID configs. I'm currently running a home server with 4x4TB drives in RAIDZ1. However, I've since thrown in a crappy 128 gig NVMe SSD that was lying around as L2ARC. I haven't done any testing, but it looks like Jellyfin started loading its web interface a bit faster.
L2ARC (as well as the other types of vdevs) wasn't mentioned in the video, but I think it may be interesting to see how it and its size affect performance.
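Adding one is trivial if you have a spare SSD to experiment with, and cache vdevs can be added and removed at any time without risking the pool (names are examples):

  zpool add tank cache /dev/nvme0n1
  zpool remove tank /dev/nvme0n1   # safe to pull back out later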
Nice video, mate, very informative! One thing I don't like about ZFS and RAID-Z is that all disks need to be the same size, and once created it's hard to resize by adding an additional drive (say in the case of RAID-Z1 you have 3 disks and want to add another because you're slowly running out of space). Personally, I prefer using software RAID (like RAID 5 or 6) along with LVM, which I think offers more flexibility when dealing with disks of different sizes and configurations, plus dynamic reconfiguration when the physical drives change. You can assign your logical volumes to specific physical volumes based on the desired resilience or speed. Synology uses a similar approach with their SHR. Guys, what do you think?
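To illustrate (volume group name and sizes made up): with LVM RAID you can pick resilience per logical volume from the same pool of disks:

  lvcreate --type raid1 -m 1 -L 500G -n critical vg0   # mirrored LV for important stuff
  lvcreate --type raid5 -i 3 -L 2T -n bulk vg0         # 3 data stripes + parity, needs 4 PVs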
Big benefit of striped mirrors vs RAID-6 (Z2) is the lack of parity calculation; mirrors are far less strain on low-power systems and very large arrays.
Currently using Unraid for my NAS. I like the idea that if I fail on a rebuild I don't lose the whole array, just what was on that disk. Am looking into trying out TrueNAS Scale. Going to have to see if I can find a good course slow enough for me. Setting up a machine for it as soon as I can get a case and a cache drive; have everything else to try it. Never really worked with ZFS.
This was absolutely outstanding timing. I was just debating between going with 2 mirror drives in my HP EliteDesk Ubuntu server, or take a spare system in a Zalman Z9 Plus for a dedicated TrueNAS rig.
Either way, I still don't have anywhere close to the amount of money I'd need for even 1 hard drive. So there's that 😂
I run btrfs RAID 1. I am on a budget, and being able to expand the RAID one disk at a time is invaluable. Also being able to shrink it (if a drive fails and there is space left, it is better to run smaller than degraded).
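Shrinking really is supported directly; btrfs migrates the data off the device and rebalances as part of the removal (path is an example):

  btrfs device delete /dev/sdd /mnt/pool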
I like how Unraid manages drives because you can mix and match and expand by adding one drive at a time. As for how many drives you need, as many as you can afford. There's no such thing as too much storage.
Needs some definitions for us noobs.
* What is an iop?
* What do the colors mean in your drive diagrams?
* I grasp the basic idea of a vdev - a virtual drive made up of one or more physical drives - but stuff you said implied you might mirror or stripe within the vdev and/or between vdevs? Did I understand that correctly? And if so, why or why not would I do either? Would I ever want to do both?
* What about drives of unequal size?
* What about used drives from eBay (that may or may not be enterprise grade)?
* A year in and I want to add capacity - assume I have more drive bays - what are the easy or best ways to do it? Am I stuck using the same arrangement I had when I started with a small budget? Is changing arrangement possible? Can it be done in-situ?
Awaiting my mini PC to arrive to build my home server / NAS with Unraid. It will have two 4TB NVMe SSDs. One will be a Time Machine backup for my Mac. The other will be the server, which I will experiment with... I may add further storage later (using a DAS). The mini PC (which is a Beelink EQR6) has a Thunderbolt 3 port and a couple of USB 3.2 Gen 2 ports, so there are some (external) upgrade options. There are dual Gigabit LAN ports, but I could go for a 10Gb adapter if I want it later...
But by the time I consider upgrading, my needs may have changed and I may opt for a different NAS, but for now it's a relatively inexpensive, all flash, home server / NAS solution.
If I'm running anything important on storage and there is a chance I'll need decent IOPS, it's striped mirrors all the way for me. I can generally live with the lower capacity in order to get better performance and resilience. If it's mostly sequential reads, then a 1- or 2-way parity setup is sufficient.
The only use I could really come up with for raidz3 is a big media server with 12+ drives. Also, on a side note: when you're showing test results, saying whether higher or lower is better would make it easier for your average joe.
Wouldn't it be best to just use a Zx RAID with a single wide vdev? That makes it much simpler when needing to replace drives, and makes you less anxious about losing an extra drive in the case of Z2+.
I think that I would be pretty scared to lose any data after losing a drive in a 2x "z1 vdev" setup.
Thanks for the test without cache, this was helpful.
This video singlehandedly answered all the MOST OBVIOUS questions about ZFS and RAID that, for some reason, will not come up in web searches.
Me: "What is the actual read/write performance of each raidz level?"
Internet: "What is performance...? Anyway, then there's raidz3 with 3 parity drives..."
Can you explain how to network a nas?
I just bought a QNAP direct attached storage device because I didn't want my NAS hacked and open to the Internet.
With the ability to attach it to a NAS down the road.
In my situation (plex server with 2-3 streams at most simultaneously, wanting to spend the least on power and hardware) it always comes back to using a single large drive (~20TB) with multiple external manual backups.
I am getting a 6-bay NAS from UGREEN next month. I have made every effort possible to only use flash storage. For the past 7 years of video editing I have been using NVMe drives. However, after upgrading to the Canon and shooting intra-frame 4K I am running out of space and options lol. I want to get the highest capacity drives, but it is literally insane for me to spend $1,400 on drives.
Great video! My NAS is in development: making choices, trying programs; eventually a backup for photos and streaming media. I was aiming for a low power build, and started with the ASRock N100M with an HBA running at 50% due to PCIe constraints. So I net 6 drives, one of which is the SSD on the motherboard. Any suggestions would be appreciated.
I've been running a Z3 with 7x 8TB drives for about 29TB of storage. I'd like to grow it some, but I also need to update the drives, so I may just end up with two smaller Z3 vdevs mirrored so as to ensure my speeds stay up. Idk yet. The charts were helpful for recognizing trends at least; even if they aren't exactly what I would find, it's still helpful for deciding.
I have archival video and audio that cannot be replaced. So that's why the Z3. I prefer to be more paranoid about lost drives than about speed.
Sounds like a good setup. I have an 8 x 4TB setup in RAIDZ2. That's replicated to a 6 x 6TB RAIDZ2 and a 3 x 14TB RAIDZ1 on two separate machines. I'm as paranoid as you, so it's reassuring to know I have 3 copies, and it's very easy to do - it just costs money, though you can spread the drive cost over time, and that's probably safer as well (not getting a single batch of drives and increasing the chance of multiple failures). Not necessarily advice, just saying what I do - mine holds both personal and business data, and the business data goes to Backblaze as well.
You mentioned putting links to all those other resources in the description but I don't see them. I would appreciate those links if you get to it.
This video was insanely helpful!! I didn't fully understand why avoiding wide vdevs was a good thing until you explained it in this video, so thank you for that. :)
Thanks for the in depth look. It can be hard to find good, basic info regarding this topic.
On-time video. I was looking to upscale my NAS; just running mirrored 1TB (very low, I know). HH. ❤
4x 1TB Samsung 870 Evos for VM storage in Z1, but I need a large backup volume. Currently I'm building an external DAS connected over SAS to an HBA passed through to a TrueNAS VM. I'm thinking of 6-8 drives currently.
The more drives you use, the bigger the chance of a failure. It is better to use 2x2TB than 4x1TB.
If one disk has a 5% chance of failing per year, and you have 100 of them, the chance that at least one fails in a year is 99.4%.
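Worked out: P(at least one failure) = 1 - (1 - 0.05)^100 = 1 - 0.95^100 ≈ 1 - 0.0059 ≈ 99.4%, which is where that number comes from.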
I run a 5-bay Synology using RAID 5 with 8TB drives, losing 1 drive's capacity to the RAID. I use Cloud Sync to synchronise my volume in real time to Backblaze. Then overnight I back up to a second Synology NAS at my second house 50 miles away over a VPN tunnel; that NAS uses RAID 5 over 4 disks. I also keep a hot swap disk and sled unplugged next to each NAS; they are identical in model and size, so I can swap in a replacement as soon as a disk fails. Both NAS devices have 2x2TB cache SSDs.
I run OMV in a Proxmox VM with direct access to 2 12TB drives which are mirrored. In the near future I would like to add a 22~24TB drive and mirror it to the two 12TB drives, which will be striped. Is this a good setup idea? And how would I go about setting this up with the data still on there? Any help and pointers are appreciated!
It's worth noting that starting with the Trashcan, Apple would set up its Macs with striped pairs to boost performance, safety be damned.
I think the minimum per vdev, if you care about the data, is raidz2, because then you still have some redundancy while resilvering a drive.
I'm hoping someone here can answer my question.
I plan to re-build my NAS with 8 x 16TB drives in total, across two striped vdevs.
My problem is that I have 25ish terabytes of data on a couple of drives that I want to use for the planned array.
Can I build one vdev of four drives, copy the data across, then use the donor drives to create the second vdev, and then set them up as a striped pool without losing the data stored on the first vdev? Or does the act of creating a striped pool wipe all 8 drives?
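(Best I can tell, adding a second vdev later only initializes the new drives it's given and leaves the existing vdev's data alone, so the plan should work. Roughly, with made-up device names and raidz1 standing in for whatever vdev type you pick:)

  zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd   # first vdev; copy the 25TB in
  zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh      # second vdev joins the stripe

I'd still want a backup of that 25TB before trying it either way.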
I run Basic on my pool. Lessens the risk of a crash/rebuild by doing the writes myself.
Absolutely incredible video, thank you! So helpful.
Inspired by the Supermicro mini server iXsystems sells, I did my own DIY version. Based on a Supermicro X11SCL-iF ITX motherboard, E-2236 6-core/12-thread Xeon, 32 gigs of ECC RAM, a 32 gig DOM for boot, a 256 gig NVMe drive for apps, an LSI 9300-8i HBA with 4 Seagate 6TB 12Gb SAS drives running in RAIDZ1, and 2x SSDs (1 for Windows, for certain things; 1 just blank for whatever), all in a Supermicro mini tower chassis.
I am currently using Unraid with XFS and drive pooling with a single parity drive. I started this way because I originally had a lot of different sized drives. Due to a flood and rebuilding with the insurance money, I could probably use ZFS in Unraid now. (I had an off-site backup of my data.)
OFFSITE BACKUP FOR THE WIN!
Offsite backup coming in clutch.
I'm trying to build a replacement for a single drive NAS. Looks like a 3-drive setup with parity will be my go-to.