Choosing The BEST Drive Layout For Your NAS

  • Published: 22 Feb 2024
  • For FREE breakfast for life with HelloFresh, use code HARDWAREHAVENFREE at bit.ly/3TsaRoT! One breakfast item per box while subscription is active.
    ► Want to support the channel and unlock some perks in the process?
    Become a RAID member on Patreon or RUclips!
    🔓 Patreon: / hardwarehaven
    🔓 RUclips: / @hardwarehaven
    Resilver Times:
    louwrentius.com/zfs-resilver-...
    ixSystems Storage Pool Layout White Paper:
    static.ixsystems.co/uploads/2...
    TechnoTim - TrueNAS Performance Guide:
    • Getting the Most Perfo...
    Lawrence Systems - ZFS is a COW:
    • Why The ZFS Copy On Wr...
    Lawrence Systems - Fixing ARC Memory In Scale:
    • How To Get the Most Fr...
    ---------------------------------------------------
    Music (in order):
    "Hardware Haven Theme" -Me ( • Hardware Haven Theme M... )
    "Sunshower" - LATASHÁ( / best-music-pro.. )
    "If You Want To" - Me
    ---------------------------------------------------
    🎥 Curious About the equipment I use to make my videos?
    Click Here ► hardwarehaven.media/gear
    ---------------------------------------------------
    Timestamps:
  • Science

Comments • 304

  • @amethystdene
    @amethystdene 3 months ago +315

    Me watching this video with no spare drives nor a spare device to use as a NAS

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 3 months ago +4

      Not as bad as watching it with 12 3TB drives full of data that you'd love to put in your machine, while still waiting for new drives to move the data to before putting them into the array

    • @CupsterMc
      @CupsterMc 3 months ago +1

      Same

    • @somebody943
      @somebody943 3 months ago +3

      Same, except I'm too broke to buy a few parity drives and too scared to move all my data to a single drive for the time being, because with my luck it will fail and I'll lose everything

    • @danzen6246
      @danzen6246 3 months ago +2

      I have 4 4TB WD Red NAS hard drives but no device to put them in. My old PC doesn't turn on anymore

    • @dakiletsplay16
      @dakiletsplay16 3 months ago

      @@somebody943 so 2 mirror drives for luck?

  • @JeffGeerling
    @JeffGeerling 3 months ago +278

    Stop reading my mind.
    It's like you're looking at my Google history and making videos about all the things I search for.
    Also, striped, no mirror, is the best layout. Speeeeeed.
    (obligatory /s since someone might think this recommendation is serious)

    • @amethystdene
      @amethystdene 3 months ago +5

      weeee

    • @Kevin-oj2uo
      @Kevin-oj2uo 3 months ago +4

      You are joking right? 😂 Stripe?

    • @jjones503
      @jjones503 3 months ago +8

      One man's search history is another man's treasures.
      While being another man's nightmare.

    • @JoseAlba87
      @JoseAlba87 3 months ago +1

      True, RAID 5 is still a good option 😅

    • @rysterstech
      @rysterstech 3 months ago +9

      You should really specify that's a joke; some people will take you seriously

  • @Tock46
    @Tock46 3 months ago +49

    If you ask the datahoarder subreddit, everybody should use striped vdevs, because if your HDD fails you can just recover from your backup.

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 3 months ago +20

      And there's logic to that. The question remains: what do you store your backup on :)

    • @twinssword
      @twinssword 3 months ago +5

      @@BoraHorzaGobuchul not a striped backup pool 😂

    • @MatteoComensoli
      @MatteoComensoli 2 months ago

      Casually building a NAS with 2.5" 1TB HDDs, hoping to use a 4TB drive for backups. I was wondering what to use as a third backup: Blu-rays or tape? Which is cheaper and easier?

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 2 months ago +2

      @@MatteoComensoli Be advised that recordable optical media might not be as great in terms of longevity as advertised. I've often encountered recordable optical media that failed to read after several years of proper storage. Unlike factory-pressed discs, recordable optical media relies on a different technology for its dye layer, which can deteriorate much sooner than expected.
      Tape is also durable, until it isn't. Sometimes the rust starts flaking off of the base layer.
      So for backup, I'd use a third HDD.
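
      If the third HDD lives in or beside the NAS, a minimal sketch of keeping it current with ZFS (assuming a hypothetical source dataset tank/data and a backup pool named backup on that drive):

        # snapshot the dataset, then replicate it to the backup pool
        zfs snapshot tank/data@2024-02-22
        zfs send tank/data@2024-02-22 | zfs receive backup/data

        # later, send only the changes since the last snapshot
        zfs snapshot tank/data@2024-03-22
        zfs send -i tank/data@2024-02-22 tank/data@2024-03-22 | zfs receive backup/data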

    • @EwenorKvM
      @EwenorKvM 2 months ago

      Kind of agree. But I wouldn't trust a hard drive that hasn't been powered on for 20 years either...

  • @user-mx6hu9yv6l
    @user-mx6hu9yv6l 3 months ago +54

    A very satisfying and comprehensive video. Not as exhaustive as some articles out there, but concise and comprehensive enough for 20 min. Good job with the tests.

    • @kevinoneill2170
      @kevinoneill2170 3 months ago +4

      Are there any more exhaustive articles out there that you can recommend to supplement this video? Thank you in advance!

    • @user-mx6hu9yv6l
      @user-mx6hu9yv6l 3 months ago

      @@kevinoneill2170 moderation doesn't let links through, sorry ):

  • @TheQuickSilver101
    @TheQuickSilver101 3 months ago +41

    This is outstanding timing. I needed to explain this to someone I know and taking the time was hard. With a video he can watch it until he understands all the details. Thank you so much!

    • @Suzuki_Hiakura
      @Suzuki_Hiakura 3 months ago

      Same. I was hoping to get a Dell PowerEdge R730, as it would be more than what I need, so I could use it for my media server, game servers, and even backing up data, if I ever manage to sort through it first, that is... I was thinking of just buying several 4TB SSDs when I can and then running the RAID 50 that it supports.

  • @Dr.Dkbt_JD-PhD-MD-MBA
    @Dr.Dkbt_JD-PhD-MD-MBA 3 months ago +14

    Honestly the best, most straightforward and simple video to help understand ZFS. It's pretty complicated and can be annoying, especially on the first deployment. I'm currently running a 5-wide RAIDZ1 with a hot spare, but I'm about to reconfigure to either two RAIDZ1 vdevs or a 3-way mirror; I'm not sure which, since I run a mixed workload of VMs and movie storage.

  • @Jojje94
    @Jojje94 3 months ago +24

    Good informative vid! Myself, I'm running Proxmox with a TrueNAS Core VM: six 4TB hard drives in a RAIDZ2 arrangement through a passed-through HBA. ~14.5 tebibytes with redundancy peace of mind, plus Backblaze cloud backup for the essentials like the family photo album. I'm a happy camper

    • @wantu2much
      @wantu2much 3 months ago +4

      I am new to this, and I know what that means, but I have a hard time wrapping my head around how it works. What would you say is a good spot to look for beginners?

    • @Jojje94
      @Jojje94 3 months ago +10

      @@wantu2much Hm, my reply kept getting deleted on mobile (ReVanced); hope this one sticks.
      It depends on what your setup looks like, ultimately. If you're like me and virtualize TrueNAS, it really wants bare-metal control of the drives, so you'll want something like an LSI host bus adapter you can buy on eBay, connect the drives to that, and pass the entire PCI card to your TrueNAS VM. I bought an LSI 9218-8i for about 66 bucks. Note it has to be in IT mode; the auction should say so.
      If you connect the drives directly to the motherboard's SATA connectors, you'd need to pass the whole SATA controller to TrueNAS. I didn't want to do that, as I had SATA SSDs for VM and container storage for Proxmox to use, so HBA it was.
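
      For reference, a minimal sketch of that passthrough on Proxmox (the PCI address 01:00.0 and VM ID 100 are hypothetical; IOMMU/VT-d must be enabled in the BIOS and kernel):

        # on the Proxmox host, find the HBA's PCI address
        lspci | grep -i LSI

        # hand the whole card to VM 100 so TrueNAS sees the bare drives
        qm set 100 -hostpci0 01:00.0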

    • @CptBlackEye
      @CptBlackEye 3 months ago

      How did you decide between installing TrueNAS as a VM versus bare metal? I'm trying to decide that very question. I'm not sure what other VMs I might run on the same box...

    • @Jojje94
      @Jojje94 3 months ago

      @@CptBlackEye Good question. I was deliberating between TrueNAS Scale and Proxmox for a good while as I was getting hardware together. On the one hand, I could have a fairly recently released NAS solution that can do hypervisor things in TrueNAS Scale, or I could opt for two solutions, a hypervisor and a NAS that only do those specific tasks, separately.
      I decided on the latter because Proxmox has been around for 15 years, FreeNAS (now TrueNAS Core) a little longer than that, so they have plenty of stability and more importantly documentation and discussion. TrueNAS Scale is comparatively very young and when it comes to the backbone of my system I prefer to lean on projects that have long histories and are known to be stable and very good at their particular task, and again docs and forums should I need them for any "advanced maneuvers". So by using Proxmox for an OS and TrueNAS Core in a VM I could have the best of both worlds.
      Thanks for asking!

    • @Jojje94
      @Jojje94 3 months ago +1

      @@CptBlackEye I had a long comment explaining it that appears to have been autofiltered or something. Short answer, then: I wanted a long-developed, stable hypervisor and a long-developed, stable NAS with plenty of docs and community experience for each. So the answer was Proxmox with TrueNAS Core in a VM for the best of both.

  • @philfrisbie4170
    @philfrisbie4170 3 months ago +11

    Great video! I went through tons of research last year and decided to use 4 vdev mirrors in my 8 drive NAS. The 4TB drives were recycled from a mining venture I shut down, and I chose the 4 vdev mirrors because I can add capacity just by replacing 2 drives at a time, which I just did by replacing 2 of the 4TB drives with 10TB drives. Resilvering took about 7 hours for each drive swap. Unbalanced vdevs are not ideal, but it works fine for my usage.

  • @TheMatthewLedbetter
    @TheMatthewLedbetter 3 months ago +5

    I've been running TrueNAS for 5 or 6 years on an old Supermicro board in a Rosewill rackmount case. I started it with an 1150-socket Celeron CPU and like 8GB of RAM, upgraded to 6x 14TB drives when spinning rust was cheap, and have recently updated it to 32GB (the max :D) and bought an E3-1271 to put in as well. SAS drives got cheap all of a sudden, so I now have 6x 4TB SAS drives, and I'm waiting for the HBA SAS card to run them with.
    It's been an adventure, and I've learned a lot. I also still don't trust it, because I have all of my important data backed up to cold storage every month or so. I need a legit backup, but that requires more research! :D

  • @Bikes-with-Ben
    @Bikes-with-Ben 3 months ago +2

    Literally working on setting up and configuring my TrueNAS Scale server and needed this explanation. Thank you!

  • @chromerims
    @chromerims 3 months ago +2

    Great video 👍
    14:00 -- the way you laid out the four fio benchmark commands 'vertically' is so pleasing visually.
    It perfectly exemplifies your grasp that refined artistry is needed with technical topics.
    Kindest regards, neighbours and friends.
    P.S. Production quality remains outstanding.

  • @AdrianBacon
    @AdrianBacon 3 months ago +21

    This is a great explainer, however I'm mildly disappointed that backups only got a token mention at the end of the video. It would be better if it was emphasized that ZFS fault tolerance is not a substitute for backups, and terminology like "if you lose a vdev you lose the whole pool and your data" is better conveyed as "if you lose a vdev, you lose the pool and have to restore from your backups". I hang out quite a bit on zfs forums (because I'm a heavy ZFS user) and I can't tell you how many times posts pop up where the user had something bad happen to their pool and wants help to try not to lose their data because they didn't back anything up.
    ZFS is great, but ZFS fault tolerance is not a replacement for proper backups!

    • @saramae9878
      @saramae9878 3 months ago +3

      Was going to comment something similar: "RAID is not a backup". Now, uptime is great, and of course so is speed (assuming you're not limited by your network), but backups are critical.

    • @kf4hqf2
      @kf4hqf2 3 months ago +2

      While of course your point is valid, I'd guess nearly all of the intended audience of this video is already well aware that "raid (of any type) is not a backup". Continuing to pound that drum honestly starts to get a little annoying after a while. At some point you have to move on to more advanced topics and trust that your viewers are escalating with you. If he covered all the basics in every video, they would be unwatchable to most viewers. I think he struck the right balance by reminding everyone to backup their data at the close of the video.

    • @haydenc2742
      @haydenc2742 3 months ago +2

      RAID is redundancy...
      Plus... he needs another topic to cover for more content :)

    • @AdrianBacon
      @AdrianBacon 3 months ago

      @@haydenc2742 RAID provides fault tolerance for uptime, not permanent data persistence. Backups (done properly) provide data persistence.

  • @ROotHM
    @ROotHM 3 months ago +1

    Really appreciate your 101 videos. You have such a knack for breaking down complex ideas into easy to understand terms.

  • @shazzbot9
    @shazzbot9 3 months ago

    Great video! I love the mix between benchmarking/deep dive and practicality. This is the kind of thing I come to RUclips for: a lucid, conversational explanation of high-level concepts and tradeoffs, with some helpful pointers to dive deeper if I need to. A+ stuff in my opinion. Exciting to see the MS-01 make another appearance as well. It seems like that machine made quite a splash in the homelab community; it's a pretty amazing little machine.
    I'm running a 5-disk setup of (very cheap, very used) 2.5" drives in RAIDZ1 in a virtualized TrueNAS install on a Proxmox VM. This setup has been hosting streaming media for a couple of years without issue. Since I've never dealt with a drive failure, seeing those resilvering times makes me a bit nervous, so that bit was helpful to see called out.

  • @jonathanm3904
    @jonathanm3904 3 months ago +4

    I currently have 3 drives and have bought 3 used drives with the same capacity, and I have been at an impasse on which direction to go, both in NAS OS and in resilience level.
    The video was extremely helpful

  • @CptBlackEye
    @CptBlackEye 3 months ago +4

    Perfect timing! I'm sitting here with a NAS on my to-do list. I picked up an LSI SAS 9300-16i (HBA) and 10 identical 500GB laptop drives to practice NAS building and operating. Being new to ZFS (first time installing), this video really helped!

    • @michaelbouckley4455
      @michaelbouckley4455 3 months ago +1

      There is a bit more risk of drive failures with identical drives, i.e. multiple failures around the same time.

    • @CptBlackEye
      @CptBlackEye 3 months ago

      @@michaelbouckley4455 Good observation; however, I believe that since these are used laptop HDDs, the likelihood of batch failure is reduced. I'm strongly leaning towards a pool with two 4-drive Z1 vdevs, leaving me 2 drives as spares. While these are 2.5" drives, the case does have space for me to change over to 4x 3.5" NAS drives later.

  • @Geforcion
    @Geforcion 3 months ago +6

    As a person planning to build a DIY NAS, this is just what I was looking for. Nice!

  • @ghangj
    @ghangj 3 months ago +3

    This was a great video. I feel like it cured some of the decision remorse I had when setting up my TrueNAS pool. Thanks for the video.

  • @DIYDaveOK
    @DIYDaveOK 3 months ago +2

    Absolutely one of your best videos to date, sir! Well done!

  • @ewenchan1239
    @ewenchan1239 3 months ago +1

    Great video!
    I think that this is actually one of the best explainer videos, on this topic, that I've seen.

  • @leo_craft1
    @leo_craft1 3 months ago +12

    Actually, when a hard drive is close to failure, the RAID card (or HBA) will tell the server that the drive now has a predicted failure. This happens when a drive takes too long to write to a specific sector; the RAID card detects that and adds 1 to the predicted-failure count. Though you can reset this count and have the PF light go off, it is a really bad idea, because the drive still has a pretty bad sector and will eventually fail after some more hours of operation. If you have 2 PFs in a RAID 5 (or Z1), you'll have a hard time praying that the other drives don't fail during the rebuild process. It's always recommended to keep at most as many predicted failures in your system as your RAID's redundancy allows (e.g. RAID 5 > 1 possible failure > 1 PF max). You should order a new drive immediately and swap it in.
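
    You can watch the same early-warning SMART attributes yourself; a minimal sketch assuming smartmontools is installed and a hypothetical disk /dev/sda:

      # overall health verdict from the drive's own SMART logic
      smartctl -H /dev/sda

      # the attributes that usually precede a failure
      smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'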

  • @peterschmidt9942
    @peterschmidt9942 2 months ago

    Good video explaining the differences.
    There are always a lot of factors to consider when setting up any type of NAS, one of those being how many physical drives you have in your NAS to begin with. What I've found with drive capacities is that there is a sweet spot of cost vs capacity, and it seems to shift up as larger drives get cheaper over the years.
    The last time I set up a new NAS a couple of years ago, 8TB was the sweet spot for cost per TB. As I only had 4 drives, I'm limited on capacity (unless I add an external cab, and that has its unique difficulties). When I was purchasing drives, my local store was out of 8TB NAS drives, so I ended up buying a combination of NAS and desktop drives. Eventually, as they came back in stock, I replaced the desktop drives with NAS drives and put the desktop drives into an external cab. The NAS was configured as RAID 5, so if I lost a drive, no biggie. The external cab was configured as JBOD. My reasoning was that with the external cab, there was a complete mirrored backup. Since it's not on all the time, those drives should outlast the NAS drives. So if one drive fails in the NAS RAID, it's easily replaced. If disaster happens and more drives fail, there's always the external cab, and since it's not in a RAID, it can be read by any OS to back up from.
    And as you stated, the more drives you have running, the higher the cost of electricity. So while it might be nice to have your NAS running in RAID 6 or mirrored for redundancy, realistically you can achieve something similar with an offline backup. It just means you need to be proactive in doing regular backups periodically.
    And BTW, with a lot of consumer NASes now coming with SSD capabilities, it really helps eliminate a lot of the bottlenecks with reading and writing to disk arrays, no matter how you have the drives configured.

  • @HartenDylan
    @HartenDylan 3 months ago

    This video could not be more perfect. I'm currently in the process of figuring out a NAS build and have been back and forth about how to balance reliability, performance, capacity, and cost. The easy-to-understand explanations about the number of drive failures, streams and IOPS, and capacity help figure out which configurations offer the best balance for my needs! Obviously there are a lot more details that could be added, but for the purposes of an introductory explainer this hit the nail on the head. Seems like the 2x RAIDZ1 (3/4-drive vdevs) and 2x RAIDZ2 offer the best protection while maintaining reasonable speed and capacity.

  • @wantu2much
    @wantu2much 3 months ago +1

    Thank you for the video, and for your hard work. It's something I want to set up for myself but find kind of overwhelming; you make it easier to understand.

  • @verzagen7550
    @verzagen7550 3 months ago +2

    I've actually got a media server running 4 WD Red Plus 4TB drives in a raidz1, going strong for the last 2 and a half years; best choice I made, imo. It's built with a bunch of leftover parts from Ryzen upgrades and an LSI SATA/SAS controller, no RAID. It's gone through a lot of revisions as I've slowly upgraded my main PC and the server got the old CPUs (first an 1800X, now a 3900X) and RAM, or added a used RTX A4000 after finally getting rid of the RX 550 that had been the center of Plex transcoding. Overkill, sure, but definitely worth it. Plus, when transferring files, I've yet to reach the limit of the drives; the 1Gbps network gets maxed out first. My next upgrade is definitely going to be the network itself.

  • @Ben79k
    @Ben79k 3 months ago +1

    Thank you for this super informative video. It really cleared up a lot of things for me. I have been struggling to decide if I even want to build a NAS in the first place, and there really isn't a perfect answer. It's been interesting going through this entire circle just to realise that DAS might still be the right thing for my use case.

  • @DIYDaveOK
    @DIYDaveOK 3 months ago +2

    Last year, I rebuilt my server into a Proxmox host running a TrueNAS VM for a four-drive raidz1 array with four Toshiba 4TB drives, and it has been fantastic. And on older hardware, too! 😊

  • @slawomircaluch878
    @slawomircaluch878 3 months ago

    Thanks for the test without cache; this was helpful.

  • @spynightrider
    @spynightrider 3 months ago +1

    Absolutely incredible video, thank you! So helpful.

  • @jburnash
    @jburnash 3 months ago +1

    This was a *really* good explanation of the various ZFS vdev types and their pros and cons. I've been in IT (mostly large enterprises) for decades, so my initial take was: do I already know this stuff? The answer was "yes" for normal RAID configs, but "no" for ZFS-specific ones. Much appreciated; keep up the good work!
    *Note* - I find your videos in particular very calming. I don't know if it's the music or the speed at which you speak, but regardless, points for that as well as the extremely high-quality video production. 👍

  • @Kartratte
    @Kartratte 2 months ago

    Thank you. I see so much testing and work in this video. 👍

  • @deechvogt1589
    @deechvogt1589 3 months ago

    Hey thanks for the really interesting NAS drive video. When I get to build a NAS I think going with mirrored vdevs makes a lot of sense.

  • @frankwong9486
    @frankwong9486 3 months ago +3

    I moved to Unraid as it spins up fewer drives when I access data, and it makes less noise (I play and work in the same room the NAS lives in).
    But it still spins up a minimum of 3 drives when set to dual parity

  • @philipthatcher2068
    @philipthatcher2068 2 months ago

    Excellent video. Very well explained.

  • @TheRealInscrutable
    @TheRealInscrutable 2 months ago

    Needs some definitions for us noobs.
    * What is an IOP?
    * What do the colors mean in your drive diagrams?
    * I grasp the basic idea of a vdev (a virtual drive made up of one or more physical drives), but stuff you said implied you might mirror or stripe within the vdev and/or between vdevs? Did I understand that correctly? And if so, why or why not would I do either? Would I ever want to do both?
    * What about drives of unequal size?
    * What about used drives from eBay (that may or may not be enterprise grade)?
    * A year in, I want to add capacity (assume I have more drive bays): what are the easy or best ways to do it? Am I stuck using the same arrangement I had when I started with a small budget? Is changing arrangement possible? Can it be done in-situ?

  • @LEGnewTube
    @LEGnewTube 3 months ago +1

    Great video! Thank you for the info!

  • @andyrechenberg
    @andyrechenberg 2 months ago

    It's interesting that most of your performance benchmarks have sequential write bandwidth higher than sequential read bandwidth. For most raw disk drives, the sequential read generally outperforms the sequential write performance while random writes can outperform random reads. As you stated, it looks like ZFS still has some caching and write coalescence going on. Excellent video. Thanks ☺️

  • @cnlawrence1183
    @cnlawrence1183 3 months ago +1

    Great breakdown! Thank you.

  • @MaidLucy
    @MaidLucy 3 months ago

    Thank you so much! This is exactly the video I needed right now.
    I'm planning to buy 3x 18TB drives for RaidZ1 right now as electricity cost here is pretty high and I don't need a lot of resilience.

  • @steven44799
    @steven44799 A month ago

    I had a setup with 3x 24-drive SAS chassis attached; they all ran mirrored pairs, and no two drives of a pair were in the same chassis, so you could lose an entire chassis and the array would be highly compromised but still up and running. This ran iSCSI/Fibre Channel storage for virtual machines before SSDs were large/affordable enough to just make the entire array out of SSDs.

  • @cheeseisgreat24
    @cheeseisgreat24 3 months ago +1

    What I've done before is use striped vdevs for active-use data, but with active replication to something with RAID-Z2 for immediate backup. That way all my capacity and performance goes toward making work faster, but it is immediately backed up to a more resilient pool. If the striped pool went down, I would work off the other until the weekend to rebuild. However, I wouldn't recommend this at all; its jank was honestly more effort than it was worth, and you should just work off of something more resilient in the first place. Call the marginally increased access time the CODB and call it a day.

  • @harrisonm65
    @harrisonm65 3 months ago

    Literally just received 4 extra HDDs to expand my 4-drive TrueNAS Core setup. The info in this video came in very handy. I had 95% decided on the config I was going to go for, but your results helped me confirm what I wanted to do. I think I'll also go from Core to Scale and swap out the motherboard and CPU for something a little newer that I have from an old build.

  • @mabs-O_o
    @mabs-O_o 2 months ago

    This also, for the most part, applies to btrfs. Btrfs has a few advantages, like file-based RAID, which means drives can vary in size, e.g. raid1 with a 3TB + 1TB + 1TB + 1TB; the default config will balance writes across all drives, making the 3TB drive a live mirror of the other 3. Another awesome feature is dynamic arrays: done right, you can grow and shrink an active array. And I've never broken a btrfs raid5 or 6, and I have tried, so I don't believe a regular user would trigger the mythical write hole; and even if you did, aren't you glad you had a backup.
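
    A minimal sketch of such a mixed-size btrfs raid1 pool (device names are hypothetical; raid1 here means two copies of every block, not a drive-for-drive mirror):

      # 3TB + 1TB + 1TB + 1TB, with both data and metadata in raid1
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
      mount /dev/sdb /mnt/pool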

  • @YountFilm
    @YountFilm 18 days ago

    This video singlehandedly answered all the MOST OBVIOUS questions about ZFS and RAID that, for some reason, will not come up in web searches.
    Me: "What is the actual read/write performance of each raidz level?"
    Internet: "What is performance...? Anyway, then there's raidz3 with 3 parity drives..."

  • @Ozz465
    @Ozz465 A month ago

    Just what I needed. Good stuff

  • @NiHaoMike64
    @NiHaoMike64 3 months ago +2

    What we need is some filesystem that does different RAID levels on different directories, with a shared pool of storage. So you can (for example) effectively have RAID 1 for critical documents and RAID 0 for downloads.

    • @sgstudioofficial
      @sgstudioofficial 3 months ago

      It is called LVM thin provisioning.

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      Why would you not just store all data at the best possible resilience and service-availability level and have done with it? There's too much "number wanking" when it comes to NAS solutions for home environments, where it's simply being shared across members of a single family at most.
      I use RAID 5 with three storage drives and one parity drive; ext4 works fine. No messing around with LVM or volume management; if I need to control access for specific users, I just use Linux permissions. It has worked fine that way for me for over a decade now.
      Data value is not determined so much by how you store and access it when in use, but by how you back it up, how many backups you make, how often you do it, and where you store the backups safely.

    • @NiHaoMike64
      @NiHaoMike64 3 months ago

      @@terrydaktyllus1320 On an unlimited budget, we would indeed use RAID 1 for everything. But budgets in the real world are far from unlimited. For the most part, it would be silly to use extra disk space to back up downloads since if lost, they can simply be downloaded again, RAID 0 is perfect for that. (Archival downloads that are no longer available is a different matter.)
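
      ZFS can approximate per-directory redundancy with per-dataset properties on one shared pool; a minimal sketch assuming a hypothetical pool named tank (note copies=2 duplicates blocks but is weaker than a mirror if an entire disk dies):

        # critical documents: keep two copies of every block
        zfs create tank/documents
        zfs set copies=2 tank/documents

        # downloads: default single copy, nothing extra to lose
        zfs create tank/downloads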

  • @K-o-R
    @K-o-R 2 months ago

    One point I would make from experience: with hot-swap bays, the operative word is HOT. High-capacity drives physically fill the entire sled and there's basically zero airflow, so they will get pretty toasty, especially when doing parity checks. It got so bad that I switched away from my case with hot-swap bays in favour of a custom mounting solution that stays nice and cool even when it's absolutely thrashing the drives. As a bonus, I took the opportunity to switch to a better SATA card, which cut the parity check time by 75%.

  • @tachyongti
    @tachyongti 2 months ago

    This was a great video. Thanks!

  • @Game_Rebel
    @Game_Rebel 3 months ago

    Thanks for the vid, found it very helpful since I'm looking to expand my storage!
    I'm currently running a single RAIDZ1 config with 4x 8TB HDDs, but with a special metadata vdev on 2 mirrored NVMe SSDs.

  • @MarcoGPUtuber
    @MarcoGPUtuber 3 months ago +2

    The best youtube video layout needs to involve Hardware Haven

  • @SteveBrownRacing
    @SteveBrownRacing 3 months ago

    I generally advise most people to run a simple mirror till their capacity needs outstrip the highest or second highest capacity drive they can buy. Having said that, this is a great breakdown once people push past that threshold. One other thing to consider might be the impact of network links on all this, or maybe a follow-up video that went deeper about the impacts of caching on saturating network links.

  • @theWSt
    @theWSt 3 months ago

    This is a very helpful video, thx a lot!

  • @speedracer9132
    @speedracer9132 3 months ago +3

    Perfect timing; I'm starting my build next week when the case arrives. I do wish you went into more detail about the different options, like what's the difference between ZFS vs BTRFS (did I spell that right?)

    • @fang64
      @fang64 3 months ago +1

      I know you weren't looking for an answer from someone else, but ZFS is block-level RAID, and it imposes limits around how it's implemented. BTRFS uses file-level RAID, and it has a nice advantage: it's just working with files. So if you run RAID1 normally in BTRFS, it creates a copy of each file across disks, always ensuring 2 copies, but you can make that 3 copies, and there is no issue with drive expansion, since it's all based on files. You do have to rebalance the drives once a disk is lost so you can restore the RAID. Also, you have btrfs scrub to prevent bitrot. I'm personally running BTRFS in RAID1 with just file mirrors across disks. I have used ZFS in the past, but I don't think either is better or worse than the other; you just don't have the RAM requirements with BTRFS that ZFS has.
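
      A minimal sketch of the scrub mentioned above, assuming the filesystem is mounted at /mnt/pool:

        # verify checksums and repair from the mirrored copy where possible
        btrfs scrub start -B /mnt/pool
        btrfs scrub status /mnt/pool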

  • @sokolovwlad
    @sokolovwlad 3 months ago

    Wow, that was a really nice comparison of different RAID configs. I'm currently running a home server with 4x 4TB drives in RaidZ1. However, I afterwards threw in a crappy 128GB NVMe SSD that had been lying around for some time as L2ARC. I haven't done any testing, but it looks like Jellyfin started to load its web interface a bit faster.
    L2ARC (as well as the other types of vdevs) wasn't mentioned in the video, but I think it may be interesting to see how it and its size affect performance.
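
    Attaching an L2ARC like that is a one-liner; a minimal sketch assuming a hypothetical pool named tank and NVMe device /dev/nvme0n1:

      # add the SSD as a cache (L2ARC) vdev
      zpool add tank cache /dev/nvme0n1

      # watch per-vdev activity, including the cache device
      zpool iostat -v tank 5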

  • @CheapSushi
    @CheapSushi 2 months ago

    I went with StableBit DrivePool on Windows instead. 2x/3x duplication depending on the data; I don't care about parity or traditional RAID setups for the pool. Plug & play. Can change, add, or remove any drive; mixed drives, mixed sizes. Does read striping too. Currently at 50TB with 8 disks plus an NVMe drive for write caching/landing; just one big pool. A pool that I can just add another disk to, and it will rebalance automatically and my total TB goes up. Also, the data on the disks isn't unreadable, obfuscated, or proprietary; you can drop them into another system or go into the pool folders yourself, so it's another way to recover if something does go wrong. Super easy.

  • @mineturte
    @mineturte 2 months ago

    This video was insanely helpful!! I didn't fully understand why avoiding wide vdevs was a good thing until you explained it in this video, so thank you for that. :)

  • @sandphotoNL
    @sandphotoNL 3 months ago +1

    This was very helpful! But I have a couple of questions left: What are my options (and limitations) when I want to expand my storage? Can I add one disk to a vdev? Do I have to create a new vdev with multiple disks? Do they need to be the same size? What are the differences between TrueNAS and Unraid? Maybe a follow-up video? ;-)

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 3 months ago +3

      There are lots of videos on the topic. So far, you can't add disks to vdevs (it was mentioned in the video); you can only add vdevs, and they should have the same number of drives. There's talk that there will be easier expansion functionality in the future, but it's only talk so far
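
      A minimal sketch of the expansion that does work today, assuming a hypothetical pool named tank (zpool add is permanent, so the -n dry run is worth doing first):

        # preview adding a mirrored pair as a new vdev
        zpool add -n tank mirror /dev/sdg /dev/sdh

        # actually add it; the pool grows by one drive's worth of capacity
        zpool add tank mirror /dev/sdg /dev/sdh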

  • @GavAttackO
    @GavAttackO 3 months ago +1

    This was absolutely outstanding timing. I was just debating between going with 2 mirrored drives in my HP EliteDesk Ubuntu server, or turning a spare system in a Zalman Z9 Plus into a dedicated TrueNAS rig.
    Either way, I still don't have anywhere close to the amount of money I'd need for even 1 hard drive. So there's that 😂

  • @gjkrisa
    @gjkrisa 3 months ago

    This was pretty good. Thank you.

  • @PreDaToReLeaSeD
    @PreDaToReLeaSeD 3 months ago +2

    What are your thoughts on drive sleeping? If I access my 40-50 drive NAS half a dozen times a day, is sleeping the drives when not in use smart due to power consumption, or am I risking their longevity by sleeping them? I have had NASes for over a decade and always disabled sleep, but after seeing your wattage measurements I'm thinking maybe I'm making the wrong choice with the number of drives I'm running?

  • @keyboard_g
    @keyboard_g 3 months ago +1

    Another consideration is growing the pool over time without restarting from scratch. With striped mirrors it's just adding 2 more drives.

  • @Booclap
    @Booclap 3 months ago +1

    Great video! I am curious whether TrueNAS Scale would have different results, specifically in write performance. When I upgraded from TrueNAS Core to Scale on the same system, I went from 1.1GB/s to about 700MB/s. A lot of things could have been a factor, but I have done some research online and found others with the same conclusion.

  • @guywhoknows
    @guywhoknows 2 months ago

    I'm old, and back in the day a 512MB drive was massive; when we started to get bigger 1.6GB drives, things were unreliable, to say the least.
    RAID wasn't a thing, but when you wrote the redundancy routines yourself... well, you know there are a few ways to get it done.
    RAID 0, no backup? Well, that's not true. Well, it is, but...
    What you would do is RAID-stripe and then copy (mirror), or what's now RAID 1.
    We used to write the code, and it's really simple.
    The idea was that if you had two stripes you could have fast reads and writes, but you could also write back to another RAID 1, then have the data distributed afterwards to single drives, as in two.
    Remember, though, drives were about 60MB/s if you were lucky, maybe 36-45MB/s, or about 12MB/s (which was still fast, as programs were only a few kB).
    The RAID card took away the manual setup and made things easy. But the general idea was not to put them together in any event.
    I still use the manual way: with my JBOD we have small fast drives that end up on large drives on two systems. A SAN, basically.
    Being able to offload a few GB of data fast while the system messes about with backup routines is nice.
    But live data is a different ball game, and caches too.
    Thank goodness for append!
    That wasn't a thing way back.
    There were a lot of issues with multi-access, as saving a file would write one person's changes but not the other's. 😅
    Still, today: SAN, not NAS. NASes are not redundant.

  • @keyboard_g
    @keyboard_g 3 months ago

    Great explanation 👍👍

  • @mikemiller1208
    @mikemiller1208 3 months ago

    Two omissions. Firstly, the risk of completing resilvering following a failure is underestimated. People will frequently be using identical disks from the same batch, and this massively increases the risk of subsequent failures during recovery. Mirrors recover via a simple copy operation, which is many times less stressful on the remaining disk(s).
    The other omitted factor is expanding your storage when you have no more drive slots left. In mirrored setups you can just swap out a pair or triple of disks for higher-capacity ones, one disk at a time. This gives a sane, reasonably priced pool-expansion option for SOHO users.
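
    A minimal sketch of that one-disk-at-a-time mirror upgrade, assuming a hypothetical pool named tank (repeat the replace for the second disk once the first resilver finishes):

      # let the vdev grow once every disk in it is bigger
      zpool set autoexpand=on tank

      # swap one small disk for a larger one and wait for the resilver
      zpool replace tank /dev/sda /dev/sdx
      zpool status tank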

  • @sriprasad22
    @sriprasad22 3 months ago +1

    Well-timed video. I was looking to upscale my NAS; just running mirrored 1TB right now (very low, I know). HH ❤

  • @LtGen.Failure
    @LtGen.Failure 3 months ago +1

    I just set up a new TrueNAS server and went with a RaidZ1 configuration with three 14TB Toshiba drives for data storage and a mirrored NVMe vdev for metadata and small files. This pool is for large amounts of data like video files or disk images. In addition, I added an NVMe and an SSD pool for data which is accessed more often, for faster access and less power draw. For backup I run a second, smaller TrueNAS server, because RAID is not a backup.

    • @_sneer_
      @_sneer_ 3 months ago

      The power-draw difference between an HDD and an SSD is minimal. SSDs only make sense if they're datacenter class; QLC consumer ones fail much faster than HDDs.

  • @juicycs
    @juicycs 3 months ago

    Thank you for this video. I was really wondering, for setting up a homemade NAS, how many drives I would need.

  • @riverfrontww
    @riverfrontww 3 months ago +1

    Any downside to using larger parity drives to future-proof expansion? Or is it better to just add drives in a set vdev size? My storage will consist mostly of media (movies, music, and TV), plus some security cam footage. I currently have (6) 12TB drives slated for a new build, and I've already filled nearly 12TB in my existing 4-bay NAS.

  • @dennisolsson3119
    @dennisolsson3119 A month ago

    I run btrfs raid1. I am on a budget, and being able to expand the RAID one disk at a time is invaluable. Also being able to shrink it (if a drive fails and there is space left, it is better to run smaller than degraded).
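
    A minimal sketch of that grow/shrink flow, assuming the filesystem is mounted at /mnt/pool (device names hypothetical):

      # grow: add a disk, then rebalance so the raid1 copies spread onto it
      btrfs device add /dev/sdd /mnt/pool
      btrfs balance start /mnt/pool

      # shrink: migrate data off a disk and drop it from the array
      btrfs device remove /dev/sdb /mnt/pool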

  • @AugustoCarmo
    @AugustoCarmo 3 months ago

    Great content!!

  • @zeroturn7091
    @zeroturn7091 3 months ago

    I run Basic on my pool. Lessens the risk of a crash/rebuild by doing the writes myself.

  • @mohammadmekayelanik7408
    @mohammadmekayelanik7408 3 months ago

    A brilliant video! 👍

  • @David_Quinn_Photography
    @David_Quinn_Photography 3 months ago +1

    I have 7 HDDs in a RaidZ2, and being in RaidZ2 has already saved me from having to redownload files. I had 1 HDD fail; I replaced it and started to resilver, and then I had a 2nd fail about 70% of the way through. That was the quickest I've driven to my local Best Buy in years.
    I do have a copy of all that data on my desktop as well, but it's about 5TB of photos from the last 20 years, so you can imagine how long that would take to redownload if a 3rd drive failed.

  • @deathmog91
    @deathmog91 3 months ago +1

    The only use that I could really come up with for raidz3 is a big media server that has 12+ drives. Also, on a side note: when you're showing test results, say whether higher or lower is better. That will make it easier for your average Joe.

  • @christopherhunt147
    @christopherhunt147 3 months ago

    Currently using Unraid for my NAS. I like the idea that if I fail on a rebuild, I don't lose the whole array, just what was on that disk. I am looking into trying out TrueNAS Scale. Going to have to see if I can find a good course slow enough for me. Setting up a machine for it as soon as I can get a case and a cache drive; I have everything else to try it. I've never really worked with ZFS.

  • @revealingfacts4all
    @revealingfacts4all 3 months ago +1

    On why the mirrors didn't scale up: do you have any data on the internal architecture of the PCI bus and how the drives are connected to the host? Is there more than one HBA in the machine? Curious if the mirror performance would increase if you, for example, set up the mirror such that the drives were on different HBAs, assuming more than one HBA...

  • @TimoKellerAtGplus
    @TimoKellerAtGplus 3 months ago +2

    Thanks for this overview!
    Did you mix up writes and reads in all the diagrams of the fio tests? Reads should almost always be equal to or higher than writes.

    • @nicholasthesilly
      @nicholasthesilly 3 months ago

      That's what I was wondering. It looks like he flat out swapped them...

  • @pizzlespettime
    @pizzlespettime A month ago

    I am getting a 6-bay NAS from yougreen next month. I have made every effort possible to only use flash storage; for the past 7 years of video editing I have been using NVMe drives. However, after upgrading to the Canon and shooting intra-frame 4K, I am running out of space and options, lol. I want to get the highest-capacity drives, but it is literally insane for me to spend $1,400 on drives.

  • @fmlazar
    @fmlazar 3 months ago +1

    It's worth noting that starting with the Trashcan, Apple would set up its Macs with striped pairs to boost performance, safety be damned.

  • @danknemez
    @danknemez 3 months ago +3

    Very solid video! I've been in the process of building a new 8x 4TB NVMe NAS for a bit now, and I understand your pain with disk benchmarking perfectly :D
    For an easier ARC bypass, there is the --direct=1 flag for fio, and it is worth noting that ARC-less performance benchmarks are still very important depending on what you're running, since database servers and virtual machines will generally do direct sync IO, bypassing ARC, in order to get ever-so-slightly better power-loss resiliency.
    Personally, I'm gonna bite the European electricity-cost bullet and run a Threadripper NAS to support all those NVMes and RAM requirements, since I have all the components from my old workstation, and buying anything with similar capabilities to save 50W or so would have a 10y+ ROI. Also probably settling on 2x 4-wide Z1 vdevs, since it gives that extra performance push, which is nice for 40GbE, and also nudges the usable capacity over a nice round 20TiB, instead of 19.5TiB (yeah, Z2 has slightly more capacity overhead; even with Z1 it's not a direct "1 drive of capacity").
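
    A minimal sketch of a fio run with that flag (path and sizes are hypothetical; note that O_DIRECT on a ZFS dataset needs a reasonably recent OpenZFS release to be honoured):

      fio --name=seqread --filename=/mnt/tank/fiotest --rw=read --bs=1M \
          --size=8G --ioengine=libaio --iodepth=16 --direct=1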

  • @neccros007
    @neccros007 3 months ago

    Inspired by the Supermicro mini server iXsystems sells, I did my own DIY version. Based on a Supermicro X11SCL-iF ITX motherboard, an E-2236 6-core/12-thread Xeon, 32GB of ECC RAM, a 32GB DOM for boot, a 256GB NVMe drive for apps, an LSI 9300-8i HBA with 4 Seagate 6TB 12Gb SAS drives running in RAIDZ1, and 2x SSDs (1 for Windows, for certain things, 1 just blank for whatever), all in a Supermicro mini-tower chassis.

  • @sethperry6616
    @sethperry6616 3 months ago

    I have a 4U Unraid NAS, and a 2U TrueNAS NAS. Unraid for bulk storage, and TrueNAS using 15k drives for speed.

  • @DanielMiller82
    @DanielMiller82 3 months ago

    In my situation (a Plex server with 2-3 simultaneous streams at most, wanting to spend the least on power and hardware), it always comes back to using a single large drive (~20TB) with multiple external manual backups.

  • @KiraSlith
    @KiraSlith 3 months ago

    There's a reason Intel and company always charged extra for mixed vdevs and RAID 10 on their RAID controllers; mixed storage pools are absolute performance and redundancy beasts. The true optimum is two 3-drive vdevs in RAIDz1 for workstations, and two 4-drive vdevs in RAIDz2 for industrial appliances.

  • @DaemonForce
    @DaemonForce 3 months ago

    So far I use Windows Server for my personal NAS, as it's only a JBOD system and I don't really need anything, not even a GUI.
    16TB for long-term archive storage. 4TB for scratch/ingest. Boots from a 16GB flash drive.
    When I finally need a production server, I'm dragging out the antique FX rack, loading up the sole 5.25" bay with an 8x SATA cage filled with 512GB SSDs, and putting the contents in RAID 10 for connection over 10GbE SFP. Anything more is a waste of the bay, and in the future I'll just opti-connect some 8x NVMe bay to one of the local 5.25" bays on my workstation. Fast af boiiiiii!

  • @mariograterol6489
    @mariograterol6489 3 months ago

    I recently built my cheap NAS with 14-year-old hardware: a socket 1156 i5-650 / 16GB DDR3 1333MHz RAM / 4x 2TB HDDs / 1x 120GB SSD, and the pool is set up as 2 mirror vdevs. I use the NAS just for my personal backups and backing up projects for my work; I don't want to use my NAS for streaming or editing video, I just want to back up my stuff. And I decided on that config because I read a lot about that recommendation for these cases, and I know I get half the storage, but I prefer resilience over performance; it's safer and easier to replace an HDD if you lose a disk. But now, with this video, I'm putting my decision in doubt, LOL. What is your recommendation for that kind of scenario? BTW, thanks for the illustrative explanation.

  • @wolvnmastr
    @wolvnmastr 3 months ago

    I am currently using Unraid with XFS and drive pooling with a single parity drive. I started this way because I originally had a lot of different-sized drives. Due to a flood and rebuilding from insurance money, I could probably use ZFS in Unraid. (I had an off-site backup of my data.)

  • @clintcolombin
    @clintcolombin 3 months ago

    I'm trying to build a replacement for a single-drive NAS. Looks like 3 drives with parity will be my go-to.

  • @kristof9497
    @kristof9497 3 months ago

    Thanks

  • @marksterling8286
    @marksterling8286 3 months ago

    I run a 5-bay Synology using RAID 5 with 8TB drives, losing 1 drive's capacity to the RAID. I use Cloud Sync to synchronise my volume in real time to Backblaze. Then overnight I back up to a second Synology NAS at my second house 50 miles away over a VPN tunnel; that NAS uses RAID 5 over 4 disks. I also keep a hot-swap disk and sled unplugged next to each NAS; they are the identical model and size, so I can swap in a replacement as soon as a disk fails. Both NAS devices have 2x 2TB cache SSDs.

  • @Paberu85
    @Paberu85 2 months ago

    RAID 0 is considered a universal evil that should be avoided at all costs, and while there's no redundancy or possibility of failure recovery, consider the fact that the number of disk accesses for read/write operations is divided by the number of disks in the stripe. Basically, writing 1GB of data to a 3-disk RAID 0 array means each drive writes about 333MB, effectively reducing overall per-drive wear.

  • @TazzSmk
    @TazzSmk A month ago

    The most important thing is you can't add another drive to expand a RaidZ pool, so it's ultimately a waste of money to either buy many large-capacity drives upfront, or replace all drives with bigger ones, wasting money again (and no, uneven vdevs won't help either).
    It's also debatable whether to use 8 disks as two vdevs with RaidZ1, where the likelihood of failure will be way higher than one vdev with RaidZ2.
    ZFS is such year-2000 technology, not meant for casual NAS systems...

  • @BrendanxP
    @BrendanxP 2 months ago

    I run OMV in a Proxmox VM with direct access to 2 12TB drives, which are mirrored. In the near future I would like to add a 22~24TB drive and mirror it to the two 12TB drives, which will be striped. Is this a good setup idea? And how would I go about setting this up with the data still on there? Any help and pointers are appreciated!

  • @06racing
    @06racing 3 months ago +1

    Can you explain how to network a NAS?
    I just bought a QNAP direct-attached storage device because I didn't want to get my NAS hacked and open to the Internet,
    with the ability to attach it to a NAS down the road.

  • @SafetySheepRnD
    @SafetySheepRnD 3 months ago +1

    Hey man, I think you need to double-check your test methodology, or maybe you swapped your read and write data in the charts, because it doesn't make sense for writes to be faster than reads. Writes require you to compute parity, which you can only do after the data is written and checksummed, so they should take longer than just reading the data from the disk. I think it's likely there's something like ARC enabled for write caching that is not giving you raw disk performance, and you are really seeing memory write speed, just like ARC gave you memory read speed.
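
    One way to take caching out of the picture when benchmarking, as a sketch assuming a dedicated hypothetical dataset tank/bench (these settings hurt normal performance, so scope them to the test dataset only):

      # don't cache file data in ARC, and force synchronous writes
      zfs create tank/bench
      zfs set primarycache=metadata tank/bench
      zfs set sync=always tank/bench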

  • @acenio654
    @acenio654 2 months ago

    I think the minimum per vdev, if you care about the data, is raidz2, because then you still have some redundancy while resilvering a drive.

  • @JamesTenniswood
    @JamesTenniswood 3 months ago

    I use a few SSDs and mergerfs (plus a daily backup).

  • @nwallace
    @nwallace 3 months ago +2

    You mentioned putting links to all those other resources in the description but I don't see them. I would appreciate those links if you get to it.

  • @pharmdiddy5120
    @pharmdiddy5120 3 months ago

    Great video! I'd love more like it; hey, tbh I watch 'em all anyway. How do SATA SSDs stack up to these as far as cost/longevity?