Taking a look at RAIDZ expansion

  • Published: 3 Dec 2024
  • In this video, I explore expanding ZFS pools by adding a single drive to a RAIDZ vdev. This feature makes growing a ZFS pool much easier than before. I look into the steps involved, the underlying mechanics, and the performance during the expansion.
    Here is a PDF that goes over how ZFS expansion works under the hood: openzfs.org/w/...
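
    For reference, a minimal sketch of the command involved, assuming OpenZFS 2.3 or newer and placeholder pool, vdev, and disk names (tank, raidz1-0, /dev/sdd):

      # expansion starts by attaching a new disk to an existing raidz vdev
      zpool attach tank raidz1-0 /dev/sdd
      # progress is then reported by the pool status output
      zpool status tank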

Comments • 48

  • @Skukkix23
    @Skukkix23 16 hours ago +2

    Amazing. I remember an interview with someone involved in this about 3 years ago who said: "it's a feature we're working on, but math is hard". I guess it's hella complicated to make sure data integrity is guaranteed. So glad to see this.

  • @jimscomments
    @jimscomments 9 hours ago

    As always, a great video. Nice 11-minute overview of the new feature with an explanation of adding a drive to the pool.

  • @alpine7840
    @alpine7840 1 day ago +2

    Wonderful video bro! Thank you!

  • @traolin5877
    @traolin5877 1 day ago +1

    Love this channel. Happy holidays

  • @millenniumpilot
    @millenniumpilot 1 day ago +2

    Thanks!

  • @ewenchan1239
    @ewenchan1239 20 hours ago +1

    Great video!

  • @gardnerjp1
    @gardnerjp1 1 day ago +2

    Great video! Thanks!

  • @parl-88
    @parl-88 1 day ago

    Great Video! Thanks for making it.

  • @MaplePengiun
    @MaplePengiun 1 day ago

    Thanks for the great recap! Easy and to the point.
    I know one option for home users before this was doing mirrors of larger disks and adding additional mirrors later. I did that with a pair of 14 TB drives and then added another pair of 14s, but the data didn't redistribute, so now I have to think about how I'm going to shuffle everything around to get the best performance. I'll definitely be adding a pair of SSDs to the pool as a mirrored special/metadata vdev because I know that can help a ton.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago

      That's one disadvantage of ZFS: it doesn't typically move data once it's been written to disk. I have seen mirrors recommended as a way to expand ZFS easily, but the disadvantage I see is that you only get about 50% usable space, and users that want easy expansion often also want the most usable space on their array.
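
      For reference, a sketch of the mirror-based expansion described above, with placeholder pool and disk names:

        # adds a second mirror vdev; existing data stays where it is,
        # but new writes are striped across both mirrors
        zpool add tank mirror /dev/sdc /dev/sdd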

  • @lifefromscratch2818
    @lifefromscratch2818 1 day ago +1

    Thanks for this explanation. You answered a couple questions that I hadn't gotten answered from some other videos I've seen on it.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago

      Glad I was able to cover more than others.

    • @lifefromscratch2818
      @lifefromscratch2818 1 day ago +1

      @ElectronicsWizardry I always enjoy that you cover a more technical explanation of what's happening under the hood rather than just showing how to do it from the UI. At the end of the day I'm just an amateur UI user, but understanding how it works behind the scenes allows me to make better choices overall.

  • @heckyes
    @heckyes 1 day ago +8

    Amazing. I wonder how long this will take to land in a proxmox update.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago +9

      Proxmox 8.3 came out about a week ago with ZFS 2.2.6, which doesn't have RAIDZ expansion (2.3 and newer add it). I'd guess it will be a few months to half a year until v8.4, which will likely ship ZFS v2.3 or newer with RAIDZ expansion.
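
      For anyone checking their own node, the installed OpenZFS version can be confirmed with:

        zfs version    # prints the userland tools and kernel module versions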

    • @heckyes
      @heckyes 1 day ago +1

      @@ElectronicsWizardry You're a wealth of information. Love the channel. If I wasn't broke I'd absolutely throw you a few dollars. Hoping to in 2025!
      Thanks for all you do.

    • @FaridAbbasbayli
      @FaridAbbasbayli 1 day ago +1

      @@ElectronicsWizardry I have just gotten myself an Aoostar WTR as my first NAS and I was really worried about buying only 3 drives (for budget reasons) for the available 4 bays. Now, the fact that I will be able to expand the array down the line puts my mind at ease, even if I can't do it right now.

  • @harrythehandyman
    @harrythehandyman 1 day ago +1

    RAID-Z expansion is probably going to be more practical in an all-SSD pool, as large enterprise SSDs are still very expensive. For spinning rust, saving up and adding an entire vdev is probably more beneficial in terms of speed and performance.

  • @frederichardy8844
    @frederichardy8844 13 hours ago +1

    It can be VERY slow. It's now Dec 3 14:00:00 2024 and my zpool status is:

    expand: expansion of raidz2-0 in progress since Mon Nov 25 18:07:49 2024
    62.4T / 117T copied at 96.7M/s, 53.53% done, 6 days 19:04:16 to go

    OK, it's a big vdev (adding 1 drive to 8×18TB 12G SAS drives), but I hope they are working on some optimizations!
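
    As a rough sanity check on that ETA (treating T as TiB and M/s as MiB/s), the remaining ~54.6 TiB at 96.7 MiB/s works out to roughly the reported 6-7 days:

      echo "(117 - 62.4) * 1024 * 1024 / 96.7 / 86400" | bc -l    # ≈ 6.85 days remaining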

  • @ambient00101
    @ambient00101 6 hours ago

    Great video! Thank you. Does having a log, special or cache volume have any impact on expanding a pool?

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 hours ago

      Other devices in the pool like a log, special, or cache vdev shouldn't affect how this works, but I haven't run tests myself to see if there are issues.
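
      For concreteness, those support vdevs sit alongside the raidz data vdev and are added separately, e.g. (device names are placeholders):

        zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1   # mirrored metadata/special vdev
        zpool add tank log /dev/nvme2n1                           # SLOG device
        zpool add tank cache /dev/nvme3n1                         # L2ARC cache device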

  • @warlordattack
    @warlordattack 1 day ago +1

    Nice video, could you do one about full rebalancing after adding a drive please :)

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago +1

      Just to be clear, you want a video going over a near-full ZFS array, adding a disk, and dealing with the overhead and non-optimal space usage? I'll think about how I can make a video about this, but it should just work and give you the new drive's worth of space. If you want the larger parity stripes, you can copy each file/folder off then back on to get a bit of space back after expansion.
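
      A sketch of that copy-off-and-back rebalance for a single directory, assuming enough free space and no snapshots pinning the old blocks (paths are placeholders):

        # rewriting the files makes them use the new, wider parity stripes
        cp -a /tank/media/photos /tank/media/photos.rewrite
        rm -rf /tank/media/photos
        mv /tank/media/photos.rewrite /tank/media/photos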

  • @homehome4822
    @homehome4822 1 day ago +3

    Have you tried fast dedup?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago +3

      Nope. Let me give it a shot and possibly make a video on it in the future. Thanks for the idea.

  • @monish05m
    @monish05m 6 hours ago

    My old SMR RAIDZ1 3x4TB Seagate took 2 days 18 hours to add 1 additional drive; my all-CMR RAIDZ1 3x4TB Seagate took 2 hours adding 1 drive.

  • @dlfzstuff4343
    @dlfzstuff4343 1 day ago +1

    Has this feature been added to the latest TrueNAS Scale? I need to add 2 new drives to my 4. Great video!!!! Thanks

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago +1

      Yeah, it's in TrueNAS 24.10 Electric Eel, and should be an upgrade within the TrueNAS settings. Then you can upgrade your ZFS pools and add the drives.
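
      Roughly what that pool upgrade corresponds to on the command line, assuming a pool named tank and that the feature flag is called raidz_expansion:

        zpool upgrade tank                        # enable newly supported feature flags
        zpool get feature@raidz_expansion tank    # should now report "enabled"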

  • @jasonmako343
    @jasonmako343 1 day ago +1

    It may be fine to use with small drives and datasets, but I found it very impractical with large drives and lots of data. I have a RAIDZ2 with 6x20TB drives with 70TB used. I purchased 3 more drives to expand it to 9x20TB, but after it took days to just add one of the drives I realized it was a waste of time. It was going to take weeks to add three drives and then re-write all the data. And if something goes awry during the process you could lose all your data. How well does it handle drive failures during an expansion operation? I don't know. In the end, I decided to ZFS replicate the data to another TrueNAS, destroy the pool, make a new pool with all the drives, and replicate the data back. Sadly this also takes a long time. It took me 6 days to do a full ZFS replication on my network, and it's going to take that long to restore it. It sucks either way!

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago

      Thanks for sharing your experience. HDDs have gotten larger in capacity faster than they have increased in performance, so rebuilds and expansions take longer, especially since ZFS doesn't max out an HDD's sequential speed with its expansion operations. I didn't test a drive failure during an expansion, but I can give that a shot if you want. Multiple weeks seems way too long to me for an expansion or rebuild to take.
      Dual-actuator drives or another technology to make HDDs faster are almost necessary with 20TB+ drives, given how long they take to rebuild these days.

  • @JonDisnard
    @JonDisnard 11 hours ago

    ZFS sucks, and so do many other modern volume-aware filesystems. Adding a disk is one thing, and in general expansion is boring. The evil trick is shrinking and reducing storage, which simply doesn't exist in ZFS. Nope, just go ahead and take a backup, nuke the whole thing, and start all over with smaller storage. This is not a quality of "enterprise" storage solutions.

    • @totoritko
      @totoritko 10 hours ago

      This used to be a problem, but ZFS nowadays has top-level vdev removal as well, so you *can* shrink pool capacity, if you really need to.

    • @JonDisnard
      @JonDisnard 9 hours ago

      @totoritko that's great, thanks. I didn't know because I've stopped following ZFS in recent years. However, if the pool can shrink, can the actual ZFS filesystem volume shrink?

    • @totoritko
      @totoritko 9 hours ago

      @@JonDisnard Yes, obviously assuming that at the end you have enough capacity to hold all the data actually stored on the pool. Say you start out with an 8TB pool holding 2TB of data and you want to shrink it to 4TB. Well, connect a new 4TB drive, add a top-level vdev with the 4TB disk, and then remove the old vdev where the 8TB was. ZFS will shuffle the data around to the remaining 4TB drive and then remove the 8TB top-level vdev. I think there's a caveat that you can't do individual leaf vdev replacements in, say, a raidz. So if e.g. you have a 4x4TB raidz1 and want to reduce each drive from 4TB to 2TB, you can't just replace the drives one by one in place. Rather, you need to hook up all the smaller 4x2TB disks, make a new raidz1 out of those, and then remove the old raidz1 made up of 4x4TB.
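
      A sketch of that shrink procedure for the simple single-disk case, with placeholder pool and device names:

        zpool add tank /dev/small4tb      # add the new, smaller top-level vdev
        zpool remove tank /dev/old8tb     # evacuate data off the old vdev and remove it
        zpool status tank                 # shows the removal/evacuation progress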

  • @karlioskarlios5934
    @karlioskarlios5934 1 day ago

    Why would anyone choose ZFS for a home server? I recommend ext4 and SnapRAID as better for a casual user, or maybe RAID5. ZFS is resource-hungry, complex, rigid, and destroys consumer SSDs.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago +4

      I'm not sure about others, but I have learned all the ZFS quirks, and I like the feature set provided: easy snapshots and other features like send/receive. I think ZFS can also be simpler, as it replaces the RAID software/hardware + volume manager + filesystem of a more traditional setup. I see how SnapRAID + ext4 can be nice for mixed-drive setups, but I have found it a bit more complex to set up, and it limits you to one drive's performance. I should look at how many writes ZFS does, but I personally haven't seen a major difference compared to other parity RAID solutions.

    • @totoritko
      @totoritko 1 day ago

      What's a "casual user" is up for debate. If they don't know a drive from a drinks coaster, then sure, ZFS is overkill. But if they know a bit about storage and data management, ZFS has great utility for home users. The rest of what you wrote is just a bunch of urban legends and "old wives' tales". "ZFS is high resource consumption" - false. It consumes no more resources than any large-data storage system (so don't use it in your embedded micro controller, but it'll be quite happy with systems with 4GB or more of RAM). "complex" - false, it's no more complex than any filesystem + LVM combo. In fact, one might argue it's simpler, because there are fewer configuration pitfalls. "rigid" - false, see topic of the current video. It used to be somewhat true, but vdev removal has been available for quite a while now and it has insane flexibility when it comes to data management. "And destroy consumer SSDs" - that's just a weapons-grade falsehood. Consumer SSDs are just fine under ZFS. If anything, its COW nature does far better wear-leveling than traditional filesystems like Ext4 do.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 day ago

      I will also argue that many homelab users have a goal of learning different storage methods they may use in other arrays. I have seen many users pick a solution so they can learn more for work/resume experience, so they may not be picking the best solution but rather one that lets them learn a specific technology.

    • @leito1996
      @leito1996 1 day ago

      Easy, fast backups using send/receive are enough reason for me to use it, apart from it being necessary (unless you go full overkill mode and go Ceph) for VM replication between Proxmox nodes.
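
      A minimal sketch of that send/receive backup flow, with placeholder dataset names:

        zfs snapshot tank/vms@nightly
        zfs send tank/vms@nightly | zfs receive backup/vms
        # later runs can send just the delta between snapshots:
        # zfs send -i tank/vms@nightly tank/vms@nightly2 | zfs receive backup/vms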

    • @karlioskarlios5934
      @karlioskarlios5934 16 hours ago

      @@totoritko Isn't 1GB of RAM per 1TB of storage recommended anymore? Rigid isn't false; ZFS is a rigid filesystem and the new expandability option doesn't solve this at all. And ZFS eats consumer SSDs for breakfast (COW nature). Some people take up ZFS as their new religion, but it is not perfect, nor is it the best solution for all users.