TrueNAS Tutorial: Expanding Your ZFS RAIDz VDEV with a Single Drive

  • Published: 28 Sep 2024

Comments • 66

  • @TechnoTim
    @TechnoTim 15 days ago +52

    Woohoo it's finally here (almost)!

    • @Yuriel1981
      @Yuriel1981 15 days ago

      I know, right! This was the only thing I considered a compromise when I chose TrueNAS SCALE as my main NAS OS. Now all good, well mostly, lol. Still waiting on a custom app setup video since everything is moving to Docker.

  • @KenOttaviano
    @KenOttaviano 15 days ago

    This is AWESOME. I will definitely make use of this down the road when I inevitably run out of space.

  • @vidx9
    @vidx9 15 days ago

    This is a super long awaited feature!

  • @Robert-Fisher-g6c
    @Robert-Fisher-g6c 15 days ago +3

    Nice feature. Will they add the option to create a raid Z1 with only 2 drives so you can add drives as needed in the future?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  15 days ago +3

      no

    • @yossimaslaton4386
      @yossimaslaton4386 15 days ago

      The only way to get redundancy with 2 drives is to mirror the data.

    • @brandonchappell1535
      @brandonchappell1535 15 days ago

      I could have sworn I saw this in a feature list recently, but I couldn't find it just now, so I could be wrong too; take it with a grain of salt.

  • @DailyTechNews.mp4
    @DailyTechNews.mp4 15 days ago

    This is GREAT 👍👍

  • @wildmanjeff42
    @wildmanjeff42 15 days ago

    Thanks for the video! Is this going to be a TrueNAS SCALE-only feature, or will it be worked into Core? I have not changed over yet and use my TrueNAS for storage only, but I will when I need to.
    I do have Core loaded on a trial machine to see how it works, and it seems great so far.

  • @peterworsley
    @peterworsley 15 days ago +1

    Where or how can you use the zpool command in SCALE? It doesn't work from the shell on the web page.
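
    One common workaround (my own suggestion, not from the video) is to enable the SSH service and run zpool from a real shell session with sudo, for example:

      # Assumes SSH is enabled under System Settings > Services and the
      # account has sudo rights; "tank" is a placeholder pool name.
      ssh admin@truenas.local
      sudo zpool status tank
      sudo zpool list -v tank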

  • @raymacrohon1137
    @raymacrohon1137 5 days ago

    Hi Lawrence. I'm new to the NAS world and just copied a DIY NAS build from one of the YouTubers. Installed TrueNAS SCALE and done. I just want to ask for help with my problem, because I can tell from watching only two of your videos that you are a master when it comes to this stuff.
    See, I have video files on an external 4TB SSD that I want to copy to or store on my NAS. The question is, how do I copy those files to the NAS drive without going through the internet? Like, is there a way to just connect the external drive to the NAS server and do a copy and paste? Or directly connect a laptop and the NAS server using one of the NIC ports? Please help me or just give me an idea of what to do. I would greatly appreciate your help. Thank you in advance, Lawrence.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  4 days ago +1

      There is not any easy way to do that in TrueNAS

    • @raymacrohon1137
      @raymacrohon1137 4 days ago

      @LAWRENCESYSTEMS Yay! That's so sad. I have almost 10TB of movie files to move to my NAS drive. This is gonna hurt my internet data cap if I move it the conventional way.

  • @Yuriel1981
    @Yuriel1981 15 days ago

    Can you still swap a drive in a mirror, resilver, swap the other, and then extend as well?
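
    For context, the classic way to grow a mirror is to replace each side in turn and then expand; a generic ZFS sketch (pool and disk names are placeholders, and the TrueNAS GUI has equivalents for these steps):

      # Replace one side of the mirror with a larger disk, wait for resilver
      zpool replace tank sda sdc
      zpool status tank              # wait until the resilver completes
      # Replace the other side the same way
      zpool replace tank sdb sdd
      # Let the vdev grow into the new space
      zpool set autoexpand=on tank
      zpool online -e tank sdc sdd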

  • @RobbieE56
    @RobbieE56 14 days ago

    My NAS PC crashed (5x 20TB HDDs currently set up in a VDEV in RAIDZ1) while it was at the "waiting for expansion to start" step. Once it rebooted, it now shows my VDEV as 6 wide, but the pool size didn't expand. From what I can tell, the data that was previously on the 5-drive VDEV is fine and intact, but now I'm not sure how to actually get the pool to extend to the new drive. Any tips/recommendations?
    Thanks for the videos!

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  13 days ago

      Not an issue I have encountered, so I have not tested it.

    • @RobbieE56
      @RobbieE56 13 days ago

      @LAWRENCESYSTEMS Weirdly enough, I had a scrub that took about 3 days, and once that finished the pool ended up expanding. No idea if the scrub did anything, but it's working now!

  • @RumenBlack
    @RumenBlack 14 days ago

    How would this work if you have multiple vdevs in a pool?
    Would you have to add at least 1 disk to each vdev in the pool, or can you still expand just a single vdev with a single disk?

  • @russellbaker4256
    @russellbaker4256 15 days ago

    Is ZFS able to recover if you reboot (say) in the middle of the vdev extension process?

  • @Raidflex
    @Raidflex 15 days ago

    Will this be possible on TrueNAS Core?

  • @frederichardy8844
    @frederichardy8844 12 days ago

    Expand or extend: they should pick one term instead of using two for the same thing. Since you can also expand a vdev (without adding a drive) after replacing the last drive with a higher-capacity one, it is confusing...

  • @arubial1229
    @arubial1229 15 days ago +3

    glitchy videos lately

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  15 days ago +1

      I have no idea how to fix that because they are not in the originals

  • @GenericUser833
    @GenericUser833 15 days ago +4

    I've said this on the BSD video too. While I'm happy some form of VDEV expansion is finally here, it's half-baked at best. They should be embarrassed to ship software in such a state.

    • @dltmap
      @dltmap 15 days ago +2

      Why do you think it is half baked?

    • @dlsniper
      @dlsniper 15 days ago

      Can you please provide more information about this?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  15 days ago +6

      The lack of details from GenericUser833 makes me think they might not be a credible source.

    • @GenericUser833
      @GenericUser833 15 days ago +1

      @LAWRENCESYSTEMS Just because you all didn't watch and/or comprehend the BSD foundation video does not make me a non-credible source. One issue is that the system does not see the full usable space (it's close, but not the full amount). The other you said yourself: only new data is written with the new parity across 4 disks. This is why I'm paid more than you. I had doubts about you years ago, but you've just proven to me that my read on you was correct. Very arrogant and confident in that. I'll add this trash channel back to my blocklist.

    • @whosscruffylookin95
      @whosscruffylookin95 15 days ago +2

      @GenericUser833 bro, who hurt you?

  • @moseph_v3904
    @moseph_v3904 15 days ago +5

    Glad this feature is finally arriving soon! And thank you for the demo. You always strike a good balance of giving enough information and detail without piling on way too much, lol.
    I think it's worth pointing out that if you want to rebalance your pool using that script or a similar method, and you have snapshots on your pool, make sure you have enough free space to accommodate the effective doubling of your used data! And don't forget not to let the total used data exceed 90% of total capacity (I've heard some people say 80% and others say you can go as high as 95%; I try to avoid going over 80%, but I consider 90% to be critical).
    If you don't have enough free capacity, you can delete any/all snapshots to ensure your pool isn't holding onto the old data. But I would strongly suggest you have your data backed up first (you should be backing up regularly anyway). The same applies to your backup target as well. One exception might be if you have dedup enabled, though I've never tested that myself.
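
    A quick way to check headroom and snapshot usage before rebalancing (a generic sketch, not commands from the video; "tank" is a placeholder pool name):

      # Pool size, allocation, and how full it is
      zpool list tank
      # Space held by snapshots, per dataset
      zfs list -r -o name,used,usedbysnapshots,avail tank
      # Individual snapshots, if you need to prune some
      zfs list -t snapshot -r tank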

  • @AlexKidd4Fun
    @AlexKidd4Fun 15 days ago +3

    4:40 Tom, I watched the presentation video and I think your simplification isn't exactly right. The data blocks and parity blocks are all being shifted around so they span the new stripe width; what isn't happening, though, is the parity being regenerated/recalculated. The new drive has a higher write count because it simply doesn't have any existing data to read for the process; it's only receiving blocks it didn't have before. Performance suffers slightly after expansion because data with existing parity doesn't benefit from the new ratio of 3 data + 1 parity until you run the rebalance script. What the rebalance script does is read the data in and rewrite it out, forcing parity to be regenerated and gaining the benefit of the new ratio of, say, 4 data + 1 parity. I hope that's not clear as mud! Thanks so much for the informative videos! 👍🙂👍
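
    For anyone curious, the rebalance idea boils down to rewriting each file so its blocks land at the new data:parity ratio. A minimal sketch of that approach (not the actual script mentioned in the video, and missing its safety checks; the dataset path is a placeholder):

      # WARNING: illustrative only; real rebalance scripts also verify copies
      # and handle hard links, snapshots, and other edge cases.
      find /mnt/tank/dataset -type f -print0 |
        while IFS= read -r -d '' f; do
          # copy the file so its blocks are rewritten at the new stripe width,
          # then rename the copy over the original
          cp -a "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
        done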

  • @gedavids84
    @gedavids84 15 days ago +2

    Still no capability to remove a drive, right?

  • @philosoaper
    @philosoaper 15 days ago +2

    I kinda want to do this and extend my 7x18TB raidz2 mechanical drive pool and run the in-place rebalancing...but I'd probably die of old age before it's done.

    • @RobbieE56
      @RobbieE56 14 days ago

      No kidding, lol. The scrub task has been running on my 5x20TB pool for two days now and is at 67%.

  • @minigpracing3068
    @minigpracing3068 15 days ago +2

    Thanks, this is something that might be important to me down the road.

  • @rchrstphr-smp1043
    @rchrstphr-smp1043 11 hours ago

    What about memory usage? Has it increased? Does the "x memory per x terabytes" rule still apply with this expansion? Do you notice any change?

  • @RockTheCage55
    @RockTheCage55 15 days ago +1

    Awesome, thanks for going over all of this. I didn't even know this change was on the docket. It makes ZFS an alternative to either Unraid or SHR (Synology Hybrid Raid). Granted, not 'as' flexible, but flexible enough.

  • @chaddr18
    @chaddr18 2 days ago

    I have RAIDZ2 with 4 disks; I added 2 more and can't expand.

  • @StormasM
    @StormasM 9 days ago

    I faced a problem: this new feature calculates the pool capacity incorrectly. In my case, I have 6 x 2.73TiB HDDs. If I use all 6 disks in RAIDZ2, I get a 10.77TiB pool. But if I create RAIDZ2 from 4 disks, I get 5.16TiB. Adding +1 HDD expands the space to 6.48TiB. Adding another HDD expands the pool to 7.8TiB. That's nearly 3TiB of pool space missing. In your video I see a similar problem.
    In the video, RAIDZ1 Expansion_Demo_Pool:
    3 HDDs x 1.75TiB = 3.35TiB
    Add one 1.75TiB HDD = 4.54TiB (missing 0.56TiB)
    Add another 1.75TiB HDD = 5.69TiB (missing 1.12TiB)
    I bet if the RAID was created from scratch, its capacity would be about 7TiB.
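
    For reference, a back-of-the-envelope check of the nominal sizes (my own arithmetic, ignoring metadata overhead; as I understand it, the shortfall after expansion is a known space-reporting quirk where the pool keeps accounting at the original data:parity ratio):

      # Nominal raidz1 usable space = (drives - 1 parity) x drive size
      echo "3-wide: $(echo '(3-1)*1.75' | bc) TiB"   # vs 3.35 TiB reported
      echo "4-wide: $(echo '(4-1)*1.75' | bc) TiB"   # vs 4.54 TiB reported
      echo "5-wide: $(echo '(5-1)*1.75' | bc) TiB"   # vs 5.69 TiB reported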

  • @LucaCoronica
    @LucaCoronica 8 days ago

    Also try BTRFS; in terms of flexibility I think it is more interesting!

  • @Bamxcore1
    @Bamxcore1 8 days ago

    Would you be able to swap out drives to bigger sized ones in the future using this?

  • @BrandonFong
    @BrandonFong 8 days ago

    Started my first TrueNAS build and I'm super glad this feature is coming.

  • @AceBoy2099
    @AceBoy2099 12 days ago

    I'm looking into moving my TrueNAS from a VM to bare metal. Is your older video on updating from Core to SCALE still relevant, or could it be updated? Is it the same when swapping systems too? Should I start fresh and copy the data, or move the data drives over and import? Also, what's the best suggestion for "expanding the Z value" of a pool? For example, currently I have a 3-disk Z1; what's the best way to move/copy EVERYTHING (data, snapshots, etc.) to a new, say, Z2 pool?

  • @frederichardy8844
    @frederichardy8844 12 days ago

    It's not fast... And there's no warning when you add a drive while the vdev is already expanding. Just a very quick test and it worked, but I will do a more serious test trying to add 3 drives, and if that's possible it would be nice to have it in the GUI (select multiple drives to add).

  • @mnoxman
    @mnoxman 15 days ago

    "Future's so bright, I got to wear shades."
    Reminds me of having an LVM mirror under AIX in the mid 90s in order to expand a rootvg mirror (800MB to 2GB). I also remember working with Veritas Storage Foundation (VxFS) to do the same thing. You added a disk to the VG, chose a disk to migrate to the new disk, and "evacuated" the PE/PPs from that drive to the new drive. AIX had the same limitation where the rootvg did not have any free PPs until both disks were the same. Veritas, though, would gladly let you make a new drive out of the unused PEs on that drive. What's old is new again.

  • @fluffyfloof9267
    @fluffyfloof9267 15 days ago

    Hi! I see you're using the watch tool, probably to visualise the command line for this video. However, zpool status as well as zpool iostat accepts a numerical parameter at the end, which defines the delay between each output. It even accepts fractions like 0.5 and does the same thing as watch (well, it's not refreshing the page, it's just appending new output).
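
    In other words, something like this should work ("tank" is a placeholder pool name):

      # Reprint status every 2 seconds instead of wrapping it in watch
      zpool status tank 2
      # iostat takes the same trailing interval, fractions included
      zpool iostat -v tank 0.5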

  • @arturk3810
    @arturk3810 15 days ago

    Yes, a long-awaited update. Thank you. More, please.

  • @iONsJeepingAdventures
    @iONsJeepingAdventures 15 days ago

    Have an 8-drive ZFS Plex server; just lost a drive, so the server is down. Hope it's a fast swap and it's back up.

  • @39zack
    @39zack 15 days ago

    🫨cat6
    😺😺😺😺😺😺

  • @Valerius7777
    @Valerius7777 15 days ago

    Awesome 👏

  • @brankojurisa6613
    @brankojurisa6613 14 days ago

    Why didn't they implement going from a 2-drive mirror to, let's say, a 3-drive RAIDZ1? It is the same principle: rearranging data and parity onto 3 drives from 2 identical ones. It is copying data to the new drive and deleting it from the existing ones. It is not a heavier task than going from a 3- to a 4-drive RAIDZ1.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  13 days ago +1

      Mirrors and RAIDz are too different for that to work, according to the people who write the code.

  • @intertan
    @intertan 14 days ago

    I know it's still in beta, but I'm wondering what performance hit one might get if you slowly build up your storage capacity vs. all at once. Start with a 3-drive Z2 and max out at 9 drives vs. getting all 9 at once.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  13 days ago

      All data written prior to expansion maintains the stripe width at which it was written and therefore can only be read at the speed of the drives it was written to.

  • @pr0jectSkyneT
    @pr0jectSkyneT 14 days ago

    Will this feature also be available for TrueNAS Core?