Woohoo it's finally here (almost)!
I know right! This was the only thing I considered a compromise when I chose TrueNAS Scale as my main NAS OS. Now all good, well mostly lol. Still waiting on a Custom app setup video since everything is moving to Docker.
This is AWESOME. I will definitely make use of this down the road when I inevitably run out of space.
This is a super long awaited feature!
Nice feature. Will they add the option to create a RAIDZ1 with only 2 drives so you can add drives as needed in the future?
no
The only way to get redundancy with 2 drives is to mirror the data.
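For reference, a minimal command-line sketch of that (the pool and device names are placeholders, not from the video):

zpool create tank mirror /dev/sda /dev/sdb   # two-drive redundancy means a mirror vdev
zpool attach tank /dev/sda /dev/sdc          # a third disk can later be attached for a 3-way mirror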
I could have sworn I saw this in the feature list recently; I couldn't find it just now though, so I could be wrong too, take it with a grain of salt.
This is GREAT 👍👍
Thanks for the video! Is this going to be a TrueNAS Scale-only feature, or will it be worked into Core? I have not changed over yet, and use my TrueNAS for storage only, but will when I have need to.
I do have Core loaded on a trial machine to see how it works, and it seems great so far.
Where or how can you use the zpool command in Scale? It doesn't work from the shell on the web page.
you have to be root
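A quick sketch of what that looks like if the SCALE web shell drops you into the admin user rather than root ("tank" is just an example pool name):

sudo zpool status tank    # prefix zpool with sudo when not logged in as root
sudo zpool list -v        # show pool and per-vdev capacity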
Hi Lawrence. I'm new to the NAS world and just copied a DIY NAS build from one of the YouTubers. Installed TrueNAS Scale and done. I just want to ask for an answer to my problem because I can tell, from watching only two of your videos, that you are a master when it comes to this stuff.
See, I have video files on an external 4TB SSD that I want to copy to or store on my NAS. The question is, how do I copy those files to the NAS without going through the internet? Is there a way to just connect the external drive to the NAS server and do a copy and paste? Or directly connect a laptop and the NAS server using one of the NIC ports? Please help me or just give me an idea of what to do. I would greatly appreciate your help. Thank you in advance, Lawrence.
There is not any easy way to do that in TrueNAS
@@LAWRENCESYSTEMS Yay! That's so sad. I have almost 10TB of movie files to move to my NAS. This is gonna hurt my internet data cap when I move it the conventional way.
Can you still swap a drive in a mirror, resilver, swap the other, and then extend as well?
My NAS PC crashed (5 x 20TB HDDs currently set up in a RAIDZ1 vdev) while it was at the "waiting for expansion to start" step. Once it rebooted, it now shows my vdev is 6 wide, but the pool size didn't expand. From what I can tell, the data that was previously on the 5-drive vdev is fine and intact, but now I'm not sure how to actually get the pool to extend to the new drive. Any tips/recommendations?
Thanks for the videos!
Not an issue I have encountered so I have not tested.
@@LAWRENCESYSTEMS Weirdly enough I had a scrub that took about 3 days, and once that finished the pool ended up expanding. No idea if the scrub did anything, but it's working now!
How would this work if you have multiple vdevs in a pool?
Would you have to add at least 1 disk to each vdev in the pool or can you still expand a single disk in a single vdev?
Yes
Is ZFS able to recover if you reboot (say) in the middle of the vdev extension process?
I did not test, but yes it should.
Will this be possible on Truenas Core?
Not sure
Expand or extend: they should choose one term, not use two for the same thing. Since you can also expand a vdev (without adding a drive) after you replace the last drive with a higher-capacity one, it is confusing...
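The "expand without adding a drive" path mentioned above looks roughly like this (a hedged sketch; the pool and device names are placeholders):

zpool set autoexpand=on tank                 # let the vdev grow once every member is larger
zpool replace tank /dev/old1 /dev/bigger1    # repeat for each disk, waiting for resilver to finish
zpool online -e tank /dev/bigger1            # manual expand if autoexpand was left off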
glitchy videos lately
I have no idea how to fix that because the glitches are not in the originals.
I've said this on the BSD video too. While I'm happy some form of VDEV expansion is finally here, it's half-baked at best. They should be embarrassed to ship software in such a state.
Why do you think it is half baked?
Can you please provide more information about this?
The lack of details from GenericUser833 makes me think they might not be a credible source.
@@LAWRENCESYSTEMS Just because you all didn't watch and/or comprehend the BSD foundation video does not make me a non-credible source. The system does not see the full usable space is one issue (it's close, but not the full usable). The other you said yourself - only new data is written with the new parity across 4 disks. This is why I'm paid more than you. I had doubts about you years ago, but you've just proven to me that my read on you was correct. Very arrogant and confident in that. I'll add this trash channel back to my blocklist.
@@GenericUser833 bro who hurt you
Glad this feature is finally arriving soon! And thank you for the demo. You always strike a good balance of providing enough information and detail without going way overboard, lol
I think it's worth pointing out that if you want to rebalance your pool, and you do so using that script or another similar method, and you have snapshots on your pool, make sure you have enough free space to accommodate the effective doubling of your used data! And don't let the total used data exceed 90% of total capacity (I've heard some people say 80% and others say you can go as high as 95%; I myself try to avoid going over 80%, but I consider 90% to be critical).
If you don't have enough free capacity, you can delete any/all snapshots to ensure your pool isn't holding onto the old data. But I would strongly suggest you have your data backed up first (you should be backing up regularly anyway). The same applies to your backup target as well. One exception might be if you have dedup enabled, though I've never tested that myself.
4:40 Tom, I watched the presentation video and I think your simplification isn't exactly right. The data blocks and parity blocks are all being shifted around so they span the new stripe width; what isn't happening, though, is the parity being regenerated/recalculated. The new drive has a higher write count because it simply doesn't have any existing data to read for the process; it's only receiving blocks it didn't have before. Performance suffers slightly after expansion because data with existing parity doesn't benefit from the new ratio of 3 data + 1 parity until you run the rebalance script. What the rebalance script does is read the data in and rewrite it out, forcing the parity to be regenerated and gaining the benefit of the new ratio of, say, 4 data + 1 parity. I hope that's not clear as mud! Thanks so much for the informative videos! 👍🙂👍
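For anyone curious about the mechanics, the core idea behind that kind of rebalance is just copy-and-rewrite, since anything newly written lands at the post-expansion data:parity ratio. This is a simplified sketch only, not the actual script; the path is an example, and snapshots, hardlinks, and in-use files all need extra care:

find /mnt/tank/media -type f | while read -r f; do
  cp -a "$f" "$f.rebalance"        # the fresh copy is written at the new stripe width
  if cmp -s "$f" "$f.rebalance"; then
    mv "$f.rebalance" "$f"         # swap the rewritten copy in place of the original
  else
    rm "$f.rebalance"              # keep the original if the copy didn't verify
  fi
done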
True
Still no capability to remove a drive, right?
I kinda want to do this and extend my 7x18TB raidz2 mechanical drive pool and run the in-place rebalancing...but I'd probably die of old age before it's done.
No kidding lol. The scrub task has been running on my 5x20TB pool for two days now and is at 67%.
Thanks, this is something that might be important to me down the road.
What about memory usage? Has it increased? Does the x memory for x terabyte rule continue with this expansion? Do you notice any change?
Awesome, thanks for going over all of this. I didn't even know this change was on the docket. Makes ZFS an alternative to either Unraid or SHR (Synology Hybrid RAID). Granted, not 'as' flexible, but flexible enough.
I have a 4-disk RAIDZ2; I added 2 more disks and can't expand.
I faced a problem. This new feature calculates the pool capacity incorrectly. In my case, I have 6 x 2.73TiB HDDs: if I use all 6 disks in RAIDZ2, I get a 10.77TiB pool. But if I create a RAIDZ2 from 4 disks, I get 5.16TiB. Adding +1 HDD expands the space to 6.48TiB. Adding another HDD expands the pool to 7.8TiB. That's missing nearly 3TiB of pool space. In your video I see a similar problem.
In the video's RAIDZ1 Expansion_Demo_Pool:
3 HDD x 1.75TiB = 3.35TiB
Add 1 x 1.75TiB HDD = 4.54TiB (missing 0.56TiB)
Add another 1.75TiB HDD = 5.69TiB (missing 1.12TiB)
I bet if the RAID was created from scratch, its capacity would be about 7TiB.
Also try BTRFS; in terms of flexibility I think it is more interesting!
Would you be able to swap out drives for bigger ones in the future using this?
Started my first TrueNAS build and I'm super glad this feature is coming.
I'm looking into moving my TrueNAS from a VM to bare metal. Is your older video on updating from Core to Scale still relevant, or could it be updated? Is it the same when swapping systems too? Should I start fresh and copy data, or move the data drives over and import? Also, what's the best suggestion for "expanding the Z value" of a pool? For example, currently I have a 3-disk Z1; what's the best way to move/copy EVERYTHING (data, snapshots, etc.) to a new, say, Z2 pool?
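On the last question, one common approach is to replicate everything with a recursive snapshot and zfs send/receive. A rough sketch, where "oldpool" and "newpool" are placeholder names and the new Z2 pool already exists:

zfs snapshot -r oldpool@migrate                       # recursive snapshot of every dataset
zfs send -R oldpool@migrate | zfs recv -F newpool     # replicates data, snapshots, and properties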
It's not fast... And there's no warning when you add a drive while the vdev is already expanding. I just did a very quick test and it worked, but I will do a more serious test trying to add 3 drives, and if that's possible it would be nice to have it in the GUI (select multiple drives to add).
"Future's so bright, I got to wear shades."
Reminds me of having an LVM mirror under AIX in the mid 90s in order to expand a rootvg mirror (800MB to 2GB). I also remember working with Veritas Storage Foundation (VxFS) to do the same thing. You added a disk to the VG, chose a disk to migrate to the new disk, and "evacuated" the PE/PPs from that drive to the new drive. AIX had the same limitation where the rootvg did not have any free PPs until both disks were the same. Veritas, though, would gladly let you make a new drive out of the unused PEs on that drive. What's old is new again.
Hi! I see you're using the watch tool - probably to visualize the command line for this video. However, zpool status as well as iostat accept a numerical parameter at the end, which defines the delay between each output - it even accepts fractions like 0.5 - and it does the same as watch (well, it's not refreshing the page, it's just appending new output).
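For example (assuming a pool named "tank"; the trailing number is the interval in seconds):

zpool status tank 2            # reprint pool status every 2 seconds, appending output
zpool iostat -v tank 1         # per-vdev I/O statistics every second
watch -n 2 zpool status tank   # the watch equivalent, which redraws instead of appending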
Yes, a long-awaited update. Thank you. Please, more.
Have an 8-drive ZFS Plex server, just lost a drive, server down; hope it's a fast change and back up.
🫨cat6
😺😺😺😺😺😺
Awesome 👏
Why didn't they implement going from 2 mirrored drives to, let's say, a 3-drive RAIDZ1? It is the same principle: rearranging data and parity onto 3 drives from 2 identical ones. It is copying data to the new drive and deleting it from the existing ones. It is not a heavier task than going from 3 to 4 drives in RAIDZ1.
Mirrors and RAIDz are too different for that to work according to the people that write the code.
I know it's still in beta, but I'm wondering what performance hit one might get if you slowly build up your storage capacity vs. all at once. Start with a 3-drive Z2 and max out at 9 drives, vs. getting all 9 at once.
All data written prior to expansion maintains the stripe width at which it was written, and therefore can only be read at the speed of the drives it was written to.
Will this feature also be available for TrueNAS Core?
I have no idea.
@@LAWRENCESYSTEMS 😭