Woohoo it's finally here (almost)!
I know right! This was the only thing I considered a compromise when I chose TrueNAS SCALE as my main NAS OS. Now all good, well mostly lol. Still waiting on a custom app setup video since everything is moving to Docker.
This one feature has just saved me $3k. I thought I was going to have to upgrade all 6 drives in my vdev. Now I just need one. Life is good.
Nice
This is exactly why I stuck with mirrors
started my first truenas build and super glad this feature is coming
Glad this feature is finally arriving soon! And thank you for the demo. You always strike a good balance between giving enough information and detail without going way overboard, lol
I think it's worth pointing out: if you want to rebalance your pool using that script (or a similar method) and you have snapshots on your pool, make sure you have enough free space to accommodate the effective doubling of your used data! And don't let the total used data exceed 90% of total capacity (I've heard some people say 80% and others say you can go as high as 95%; I try to avoid going over 80%, but I consider 90% to be critical).
If you don't have enough free capacity, you can delete any/all snapshots so your pool isn't holding onto the old data, but I would strongly suggest you have your data backed up first. (You should be backing up regularly anyway.) The same applies to your backup target as well. One exception might be if you have dedup enabled, though I've never tested that myself.
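A quick way to sanity-check that before starting (just a sketch, assuming shell access and a pool named tank as a placeholder) is to ask ZFS for the allocation numbers directly:

sudo zpool list -o name,size,allocated,free,capacity tank     # raw pool-level usage
sudo zfs list -o name,used,available,usedbysnapshots -r tank  # per-dataset, incl. space held by snapshots

If capacity is already up around that 80-90% mark, clean up snapshots or expand first before running any rebalance.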
Great video, thank you Lawrence. However, I would recommend that before starting the ZFS vdev expansion, one should disable all scheduled scrub tasks (Data Protection page on TrueNAS SCALE) and re-enable them after the expansion. My expansion of a 7-drive vdev was interrupted by a scrub and it took a lot of painful hours to get it running again.
This is a game changer! It was definitely a sticking point that kept me from going with TrueNAS for storage in the past, because I like the flexibility of something like Unraid where I can add drives as I go. I will definitely be moving to TrueNAS SCALE on my next storage upgrade.
Thank you for the video. Unfortunately one of the 3 new drives I queued up to add was faulty and threw a wrench into the whole situation, but hey, that's the risk we take lol
This is a super long awaited feature!
Awesome, thanks for going over all of this. I didn't even know this change was on the docket. It makes ZFS an alternative to either Unraid or SHR (Synology Hybrid RAID). Granted, not 'as' flexible, but flexible enough.
"Future's so bright, I got to wear shades."
Reminds me of having an LVM mirror under AIX in the mid 90s in order to expand a rootvg mirror (800MB to 2GB). I also remember working with Veritas Storage Foundation (VxFS) to do the same thing. You added a disk to the VG, chose a disk to migrate to the new disk, and "evacuated" the PE/PPs from that drive to the new drive. AIX had the same limitation where the rootvg did not have any free PPs until both disks were the same. Veritas, though, would gladly let you make a new drive out of the unused PEs on that drive. What's old is new again.
Thanks, this is something that might be important to me down the road.
This is AWESOME. I will definitely make use of this down the road when I inevitably run out of space.
This is what i was waiting for😍
Yes, awaited update. Thank You. Please more.
This is awesome for a home user. I don't have tons and tons of disks; I have four and I would like to go to a fifth. This is extremely exciting.
4:40 Tom, I watched the presentation video and I think your simplification isn't exactly right. The data blocks and parity blocks are all being shifted around so they span the new stripe width; what isn't happening, though, is the parity being regenerated/recalculated. The new drive has a higher write count simply because it doesn't have any existing data to read for the process; it's only receiving blocks it didn't have before. Performance suffers slightly after expansion because the data with existing parity doesn't benefit from the new ratio of 3 data + 1 parity until you run the rebalance script. What the rebalance script does is read the data in and rewrite it out, forcing the parity to be regenerated and getting the benefit of the new ratio of, say, 4 data + 1 parity. I hope that's not clear as mud! Thanks so much for the informative videos! 👍🙂👍
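For anyone wondering what that rebalance actually does, here is a minimal sketch of the idea behind the commonly shared in-place rebalancing script (the path is a placeholder, and it assumes no snapshots holding onto the old blocks, no hard links you care about, and plenty of free space): every file gets rewritten so ZFS lays it out at the new data/parity ratio, then gets swapped back into place.

find /mnt/tank/media -type f | while read -r f; do
  cp -a "$f" "$f.rebalance.tmp"   # the rewrite forces the new stripe width and fresh parity
  mv "$f.rebalance.tmp" "$f"      # replace the original; the old blocks get freed
done

The real script adds checksum verification and attribute handling, so use that rather than this sketch on data you actually care about.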
True
Can't figure out how to use the script to rebalance, would somebody be so kind as to help?
I am in the same boat. I can't quite get it figured out.
Nice feature. Will they add the option to create a RAIDZ1 with only 2 drives so you can add drives as needed in the future?
no
The only way to get redundancy with 2 drives is to mirror the data.
I could've sworn I saw this in the feature list recently. I couldn't find it just now though, so I could be wrong too; take it with a grain of salt.
Hi! I see you're using the watch tool, probably to visualise the command line for this video. However, zpool status as well as zpool iostat accept a numerical parameter at the end which defines the delay between each output (it even accepts fractions like 0.5). It does the same as watch, except it's not refreshing the page, it's just appending new output.
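For example, a rough equivalent of the watch setup, assuming a pool named tank (placeholder), is just:

sudo zpool status -v tank 2    # re-prints the status every 2 seconds
sudo zpool iostat -v tank 2    # appends per-device I/O stats every 2 seconds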
Great walkthrough! I am expanding my RAIDZ1 right now, but progress looks very slow, still 5 days to go. Can I use my zpool while it is expanding, or will this cause trouble?
How big is your Z1 pool?
I have an 81% full 145TiB Z2 pool and am thinking about expanding. But when it takes so much time, I bet it's easier to just make it anew and copy from my backup.
@s.k.6823 mine was a 3x 6TB pool expanded with another 6TB drive; the pool was 81% full
Well I guess the expansion has ended now... How was it? Have you tried accessing your data while expanding? Was there any slowdown with your system?
@@maxLagachette yes i noticed slower read and write speeds but I could still use it.
Thanks for the video! Is this going to be a TrueNAS SCALE-only feature, or will it be worked into Core? I have not changed over yet, and use my TrueNAS for storage only, but will when I need to.
I do have Core loaded on a trial machine to see how it works, and it seems great so far.
Can I go from two mirrored disks to a five-disk RAIDZ1?
It wasn't hard to turn a stripe into a mirror in TrueNAS SCALE. I hope they add the ability to convert it to a RAIDZ later.
Did you have to hit expand on the pool after extending the vdev with the individual disks? I know the space accounting is a current issue; my usable capacity and availability hasn't changed in the UI, which I expected. However, when running zpool list, my size, cap, and free haven't changed either. I'm in the process of extending a second vdev after I completed the first RAIDZ2 vdev extension, so I haven't attempted to hit expand yet, but I need to confirm whether this is required to see the updated pool statistics. After the first vdev extension the pool raw size should have updated otherwise, but it has not.
This is GREAT 👍👍
I have a single-drive vdev, but recently purchased 5 more drives. Should I do two 3-disk RAIDZ1 vdevs, or a single 5-disk RAIDZ2 vdev and keep the extra disk as a spare on the shelf, ready in case one fails?
Still no capability to remove a drive, right?
And this will never come, because every file is scattered among all disks, so you can only add.
How do I monitor the RAID expansion like that at 4:52?
I also would like to know how to do this
SSH into TrueNAS, then run tmux. To split the screen like that, press Ctrl+B then " and you'll get a split screen. To get the status windows like that, run watch -n 1 sudo zpool status {YOUR POOL HERE}, then switch to the other pane in tmux (Ctrl+B then o) and run the other command, watch -n 1 sudo zpool iostat -v {YOUR POOL HERE}. I just had to figure this same thing out myself.
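Put together as a copy/paste sequence (tank is a placeholder pool name), it looks roughly like this:

tmux                                   # start a tmux session
# press Ctrl+B then "  to split the pane
watch -n 1 sudo zpool status tank      # pane 1: expansion status and progress
# press Ctrl+B then o  to jump to the other pane
watch -n 1 sudo zpool iostat -v tank   # pane 2: per-disk I/O activity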
Does anybody know if this is available in TrueNAS Core?
I doubt it will come to core.
@LAWRENCESYSTEMS I really appreciate the reply. Thank you!
@@GolgiGuy102 you can move your pool from Core to SCALE without data loss.
@@deepaknanda1113 I appreciate that info. I'm very new to TrueNAS and server stuff. I'm leaning towards Core because I don't really understand stuff like Proxmox and Portainer. Core seems simpler to use.
@@deepaknanda1113 Is this as simple as selecting the different Train in the Upgrade section?
Where or how can you use the zpool command in SCALE? It doesn't work from the shell on the web page.
you have to be root
Ahhhh yes, right when I finally bought a Synology because this was a problem. SMH. This is awesome though. Excited for future upgrades.
I thought this feature was there all along lol... I actually planned on adding a new drive after the Electric Eel release, but when I finally added it to my pool it did not expand as far as I expected. I had 5x4TB with a total usable RAIDZ2 capacity of 10.7TB. The extra drive only got me to 12.8TB. Now apparently I need to manually rewrite all the data in the vdev to get the new parity ratio... What does that mean? Taking all the data off the pool? How does that save time compared to just destroying the pool and starting over?
Try it, then come back and let us know. We will all learn together
Awesome 👏
I kinda want to do this and extend my 7x18TB raidz2 mechanical drive pool and run the in-place rebalancing...but I'd probably die of old age before it's done.
No kidding lol. the scrub task has been running on my 5x20TB pool for two days now and is at 67%
@@RobbieE56 this is odd. I have a 10x20TB RAIDZ2, 81% full, and it takes about 24h to scrub it. But I'm thinking about restoring from backup instead of expanding; with 10Gbit it should be faster than expanding
Around 3 days I think
@@s.k.6823 I'm not sure why it took so long, but it finally finished and expanded sometime during day 4 into day 5. I was starting to get a little stressed for a second lol. All good now though
@@RobbieE56 I had this once actually on my backup (asymmetrical vdevs); I stopped scrubbing and restarted and it just worked as normal. Something was hanging.
I first thought it was the asymmetry of the vdevs.
What would be the best way to configure a new ~100TB array with redundancy so that it's expandable if I need more storage in the future? Does this new TrueNAS feature allow for the addition of another identical drive to add that drive's capacity to the total array? I'm looking to build a TrueNAS SCALE system for the first time with ~8 18TB drives in RAIDZ2, but the case I'm looking at can handle many more drives if I need more storage in the future. Thanks!
Adding VDEVs symmetrically is a good way ruclips.net/video/11bWnvCwTOU/видео.htmlsi=f38DhlVbKVxdB1mA
@@LAWRENCESYSTEMS Ahh, so I can expand easily by adding an additional RAID-Z2 VDEV? Would it be possible to add 6 18TB drives in RAID-Z2 as an expansion, or would it have to match the original 8 drives?
Would you be able to swap out drives to bigger sized ones in the future using this?
you can already do that by resilvering the array and replacing all the drives
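A rough sketch of that replace-one-at-a-time route (pool and device names are placeholders, and it assumes autoexpand is on so the extra space shows up once the last disk is swapped):

sudo zpool set autoexpand=on tank
sudo zpool replace tank sda /dev/sdx   # swap one drive, then wait for the resilver to finish
sudo zpool status tank                 # repeat for each remaining drive

The new capacity only appears after every drive in the vdev has been replaced with a larger one.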
What's the solution if my NAS is almost full and I want to go from Z1 to Z3, but I don't have spare storage as big as or bigger than my NAS?
You can't convert from Z1 to Z2 or Z3
@LAWRENCESYSTEMS yes. That is why I'm going to delete the whole RAID and pool (Z1) and then configure it as Z3. But where do you think I can temporarily relocate the files inside my Z1? I don't have storage of the same or bigger capacity than my current Z1. Any advice?
I'm looking into moving my TrueNAS from a VM to bare metal. Is your older video on updating from Core to SCALE still relevant, or could it be updated? Is it the same when swapping systems too? Should I start fresh and copy data, or move the data drives over and import? Also, what's the best suggestion for "expanding the Z value" of a pool? For example, currently I have a 3-disk Z1; what's the best way to move/copy EVERYTHING (data, snapshots, etc.) to a new, say, Z2 pool?
If I buy 3 IronWolf disks and create a RAIDZ for a very low-activity NAS, will the drives ever sleep? I don't want full power consumption from the drives when they are only accessed 1-2 hours a day on average.
Nope
@@LAWRENCESYSTEMS :( then it's probably not the correct file system for my personal backup
Hi Lawrence. I'm new to the NAS world and just copied a DIY NAS build from one of the YouTubers. Installed TrueNAS SCALE and done. I just want to ask for an answer to my problem, because I can tell from watching only two of your videos that you are a master when it comes to this stuff.
I have video files on my external 4TB SSD that I want to copy to my NAS drive. The question is, how do I copy those files to the NAS drive without going through the internet? Is there a way to just connect the external drive to the NAS server and do a copy and paste? Or directly connect a laptop to the NAS server using one of the NIC ports? Please help me or just give me an idea of what to do. I would greatly appreciate your help. Thank you in advance Lawrence.
There is not any easy way to do that in TrueNAS
@@LAWRENCESYSTEMS Yay! That's so sad. I have almost 10TB of movie files to move to my NAS drive. This is going to hurt my internet data cap if I move it the conventional way.
@@raymacrohon1137 Just map a network drive from the NAS to your desktop with the external drive connected. This will only use the local network, so it will not affect internet usage/limits.
What about memory usage? Has it increased? Does the x memory for x terabyte rule continue with this expansion? Do you notice any change?
ruclips.net/video/xp6g-8VS06M/видео.htmlsi=cUAm3T62gRpTH0Sl
How would this work if you have multiple vdevs in a pool?
Would you have to add at least 1 disk to each vdev in the pool or can you still expand a single disk in a single vdev?
Yes
Is there a way to find out or calculate how big the performance difference is? I will expand a 3x 18TB HDD pool with one additional 18TB HDD. Is it really worth copying the data back and forth to spread the data over all four drives?
Performance will vary based on a lot of things so you will have to test the speed of the new vs old files.
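One crude way to compare, not a proper benchmark (paths are placeholders, and the ARC cache can skew results, so use files larger than RAM or test right after a reboot):

sudo zpool iostat -v tank 2 &   # watch which disks actually get read from
dd if=/mnt/tank/file_written_before_expansion of=/dev/null bs=1M status=progress
dd if=/mnt/tank/file_written_after_expansion of=/dev/null bs=1M status=progress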
@@LAWRENCESYSTEMS Thanks for the reply. My expansion is running now; it looks like it will take over 48 hours. Would you recommend using the script you mentioned afterwards? Am I right that it can be used on the pool directly, and that it rearranges all data and parity, distributing it over all 4 disks, without copying all the data to another vdev and back?
I know it's still in beta, but I'm wondering what performance hit one might get from slowly building up storage capacity vs all at once. Start with a 3-drive Z2 and max out at 9 drives, vs getting all 9 at once.
All data written prior to expansion maintains the stripe width at which it was written, and therefore can only be read at the speed of the drives it was written to.
Thanks!
Is there a recommended PCIe to SATA board? Best compatibility with TrueNAS?
LSI 9300 HBAs
Can you still swap a drive in a mirror, resilver, swap the other, and then extend as well?
Yes
My NAS PC crashed (5x 20TB HDDs currently set up in a vdev in RAIDZ1) while it was at the "waiting for expansion to start" step. Once it rebooted, it shows my vdev is 6 wide, but the pool size didn't expand. From what I can tell, the data that was previously on the 5-drive vdev is fine and intact, but now I'm not sure how to actually get the pool to extend onto the new drive. Any tips/recommendations?
Thanks for the videos!
Not an issue I have encountered so I have not tested.
@@LAWRENCESYSTEMS Weirdly enough I had a scrub that took about 3 days, and once that finished the pool ended up expanding. No idea if the scrub did anything, but it's working now!
@@RobbieE56 Great info!!
I faced a problem: this new feature calculates the pool capacity incorrectly. In my case, I have 6 x 2.73TiB HDDs. If I create a RAIDZ2 from all 6 disks, I get a 10.77TiB pool. But if I create a RAIDZ2 from 4 disks, I get 5.16TiB. Adding one more HDD expands the space to 6.48TiB. Adding another HDD expands the pool to 7.8TiB. That's nearly 3TiB of pool space missing. In your video I see a similar problem.
In the video, RAIDZ1 Expansion_Demo_Pool:
3 HDDs x 1.75TiB = 3.35TiB
Add one 1.75TiB HDD = 4.54TiB (missing 0.56TiB)
Add another 1.75TiB HDD = 5.69TiB (missing 1.12TiB)
I bet if the RAID were created from scratch, its capacity would be about 7TiB.
Did you even watch the video? This is a known drawback with RAIDZ expansion which has an existing solution.
Is ZFS able to recover if you reboot (say) in the middle of the vdev extension process?
I did not test, but yes it should.
Is this out of beta yet?
yes
Nice update and thanks for the video.
I'm new to TrueNAS and getting the following error when trying to extend my pool...
[EZFS_BADTARGET] cannot attach /dev/disk/by-partuuid/9de08ec2-ebd9-44e3-a58a-3a19e3592d70 to raidz1-0: raidz_expansion feature must be enabled in order to attach a device to raidz
No idea where to enable it.
Found the answer in a TrueNAS forum. You have to upgrade the pool first, then it will work!
@@CODYRIOT Thanks, I did figure it out but forgot to update my comment here...
Found the missing setting: I needed to update the feature flags for the existing pool.
Also, the pool expansion takes a couple of days if you have data. My 3x4TB pool is taking 3-4 days to add an additional 4TB.
Use "sudo zpool status" in the shell to check the status of the expansion.
@@Shen3002 mine is 6x6TB and I added a second one. The expand took on my new 6TB drive, but it didn't expand the amount of storage at all...
@@CODYRIOT Did you run "sudo zpool status" in the shell window? I'm adding my second drive at the moment and it's still 2 days away from being completed, and it only shows the extra space after it's completed.
@@Shen3002 It said something about an error that needed to be corrected before proceeding, and I couldn't get it to tell me what it wanted, so I ended up rebooting hoping that would help, but it did not. I am going to try replacing the drive I installed with another one, as I'm worried it has write issues.
Will this feature also be available for TrueNAS Core?
I have no idea.
@@LAWRENCESYSTEMS 😭
Why didn't they implement going from 2 mirrored drives to, say, a 3-drive RAIDZ1? It's the same principle: rearranging data and parity onto 3 drives from 2 identical ones. It's copying data to the new drive and deleting it from the existing ones. It's not a heavier task than going from 3 to 4 drives in RAIDZ1.
Mirrors and RAIDz are too different for that to work according to the people that write the code.
Will this be possible on Truenas Core?
Not sure
Expand or extend: they should choose one term, not use two for the same thing. Since you can also expand a vdev (without adding a drive) after you replace the last drive with a higher-capacity one, it is confusing...
It's not fast... And there's no warning when you add a drive while the vdev is already expanding. I only did a very quick test and it worked, but I will do a more serious test trying to add 3 drives, and if it's possible it would be nice to have it in the GUI (select multiple drives to add).
I am more a fan of expanding by adding new vdevs of mirrored drives...
I have a RAIDZ2 with 4 disks; I added 2 more and can't expand.
I have an 8-drive ZFS Plex server that just lost a drive and the server is down. Hope it's a fast swap and back up.
Now I have to wait for Unraid to implement it. I just can't stand the TrueNAS GUI, which still doesn't work properly (like creating a network bridge that doesn't behave like a network bridge unless you create it from the CLI; why is there even an option in the GUI when it doesn't work?). Also, when there is a single GPU, TrueNAS for some reason needs it (even though it can run without one when no GPU is installed), so I can't pass it through to a VM for my dockers, because the apps in TrueNAS are borderline useless.
🫨cat6
😺😺😺😺😺😺
Also try Btrfs; in terms of flexibility I think it is more interesting!
glitchy videos lately
I have no idea how to fix that because they are not in the originals