ZFS Essentials: Array Disk Conversion to ZFS or Other Filesystems - No Data Loss, No Parity Break!

  • Published: 13 Jul 2023
  • Discover how to reformat a disk within your Unraid array to the robust ZFS file system (or any other file system) in this comprehensive tutorial. Unraid's recent addition of ZFS support opens up new possibilities for data management, and having an array drive as ZFS enables a lot of options, such as ZFS replication between two zpools.
    Please, if you can and want to support the channel, you can donate via PayPal here goo.gl/dw6MLW or check my Patreon page / spaceinvaderone
    Bitcoin donations 3LMxDzcwPdjXQmmeBPzfvUgjYFDTqDAgQF
    Folder or dataset script
    github.com/SpaceinvaderOne/li...
    Auto convert folders to datasets
    github.com/SpaceinvaderOne/Un...
    ----------------------------------------------------------------------------------------------------------------
    Need a VPN?
    PIA is popular with Unraid users as it's easy to set up with various VPN download containers - www.privateinternetaccess.com...
    Torguard is also an excellent VPN, again with both OpenVPN and WireGuard protocols supported.
    Get 50% off for life using code spaceinvaderone torguard.net/aff.php?aff=6005
    ----------------------------------------------------------------------------------------------------------------
    Need a cheap Windows 10 license for around $10?
    consogame.com/software/window...
    ----------------------------------------------------------------------------------------------------------------
    Need to buy something from amazon? Then please use my link to help the channel :)
    USA - amzn.to/3kCikfU
    UK - amzn.to/2UsYb1f
    USA link - USB HDD Docking station amzn.to/3v754WG
    UK Link - USB HDD Docking station amzn.to/3hLenYp
    HighPoint RocketStor 6414S amzn.to/3fiXv9s USA
    Mini SAS 26-Pin SFF-8088 Male to Mini SAS 26-Pin SFF-8088
    amzn.to/2V4x9kT USA
    amzn.to/3xfkxEl UK
    ----------------------------------------------------------------------------------------------------------------
    A big thank you to Lime Technology for all the great work that they put into always improving Unraid OS.
  • Science

Comments • 130

  • @kannznichkaufen • 1 year ago +17

    Great video again! Easy to understand and well explained.
    What I look forward to, is hearing about Your thoughts and ideas, how to layout a new server from scratch making the best use of zfs pools (with their own parity) and zfs disks in an array protected by unraid parity and which to use best for different purposes.

  • @i_clixx4700 • 1 year ago

    Your videos are so helpful. Built my Unraid server using your video and a couple of other YouTube guides. Keep up the good work.

  • @mattfrekwentflier8342 • 5 months ago +2

    Thank you for making this series of ZFS videos! I changed all of my disks, both array and cache, to ZFS. Took me almost a week in total, but well worth it.

    • @mikegeorgia825 • 3 months ago +1

      Working on this now. 25 array disks, ha. Lots of fun. Had to create a backup server with enough storage to move the data off to start the changeover, in case of any issues along the way.

  • @squeak751 • 1 year ago +3

    Thanks. I hope you get the next video out soon. I think my biggest fear will be trying to move my whole 70TB server and make it all ZFS. But I'm gonna wait for a video guide from you before trying that out. Lol 😆 🤣

  • @gswhite • 1 year ago +6

    Love this series, and learning more about the benefits of ZFS on UNRAID.
    Now you have your first Pool from a drive, will you be showing how to expand that pool with existing or new drives please?

  • @seantellsit1431 • 11 months ago +2

    Can’t wait for the next video!

  • @rywightis • 1 year ago +2

    Man, this is great. Now I have to make the tough choice on whether or not it's worth the switch for me :P. REQUEST though - Would love to see a video on "Unraid Connect vs WireGuard/Tailscale" and your opinions on their functionality/security. Keep up the great work

  • @nexusasus • 1 year ago

    Wow finally here🎉, great job Ed

  • @jellolabs2852 • 1 year ago +2

    Always the best stuff

    • @SpaceinvaderOne • 1 year ago +1

      I plan on Sunday :)

    • @jellolabs2852 • 1 year ago

      @@SpaceinvaderOne is the ZFS auto snapshots video coming out / planned sometime? I can't find anything online, so my only hope is Spaceinvader.

  • @ishqem • 1 year ago

    thnx i've been waiting for this. keep it up

  • @axios640 • 2 months ago

    just added 3 x 12TB HDD to my 90%+ full 5 HDD array and upgraded to a 2TB SSD Cache... saw this video and changed those new HDDs and cache to zfs in no time. Just started the move of close to 12 TB to another and ETA is 53hrs+ for one drive LOL. Looks like it's gonna be a week to get this baby fully up and running haha. Thank you for your helpful videos as always!

  • @IEnjoyCreatingVideos • 1 year ago

    Nice video Ed! Thanks for sharing it with us!😎JP

  • @plethoraplenty • 8 months ago

    well done! I was curious about how this process worked.

  • @SpaceFahad • 1 year ago

    Much appreciated!

  • @floydblackbourn7232 • 1 year ago

    Appreciate the timeliness of your videos. They frequently arrive when I need / want them the most. I hope the next part is uploaded soon. I am in the process of assembling a larger chassis to allow retiring several Windows servers and smaller Unraid boxes. It would be really good to have the next step or steps I need to follow to get it done right the first time on this server. While here, I will ask: does "Making an Auto Backup / Failover Server" from a year and a half ago work on 6.12.3, and if so, where is Part 2? If not, can we expect an updated version? It still has value if it can be implemented.

  • @DannyStammers • 1 year ago +1

    This is way faster and easier than what I did a few days ago. After emptying my drive I tried using the clear-me script to zero the drive, and reformat it afterwards. Since the script failed twice, I was forced to break parity and 'repair' it with Unassigned Devices. Totally forgot about maintenance mode.

    • @naturalcarr9478 • 11 months ago

      I thought it was just me that clear-me didn't like. It doesn't even detect my empty drives anymore, and as a bonus it exits the terminal when it fails, so I have to SSH back in every time.

  • @richardbeirne827 • 1 year ago +5

    Excellent video as always!
    You mentioned a while ago about comparing Ryzen vs Raptor Lake for Plex transcoding, focusing on power efficiency. Any ETA on that? I'm in exactly the same position, and am finding very little info online about different options, other than anecdotes.

    • @ShadowManceri • 1 year ago +1

      Also worth noting that Unraid uses the 6.1 LTS kernel, and there are power optimizations in newer kernels, for both AMD (P-state now being the default) and Intel. So a 6.5 kernel would most likely show slightly different results.

  • @grahammoran3047 • 1 year ago

    Hey Mate,
    Just new to Unraid, and your channel is the only one I can find with any relevance, keep it up. I was wondering if you would consider doing a video for newbies? I have a Dell R720 with 16 bays, currently with an 8-drive array and a 512 cache drive. Followed your video on Nextcloud and have MariaDB installed also.
    My main worry is backup. I don't want to transfer thousands of photos over without knowing how to get everything back if it goes belly up. Can I put more drives in and create a new pool just to back everything up? Is one backup for everything possible? If I had to start again, could I just run that one thing and rebuild everything on my server? Unraid is definitely user-friendly compared to TrueNAS, but in some ways it's more daunting; YouTube is full of TrueNAS videos but very few Unraid ones.
    Anyway you have a new subscriber so if you feel like going back in some basic videos it’s appreciated 👍

  • @jasonlee3247 • 1 year ago

    Good video as always. I just can't see the benefit of this hybrid approach (random disk sizes apart). I love the ease of use of Unraid, but I still prefer TrueNAS and its non-reliance on a "main array".

  • @mrandyburns • 1 year ago +1

    Great video as always ... and like a good drama, it's keeping us wanting more!! I saw the comments on not needing all drives as ZFS and XFS being good for larger files. I like the idea of one ZFS drive as I guess it's easier on the RAM and has the benefit of parity protection. I have my main array as BTRFS, @SpaceinvaderOne do you think it's worth working through the drives to go to XFS?

    • @thereallantesh • 1 year ago

      I converted all my disks a couple of years back from ReiserFS to XFS, but I only did it because ReiserFS is no longer being developed, and is in essence a dead file system. It was a lot of work. In your situation I don't know that I'd want to go through all of that.

  • @hisnameispaull • 11 months ago

    Thank you for this! My question is: currently my media (plex) folder is set to spread the data across all drives... so is it wise to convert only one of my 4 disks to zfs, leaving the others as xfs, and re-spreading my media data across my 4 drives (my new zfs disk and my other 3 xfs disks)?

  • @savageaus81 • 1 year ago +1

    If we were to do this one drive at a time, will each drive be its own zpool, or will there be a way to add all array disks to one zpool without losing data?

  • @JonasHerkel • 10 months ago +2

    Should I exclude other shares from the zfs disk?

  • @Apollopayne25 • 1 year ago

    Once the ZFS drive is created, does/can it spin down when nothing is being accessed on the drive? Currently I'm using XFS and have the drive spin down after 15 mins of inactivity. Thank you again for awesome videos.

  • @geekdomo • 1 year ago

    YES!

  • @leonardocarvalho4944 • 1 year ago

    Greetings from Brazil! Well explained video! Just help me with one thing: I have multiple disk sizes. Can i convert them to zfs?

  • @mwright154 • 1 year ago

    What is the fio command used to test drive/pool speed?

  • @ThanasisPolitis • 10 months ago +1

    Great video as always, thank you!
    One question I have, at 06:24 your server shows action buttons inside the Index page.
    My server runs 6.12.4 and the only button available is "Done". -- is this an addon?

  • @Gragorg • 1 year ago +1

    Great video! What kind of extra Ram use is to be expected vs XFS?

    • @le_potate3861 • 1 year ago +1

      By default, Unraid allows 1/8 of total system RAM to be used for ZFS ARC caching. In my instance, I have 128GB installed, so the default setting is to use 16GB. This can be changed manually by modifying a config file, since Lime Tech can't be bothered to give us an option in the GUI.
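A minimal sketch of that 1/8 calculation and the kind of module option you would persist to override it. The 32 GiB figure is just an example, and where exactly Unraid stores the option is not shown here:

```shell
# Sketch: show the default ZFS ARC cap (1/8 of RAM) and an example
# override line. The 32 GiB value and the idea of persisting it via a
# modprobe .conf file are assumptions for illustration only.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)  # system RAM in KiB
default_arc=$((total_kb * 1024 / 8))                   # 1/8 of RAM, in bytes
echo "default zfs_arc_max: ${default_arc} bytes"
# Example override line (zfs_arc_max takes a byte count):
echo "options zfs zfs_arc_max=$((32 * 1024 * 1024 * 1024))"
```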

  • @redwolf_dane4672 • 1 year ago

    If I convert all the drives in the array except parity to ZFS, can I still have different sized disks and add just one disk at a time?

  • @cooletijdgast • 11 months ago

    So following this tutorial, you could essentially reformat all your drives to ZFS, right? I have 5 x 4TB drives in my server now and would like to change them to ZFS, but I only have 1TB of storage left in total. Could I place an extra 4TB drive in, move all of the data from one drive to the new drive, and format the old drive, repeating until I've done all the drives?

  • @peterbeck2815 • 7 months ago

    Hi, very good video, but may I ask where you found the banner you are using?
    Have a good day

  • @ClockworkEIf • 8 months ago

    Does using unbalance work as it should if some media files are separated in different disks? For example, a movie is in disk1, its subtitles in disk2, series episodes separated in different disks etc?

  • @user-sy3jo8sy3m • 1 year ago

    If we convert all the drives in the array to ZFS do we get the benefit from a multi-disk ZFS pool? Thanks for the videos!

  • @techpchouse • 11 months ago

    is it a good idea to run Lightroom on a vm to color grade and sort all of my photos?

  • @psycupratdi • 1 year ago

    How do you get all devices (array, pool) and ZFS Master on the same "main" tab?

  • @ThePirateGod • 1 year ago +4

    What are the benefits of changing the main array to ZFS?

  • @squeak751 • 9 months ago

    Question. I have 10 disks: 8 are 14TB and 2 are 20TB. I have 2 20TB parity drives.
    Could I just delete and format a drive one at a time, then have it get rebuilt via parity? That way I don't have to have my server down for several weeks.

  • @fredrikgreven • 1 year ago

    Is there any benefit to using encrypted vs non-encrypted? Thanks.

  • @foxtrot1787 • 1 year ago

    Can you do this with an unassigned drive rather than in the array?

  • @ClanLawrence • 1 year ago +1

    Loving this ZFS series. Quick question. Do we have to exclude the ZFS disk from other shares? Or can it be used by them just like a regular disk as well as for ZFS Backups?

    • @mattym00 • 9 months ago

      Hey Clan,
      Did you ever answer this question yourself?
      I am wondering the same thing

    • @ClanLawrence • 9 months ago

      @@mattym00 It doesn't need to be excluded, but I have done so and now only use it for backups, including ZFS replications

    • @mattym00 • 9 months ago

      @@ClanLawrence yea thanks.
      I figured the same I think I'll exclude it till such time I need the space.
      Cheers for the quick reply legend

  • @lukehodgkinson4261 • 11 months ago

    I don't have a parity drive. What are the steps when you don't have parity? Cheers. Do I just use Unbalance to move the files back again once formatted, instead of parity doing it?

  • @fandibus • 2 months ago

    Does ZFS in unraid require using matched drive sizes?

  • @innversion86 • 1 year ago +1

    Great video. How do you get all of those options when browsing the file structure in the gui?

    • @JadeMonkee • 1 year ago

      I was wondering the same thing...

    • @TunTin-yv3qq • 1 year ago

      same here

    • @drmetroyt • 11 months ago +2

      It's because of a plugin, Dynamix File Manager, which allows you to do it in the GUI.

  • @TaldrenDR • 1 year ago

    But how do you import a ZFS pool created via your video in Unraid 6.11 to 6.12? Or am I stuck forever on 6.11?

  • @ramachockalingam471 • 1 year ago +2

    Great video as always! Quick question: if I have only two disks in my array (1 parity and 1 storage), can I still use this technique to convert the whole array to ZFS?

    • @_TbT_ • 1 year ago

      No, because you can’t clean out one data drive for this.

    • @ramachockalingam471 • 1 year ago

      @@_TbT_ damn, guess I'll just continue with xfs for now then. Thanks

    • @JamieStuff • 1 year ago

      Sort of. If both drives are the same size you can remove your parity drive and format it to zfs, then move all of your data to it, then set the old data drive as the parity drive.

    • @ramachockalingam471 • 1 year ago

      @@JamieStuff I thought zfs doesn't need a parity drive? Isn't the parity striped on both drives?

    • @E5rael • 11 months ago

      @@ramachockalingam471 If I've understood correctly, the parity will be striped on both drives only if you've assigned the drives to a separate zfs pool. If your drives are on the regular array, even if they're formatted as zfs, they'll behave largely the same as any other drives in your array, where redundancy is provided by the parity drive(s).

  • @AlyredV2 • 1 year ago +1

    Thanks as always! Question regarding parity: though Parity Drive 1 uses XOR, I believe Parity Drive 2 uses a different algorithm. Any issues with parity calculations using this method with two parity drives?

    • @JamieStuff • 1 year ago

      Yes, Parity 2 does use a different algorithm. However, since the parity drives are filesystem agnostic, whenever it changes the data on the data drives, both parity drives are updated accordingly. So, no issues doing this with two parity drives.

    • @AlyredV2 • 1 year ago

      @@JamieStuff So the Reed-Solomon algorithm is also zero based? Since formatting the disk assumes that all bits are set to 0, I understand that the first parity drive uses XOR and I get how that works mathematically, but hadn't investigated Reed-Solomon and wondered if it would need to be recalculated.

  • @babisral • 5 months ago

    Do I need a lot of RAM to use ZFS?

  • @gmr4lfe • 1 year ago +1

    If you converted all your disks to ZFS, could you add them to one pool to get the benefits of the speed and other features etc? Also, would you still get the use of disk spin-down if all disks are in one pool within Unraid? For example, if your TV shows folder is on one disk and movies on another, after spin-down does accessing one of those folders activate all disks or just one?

    • @SpaceinvaderOne • 1 year ago +1

      To get the full benefit of ZFS you would need all the disks in one pool. But you can't convert the array into one large pool; this is not possible. You would have to copy the data elsewhere, then use the array disks for a large zpool in Unraid. At present you still must have one disk in the Unraid array (I am sure this will change in the near future), but there's no reason that "disk" couldn't be a USB flash drive, so you wouldn't need to "waste" an HDD. Squid mentioned this to me a while ago as an option if you don't want to use the Unraid array and only use pools in Unraid.

    • @simplysuperc • 11 months ago

      @@SpaceinvaderOne will there be the ability to use a ZFS Pool as a Secondary storage? For instance, if I'm currently using Cache as Primary, and Array as Secondary (to speed up my writes) - will the functionality be added in to support doing the same with a ZFS Pool? i.e. Write to Cache as Primary, and ZFS Pool as Secondary? What I want to do is have 4x14TB drives in a 2x mirrored vdev (this gives me 28TB usable space; akin to Raid 10 but with ZFS bitrot protection, etc.) but have the files initially written to my faster SSD drive which is my Cache pool.
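The striped-mirror layout described above (two mirrored vdevs of 2 x 14TB each) can be sketched like this. Pool and device names are placeholders, and the command is only printed, not run:

```shell
# Sketch: a "RAID 10"-style zpool of two mirror vdevs. "tank" and the
# /dev/sdX names are hypothetical -- substitute your own devices.
echo "zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd"
# Two mirrors of 14TB drives give half the raw capacity:
echo "usable capacity: $((14 * 2)) TB of $((14 * 4)) TB raw"
```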

  • @siggigalam8458 • 11 months ago

    @ 6:35, what plugin gives you all those options? Thanks!

    • @drmetroyt • 11 months ago +2

      Plug-in name : Dynamix file manager

  • @onisama9589 • 1 year ago +2

    I have a 14 drive array (2 parity 12 drives for storage). Can I use this method to systematically convert one drive at a time until the entire array is ZFS?

    • @E5rael • 11 months ago

      I can't see why not. Sure, it'd probably be quite time-consuming, but possible.

  • @Boltran • 1 year ago

    Thanks for your video. Instead of converting an existing drive, I added a new drive, formatted as ZFS. However, I noticed that the new drive will never spin down, and also keeps the parity drives active. (Spindown is set to 15 minutes). I did enable compression and deduplication on the new drive; compression and no deduplication on the cache drive. I made sure 'zfs set logbias=latency poolname'. Do you have any suggestions how to enable the drive to go to sleep during no file activity? Did you experience something similar?
    I have another computer that uses a zfs pool, that I configured from scratch on a standard Slackware 15.0 distribution. Those drives do go to sleep, and they are
    also set to compression and deduplication.

    • @Boltran • 1 year ago

      Interesting. After a long period, the zfs drive finally did spin down...

    • @JiyuuTenshi • 11 months ago +1

      Do you have the ZFS Master plugin installed? Because it seems to wake all of the ZFS disks up when you go to the Main tab and will keep them from going to sleep because it regularly checks their status. Once you go to a different tab, the ZFS disks spin down as expected.

  • @axios640 • 2 months ago

    So, I've been trying to get this done but I'm having some issues where the transfer speed between two drives goes down to 0 MB/s, or the whole server freezes up. Trying to transfer close to 12TB worth of data each.... any ideas?

  • @w.schobel1514 • 1 year ago

    Great Vid.
    I tried your script for converting folders to datasets and it worked for me on the first day. But now, two weeks later, I get an error "Skipping folder ... due to insufficient space". Can you give me a tip on where to look for the reason for this error?

    • @JiyuuTenshi • 11 months ago

      You probably don't have enough space on the drive to make a copy of the content of the folder you're trying to convert. The script can't just convert a folder to a dataset; it needs to rename the original folder, create a dataset with the original name, and then copy the content into the new dataset. So, at least briefly, you need double the space the folder currently uses for the conversion.
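A small pre-flight check along those lines. This is only a sketch, not part of the actual script; the folder path defaults to /tmp purely for illustration:

```shell
# Sketch: check there is enough free space on the folder's filesystem
# before a folder-to-dataset conversion, which briefly needs roughly
# double the folder's current size. DIR is an example path.
DIR="${1:-/tmp}"
need_kb=$(du -sk "${DIR}" 2>/dev/null | awk '{print $1}')   # folder size, KiB
free_kb=$(df -Pk "${DIR}" | awk 'NR==2 {print $4}')         # free space, KiB
if [ "${free_kb}" -ge "${need_kb}" ]; then
  echo "ok: enough free space to convert ${DIR}"
else
  echo "Skipping folder ${DIR} due to insufficient space"
fi
```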

  • @dzablow • 1 year ago

    Just waiting on an explanation for a Steam game library with multi-user access. I've run into issues with SMB and multiple people running the same game at once, but I've heard ZFS (and I think iSCSI) will solve this.

  • @DonAlcombright • 1 year ago +1

    Do you recommend compression on for media sharing (Plex)? Or no compression (my assumption).

    • @SpaceinvaderOne • 1 year ago

      Depends on the files really. Some media files will compress more. But if you are using a highly compressed format such as H.265, you will save very little space and I don't think it's worth it. Thanks for watching.

  • @couzin2000 • 3 months ago

    Once you've reformatted the disk to ZFS, I'm assuming you use Unbalance to bring all the data back to the original, newly reformatted disk?
    Instead, can one remove an array drive, reformat that very drive, and have it rebuilt by parity?

  • @patrickhimebaugh775 • 1 year ago

    This doesn't work in maintenance mode for luks:xfs drives; Unbalance sees 0 bytes. If I start the array normally, I can see the size data.

  • @stuxb • 1 year ago +1

    Thanks for the video! A couple of questions:
    - Would you just repeat these steps to make all your disks separate ZFS pools (i.e. no RAIDZ)?
    - Is there any reason you wouldn't want all your disks to be ZFS?
    - If you made all your disks ZFS, should the parity drive also be ZFS?

    • @AnimusAstralis • 1 year ago +1

      Parity drive doesn’t have any file system (by design)

    • @39zack • 1 year ago +1

      1: yes
      2: don't know
      3: no, parity drive has no file system

    • @SpaceinvaderOne • 1 year ago +7

      In my opinion, you don't need ZFS or other copy-on-write filesystems for all Unraid array drives. XFS can be more suitable for large files like movies, especially on full disks. I prefer having drives full of related files, like TV shows, on my server. This way, when you watch the next episode on Emby, it's on the same disk. This prevents the whole array from spinning up as all TV shows are, for example, on an 18TB drive. I believe journaling filesystems handle full disks a bit better than copy-on-write systems like ZFS. That's not to say ZFS can't handle full disks, it certainly can but its performance may suffer slightly. This is just my preference.

    • @iolsen94 • 1 year ago

      @@SpaceinvaderOne this makes sense, but would the movies/tv shows folders not benefit the most from the compress/error checking of zfs? At least for me my two largest shares are those two and even 1% compression would save me almost a TB

    • @JamieStuff • 1 year ago +1

      @@iolsen94 I'd be surprised if zfs compression could compress the data any further than x264/x265/whatever.

  • @mwright154 • 1 year ago

    Trying to use Unbalance but 4 of my disks are missing. I've disabled Docker/VMs/Mover; any ideas?

  • @tylerdo4482 • 2 months ago

    Probably a dumb question, but how do you get the select option in your browse disk view and the options to manipulate files (and perform the calculate occupied space)? I only get a browse /mnt/disk3 with no options whatsoever.

    • @tylerdo4482 • 2 months ago

      Found it. It was the Dynamix File Manager plugin.

  • @Murderhoboh • 11 months ago

    This is pretty neat. If you have multiple disks reformatted to zfs, data put back on them, each disk is its own pool, and a folder Media spread across multiple disks, can you convert it to datasets and have the files available and visible like normal?

    • @MagnonEntertain • 11 months ago +1

      I've tested it. Array with 2 ZFS data disks. If your media is scattered on both ZFS disks, it's visible as usual over /mnt/user/SHARE.
      The shares/datasets on each disk are independent though. If you want all your media in one dataset, you have to have it all on one single ZFS disk. Otherwise there are bits of it in disk1's dataset and disk2's dataset.

    • @Murderhoboh • 11 months ago

      @@MagnonEntertain Makes sense. Another thought was to free up 3 disks, create a 6-slot ZFS RAIDZ pool, add the free disks to it, copy "Media" over to the pool, and add each remaining disk to the ZFS pool as it's emptied. Is that feasible?

  • @drosskills • 11 months ago

    Huge request: Could you do a video on setting up Cloudflare tunnels with Swag? Thanks!

  • @lslchn • 1 year ago

    Question:
    1. Will the parity disk still protect the disk that is converted to single ZFS?
    2. Your previous video provide guide to convert appdata folders to datasets, so I am guessing the "Appdata Backup" plugin is unable to backup the datasets hence we need this zfs disk for backup?

    • @gswhite • 1 year ago

      Hi, the 'appdata' plugin will still work, as the dataset is still seen as a directory. Try to SSH into your server and then change directory to your 'appdata' directory (dataset). You will see it for yourself.
      If appdata backup can read the directory, the contents will still be backed up.

    • @lslchn • 1 year ago

      @@gswhite Thanks, I will have to give it a try. Currently haven't converted them yet due to above questions

  • @PaulShadwell • 7 months ago

    When I look in the disk I don't see all the buttons like you have (DONE, JOBS, SEARCH, DELETE etc) I only see DONE. Am I missing a plugin?

    • @PaulShadwell • 7 months ago +1

      Found the answer from one of your previous videos. The plugin is called Dynamix File Manager for anyone else who's wondering this.

  • @DannyStammers • 1 year ago

    I've noticed that copying in Unbalance to/from a ZFS drive is unbelievably slow. I first copied 11TB to a newly formatted ZFS drive, with speeds never exceeding 70MB/sec. Now I've formatted the old drive as ZFS and am copying things back, and the max speed seems to be 60MB/sec. I'm already 35hrs in and it tells me it will take 15hrs more. Is this the effect of keeping parity, or is there something else going on?

    • @briantichenor3277 • 1 year ago

      Assuming the slowest drive in the group is 5400rpm, this seems about right on time to copy. The issue is that when you're doing the moves, it's maintaining parity. So every bit that gets copied back and forth is written to the target, read on target, then written on parity and read on parity (to verify). Could speed up the process by taking the parity drives out of the array, moving files and then rebuilding parity at the end. But your array won't have parity until the rebuild is complete (which will likely take ~30+ hours given the size of your drives and speed). So while SpaceInvader One's path is probably marginally slower, its much safer since you avoid the run risk of data loss if any of the drives in the array fails prior to parity being rebuilt

  • @patrickschouten4769 • 11 months ago

    Something weird since doing this. I have a 5 disk array. 1 Parity drive, 4 data drives, one of them now ZFS. Now the drives won't spin down. There is a minimal amount of reads consistently happening on the xfs drives and a minimal amount of writes on the zfs drive, but dockers and vm's are disabled and no activity showing in file watcher. Anyone else seeing this behavior?

  • @helixx23 • 1 year ago

    Algo gogogo

  • @vi_EviL_iv • 3 months ago

    @spaceinvader Can you make an updated video for noobs with the latest Unraid OS? A walkthrough on how to: set up the installation with a static IP address; set up Plex on an NVMe drive that stores all metadata; set up a folder tree like in Windows, sharing only the media files for Plex and leaving everything else private; browse Unraid like in Windows when you create folders and put in your documents, music, videos and pictures; and map it to your PC so you can browse the files you have in Unraid from your PC, if possible. Also set up ZFS, how to expand it, and how to replace drives and recover from data loss or HDD failure, please?

  • @rendeaust • 1 year ago

    Any advantages if I'll make the entire array zfs formatted?

    • @Autchirion • 1 year ago

      And if I might add: and if so, how to do this so that it's one pool?
      I've got 24TB of data and can't just move this to some spare disks.

    • @savageaus81 • 1 year ago

      I just asked the same question but also read an answer to it. You will need to copy off all the data and recreate the array as a zpool BUT unraid will still require 1 disk (could be a usb drive) but it has to have 1 drive.

    • @Autchirion • 1 year ago

      @@savageaus81 thank you, quite unfortunate. But then I’ll have to organise my data better

  • @silvernight5050 • 1 year ago

    Out of curiosity does unraid use open ZFS or normal ZFS

    • @SpaceinvaderOne • 1 year ago +2

      OpenZFS is what Unraid uses. "Normal" ZFS is Oracle's ZFS; it is closed-source and not publicly developed, and is only used in Oracle products such as the Oracle Solaris operating system and various Oracle storage appliances. OpenZFS, on the other hand, is open-source and community-developed. You will find it in Linux, FreeBSD systems, etc.

  • @kumper33 • 1 year ago +9

    About to shut down the computer for the day; I guess not! Here we go!

    • @robl7532 • 10 months ago

      Haha perfect response!

  • @familiekruit6068 • 1 year ago

    You don’t even need to start the array in maintenance mode and erase the disk. You can simply stop the array, change the file system, start the array and let it format unmountable devices. Your parity will still be valid.

    • @SpaceinvaderOne • 1 year ago +6

      Yes, you're right, it's not essential, but I think it's good practice to erase first. Unraid uses wipefs to do this. Wipefs erases filesystem signatures. By erasing these signatures, you remove any traces of the previous file system. This can help avoid conflicts when creating a new file system. I find this can sometimes happen when reformatting, for example, a btrfs drive: it will say it's reformatted but then still appear as btrfs afterwards.
      Also, I did quite a few tests for this, swapping from one file system to another and back again. I found that when not erasing first I sometimes (not always) got about 12 blocks out of sync when running a parity check afterwards. This never happened when erasing first. I think for the extra time it takes, it's worth it.
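The wipefs step described above can be sketched as a dry run. The device name is a placeholder; the commands are only printed here, because running wipefs --all against a real device is destructive:

```shell
# Sketch: inspect and erase old filesystem signatures before
# reformatting. /dev/sdX is a placeholder -- substitute your device
# and triple-check it first.
DEV="/dev/sdX"
echo "inspect: wipefs --no-act ${DEV}"   # list signatures, touch nothing
echo "erase:   wipefs --all ${DEV}"      # remove all signatures (destructive)
```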

  • @thany3 • 11 months ago

    If you do this for every drive, can you make it into a RAID-Z array?

    • @E5rael • 11 months ago

      If I've understood correctly, you'd have to move the drives away from the regular array, and assign them to a separate ZFS pool, for which you can do the Raid-Z configuration.

    • @thany3 • 11 months ago

      @@E5rael But you can't have no drives in the main array, correct? Or has that limitation been lifted by now, in lieu of the new ZFS features?

  • @IndigoVikingTV • 1 year ago +1

    If you're like me, sitting here wondering why you don't have "ZFS Master" in your Unraid dashboard like SpaceinvaderOne has: it's a plugin you have to add.
    I formatted some disks as ZFS and thought that this was a native Unraid feature that would show up if you had ZFS pools; it's not.

  • @nkerboute • 1 year ago

    would love to see how to move the complete array to ZFS in case the array consists of 1 disk and 1 parity only

    • @drmetroyt • 11 months ago

      +1

    • @E5rael • 11 months ago

      Another commenter here suggested you should convert your parity drive to a regular drive and format it as zfs. Then move your data to this new drive, and the use your old data drive as your new parity drive.

  • @ALERTua • 1 year ago +1

    Please make the music quieter. Thank you.