EXACTLY what I needed! Thank you!
SO thankful to you for this video. I have the exact same issue on my TB4. Couldn't find a video explaining the process very clearly - most seemed very overcomplicated - especially for a raid system that is supposed to allow for easy drive replacement in this scenario.
Going to replace my drive tonight thanks to you!
Interesting! And helpful! I’m considering replacing my old external WD drives, and now I’m learning how RAIDs work. Maybe I should buy several moderate-speed SSDs for archiving purposes and replace old magnetic drives altogether.
Very helpful. Thanks for sharing.
Thanks for sharing. Very helpful!
This video is great. Do you happen to know if it would be the same process when replacing two? I had two fail at the exact same time :(
With RAID-5, you have single redundancy. If one drive fails, that redundancy is gone, so if a second drive fails before the redundancy is restored, you are out of luck: all your data becomes inaccessible, and your only hope is sending the drives to a recovery service ($$$), and I'm not sure how good those services are at recovering parts of a RAID array. The big issue here is that the amount of data shuffling involved in restoring a RAID-5 array makes it *more* likely for a second drive to fail *while restoring the array*. OWC claims there is limited demand for RAID-6 - but it would resolve the issue of a double disk failure! Sadly, I suspect it will take OWC at least a year to add RAID-6 to SoftRAID... :-(
I used to have a Drobo 5D3 box that had RAID-6 equivalent capabilities (double redundancy and self-healing, i.e., drive replacement on-the-fly / hot drive swap) for many years - it was not a high-performance solution, but it kept my data safe for years - until the device's logic board needed repair, and the Drobo company filed for bankruptcy... :-(
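The single-redundancy point above can be sketched with XOR parity, a simplified model of how RAID-5 stores redundancy (real implementations rotate parity across drives and work at the block level, but the math is the same):

```python
from functools import reduce

def parity(blocks):
    """XOR parity over equal-length data blocks (RAID-5's redundancy)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# One stripe of a 4-drive RAID-5: three "drives" of data plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# One drive fails: XOR the surviving data blocks with parity to rebuild it.
lost = data[1]
rebuilt = parity([data[0], data[2], p])
assert rebuilt == lost  # a single failure is fully recoverable

# Two drives fail: there is only one parity equation per stripe, so two
# unknowns cannot be solved for - the stripe is unrecoverable. RAID-6 adds
# a second, independent parity block, which is what survives a double failure.
```

This is why the rebuild window is the dangerous part: until the new drive's parity is fully rewritten, the array is effectively RAID-0 for any stripe not yet rebuilt.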
Thanks for the video. So no need to validate drives before removing the faulty one?
This was exactly my question. I wanted to validate the new drive while it's rebuilding. The last time I replaced a drive I let it rebuild, then validated the new drive, then added the new drive to the RAID. I am going to do the same thing even though it will take much longer. I also backed everything up before I started.
Can you provide a link to the replacement hard drive? Also, I have the same ThunderBay - can we use an SSD as a replacement?
Did yours really fail after 2,500 hours of use? I'm at 12,000 hours on my Toshiba MD04ACA400. Should I be replacing them?
Thanks!!!
I assume the replacement HDD has to be both the same size and the same RPM? But does it have to be from the same manufacturer?
I have an 8-bay 98TB unit and followed the steps for replacing one 14TB disk. So far it's been rebuilding for 11 days, it still says rebuilding, and there is no time estimate. Any advice?
That seems like a long time. I’d call the company.
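As a rough sanity check on that 11 days (assumed sustained rebuild speeds; actual SoftRAID rebuild throughput varies with drive load, RAID level, and whether the volume is in use):

```python
# Back-of-envelope rebuild time for a single 14TB drive.
# The MB/s figures are assumptions, not measured SoftRAID numbers.
drive_tb = 14
for mb_per_s in (100, 150, 200):
    hours = drive_tb * 1e6 / mb_per_s / 3600  # 1 TB = 1e6 MB (decimal)
    print(f"{mb_per_s} MB/s -> ~{hours:.0f} h (~{hours / 24:.1f} days)")
```

Even at a pessimistic 100 MB/s that is under two days, so 11 days with no estimate suggests something is stalling the rebuild (a failing drive, the volume being heavily used, or a hung process) - worth contacting support rather than waiting it out.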
Question, can I just use this for normal storage for games?
I am running out of storage due to games and need M O R E
My OWC TB4: I back all my drives up to it. Then I shut the OWC drive off for 2 weeks (was gone). Now the drive will not turn on after being off for two weeks. The amber light on the front shows power. The drive is connected to the computer. The green light inside the array housing is on. No drives spinning up. This thing has been touchy from the get-go - long story. OWC seems to be putting out junk. No response from support.
OK, so why would all my drives work in bays 3 and 4, but not in bays 1 and 2? All drives work with no errors depending on where they're plugged in. This was a sudden occurrence.