Had to swap out a failing drive. Did I go to the Synology site for instructions? No, I came here, because I knew you'd explain it clearly. Thanks!
haha glad you like my videos!
Your channel is the "go-to" place for NAS users. Keep up the great work!
thanks!
Another clear and instructive video.
Thank you.
I have a DS1819+ running RAID 6 with five 10TB drives installed. I have to replace Drive 1, which is currently failing.
1) Is the repair time determined strictly by how much actual data I have stored on the NAS, regardless of the storage pool's total capacity, or by the storage pool's capacity, regardless of how much data is stored on it?
2) For future reference, if I install a hot spare drive, at what point will that hot spare initiate a RAID rebuild/repair: only after a complete drive failure, or at some predetermined point of drive failure?
3) When running a hot spare, can that speed up the repair process at all, beyond not having to wait for me to get around to initiating the repair? For example, can it have the parity consistency check done ahead of time in order to begin the actual rebuilding step instantly?
You've made me love my NAS even more, my friend :)
SpaceRex, you are the best! Nuf said :)
Thanks!
When you suggest that a backup be performed, you should show this in your video. Where should you back up 3.6 TB to?
What can I do if there is no Action option? I can get to Repair in other ways, but I get "no drives are available," even though there is an identical drive in the second slot; it shows up, but it says it is not initialized.
Great tutorial. You presented everything that I needed to know in under 10 minutes. Thank you.
Great tutorial! I dreaded rebuilding the two WD Red drives that failed, plus two more that are hanging by a thread. Once I rebuild the first two (fingers crossed), I'll do another rebuild on the other two WD Red drives. I have a 918+. BTW, I have a warranty on all four drives.
Hi, quick question (noobie here): when I purchased my Synology, I had everything (operating software, etc.) installed on my 6TB NAS drive. I am planning to change the drive to a bigger capacity, but would I need to reinstall everything? It contains my Plex settings, DSM software, etc.
Thanks in advance
Can you please do this same procedure with DSM 7? The interface is different and does not have an "add drive" option.
Thanks for this video. I replaced a defective drive successfully with your help. Good explanation of all steps.
Awesome! I know how nerve-racking it can be! How long did the rebuild take?
Okay, what do you do when you're repairing drive 1 and, during the repair, the NAS starts to overheat? Because that's the problem I'm having right now. I'm trying to repair an 8TB WD Red on a DS218+.
What do you mean by overheat? Is it the NAS or the drives? Unless it's actually flashing you a warning, I would not worry about it. If you are having issues, you can just manually turn the fans up.
Very good and easy to understand; looking forward to your next video. On my wish list is more about Virtual Machine Manager. I have a W10 VM running now but am not impressed by the performance. I am sure/hope that can be improved... Thanks.
Honestly, there is not a ton you can do with Windows :/
If you can run it on Linux, I would try an Ubuntu VM. They are so much more lightweight.
Hi! Can you please let me know what to do? I cannot create a storage pool; when I try, it says, "System failed to create." I noticed one of my drives is "not initialized" while the other drive is "initialized." What seems to be the problem? Thanks!
I have the same problem. Did you ever figure out how to fix this?
What is the logic behind the repair not recognizing that the "new" disk is actually the same disk and that the data should be 99% the same? I don't see the point in erasing the whole disk, causing so much stress on the system, and opening a big half-day window in which the data is vulnerable.
I had an issue in the past when a volume crashed (single disk, not even SHR); the crash occurred because of an accidental disconnection of the disk, and I was forced to back up externally, rebuild the volume from scratch, and restore. The whole process took me 24+ hours, and during all that time the data was still there intact, as the volume was "crashed" but in read-only mode. I understand Synology is very picky about failures, but this is really extreme.
Good video by the way :)
Glad you like the video! Honestly, that's a big part of RAID in general: it's a really rare circumstance for a drive to get unplugged and then plugged back in while still being perfectly fine. The NAS knows the drive doesn't have the data on it that it should, but the only way to know what is missing would be to go through the entire drive, work out what each block should be, and compare. Instead of doing that, it just starts from scratch and rebuilds.
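To make the "why rebuild from scratch" point concrete, here is a minimal sketch of single-parity (RAID 5-style) reconstruction; this is toy Python, not anything Synology actually runs. Recovering any one block means XOR-ing the matching block from every surviving drive, so checking whether a reinserted drive's block is "already correct" costs the same full-array reads as simply rewriting it:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (RAID 5-style single parity)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Hypothetical 4-drive array: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Drive 0 "fails": its block is recovered by XOR-ing all surviving blocks.
# Every surviving drive has to be read either way, whether you are
# rebuilding outright or merely verifying that a reinserted drive matches.
recovered = xor_blocks([data[1], data[2], parity])
assert recovered == data[0]
```

Since verification and rebuilding both touch every block of every member disk, treating the reinserted drive as blank and rewriting it end to end is the simpler, safer choice.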
I would like to back up a file server to a Synology and hot-swap drives out for offline storage of the backup. If the Synology gets ransomware and I have to put in the offline drive, will it pick up where it left off, or will I have to do a restore? It looks from your video like the drive will be in a degraded state just from removing it. Is this not the intended use for hot-swappable drives? Will the drive I removed be potentially compromised by hot-swapping? How can I do something similar to this for offline storage?
This really is not what it is designed for.
I would instead use Hyper Backup for this setup.
Is it better to swap out the faulty drive while the Synology is still on, or to turn it off, swap the drive, and turn the system back on? I am paranoid about losing my data.
If you have a hot-swap Synology (most are, but double check), it's actually best to leave the Synology running if you already have one failed drive, as the most common time for a drive to fail is when it boots up.
Tip: before pulling out the drive, 1) back up all of your data, and make a second backup/copy of the files you could not live without, and 2) in DSM, select the failed drive and hit "identify drive"; it will blink, so you will know which one it is.
So hypothetically, if you have a larger NAS with several drives and one fails, could you just reconfigure your pool to use one fewer drive? You would lose the capacity of that drive but regain the one-disk fault tolerance, assuming you have enough unused space to give up that capacity.
The problem is that this would require a shrink of the storage volume, which is not supported by most RAID types.
You cannot do this on Synology.
@SpaceRexWill Understood. Great videos, btw. I've been learning a lot from you and a couple of others about my new NAS, a DS420j, trying to get away from Google.
Hmm, how do I solve this? Synology shows me that both of my two disks are fit and healthy, but the volume is degraded... You should tell us how to deal with such a situation. Thank you.
Thank you so much. Great tutorial.
Many thanks for the tutorial
thank u mr nas man
Seems like this has changed in DSM 7, as I don't see the Action option for the storage pool. Regret updating for sure.
They still do. It's just been moved.
@SpaceRexWill Can you explain or make a video on how to do this in DSM 7? I have a drive with a "System Partition Failed" error. It's telling me to do a repair, but I can't find any repair options.
So do you have RAID 5/1 or SHR?
Also, you need to place another drive that has not failed in there.
@SpaceRexWill I have a 4-bay SHR with all bays currently occupied and one-disk redundancy.
Tried this and it failed, over and over, with two different brand-new drives. Once it's done, the new drive is in a "crashed" state just like the old one.
Crashed is not the same as degraded. If you have a crashed volume, you need to contact Synology support.
@SpaceRexWill It says degraded in the volume menu, and in the pool it shows one of four drives is crashed. So which is it?
If it says one of your pools or volumes has crashed, that has to be recovered. What happened?
@SpaceRexWill The power went out and the UPS was making noise; the lady, unsure what to do, just turned off the UPS so she could have quiet instead of the beeping, and when the NAS was powered back on it was like that.
Ah, yes, that sounds like a volume crash rather than a degrade. Contact Synology support.
I feel like if I so much as breathe on my NAS, another drive fails. I've been really disappointed in both the Synology NAS and the drives. I've spent thousands of dollars at this point on three replacement drives within three years.
Wow, that sucks! What drives did you get? It sounds like you got a really bad batch of hard drives.
@SpaceRexWill Agreed. All Seagate IronWolf and IronWolf Pro. I've had hard drives in my PC that have lasted years, so I wondered if the Synology was screwing up the drives. It felt like too much bad luck.
I'm starting to dislike Synology. I bought the DS920+ and it was up for two weeks and great. Then I woke up to it beeping: "Volume partition failure." It rebuilt fine, but if I turn it off and on, it starts all over again. Synology hasn't responded to my ticket yet. Thinking of returning the whole mess and getting a different brand.
That sounds like one of your disks might be DOA.
@SpaceRexWill How do you know which one, though? I'm about ready to throw it in a box and get a refund, since Synology isn't answering any tech support requests.
@Suicide Kyd I'm tired of people who think they know fking everything. For your info, the backplane was defective, like you!
16 hours for a few TB?? Geez... copy-paste would have made that "repair" faster :D If the average repair speed is 100 MB/s, and we saw faster on your screen, that means in 16 hours it had to repair 5+ TB.
This is configured with the "slower" rebuild setting. It keeps the performance of the NAS acceptable the whole time during the rebuild.
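For scale, here's the arithmetic as a runnable sketch. It assumes the repair scans the full capacity of the member disk (how conventional mdadm-style rebuilds behave) rather than just the data stored, and the throughput figures are illustrative, not Synology-specified:

```python
# Rough rebuild-time math: a conventional RAID repair rewrites the whole
# member disk, so time scales with disk capacity, not with data stored.
MB_PER_TB = 1_000_000  # drives are sold in decimal units: 1 TB = 10^6 MB

def rebuild_hours(disk_tb: float, mb_per_s: float) -> float:
    return disk_tb * MB_PER_TB / mb_per_s / 3600  # seconds -> hours

for speed in (50, 100, 200):  # MB/s; the "slower" setting sits toward the low end
    print(f"8 TB drive at {speed:>3} MB/s: {rebuild_hours(8, speed):5.1f} h")
# 8 TB drive at  50 MB/s:  44.4 h
# 8 TB drive at 100 MB/s:  22.2 h
# 8 TB drive at 200 MB/s:  11.1 h

# And the comment's arithmetic checks out: 16 h at a sustained 100 MB/s
# works out to 100 * 16 * 3600 / MB_PER_TB = 5.76 TB rewritten.
```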