Hey man, I just wanted to thank you for all of your videos, I appreciate it very much. I'm a software engineer that's really interested in home servers, and your guides have helped me understand so much about my NAS as well as Linux, which I am not as familiar with. Your videos are always very professional and concise. Keep up the great work!
Hey glad you like the content! Really means a lot!
I agree. Great vids and presenter
SpaceRex I appreciate all your work. I've learned a lot from you.
One thing I'd add about RAID, particularly for those with 8 bay NASes, is that you don't need to have all 8 bays as a single volume. You can split them into multiple volumes, each with their own RAID. For example, you could have 2 x 4 drive RAID 5s instead of 1 x 8 drive RAID 5 or 6. Or you can mix and match: 1 x 6 drive RAID 5 or 6 and 1 x 2 drive RAID 1 (a two-drive mirror; RAID 10 proper needs at least 4 drives). Etc. This is helpful when you have different workloads, such as regular file storage that gets a regular level of use versus, say, security cameras running 24 hours a day that are constantly writing, reading, and deleting. So keep your regular files on the 6 drive RAID 5 or 6 and your security cameras on the 2 drive mirror. That way, when the sec cam drives fail more quickly due to higher load, you won't fubar your regular files, and you won't need to worry about rebuilding the main RAID should a failure occur because of the sec cam load. Also, with the mirror for the sec cams you can easily rebuild if a drive fails, and you keep your potentially important sec cam footage.
It's not letting me edit, so adding a bit more here.
Having multiple volumes/RAIDs is also helpful for things like video files when you're running a Plex server. Keep your regular files in one RAID and your video files in another. Again, separate out based on workloads and/or file types. Likely your regular files have different importance than your video files, except for perhaps important family videos you absolutely don't want to lose.
Your videos are very much appreciated
Thanks a ton man!
Ok, now it makes sense. I couldn't figure out how you could backup 7 drives with one but after your explanation it makes perfect sense. Actually as soon as you started the explanation the light turned on and I finally got it. Thanks for posting this.
Glad I could help!
Thank you for your hard work and dedication to providing free knowledge for those stepping into the IT Industry.
My pleasure!
Just recently came across your channel. I think I have a pretty good understanding of RAID and how it works, but I never really understood the "secret" of parity. Loved your simple explanation of "odd parity". I run Promise Pegasus2s in my home setup, all running in RAID 5. Last week I lost one drive on my R8 (8 x 4TB), replaced it with a spare, and it took almost 30 hrs to rebuild. I have been thinking about getting a Synology but just have not pulled the trigger yet. Thanks for the great content.
Glad you liked it! Was not sure if people were interested in diving into the weeds on parity!
Are you using Mac OS X? If so, which version, and are you experiencing the "Disk not Ejected Safely" issue? Cheers
I appreciate the heck out of you and your videos. I know absolutely nothing about computers in general, and I've built an entire network off your videos, which is insane. I've spent a small fortune, but it's worth every penny for video editing in my business, so thank you.
Thanks so much man!
Thank you for this incredibly informative video on RAID levels. I really appreciate how clearly and in detail you explained the different RAID levels and their pros and cons. This helped me understand which RAID level is most suitable for my needs. Your insights and examples were very valuable. Keep up the great work.
This video is amazing and underrated. Best video on RAID I can find.
Thank you for the thorough explanation! I'm relatively new to NAS and I am about to set up a Synology 1522+ with 5 16TB drives, and this tutorial was exactly what I needed.
Your parity explanation reminds me of "The missile knows where it is because it knows where it isn't". Thank you for explaining it, everyone else glossed over the most interesting part of RAID.
Haven’t bought a Synology yet, but I’m getting closer with every new video you put out😉
For main use, most home customers should use RAID6/SHR2 (unless they have a really good backup strategy and don't mind restoring from backup if using RAID5 or SHR1). Disk size should also be taken into consideration: large disks over 8-10TB should use SHR2/RAID6 due to rebuild times.
Also, more disks doesn't mean significantly longer rebuild time (larger disks are straight up longer rebuild time),
and CPUs have been fast enough to handle single and dual parity rebuilds for quite some time (like over 10 years).
Other things not touched on, and probably outside this topic, are checksums. ZFS (TrueNAS or QuTS hero, not QTS) and Asustor (as long as you ticked the snapshot box to enable a btrfs volume) use checksums by default at volume creation. Synology, on the other hand, still defaults to checksums off when creating shared folders, and it can't be ticked afterwards. Unless you have a specific reason not to use checksums (like a write-intensive VM or database), it should be ticked, as it allows filesystem-level self-heal, or at worst reports which files are corrupted instead of it being silent corruption.
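To illustrate the self-heal point: a minimal Python sketch of per-block checksumming, the idea behind btrfs/ZFS scrubs. The block size and CRC32 here are illustrative only; real filesystems use their own checksum layouts.

```python
import zlib

BLOCK_SIZE = 4096  # illustrative; real filesystems pick their own block size

def checksum_blocks(data: bytes) -> list[int]:
    # At write time, store one CRC32 per block alongside the data.
    return [zlib.crc32(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def scrub(data: bytes, sums: list[int]) -> list[int]:
    # At scrub time, re-read every block and report the ones that no
    # longer match their stored checksum.
    return [idx for idx, i in enumerate(range(0, len(data), BLOCK_SIZE))
            if zlib.crc32(data[i:i + BLOCK_SIZE]) != sums[idx]]

clean = bytes(2 * BLOCK_SIZE)      # two zeroed blocks
sums = checksum_blocks(clean)

damaged = bytearray(clean)
damaged[BLOCK_SIZE + 100] ^= 0xFF  # silent corruption in block 1

print(scrub(bytes(damaged), sums))  # -> [1]; with redundancy (RAID/btrfs
# mirror) the bad block can now be rewritten instead of going unnoticed
```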
The 5/6 discussion has been going on a really long time and there is a ton of misinformation out there.
A 20TB drive running at only 100 MB/s would be completely rebuilt in under 3 days. The problem comes down to what was mentioned in the video: what else is going on on the NAS. Another huge part is that the more disks you add, the more chances of failure you have, and the slower you go, due to the fact that all the random data has to be read from every single drive. This is what the real risk is. The more drives you have, not only does it run slower, but there are more drives that can fail. This myth has been perpetuated online due to people misunderstanding uncorrectable drive error rates and making sites like this: magj.github.io/raid-failure/
Rebuild times for home users who are not using drives 24/7 are significantly lower than they are for enterprise.
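For reference, the back-of-envelope math behind that "under 3 days" figure, as a small Python sketch. It assumes a purely sequential rebuild with nothing else hitting the NAS, which is the best case:

```python
def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    # A rebuild has to write (and read parity for) every byte of the
    # replacement drive exactly once.
    seconds = (drive_tb * 1e12) / (mb_per_s * 1e6)
    return seconds / 3600

print(f"{rebuild_hours(20, 100):.1f} h")  # 20TB at 100 MB/s -> 55.6 h, ~2.3 days
print(f"{rebuild_hours(20, 200):.1f} h")  # at 200 MB/s sequential -> 27.8 h
```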
@@SpaceRexWill I just prefer to still have redundancy when a drive has failed, plus bad block repair and self-heal (btrfs) available on main NAS/server units. Backups can be just RAID5, as if they fail you just re-run the backup (unless it was an offsite backup, in which case it would be preferable not to have to re-run a full backup over the Internet again if it's a large backup).
A RAID array doesn't know what data is stored above it; it will rebuild from start to end regardless of what data is stored there (as far as RAID is concerned, everything is sequential when rebuilding, and any reads and writes that come in will be slower).
I just find a lot of novice users have a lot of trust in RAID1/5 and don't run a backup, then lose 2 disks when trying to repair; they don't run a data scrub and SMART extended scan before replacing a disk (and don't run a monthly data scrub, a 3-monthly SMART extended scan, or snapshots if available on that platform).
That website wraps from 100% error free to 0% lol (never seen that site before). A UNC, if using an enterprise or NAS drive, shouldn't result in a rebuild failure in RAID5; it will just have missing bits of data (if the disk fully fails or hangs up for more than 7 seconds, then that will destroy the array).
RAID 6 seems pointless: you lose a disk, and rebuilding takes longer than mirrored data, which doesn't have to be calculated when a drive is replaced.
With 16TB drives, well, four of those in RAID 6 has no benefit: 16+16 = 32TB usable, the same amount of data as RAID 10, which copies using whole disks directly.
RAID 5's only real benefit is more upgradeable free space, and making parity across separate drives takes longer to calculate, for both reads and writes, vs copying/mirroring.
One issue not discussed is the vulnerability of RAID 5 and RAID 6 to Unrecoverable Read Errors during a rebuild. The larger the drive, the greater the chance these will crop up. In the old days we would call these bad sectors. Today, most consumer drives are rated at 1 error per 10^14 bits for their Unrecoverable Read Error Rate. That means there is a chance you get a read error for every 12.5 TB of data read. Now, I have worked in medium sized businesses and have seen RAID 5 array rebuilds take days to accomplish. When it all works, it is great. When you have one of these errors pop up, your array is toast. Been there, seen that, and got the tee shirts numerous times. Another thing to remember is that while a rebuild is going on, the end users will see a definite decrease in the speed of data access, as there is not the full array to read from and write to, so that parity bit is leaned on heavily. Due to these situations, especially where you cannot afford to lose the data, RAID 1 and RAID 10 become the recommendations. Again, the larger the drives in use, the greater the possibility that there will be Unrecoverable Read Errors during the rebuild.
So this is something that was an issue wayyy more prevalent back in the early 2000's. Today UNC's are much less common, and if you are scrubbing regularly they should get taken care of if they do pop up. Even then, Synology's MDADM/BTRFS RAID5 can still rebuild through a UNC, as they have additional checksums in place that can catch single-bit rot.
If the one-UNC-per-12.5TB figure really were statistically random, then you could not read the entire data stored within a 16TB drive without corruption, but you can.
@@SpaceRexWill This is still an issue today as drive sizes have kept increasing. I was still seeing issues like this in 2015 in some smaller and mid-sized companies. Data scrubbing does not necessarily take care of this. First, you have to enable and schedule the data scrubbing and if it is not Btrfs, you are still vulnerable. Many a Synology user goes for a basic setup and never goes into the trenches to really ensure they are as protected as they can be. If the data is critical, I would not be wagering that data integrity on the hope that it "should" be able to rebuild it. Depending on the load on the unit, a rebuild of a large RAID 5 array could take days and cripple performance while it is happening. Again, I saw this happen many times and experienced this with a large physician's group with their EHR system. Theory is always great, but from practical experience, and I have 40 years of that specializing in Backup and Disaster Recovery, you can never totally rely on that. While I hate losing drive space with a RAID 10, I know I and my customers are better protected than in a RAID 5 and RAID 6 array. This was a great video, but I think that it is certainly worth mentioning this important consideration.
Hi @@stevemccarthy4713 Is this longer recovery period applicable to SSDs or HDDs under 1 TB in RAID 5? I'm interested to know your opinion on this. I agree with larger traditional drives, just as a remark.
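As an aside on the URE numbers traded in this thread: a quick Python sanity check, treating the quoted 1-error-per-10^14-bits spec as independent and uniformly random. This is the naive model the rebuild-failure calculators assume; the pushback above is that real drives don't behave this way.

```python
import math

URE_PER_BIT = 1e-14  # the consumer spec: one unrecoverable error per 10^14 bits

def p_clean_read(terabytes: float) -> float:
    # P(zero UREs while reading `terabytes`), if errors really were
    # independent and uniformly random.
    bits = terabytes * 1e12 * 8
    return math.exp(bits * math.log1p(-URE_PER_BIT))

print(f"{p_clean_read(12.5):.1%}")  # ~36.8%: one expected error per 12.5 TB
print(f"{p_clean_read(16):.1%}")    # ~27.8%: a full 16TB read would usually fail
# under this model, yet full-drive reads succeed routinely in practice, so the
# naive model overstates the risk
```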
These videos are great! Is there a place in your video (or another video) where you discuss the recommended RAID if you are using NAS as a Time Machine backup?
Nice work!
That was an excellent explanation.
Fantastic overview with great outlining of pros and cons, exactly what I was looking for! You get a like and much thanks from me!
Thanks a lot for your videos. I really do appreciate your explanations as being a non-professional user of Synology equipment.
Deeply explained. Thanks again.
Thanks for your dedication and for helping the world with your videos!
Appreciate it a lot
Question... if you use SHR-1 with 2 drives, you say it is basically RAID 1. If you add a 3rd disk, will it auto-magically convert your 3 disks to RAID 5? Or do you have to reformat the entire array? If you are just starting out, would you recommend starting with a minimum of 3 drives so you can start off in "RAID 5"? Thanks!
This is actually my question too: can you upgrade to another RAID, or do you need to reformat all drives and create another RAID?
@@darckanbu47 still waiting on an answer to this. If you reformat your array, does that mean you lose all your data and so you better have a backup first? Is this one of many possible footguns with RAID?
Thank you so much SpaceRex
One fantastic video. Thank you.
Nostalgic to hear talk about RAID. I remember the old days with the large RAID packs and all the maintenance hassles: spinning disks breaking, data management issues, and expansion nightmares. Haven't designed any systems with RAID in years; all clusters today are running ZFS. From a systems point of view this makes soooo much sense and is needed for the VM architecture with HA and/or migration. Maintenance is less of a hassle and performance is golden. TrueNAS with ZFS pools and SSD buffers is a cool way to go.
My homelab has 4 Proxmox servers, 2 NAS, and one dedicated Proxmox backup server. All my systems are on ZFS or XFS today and it's smooth sailing.
ZFS sounds intriguing, but for a layperson such as myself, I think it could be actually dangerous.
@@timramich RAID needs a lot of knowledge and skill to maintain.
Many systems rely on ZFS to function: pfSense, Proxmox. Remember, ZFS is available as a Linux kernel module.
@@nalle475 No it doesn't (pertaining to your RAID claim). You just make an array and format a filesystem on top of it. There's so much specific crap to set up about ZFS, because it's an actual filesystem and has so many features, that it takes people with IT experience to set it up properly. It's not for the novice. I wouldn't ever use it for my data unless I had a paid IT person managing it for me. I don't see what having it as part of the kernel has to do with anything.
@@timramich Backups and snapshots are the most important, and they're so much easier and better with ZFS. Bit rot is less likely with ZFS.
@@nalle475 Dude, whatever you say. That kind of stuff takes professional IT level knowledge to do.
Could you make a video dealing with SSD cache on Synology? Actually I don't feel an improvement.
If I get a 5 bay NAS and set up RAID 5 with 24TB hard drives in 3 bays, can I use the fourth bay for a Samsung SSD just to run Steam games, and have the game history back up to the other 3 hard drives? (keeping in mind that RAID uses the size of the smallest hard drive)
In this case, would I need to first set up the RAID 5 with the 3 24TB hard drives, and then connect the SSD afterwards in the fourth bay and have it somehow back up to the RAID 5? Or would it need to be part of the RAID 5 from the beginning?
Hi! I was wondering if you could help me out. A few years ago I bought the Synology DS918+ 4-bay NAS and mounted 2 10TB drives in RAID 0. As my capacity has reached its limit, I now need to expand it, and I bought 2 additional 10TB drives. With RAID 0, I understand that I cannot just expand my storage pool, correct? What would be the best way to add this additional 20TB capacity without losing any of my existing data? Thanks in advance!
You would have to just create a new volume 2 with the two new drives.
Great video!! I know you recommend the 1522+. What RAID would it be to set up disk bays 1-4 as one volume, then have disk bay 5 hold a copy of the info on disk bays 1-4? So bays 1-4 have 20TB total across all 4, then disk 5 has another 20TB which is a constant copy of bays 1-4. Is this possible with the 1522+, and what RAID would this be? Thank you
Hi, and thanks for all your episodes. I need help to change my Synology DS1621+, with 4 disks in it, from RAID 10 to SHR, if it's possible. Many thanks.
I will join in the hearty thanks for all of your great videos. It has been amazing to learn from you. I have a Synology DS220+. I currently have only 1 of the 2 bays in use. I am using a 6TB Ironwolf NAS drive and have just purchased a second 6TB Ironwolf drive for the second bay. I am using my NAS primarily as a Plex server but am also putting ALL my digital media on it... pictures, movies, etc. I am NOT using it as a backup for my computer. I am not quite sure how to add the second drive... do I just plug it in like I did the first one? I've been watching your videos on backing up and I'm not sure if I should be using RAID (and what level) or actually doing a backup. I don't want to lose all the hard work I've done converting all my DVDs and Blu-rays into MP4s, so I want to make sure that if the drive fails, I still have everything. What would you recommend? Can I back up to the second drive or do I have to have something external? If you have a video on this... I'm sorry, I went through the available videos and didn't see one, but I might have missed it. I'd appreciate some advice!! Thanks so much.
Thank you for your great videos. I just came across your channel. I just bought a NAS 1522+ with 3 x 4TB and am planning on getting another 2 in the coming Prime deal if available for a cheaper price. My question is: if I allocate my NAS as SHR2, can I later convert to SHR1 if space is needed, without starting from scratch?
Looking for a recommendation:
I have an older Seagate NAS (SRN04W) with 4 x 1TB hard drives that is on its last legs. I've had a couple of issues over the last 12 months that led me to believe I need a new NAS/solution, WIN SERVER 2012 EOL being a factor.
I've watched a bunch of your videos and am leaning toward the Synology DS1522+.
Scenario:
- Small business (4 computers)
- NAS is used primarily for file storage
- NAS is also used for desktop file backup
- Your recommended RAID? Based on your videos I'm thinking RAID 5
- Do you have a desktop backup software recommendation for use with Synology?
- I'll probably go with SSDs instead of hard drives based on your videos; any specific recommendations (2-4TB)?
If you have an Amazon link that helps us both, it'll be appreciated.
Can a RAID 5 or 6 rebuild onto the remaining drives without replacing the broken drive, if replacement is not possible (like on a Sun F80 PCIe card)? With a loss of storage space, yes, but not data, as long as there is room?
Can a RAID 0 of SSDs be put in RAID 1 with a single HDD of the same total size, and basically not use the HDD, since the SSD RAID 0 will always be faster but the HDD provides redundancy? Or will the HDD set the speed limit?
Once RAID5/6 is built, if you lose a drive, the array becomes degraded. It can't rebuild onto existing disks, so the array will stay degraded until the broken disk is replaced and rebuilt. In the meantime, strain on the remaining drives will increase until the replaced drive is fully running again. As for your 2nd question, it might be theoretically possible but would be so speed limited it would be pointless to do so. This is where you get into more exotic storage solutions such as using storage tiers so you can get the benefit of SSD and HDD.
With btrfs you can expand and shrink any RAID level.
You cannot (at least with Synology's implementation, using MDADM for the RAID) expand RAID 0 or RAID 10
Thank you for all of your very helpful videos! I’ve watched quite a few of them already. I just bought the Synology DS923+ and 4 HDDs but I haven’t set them up yet. I’m brand new to the NAS world! I’m debating between SHR 2, RAID 6, and RAID 10. Despite reading extensively about them, I still have no idea which to choose. I’m a single user at the moment but I’d like to allow my family to store their data as well. I will primarily store photos and music on my NAS but I’m also beginning to get into video editing. Can you please help?
I have an 8 bay Dell PowerEdge R820 server: 4 x 980GB HDDs and 4 x 1.6TB SSDs. How would you configure RAID and set up a virtual drive for Ubuntu?
Dude, love your work, you've been beyond helpful. I have two questions. I've just upgraded to a DS1621+ after your recommendation on another of my comments. I'm about to upgrade that to 10GbE. I saw in another of your videos that you maxed out 2 x 10GbE ports with the NVMe SSDs installed. If, to begin with, I'm directly hooking my Mac Studio into my DS1621+, there's no advantage to a second 10GbE port. For future proofing, given the Mac Studio has only 1 x 10GbE, there's never going to be a benefit for me having 2 ports in my NAS versus a single one, right? Secondly, in terms of RAID, this is a really helpful video. I'm a colorist; I want my clients to be able to dump their projects and data directly onto my NAS, and then for me to be able to grade heavy codecs (Arri Log, RED r3d files etc) and 4K footage directly from my NAS. I'm guessing 10GbE will help. I'm thinking RAID 5 or SHR1 will give me the best performance and adequate fault tolerance, would you agree? LASTLY (so sorry!) would you recommend I run 2 x M.2 NVMe SSDs as cache for reading and working off these heavy codecs? TYSM man, hugely appreciated.
Mmm... let me ask you this. I migrated my old 1520+ SHR to my new 1621xs+. The funny thing was that it's not supposed to be compatible with SHR, but everything went through and is working fine. Do you think I should back up everything and run RAID 5 or 6, or just leave it?
Should be totally fine! SHR not being allowed is just a performance thing. That's why they don't offer it on the high end units.
Hey, so what you're saying is that I can and should use RAID 1 on a 6 bay NAS?
That would be a terrible waste of space, as the array would only ever be the size of the smallest (or co-equal) disk in the array. So instead of 6 x 20TB, for example, you have 20TB only and 5 duplicate copies of the data.
I'd recommend RAID 6. Why? Because what happens when 1 drive fails, you put a new drive in, and then while that drive is rebuilding another drive fails? In RAID 5, well, you just lost 2 drives while rebuilding 1. So RAID 6 comes in handy: that way, if you're rebuilding a failed drive and another drive fails, well, you'll be fine.
great video
What about RAID 50 vs RAID 6, or even RAID 60? We know that the limitation is the same as with RAID 10 (because it's the same RAID, just striped: RAID 5 + 0, or RAID 6 + 0), which "locks" you into your storage amount unless you rebuild the RAID from scratch!! What are the benefits of these, if any? I know there is a big cost increase the larger you go!
Yeah, these don't get used much.
The way these are now done is with LVM or something similar, where data is spread across multiple arrays, but not in a RAID0 way, rather in a JBOD way.
Really helps with performance on random reads.
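To make the striping-vs-JBOD distinction concrete, here's a toy Python sketch of how a logical block address could map to (array, offset) under RAID 0 style striping versus JBOD style concatenation. The chunk size and layout are illustrative only, not any particular LVM implementation:

```python
CHUNK = 64  # blocks per stripe chunk; made up for illustration

def raid0_map(lba: int, n_arrays: int) -> tuple[int, int]:
    # Striping: consecutive chunks rotate across arrays, so any large
    # read/write touches every array.
    chunk, within = divmod(lba, CHUNK)
    return chunk % n_arrays, (chunk // n_arrays) * CHUNK + within

def jbod_map(lba: int, sizes: list[int]) -> tuple[int, int]:
    # Concatenation: fill array 0, then array 1, and so on; a given file
    # usually lives on a single array, which helps parallel random reads.
    for array, size in enumerate(sizes):
        if lba < size:
            return array, lba
        lba -= size
    raise ValueError("LBA beyond end of concatenated arrays")

print(raid0_map(1000, 3))           # -> (0, 360): hops arrays every 64 blocks
print(jbod_map(1000, [500, 2000]))  # -> (1, 500): lands wholly on array 1
```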
Funny enough I've just bought my first server and was already thinking that I need to learn about "raid" and how it works.
Simple explanation, but it made me understand.
But with SHR you can mix and match drives with different capacities. You can't do this with RAID 5 and 6.
Purchased a 920+ and 4 x 14TB drives a month ago. Still struggling with the kind of RAID to implement.
Either SHR with all 4 drives, 42TB of free space, and no spare.
Or SHR with 3 drives, 28TB of free space, and a spare disk.
Or SHR-2 with all 4 drives, 28TB of free space, and no spare.
Any suggestions?
How much data space do you need? i.e. is 28TB enough? I personally like to keep a cold spare on hand and use SHR, so 1 drive is lost to redundancy; but most of my data is video I could be without for ages while I restore from backup.
For 4 drives I would go SHR-1. Or do 3 disks and have the fourth as a Hyper Backup destination for things you really need (depending on how much space you need).
@@SpaceRexWill Good idea. Can the Hyper Backup drive also live its own life within the NAS, or do I have to SATA it to my PC? Going for 3 drives takes away 14TB of storage though, and I have 10 4TB consumer drives filled with 9 years of stuff that I'd like to sort and move onto the RAID. My guess is that some stuff will be outdated and can be eliminated, so I'll end up RAIDing maybe a third of the data, maybe only 50%, who knows, in which case the 28TB would be sufficient; but if not, I'll be crying over those 14TB I could have added. I should mention that I have another empty 6TB WD Red Pro on my shelf that I extracted from a consumer NAS (myhome). I'm thinking of maybe going for SHR-1 on four drives and using that 6TB as a Hyper Backup drive.
@@IMBlakeley Thanks for your input. See my answer to it in my comment to SpaceRex. Cheers.
thx 🤟
0 is even, 1 is odd.
Add 2 odds: the answer is even.
Add 2 evens: the answer is even.
Add an odd and an even: the answer is odd.
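That even/odd bookkeeping is the whole parity trick: in binary it's just XOR, which is how RAID 5 rebuilds a missing drive. A minimal Python sketch with made-up byte values:

```python
from functools import reduce

# Three "data drives" plus one parity value per stripe (RAID 5 rotates
# which drive holds parity, but the per-stripe math is identical).
data = [0b10110100, 0b01101001, 0b11100010]
parity = reduce(lambda a, b: a ^ b, data)  # XOR keeps the bit-count parity

# Drive 1 dies; XORing everything that survived recovers its byte.
rebuilt = data[0] ^ data[2] ^ parity

assert rebuilt == data[1]
print(f"recovered {rebuilt:08b}")  # -> 01101001
```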
The big thing that this video is missing is error correction. No hardware RAID does any error correction nowadays, so when your data corrupts it does not auto-restore. With software RAID like ZFS, it does. Hardware RAID should be avoided at all costs.
Actually, a lot of hardware RAID would still do checksum validation. Totally depends on the implementation.
@@SpaceRexWill Taking the example of Synology, the data scrubbing feature handles this, not the hardware RAID itself. So the software piece is quite important.
I hope you're alright, you sound out of breath!!
Hello people of earth. I need help with Server+ study material. I'm new to the field. Please help
Given the size of today’s drives, no one should EVER use RAID 5 or SHR 1. The chances are high that you’ll experience another disk failure during a rebuild, and if you only have a RAID 5, you lose all your data in that event.
How are the chances "high" that you'll have a 2nd drive fail during the "up to 24 hrs" that it may take to rebuild the drive? If you propose that, for an average user, a drive may last an average of 8 years, that's about a 0.0003% chance that a drive would fail on any given day, and something like a 0.0000001% chance that they'd both fail on the same day. How is that "high" probability?
@@benjaminlhargrave was thinking the same thing. It sounds very unlikely
@@benjaminlhargrave I can only think of the case where the cause of the failure is related to something that may damage several components, like an electric surge.
@@sproid That's what backups are for. RAID is never a backup, just redundancy for the storage.
@@benjaminlhargrave I think you meant 0.03%. Nevertheless, the likelihood is perhaps a lot higher, given that all the drives would be 8 years old. The odds aren't linear across the time span of a drive's life. That being said, you'd be very unlucky to experience a failure during the perhaps 3 days it was rebuilding. 24 hours is very ambitious with the size of drives these days.
I'm surprised a hot spare wasn't mentioned in this video, as that would reduce the chances of a drive breaking during a rebuild, due to the rebuild taking place straight away, vs when you realise and act on it.
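For anyone wanting to redo the probability arithmetic from this thread: a small Python sketch of the naive constant-failure-rate model. The 8-year average life is the figure assumed above; as the last reply notes, real failure rates rise with drive age, so treat these numbers as optimistic:

```python
# Naive model: a drive with an 8-year average life fails on any given
# day with p = 1/(8*365). Real drives follow a bathtub curve, so an
# aged array is riskier than this suggests.
P_DAY = 1 / (8 * 365)

def p_any_failure(n_drives: int, days: float) -> float:
    # P(at least one of the surviving drives fails during the rebuild).
    return 1 - (1 - P_DAY) ** (n_drives * days)

print(f"{P_DAY:.3%}")                # one drive, one day -> ~0.034% (not 0.0003%)
print(f"{p_any_failure(3, 3):.2%}")  # 3 survivors, 3-day rebuild -> ~0.31%
```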